Optical Switching and Networking 5 (2008) 196–218 www.elsevier.com/locate/osn
Managing Layer 1 VPN services

N. Malheiros a,∗, E. Madeira a, F.L. Verdi b, M. Magalhães b

a Institute of Computing (IC), State University of Campinas (UNICAMP), C.P. 6176, Campinas - SP, 13083-970, Brazil
b School of Electrical and Computer Engineering (FEEC), UNICAMP, C.P. 6101, Campinas - SP, 13083-970, Brazil
Received 4 April 2007; received in revised form 23 January 2008; accepted 13 February 2008 Available online 25 February 2008
Abstract

Control Plane architectures enhance transport networks with distributed signaling and routing mechanisms which allow dynamic connection control. As a result, layer 1 switching networks enabled with a distributed control plane can support the provisioning of advanced connectivity services like Virtual Private Networks (VPNs). Such a Layer 1 VPN (L1VPN) service allows multiple customer networks to share a single transport network in a cost-effective way. However, L1VPN deployment still faces many challenges. In this work, we are concerned with configuration management and interdomain provisioning of L1VPN services. We propose an L1VPN management architecture based on the Policy-Based Management (PBM) approach. First, we describe the architecture and how it allows a single service provider to support multiple L1VPNs while providing customers with some level of control over their respective service. Then we explain how the architecture was extended to support interdomain L1VPNs by using the Virtual Topology approach. We also discuss the prototype implementation and evaluation of the proposed architecture. Moreover, this work is a first step toward a deeper discussion of interdomain provisioning of L1VPN services and of the implications of a policy-based approach for L1VPN configuration management.
© 2008 Elsevier B.V. All rights reserved.
Keywords: Layer 1 virtual private network (L1VPN); L1VPN Management; Policy-based management (PBM); Interdomain L1VPN; Virtual topology
✩ This paper is an extended version of the paper presented at the Third IEEE International Conference on Broadband Communications, Networks, and Systems (Broadnets'06).
∗ Corresponding author. Tel.: +55 19 3521 0336; fax: +55 19 3521 5847.
E-mail addresses: [email protected] (N. Malheiros), [email protected] (E. Madeira), [email protected] (F.L. Verdi), [email protected] (M. Magalhães).
doi:10.1016/j.osn.2008.02.002

1. Introduction

Traditional transport networks must be enhanced in order to deal with the increasing growth in traffic, service network convergence, and the stringent
quality of service requirements of new advanced applications. In this context, the Automatic Switched Transport Network (ASTN) framework, specified by the International Telecommunications Union (ITU), has emerged as a key approach to design the next generation transport networks. ASTN enhances transport networks with a control plane architecture that enables dynamic topology and resource discovery, automated connection provisioning, and efficient recovery mechanisms. ITU has also specified the Automatic Switched Optical Network (ASON) to address the details of a control plane architecture for optical transport networks. Moreover, an important control plane solution is the
Generalized Multi-Protocol Label Switching (GMPLS) architecture [1], defined within the Internet Engineering Task Force (IETF). GMPLS extends IP-based routing and signaling protocols to build a distributed control plane architecture that supports multiple switching technologies.

The control plane allows transport networks to dynamically provide connections to customer networks. As a result, it is possible for carriers to offer more advanced connectivity services, such as Virtual Private Networks (VPNs), on layer 1 networks like optical or time division multiplexing networks. Such a Layer 1 VPN (L1VPN) service enables multiple customer service networks to share a single layer 1 transport network [2]. It allows customers to establish connections between remote sites by dynamically allocating resources from the service provider network. Indeed, with the advances in layer 1 networks (mainly optical networks) and the development of intelligent IP-based control plane architectures, carriers can build L1VPNs in order to provide cost-effective, flexible, and on-demand bandwidth services to multiple customers.

L1VPN has drawn attention from the research community and standardization organizations as a key service for next generation networks. Research work has mainly focused on control plane issues in the provisioning of L1VPN services. Most work has investigated how to provide L1VPN services on GMPLS-enabled networks and evaluated the extensions to control plane protocols needed to support L1VPNs. However, there are still many challenges to be faced in order to deploy L1VPN services in a fully functional way. The main problems include resource management, L1VPN control by customers, independent management and control per L1VPN, and provisioning of interdomain L1VPN services. We have been investigating how to provide L1VPN services on transport networks enhanced with a distributed control plane.
In this work, we are concerned with the L1VPN configuration management issues and provisioning of interdomain L1VPN services. We propose an architecture for L1VPN service management. One problem here is how the provider network supports multiple L1VPN services, while providing customers with independent control and management over their L1VPN. In order to meet such requirement the architecture is built on the Policy-Based Management [3] approach. Moreover, due to the increasing growth of enterprise networks, an important requirement of large customers is the ability to interconnect sites attached to different administrative domains. In this case, service
providers must cooperate in order to allow the provisioning of interdomain L1VPNs. Therefore, we have extended the intradomain policy-based architecture to support the provisioning of L1VPN connections across multiple domains. We advocate a service-oriented architecture (SOA) and the Virtual Topology approach [4] to deal with the issues involved in the provisioning of interdomain L1VPN services. This work is also a first step toward a deeper discussion of interdomain provisioning of L1VPN services and of the implications of a policy-based approach for L1VPN configuration management. Furthermore, we have performed simulations to evaluate the proposed architecture. In the described scenarios, we consider that provider networks are optical transport networks.

Initially, we present basic concepts on L1VPN services. Then, we discuss related work in Section 3. We describe the proposed architecture in two parts. First, in Section 4, we propose a policy-based architecture which allows a single provider (intradomain) to support independent management of multiple L1VPN services. In this section, we define major classes of policies for L1VPN services, discuss how the IETF Policy Framework has been used in the context of L1VPN management, and describe the architecture functional model and deployment scenarios. Second, in Section 5, we elaborate on how providers can cooperate to support interdomain L1VPN services. We discuss interdomain requirements and challenges and describe how the Virtual Topology approach is used to extend the management architecture in order to support the provisioning of interdomain L1VPN connections. In Section 6, we evaluate the proposed architecture, discuss the prototype implementation and present some evaluation results. Finally, we conclude the paper in Section 7.

2. L1VPN services

A Virtual Private Network (VPN) is a secure connectivity service that interconnects a restricted group of sites over a shared public network. The wide deployment of VPN services on IP networks, the advances of optical networks and the development of distributed control plane architectures have motivated research work on how to provide VPNs on layer 1 transport networks. The Layer 1 VPN extends the concept of the VPN service to layer 1 networks. In particular, an L1VPN on optical networks is called an Optical VPN (OVPN) [5]. The L1VPN service enables multiple client networks to share a common transport infrastructure. It defines
an interface through which customers can establish connections across the provider network in order to interconnect their remote sites. A fundamental requirement is that customers should be given some level of management and control over their L1VPN, which includes support to modify the VPN topology and to add or remove bandwidth. L1VPNs are expected to support higher-layer services and advanced applications which demand huge bandwidth between known remote sites. L1VPN services allow customers to benefit from dynamic deployment of high speed transport services without the cost and complexity of managing and operating their own layer 1 network. On the other hand, carriers can share operational cost among multiple clients and create business opportunities by offering new on-demand transport services.

Fig. 1. L1VPN reference model.

Fig. 1 shows the basic elements of the L1VPN reference model [2] and also possible topologies of two L1VPN services (A and B). A Customer Edge (CE) is a device within the customer network that receives L1VPN services from the provider network. A Provider Edge (PE) is a device within the provider network to which at least one CE is connected. It provides L1VPN service functionalities to CEs through the L1VPN service interface. A Provider (P) node is a device within the provider network that is not connected to any CE, but only to PE and P devices. L1VPN is a port-based provider provisioned VPN. The Customer Port Identifier (CPI) and the Provider Port Identifier (PPI) are the logical endpoints of a CE-PE link, on the CE and PE sides respectively. A VPN member is identified by a CPI-PPI pair. A Port Information Table (PIT) stores membership information related to a specific L1VPN service. Another important requirement is that control and data connectivity should be restricted to the L1VPN membership. The IETF has been working on L1VPN auto-discovery mechanisms which allow L1VPN members to dynamically discover each other and possibly share link information. One mechanism for membership auto-discovery is based on the Border Gateway Protocol (BGP) [6]. A second mechanism is based on the Open Shortest Path First (OSPF) protocol [7].

Through the L1VPN service interface, customers can dynamically allocate resources from the provider network to build their virtually dedicated wide-area network without the burden of operating the physical layer 1 network. There are two L1VPN resource allocation models. In the dedicated model, provider network resources are exclusively reserved to a specific L1VPN and cannot be allocated by another L1VPN. This model gives customers more control over their L1VPNs, since the provider may offer customers detailed information about their dedicated resources. On the other hand, in the shared model, resources can be allocated by different L1VPNs in a time-sharing manner. The main problems here are contention and a limited "view" of the provider network by the customer.

Furthermore, the L1VPN Framework [8] defines three service models based on the L1VPN service interface. In the management-based service model, the L1VPN service is provided on a management
interface by which the customer management system communicates with the provider management system in order to control and manage the respective L1VPN. In this case, there is no control plane message exchange between the customer and the provider. In the signaling-based service model, the service interface is based on control plane signaling between CE and PE devices. Finally, in the signaling and routing service model, routing mechanisms are supported in addition to signaling mechanisms. In this case, there is routing exchange between CE and PE. Thus the CE may obtain provider network topology information and remote site reachability information through control plane routing protocols.

3. Related work

Recently, standardization organizations have been working on L1VPN services. The ITU has specified service requirements and a reference model [9], as well as functions and architectures to support L1VPNs [10]. Moreover, the IETF has created a specific working group which is aimed at specifying how to provide L1VPN services on GMPLS-enabled networks. The first steps concern service requirements and framework [8], analysis of applying GMPLS protocols and mechanisms in the support of L1VPN services [11–13], and L1VPN auto-discovery mechanisms [6,7].

The work presented in [14] compares types of L1VPN architectures, namely centralized, distributed and hybrid architectures. It focuses on the management-based service model as a suitable approach for the initial steps in L1VPN deployment. In that context, it evaluates centralized and hybrid architectures and argues that the latter achieves more scalable, resilient and fast operations. L1VPN management by users is proposed in [15]. The proposed architecture is aimed at offering increased flexibility and providing management functions as close to users as possible. That work is focused on two main aspects: management of carrier network partitions in support of L1VPN services and L1VPN management by users.
Provider network resources are exposed and managed as Web Services, and users manage resource partitions by accessing the respective Web Service. Users can compose resources to build their own L1VPN. The proposed functionalities define a business model for a physical network broker, since users may trade idle resources within their dedicated partition of the layer 1 transport network. The work presented in [16] reviews existing GMPLS mechanisms for performing L1VPN functionalities.
That paper discusses L1VPN deployment on GMPLS networks in terms of addressing, membership discovery and signaling aspects. It also discusses open issues in L1VPN provisioning, which include configuration, security, and resource management. As described in [1], there are two main solutions for L1VPN provisioning on GMPLS-enabled networks. The first one proposes the so-called Generalized VPN (GVPN) services [17]. Such services use the BGP protocol for VPN membership auto-discovery and GMPLS signaling and routing mechanisms to establish Layer 1 VPN connections. The authors argue that GVPN services could support interdomain scenarios since GMPLS signaling and BGP work across multiple domains. The second solution comes from GMPLS Overlays [18], which addresses the application of GMPLS to the overlay deployment scenario. The proposed extensions consider the support of VPN services. In this case, remote customer sites are interconnected through the core (provider) network. Therefore, the VPN services are overlay networks and core connections are used as single links in the establishment of VPN connections by means of hierarchy mechanisms.

Another work which considers a multidomain environment is the one presented in [19]. That work proposes a resource management scheme for L1VPN provisioning on GMPLS-enabled next generation optical networks. The authors propose a shared capacity management algorithm and demonstrate that the distributed control L1VPN model achieves the highest network load carrying capacity. Furthermore, they emphasize two important research topics which we consider in this work, namely topology abstraction techniques to improve routing performance and Quality of Service (QoS) routing. The main works are summarized in Table 1. Since the L1VPN service was specified by the ITU in 2003, the main advances in L1VPN provisioning have been related to control plane issues.
Differently from the aforementioned work, our contribution focuses on L1VPN configuration management and support for interdomain L1VPN services. We show how the policy-based approach is suitable for flexible and independent L1VPN management. In addition, our proposal for interdomain L1VPN aims at dealing with QoS requirements, which are the main shortcoming of the solutions mentioned above.

4. Policy-based L1VPN management

We propose a policy-based architecture for L1VPN service management considering a single administrative
Table 1
Related work summary

Author              Year  Main points
ITU                 2003  L1VPN service requirements and reference model.
ITU                 2004  L1VPN architectures and functional model.
Takeda et al.       2004  Evaluation of L1VPN architectures.
Takeda et al.       2005  (IETF) L1VPN service requirements and framework.
Ould-Brahim et al.  2005  GVPN services.
Takeda et al.       2005  L1VPN services on GMPLS networks (addressing, membership discovery and signaling).
Ould-Brahim et al.  2006  (IETF) BGP-based L1VPN auto-discovery.
Bryskin and Berger  2006  (IETF) OSPF-based L1VPN auto-discovery.
Wu et al.           2006  L1VPN management by users (resource partition, physical network broker model).
domain. The proposed architecture is aimed at supporting independent control and management of L1VPN services. Instead of defining specific policies, our main interest is to discuss L1VPN policy classes and to demonstrate that Policy-Based Management (PBM) is a suitable approach for L1VPN management. In this section, we present L1VPN management requirements, discuss how the IETF policy framework can be applied to L1VPN management, and describe the proposed architecture. In the next section, we present how this architecture is extended to support interdomain L1VPN services.

4.1. Requirements

The main L1VPN management functionalities include security functions (authentication, accounting and authorization), fault handling and performance monitoring, membership information management, resource management, and routing and connection control. Some of them are common management functions and are already present in most Network Management Systems. However, extensions may be needed to meet L1VPN-specific requirements. The management functions may be centralized or distributed. A distributed implementation provides more scalable and faster operations, but it is usually more complex and requires extensions to existing protocols or even new ones. As mentioned, managing L1VPN services includes common management functions such as performance, security, and configuration management. However, the service provider must relate management decisions and events to the respective L1VPN [2]. Therefore, the service provider should be able to separate management and control information per L1VPN, in such a way that the operation of one L1VPN cannot be affected by another one. Moreover, a primary requirement is that customers should be given some level of control and management over their L1VPNs. Other important requirements related to L1VPN management are as follows:

• Layer 1 connection control by customers (they must be able to dynamically allocate resources from the provider network);
• Customers should receive fault, performance and connection blocking information;
• Isolated management and control channels per VPN.

4.2. Policy framework

We have decided on a policy-based approach in order to give customers some level of control and management over their L1VPN service. Indeed, an L1VPN requirement is that customers must be able to specify policies to control their L1VPN operation. Policy-Based Management (PBM) provides dynamic network-wide management and has been widely used to address the complexity of network and service management. In a Policy Framework, an administrator defines policies to be enforced within the network in order to control the behavior of the system as a whole. Policies are a set of rules that govern service and device operations and define how network resources can be used. Each rule consists of conditions and actions. If the conditions evaluate to true, the actions are executed. We have defined three major categories of L1VPN policies, namely configuration, admission control, and routing policies.

Configuration Policies are used to define configuration parameters which control L1VPN service operation. The main service operational aspects to which those policies may be applied are described as follows:

• Those policies can be used in resource management and to specify resource allocation models for different L1VPN customers;
Fig. 2. Policy-based framework for Layer 1 VPN management.
• Configuration policies can also be applied in fault handling [20]. The provider network may support several restoration and protection schemes to recover from link and node failures. Such policies may determine which recovery scheme should be used for each L1VPN or each L1VPN member;
• The provider network may support differentiated classes of L1VPN services. Configuration policies allow operators to determine the class of service for each L1VPN and even to configure the classes themselves;
• Configuration policies can also define routing algorithm parameters, since each L1VPN service may use different path computation algorithms, link weight schemes, and other routing attributes in connection routing.

Admission Control Policies are used in L1VPN connection admission control, which also considers membership information. Beyond common admission control aspects, like resource availability, those policies allow operators to specify additional constraints on connection admission as follows:

• Such policies can enhance membership control with additional connectivity restrictions. They allow operators to define restrictions within an L1VPN by specifying which members can establish connections to each other;
• Admission control policies can be used to limit resources per L1VPN service, as well as to limit the number of connections per L1VPN service or member;
• In the admission of customer connection requests, pre-provisioned connections in the provider network may be used in the establishment of the L1VPN connections. Admission control policies can be used to optimize the selection of the pre-provisioned core connections. In a previous work, we described an architecture for policy-based grooming responsible for managing the installation and aggregation of customer traffic flows within core optical connections [21].

Routing Policies aim at controlling path computation for L1VPN connections. The major applications of these policies are described as follows:

• Routing policies can be applied in support of Constraint-Based Routing (CBR). CBR is mainly used in Traffic Engineering (TE) and fast connection reroute mechanisms. In this case, path computation is subject to resource and administrative constraints, for example, route restrictions. Those policies can be used to specify such constraints on L1VPN connection routing;
• Routing policies can also be used in resource management as a way to support dedicated and shared allocation models when path computation is centralized;
• When there are several suitable routes for a connection, routing policies can be used to optimize route selection, for instance, to improve resource utilization.

We have adapted the IETF policy framework [3] taking into account L1VPN management aspects. The adapted framework is presented in Fig. 2. In this framework, the provider network operator first specifies policies to manage the L1VPN services based on administrative and business goals. This is represented by the Modeling process. Policy modeling must consider the provider network infrastructure, the L1VPN
Fig. 3. L1VPN management architecture.
service framework supported by the provider, and customer requirements. Furthermore, the defined policies should comply with a policy information model to assure interoperability. In the modeling process, there is a need for policy management functionalities to support policy specification, including syntax checking, conflict resolution, and so on. The policies defined for L1VPN control and management are represented in the figure as Service Configuration Policies.

A Policy Decision Point (PDP) evaluates the service configuration policies in order to decide which management actions must be enforced. Decisions may be requested or triggered by the occurrence of specific events. Such management decisions should consider the current network conditions (Network Status Information). The PDP may also translate configuration policies into configuration information specific to network elements, based on the capabilities of each one (Device Capabilities). Finally, a Policy Enforcement Point (PEP) is responsible for performing configuration changes according to actions sent by the PDP. PEPs may also be responsible for reporting device capabilities and network status to the PDP. The Decision process performed by the PDP is logically centralized. The Enforcement process is distributed over the network by implementing the PEP functionality on the managed entities. However, the Decision and Enforcement processes may both be implemented in a centralized management system.

4.3. Proposed architecture

The architecture design is based on the assumption that the transport network is enhanced with a distributed
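The condition-action rule structure and the PDP decision process described above can be sketched in a few lines of code. This is a minimal illustration, not an implementation of the IETF framework: the class names, the dictionary-based request context, and the connection-limit rule are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyRule:
    name: str
    conditions: list  # predicates evaluated over a request context
    actions: list     # callables applied to the context when the rule fires

@dataclass
class PolicyDecisionPoint:
    rules: list = field(default_factory=list)

    def decide(self, context):
        """Return the actions of every rule whose conditions all evaluate true."""
        fired = []
        for rule in self.rules:
            if all(cond(context) for cond in rule.conditions):
                fired.extend(rule.actions)
        return fired

# Hypothetical admission-control rule: reject a new connection once an
# L1VPN has reached its per-VPN connection limit.
limit_rule = PolicyRule(
    name="max-connections-per-vpn",
    conditions=[lambda ctx: ctx["active_connections"] >= ctx["max_connections"]],
    actions=[lambda ctx: ctx.update(admit=False)],
)

pdp = PolicyDecisionPoint(rules=[limit_rule])
ctx = {"vpn_id": "A", "active_connections": 8, "max_connections": 8, "admit": True}
for action in pdp.decide(ctx):  # enforcement (the PEP role) applies the actions
    action(ctx)
print(ctx["admit"])  # False: the connection is rejected
```

In a deployed system the decision logic would stay logically centralized at the PDP, while the enforcement loop at the end would run on the managed entities acting as PEPs.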
control plane which provides dynamic connection provisioning and network topology discovery. In this subsection, we describe the architecture functionalities and present an example of how its modules interact with each other. The proposed architecture is presented in Fig. 3. It defines a user interface represented by the Access Interface module (at the top of the figure). Through this interface, the customer or the provider network operator can request L1VPN connections, define policies, or receive performance information about the respective L1VPN service. On the other hand, the Control Plane Interface module is the interface between the management system and the control plane (at the bottom of the figure). The core modules are isolated by these interfaces. This design is aimed at achieving flexibility and making interoperability easier.

The Access Interface module is responsible for processing the requests from customers and provider network operators, as well as for authentication and authorization procedures. According to the request, it invokes one of the three modules described as follows:

• The Service Monitor is responsible for providing performance and fault information about L1VPN services to their customers. Some of this information can be obtained from the control plane;
• The Service Provider is the main module. It is responsible for the configuration and accounting management of L1VPN services. It performs admission control on customer connection requests considering policies and membership information. It is also responsible for connection control by requesting the control plane to create, modify or
Table 2
Summary of architecture functionalities

Module                   Functions
Access Interface         Process customer and provider requests; authorization and authentication.
Service Monitor          Report fault and performance information.
Service Provider         Configure L1VPN service; perform admission and connection control.
Policy Manager           Support policy modeling; process management decision requests.
Membership Manager       Add/remove members; verify membership.
Routing Controller       Compute paths.
Resource Manager         Manage resource information.
Control Plane Interface  Request control plane functions.
delete layer 1 connections according to the desired L1VPN topology;
• The Policy Manager allows customers and provider network operators to add, edit, remove, and activate policies. It is responsible for managing policies and processing decision requests from the other modules.

The Membership Manager is responsible for managing L1VPN membership information. It includes functions for adding or removing L1VPN members and verifying membership. The Resource Manager is responsible for managing provider network resources. It should provide support for both shared and dedicated resource allocation models. The Routing Controller is responsible for computing routes for L1VPN connection requests, making use of resource availability information from the Resource Manager and topology information collected from the routing mechanism of the control plane. The main functionalities are summarized in Table 2.

Some functionalities, like path computation, are not specific to L1VPN services. Although such functionalities may already be implemented in the provider network, extensions are needed in order to support L1VPNs. On the other hand, new functionalities must be implemented from scratch, since they are specific to L1VPN services, for instance, the management of membership information. L1VPN policies may be specified by provider network operators according to a Service-Level Agreement (SLA) and customer requirements. Moreover, customers should be allowed to specify high-level policies to configure and control their own L1VPN services. The operation of the Service Provider, Routing Controller, and Resource Manager modules is subject to the rules defined by the policies from the Policy Manager. As a result, separate L1VPN control and management are
provided to customers. Like the management solution proposed by Wu et al. [15], this approach gives customers some level of control over their L1VPN. However, our approach is more conservative: the provider can limit the level of customer control by determining which policies customers are allowed to specify. The solution of Wu et al. is more suitable in scenarios with a high trust level between client and provider networks.

The simplification and automation of the service management process are the main advantages of the policy-based approach [3]. Simplification is achieved through two key factors. First, all configuration is defined in a centralized way instead of configuring each device individually. In this way, when a new CE device is connected to a PE, the PE can be provisioned with the respective L1VPN configuration through the policy protocol. Second, high-level abstractions simplify policy specification. Administrators can specify service-level policies taking into consideration L1VPN service aspects rather than technology-specific details. Automation is achieved since administrators do not need to configure the service themselves. They only state system-wide policies which guide the entities involved in the provisioning of L1VPN services. A customer should be able to specify policies to manage their L1VPN services. However, providers must supervise that task, since a customer policy influences the overall provider network performance and may affect another customer's service. Therefore, L1VPN service provisioning is managed by a combination of provider and customer policies. Furthermore, providers can use policies to define differentiated classes of L1VPN services.

The sequence diagram in Fig. 4 illustrates how the architecture modules interact in the provisioning of L1VPN connections. The Access Interface processes the customer request and then invokes the Service Provider module to establish the connection (1).
This module retrieves management decisions from the Policy Manager (2). Such information is used to configure connection setup. The Service Provider also requests policy decisions for connection admission control. In addition, it communicates with the Membership Manager in order to verify whether the endpoints specified in the connection request are members of the same VPN (3). Then, the Service Provider requests the Routing Controller to compute a route for the requested connection (4). The Routing Controller requests routing policy decisions from the Policy Manager, which will control the path computation (5). This computation requires resource
204
N. Malheiros et al. / Optical Switching and Networking 5 (2008) 196–218
Fig. 4. Connection setup example.
availability information which is obtained from the Resource Manager (6). Once the Service Provider has received a route for the connection, it requests connection setup from the network control plane through the Control Plane Interface (7). Step (8) is a simplified representation of the connection setup over the provider network by a control plane. In a real scenario, the connection establishment could be done by, for example, a GMPLS signaling mechanism or any other suite of control protocols. Finally, the customer is notified of the connection establishment through the Access Interface.

4.4. Deployment scenarios

As described in Section 2, there are three L1VPN service models: the management-based model, the signaling-based model, and the signaling and routing model. In the first, L1VPN customers access the service functionalities through a management plane interface. In the other two models, those functionalities are provided through a control plane interface; we refer to these two jointly as the control plane-based service model. In this subsection, we discuss operational aspects of the proposed architecture considering the two models of L1VPN service interface. In a first scenario, we explain a centralized approach to support L1VPN service deployment using the management-based model. In a second scenario, we discuss an alternative for control plane-based model deployment by using a hybrid approach in which some functions of the architecture are centralized, while others are distributed. In both scenarios we consider a GMPLS-enabled provider network. In this context, dynamic connection setup can be performed
by the GMPLS RSVP-TE (Resource Reservation Protocol-TE) signaling mechanism [22], and automatic topology discovery can be achieved through the GMPLS OSPF-TE (Open Shortest Path First-TE) routing mechanism [23].

4.4.1. Centralized approach

Fig. 5 shows an L1VPN deployment scenario considering the management-based service model. In this case, service provisioning is based on a centralized approach. Thus, the management modules of the proposed architecture are implemented in a centralized way. They may be implemented in an L1VPN Management System or integrated into the provider Network Management System (NMS). The control plane functions, namely connection setup and topology discovery, are distributed, since they are performed by the GMPLS architecture. In order to establish an L1VPN connection, the customer sends a connection request to the L1VPN management system. After processing the request, the management system invokes the GMPLS control plane to establish a core connection across the provider network on behalf of the customer, as described in the example of Fig. 4. Such a connection initiated by a management system is named a soft permanent connection (SPC). After a route is computed for the connection request, the L1VPN management system requests the SPC from the ingress PE. Then the CE nodes can establish routing adjacencies over such a connection, as in an overlay scenario. GMPLS signaling mechanisms are used to set up the connection between PE devices.
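The centralized provisioning flow of Fig. 4 can be sketched in code. This is a minimal, illustrative sketch only: all class and method names (`PolicyManager.decide`, `MembershipManager.same_vpn`, and so on) are invented for the example and do not reflect the prototype's actual implementation; the SPC request to the ingress PE is simulated by returning a route.

```python
# Hypothetical sketch of the Fig. 4 connection-setup sequence under the
# management-based model. Names and return values are illustrative.

class PolicyManager:
    def decide(self, request):
        # Step 2: return configuration/admission decisions for the request.
        return {"admit": True, "allocation_model": "shared"}

class MembershipManager:
    def same_vpn(self, vpn_id, src, dst):
        # Step 3: check that both endpoints belong to the same VPN.
        members = {"vpn100": {"CE1", "CE2"}}
        return {src, dst} <= members.get(vpn_id, set())

class RoutingController:
    def compute_route(self, src, dst, policy):
        # Steps 4-6: policy-controlled path computation (details elided).
        return [src, "PE1", "P1", "PE2", dst]

class ServiceProvider:
    def __init__(self):
        self.policies = PolicyManager()
        self.membership = MembershipManager()
        self.routing = RoutingController()

    def setup_connection(self, vpn_id, src, dst):
        decision = self.policies.decide((vpn_id, src, dst))
        if not decision["admit"]:
            return None
        if not self.membership.same_vpn(vpn_id, src, dst):
            return None
        route = self.routing.compute_route(src, dst, decision)
        # Step 7: request a soft permanent connection (SPC) from the ingress
        # PE through the control plane interface (simulated here).
        return {"route": route, "status": "established"}

provider = ServiceProvider()
result = provider.setup_connection("vpn100", "CE1", "CE2")
```

Admission control, membership verification, and routing are kept in separate objects, mirroring the module boundaries of the proposed architecture.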
Fig. 5. Management-based L1VPN service.
Fig. 6. Control-based L1VPN service.
4.4.2. Hybrid approach

Considering the service models based on a control plane interface, a hybrid approach can be used in L1VPN deployment. In this case, there are centralized and distributed management functions. Compared with the previous approach, this scenario improves scalability and robustness in service provisioning and contributes to faster connection setup. However, it is a longer-term solution, since control plane protocol extensions and even new solutions are needed. Fig. 6 illustrates this scenario. In this case, beyond the control plane functions, several functionalities of the proposed architecture are also distributed: membership information management, admission and connection control, route computation, and resource management, previously implemented in the centralized system, are now performed on PE devices. Some functions, like policy and performance management, remain centralized. In addition, a centralized management system may be used to provide customers with service monitoring functions, like reporting L1VPN service performance and fault information. Customers and provider network operators may also access the management system to manage L1VPN configuration policies. Furthermore, in this L1VPN deployment scenario, PE devices need to implement PEP functionality in order to support policy-based management. PE devices communicate with a Policy Server to request policy decisions. This server supports policy modeling and performs PDP functionality. It is logically centralized and can be implemented in a centralized management system. In this context, L1VPN control is achieved through policies, since operations on PE devices are subject to policy rules specified in the policy server. In order to support communication between PEPs on
PE devices and a policy server, there is a need for a policy information exchange protocol like COPS (Common Open Policy Service) [24]. COPS is a request and response protocol which allows on-demand policy decision requests. Furthermore, such a protocol could be used to provide PE devices with L1VPN service configuration information [25]. In this section, we have presented a policy-based architecture for the provisioning of intradomain L1VPNs. In the next section, we discuss how the architecture is extended to support the provisioning of interdomain L1VPNs.
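The PEP/PDP exchange described above can be sketched as a toy request/response interaction. This is only a conceptual illustration in the spirit of COPS: the message fields (`handle`, `vpn`, `op`) are invented for the example and are not the actual COPS wire format.

```python
# Toy request/response exchange: a PEP on a PE device asks a logically
# centralized PDP for a decision. Fields are illustrative, not real COPS.

class PDP:
    def __init__(self, rules):
        # rules maps (vpn, operation) -> verdict, e.g. ("vpn100", "connect").
        self.rules = rules

    def decide(self, request):
        verdict = self.rules.get((request["vpn"], request["op"]), "reject")
        return {"handle": request["handle"], "decision": verdict}

class PEP:
    def __init__(self, pdp):
        self.pdp = pdp
        self.next_handle = 0

    def request_decision(self, vpn, op):
        # Each outstanding request is tracked by a client handle.
        self.next_handle += 1
        req = {"handle": self.next_handle, "vpn": vpn, "op": op}
        return self.pdp.decide(req)

pdp = PDP({("vpn100", "connect"): "accept"})
pep = PEP(pdp)
answer = pep.request_decision("vpn100", "connect")
```

The default-reject behavior models the point made in the text: operations on PE devices are subject to the rules specified at the policy server.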
5. Interdomain L1VPN services

As new services are developed to create new revenue opportunities, the interaction among different providers becomes even more necessary. No single provider is capable of offering advanced services alone. It is very important that different providers interact with each other to support interdomain transport services. However, most proposed L1VPN architectures focus on single domain L1VPN services, and how such architectures support interdomain L1VPNs is not clear yet. Moreover, the conventional routing architecture present in today's networks is not suitable for achieving QoS requirements in the provisioning of interdomain services. The problem is that L1VPN traffic itself is mission-critical data or multimedia traffic which requires QoS guarantees. In the management solution proposed by Wu et al. [15], users can compose interdomain L1VPN connections by allocating resources from multiple carrier networks. However, that solution requires a deep trust relationship among carriers. Another solution which supports interdomain L1VPNs is the GVPN service. The authors argue that the solution can be used across domains since BGP is the protocol responsible for conveying the membership information. However, that requires complex interdomain GMPLS signaling, and it does not solve the distribution of Traffic Engineering (TE) information among different domains. Furthermore, these two solutions cannot assure the computation of the optimal end-to-end path (despite each domain segment being optimal). In the previous section, we have considered the provisioning of L1VPN services in a single domain scenario. Here we describe how the proposed architecture is extended to support the provisioning of interdomain L1VPN services. In this context, the term domain means an administrative autonomous system (AS) with respect to routing aspects. However, the concept of domain is generic enough to be extended to other logical, physical, or administrative scopes. The design of the interdomain L1VPN architecture is based on the Virtual Topology approach, which abstracts internal details of each domain. Our objective is to highlight interdomain L1VPN as a killer service for next generation transport networks, raise a deeper discussion of the problem, describe its challenges, and point out some initial directions.

5.1. Requirements and challenges

We focus on two specific problems. First, we are concerned with how to distribute L1VPN policies, membership, and TE information among multiple domains. The second problem is how to establish interdomain QoS-based L1VPNs. In this case, QoS routing for transport networks means that the layer 1 connection selected by the routing procedure must satisfy the QoS attributes specified in the L1VPN connection request. The main QoS attributes include protection level, switching capability, encoding type, and maximum and minimum reservable bandwidth. A recent work [26] proposes a new BGP attribute, the TE attribute, which allows BGP to distribute TE information. Such an attribute conveys details of the underlying network, such as switching capability and available bandwidth. However, that solution depends on a BGP extension which, in our view, might not reach consensus in the standardization community, making the proposal unfeasible. Our view is that BGP is not capable of supporting the scalable sharing of all TE and membership information. At the same time, current solutions do not take into account the business relationships between service providers. We believe that by creating a Service Layer, providers can interact with each other through well-defined interfaces without having to wait for long standardization processes. The interactions are done in the service layer. We present our approach for the provisioning of interdomain L1VPN services, which does not affect the underlying infrastructure of the transport network. Our solution uses the Virtual Topology approach to distribute QoS routing and membership information in the service layer.

5.2. The Virtual Topology approach

The Virtual Topology (VT) concept abstracts the underlying details of a physical network topology. Each
Fig. 7. Advertising virtual topologies.
Fig. 8. On demand virtual topologies (Pull model).
provider is responsible for establishing connections across its optical network by using local solutions such as GMPLS mechanisms. Internal details on how the optical connections (lightpaths) were set up are hidden from clients. The VT to be advertised to other domains is defined by each local domain administrator. A given domain may advertise several different VTs according to specific business and administrative rules. Fig. 7 shows two VTs of the same physical topology being advertised to different peer providers: Domain A receives VT1 and Domain B receives VT2 from Domain C. The amount and nature of the information advertised depend on the peering contracts established between the domains. Each virtual link represents a set of resources (lightpaths) that can be used to establish interdomain connections, for example to build an interdomain L1VPN. The quantity of lightpaths under each virtual link is a local domain decision and depends on the physical resources of each domain. Typical information that can be advertised with the VT includes the level of protection of each lightpath, the bandwidth of each virtual link, and the (financial) cost of using the virtual link. However, if a given domain does not want to expose such internal information, for example due to confidentiality constraints, it can instead advertise only an abstract cost that represents the QoS of each virtual link.
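The two advertising granularities can be captured in a small data-structure sketch. The representation below is an assumption for illustration (the paper's actual VT encoding is an XML format discussed later): a virtual link either carries explicit QoS attributes or, under confidentiality constraints, only an abstract cost.

```python
# Illustrative representation of an advertised virtual topology. A virtual
# link either exposes QoS attributes or only an abstract cost.

def make_vlink(src, dst, *, cost, protection=None, bandwidth=None):
    link = {"src": src, "dst": dst, "cost": cost}
    if protection is not None:   # omitted under confidentiality constraints
        link["protection"] = protection
    if bandwidth is not None:
        link["bandwidth"] = bandwidth
    return link

# VT1 advertised to Domain A (full attributes); VT2 advertised to Domain B
# (abstract cost only), as in the Fig. 7 example.
vt1 = [make_vlink("C1", "C2", cost=5, protection="1+1", bandwidth=10)]
vt2 = [make_vlink("C1", "C2", cost=7)]
```

A domain can thus derive several VTs from the same physical topology simply by filtering which attributes each peer is entitled to see.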
We have defined two models to obtain virtual topologies [4]: the Push model and the Pull model. In the Push model, each domain announces the VT to all its neighbors, respecting previously defined business relationships. The Push model is better suited to a regional scenario. We envisage that this regional scenario is formed by “condominiums of domains”, in which a group of domains agrees on advertising VTs to each other. This advertising is done in a peering approach where all the domains that are part of the same condominium have the VTs of the other domains. These condominiums of domains could define new business rules and relationships that make the interactions more customer-oriented. In the Pull model (also known as the “on demand” model), domains do not advertise their VTs to the neighboring domains. The VT is requested by each domain that wants to know the VTs of other domains. When a given provider needs to find an end-to-end QoS-enabled interdomain route, it queries its local BGP table and verifies which are the possible routes to reach the destination. Then, based on these routes, the source domain can invoke each domain towards the destination and obtain the VT of each of them. Fig. 8 shows an example of the Pull model. Suppose that domain 1 needs to find a route to a destination at domain 4 with certain QoS guarantees. Domain 1
Fig. 9. Architecture for the provisioning of interdomain L1VPN services.
queries its local BGP table and discovers that the best route (based on BGP) is through domains 1, 3 and 4. Then, domain 1 requests the VT of each other domain. If there is more than one BGP path to the destination, the source domain can recursively query each domain in each path and find the best path towards the destination, based on the virtual topology information. After having obtained the set of VTs, a given domain can calculate a route over such a set. The path calculation depends on the amount of information that was advertised with the virtual topologies. If only the abstract cost was advertised, then the path calculation chooses the shortest path with the smallest cost. This does not guarantee that the path can support the desired QoS attributes; that will be discovered during the negotiation process between domains, which will be explained later. However, if QoS attributes were advertised with the VTs, the path calculation can take into account only the routes that satisfy the specific desired QoS attributes. The Push model is more business-oriented. It allows a domain to offer promotional VTs with specific characteristics to attract the attention of other domains. However, the Push model is more complex and generates more traffic, which can result in scalability problems. To solve such problems, groups of domains could be regionally defined as condominiums, and other condominiums can be recursively created in a hierarchical way. The Pull model is simpler than the Push model. There are no scalability issues, since only the VTs of the chosen routes are obtained. However, the Pull model makes path calculation take longer, because the VTs must be obtained at the time a request arrives [4].
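The pull-model route selection can be sketched as follows. This is a simplified sketch under stated assumptions: each candidate BGP path is a list of domain identifiers, `fetch_vt` is a hypothetical stand-in for the on-demand VT query, and each domain's VT is collapsed to a single abstract cost plus one QoS attribute (bandwidth).

```python
# Sketch of pull-model path selection: for each candidate BGP path, fetch
# the VTs of the transit domains on demand, discard paths that cannot meet
# the requested QoS, and keep the cheapest remaining path.

def select_path(bgp_paths, fetch_vt, min_bandwidth):
    best, best_cost = None, float("inf")
    for path in bgp_paths:
        vts = [fetch_vt(domain) for domain in path]
        # QoS filter: every transit domain must satisfy the request.
        if any(vt["bandwidth"] < min_bandwidth for vt in vts):
            continue
        cost = sum(vt["cost"] for vt in vts)
        if cost < best_cost:
            best, best_cost = path, cost
    return best

# Toy VT database standing in for the on-demand queries of Fig. 8.
vt_db = {1: {"cost": 1, "bandwidth": 40},
         2: {"cost": 2, "bandwidth": 10},
         3: {"cost": 3, "bandwidth": 40},
         4: {"cost": 1, "bandwidth": 40}}
route = select_path([[1, 2, 4], [1, 3, 4]], vt_db.__getitem__,
                    min_bandwidth=20)
```

Here the BGP-preferred path through domain 2 is discarded because domain 2 cannot offer the requested bandwidth, illustrating why VT information can override plain BGP route selection.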
5.3. Interdomain L1VPN architecture

The Virtual Topology approach presented above is used to support interdomain L1VPN services. In this case, path computation is based on the virtual topologies advertised by domains. The VT mechanism allows the computation of an interdomain QoS-based path. Furthermore, as an alternative to extending the current protocols, we propose to advertise not only the virtual topologies but also the policy and membership information of each L1VPN among domains. Thus, the VT mechanism is also used to distribute the local policies and membership information of each L1VPN service to every other domain that has at least one member of that service. In a previous work, we proposed an architecture for interdomain optical network services [27]. We have used that architecture to extend the proposed policy-based architecture (presented in Section 4.3) to support the establishment of interdomain L1VPN connections. The extended architecture is shown in Fig. 9. Its design is an instantiation of that architecture [27] taking into account the L1VPN service requirements. We follow Service-Oriented Architecture (SOA) principles to construct our model. Basically, SOA requires that applications are constructed with loose coupling and that services are dynamically discovered. Indeed, customer-controlled networks and service-oriented architectures are key approaches to deal with the requirements of advanced transport services [28]. Here we describe a feasible solution for interdomain L1VPN services, built upon the policy-based approach and the SOA-based architecture to
meet the specific requirements of L1VPN service provisioning. The extended architecture is divided into two layers: the service layer and the management layer. It advertises VT, policy, and membership information in the service layer as XML messages without having to change the way in which the underlying protocols perform. The service layer is also responsible for offering to clients and other domains an interface by which the functionalities are accessed. The service layer adds five functional modules to the initial architecture that was presented in the last section (see Fig. 3). The Advertising Service (AS) is responsible for advertising the local virtual topologies, policies and membership information (of each L1VPN) to other domains. The Path Computation Element (PCE) is responsible for finding an interdomain route so that an end-to-end interdomain L1VPN connection can be established. The End-to-End Negotiation Service (E2ENS) is responsible for negotiating end-to-end interdomain connections with other domains. The Trading Service (TS) is responsible for reserving and releasing the network resources for a given L1VPN and the L1VPN Service (L1VPNS) is responsible for provisioning L1VPNs. The TS and the L1VPNS together perform the functions of the Access Interface module of the initial architecture. The network management layer is responsible for actually putting the offered services into practice inside the local domain. It consists of the core modules of the initial architecture and the interface with the control plane. Therefore, the management layer is mainly responsible for admission and connection control, resource and policy management, membership management, routing control, and service monitoring and configuration. In the interdomain scenario, L1VPN policy and membership information sharing are performed by the Advertising Service. 
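The two-layer split can be sketched as a pair of cooperating objects. This is an assumed, minimal outline: the module names (AS, PCE, E2ENS, TS, L1VPNS) follow the text, but the method signatures and the placeholder management layer are invented for illustration.

```python
# Minimal sketch of the extended architecture's layering: service-layer
# modules delegate to a management layer that enforces local-domain control.

class ManagementLayer:
    # Placeholder for admission control, policy and membership management,
    # routing control, and resource management inside the local domain.
    def admit(self, request):
        return True

    def reserve(self, route):
        return {"route": route, "state": "reserved"}

class ServiceLayer:
    def __init__(self, mgmt):
        self.mgmt = mgmt

    def advertising_service(self, vt, policies, membership):
        # AS: bundle local VT, policy, and membership info for other domains
        # (advertised as XML messages in the actual architecture).
        return {"vt": vt, "policies": policies, "membership": membership}

    def trading_service(self, request, route):
        # TS: reserve resources for an L1VPN connection after the management
        # layer performs admission control.
        if not self.mgmt.admit(request):
            return None
        return self.mgmt.reserve(route)

svc = ServiceLayer(ManagementLayer())
reservation = svc.trading_service({"vpn": "A"}, ["D1", "D3", "D4"])
```

The design point this sketch captures is that other domains only ever talk to the service layer; the management layer, and through it the control plane, stays a local concern.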
Local policy and membership information advertised to other domains are obtained from the Policy Manager and the Membership Manager, respectively. The distribution of virtual topology, policy, and membership information is controlled by local domain rules. For example, virtual topologies may be advertised when the cost of a given virtual link changes. Membership information may be advertised when an L1VPN member is added or removed from an interdomain L1VPN service. Policy information sharing allows consistent configuration of the interdomain L1VPN service. In each domain, the membership information may be statically
configured or obtained from an auto-discovery mechanism, as discussed earlier. Membership information obtained from other domains is added to the local membership information repository. After an advertising phase, each domain will have the complete membership information about the interdomain L1VPN service. That enables an L1VPN member to know reachability information about members in other domains, and thus enables the establishment of interdomain connections. The process to establish a successful interdomain L1VPN connection is as follows. A given client invokes the TS to reserve the resources for an L1VPN connection. The TS forwards the request to the Service Provider module in the management layer to perform admission control. That module in turn validates the request based on the management policies and invokes the Membership Manager to perform membership verification. Then, the Service Provider invokes the PCE to obtain an interdomain route that connects the source and destination members. The route is found by using the set of virtual topologies that was advertised. After calculating the path, the interdomain negotiation is performed using the E2ENS. In this phase, the resources in each domain are reserved. Note that these steps are necessary to reserve the resources for the L1VPN connection. The L1VPNS is responsible for activating1 the reserved resources when requested by the respective customer.

6. Implementation and evaluation

We have implemented a prototype of the proposed architecture. In this section, we evaluate the implications of using the policy-based approach for L1VPN configuration management. First, we focus on the single domain (intradomain) scenario and then we consider interdomain L1VPN issues.

6.1. Intradomain L1VPN services

Here, we first discuss the prototype implementation and then present a case study based on simulation results.

6.1.1. Prototype implementation

We have implemented and tested an L1VPN management system prototype in order to validate the proposed architecture. The implementation considers the management-based service model and L1VPN

1 In optical networks, to activate means to cross-connect the optical switches.
Fig. 10. Implemented prototype.
configuration policies. The main modules of the architecture were developed using the Java Remote Method Invocation (Java RMI) technology. The Service Provider module includes two submodules which are responsible for connection and admission control. Route computation for requested connections is performed by the Routing Controller, which implements Dijkstra's shortest path algorithm. Fig. 10 depicts the structure of the prototype. The Service Monitor was partially implemented in [20,21]. Currently, the policies are specified statically (at compilation time). The Resource Manager simulates an optical network. In order to provide a high level of flexibility in accessing the management system, the Access Interface module was implemented as a Web Service. The requests are made through a web interface and the Web Service is accessed by using SOAP messages.2 The web interface is illustrated in Fig. 11, which shows the specification of a configuration policy. In this case, the configuration policy specifies a resource allocation model and a path computation scheme for the L1VPN service whose identifier is “100”. If the dedicated model is chosen, then it is possible to define how many wavelengths must be reserved in each link. Two path computation schemes are possible: find the shortest path in terms of the number of hops, or the path with the most available bandwidth in terms of the number of available wavelengths. In addition, we have investigated the use of XML technologies for policy representation, mainly due to

2 Simple Object Access Protocol (SOAP) is an XML-based protocol that supports information exchange over HTTP.
Fig. 11. Web-based management interface.
the flexibility, interoperability and availability of syntax-checking tools. To illustrate XML policies, we consider the simplified policy model presented in Fig. 12. This model is based on the Policy Core Information Model (PCIM) defined by the IETF [29,30]. It was specified according to the proposed policy classes and covers the main configuration and admission control policies. In this context, variables are associated with values to define conditions, and conditions are associated with actions to define policy rules. The specified policy actions can be used to reject or accept connections and to define VPN configuration parameters such as those discussed in the description of configuration policies. An XML policy example is illustrated in Fig. 13, considering the policy model presented before. Such
Fig. 12. L1VPN policy model.
Fig. 13. XML policy example.
policy specifies a configuration and an admission control rule. Both rules should be applied to “L1VPN service A” as expressed in the respective conditions (lines 7–8, and lines 18–19). The first rule defines actions to configure the resource allocation model as dedicated, the class of service, and the recovery scheme (lines 11–13). The second one is an example of connectivity restriction. It specifies that connections from member CPI1-PPI1 to member CPI2-PPI2, as expressed in the condition (lines 20–23), must be rejected (line 26).
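A condition/action policy of this kind can be parsed and evaluated with a few lines of code. The XML below is a hypothetical miniature in the spirit of Fig. 13: the element and attribute names (`rule`, `condition`, `action`) are invented for the example and are not the paper's exact schema.

```python
# Sketch of evaluating a condition/action XML policy: a rule fires when all
# of its conditions match the request context, yielding its actions.
import xml.etree.ElementTree as ET

POLICY = """
<policy>
  <rule>
    <condition variable="service" value="L1VPN-A"/>
    <action type="set" parameter="allocationModel" value="dedicated"/>
  </rule>
  <rule>
    <condition variable="source" value="CPI1-PPI1"/>
    <condition variable="destination" value="CPI2-PPI2"/>
    <action type="reject"/>
  </rule>
</policy>
"""

def evaluate(xml_text, context):
    actions = []
    for rule in ET.fromstring(xml_text).findall("rule"):
        conditions = rule.findall("condition")
        # A rule fires only if every condition matches the context.
        if all(context.get(c.get("variable")) == c.get("value")
               for c in conditions):
            actions.extend(a.attrib for a in rule.findall("action"))
    return actions

acts = evaluate(POLICY, {"service": "L1VPN-A",
                         "source": "CPI1-PPI1",
                         "destination": "CPI2-PPI2"})
```

Here both rules fire: the first configures the allocation model, and the second expresses the connectivity restriction by rejecting the member-to-member connection.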
6.1.2. Case study

We have also extended the prototype to build a simulation environment where L1VPN services are provided over an optical transport network. Therefore, the L1VPN connections are optical connections through a single provider network. In this environment, L1VPN service customers concurrently send connection requests to the L1VPN management system. Then the system attempts to establish the requested connections over the optical network. An optical connection is established by allocating an available wavelength in each
link from the source to the destination node. The connection establishment is done by a simplified distributed signaling mechanism. Such a mechanism is performed by control plane agents implemented in each node. It includes messages to request and confirm wavelength allocation. After the route is calculated, the Service Provider triggers the connection setup by sending a request to the control plane agent in the ingress node. Then the control agents communicate with each other to set up the optical connections. We have performed simulations in order to evaluate the effects and implications of configuration policies. From the perspective of the L1VPN customer, we measure the connection blocking rate. A connection is blocked if there are not enough available wavelengths. From the perspective of the service provider, we measure the network resource utilization rate in terms of the number of allocated wavelengths. In all simulated scenarios, the connection request arrival rate is based on a Poisson distribution where the average number of connections per second is 100, or a fraction of 100 when explicitly mentioned (e.g. “Arrival Rate = 0.4” means the average is 40). The connection holding time and the interval between two connection requests are based on an exponential distribution. The topology of the simulated provider optical network is presented in Fig. 14, which shows the PE and P devices. For each L1VPN customer there is one CE connected to each PE. The source and destination nodes of the connections are randomly selected according to a uniform distribution. The simulations are repeated 100 times and the presented results are the average.
Fig. 14. Simulated topology.
(a) Blocking rate.
6.1.2.1. First Scenario. In a first scenario we consider four L1VPN services (L1VPN 0–3). Each customer requests a total of 2500 connections and each optical link has 32 wavelengths. Configuration policies define high priority and low priority services: (1) High priority service: the resource allocation model is dedicated and 10 wavelengths on each link are reserved for the L1VPN service. Route computation involves only dedicated resources and must select the path with the maximum available resources. This is achieved by considering the weight of a link as 1/w, where w is the number of available wavelengths. (2) Low priority service: the resource allocation model is shared. Path computation involves shared provider network resources and must find the shortest path in terms of the number of hops. Fig. 15(a) shows the blocking rate of L1VPN 0 and the average blocking rate of the other L1VPNs when all L1VPNs are defined as low priority services. Fig. 15(b) shows the new rates when L1VPN 0 is assigned high priority and the other L1VPNs remain low priority services. The results demonstrate that the service provider may define configuration policies to differentiate L1VPN services. Moreover, the changes in blocking rate show how policies for one service can affect the others.
(b) Differentiated blocking rate. Fig. 15. Connection blocking rate.
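The two path computation schemes of the first scenario can be sketched with a standard shortest-path search over link weights: a constant weight of 1 yields the hop-count scheme, while a weight of 1/w (w being the number of available wavelengths) steers the path towards links with the most available resources. This is an illustrative sketch, not the prototype's actual Routing Controller code.

```python
# Dijkstra over an adjacency dict {node: {neighbour: free_wavelengths}}.
# The weight function selects the computation scheme: hop count (weight 1)
# for low priority, or 1/w for the high priority maximum-resource policy.
import heapq

def shortest_path(adj, src, dst, weight):
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w_free in adj[u].items():
            nd = d + weight(w_free)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:          # walk back from destination to source
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Toy topology: A-B-D has few free wavelengths, A-C-D has many.
adj = {"A": {"B": 2, "C": 30}, "B": {"D": 2}, "C": {"D": 30}, "D": {}}
hop_path = shortest_path(adj, "A", "D", weight=lambda w: 1)        # low priority
res_path = shortest_path(adj, "A", "D", weight=lambda w: 1.0 / w)  # high priority
```

With the 1/w weight the search prefers the well-provisioned A-C-D path even though both routes have the same hop count, which is exactly the differentiation the configuration policy expresses.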
Fig. 16. Policy activation effect.
Fig. 17. Blocking rate: shared versus dedicated.
6.1.2.2. Second Scenario. A similar scenario illustrates how policies can be used in reaction to specific network conditions. Fig. 16 shows the improvement in the L1VPN 0 blocking rate when a policy is activated in order to assign L1VPN 0 high priority. In this case, once such a policy is activated, wavelengths in each link are dedicated to L1VPN 0. The policy is activated after L1VPN 0 has requested half of the total number of connections. More elaborate conditions are possible; for instance, such policies can be activated on the occurrence of specific events or when a performance degradation threshold is reached.
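Threshold-triggered policy activation of this kind can be sketched as follows. All names here are illustrative assumptions: the monitored metric, the threshold value, and the activated policy stand in for whatever conditions and actions an operator would actually define.

```python
# Sketch of condition-triggered policy activation: when a monitored metric
# (here, blocking rate) crosses a threshold, a pre-defined reaction policy
# is activated for the affected service.

class PolicyActivator:
    def __init__(self, threshold, on_breach):
        self.threshold = threshold
        self.on_breach = on_breach  # callback building the reaction policy
        self.active = None

    def report_blocking_rate(self, vpn, rate):
        # Activate the reaction policy once, on threshold breach.
        if rate > self.threshold and self.active is None:
            self.active = self.on_breach(vpn)
        return self.active

activator = PolicyActivator(
    threshold=0.2,
    on_breach=lambda vpn: {"vpn": vpn, "priority": "high",
                           "allocation_model": "dedicated"})
activator.report_blocking_rate("L1VPN0", 0.1)    # below threshold: no change
policy = activator.report_blocking_rate("L1VPN0", 0.35)
```

The same pattern generalizes to event-driven conditions: the trigger would subscribe to fault or performance events instead of polling a rate.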
6.1.2.3. Final Discussion. In the previous scenarios, the configuration policy assigns the dedicated model to high priority L1VPN services in order to decrease the service blocking rate. However, dedicated resources may degrade the overall performance. This is illustrated in Fig. 17, which compares the average blocking rate of all L1VPN services when they use the same allocation model, at different connection request arrival rates. The blocking rate is measured with the dedicated allocation model for all L1VPNs and then again with the shared allocation model for all L1VPNs. The graph shows the average blocking rate of all L1VPNs in both cases. When all L1VPNs use the dedicated model, the network resources (wavelengths) are equally distributed and reserved for each one. The results show that the shared resource allocation model outperforms the dedicated model. Moreover, customer configuration policies can impact the overall provider network performance. We evaluated the effect on provider network resource utilization at different connection request arrival rates. Here, the resource utilization rate is measured in terms of the number of allocated wavelengths. For each arrival rate, the utilization rate is first measured with the dedicated allocation model for all L1VPNs and then with the shared allocation model for all L1VPNs. As shown in Fig. 18(a) and (b), the results demonstrate that the dedicated model is less efficient than the shared model. With a low arrival rate there is no significant difference, as shown in Fig. 18(a). However, when the network load increases, the shared model outperforms the dedicated model, as shown in Fig. 18(b). The results show how policies may be used to offer differentiated classes of L1VPN services and to control service behavior and performance. Providers must analyze and decide which configuration policies customers should be allowed to specify, since the same policy may benefit a customer while degrading the overall provider network performance. Moreover, a policy that improves the performance of one L1VPN service may degrade the performance of another, depending on the network conditions and the configuration of the other services. For instance, in the case of the resource allocation configuration policy, the results showed advantages and disadvantages from different perspectives. In the scenarios presented in Figs. 15(b) and 16, the dedicated model was successfully used to improve the performance of L1VPN 0 by decreasing its blocking rate. Nevertheless, the average blocking rate of the other services increased. Also, the dedicated resource allocation model proved to be less efficient with respect to the blocking rate when all services were configured to use the same model, as shown in Fig. 17. We have seen that in some scenarios the dedicated model outperformed the shared model. One could argue that a connection request that requires dedicated resources is more likely to be blocked than a request that
Fig. 18. Resource utilization rate: (a) arrival rate = 0.4; (b) arrival rate = 1.0.
requires shared resources. The point, however, is that in the dedicated model the path computation algorithm can optimize resource allocation, since resources are not shared and detailed availability information can be maintained. The work presented in [31] proposes path computation algorithms and describes a multilayer scenario in which the dedicated model outperforms the shared model in terms of blocking rate. The authors argue that the efficiency of optimized path computation algorithms in the dedicated model can overcome the disadvantage of not sharing provider network resources. In summary, the simulated scenarios demonstrate that, by supporting policy-based management, the proposed architecture is a suitable and flexible approach to provide separate L1VPN management for customers. However, a challenging issue is how to estimate the overall effect of policies. It is a difficult task to evaluate how configuration policies for one L1VPN service can affect the behavior of other services and of the provider network. Providers must develop mechanisms and tools to simulate and evaluate policy effects. Also, there can be conflicts between the policies of customers and those of provider network administrators. Priority mechanisms can be used to resolve policy conflicts and regulate the level of policy control that is given to customers.

6.2. Interdomain L1VPN services

In the evaluation of the interdomain provisioning of L1VPN services we are concerned with two main aspects: the overhead of using Web Services to establish interdomain L1VPN connections and the effects of policies in an interdomain scenario. The evaluation is divided into three parts. The first one analyzes the overhead of sharing Virtual Topology information
by using XML SOAP messages. Then we analyze the overhead of the XML SOAP messages used to share membership information. In the third part, we present some simulation results on using policies to configure interdomain L1VPN services, as we did in the previous section for the intradomain scenario. It is important to note that we have also investigated timing issues with respect to the performance of using Web Services. In a previous work [27], we presented a case study in which interdomain optical connections are provisioned to build optical virtual networks as a first step towards providing Optical VPNs. In that work, we discuss some results on the time to establish interdomain optical connections using Web Services.

6.2.1. VT information overhead

The size of the SOAP message used to distribute virtual topologies depends on the number of nodes and virtual links that compose the virtual topology. We have analyzed the overhead of sharing VT information considering the XML document format illustrated in Fig. 19. In this format, each node and each virtual link is identified by its own element, and a cost element holds the abstract cost of each virtual link. Currently, the cost element is not used for nodes, but it can be used if nodes have any cost associated with them. For each virtual link added to a virtual topology, there is an increase of 98 bytes, and for each node added to the virtual topology there is an increase of 63 bytes. Then, considering that i is the number of virtual links, j is the number of nodes, and c is the size of the SOAP message header plus other fixed virtual topology information, we have the following: (i ∗ 98) + (j ∗ 63) + c
Fig. 19. Example of VT information in XML.
Fig. 20. Example of membership information in XML.
is the total size of the SOAP message to distribute a virtual topology with i virtual links and j nodes. Considering the Push model and n domains, each topology is advertised to n − 1 domains. In this case, ∑_{k=1}^{n} ((i ∗ 98) + (j ∗ 63) + c) ∗ (n − 1) is the total number of bytes to distribute virtual topologies among domains in one round. As a simple example, if a domain with the NSFNet topology (14 nodes and 25 links) advertises its VT with all links and nodes to two other domains (n − 1 = 2), then the resulting overhead is 7,348 bytes (c = 342 bytes in our model).

6.2.2. Membership information overhead

In our architecture, membership information is also described in XML. As illustrated in Fig. 20, the XML document carries information such as:
• the element domains, which holds information on the other domains to which membership information needs to be advertised;
• the element client, which identifies the VPN;
• the element cpi, which holds the port identifier in the CE node;
• the element ppi, which holds the port identifier in the PE node.
For each port (member) that is locally added to an L1VPN, there is an increase of 120 bytes. As an example, an L1VPN with 2 ports in a domain needs a SOAP message of 670 bytes. If a new port is added to that L1VPN, the size of the SOAP message increases to 790 bytes. Then, considering np the number of ports in a domain and c the size of the SOAP message header plus other fixed Port Information Table (PIT) information, (np ∗ 120) + c is the total size of the SOAP message to distribute the information of an L1VPN in a domain. For each L1VPN, this information needs to be distributed to the other domains that have ports in the same L1VPN. Considering nd_VPN the number of domains that have ports in a given L1VPN, ∑_{k=1}^{nd_VPN} ((np ∗ 120) + c) ∗ (nd_VPN − 1) is the total number of bytes to distribute membership information for a given L1VPN that has members in nd_VPN domains.

6.2.3. Simulation results

We have evaluated the effect of using the policy-based approach to configure L1VPN services in an interdomain scenario (in a similar way as previously done for the intradomain scenario). Fig. 21 shows the topology of the simulated optical network. The topology of each domain is the NSFNet topology, as presented in Fig. 14. Each link in each domain has 16 wavelengths, while interdomain links have 128 wavelengths. There are four L1VPN services (L1VPN 1–4). Each service has 27 members, one member connected to each PE device in each domain. In the performed simulations we consider a static scenario. That is, we generate a demand of connections for the L1VPN services and then the network attempts to establish the requested L1VPN connections. For each connection request, source and destination are randomly selected among the members of the respective L1VPN based on a uniform distribution. These connections may be intradomain or interdomain. The number of connections to be requested is the same for each service and is based on a Poisson distribution where the average number of connection requests is 42.
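The message-size estimates of Sections 6.2.1 and 6.2.2 can be checked with a short script. A minimal sketch follows; the function and constant names are illustrative only, and the PIT fixed-header size of 430 bytes is inferred from the 670-byte two-port example (670 − 2 ∗ 120), not stated explicitly in the text:

```python
# Sketch of the SOAP message-size estimates for VT and membership information.
# Per-item byte costs (98, 63, 120) and the VT header size (342) come from the
# text; the PIT header size (430) is inferred from the 670-byte example.

VT_LINK_BYTES = 98      # bytes added per virtual link
VT_NODE_BYTES = 63      # bytes added per node
VT_HEADER_BYTES = 342   # SOAP header + fixed VT information (c)

PORT_BYTES = 120        # bytes added per L1VPN port (member)
PIT_HEADER_BYTES = 430  # SOAP header + fixed PIT information (c), inferred

def vt_message_size(links: int, nodes: int) -> int:
    """Size of one SOAP message advertising a virtual topology."""
    return links * VT_LINK_BYTES + nodes * VT_NODE_BYTES + VT_HEADER_BYTES

def vt_advertisement_overhead(links: int, nodes: int, n_domains: int) -> int:
    """Bytes for one domain to advertise its VT to the other n - 1 domains."""
    return vt_message_size(links, nodes) * (n_domains - 1)

def membership_message_size(ports: int) -> int:
    """Size of one SOAP message distributing an L1VPN's local membership."""
    return ports * PORT_BYTES + PIT_HEADER_BYTES

# NSFNet-based example from the text: 25 links, 14 nodes, 3 domains.
print(vt_advertisement_overhead(25, 14, 3))  # 7348 bytes
print(membership_message_size(2))            # 670 bytes
print(membership_message_size(3))            # 790 bytes
```

The three printed values reproduce the worked examples in the text, which suggests the per-item costs and fixed headers account for the full message sizes.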
The simulation is repeated 400 times and we evaluate the average connection blocking rate and the average resource utilization rate in terms of the number of allocated wavelengths. Interdomain path computation is centralized and performed through Dijkstra's algorithm. We consider two types of policies: configuration and routing policies. Configuration policies are used to control the resource allocation mechanism of each L1VPN service, and routing policies are used to specify different schemes for calculating link weights when routing connections for each service.
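The two link-weight schemes used by the routing policies can be sketched as follows. This is a minimal, self-contained illustration, not the architecture's actual path computation module: the "min hops" policy weights every link as 1, while the "1/w" policy weights a link by the inverse of its number of available wavelengths. The graph representation and names are assumptions made for the sketch:

```python
import heapq

def link_weight(available_wavelengths: int, policy: str) -> float:
    """Link weight under the two routing policies described in the text."""
    if policy == "min_hops":
        return 1.0                           # every hop costs the same
    if policy == "inverse_w":
        return 1.0 / available_wavelengths   # congested links become heavier
    raise ValueError(f"unknown policy: {policy}")

def shortest_path(graph, src, dst, policy):
    """Dijkstra over a dict graph: node -> list of (neighbor, available wavelengths)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w_avail in graph.get(u, []):
            nd = d + link_weight(w_avail, policy)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path))

# Toy directed topology: A-B-D is shorter (2 hops) but congested (1 free
# wavelength per link); A-C-E-D is longer but lightly loaded (10 free each).
g = {
    "A": [("B", 1), ("C", 10)],
    "B": [("D", 1)],
    "C": [("E", 10)],
    "E": [("D", 10)],
}
print(shortest_path(g, "A", "D", "min_hops"))   # ['A', 'B', 'D']
print(shortest_path(g, "A", "D", "inverse_w"))  # ['A', 'C', 'E', 'D']
```

The example shows the intended behavioral difference: under 1/w weighting, a connection is routed around the congested two-hop path, at the cost of extra hops.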
Table 3
Simulated scenarios

            L1VPN 1                   L1VPN 2-4
Scenario 1  [Shared] [Min Hops]       [Shared] [Min Hops]
Scenario 2  [Dedicated] [1/w]         [Shared] [Min Hops]
Scenario 3  [Dedicated] [Min Hops]    [Shared] [Min Hops]
Scenario 4  [Shared] [1/w]            [Shared] [Min Hops]

Fig. 21. Simulated interdomain topology.

In a first simulation, we define policies to set up four scenarios. In Scenario 1, a configuration policy assigns the shared allocation model to all four L1VPN services. The routing policy defines that the shortest path is computed in terms of the number of hops. In the remaining three scenarios we change the policies only for L1VPN 1. In Scenario 2, a configuration policy assigns the dedicated resource model and reserves 5 wavelengths for L1VPN 1 in each link (except for interdomain links, which are always shared among all services). In addition, a routing policy defines the weight of each link as 1/w, where w is the number of available wavelengths in the link. In Scenario 3, the allocation model remains dedicated but the shortest path is again computed in terms of the number of hops. Scenario 4 covers the last combination of the two policies for L1VPN 1, by configuring the allocation model as shared and the link weight as 1/w. Table 3 summarizes the setup of the four scenarios. This first simulation demonstrates how policies can be used to configure different classes of interdomain L1VPN services. The results show the average connection blocking rate for each L1VPN
service (Fig. 22(a)) and the average resource utilization rate for each domain (Fig. 22(b)). Scenario 2 presents the best connection blocking rate for L1VPN 1. The policies defined in that scenario can be used to configure L1VPN 1 as a high priority service and the other L1VPNs as low priority services. However, those policies led to worse resource utilization rates and also degraded the blocking rate of the other L1VPNs. The difference in blocking rate is more significant in Scenarios 2 and 3, where the configuration policy assigns the dedicated allocation model to L1VPN 1. We can note that the effect of the configuration policy is more important than the effect of the routing policy. In a second simulation, we evaluate two opposite scenarios. In each one, the policies are the same for all L1VPNs. In the first scenario, the configuration policy assigns the shared allocation model and the routing policy defines the shortest path to be computed in terms of the number of hops; this is simply Scenario 1. In the other scenario, called Scenario 5, the configuration policy assigns the dedicated model and the routing policy defines the link weight as 1/w. The results are shown in Fig. 23(a) and (b). In the first simulation, the configuration policy assigned the dedicated model to improve the blocking rate of L1VPN 1. In the second simulation, we can see that when the domains apply this same policy (dedicated resource allocation) to all services, they
Fig. 22. Policies for service differentiation.
Fig. 23. Same policies for all services.
degrade the blocking rate of all services. A similar effect is seen when we consider the resource utilization rate. It is important to note that the corresponding domains must hold the policies used to control a given interdomain L1VPN service, in order to keep the behavior of such a service consistent. Alternatively, different domains may apply different policies to the same L1VPN service. Indeed, there may be scenarios where each domain has different policies for the same L1VPN, depending on the business relationship between the customer and each domain. The simulation results also support the hypothesis that the allocation model is more critical than the path computation model with respect to connection blocking and resource utilization rates. However, rather than measuring performance improvements from specific policies, our main objectives here are to show relevant policy usage scenarios and to evaluate the different effects of policies and the implications of the policy-based approach for L1VPN management.

7. Conclusion

In this work, we propose a management architecture for L1VPN services. First, we described a policy-based architecture for L1VPN management considering a single domain scenario. We demonstrated how Policy-Based Management can be used in the configuration management of L1VPN services. Indeed, the proposed management architecture is a suitable approach to enable multiple customers to manage their L1VPN services. However, supervising and isolating customer control is not an easy task for service providers. Furthermore, we have extended the proposed architecture to support the provisioning of interdomain L1VPN services. The goal of the proposed solution
is to create a service layer through which the interactions among domains are performed. Such a service layer supports advanced services such as L1VPNs without changing how the current transport infrastructure works and without waiting for a long standardization process. Our proposal uses the virtual topology concept, which abstracts the underlying details of each optical domain and conveys L1VPN membership and TE information representing the state of the underlying network. The virtual topologies are advertised among domains, enabling the selection of QoS-based routes and making the offering of interdomain services more customer- and business-oriented. Prototype implementation and simulations have demonstrated the feasibility of the policy-based architecture, and the effects and implications of configuration policies from different points of view, in both intradomain and interdomain scenarios. Moreover, per-VPN resource management is a challenging problem, mainly in the interdomain scenario. Managing resource partitions presents a tradeoff between the level of customer control and the resource utilization rate. There is a need for further research on this topic.

Acknowledgements

The authors would like to thank CAPES, CNPq, FAPESP and Ericsson Brazil for supporting this work.

References

[1] A. Farrel, I. Bryskin, GMPLS Architecture and Applications, Morgan Kaufmann, 2006.
[2] T. Takeda, I. Inoue, R. Aubin, M. Carugi, Layer 1 Virtual Private Networks: Service concepts, architecture requirements, and related advances in standardization, IEEE Communications Magazine 42 (6) (2004) 132–138.
[3] D.C. Verma, Simplifying network administration using policy-based management, IEEE Network 16 (2) (2002) 20–26.
[4] F.L. Verdi, E. Madeira, M. Magalhães, A. Welin, The virtual topology service: A mechanism for QoS-enabled interdomain routing, in: 6th IEEE International Workshop on IP Operations & Management, IPOM'06, in: LNCS, vol. 4268, Springer, Dublin, Ireland, 2006, pp. 205–217.
[5] S. French, D. Pendarakis, Optical virtual private networks: Applications, functionality and implementation, Photonic Network Communications 7 (3) (2004) 227–238.
[6] H. Ould-Brahim, Y. Rekhter, D. Fedyk, BGP-based auto-discovery for layer-1 VPNs, IETF Internet-Draft, February 2008 (in preparation).
[7] I. Bryskin, L. Berger, OSPF based layer 1 VPN auto-discovery, IETF Internet-Draft, February 2008 (in preparation).
[8] T. Takeda, R. Aubin, M. Carugi, I. Inoue, H. Ould-Brahim, Framework and requirements for layer 1 virtual private networks, IETF RFC 4847, April 2007.
[9] ITU, Layer 1 virtual private network generic requirements and architecture elements, ITU-T Recommendation Y.1312, September 2003.
[10] ITU, Layer 1 virtual private network service and network architectures, ITU-T Recommendation Y.1313, July 2004.
[11] D. Fedyk, Y. Rekhter, D. Papadimitriou, R. Rabbat, L. Berger, Layer 1 VPN basic mode, IETF Internet-Draft, February 2008 (in preparation).
[12] T. Takeda, D. Brungard, A. Farrel, H. Ould-Brahim, D. Papadimitriou, Applicability statement for layer 1 virtual private networks (L1VPNs) basic mode, IETF Internet-Draft, February 2008 (in preparation).
[13] T. Takeda, D. Brungard, A. Farrel, H. Ould-Brahim, D. Papadimitriou, Applicability analysis of GMPLS protocols for the layer 1 virtual private network (L1VPN) enhanced mode, IETF Internet-Draft, November 2006 (in preparation).
[14] T. Takeda, H. Kojima, I. Inoue, Layer 1 VPN architecture and its evaluation, in: 10th Asia-Pacific Conference on Communications, vol. 2, 2004, pp. 612–616.
[15] J. Wu, M. Savoie, S. Campbell, H. Zhang, B.S. Arnaud, Layer 1 virtual private network management by users, IEEE Communications Magazine 44 (12) (2006) 86–93.
[16] T. Takeda, D. Brungard, D. Papadimitriou, H. Ould-Brahim, Layer 1 Virtual Private Networks: Driving forces and realization by GMPLS, IEEE Communications Magazine 43 (7) (2005) 60–67.
[17] H. Ould-Brahim, Y. Rekhter, GVPN services: Generalized VPN services using BGP and GMPLS toolkit, IETF Internet-Draft, February 2005 (in preparation).
[18] G. Swallow, J. Drake, H. Ishimatsu, Y. Rekhter, GMPLS user-network interface, IETF RFC 4208, October 2005.
[19] D. Benhaddou, W. Alanqar, Layer 1 virtual private networks in multidomain next-generation networks, IEEE Communications Magazine 45 (4) (2007) 52–58.
[20] C. Carvalho, E. Madeira, F.L. Verdi, M. Magalhães, Policy-based fault management for integrating IP over optical networks, in: 5th IEEE International Workshop on IP Operations and Management, IPOM'05, in: LNCS, vol. 3751, Springer, 2005, pp. 88–97.
[21] F.L. Verdi, C. Carvalho, E. Madeira, M. Magalhães, Policy-based grooming in optical networks, Journal of Network and Systems Management, Springer, 2007, in press (doi:10.1007/s10922-007-9074-9).
[22] L. Berger, GMPLS signaling resource reservation protocol-traffic engineering (RSVP-TE) extensions, IETF RFC 3473, January 2003.
[23] K. Kompella, Y. Rekhter, OSPF extensions in support of GMPLS, IETF RFC 4203, October 2005.
[24] D. Durham, R. Cohen, J. Boyle, S. Herzog, R. Rajan, A. Sastry, The COPS (Common Open Policy Service) protocol, IETF RFC 2748, January 2000.
[25] K. Chan, D. Durham, S. Gai, S. Herzog, K. McCloghrie, F. Reichmeyer, J. Seligson, A. Smith, R. Yavatkar, COPS usage for policy provisioning (COPS-PR), IETF RFC 3084, March 2001.
[26] H. Ould-Brahim, D. Fedyk, Y. Rekhter, Traffic Engineering Attribute, July 2006.
[27] F.L. Verdi, M.F. Magalhães, E. Cardozo, E.R.M. Madeira, A. Welin, A service oriented architecture-based approach for interdomain optical network services, Journal of Network and Systems Management 15 (2) (2007) 141–170.
[28] H. Elbiaze, O. Cherkaoui, Signaling end-to-end optical services over multi-domain networks, Optical Switching and Networking 4 (2007) 58–74.
[29] B. Moore, E. Ellesson, J. Strassner, A. Westerinen, Policy core information model, version 1 specification, IETF RFC 3060, February 2001.
[30] B. Moore, Policy core information model (PCIM) extensions, IETF RFC 3460, January 2003.
[31] T. Takeda, H. Kojima, N. Matsuura, I. Inoue, Resource allocation method for optical VPN, in: Optical Fiber Communication Conference (OFC 2004), vol. 1, 2004.