FITS: A flexible virtual network testbed architecture

Igor M. Moraes b,*, Diogo M.F. Mattos a, Lyno Henrique G. Ferraz a, Miguel Elias M. Campista a, Marcelo G. Rubinstein c, Luís Henrique M.K. Costa a, Marcelo D. de Amorim d, Pedro B. Velloso b, Otto Carlos M.B. Duarte a, Guy Pujolle d

a Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
b Universidade Federal Fluminense, Niterói, Brazil
c Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil
d UPMC Sorbonne Universités, Paris, France

* Corresponding author.
Article info

Article history: Received 9 July 2012; Received in revised form 24 December 2013; Accepted 6 January 2014; Available online 10 January 2014

Keywords: Network testbeds; Future Internet; Experimental facilities; Virtualization; Xen; OpenFlow
Abstract

In this paper, we present the design and implementation of FITS (Future Internet Testbed with Security), an open, shared, and general-purpose testbed for the Future Internet. FITS defines an innovative architecture that allows users to run experiments with new mechanisms and protocols using both Xen and OpenFlow on the same network infrastructure. FITS integrates several recognized state-of-the-art features such as plane separation, zero-loss network migration, and smartcard-driven security access, to cite a few. The current physical testbed is composed of nodes placed at several Brazilian and European institutions interconnected by encrypted tunnels. Besides presenting the FITS architecture and its features, we also discuss deployment challenges and how we have overcome them.

© 2014 Elsevier B.V. All rights reserved.
1. Introduction

The current Internet architecture has reached its limit and is no longer expected to fulfill all network requirements, such as security, management, mobility, and quality of service. Consequently, a lot of effort has been devoted to developing an architecture for the Future Internet [1–3]. In this context, the pluralist approach is envisioned as the foremost architecture because it enables multiple virtual networks to run in parallel with different protocol stacks. Innovation and legacy support can then coexist by considering the current Internet as one of the virtual networks. However, changing to this new architecture is significantly challenging, since no one is
[email protected] (I.M. Moraes),
[email protected] (D.M.F. Mattos),
[email protected] (L.H.G. Ferraz),
[email protected] (M.E.M. Campista),
[email protected] (M.G. Rubinstein),
[email protected] (L.H.M.K. Costa),
[email protected] (M.D. de Amorim), velloso@ ic.uff.br (P.B. Velloso),
[email protected] (O.C.M.B. Duarte), Guy.Pujolle@ lip6.fr (G. Pujolle). 1389-1286/$ - see front matter Ó 2014 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.bjp.2014.01.002
willing to adopt a whole new architecture without rigorous and extensive large-scale tests. Therefore, network testbeds are fundamental to validate novel protocols under realistic conditions before deployment, as Internet Service Providers are reluctant to introduce innovations in the core network due to potential instabilities or service interruptions. In this paper, we describe the architectural principles, features, implementation, and deployment of the Future Internet Testbed with Security (FITS).¹ FITS aims at providing an open, shared, and general-purpose facility for experimenting with solutions for the next-generation Internet. The originality of FITS is in the combination of an innovative architecture with recognized state-of-the-art features. The main features of FITS, illustrated in Fig. 1, are:
¹ http://www.gta.ufrj.br/fits
Fig. 1. FITS architecture and main features with two different virtual networks over the physical network substrate. (1) Virtual Network Element (NE) migration from one physical router to another without packet losses. (2) Authentication of all physical and virtual network elements using a Public Key Infrastructure (PKI). (3) Virtual network isolation that guarantees the secure operation of the pluralist architecture. (4) Decentralized or centralized control, using a controller per physical element, or requiring a connection to all physical elements.
- Flexible design: FITS follows the pluralist paradigm, in which researchers can have full access to virtual networks spanning several physical nodes [3]. Virtual networks are isolated from each other and may have different requirements in terms of Quality of Service (QoS) [4,5]. Researchers can choose to experiment with either OpenFlow or Xen, which are flow-based and machine-based virtualization approaches, respectively.
- Efficient packet forwarding: researchers using FITS can also choose a fast packet-forwarding scheme based on plane separation, which avoids overhead and memory copies by performing forwarding only at the physical machine.
- Network migration: FITS provides seamless virtual network migration without losing packets or stopping experiments.
- Security: testbeds implemented over the legacy Internet inherit its security issues. One of the main goals of FITS is to consider security as one of its design requirements. FITS relies on a Public Key Infrastructure that certifies all physical servers and virtual machines in the testbed. FITS authenticates users from OpenID Identity Providers using low-cost smartcards [6]. This improves security because all authentication handshakes run in the smartcard tamper-proof microcontroller. Finally, all connections between nodes, or islands of nodes, use a secure Virtual Private Network (VPN), in which all traffic is encrypted with the Transport Layer Security (TLS) protocol.

The rest of this paper is organized as follows. In Section 2, we describe related work. The main features required for a Future Internet testbed are summarized in Section 3. In Section 4, we detail the FITS architecture. We present in Section 5 the FITS operational features and insights into how to provide them. In Section 6, we describe the features developed to make it easier for users to experiment with FITS. In Section 7, we analyze the interconnection between different FITS islands in terms of bandwidth and round-trip time during a 24-h period. Finally, we conclude the paper in Section 8.
2. Related work

Our work is in line with current efforts on experimental facilities for the development and evaluation of new network technologies [7]. Currently, there are several experimental facilities [8–14] and testbed federation proposals [2,15]. PlanetLab pioneered the area by offering a shared infrastructure over which users can experimentally evaluate their protocols and algorithms [8]. VINI made a step forward by bringing virtual networks into the picture [9]. G-Lab (German Lab) is a Future Internet initiative, currently based on PlanetLab, built upon a Germany-wide network of wireless and wired equipment [11]. Nevertheless, PlanetLab, VINI, and G-Lab are restricted to a predefined testing environment (operating system). OFELIA provides augmented flexibility to network designers by allowing precise and dynamic network control and extension through the use of OpenFlow [10]; unfortunately, OFELIA lacks fine multiplexing of network resources among different virtual machines. OFELIA imposes that virtual machines act as servers to create traffic in an OpenFlow network, i.e., in OFELIA, virtual machines are edges of a network that is a subset of the physical OpenFlow switch network. FITS, on the other hand, creates virtual machines in the core of the network and thus enables the use of virtual machines as routing or forwarding elements in an arbitrary topology, thanks to the FITS installation, in which Open vSwitch runs on FITS physical servers. In FITS, running OpenFlow experiments is a choice, while in OFELIA it is mandatory. GENI (Global Environment for Network Innovations) is a virtual laboratory that aims at allowing at-scale experimentation of future network protocols [12]. GENI grants experimenters access to computing resources that can be interconnected using layer-2 links, allowing experimenters to run their own layer-3 protocols. GENI also allows the
installation of custom software and operating systems. JGN-X is an initiative that started as a gigabit network (Japan Gigabit Network – JGN) back in 1999. Over the years, JGN has evolved into the current JGN-X (JGN eXtreme) project by incorporating novel technologies such as optical links, multicast, and IPv6 [13]. RISE (Research Infrastructure for large-Scale network Experiments) is another project that aims at building a large-scale international OpenFlow testbed based on JGN-X links, interconnected to OFELIA in Europe and OS3E in the US [14]. Another noticeable testbed is FEDERICA [2], which federates nodes that access a private network based on gigabit Ethernet circuits from the GÉANT2 backbone. FITS, on the other hand, is an open experimental facility that privileges pluralism. FITS is agnostic to the operating system and applications running on virtual machines. FITS hosts OpenFlow experiments and shares the physical infrastructure using virtual machines connected through a virtual network. FITS also differs from other testbeds because, in addition to its security mechanism based on smartcard authentication, it implements strong isolation between virtual environments. Finally, FITS interconnects distributed testbed islands, composing an experimental network with nodes all over the world.

3. FITS: architectural principles and features

FITS provides a testbed infrastructure for network experimentation based on two different virtualization approaches, Xen [16] and OpenFlow [5]. Users can run their experiments in different network environments to compare the results or to choose the most suitable one for their new protocols and mechanisms. Currently, most Future Internet testbeds offer users virtual network slices for large-scale experiments as well as mechanisms for monitoring the network and collecting results. FITS improves this basic service set by offering advanced features such as separation of network control and data planes, zero-loss virtual network migration, strong isolation of virtual environments, freedom of choice between centralized and decentralized control, and security-oriented testbed operation. These features are detailed in Sections 5 and 6. For the time being, they are summarized in Fig. 2 and briefly introduced as follows:
- Separation of control and data planes: virtual network slices have their own set of control planes and share the data planes of the physical network elements. Consequently, users are free to fully customize the control plane of their slices and to configure them in two different modes, decentralized or centralized.
- Zero-loss virtual network migration: FITS provides a zero-loss migration facility; thus, users and administrators of the testbed can transparently move virtual nodes. Users are able to perform experiments with new mechanisms that require no packet loss during the movement of virtual network elements. In addition, they can migrate routers and flows on demand during the experiments with no impact on the results. For administrators, the challenge is to dynamically change the mapping of virtual nodes onto physical nodes without interfering with running experiments. This feature allows administrators to change on demand the resources allocated to a virtual network slice and also to schedule preventive maintenance of physical nodes.
- Isolation among virtual environments: with FITS, different virtual routers can be instantiated sharing the same physical router. The main resource to be isolated is packet forwarding, i.e., virtual routers should not interfere with each other during this task. In fact, Xen I/O operations are not isolated from each other and are an important vulnerability that can be exploited by an attacker [17]. FITS guarantees isolation between virtual networks by controlling the processing capacity, memory, and per-interface bandwidth used by each virtual router.
- Freedom of choice between centralized and decentralized control: FITS allows users to choose the operation mode of the control plane. In the centralized mode, we offer a NOX controller coordinating OpenFlow switches. In the decentralized mode, we offer a control plane running at each network element.
- Multiple points of measurement: unlike other testbeds, FITS makes monitoring data of physical nodes and links available to users, in addition to data of virtual network elements. Thus, FITS helps users allocate resources for their experiments.
- Security-oriented testbed operation: FITS enforces confidentiality, integrity, and availability of the virtual network infrastructure. FITS allows user authentication
Fig. 2. FITS features: flexible network programmability, improved forwarding performance, security, and manageability are the main characteristics of the testbed.
based on OpenID and low-cost smartcards and also defines different access levels. Only authorized users are allowed to create virtual networks and/or perform experiments.

In FITS, virtualization techniques play a significant role in providing all these features, since they create abstract virtual environments composed of a set of "sliced" resources, such as disk, memory, CPU, flows, or network. Each virtual environment is isolated and independent from the others – it gives the impression of running directly over the shared underlying substrate. The virtualization abstraction is accomplished by a software layer called the hypervisor, which provides virtual environments with interface abstractions quite similar to the underlying ones [1]. In this sense, network virtualization decouples the network function from its underlying physical infrastructure [18]. Decoupling the network function from its physical realization is also the idea behind Software-Defined Networking (SDN) [5]. The SDN approach, however, defines that the separation is implemented by a software application. The network element performs in hardware only a basic set of functions, such as packet forwarding. These functions are controlled and configured, in a centralized fashion, by a software application that does not run within the same network element as the hardware [5]. In this context, FITS allows users to run experiments in both network virtualization and SDN environments.

To provide these environments, FITS uses Xen and OpenFlow. Network virtualization is accomplished with Xen. We have chosen Xen because it is an open-source hypervisor and it has shown the best packet-forwarding performance in virtual environments compared with other hypervisors (VMware ESX and OpenVZ) [19]. A FITS slice may be a collection of virtual machines, each one running on top of a FITS physical node. The FITS physical node provides a virtual machine abstraction based on the Xen hypervisor. Xen provides a virtualization layer at the conventional hardware level, which offers high flexibility in the choice of what runs in the virtual machine, e.g., operating system and applications. Another advantage of using Xen is that it does not share the same kernel among all virtual machines, whereas other virtualization platforms, such as OpenVZ or Linux-VServer, virtualize at the kernel level. On the other hand, Xen requires more CPU and memory resources. Recent enhancements of Xen, such as memory ballooning [20] and on-demand CPU resource allocation [21], allow resizing virtual machine resources as the demand grows in physical servers, improving scalability [19].

In order to enable experimentation in SDN, FITS uses OpenFlow because it is open-source and can be considered a standard for SDN given its wide acceptance in the area [22]. Therefore, FITS provides OpenFlow slices to researchers. An OpenFlow slice is a FlowVisor² slice and, thus, FITS allows a researcher to run his own controller or set of OpenFlow applications in an isolated network slice.³

² FlowVisor operates as a proxy between OpenFlow switches and OpenFlow controllers.
³ We assume switches running OpenFlow version 1.0 in order to support FlowVisor.
The slice is based on a subset of the twelve-tuple that defines a flow. This subset defines the characteristics of the packets that should be handled by each controller. In addition, FITS applies isolation techniques other than FlowVisor to isolate the four main resources of an OpenFlow network: topology, bandwidth, memory, and processor usage. More details are provided in Section 5.3.
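To make the slicing rule concrete, the sketch below builds such a wildcard flowspace in Python. It is an illustration only: the field names follow the OpenFlow 1.0 match structure, but the dictionary layout, permission string, and slice name are hypothetical, not the actual FlowVisor configuration format used in FITS.

# Illustrative sketch (not actual FITS/FlowVisor code): a wildcard flowspace
# that matches only the VLAN ID among the twelve OpenFlow 1.0 match fields.

OF10_MATCH_FIELDS = (
    "in_port", "dl_src", "dl_dst", "dl_vlan", "dl_vlan_pcp", "dl_type",
    "nw_src", "nw_dst", "nw_proto", "nw_tos", "tp_src", "tp_dst",
)

def vlan_flowspace(vlan_id, slice_name, priority=100):
    """Hand all packets of one virtual network (one VLAN ID) to one slice."""
    match = {field: None for field in OF10_MATCH_FIELDS}  # None = wildcard
    match["dl_vlan"] = vlan_id                            # the only fixed field
    return {"priority": priority,
            "match": match,
            "permissions": {slice_name: "READ|WRITE|DELEGATE"}}

# A FITS-style slice is a set of unique VLAN IDs under one controller.
experiment_flowspaces = [vlan_flowspace(vid, "alice-slice") for vid in (42, 43)]

In FlowVisor terms, each entry hands every packet tagged with one of the slice's VLAN IDs to the experimenter's controller, while all other match fields remain wildcarded.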
4. FITS architecture

FITS creates an environment for experimentation, measurements, and performance evaluation geographically distributed among 17 institutions. In Brazil, 13 universities participate in FITS: UFRJ (two research groups), UFF, Unicamp, UFSC, UFES, UFRGS, UECE, UFSCar, UFPE, UFAM, UERJ, LNCC, and UFC. In Europe, there are currently four participating institutions: University of Lisbon (Portugal), UPMC Sorbonne Universités (France), École Normale Supérieure (France), and TUM (Germany). Each institution has its own set of nodes with local policies to run experiments. This local set of nodes is an island. FITS interconnects these islands via Virtual Private Network (VPN) connections and Generic Routing Encapsulation (GRE) tunnels to emulate layer-2 links over the Internet [23], as depicted in Fig. 3. The interconnection procedure is detailed later in this section.

FITS defines three kinds of nodes: manager, gateway, and operational nodes. The role of each node and the general FITS network infrastructure are illustrated in Fig. 3. Operational nodes effectively host the experiments. Gateway nodes interconnect two testbed islands. The manager coordinates the testbed, i.e., it authenticates users and nodes, allocates network and node resources, and collects measurements acquired by sensors running within network elements.

As explained previously, FITS allows users to run experiments on Xen or OpenFlow platforms. With Xen, users can instantiate different virtual routers and interconnect them to define a virtual network. FITS employs the plane separation technique, and thus virtual networks have their own set of control planes and share the data planes of the physical elements. In this sense, FITS provides a control plane running within each virtual router to enable decentralized control. With OpenFlow, users instantiate their own controllers, configure a set of OpenFlow switches, and control a flow slice. The topology of the OpenFlow network slice dedicated to a user is a subset of the physical FITS topology. Using Xen, both control and data planes are virtualized, allowing users to run experiments involving different protocol stacks in a decentralized fashion. Using OpenFlow, on the other hand, only the control plane is virtualized, allowing users to more easily change the actions taken by the data plane from a centralized controller. The choice between decentralized and centralized control can be made according to users' needs. Independently of the choice, however, there is always in FITS a manager node responsible for creating both virtual networks and flow slices, as shown in Fig. 4. The manager node provides communication between the operational nodes and the element that creates
Fig. 3. FITS autonomous island interconnection architecture. Gateway nodes interconnect islands, FITS operational nodes provide basic virtualization functions, and the manager node performs control and management functions.
Fig. 4. FITS architecture. Operational nodes run Xen and OpenFlow platforms and communicate with gateways. Gateway nodes interconnect the islands through encrypted tunnels. The manager node slices the Xen network or OpenFlow network.
network slices, FlowVisor or the Xen Virtual Network (VN) Server. FlowVisor shares the OpenFlow switches among different OpenFlow controllers. The Xen VN Server, on the other hand, slices the Xen network. The Xen VN Server is responsible for instantiating Xen virtual machines in operational nodes and interconnecting them in a virtual topology. The Xen VN Server also monitors virtual networks and collects data acquired by sensors. In order to monitor virtual machines, the Xen VN Server interacts with the Xen Local Management Server running within each operational node. Moreover, the manager also provides a Web interface, shown in Fig. 5, and the access control mechanism, with OpenID and VPN certificate authentication. The Web interface centralizes information and operations of all network nodes.

The manager node deploys the VPN server, which is responsible for securely interconnecting the gateway nodes. The gateway nodes, under the responsibility of the manager node, are responsible for interconnecting islands through the Internet. FITS uses VPN links and GRE to establish tunnels among gateways (as in Fig. 3) and between gateways and the manager. The manager runs a VPN server, which creates two independent VPNs: one for data and one for control. VPNs are used to create an abstraction in which all nodes act as if they were in a single IP subnet. FITS provides facilities to test new network-layer protocols by providing a common link layer. FITS employs GRE tunnels between pairs of islands to create a single Ethernet domain.
A GRE tunnel represents a virtual link, which may be created within a VPN, as indicated in Fig. 3. The GRE tunnel assures connectivity between nodes that are behind firewalls.
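As a rough illustration of this interconnection (not the actual FITS setup scripts), the sketch below brings up an Ethernet-over-GRE (gretap) tunnel inside the data VPN and plugs it into the local software switch; the interface name, bridge name, and VPN addresses are assumptions for the example.

# Sketch: emulate one island-to-island layer-2 link over the data VPN.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def add_island_link(name, local_ip, remote_ip, bridge):
    # "gretap" encapsulates Ethernet frames in GRE, yielding a virtual
    # layer-2 link between the two gateways.
    run(["ip", "link", "add", name, "type", "gretap",
         "local", local_ip, "remote", remote_ip])
    run(["ip", "link", "set", name, "up"])
    # Attaching the tunnel endpoint to the software switch makes the two
    # islands part of a single Ethernet domain.
    run(["ovs-vsctl", "add-port", bridge, name])

add_island_link("gre-lisbon", "10.8.0.2", "10.8.0.3", "br-fits")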
Fig. 5. FITS Web interface. The topology visualization interface shows online nodes and their virtual routers.
Fig. 6. FITS operational node module overview. The FITS operational node is the building block of the testbed architecture.
An operational node is divided into three modules: switching and routing, Xen, and OpenFlow (Fig. 6). The basic idea of the switching and routing module is to perform packet forwarding. When only OpenFlow is operating, the switching module instantiates an OpenFlow switch. When Xen virtual routers are operating, the switching and routing module can operate in two modes: with or without plane separation. The operation is based on Open vSwitch [24], a software switch compliant with the OpenFlow API. Open vSwitch is, therefore, controlled by a local FlowVisor instance. The local FlowVisor shares the Open vSwitch of an operational node between a local controller and the global guest controllers running over the global instance of FlowVisor. As the local controller, we use the NOX OpenFlow controller [25]. The virtual network interfaces of virtual machines and the physical network interfaces are connected by the software switch. The virtual machine interfaces are tagged with a VLAN (Virtual Local Area Network) identifier. This virtual network identifier defines the Ethernet domain that the network interface belongs to. The virtual Ethernet domain may be a virtual link or a virtual Ethernet network, i.e., a set of virtual links.

The switching and routing module allows running experiments in three scenarios. In the first scenario, all packets pass through the virtual machine during an experiment. Thus, when forwarding packets, the physical node receives the packet, determines the virtual machine the packet is addressed to, and delivers it. Then, the virtual machine processes the packet, determines its next hop, and sends it back to the physical node, which sends the packet to the next physical node. The second is the plane separation scenario (Section 5.1), in which the physical node knows a priori the next hop of the packets addressed to virtual machines. Thus, the physical node itself forwards the packets without passing them through the virtual machines. With plane separation, the forwarding function is divided into two planes: the control plane and the data plane. The control plane runs in the virtual machines and is responsible for determining the action to be taken for each packet. The data plane runs in the software switch and is responsible for forwarding packets according to the rules defined by the control plane. Finally, the third scenario is running an OpenFlow experiment. The user runs an OpenFlow controller over the global instance of FlowVisor and gets control of a slice
of the OpenFlow network. In this scenario, the local instance of FlowVisor delegates the control of a slice of the operational node's Open vSwitch to the global FlowVisor. In this case, as the operational node behaves as an OpenFlow switch, the Open vSwitch and the FlowVisor are the only enabled modules. The control plane of the OpenFlow slice runs centralized in a guest OpenFlow controller. Users testing a new protocol stack may choose one of the three scenarios depending on performance and flexibility constraints.

Both the OpenFlow and Xen modules have Control Interface sub-modules. The OpenFlow Control Interface is a set of applications running on top of the NOX controller [26]. The Xen Control Interface is a set of applications that collect data from physical and virtual machines and simplify management tasks such as the creation, deletion, and migration of virtual machines. The OpenFlow Control Interface controls and manages the OpenFlow components of the network and also enables the development of new OpenFlow applications for the FITS testbed. The Control Interface gives access to a set of tools to control and manage the network. Two measurement tools are worth mentioning: one collects statistics about the flows, and the other probes the network to obtain the physical network topology. The OpenFlow Control Interface also allows the creation, modification, and deletion of flow forwarding rules. Moreover, it provides an interface to migrate flows, which allows the user to change the physical path of a flow in the network. The OpenFlow-based flow migration is free of packet losses.

The Xen Control Interface manages and controls virtual networks based on the Xen platform. This interface is specific to managing and controlling virtual routers and
networks and, therefore, differs from existing Xen-based management systems, such as OpenNebula [27] or XCP (Xen Cloud Platform) [28]. The Xen Control Interface is composed of daemons for monitoring and acting on physical and virtual routers, respectively the Xen Local Management Server and the Xen Client (Fig. 4). The Xen VN Server exports control and management primitives, which are triggered by the FITS manager to provide the Web interface, and interacts with the daemons executed in each physical and virtual machine to implement the commands of the Xen VN Server and to collect testbed information. The daemons that collect data are detailed in Section 6.2.

The isolation between virtual networks in FITS is based on OpenFlow switching. FITS accesses the Ethernet, VLAN, and IP fields, through the twelve fields of the OpenFlow specification, in order to route packets directly in the data plane. The key idea for isolating virtual networks is to insert a VLAN tag into each packet that leaves a virtual machine and to remove this tag when the packet is delivered to the virtual machine. A VLAN tagger placed between the virtual network interfaces and the OpenFlow switch is responsible for this task. It tags/untags packets leaving/entering a VM with the VLAN ID corresponding to the virtual network. The VLAN tagger is configured at VM creation time with the identifier (ID) of the virtual network that each virtual network interface belongs to. The VLAN tagger is outside the virtual machines and therefore transparent to them.

As a VLAN tag defines a virtual network in FITS, defining a flowspace consists of defining a set of VLAN IDs that are controlled by a single OpenFlow controller. The VLAN ID is unique for each virtual network. Hence, a set of unique VLAN IDs composes a slice of the testbed. In the case of an OpenFlow slice, the flowspace is created as a wildcard flowspace, in which the only matching field is the VLAN ID. Each flowspace is dedicated to an experimenter. The experimenter has full control of his flowspace, running an OpenFlow controller in a Xen virtual machine. All configurations concerning the flowspace definition are performed on the Web Interface.
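The tag/untag behavior can be expressed as two OpenFlow 1.0 rules. The sketch below shows them using the Python POX controller library, chosen here for readability; FITS itself uses NOX, and the port numbers and VLAN ID are illustrative.

# Sketch of the per-VM VLAN tagger, in POX's OpenFlow 1.0 dialect.
import pox.openflow.libopenflow_01 as of

VM_PORT, PHYS_PORT, VNET_VLAN = 5, 1, 42  # assumed switch ports / network ID

def install_tagger_rules(connection):
    # Tag every packet leaving the VM with the VLAN ID of its virtual network.
    tag = of.ofp_flow_mod()
    tag.match.in_port = VM_PORT
    tag.actions.append(of.ofp_action_vlan_vid(vlan_vid=VNET_VLAN))
    tag.actions.append(of.ofp_action_output(port=PHYS_PORT))
    connection.send(tag)

    # Strip the tag from packets of this virtual network entering the VM,
    # keeping the tagging transparent to the virtual machine.
    untag = of.ofp_flow_mod()
    untag.match.in_port = PHYS_PORT
    untag.match.dl_vlan = VNET_VLAN
    untag.actions.append(of.ofp_action_strip_vlan())
    untag.actions.append(of.ofp_action_output(port=VM_PORT))
    connection.send(untag)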
On the other hand, when using a Xen slice of the testbed, there is no need to create a new flowspace. In this case, all flows are controlled by the default OpenFlow controller, which is able to isolate flows between virtual networks and to map virtual networks to their correct resource reservation queues. Although there are differences between a Xen slice and an OpenFlow slice of the testbed, both are created and configured through the Web Interface. In order to create a new testbed slice, an experimenter with administrator privileges should access the Web Interface and create the virtual network. As soon as all virtual nodes are created, the virtual network can be used by other users.

5. Providing operational features

In this section, we present the different features of the testbed that are related to the operation of the virtual networks.

5.1. Plane separation made practical

Network virtualization is achieved by virtualizing Network Elements (NEs), which are composed of a control and a data plane, as seen in Fig. 7(a). The control plane runs the software in charge of configuring the NE, such as routing algorithms. The data plane, on the other hand, contains the forwarding engine, which processes and forwards data packets. Building virtual networks can be understood as placing a virtualization layer at some level of the network architecture to permit the creation of multiple virtual network slices. Therefore, the network element architecture allows the coexistence of multiple virtual NEs over a single physical NE. There are two main approaches to virtualizing the network: virtualizing an entire network element, which contains both control and data planes, and separating the control and data planes.

5.1.1. Control and data planes in the virtual network element

In this approach, each virtual network slice has its own set of virtual NEs and, furthermore, each virtual NE has both control and data planes (Fig. 7(b)).
Fig. 7. Network architectures of control and data planes.
FITS implements this virtualization approach with Xen virtual machines (VMs). Each Xen privileged VM (Domain 0) runs an Open vSwitch to multiplex the network among the VMs. Thus, it configures a direct path from the physical Network Interface Card (NIC) to the VM.

5.1.2. Separation of control and data planes

Another network virtualization approach implemented in FITS separates the control plane from the data plane. In this approach, the virtual network slices share the data planes of the physical NEs, but they have their own set of control planes. As a result, the virtual network slice can be configured with the control plane in two different modes: decentralized or centralized.

Decentralized mode: in this mode, each virtual network slice has a set of virtual NEs that contain only the control plane. All virtual NEs then send their control plane information to their corresponding physical NE hosts. With such information, the physical NEs configure their data planes [29]. Fig. 7(c) shows this approach. This approach allows flexible configuration of the virtual NE, because each virtual NE can have its own software set, including operating system and protocol stack. In FITS, each Xen VM implements the control plane. A daemon that runs inside the VM sends the control plane information to an application running in Domain 0. Then, this application sends the information to the local OpenFlow NOX controller, which configures the virtual network flows of Open vSwitch as needed (a sketch of this route-export mechanism is given at the end of this subsection).

Centralized mode: in this mode, each virtual network slice has a control plane centralized in one node that configures all data planes of the nodes in the slice. The physical NEs send network information to the control planes of the virtual network slices, which use the information to decide how they should configure the data planes. In this mode, a special network element called the Network Controller groups the control planes of all virtual network slices. We show this mode in Fig. 8. Each virtual network slice is controlled by an OpenFlow controller, which runs in the Network Controller (Fig. 8). FlowVisor is then used to orchestrate the multiple OpenFlow controllers of the various network slices, providing isolation between them.
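The following is a minimal sketch of the decentralized mode's route-export daemon. The Domain 0 endpoint address, the JSON message format, and the use of iproute2's JSON output ("ip -j", available in recent iproute2 releases) are illustrative assumptions, not the actual FITS protocol.

# Sketch: a daemon inside the virtual NE pushes its routing table to Domain 0.
import json, socket, subprocess, time

DOM0_ADDR = ("192.168.122.1", 9090)  # assumed Domain 0 control endpoint

def read_routes():
    # Dump the VM's kernel routing table as JSON.
    out = subprocess.check_output(["ip", "-j", "route", "show"])
    return json.loads(out)

def export_routes_forever(period=5.0):
    while True:
        try:
            with socket.create_connection(DOM0_ADDR) as sock:
                while True:
                    msg = {"type": "routes", "routes": read_routes()}
                    sock.sendall((json.dumps(msg) + "\n").encode())
                    time.sleep(period)
        except OSError:
            # A broken connection is the signal used in Section 5.2: after a
            # migration, reconnect to the new host's Domain 0 and resend the
            # full table so the data plane can be rebuilt there.
            time.sleep(1.0)

if __name__ == "__main__":
    export_routes_forever()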
5.2. Seamless virtual network migration

The goal of virtual network migration is to move the virtual machines of a network slice, changing the underlying hardware while keeping the virtual topology. This operation is motivated by the possibility of minimizing the impact of virtual networks on the underlying substrate or even by the possibility of providing a better service to upper-layer applications. Virtual network migration can also be used for maintenance purposes. By migrating all virtual networks to other physical locations, one can stop a physical NE without disrupting the operation of the virtual networks running atop it.

Virtual network migration depends on the virtual network mode used, decentralized or centralized. In the decentralized mode, virtual network migration relies on a feature prevalent in most virtual machine platforms, which is the possibility to move a virtual machine from one physical host to another. It is possible to rearrange a whole virtual network by migrating virtual NEs. On the other hand, when virtual networking uses plane separation in the centralized mode, the migration procedure is only a matter of reconfiguring forwarding tables in all the physical NEs required by the new virtual network. The migration procedure is, however, more complex if conducted in a production network with ongoing traffic (live migration). The migration of a virtual NE implies the migration of the virtual links connected to it. Therefore, if a virtual NE changes its physical substrate, its links cannot simply be shut down, because this would compromise the network operation. The migration of a virtual NE also involves other practical aspects, such as network reconfiguration and the convergence of routing algorithms in virtual networks [29]. All of them can affect the network performance if not carefully taken into account.

Independently of the virtual network mode employed, the migration procedure requires prior knowledge of the physical network topology as well as of all the virtual network topologies and loads [30]. This is because this procedure needs to define the new physical substrate, which will be used by the virtual network about to be migrated. Controlling the migration procedure, however, raises a security issue depending on who is in charge of it. Since the procedure needs to manage all virtual networks, the communication between the controlling entity and the virtual or physical NEs must be secure. Otherwise, a malicious node could take control of the whole network and, as a consequence, could divert resources to its own network.

We implement both migration approaches in our testbed: the migration considering data and control planes on the same virtual NE, and the migration that considers data and control plane separation using the decentralized
Fig. 8. Network virtualization approach with multiple data planes, one for each physical network element, and a single (centralized) control plane per slice.
or centralized modes. The migration approaches are described next.

5.2.1. Migration with data and control planes in the same virtual NE

This approach considers that the virtual network slice is composed of a set of virtual NEs that have both control and data planes. Each virtual NE is a Xen virtual machine; thus, virtual network migration implies migrating virtual NEs from their respective host physical NE to another. The virtual NE migration uses Xen built-in live migration, which involves two main procedures: copying the virtual NE memory from one physical host NE to the other and reconfiguring the network connectivity. However, at some point, the virtual NE is suspended and afterwards resumed in the new physical host NE. During this period, the virtual NE is unavailable and all its packets are lost. Besides, the Xen built-in live migration procedure makes two assumptions that could affect the operation of virtual network migration: it considers that the migration occurs inside a local network and that the virtual NE disk is shared over the network. FITS overcomes the local network assumption by using Open vSwitch to connect the virtual NEs. Open vSwitch tags packets according to their virtual link and creates a path between the connected virtual NEs through that virtual link. When a virtual NE migrates, Open vSwitch reconfigures the path and maintains the virtual link connectivity. The testbed shares the virtual NE disks with a network file system manager.

5.2.2. Migration with control and data plane separation

Control and data plane separation can be implemented using decentralized or centralized control planes. Machine-based virtualization, such as Xen, can be used to implement a decentralized control plane, while network-based virtualization, such as OpenFlow, implements a centralized control plane. In Xen, the control plane and the data plane run in the virtual environment and in a shared area of the physical NE, respectively. In OpenFlow, by contrast, the network control is centralized in a special node, and the data plane runs on each network node and is shared by all concurrent virtual networks. Both modes are implemented in our testbed, as well as a hybrid approach, which aims at improving both modes.

Xen-based migration: using decentralized plane separation requires a two-step migration procedure for virtual NEs. The goal is to first migrate the control plane and, afterwards, migrate the data plane so as to avoid packet losses [29]. This is feasible because, in a first step, the data plane remains active on the previous physical NE while the control plane is migrated to the new one. Since the data plane is active, it keeps forwarding packets, avoiding losses. In the meantime, the control plane installed on the new physical NE starts running all the procedures needed to converge and, consequently, to offer exactly the same conditions to rebuild the data plane. During the control plane migration, the previous physical NE buffers the control packets and, after the migration, delivers them to the control plane. When the data plane is rebuilt in the new physical NE, the previous one can be removed. All the packets that were sent
through the data plane on the previous physical NE can now be forwarded by the new one. This avoids packet losses, which is an important feature for virtual NE migration. We implement decentralized plane separation using Xen virtual machines, similarly to the implementation of Wang et al. [29]. Nevertheless, instead of Xen, Wang et al. use OpenVZ, which is a virtualization platform that provides multiple virtual user spaces over the same operating system.

OpenFlow-based migration: OpenFlow natively implements plane separation [5]. The OpenFlow controller is aware of the whole network topology and runs applications to configure flows according to virtual network policies. As a consequence, flow migration is simpler than in Xen, since the controller can directly reconfigure all OpenFlow switches. The flow migration algorithm used with OpenFlow first creates new flow entries in each switch of the new path, except for the first common switch between the new and previous paths. When the controller modifies the entries in the common switch, redirecting the flows from the initial output port to the new output port, the new path starts its operation. Before terminating its execution, the migration algorithm removes the old flow entries in the switches of the original path (a sketch of this algorithm is given at the end of this subsection). The advantage of OpenFlow migration is that it avoids the need for a local network, as in Xen, because OpenFlow assumes a wide-area switched network with configurable forwarding tables. OpenFlow provides an easy infrastructure for reallocating network resources. It is, however, based on a centralized controller, which may not scale to large-area networks. Because both the Xen- and OpenFlow-based approaches have shortcomings, an alternative solution is a combination of the two, which is in fact what our testbed proposes.

Hybrid migration: the FITS implementation aims at mitigating the disadvantages of both approaches to virtual network migration [31]. Running Xen adds flexibility to FITS, because one can run any kind of protocol stack, and also avoids centralization, which may not scale. On the other hand, the simplicity of virtual network migration as provided by OpenFlow is also a key aspect that improves the performance of FITS. In FITS, a virtual link can be mapped onto one or more physical links. The routing function is performed by a flow table dynamically controlled by NOX, and the topology of the virtual network is decoupled from its physical realization [18]. As a consequence, migrating virtual network elements in a FITS virtual network consists of three steps: migration of the control plane, reconstruction of the data plane, and migration of the virtual links. The control plane is migrated between two physical network nodes through the live-migration mechanism of conventional Xen virtual machines [30]. Then, the reconstruction of the data plane is performed as follows. An application running in the virtual machine connects to the Domain 0 daemon and sends all routes that are known by the virtual machine. When the application running in the virtual machine detects a connection disruption with Domain 0 caused by a migration, the application reconnects, now to the new server on which the VE is hosted, and sends all information about
its routing and ARP tables. Upon receiving such information, Domain 0 reconfigures the NOX controller to execute the data plane according to the control plane of the migrated VE. Thus, all packets that arrive at the physical machine are handled according to the control information computed by the control plane that was previously migrated. Note that these packets are addressed to the migrated virtual element and that the physical machine is the one to which the VE was migrated. After the migration of the control plane and the reconstruction of the data plane, the links are migrated. Link migration occurs in the OpenFlow switches that are instantiated in the Domain 0 of physical servers and in other switches in the network. Link migration creates a switched path from all neighbors of the migrated VE in the virtual topology to the physical router that hosts the virtual element after the migration. To this end, the migrated virtual element sends an ARP reply packet with a special destination MAC address (AA:AA:AA:AA:AA:AA). In FITS, this address is reserved, and this packet has priority over other packets sent on the network. Special ARP reply packets are processed at the first switch they arrive at and are then dropped. As a consequence, the migrated virtual router announces where it is available from now on using this special packet. This procedure updates the location of a virtual element in the network, associating link migration with the dual behavior, router and switch, of FITS nodes. The dual behavior results in a migration primitive for virtual elements that causes no packet loss or interruption of packet-forwarding services.
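For concreteness, the flow-migration primitive used by the OpenFlow-based and hybrid procedures above can be sketched as follows. The helpers add_flow, modify_flow, and delete_flow are hypothetical and stand for the controller's flow-table operations; a path is a list of (switch, output_port) hops from source to destination, and both paths are assumed to share the same ingress switch.

def migrate_flow(match, old_path, new_path, add_flow, modify_flow, delete_flow):
    """Move one flow from old_path to new_path without dropping packets."""
    pivot_switch, new_out = new_path[0]   # first switch common to both paths
    assert pivot_switch == old_path[0][0]

    # 1. Pre-install entries along the new path, downstream of the pivot,
    #    so the path is fully set up before any packet uses it.
    for switch, out_port in new_path[1:]:
        add_flow(switch, match, out_port)

    # 2. Redirect traffic at the pivot in a single flow modification; from
    #    this instant, packets follow the new path and none are dropped.
    modify_flow(pivot_switch, match, new_out)

    # 3. Remove the now unused entries of the old path.
    for switch, _ in old_path[1:]:
        delete_flow(switch, match)

if __name__ == "__main__":
    trace = lambda op: (lambda *args: print(op, *args))
    migrate_flow({"dl_vlan": 42},
                 old_path=[("s1", 2), ("s2", 1), ("s4", 3)],
                 new_path=[("s1", 3), ("s3", 1), ("s4", 3)],
                 add_flow=trace("add"), modify_flow=trace("mod"),
                 delete_flow=trace("del"))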
5.3. Isolating virtual environments

When one considers sharing the same physical substrate to run multiple protocol experiments in parallel, isolation comes as the very basic requirement of an experimental testbed. It means that an experiment should neither interfere with other experiments nor with production traffic, if the testbed shares the physical infrastructure with the production network. Different challenges, which require different technical solutions, arise depending on the virtualization technique under consideration. The network virtualization model of FITS assumes that different virtual routers, instantiated as different virtual machines, share the same physical router. In this case, the most immediate resource that should be isolated is packet forwarding. Hence, one virtual router should not impact the packet forwarding performance of other routers, whether due to traffic peaks, malicious attacks, or faulty software.

More generally, the virtual network resources that must be protected to provide isolation can be categorized into different types. The physical infrastructure may be viewed as a set of physical nodes, switches or routers, and a set of links. As a consequence, the primitive resources a virtual network uses from the physical infrastructure are transmission capacity (or bandwidth), processing power (or CPU), and memory. Sherwood et al. [32] propose another set of virtual network resources based on the FlowVisor view of an OpenFlow network, with five resource types: bandwidth; device CPU; forwarding tables, which are equivalent to memory; traffic; and topology. Traffic, in the case of OpenFlow networks, is related to the OpenFlow flowspace, which is a resource that is sliced and given to each virtual network. Topology, on the other hand, is an abstraction of the set of nodes and links allocated to a virtual network. Using OpenFlow, isolating the traffic and topology resources is straightforward. Nevertheless, the remaining bandwidth, CPU, and memory resources present specific isolation challenges, as we explain next.

5.3.1. Challenges of providing isolation within FITS

The Xen-based virtual network model of FITS considers that virtual machines behave as routers. Therefore, sending and receiving packets are I/O operations, which require the use of the device drivers located at Domain 0. All network operations of Domain Us generate an overhead in both the memory and the CPU of Domain 0. The Xen hypervisor, however, does not efficiently isolate Domain 0 resource usage, which is a major vulnerability of Xen. Since data transfers between two Domain Us and between a Domain U and Domain 0 are CPU-demanding operations for Domain 0, a malicious action or a fault in a Domain U can easily exhaust the Domain 0 resources and thus compromise the performance of all the other domains. One of the goals of FITS is to prevent any operation performed on one virtual network from breaking the isolation between networks.

The packet forwarding performance in Xen is another challenge [1]. Since a Domain U cannot directly access the hardware, the throughput of virtual routers is lower than that achieved with non-virtualized systems. Plane separation, as discussed in Section 5.1, is a solution to this problem. With plane separation, Domain 0 is in charge of forwarding the packets of all virtual networks (as Domain 0 has direct access to the hardware), obtaining a throughput similar to non-virtualized systems, while each Domain U controls its own virtual network. For this purpose, a replica of the data plane generated inside each virtual machine must be maintained in Domain 0.

Another important challenge of providing isolation between virtual machines refers to memory. The memory utilization of one virtual machine should not interfere with another VM; otherwise, a memory-based denial-of-service attack could be mounted. In FITS, a virtual router uses the memory needed to load its operating system; that amount of memory is allocated to the virtual machine by Xen. In this case, memory can be statically assigned to VMs, guaranteeing isolation. Additionally, a virtual router may also consume memory inside Domain 0, because this is where routing/flow tables are stored with plane separation. By default, however, there is no mechanism to prevent one virtual router from consuming more memory than the others inside Domain 0. In FITS, we built a mechanism to explicitly control memory usage inside Domain 0, as explained in the next section.

5.3.2. Isolation mechanisms of FITS

FITS enforces isolation between different virtual networks by controlling three network resources, namely the processing power, memory, and per-interface bandwidth used by each virtual router.
The amount of processing power of each virtual router in FITS is limited by using Xen configuration parameters. The Xen Credit Scheduler controls the amount of CPU time given to each virtual CPU of the user domains. The Credit Scheduler has two configuration parameters, cap and weight. Cap sets an absolute limit on the CPU time given to a virtual CPU, whereas weight defines the proportional share given to each virtual CPU in case there is contention for the physical CPU resources. FITS uses the cap parameter of Xen to statically allocate a defined CPU share to each virtual router, isolating a virtual router from the others in terms of CPU resource consumption. Nevertheless, forwarding packets also consumes CPU time within the virtual driver domain, Domain 0. In that context, the default CPU scheduling parameters of Xen are unable to prevent packets originating from one virtual router from exhausting CPU resources in Domain 0 and impacting the performance of other virtual routers. There are two solutions to this problem. One is to add a controller, such as XTC (Xen Throughput Controller) [21], or to use direct I/O mechanisms [33]. The other is to adopt the plane separation approach with one forwarding queue defined for each virtual router; packets from/to a specific router can then be limited by changing the queue size.

A virtual router consumes memory in two different cases. The first corresponds to the memory allocated to a Xen virtual machine. This amount of memory can be controlled by using the Xen configuration mechanisms to limit the size of the memory allocated to a Domain U. The second case is the shared memory area inside Domain 0, for which Xen provides no memory limitation mechanism. The approach of FITS is then to control the amount of memory used by a virtual router inside Domain 0. In FITS, memory consumption is a consequence of storing a copy of the routing and ARP tables generated by the virtual routers, as well as the OpenFlow flow table. Thus, memory control acts on the maximum number of entries a virtual router can have within Domain 0. After the maximum number of allowed entries is reached, FITS inserts a default route into the Domain 0 routing table of the corresponding virtual machine. The default route defines that all packets that do not match any of the previous routes should be forwarded to the Domain U. The same mechanism is applied to the flow table entries. If a virtual machine inserts more flows than it is allowed, a default flow is inserted into the Domain 0 flow table, forwarding all new packets to the Domain U.

The bandwidth allowed to each virtual router is limited in FITS by associating each Xen virtual machine with a specific queue of the OpenFlow switch. OpenFlow allows the creation of isolated queues in each network interface. Each queue uses a Hierarchical Token Bucket (HTB) discipline, which defines a minimum available bandwidth for each queue together with a maximum bandwidth each queue may reach.
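The two static isolation knobs described above, the CPU cap and the per-queue bandwidth bounds, can be set as sketched below. The sketch assumes the xm toolstack of paper-era Xen and Open vSwitch's linux-htb QoS support; domain names, interfaces, and rates are illustrative, not the actual FITS configuration.

# Sketch: cap a virtual router's CPU share and bound its bandwidth queue.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

def cap_cpu(domain, cap_percent, weight=256):
    # The Credit Scheduler's cap is an absolute ceiling on CPU time;
    # weight is the proportional share under contention.
    run(["xm", "sched-credit", "-d", domain,
         "-c", str(cap_percent), "-w", str(weight)])

def attach_htb_queue(iface, queue_id, min_bps, max_bps):
    # Create a linux-htb QoS object on the interface with one queue; an
    # OpenFlow "enqueue" action can then pin a virtual router's traffic
    # to this queue, bounding its bandwidth.
    run(["ovs-vsctl", "set", "port", iface, "qos=@qos", "--",
         "--id=@qos", "create", "qos", "type=linux-htb",
         "queues:%d=@q" % queue_id, "--",
         "--id=@q", "create", "queue",
         "other-config:min-rate=%d" % min_bps,
         "other-config:max-rate=%d" % max_bps])

cap_cpu("vrouter-alice", 50)                         # at most 50% of one CPU
attach_htb_queue("eth1", 1, 10_000_000, 50_000_000)  # 10-50 Mb/s queue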
6. Providing management features

FITS offers functionalities that help users configure and perform their experiments. In this section, we describe these features, which are related to programmability, monitoring, and secure access.

6.1. Centralized or not?

FITS deploys a software-defined network in which researchers can run their own experiments by programming a centralized node, as shown in Fig. 9. As FITS is based on OpenFlow, experiments can be deployed as a new OpenFlow controller or as an OpenFlow controller application. Nevertheless, this approach imposes that all new applications, protocols, and architectures be centralized in a controller node. Therefore, a centralized experiment environment is one where the experiments are deployed as an application or a controller of an OpenFlow network. The experimental network control plane is centralized in an OpenFlow controller, and the physical network slicing is done by FlowVisor. The data plane, an OpenFlow switch, runs in the testbed nodes. Each user has access to a slice of the network defined by a subset of the OpenFlow twelve-field flow definition.

The centralized approach may compromise the network scalability, responsiveness, and reliability [34]. To avoid that, OpenFlow employs several techniques, such as rate control of incoming new-flow messages and of the number of requests sent by the controller to the switches [32]. There are also proposals arguing that controllers can be hierarchically connected to improve scalability and reliability [35,36]. An alternative is to employ a physically distributed control plane that is logically centralized [34].

On the other hand, the FITS architecture also allows users to run decentralized experiments. The main idea of a decentralized experiment is to provide a complete virtual
Fig. 9. The centralized experimentation approach. A single control plane controls a network slice for each experiment.
Fig. 10. The decentralized approach. Each testbed node has an isolated control plane, and each control plane controls only its own data plane.
machine environment to the user, in which new protocols can be run. Fig. 10 depicts an experimental scenario with a decentralized network control plane. Each virtual machine deploys a control and a data plane. It is worth mentioning that each control plane is independent and isolated from the others. One of the advantages of FITS is freedom of choice: a researcher aiming to test a new protocol or application can choose between using a centralized controller, as in OpenFlow, or programming each virtual machine in the network.
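To give an idea of what a centralized experiment looks like, the sketch below is a minimal controller application, written against the Python POX library for brevity (FITS offers NOX; the structure is analogous). It floods every unmatched packet, i.e., it turns the experimenter's slice into a simple hub.

# Minimal POX component: flood every packet within the experimenter's slice.
from pox.core import core
import pox.openflow.libopenflow_01 as of

def _handle_packet_in(event):
    # Install a short-lived flow entry that floods packets like this one.
    msg = of.ofp_flow_mod()
    msg.match = of.ofp_match.from_packet(event.parsed, event.port)
    msg.idle_timeout = 10
    msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    event.connection.send(msg)

def launch():
    # Called by POX when the component is loaded.
    core.openflow.addListenerByName("PacketIn", _handle_packet_in)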
6.2. Measurement tools

FITS includes tools that help users easily collect measurements of both physical and virtual network elements. Unlike in other testbeds [8,15,10,2], data of physical nodes and links are available to users and can thus be used to help network administrators or autonomic agents to allocate resources among the different virtual networks or to migrate virtual nodes, for example. By using a simple graphical interface, shown in Fig. 11, users are able to measure different variables of interest, such as available bandwidth, processor and memory usage, and link and end-to-end delay. Low-level and hardware-specific parameters, such as the number of virtual processors given to a virtual router and the priority of processor utilization, are also available to users.

We have developed specific measurement tools, referred to as sensors, because the tools available for Xen and OpenFlow, such as xentop and the OpenFlow statistics application, have limitations in monitoring virtual networks and their elements. These tools do not discover the network topology or measure the throughput, for example. With Xen, each FITS node has sensors running in the background within the VMs (Domain 0 and Domain Us). These sensors periodically acquire data and send them to the Xen Local Management Server, which aggregates the data received from all VMs and then sends them to the Xen VN Server. This node processes the received data and transmits them to the Web interface. With OpenFlow, the Global OpenFlow Controller periodically requests data from the switches. Afterwards, the switches send back the status of their flow counters to the controller. The received data are then processed and sent to the Web interface. Fig. 4 illustrates all the network elements mentioned before.

Currently, FITS has sensors to monitor the throughput and round-trip time of virtual and physical links, the physical and virtual topology, virtual router information (domain name, CPU and memory usage, number of virtual CPUs per router, etc.), and flow information (datapaths, forwarding tables, aggregated flows, etc.). Additionally, FITS allows users to easily add new sensors. Basically, users have to develop the sensing function and add the sensor to the polling list of one of the controllers. Data exchanged between sensors and the Xen Local Management Server or the Global OpenFlow Controller are encapsulated in XML (eXtensible Markup Language) messages that are defined per sensor. The XML structure of all messages is validated against the W3C XML Schema standard [37]. For that, users must define a schema for each type of message exchanged by a sensor.
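As an example of a user-defined sensor message, the sketch below wraps a throughput sample in XML using Python's standard library. The element names, attributes, and units are illustrative assumptions; in FITS, each message type must additionally be backed by a W3C XML Schema used for validation.

# Sketch: build one per-sensor XML message for a throughput sample.
import time
import xml.etree.ElementTree as ET

def throughput_message(link, mbps):
    """Wrap one throughput sample in a per-sensor XML message."""
    root = ET.Element("sensor", name="throughput")
    ET.SubElement(root, "timestamp").text = str(int(time.time()))
    ET.SubElement(root, "link").text = link
    ET.SubElement(root, "value", unit="Mb/s").text = "%.2f" % mbps
    return ET.tostring(root, encoding="utf-8")

print(throughput_message("gw-rio/gw-paris", 93.4).decode())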
6.3. Secure management access

Security is one of the main concerns of Future Internet research, and assuring the security of the experimentation environment is a central goal of FITS. FITS connects distributed islands of nodes over the Internet and deploys mechanisms to provide:

Confidentiality: ensuring that experimental information is not accessible to unauthorized users.
Integrity: ensuring that the delivered information is not modified by unauthorized users.
Availability: ensuring that a user can always access his testbed slice from wherever he is.

Authentication and access control are also provided by FITS. One of our main goals is to provide a security-oriented experimental facility that ensures experiment continuity, minimizes the probability of a successful attack, and guarantees isolation between experiments. The three security pillars of FITS are the secure isolation mechanism between virtual environments, the use of cryptographic tunnels connecting distributed nodes, and secure authentication using smartcards and OpenID [38].

FITS ensures the confidentiality of the experimental traffic by using encrypted tunnels, based on TLS (Transport Layer Security), to connect physical nodes. A physical server is also authenticated, since a certificate signed by the FITS Certification Authority (CA) is needed to join the FITS network. All control communication is encrypted with TLS as well: when a virtual machine exports its routes to Domain 0, a TLS connection is used, and each virtual machine has a valid certificate signed by the FITS CA. To assure the authentication of the testbed components and the confidentiality of information, we deployed a Public Key Infrastructure (PKI) that is responsible for signing and checking all certificates of physical and virtual machines. The TLS tunnels, hereafter also called VPNs, include cryptographic hashes of the forwarded data for integrity checking and, as a consequence, guarantee the integrity of the testbed traffic. The tunnels also authenticate the VPN server to improve robustness against man-in-the-middle attacks: if the certificate of the server is not valid, the VPN connection is not completed.

FITS guarantees the availability of its infrastructure based on the isolation between virtual environments and on the interconnection of testbed islands. Isolation of virtual environments is mandatory to avoid denial-of-service attacks, so virtual machine resources must be controlled. In Xen, this control focuses on Domain 0 because it represents the main bottleneck for I/O operations. If a virtual machine tries to consume more resources than granted, the controller punishes the violating VM and forces it to use only the agreed amount of resources. Therefore, the behavior of one virtual machine does not affect other virtual machines that share the host or the network [17].
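The certificate checks described above follow the standard TLS model, which can be sketched with Python's ssl module. The file names and the peer address below are placeholders, and FITS builds its tunnels with VPN software rather than raw sockets; the sketch only illustrates mutual certificate authentication against a private CA.

```python
# Sketch of mutually authenticated TLS against a private CA, as FITS uses
# conceptually for control traffic. Paths and addresses are placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("fits-ca.pem")               # trust only the FITS CA
context.load_cert_chain("node-cert.pem", "node-key.pem")   # prove our own identity
context.verify_mode = ssl.CERT_REQUIRED                    # reject unauthenticated peers

with socket.create_connection(("peer.island.example", 4433)) as sock:
    with context.wrap_socket(sock, server_hostname="peer.island.example") as tls:
        # e.g., a virtual machine exporting its routes to Domain 0
        tls.sendall(b"route-export ...")
```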
Fig. 11. Web-based graphical interface: users are able to collect statistics from both physical and virtual network elements.
Interconnection between testbed islands aims at distributing the testbed infrastructure among several locations. In FITS, all experimental networks hosted in one physical router can be migrated to another physical host in case of scheduled maintenance or power outage.

Authentication in FITS comprises component and user authentication procedures. FITS uses a PKI, and all components must have a signed certificate to connect to the testbed: all physical nodes need a certificate to connect to the VPN, and all virtual machines need a signed certificate to communicate with Domain 0 and, for instance, send routing information. Hence, to send data or control traffic to other FITS nodes, a node, whether physical or virtual, must have its own certificate signed by the FITS CA.

As already mentioned, FITS uses OpenID authentication [38,6]. OpenID allows using an existing account to sign into multiple services, and also supports identity federation, i.e., a user may sign into a service on one site using the identification provided by another site. When a user tries to access a Service Provider, e.g., a web site authorized to receive identification information from an OpenID server, the user is redirected to his Identity Provider, enters his credentials, and is then redirected back to the Service Provider site. The Identity Provider passes the user identification information, taken from the OpenID database, to the Service Provider. This operation is performed over a secure channel with asymmetric encryption.

FITS authenticates users through a web interface. Each FITS island has an OpenID Identity Provider that manages the identities of all users of the island as well as their privilege levels. OpenID stores the identification attributes of a user and the services or virtual networks that the user can access. FITS, however, introduces a new way of authenticating with OpenID: smartcard-based authentication, whose main goal is to avoid phishing and credential theft.
The smartcard stores the public key of its Identity Provider and thus authenticates the Identity Provider before sending the user's credentials. The main idea of the smartcard-based authentication is to use the smartcard as the only credential needed by the Identity Provider. Authentication in FITS uses a Java Applet, served by the OpenID Identity Provider, that accesses the user's smartcard reader. When a user authenticates through the web interface, the Web Interface site redirects the user to his Identity Provider; the smartcard authenticates the Identity Provider and then provides the user's credentials; finally, after authentication succeeds, the Java Applet redirects the user back to the correct web interface site. This mechanism requires users to have an identity smartcard and a smartcard reader to access the testbed. It is worth noting that, as the access control of FITS requires strong security to avoid experiment disruption or unauthorized usage of computing resources, all smartcard operations are preceded by unlocking the smartcard with a Personal Identification Number (PIN), which is equivalent to a password for each user and smartcard pair.

OpenID performs the identity federation in FITS and identifies the Identity Provider of a user by the user's URI. FITS relies on a list of trusted Identity Providers, any of which is authorized to log users in. The smartcard authentication, in contrast to the OpenID identity federation, occurs only between the Identity Provider and the smartcard: after being redirected to the user's Identity Provider, the smartcard negotiates the authentication with the Identity Provider alone and does not take part in the identity federation procedure. In addition, as authentication in FITS relies on the Identity Provider, if the Identity Provider is trustworthy but its authentication procedure does not use smartcards, users may authenticate in FITS with just username and password credentials. Nevertheless, Identity Providers with such weak-security authentication procedures can only authenticate unprivileged users, i.e., users who can access virtual resources but cannot create a new virtual network or network slice.
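The card-to-provider exchange is not specified at the message level in this paper, but its key idea, the smartcard verifying the Identity Provider with a stored public key before releasing any credential, matches a classic challenge-response pattern. The sketch below simulates both parties in one process using the pyca/cryptography library; the key generation, padding choice, and nonce size are our assumptions.

```python
# Challenge-response sketch: the smartcard (verifier) holds only the Identity
# Provider's public key and releases credentials only after a valid signature.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Simulate the Identity Provider's keypair; a real card stores public_key only.
idp_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
idp_public_key = idp_private_key.public_key()

# 1. The card issues a fresh nonce as a challenge.
nonce = os.urandom(32)

# 2. The Identity Provider signs the nonce with its private key.
signature = idp_private_key.sign(nonce, padding.PKCS1v15(), hashes.SHA256())

# 3. The card verifies before sending credentials; an invalid signature
#    raises cryptography.exceptions.InvalidSignature and aborts the login.
idp_public_key.verify(signature, nonce, padding.PKCS1v15(), hashes.SHA256())
print("Identity Provider authenticated; credentials may be released.")
```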
Fig. 12. Hourly throughput in Mb/s (a) and round-trip time in ms (b) measured between the UFRJ island and the UFF, UFRGS, and LIP6 islands, over a 24-h period on a weekday.
FITS provides security to the testbed infrastructure. Virtual networks, however, are the responsibility of the users running tests within them; FITS cannot guarantee the security of an experiment against other legitimate users of the same virtual network. Thus, the internal security of a virtual network is the responsibility of its users.

7. FITS island interconnection performance

FITS islands are interconnected through the Internet by VPN and GRE tunnels. VPN tunnels create direct secure connections through firewalls and NATs, while GRE tunnels emulate a link-layer connection for the slices. Control packets are always exchanged within a VPN tunnel; data packets are sent through a GRE tunnel if the peer islands have permissive firewalls, or through VPN + GRE if the islands have restrictive firewalls or NATs.

To analyze the real traffic conditions experienced by a testbed slice, we performed throughput and Round Trip Time (RTT) tests over a 24-h period on a weekday. We monitored the interconnection tunnels between the UFRJ island and three other islands: UFF, UFRGS, and LIP6. The UFRJ island is located in Rio de Janeiro, RJ, Brazil. The UFF island is in Niterói, a neighboring city of Rio de Janeiro. The UFRGS island is in Porto Alegre, RS, the main city of the southernmost state of Brazil. The LIP6 island is in Paris, France. During the tests, the UFRJ island gateway sent packets to the other island gateways. We performed 24 runs, one per hour, on the same day. During each run, we measured the throughput and RTT for the three islands considering the three tunneling modes: VPN alone, GRE alone, and VPN + GRE. We generated 30-s TCP flows with the Iperf tool to measure the throughput, and sent 30 ping packets at 1-s intervals to measure the RTT. We used short-lived flows to reduce the impact on the production traffic. Each run was repeated five times, and all measurements are presented with 95% confidence intervals, represented in the figures by vertical bars.
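One run of this methodology is simple to script. The sketch below issues the same Iperf and ping invocations described above; the peer host names are placeholders, and it assumes an Iperf server is already listening on each peer gateway.

```python
# Sketch of a single measurement run: a 30-s Iperf TCP flow for throughput and
# 30 pings at 1-s intervals for RTT. Peer host names are placeholders.
import subprocess

PEERS = ["uff.example", "ufrgs.example", "lip6.example"]

def measure(peer):
    # Throughput: 30-second TCP flow to an Iperf server on the peer gateway.
    iperf = subprocess.run(["iperf", "-c", peer, "-t", "30"],
                           capture_output=True, text=True, check=True)
    # RTT: 30 ICMP echoes, one per second.
    ping = subprocess.run(["ping", "-c", "30", "-i", "1", peer],
                          capture_output=True, text=True, check=True)
    return iperf.stdout, ping.stdout

for peer in PEERS:
    throughput_out, rtt_out = measure(peer)
    # The last line of each report carries the bandwidth / RTT summary.
    print(peer, throughput_out.splitlines()[-1], rtt_out.splitlines()[-1])
```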
Figs. 12(a) and (b) show the throughput and the RTT measured between the UFRJ island and the UFF, UFRGS, and LIP6 islands. As the results for all tunnels are quite similar, we only show the VPN + GRE results. As seen in Fig. 12(a), the throughput drastically decreases from 10 AM to 7 PM. As the tests ran on a weekday, the production traffic considerably impacted the throughput during business hours, reducing the throughput from UFRJ to UFF from approximately 160 Mb/s to 10 Mb/s, to UFRGS from approximately 20 Mb/s to 4 Mb/s, and to LIP6 from approximately 2.7 Mb/s to 1.6 Mb/s. The same behavior can be observed in Fig. 12(b): the RTT considerably increases during business hours. The RTTs from UFRJ to UFF, UFRGS, and LIP6 reach peaks of approximately 100 ms, 600 ms, and 700 ms, respectively, while outside business hours they are 5 ms, 40 ms, and 430 ms.

8. Conclusion

In this article, we have proposed the Future Internet Testbed with Security (FITS). We have presented its architecture and main features, and discussed our proposals to overcome the challenges faced during the testbed deployment. The main contribution of FITS is the implementation of an architecture that offers two approaches for network programmability, Xen and OpenFlow, and combines state-of-the-art with well-established testbed features. In this sense, FITS provides virtual machines/routers similar to PlanetLab and PanLab, as well as an experimental OpenFlow environment similar to OFELIA, in which users can perform and monitor their experiments and collect results. FITS combines these features with several innovative mechanisms to achieve: (i) flexible experiment design, (ii) efficient packet forwarding, (iii) zero-loss network migration, and (iv) a security-oriented architecture.

FITS allows researchers to design centralized or decentralized experiments. A packet forwarding experiment in FITS may use the plane separation paradigm, in which the forwarding function is performed by the physical node instead of the virtual router, while the control functions still run on virtual routers. The benefits of this technique are twofold. First, users are free to customize the control plane of their slice and to configure it in decentralized or
centralized mode. In the centralized mode, we offer a NOX controller coordinating the OpenFlow switches; in the decentralized mode, we offer a control plane running within each network element. Second, users can migrate routers and flows on demand during the experiments with no impact on the results, or perform experiments with new mechanisms that require no packet loss during the movement of virtual network elements. Finally, the security-oriented architecture ensures strong isolation between virtual networks and the confidentiality, integrity, and availability of the testbed infrastructure. Moreover, FITS uses smartcards to authenticate users with OpenID, so that only authorized users are able to create virtual networks and perform experiments.

Currently, FITS interconnects universities and institutions in Brazil and Europe. The software suite and the installation and configuration guides are available at the FITS website. The testbed infrastructure is under continuous development in order to keep introducing state-of-the-art features into FITS. We are now deploying a virtual network based on the Content-Centric Networking (CCN) architecture over the network infrastructure available on FITS.
Acknowledgments

The authors would like to thank FINEP, FUNTTEL, CNPq, CAPES, FAPERJ, and UOL for their financial support of this work. We would like to thank Marcelo D.D. Moreira, Natalia C. Fernandes, Callebe T. Gomes, Daniel J.S. Neto, Lucas H. Mauricio, Hugo E.T. Carvalho, Pedro S. Pisa, Rodrigo S. Couto, Victor T. da Costa, Alessandra Y. Portella, Filipe P.B.M. Barretto, Leonardo P. Cardoso, Rafael S. Alves, Tiago N. Ferreira, Victor P. da Costa, Carlo Fragni, Luciano V. dos Santos, and Renan A. Lage for their technical contribution to the testbed implementation. We also would like to express our special thanks to the professors and researchers of the universities that are connected to FITS and helped us deploy and test it: Prof. Edmundo Madeira and Prof. Nelson Fonseca, and researchers Carlos Senna, Gustavo Alkmim, Milton Soares, and Esteban Rodriguez from Unicamp; Prof. Luciano Gaspary and Prof. Marinho Barcellos and researcher Lucas Muller from UFRGS; Prof. Andre dos Santos and researchers Davi França, Frederico Freitas, Luiz Barbosa, and Edgar Tarton from UECE; Prof. Joni Fraga and researcher Vinicius Moll from UFSC; Prof. Magnos Martinello and researcher Sergio Charpinel from UFES; Prof. Cesar Marcondes from UFSCar; Luci Pirmez, Ph.D., and researcher Renato Souza from NCE/UFRJ; Prof. Djamel Sadok and researchers Thiago Rodrigues, Thiago Lima, Matheus Arrais, Lucas Inojosa, and Augusto Matos from UFPE; Prof. Eduardo Luzeiro Feitosa and researchers Kaio Rafael de Souza Barbosa and Hugo Assis Cunha from UFAM; Prof. Paulo Jorge Esteves Veríssimo, Prof. Marcelo Pasin, and researchers Oleksandr Malichevskyy and Diego Kreutz from University of Lisbon; and researcher Othmen Braham from UPMC.

References

[1] N.C. Fernandes, M.D.D. Moreira, I.M. Moraes, L.H.G. Ferraz, R.S. Couto, H.E.T. Carvalho, M.E.M. Campista, L.H.M.K. Costa, O.C.M.B. Duarte, Virtual networks: isolation, performance, and trends, Ann. Telecommun. 66 (5) (2011) 339–355.
[2] P. Szegedi, S. Figuerola, M. Campanella, V. Maglaris, C. Cervello-Pastor, With evolution for revolution: managing FEDERICA for future internet research, IEEE Commun. Mag. 47 (7) (2009) 34–39.
[3] N. Feamster, L. Gao, J. Rexford, How to lease the internet in your spare time, ACM Comput. Commun. Rev. 37 (1) (2007) 61–66.
[4] E. Keller, J. Rexford, The ''platform as a service'' model for networking, in: Internet Network Management Conference on Research on Enterprise Networking, 2010.
[5] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, J. Turner, OpenFlow: enabling innovation in campus networks, ACM Comput. Commun. Rev. 38 (2) (2008) 69–74.
[6] P. Urien, An OpenID provider based on SSL smart cards, in: IEEE Conference on Consumer Communications and Networking, Las Vegas, Nevada, USA, 2010, pp. 444–445.
[7] T. Rakotoarivelo, G. Jourjon, M. Ott, I. Seskar, OMF: a control and management framework for networking testbeds, Oper. Syst. Rev. 43 (4) (2009) 54.
[8] B. Chun, D. Culler, T. Roscoe, A. Bavier, L. Peterson, M. Wawrzoniak, M. Bowman, PlanetLab: an overlay testbed for broad-coverage services, ACM Comput. Commun. Rev. 33 (3) (2003) 3–12.
[9] A. Bavier, N. Feamster, M. Huang, L. Peterson, J. Rexford, In VINI veritas: realistic and controlled network experimentation, ACM Comput. Commun. Rev. 36 (4) (2006) 3–14.
[10] A. Köpsel, H. Woesner, OFELIA – pan-European test facility for OpenFlow experimentation, in: ServiceWave, 2011, pp. 311–312.
[11] D. Schwerdel, B. Reuther, P. Müller, T. Zinner, P. Tran-Gia, Future internet research and experimentation: the G-Lab approach, Comput. Netw. 61 (2014) 102–117.
[12] M. Berman, J.S. Chase, L. Landweber, A. Nakao, M. Ott, D. Raychaudhuri, R. Ricci, I. Seskar, GENI: a federated testbed for innovative network experiments, Comput. Netw. 61 (2014) 5–23.
[13] JGN-X (JGN eXtreme) Project, August 2013.
[14] RISE (Research Infrastructure for Large-Scale Network Experiments), August 2013.
[15] S. Wahle, B. Harjoc, K. Campowsky, T. Magedanz, Pan-European testbed and experimental facility federation – architecture refinement and implementation, Int. J. Commun. Netw. Distrib. Syst. 5 (1) (2010) 67–87.
[16] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, A. Warfield, Xen and the art of virtualization, ACM SIGOPS Oper. Syst. Rev. 37 (5) (2003) 164–177.
[17] N. Fernandes, O. Duarte, XNetMon: a network monitor for securing virtual networks, in: 2011 IEEE International Conference on Communications (ICC), IEEE, 2011, pp. 1–5.
[18] M. Casado, T. Koponen, R. Ramanathan, S. Shenker, Virtualizing the network forwarding plane, in: Workshop on Programmable Routers for Extensible Services of Tomorrow (PRESTO), 2010, pp. 8:1–8:6.
[19] D.M.F. Mattos, L.H.G. Ferraz, L.H.M.K. Costa, O.C.M.B. Duarte, Virtual network performance evaluation for future internet architectures, J. Emerg. Technol. Web Intell. 4 (4) (2012) 304–314.
[20] A. Menon, A.L. Cox, W. Zwaenepoel, Optimizing network virtualization in Xen, in: USENIX Annual Technical Conference, 2006, pp. 15–28.
[21] R. de S. Couto, M.E.M. Campista, L.H.M.K. Costa, XTC: a throughput control mechanism for Xen-based virtualized software routers, in: IEEE Globecom, 2011, pp. 2496–2501.
[22] K. Kirkpatrick, Software-defined networking, Commun. ACM 56 (9) (2013) 16–19.
[23] D. Farinacci, S. Hanks, D. Meyer, P. Traina, Generic Routing Encapsulation (GRE), Network Working Group, RFC 2784, March 2000.
[24] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, S. Shenker, Extending networking into the virtualization layer, in: ACM Workshop on Hot Topics in Networks, 2009.
[25] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, S. Shenker, NOX: towards an operating system for networks, ACM Comput. Commun. Rev. 38 (3) (2008) 105–110.
[26] D.M.F. Mattos, N.C. Fernandes, V.T. da Costa, L.P. Cardoso, M.E.M. Campista, L.H.M.K. Costa, O.C.M.B. Duarte, OMNI: OpenFlow management infrastructure, in: International Conference on the Network of the Future (NoF), 2011, pp. 52–56.
[27] R. Moreno-Vozmediano, R.S. Montero, I.M. Llorente, Elastic management of cluster-based services in the cloud, in: Proceedings of the 1st Workshop on Automated Control for Datacenters and Clouds (ACDC '09), ACM, Barcelona, Spain, 2009, pp. 19–24.
[28] Xen Cloud Platform, July 2012.
[29] Y. Wang, E. Keller, B. Biskeborn, J. van der Merwe, J. Rexford, Virtual routers on the move: live router migration as a network-management primitive, ACM Comput. Commun. Rev. 38 (4) (2008) 231–242.
[30] C. Clark, K. Fraser, S. Hand, J.G. Hansen, E. Jul, C. Limpach, I. Pratt, A. Warfield, Live migration of virtual machines, in: ACM/USENIX Symposium on Networked Systems Design and Implementation (NSDI), 2005, pp. 273–286.
[31] P. Pisa, N. Fernandes, H. Carvalho, M. Moreira, M. Campista, L. Costa, O. Duarte, OpenFlow and Xen-based virtual network migration, Commun.: Wireless Devel. Countries Netw. Future (2010) 170–181.
[32] R. Sherwood, G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, N. McKeown, G. Parulkar, Can the production network be the testbed?, in: USENIX Conference on Operating Systems Design and Implementation (OSDI), 2010, pp. 1–6.
[33] Intel, PCI-SIG SR-IOV Primer: An Introduction to SR-IOV Technology, Tech. Rep., January 2011.
[34] D. Levin, A. Wundsam, B. Heller, N. Handigol, A. Feldmann, Logically centralized?: state distribution trade-offs in software defined networks, in: Workshop on Hot Topics in Software Defined Networks, 2012, pp. 1–6.
[35] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama, S. Shenker, Onix: a distributed control platform for large-scale production networks, in: USENIX Conference on Operating Systems Design and Implementation (OSDI), vol. 10, 2010, pp. 1–6.
[36] S. Yeganeh, A. Tootoonchian, Y. Ganjali, On scalability of software-defined networking, IEEE Commun. Mag. 51 (2) (2013) 136–141.
[37] W3C XML Schema, August 2013.
[38] D. Recordon, D. Reed, OpenID 2.0: a platform for user-centric identity management, in: ACM Workshop on Digital Identity Management (DIM), 2006, pp. 11–16.
Igor M. Moraes is currently an Associate Professor in the Instituto de Computação (IC) at Universidade Federal Fluminense (UFF). He received the cum laude Electronic Engineer degree in 2003 and the M.Sc. and the D.Sc. degrees in electrical engineering from Universidade Federal do Rio de Janeiro (UFRJ) in 2006 and 2009, respectively. His major research interests are in architectures for the Future Internet, information-centric networks, peer-to-peer video streaming systems, wireless networks, and security.
Diogo M.F. Mattos is currently a M.Sc. candidate at Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil. He received the Computer Engineer degree from Universidade Federal do Rio de Janeiro, Rio de Janeiro, in 2011. His research interests include virtualization, software-defined networks, Future Internet and network security.
Lyno Henrique G. Ferraz is currently pursuing his Ph.D. degree in the Electrical Engineering Program at the Federal University of Rio de Janeiro (Rio de Janeiro, RJ, Brazil). He received his B.Sc. and M.Sc. degrees in Electronic Engineering from the Federal University of Rio de Janeiro (Rio de Janeiro, RJ, Brazil) in 2010 and 2011, respectively. His current research interests include network virtualization, big data, and cloud computing.
Miguel Elias M. Campista received his telecommunications engineering degree from Fluminense Federal University (UFF), Rio de Janeiro, Brazil, in 2003, and M.Sc. and D.Sc. degrees in electrical engineering from the Federal University of Rio de Janeiro (UFRJ) in 2005 and 2008, respectively. In 2009 he was an associate professor with the Telecommunications Engineering Department of UFF. Since 2010 he has been an associate professor with the Electronic and Computer Engineering Department of UFRJ. His major research interests are in multihop wireless networks, quality of service, wireless routing, and home networking.
Marcelo G. Rubinstein received his B.Sc. degree in electronics engineering, and M.Sc. and D.Sc. degrees in electrical engineering from Universidade Federal do Rio de Janeiro (UFRJ), Brazil, in 1994, 1996, and 2001, respectively. From January to September 2000 he was at the PRiSM Laboratory, University of Versailles, France. He is now an associate professor with Universidade do Estado do Rio de Janeiro (UERJ). His major interests are in wireless networks, home networking, medium access control, and quality of service.
Luís Henrique M.K. Costa received the Electronics Engineer degree and the M.Sc. in electrical engineering from the Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil, in 1997 and 1998, respectively, and the Dr. degree from the University Pierre et Marie Curie (Paris 6), Paris, France, in 2001. Luís spent 2002 as a Post-Doctoral Researcher with Laboratoire d'Informatique de Paris 6, Paris. Then, Luís was awarded a research grant from CAPES (an agency of the Brazilian Ministry of Education) and joined COPPE/UFRJ. Since August 2004, he has been an Associate Professor with UFRJ. His major research interests are in the area of routing, especially on wireless networks, group communication, quality of service, multicast, and large-scale routing.
Marcelo Dias de Amorim is a CNRS permanent researcher at the computer science laboratory (LIP6) of UPMC Sorbonne Universités, France. His research interests focus on the design and evaluation of dynamic networks as well as service-oriented architectures. For more information, visit http://www-npa.lip6.fr/amorim.
Pedro B. Velloso received the B.Sc. and M.Sc. degrees in Electrical Engineering from the Universidade Federal do Rio de Janeiro, Brazil, in 2001 and 2003, respectively. He received the Ph.D. degree from the Université Pierre et Marie Curie (Paris 6) in 2008. He spent one year as a post-doc researcher at Laboratoire d’Informatique de Paris 6 in 2008/2009. He has worked as a research engineer at Bell Labs France. He is now an associate professor at the computer science department of the Universidade Federal Fluminense (UFF), in Brazil. His interests are in distributed applications, wireless communications, and security.
Otto Carlos M.B. Duarte received the Electronics Engineer degree and the M.Sc. degree in electrical engineering from Universidade Federal do Rio de Janeiro, Brazil, in 1976 and 1981, respectively, and the Dr. Ing. degree from ENST/Paris, France, in 1985. Since 1978, he has been a Professor with UFRJ. His major research interests are in QoS guarantees, security and big data.
Guy Pujolle is currently a Professor at the Pierre et Marie Curie University (Paris 6), a member of the Institut Universitaire de France, and a member of the Scientific Advisory Board of Orange/France Telecom. He is the French representative at the Technical Committee on Networking at IFIP. He is an editor for International Journal of Network Management, WINET, Telecommunication Systems and Editor-In-Chief of the indexed Journal ‘‘Annals of Telecommunications’’. He was an editor for Computer Networks, Operations Research, Editor-In-Chief of Networking and Information Systems Journal, Ad Hoc Journal and several other journals. Guy Pujolle is a pioneer in high-speed networking having led the development of the first Gbps network to be tested in 1980. Guy Pujolle is co-founder of QoSMOS (www.qosmos.fr), Ucopia Communications (www.ucopia.com), GinkgoNetworks (www.ginkgo-networks.com), Virtuor (www.VirtuOR.com), and EtherTrust (www.ethertrust.com).