Accepted Manuscript

Software defined autonomic QoS model for future internet

Wendong Wang, Ye Tian, Xiangyang Gong, Qinglei Qi, Yannan Hu

PII: S0164-1212(15)00177-6
DOI: 10.1016/j.jss.2015.08.016
Reference: JSS 9564

To appear in: The Journal of Systems & Software

Received date: 2 March 2015
Revised date: 18 July 2015
Accepted date: 14 August 2015

Please cite this article as: Wendong Wang , Ye Tian , Xiangyang Gong , Qinglei Qi , Yannan Hu , Software defined autonomic QoS model for future internet, The Journal of Systems & Software (2015), doi: 10.1016/j.jss.2015.08.016

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


Highlights

 We design a hierarchical network model to achieve autonomic QoS management in SDN.
 QoS components can be combined dynamically with software-defined technology.
 We propose a Collaborative Borrowing based Packet-Marking (CBBPM) algorithm.
 CBBPM outperforms other packet-marking algorithms.


Software Defined Autonomic QoS Model For Future Internet

Wendong Wang*, Ye Tian, Xiangyang Gong, Qinglei Qi, Yannan Hu
State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China

Abstract—Software defined technology has gained enormous momentum in both industry and academia. It may change the existing hardware-centered information flow architecture by granting more privileges to deploy third-party software with more flexible network control and management capabilities. However, how to provide intelligent QoS guarantees in future networks is still an open research issue. In this paper, following a top-down design strategy, we propose a hierarchical autonomic QoS model that adopts software-defined technologies. In this model, the network is divided into a data-plane, a control-plane and a management-plane. The data-plane is then sliced into multiple virtual networks with dynamic resource allocation capabilities, in order to carry different services or applications. For each virtual network, the control-plane is abstracted as a resource control layer and a service control layer. The management-plane then constructs hierarchical autonomic control loops over the virtual networks bearing the different services, together with the corresponding hierarchical control logic. With the hierarchical autonomic control loops and layered service management functions, this model can provide dynamic QoS guarantees to the underlying data networks. We further propose a context-aware Collaborative Borrowing Based Packet-Marking (CBBPM) algorithm. As an important underlying QoS component in the data-plane, CBBPM can be combined with other QoS components to provide new QoS functions. Experimental comparisons between different QoS mechanisms demonstrate the effectiveness of our proposed hierarchical autonomic QoS model and packet marking mechanism.


{wdwang, yetian, xygong, qql, huyannan}@bupt.edu.cn


Index Terms—Software Defined Technology, OpenFlow, Autonomic, QoS, Packet-marking.

I. INTRODUCTION

Quality of Service (QoS) management is an important component of network management. However, limited by the "Best Effort" principle and the connectionless transport protocol, the existing IP network cannot satisfy users' different QoS demands. It has therefore become an urgent issue to enhance the QoS management mechanism in future networks. The traditional Internet adopts a loosely coupled architecture, which makes it difficult to realize effective control and management of the overall network resources. Based on an open-loop network architecture, QoS management components apply statistical analysis to different service flows, and then formulate a corresponding QoS strategy.

DiffServ (S. Blake et al., 1998) and IntServ (R. Braden et al., 1994) are two representative QoS models in traditional IP networks. DiffServ can be used to provide low-latency service to critical network traffic while providing simple best-effort service to non-critical traffic. Different from DiffServ, IntServ specifies a fine-grained QoS system, in contrast to DiffServ's coarse-grained control. IntServ allows critical traffic data to reach the receiver without interruption. However, every router in the network needs to implement IntServ, and each application requires an individual reservation. With a large number of service flows, IntServ therefore scales poorly.

Autonomic network technology solves the original inability to combine individual QoS management components into a closed loop that adjusts operating parameters in real-time. In particular, with context awareness, QoS management components can form a context-aware QoS control loop by selectively assembling components according to network and traffic states, and the control loop is capable of adjusting QoS operating parameters adaptively. Compared to the traditional IP network, an autonomic QoS control loop can, to some extent, allocate network resources more efficiently. However, sharing a common problem with traditional IP QoS management, QoS management components in autonomic network architectures cannot be integrated flexibly. This problem constrains our ability to formulate more reasonable QoS strategies.

Software-defined technology has become a focus of both academia and industry, announcing an era of "Software Defines Everything". Since software is capable of integrating the computing and storage functions of hardware facilities, software-defined technology is also related to the integration of infrastructures. Software Defined Networking (SDN) is an instance of software-defined technology applied to the Internet (N. McKeown et al., 2008). In terms of storage, SDN relies on a software layer to conduct complicated management of infrastructures; in terms of network management, SDN can realize centralized management of all network elements with a software controller, without configuring each device individually. In a traditional IP network, the control-plane and data-plane together provide the data forwarding and routing service. Once the forwarding path of certain traffic has been determined, the path information is delivered to the data-plane and consequently stored in hardware. The data-plane usually selects the latest path information to forward packets. This pattern is highly efficient for packet forwarding, but is fundamentally flawed in scalability. First, non-centralized management means each network device must be configured repeatedly. Second, since traditional network devices are not programmable, it is difficult to realize complex management tasks (such as QoS management) through programming. With an SDN controller's centralized administration, configuration strategies are delivered in a unified way. More importantly, this centralized administration is capable of developing new management components for specific requirements and further integrating these individual components to fulfill complicated tasks. As far as QoS management is concerned, software-defined technology makes it possible to construct a complete feedback control loop by selecting optimal QoS management components on demand.

While SDN, as an emerging technology, is attracting wide attention, there are still inadequacies in QoS management within the existing SDN framework. The deficiencies are embodied in two aspects. First, little current research is dedicated to a QoS management framework in the SDN specification. Second, the SDN framework lacks the necessary support for QoS management components. Though OpenFlow, one of the most representative instances of SDN, has been applied in many scenarios, it can so far provide only limited support for QoS management. Limited by the OpenFlow switch architecture, an OpenFlow controller can configure actions to process only specific protocols that can be recognized by the switch program, such as IP and TCP; an OpenFlow switch is unable to recognize and support new protocols at runtime. Moreover, the set of queue configurations supported by an OpenFlow switch is also quite limited. For the reasons given above, research on both the QoS management framework and the QoS management components in SDN is urgently needed.

To solve the problems mentioned above, we propose a software-defined autonomic QoS management model in this paper. By introducing an autonomic network framework, the proposed model can perceive and analyze different contexts, and then generate an optimal QoS management strategy. According to the optimal strategy, the model can select appropriate QoS management components to create a specific feedback control loop. In this paper, we also design a Collaborative Borrowing Based Packet-Marking (CBBPM) algorithm, which aims to improve the utilization rate of network resources. A packet marker component developed from the CBBPM algorithm complies well with our proposed QoS management model.

Standardisation and security are indeed important problems in the SDN framework. The ONF (Open Networking Foundation) is actively promoting the standardisation process of SDN (McKeown, 2012). Meanwhile, plenty of work has been devoted to resolving the security problems brought by centralized management and open programmability (Z. A. Qazi et al., 2013) (S. Shin et al., 2012).


However, in this paper we mainly focus on the QoS management model within SDN. In our view, since the QoS management model is a closed-loop structure, its deployment would not increase the security risks of SDN.

The rest of this paper is organized as follows. In Section II, the related work and background technologies are analyzed. Section III describes the overall objectives of the proposed model. In Section IV, we detail the architecture of the model, and Section V elaborates a configurable QoS management component within the software defined autonomic QoS model. In Section VI, the Collaborative Borrowing Based Packet-Marking (CBBPM) algorithm is proposed to further verify the effectiveness of the QoS model. Experiments are conducted in Section VII, and Section VIII concludes the paper.

II. RELATED WORK

The essence of the IP QoS problem is, on the one hand, to fulfill different services' QoS requirements with lower operation and management costs, and, on the other hand, to retain the advantage of connectionless transmission while allocating the reserved, limited bandwidth rationally. In order to provide the required network performance, certain QoS mechanisms need to be configured over the whole network infrastructure. Such mechanisms have been adopted to control and transmit service responses; for instance, one mechanism is used to prevent potential resource contention. With respect to traditional IP networks, ITU-T delivered recommendation Y.1291 (ITU-T, 2004) to specify the QoS management framework in packet switching networks. The core concept of this framework is to combine general management components such that a network service's response can be decomposed into service requests, which may be assigned to specified network elements. QoS management components are distributed over three logical planes: control-plane, management-plane and data-plane. Fig. 1 demonstrates the QoS management framework. Each plane contains multiple QoS management components, but these components cannot be adaptively integrated into a closed control loop. Therefore, it is difficult to automatically adjust QoS strategies in real-time according to network context.
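To make this componentization concrete, the sketch below chains two of the Figure 1 components behind a single interface, so that a processing pipeline can be assembled (and reassembled) from individual parts. It is a minimal Python illustration under assumed names (QoSComponent, pipeline and the DSCP threshold are ours, not part of Y.1291):

class QoSComponent:
    """Common interface behind which each Figure 1 component is wrapped."""
    def process(self, packet):
        return packet

class TrafficClassifier(QoSComponent):
    def process(self, packet):
        # DSCP 46 is the standard Expedited Forwarding code point
        packet["class"] = "EF" if packet.get("dscp", 0) >= 46 else "BE"
        return packet

class PacketMarker(QoSComponent):
    def process(self, packet):
        packet["color"] = "green" if packet["class"] == "EF" else "yellow"
        return packet

def pipeline(components, packet):
    for component in components:
        packet = component.process(packet)
    return packet

print(pipeline([TrafficClassifier(), PacketMarker()], {"dscp": 46}))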



[Figure 1 arranges the QoS components on three planes: control-plane (admission control, QoS routing, resource reservation, service level management, traffic restoration); data-plane (metering, queue management, traffic classification, queue scheduling, traffic shaping, congestion avoidance, traffic policing, packet marking); management-plane (policy).]

Figure 1. QoS Management Framework in Packet Switching Network

Autonomic network technologies have been gradually introduced into network management, especially QoS management, to solve the above problem. An autonomic network is a flexible architecture with dynamic and security-oriented natures. Through self-organization, it can provide various network operation strategies and QoS management mechanisms, which may be applied to part of the elements or to the whole network. An autonomic-network-based QoS management model can adapt to variations in the network environment and meet different users' demands automatically.

The MAPE model (IBM, 2001) is an abstraction of an autonomic system defined by IBM. Autonomic Manager and Managed Element/Resource entities are designed in this model. Entities for monitoring, analysis, planning and execution form a control loop in the Autonomic Manager. "Knowledge", being the center of this loop, is used to drive the control loop. "Knowledge" relates to all the pieces connected by the general "knowledge" space. For example, monitoring information supplied by the sensor part of the Managed Element/Resource is part of the "knowledge". It also includes how to analyze monitoring information, how to plan the actions to be executed, and how and when to execute the actions on the Managed Element. "Knowledge" also includes information in the cognitive aspects of the Autonomic Manager.

The Generic Autonomic Network Architecture (GANA) (EFIPSANS, 2009) is another abstract model, proposed by the EU FP7 EFIPSANS project. The GANA control loop is derived from the IBM MAPE model specifically for autonomic network management. This model illustrates the possibly distributed nature of the information suppliers that supply a Decision Element (DE) with the information (knowledge) used to manage the associated Managed Element(s)/Resource(s). In this model, a Decision Element is adopted as the autonomic management element, similar to the role of an Autonomic Manager in the MAPE model. A Managed Entity (ME) is adopted to represent a Managed Resource or a Managed Automated Task instead of a Managed Element. This avoids the confusion that arises when one begins to think of an element as only a physical network element. It also illustrates the fact that behaviors or actions taken by the DE do not necessarily trigger behaviors or the enforcement of a policy on the Managed Resource(s), but may concern communication between the DE and other entities, e.g. other DEs in the system or network. For example, the DE may need to self-describe and self-advertise to other DEs in order to be discovered, or to discover other DEs to communicate with or obtain "views", i.e., knowledge known by the DE.

In general, self-manageability in GANA is achieved by instrumenting network elements with autonomic DEs that work together collaboratively. GANA defines a hierarchy of DEs, i.e. four basic levels of self-management: protocol, function, node and network level. Each DE manages one or more lower-level DEs through a control loop; these lower-level DEs are therefore considered Managed Entities (MEs). Over the control loop, the DE sends commands, objectives and policies to an ME and receives feedback in the form of monitoring information or other types of knowledge. Each DE realizes some specific control loop(s) and therefore represents an "Autonomic Activity" or "Autonomic Function" (AF). Examples of Autonomic Functions are Autonomic QoS Management-DE, Autonomic Security Management-DE, Autonomic Fault Management-DE, Autonomic Mobility Management-DE, etc.
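Both the MAPE loop and a GANA DE control loop share the same monitor-analyze-plan-execute shape. The following minimal, self-contained Python sketch illustrates that shape; the managed element, its metric and the threshold rule are illustrative stand-ins, not part of either specification:

class ManagedElement:
    """Toy ME exposing a sensor (read_sensors) and an effector (apply)."""
    def __init__(self):
        self.queue_len = 120
    def read_sensors(self):
        return {"queue_len": self.queue_len}
    def apply(self, action):
        if action == "shrink_queue":
            self.queue_len //= 2

class AutonomicManager:
    """Monitor -> Analyze -> Plan -> Execute, driven by shared knowledge."""
    def __init__(self, me, knowledge):
        self.me, self.knowledge = me, knowledge
    def run_once(self):
        obs = self.me.read_sensors()                              # Monitor
        symptoms = [k for k, v in obs.items()                     # Analyze
                    if v > self.knowledge["limits"][k]]
        actions = [self.knowledge["plans"][s] for s in symptoms]  # Plan
        for a in actions:                                         # Execute
            self.me.apply(a)

knowledge = {"limits": {"queue_len": 100}, "plans": {"queue_len": "shrink_queue"}}
AutonomicManager(ManagedElement(), knowledge).run_once()  # queue_len: 120 -> 60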


Recently, several approaches have been proposed for QoS management in SDN environments; most of these solutions are not implemented by extending the existing OpenFlow protocol. (Civanlar S et al., 2010) proposed a QoS routing algorithm to distinguish traffic flows with specified QoS requirements from normal traffic flows, such that flows with high priority are forwarded preferentially. As follow-up work, (Egilmez H E et al., 2012) opens its APIs to users for interfering in the route planning procedure. Different from the two approaches stated above, (Bari M F et al., 2013) introduced a management layer that utilizes the northbound interface of the SDN controller to validate and enforce QoS-related policy, thus affecting the route planning result. These approaches all aim to provide QoS at the route selection level; however, no fine-grained QoS strategies (packet classifying, packet marking, queue managing and queue scheduling) are involved in these works. (Kim et al., 2010) proposed an autonomic and scalable QoS architecture, which introduced rate-limiter and queue mapping APIs in OpenFlow. The rate-limiter APIs provide functionality to map one or more flows to a specific rate limiter, while the queue mapping APIs can map a flow to a specific priority queue. Based on OpenFlow version 1.0, (Ishimori et al., 2013) enhanced QoS management capability by introducing hierarchical-token-bucket-based packet shaping as well as stochastic-fairness-queuing-based packet scheduling. Another work we proposed before (Tong Zhou et al., 2013) designed a PindSwitch model to provide output queue control for queue configuration and queue scheduling; that paper mainly focused on the co-existence of multiple protocols in an SDN switch. (Wendong Wang et al., 2014) also proposed a QoS management architecture. In that previous paper we took a first stab at introducing autonomic technologies into software-defined networking; however, it did not go further to touch the key issue of how to develop flexible QoS management components that comply with the autonomic QoS management model. Compared to our previous works, in this paper we further study the hierarchical structure of the autonomic QoS management model based on a top-down methodology. We also propose a novel packet-marking algorithm, Collaborative Borrowing Based Packet-Marking (CBBPM), based on which a new QoS management component can be constructed and deployed to SDN nodes. Experiments reveal the effectiveness of our proposed model and the affiliated management components.



III. DESIGN OBJECTIVE

In order to enhance the flexibility of network management, especially QoS management within a complex network environment, we design a software-defined autonomic QoS management model for the future Internet in this paper. The following points are its key objectives.

(1) Componentized architecture in a software-defined pattern
The functions provided by traditional networks are inadequate for managing network services. In order to provide a dynamic, manageable, cost-effective and adaptable architecture, the network control and forwarding functions should be decoupled, such that network control is programmable and the underlying infrastructure is abstracted from applications and network services. Based on this decoupling, basic and advanced network functions can be composed automatically, and both component management and process management can be software defined.

(2) Autonomic network management
The management complexity of the Internet caused by its rapid growth has become a major problem that limits its future usability. The ultimate objective of introducing autonomicity is to decrease the complexity of network control and management. By introducing an autonomic control loop, a series of self-* attributes, such as self-awareness, self-configuration, self-protection, self-healing, self-optimization and self-organization, are implemented to realize autonomicity and minimize the burden of manual operations.

(3) Context-aware management
Context-awareness is important for QoS policy formulation. Within this model, network context, service context and packet context are synthesized to seek the optimal QoS strategy, such that network resources can be allocated effectively among multiple data flows. In this case, key packets of flows with high priority can be prioritized for delivery. Specifically, highly efficient packet-marking solutions that conform to the specifications of the OpenFlow framework are explored.

(4) Programmability
To build a new network with diverse features, the new network should be designed like software. This network architecture is programmable, and it decouples control from data forwarding. The network can also be better controlled and operated more efficiently by using software.

(5) Supporting multi-heterogeneous service delivery
The network architecture should satisfy different service requirements, such as constant bit-rate real-time voice service, variable bit-rate real-time broadcast video service and variable bit-rate non-real-time Internet service. According to the properties of the services, the architecture should provide different QoS, data transfer modes, communication protocols and service programming APIs to different services. Each service, without affecting other services, has its own service control logic and independent network architecture.



IV. ARCHITECTURE DESIGN

Based on the objectives elaborated in the previous section, the design principles are summarized as follows:

(1) Programmability
Networks implemented according to the designed architecture are more efficient in network resource utilization, with open, standard programmability. Network control is directly programmable because it is decoupled from the forwarding functions.

(2) Autonomicity
To let the network support service integration in a heterogeneous network environment, we introduce autonomic technologies in the management-plane to improve the self-managing capacity of both the network and its services.

(3) Virtualization
By decoupling the logic control from the underlying physical infrastructure, multiple virtual networks can co-exist without interfering with one another. For each virtual network, a corresponding management-plane is implemented, so that the network traffic can be flexibly controlled.

Here, we describe the proposed network architecture in further detail.

(1) Hierarchical Network Architecture
Owing to their open nature, the physical network devices that constitute the physical network are all programmable and intelligent. It is easy to access and control these devices and even deploy novel network protocols to fulfill specific requirements. Since the control-plane and data-plane are separated, users can build their own virtual networks and define personalized network management strategies. Moreover, the decoupling of control-plane and data-plane also gives network operators a convenient way to provide end-to-end controllable services to customers. In order to enforce more flexible QoS management for services with different traffic characteristics, we define the hierarchical network architecture illustrated in Fig. 2. Generally, the physical infrastructure is virtualized as different virtual networks for the convenience of deploying different services within a certain virtual network.

[Figure 2 shows the physical network at the data-plane sliced into virtual networks 1 to n; each virtual network has its own network resource control layer and service control layer in the control-plane, together with a per-network management-plane (management-planes 0 to n).]


Figure 2. Hierarchical Network Architecture

Based on these considerations, we designed a hierarchical architecture composed of the following layers and planes:

Physical Network Layer: This is the data-plane comprising the physical network infrastructures, which achieve the basic connectivity of the networks. The physical network layer is the bearer network of all kinds of services and applications. It also provides a secure channel to support control information communication between the management-plane and the data-plane.

Virtual Networks Layer: Physical resources, such as switches and servers at the data-plane, are first virtualized, and these virtual resources then constitute different kinds of virtual networks to support specified services and applications. These virtual networks belong to the data-plane, but are virtualized in different network topology layers for different services and applications. A virtual network infrastructure is realized by gathering appropriate virtual resources from the data-plane. A new control-plane and management-plane is created to control and manage those virtual resources. These virtual networks can support different network architectures and different services, and even the relevant control and management functions can be customized according to the specific application/service carried by each virtual network.

Network Resources Control Layer: Using a universal description of the virtual resources, each virtual network provides a universal virtual resource abstraction to the virtual network resources control layer. The control logic in this layer is responsible for detecting, maintaining and allocating the virtual resources, as well as granting the virtual network resource control logic programmatic access to the virtual network. This control logic, covering path creation, data forwarding rules, traffic engineering, etc., determines the designed virtual network behaviors. Note that, since each virtual network typically carries different services, the control logic of different virtual network resources control layers can differ. In order to support virtual network resource control, this layer is further divided into two sub-layers: the Basic Switch Function sub-layer and the Basic Network Function sub-layer.

Generally, the Basic Switch Function sub-layer is composed of a protocol description management module, a queue resource management module, an action management module and a flow-table management module. Its core functions are mainly reflected in: (1) data link layer protocol analysis, based on which a switch is able to transmit and receive data encapsulated with different data link layer specifications; (2) protocol parsing, based on which a switch conducts field extraction and packet analysis for any protocol format; (3) flow-table and action management; (4) queue resource management, through which queue-related operations, such as queue management and queue scheduling, are implemented.

The Basic Network Function sub-layer is responsible for global network management. Generally, this sub-layer is composed of a network monitoring and management module, a topology discovery and management module, a routing management module and a resource management module. The functions performed by this sub-layer are independent of the underlying physical infrastructure. From the perspective of the GANA reference model, the Basic Switch Function sub-layer can be regarded as a managed entity of its upper layer.

Service Control Layer: When specific services or applications are deployed on a virtual network, dedicated service control logic is constructed automatically to control and manage the service deployment according to its specific requirements.

Management-Plane: There are two types of management-plane: the Generic Network Management (GNM) plane and the Virtual Network Management (VNM) sub-plane. GNM is a generic autonomic manager (or Decision Element (DE)) which manages all VNMs. Each VNM can be considered a Managed Entity (ME) in the GNM plane; those VNM MEs together with the GNM DE constitute an autonomic system on the GNM plane. GNM is responsible for managing all VNMs.


It also performs the functions of physical resource configuration and physical network organization. Working with all VNMs, GNM can implement several self-* attributes, such as self-configuration, self-organization and self-optimization. A VNM is responsible for managing virtual networks, virtual resources and the associated services or applications. The VNM is defined as an autonomic manager (or DE) for virtual networks; the Virtual Networks layer, the Virtual Network Resources Control layer and the Service layer can be considered its MEs.

(2) Key Solutions
To realize the proposed architecture, the following mechanisms are indispensable. First, context awareness should be introduced to dynamically allocate virtual resources among different virtual networks. On this basis, virtual networks are partitioned by rationally combining virtual resources. Since virtual networks are isolated from each other, an autonomic control loop is deployed in each virtual network to realize effective management. The following sections describe these key mechanisms in detail.

A. Context-aware Based Virtual Resource Redistribution
Since multiple virtual networks share the same network resources (such as network bandwidth, CPU, queues, etc.), a sound resource allocation mechanism is necessary to ensure overall equity and effectiveness. Traditionally, each virtual network in SDN is allocated a static, fixed share of network resources. However, this approach is inefficient in actual deployments. For example, a service may experience high end-to-end delay in one virtual network while other virtual networks are still lightly loaded; the direct reason is that the bandwidth of each substrate link is pinned by resource isolation. Therefore, to address this problem, the Generic Network Management plane dynamically allocates network resources between the various virtual networks via context-based reasoning (the detailed procedure is elaborated in Section VI). In general, the overall resources of the underlying physical network infrastructure are sliced according to the requirements of each service or application.
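As a concrete illustration of such context-based reallocation, the Python sketch below shifts spare bandwidth from lightly loaded slices to overloaded ones. The thresholds, slice data and function name are assumptions made for the sketch, not part of the model's specification:

def redistribute(slices, hi=0.9, lo=0.5):
    """slices: {name: {"alloc": Mbps, "used": Mbps}}; returns new allocations."""
    alloc = {n: s["alloc"] for n, s in slices.items()}
    donors = [n for n, s in slices.items() if s["used"] < lo * s["alloc"]]
    needy = [n for n, s in slices.items() if s["used"] > hi * s["alloc"]]
    for n in needy:
        for d in donors:
            # lend only what keeps the donor's utilization below lo
            spare = max(0.0, alloc[d] - slices[d]["used"] / lo)
            grant = spare / len(needy)
            alloc[d] -= grant
            alloc[n] += grant
    return alloc

print(redistribute({"vn1": {"alloc": 100, "used": 98},
                    "vn2": {"alloc": 100, "used": 20}}))
# vn2 lends 60 Mbps of spare allocation to the overloaded vn1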



Figure 3. Context-aware Based Virtual Network Redistribution

As illustrated in Fig. 3, an autonomic control loop comprising four connected components (Monitor Entity, Analyze Entity, Plan Entity and Execute Entity) is implemented on the GNM plane. The monitor entity perceives the environment of the physical and virtual networks (e.g., the state of the networks and the available resources) as well as the resources demanded by the services running on the virtual networks, in real-time. The analyze entity then applies comprehensive analysis according to the remaining network resources, the service requirements in the different virtual networks and the network's overall objectives. With an appropriate inference model, the plan entity generates policies specifying how to adjust the network resources. Finally, the execute entity carries out a series of operations to complete the resource redistribution among the virtual networks. Within the framework of the autonomic control loop in GNM, virtual resource management is achieved by the network itself through autonomic attributes, such as self-configuration, self-organization and self-optimization of the inter-virtual-network resources, which maximize the utility of the network resources.

B. Network Virtualization
Network virtualization is the key approach to implementing sliced virtual networks on a common underlying physical network. Two key mechanisms are involved in this task: network resource isolation and virtual network management. For the former, we propose a mechanism based on extensible packet matching and pipeline processing, applied on the data-plane. For the latter, we design a virtual network management component (VNMC), located on the management-plane.

According to the OpenFlow version 1.3 specification, the extensible packet matching and pipeline features can be used to provide a more flexible network virtualization mechanism. In order to support multiple forwarding paradigms, all flow-tables should be designed to support the extensible packet matching mechanism. The OpenFlow pipeline is also divided into multiple sub-pipelines, whose number equals the number of virtual networks designed for the multiple forwarding paradigms.


Figure 4. Network Virtualization based on extensible packet matching and pipeline processing

Pipeline processing always starts at the first flow-table: every packet is first matched against the entries of flow-table 0, and depending on the matching results there, other flow-tables may then be used. Thus, in our approach, flow-table 0 is used as a de-multiplexer, which dispatches packets and flows to the different sub-pipelines. Figure 4 illustrates a specific case of the proposed mechanism, where the switch carries flows of three virtual networks. As shown in Figure 4, different kinds of forwarding approaches can be used in different virtual networks. For example, the traditional IP-based packet routing and forwarding mechanism is used in virtual network 1, while virtual networks 2 and 3 support label-based switching/routing and name-based routing/forwarding, respectively.
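The sketch below illustrates this de-multiplexing, with flow entries written as plain Python dictionaries rather than calls to any particular controller API; the EtherType chosen for name-based forwarding is an assumption:

TABLE0 = [
    # match fields select the sub-pipeline of each virtual network
    {"match": {"eth_type": 0x0800}, "instruction": ("goto_table", 1)},  # VN1: IP routing
    {"match": {"eth_type": 0x8847}, "instruction": ("goto_table", 2)},  # VN2: label switching (MPLS)
    {"match": {"eth_type": 0x88B5}, "instruction": ("goto_table", 3)},  # VN3: name-based forwarding (assumed EtherType)
]

def dispatch(packet):
    """Return the first table of the sub-pipeline that should process the packet."""
    for entry in TABLE0:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["instruction"][1]
    return None  # table-miss: no virtual network claims this packet

assert dispatch({"eth_type": 0x0800, "ipv4_dst": "10.0.0.1"}) == 1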

Figure 5. Virtual Network Management Component

Regarding the management-plane, we designed a virtual network management component (VNMC) to perform network resource allocation and other virtual-network management tasks. As illustrated in Figure 5, the VNMC works above the virtual network operating system (the NOX1.1 Core Platform). Its main objectives include: (1) establishment, modification and destruction of virtual networks; (2) network resource perception and network resource allocation. These two tasks correspond to the key functions of the management-plane described in the previous section. Above the VNMC are the various virtual slices. Each virtual slice corresponds to a specific virtual network and is composed of multiple control modules, which function as decision elements of the virtual network and of the services running on it.
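A minimal Python sketch of such a component's interface is given below; the method and field names are ours, chosen to mirror the two objectives above, and are not the component's actual API:

class VNMC:
    """Virtual Network Management Component on the management-plane."""
    def __init__(self):
        self.slices = {}  # slice_id -> {"switches": [...], "share": fraction, ...}

    # (1) establishment, modification and destruction of virtual networks
    def create_slice(self, slice_id, switches, share):
        self.slices[slice_id] = {"switches": switches, "share": share}
    def destroy_slice(self, slice_id):
        self.slices.pop(slice_id, None)

    # (2) network resource perception and allocation
    def perceive(self, slice_id, stats):
        self.slices[slice_id]["stats"] = stats
    def reallocate(self, slice_id, new_share):
        self.slices[slice_id]["share"] = new_share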

C. Autonomic Network Structure
In order to improve the capacity for adaptive management of both customer services and network infrastructures, a series of autonomic control loops are introduced into the management-plane. The linkage mechanism integrating the four entities enables the model to automatically adjust network parameter configurations by analyzing the contexts collected from different layers. The monitor entity functions as a context collector that perceives and organizes context from the managed resources. By analyzing the collected contexts, the analyze entity produces sets of control policies.

The plan entity is then activated to decompose the scheduled control policies into a series of operations. Finally, the execute entity is responsible for putting the operations into practice by configuring settings on the physical network elements.

Three control loops reside in a given virtual network's management-plane, each associated with one of three layers: the virtual network layer, the network resource control layer and the service control layer. For the virtual network layer, the control loop is in charge of the autonomic attributes of the virtual network nodes; in particular, its monitor entity tracks information such as flow-table size, CPU utilization and queue length. The monitor entity at the network resource level monitors data such as virtual network topology, traffic load, available bandwidth, etc., while the monitor entity on the service control layer grasps service-relevant context, such as service type, service parameters, service QoS requirements and service performance. By aggregating all the raw context data, the analyze entities residing in each layer build models to capture changes in the network environment and determine whether any actions should be carried out. Based on the policies generated by the analyze entity, the plan entities then make decisions to guide various self-reconfigurations that dynamically adjust the network environment. Here, the policies are regarded as sets of rules, each composed of a set of conditions and actions: once a certain condition is satisfied, the corresponding set of actions is triggered. Policy-based network management, a systematic network management approach, emphasizes that the self-* attributes should be inherent and that the virtual resource utilization within a virtual network should be self-optimizing. Note that the three control loops are not isolated but interconnected, such that the contexts collected in one layer can be shared; moreover, the policies and actions generated in different layers may be interdependent.

[Figure 6 shows a monitor-analyze-plan-execute loop at the management-plane, the service control layer, the network resource control layer and the virtual network layer.]

Figure 6. The Structure of Autonomic Network

V. QOS MANAGEMENT COMPONENTS

To guarantee the quality of services, QoS management components are enabled on both the management-plane and the data-plane. Specifically, the QoS management DE, which determines QoS rules dynamically, is located on the management-plane as an application. Meanwhile, the basic capability of the QoS components in the data-plane is also extended. Generally, the QoS management DE is composed of six modules (Fig. 7). The functions of each module are elaborated below:

[Figure 7 shows the QoS Management DE composed of the Requirement DB, Rule DB, Policy DB, Context Manager, Analysis and Rule Decision modules.]

Figure 7. QoS Management DE

Requirement DB: a database that stores the detailed QoS requirements of each data flow. This information is translated from the Service Level Agreement (SLA) made between the user and the network service provider.

Rule DB: a database in which QoS rules planned for certain network environments or specific QoS requirements are stored.

Policy DB: a database in which global QoS policies (such as the working mode of the packet marker) are stored.

Context Manager: an entity responsible for analyzing the context information perceived from the data-plane. Here, context refers to any information that characterizes the status of the network, a service or a packet. Specifically, network context is defined as a set of information characterizing the performance of network nodes (e.g., CPU utilization, node memory and queue length) and link status (e.g., packet loss ratio, delay, jitter and available bandwidth). Service context is defined as a set of information depicting the inherent features (e.g., the service type and the QoS requirements) and real-time features (e.g., the burst rate) of a service flow. The inherent features are carried in the first packet of a flow and can be extracted by leveraging Deep Packet Inspection (DPI) technology, whereas the real-time features are obtained through statistical analysis. Packet context refers to the semantic priority of each packet within a flow; packets with different priorities have different impacts on a given service in terms of quality of experience.

Analysis: this module is responsible for judging whether the QoS requirements of a service can be satisfied and whether conflicts exist among these requirements. Once the QoS requirements are deemed satisfiable, they are delivered, along with the corresponding contexts, to the Rule Decision module.

Rule Decision: if appropriate QoS rules exist, this module selects the optimal rule for the service; otherwise, it plans a new rule based on the QoS requirements, the contexts and the QoS policy.

In general, the basic QoS functions implemented in our model include packet marking, queue management and queue scheduling.
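The following Python sketch illustrates how the Rule Decision module might select or plan a rule; the databases are plain dictionaries and the matching criteria are assumptions made for the sketch:

REQUIREMENT_DB = {"flow42": {"max_delay_ms": 50, "min_bw_mbps": 2}}
RULE_DB = [{"when": {"service": "voice"}, "rule": "mark_EF_and_PQ"}]
POLICY_DB = {"marker_mode": "color-blind"}

def decide(flow_id, context):
    req = REQUIREMENT_DB.get(flow_id, {})
    # 1) reuse an existing rule whose conditions match the perceived context
    for entry in RULE_DB:
        if all(context.get(k) == v for k, v in entry["when"].items()):
            return entry["rule"]
    # 2) otherwise plan a new rule from requirements, context and global policy
    new_rule = {"requirements": req, "policy": POLICY_DB["marker_mode"],
                "context": context}
    RULE_DB.append({"when": {"service": context.get("service")}, "rule": new_rule})
    return new_rule

print(decide("flow42", {"service": "voice"}))  # reuses mark_EF_and_PQ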

In practical terms, packet marking is realized through the OpenFlow protocol, while queue management and queue scheduling are implemented in compliance with the OF-Config protocol. The existing OpenFlow protocol provides basic functions to configure the flow-table, group-table and meter-table. The enqueue/set-queue action defined in the flow-table directs a certain flow to the proper queue, and the meter-table with its meter bands provides basic per-flow operations; for example, the remarking meter band can lower the drop precedence level of a packet. However, these basic functions fall far short of full QoS functionality. Therefore, in order to support diverse packet marking options, it is necessary to extend the existing meter bands. For this reason, we propose a novel context-aware packet-marking algorithm (elaborated in the next section) and add it to the meter table as a new meter band. Since the fields in a meter band are not fixed, a Type-Length-Value format (as shown in Fig. 8) is used to define the newly introduced meter band, avoiding the inconvenience caused by not knowing the structure of a new meter band in advance.

[Figure 8 shows the band layout: Band Type | Rate | Counters | Spec_arg #1 Type | Spec_arg #1 Length | Spec_arg #1 Value | ... | Spec_arg #n Type | Spec_arg #n Length | Spec_arg #n Value.]

Figure 8. Format of Meter Band with Variable Specific Arguments
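A sketch of encoding the Figure 8 layout as Type-Length-Value fields is shown below, so that a parser can walk arguments it does not know in advance. The numeric type code is illustrative, and the Counters field is omitted for brevity:

import struct

def encode_band(band_type, rate, spec_args):
    """spec_args: list of (arg_type, payload bytes); returns the packed band."""
    body = b"".join(struct.pack("!HH", t, len(v)) + v for t, v in spec_args)
    return struct.pack("!HI", band_type, rate) + body

def decode_args(buf):
    args, off = [], 0
    while off < len(buf):
        t, ln = struct.unpack_from("!HH", buf, off)
        args.append((t, buf[off + 4: off + 4 + ln]))
        off += 4 + ln
    return args

band = encode_band(0xFFFF, 1000, [(1, b"\x2e")])  # one variable argument
assert decode_args(band[6:]) == [(1, b"\x2e")]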

The queues of an SDN switch are configured via OF-Config, which is transported over the NetConf protocol. The currently available operations, such as <get-config> and <edit-config>, provide only basic functionality to read or modify the configuration (e.g., setting the minimum or maximum transmission rate of a queue). In order to support queue management and queue scheduling, the content layer of the NetConf protocol needs to be enriched to provide a larger operation set via OF-Config. Thus, in our model, the Weighted Random Early Detection (WRED) queue management algorithm, as well as the Priority Queuing (PQ) and Weighted Round-Robin (WRR) queue scheduling algorithms, are added into the content layer of NetConf.
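As a minimal sketch of the WRED behavior added above: between the minimum and maximum thresholds the drop probability rises linearly toward a cap, and colors with higher drop precedence get tighter thresholds. The profile values are illustrative assumptions:

import random

PROFILES = {"green":  {"min_th": 40, "max_th": 80, "max_p": 0.05},
            "yellow": {"min_th": 20, "max_th": 60, "max_p": 0.20}}

def wred_drop(avg_queue_len, color):
    p = PROFILES[color]
    if avg_queue_len < p["min_th"]:
        return False                     # below min threshold: never drop
    if avg_queue_len >= p["max_th"]:
        return True                      # above max threshold: always drop
    slope = (avg_queue_len - p["min_th"]) / (p["max_th"] - p["min_th"])
    return random.random() < slope * p["max_p"]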

VI. COLLABORATIVE BORROWING-BASED PACKET-MARKING

Packet-marking is a key mechanism in QoS management: once network congestion occurs, network nodes selectively discard packets according to their marks. In this section, we elaborate the proposed Collaborative Borrowing Based Packet-Marking (CBBPM) algorithm, which complies with the concept of the software-defined autonomic QoS model. While remaining compatible with traditional packet marking algorithms, our proposed approach possesses a variety of autonomic abilities:
(1) CBBPM can automatically perceive network context, service context and packet context.
(2) CBBPM can automatically configure the corresponding policy and parameters of the packet marker according to the perceived service type and performance data.


(3) CBBPM can provide semantic-level differentiated service to different data flows, marking packets differently according to their semantic priority.

A. Design Principle
The traditional Single Rate Three Color Marker (srTCM) is used as a key component in the DiffServ traffic conditioner (J. Heinanen et al., 2006). srTCM meters a traffic flow and marks its packets green, yellow or red according to three traffic parameters: Committed Information Rate (CIR), Committed Burst Size (CBS) and Excess Burst Size (EBS):
 Committed Information Rate (CIR): the permitted average rate of a flow. It also represents the rate of token injection.
 Committed Burst Size (CBS): the maximum volume of tokens allowed for each burst of a flow, i.e., the capacity of the committed bucket.
 Excess Burst Size (EBS): the maximum volume of tokens by which a burst of a flow may exceed the CBS.

A packet is marked green if it does not exceed the CBS, yellow if it exceeds the CBS but not the EBS, and red otherwise. The packet marking behavior depends on the operation mode (Color-Blind or Color-Aware; the Color-Blind mode is adopted in this paper) and two token buckets, C and E, which share the common rate CIR. The maximum sizes of the C and E token buckets are CBS and EBS, respectively. Initially, both token buckets are full. Thereafter, the token counts are updated every 1/CIR second according to the following rules:
 If Tc (the number of tokens left in the C token bucket) is less than CBS, a token is injected into the C token bucket;
 Otherwise, if Te (the number of tokens left in the E token bucket) is less than EBS, a token is injected into the E token bucket;
 Otherwise, neither token count is increased.

In Color-Blind mode, packets are assumed to be uncolored. When a packet P arrives at time t, it is handled as follows:
 If Tc is greater than P.size (the size of packet P), packet P is marked green and Tc is decremented by P.size;
 Else, if Te is greater than P.size, packet P is marked yellow and Te is decremented by P.size;
 Else, packet P is marked red, and neither Tc nor Te is decreased.

srTCM processes flows independently. This is liable to waste tokens: lightly loaded flows may discard surplus tokens while other, heavily loaded flows lack tokens entirely. Furthermore, even within a flow, packets with different priorities may be of different importance to the service, yet traditional packet marking algorithms do not process packets with their different semantic priorities in mind. To achieve better resource management and allocation, it is necessary to improve the existing packet marking algorithms by introducing autonomic technology, such that the quality of service of key packets is enhanced.
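For concreteness, the following runnable Python sketch reproduces the color-blind srTCM behavior just described (token counts in bytes; the strict "greater than" comparisons follow the text above):

class SrTCM:
    def __init__(self, cir, cbs, ebs):
        self.cir, self.cbs, self.ebs = cir, cbs, ebs
        self.tc, self.te = cbs, ebs          # both buckets start full

    def add_token(self):
        # called every 1/CIR second: C fills first, then E
        if self.tc < self.cbs:
            self.tc += 1
        elif self.te < self.ebs:
            self.te += 1

    def mark(self, size):
        if self.tc > size:
            self.tc -= size
            return "green"
        if self.te > size:
            self.te -= size
            return "yellow"
        return "red"

m = SrTCM(cir=1000, cbs=1500, ebs=3000)
print(m.mark(1000), m.mark(1000), m.mark(9000))  # green yellow red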



[Figure 9 sketches, for each flow i, the C, E and Ccri token buckets (capacities CBS[i], EBS[i] and rcri*CBS[i]), token injection at CIR[i] and rcri*CIR[i] tokens per second, overflow from C into E, and the borrowing/regaining of C and E tokens between flows.]

Figure 9. Collaborative Borrowing Based Packet Marker


Fig. 9 illustrates the basic concepts of the CBBPM algorithm. Besides the C and E token buckets, a Ccri token bucket is introduced for each flow; tokens in the Ccri bucket are reserved for critical packets only. Different from traditional packet-marking algorithms, CBBPM can adjust token allocation between multiple flows in a dynamic and adaptive way. The adaptive features are reflected in the following aspects: (1) tokens are scheduled between flows according to utilization, meaning that flows under heavy load can regain or borrow tokens from other flows that have relatively sufficient tokens; (2) tokens are allocated to critical packets preferentially, so that the global quality of a given service can be guaranteed; (3) a degrading mechanism lowers the transmission priority of non-critical packets once congestion occurs, so that critical packets can be processed with high priority.

In the CBBPM algorithm, the packet marker marks packets according to two categories of parameters: service-related parameters and policy-related parameters. Service-related parameters are statistical indexes depicting traffic characteristics; here they refer to CIR, CBS and EBS. Policy-related parameters determine the global policy of the packet marker. Generally, the following policy-related parameters are used:
 Packet Dropping Profile: the maximum number of packets that may be dropped continuously, denoted L.
 Ratio of Key Packets: the ratio of key packets among all packets, denoted rcri.

B. Definition of Context


Three types of context are involved in this packet-marking algorithm. Detailed descriptions are given below:
 Service Context: the set of information reflecting the attributes and status of the current flow. Service context is collected by context agents residing in autonomic terminals and is encapsulated within the QoS option of the packet header. Two main types of service context are considered in the CBBPM algorithm: traffic-relevant service context and service-performance-related context. Traffic-relevant service context (such as CIR, CBS and EBS) is extracted from the expansion options of the IPv6 header to determine the parameter configuration of the packet marker, whereas the service-performance-related context, i.e., PLRflow (Packet Loss Rate per flow), is employed to adjust the packet marking policy by evaluating the performance of network links. In practice, PLRflow is measured at the receiving terminal, encapsulated in an IPv6 expansion option field and sent back to the sending side. The sending terminal then delivers the packet containing PLRflow to all the packet markers residing in edge nodes.
 Network Context: the set of information describing the status of the current network. Among the many parameters depicting network status, the average PLRpath (Packet Loss Rate per path) of an end-to-end link statistically reflects the congestion level, so the value of PLRpath can be regarded as the indicator for adjusting the token allocation policy. As PLRpath rises, the congestion level increases, and the loss rate of non-critical packets needs to be increased appropriately to guarantee the successful transmission of critical packets. The value of PLRpath is approximated from estimates of PLRflow, whose actual value is derived from the QoS option field of the IPv6 packet header. To better reflect the actual packet loss rate on a network link, an Exponentially Weighted Moving Average (EWMA) is adopted to update PLRpath from its historical value and the current feedback PLRflow:


PLRpath = α · PLR'path + (1 − α) · PLRflow, 0 ≤ α ≤ 1 (1)

where PLR'path denotes the historical value and PLRflow is the feedback value extracted from the IPv6 packet header. The smoothing factor α balances the weights of PLR'path and PLRflow.
 Packet Context: the semantic priority of a certain packet within its subordinate flow. A context agent running on the sending terminal first recognizes the semantic priority of a packet and then marks it with the corresponding precedence level in the Traffic Class (TC) field. For data flows with K different precedence levels, we define a packet as a critical packet if its precedence level k satisfies 1 ≤ k ≤ k0, where k0 is a predefined parameter. Packets with precedence level k > k0 (k0 < k ≤ K) are defined as non-critical packets.

C. Algorithm Description
In the CBBPM algorithm, three token buckets (Ccri, C and E) are maintained for each data flow, where the specifications of the C and E token buckets follow the definitions in srTCM.
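Before walking through the algorithm's phases, the per-flow marker state implied by this description can be summarized in a small Python sketch (the field names are ours; the CMatrix/EMatrix rows are kept per flow as dictionaries):

from dataclasses import dataclass, field

@dataclass
class MarkerState:
    cir: float            # committed information rate (tokens/s)
    cbs: float            # C bucket capacity
    ebs: float            # E bucket capacity
    r_cri: float          # ratio of key packets
    tc: float = 0.0       # tokens left in C
    te: float = 0.0       # tokens left in E
    tcri: float = 0.0     # tokens left in Ccri (capacity r_cri * cbs)
    c_loans: dict = field(default_factory=dict)  # CMatrix row: peer flow -> tokens
    e_loans: dict = field(default_factory=dict)  # EMatrix row: peer flow -> tokens

    def __post_init__(self):
        # all buckets start full; Ccri refills at r_cri * cir tokens/s
        self.tc, self.te, self.tcri = self.cbs, self.ebs, self.r_cri * self.cbs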

As a newly introduced token bucket, Ccri holds tokens that are allocated only to critical packets. We define the bucket size of Ccri as rcri*CBS and its token injection rate as rcri*CIR. Tcri denotes the number of tokens remaining in the Ccri token bucket. To better characterize the status of the packet marker, ng, ny and nr record the total bytes of packets marked green, yellow and red, respectively; these variables are introduced to guarantee fairness and to avoid an excessive amount of packets being marked red.

Procedure: TokenInject_CE   (executed every 1/CIR second)
1. {
2.   if (Te < 0)
3.     Te := Te + 1;
4.   else if (Tc < CBS)
5.     Tc := Tc + 1;
6.   else if (Te < EBS)
7.     Te := Te + 1;
8. }

Procedure: TokenInject_Ccri   (executed every 1/(rcri*CIR) second)
1. {
2.   if (Tcri < rcri*CBS)
3.     Tcri := Tcri + 1;
4. }

Figure 10. The procedure of Token Injection

The CBBPM algorithm can be decomposed into three phases: token updating, packet marking and adaptive adjustment. During the token updating phase, tokens are injected into the three token buckets periodically. During the packet marking phase, packets are marked green, yellow or red according to the perceived contexts and the configured policy. The adaptive adjustment phase aims to protect critical packets by selectively degrading non-critical packets when congestion occurs. We give a detailed description of each phase in the following.

i. Token Updating
In the initial state, the three token buckets (C, E and Ccri) are all full of tokens. Thereafter, the system injects tokens into the three buckets periodically: if the committed information rate is set to CIR bytes/s, the system injects a token into the C and E token buckets every 1/CIR second, and a token into the Ccri bucket every 1/(rcri*CIR) second. For example, with CIR = 1000 tokens/s and rcri = 0.2, the C and E buckets receive a token every 1 ms and the Ccri bucket one every 5 ms. The token update procedure is described in Fig. 10.

ii. Packet Marking
To provide differentiated service, critical and non-critical packets are processed discriminatively in this phase. In order to improve the possibility that tokens can be allocated to critical packets, a collaborative token allocation concept is introduced to improve the existing srTCM algorithm. Under this concept, if the remaining tokens are not enough for a critical packet, the system requisitions tokens from other flows to meet its requirement. For non-critical packets, however, the collaborative mechanism takes effect only if the total length of packets marked as non-conformant exceeds L*B, where B denotes the average length of packets within a data flow.


The significance of collaborative token allocation is twofold: (1) by scheduling tokens globally among flows, the probability of marking a critical packet as conformant (green) is increased; (2) the total length of packets marked as non-conformant (red) is kept below L*B.

Suppose that |S| flows exist, each maintaining its own packet marker. In order to record the loan relationships of tokens between different flows, two matrices, CMatrix and EMatrix, are defined: CMatrix[i][j] records the number of tokens that flowj has borrowed from flowi's C token bucket, and EMatrix[i][j] records the number of tokens that flowj has borrowed from flowi's E token bucket. Once a packet arrives at an edge node, the packet marker first extracts its precedence level from the Traffic Class field of the IPv6 header and judges whether it is a critical or non-critical packet. The marking procedure for critical packets is described in Fig. 11.

The function CRegain(i,P) is designed to recycle tokens that other flows have borrowed from flowi's C token bucket. To avoid potential spillage, the maximum amount of tokens that flowi expects to regain is limited to P.size − Tc. The algorithm of CRegain(i,P) is described in Fig. 12.



PROCEDURE: Marking_CriticalPkt (Flow i, Packet P)
{
 1.  if (Tcri[i] >= P.size) {
 2.    packet P is marked as GREEN;
 3.    Tcri[i] := Tcri[i] - P.size;  Tc[i] := Max(Tc[i] - P.size, 0);
 4.  }
 5.  else if (Tc[i] > P.size) {
 6.    packet P is marked as GREEN;
 7.    Tc[i] := Tc[i] - P.size;
 8.  }
 9.  else if (Tc[i] + CRegain(i,P) > P.size) {
10.    packet P is marked as GREEN;
11.    Tc[i] := Tc[i] - P.size;
12.  }
13.  else if (Tc[i] + CBorrow(i,P) > P.size) {
14.    packet P is marked as GREEN;
15.    Tc[i] := Tc[i] - P.size;
16.  }
17.  else if (Te[i] > P.size) {
18.    packet P is marked as YELLOW;
19.    Te[i] := Te[i] - P.size;
20.  }
21.  else if (Te[i] + ERegain(i,P) > P.size) {
22.    packet P is marked as YELLOW;
23.    Te[i] := Te[i] - P.size;
24.  }
25.  else if (Te[i] + EBorrow(i,P) > P.size) {
26.    packet P is marked as YELLOW;
27.    Te[i] := Te[i] - P.size;
28.  }
29.  else packet P is marked as RED;
30. }

Figure 11. The procedure of Critical Packet Marking

Procedure: CRegain (Flow i, Packet P) {
1.  num := Tc[i];
2.  for (j=1; j<=S; j++)
3.    totalLend := totalLend + CMatrix[i][j];
4.  if ((P.size - Tc[i]) > totalLend) {
5.    for (j=1; j<=S; j++) {
6.      if (CMatrix[i][j] > 0) {
7.        if (Tc[j] >= CMatrix[i][j]) {
8.          Tc[j] := Tc[j] - CMatrix[i][j];
9.          Tc[i] := Tc[i] + CMatrix[i][j];
10.         CMatrix[i][j] := 0;
11.       }
12.       else if (Tc[j] < CMatrix[i][j]) {
13.         Tc[i] := Tc[i] + Tc[j];
14.         CMatrix[i][j] := CMatrix[i][j] - Tc[j];
15.         Tc[j] := 0;
16.   } } }
17. else if ((P.size - Tc[i]) <= totalLend) {
18.   for (j=1; j<=S; j++) {
19.     if (CMatrix[i][j] > 0) {
20.       if (Tc[j] >= CMatrix[i][j]) {
21.         Tc[j] := Tc[j] - ⌈(P.size - Tc[i]) × CMatrix[i][j] / totalLend⌉;
22.         Tc[i] := Tc[i] + ⌈(P.size - Tc[i]) × CMatrix[i][j] / totalLend⌉;
23.         CMatrix[i][j] := CMatrix[i][j] - ⌈(P.size - Tc[i]) × CMatrix[i][j] / totalLend⌉;
24.       }
25.       else if (Tc[j] < ⌈(P.size - Tc[i]) × CMatrix[i][j] / totalLend⌉) {
26.         Tc[i] := Tc[i] + Tc[j];
27.         CMatrix[i][j] := CMatrix[i][j] - Tc[j];
28.         Tc[j] := 0;
29. } } } }
30. return Tc[i] - num;
31. }
Figure 12. The Procedure of Regaining Tokens for a Critical Packet (CRegain)
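To make the proportional repayment in lines 21-23 concrete, consider a hypothetical example (the numbers are illustrative only): if flowi still needs P.size − Tc[i] = 100 tokens, has lent out totalLend = 400 tokens in total, and CMatrix[i][j] = 200, then debtor j repays

\[ \left\lceil \frac{(P.size - T_c[i]) \cdot CMatrix[i][j]}{totalLend} \right\rceil = \left\lceil \frac{100 \cdot 200}{400} \right\rceil = 50 \text{ tokens.} \]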

In the procedure described in Fig. 12, flowi first determines whether the amount of tokens on loan is enough to fill the gap of insufficient tokens. If the amount is less than flowi's expectation, repaying the loans becomes the priority for the flows that still owe flowi; in this case, if the tokens remaining in a debtor's C token bucket are not enough to repay the loan in full, only the remaining tokens are repaid to flowi. Otherwise, the amount of tokens each debtor should return is determined according to the ratio of its loan to the total amount of tokens that flowi has loaned out; if a debtor's remaining tokens are less than the amount it should undertake, only the remaining tokens are repaid.
The function CBorrow(i,P) realizes the functionality of borrowing tokens from other flows' C token buckets for the purpose of marking a critical packet as green. Two conditions must be satisfied for a flow to be selected as a creditor. First, adequate tokens must be available in the flow's C token bucket. Second, the payload of the flow must be relatively light, so that no large burst is expected in the short term. Thus, we define δk = CBS[k]/2 to evaluate the adequacy of tokens in flowk's C token bucket; if Tc[k] > δk, flowk is considered to possess enough tokens. To evaluate the load of flowk, the parameter load[k] = Avg(packetRate[k],T)/CIR[k] is defined, where Avg(packetRate[k],T) denotes the average arriving rate of packets within a time window T. If load[k] is less than a predefined threshold γ, it can be deduced that no large traffic burst will occur in flowk in the short term. Flows with light load are preferred as creditors, so all flows whose load is less than γ are sorted by load, yielding a collection Stream = {stream1, stream2, …, streamm}; a compact sketch of this selection step follows, and Fig. 13 illustrates the detailed procedure of the CBorrow function.
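The creditor-selection step (the loadRank(γ) call in Fig. 13) can be sketched as follows; FlowState and its field names are illustrative assumptions, not the authors' data structures:

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FlowState:
    """Illustrative per-flow state; the field names are assumptions."""
    tc: float                           # tokens remaining in the C bucket
    cbs: float                          # committed burst size CBS[k]
    cir: float                          # committed information rate CIR[k]
    avg_rate: Callable[[float], float]  # Avg(packetRate[k], T)

def load_rank(flows: List[FlowState], gamma: float, T: float) -> List[FlowState]:
    """Sketch of loadRank(γ): pick creditor candidates and order them by load."""
    candidates = []
    for k in flows:
        delta_k = k.cbs / 2.0                  # δk = CBS[k]/2, token-adequacy threshold
        load_k = k.avg_rate(T) / k.cir         # load[k] = Avg(packetRate[k],T)/CIR[k]
        if k.tc > delta_k and load_k < gamma:  # adequate tokens and light load
            candidates.append((load_k, k))
    candidates.sort(key=lambda p: p[0])        # lightest-loaded flows are tried first
    return [k for _, k in candidates]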

Procedure: CBorrow (Flow i, Packet P) {
1.  Flow Stream = loadRank(γ);
2.  num := Tc[i];
3.  for (j=1; j<=m; j++) {
4.    if (Tc[i] < P.size) {
5.      index := Stream[j];
6.      if (Tc[index] > δindex) {
7.        Tc[index] := Tc[index] - ⌈Tc[index] × (γ - load[index])⌉;
8.        Tc[i] := Tc[i] + ⌈Tc[index] × (γ - load[index])⌉;
9.        CMatrix[index][i] := CMatrix[index][i] + ⌈Tc[index] × (γ - load[index])⌉;
10.     }
11.   }
12.   else Break;
13. }
14. return Tc[i] - num;
15. }
Figure 13. The Procedure of Borrowing Tokens from Other Flows' C Token Buckets (CBorrow)

For tokens in the E token bucket, the procedures of token regaining (ERegain) and borrowing (EBorrow) are similar to those for the C token bucket, so we do not go into the details here. When it comes to non-critical packets, however, the marking procedure differs from that of critical packets; the detailed procedure is stated in Fig. 14. For non-critical packets, it is necessary to take into account the total number of packets that are continuously marked as conformed (green or yellow). If this count exceeds the pre-defined threshold, punitive measures are applied to the non-critical packets to reduce the possibility of dropping critical packets; that is, even if there are still enough tokens in the C or E token bucket, the non-critical packet is marked as red.

PROCEDURE: Marking_NormalPkt (Flow i, Packet P) {
1.  if (Tc[i] > P.size) {
2.    if ((ng+ny)/B > L×(1/PLRpath - 1)) {
3.      packet P is marked as RED;
4.      nr := nr + P.size, ng := 0, ny := 0;
5.    }
6.    else {
7.      packet P is marked as GREEN;
8.      Tc[i] := Tc[i] - P.size;
9.      ng := ng + P.size, nr := 0;
10.   } }
11. else if (Te[i] > P.size) {
12.   if ((ng+ny)/B > L×(1/PLRpath - 1)) {
13.     packet P is marked as RED;
14.     nr := nr + P.size, ng := 0, ny := 0;
15.   }
16.   else {
17.     packet P is marked as YELLOW;
18.     Te[i] := Te[i] - P.size;
19.     ny := ny + P.size, nr := 0;
20.   } }
21. else {
22.   packet P is marked as RED;
23.   ng := 0, ny := 0, nr := nr + P.size; }
24. }
Figure 14. The Procedure of Packet Marking of Non-critical Packets
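As a hypothetical numerical illustration of the threshold in Fig. 14 (values chosen for clarity only): with average packet length B = 1000 bytes, L = 5 and PLRpath = 0.2, a conformed non-critical packet is forced to red once

\[ \frac{n_g + n_y}{B} > L \cdot \left(\frac{1}{PLR_{path}} - 1\right) = 5 \cdot \left(\frac{1}{0.2} - 1\right) = 20, \]

i.e. after roughly 20 average-sized packets in a row have been marked green or yellow. The threshold grows as the path loss rate falls, so lightly loaded paths rarely trigger the punitive marking.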

iii. Adaptive Adjustment
Since queue management mechanisms (such as RED and WRED) determine which packets to drop according to the congestion level of the network and each packet's marking result, it is feasible to provide better quality of service to critical packets by dynamically adjusting the relative proportion of critical and non-critical packets, thereby reducing the possibility of dropping critical packets. By degrading non-critical packets, the proportion of critical packets among the conformed packets (marked green or yellow) is relatively enhanced. Once PLRpath rises, the congestion level of the network link is becoming severe, so the preventive measure of degrading non-critical packets improves the probability of successfully delivering critical packets. In particular, non-critical packets marked as green or yellow are degraded to red. Various schemes could be adopted to apply this adjustment; PLRpath is utilized as the key factor in our approach. Concretely, if PLRpath < (1 - rg), the packet need not be degraded; if PLRpath > (1 - rcri), the degrade operation is performed with probability P1; otherwise, the degrade operation is applied with probability P2. Here, rg denotes the proportion of packets marked as green among the total packets in the current time slice, and rcri denotes the average rate of critical packets relative to CIR. The detailed description of packet degrading is illustrated in Fig. 15.

PROCEDURE: Degrade (Packet P) {
1.  if (ColorOf(P) == Green) {
2.    Tc[i] := Tc[i] + P.size;
3.    packet P is marked as RED;
4.    ng := 0, ny := 0, nr := nr + P.size;
5.  }
6.  else if (ColorOf(P) == Yellow) {
7.    Te[i] := Te[i] + P.size;
8.    packet P is marked as RED;
9.    ng := 0, ny := 0, nr := nr + P.size;
10. }
11. }
Figure 15. The Procedure of Degrading Non-critical Packets
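The threshold logic above can be summarized in a small sketch (illustrative only; random is used to realize the probabilities P1 and P2):

import random

def should_degrade(plr_path, r_g, r_cri, p1, p2):
    # Decide whether a conformed non-critical packet is degraded to red.
    # plr_path: measured packet loss rate of the path
    # r_g:      proportion of green-marked packets in the current time slice
    # r_cri:    average rate of critical packets relative to CIR
    if plr_path < (1 - r_g):
        return False                 # light congestion: keep the original mark
    if plr_path > (1 - r_cri):
        return random.random() < p1  # heavy congestion: degrade with probability P1
    return random.random() < p2      # intermediate congestion: degrade with probability P2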

VII. PROTOTYPE AND EXPERIMENT

A testbed is constructed to verify the autonomic QoS management performance of our proposed model. The controller is implemented on the NOX platform (N. Gude et al., 2008), extended with the Proto-table Management DE, QoS Management DE and the other DEs introduced in Section IV. The switches are implemented based on Ofsoftswitch13 (CPqD, 2012), and the pipeline processing functionality is deployed in compliance with OpenFlow Switch 1.3. Meanwhile, actions and match fields are extended in order to support generic processing functions and user-defined OXM_TLVs.
A. Testbed Establishment
The testbed is established as demonstrated in Fig. 16. More specifically, three Dell R410 servers and four R710 servers are deployed in this network. One Dell R710 server running the NOX platform is designated as the controller, while the other servers running Ofsoftswitch13 act as either core switches or edge switches. All switch nodes are DiffServ enabled. Besides, several other PCs serving as autonomic terminals are connected to the edge switches.


Figure 16. The Network Topology of the Testbed (media flows and background flows traverse edge switches ES1-ES4 and core switches CS1, CS2)
Three videos with different characteristics, provided by the Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin, are selected as testing data. All of the videos are encoded with H.264/AVC JM 18.5 and are in Common Intermediate Format (CIF, 352×288 resolution, 30 fps). For each video, the semantic precedence of I, P and B frames satisfies PI>PP>PB. The frame order of H.264 is IBPBPB…, and the interval between two I frames is 20 frames. Table I lists the statistical features of the three videos.


Table I. Flow attributes of the three encoded video services
No.  Video       Frames  AR (KB/s)  ARcri (KB/s)  Burstiness
1    Bridge_far  300     139        86            low
2    Waterfall   300     266        151           low
3    Tempete     2000    254        65            middle
Note: AR denotes the average information rate of the stream; ARcri denotes the average information rate of critical packets.

Table II. Traffic-related features of the three videos
Video       CIR (Kbit/s)  CBS (bytes)  EBS (bytes)  rcri
Bridge_far  136           3750         5500         0.57
Waterfall   456           14000        28000        0.64
Tempete     126           20500        40100        0.26

Table III. WRED-related parameters
Parameter                        Green packets  Yellow packets  Red packets
Minimum threshold (thmin)        60             35              20
Maximum threshold (thmax)        90             70              45
Maximum drop probability (pmax)  0.01           0.2             0.8
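For context, WRED drops packets of a given color with a probability that grows linearly between thmin and thmax up to pmax; a minimal sketch of that curve (not the switch implementation used in the testbed) is:

def wred_drop_prob(avg_queue, th_min, th_max, p_max):
    # Linear WRED drop-probability curve for one color class (a sketch).
    if avg_queue < th_min:
        return 0.0          # below the minimum threshold nothing is dropped
    if avg_queue >= th_max:
        return 1.0          # above the maximum threshold every packet is dropped
    return p_max * (avg_queue - th_min) / (th_max - th_min)

# Example with the red-packet parameters of Table III (thmin=20, thmax=45, pmax=0.8):
# wred_drop_prob(30, 20, 45, 0.8) == 0.32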


B. Verification of QoS Supporting Components
Three virtual networks are first abstracted from the underlying network infrastructure, and different QoS components are assembled in them to verify the software-defined characteristics of our proposed model. The first virtual network is derived from switches ES1, CS1, CS2 and ES3; the second from ES1, CS1, CS2 and ES4; and the third from ES2, CS1, CS2 and ES4. The three virtual networks adopt the same flow classification policy, i.e., video flows are all mapped to the AF class and background traffic is mapped to the BE class. The PQ+WRR combination is enabled in all three virtual networks as the queue scheduling mechanism, while different packet-marking algorithms are configured as meter bands: srTCM in virtual network 1, CAPM (Xiangyang Gong, 2011) in virtual network 2, and CBBPM in virtual network 3. Table II lists the traffic-related features of the three videos, and Table III lists the WRED-related parameters. The bandwidth of the bottleneck link between the two core nodes, CS1 and CS2, is set to 20 Mbit/s with a delay of 5 ms; the bandwidth and delay of all other links are 100 Mbit/s and 2-10 ms respectively. By injecting background traffic at different rates, we can compare the effect of the different packet-marking algorithms in the three virtual networks.
Fig. 17 illustrates the average PSNR (Peak Signal-to-Noise Ratio) of each video delivered in the three virtual networks with the srTCM, CAPM and CBBPM packet-marking algorithms enabled respectively. The statistical results demonstrate CBBPM's superior performance over both CAPM and srTCM. Fig. 18 shows the distribution of packet-marking results of the three video streams under different background traffic. As can be seen, for a given video the proportion of green-marked packets carrying I frames under CBBPM far surpasses that of the other two packet-marking algorithms. The cause is clear: CBBPM dynamically adjusts resource allocation according to real-time context, so that resource utilization is maximized. To take a closer look at the change of PSNR of each video over time, 300 continuous frames are selected; the results are shown in Fig. 19. As demonstrated there, the PSNR of the three videos processed with the CBBPM packet-marking algorithm exceeds the results achieved under the other two algorithms in the different background-traffic environments.
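PSNR, the quality metric reported in Fig. 17 and Fig. 19, can be computed per frame as follows (a generic sketch, not the authors' evaluation tool):

import numpy as np

def psnr(original, decoded, max_val=255.0):
    # Peak Signal-to-Noise Ratio between two frames of equal shape.
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)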

Figure 17. The Average PSNR of Each Video (a: background traffic rate 10 Mb/s; b: background traffic rate 20 Mb/s)

Figure 18. The Distribution of Packet-Marking Results for the Three Video Streams (a: CAPM; b: CBBPM; c: srTCM; i: background traffic rate 10 Mb/s; ii: background traffic rate 20 Mb/s)

Figure 19. PSNR of 300 Continuous Frames Under Different Background Traffic (i: Bridge_far; ii: Waterfall; iii: Tempete; a: background traffic rate 10 Mb/s; b: background traffic rate 20 Mb/s)

Figure 20. Screenshots of Frames of the Three Videos Processed with Different Packet-Marking Algorithms

Furthermore, certain frames of these videos are shown in Fig. 20. From the procedures stated above, we can conclude that within our proposed software-defined autonomic QoS model: (1) various components can be assembled dynamically in a software-defined way to meet different QoS management requirements in different virtual networks; (2) autonomic QoS management can be realized by following the autonomic network structure; and (3) by adopting appropriate QoS mechanisms in a software-defined environment, network resources (i.e., tokens) can be allocated among different parties at different levels of granularity.


VIII. CONCLUSION


In this paper, a novel software-defined autonomic QoS management model for the future network has been proposed. Based on a top-down design strategy, autonomic technologies and the GANA architecture reference model have been introduced to remodel the data-plane, control-plane and management-plane. This model provides a universal scheme for autonomic network management in SDN. Furthermore, a context-aware Collaborative Borrowing Based Packet-Marking (CBBPM) algorithm is proposed; as an important underlying QoS component in the data-plane, it can be combined with other QoS components to provide new QoS functions. Experiments demonstrate the effectiveness of the model. In future work, we plan to continue with further investigation and deeper analysis, and to develop a demonstration system to validate our design.

IX. ACKNOWLEDGEMENT
This work was supported in part by the National High Technology Research and Development Program (863 Program) of China under Grant No. 2011AA01A101, the EU FP7 Project EFIPSANS (INFSO-ICT-215549), and the National Natural Science Foundation of China under Grant Nos. 61370197 and 61271041.

X. REFERENCES

S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, W. Weiss, "An Architecture for Differentiated Services", IETF RFC 2475, December 1998.
R. Braden, D. Clark, S. Shenker, "Integrated Services in the Internet Architecture: an Overview", IETF RFC 1633, June 1994.
N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, J. Turner, "OpenFlow: enabling innovation in campus networks", ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69-74, March 2008.
N. McKeown, "Software-Defined Networking: The New Norm for Networks", ONF White Paper, April 13, 2012. https://www.opennetworking.org/images/stories/downloads/whitepapers/wp-sdnnewnorm.pdf
Z. A. Qazi, C.-C. Tu, L. Chiang, R. Miao, V. Sekar, M. Yu, "SIMPLE-fying Middlebox Policy Enforcement Using SDN", in Proc. ACM SIGCOMM, August 2013.
S. Shin, G. Gu, "CloudWatcher: Network security monitoring using OpenFlow in dynamic cloud networks (or: How to provide security monitoring as a service in clouds?)", in Proc. 20th IEEE International Conference on Network Protocols (ICNP), 2012, pp. 1-6.
ITU-T Recommendation Y.1291, "An architectural framework for support of Quality of Service in packet networks", 2004.
IBM, 2011. "Understand the autonomic manager concept", http://www128.ibm.com/developerworks/library/ac-amconcept/
EFIPSANS, 2009. Deliverable 1.7: Final draft of Autonomic Behaviours Specifications (ABs) for Selected Diverse Networking Environments, http://www.efipsans.org/dmdocuments/INFSO-ICT215549_EFIPSANS_CO_WP1_D1_7_Part1.pdf
EFIPSANS, 2009. Exposing the Features in IPv6 protocols that can be exploited/extended for the purposes of designing and building autonomic Networks and Services project, http://efipsans.org
S. Civanlar, M. Parlakisik, A. M. Tekalp, et al., "A QoS-enabled OpenFlow environment for scalable video streaming", in Proc. IEEE GLOBECOM Workshops (GC Workshops), 2010, pp. 351-356.
H. E. Egilmez, S. T. Dane, K. T. Bagci, et al., "OpenQoS: An OpenFlow controller design for multimedia delivery with end-to-end Quality of Service over Software-Defined Networks", in Proc. APSIPA ASC, 2012, pp. 1-8.
M. F. Bari, S. R. Chowdhury, R. Ahmed, et al., "PolicyCop: An Autonomic QoS Policy Enforcement Framework for Software Defined Networks", in Proc. SDN4FNS, Trento, Italy, 2013.
W. Kim, et al., "Automated and scalable QoS control for network convergence", in Proc. INM/WREN, 2010.
A. Ishimori, et al., "Control of Multiple Packet Schedulers for Improving QoS on OpenFlow/SDN Networking", in Proc. Second European Workshop on Software Defined Networks (EWSDN), 2013.
T. Zhou, X. Gong, Y. Hu, et al., "PindSwitch: A SDN-based Protocol-independent Autonomic Flow Processing Platform", in Proc. Globecom Workshops (GC Wkshps), 2013, pp. 842-847.
W. Wang, Q. Qi, X. Gong, Y. Hu, X. Que, "Autonomic QoS management mechanism in Software Defined Network", China Communications, vol. 11, no. 7, pp. 13-23, July 2014.
EU, 2007. IST Future and Emerging Technologies, Situated and Autonomic Communication (SAC), http://cordis.europa.eu/ist/fet/comms.htm
J. Heinanen, R. Guerin, "A Single Rate Three Color Marker", IETF RFC 2697, September 1999.
X. Gong, "Research on Key Technologies of Quality-of-Service in Next Generation Internet", Doctoral Dissertation, Beijing University of Posts and Telecommunications, 2011 (in Chinese).
N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, S. Shenker, "NOX: towards an operating system for networks", ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105-110, 2008.
CPqD, 2012. Ofsoftswitch13, https://github.com/CPqD/ofsoftswitch13

Biography

Wendong Wang: is a full professor of the State Key Lab of Networking and Switching Technology at Beijing University of Posts and Telecommunications. His research interests are IP QoS, next generation Internet, and next generation Internet services.

Ye Tian: received his PhD degree from Beijing University of Posts and Telecommunications in 2013. He is now a postdoctoral researcher in the State Key Lab of Networking and Switching Technology. His research interests are IP QoS, next generation Internet services and data mining.

Xiangyang Gong: is a full professor of the State Key Lab of Networking and Switching Technology at BUPT. He received his PhD in communication science from Beijing University of Posts and Telecommunications. His research interests are IP QoS, network security, advanced switching technologies and next generation Internet.

Qinglei Qi: is currently a Ph.D. candidate in the State Key Lab of Networking and Switching Technology at BUPT. His research interests are IP QoS and Software-Defined Networking.

Yannan Hu: is currently a Ph.D. candidate in the State Key Lab of Networking and Switching Technology at BUPT. His research interests include next generation Internet, Software-Defined Networking and network optimization.