To appear in: Computer Networks. DOI: 10.1016/j.comnet.2017.04.046
Received 13 October 2016; Revised 3 March 2017; Accepted 18 April 2017
STAR: Preventing Flow-table Overflow in Software-Defined Networks

Zehua Guo (a,*), Ruoyan Liu (b), Yang Xu (c,**), Andrey Gushchin (d), Anwar Walid (e), H. Jonathan Chao (c)

a ChinaCache, Beijing, China, 100015
b Amazon, Seattle, USA, 98109
c New York University, New York, USA, 11201
d Waltz Networks, San Francisco, USA, 94134
e Nokia Bell Labs, Murray Hill, USA, 07974

* This work was conducted and completed before Dr. Zehua Guo joined ChinaCache.
** Corresponding author. Email addresses: [email protected] (Zehua Guo), [email protected] (Ruoyan Liu), [email protected] (Yang Xu), [email protected] (Andrey Gushchin), [email protected] (Anwar Walid), [email protected] (H. Jonathan Chao)
Abstract
The emerging Software-Defined Networking (SDN) enables network innovation and flexible control for network operations. A key component of SDN is the flow table at each switch, which stores flow entries that define how to process the received flows. In a network that has a large number of active flows, flow tables at switches can easily overflow, which could cause blocking of new flows or eviction of entries of some active flows. The eviction of active flow entries, however, can severely degrade the network performance and overload the SDN controller. In this paper, we propose SofTware-defined Adaptive Routing (STAR), an online routing scheme that efficiently utilizes limited flow-table resources to achieve maximized network performance. In particular, STAR detects the real-time flow-table utilization of each switch, intelligently evicts expired flow entries when needed to accommodate new flows, and selects routing paths for new flows based on the flow-table utilizations of switches across the network. Simulation results based on the Spanish backbone network show that STAR outperforms existing schemes by decreasing the controller's workload for routing new flows by about 87%, reducing packet delay by 49%, and increasing average throughput by 123% on average when flow-table resources are scarce.
Keywords: Software-Defined Networking, SDN, OpenFlow, Flow-table Overflow, Entry Eviction
1. Introduction
Software-Defined Networking (SDN) is an emerging technology that enables flexible flow control in networks. Unlike traditional IP networks, where distributed forwarding devices (e.g., switches and routers) with limited network information apply traditional routing schemes (e.g., OSPF and ECMP) to perform the routing function, SDN employs a centralized controller with a global network view to adaptively route flows based on the time-varying network status [1, 2, 3, 4, 5]. OpenFlow, a key enabler of SDN, applies flow entries to abstract network functions, such as MAC-based forwarding in layer 2, IP-based routing in layer 3, and packet classification for layers 2-4 [6]. With flow entries, SDN can also simplify network configuration. For instance, in data center networks, when a Virtual Machine (VM) is migrated to a new server, one can simply add/delete the flow entries associated with the VM's IP/MAC address in the related switches to avoid service interruption.

However, due to power consumption and manufacturing cost considerations, most OpenFlow switches are equipped with limited flow-table space, which may become insufficient when there is a large number of concurrent flows in the network [7, 8]. For example, recent measurements show that data centers can have up to 10,000 concurrent flows per second per server rack [9], but most commercial OpenFlow switches, which mainly use Ternary Content Addressable Memory (TCAM) as their flow tables, can only accommodate 750 to 2,000 OpenFlow rules [3, 10].
Therefore, the flow-table size of commercial switches becomes a critical concern when OpenFlow is deployed in production network environments [11]. Future switches will likely support larger flow tables, but flow-table efficiency remains a major concern because TCAMs consume around 100 times more power [12] and cost around 100 times more [13] than conventional Random Access Memory (RAM).

OpenFlow 1.3 [14] maintains flow-table efficiency by featuring a hard-timeout and an idle-timeout: the entries of flows are deleted from flow tables when their timers expire. When a flow table is full, OpenFlow 1.3 enables a naive admission control: the switch rejects newly inserted flow entries and sends error messages to the controller. The minimum timeout value specified in the current OpenFlow specifications is 1 second [15], while recent studies show that almost 67% of flows have durations of less than 1 second [16]. Therefore, due to this coarse-grained timeout setup, flow-table utilization is usually misinterpreted since some entries of inactive flows are still cached in the flow tables until their timers expire [8, 17], leading to flow-table bloat (detailed in Section 2.2).

To efficiently use limited flow-table space, OpenFlow 1.4 [6] allows a customized eviction mechanism that enables a switch to automatically eliminate entries of lower importance to make space for newer entries. However, we find that when a flow table is actually full, an eviction scheme (e.g., Least Recently Used, LRU) must kick out some entries that are currently used by active flows to accommodate new flows, resulting in actual flow-table overflow. This overflow phenomenon increases the burden of the controller, which has to repeatedly update flow tables, and degrades TCP performance by disrupting existing flows (detailed in Section 2.3). Additionally, since a switch lacks the global view of the network, evicting active flows based on its local information could incur more flow-table overflow.

Some schemes that proactively deploy flow entries in the switches before flows arrive are immune to the flow-table overflow problem. In order to achieve optimal network performance, such schemes require the knowledge of traffic distribution and characteristics in advance (e.g., flow bandwidth demand, flow duration) [7, 18, 19, 20]. However, many dynamic applications generate traffic that cannot be accurately predicted. For example, in on-demand applications (e.g., VoIP and video-on-demand), flows come and go unexpectedly, and their bandwidth requirements and durations cannot be known in advance [2, 3, 17]. Due to the randomness of such applications, a reactive scheme for flow-entry deployment is more suitable for achieving optimal network performance.

In this paper, we propose SofTware-defined Adaptive Routing (STAR), an adaptive routing scheme with a carefully designed admission control mechanism. The major contributions of this paper are summarized below:
1. We identify the flow-table bloat and flow-table overflow problems, and discuss the impact of flow-table overflow on the controller's workload and TCP performance based on experimental results.
2. We present STAR to explore effective flow-table management. STAR mitigates the flow-table bloat problem by intelligently evicting unused flow entries, and prevents the flow-table overflow problem by adaptively routing new flows to balance flow-table utilizations across the network.
3. We compare STAR with baseline schemes on the Spanish backbone network. Simulation results demonstrate that STAR achieves good performance: as flow-table overflow increases, STAR decreases the controller's workload by 87%-99%, reduces packet delay by 10%-72% (49% on average), and increases average throughput by 2%-242% (123% on average), compared with existing schemes.
The organization of the paper is as follows: Section 2 analyzes the problems of flow-table bloat and flow-table overflow. Section 3 details our proposed STAR, which mitigates the flow-table bloat problem and prevents the flow-table overflow problem. In Section 4, we evaluate STAR's performance against two baseline schemes. Section 5 reviews the related work. Section 6 concludes the paper.
Figure 1: Flow-table overflow increases the controller's routing workload. The rate of flow routing requests (requests/s) received by the controller versus flow data rate (Mbps) when a switch with 1,000 flow entries (a) behaves normally (850 concurrent flows) or (b) experiences flow-table overflow (1,100 concurrent flows).
2. Impact of Flow-table Bloat and Flow-table Overflow
In this section, we present two phenomena, flow-table bloat and flow-table overflow, and discuss their impact on network performance.
2.1. Flow-table Update Scheme

Typically, an entry, once installed in an OpenFlow switch's flow table, can be removed under one of the following three conditions: (1) hard-timeout, the entry is deleted after the timeout even if packets are still arriving for this flow; (2) idle-timeout, the entry is deleted if no packets arrive for this flow during the timeout period; and (3) deletion command, the switch deletes the entry upon receiving the controller's deletion command [14]. The first and third conditions are necessary to keep the states of OpenFlow switches consistent with the controller's state. The second condition saves precious flow-table space when an entry is no longer needed. When a switch whose flow table has reached capacity receives a new flow, it sends an error message to the controller and rejects the flow [14]. We call such a scheme flow admission control.
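The following minimal Python sketch illustrates the timeout-based removal conditions described above; the class and method names are hypothetical and only model the behavior, they are not part of any OpenFlow switch API.

```python
import time

class FlowEntry:
    """Toy model of an OpenFlow entry's timeout-based removal conditions."""

    def __init__(self, match, idle_timeout=10.0, hard_timeout=60.0):
        self.match = match
        self.idle_timeout = idle_timeout    # condition (2): no matching packets
        self.hard_timeout = hard_timeout    # condition (1): absolute lifetime
        now = time.time()
        self.installed_at = now
        self.last_hit = now

    def hit(self):
        """Called whenever a packet matches this entry."""
        self.last_hit = time.time()

    def should_expire(self, now=None):
        """True if the hard-timeout or idle-timeout has elapsed.
        Condition (3), an explicit deletion command from the controller,
        is handled separately by the switch agent."""
        now = time.time() if now is None else now
        return (now - self.installed_at >= self.hard_timeout
                or now - self.last_hit >= self.idle_timeout)
```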
2.2. Flow-table Bloat and Its Impact

In the OpenFlow specification, the two types of timeouts target different flows: (1) The hard-timeout is used to delete active entries of flows with long durations (e.g., 60 s). Internet traffic statistics show that about 20% of flows last longer than 10 minutes [21]. (2) The idle-timeout is typically set on the order of seconds (e.g., 1 s) to remove active entries of flows with short durations. However, such an idle-timeout setup is longer than most short flows' lifetimes [3, 6] and thus could lead to misinterpretation of flow-table utilizations [8]. As a result, flow tables could reach their capacities easily and become unavailable for new flows much earlier than they actually should, since a large number of entries of inactive flows are cached in the flow tables of different switches until their timeouts expire, wasting precious flow-table space for a long time. We call this flow-table bloat. If the controller uses a naive admission control scheme, where new flows are rejected when flow tables are full, the controller would make decisions based on the falsely interpreted flow-table utilizations and misleadingly reject many new flows to maintain the transmission of existing flows. In Section 4, we show that the naive admission control performs very poorly. Flow-table bloat can be mitigated by reducing the idle-timeout, but considering the finite processing ability of the CPU in current switches, a small timeout could significantly burden switches.

2.3. Flow-table Overflow and Its Impacts

OpenFlow 1.4 [6] adds another condition for deleting an entry: an entry is selected to be evicted when the flow table is overflowed. We call this flow-table overflow. Different from the three OpenFlow 1.3 conditions for flow removal, which occur quite frequently, this one should be avoided whenever possible since the eviction of the entry of an active flow leads to undesired phenomena and degrades network performance. We detail this below by analyzing three OPNET-based [22] experiments.
Figure 2: Flow-table overflow degrades the TCP performance (number of packets vs. time). (a) Topology setup: in Experiment 2, at 0.2 s, the switch evicts the entry of flow f1 (an active flow) to cater to flow f3 (the late-arriving flow). (b) Due to frequent flow-entry lookup misses, flow f1's congestion window size is frequently reduced by half. (c) Due to frequent flow-entry lookup misses, flow f2's congestion window size is frequently reduced by half. (d) Because of its late arrival and frequent flow-entry lookup misses, flow f3 does not have a chance to increase its congestion window to a large size.
2.3.1. Impact 1: Increasing the Controller's Routing Workload

We conducted Experiment 1 to simulate the impact of flow-table overflow on the controller's routing workload. Experiment 1 uses a simple topology, in which one switch connects to two hosts A and B. The switch's flow table has a capacity of 1,000 entries, and the switch uses the LRU entry replacement method upon overflow. Host A generates random TCP flows with an average connection time of 200 ms and a flow data rate varied from 1 to 2 Mbps. A TCP flow is simulated by defining a sequence of packets with the same source-destination addresses. TCP flows have random durations, and their arrival times follow a Poisson distribution. To avoid flows matching a previously installed flow-table entry at the switch, we uniformly and randomly select ports for TCP flows from the range 1 to 65,535. We use TCP Reno for congestion management and use a parameter to control the number of concurrent flows allowed in the system. In test (a), host A generates 850 concurrent flows. In test (b), host A generates 1,100 concurrent flows, 10% more than the flow-table capacity, which results in flow-table overflow.

Figure 1 shows the experiment results, depicting the relationship between the rate of flow routing requests generated by the switch and the flow data rate before and after flow-table overflow. In Figure 1(a), flow routing request rates are independent of flow data rates because a flow request is generated only when the first packet of each flow arrives. However, Figure 1(b) shows that flow request rates grow almost proportionally to flow data rates. This is because when flow-table overflow occurs, some entries that are currently being used by active flows need to be evicted. When future packets of those evicted flows arrive at the switch, the switch has to query the controller again to download those flows' entries.
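The thrashing behavior can be reproduced with a few lines of Python. The sketch below is an illustrative toy model (class and variable names are our own, not a switch API), assuming an LRU-managed table of 1,000 entries shared by 1,100 active flows; every round of packets then generates misses, so controller requests scale with the packet rate rather than with the flow arrival rate.

```python
from collections import OrderedDict

class LRUFlowTable:
    """Toy model of a flow table with LRU eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # flow_id -> entry, in LRU order
        self.controller_requests = 0   # routing requests sent to the controller

    def packet_arrival(self, flow_id):
        if flow_id in self.entries:
            self.entries.move_to_end(flow_id)   # table hit: refresh LRU position
            return "hit"
        # Table miss: the switch must ask the controller to (re)install the entry.
        self.controller_requests += 1
        if len(self.entries) >= self.capacity:
            # Evict the least recently used entry, possibly one still in use
            # by an active flow, which seeds the next miss.
            self.entries.popitem(last=False)
        self.entries[flow_id] = True
        return "miss"

table = LRUFlowTable(capacity=1000)
for _ in range(5):                 # five rounds of one packet per active flow
    for flow_id in range(1100):    # 1,100 active flows > 1,000 entries
        table.packet_arrival(flow_id)
print(table.controller_requests)   # 5500: every packet triggers a request
```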
Figure 3: Flow-table overflow degrades the TCP performance (number of packets vs. time). (a) Topology setup: flows f1 and f3 share port pt1's buffer, and at 0.3 s, the switch evicts active flow f1's entry to cater to the late-arriving flow f3. (b) As flow f1 experiences a flow-table lookup miss, its congestion window is halved, and its packets quickly fill the buffer of port pt1. (c) As flow f2 experiences a flow-table lookup miss, its congestion window is halved, and its packets quickly fill the buffer of port pt2. (d) As flow f3 experiences a flow-table lookup miss, the fully-loaded buffer of port pt1 blocks the transmission of the packets of flow f3 and limits flow f3's congestion window.
The reinstallation of those entries, however, will in turn evict other entries in the flow table, leading to a chain reaction of entry eviction and download. In our experiments, each switch can update every entry immediately. However, a recent measurement on OpenFlow switches shows that flow-entry insertion, deletion and modification exhibit unexpected delays [23], and most switches have a limited ability to update entries per second [24]. For example, today's hardware switches can update around 40 to 50 flow-table entries per second [25, 26]. Under such conditions, flow-table overflow can also create a bottleneck with respect to flow-entry updates on a switch and thus overwhelm the switch's CPU. Therefore, flow-table overflow could cause a surge of flow routing requests generated by the switch, which can overwhelm the controller.

2.3.2. Impact 2: Degrading TCP Performance

In Experiment 2, we studied the impact of flow-table overflow on flows' TCP throughput by simulating a scenario in which three flows compete for two entries in a flow table. In the switch, each port contains an input buffer that can store up to 100 packets, and each flow occupies 20% of the 1 Gbps link bandwidth. At 0 s, flows f1 and f2 arrive at the switch from ports pt1 and pt2, respectively. In this experiment, the three flows use three different buffers. At 0.2 s, the third flow, f3, arrives at the switch from port pt3, as shown in Figure 2(a). OPNET provides functions to observe and record the variation of TCP flows' buffers and congestion windows.
Figure 2(b) shows the experiment result for flow f1. When flow f1's entry is evicted, it experiences a flow-entry lookup miss, which lasts 5 ms on average, and the switch sends a new routing request for flow f1 to the controller. Before flow f1's entry is reinstalled, all subsequently arriving packets of f1 have to be buffered at port pt1. If flow f1's TCP window is large enough, these packets can quickly fill the switch's buffer and lead to packet loss. After flow f1's entry is reinstalled, the queued packets are drained out, and the sender senses the packet loss of flow f1 through duplicate acknowledgments. The sender then changes its state to the fast retransmission/recovery mode, which reduces flow f1's congestion window size by half. One flow-table lookup miss thus leads to a chain reaction: query for flow-entry redeployment, switch buffer overflow, packet loss, and congestion window reduction. Flows f2 and f3 experience a similar situation, and Figures 2(c) and 2(d) show results similar to those of flow f1. In this experiment, the network throughput is 356 Mbps, about 40% lower than the normal case, where the three flows can transfer up to 600 Mbps in a non-blocking network. In summary, flow-table overflow caused by flow-entry competition could frequently reduce a flow's congestion window, resulting in low TCP throughput and large packet delay.

In Experiment 3, we studied the impact of flow-table overflow on two correlated flows' TCP throughput. In Figure 3(a), flow f3 enters the switch from port pt1 at 0.3 s (in this experiment, the retransmission timeout is set to 0.2 s), and thus flows f1 and f3 share the buffer of port pt1 from 0.3 s onward. Figures 3(b), 3(c) and 3(d) show the experiment results. In particular, since flow f1 starts transmitting earlier than flow f3, its congestion window is larger than flow f3's. If flow f1 experiences a flow-table lookup miss, the buffer of port pt1 is quickly filled with flow f1's packets, and a whole window of flow f3's packets is blocked. As a consequence, flow f3 cannot be transmitted until its Retransmission Timeout (RTO) expires, leaving flow f3 with a small congestion window, as shown in Figure 3(d). Even worse, the small congestion window will, in turn, result in another transmission block for flow f3 when flow f1 experiences another flow-entry lookup miss, and so forth. Consequently, due to the constant influence of flow f1, flow f3's congestion window is kept at a small value, and so is its TCP throughput. Flow f2 experiences a situation similar to that of flow f1. Since the three flows always compete for two flow-entry slots, the entry of flow f2 at some point becomes the least recently used one and is removed to accommodate the newest flow. Flow f2's buffer then fills up and its packets start to be dropped. Since this entry eviction happens at a high frequency, the congestion window of flow f2 is halved at almost the same time as that of flow f1. Therefore, we can see that flow-table overflow may severely degrade the TCP performance when some flows share buffers.

3. Software-defined Adaptive Routing
In this section, we first introduce STAR, then explain the modules of STAR and analyze their complexity.

3.1. STAR Overview
Figure 4 shows the architecture of STAR, which is implemented by the controller and switches and is composed of three modules: Path-set Database Generation, Flow-table Management, and Routing Decision. The Path-set Database Generation module generates a set of candidate paths for each pair of source-destination edge switches in the network. The Flow-table Management module detects the actual flow-table utilizations of switches and effectively evicts entries using LRU. Upon receiving requests for routing new flows, the Routing Decision module determines whether to accept or reject the new flows based on flow-table usages periodically polled from the Flow-table Management module. If new flows are accepted, the module intelligently selects the routing paths for the flows from the corresponding path-sets to achieve network-wide flow-table utilization balancing.
Figure 4: Architecture of STAR. (0A) The Path-set Database Generation module sends path-set information to the Routing Decision module. (1A) The Routing Decision module pulls statistics from the switches' Flow-table Management module. (2A) The switches send requests for routing new flows to the controller. (2B) The selected paths for new flows are configured by reactively installing flow entries at the corresponding switches.
3.2. Path-set Database Generation Module

A network's topology does not change very frequently. To reduce the controller's workload for dynamically calculating routes, we can pre-generate a path-set PS,D for each source-destination edge switch pair (S, D). This module is only triggered during network initialization or after special network events that typically do not happen frequently, such as a change of topology or a link/node failure. The path-set for each pair of source-destination edge switches contains the top K shortest paths from the given source edge switch to the given destination edge switch, and each path is stored in the form of an IP address pair (the source edge switch's IP address and the destination edge switch's IP address). The lengths of the K selected paths can be different, and K is chosen based on the computation capability of the controller, i.e., such that the controller can compare K paths within a reasonable time. This module can be implemented by simply modifying the classical K equal-shortest-path selection method in [27]: a path's length is calculated as the number of hops between a source edge switch and a destination edge switch. If the number of shortest paths is less than K, we select the path with the minimum length from the remaining paths of the source-destination edge switch pair until the number of paths in the path-set reaches K.
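As an illustration, the following Python sketch pre-computes such a path-set for every switch pair using hop counts; it relies on the networkx library's shortest_simple_paths generator rather than the paper's K equal-shortest-path variant of [27], so it is an approximation of the module, not the authors' implementation.

```python
from itertools import islice
import networkx as nx

def build_path_sets(graph, k):
    """Pre-compute up to K hop-count-shortest loop-free paths for every
    ordered pair of edge switches (offline, at initialization time)."""
    path_sets = {}
    for src in graph.nodes:
        for dst in graph.nodes:
            if src == dst:
                continue
            # shortest_simple_paths yields simple paths in order of
            # increasing length; on an unweighted graph this is hop count.
            candidates = nx.shortest_simple_paths(graph, src, dst)
            path_sets[(src, dst)] = list(islice(candidates, k))
    return path_sets

# Small example on a 6-node ring topology
g = nx.cycle_graph(6)
path_sets = build_path_sets(g, k=3)
print(path_sets[(0, 3)])   # [[0, 1, 2, 3], [0, 5, 4, 3]]: only two simple paths exist
```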
3.3. Flow-table Management Module

This module operates at the switches and provides two functions: (1) effective entry management and (2) actual flow-table utilization detection. For the first function, we allow each switch to remove entries using LRU and the idle timeout to mitigate the impact of flow-table bloat. The second function helps the controller decide when to reject new flows. A simple method to achieve the second function is to let each switch count the number of active flows in its flow table. For instance, the controller installs two entries in the flow tables to respectively count the number of received SYN and FIN packets, and obtains the number of active flows from the difference between the SYN and FIN counts. However, this method may lead to miscounting when there are duplicate SYN/FIN packets. We propose a solution to this possible miscounting problem: in the flow table of each switch, every entry is associated with a binary flag whose values 0 and 1 indicate the entry's inactive and active states, respectively. Each switch keeps an active-flow counter that counts the number of entries with value 1. When a switch receives a command from the controller to install an entry in its flow table to establish a path for a new flow, it increases its active-flow counter by one, and the entry's binary flag is set to 1 by default. If a switch receives the last packet (FIN packet) of a flow, it changes the flag of the corresponding entry to 0 and decreases the active-flow counter by one. The entry itself is evicted later by LRU or when its timeout expires. In this way, the controller is able to track the actual flow-table utilizations by periodically pulling the active-flow counters from switches. With OpenFlow Extensible Match (OXM), supported in OpenFlow 1.2 and higher versions, OpenFlow can be extended to support matching on SYN and FIN header flags [8, 28].
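A minimal sketch of this counter logic is shown below; the class and method names are illustrative only (they do not correspond to an OpenFlow switch API), and the entry eviction itself is assumed to be handled by the LRU/timeout mechanism described above.

```python
class ActiveFlowMonitor:
    """Per-switch tracking of the actual flow-table utilization u_v."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.active_flag = {}     # flow_id -> 1 (active) or 0 (inactive)
        self.active_flows = 0     # the active-flow counter

    def on_entry_installed(self, flow_id):
        """Controller installs an entry for a new flow: flag defaults to 1."""
        self.active_flag[flow_id] = 1
        self.active_flows += 1

    def on_fin_packet(self, flow_id):
        """Last packet (FIN) of the flow seen: mark the entry inactive.
        The entry stays cached until LRU or its idle timeout removes it."""
        if self.active_flag.get(flow_id) == 1:
            self.active_flag[flow_id] = 0
            self.active_flows -= 1

    def on_entry_evicted(self, flow_id):
        """Entry removed by LRU or timeout: drop the bookkeeping state."""
        if self.active_flag.pop(flow_id, 0) == 1:
            self.active_flows -= 1

    def utilization(self):
        """Value reported when the controller polls this switch."""
        return self.active_flows / self.capacity
```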
Figure 5: Interaction of the Flow-table Management and Routing Decision modules. (1A) The actual flow-table utilization information is pulled from switches and sent to the Admission Control and Flow-table Utilization-aware Routing functions. (2A.1) Switches send requests for routing new flows to the Admission Control function. (2A.2) If new flows are accepted, the routing requests are further sent to the Flow-table Utilization-aware Routing function. (2B) Flow entries are installed at the corresponding switches to establish the chosen paths for new flows.
Some works design new ASICs that support software-defined counters [29] and customized match fields for OpenFlow, and flexibly parse packet headers [30, 31].
3.4. Routing Decision Module

This module resides in the controller and consists of two functions: (1) admission control and (2) flow-table utilization-aware routing. To prevent flow-table overflow, the usage of flow tables at the relevant switches is considered when determining whether to accept or reject new flows. If a new flow is accepted, the flow's path is further dynamically selected from the corresponding path-set generated in advance. After making the routing decision, the controller installs flow entries at the corresponding switches to establish the routing path. Figure 5 shows the interaction of the Flow-table Management and Routing Decision modules. We now provide a detailed description of these functions.
3.4.1. Admission Control

This function is used to prevent flow-table overflow and operates as follows: when the controller receives a request for routing a new flow, it first finds the corresponding path-set for the new flow by matching the new flow's source and destination IP addresses with the IP addresses of the path-sets. Limiting the problem space to the pre-generated path-set significantly reduces the controller's computation complexity for choosing an optimal path from all possible paths for each new flow. The controller then decides whether to accept or reject the new flow based on the flow-table utilizations of the switches associated with the selected path-set: if every path in the path-set contains a switch with a fully-loaded flow table, the new flow is rejected.

With admission control, some new flows may be rejected when flow tables reach their capacities, which might seem to hurt the network throughput. Surprisingly, this action actually improves the network throughput significantly. This is because, by preventing flow-table overflow, it keeps existing flows forwarded on their paths without frequent congestion window reductions. In turn, flow transmission accelerates and packet delay decreases, which further increases the number of accepted and completed flows and eventually improves the network throughput. In addition, this action also avoids frequent flow-table lookup misses and greatly reduces the controller's workload for routing new flows. The above analysis is demonstrated by our simulation results in Section 4.
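A compact sketch of this check, assuming the controller holds a utilization map polled from the switches (function and variable names are ours, for illustration only):

```python
def admit_new_flow(path_set, utilization):
    """Accept a new flow if at least one candidate path avoids every
    fully-loaded switch; reject it otherwise.

    path_set    -- list of candidate paths, each a list of switch ids
    utilization -- dict mapping switch id to its actual flow-table utilization
    """
    for path in path_set:
        if all(utilization[switch] < 1.0 for switch in path):
            return True   # this path still has spare entries at every switch
    return False          # every candidate path crosses a full flow table
```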
3.4.2. Flow-table Utilization-aware Routing

This function balances flow-table utilizations across the network by routing each new flow on a path that consists of switches with low flow-table utilizations. If a new flow is accepted, the controller calculates the path costs of the K paths in the path-set and selects the path with the minimum path cost for this new flow. The design of the path cost function takes two factors into account: (1) flow-table delay and (2) path length. First, the flow-table delay is formulated to mimic the link delay, where a link with higher utilization has a higher probability of experiencing congestion. Thus, switch v's flow-table delay is expressed as 1/(1 - u_v), where u_v denotes the flow-table utilization of switch v. Second, forwarding a flow on a longer path consumes more flow entries than on a shorter path, which could aggravate the insufficiency of flow-table capacity. Based on these two considerations, we define the path cost function for path p from switch S to switch D as follows:

$$\sum_{v_n \in p} \frac{1}{1 - u_{v_n}}, \qquad p \in P_{S,D}, \; 1 \le n \le N, \qquad (1)$$

where v_n is the n-th switch on path p from switch S to switch D, u_{v_n} is the flow-table utilization of switch v_n, P_{S,D} is the path-set from switch S to switch D, and N is the number of switches on path p.
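The following small Python sketch evaluates Eq. (1) and picks the minimum-cost feasible candidate; it is an illustration under the assumption that per-switch utilizations have already been polled, not the controller's actual code.

```python
def path_cost(path, utilization):
    """Path cost of Eq. (1): sum of per-switch flow-table delays 1/(1 - u_v)."""
    return sum(1.0 / (1.0 - utilization[switch]) for switch in path)

def select_path(path_set, utilization):
    """Among the K pre-computed candidates, return the feasible path with the
    minimum cost, or None if admission control would reject the flow."""
    feasible = [p for p in path_set if all(utilization[s] < 1.0 for s in p)]
    if not feasible:
        return None
    return min(feasible, key=lambda p: path_cost(p, utilization))

# Example: a loaded 2-hop path loses to a lightly loaded 3-hop path
u = {"s1": 0.2, "s2": 0.9, "s3": 0.3, "s4": 0.4}
print(select_path([["s1", "s2"], ["s1", "s3", "s4"]], u))   # ['s1', 's3', 's4']
```

Because the cost is a sum over the switches on a path, longer paths implicitly pay a higher cost, which reflects the second design factor (path length) without a separate penalty term.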
3.4.3. Selection of Polling Period

The polling function and its polling period impact the performance of STAR. The shorter the polling period, the more accurate the flow-table utilization acquired by STAR, and the better the resulting performance. However, it is not easy to achieve the optimal performance since the controller has to poll flow-table utilizations across the network in real time, and the polling period cannot be made infinitesimally small. We argue that the selection of the polling frequency is a trade-off between the performance of STAR and overhead such as the controller's control channel bandwidth usage. A moderate polling period (e.g., 100 ms) is good enough to detect the network variation since about 54% of flows last less than 100 ms [16]. If a network experiences a bursty traffic variation, the controller could make routing decisions based on inaccurate flow-table utilizations between two polls, resulting in temporarily non-optimal performance. However, after getting the accurate information from the next poll, the controller compensates for the short-term performance degradation in the following periods. In the long run, the overall performance remains at a good level.

3.5. Computational Complexity
The computational complexity of STAR depends on the complexity of its three modules. The first module is an offline module and is triggered only once during network initialization or after some infrequent network events. In the Flow-table Management module, the flow-table utilization detection is only conducted periodically, and its complexity comes from its two functions. For the effective entry management function, we use LRU and the idle timeout, and their complexity is O(1). For the actual flow-table utilization detection function, a switch incurs at most two actions, and its computation complexity is O(1). Therefore, the computation complexity of this module is O(1) * O(1) = O(1). The admission control function checks every switch in the path-set of each new flow. We use M to denote the number of switches on the longest path in the path-set. Since K is a given constant, its complexity is at most O(M). The flow-table utilization-aware routing compares the costs of the K paths, and thus its complexity is O(K), which is constant since K is fixed. Thus, the total computational complexity of STAR is polynomial: O(1) + O(M) + O(K) ≈ O(M).

4. Simulation
In this section, we evaluate the proposed STAR against baseline schemes.
Figure 6: Spain backbone topology.
4.1. Comparison Schemes

We compare STAR with two existing schemes. The classical Open Shortest Path First (OSPF) is used as the baseline routing scheme, and the number of hops is used for path calculation.

LRU+OSPF: new flows are routed on their shortest paths. Switches evict entries whose timeouts expire, and use LRU to evict the least recently used entries when they detect that their flow tables have reached capacity.

AC+OSPF: new flows are routed on their shortest paths, and the scheme prevents flow-table overflow using a naive Admission Control (AC): when a flow table is full, its switch rejects newly arrived flows.

STAR: new flows are routed with our proposed scheme STAR: we use timeouts and LRU to evict entries; new flows are routed or rejected based on the actual flow-table utilizations of switches and the network status, as detailed in Section 3.
4.2. Simulation Setup

For the performance study, we ran simulations on an OPNET-based [22] simulation platform with the Spanish backbone network shown in Figure 6. We compare STAR with the baseline schemes using five performance metrics: the controller's workload, end-to-end packet delay, average server throughput, and mouse and elephant flow completion percentages. For each source-destination edge switch pair, the Path-set Database Generation module generates K candidate paths to be considered for routing a new flow. In the simulation, the link capacity is 1 Gbps, and each node is an OpenFlow switch connected to 10 servers. Each switch is an input-buffered switch with a buffer size of 100 packets per port and connects to servers that generate two types of TCP flows: elephant and mouse flows. Similar to existing works [2, 32], each elephant flow's size is a random value between 10 MB and 20 MB, and its transmission rate can reach up to 200 Mbps; each mouse flow's size is a random value between 100 KB and 200 KB, and its rate cannot exceed 3.3 Mbps. The average ratio of mouse flows to elephant flows is 9:1 [17]. The size of each packet is fixed at 1,500 Bytes. Flow arrivals follow a Poisson distribution, generating 600-1,800 flows/s. The lifetime of each flow depends on the dynamic network conditions. All flows' destinations are randomly chosen. The controller takes 5 ms to process a flow routing query and pulls network statistics every 100 ms. The switch's flow-table capacity is limited to 1,000 flow entries, and each entry's hard and idle timeouts are 60 s and 10 s, respectively. We ran our simulation for 10 rounds to obtain the statistical trend.

4.3. Simulation Results

4.3.1. The controller's workload

Figure 7 shows the controller's workload for different schemes under different flow arrival rates. When the flow arrival rate is less than 350 flows/s, flow tables do not exceed their capacities. Thus, each scheme only generates about one request for routing each new flow on average.
Figure 7: The controller's workload (routing requests per flow) under different flow arrival rates (flows/s).

Figure 8: Average end-to-end packet delay (ms) under different flow arrival rates (flows/s).
When the flow arrival rate reaches 350 flows/s, flow-table overflow starts to occur. As the flow arrival rate increases, flow-table overflow becomes more serious since flow-table lookup misses and the number of bottleneck switches increase significantly. When the flow arrival rate reaches 550 flows/s, nearly all switches experience flow-table overflow. With LRU+OSPF, the number of routing requests per second generated to the controller is approximately the same as the arrival rate of new flows. AC+OSPF and STAR are equipped with AC to reject new flows when they detect flow-table overflow. Hence, they prevent flow-table overflow and do not impose a high workload on the controller.

4.3.2. Packet delay

Figure 8 shows the end-to-end packet delay of different schemes under different flow arrival rates. In Figure 8, LRU+OSPF's packet delay increases sharply as the flow arrival rate exceeds 350 flows/s. This is because LRU+OSPF experiences flow-table overflow under high traffic loads. With AC, AC+OSPF and STAR prevent flow-table overflow and frequent congestion window reductions, and hence maintain low packet delay. AC+OSPF achieves the lowest delay but does not achieve good overall performance: based on the misinterpreted flow-table utilizations, AC+OSPF over-rejects new flows, leading to a low completion rate of new flows. Figures 9 and 10 show the mouse flow and elephant flow completion percentages of different schemes under different flow arrival rates. We can see that AC+OSPF performs worst among all schemes. By considering the time-varying flow-table utilizations across the network, STAR makes better routing decisions and performs best under all flow arrival rates.
Figure 9: Mouse flow completion percentage (%) under different flow arrival rates (flows/s).

Figure 10: Elephant flow completion percentage (%) under different flow arrival rates (flows/s).
Recent network measurements show that about 95% of the traffic in current networks is carried by TCP flows [8, 33]. Hence, STAR is effective in most cases. For persistent TCP connections, STAR does not interrupt their transmission since their packets arrive very frequently.
4.3.3. Throughput

Figure 11 shows the average server throughput of different schemes under different flow arrival rates. For all schemes, the servers are unaware of the flow-table utilizations and the routing process, and continue injecting flows even when some flow tables are overflowed. Among all schemes, AC+OSPF performs worst under all flow arrival rates because its irrational flow rejection, resulting from flow-table bloat, greatly reduces the number of transmitted flows and thus keeps the server throughput very low, as discussed above. At the flow arrival rate of 350 flows/s, flow-table overflow occurs and LRU+OSPF reaches its performance peak. Due to the entry evicting/reloading chain reaction mentioned in Section 2.3.2, existing active flows suffer from throughput degradation. Therefore, as the flow arrival rate increases, more switches experience flow-table overflow, and the server throughput continues dropping. STAR combines the advantages of the two baseline schemes and achieves the highest average server throughput. When the flow arrival rate is low, STAR mitigates flow-table bloat by intelligently removing expired entries. At the rate of 350 flows/s, STAR's throughput is higher than LRU+OSPF's since it accepts more flows by considering network-wide flow-table utilization balancing to mitigate flow-table overflow.
Figure 11: Average server throughput under different flow arrival rates.
Figure 12: The impact of K (path number) on the controller's workload (routing requests per flow) at the rate of 400 flows/s. STAR− does not contain admission control.
As the flow arrival rate increases, flow-table overflow becomes more serious, and STAR prevents the transmission interruption of existing flows by adaptively employing admission control to reject new flows.
4.3.4. The impact of K (Path number) in the path-set

Figure 12 shows the impact of K on the controller's workload for routing new flows at the rate of 400 flows/s. STAR− denotes the STAR scheme without AC. For STAR−, when the flow arrival rate reaches 400 flows/s, some flow tables start to overflow. As K increases from 2 to 8, the number of routing requests for new flows decreases sharply. This is because, with a larger number of candidate paths, the controller considers more flow tables for routing new flows, and the negative impact of the entry evicting/reloading chain reaction is mitigated. However, an overly large K may lead to the acceptance of more flows, which offsets this benefit. Thus, when K changes from 8 to 16, the number of routing requests for new flows increases slightly. STAR employs AC to prevent flow-table overflow and maintains a low number of routing requests for new flows.

Figure 13 shows the impact of K on the average server throughput at the rate of 400 flows/s. At the flow arrival rate of 400 flows/s, STAR− reaches its highest throughput with K = 8. With this K configuration, STAR− accepts a moderate number of flows and achieves a trade-off between the number of new flows and the number of flow tables considered for routing them. When K is less than 8, STAR− cannot take enough flow tables into account for routing new flows; when K is larger than 8, STAR− accepts too many new flows. Both K configurations result in frequent entry evicting/reloading chain reactions.
Figure 13: The impact of K (path number) on average server throughput (Mbps) at the rate of 400 flows/s. STAR− does not contain admission control.
We can see that STAR maintains relatively high throughput regardless of the variation of K.

5. Related Work
Routing schemes for SDN: Some works propose to reduce the number of utilized flow entries with novel routing schemes for SDN [2, 3, 32]. Hedera [2] classifies all flows into mice and elephant flows, and reduces entries for mice flows with oblivious static routing. DevoFlow [3] employs clonable wildcard-match flow entries for mice flow routing. In [4], an optimization problem for offline routing is presented for hybrid SDN, and the demand matrix is assumed to be known. In [34, 35], the authors propose to reduce the high redundancy of flow tables introduced by OpenFlow by forwarding flows based on packet headers encapsulated with routing information. However, the flow-table constraint is not explicitly introduced in these articles.

Proactive rule placement: Palette [18] and One Big Switch [19] introduce table decomposition to obtain more equal flow-table utilizations across the network without violating the correctness of the original table functionality. In [36], Zhang et al. introduce a model that captures the rule dependency and accounts for the rule compression from different ingress switches to further reduce the number of required rules. In [37], it is proposed to intelligently compose and place rules to meet the constraints of flow-table space and network policies. vCRIB [38] balances the flow-table utilizations with field space partition and rule set allocation, and enables packet forwarding over longer paths. In [39], the authors formulate an offline routing optimization problem with a flow-table constraint, and further provide an approximate solution for the NP-hard problem. All the above works assume the traffic demand matrix is given. In contrast, STAR considers another scenario in which all flows enter the network randomly and their information is not known in advance. In [40], Nguyen et al. select the paths for flows to share some segments with the default path generated by a sequence of default rules on each node, and leverage default rules to forward flows to reduce flow-table requirements.

Reactive entry caching: Existing reactive entry caching schemes generally focus on the rule dependency problem. DIFANE [41] and Smart Rule Cache [42] resolve the dependency by generating new non-overlapped micro entries. CacheFlow [43, 44] generates covering rules to break the dependency chain, and saves flow-table space by redirecting some packets to software switches with abundant memory. CAB [45] mitigates the dependency issue by partitioning the matched field space into buckets and caching buckets along with all the associated entries. While existing works concentrate on efficiently managing the flow-table utilization of a single switch, STAR introduces an alternative solution, a network-wide optimization for all switches in the network based on dynamic traffic and real-time network status.
Dynamic Timeout: In [46, 15], the problem of flow-table management is tackled with dynamic timeouts for flow-entry eviction from a switch. The authors try to adaptively set a timeout for each flow based on the packet arrival frequency of the flow. However, these works cannot prevent flow-table overflow since the timeout setup is based on inaccurately predicted information. Zhu et al. [47] propose to dynamically adjust timeouts for flows in SDN-based data centers based on feedback information about flow-table occupation in switches. The scheme suffers from the inaccurate flow-table occupation resulting from flow-table bloat. Liang et al. [48] optimize the idle timeout value for instant messaging by balancing flow tables and controller processing resources in OpenFlow-based networks. However, their scheme may overload the control channel links between the controller and switches since it requires switches to generate a large number of packet-in events per second to indicate the controller processing resource cost.

Control Delegation: In [49], the authors propose an SDN-based framework for clouds that delegates some network controls to end users, enabling cloud users to monitor and configure their own slices of the underlying networks. In [50], the authors formulate a resource-management problem for multi-domain multi-provider SDN, which algebraically abstracts the consumable network resources, and solve the problem to allow SDN controllers or virtual domains to efficiently slice the network and delegate resources in domains to users.
6. Conclusion and Future Work
This paper presents STAR, an online adaptive routing scheme to engineer flow-table resources in SDN. The essence of STAR is to route TCP flows with consideration of the actual, time-varying flow-table occupancy to prevent the problems of flow-table bloat and flow-table overflow. Simulation results show that STAR simultaneously achieves a low controller workload, low packet delay, and high server throughput. In the future, we will improve STAR in the following aspects: (1) deploying STAR in a real test bed, (2) incorporating state-of-the-art SDN techniques, such as POF and P4, (3) considering multiple metrics (such as traffic load on links) to calculate paths so as to jointly achieve traffic balancing and flow-table utilization balancing, and (4) including other types of flows.
References
[1] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, J. Turner, Openflow: enabling innovation in campus networks, ACM SIGCOMM CCR 38 (2) (2008) 69–74.
[2] M. Al-Fares, S. Radhakrishnan, B. Raghavan, N. Huang, A. Vahdat, Hedera: Dynamic flow scheduling for data center networks, in: NSDI '10.
[3] A. R. Curtis, J. C. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, S. Banerjee, Devoflow: Scaling flow management for high-performance networks, in: ACM SIGCOMM CCR, Vol. 41, ACM, 2011, pp. 254–265.
[4] S. Agarwal, M. Kodialam, T. Lakshman, Traffic engineering in software defined networks, in: INFOCOM'13, IEEE, 2013, pp. 2211–2219.
[5] Z. Guo, M. Su, Y. Xu, Z. Duan, L. Wang, S. Hui, H. J. Chao, Improving the performance of load balancing in software-defined networks through load variance-based synchronization, Computer Networks 68 (2014) 95–109.
[6] O. N. Foundation, Openflow switch specification v1.4.0 (2013).
[7] Z. A. Qazi, C.-C. Tu, L. Chiang, R. Miao, V. Sekar, M. Yu, Simple-fying middlebox policy enforcement using sdn, in: ACM SIGCOMM'13, ACM, 2013, pp. 27–38.
[8] A. Zarek, Y. Ganjali, D. Lie, Openflow timeouts demystified, Univ. of Toronto, Toronto, Ontario, Canada.
[9] T. Benson, A. Akella, D. A. Maltz, Network traffic characteristics of data centers in the wild, in: Proceedings of the 10th ACM SIGCOMM conference on Internet measurement, ACM, 2010, pp. 267–280.
[10] G. Lu, C. Guo, Y. Li, Z. Zhou, T. Yuan, H. Wu, Y. Xiong, R. Gao, Y. Zhang, Serverswitch: A programmable and high performance platform for data center networks, in: NSDI, Vol. 11, 2011, pp. 2–2.
[11] M. Kobayashi, S. Seetharaman, G. Parulkar, G. Appenzeller, J. Little, J. Van Reijendam, P. Weissmann, N. McKeown, Maturing of openflow and software-defined networking through deployments, Computer Networks 61 (2014) 151–175.
[12] E. Spitznagel, D. Taylor, J. Turner, Packet classification using extended tcams, in: Network Protocols, 2003. Proceedings. 11th IEEE International Conference on, IEEE, 2003, pp. 120–131.
[13] http://www.pica8.com/pica8-deep-dive/sdn-system performance/, Sdn system performance (2012).
[14] O. N. Foundation, Openflow switch specification v1.3.0 (2012).
[15] A. Vishnoi, R. Poddar, V. Mann, S. Bhattacharya, Effective switch memory management in openflow networks, in: DEBS'14, ACM, 2014, pp. 177–188.
[16] S. Kandula, S. Sengupta, A. Greenberg, P. Patel, R. Chaiken, The nature of data center traffic: measurements & analysis, in: Proceedings of the 9th ACM SIGCOMM conference on Internet measurement conference, ACM, 2009, pp. 202–208.
[17] K.-c. Lan, J. Heidemann, A measurement study of correlations of internet flow characteristics, Computer Networks 50 (1) (2006) 46–62.
[18] Y. Kanizo, D. Hay, I. Keslassy, Palette: Distributing tables in software-defined networks, in: INFOCOM'13, IEEE, 2013, pp. 545–549.
[19] N. Kang, Z. Liu, J. Rexford, D. Walker, Optimizing the one big switch abstraction in software-defined networks, in: CoNEXT'13, ACM, 2013, pp. 13–24.
[20] X. N. Nguyen, D. Saucez, C. Barakat, T. Thierry, et al., Optimizing rules placement in openflow networks: trading routing for better efficiency, in: HotSDN'14, 2014.
[21] L. Quan, J. Heidemann, On the characteristics and reasons of long-lived internet flows, in: Proceedings of the 10th ACM SIGCOMM conference on Internet measurement, ACM, 2010, pp. 444–450.
[22] Opnet, https://www.opnet.com/.
[23] M. Kuzniar, P. Peresini, D. Kostic, What you need to know about sdn flow tables, in: Lecture Notes in Computer Science (LNCS), no. EPFL-CONF-204742, 2015.
[24] K. He, J. Khalid, A. Gember-Jacobson, S. Das, C. Prakash, A. Akella, L. E. Li, M. Thottan, Measuring control plane latency in sdn-enabled switches, in: Proceedings of the 1st ACM SIGCOMM Symposium on Software Defined Networking Research, ACM, 2015, p. 25.
[25] D. Y. Huang, K. Yocum, A. C. Snoeren, High-fidelity switch models for software-defined network emulation, in: Proceedings of the second ACM SIGCOMM workshop on Hot topics in software defined networking, ACM, 2013, pp. 43–48.
[26] X. Jin, H. H. Liu, R. Gandhi, S. Kandula, R. Mahajan, M. Zhang, J. Rexford, R. Wattenhofer, Dynamic scheduling of network updates, in: ACM SIGCOMM Computer Communication Review, Vol. 44, ACM, 2014, pp. 539–550.
[27] J. Y. Yen, Finding the k shortest loopless paths in a network, Management Science 17 (11) (1971) 712–716.
[28] O. N. Foundation, Openflow switch specification v1.2.0 (2011).
[29] J. C. Mogul, P. Congdon, Hey, you darned counters!: get off my asic!, in: Proceedings of the first workshop on Hot topics in software defined networks, ACM, 2012, pp. 25–30.
[30] P. Bosshart, G. Gibb, H.-S. Kim, G. Varghese, N. McKeown, M. Izzard, F. Mujica, M. Horowitz, Forwarding metamorphosis: Fast programmable match-action processing in hardware for sdn, in: ACM SIGCOMM Computer Communication Review, Vol. 43, ACM, 2013, pp. 99–110.
[31] P. Bosshart, D. Daly, G. Gibb, M. Izzard, N. McKeown, J. Rexford, C. Schlesinger, D. Talayco, A. Vahdat, G. Varghese, et al., P4: Programming protocol-independent packet processors, ACM SIGCOMM Computer Communication Review 44 (3) (2014) 87–95.
[32] A. R. Curtis, W. Kim, P. Yalagandula, Mahout: Low-overhead datacenter traffic management using end-host-based elephant detection, in: INFOCOM'11, IEEE, 2011, pp. 1629–1637.
[33] D. Lee, B. E. Carpenter, N. Brownlee, Observations of udp to tcp ratio and port numbers, in: Internet Monitoring and Protection (ICIMP), 2010 Fifth International Conference on, IEEE, 2010, pp. 99–104.
[34] M. Soliman, B. Nandy, I. Lambadaris, P. Ashwood-Smith, Exploring source routed forwarding in sdn-based wans, in: Communications (ICC), 2014 IEEE International Conference on, IEEE, 2014, pp. 3070–3075.
[35] Z. Guo, Y. Xu, M. Cello, J. Zhang, Z. Wang, M. Liu, H. J. Chao, Jumpflow: Reducing flow table usage in software-defined networks, Computer Networks 92 (2015) 300–315.
[36] S. Zhang, F. Ivancic, C. Lumezanu, Y. Yuan, A. Gupta, S. Malik, An adaptable rule placement for software-defined networks, in: Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, IEEE, 2014, pp. 88–99.
[37] N. Shelly, E. J. Jackson, T. Koponen, N. McKeown, J. Rajahalme, Flow caching for high entropy packet fields, in: Proceedings of the third workshop on Hot topics in software defined networking, ACM, 2014, pp. 151–156.
[38] M. Moshref, M. Yu, A. B. Sharma, R. Govindan, Scalable rule management for data centers, in: NSDI, 2013, pp. 157–170.
[39] R. Cohen, L. Lewin-Eytan, J. S. Naor, D. Raz, On the effect of forwarding table size on sdn network utilization, in: INFOCOM'14, IEEE, 2014, pp. 1734–1742.
[40] X.-N. Nguyen, D. Saucez, C. Barakat, T. Turletti, Officer: A general optimization framework for openflow rule allocation and endpoint policy enforcement, in: Computer Communications (INFOCOM), 2015 IEEE Conference on, IEEE, 2015, pp. 478–486.
[41] M. Yu, J. Rexford, M. J. Freedman, J. Wang, Scalable flow-based networking with difane, ACM SIGCOMM Computer Communication Review 41 (4) (2011) 351–362.
[42] Q. Dong, S. Banerjee, J. Wang, D. Agrawal, Wire speed packet classification without tcams: a few more registers (and a bit of logic) are enough, in: ACM SIGMETRICS Performance Evaluation Review, Vol. 35, ACM, 2007, pp. 253–264.
[43] N. Katta, O. Alipourfard, J. Rexford, D. Walker, Infinite cacheflow in software-defined networks, in: Proceedings of the third workshop on Hot topics in software defined networking, ACM, 2014, pp. 175–180.
[44] N. Katta, O. Alipourfard, J. Rexford, D. Walker, Cacheflow: Dependency-aware rule-caching for software-defined networks, in: Proceedings of the Symposium on SDN Research, ACM, 2016, p. 6.
[45] B. Yan, Y. Xu, H. Xing, K. Xi, H. J. Chao, Cab: a reactive wildcard rule caching system for software-defined networks, in: Proceedings of the third workshop on Hot topics in software defined networking, ACM, 2014, pp. 163–168.
[46] K. Kannan, S. Banerjee, Flowmaster: Early eviction of dead flow on sdn switches, in: Distributed Computing and Networking, Springer, 2014, pp. 484–498.
[47] H. Zhu, H. Fan, X. Luo, Y. Jin, Intelligent timeout master: Dynamic timeout for sdn-based data centers, in: Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, IEEE, 2015, pp. 734–737.
[48] H. Liang, P. Hong, J. Li, D. Ni, Effective idle timeout value for instant messaging in software defined networks, in: Communication Workshop (ICCW), 2015 IEEE International Conference on, IEEE, 2015, pp. 352–356.
[49] M. S. Malik, M. Montanari, J. H. Huh, R. B. Bobba, R. H. Campbell, Towards sdn enabled network control delegation in clouds, in: Dependable Systems and Networks (DSN), 2013 43rd Annual IEEE/IFIP International Conference on, IEEE, 2013, pp. 1–6.
[50] I. Baldin, S. Huang, R. Gopidi, A resource delegation framework for software defined networks, in: Proceedings of the third workshop on Hot topics in software defined networking, ACM, 2014, pp. 49–54.
Zehua Guo is a Research Manager at ChinaCache. He received his B.S. from Northwestern Polytechnical University in 2007, his M.S. from Xidian University in 2010, and his Ph.D. from Northwestern Polytechnical University in 2014. He was a Visiting Scholar in the Department of Electrical and Computer Engineering at New York University Polytechnic School of Engineering from 2011 to 2014. He was a Research Fellow at Northwestern Polytechnical University in 2015. His research interests include software-defined networking, data center networks, cloud computing, network virtualization, network resilience, high speed networks, and signal analysis.
Ruoyan Liu is a software engineer at Amazon. He received his B.S. from Xidian University in 2012 and his M.S. from New York University Polytechnic School of Engineering in 2015. He was a Visiting Scholar at the University of Otago in 2013. His research interests include data center networks and software-defined networking.
Yang Xu is a Research Associate Professor in the Department of Electrical and Computer Engineering at New York University Tandon School of Engineering, where his research interests include software-defined networks, data center networks, network on chip, and high speed network security. From 2007 to 2008, he was a Visiting Assistant Professor at NYU-Poly. Prior to that, he completed a Ph.D. in Computer Science and Technology at Tsinghua University, China, in 2007. He received the Master of Science degree in Computer Science and Technology from Tsinghua University in 2003 and the Bachelor of Engineering degree from Beijing University of Posts and Telecommunications in 2001.

Andrey Gushchin is a Ph.D. student at Cornell University. He received the B.S. degree and the M.S. degree in Applied Mathematics from the Moscow Institute of Physics and Technology, Russia, in 2008 and 2010, respectively. His research interests include networking, neuroscience, optimization, and data mining.
Anwar Walid is head of the Math of Systems Department at Bell Labs, Nokia (Murray Hill, N.J.) and a Distinguished Member of Technical Staff. He also served as Director of Research Partnership and University Collaborations, Bell Labs Chief Scientist Office. He received the B.S. degree in Electrical and Computer Engineering from Polytechnic of New York University, and the Ph.D. in Electrical Engineering from Columbia University, New York. He has 14 patents granted and more than 10 pending on various aspects of networking and computing. He received Best Paper Award from ACM SIGMETRICS, IFIP Performance, and the IEEE LANMAN. He contributed to the Internet Engineering Task Force (IETF) and co-authored RFCs. He is an associate editor of IEEE/ACM Transactions on Cloud Computing and IEEE Network Magazine, and was associate editor of the IEEE/ACM Transactions on Networking (2009-2014). He served as Technical Program Co-chair of IEEE INFOCOM 2012. Since 2009, he has been an adjunct Professor at Columbia University Electrical Engineering department. Dr. Walid is an IEEE Fellow and an elected member of the IFIP Working Group 7.3.
H. Jonathan Chao is a Professor of Electrical and Computer Engineering at New York University Tandon School of Engineering. He is a Fellow of the IEEE for his contributions to the architecture and application of VLSI circuits in high-speed packet networks. He holds 46 patents with 11 pending and has published more than 200 journal and conference papers. He has also served as a consultant for various companies, such as Huawei, Lucent, NEC, and Telcordia, and an expert witness for several patent litigation cases. His research interests include high-speed networking, data center network designs, terabit switches/routers, network security, network on chip, and medical devices. He received his B.S. and M.S. degrees in electrical engineering from National Chiao Tung University, Taiwan, and his Ph.D. degree in electrical engineering from Ohio State University.