Accepted Manuscript

Enhancing NDN feasibility via Dedicated Routing and Caching
Dohyung Kim, Younghoon Kim

PII: S1389-1286(17)30292-X
DOI: 10.1016/j.comnet.2017.07.011
Reference: COMPNW 6266

To appear in: Computer Networks

Received date: 14 February 2017
Revised date: 5 July 2017
Accepted date: 20 July 2017

Please cite this article as: Dohyung Kim, Younghoon Kim, Enhancing NDN feasibility via Dedicated Routing and Caching, Computer Networks (2017), doi: 10.1016/j.comnet.2017.07.011
Enhancing NDN feasibility via Dedicated Routing and Caching

Dohyung Kim, Younghoon Kim
Department of Computer Engineering, Sungkyunkwan University, Suwon, South Korea
Abstract
Named-Data Networking (NDN) is a new networking regime for efficient content distribution. NDN employs name-based access and in-network caching to provide easy access to content. For that, NDN routers perform name-based routing, packet-level caching, and cache look-up, but these operations impose huge overhead on the routers. To address this feasibility issue of NDN routers, in this paper we propose an alternative architecture in which caching and routing are separated. Routers are dedicated to packet routing, while a cache server installed in every domain takes charge of in-network caching. Requests that have already visited the cache server are marked and forwarded without further cache-lookups within the domain. This approach fundamentally relieves routers of their heavy burden. Additionally, it introduces the following benefits: 1) a huge amount of caching resource is saved; 2) the effectiveness of in-network caching is improved; 3) the access delay of time-sensitive traffic is minimized. In the proposed architecture, cache look-ups are performed via reserved links between the cache server and routers; therefore, it is important to place the cache server with installation cost in mind. We address how to place the cache server at minimum cost, prove the problem NP-complete, and present an approximation algorithm. Our simulation study shows that the proposed scheme reduces caching operations by up to 80% while achieving the same caching benefit with only 40% of the caching space.

Keywords: Information-centric networking, Named-data networking, NDN router, router overhead

1. Introduction
As video traffic explosively increases, Internet links are highly congested by the massive distribution and replication of content [1]. The Information-Centric Networking (ICN) paradigm [2, 3, 4, 5] has recently emerged and has been actively researched to resolve this problem at its root. Two distinctive features of ICN are name-based content access and in-network caching. Content is addressed by content names instead of location identifiers such as IP addresses, which enables content to be retrieved from any node holding a copy of it. Among several ICN architectures, Named-Data Networking (NDN) [2] is being developed as a potential candidate for the future Internet. In NDN, copies of content are stored at the caches of routers while the content is relayed; hence, future requests for the content can be served from NDN routers instead of the original content provider.

In order to implement these properties, NDN routers are required to perform cache-lookup, name-based routing, and packet-level caching for every content item. Since these operations demand a huge amount of computation, the feasibility of NDN routers is continuously being questioned. While the IP address space contains on the order of billions of addresses, the address space of NDN includes at least one trillion content names. This enlargement of the address space requires far more routing state to be stored at routers, which incurs additional processing overhead. Fortunately, with efficient name look-up algorithms [6, 7], the forwarding plane
supports up to 20 Gbps of real ICN traffic. However, if packet-level caching and cache-lookup are implemented at routers, significant overhead is inevitable. According to [8], more than 45% of CPU clocks are consumed by caching operations in a state-of-the-art NDN router. If security mechanisms are additionally employed to preserve cache integrity, an NDN router itself becomes a network bottleneck¹.

In this paper, we propose a fundamental solution that relieves NDN routers of this huge computational overhead. In the proposed scheme, caching functionality is separated from NDN routers and delegated to an independent, shared cache server. A subset of the NDN routers is connected to the cache server so that all traffic can enjoy the benefit of in-network caching. A packet should not visit the same cache server redundantly; for that, each packet specifies the cache server it last visited. Since the cache server takes full charge of in-network caching, NDN routers are dedicated to routing and forwarding. With a single cache-lookup per domain, a huge amount of resources for caching operations is saved.

The most concerning issue in the proposed scheme is the communication cost between the NDN routers and the cache server. However, this cost is compensated by the decrease in inter-domain traffic². According to our simulation study, the cache server shows a much higher cache-hit ratio than pervasive caching in NDN, since a huge amount of redundant content is eliminated from the caching space. As more requests are served from the cache server, less inter-domain traffic is generated. Based on the arguments of [10, 11, 12] that "the primary goal of an ISP is to minimize the cost associated with external links' utilization", the communication cost is considered acceptable.

Via quantitative analysis, it is shown that real-time traffic has a higher chance of being delivered in time under the proposed scheme.
In the conventional NDN architecture, each interest must wait in the queue until the operations for the preceding interests complete; therefore, real-time traffic can be delayed by caching operations. In the proposed scheme, however, every packet is simply forwarded to either the next-hop router or the cache server, and high-priority traffic that is largely irrelevant to in-network caching is instantly relayed to the next hop. This property is of great significance in upcoming 5G networks, where round-trip times of less than 1 ms must be guaranteed.

The installation cost of the proposed architecture is minimized when the cache server is connected to a minimum subset of the NDN routers, under the constraint that every packet visits the cache server. We formulate the problem of locating the cache server with minimum cost, prove it NP-complete, and present an approximation algorithm. The effectiveness of the cache is also studied: mathematical analysis shows that the proposed scheme achieves the same in-network cache-hit ratio³ with a much smaller cache.

The rest of this paper is organized as follows. In Section 2, we introduce related work on optimizing ICN performance. Section 3 describes how the proposed scheme works and how to install the cache server. The performance gains in terms of queuing delay and CPU clocks are quantitatively analyzed in Section 4. In Section 5, we validate the proposed architecture via a simulation study. Finally, we discuss several deployment issues in Section 6 and conclude the paper in Section 7.

¹ According to [9], a router with an Intel Core 2 Duo 2.53 GHz CPU can achieve a throughput of only 150 Mbps, even when the most convenient RSA public exponent (3) is used for signature verification.
² Inter-domain traffic refers to traffic that is forwarded to other domains via external links.
³ The in-network cache-hit ratio indicates the proportion of content served from any network cache among all received content.
2. Related Work
In NDN, communication is initiated by issuing a request packet, called an 'Interest'. The interest specifies neither where to go nor where it comes from; instead, it carries the target content name. When NDN routers receive an interest, they first search for the target content in their local cache, called the Content Store (CS).
Figure 1: Content name hierarchy. An NDN name such as /ndn/youtube/movies/clip/v_s consists of an app-supplied name followed by version and segment components.
If the content is found, it is directly served by the NDN router without further propagation of the interest. When the content does not exist in the cache, NDN routers look up the Pending Interest Table (PIT). In the PIT, interests are stored together with their incoming faces, which creates reverse paths back to the users for responses. If an interest for the identical content is already found in the PIT, NDN routers add the incoming face of the new interest to the existing PIT entry and discard the interest. Otherwise, they look up the routing table, called the Forwarding Information Base (FIB), and forward the interest. Since content names have a hierarchical structure, as shown in Fig. 1, the outgoing face is determined by longest-prefix matching on the FIB. When content is delivered to the users, it is stored in the CS of every router on the path by default (pervasive caching). To preserve the integrity of the CS, NDN routers verify content before storing it.

All of these forwarding and caching operations drain a huge amount of computational power. Even though a lot of research has been done to optimize each functionality, these efforts are not fundamental solutions. Efficient name look-up algorithms were designed in [6, 7] for ICN routers. With a well-designed hash function, the authors of [6] implemented an ICN router that can forward real ICN traffic at around 20 Gbps. In [7], a new data structure called the prefix Bloom filter (PBF) was introduced to accelerate name matching; upon the PBF, the authors designed Caesar, which sustains up to 10 Gbps of traffic with 10 million content prefixes. However, caching overhead, which consumes much more system resource, is out of scope in these works.
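To make the CS → PIT → FIB pipeline concrete, the following Python sketch processes an interest the way the text describes; the class and field names are ours, and the tables are plain dictionaries, so this illustrates the logic rather than any actual NDN implementation.

```python
# Minimal sketch of NDN interest processing (CS -> PIT -> FIB).
# Names and structures are illustrative, not taken from an NDN codebase.

class Router:
    def __init__(self, fib):
        self.cs = {}    # Content Store: name -> data
        self.pit = {}   # PIT: name -> set of incoming faces
        self.fib = fib  # FIB: name prefix -> outgoing face

    def on_interest(self, name, in_face):
        # 1. Content Store: serve directly on a cache hit.
        if name in self.cs:
            return ("data", self.cs[name], in_face)
        # 2. PIT: aggregate duplicate interests for the same content.
        if name in self.pit:
            self.pit[name].add(in_face)
            return ("aggregated", None, None)
        # 3. FIB: longest-prefix match over the hierarchical name components.
        self.pit[name] = {in_face}
        components = name.strip("/").split("/")
        for i in range(len(components), 0, -1):
            prefix = "/" + "/".join(components[:i])
            if prefix in self.fib:
                return ("forward", name, self.fib[prefix])
        return ("drop", None, None)

r = Router(fib={"/ndn/youtube": 2})
print(r.on_interest("/ndn/youtube/movies/clip/v_s", in_face=1))  # forwarded to face 2
```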
The idea of using dedicated cache servers has already been introduced in the existing literature, for example as a proxy server in IP networks and as a resolution handler in the Data-Oriented Network Architecture (DONA). With a proxy server in IP networks, a request is first forwarded to the pre-configured proxy server; if the target content is not found there, the request is delivered to the content source. This approach corresponds to edge caching in terms of the number of cache-lookups, and, as mentioned above, it has limited caching benefit compared to the domain-level pervasive caching of the proposed scheme. DONA is an ICN proposal in which a request is routed by content name via resolution handlers (RHs). Since RHs are organized in a hierarchical structure, a request that is not handled at an RH is forwarded to the parent RHs until it reaches a copy of the content. An RH employs a cache to provide fast and efficient content dissemination. However, RHs have to take care of routing as well as caching, like conventional NDN routers, which implies that RHs suffer from exactly the same problem that we focus on in this paper.

In [13], edge caching was discussed for incremental deployment of ICN. This approach rests on the same argument as our work: the significant complexity of in-network caching lowers the feasibility of ICN. The authors of [13] reported that only a 10% caching-performance degradation was observed with edge caching. According to [14], however, this result misses the dependency between caching and routing, and caching performance improves with the proper combination of routing and in-network caching policy.
3. The Proposed Scheme
In order to design a feasible system, we insist that NDN routers be relieved of excessive work. In our model, therefore, NDN routers are dedicated to routing functionality only, and in-network caching is assigned to an independent caching node.

3.1. Separation of routing and caching

To implement the proposed model, we put shared cache servers within the domain. Depending on the size of the domain, one or more cache servers are located within it. In Fig. 2(a), a single cache server is installed for nine NDN routers.
Consider the case where an interest packet traverses from the edge router (ER) ER_1 to ER_2 via R1 and R2. In the proposed scheme, routers that are directly connected to the cache server keep the unique ID of the cache server, and each packet carries a field (Sid) specifying the ID of the cache server it last visited. When the interest arrives at router R1, R1 matches the value of the Sid field in the interest against the ID of its registered cache server. Since they differ, R1 forwards the interest to the cache server first. If the cache server has the target content, R1 receives the content and relays it back to the user by consuming the PIT entry. Otherwise, R1 gets back a Negative Acknowledgement (NACK) packet from the cache server; R1 then updates the value of the Sid field in the interest and forwards it to R2. Upon receipt of the interest, R2 first looks at the Sid field and sees that the packet has already visited the very cache server R2 is registered at. R2 therefore skips the cache look-up and forwards the interest to ER_2.

When the content arrives at R2 from ER_2 (refer to Fig. 2(b)), R2 checks the Sid field of the packet to make sure that its value differs from R2's registered cache server, B. Then R2 does two things: 1) it changes the value of Sid to B and forwards the content to R1; and 2) it delivers the content to the cache server. Here, it is tricky to route the response to the cache server, since intermediate routers may not have a matching PIT entry. To resolve the problem, we use tunneling: R2 encapsulates the response with a well-known name, and every router in the domain holds a persistent interest⁴ for that name in its PIT. By means of the persistent interest, the response packet can be routed to the cache server, where it is decapsulated and stored under its original name.

In the basic NDN architecture, caches along this path are accessed four times to look for the content and four times to store the newly arriving content. Under the proposed model, however, the cache server is accessed once by an interest and once by a response, which lessens the burden on routers and minimizes the waste of computing power on cache operations.

⁴ The persistent interest was introduced to make a channel for media streams in NDN [15]. In contrast to a plain interest, a persistent interest is not deleted after a matching data packet is forwarded.
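The Sid-based decision at a router connected to a cache server can be summarized in a few lines. The sketch below assumes our own packet layout (a dict with name and sid fields) and helper names, since the paper does not define a concrete API.

```python
# Sketch of interest handling at a router registered to cache server `my_sid`.
# The Sid field records the cache server a packet last visited; all identifiers
# here (handle_interest, lookup, forward) are illustrative assumptions.

CACHE_MISS = None   # stands in for the NACK returned by the cache server

class CacheServer:
    def __init__(self, sid, store):
        self.sid, self.store = sid, store
    def lookup(self, name):
        return self.store.get(name, CACHE_MISS)

def handle_interest(interest, my_sid, cache_server, forward):
    if interest["sid"] == my_sid:
        return forward(interest)            # already checked this cache server
    hit = cache_server.lookup(interest["name"])
    if hit is not CACHE_MISS:
        return ("data", hit)                # served from the cache server
    interest["sid"] = my_sid                # NACK case: mark, then keep forwarding
    return forward(interest)

server = CacheServer("B", {"/ndn/youtube/movies/clip": b"..."})
print(handle_interest({"name": "/a/b", "sid": None}, "B", server,
                      lambda i: ("forwarded", i["sid"])))  # miss -> marked with "B"
```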
Figure 2: The proposed scheme, on a domain with routers R1–R5, edge routers ER_1–ER_4, and cache server B. (a) Interest forwarding; (b) Response forwarding.

3.2. Cache Server Installation

As described in Fig. 2(a), a cache server is shared by multiple routers within an Autonomous System (AS). In order for all ingress flows to enjoy the caching benefit, the following axiom should be satisfied.

Axiom 1. At least one router on the shortest path between any pair of edge routers should be connected to the cache server.

The installation cost of the proposed system is proportional to the number of links between the cache server and the NDN routers. Therefore, it is important to choose a minimal subset of NDN routers that satisfies Axiom 1. In our previous example, shown in Fig. 2(a), (R2, R3, R4) is a feasible subset.

We formulate the problem on an arbitrary topology G = (V, E), where V = {v_1, v_2, ..., v_m} denotes the set of NDN routers and E = {e_1, e_2, ..., e_n} the set of links.
Let P = {p_1, p_2, ..., p_l} be the set of routing paths between all pairs of edge routers. Each p_i can be represented as a subset of V or a subset of E; here, we denote p_i by a subset of V. The solution set Ω for Axiom 1 is a subset of V that satisfies

\[ \Omega \cap p_i \neq \emptyset, \quad \forall p_i \in P \tag{1} \]

Finding the minimal Ω corresponds exactly to the Set Cover Problem [16, 17], which is well known to be NP-complete. Hence, we present an approximation based on a greedy algorithm. To obtain a small Ω, it is advantageous to include first the NDN routers that appear most frequently in the paths of P. Based on this observation, we design Algorithm 1 to minimize the installation cost.

Algorithm 1 Building a minimal set
1: Inputs:
       V ← {v_1, v_2, ..., v_m}
       P ← {p_1, p_2, ..., p_l}
2:
3: Initialize:
       c(v_i) ← 0, ∀i ∈ [1, m]
       Ω ← ∅
4:
5: while P ≠ ∅ do
6:     COUNTNODES()
7:     x ← v_i with the largest value of c(v_i), v_i ∈ V
8:     Ω ← Ω ∪ {x}
9:     V ← V − {x}
10:    for p_i ∈ P do
11:        if x ∈ p_i then
12:            P ← P − {p_i}
13:        end if
14:    end for
15: end while
16:
17: Returns: Ω
18:
19: procedure COUNTNODES()
20:    c(v_i) ← 0, ∀i ∈ [1, m]
21:    for p_i ∈ P do
22:        for each node v ∈ p_i do
23:            c(v) ← c(v) + 1
24:        end for
25:    end for
26: end procedure

In the example of Fig. 2(a), P = {L12, L13, L14, L23, L24, L34}, where Lij denotes the shortest path between edge routers i and j. L12, L13, L24 and L34 correspond to the unique shortest paths (ER1, R1, R2, ER2), (ER1, R3, ER3), (ER2, R2, R5, ER4) and (ER3, R4, R5, ER4), respectively. For L14, both (ER1, R1, R2, R5, ER4) and (ER1, R3, R4, R5, ER4) are possible routing paths; similarly, L23 is either (ER2, R2, R1, R3, ER3) or (ER2, R2, R5, R4, ER3). If we take L14 = (ER1, R1, R2, R5, ER4) and L23 = (ER2, R2, R1, R3, ER3), our algorithm first picks R2, after which L12, L14, L23 and L24 are removed from P. The remaining paths are L13 and L34: R3 is chosen to cover L13, and either R4 or R5 to cover L34. As a result, the possible solutions are (R2, R3, R4) and (R2, R3, R5).

Due to physical limitations, NDN routers can rarely be connected to the cache server directly; rather, their connections are composed of private lines across multiple hops, so the installation cost of each NDN router varies with its location. To reflect these differing installation costs in Algorithm 1, a cost variable w_i is assigned to each NDN router v_i. The problem is then formulated as

\[ \text{minimize} \sum_{i:\, v_i \in \Omega} w_i \quad \text{subject to} \quad \Omega \cap p_j \neq \emptyset,\ \forall p_j \in P \tag{2} \]
This is also a variation of the Set Cover Problem, and we solve it in a similar way: Algorithm 1 is modified as shown in Fig. 3. The rationale behind the modified algorithm is to pick the NDN router with the smallest ratio of its weight w_i to its frequency c(v_i) over the paths in P. The time complexity of Algorithm 1 is O(mn).

Figure 3: Modification of Algorithm 1 to reflect cost. Line 1 additionally takes W ← {w_1, w_2, ..., w_m} as input; line 7 is replaced by "x ← v_i with the smallest value of w_i / c(v_i), v_i ∈ V"; and a line 13 is added: "c(v_i) ← c(v_i) − 1, v_i ≠ x and ∀v_i ∈ p_i".
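For reference, here is a minimal Python sketch of the weighted greedy selection (Algorithm 1 with the Fig. 3 rule), assuming paths are given as sets of router IDs; tie-breaking is arbitrary, so the output may be any of the minimal-cost feasible subsets.

```python
# Greedy approximation for the weighted set cover of Eq. (2):
# repeatedly pick the router minimizing weight / number of uncovered paths.

def place_cache_links(paths, weights):
    """paths: list of sets of router IDs; weights: dict router -> cost."""
    uncovered = [set(p) for p in paths]
    omega = set()
    while uncovered:
        counts = {}
        for p in uncovered:
            for v in p:
                counts[v] = counts.get(v, 0) + 1
        # Router with the smallest ratio w_i / c(v_i)  (Fig. 3 rule).
        x = min(counts, key=lambda v: weights.get(v, 1.0) / counts[v])
        omega.add(x)
        uncovered = [p for p in uncovered if x not in p]
    return omega

# Fig. 2(a) example: router-only shortest paths between the four edge routers.
P = [{"R1", "R2"}, {"R3"}, {"R1", "R2", "R5"},
     {"R2", "R1", "R3"}, {"R2", "R5"}, {"R4", "R5"}]
print(place_cache_links(P, {v: 1.0 for v in ["R1", "R2", "R3", "R4", "R5"]}))
# With unit weights this reduces to Algorithm 1 and yields, e.g., {'R2','R3','R4'}.
```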
3.3. Cache Size

Generally, communication across domains is much more expensive than intra-domain communication. In order to avoid an additional budget for inter-domain traffic, Axiom 2 needs to be addressed.

Axiom 2. The cache-hit ratio at the cache server should be larger than or equal to that of pervasive caching.

Figure 4: Linear topology. (a) A cascade of multiple caches (1), (2), ..., (N−1), (N); (b) a single cache server.

In order to figure out the minimum cache size that satisfies Axiom 2, we look at the amount of redundant packets in a cascade of N caches (Fig. 4(a)). If a request advances from the j-th cache to the next cache, its corresponding data is served either from a network cache further along the path or from the content source; since that data is then also stored at the (j+1)-th cache, the j-th cache ends up holding a redundant copy. From this observation, the amount of redundant content for content i in a cascade of N caches, R(i), is calculated as

\[ R(i) = \sum_{j=2}^{N} \Bigl( \prod_{k=1}^{j-1} M(i,k) \Bigr) H(i,j)\,(j-1) \;+\; \Bigl( \prod_{j=1}^{N} M(i,j) \Bigr)(N-1) \tag{3} \]

where H(i,j) and M(i,j) denote the cache-hit and cache-miss ratios of content i at the j-th cache, respectively. Then the average number of redundant content copies in the network caches, E[R], is

\[ E[R] = \sum_{\forall i} q(i)\,R(i) \tag{4} \]

where q(i) signifies the probability that an arriving request is for content i.

Here, we assume that content popularity follows a Zipf distribution: all content is ranked in order of popularity, and content i is the i-th most popular. Due to the "filtering effect" [18, 19], the popularity distribution of arriving requests changes at each cache. The conditional probability that a request arriving at the j-th cache is for content i, Q(i,j), is

\[ Q(i,j) = \begin{cases} q(i), & j = 1 \\[4pt] \dfrac{q(i)\prod_{k=1}^{j-1} M(i,k)}{\sum_{\forall k} q(k)\prod_{l=1}^{j-1} M(k,l)}, & 2 \le j \le N \end{cases} \tag{5} \]

Given Q(i,j), the cache-hit ratio of content i at the j-th cache, H(i,j), is calculated using the Che approximation [20] as

\[ H(i,j) = 1 - e^{-Q(i,j)\,t_C(j)} \tag{6} \]

where t_C(j) is the solution x of

\[ C = \sum_{\forall i} \bigl(1 - e^{-Q(i,j)\,x}\bigr) \tag{7} \]

and C is the cache size of a single ICN router.

The value of E[R] is depicted in Fig. 5(a) as the number of caches N in Fig. 4(a) is varied. As more caches are connected, more redundant packets are stored in them; as a result, the effective caching space increases only minimally, as shown in Fig. 5(b). (The effective caching space denotes the overall cache size divided by (1 + E[R]).) Due to this redundancy of pervasive caching, a caching space of 2C at the single cache server (Fig. 4(b)) is enough to guarantee Axiom 2.
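Equations (5)–(7) have no closed form but are easy to evaluate numerically. The sketch below solves Eq. (7) by bisection and propagates the filtered miss stream cache by cache; the catalogue size, Zipf parameters, and bisection bracket are our illustrative choices.

```python
# Sketch: per-cache hit ratios H(i, j) in a cascade via the Che approximation.
import math

def cascade_hit_ratios(q, C, N):
    """q: overall popularity q(i); C: per-cache size; returns H[j][i] for caches 1..N."""
    stream = list(q)                                 # request stream entering cache j
    H = []
    for _ in range(N):
        total = sum(stream)
        Q = [s / total for s in stream]              # Eq. (5): renormalized stream
        lo, hi = 0.0, 1e9                            # assumed bisection bracket
        for _ in range(80):                          # solve Eq. (7) for t_C(j)
            mid = (lo + hi) / 2
            occupied = sum(1 - math.exp(-qi * mid) for qi in Q)
            lo, hi = (mid, hi) if occupied < C else (lo, mid)
        h = [1 - math.exp(-qi * lo) for qi in Q]     # Eq. (6)
        H.append(h)
        stream = [s * (1 - hj) for s, hj in zip(stream, h)]  # filtering effect
    return H

M, alpha = 10000, 0.7
raw = [k ** -alpha for k in range(1, M + 1)]
s = sum(raw)
q = [x / s for x in raw]                             # Zipf popularity, normalized
H = cascade_hit_ratios(q, C=100, N=4)
print([round(H[j][0], 3) for j in range(4)])         # top item's hit ratio per cache
```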
Figure 5: Redundancy. (a) Expected number of redundant content copies E[R] versus the number of caches; (b) effective cache size versus the number of caches.

Figure 6: Single vs. Cascade — in-network hit ratio versus overall network cache size (200–1000), for the single cache server and the cascade, with α = 0.7 and α = 1.0.

Figure 7: Queuing delay of time-sensitive traffic. (a) Processing interests in NDN routers: all interests pass through both the forwarding and caching processing before the packet scheduler, so every interest is delayed by caching operations; (b) processing interests in the proposed scheme: caching is handled by the remote cache server, so time-sensitive traffic experiences forwarding delay only.
A simulation study is performed to verify this result. We assume that 10,000 pieces of content are grouped into 200 classes of different popularity; the most popular content belongs to class 1, while class 200 includes the least popular content. The popularity of the classes follows a Zipf distribution with α values of 0.7 and 1.0, respectively; that is, the content of class k is requested with probability q_k = θ/k^α, where θ = 1/Σ_{k=1}^{200} k^{−α}. All content is of the same size. As expected, caching space is not efficiently utilized in the cascade cache model, and a much lower in-network cache-hit ratio is observed in the cascade of N caches (see Fig. 6). Under the given linear topology, it is confirmed that the single cache server with 2C (= 200) is enough to guarantee Axiom 2.
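For reference, the class popularity model above can be reproduced with a few lines of Python (a sketch; how the 10,000 items map onto the 200 classes is left out, as the text does not specify it):

```python
# Sketch of the simulation's popularity model: 200 classes, Zipf over classes.
import random

K, alpha = 200, 0.7
theta = 1.0 / sum(k ** -alpha for k in range(1, K + 1))
q = [theta * k ** -alpha for k in range(1, K + 1)]   # q_k = theta / k^alpha

assert abs(sum(q) - 1.0) < 1e-9
requests = random.choices(range(1, K + 1), weights=q, k=5000)  # class per request
print(f"share of requests for class 1: {requests.count(1) / len(requests):.3f}")
```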
4. Quantitative Analysis

4.1. Queuing Delay

Here we look at the queuing delay of time-sensitive traffic under the proposed scheme. Queuing delay is affected by the amount of traffic as well as by the operations performed for each packet. In conventional ICN architectures, time-sensitive traffic may be delayed by the cache-lookups for the interests waiting ahead of it, which may impair QoS. If we assume that every interest is inserted into a single queue at the router and U_t is the queue length at time t, the expected queuing delay of a newly arriving interest at time t is

\[ D = \theta U_t D_f + (1-\theta) U_t (D_f + D_c) = U_t D_f + (1-\theta) U_t D_c \tag{8} \]

where θ is the proportion of time-sensitive traffic in the queue, and D_f and D_c are the forwarding and caching delays, respectively. In the proposed scheme, since caching operations are delegated to the cache server, the expected queuing delay for time-sensitive traffic decreases to U_t D_f.

In practice, differentiated forwarding is supported at routers by means of multiple priority queues and weighted scheduling. In such a case, time-sensitive traffic is categorized into the high-priority class and is preferentially forwarded (see Fig. 7). Let us assume that each interest belongs to either the high- or the low-priority class, and let U_{t,h} and U_{t,l} denote the numbers of interests waiting in the high- and low-priority queues at time t, respectively.
If the weights for high- and low-priority traffic in the weighted scheduler have the ratio 1 : p (p < 1), the queuing delay of high-priority traffic arriving at time t is estimated by

\[ D = U_{t,h} D_f + p\,U_{t,h}(D_f + D_c) = U_{t,h}(1+p)D_f + p\,U_{t,h} D_c \tag{9} \]

Here p U_{t,h} D_c is the additional delay due to the caching operations for the preceding interests in the queue, which is avoided under the proposed scheme.
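As a numerical illustration of Eq. (9), with assumed (not measured) values $U_{t,h} = 10$, $p = 0.2$, $D_f = 5\,\mu\text{s}$, and $D_c = 20\,\mu\text{s}$:

\[ D = U_{t,h}(1+p)D_f + p\,U_{t,h}D_c = 10 \times 1.2 \times 5 + 0.2 \times 10 \times 20 = 60 + 40 = 100\,\mu\text{s}, \]

whereas the proposed scheme drops the caching term and leaves $U_{t,h}(1+p)D_f = 60\,\mu\text{s}$. Under these assumptions, even with priority scheduling, caching operations for low-priority interests account for 40% of the high-priority queuing delay.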
Figure 8: Operation blocks and CPU usage (CPU clocks).

4.2. Computational Overhead

In this section, we analyse how much overhead is alleviated at ICN routers by taking the caching functionality out. For a quantitative analysis of router overhead, we use the results of [8], which analysed the NDNx source code and empirically broke down its CPU usage. Their analysis is summarized in Fig. 8, where operation blocks and their CPU usages are depicted in units of CPU clocks. Note that the number of CPU clocks for each operation block depends on experimental settings such as the hardware specification and traffic pattern, but their ratios remain unchanged as long as the operation logic is the same. Almost 65% and 47% of the overall CPU clocks for handling a single packet are consumed by caching when a cache-hit and a cache-miss occurs, respectively. For ease of explanation, the operations are grouped into five blocks, B1 to B5. For every incoming interest, B1, B2, and B5 are executed regardless of a cache-hit, while B3 and B4 are executed only on a cache-miss. Let α_N denote the cache-hit ratio; then the expected number of CPU cycles in the original NDN architecture, L_N, is

\[ L_N = L_{B1} + L_{B2} + L_{B5} + (1 - \alpha_N)(L_{B3} + L_{B4}) \tag{10} \]

In the proposed scheme, the shaded blocks in Fig. 8 are replaced with queries and responses to/from the cache server. Here we assume that these queries and responses are relayed via reserved links without looking up the PIT and FIB; hence, these operations are minimal compared with the basic operation blocks of NDN. The number of CPU clocks that the proposed scheme consumes is therefore calculated, using its hit ratio α_P, as

\[ L_P = L_{B1} + L_{B5} + (1 - \alpha_P)\,L_{B3} \tag{11} \]

Figure 9: CPU clock gain L_N / L_P with varied cache-hit ratio.

The ratio of L_N to L_P is plotted in Fig. 9 for varying cache-hit ratios. Although our simulation study reports that α_P is much higher than α_N for the same amount of caching space, here we use the same value for α_P and α_N to obtain a lower bound on the performance gain. The ratio exceeds 2 whenever the cache-hit ratio is less than 0.9; in other words, the proposed scheme can support about twice as much input traffic per second as conventional ICN.
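Equations (10)–(11) are easy to explore numerically. In the sketch below, the per-block clock counts are hypothetical placeholders standing in for the measured values of [8], so the printed ratios illustrate the computation rather than reproduce Fig. 9.

```python
# L_N / L_P from Eqs. (10)-(11). Block clock counts are HYPOTHETICAL
# placeholders, not the measured values of [8].
L_B1, L_B2, L_B3, L_B4, L_B5 = 10.0, 65.0, 40.0, 45.0, 25.0

def L_N(hit):   # Eq. (10): original NDN router
    return L_B1 + L_B2 + L_B5 + (1 - hit) * (L_B3 + L_B4)

def L_P(hit):   # Eq. (11): caching delegated to the cache server
    return L_B1 + L_B5 + (1 - hit) * L_B3

for hit in (0.0, 0.3, 0.6, 0.9):
    print(f"hit ratio {hit:.1f}: L_N / L_P = {L_N(hit) / L_P(hit):.2f}")
```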
Table 1: Simulation scenario

  num. of total content          10^5
  num. of popularity classes     200
  α in Zipf                      0.7, 1.0
  cache size of an ICN router    100 ∼ 300
  user query rate                5,000/sec
  simulation time                100 seconds

5. Evaluation

5.1. Simulation study

In this section, the proposed scheme is evaluated via a simulation study. The simulation uses the event-driven simulator employed in the existing literature [21, 22], which includes basic NDN features such as the FIB, PIT, and Content Store (in-network cache). Real IP/MPLS topologies are considered: the 20-node British Telecom (BT) topology, the 21-node Deutsche Telecom (DT) topology, and the PIONIER Network (PN) topology (see Fig. 10). A single cache server is considered in each topology, and it is virtually connected to the routers chosen by the proposed algorithm. Virtual links are denoted by dotted lines, while internal links are drawn as solid lines. Concentric circles represent edge (interconnection) routers, and the external links from them are denoted by thick arrows. A pair of nodes is additionally connected to every edge router: one is a content server and the other acts as a content requester. The remaining simulation parameters are arranged in Table 1. The proposed scheme is compared with the default NDN caching, i.e., pervasive caching with LRU.

5.1.1. Computational Overhead

First, we estimate the computational overhead at ICN routers with and without a cache by monitoring the number of operations L_{B1} ∼ L_{B5}. Fig. 11 shows the overall CPU usage of all routers within the domain. As the cache size increases, more requests are served at nearby locations instead of being forwarded further; hence less traffic is handled at routers, which reduces network-wide computation. When the routers are relieved of caching operations, their computational overhead is drastically reduced, to less than half. Note that the verification overhead that may be required for content integrity in the cache is not considered in this result. In NDN, routers can verify content by its signature, which helps prevent content-poisoning attacks [23]; if such operations were employed at each router to enhance security, even more burden would be saved by separating the caching functionality.

5.1.2. Network Cache-hit Ratio

The in-network cache-hit ratio is monitored for both pervasive caching and the proposed scheme. For pervasive caching, every ICN router has its own cache of size C. In the proposed scheme, a single cache is installed in the cache server, and its size is varied from that of a single ICN router (C) to 20C. Since the same traffic pattern is applied, the same number of cache-hits occurs in a domain with the same-sized cache. If C is set to 200, the proposed scheme achieves as high an in-network cache-hit ratio as NDN pervasive caching with a caching space of only 4 ∼ 6C. The average number of hops between any two edge routers in PN is larger than in BT and DT, which implies that an interest is looked up over a larger caching space; hence, a slightly larger cache is required at the cache server to obtain the same number of cache-hit events in PN. As addressed in the previous section, caching space is wasted on duplicate content in pervasive caching, whereas such redundancy is eliminated and the caching space is utilized more effectively in the proposed scheme. Since the in-network cache-hit ratio dominantly influences the amount of costly inter-domain traffic, the proposed scheme has a great advantage from the cost perspective.
5.1.3. Cost and Benefit

Here we look at the overall link usage within the domain as well as the amount of inter-domain traffic. Under the assumption that the cache servers are physically located near the routers numbered 13, 9, and 8 in BT, DT, and PN, respectively, the resulting bandwidth usages are described in Fig. 13. Because additional packets are exchanged between the cache server and the routers, the amount of intra-domain traffic increases in proportion to the hop distance.
Figure 10: Real IP/MPLS topologies with the cache server placements: (a) British Telecom (BT); (b) Deutsche Telecom (DT); (c) PIONIER Network (PN).

Figure 11: CPU usage (number of CPU clocks, ×10^11) versus cache size in the real topologies, for the basic NDN and the separation scheme with α = 0.7 and 1.0: (a) BT; (b) DT; (c) PN.

Figure 12: Network-hit ratio versus cache size (×N) in the real topologies, for pervasive caching and the proposed scheme with α = 0.7 and 1.0: (a) BT; (b) DT; (c) PN.
When the cache size of the cache server is set to 10C in BT, intra-domain traffic increases by about 30% compared with pervasive caching. In return, however, inter-domain traffic is reduced by about 10% due to gains in the hit ratio. If the cache becomes larger, more packets are served from the cache server instead of being forwarded toward the next domain; as a result, the amount of inter-domain traffic decreases considerably. With a cache size of 20C, more than 20% of the bandwidth usage for inter-domain traffic is saved.

Figure 13: Link cost gain/loss — increment of intra-domain traffic and decrement of inter-domain traffic versus cache size (×N): (a) BT; (b) DT; (c) PN.

5.1.4. YouTube trace

Here, we perform a simulation with a real trace of YouTube requests on the PN topology. The trace data was collected from the UMass campus network in September 2008 [24] and consists of 668,218 user requests for 323,593 pieces of content. We assume that all content is of the same size. The cache size of a single ICN router is set to 500, and that of the cache server is varied from 500 to 20,000. Since the trace has a larger content space than the synthetic data, the in-network cache-hit ratio in Fig. 14(a) is smaller than that in Fig. 12. Other than that, the results with the real trace are similar to those with the synthetic data. The caching space is managed effectively, and a cache size of only 5C at the cache server yields an in-network cache-hit ratio equal to that of pervasive caching with a cache of size C at every router. This gain leads to reduced inter-domain traffic, as shown in Fig. 14(b).

Figure 14: YouTube trace with PN: (a) network hit ratio for the proposed scheme and pervasive caching; (b) link cost gain/loss versus cache size (×N).
5.2. Experimental Study

In this section, the capacity of the cache server is discussed based on an experimental study. A simple dumbbell topology of four nodes is composed in the testbed network. Each node is equipped with an Intel i7-7700 CPU, 16 GB RAM, and a 256 GB SSD, and the nodes communicate via either an Intel X540-AT2 or X520-2 network card. A NetGear ProSafe S3300-28X switch with two kinds of ports (SFP+ ports and 10 Gbps RJ45 ports) connects the nodes. NDNx is installed on top of Ubuntu 14.04 at each node, and CPU usage is measured while varying the interest generation rate, hit ratio, and cache size.

In the first experiment, CPU usage is observed while changing the amount of traffic to be served. To this end, the interest generation rate is varied at the client nodes, and every interest is served from the content store (a 100% cache-hit ratio is enforced). The CPU usage, monitored with the Linux command top, is plotted in Fig. 15.
Figure 15: CPU usage (%) versus throughput (Gbps) under 100% cache hits.

Figure 16: CPU usage (%) versus cache-hit ratio (%): (a) cache size 5,000; (b) cache size 500,000.
It is shown that the CPU usage increases in proportion to the traffic to be served and saturates at an outgoing traffic of 2.1 Gbps. Note that this is the result when the NDN forwarding daemon (NFD) runs on a single core of the system; if the multi-core extension of NFD [25] is used, we believe that a multiple of 2.1 Gbps could be served even by a single general-purpose PC.

The second experiment shows the variation in CPU usage according to the hit ratio and cache size. A client node generates interests at a rate of 10,000/sec, and the cache-hit ratio at the server is varied from 0 to 1 with cache sizes of 5,000 and 500,000. As shown in Fig. 16(a), the CPU usage decreases as the cache-hit ratio increases, and it is halved when the cache-hit ratio reaches 1. This is because less content is inserted into the cache at a higher cache-hit ratio, so fewer read/write operations are performed at the cache server. Fig. 16(b) shows that the CPU usage is larger for a cache size of 500,000 than for 5,000. Since the complexity of the name-lookup function is log(C), the difference is not linear in the cache size (we also performed an experiment with a cache size of 50,000 and confirmed that the average CPU usage lies between the two cases of 5,000 and 500,000). From this experimental study, we conclude that a cache server on a single general-purpose PC could handle tens of Gbps of traffic at a moderate cache-hit ratio if the multi-core extension of NFD is supported. If the cache server is implemented as a cluster of multiple machines, it can provide a sufficient service rate under current network bandwidths.

6. Discussion

6.1. Communication between routers and the cache server
Utilizing the separated cache server raises several deployment issues. Even though the cache server is logically located within one-hop distance of the NDN routers, the physical connections are composed of several links. For communication between the routers and the cache server, we consider both in-band and out-of-band channel models.

When the in-band communication model is used, all packets are routed based on the FIB and PIT. When routers receive content from the content source, however, they cannot forward the content to the cache server, since no corresponding PIT entry exists along the path to the cache server. As discussed in Section 3.1, the first possible approach is tunneling: the content is encapsulated with a well-known name and routed along the path by the persistent interest for that name in the PIT. When the encapsulated packet arrives at the cache server, it is decapsulated and stored under the original content name. Another approach uses the NACK that the cache server issues when the target content is not found. While the NACK is delivered from the cache server to the routers, it does not consume the PIT entries; instead, it modifies them to create a reverse path, over which the content is then routed to the cache server. Under the in-band communication model, no additional module is required, but routing overhead may increase due to the traffic between the cache server and routers, and the access delay to the cache server may also increase.

In the out-of-band communication model, reserved links and separate routing tables create dedicated channels for the traffic between the cache server and the routers. A simple mark (or simplified address) is specified in the interest packet, and routers forward such packets to a pre-configured face without a FIB-lookup. To deliver the responses back, an additional data structure, called the qPIT, is added and used exclusively for this traffic. Since a small RTT leads to a small qPIT, the qPIT-lookup delay is insignificant and the computational overhead at routers is minimized. However, additional cost is incurred for the dedicated channels, and new modules such as the qPIT must be included.
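A minimal sketch of the out-of-band path, under our reading of this description; the qPIT structure, the to_cache mark, and the reserved-face constant are assumptions for illustration.

```python
# Out-of-band query path: marked packets bypass FIB/PIT and use a qPIT.
CACHE_FACE = 99          # assumed pre-configured reserved face toward the cache server

qpit = {}                # qPIT: name -> face to send the cache response back to

def forward_marked(packet, in_face, send):
    if packet["type"] == "interest" and packet.get("to_cache"):
        qpit[packet["name"]] = in_face          # remember the reverse face
        send(CACHE_FACE, packet)                # no FIB lookup on this channel
    elif packet["type"] == "data" and packet["name"] in qpit:
        send(qpit.pop(packet["name"]), packet)  # small qPIT: cheap lookup

log = []
forward_marked({"type": "interest", "name": "/a/b", "to_cache": True}, 3,
               lambda f, p: log.append((f, p["name"])))
forward_marked({"type": "data", "name": "/a/b"}, CACHE_FACE,
               lambda f, p: log.append((f, p["name"])))
print(log)   # [(99, '/a/b'), (3, '/a/b')]
```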
6.2. Deployment of the cache server

Algorithm 1 describes how to connect NDN routers to the cache server with minimal cost, given the location of the cache server. Different locations of the cache server result in different minimal-cost sets; hence, we can choose the set with the minimum expected link cost and its corresponding location for the cache server. The cache server can also be implemented as a cloud service residing in a data center for flexible management.

6.3. Domain Segmentation

Throughout the paper, it is assumed that a single cache server is installed within a domain. However, if the domain is too large, a single cache server may not be able to handle all the queries and responses. In that case, the domain can be partitioned into several sub-domains, with a cache server installed in each. When a packet traverses across sub-domains, the proposed scheme operates in the same way as when it traverses across different domains.

6.4. The proposed scheme in NDN over SDN

Software-Defined Networking (SDN) has been proposed to offer flexible management of the network through centralized logic, called the controller.
By separating the control plane from the data plane, network services are provided more effectively in terms of efficiency, reliability, cost, and security. Recently, studies on the integration of ICN and SDN have been conducted [26, 27, 28, 29, 30]. In this section, we discuss the application of the proposed scheme in an NDN-over-SDN architecture.

In [28], the authors introduced NDNFlow, in which ICN functionality is supported by an application-specific layer of the OpenFlow protocol [31]. By separating the ICN layer from the regular OpenFlow layer, ICN packets are handled in the SDN architecture without modification of the existing OpenFlow communication channels and processes. Using NDNFlow, the proposed scheme can easily be applied to the NDN-over-SDN architecture. When an interest arrives at the first NDN-enabled switch in the domain, as shown in Fig. 17, it is forwarded to the cache server using the OpenFlow protocol. If the target content is found in the cache server, the response is sent back to the NDN-enabled switch by the cache server. Otherwise, the interest reaches the SDN controller. The NDN module in the controller looks up the FIB and computes the appropriate actions to build a feasible path; InstallFlow messages are then sent to all the switches along the path to install the correct forwarding rules.

Figure 17: The proposed scheme in NDN over SDN: (1) the interest visits the cache server via an NDN-enabled switch; (2) on a miss, it is forwarded to the ICN module in the OpenFlow controller, which responds; (3) InstallFlow messages are sent to the switches along the path (pure SDN switches included).
7. Conclusion
In this paper, we have presented an alternative ICN architecture in which caching functionality is separated from ICN routers and delegated to a standalone cache server. ICN routers are relieved of the huge overhead of caching operations, and the feasibility of ICN is enhanced under the proposed scheme. Even though additional link costs are incurred for the queries and responses between the cache server and the ICN routers, they are compensated by the reduction of inter-domain traffic and its expenses. Additionally, time-sensitive traffic is fully supported, without delays caused by caching operations for preceding packets or for itself. Based on theoretical and simulation studies, we validated the proposed architecture and discussed the optimal location of the cache server.
Acknowledgement
This work was supported by the National Research Foundation (NRF) of Korea grant funded by the Korea government (MEST) (NRF-2016R1C1B1015846 and NRF-2016R1C1B1011682).
References
[1] C. Labovitz, S. Iekel-Johnson, D. McPherson, J. Oberheide, F. Jahanian, Internet inter-domain traffic, 2010.
[2] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, R. L. Braynard, Networking named content, in: ACM CoNEXT, 2009.
[3] T. Koponen, M. Chawla, B. Chun, A. Ermolinskiy, K. Kim, S. Shenker, I. Stoica, A data-oriented (and beyond) network architecture, ACM SIGCOMM Computer Communication Review (2007) 181.
[4] P. Jokela, A. Zahemszky, C. E. Rothenberg, S. Arianfar, P. Nikander, LIPSIN: line speed publish/subscribe inter-networking, ACM SIGCOMM Computer Communication Review 39 (2009) 195–206.
[5] C. Dannewitz, NetInf: An information-centric design for the future internet, in: Proc. 3rd GI/ITG KuVS Workshop on The Future Internet, 2009.
[6] W. So, A. Narayanan, D. Oran, M. Stapp, Named data networking on a router: forwarding at 20 Gbps and beyond, in: ACM SIGCOMM Computer Communication Review, Vol. 43, 2013, pp. 495–496.
[7] D. Perino, M. Varvello, L. Linguaglossa, R. Laufer, R. Boislaigue, Caesar: A content router for high-speed forwarding on content names, in: ACM/IEEE Symposium on Architectures for Networking and Communications Systems, 2014.
[8] T. Hasegawa, Y. Nakai, K. Ohsugi, J. Takemasa, Y. Koizumi, I. Psaras, Empirically modeling how a multicore software ICN router and an ICN network consume power, in: Proceedings of the 1st International Conference on Information-centric Networking, ACM, 2014, pp. 157–166.
[9] P. Gasti, G. Tsudik, E. Uzun, L. Zhang, DoS and DDoS in named-data networking, in: IEEE ICCCN, 2012.
[10] A. Araldo, D. Rossi, F. Martignon, Cost-aware caching: Caching more (costly items) for less (ISPs operational expenditures), IEEE Transactions on Parallel and Distributed Systems 27 (5) (2016) 1316–1330.
[11] G. Dan, Cache-to-cache: Could ISPs cooperate to decrease peer-to-peer content distribution costs?, IEEE Transactions on Parallel and Distributed Systems 22 (9) (2011) 1469–1482.
[12] F. Kocac, G. Kesidis, T. Pham, S. Fdida, The effect of caching on a model of content and access provider revenues in information-centric networks, in: IEEE SocialCom, 2013, pp. 45–50.
[13] S. K. Fayazbakhsh, Y. Lin, A. Tootoonchian, A. Ghodsi, T. Koponen, B. Maggs, K. Ng, V. Sekar, S. Shenker, Less pain, most of the gain: Incrementally deployable ICN, in: ACM SIGCOMM Computer Communication Review, Vol. 43, ACM, 2013, pp. 147–158.
[14] G. Rossini, D. Rossi, Coupling caching and forwarding: Benefits, analysis, and implementation, in: Proceedings of the 1st International Conference on Information-centric Networking, 2014, pp. 127–136.
[15] C. Tsilopoulos, G. Xylomenos, Supporting diverse traffic types in information centric networks, in: Proceedings of the ACM SIGCOMM Workshop on Information-centric Networking, ACM, 2011, pp. 13–18.
[16] R. Bar-Yehuda, S. Even, A linear-time approximation algorithm for the weighted vertex cover problem, Journal of Algorithms 2 (2) (1981) 198–203.
[17] D. S. Hochbaum, Approximation algorithms for the set covering and vertex cover problems, SIAM Journal on Computing 11 (3) (1982) 555–556.
[18] C. Williamson, On filter effects in web caching hierarchies, ACM Transactions on Internet Technology (TOIT) 2 (1) (2002) 47–77.
[19] M. Rezazad, Y. Tay, CCndnS: A strategy for spreading content and decoupling NDN caches, in: IFIP Networking Conference (IFIP Networking), 2015, IEEE, 2015, pp. 1–9.
[20] H. Che, Y. Tung, Z. Wang, Hierarchical web caching systems: modeling, design and experimental results, IEEE JSAC 20 (2006) 1305–1314.
[21] Y. Kim, I. Yeom, Performance analysis of in-network caching for content-centric networking, Computer Networks 57 (13) (2013) 2465–2482.
[22] Y. Kim, I. Yeom, J. Bi, Y. Kim, Scalable and efficient file sharing in information-centric networking, Journal of Network and Computer Applications 57 (2015) 21–32.
[23] D. Kim, S. Nam, J. Bi, I. Yeom, Efficient content verification in named data networking, in: Proceedings of the 2nd International Conference on Information-Centric Networking, ACM, 2015, pp. 109–116.
[24] YouTube traces from the campus network (2008). URL http://traces.cs.umass.edu/index.php/Network
[25] K. Ohsugi, J. Takemasa, Y. Koizumi, T. Hasegawa, I. Psaras, Power consumption model of NDN-based multicore software router based on detailed protocol analysis, IEEE Journal on Selected Areas in Communications 34 (5) (2016) 1631–1644.
[26] S. Salsano, N. Blefari-Melazzi, A. Detti, G. Morabito, L. Veltri, Information centric networking over SDN and OpenFlow: Architectural aspects and experiments on the OFELIA testbed, Computer Networks 57 (16) (2013) 3207–3221.
[27] H. Luo, J. Cui, Z. Chen, M. Jin, H. Zhang, Efficient integration of software defined networking and information-centric networking with CoLoR, in: Global Communications Conference (GLOBECOM), 2014 IEEE, IEEE, 2014, pp. 1962–1967.
[28] N. L. van Adrichem, F. A. Kuipers, NDNFlow: software-defined named data networking, in: Network Softwarization (NetSoft), 2015 1st IEEE Conference on, IEEE, 2015, pp. 1–5.
[29] E. Aubry, T. Silverston, I. Chrisment, SRSC: SDN-based routing scheme for CCN, in: Network Softwarization (NetSoft), 2015 1st IEEE Conference on, IEEE, 2015, pp. 1–5.
[30] A. F. Trajano, M. P. Fernandez, ContentSDN: A content-based transparent proxy architecture in software-defined networking, in: Advanced Information Networking and Applications (AINA), 2016 IEEE 30th International Conference on, IEEE, 2016, pp. 532–539.
[31] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, J. Turner, OpenFlow: enabling innovation in campus networks, ACM SIGCOMM Computer Communication Review 38 (2) (2008) 69–74.
Dohyung Kim received the B.S. degree in Information and Computer Engineering from Ajou University, Suwon, Korea, in February 2004, and the Ph.D. degree in Computer Science from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in August 2014. He is currently a research professor in the Department of Computer Engineering at Sungkyunkwan University. His research interests include the design and analysis of computer networking and wireless communication systems, especially for future Internet architectures.
Younghoon Kim received his B.S. and Ph.D. degrees in Computer Science from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in February 2004 and August 2015, respectively. Currently, he is a research professor in the Department of Electrical and Computer Engineering at Sungkyunkwan University. His research interests broadly cover network research areas including NDN, congestion control, data center networking, and visible light communications.