Performance analysis of modified BCube topologies for virtualized data center networks


Vahid Asghari∗, Reza Farrahi Moghaddam, Mohamed Cheriet

Ecole de Technologie Superieure, University of Quebec, Montreal, Canada; Ericsson Canada, Montreal, Canada


Article history: Received 1 June 2016; Revised 25 August 2016; Accepted 4 October 2016.
Keywords: Data center network topologies; Performance analysis; Bandwidth efficiency; Failure resiliency; Network virtualization

Abstract

Network virtualization enables computing network and data center (DC) providers to manage their networking resources in a flexible manner using software running on physical computers. A DC topology that is highly flexible with respect to service-level changes is therefore very important for this purpose. In this paper, we address the existing issues with classic DC network topologies in virtualized DC networks and investigate the performance of a set of modified BCube topologies capable of providing dynamic structures according to the bandwidth requirements of a virtual DC network. In particular, we propose three main approaches to modify the structure of a classic BCube topology, used as a benchmark, and investigate their structural features and maximum achievable interconnection bandwidth for different routing scenarios. Finally, we run an extensive simulation program to evaluate the performance of the proposed modified topologies in a simulation environment that accounts for failure analysis and traffic congestion. Our simulation experiments, which are consistent with our design goals, show the efficiency of the proposed modified topologies compared to the classic BCube in terms of bandwidth availability and failure resiliency.

1. Introduction

In recent years, DCs have evolved from passive elements of computing networks into active elements that dynamically adapt to different application requirements and manage their energy efficiency in large-scale DC networks [1–3]. In this regard, the concept of the virtualized DC network has been proposed as a solution that provides better energy efficiency, resource utilization, management, and scalability [4]. For instance, in [5], an energy-efficient resource management system is proposed for virtual DC networks which satisfies the required quality of service (QoS) and reduces operational costs. However, it has been observed that much of the focus has been on improving the efficiency of computing and cooling systems (representing almost 70% of a DC's total energy consumption), while less attention has been paid to the infrastructure of the DC network. In fact, the DC network infrastructure accounts for a considerable amount, up to 20%, of the total energy consumption, as reported in 2006 [6]. In this context, it has been shown in [7] that by interconnecting commodity switches in

∗
Corresponding author. E-mail addresses: [email protected] (V. Asghari), [email protected] (R. Farrahi Moghaddam), [email protected] (M. Cheriet).

a DC network architecture, it is possible to provide scalable interconnection bandwidth while reducing the architecture cost. In [8], considering a Fat-tree topology in a DC network, a network-wide energy optimizer was investigated that dynamically optimizes the power consumption of the DC network while satisfying the current traffic conditions. In fact, for a given network topology and input traffic, the proposed optimizer finds a minimum-power network subset that satisfies the current traffic conditions and simply turns off those network components that are not needed under that traffic condition. As a result, the proposed approach yielded 38% energy savings under varying traffic conditions. In particular, it will be discussed in Section 6 that the topologies proposed in this work could provide a higher chance of finding goal-oriented network subsets (where the goal could be, for example, minimum power consumption). In similar work, such as [9], server virtualization techniques were used to increase the number of active network components that can be turned off under some traffic conditions and, consequently, to increase the overall energy efficiency of the DC network. Furthermore, it has been observed that the classic DC topologies suffer from infrastructural restrictions that affect their ability to provide the service level required by applications [10]. Though these constraints are acceptable for small-scale DC networks with a uniform bandwidth requirement, they can



Fig. 1. Two illustrative samples of (a) BCube with k1G = 4 and ks = 2, (b) Horizontal-BCube with k1G = 4, k10G = 4 and ks = 2, (c) Vertical-BCube with k1G = 4, k10G = 2 and ks = 3, and (d) Hybrid-BCube with k1G = 4, k10G = 4, kH10G = kV10G = 2, and ks = 3. Note that for clarity of presentation in (b)–(d), the classic BCube 1 Gbps connections are shown in bundled form.

significantly deteriorate the bandwidth availability and resiliency of large DC networks when multi-level QoS requirements are considered. In this regard, a number of approaches have been proposed in the literature to address these issues [11–15]. For instance, in [12,13], electrical networking technology is combined with optical switching technology in DC network architectures to exploit the advantages of both technologies and provide a flexible interconnection bandwidth according to the varying traffic demand.

A well-known approach that solves some of the bandwidth problems of traditional DC topologies is the BCube topology [16]. However, the classic BCube topology itself suffers from some restrictions in terms of the number of servers and the number of switch ports. For example, the number of servers in the lowest-level cubes is restricted and must be equal to the total number of such cubes. In addition, no direct link is planned between any two switches or two servers. Although these constraints may be bearable for small-scale DCs, they limit scalability in terms of the number of servers for larger DCs. For the definitions and a detailed discussion of these restrictions, please refer to Section 2.

In this paper, we investigate a group of modified BCube topologies with the capability to provide dynamic structures according to the service-level agreement required by the active traffic in a virtual DC network. In particular, we propose certain modifications to the classic BCube structure to make it more dynamically adaptive to the bandwidth required by a given DC traffic. We first consider a classic BCube topology as a benchmark for our analysis and investigate its features regarding the network structure and interconnection bandwidth (IBW). Then, we propose three main approaches to modify a classic BCube topology, namely Horizontal, Vertical, and Hybrid, and investigate their associated structural topology features and maximum achievable IBW for both single- and multi-path routing scenarios. We further define performance metrics based on the physical network topology characteristics and on the achievable bandwidth of the network under a multi-path routing protocol. Finally, we test the performance of the proposed modified topologies in a simulation environment by running an extensive simulation program that accounts for failure analysis and traffic congestion.

It is worth mentioning that we have chosen the BCube topology as the base topology in our study because it is currently considered by many DC designers in disaggregated approaches to DCs. However, the contribution of this paper is not limited to a particular topology or traffic; the proposed approach is quite generic and can easily be modified and applied to other network topology structures.

2. Preliminary works

In this section, we review the BCube DC network topology, which is one of the well-known topologies used in practical DC networks such as Sun's Modular DC [17], HP's POD [18], and IBM's Modular DC [19]. The BCube topology was first proposed by Guo et al. [16] as a server-centric approach, rather than the traditional switch-oriented practice. The general purpose of such a topology design is to 'scale out', i.e., to use multiple commodity switches in place of a single high-end switch. The goal of this design pattern is to reduce the cost of the network; since commodity switches are inexpensive, this makes BCube an appropriate design for large-scale DC networks [20].

As shown by the example in Fig. 1-(a), the classic BCube topology has a recursive structure, in which different levels of connectivity can be recognized between the switches and the servers. These levels in a BCube structure are denoted BCube(level-i), where i is the level index. A BCube(level-0) is simply composed of K servers connected to a single switch with k1G (= K) ports. A BCubeK(level-1) is constructed from K BCubeK(level-0) modules and K switches with k1G ports each, where K denotes the number of servers in each BCube(level-0) and each server has ks ports. Each level-1 switch is connected to all K BCube(level-0) sub-networks through its connection to one server in each BCube(level-0). Continuing the same steps, the general structure of the classic BCube(level-L) topology is composed of K BCube(level-(L−1)) modules and K^L additional switches with K ports each; again, each level-L switch is connected to all K BCube(level-(L−1)) modules through its connection to one server in each of them. It is worth mentioning that each server in a classic BCubeK(level-L) topology has L + 1 (= ks) ports, which are connected to one switch at each of the levels 0 through L.
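To make the recursive construction concrete, the following short sketch (our own illustration under assumed node labels, not the authors' code) builds the level-1 BCube of Fig. 1-(a) as an undirected graph with networkx and checks the resulting node counts.

```python
import networkx as nx

def bcube_level1(K):
    """Classic BCube(level-1): K cubes of K servers each; ("srv", i, j) is the
    j-th server of cube i, ("sw", 0, i) is the level-0 switch of cube i, and
    ("sw", 1, j) is the j-th level-1 switch."""
    g = nx.Graph()
    for i in range(K):              # K BCube(level-0) modules
        for j in range(K):          # K servers per module
            g.add_edge(("srv", i, j), ("sw", 0, i), speed="1G")  # level-0 link
            g.add_edge(("srv", i, j), ("sw", 1, j), speed="1G")  # level-1 link
    return g

g = bcube_level1(4)
print(sum(1 for n in g if n[0] == "srv"),   # 16 servers (K^2)
      sum(1 for n in g if n[0] == "sw"))    # 8 switches (2K)
```

In this toy instance every server ends up with ks = 2 ports and every switch with k1G = K ports, matching the counts discussed next.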


Moreover, a classic BCubeK(level-L) consists of K^(L+1) servers and (L + 1) K^L switches (i.e., K^L switches at each of the L + 1 levels), each switch having K ports; for the level-1 case this gives K² servers and 2K switches. Fig. 1-(a) shows the design of a classic BCube(level-1) with K = 4 (ks = 2 and k1G = 4).

As can be understood from the definition of the classic BCube topology, there are some restrictions on the size of the DC network in terms of the number of servers and the number of switch ports. For instance, in a classic BCube topology, the number of servers in each cube (level-0) is restricted to be equal to the number of BCube(level-0) modules. Another restriction is that there is no direct link between two switches or between two servers in a BCube construction. Although these constraints are acceptable for small-scale DCs with normal traffic, they impose considerable restrictions on the number of servers in large-scale DC networks. Moreover, these restrictions can significantly deteriorate the resource availability and manageability of large DC networks when multi-level QoS requirements are targeted.

Because the BCube topology is built from inexpensive commodity off-the-shelf (COTS) switches, designers are likely to use it as the building block for mega-size DCs [16,21]. In this regard, the MDCube network topology has been proposed as an extension of the BCube topology that utilizes the high-speed interfaces on COTS switches to interconnect multiple BCube modules [21]. It is worth noting that, at that time, commercial COTS switches had only a few high-speed ports, e.g., mostly one or two 10 Gbps ports, and topology designers used these high-speed ports to enlarge DC topologies (in terms of the number of servers), as in the MDCube structure design. Nowadays, commercial switches provide a larger number of high-speed ports, which can be utilized in the design of DC topologies. In this contribution, our aim is to use these high-speed ports to provide a group of modified BCube topologies with high flexibility with respect to service-level changes, for use in virtualized DC networks. In the following, considering the classic BCube topology as a benchmark, we explain our proposed modifications to the BCube topology and evaluate them in terms of features such as bandwidth efficiency and failure resiliency.

3. Modified BCube topologies for bandwidth efficiency

In practice, one of the most important features in the design of DC networks is the network interconnection bandwidth. For a given network topology, the interconnection bandwidth (IBW) is defined as the achievable speed at which information can be exchanged between two servers. Moreover, the average network bandwidth is nowadays considered a standard parameter in the analysis of different DC network topologies. In particular, the network IBW can provide useful insights, along with failure and latency analysis, when investigating the maximum number of link failures that a network can tolerate before being split into multiple connected components. In this section, our aim is to present a few design approaches for maximizing the network IBW in the design of a BCube DC network topology.
The classic BCube construction ensures that servers connect only to switches at different levels and never directly to other servers; a similar rule holds between switches, i.e., switches never connect to other switches. Looking at the classic BCube topology in Fig. 1-(a), the switches can be treated as intermediate relays that enable server-to-server communication. It can therefore be seen that the classic BCube topology suffers from low IBW values between different switch nodes in the network. It is important to note that this feature of classic BCube topologies affects the multi-path flow between different nodes in a DC network, i.e.,

3

server-to-server (srv-srv), switch-to-switch (swc-swc), and server-to-switch (srv-swc) communication links.1

On the other hand, recent commercial switches provide many Gigabit ports as well as several high-speed 10 Gbps ports, whereas the classic BCube topology uses only the Gigabit ports. With our proposed modifications to the classic BCube construction, the high-speed 10 Gbps ports are used to connect switches at the different levels of the classic BCube topology. In particular, we propose using the high-speed ports of the switches in the different layers (rows) of the modified BCube topology in Horizontal, Vertical, and Hybrid (a combination of horizontal and vertical) swc-swc connection patterns. These high-speed ports thus provide additional links between switches in a BCube topology without requiring any extra switch or router. The proposed modified BCube topologies directly address the low IBW between switch nodes of the classic BCube topology by connecting the high-speed switch ports directly, at the cost of extra cabling only.2

In the following, we formally describe the proposed modified BCube topologies, namely Horizontal, Vertical, and Hybrid, and then compare them in terms of some common features. In our formulations, ks (= PSrv), k1G (= PSwc), and k10G denote the number of server ports, the number of 1 Gbps switch ports, and the number of high-speed 10 Gbps switch ports, respectively. In the Horizontal-BCube topology, the high-speed ports of the switches on a given layer of a classic BCube are connected in a horizontal manner. An example of a Horizontal-BCube topology is shown in Fig. 1-(b) for k1G = 4 and ks = 2; in this example, the number of high-speed 10 Gbps switch ports, k10G, is also set to 4. In contrast, in the Vertical-BCube topology, the high-speed switch ports of a given layer in a classic BCube are connected vertically to the high-speed ports of the switches of the layer above. An example of a Vertical-BCube topology is shown in Fig. 1-(c) for k1G = 4, k10G = 2, and ks = 3. Finally, in the Hybrid-BCube topology, the high-speed ports of each switch are divided into two sets of kH10G and kV10G ports, such that k10G = kH10G + kV10G, and the two sets are connected horizontally and vertically, respectively, as explained for the Horizontal- and Vertical-BCube topologies. An example of a Hybrid-BCube topology is shown in Fig. 1-(d) for k1G = 4, k10G = 4, kH10G = kV10G = 2, and ks = 3.
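The following sketch (ours; the exact port-to-port pairing is a design choice that the paper does not fix) shows one way to add the horizontal and vertical 10 Gbps swc-swc links on top of a BCube whose switches are labelled ("sw", level, index), as in the previous sketch.

```python
import networkx as nx

def add_10g_links(g, ks, k1g, kh_10g=0, kv_10g=0):
    """Add high-speed swc-swc links to a ks-level BCube with k1g switches per
    level.  Each switch spends kh_10g of its 10 Gbps ports on same-level
    (horizontal) links and kv_10g on same-column (vertical) links; here the
    ports are simply wired to the nearest neighbours in a ring-like pattern."""
    for lvl in range(ks):                          # horizontal links
        for idx in range(k1g):
            for off in range(1, kh_10g // 2 + 1):
                g.add_edge(("sw", lvl, idx),
                           ("sw", lvl, (idx + off) % k1g), speed="10G")
    for idx in range(k1g):                         # vertical links
        for lvl in range(ks):
            for off in range(1, kv_10g // 2 + 1):
                g.add_edge(("sw", lvl, idx),
                           ("sw", (lvl + off) % ks, idx), speed="10G")
    return g

g = nx.Graph()
add_10g_links(g, ks=3, k1g=4, kh_10g=2, kv_10g=2)   # Hybrid wiring as in Fig. 1-(d)
print(g.number_of_edges())                           # 12 horizontal + 12 vertical = 24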
3.1. Structural properties

Herein, our aim is to investigate the topological features of the proposed modified BCube topologies, namely the Horizontal, Vertical, and Hybrid BCubes. Studying these properties helps us better understand the impact of the design parameters on the performance of the proposed modified topologies. First, we calculate the number of additional links in the Horizontal-BCube topology compared to the classic BCube topology. Note that the number of cubes in a BCube topology is equal to the number of 1 Gbps switch ports, k1G, and the number of BCube levels is equal to the number of server ports, ks. Considering that there is always one link between two ports, the number of additional horizontal links in a modified Horizontal-BCube compared to the classic BCube is ks (k1G k10G / 2), while the number of additional vertical links in a Horizontal-BCube compared to the classic BCube

1 Note that srv and swc stand for a server and a switch, respectively.
2 An appropriate routing mechanism is required to make the modified topologies non-blocking; however, investigating such routing mechanisms is beyond the scope of this paper and is left for future work.


Table 1. Additional number of links in modified BCube topologies compared to the classic BCube topology.

                               Horizontal BCube     Vertical BCube       Hybrid BCube [k10G = kH10G + kV10G]
Additional horizontal links    ks (k1G k10G / 2)    0                    ks k1G kH10G / 2
Additional vertical links      0                    k1G (ks k10G / 2)    k1G ks kV10G / 2
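As a quick sanity check of the Table 1 expressions, the following few lines (our own helper; names are illustrative) evaluate the closed-form counts for the sample topologies of Fig. 1.

```python
def extra_links(ks, k1g, kh_10g=0, kv_10g=0):
    """Closed-form additional link counts from Table 1; the Horizontal and
    Vertical cases are the Hybrid formula with one port budget set to zero."""
    return ks * k1g * kh_10g // 2, k1g * ks * kv_10g // 2   # (horizontal, vertical)

print(extra_links(2, 4, kh_10g=4))             # Horizontal-BCube, Fig. 1-(b): (16, 0)
print(extra_links(3, 4, kv_10g=2))             # Vertical-BCube,   Fig. 1-(c): (0, 12)
print(extra_links(3, 4, kh_10g=2, kv_10g=2))   # Hybrid-BCube,     Fig. 1-(d): (12, 12)
```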


Table 2. Length of the longest shortest path in BCube topologies.

Between      Classic BCube   Horizontal BCube   Vertical BCube   Hybrid BCube
srv ↔ srv    4               4                  4                4
swc ↔ swc    4               4                  4 (3)*           4 (3)*
srv ↔ swc    3               3                  3                3

Note: * For the most common case of ks < 4.

Table 3. Minimum maximal single-path bandwidth in BCube topologies.

Between      Classic BCube   Horizontal BCube   Vertical BCube   Hybrid BCube
srv ↔ srv    B1G             B1G                B1G              B1G
swc ↔ swc    B1G             B1G (B10G)*        B1G (B10G)**     B1G (B10G)*,**
srv ↔ swc    B1G             B1G                B1G              B1G

Notes: * For same-horizontal-level switches. ** For same-vertical-level switches.

topology is clearly zero. In this context, following the same approach used for the Horizontal-BCube topology, we can obtain the number of additional horizontal and vertical links for the Vertical- and Hybrid-BCube topologies, as shown in Table 1.

Considering the BCube topology properties presented by Guo et al. [16], it can easily be shown that the diameter of a classic BCube topology, defined as the longest shortest path over all server pairs, is 4. It can further be shown that, for a classic BCube topology, the longest shortest path between two switches is also 4, and between a server and a switch it is 3. Using the same definition for the proposed modified BCube topologies, we obtain the same values for the longest shortest path between server pairs, switch pairs, and server-switch pairs, as shown in Table 2. It is worth mentioning that, for the most common case of ks < 4 in the Vertical and Hybrid topologies, the longest shortest path between switch pairs, as well as between a server and a switch, is always equal to 3.

3.2. Bandwidth properties

We further investigate the bandwidth efficiency of the proposed modified BCube topologies. In this context, besides the use of single-path routing protocols in different networks, recent research has shown interest in multi-path routing protocols for their effective role in fault tolerance and load balancing [22]. Herein, we investigate the minimum-maximal IBW in the proposed topologies for both single- and multi-path protocols.

First, we calculate the minimum-maximal single-path IBW between pairs of network nodes in the different BCube topologies. Since only a single path is considered between any pair of nodes, it can be shown for the classic BCube structure that the minimum-maximal IBW between a pair of servers, a pair of switches, or a server and a switch is equal to the bandwidth of a single 1 Gbps link, B1G. For the modified BCube topologies, however, the minimum-maximal IBW between same-level (horizontally and/or vertically connected) switches increases to the bandwidth of a single 10 Gbps link, B10G, as illustrated in Table 3. This can be interpreted as follows: using a single-path routing protocol,


the proposed topology modifications do not generally change the achievable IBW between different pairs of network nodes compared to the classic BCube topology; they do, however, introduce a significant increase for same-level (horizontally and/or vertically connected) switch pairs in the modified BCube topologies.

Finally, we calculate the minimum-maximal IBW between pairs of network nodes in the different BCube topologies for the case of a multi-path routing protocol. Here, any pair of nodes may be connected through multiple independent paths. In this case, since each server has only ks 1 Gbps ports, the achievable minimum-maximal IBW between a pair of servers, and between a server and a switch, is ks B1G for all BCube topologies, as shown in Table 4. To calculate the achievable IBW between a pair of switches, note that each switch in a classic BCube topology has only k1G 1 Gbps links; accordingly, the minimum-maximal IBW between a pair of switches in a classic BCube is k1G B1G. In the modified BCube topologies, each switch additionally has k10G 10 Gbps links. Considering the possible switch-to-switch connections in a Horizontal-BCube topology, the minimum-maximal IBW between a pair of switches is obtained by adding the achievable bandwidth over all available 1 Gbps and 10 Gbps links, i.e., k10G B10G + k1G B1G. From the structure of the proposed Vertical-BCube topology, it can be shown that, in addition to the k10G 10 Gbps links, there are (k1G − 1) 1 Gbps links through each of the other (ks − 1) switch levels (excluding the chosen level); accordingly, the achievable minimum-maximal IBW between a pair of switches is k10G B10G + (ks − 1)(k1G − 1) B1G. Following the same approach, the achievable minimum-maximal IBW between a pair of switches in a Hybrid-BCube topology is obtained as shown in Table 4.

Our calculations for the multi-path routing case show a significant increase in the achievable IBW between switch nodes in the proposed modified BCube topologies compared to the classic BCube topology, confirming the effectiveness of the proposed topology modifications when a multi-path routing protocol is used.


Table 4. Minimum maximal multiple-path bandwidth in BCube topologies.

Between      Classic BCube   Horizontal BCube      Vertical BCube                       Hybrid BCube
srv ↔ srv    ks B1G          ks B1G                ks B1G                               ks B1G
swc ↔ swc    k1G B1G         k10G B10G + k1G B1G   k10G B10G + (ks − 1)(k1G − 1) B1G    k10G B10G + k1G B1G + (ks − 1)(k1G − 1) B1G
srv ↔ swc    ks B1G          ks B1G                ks B1G                               ks B1G
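One natural way to compute such minimum-maximal multi-path IBW values is as a maximum-flow computation on the capacitated topology graph. The toy sketch below (ours, not the paper's simulator) reproduces the flavour of the swc-swc entries on a two-switch fragment with dual-homed servers.

```python
import networkx as nx

B1G, B10G = 1.0, 10.0    # link bandwidths in Gbps

def add_link(g, u, v, cap):
    """Model an undirected physical link as two directed arcs of equal capacity."""
    g.add_edge(u, v, capacity=cap)
    g.add_edge(v, u, capacity=cap)

g = nx.DiGraph()
# Two switches A and B, four dual-homed servers (one 1G link to each switch,
# as in BCube), plus one direct 10G swc-swc link as in the modified topologies.
for s in range(4):
    add_link(g, ("srv", s), ("sw", "A"), B1G)
    add_link(g, ("srv", s), ("sw", "B"), B1G)
add_link(g, ("sw", "A"), ("sw", "B"), B10G)

print(nx.maximum_flow_value(g, ("sw", "A"), ("sw", "B")))   # 14.0 = B10G + 4*B1G
```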

However, the proposed topology modifications do not change the achievable IBW between a pair of servers or between a server and a switch.

4. Performance metrics

In our simulation analysis, we first investigate the topology characteristics of the proposed modified BCube topologies when they are used as the physical layer of the network. In particular, while considering the impact of component-level failures of the different BCube topology components, i.e., servers, switches, and physical links, we investigate metrics related to the network topology size, such as the maximum and average relative sizes of the network's connected components. Using an approach similar to those of Albert et al. [23] and Farrahi et al. [20], the relative size (RS) of the largest connected component, in relation to the number of existing servers, RSmax, is given by

RS_{max}(t_1) = \max_{i \in I} \{ RS(t_1, i) \} = \frac{\max_{i \in I} \{ n_{srv,i}(t_1) \}}{\sum_{i \in I} n_{srv,i}(t_1)},    (1)

where n_{srv,i}(t_1) denotes the number of servers available in the i-th connected component at time t_1. In fact, RSmax is defined as the maximum of the individual RSs of all connected components at time t_1. Now, using the RS definition given in (1), we further define the maximum absolute RS (ARSmax) as the maximum number of servers in any connected component at time t_1, divided by the total number of servers available in all connected components at the initial time t_0:

ARS_{max}(t_1) = \max_{i \in I} \{ ARS(t_1, i) \} = \frac{\max_{i \in I} \{ n_{srv,i}(t_1) \}}{N_{srv}},    (2)

where N_{srv} denotes the total number of servers initially available in all connected components at the start time t_0, i.e., N_{srv} = \sum_{i \in I} n_{srv,i}(t_0). Note that while RSmax represents a relative size with respect to the current state of the DC network, ARSmax provides a relative size with respect to the initial state of the network. Although each of these two measures has its own importance in the design of DC networks, we only present results for the ARSmax measure in this contribution.

We now calculate the achievable bandwidth of the network with the proposed BCube topologies for the case of a multi-path routing protocol [24]. In our simulations, we also consider component-level failures in order to investigate the impact of failure on the achievable bandwidth between a pair of servers in the different topologies [20]. In this case, we define the average minimum-maximal IBW of the network in the i-th connected component at time t_1 as


IBW(t_1, i) = \mathrm{average}_{\forall\, l, m \in i} \left\{ IBW(srv_l, srv_m) \right\},    (3)

where IBW(srv_l, srv_m) denotes the minimum-maximal IBW between two of the available servers in the i-th connected component. Note that with a multi-path routing protocol, all available paths between srv_l and srv_m are considered for their communication.
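A compact way to compute these two metrics on a snapshot of the topology graph is sketched below (our own code; server nodes are assumed to be labelled ("srv", ...) and every edge to carry a 'capacity' attribute in Gbps).

```python
import itertools
import networkx as nx

def ars_max(g_t1, n_srv_initial):
    """ARS_max of Eq. (2): server count of the largest connected component at
    time t1, relative to the N_srv servers that existed at t0."""
    sizes = [sum(1 for n in comp if n[0] == "srv")
             for comp in nx.connected_components(g_t1)]
    return max(sizes, default=0) / n_srv_initial

def avg_ibw(g_t1, comp):
    """Average minimum-maximal IBW of Eq. (3) over all server pairs of one
    connected component, with the pairwise IBW taken as a max-flow value."""
    d = g_t1.subgraph(comp).to_directed()        # both arc directions per link
    servers = [n for n in comp if n[0] == "srv"]
    flows = [nx.maximum_flow_value(d, u, v)
             for u, v in itertools.combinations(servers, 2)]
    return sum(flows) / len(flows) if flows else 0.0
```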

In the next section, a simulation analysis approach that generates, evaluates, and aggregates snapshots of a DC network over time, taking the failure factor into account, is used to evaluate the performance of the proposed BCube topologies in a DC network. The details of this approach can be found in [20].

5. Simulation results

In this section, some of the previously discussed results are verified using a simulation platform. In order to obtain a realistic evaluation, the platform considers the failure of network components over time [20] in addition to network traffic congestion. To model traffic congestion, a congestion degree parameter γ is introduced: when there is no traffic in the DC network, γ = 0, while a fully congested network has γ = 1. For a specific γ value, the congestion degree of every link is randomly drawn from a truncated Gaussian distribution centered at γ. With all link bandwidths adjusted according to their individual congestion degrees, the IBW is calculated following a snapshot-based Monte Carlo approach [20]. In summary, in a snapshot-based approach, a large number of snapshots is generated and then aggregated to calculate the mean and the confidence interval of the data point corresponding to one configuration of the system. A snapshot assumes that the system state does not depend on its previous, historical states; only the age of the individual components determines their state, according to given distribution functions. This approach, which is applicable to systems without repair, maintenance, or replacement of failed components (as is our case), allows us to cover a larger number of system states than the actual number of runs/snapshots. In terms of the number of runs used to generate the results, we considered 100 runs per configuration (data point). The mean values usually follow the expected curve. In terms of confidence level, for most data points a 95% confidence variation of 10–20% of the mean value has been observed in our experiments; however, around major events, such as when the network is partitioned into two or more connected components because of component failures, variations on the order of 100% of the mean value have been observed.

First, the impact of the introduction of the new BCube topologies on the ARSmax performance is studied using numerical simulations. Considering the nature of the vertical connectivity in the classic BCube among the nodes across the network, a higher ARSmax performance is expected for the proposed Horizontal and Hybrid BCube topologies because they introduce a higher number of horizontal links. To illustrate this observation, the profile of ARSmax for various values of ks has been calculated using simulation results in Fig. 2-(a). As can be observed from the figure, a higher number of ports per server amplifies the advantage of the Horizontal and Hybrid topologies. In contrast, the number of 1 Gbps switch ports has a negligible impact on the performance (see Fig. 2-(b)). This can be attributed to the nature of the 1 Gbps links, which usually run vertically down toward the servers and contribute little to horizontal connectivity.
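The congestion model described at the beginning of this section can be sampled as below (a sketch under our own assumptions: the spread sigma of the truncated Gaussian and the exact way a congestion degree scales a link's bandwidth are not specified in the paper).

```python
import numpy as np
from scipy.stats import truncnorm

def sample_congestion(n_links, gamma, sigma=0.1, seed=None):
    """Per-link congestion degrees drawn from a Gaussian centred at gamma and
    truncated to [0, 1]; gamma = 0 means no traffic, gamma = 1 full congestion."""
    a, b = (0.0 - gamma) / sigma, (1.0 - gamma) / sigma    # standardized bounds
    return truncnorm.rvs(a, b, loc=gamma, scale=sigma,
                         size=n_links, random_state=seed)

def congested_bandwidths(nominal, gamma, seed=None):
    """One plausible adjustment: scale each nominal link bandwidth by
    (1 - congestion degree) before the per-snapshot IBW computation."""
    c = sample_congestion(len(nominal), gamma, seed=seed)
    return np.asarray(nominal) * (1.0 - c)

# e.g. congested_bandwidths([1.0, 1.0, 10.0], gamma=0.95)
```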


Fig. 2. The impact of the proposed modified BCube topologies on the ARSmax performance with k10G = 4, (a) for k1G = 4, 10 and fixed ks = 2, (b) for ks = 2, 5 and fixed k1G = 4.

Fig. 3. The IBW performance of the classic BCube topology versus the proposed Hybrid-BCube topology for k10G = 4, ks = 5, and various values of k1G. Note that the curves differ in both color and thickness to make them easier to distinguish.

The impact of traffic congestion on the performance of the BCube topologies is illustrated in Fig. 3, where a high degree of congestion, γ = 0.95, is studied. As can be seen from the figure, the classic BCube suffers considerably from congestion, especially when the number of 1 Gbps switch ports is low. In contrast, the Hybrid-BCube topology shows a stable and robust behavior regardless of the number of pSwc (= k1G) ports. It is worth mentioning that the decreasing behavior of the IBW over time is the direct result of component failures without any maintenance or replacement.

Fig. 4 provides a comparison between the classic BCube and the proposed modified BCube topologies with k10G = 4. In Fig. 4-(a), the impact of the variation of the γ value on the IBW performance of the proposed topologies is shown; three values of γ are used: γ = 0.05, 0.5, and 0.95. As can be seen,

both the Horizontal- and Hybrid-BCube topologies show better performance. As expected, with an increase in the degree of congestion, the performance of all topologies is affected. In Fig. 4-(b), the impact of the number of 1 Gbps switch ports is illustrated; the case of low pSwc shows a higher degree of degradation, especially for the classic and Vertical-BCube topologies. Finally, in Fig. 4-(c), the impact of the number of server ports, i.e., the number of BCube levels, is studied. As expected, with increasing pSrv and number of levels, the performance of the Horizontal- and Hybrid-BCube topologies is significantly improved; in particular, the time to degrade to half of the initial performance is extended roughly tenfold when pSrv is increased from 2 to 5. Although the classic and Vertical-BCube topologies also benefit from better IBW at time intervals near t0, they show a very sensitive behavior with respect to failure as time goes on, and the benefits of a higher number of levels diminish.


In general, it is concluded that both the Horizontal- and Hybrid-BCube topologies show good performance, and in some cases the Horizontal-BCube even performs better. However, as will be discussed in the next section, the flexibility of the Hybrid-BCube in providing more arbitrary topologies leads us to conclude that this topology is the first choice for designing the physical layer of DC networks.

Finally, to briefly quantify the cost of the proposed modifications to the classic BCube topology, we investigate the number of additional switch ports required to form a modified BCube topology compared to the number of ports of its associated classic BCube. As explained in Section 3, the number of switch ports in a classic BCube topology is N_p^{Cls} = k1G k1G ks. Following the same approach, the number of switch ports in a modified BCube topology can be calculated as N_p^{Hyb} = k1G k1G ks + k10G k1G ks. We can now define the additional-port ratio ηp:

η_p = \frac{N_p^{Hyb}}{N_p^{Cls}}.    (4)

After substituting N_p^{Hyb} and N_p^{Cls} into (4), we have

η_p = 1 + \frac{k_{10G}}{k_{1G}},    (5)

which can be interpreted as the associated additional cost on a per-switch-port basis. For a case of k10G = 4 and k1G = 10, for example, the overhead cost would be 40%; as k1G increases further, the overhead cost decreases accordingly. In addition, considering the shift of the applications run in DCs toward lower latency and higher data-exchange throughput between nodes, high-end network switches are becoming the norm in many DC designs, especially to avoid having to replace the hardware at a later time.

6. A discussion on two future applications

So far, we have illustrated the high resiliency of the proposed modified topologies with respect to both failure and traffic congestion, in terms of their high interconnection bandwidth and relative-size values. We now take this opportunity to discuss some benefits of the proposed topologies in a number of future applications, namely the topology matching and topology on-demand concepts. Generally, network virtualization techniques aim at decoupling the functions of a classic network provider into two key parts, i.e., network infrastructure providers (InPs) and network service providers (SePs). InPs manage the physical infrastructure of the network, while SePs create virtual networks (VNets) by aggregating and sharing resources from multiple InPs and offer end-to-end network services [25].

A VNet generally consists of different types of virtual nodes and virtual links; in fact, each VNet is an overlay of virtual nodes and links that maps onto a subset of the underlying physical network resources. It is worth noting that a VNet in a virtualization environment can specify a network topology based on its needs, such as low latency for high-performance computing workloads or a high interconnection bandwidth for data-processing workloads [26]. In the following, we explain the advantages of the proposed modified topologies in the context of the topology matching and topology on-demand concepts in a virtual environment, which could considerably reduce the overhead of virtual networks and also bring dynamic adaptability to variations in network requirements.

6.1. Topology matching: beyond topology mapping in virtual networks

Topology mapping is a well-known operation in virtual networks that allows an arbitrary virtual network topology to be mapped onto the fixed topology of the physical layer. In a virtualization environment, topology mapping stands for expressing a requested VNet topology, sent to the service provider, in terms of specific layout patterns of the interconnections of network elements, such as links and nodes, along with a set of specific service-oriented constraints, such as CPU capacity and link bandwidth [27]. Various parameters are considered in designing a topology mapping algorithm. Despite the freedom that this approach provides, it can impose a considerable overhead both on the mapping action itself and on the operation of the network, because of the non-optimal solutions obtained from a particular algorithm, especially when many VNets are mapped onto the same physical layer. This disadvantage can be very significant in the case of large physical networks, which are more common in resource-sharing approaches such as cloud computing. In topology mapping, there is always a downward mapping between the virtual-layer elements and the physical-layer elements. This one-way approach can result in cases where the mapped topology on the physical layer and the originally requested topology have a significant topological distance, despite a negligible distance in terms of the requested service-oriented constraints. This disparity, which results from the VNet being unaware of the actual physical topology, can lead to a premature default of the service because of component failures or DC traffic congestion.


Fig. 5. An illustration of topology matching and mapping concepts of a VNet on physical structures of classic and Hybrid-BCube topologies.

Herein, we propose the concept of topology matching instead of topology mapping. In the proposed topology-matching approach, the topology of a VNet is the outcome of a negotiation and exchange interaction between the VNet, the SeP, and the InP. In such a process, a topology suggested by the VNet is simply denied if the SeP/InP cannot match it onto the actual physical topology, and the feedback that the InP provides to the VNet helps them converge to a working, matchable topology that fits the actual DC topology. Although the concept of topology matching is appealing, it can easily fail to produce any converged topology if the physical layer does not have enough complexity and connectivity in its topology. Motivated by the results obtained in this paper for the modified BCube topologies proposed in Section 3, we believe these modifications to the physical network platform, especially the Hybrid-BCube topology, can increase the chance of providing appropriate matched topologies for a VNet request while conforming to the interconnection bandwidth requested by the VNet. As illustrated in Fig. 5-(a) and -(b), the virtual network VN1 is requested as an overlay on physical DC networks with the classic and Hybrid-BCube topologies, respectively. As shown in the VN1 structure, two high-bandwidth connections are requested between two horizontal switches at each level. As can be seen from the figure, the requested high-bandwidth connections in VN1 cannot be provided by a classic BCube topology, and the request is denied by both the topology mapping and topology matching techniques, simply because of the structural limitations of the classic BCube topology. However, as shown in Fig. 5-(b), with the proposed Hybrid-BCube topology, the requested high-bandwidth connections between the switches can be allocated to VN1 while the topology matching technique is used to form the VN1 structure. It is worth mentioning that these bandwidth requests are satisfied without requiring any mapping, thanks to the nature of the Hybrid-BCube topology used for the physical layer.
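As a minimal illustration of the matching check (a crude stand-in only: it tests whether the requested VNet topology appears as an induced subgraph of the physical topology and ignores the bandwidth constraints that a real matcher must honour), one could write:

```python
import networkx as nx
from networkx.algorithms import isomorphism

def can_match(physical: nx.Graph, requested: nx.Graph) -> bool:
    """Accept a VNet request only if its topology is subgraph-isomorphic to the
    physical topology; otherwise the SeP/InP would return feedback and ask the
    VNet to revise its request, as in the negotiation loop described above."""
    return isomorphism.GraphMatcher(physical, requested).subgraph_is_isomorphic()
```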

6.2. Topology on-demand (ToD)

A direct consequence of the topology-matching vision is in the handling of dynamic changes in VNet requirements. Many applications do not require a constant high bandwidth between their nodes; rather, they need temporary, burst-like high-IBW episodes during their operation. Such ephemeral requirements are usually addressed by dynamically changing the bandwidth allocated to the VNet along the links of its topology. In contrast, the interactive concept of the topology-matching approach can be generalized to handle temporal changes in the required bandwidth. In this approach, which we call Topology on-Demand (ToD), the VNet communicates its specific new requirements, in terms of location and bandwidth, to the SeP/InP, and the InP proposes a new topology that can handle the new traffic requirements. In particular, a topology with certain features, such as IBW, is provided when and where the VNet users need it; in other words, the topology can be dynamically changed when the VNet needs it, and it is maintained in an on-demand style. In this context, a virtual network architecture that operates in a ToD fashion has been proposed by Guo et al. [28]. This architecture, called SecondNet, can be built on top of many existing DC network topologies, including the modified BCube topologies proposed in this paper. More specifically, in the ToD approach the VNet does not need to be aware of the structure of the physical network. Instead, at the moment of a high traffic spike, a ToD middleware initiates a dynamic change request without requiring any explicit action from the application's client beyond specifying the bandwidth requirements; the ToD requests could also be initiated by the applications themselves prior to their high-bandwidth actions.

We now present an example to better illustrate these concepts in the context of our proposed modified topologies. Fig. 6 shows an example of modifying the topologies of two VNets overlaid on the same physical topology. As illustrated in Fig. 6-(a), the virtual networks VN1 and VN2 are overlaid on the same physical DC network with the Hybrid-BCube topology. In the case of VN1, horizontal high-bandwidth links are requested, while for VN2 a full high-bandwidth mesh between the switches is requested. As shown in Fig. 6-(b), these requests are satisfied without requiring any mapping, thanks to the nature of the Hybrid-BCube topology used for the physical layer.


Fig. 6. An illustration of two VNets overlaid on the same physical Hybrid-BCube topology, addressing on-demand topology changes.

In the figure, the mapping between the virtual nodes and the physical nodes, and also between the virtual links and the physical links, is shown. It is worth mentioning that, for example, the high-bandwidth link between the two switch nodes of the VN1 network is created using a 2-hop path passing through a third switch in the physical-layer topology. Because of limited space, actual matching algorithms for both the topology matching and ToD concepts will be investigated in our future work.

7. Conclusion and future prospects

This paper investigated a group of DC network topologies with the dynamic bandwidth efficiency and failure resiliency required by the active traffic in a virtualized DC network. A classic BCube topology was used as the benchmark throughout the paper. In particular, three main approaches were proposed to modify the structure of a classic BCube topology toward improving its structural features and maximum achievable interconnection bandwidth for both single- and multi-path routing scenarios. We then ran an extensive simulation program to compare the performance of the proposed topologies when component failures and traffic congestion were present. Our simulation experiments showed the efficiency of the proposed modified topologies compared to the benchmark in terms of bandwidth availability and failure resiliency. We further introduced two applications of the proposed topologies in the design of smart DC networks. Finally, it was concluded that both the Horizontal- and Hybrid-BCube topologies provide good performance, and in some cases the Horizontal-BCube even shows slightly better performance; however, the flexibility of the Hybrid-BCube in providing more arbitrary topologies led us to conclude that this topology is the first choice for designing the physical layer of DC networks.

Acknowledgments

The authors thank NSERC of Canada for financial support under grant CRDPJ/424371-11 and under the Canada Research Chair in Sustainable Smart Eco-Cloud, and also thank MITACS of Canada. The authors are grateful to Yves Lemieux (Ericsson Canada, Montreal, Canada) and Dr. Kim Khoa Nguyen (Ecole de Technologie Superieure, Montreal, Canada) for their valuable comments on this paper.

References

[1] J. Koomey, Growth in Data Center Electricity Use 2005 to 2010, Analytics Press, 2011.
[2] P.-X. Gao, A.-R. Curtis, B. Wong, S. Keshav, It's not easy being green, in: Proceedings of the ACM SIGCOMM, 2012, pp. 1–12.
[3] F. Farrahi Moghaddam, R. Farrahi Moghaddam, M. Cheriet, Carbon-aware distributed cloud: multi-level grouping genetic algorithm, Cluster Comput. 18 (1) (2015) 378–382.
[4] A. Kulseitova, A.T. Fong, A survey of energy-efficient techniques in cloud data centers, in: IEEE International Conference on ICT for Smart Society (ICISS'13), 2013, pp. 1–5.
[5] A. Beloglazov, R. Buyya, Energy efficient resource management in virtualized cloud data centers, in: IEEE/ACM International Conference on Cluster, Cloud and Grid Computing (CCGRID'10), Washington, DC, USA, 2010, pp. 826–831.
[6] M.F. Bari, R. Boutaba, R. Esteves, L. Granville, M. Podlesny, M. Rabbani, Q. Zhang, M.F. Zhani, Data center network virtualization: a survey, IEEE Commun. Surv. Tutorials 15 (2) (2013) 909–928.
[7] M. Al-Fares, A. Loukissas, A. Vahdat, A scalable, commodity data center network architecture, SIGCOMM Comput. Commun. Rev. 38 (4) (2008) 63–74.
[8] B. Heller, S. Seetharaman, P. Mahadevan, Y. Yiakoumis, P. Sharma, S. Banerjee, N. McKeown, ElasticTree: saving energy in data center networks, in: Proceedings of the 7th USENIX Conference on Networked Systems Design and Implementation, 2010, pp. 1–16.
[9] H. Shirayanagi, H. Yamada, K. Kono, Honeyguide: a VM migration-aware network topology for saving energy consumption in data center networks, in: IEEE Symposium on Computers and Communications (ISCC'12), 2012, pp. 460–467.
[10] N. Farrington, et al., Helios: a hybrid electrical/optical switch architecture for modular data centers, in: SIGCOMM, 2010, pp. 339–350.
[11] K. Chen, C. Hu, X. Zhang, K. Zheng, Y. Chen, A.V. Vasilakos, Survey on routing in data centers: insights and future directions, IEEE Netw. 25 (4) (2011) 6–10.
[12] K. Chen, et al., WaveCube: a scalable, fault-tolerant, high-performance optical data center architecture, in: IEEE Conference on Computer Communications (INFOCOM'15), 2015, pp. 1903–1911.
[13] A. Pal, K. Kant, RODA: a reconfigurable optical data center network architecture, in: IEEE 40th Conference on Local Computer Networks (LCN), 2015, pp. 561–569.
[14] R. Rojas-Cessa, Y. Kaymak, Z. Dong, Schemes for fast transmission of flows in data center networks, IEEE Commun. Surv. Tutorials 17 (3) (2015) 1391–1422.
[15] J.A. Aroca, A.F. Anta, Bisection (band)width of product networks with application to data centers, IEEE Trans. Parallel Distrib. Syst. 25 (3) (2014) 570–580.
[16] C. Guo, G. Lu, D. Li, H. Wu, X. Zhang, Y. Shi, C. Tian, Y. Zhang, S. Lu, BCube: a high performance, server-centric network architecture for modular data centers, in: SIGCOMM'09, Barcelona, Spain, 2009, pp. 63–74.
[17] Sun's Modular Datacenter, http://docs.oracle.com/cd/E19115-01/mod.dc.s20/index.html.
[18] HP Performance Optimized Datacenter (POD), http://h18000.www1.hp.com/products/servers/solutions/datacentersolutions/pod/.
[19] IBM Portable Modular Data Center, http://www-935.ibm.com/services/us/en/it-services/data-center/it-facilities-assessment-design-and-constructionservices-portable-modular-data-center/index.html.
[20] R. Farrahi Moghaddam, V. Asghari, F. Farrahi Moghaddam, M. Cheriet, Failure-based lifespan performance analysis of network fabric in modular data centers: toward deployment in Canada's North, submitted (2014).
[21] H. Wu, G. Lu, D. Li, C. Guo, Y. Zhang, MDCube: a high performance network structure for modular data center interconnection, in: SIGCOMM'09, Barcelona, Spain, 2009, pp. 25–36.


[22] N. Meghanathan, Performance comparison of link, node and zone disjoint multi-path routing strategies and minimum hop single path routing for mobile ad-hoc networks, Int. J. Wireless Mobile Netw. 2 (4) (2010) 1–13.
[23] R. Albert, H. Jeong, A. Barabási, Error and attack tolerance of complex networks, Nature 406 (6794) (2000) 378–382.
[24] T. Nath Subedi, K.K. Nguyen, M. Cheriet, OpenFlow-based in-network layer-2 adaptive multipath aggregation in data centers, Comput. Commun. 61 (2015) 58–69.
[25] M.K. Chowdhury, R. Boutaba, Network virtualization: state of the art and research challenges, IEEE Commun. Mag. 47 (9) (2009) 20–26.

[26] K.C. Webb, A.C. Snoeren, K. Yocum, Topology switching for data center networks, in: 11th USENIX Conference on Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services (Hot-ICE'11), 2011, pp. 1–6.
[27] C. Wang, T. Wolf, Virtual network mapping with traffic matrices, in: Seventh ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS'11), 2011, pp. 225–226.
[28] C. Guo, G. Lu, H. Wang, S. Yang, C. Kong, P. Sun, W. Wu, Y. Zhang, SecondNet: a data center network virtualization architecture with bandwidth guarantees, in: International Conference on Emerging Networking EXperiments and Technologies (Co-NEXT'10), 2010, pp. 1–12.
