Accepted Manuscript
Resource Management and Control in Converged Optical Data Center Networks: Survey and Enabling Technologies

Weigang Hou, Yejun Liu, Lei Guo, Cunqian Yu, Yue Zong

PII: S1389-1286(15)00171-1
DOI: 10.1016/j.comnet.2015.05.011
Reference: COMPNW 5574

To appear in: Computer Networks

Received date: 10 August 2014
Revised date: 22 January 2015
Accepted date: 18 May 2015
Please cite this article as: Weigang Hou , Yejun Liu , Lei Guo , Cunqian Yu , Yue Zong , Resource Management and Control in Converged Optical Data Center Networks: Survey and Enabling Technologies, Computer Networks (2015), doi: 10.1016/j.comnet.2015.05.011
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Resource Management and Control in Converged Optical Data Center Networks: Survey and Enabling Technologies
Weigang Hou1,2,3, Yejun Liu1, Lei Guo*1, Cunqian Yu1, Yue Zong1

1 College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
2 State Key Laboratory of Networking and Switching Technology, Beijing 100876, China
3 State Key Laboratory of Information Photonics and Optical Communications, Beijing 100876, China

*Corresponding author: Dr. Lei Guo
P. O. Box 365, College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
Phone/Fax: +86-24-83684219
Email: [email protected]
Abstract: In addition to the optical interconnection among servers within an Intra Data Center (IDC), the optical interconnection of geographically distributed data centers has become increasingly important, because one data center may be located far from another. The converged Optical and Data Center Network (ODCN) has therefore emerged to meet this need. In the ODCN, each optically interconnected IDC is located at the edge of the optical backbone. In this article, we first make an extensive survey of resource management and control in the ODCN, and we find that: 1) for this new network paradigm, the intelligent coexistence of heterogeneous technologies should be considered, because the ODCN must satisfy diverse and highly dynamic network services; 2) the integrated virtualization of backbone bandwidth and computing resources should be performed for resource management in ODCNs, with the objective of improving the utilization of the underlying infrastructure; 3) after integrated virtualization, a set of virtual networks is generated, each consisting of virtual lightpaths and virtual machines. However, a static virtual network satisfies only a certain range of Service Level Agreements (SLAs) and adapts only to a particular network status. When the SLA varies significantly or the Quality of Transmission (QoT) degrades, it is necessary to trigger dynamic planning for virtual network reconfiguration; 4) to decrease the control overhead and the decision-making delay of dynamic planning, it is practical to embed a highly effective network control plane in the intelligent ODCN. Consequently, we make a blueprint in which intelligent coexistence, integrated virtualization and dynamic planning are executed for resource management in ODCNs. Some preliminary works and simulation results will guide the future work.

Keywords: Converged optical and data center network; intelligent coexistence; integrated virtualization; dynamic planning; network intelligence
1. Introduction

In recent years, cloud computing has gradually replaced traditional office computing [1]. For service providers, cloud computing provides the opportunity to achieve enormous profits; for users, it makes service access convenient and quick. To support this application, server virtualization has been applied to data centers, i.e., Cloud Data Centers (CDCs) such as Amazon EC2 [2], Microsoft Azure [3] and Infrastructure-as-a-Service (IaaS) offerings [4]. In the CDC, by decoupling services from the underlying servers, the utilization of computing resources is improved in a sharing manner, and the operational cost is reduced.
Since optical transmission has unique advantages (e.g., low energy consumption, huge capacity and high reliability), some academic and industrial communities have embedded optical interconnection into the CDC [5, 6]. In fact, however, one data center may be far from another, so the optical interconnection of geographically distributed data centers also becomes important. The converged Optical and Data Center Network (ODCN) thus emerges as a large-scale networking paradigm, in which each optically interconnected CDC is located at the edge of the optical backbone.
Once CDCs are involved in the ODCN, the "pay-as-you-go" transaction model makes network services diverse and dynamic. However, the existing Wavelength-Division-Multiplexing (WDM) optical transmission is simplistic and poorly adapted to satisfying high-level services. One promising solution is the intelligent coexistence of heterogeneous optical transmission technologies and elements during the phase of network deployment. The ODCN should therefore embrace a diversity of new approaches, such as elastic optical networking [7-16], OFDM (Orthogonal Frequency Division Multiplexing)-based CDC interconnection [17], and Cognitive Heterogeneous Reconfigurable Optical Networking (CHRON) [18-24].
Additionally, similar to the CDC, it is necessary to perform the integrated virtualization of backbone bandwidth and computing resources for resource management in ODCNs. In fact, as one of the forerunners of the software-defined network, the global desktop virtualization vendor Nicira makes the following statement in its white paper: the virtualization of the data center is only the tip of the iceberg [25]. To satisfy the flexible and programmable requirements of the ODCN, integrated virtualization has become imminent. The virtualization of wired networks has already been reported, and Huawei has made this topic a strategic goal. We can infer that, with the unique advantages of optical interconnection, integrated virtualization will become technically sound and industrially practical in the ODCN. This integrated virtualization is sometimes described as link- and node-level mappings, i.e., virtual lightpaths onto physical links and virtual machines onto physical servers. Here, a virtual lightpath is a bridge between a user and the destination data center, or between two different data centers, and a virtual machine is the software implementation of a machine (e.g., a computer) that executes programs like a physical machine. This collaborative mapping process is also called virtual network embedding, because a group of virtual lightpaths and virtual machines comprises an entire virtual network.

In fact, however, a static virtual network satisfies only a certain range of Service Level Agreements (SLAs) and adapts only to a particular network status. When the SLA varies significantly or the Quality of Transmission (QoT) degrades in the ODCN, it is necessary to reconfigure the virtual networks, i.e., to perform dynamic planning, so as to ensure service quality and continuity. This raises a new problem: what factors trigger dynamic planning? Apart from the degradation of SLA and QoT, the following triggering factors exist. First, the energy consumption of cooling devices rises dramatically in the CDC, which makes energy efficiency an important triggering factor. Second, a fiber link failure results in huge data loss, so the resilience of the optical backbone in the ODCN must be improved. Meanwhile, facing hostile attacks among virtual machines within the same server, we should not ignore ODCN security, because private or confidential information may be leaked. In a word, ODCN resilience and security are also triggering factors. Last but not least, with the frequent arrival and departure of requests, the ODCN generates fragments of backbone bandwidth and computing resources, which also motivates re-managing resources through dynamic planning.
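The triggering factors above amount to a simple reactive check. The following sketch illustrates this logic; all threshold values and metric names are hypothetical assumptions for illustration, not values taken from the surveyed literature.

```python
# Illustrative reactive trigger check for dynamic planning in an ODCN.
# All thresholds and status-field names are hypothetical.

SLA_VIOLATION_LIMIT = 0.05   # tolerated fraction of violated SLAs (assumed)
QOT_OSNR_MIN_DB = 12.0       # minimal acceptable OSNR in dB (assumed)
ENERGY_BUDGET_KW = 500.0     # power budget of a CDC in kW (assumed)
FRAG_LIMIT = 0.4             # tolerated resource-fragmentation ratio (assumed)

def should_trigger_replanning(status: dict) -> list:
    """Return the list of factors that trigger virtual network reconfiguration."""
    factors = []
    if status["sla_violation_ratio"] > SLA_VIOLATION_LIMIT:
        factors.append("SLA degradation")
    if status["worst_osnr_db"] < QOT_OSNR_MIN_DB:
        factors.append("QoT degradation")
    if status["power_kw"] > ENERGY_BUDGET_KW:
        factors.append("energy inefficiency")
    if status["failed_links"] or status["attacked_vms"]:
        factors.append("resilience/security event")
    if status["fragmentation_ratio"] > FRAG_LIMIT:
        factors.append("resource fragmentation")
    return factors

print(should_trigger_replanning({
    "sla_violation_ratio": 0.08, "worst_osnr_db": 15.0, "power_kw": 350.0,
    "failed_links": [], "attacked_vms": [], "fragmentation_ratio": 0.55}))
# -> ['SLA degradation', 'resource fragmentation']
```

In a real control plane each of these fields would be fed by monitoring (e.g., OPM readings for QoT), and the returned factor list would select the appropriate reconfiguration strategy.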
The above dynamic planning is a reactive process in which a series of emergency measures is executed only after a triggering factor occurs. To decrease the control overhead and the decision-making delay of dynamic planning, it is practical to embed a highly effective control plane in the intelligent ODCN. This control plane self-analyzes the SLA information and self-probes the underlying network status. According to the detected information, it makes decisions by itself (i.e., cognition) and sends the corresponding actions to the underlying components via an extended protocol such as OpenFlow [26]. More importantly, self-learning is also utilized to perform data mining of historical experience, so that proactive dynamic planning can be performed before a triggering factor occurs.
Undoubtedly, the intelligent coexistence, integrated virtualization, dynamic resource management,
and proactive resource control become essential parts in the ODCN. The rest of this article is organized as follows. Section 2 makes an extensive survey on previous resource management and control in the ODCN and Section 3 provides some enabling technologies for the future resource management and control in the ODCN, from the perspectives of data plane, control plane and enhanced database. Finally, we conclude our work in Section 4.
2. Survey and future challenges

In this section, we first make an extensive survey of resource management and control in ODCNs before presenting the future challenges.

2.1 Survey on existing works

There are existing solutions focusing on the convergence of optical and data center networks, integrated virtualization, dynamic resource management (dynamic planning), and resource control. Although these works have their advantages, they cannot fully satisfy the requirements of the ODCN, or they cannot be directly applied to it. Fortunately, some outstanding techniques are worth learning from and can be modified to make improvements for ODCNs.
Optical interconnection for intra data center
To handle huge amounts of data, multiple servers are interconnected by high-bandwidth switches to form a CDC. Since electronic devices incur an enormous power consumption cost, the optical interconnection of servers has begun to receive extensive attention, and a set of architectures has been proposed, for example, c-Through [27], Helios [28], DOS [29], Proteus [30] and Petabit [31]. However, these architectures share the same problem: the CDC merely covers a single organization or only one region. This cannot satisfy high-level services such as the collaborative research of geographically distributed institutions.

More recently, a flexible-bandwidth and OFDM-based CDC interconnection was presented in [17]. This CDC architecture has higher spectrum utilization than the WDM-based approach, but it still applies to only a single CDC.
Optical interconnection among multiple data centers
In [1], R. Tucker emphasized a cloud-computing-oriented and optically-interconnected architecture comprising access, edge and backbone layers. In this network paradigm, each data center is located at the edge of the optical backbone. In [32], slightly different from the above architecture, the optimal location of each data center within the optical backbone needs to be determined so that the total energy consumption can be minimized. Aside from programmable tasks initiated at a certain optical backbone node, the authors in [32] also considered the traffic intercommunicated among data centers. We therefore organize the following survey according to task type. Meanwhile, we focus on the scenario in which each data center is located at the edge of the optical backbone, because a series of classical applications (e.g., big data transmission) is performed under this scenario.

In terms of a programmable task initiated by one optical backbone node, anycast routing1 should be executed to establish the connection between that node and the selected data center. Here, the selected data center is the most appropriate one among all data centers to complete the task.

1 In anycast routing, the user is not concerned about the location of the selected data center, as long as the SLA can be satisfied.
The selected data center is usually far away from the optical backbone node because it is located at the border, so the data transmission overhead increases unexpectedly. For this reason, in [33], the Optical Cross-Connect (OXC) was integrated with a mini IT-server. Using this kind of IT-OXC, a user can locally access small-sized computing resources instead of visiting a distant data center, which reduces the transmission overhead. On the other hand, some programmable tasks must be completed by multiple data centers. In this case, manycast routing should be executed to build a tree-based connection between the user and a list of selected data centers. Thus, in [34], the EAGLE mechanism was presented to support manycast routing.
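The difference between the two routing principles can be sketched as a simple selection problem. In this illustrative snippet the path costs and SLA predicate are given directly; in practice they would come from a routing algorithm and the SLA translation. All names are assumptions, not APIs from the surveyed works.

```python
# Hypothetical sketch of anycast vs. manycast data-center selection.
# 'path_cost' maps each candidate data center to the cost of reaching it.

def anycast_select(candidates, path_cost, meets_sla):
    """Pick the single most appropriate (cheapest SLA-satisfying) data center."""
    feasible = [dc for dc in candidates if meets_sla(dc)]
    return min(feasible, key=lambda dc: path_cost[dc]) if feasible else None

def manycast_select(candidates, path_cost, meets_sla, k):
    """Pick the k cheapest SLA-satisfying data centers (leaves of the tree)."""
    feasible = sorted((dc for dc in candidates if meets_sla(dc)),
                      key=lambda dc: path_cost[dc])
    return feasible[:k] if len(feasible) >= k else None

cost = {"DC1": 4, "DC2": 2, "DC3": 7}
ok = lambda dc: dc != "DC3"          # assume DC3 cannot meet the SLA
print(anycast_select(cost, cost, ok))       # -> DC2
print(manycast_select(cost, cost, ok, 2))   # -> ['DC2', 'DC1']
```

Anycast returns one destination for a task a single data center can serve, while manycast returns a set of destinations to which a tree-based connection is then built.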
However, for both anycast and manycast routing, the existing solutions still neglect some important issues. For example, the optical interconnection is neglected, and server virtualization is not investigated (e.g., [33] and [34] merely perform the link-level mapping between virtual lightpaths and physical links), so the utilization of the underlying servers cannot be maximized.
Fig. 1 OFDM-based interconnection of data centers
In terms of the traffic intercommunicated among data centers, an elastic channel capable of extending or compressing its capacity becomes necessary. In [35, 36], the traditional WDM backbone was replaced with an elastic optical backbone for the optical interconnection of data centers. As shown in Fig. 1, using Bandwidth-Variable (BV) transponders, continuous and partially overlapped subcarriers are appropriately allocated to the traffic demand, which saves more spectrum than the WDM-based solution. The virtual lightpath becomes elastic in terms of bandwidth provisioning by using BV Wavelength-Selective Switches (WSSs). Furthermore, both link- and node-level mappings were performed in [35, 36] to minimize the number of blocked virtual network embedding requests. In these works, however, service- or network-status-driven dynamic planning and network intelligence are not exploited for resource management and control in ODCNs.
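The two-stage structure of such an embedding (node-level mapping followed by link-level mapping) can be sketched as follows. This is a minimal greedy illustration under assumed data structures, not the actual algorithms of [35, 36]: virtual machines are placed on the server with the most residual CPU, and each virtual lightpath is routed with a shortest path that has enough free bandwidth.

```python
# Minimal two-stage virtual network embedding sketch (hypothetical
# capacities and topology). A real implementation would also roll back
# partial allocations when a request is blocked.
import heapq

def embed(vnet, servers, links):
    """vnet: {'vms': {vm: cpu}, 'vlinks': [(vm_a, vm_b, bw)]}.
    servers: {server: free_cpu}; links: {(u, v): free_bw} (undirected)."""
    placement = {}
    for vm, cpu in vnet["vms"].items():                 # node-level mapping
        host = max(servers, key=lambda s: servers[s])   # most residual CPU
        if servers[host] < cpu:
            return None                                 # blocked request
        servers[host] -= cpu
        placement[vm] = host

    def dijkstra(src, dst, bw):                         # link-level mapping
        dist, queue = {src: 0}, [(0, src, [src])]
        while queue:
            d, u, path = heapq.heappop(queue)
            if u == dst:
                return path
            for (a, b), free in links.items():
                if free < bw:
                    continue                            # not enough bandwidth
                v = b if a == u else a if b == u else None
                if v is not None and d + 1 < dist.get(v, float("inf")):
                    dist[v] = d + 1
                    heapq.heappush(queue, (d + 1, v, path + [v]))
        return None

    routes = {}
    for a, b, bw in vnet["vlinks"]:
        path = dijkstra(placement[a], placement[b], bw)
        if path is None:
            return None                                 # blocked request
        for u, v in zip(path, path[1:]):                # reserve bandwidth
            key = (u, v) if (u, v) in links else (v, u)
            links[key] -= bw
        routes[(a, b)] = path
    return placement, routes

placement, routes = embed(
    {"vms": {"vm1": 4, "vm2": 4}, "vlinks": [("vm1", "vm2", 10)]},
    {"S1": 8, "S2": 8}, {("S1", "S2"): 100})
print(placement)   # -> {'vm1': 'S1', 'vm2': 'S2'}
```

A request is "blocked" exactly when either stage fails, which matches the blocking-rate objective listed for [35, 36] in Table I.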
Integrated virtualization and dynamic planning
R. Tucker described the integrated virtualization as a classic Logistics and Supply Chain (LSC) problem in [1]: the virtual lightpath is similar to a road, while the data center can be seen as a warehouse. As mentioned above, after integrated virtualization, a static virtual network satisfies only a certain range of SLAs, so dynamic planning needs to be triggered on the basis of certain factors. Obviously, dynamic planning is a special version of the LSC problem with a certain triggering factor. In this subsection, we survey the existing solutions for integrated virtualization and dynamic planning, organized by the transmission technique used in the optical backbone.
In terms of the WDM optical interconnection among data centers, for both integrated virtualization and dynamic planning, the existing solutions are usually achieved by using anycast or manycast routing, and these works mainly focus on programmable tasks initiated by one optical backbone node. In addition, the virtual lightpath is always treated as a bridge between the user and the selected data center. In [37], the virtual network embedding request was abstracted into the requirement of establishing a virtual lightpath, and these requests were treated sequentially. For each request, the updated backbone bandwidth and computing resources are the inputs of an Integer Linear Programming (ILP) model that returns the virtual lightpath to be established. In [38], a mathematical model was presented to reflect the linear energy growth of physical links and servers. By learning the variable energy-consumption information, the most energy-efficient virtual lightpath can be determined; the proposed approach was shown to converge to the global optimum. Considering the multi-priority requirements of establishing virtual lightpaths, a multi-period virtual network embedding solution was proposed in [39, 40]. Here, a high-priority requirement must be processed instantly, while a low-priority requirement can be served anytime within a maximal delay. With an accurate estimation of time- and priority-varying requirements, the least energy-consuming virtual lightpath can be found in the current time period. In [41], multiple virtual networks were generated simultaneously. Each virtual network has a list of virtual lightpaths and is unique to a specific user group, similar to a secure Virtual Private Network (VPN).

The aforementioned solutions improve energy efficiency with a subset of active OXCs. However, in a highly dynamic cloud computing environment, it is impracticable to make frequent start-up operations. Moreover, integrated virtualization is not yet achieved. Thus, in our preliminary work [42], we performed integrated virtualization under the scenario of power outage and evolving
recovery. Here, only some servers run normally due to the limited power budget, which results in a large number of blocked virtual network embedding requests. We integrated traffic grooming2 [43] with server consolidation3 [44] to improve resource utilization in a sharing manner, which mitigates the service rejection caused by excessive resource consumption. We also re-managed resources using anycast routing and resource balancing4 to reconfigure the virtual network during the dynamic planning process. By combining integrated virtualization and dynamic planning, we obtained a considerable improvement in the blocking rate. However, we focused only on improving SLAs; other triggering factors were not taken into account. In addition, the WDM-based approach cannot match the fixed grid to the actual bandwidth requirement, so spectrum wastage remains unsolved. Most importantly, without super and elastic channels, the QoT of the virtual lightpath will degrade once the required transmission rate exceeds 100 Gb/s, 400 Gb/s or even 1 Tb/s.
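Both traffic grooming and server consolidation reduce, in their simplest form, to one-dimensional bin packing. The sketch below uses a plain first-fit rule with hypothetical capacities; it illustrates the sharing idea only, not the actual heuristics of [42]-[44].

```python
# Illustrative first-fit packing shared by traffic grooming (streams into
# lightpaths) and server consolidation (VMs into servers). Capacities are
# hypothetical.

def first_fit(items, capacity):
    """Pack each demand into the first bin with enough residual capacity."""
    bins = []                       # each bin: [residual, [packed items]]
    for size in items:
        for b in bins:
            if b[0] >= size:
                b[0] -= size
                b[1].append(size)
                break
        else:
            bins.append([capacity - size, [size]])
    return bins

# Traffic grooming: 10-30 Gb/s streams onto 40 Gb/s lightpaths.
lightpaths = first_fit([10, 30, 10, 20, 10], capacity=40)
# Server consolidation: VM CPU demands onto 16-core servers.
servers = first_fit([8, 6, 4, 10, 2], capacity=16)
print(len(lightpaths), len(servers))   # -> 2 2
```

Fewer bins means fewer active lightpaths and servers, which is exactly how sharing mitigates the service rejection caused by excessive resource consumption.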
Table I A short survey on existing virtualization and dynamic planning

Sources                 | Routing principle | Transmission technique | Optimization objective | Solving approach | Integrated virtualization | Dynamic planning | Network control
K. Georgakilas [37]     | Anycast           | WDM                    | Energy efficiency      | ILP              | N                         | Y                | N
Anastasopoulos [38]     | Anycast           | WDM                    | Energy efficiency      | Game theory      | N                         | Y                | N
Anastasopoulos [39, 40] | Anycast           | WDM                    | Energy efficiency      | MILP             | N                         | Y                | N
Tzanakaki [41]          | Anycast           | WDM                    | Security               | MILP             | N                         | Y                | N
Hou [42]                | Anycast           | WDM                    | Blocking rate          | Heuristic        | Y                         | Y                | N
Gong [35]               | Manycast          | OFDM                   | Blocking rate          | Heuristic        | Y                         | N                | N
Zhao [36]               | Manycast          | OFDM                   | Blocking rate          | ILP & Heuristic  | Y                         | N                | N
Zhu [45]                | Manycast          | OFDM                   | Blocking rate          | ILP & Heuristic  | Y                         | N                | N
Zhang [46]              | Manycast          | OFDM                   | Revenue-to-cost ratio  | Heuristic        | Y                         | N                | N
With regard to the OFDM optical interconnection among data centers, the elastic optical backbone becomes the underlying physical infrastructure. Different from the WDM-based approach, the allocated subcarriers must be continuous in the spectrum domain, and a guardband needs to be configured to mitigate the inter-channel interference between two adjacent virtual lightpaths [8]. Correspondingly, we list some existing solutions for OFDM-based integrated virtualization; these solutions mainly focus on the traffic intercommunicated among data centers. The authors in [35] first identified two factors that block virtual network embedding requests: 1) the backbone bandwidth and computing resources are scarce during the link- and node-level mappings; 2) even if computing resources are abundant, the backbone bandwidth still may not comply with constraints such as spectrum continuity and the guardband during the link-level mapping. Considering these two factors, the authors in [35] performed the node-level mapping between virtual machines and servers according to the local information of the server node, and then performed the link-level mapping via a layered auxiliary graph under the spectrum continuity and guardband constraints. In [36], to minimize the maximal sub-carrier index, the optimal virtual network embedding was determined by an ILP under a static network environment; for the dynamic scenario, a heuristic named Link List (LL) was designed to reduce the number of blocked embedding requests. In [45], the authors performed transparent/opaque virtual network embedding over flexi-grid elastic optical networks, from ILP formulations to graph-based heuristics, and the simulation results demonstrated their effectiveness in reducing the number of blocked embedding requests. In [46], the authors performed dynamic mapping of virtual networks onto the substrate elastic optical network and proposed a novel performance metric called the revenue-to-cost ratio; their solution achieves a higher revenue-to-cost ratio than baseline algorithms in terms of element-level slicing. In these works, however, service- or network-status-driven dynamic planning and network intelligence are not exploited for resource management and control in ODCNs. Table I lists a short survey of the existing integrated virtualization and dynamic planning solutions.

2 The traffic grooming here tries to multiplex several traffic streams into a single virtual lightpath.
3 The server consolidation here tries to put several virtual machines into a single server.
4 The resource balancing here tries to find complementary servers with respect to resource utilization; these servers exchange their resource-intensive virtual machines with each other, which decreases resource fragments.
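The spectrum constraints that distinguish OFDM-based embedding from the WDM case can be made concrete with a small first-fit sketch on a single link: the allocated sub-carrier slots must form one contiguous block, and a guardband slot must separate it from neighboring demands. This is an illustrative toy, not the layered-auxiliary-graph or LL algorithms of [35, 36].

```python
# Hypothetical first-fit spectrum assignment on one elastic-optical link.
# slots: 0 = free, 1 = used; demand: number of contiguous sub-carrier slots.

def first_fit_spectrum(slots, demand, guard=1):
    """Allocate the lowest-indexed contiguous block whose guardbands are
    also free. Returns the start index, or None if the request is blocked."""
    n = len(slots)
    for start in range(n - demand + 1):
        lo, hi = max(0, start - guard), min(n, start + demand + guard)
        if all(s == 0 for s in slots[lo:hi]):     # block + guardbands free
            for i in range(start, start + demand):
                slots[i] = 1
            return start
    return None

link = [0] * 12
print(first_fit_spectrum(link, 3))   # -> 0    (slots 0-2)
print(first_fit_spectrum(link, 4))   # -> 4    (slot 3 stays as guardband)
print(first_fit_spectrum(link, 5))   # -> None (blocked: no contiguous room)
```

The third request is blocked even though five free slots remain in total, which is precisely the second blocking factor identified in [35]: abundant resources can still fail the continuity and guardband constraints.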
Resource control
The authors in [5] proposed a novel dynamic planning platform for the underlying physical infrastructure based on the WDM optical interconnection among data centers. This GEYSERS platform has a data transmission plane, a middle service layer and a resource control plane. The convergence of the optical backbone and data centers is achieved on the data transmission plane; the middle service layer mainly accounts for translating the SLA into a technical language; on the resource control plane, the computing resources are operated by an IT manager, and the backbone bandwidth is managed by Generalized Multi-Protocol Label Switching (GMPLS). According to the technical language, GMPLS performs the link-level mapping and the energy-efficiency-driven dynamic planning. However, GEYSERS is an unintelligent platform that uses neither cognition nor self-learning, and integrated virtualization is not taken into account. In addition, GMPLS is box-closed and has limited flexibility of resource control.
Although some intelligent approaches to controlling optical network resources have been discussed, they merely focus on old-fashioned resource re-optimization instead of integrated virtualization, so the maximal resource utilization cannot be achieved. Nevertheless, we can learn from some highlights of these approaches, such as the Cognitive Optical Network (CON) and the Software-Defined Optical Network (SDON).

In terms of the CON, in addition to the control of optical network resources, the existing works also focus on software-adaptable optical elements and optical network detection techniques. On the one hand, a software-adaptable optical element is able to schedule resources on demand using a cognitive process. For example, the software-defined optical transponder can adaptively select the optimal modulation level in the elastic optical backbone [47, 48]. Without self-learning, however, the overhead of using such a software-adaptable optical element inevitably increases, because a relatively slow decision is made only after polling all feasible modulation levels. On the other hand, using the Optical Performance Monitoring (OPM) technique, researchers have acquired information on the current optical network status as reference parameters for making high-speed decisions/predictions. More recently, a series of OPM techniques has been realized by means of plug-in devices (e.g., optical spectrum analyzers, radio frequency identification and frequency-selective polarimeters). To reduce the overhead of deploying plug-in devices, an OPM technique integrated with a Digital Signal Processor (DSP) was also developed in the optical receiver. This technology is able to make a comprehensive detection of physical-layer parameters such as dispersion, Optical Signal-to-Noise Ratio (OSNR) and bit error rate. However, these OPM techniques cannot be applied directly to the ODCN. Finally, a well-known CON architecture was presented by the FP7 project 'CHRON', and it supports the intelligent control of heterogeneous reconfigurable optical network resources [49, 20-21]. The core brain of this architecture is the Cognitive Decision System (CDS). The CDS makes decisions
according to the current or historical network information acquired via SLA translation, the OPM technique and so forth; the network resource control plane then executes the decision. Another classical CON architecture is 'COGNITION'. As described in [50], COGNITION achieves the cognitive process in the application layer, service plane, control plane, MAC layer and physical layer, and it also supports cross-layer optimization. But these CON architectures are not virtualization-oriented, and they still rely on the complex and inflexible GMPLS.

In terms of the SDON, an integrated network resource control plane was proposed in [26]. Compared with the box-closed GMPLS, the Software-Defined Network (SDN) approach (e.g., OpenFlow [26]) is more open and flexible. In most cases, the SDON integrates a centralized controller (e.g., NOX [51]) with the Path Computation Element (PCE). The function of path computation is thus totally decoupled from the NOX controller, and the technical advantages of the PCE can be fully utilized. In the SDON, there exist two types of PCEs: the stateless PCE [51] and the stateful PCE [26]. Without considering path storage, the stateless PCE reduces the control overhead and is easily deployed. The stateful PCE not only accounts for the original path computation, as the stateless PCE does, but can also re-establish connections using the path storage in its database. Although the stateful PCE is relatively complex, deep network resource optimization (e.g., resource defragmentation) can be achieved. In [51], the authors took the OSNR as an important
performance factor of QoT, and they proposed an impairment-aware routing and wavelength assignment algorithm for the SDON. We can see that the SDON solutions merely consider the QoT evaluation; integrated virtualization, energy efficiency, resilience and security are not involved. Moreover, these solutions are not tailored to the ODCN.
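The stateless/stateful distinction can be illustrated with a toy contrast: only the stateful variant keeps the path database (often called an LSP database) needed to re-establish connections. The compute_path routine and all names here are placeholder assumptions, not real PCE protocol APIs.

```python
# Toy contrast between a stateless and a stateful PCE. The actual path
# computation is stubbed out; only the state handling differs.

def compute_path(src, dst):
    return [src, "OXC-A", dst]         # placeholder path computation

class StatelessPCE:
    def request(self, src, dst):
        return compute_path(src, dst)  # compute, reply, forget

class StatefulPCE:
    def __init__(self):
        self.lsp_db = {}               # connection-id -> stored path
    def request(self, conn_id, src, dst):
        path = compute_path(src, dst)
        self.lsp_db[conn_id] = path    # remember every established path
        return path
    def reestablish(self, conn_id):
        return self.lsp_db.get(conn_id)  # reuse stored state after a failure

pce = StatefulPCE()
pce.request("c1", "DC1", "DC2")
print(pce.reestablish("c1"))   # -> ['DC1', 'OXC-A', 'DC2']
```

The stored state is what enables the deeper optimizations mentioned above, such as resource defragmentation, at the cost of maintaining the database.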
2.2 Future challenges

According to the extensive survey of the existing literature above, we can infer that many theoretical vacancies still exist in the topic of resource management and control in ODCNs. On the other hand, the current new findings cannot harmonize well with each other, and the energy efficiency, resilience and security of ODCNs remain serious concerns. Therefore, a comprehensive study is timely and technically sound. As for the future challenges, the following improvements for resource management and control in ODCNs are urgently needed:
1) Aside from the integrated virtualization, the service or network-status-driven dynamic planning should be taken into account so that we can re-manage network resources to improve the energy efficiency, resilience and security.
2) In the data plane of the ODCN, we should consider the intelligent coexistence of WDM, OFDM and hybrid optical interconnections for the CDC or multiple data centers. Moreover, we need to achieve the adaptive selection of these optical interconnection techniques to satisfy high-level services flexibly.
3) In the control plane of the ODCN, the SDN approach should be integrated with the PCE, which considerably alleviates the pressure on the data plane and the centralized controller. Meanwhile, cognition must be involved so that the centralized controller makes proactive decisions before a triggering factor occurs.
3. Enabling technologies
In the following, we propose some enabling techniques to make a blueprint for the resource management and control in ODCNs.
3.1 Data plane of ODCN

To achieve the intelligent coexistence of heterogeneous optical interconnection techniques, similar to [52], we divide the data plane into component and management layers within the optical backbone. In the component layer, based on the 3D-MEMS technique, the optical back plane is initially configured with a list of candidate components such as Liquid Crystal on Silicon (LCOS), Spectrum Selective Switch (SSS), Erbium-Doped Fiber Amplifier (EDFA), waveguide, flexi-grid system, (de)-multiplexer, Lanthanum-doped Lead Zirconate-Titanate (PLZT) fast switch, Field Programmable Gate Array (FPGA) and so forth, as shown in Fig. 2. The management layer (i.e., the OpenFlow agent interface) directly communicates with the control plane, which will be presented in subsection 3.2. This OpenFlow agent interface executes the protocol parsing for the flow table distributed by the control plane. According to the parsed information, the component layer performs component reconfiguration or path establishment and restoration. Especially for component reconfiguration, using the Architecture on Demand (AoD) strategy, the optical back plane re-combines candidate components to implement a new node function without any manual hardware changes. In terms of path establishment and restoration, the OpenFlow agent interface periodically checks the component-level information such as QoT and energy efficiency. If the QoT value or the energy consumption exceeds an acceptable threshold, this interface automatically sends an Alarm message to the centralized
controller (e.g., NOX).

Fig. 2 Data plane of ODCN within the optical backbone
Cyclic AWG
PD 1
servers
Left rack 1
ToR 1
Receiver 1 servers
Transmitter 1 Right rack 1
ToR 2
Transmitter 2
servers
ToR: top-of-the-rack switch
PD 2
ToR 2
Receiver 2 Optical spectrum
servers Right rack 2
Left rack 2 Fig. 3 Data plane of each CDC in ODCNs
On the other hand, the data plane supports multi-rate transmission (e.g., megabits and gigabits per second) in the optical backbone. In addition, the inter-link between two distant data centers requires a super channel so that the data plane can perform the content migration, etc. Thus the data plane has the novel characteristics of supporting multiple techniques and self-adaption. More specifically, there exist the fix-grid (WDM), flexi-grid (OFDM) and hybrid transmission techniques in the optical backbone, which provide the sub-wavelength-level, wavelength-level and super channels, respectively. According to the flow table distributed by the centralized controller, the component recombination is motivated to form a series of node architectures supporting the transmission techniques above. So a node architecture list should be maintained for recording or updating the functional information such as the fix-grid, flexi-grid or hybrid transmission technique, as shown in Fig. 2.
The data plane of each CDC is shown in Fig. 3. The transmitter generates continuous and partially overlapped subcarriers according to the actual traffic demand using the OFDM technique. Each sub-carrier occupies one Directly Modulated Laser (DML), and the adaptive modulation format is applied with the variation of the transmission distance. These subcarriers are grouped into an appropriately sized optical spectrum via the combiner in the transmitter. Similar to [17], following the Multiple Input and Multiple Output (MIMO) principle, each DML can also carry the entire optical spectrum, and multiple optical spectrums are grouped via the combiner. At the receiving side, the Photo-Detector (PD) employs the Parallel Signal Detection (PSD) technique to simultaneously detect OFDM signals from different sources. More specifically, as shown in Fig. 3, every server is able to transmit its optical spectrum to others. For example, the server in left rack 1 can transmit its optical spectrum to others in left rack 1 or to all servers in right rack 2; inversely, multiple servers can send their optical spectrums to a single server so long as these optical spectrums have different central frequencies without resource contention. Thus, with the cyclic Arrayed Waveguide Grating (AWG), the migration of the optical spectrum (virtual machine) is performed among different servers, which contributes to the implementation of the resource defragmentation.
3.2 Control plane of ODCN

In the control plane, the core element is the extensional centralized controller. As shown in Fig. 4, this controller has the modules of SLA analysis, routing, traffic grooming, server consolidation, virtual topology management, dynamic planning, PCE, enhanced database, network decision and self-learning. In the following, we describe these modules in detail.
Dynamic planning module

Given a triggering factor (e.g., the degradation of SLA or QoT, etc.), the dynamic planning module determines whether we should trigger the dynamic planning according to the technical language from the SLA analysis module or the component-level parameters analyzed by the network decision module. Next, the dynamic planning module provides feedback to the network decision module.

Fig. 4 Functional modules in extensional centralized controller
Routing module

In this article, we only consider the programmable task initiated by one optical backbone node. So the routing module executes anycast routing to establish the connection between the optical backbone node and the selected data center only when this task can be handled by a single data center. For the programmable task to be completed by multiple data centers, this module executes manycast routing to build a tree-based connection between the optical backbone node and a set of data centers. If we trigger the dynamic planning to reconfigure the virtual network, the routing module executes anycast routing to determine the sub-optimal data center even though the nearest one is available. For example, if the optimal data center tends to be saturated, this module executes anycast routing to find the sub-optimal one for load balancing.

Traffic grooming module

After determining the selected data center, the traffic grooming module first checks whether the current traffic can be groomed into the pre-established virtual lightpath(s) or a cascade channel including pre- and newly-established virtual lightpaths. If it is feasible, by querying the node architecture list shown on the left side of Fig. 2, this module executes the traffic grooming according to the node architecture/function that the component layer currently provides. More specifically, this module performs fix-grid grooming if the component layer currently provides the fix-grid node function shown on the right side of Fig. 2; if the component layer currently provides the flexi-grid node function, this module executes the flexi-grid grooming shown in Fig. 5. In Fig. 5, the number of
guardbands can be reduced using the flexi-grid grooming. If we trigger the dynamic planning due to the frequent arrival and departure of traffic requests, the traffic grooming module executes the provident spectrum defragmentation for some virtual lightpaths. We consider the provident approach because the spectrum defragmentation requires the help of the spectrum shifting that interrupts the live connection. However, the live connection is only momentarily interrupted, so the service continuity will not be seriously worsened; the interrupted service can be quickly re-satisfied by the new spectrum. Additionally, if a link failure or QoT degradation occurs, or we want to improve the energy efficiency, this module executes the traffic migration from one virtual lightpath to a link-disjoint or long-duration virtual lightpath.

Fig. 5 Flexi-grid grooming (grooming 40G+10G and 30G+50G requests with 5GHz sub-carriers saves two guardbands)
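The guardband saving of Fig. 5 can be sketched as a simple count. This is an illustrative model under our own assumptions (one guardband slot per groomed channel, 12.5Gb/s carried per sub-carrier), not the paper's exact spectrum model.

```python
import math

GBPS_PER_SUBCARRIER = 12.5  # assumed sub-carrier capacity (Gb/s), for illustration

def subcarriers_needed(rate_gbps):
    """Contiguous sub-carriers required for one traffic request."""
    return math.ceil(rate_gbps / GBPS_PER_SUBCARRIER)

def total_subcarriers(channel_groups, guardband_subcarriers=1):
    """Sub-carriers consumed when each inner list of requests shares one
    groomed channel; every (super-)channel needs one trailing guardband."""
    total = 0
    for group in channel_groups:
        total += sum(subcarriers_needed(r) for r in group)
        total += guardband_subcarriers  # one guardband per channel
    return total

# Four requests carried separately: four guardbands.
separate = total_subcarriers([[40], [10], [30], [50]])
# Flexi-grid grooming as in Fig. 5: 40G+10G and 30G+50G share channels.
groomed = total_subcarriers([[40, 10], [30, 50]])
print(separate - groomed)  # 2: two guardbands are saved
```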
Server consolidation module

After determining the selected data center, for the virtual machine corresponding to the compute resources required by the current request, the server consolidation module checks whether the virtual machine can be consolidated with others in the same server. If it is feasible, this module executes the server consolidation.

If we trigger the dynamic planning due to the frequent arrival and departure of virtual machines, the server consolidation module executes the provident defragmentation for the computing resources in the server. We consider the provident approach because the defragmentation of computing resources is performed through virtual machine migrations, which results in overhead and service downtime. Additionally, if one server enters the maintenance period or becomes invalid, or we want to avoid hostile attacks among virtual machines within the same server, this module transfers virtual machines from the abnormal server to normally operating ones.
Virtual topology management module

The output of the above modules is an initial or a reconfigured virtual topology. The virtual topology management module stores this output and then sends this network view to the PCE module. In the PCE module, the path computation and resource assignment will be performed using some classical algorithms (e.g., RWTA [53], RWA [54], RSA [7] and IA-RWA [14], etc.) saved in the enhanced database. Here, RWTA denotes the routing, wavelength and time-slot assignment, RWA denotes the routing and wavelength assignment, RSA denotes the routing and spectrum assignment, and IA-RWA denotes the impairment-aware RWA. RWTA and RWA algorithms are dedicated to the fix-grid mechanism, while RSA and IA-RWA algorithms are dedicated to the flexi-grid mechanism.
Network decision module

Aside from the data mining of component-level parameters and SLAs, the network decision module encapsulates the decision into the flow table that will be sent to the data plane via the extensional OpenFlow protocol. More importantly, this module well utilizes the cognitive process, i.e., a cognition cycle. Once self-learning is taken into account, the cognition cycle directly makes decisions according to historical experiences, which significantly mitigates the pressure on the network decision module. As shown in Fig. 6, the cognitive process is described as follows. After parsing component-level parameters and SLAs, we select the optimal one from the candidate plans as the decision. Next, we distribute the flow table that describes how the data plane takes actions following this decision. Using self-learning, precise predictions can be performed to make proactive decisions without planning. Obviously, component-level parameters and SLAs are the inputs of the cognition cycle, while the output is the final action taken by the data plane.

Fig. 6 Cognition cycle
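The cycle above can be sketched in a few lines. The sketch is a hypothetical reduction of Fig. 6: it parses the inputs into a lookup key, answers from remembered decisions when a similar situation was already seen (the self-learning shortcut), and otherwise evaluates candidate plans; the scoring function and plan names are our own placeholders.

```python
# Minimal sketch of the cognition cycle: parse -> (reuse history | plan) ->
# decide, with every fresh decision recorded for future proactive reuse.
def cognition_cycle(parameters, slas, candidate_plans, score, history):
    key = (tuple(sorted(parameters.items())), tuple(sorted(slas.items())))
    if key in history:                  # self-learning: proactive decision
        return history[key]
    best = max(candidate_plans, key=lambda plan: score(plan, parameters, slas))
    history[key] = best                 # remember this experience
    return best

history = {}
plans = ["reroute", "groom", "consolidate"]
score = lambda plan, p, s: {"reroute": 1, "groom": 3, "consolidate": 2}[plan]
first = cognition_cycle({"qot": 0.9}, {"latency": 10}, plans, score, history)
again = cognition_cycle({"qot": 0.9}, {"latency": 10}, plans, score, history)
print(first, again)  # the second call is answered directly from history
```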
Fig. 7 Rules design of resource assignment
On the other hand, considering the multi-rate transmission, the network decision module designs rules for the resource assignment, similar to [52]. More specifically, we utilize the RWTA algorithm to serve traffic requests with line rates lower than 10Gb/s, the RWA algorithm to serve 10~100Gb/s traffic requests, and the RSA/IA-RWA algorithms to serve traffic requests with line rates larger than 100Gb/s. The minimal resource unit is 12.5GHz for RSA and IA-RWA, 50GHz for RWA, and 100Mb/s for RWTA. The traffic flow model is first given, and it records the proportion of traffic requests with different line rates. According to this model, the network decision module makes rules that describe how we reasonably partition the entire spectrum into dedicated and sharing intervals. Starting from the two sides of the spectrum, this module allocates the dedicated spectrum for the fix- and flexi-grid applications, respectively. As in the example of Fig. 7, starting from the minimal subcarrier index, this module allocates the dedicated spectrum for RWTA and RWA; starting from the maximal subcarrier index at the other side of the spectrum, this module assigns the corresponding dedicated spectrum for RSA and IA-RWA. In terms of the hybrid application, if the spectrum remains rich, this module continues to allocate the dedicated spectrum; otherwise, it motivates the spectrum contention/sharing within the middle interval.
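The two-sided partition rule can be sketched as follows. This is an illustrative rule of our own, not the paper's: the fix-grid dedicated interval grows from the low end, the flexi-grid dedicated interval grows from the high end, and the remainder becomes the shared middle interval; the traffic-proportion figures are assumed.

```python
# Illustrative spectrum partition driven by the traffic flow model.
def partition_spectrum(total_slots, fix_share, flex_share):
    """Split `total_slots` spectrum slots into (fix, shared, flex) intervals."""
    fix = int(total_slots * fix_share)     # dedicated to RWTA/RWA (low end)
    flex = int(total_slots * flex_share)   # dedicated to RSA/IA-RWA (high end)
    shared = total_slots - fix - flex      # middle interval, contended by hybrid
    return fix, shared, flex

# 320 slots of 12.5GHz; half the traffic is fix-grid, a quarter flexi-grid.
fix, shared, flex = partition_spectrum(320, fix_share=0.5, flex_share=0.25)
print(fix, shared, flex)  # 160 80 80
```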
3.3 Enhanced database of ODCN

We utilize the stateful PCE because the enhanced database has been involved. Once the stateful PCE has computed the path according to the resource-assignment algorithm or the dynamic planning, we store the path along with its attributes as an event in the enhanced database. The attributes include component-level parameters, SLAs, resource-assignment algorithms and so forth. So the centralized controller can extract a similar event from the enhanced database and directly make the decision without running algorithms or the complex evaluation of component-level parameters. If the enhanced database has a similar event, e.g., a pre-established virtual lightpath with a known QoT value, the same QoT value will be directly assigned to the newly-established virtual lightpath. More importantly, the controller can also compare the QoT value with the pre-determined threshold, so the prediction can be made before the triggering factor occurs. In the enhanced database, the plug-in software (i.e., the algorithms) occupies the largest storage resource.
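The similar-event lookup can be sketched as below. The class and attribute names are our own invention (the paper does not define an API); the idea is simply that a stored event whose attributes lie within a tolerance lets the controller reuse the recorded QoT instead of re-evaluating it.

```python
# Minimal sketch of the enhanced database: store (attributes, QoT) events,
# reuse the QoT of a sufficiently similar stored event.
class EnhancedDatabase:
    def __init__(self, tolerance=0.05):
        self.events = []          # list of (attributes, qot) pairs
        self.tolerance = tolerance

    def store(self, attributes, qot):
        self.events.append((attributes, qot))

    def similar_qot(self, attributes):
        """Return the QoT of a stored event whose numeric attributes all lie
        within the tolerance; None means the controller must compute anew."""
        for stored, qot in self.events:
            if stored.keys() == attributes.keys() and all(
                abs(stored[k] - attributes[k]) <= self.tolerance for k in stored
            ):
                return qot
        return None

db = EnhancedDatabase()
db.store({"length_km": 1.20, "hops": 3.0}, qot=0.92)
print(db.similar_qot({"length_km": 1.22, "hops": 3.0}))  # 0.92 (reused)
print(db.similar_qot({"length_km": 5.00, "hops": 3.0}))  # None (must compute)
```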
Admittedly, there have been a series of algorithms to support the operations of these modules, especially the PCE module. In this article, we merely list a subset of our preliminary works and some classical algorithms. Of course, the enhanced database can store any kind of algorithm if it has enough memory.

3.3.1 Algorithms: Dynamic planning for traffic grooming

In terms of the dynamic planning for traffic grooming, we list the following algorithms for the
flexi-grid grooming.
Provident spectrum defragmentation

The existing push-pull spectrum shifting moves an existing spectrum block from one nominal central frequency to another so that fragmented spectrums can be consolidated into continuous spectrum slots assigned to more traffic requests. However, some live connections will be instantly interrupted. If we want to decrease the number of reconfigured spectrum blocks, a promising solution is virtual concatenation: we first slice the spectrum requirement into multiple sub-bands, then transmit these sub-bands via different routes, and finally recombine these sub-bands at the receiving side. Correspondingly, we have designed a provident defragmentation algorithm named PPDVC. In our PPDVC, for a traffic request blocked due to the insufficient provisioning of contiguous subcarriers, we assign the spectrum for some sub-bands along the optimal route using the spectrum shifting, and the other sub-bands are allocated with continuous sub-carriers along the secondary route. So the number of reconfigured spectrum blocks becomes smaller compared with the previous push-pull technique, and the number of live interruptions decreases, as can be seen in Fig. 8 (the horizontal axis denotes the traffic matrix index). The simulation settings are as follows. The test topology is NSFNET, where each fiber link has 200GHz of spectrum in total, i.e., 16 spectrum slots of 12.5GHz each. We randomly generate 30 different traffic matrices, each of which includes 200 various traffic demands.

Fig. 8 Performance comparison between PPDVC and push-pull: (a) blocking probability, (b) number of reconfigured spectrum blocks
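The slicing idea behind PPDVC can be sketched as below. This is only an illustration of the virtual-concatenation step (demand split between an optimal and a secondary route); the free-slot model and numbers are assumed, and the real PPDVC additionally performs spectrum shifting on the optimal route.

```python
# Slice a blocked request into two sub-bands carried over disjoint routes.
def slice_request(demand_slots, free_on_optimal, free_on_secondary):
    """Return (slots on optimal route, slots on secondary route), or None if
    even the two routes together cannot carry the demand."""
    on_optimal = min(demand_slots, free_on_optimal)
    remainder = demand_slots - on_optimal
    if remainder > free_on_secondary:
        return None                     # request remains blocked
    return on_optimal, remainder

# An 8-slot demand with only 5 contiguous slots left on the optimal route:
print(slice_request(8, free_on_optimal=5, free_on_secondary=6))  # (5, 3)
```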
Traffic migration algorithms
More recently, the bandwidth squeezed restoration [7] was presented to exploit the unique advantage of the flexi-grid grooming. In the case of a link failure, in the fix-grid situation, the traffic requests on the disabled virtual lightpath cannot be migrated or recovered unless the available bandwidth on the link-disjoint detour route equals or exceeds the original path bandwidth. In the flexi-grid situation, even if the available bandwidth on the detour route is not sufficient, the bandwidth of the disabled virtual lightpath can be squeezed using the elastic feature in order to ensure the minimum connectivity.
ACCEPTED MANUSCRIPT
On the other hand, in order to improve the energy efficiency, the authors in [55] proposed a time-aware flexi-grid grooming algorithm. This algorithm puts the current traffic into the pre-established virtual lightpath with the largest remaining holding time. The remaining holding time H_l denotes the tear-down time of the last traffic request on the pre-established virtual lightpath l. The energy-saving effect is reflected in the following equations. In Eq. (3), for the current traffic with the bandwidth requirement b, if the holding time h is not larger than H_l, the corresponding energy consumption becomes the lowest for the BV-transponder. The power consumption of each BV-transponder includes the traffic-related part P(b) and the fixed part P0, so P0 can be omitted in Eq. (3). In addition, we also can draw on our preliminary work about the survivable and power-efficient fix-grid grooming [56].

E_potential_lightpath = P(b)·b·h + P0·h                          (1)
E_existing_lightpath(l) = P(b)·b·h + P0·(h − H_l),  if h > H_l   (2)
E_existing_lightpath(l) = P(b)·b·h,                 if h ≤ H_l   (3)
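The equations above can be checked numerically. In the sketch below the fixed power P0 and the linear traffic-related model P(b) are assumed values for illustration; the point is only that grooming onto an existing lightpath whose remaining holding time covers the new traffic (h ≤ H_l) avoids the fixed term entirely.

```python
P0 = 50.0       # fixed part of BV-transponder power, assumed value

def P(b):       # traffic-related part of the power model, assumed linear
    return 1.5 * b

def energy_new_lightpath(b, h):                  # Eq. (1)
    return P(b) * b * h + P0 * h

def energy_existing_lightpath(b, h, remaining):  # Eqs. (2)-(3)
    fixed = P0 * (h - remaining) if h > remaining else 0.0
    return P(b) * b * h + fixed

# Current traffic: b = 10 Gb/s, h = 4 slots; remaining holding time H_l = 6.
print(energy_existing_lightpath(10, 4, remaining=6))  # 600.0 (no fixed term)
print(energy_new_lightpath(10, 4))                    # 800.0
```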
When executing the aforementioned dynamic planning, the module interaction is shown in Fig. 9. First of all, the network decision module triggers the dynamic planning module, which will motivate the traffic grooming module to reconfigure the virtual topology. Next, the traffic grooming module sends a Reconfiguring Virtual topology request (RVreq) to the enhanced database. If we can find a similar event in the database using self-learning, the feasible algorithm will be returned to the traffic grooming module. The traffic grooming module then sends an RV reply (RVrep) including the reconfiguration result to the virtual topology management module and the network decision module. So far, the view of the reconfigured virtual topology has been saved in the virtual topology management module, and the network decision module distributes the corresponding flow table to the components on the data plane. We can see that self-learning is very meaningful because the overhead of screening the appropriate algorithm becomes negligible.

Fig. 9 Module interaction for dynamic planning of traffic grooming
3.3.2 Algorithms: Dynamic planning for server consolidation

In terms of the dynamic planning for server consolidation, we have the following algorithms.

Provident computing-resource defragmentation

Despite the importance of server consolidation in terms of improving the energy efficiency, we observe that there are two major issues that have not yet been addressed. First, shutting down a server may not be desirable if the same server must be restarted after a short period due to increasing requests. Second, and more importantly, server consolidation may lead to excessive migrations because some residual resources of servers may be utilized in the near future. Correspondingly, we proposed a general framework for provident computing-resource defragmentation with the objective of avoiding unnecessary migrations. In our framework, defragmentation can be fine-tuned by determining when to trigger the defragmentation operation and how to migrate existing virtual machines. To avoid unnecessary migrations, we consider that each virtual machine is associated with an accommodation revenue and a migration cost. We then maximize the profit (i.e., total revenue minus costs) with a prediction that requires only the estimation of the total number of arrivals and the request distribution of different virtual machine instances. It should be noted that, to understand the impact of migration overheads, we defined the average migration cost as the total migration cost during the designated duration divided by the total number of times that defragmentation (or consolidation) operations are performed. In our simulations, we consider a homogeneous cloud data center, where the capacity of a server can be selected from {40, 60, 80}. To simulate dynamic demands, we consider a certain duration that has been partitioned into equal-sized time slots. We assume that virtual machines can arrive only at the beginning of a time slot. We also assume that each virtual machine lasts for a number of slots chosen uniformly at random from [1, 4]. Moreover, we consider that there are three types of virtual machines with 16, 8 and 2 units of computing resources, respectively. Our numerical results showed that such an approach can lead to reasonably good performance compared with periodical server consolidation, as shown in Fig. 10.

Fig. 10 Comparison of average migration cost between our method and periodical server consolidation
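The profit rule above can be sketched as a per-server decision. The revenue and cost figures below are illustrative only; the actual framework additionally predicts future arrivals, which this sketch omits.

```python
# Provident rule: empty a server only when the expected revenue from the
# freed capacity exceeds the total cost of the required migrations.
def should_migrate(vms_on_server, migration_cost, expected_revenue_per_unit):
    """`vms_on_server` lists the resource units of each VM that must move."""
    freed_units = sum(vms_on_server)
    total_cost = migration_cost * len(vms_on_server)
    profit = expected_revenue_per_unit * freed_units - total_cost
    return profit > 0

# Two small VMs (2 units each): little capacity freed, two migrations paid.
print(should_migrate([2, 2], migration_cost=5.0, expected_revenue_per_unit=1.0))
# One large VM (16 units): the freed capacity outweighs a single migration.
print(should_migrate([16], migration_cost=5.0, expected_revenue_per_unit=1.0))
```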
Server security algorithm

We provide an open idea for a preventive measure against hostile attacks among virtual machines in the same server. Considering the hidden risk, we assume the service provider is trustworthy, and the attacker utilizes a cross-channel attack to steal the victim's information. The aggressive behavior of the virtual machine is monitored in a server using physical-layer optical signal detection technology, e.g., the DSP-based OPM technique. In Fig. 11, for the side-channel of the attacker, the OSNR-related burst timing sharply fluctuates when stealing information. The extensional centralized controller determines candidate attackers according to these fluctuations. Note that we should not immediately terminate the transactions between the service provider and these candidate attackers, so as not to alert them prematurely. When the virtual machine of the victim is transferred, the burst timing drastically fluctuates again, as shown in Fig. 11. The controller determines the unique attacker this time and reserves an examination virtual machine that has the same attributes as the victim. This examination virtual machine will come into the attacked server to record the computation load variation of the attacker. Finally, we terminate the transaction between the service provider and the attacker. Moreover, we provide proactive prevention against other attackers with similar computation load variations using self-learning.

Fig. 11 Fluctuations of OSNR-related burst timing
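The first screening step of Fig. 11 can be illustrated as a simple threshold test. The statistics and the threshold are assumptions for illustration, not a calibrated OPM model: a VM whose OSNR-related burst timing fluctuates far more than its peers becomes a candidate attacker.

```python
import statistics

# Flag VMs whose burst-timing fluctuation exceeds a (assumed) threshold.
def candidate_attackers(burst_timing_traces, threshold=3.0):
    """burst_timing_traces: {vm_name: list of burst-timing samples}."""
    return {vm for vm, trace in burst_timing_traces.items()
            if statistics.pstdev(trace) > threshold}

traces = {
    "vm_a": [10.0, 10.2, 9.9, 10.1],   # steady side-channel: benign
    "vm_b": [10.0, 22.0, 3.0, 18.0],   # sharp fluctuations: suspect
}
print(candidate_attackers(traces))  # {'vm_b'}
```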
CE
When executing the aforementioned dynamic planning, the module interaction is very similar to Fig. 9. We merely need to change the traffic grooming module into the server consolidation module. In
AC
addition, the RVreq content is changed into the computation load variation of the attacker. If the similar event can be found in the enhanced database using self-learning, the additional cost of executing the DSP-based detection can be negligible. Similarly, the overhead of screening the appropriate algorithm also can be negligible.
4. Conclusion

In recent years, the converged Optical and Data Center Network (ODCN) has emerged as a large-scale networking paradigm, where each optically interconnected data center is located at the edge of the optical backbone. For the ODCN, resource management and control is a very hot topic because both cloud computing and integrated virtualization are involved to improve the resource utilization. Aside from the integrated virtualization, the dynamic management of heterogeneous resources and the intelligent resource control also need to be discussed.

In this article, we first made a survey on the resource management and control in the ODCN, mainly including the optical interconnection, integrated virtualization, dynamic planning and resource control. Though these works have their advantages, they cannot fully satisfy the requirements of the ODCN. Fortunately, some outstanding techniques are worth learning from and can be modified to improve the ODCN. More specifically, we made a blueprint in which we execute some enabling technologies to achieve the intelligent coexistence, integrated virtualization and dynamic planning of heterogeneous resources in the ODCN. Meanwhile, the network intelligence was also taken into account with the objective of optimizing this network paradigm from the perspectives of the data plane, control plane and enhanced database. In the near future, a series of algorithms and mechanisms will be investigated by us according to this blueprint.
References
J. Baliga, R. Ayre, K. Hinton, R. Tucker. Green Cloud Computing: Balancing Energy in Processing, Storage, and Transport, Proceedings of the IEEE, 99 (1): 149-166, 2011.
M
[1]
“Amazon elastic compute cloud (ec2).” Available: http://aws.amazon.com/ec2/.
[3]
“Windows azure.” Available: http://www.windowsazure.com/en-us/.
[4]
M. Sridharan, P. Calyam, A. Venkataraman, A. Berryman. Defragmentation of Resources in Virtual Desktop Clouds for Cost-aware
ED
[2]
Utility Optimal Allocation, Proc. UCC, pp. 253–260, 2011. E. Escalona, S. Peng, R. Nejabati, et al, GEYSERS: A Novel Architecture for Virtualization and Co-provisioning of Dynamic Optical
PT
[5]
Networks and IT services, Proc. FNMS, pp. 1-8, 2011. [6]
A. Tzanakaki, M. Anastasopoulos, K. Georgakilas, et al, Energy Efficiency in Integrated IT and Optical Network Infrastructures: the
[7]
CE
GEYSERS Approach, Proc. INFOCOM, pp. 343-348, 2011. M. Jinno, H. Takara, B. Kozicki, et al. Spectrum-efficient and Scalable Elastic Optical Path Network: Architecture, Benefits, and Enabling Technologies, IEEE Communications Magazine, 47 (11): 66-73, 2009. M. Jinno, B. Kozicki, H. Takara, et al. Distance-adaptive Spectrum Resource Allocation in Spectrum-sliced Elastic Optical Path
AC
[8]
Network, IEEE Communications Magazine, 48 (8): 138-145, 2010.
[9]
B. Kozicki, H. Takara, M. Jinno. Enabling Technologies for Adaptive Resource Allocation in Elastic Optical Path Network, Proc. ACP, pp. 23-24, 2010.
[10] Y. Wang, X. Cao, Y. Pan. A Study of the Routing and Spectrum Allocation in Spectrum-sliced Elastic Optical Path Networks, Proc. INFOCOM, pp. 1503-1511, 2011. [11] D. Leenheer, A. Morea, B. Mukherjee. A Survey on OFDM-based Elastic Core Optical Networking, IEEE Communications Surveys & Tutorials, PP (99): 1-6, 2012 [12] G. Zhang, M. Klinkowski, K.Walkowiak. Routing and Spectrum Assignment in Spectrum Sliced Elastic Optical Path Network, IEEE Communications Letters, 15 (8): 884-886, 2011. [13] M. Jinno, et al. Elastic and Adaptive Optical Networks: Possible Adoption Scenarios and Future Standardization Aspects, IEEE
ACCEPTED MANUSCRIPT
Communications Magazine, 49 (10): 164-172, 2011. [14] K. Christodoulopoulos, I. Tomkos, E. Varvarigos. Elastic Bandwidth Allocation in Flexible OFDM-based Optical Networks, IEEE/OSA Journal of Lightwave Technology, 29 (12): 1354-1366, 2011. [15] G. Zhang, et al. Optical Traffic Grooming in OFDM-based Elastic Optical Networks, IEEE/OSA Journal of Optical Communications and Networking, 4 (1): B17-B25, 2012. [16] S. Zhang, C. Martel, B. Mukherjee. Dynamic Traffic Grooming in Elastic Optical Networks, IEEE Journal on Selected Areas in Communications, 31 (1): 4-12, 2013. [17] N. Ji, D. Qian, K. Kanonakis, et al. Design and Evaluation of a Flexible-bandwidth OFDM-based Intra Data Center Interconnect, IEEE Journal of selected topics in Quantum Electronics, PP (99): 1-10, 2012.
Technologies and Techniques, Proc. ICTON, pp. 1-4, 2011.
CR IP T
[18] I. Monroy, D. Zibar, N. Gonzalez, et al. Cognitive Heterogeneous Reconfigurable Optical Networks (CHRON): Enabling
[19] W. Wei, C. Wang, J. Yu. Cognitive Optical Networks: Key drivers, Enabling techniques, and Adaptive Bandwidth Services, IEEE Communications Magazine, 50 (1): 106-113, 2012.
[20] I. Tomkos, et al. Next Generation Flexible and Cognitive Heterogeneous Optical Networks, Proc. FIA, pp. 225–236, 2012.
[21] R. J. Duran, et al. A Cognitive Decision System for Heterogeneous Reconfigurable Optical Networks, Proc. FNMS, pp. 1-9, 2012.
NOC, pp. 233-240, 2013.
AN US
[22] D. Siracusa, A. Broglio, A. Francescon, et al. Toward a Control and Management System Enabling Cognitive Optical Networks, Proc.
[23] E. Palkopoulou, I. Stiakogiannakis, et al. Cognitive Heterogeneous Reconfigurable Optical Network: A Techno-economic Evaluation, Proc. FNMS, pp. 1-10, 2013.
[24] I. M. de, R. J. Duran, et al. Cognitive Dynamic Optical Networks, IEEE/OSA Journal of Optical Communications and Networking, 5 (10): 107-118, 2013. [25] It’s time to virtualize the network. Whitepaper of Nicira.
M
[26] R. Casellas, et al. Control and Management of Flexi-grid Optical Networks With an Integrated Stateful Path Computation Element and OpenFlow Controller, IEEE/OSA Journal of Optical Communications and Networking, 5 (10): 57-65, 2013. [27] G. Wang, D. Andersen, M. Kaminsky, et al. C-Through: Part-time Optics in Data Centres, Proc. SIGCOMM, pp. 327–338, 2010.
ED
[28] N. Farrington, G. Porter, S. Radhakrishnan, et al. Helios: A Hybrid Electrical/Optical Switch Architecture for Modular Data Centres, Proc. SIGCOMM, pp. 339–350, 2010.
[29] X. Ye, Y. Yin, S. Yoo, et al. DOS: A Scalable Optical Switch for Data Centres, Proc. ANCS, pp. 1–12, 2010.
PT
[30] A. Singla, A. Singh, K. Ramachandran, et al. Proteus: A Topology Malleable Data Center Network, Proc. SIGCOMM, pp. 1–6, 2010. [31] H. Chao, Z. Jing, K. Deng. PetaStar: A Petabit Photonic Packet Switch, IEEE Journal of selected area on Communications, 21 (7):
CE
1096–1112, 2003.
[32] X. Dong, T. El-Gorashi, J. Elmirghani, Green IP over WDM Networks with Data Centers, IEEE/OSA Journal of Lightwave Technology, 29 (12): 1861-1880, 2011.
AC
[33] C. Abosi, S. Zervas, R. Nejabati, et al. Energy-aware Service Plane Co-scheduling of a Novel Integrated Optical Network-IT Infrastructure, Proc. ONDM, pp. 1-6, 2011.
[34] B. Kantarci, T. Mouftah. Energy-efficient Cloud Services over Wavelength-routed Optical Transport Networks, Proc. GLOBECOM, pp. 1-5, 2011.
[35] L. Gong, W. Zhao, Y. Wen, et al. Dynamic Transparent Virtual Network Embedding over Elastic Optical Infrastructures, Proc. ICC, pp. 2059-2063. 2013. [36] J. Zhao, et al. Virtual Topology Mapping in Elastic Optical networks, Proc. ICC, pp. 2497-2501, 2013. [37] K. Georgakilas, A. Tzanakaki, M. Anastasopoulos, et al. Converged Optical Network and Data Center Virtual Infrastructure Planning, IEEE/OSA Journal of Optical Communications and Networking, 4 (9): 681-691, 2012. [38] M. Anastasopoulos, K. Georgakilas, A. Tzanakaki. Evolutionary Optimization for Energy Efficient Service Provisioning in IT and Optical Network Infrastructures, Proc. ECOC, pp. 1-3, 2011. [39] M. Anastasopoulos, A. Tzanakaki. Adaptive Virtual Infrastructure Planning over Interconnected IT and Optical Network Resources
ACCEPTED MANUSCRIPT
Using Evolutionary Game Theory, Proc. ONDM, pp. 1-5, 2012. [40] M. Anastasopoulos, A. Tzanakaki, K. Georgakilas. Virtual Infrastructure Planning in Elastic Cloud Deploying Optical Networking, Proc. CloudCom, pp. 685-689, 2011. [41] A. Tzanakaki, M. Anastasopoulos, K. Georgakilas, et al. Energy Aware Planning of Multiple Virtual Infrastructures over Converged Optical Network and IT Physical Resources, Proc. ECOC, pp. 1-3, 2011. [42] W. Hou, L. Guo, X. Wei, Y. Liu, Q. Song. Virtual Network Planning for Converged Optical Network and Data Centers: Ideas and Challenges, IEEE Network, 27 (6): 52-58, 2013. [43] G. Shen, R. Tucker. Energy-Minimized Design for IP Over WDM Networks, IEEE/OSA Journal of Optical Communications and Networking, 1 (1): 176-186, 2009.
[44] B. Speitkamp, M. Bichler. A Mathematical Programming Approach for Server Consolidation Problems in Virtualized Data Centers, IEEE Transactions on Services Computing, 3 (4): 266-278, 2010.
[45] L. Gong, Z. Zhu. Virtual Optical Network Embedding (VONE) over Elastic Optical Networks, IEEE/OSA Journal of Lightwave Technology, 32 (3): 450-460, 2014.
[46] J. Zhang, B. Mukherjee, J. Zhang, Y. Zhao. Dynamic Virtual Network Embedding Scheme based on Network Element Slicing for Elastic Optical Networks, Proc. ECOC, pp. 1-3, 2013.
[47] O. Gerstel, M. Jinno, A. Lord, et al. Elastic Optical Networking: A New Dawn for the Optical Layer? IEEE Communications Magazine, 50 (2): s12-s20, 2012.
[48] N. Sambo, P. Castoldi, F. Cugini, G. Bottari, P. Iovanna. Toward High-rate and Flexible Optical Networks, IEEE Communications Magazine, 50: 66-72, 2012.
[49] EU FP7 CHRON project [Online]. Available: http://www.ictchron.eu.
[50] G. S. Zervas, D. Simeonidou. Cognitive Optical Networks: Need, Requirements and Architecture, Proc. ICTON, 2010.
[51] L. Liu, et al. Demonstration of a Dynamic Transparent Optical Network Employing Flexible Transmitters/Receivers Controlled by an OpenFlow-Stateless PCE Integrated Control Plane, IEEE/OSA Journal of Optical Communications and Networking, 5 (10): 66-75, 2013.
[52] B. R. Rofoee, G. Zervas, Y. Yan, et al. All Programmable and Synthetic Optical Network: Architecture and Implementation, IEEE/OSA Journal of Optical Communications and Networking, 5 (9): 1096-1110, 2013.
[53] B. Wen, R. Shenai, K. Sivalingam. Routing, Wavelength and Time-slot-assignment Algorithms for Wavelength-routed Optical WDM/TDM Networks, IEEE/OSA Journal of Lightwave Technology, 23 (9): 2598-2609, 2005.
[54] K. Christodoulopoulos, K. Manousakis, E. Varvarigos. Comparison of Routing and Wavelength Assignment Algorithms in WDM Networks, Proc. GLOBECOM, pp. 1–6, 2008.
[55] S. Zhang, B. Mukherjee. Energy-efficient Dynamic Provisioning for Spectrum Elastic Optical Networks, Proc. ICC, pp. 3031-3035, 2012.
[56] W. Hou, L. Guo, X. Gong. Survivable Power Efficiency Oriented Integrated Grooming in Green Networks, Journal of Network and Computer Applications, 36 (1): 420-428, 2013.
Weigang Hou was born in 1984 and received the Ph.D. degree in communication and information systems from the School of Information Engineering, Northeastern University, Shenyang, China, in 2013. He is currently an associate professor at the same university. His research interests include optical networks, cloud computing and green networks. He has published over 70 technical papers in these areas. Dr. Hou is a member of IEEE. He worked as a research associate at the City University of Hong Kong for one year. Email: [email protected]

Yejun Liu received the M.S. degree in communication and information systems from Northeastern University, Shenyang, China, in 2010. He is currently pursuing the Ph.D. degree at the same university. His research interests include survivable networks and Fiber-Wireless (FiWi) access networks. Email: [email protected]

Lei Guo received the Ph.D. degree in communication and information systems from the University of Electronic Science and Technology of China in 2006. He is currently a professor in the College of Information Science and Engineering, Northeastern University, Shenyang, China. His research interests include optical networks, wireless networks and hybrid wireless-optical broadband access networks. He has published over 100 technical papers in these areas. Dr. Guo is a member of IEEE and OSA. He received the best paper award at ICCCAS'04. He is currently serving as an Editorial Board Member of The Open Optics Journal and the International Journal of Digital Content Technology and its Applications. Email: [email protected]

Cunqian Yu received the M.S. degree in communication and information systems from Northeastern University, Shenyang, China, in 2008. He is currently pursuing the Ph.D. degree at the same university. His research interests include survivability of optical WDM networks and green grooming in data center networks. Email: [email protected]
Yue Zong is currently a Ph.D. candidate at Northeastern University, Shenyang, China. Her research interests include optical networks and data center networks. Email: [email protected]