OptIPuter: Enabling advanced applications with novel optical control planes and backplanes


Joe Mambretti, International Center for Advanced Internet Research (iCAIR), Northwestern University, United States

Article history: Received 3 March 2008; Received in revised form 9 June 2008; Accepted 23 June 2008; Available online 5 July 2008

Keywords: Optical networks; Control planes; Lightpaths; GMPLS; Dynamic provisioning

Abstract

Many emerging and anticipated advanced applications have demanding requirements that cannot be supported using traditional communication services, architecture, and technologies. Such requirements include those related to high performance, large scale data volume transport, dynamic provisioning, path flexibility, and stringent determinism for all service parameters. The OptIPuter research initiative was established to investigate the potential for meeting these and other exceptional challenges by creating new types of communication services that could be implemented as complements to traditional approaches and, in some special cases, as alternatives. In particular, the OptIPuter initiative has been exploring advanced communication services based on individually addressable, dynamically allocated lightpaths. These approaches require novel control plane and backplane architecture and technology. As part of its core research agenda, the OptIPuter initiative designed and deployed such architecture and technology, which is described here along with a large scale testbed implementation, including national and international extensions. This testbed was used for a series of experiments, and the results are summarized. © 2008 Elsevier B.V. All rights reserved.

1. Introduction

Several large classes of emerging and potential advanced applications have demanding requirements that exceed the capabilities of traditional communication services, such as Layer 3 services designed for extremely large numbers of fairly small information flows. Such services do not provide adequate support for some classes of traffic, for example, extremely large, long duration streams required by scientific research. Such applications frequently are the first to encounter problems with the technology limitations of existing infrastructure. Increasingly, global science projects require the sophisticated management, transport, and analysis of extremely large sets of data residing in multiple locations, including highly remote sites. Many such applications are at the terascale and petascale level. The challenging issues that must be addressed are not only those related to bandwidth and traffic volume. To some degree, if these were the only issues, they could be approached, although at a high cost, by adding more resources to existing infrastructure. However, merely adding resources does not sufficiently address requirements for guaranteed high performance, especially on shared infrastructure. Also, this approach would not meet the need for fast responses to changing application data flows through dynamic provisioning.





General routed network environments are designed to service fairly predictable traffic behavior within set parameters and not to support many sudden changes among multiple large scale traffic flows. The OptIPuter research initiative was funded by the National Science Foundation to investigate alternatives to traditional routed communication services by designing and implementing new services, especially those based on individually addressable, dynamically allocated lightpaths [1]. A research goal of the OptIPuter project is not simply to provide high performance, high capacity circuits, but to provide a totally new type of flexible communications environment extending to optimal path discovery and implementation while ensuring rigorous service parameters are met, for example, by supporting extremely stringent determinism by completely eliminating jitter and loss and providing for the lowest possible latency. This research requires innovative considerations of control plane and backplane architecture. Traditionally, backplane considerations relate only to infrastructure within a local computing environment. In contrast, this architecture allows multiple lightpath channels to be integrated into large scale, extended backplanes within highly distributed infrastructure, among multiple sites that can span a nation, a continent, or the globe. These backplanes become the foundation for large scale adaptable virtual environments that can include many types of resources, including computers, storage systems, instruments, and sensors. Traditional control planes are used to address fairly static resources.
This architecture allows for control planes to be integrated with other processes so that those resources can be used dynamically, with continuous interactions through specialized signaling. These new services will be implemented initially by universities and a few corporations that are creating regional and state-wide fiber networks, as well as by national fiber infrastructure facilities, such as the National Lambda Rail (NLR), and by international facilities, such as the Global Lambda Integrated Facility (GLIF) [2,3].

2. Problem definition

One key goal of OptIPuter research is to create a large scale distributed environment that is virtualized to such an extent that it has no dependencies on specific physical infrastructure and configurations, in part, to remove restrictive performance barriers of such infrastructure. Also, this concept envisions not creating an application that can merely utilize a particular infrastructure (whether or not it is virtual), but allowing applications to dynamically create their own environments, designing, implementing, and continually reconfiguring them. To accomplish these goals, all resources, including communications, must be addressable and configurable through high level processes. Traditional L3 services are external, non-addressable, non-configurable resources that require applications to conform to multiple preset architecture and technology parameters, for example, those that limit support for exceptionally large scale traffic flows, e.g., 1 Gbps, several 1 Gbps, 10 Gbps and higher. The OptIPuter was designed as a highly distributed infrastructure with processes that allow applications to have direct, controllable access to extremely large scale L1 and L2 resources, not as aggregated channels of smaller flows but as channels supporting a few dedicated individual flows or a single large flow. Such channels can be used to extend backplanes so they are no longer isolated resources within local computational environments but foundations for global infrastructure fabrics. For example, an integrated set of international lightpaths can be used as a global backplane to interlink high performance computers on several continents to allow all resources to be used as a single contiguous environment. Also, individual applications can continually reconfigure these backplanes as required by changing conditions. The architecture anticipates that high volume flows will be terascale in the near future, terabits per second serving terabyte data sets, soon to be followed by petascale communications. Although supporting such large scale flows is important, an equally key objective is to allow for the flexible dynamic provisioning of L1 and L2 paths. Currently, such paths are almost always statically provisioned in data networks. Using the OptIPuter control plane, applications can directly implement and manipulate backplane channels. Also, many next generation services and applications require non-traditional stream distributions within supporting infrastructure. For example, general network architecture anticipates small data flows at the edge and large scale aggregations at the core. The OptIPuter architecture anticipates large scale data flows both at the edge and within the core as well as flows that are much larger at the edge than in the core. Also, the majority of applications today are designed to be supported by a single communication service layer. The OptIPuter was designed to take advantage of multi-layer service integration.
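To make the extended-backplane concept concrete, the following is a minimal, purely hypothetical sketch of an application-level view of a backplane composed of parallel dedicated lightpaths; the class and method names are invented for illustration and do not correspond to actual OptIPuter interfaces.

```python
# Hypothetical sketch only: an application composes an "extended backplane" from
# dedicated lightpath channels and can widen or shrink it as its flows change.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Lightpath:
    src_site: str          # e.g., a cluster endpoint at one core facility
    dst_site: str          # e.g., a cluster endpoint at another facility
    rate_gbps: int = 10    # the testbed commonly used 10 Gbps channels


@dataclass
class ExtendedBackplane:
    """A set of parallel dedicated lightpaths treated as one logical backplane."""
    channels: List[Lightpath] = field(default_factory=list)

    def add_channel(self, path: Lightpath) -> None:
        self.channels.append(path)

    def total_capacity_gbps(self) -> int:
        return sum(c.rate_gbps for c in self.channels)


# An application might widen its backplane before a large transfer and release
# channels afterwards, rather than sharing a static routed service.
backplane = ExtendedBackplane()
for _ in range(4):
    backplane.add_channel(Lightpath("chicago-cluster", "amsterdam-cluster"))
print(backplane.total_capacity_gbps(), "Gbps of dedicated backplane capacity")
```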
To provide for a communications infrastructure that is significantly more flexible than traditional implementations, a new control plane is required to allow for edge processes to directly interact with the resources within that infrastructure. The primary existing tools used for manipulating L1 and L2 services are centrally administered control planes designed to establish static, not dynamic, resources. Consequently, a major objective of this research was to design and implement an innovative control plane, created to be invoked by distributed processes used for interacting continually with communication infrastructure resources.
The OptIPuter environment was not developed for static paths established for very long periods, but instead for paths that are continually being configured and reconfigured as application requirements change over time. The majority of these paths are not implemented as shared resources, as they are with traditional services, but as dedicated resources, such that they become extended backplanes of large scale clusters that can be national and international in scale. Another problem that this research is exploring is the integration of edge processes and devices with large scale bandwidth by creating such backplanes with multiple parallel lightpaths. A related problem is optimization for resource allocation, utilization, and monitoring.

3. Control plane architecture

To address the requirements of next generation applications and communication services, a new architecture for control planes is required, one that will provide for functions that are abstracted from the specific characteristics of supporting physical devices. Such an architecture must be developed not only for existing devices but also for anticipated future devices and for hybrid environments that contain multiple devices that must be integrated into contiguous environments. In general, the architectural design is oriented to supporting large numbers of 1 Gbps and 10 Gbps streams. An expectation is that many core systems will exist that can provide support for hundreds of concurrent 10 Gbps streams as well as 40 Gbps and 100 Gbps streams. However, any current discussion of novel control plane architecture should be placed within the context of current macro trends in this area, which are forcing the reconsideration of traditional approaches. For example, each network layer has traditionally been provided with a separate control plane. However, current architectural developments are oriented toward designing unified control planes that can operate at all network layers. Traditionally, control planes have been designed as closed proprietary systems closely interlinked with the physical characteristics of specific devices. In contrast, open architecture is now being developed, based on services supported by IP. Also, control planes, especially those for optical networks, have been designed to implement fairly static resources, such as lightpaths. In addition, a basic approach has been using an overlay model, which is based on concepts of separate domains for each layer. More recently, signaled overlay models have been developed, which provide for separate domains, but also support a wide range of interactions between the IP layer and the optical layer. This approach, which is used by the OptIPuter, allows for significantly more flexible networking, for example, supporting resources on-demand, including end-to-end lightpaths between any two or more edge points. This model can also use out-of-band signaling, for example, through a control plane channel that is physically separate from data planes, perhaps implemented on a separate lightpath. The majority of OptIPuter experiments used out-of-band signaling.
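A rough sketch of such an out-of-band request is given below. It assumes the kinds of service parameters enumerated in the following paragraph (operation, priority, protection class, direction); the message layout, field names, and endpoint are illustrative assumptions, not the OptIPuter control plane's actual wire format.

```python
# Illustrative only: a lightpath service request sent out-of-band over a separate
# control channel. All names and the JSON format are assumptions for this sketch.
import json
import socket

CONTROL_CHANNEL = ("control.example.net", 9000)   # hypothetical control plane endpoint


def request_lightpath(operation: str = "create") -> dict:
    """Send one service request and return the control plane's reply."""
    request = {
        "operation": operation,            # e.g. create | delete | change | swap | reserve
        "endpoints": ["siteA/clusterX", "siteB/clusterY"],
        "priority": "high",
        "protection_class": "unprotected",
        "direction": "bidirectional",
        "rate_gbps": 10,
    }
    with socket.create_connection(CONTROL_CHANNEL) as sock:
        sock.sendall(json.dumps(request).encode() + b"\n")
        reply = json.loads(sock.makefile().readline())
    # As noted later in this section, a successful request returns addressing
    # information for the provisioned path (e.g., an IP address and subnet mask).
    return reply
```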
Also, its architecture supports requirement signaling for types and levels of services, such as defining priority, protection class, direction (e.g., uni-directional vs. bi-directional), resource discovery and availability, lightpath management (e.g., create, delete, change, swap, reserve), optimization and related performance parameters, and survivability, protection, and restoration processes. The architecture also provides functions for implementation, management, and routing, and includes the use of link-state protocols. The OptIPuter control plane is an integral part of a distributed system designed for dynamically orchestrating reconfigurable resources. It provides for an API that allows the application to signal for resources.
Experimentally, several such methods were used, including a process that monitors a TCP socket for requests and responds over a dedicated channel directly linked to the requesting process. Based on separately accessed policy determinations, the process accepts and interprets the requests. A new IP address and subnet mask is returned. Service requests can be basic or highly sophisticated, incorporating multiple parameters and configuration characteristics, e.g., allowing selection of single or multi-service and protocol support. Individual high performance protocols can also be selected. Basic signaling allows the network to be transparent, while details provide for network awareness. Other processes continually monitor and analyze the status of network resources. A resource manager includes a discovery process that provides state information on topology allocations, enabling creation of virtual networks and network topologies, comprising distributed backplanes. Experiments used different types of path and edge device addressing techniques, including those for lightpaths, private address spaces, and specialized resource requests, such as those for multi-parallel vs. single paths and for path duplication capabilities. Functions and capabilities related to monitoring and analysis were extensions of standard processes for these capabilities. These components were investigated through a series of experiments using data intensive science applications on a large scale infrastructure.

4. Standards context for control planes

The importance of developing a new control plane architecture has been increasingly recognized by research communities and standards associations. For example, the International Telecommunication Union (ITU-T) has been defining architectures for an Optical Transport Network and Automatic Switched Optical Networks, including provisions for NNI, UNI, control signaling, and other functions. These designs are being developed to meet the requirements of large communication service providers and assume that provisioning will be managed by a central authority. The Optical Internetworking Forum (OIF) is also developing architectures for UNI and NNI, which are designed to meet requirements of equipment developers. The IETF has established several initiatives within various working groups. Most notable are those that are addressing issues related to the Common Control and Measurement Plane (CCAMP), and the control plane efforts related to Generalized Multi-Protocol Label Switching (GMPLS). Other key initiatives relate to the Link Management Protocol (LMP), Routing Extensions in Support of Generalized MPLS, and OSPF Extensions in Support of Generalized MPLS.

5. Testbed description

The large scale OptIPuter testbed is based on three regional core facilities, one each in Southern California, Chicago, and Amsterdam. These core facilities are interconnected with dedicated lightpaths within optical channels that comprise the backbone of this distributed facility. Locally at each site, other provisional lightpaths have been allocated as part of an accessible repository that could be dynamically provisioned if required. Downward trends in pricing for optical fiber and related equipment allow for such on-demand capacity to be implemented at reasonable cost. Each core location is provisioned with multiple high performance computational Linux clusters, mass storage systems, and high definition displays.
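The repository of provisional lightpaths described above can be modeled, in simplified form, as follows. This is an illustrative sketch only, not the testbed's actual resource manager; the class and its behavior are assumptions made for clarity.

```python
# Simplified sketch of a per-site lightpath repository: some lightpaths are already
# in use, others are "provisional" and can be turned up on demand.
from collections import defaultdict


class LightpathRepository:
    def __init__(self):
        # (site_a, site_b) -> list of lightpath states: "in_use" or "provisional"
        self.pool = defaultdict(list)

    def register(self, site_a: str, site_b: str, count: int) -> None:
        """Record provisional lightpaths available between a pair of sites."""
        self.pool[(site_a, site_b)].extend(["provisional"] * count)

    def allocate(self, site_a: str, site_b: str) -> bool:
        """Turn up a provisional lightpath between two sites, if one is available."""
        paths = self.pool[(site_a, site_b)]
        if "provisional" in paths:
            paths[paths.index("provisional")] = "in_use"
            return True
        return False


repo = LightpathRepository()
repo.register("chicago", "amsterdam", count=2)
assert repo.allocate("chicago", "amsterdam")   # dynamically provisioned on demand
```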
This research initiative is based completely on experiments with real large scale applications and not on attempts to produce simulated traffic or models. These edge facilities are connected to the core facilities with dedicated lightpaths.
Each core facility has a high performance OEO packet switch router, a passive OOO switch based on 2D MEMS, and wavelength selectable switches. The optically transparent OOO switches are used as optical patch panels, which can be used with any protocol, channel frequency, and throughput rate. This large scale distributed infrastructure can support packet routing, L2 switching, and dynamic L1 provisioning. However, the primary focus of the research has been related to mechanisms for allocating and utilizing L1 and L2 paths, which are used as extended backplanes. This infrastructure can be used to implement a wide range of virtual topologies. At the edge, almost all equipment was implemented with 10 Gbps NICs, directly connected to 10 Gbps lightpaths supported by OOO switches or connected to small 10 Gbps Ethernet switches that in turn were connected to OEO or OOO switches. As part of ongoing experiments, this distributed infrastructure has been implemented with different types of control plane architecture, including those based on GMPLS with extensions [4,5]. However, in contrast to traditional central authority implementations, all were examples of integration of distributed edge processes with mechanisms for directly controlling core resources, including dynamic configuration.

6. Experimental descriptions

To provide the common interactive Internet experience that is so familiar today, web browsers interact with very small amounts of information. Although large scale science applications require a similar degree of personal interactivity, they also require direct interactivity with data volumes that are orders of magnitude larger than those on the common Internet. Current communication services as generally implemented cannot support such interactivity with large data sets, in part, because they rely on techniques for provisioning static shared resources. Because the OptIPuter must support applications that frequently require multiple large scale streams from among many sources and destinations under continually and rapidly changing conditions, experimental tests were designed to support actual large scale applications, not merely models or simulations using artificially generated traffic. Two key application sciences used as reference contexts for the research are geophysical sciences and bioinformatics, especially visualization and imaging applications. Several other disciplines are also being explored, including high performance large scale digital media and data mining.

7. Results, issues, and implications

All experiments conducted using this infrastructure were highly successful in that the applications and services requiring large scale data flows were well supported. On the provisioned extended backplanes, common impediments experienced in L3 networks, such as jitter and loss, were almost completely eliminated, and latency was minimal. Individual streams for several applications exceeded 9.5 Gbps for long periods (hours and days) with no impediments. This environment minimized latency for data flows even for long distance paths. One major latency issue relates to setup timing for the topologies used. As indicated, to establish resources various control plane processes are required. An API signals for resources, specialized messages are sent and processed, resource discovery mechanisms are invoked, and virtual networks and network topologies are created, some with specialized resource requests. These processes incur an initial latency penalty, which can be of the order of seconds. Multi-process sequences can take over ten seconds to establish connections.
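A back-of-envelope estimate, using the figures reported here (roughly 10 s of setup and 9.5 Gbps streams) and assuming a terabyte-scale data set, indicates why such setup delays are a small fraction of total transfer time; the specific data-set size below is an assumption for illustration.

```python
# Rough overhead estimate: a ~10 s path setup amortized over a terabyte-scale
# transfer on a ~9.5 Gbps dedicated stream. Figures are illustrative.
setup_seconds = 10.0
rate_gbps = 9.5
data_terabytes = 1.0

transfer_seconds = (data_terabytes * 8 * 1000) / rate_gbps   # 1 TB = 8000 Gb
overhead = setup_seconds / (setup_seconds + transfer_seconds)
print(f"transfer ≈ {transfer_seconds / 60:.0f} min, setup overhead ≈ {overhead:.1%}")
# ≈ 14 min of transfer time; the 10 s setup is on the order of 1% of the total.
```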
Given the scale of the applications and the performance gain provided by this approach, this type of latency was a negligible consideration. However, these are areas that can be addressed through additional research. There are two components to these timings. One is software processing time, which can be minimized through optimization and embedding processes in chip sets.
The other is related to the management of state information for resources. To effectively use these types of high performance environments, innovative methods for managing state information, especially link-state, are required, and these issues should motivate on-going research. Another issue relates to the timing of MEMS switching, which generally requires milliseconds. OEO switching is much faster, requiring less than a millisecond. Consequently, application processes and configurations that require hybrid provisioning with both L1 and L2 streams must have parameters related to timing considerations. However, for the majority of the applications used for experiments, this L1 latency was not a consideration because they used long-term flows. Such latency may be an issue with smaller scale applications that have strict timing dependencies. A mechanism is required that provides processes utilizing these resources with feedback indicating timing discrepancies across all domains used. Although this environment can support streaming at close to the speed of light, latency may still be an issue for applications using national and international infrastructure, for example, those transporting data among sites 200 ms apart. However, to address this issue, several experiments used lightpaths to provide for caching within computing nodes. By pre-fetching and caching data, remote repositories could be utilized like local disks [6] (a simplified illustration of this pattern appears at the end of this section). These research experiments also revealed the need for a new method for path and edge device addressing, including for lightpaths, within these environments. The addressing used was standard IP. However, this approach was not sufficient to provide for the types of object identifiers that are required within these environments. When discovering resources and implementing multi-parallel paths and path duplication utilities, separate higher level addressing schemes are required, e.g., to allow a signaled call to establish a pre-defined topology. Another important observation is that these experiments demonstrated that using L1 and L2 paths is much more cost effective for supporting large scale streams than using L3 channels. The per-port costs for 10 Gbps L2 services were 10% of those for L3, and those for optical ports were about 10% of those for L2 ports. Furthermore, all of these costs are declining rapidly, indicating that over time these capabilities will be increasingly attractive. An additional cost benefit is that the power consumption is much less for the L1 and L2 channels. Also, these experiments showed how using private dedicated channels could substantially improve data security. Utilizing communication channels segmented at the physical layer end-to-end enhances data security, and forms of encryption requiring the production of high volume streams can be supported with high performance transport. As noted, this research project is not directed at replacing existing services but at providing complementary capabilities for resource intensive applications within specialized environments. Within the context for which they were designed, these capabilities are highly scalable, especially given the cost curves of optical technologies. In some cases, these types of service can actually replace existing packet routed services with higher performance, high quality, and lower cost.
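The pre-fetching and caching approach mentioned above can be illustrated with a minimal read-ahead cache sketch. The block interface and fetch function here are hypothetical stand-ins, not the mechanism used in the experiments reported in [6].

```python
# Minimal read-ahead cache sketch: blocks are pre-fetched from a remote repository
# over a dedicated high-capacity path so that sequential reads behave more like
# local disk despite ~200 ms of round-trip latency. The fetch function is a
# stand-in for whatever transport the dedicated lightpath carries.
from collections import OrderedDict
from typing import Callable


class ReadAheadCache:
    def __init__(self, fetch_block: Callable[[int], bytes], read_ahead: int = 8,
                 capacity: int = 1024):
        self.fetch_block = fetch_block      # pulls one block from the remote repository
        self.read_ahead = read_ahead        # how many blocks to pre-fetch past a read
        self.capacity = capacity
        self.cache = OrderedDict()          # block_id -> bytes

    def read(self, block_id: int) -> bytes:
        # Pre-fetch the requested block plus a window of following blocks,
        # hiding the long round-trip time behind sequential access patterns.
        for b in range(block_id, block_id + self.read_ahead):
            if b not in self.cache:
                self.cache[b] = self.fetch_block(b)
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)   # evict the oldest cached block
        return self.cache[block_id]


# Example with a dummy fetcher standing in for the remote repository.
cache = ReadAheadCache(fetch_block=lambda b: bytes(64), read_ahead=4)
data = cache.read(0)
```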
8. Related research

A number of research projects are investigating new architecture and technology for dynamically provisioned lightpath based networks, based on Dense Wavelength Division Multiplexing (DWDM) techniques. Some of these projects are attempting to integrate programmable networking at all layers in large scale distributed environments [3]. In the past few years, many optical fiber based research testbeds have been established to investigate these topics experimentally.
OMNInet was established to investigate and develop new architecture, methods, and technology for high performance dynamically configured metro area networks [5]. In the Netherlands, StarPlane was implemented as a national optical infrastructure to investigate and develop new techniques for optical provisioning. The NSF-funded EnLightened testbed was created to explore dynamic, adaptive, coordinated, and optimized use of highly distributed science resources connected by optical networks. CHEETAH (circuit-switched high-speed end-to-end transport architecture) was established to investigate new methods for high throughput, delay controlled end-to-end circuits based on Ethernet over SONET [3]. Also sponsored by the NSF, DRAGON (dynamic resource allocation via GMPLS optical networks) has investigated new methods for dynamic, deterministic, manageable network transport services [3]. In Asia, the Japan Gigabit Network and the AIST G-Lambda research projects are both investigating these topics. These testbeds are described in a recent publication, along with others, for example, those implemented in the EU [3].

9. Future research

The research described here focuses on the design of a new type of large scale distributed environment for data intensive applications, based on lightpaths, which can be dynamically configured and reconfigured to support these applications. Preliminary results indicate that this is an important approach to providing communication services for such applications. However, there are additional interesting topics that can be explored to enhance these types of lightpath based services. OptIPuter experimental research investigated multiple issues related to inter-domain provisioning using the techniques described here. This particularly important topic is beyond the scope of this discussion. Also, another control plane model, which may be explored in the future, supports a single network fabric that completely integrates IP and optical components, using a common routing protocol within both the IP and optical domains, but allows for separate instantiations within those domains. This approach could provide a means for applications to self-select L1, L2, or L3 services or provide for a hybrid integration of such services, an important capability. Another area that deserves additional research is the management and optimal utilization of the state information that is required for these types of environments. Another issue that deserves to be investigated is scheduling resources as opposed to over-allocating resources. This issue is the focus of much current debate between those who favor implementing a resource reservation scheme and those who would prefer to use low cost components to over-allocate resources. The OptIPuter project has given only preliminary consideration to integration with other environments, for example, communication services that integrate packet routing, high performance routing, packet switching, pseudo-wire techniques, and lightpaths. A major consideration is how to allocate traffic to optimize use of such environments. Several research initiatives are focused on achieving high performance streams through specialized L3 transport protocols, and several have been tested within this environment. Further investigation of the utility of such protocols would be useful. Also, functions and capabilities related to monitoring and analysis could be enhanced and deserve additional experimental analysis.
In addition, several control plane and backplane experiments integrated the OptIPuter techniques with Web Services architecture and implementations with positive results, and this topic is another area requiring additional investigation.

10. Summary
The OptIPuter research project was initiated to explore new methods for meeting exceptional application and communication services challenges by creating complementary and alternative services to those that exist today, especially services based on individually addressable, dynamically allocated lightpaths. These new methods require novel control plane and backplane architecture and technology. Traditional control planes were designed to be used as part of centralized communications systems to implement and adjust fairly static or completely static resources. The OptIPuter has a control plane architecture that is directly integrated within a large scale distributed system designed to discover and use multiple resources within a continually and rapidly changing environment. This control plane can dynamically implement and adaptively adjust extended backplanes, based on lightpaths, that can be used to create large scale, highly distributed virtual environments. To date, the OptIPuter initiative has designed such an architecture and has deployed and instantiated a large scale testbed. The results from initial experiments indicate that these methods provide particularly powerful resources for very large scale, data intensive applications. These preliminary results are highly promising, and the success of these experiments has attracted additional domain sciences to the project [7]. Today, a number of large scale applications are beginning to use these approaches in production environments for science research applications. These tools should be considered complements to, rather than substitutes for, traditional methods. However, for a few applications, they may actually serve as replacements for existing services.

Acknowledgements

The OptIPuter research program was funded by the National Science Foundation, under NSF ITR Cooperative Agreement OCI0225642. The OptIPuter project was led by Larry Smarr, Tom DeFanti, and Maxine Brown. Other research partners who contributed to the research described here include Jason Leigh, Eric He, Oliver Yu, Jim Chen, Fei Yeh, Robert Grossman, Kees Neggers, and Cees de Laat.


References

[1] L. Smarr, A. Chien, T. DeFanti, J. Leigh, P. Papadopoulos, The OptIPuter, Communications of the ACM 46 (11) (2003) 58–67.
[2] L. Smarr, M. Brown, T. DeFanti, C. de Laat (guest editors), Special section on iGrid 2005: The Global Lambda Integrated Facility, Future Generation Computer Systems 22 (8) (2006) 849–1054.
[3] F. Travostino, J. Mambretti, G. Karmous-Edwards (Eds.), Grid Networks: Enabling Grids with Advanced Communication Technology, John Wiley & Sons, 2006.
[4] T. DeFanti, M. Brown, J. Leigh, O. Yu, E. He, J. Mambretti, D. Lillethun, J. Weinberger, Optical switching middleware for the OptIPuter, IEICE Transactions on Communications E86-B (8) (2003) 2263–2272 (special issue on Photonic IP Network Technologies for Next Generation Broadband Access).
[5] J. Mambretti, D. Lillethun, J. Lange, J. Weinberger, Optical Dynamic Intelligent Network Services (ODIN): An experimental control plane architecture for high-performance distributed environments based on dynamic lightpath provisioning, IEEE Communications Magazine 44 (3) (2006) 92–99 (special issue on An Optical Control Plane for the Grid Community).
[6] N. Krishnaprasad, V. Vishwanath, S. Venkataraman, A. Rao, L. Renambot, J. Leigh, A. Johnson, B. Davis, JuxtaView: A tool for interactive visualization of large imagery on scalable tiled displays, in: Cluster 2004.
[7] R. Grossman, G. Yunhong, D. Handley, M. Sabala, J. Mambretti, A. Szalay, A. Thakar, K. Kumazoe, O. Yuji, M. Lee, Y. Kwon, W. Seok, Data mining middleware for wide area high performance networks, Future Generation Computer Systems 22 (8) (2006) 940–948.

Joe Mambretti is the Director of the International Center for Advanced Internet Research at Northwestern University (iCAIR, www.icair.org), the Director of the Metropolitan Research and Education Network (MREN, www.mren.org), co-Director of the StarLight international exchange (www.starlight.net), member of the Executive Committee for I-WIRE, principal investigator for OMNInet and for the Distributed Optical Testbed, and research participant in the OptIPuter initiative. iCAIR accelerates leading edge innovation and enhanced digital communications through advanced Internet technologies, in partnership with the international community. iCAIR accomplishes its mission by undertaking large-scale (e.g., global, national, regional, metro) projects focused on high performance resource intensive applications, advanced communications middleware, and optical and photonic networking. He is co-editor of ‘‘Grid Networks: Enabling Grids With Advanced Communications Technology’’, published by Wiley.