Resilient and survivable networks

Computer Networks 54 (2010) 1243–1244

Editorial

Who would dare to argue that the Internet is dispensable? Nobody, most probably. Yet we should remember that the Internet was a pure research subject in the 1960s and 1970s and, in a mere 40 years, has gradually morphed into the infrastructure of the information society. In our society, information is an indispensable good, and the means to produce, process, store, communicate, analyze and consume information have become major drivers of the economy. Still, the Internet was not developed the way we normally develop products or services (via a lab prototype, alpha and beta test versions, and an official product release); instead, when the first users started to use its services productively, the experimental prototype it was at the time became the released version practically overnight, simply because people began to depend on it. The immense growth the net has experienced ever since, both in size and in functionality, makes it an eternal prototype, with inevitable shortcomings.

Ample events demonstrate that the Internet is not as robust as it should be. The lead paper of this special issue discusses such events and introduces a framework for network and service resilience and survivability which may serve as a reference for future research. The framework comprises strategies for responding to events that may negatively impact the resilience of the Internet, and it identifies a set of design principles to be applied for more resilience and better survivability.

In addition to the lead paper, this special issue features five papers, selected from 36 submitted manuscripts, which cover various aspects of resilience and survivability in today's and future networks.

The first paper presents topology control algorithms for heterogeneous sensor networks, i.e. sensor networks in which low-power sensor nodes send their data towards more powerful intermediate nodes, which in turn forward the sensor data to the sink node. The algorithms studied by the authors create shortcuts among the intermediate nodes, leading towards the sink node. The analysis shows that the resulting network exhibits better small-world properties and is more resilient in the presence of various failure modes.

The second paper introduces a flow-level intrusion detection system capable of sustaining high traffic loads, such as those seen on 10 Gbit/s network interfaces. The system can detect several attacks simultaneously, achieves scalability by aggregating flow data from several border routers, and is resilient against denial-of-service (DoS) attacks on the system itself. The authors use a reversible-sketches technique to efficiently detect changes in the behavior of data streams and to identify the type of attack linked with such changes.

The third paper investigates fast-reroute mechanisms for IP networks, as currently discussed in the IETF. Two mechanisms are considered, namely loop-free alternates (LFA) and not-via addresses (which identify failed components that rerouted packets must avoid). Since both mechanisms have their disadvantages, the authors consider combining the two to potentially reach 100% single-failure coverage. They conclude, however, that this combined usage offers no advantage over using not-via addresses alone.

The fourth paper addresses resilient peer-to-peer (P2P) live streaming of multimedia content. P2P streaming approaches are attractive because they automatically scale with the number of peers, as each peer joining the system contributes a portion of its resources to the task of content distribution. However, peers leaving the system, or non-cooperating peers, potentially threaten the stability of the system and can cause congestion. The authors describe an incentive mechanism which leads to better resilience and reduced topology-maintenance cost.

The last paper focuses on resilient fibre metropolitan area networks typically used for the distribution of multicast traffic, such as IPTV or video conferencing and collaboration-support systems. The authors propose a Double Rings with Dual Attachment (DRDA) strategy leading towards meshed topologies, and analyze the properties of the resulting topologies.
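One of the properties analyzed is service availability. As a generic, back-of-the-envelope illustration of how repair time relates to availability — not the authors' model, and with purely hypothetical failure rates — the standard steady-state expression A = MTBF / (MTBF + MTTR) can be sketched as:

```python
# Steady-state availability of a single repairable component:
# A = MTBF / (MTBF + MTTR). Numbers below are illustrative
# assumptions (one failure per year), not values from the paper.
HOURS_PER_YEAR = 8760.0

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the component is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Assume one fibre cut per year (MTBF = 8760 h) and a 12 h repair time.
a = availability(HOURS_PER_YEAR, 12.0)
print(f"availability = {a:.5f}")                      # ≈ 0.99863
downtime = (1 - a) * HOURS_PER_YEAR
print(f"expected downtime = {downtime:.1f} h/year")   # ≈ 12.0 h
```

Under such assumptions, even a 12 h repair time yields high availability only if failures are rare or, as in the DRDA approach, the topology itself provides alternative paths while repair is under way.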
They find that a high degree of service availability can be achieved with service repair times of 12 h and little extra backup capacity, thanks to the built-in resilience of their approach.

The breadth of the research topics treated in the five papers is astonishing. Of course, we cannot expect to see a master plan for a more resilient Internet in a small selection of papers, accepted for the high quality of the research they present. The lead paper in this special issue, however, lays out the problems to be tackled and a framework that aims to lead towards such a master plan. The future will show whether this is the right plan to follow. For sure, the quest for a resilient and survivable Internet will keep the research community busy for some years to come.

Bernhard Plattner
David Hutchison
James P.G. Sterbenz

Available online 19 March 2010