ISTR0702.qxd
7/19/02
4:00 PM
Page 11
Information Security Technical Report, Vol 7, No. 2 (2002) 11-21
Is there a Need for Survivable Computation in Critical Infrastructures?
Yvo Desmedt
Computer Science, Florida State University, Tallahassee, FL 32306-4530, USA, [email protected], and Dept. of Mathematics, Royal Holloway, University of London, UK
Abstract
Which critical infrastructures are vulnerable to cyber terrorist and information warfare attacks today is decided on an ad-hoc basis. This has led to incorrect conclusions and may imply a waste of money. Moreover, it is not obvious which infrastructures are the most critical to the survival of our economy. Methods to evaluate these questions are discussed. We also discuss some of the methods that can be used to increase the survivability of computation.

1 Introduction
1.1 Our dependency on computers
Computers are involved in controlling and managing our society to an extent few CEOs, managers in general, heads of state, law makers, supreme court judges, generals and others at the top realize. Although many predictions made by the artificial intelligence community have never been realized, computers are very good at doing computations and digital communication. While traditional information needed a different medium for each data type, digital information uses a single one: audio, graphics, text, pictures, video and other data all come easily in digital format. Therefore computers are utilized in simple routine operations, such as the following.
bookkeeping
Most corporations today use computers to maintain their books.
0167-4048/01/$22.00 © 2002, Elsevier Science Ltd
communications
Including e-mail, voice (e.g. phone) and video, using a wide range of different technologies.
control
In modern control systems both the sensors and the actual controls are computerized. These systems control physical parameters such as temperature, pressure, voltage, electrical current, position, speed and acceleration, the chemical and biological composition of goods, etc. For example, computerized thermostats together with computerized actuators control the temperature of a room, of a chemical process in a plant, of the cooling in a nuclear power plant, etc. Often a centralized computerized control system is also involved (used to change the parameters, perform optimizations, etc.). Other examples include the acceleration/deceleration of a cruise-controlled car (or a car with anti-lock brakes), and the movement of an airplane or of a robot in the car manufacturing industry.
databases, data processing and data storage
Old-fashioned archives have been abolished in many organizations in modern countries.
Very few organizations keep paper copies of their electronic data. The processing of data allows the easy manipulation of airline reservations, audio, text, traffic lights, etc. Audio, video and the like are stored in digital format.
forecasts
These are no longer limited to weather forecasts, but include, for example, predictions about the stock market.
sub-optimizations
When the need exists to make choices, computers far outreach humans in speed and correctness in deciding which is the best choice, provided there exists a not too complex method (algorithm) on which the decision can be based mathematically. This feature is, for example, used in routing data (e.g. phone calls) as well as in routing goods transported by air, rail, road or boat.
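The routing sub-optimizations described above usually reduce to shortest-path computations. A minimal sketch of the idea, using Dijkstra's algorithm on an invented toy network (node names and costs are hypothetical, not from the paper):

```python
# Illustrative sketch: how a computer picks the "best" route when a simple
# algorithm exists -- here Dijkstra's shortest-path algorithm.
import heapq

def shortest_path_cost(graph, source, target):
    """Return the minimum total cost from source to target, or None."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return None  # target unreachable

# A toy network: each entry maps a node to its (neighbor, cost) pairs.
network = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
}
```

On this toy network the cheapest route from A to D costs 3, via B and C.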
1.2 Unique aspects of cyber attacks
When combining these simple operations, computers can be involved in more complex operations. We describe only one of many examples worth mentioning. Reverse-GPS combines the Global Positioning System (computerized sensors) with wireless communication. It allows railroad and truck companies (and the like) to know where their vehicles are located. Many of these companies now combine this with a computerized routing system to reroute, e.g., their trucks in real time to pick up goods. This optimization reduces the need for an overcapacity of trucks and personnel.
The more our world depends on computers assumed to be available (which means that the computer and its resources, such as storage, can be used by those authorized at any moment) and reliable and robust (which guarantees that the computer will perform as it should), the more a successful attack undermines not only the computation itself, but also the application. For example, in the 1980s researchers said that hackers could order the disconnection of the utilities of certain customers by breaking into the database that manages this information [1]. It was also pointed out that hackers could modify certain critical parameters of the computerized control of a chemical plant to cause an explosion [12, p. 257]. In general, the indirect impact of a hacker shutting down a computer, or succeeding in making it behave incorrectly, may be much worse than the possibility of hacking the computer itself. To understand this, one needs to know which infrastructures are the most critical and which of those depend the most on computers. We discuss this in more detail in Section 2. To better understand how the dependency could be exploited, one needs to understand the unique aspects of cyber attacks.
It may seem that dependency on a single technology is unique to computer technology. However, it is not. Before our world moved from a mechanical one to an information-based one, ball-bearings were critical. What makes computers so vulnerable is the fact that the same attack can be exploited several times. When, during World War II, the allies bombed ball-bearing factories, several attacks using multiple strikes with several airplanes and different bombs were needed. However, in the case of computers, the single effort of developing a computer virus (or computer worm or the like) is sufficient to attack hundreds to millions of computers, roughly in synchrony. The terrorist attack of September 11, 2001 demonstrated the impact of synchronized attacks. Although there are some similarities from the viewpoints of
re-exploiting the same weakness* and resources**, there is a major difference. On September 11, the attack could only be repeated at most as many times as the terrorists had the resources (pilots and terrorist teams). However, from the attacker's viewpoint, that is not the case with cyber attacks. Indeed, data can be replicated, and so the increase in required resources as a function of the number of items attacked is almost non-existent. This aspect of replication impacts, for example, how insurance companies need to treat their policies when dealing with cyber threats [3]. While an isolated break-in has some similarities with a fire in a house, a medium-scale attack using the aforementioned replication techniques should be viewed in a way comparable to earthquake (or hurricane) insurance. Large-scale cyber attacks may be regarded, from an insurance viewpoint, as being equivalent to a war or terrorist attack, against which most insurance companies do not insure. Evidently, what insurance companies would like to know is what the probabilities are of cyber attacks at each level of severity. Society would like predictions about those. We now discuss this.
*[As the author and Prof. Burmester stated, by having similar airport security checks around the nation, a single weakness could be re-exploited [19].]
**[In the terrorist attacks, the airplanes (fuel, etc.) that hit and destroyed the buildings did not belong to the terrorists. However, in the cyber world the computer used to launch1 the strike may belong to the attacker. Similarly to the September 11 attack, several of the computers involved do not belong to the enemy.]
1 Note the difference between the words ``launch'' and ``program.'' Today more and more tools are available on the internet. So, those who program the tools do not necessarily perform the attack.
1.3 Predictions
There are several ways to highlight the potential impact of a cyber attack on critical infrastructures. A method that usually gets immediate attention from the public is to forecast such an attack. However, several predictions about the evolution of the information age have been plain wrong or exaggerated. Examples are:
The Y2K doomsday
Experts predicted that a whole range of systems would shut down at midnight on January 1, 2000. Systems that were supposed to be affected included several systems distributing such commodities as electricity, gas, fuel and water. Some even claimed that milking machines would stop working [41]. Those who believed the hype invested billions of dollars to prevent any possible doomsday scenario (the US Pentagon alone invested $3.6 billion [6]). In the US, computer system managers in several corporations and organizations had to be at work on New Year's Eve [40]. In some countries the software was updated only in systems that run very critical applications, such as nuclear power plants and financial institutions. In some countries the issue was simply ignored, and experts predicted disastrous consequences there (see e.g. [38,39]). However, the lack of any major impact worldwide demonstrated the hype [42,10].
The rosy future of dot-coms and e-commerce
Several experts predicted that e-commerce was going to be the foundation of new companies. The financial world invested billions of dollars into these. Some even claimed that home delivery of products ordered over the internet would imply that shopping malls would have to close (see
e.g. [32]). Internet-ordered airline tickets would drive travel agencies into bankruptcy [33,25], etc. However, hastily developed software turned out not to have taken typical practices into account and frustrated internet e-commerce users. For example:
• While shops remove items already ordered as wedding presents from their deposited gift lists, the software used by many e-shops does not.
• Buying several items (e.g. food) from different vendors over the internet is often slow, while this is not the case in a shop.
• Lots of people in the US ordered favorite toys over the internet as Christmas presents in 1999. They found out that their credit cards were charged by the software, but that the depots ran out of these toys, which were delivered too late or not at all [9]. Evidently, these customers did not return for the 2000 Christmas season.
Four months later the financial world realized the hype [7].
In the light of these incorrect predictions, it should be no surprise that some now claim that serious threats, such as cyber terrorism, against the computerized part of the critical infrastructures will never materialize nor have disastrous consequences (see e.g. [26]). However, ignoring predictions that look exaggerated may have undesired consequences. In 1985 the Inman report forecast more terrorist attacks against embassies [37]. The US congress never provided the appropriate funds demanded [35]. However, in 1998 two US embassies were bombed. Before one can forecast a severe cyber attack against the critical infrastructures one needs to know whether a potential enemy has the necessary intent and know-how.
It is probably correct to claim that several countries are setting up information warfare2 forces. However, if caught, launching such an attack may imply a full-scale war. So predicting such an attack is at least as difficult as forecasting war. In general, security events are likely to be non-ergodic3. In terms of what to attack, if the target is poorly chosen, the enemy may fail to have a dramatic impact. We discuss this in more detail in Section 2. As for how to attack, while several terrorist groups and military organizations in the world are experts on traditional explosives, few are very familiar with the combination of tools available and needed. Moreover, the impact of an attack could in some circumstances be increased by combining the cyber attack with other ones, i.e., using explosives, biological agents, etc. The enemy gets into a stronger position when having access to inside experts. For example, in the case of computers used in the control aspects of a chemical plant, experts may include system managers as well as control engineers and chemists. Note that sometimes vendors and remote technicians involved in the maintenance of computerized components of a system have more knowledge of it than the system manager of the plant. So, for all these reasons we avoid making predictions. Instead we focus on analyzing the feasibility of a potential attack and how to evaluate the severity of an attack. I would argue that it will likely take a while before one
2 Some definitions of the terminology are so broad that they even include the launching of a propaganda campaign in case of war. We exclude such a broad definition. We limit ourselves to strikes of which a part is an attack against computers, e.g. by hacking techniques.
3 This is terminology from statistics and stochastic system theory. It implies that one cannot predict the future from observing the past.
can make scientifically justified and reliable predictions about serious cyber attacks. Therefore, we focus in this paper on methods to evaluate which infrastructures are the most critical, and, of those, which are the most vulnerable to cyber attacks. Indeed, critical infrastructures that did not computerize at all are obviously not at risk from a cyber attack.
1.4 Survivability
So far, most of our discussion has been about dealing with denial of service and similar attacks. The issue of survivability is much more complex. Indeed, consider the audio recording industry. It is not excluded that the massive copying of digital music, called copyright infringement or legal sharing depending on whose viewpoint one takes, may cause the industry to collapse. So, other cyber security aspects affect survivability too. Therefore, one [31] can describe survivability as:
Information survivability is the capability to deliver desired variable properties (such as security and reliability) for a given mission as a function of variable information resources, and faults (malicious and accidental) as these evolve in time.
Obviously such a broad definition implies a discussion of such security issues as privacy and authorized wire-tapping (called law enforcement), (entity and message) authenticity, anonymity and traceability, copyright and watermarking, etc. Such a discussion involves all of the state-of-the-art of communication security (including anti-jamming techniques), computer security, cryptography and reliability, which is far beyond the scope of this paper.

2 Estimating vulnerabilities
The question we now address is how to evaluate which infrastructures are the most critical and, of these, which are the most vulnerable. The answer to such a question could be a simple list, or a list with weights expressing both the relative importance of each infrastructure and the degree of its cyber vulnerability.

2.1 Most critical infrastructures
2.1.1 A first approach
One of the first public efforts, if not the first, to estimate which infrastructures were the most critical to our society, was done in the context of the US Presidential Commission on Critical Infrastructure Protection during the Clinton administration. It considered the following areas as being critical [34]:
Information and Communications, Electrical Power Systems, Gas and Oil Transportation and Storage, Banking and Finance, Transportation, Water Supply Systems, Emergency Services, and Government Services.
One can wonder why food distribution, for example, is not on the list while water supply is. Food distribution is heavily computerized. Bar code readers are used extensively in this industry, not only for bookkeeping, but also as the first step of an automated inventory. Seeing that only the sectors that had representatives on the presidential commission correspond with those identified as critical [34], it seems that no serious effort was made by the commission to study the need for expanding the original list.

2.1.2 A second approach
Many would likely agree that the sectors on the above list are indeed critical. So, the list could be used as the starting point for making an improved one. To do this, one approach is to list the major sectors of our modern society and to decide whether these should be
included. The pillars on which our modern world is based are:
goods
embracing production and distribution (including the following sectors: agriculture, (bio-)chemical, electrical, electronics, furniture, mechanical (e.g. airplane, car, machinery, robots), mining, pharmaceutical, supplies, textile)
services
(including: communication, emergency, financial, governmental, library, medical, transportation, travel, utilities, waste management)
Such an approach has several disadvantages. The main one is the lack of a foundation to justify the decisions. This implies that one lacks knowledge of:
what is truly critical. For example, a manufacturing sector may be involved in the production of several different goods, some more critical than others.
the dependencies between these infrastructures
whether a truly critical infrastructure is vulnerable to cyber attacks and to what degree.

2.1.3 PERT graph
A well known technique to model manufacturing and scheduling is the PERT (Program Evaluation and Review Technique) directed acyclic graph4 (see e.g. [20,23]). It allows one to represent the interaction between different aspects of the economy. For example, to make a car one needs a mechanical motor, wheels, a frame, etc. The factories that make those also need components, etc. These may also be represented in the graph. The PERT graph has been used to model scheduling problems. It can easily be used to identify the critical components: remove a process together with all the processes that depend on it. One can then measure how critical this component is by counting how many goods/services can no longer be produced/provided. The importance of goods can also be measured in a similar way; e.g., transportation services without trucks are seriously paralyzed. The problem with using PERT graphs is that they do not allow one to deal with redundancy, the cornerstone of achieving survivability (see also Section 3). Although ball-bearings were the most critical component during World War II, the fact that Nazi Germany and its suppliers had several factories producing them implied that the allied bombing of such factories did not necessarily cause the desired impact [28].

2.1.4 Using a concept of artificial intelligence
4 This is basically a freeway ``map'' in which the cities are replaced by processes (called vertices or nodes) and the freeways (called edges or links) represent interdependencies. The directed aspect corresponds to one-way streets (two-way streets are just two one-way streets). Acyclic means that the graph is such that one can never go around in circles.
While the PERT graph models the interdependencies well, it does not model redundancy (see Section 2.2.1 for a model that represents redundancy only). A model to deal with both issues was suggested in [5]. The idea is to borrow the AND/OR graph concept from artificial intelligence, albeit for a different purpose than originally intended. Depending on how detailed a study one wants of which sectors are the most critical, one can model a factory/provider as a node, or a machine (e.g. a computer) as a node, or a component of a machine, etc. The
interdependencies between the nodes are represented using edges connected by nodes that are labeled ``AND.'' The redundancy is expressed by having nodes labeled with an ``OR.'' Let us describe an example. In this model, to make a car one can use a motor produced by different vendors (or factories). This is expressed by connecting the nodes representing these vendors to a node labeled ``OR.'' Similarly, one can represent the possible factories from which tires can be bought, etc. Since the car is produced from a motor, wheels, etc., all these (i.e. the outgoing edges from the aforementioned OR-labeled nodes) are connected to a node labeled ``AND.''
Recently, this approach has been proposed to analyze the most critical infrastructures [13]. One assumes that the enemy has the resources to attack a number of nodes (e.g. factories), say k nodes, but not more (see Section 2.2.2 for its history). As is typical in networks (e.g. computer, hydraulic, or transportation networks), maximal possible flows, called capacities, correspond to these nodes and edges. Indeed, a car manufacturing plant can produce a number of cars at maximum capacity. However, when the required components are not present, the car manufacturing plant is idle. So, these nodes introduce a relation between the outgoing flow and the incoming ones. Several questions have been analyzed [13]. In several circumstances, such as food distribution, a minimum flow needs to be guaranteed to maintain survivability. The question then is whether the enemy, by disrupting (e.g. destroying) nodes, will be able to reduce the flow such that it falls below the critically required one. Moreover, the enemy can find which nodes are critical, i.e., which selection of nodes allows the enemy's disruption to succeed. This
information can then be used to strengthen the system (see e.g. Section 3.4). If the enemy does not have the resources to reduce the flow below a critical value, then the best the enemy can do is to reduce the flow as much as possible.

2.1.5 Economic model
Recently, it was proposed to use economic and game-theoretic models to analyze which infrastructures are critical [16]. This can be viewed as a generalization of what has been discussed in Section 2.1.4, with several economic aspects added. Instead of assuming the enemy has the resources to attack any k nodes, the enemy's cost (not necessarily expressed as a monetary value) depends on which set of nodes would be attacked together. This is a more realistic model: nodes that are better protected may be more costly to attack. This defines a list of sets of nodes the enemy can attack (see also Section 2.2.2). Flows and capacities are also used in this model. One notes that different applications may have different economic importance. So, impact factors that depend on the application correspond to the flows and capacities. Although the enemy can choose any attack whose cost is below a given budget, a sophisticated opponent will choose the attack that optimizes its impact.
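The AND/OR flow analysis of Sections 2.1.4 and 2.1.5 can be illustrated with a toy sketch (node names, capacities and the brute-force search below are invented for illustration; the cited work [13] is far more general): OR nodes model redundant suppliers whose flows add up, AND nodes model required components where the scarcest input is the bottleneck, and the enemy may destroy any k nodes.

```python
# Toy AND/OR graph: each node is ("leaf" | "or" | "and", list of inputs).
# A destroyed node contributes zero flow; every node is capped by its capacity.
from itertools import combinations

def output_flow(node, graph, capacity, destroyed):
    if node in destroyed:
        return 0
    kind, inputs = graph[node]
    if kind == "leaf":
        return capacity[node]
    flows = [output_flow(i, graph, capacity, destroyed) for i in inputs]
    total = sum(flows) if kind == "or" else min(flows)  # OR adds, AND bottlenecks
    return min(total, capacity[node])

def worst_case_flow(sink, graph, capacity, k):
    """Minimum achievable sink flow if the enemy may destroy any k nodes."""
    nodes = list(graph)
    return min(output_flow(sink, graph, capacity, set(hit))
               for hit in combinations(nodes, k))

# Two motor factories (OR) feed a car plant that also needs a tire factory (AND).
graph = {
    "motor1": ("leaf", []), "motor2": ("leaf", []), "tires": ("leaf", []),
    "motors": ("or", ["motor1", "motor2"]),
    "plant": ("and", ["motors", "tires"]),
}
capacity = {"motor1": 60, "motor2": 60, "tires": 100, "motors": 120, "plant": 100}
```

Here the single tire factory is the critical node: destroying it alone reduces the plant's output flow to zero, while losing one of the redundant motor factories does not.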
2.2 Cyber vulnerabilities
2.2.1 A first approach
The work on cyber vulnerabilities predates most of the work on identifying the most critical infrastructures. In communication networks data can be transmitted via different paths from the sender to the receiver. Suppose that all nodes in the network have the same vulnerability. Moreover, assume the enemy has the resources to break into k of these nodes. The question, which has been the foundation of much research (see e.g. [18]), is how a sender can still communicate reliably over such a network.
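One classical flavor of answer, anticipated here as a sketch (all names are invented; see also Section 3.2): the sender transmits copies of the message over 2k+1 vertex-disjoint paths and the receiver takes a majority vote, so k corrupted nodes can neither block nor forge the message.

```python
# Sketch: majority voting over 2k+1 disjoint paths masks k corrupted paths,
# since k forged copies can never outvote the k+1 untouched ones.
from collections import Counter

def send_over_paths(message, paths, corrupted):
    """Each disjoint path delivers one copy; corrupted paths may deliver anything."""
    return [corrupted.get(p, message) for p in paths]

def receive(copies):
    """Majority vote over the arriving copies; None if no strict majority."""
    [(value, count)] = Counter(copies).most_common(1)
    return value if count > len(copies) // 2 else None

k = 2
paths = [f"path{i}" for i in range(2 * k + 1)]  # 2k+1 = 5 vertex-disjoint paths
```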
Nodes that have been taken over by the enemy are called Byzantine faults [17,29], a terminology some believe misleading. Indeed, these nodes can behave as the enemy desires. That may mean `shut down', or `modify the output these nodes are supposed to produce', or `behave as prescribed by the protocol for a while and then misbehave for a while', or `conspire with other Byzantine faults', etc. So, these Byzantine faults do not necessarily act as faults that have been disrupted by some act of God, but may be truly malicious. The model has been applied in a broader context than just communication networks. The issue of how to reliably and securely compute using distributed computing is addressed in [22,8].

2.2.2 A second approach
In the context of critical infrastructures the idea of always trusting k+1 nodes, but not k, makes no sense. This threshold assumption finds its origin in the research on reliability, which predated the research on robustness (while reliability guarantees availability under accidental errors, robustness also deals with malice). When the probabilities of errors are independent of each other, one can choose a positive integer k such that the probability of having k+1 errors is sufficiently small. Out of this came the research on how to correct k errors in communication and storage systems (see e.g. [30]). However, this independence makes no sense when dealing with malicious errors: the enemy will try to attack as many computers (or other units) as possible. Ito-Saito-Nishizeki [27] were the first to generalize the threshold assumption that k computers (nodes) cannot be trusted, but k+1 can. They introduced the concept of an access structure, i.e. a list of sets one can trust not to conspire. This concept was introduced in the context of the research on making backups of data while maintaining privacy, called secret sharing [4,36]. This model has also been used in the context of secure distributed computing [24,21].
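The threshold case of secret sharing can be sketched with Shamir's polynomial scheme [36] (the prime and parameters below are toy choices, not a production implementation): any k shares reveal nothing about the secret, while any k+1 shares reconstruct it by Lagrange interpolation.

```python
# Minimal Shamir (k+1)-out-of-n secret sharing over GF(P).
import random

P = 2_147_483_647  # a Mersenne prime; all arithmetic is modulo P

def share(secret, k, n):
    """Split `secret` into n shares, any k+1 of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the field GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Access structures generalize this: instead of "any k+1 shares", an arbitrary list of trusted sets qualifies for reconstruction.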
2.2.3 Applying it to critical infrastructures
The assumption made in all these models is that all computers are general purpose machines. This is often unrealistic, as we now discuss. If one tried to apply this to the computation done in many critical infrastructures, many computers would need to be replaced, which is unrealistic. For example, a nuclear power plant contains roughly 800 computers, of which almost all are dedicated ones. Some dedicated computers are just special purpose hardware devices, such as adders or other very primitive ``machines.'' Others are microprocessors in sensors and actuators in which a (small) program is stored in ROM. The AND/OR model discussed in Section 2.1.4 can be used to model a mixture of mechanical nodes (etc.), general purpose computers and dedicated ones. The issue of whether it is possible to continue producing some mechanical output while different types of nodes are under attack has been studied in [5]. Also, the attacks do not necessarily have to be cyber ones. They could be all cyber, all non-cyber, or a mixture. The assumption is that there are at most k nodes that can be taken over.

2.2.4 Platform based
We already mentioned (see Section 2.2.2) that assuming that there are at most k break-ins is unrealistic. In the context of cyber attacks this is obvious. Indeed, if the threshold assumption were realistic, the cost of attacking k computers on very different platforms would have to be less than that of attacking k+1 computers using the same platform. This is obviously false due to automated attacks (for example computer worms) exploiting the same weakness. A solution is to use the general access structure model. However, the problem is that it does not specify which access structure to choose. As an example of a solution, [11] considered a model in which the cost of attacking many computers running the same platform is
identical to the cost of attacking a single computer running that platform. A further simplification made was to assume that the cost of attacking a Windows 2000 operating system is identical to that of attacking a Linux system. The economic model (see Section 2.1.5) is evidently more correct when expressing the cost of attacking, but requires more effort to estimate these costs and in general may be too complex (the number of costs is exponential in the number of nodes).

2.2.5 Measuring the vulnerability
If, in order for an enemy (with limited resources) to succeed in the above attacks, the opponent always needs to use cyber attacks, the system is obviously vulnerable to them. If the enemy can succeed by only hacking computers, the vulnerability is extreme. If the system can be attacked without such methods, investing only in computer security will be of no avail. The models that can deal with a mixture of computer and other attacks (today these are based on AND/OR graphs, see Section 2.1.4) can be used to deal with this question.
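The platform-based simplification of Section 2.2.4 can be sketched as follows (node and platform names are hypothetical): since one automated attack per platform suffices, the cost of an attack set is just the number of distinct platforms it touches, and every node sharing a platform with an attacked node falls with it.

```python
# Sketch of the platform-based cost model: breaking one computer on a
# platform breaks them all, so cost counts distinct platforms, not nodes.
def attack_cost(nodes, platform_of):
    """Cost of attacking `nodes` = number of distinct platforms involved."""
    return len({platform_of[n] for n in nodes})

def compromised(attacked_nodes, platform_of):
    """All nodes that fall: everything sharing a platform with an attacked node."""
    hit = {platform_of[n] for n in attacked_nodes}
    return {n for n, p in platform_of.items() if p in hit}

platform_of = {"web1": "win2000", "web2": "win2000",
               "db": "linux", "gateway": "linux"}
```

Attacking both web servers costs no more than attacking one, which is exactly why the threshold assumption fails here.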
2.3 Future work
From the viewpoint of the survivability of critical infrastructures, today we live in a kind of prehistoric world. An important part of the advances we made has been based on the use of road maps and blueprints. While worldwide road maps exist today, we have no map to represent the dependencies and redundancy of different technologies. Such an AND/OR map of the world (or a part of it) could evidently be used by potential enemies. However, the same is true for geographical maps. Methods similar to those used historically could be adapted to avoid helping the enemy too much when publishing AND/OR maps. Such unclassified maps could then be used to analyze, using scientific methods, how much we truly depend on computer technology. Other work that
needs to be done is to evaluate the different models proposed and to improve those. Dynamic aspects (taking time into account) need to be integrated into future models.
3 Protection methods
3.1 General approach
The general approach when defending is to add redundancy. This technique has been used extensively in the real world to achieve reliability. In many applications, the extra cost of adding the redundancy is rather small. Examples are the extra bits stored on Compact Discs and in modern RAM. However, to achieve reliability of a (computer) network, the increase in cost is often dramatic.
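How cheap such redundancy can be is illustrated by the classical Hamming(7,4) code, a simple relative of the principle behind the extra bits on Compact Discs and in ECC RAM (which use different, stronger codes): three extra bits per four data bits suffice to correct any single bit flip.

```python
# Hamming(7,4): encode 4 data bits into 7, correct any single-bit error.
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

def hamming74_decode(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = no error, else the error position
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]  # recover d1, d2, d3, d4
```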
3.2 Network based models
Without disruption of the network, the existence of a single path between sender and receiver (e.g. a centralized computer and an actuator) guarantees communication. However, if this path can be cut, two vertex-disjoint5 paths are necessary, which often doubles the cost. If the enemy can destroy k nodes, k+1 disjoint paths are obviously necessary for reliable communication. If the enemy can, on top of that, introduce errors, more redundancy may be necessary to achieve robust communication. If digital signatures can be used, k+1 disjoint paths remain sufficient. However, in very secure applications this cannot be assumed. Then 2k+1 paths are necessary (if one requires 100% reliability) [17].
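The path-count rules of this section can be collected into a small helper (a sketch; the function name and interface are invented):

```python
# How many vertex-disjoint paths suffice against an enemy destroying k nodes.
def disjoint_paths_needed(k, enemy_injects_errors=False, signatures=False):
    """Paths needed for 100% reliable communication against k destroyed nodes."""
    if not enemy_injects_errors:
        return k + 1          # enemy can only cut paths
    if signatures:
        return k + 1          # forged copies are detected and discarded
    return 2 * k + 1          # receiver must take a majority vote [17]
```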
3.3 Secure distributed computation
Most of the research on robust secure distributed computation (see e.g. [22,8]) is very theoretical. However, much of the research on threshold cryptography is an exception. Threshold
5 This means that besides the sender and receiver the paths have no common nodes.
cryptography and its robust variant are the subtopics of secure distributed computation involving cryptography (such as privacy). This way one can reliably produce signatures on a number of computers without trusting k of them. Similar techniques exist to co-decrypt. For a survey of these techniques see [14,15].
3.4 Reducing redundancy
Heuristic methods can be used to reduce the redundancy required to achieve robust networks. However, these methods are doomed to be flawed under the threshold assumption (see Section 2.2.1). Under the economic model (see Section 2.1.5) one can reduce the expensive redundancy. Two approaches can then be followed:
Increase security
If the cost of hacking or destroying nodes is prohibitive, then the required redundancy may evidently be reduced. An existing network could be analyzed to find which nodes (and edges) need extra protection to avoid adding too much expensive redundancy. However, this requires very secure computers that are too expensive to hack. As long as more and more applications run on operating systems not designed to be secure or reliable (e.g. shutting themselves down), we are moving further and further away from this possibility. Moreover, the assumption that a link can be protected so that it can never be cut is very questionable. So, some redundancy may have to remain.
Only deal with the most critical infrastructures
If it turns out that the number of critical infrastructures is rather small, then one can tolerate cyber terrorist attacks and information warfare provided the enemy is guaranteed unable to win. To guarantee this, one can increase the reliability and robustness of such critical infrastructures. Terrorists will then still be able to cause some damage, or serious damage. Society will then have to learn to deal with the psychological impact of such attacks, and insurance companies will adapt their policies. This illustrates the difference between models that require that everything is protected (see e.g. Section 2.2.1) and a more economic one.
Increasing productivity
When not under attack, the extra redundancy can be used to increase productivity. In case of attack, the redundancy can be used to obtain survivability while maintaining the minimum required productivity.

4 Conclusion
20
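The analysis of which nodes need extra protection can be started with a brute-force scan for single points of failure. For resilience against k > 1 destroyed nodes one would compute minimum vertex cuts via max-flow techniques (see e.g. [20,23]), but the k = 1 sketch below conveys the idea; the network and node names are hypothetical.

```python
def single_points_of_failure(nodes, edges, source, sink):
    """Return the nodes (other than source and sink) whose loss alone
    disconnects source from sink -- the first candidates for extra
    protection or for an added redundant link."""
    def still_connected(removed):
        # Rebuild the undirected graph without `removed`, then do a DFS.
        adj = {v: set() for v in nodes if v != removed}
        for a, b in edges:
            if removed not in (a, b):
                adj[a].add(b)
                adj[b].add(a)
        seen, stack = {source}, [source]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return sink in seen

    return [v for v in nodes
            if v not in (source, sink) and not still_connected(v)]

# A control network: two paths from the control room C to actuator A,
# but both pass through the single gateway G.
nodes = {"C", "G", "X", "Y", "A"}
edges = [("C", "X"), ("C", "Y"), ("X", "G"), ("Y", "G"), ("G", "A")]
print(single_points_of_failure(nodes, edges, "C", "A"))  # ['G']
```

In this example protecting (or duplicating) only G is far cheaper than duplicating every node, which is exactly the economic trade-off described above.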
4 Conclusion

The September 11, 2001 attack has demonstrated that conventional terrorism should not be underestimated. The Anthrax scare has elevated bio-terrorism to the level of second most feared potential terrorist attack. Cyber terrorism and information warfare, by contrast, are considered by many to be hype. However, there is no scientific data to confirm this; statements made today in this area are heuristic in nature. Whether one should wait until one has certainty of our vulnerability or non-vulnerability is questionable. The annoyance factor of powerful computer viruses and worms increases, and hackers become more sophisticated. For example, at the last RUBICON conference a hacker broke into the hotel wake-up call system, and all customers in the hotel received an unwelcome wake-up call at 4:30 AM. Should one wait to address information survivability seriously until the first person dies as a consequence of a cyber attack? In the meantime we may have become even more dependent, and a potential enemy may have increased his know-how dramatically.
Bibliography

1. Spies in the wires. Horizon, BBC2 (Television), February 5, 1984.
2. M. Ben-Or, S. Goldwasser, J. Kilian, and A. Wigderson. Multi-prover interactive proofs: How to remove intractability assumptions. In Proceedings of the twentieth annual ACM Symp. Theory of Computing, STOC, pp. 113-131, May 2-4, 1988.
3. B. Blakley. The measure of information security is dollars. Workshop on Economics and Information Security, University of California, Berkeley, May 16-17, 2002.
4. R. Blakley. Safeguarding cryptographic keys. In Proc. Nat. Computer Conf., AFIPS Conf. Proc., vol. 48, pp. 313-317, 1979.
5. M. Burmester, Y. Desmedt, and Y. Wang. Using approximation hardness to achieve dependable computation. In M. Luby, J. Rolim, and M. Serna, editors, Randomization and Approximation Techniques in Computer Science, Proceedings (Lecture Notes in Computer Science 1518), pp. 172-186. Springer-Verlag, October 8-10, 1998, Barcelona, Spain.
6. C. Burns. Two Asian markets set record with no hint of Y2K bug. January 2, 2000. http://www.cnn.com/2000/TECH/computing/01/02/y2k.reports.roundup.04/index.html
7. Paul Cashman. The dot.com deadpools, the end of the dot.com craze. May 1, 2001. http://www.cs.uwa.edu.au/undergraduate/units/233.410/book2001/paulc/
8. D. Chaum, C. Crépeau, and I. Damgård. Multiparty unconditionally secure protocols. In Proceedings of the twentieth annual ACM Symp. Theory of Computing, STOC, pp. 11-19, May 2-4, 1988.
9. Rob Conlin. Toysrus.com slapped with class action lawsuit. E-Commerce Times, see also http://www.ecommercetimes.com/perl/story/2190.html, January 12, 2000.
10. Dominique Deckmyn. De Jager defends Y2K hype. January 4, 2000. http://www.cnn.com/2000/TECH/computing/01/04/dejager.y2k.idg/index.html
11. Y. Desmedt, M. Burmester, and Y. Wang. Are we on the right track to achieve survivable computer network systems? Accepted at the Fourth Information Survivability Workshop (ISW-2001), March 18-20.
12. Y. Desmedt, J. Vandewalle, and R. Govaerts. Cryptography protects information against several frauds. In Proc. Intern. Carnahan Conference on Security Technology, pp. 255-259, Zürich, Switzerland, October 4-6, 1983. IEEE.
13. Y. Desmedt and Y. Wang. Maximum flows and critical vertices in and/or graphs. Accepted for COCOON'02, August 15-17, 2002, Singapore, and to appear in the proceedings (Lecture Notes in Computer Science), Springer-Verlag.
14. Y. G. Desmedt. Threshold cryptography. European Trans. on Telecommunications, 5(4), pp. 449-457, July-August 1994. (Invited paper.)
15. Y. Desmedt. Some recent research aspects of threshold cryptography. In E. Okamoto, G. Davida, and M. Mambo, editors, Information Security, Proceedings (Lecture Notes in Computer Science 1396), pp. 158-173. Springer-Verlag. Invited lecture, September 17-19, 1997, Tatsunokuchi, Ishikawa, Japan.
16. Y. Desmedt, M. Burmester, and Y. Wang. Using economics to model threats and security in distributed computing. Workshop on Economics and Information Security, Berkeley, May 16-17, 2002. http://www.sims.berkeley.edu/resources/affiliates/workshops/econsecurity/econws/33.ps
17. D. Dolev. The Byzantine generals strike again. Journal of Algorithms, 3, pp. 14-30, 1982.
18. D. Dolev, C. Dwork, O. Waarts, and M. Yung. Perfectly secure message transmission. Journal of the ACM, 40(1), pp. 17-47, January 1993.
19. G. Ensley. Common flaw in security exploited. Tallahassee Democrat, p. 1, September 19, 2001.
20. S. Even. Graph Algorithms. Computer Science Press, Rockville, Maryland, 1979.
21. M. Fitzi, M. Hirt, and U. Maurer. General adversaries in unconditional multi-party computation. In K. Y. Lam, E. Okamoto, and C. Xing, editors, Advances in Cryptology - ASIACRYPT '99, volume 1716 of Lecture Notes in Computer Science, pp. 232-246. Springer-Verlag, November 14-18, 1999.
22. O. Goldreich, S. Micali, and A. Wigderson. Proofs that yield nothing but the validity of the assertion and the methodology of cryptographic protocol design. Preliminary version, MIT, 20th March 1986.
23. M. Gondran and M. Minoux. Graphs and Algorithms. John Wiley & Sons Ltd., New York, 1984.
24. M. Hirt and U. Maurer. Complete characterization of adversaries tolerable in secure multi-party computation. In Proceedings of the Sixteenth Annual ACM Symposium on Principles of Distributed Computing (PODC), pp. 25-34, August 1997.
25. Is there trouble ahead for travel agents? April 2000. http://www.hotelonline.com/Neo/News/PressReleases2000_2nd/Apr00_IssuesInternetSites.html
26. David Isenberg. Electronic Pearl Harbor? More hype than threat. Cato Institute, http://www.cato.org/dailys/01-03-00a.html, January 3, 2000.
27. M. Ito, A. Saito, and T. Nishizeki. Secret sharing schemes realizing general access structures. In Proc. IEEE Global Telecommunications Conf., Globecom '87, pp. 99-102. IEEE Communications Soc. Press, 1987.
28. J. Kessel. Report: Neutral nations' trade kept Nazi war machine going. June 2, 1998. http://www.cnn.com/US/9806/02/us.nazi.gold/
29. L. Lamport, R. Shostak, and M. Pease. The Byzantine generals problem. ACM Transactions on Programming Languages and Systems, 4(3), pp. 382-401, 1982.
30. F. J. MacWilliams and N. J. A. Sloane. The Theory of Error-Correcting Codes. North-Holland Publishing Company, 1978.
31. Y. Desmedt (moderator). Towards a funded research agenda. Fourth Information Survivability Workshop, Vancouver, Canada, March 18-20, 2002. http://www.cert.org/research/isw/isw2001/slides/summary-work-group2.pdf
32. J. R. Mooneyham. The signposts timeline: Shopping and employment are changing drastically in the developed nations (2014). http://www.jrmooneyham.com/ecom.html, December 1999.
33. Catherine Mulroney. Gunfight at the on-line corral: As airlines, internet travel sites and traditional agencies shoot it out, the battle for ticket sales has become a virtual Wild West. Special to The Globe and Mail, October 10, 2000. http://www.globeandmail.com/travel/reports/10102000/Gunfight.html
34. Critical Foundations: the report of the President's Commission on Critical Infrastructure Protection. http://www.ciao.gov/resource/pccip/PCCIP_Report.pdf, October 1997.
35. Report of the accountability review boards on the embassy bombings in Nairobi and Dar es Salaam on August 7, 1998. January 1999. http://www.state.gov/www/regions/africa/accountability_report.html
36. A. Shamir. How to share a secret. Commun. ACM, 22, pp. 612-613, November 1979.
37. The Inman report. http://www.fas.org/irp/threat/inman/index.html, 1985.
38. L. Scott Tillett. Critics: State Dept. Y2K-travel reports lack detail. September 15, 1999. http://www.cnn.com/TECH/computing/9909/15/critics.y2k.idg/index.html
39. U.S. State Department advises travelers to make New Year backup plans. http://www.cnn.com/1999/TRAVEL/NEWS/12/14/Y2K.notice/index.html, December 14, 1999.
40. K. Wallace. Clinton's dual New Year's eve role: host, Y2K overseer. December 29, 1999. http://www.cnn.com/1999/ALLPOLITICS/stories/12/29/whitehouse.y2k/index.html
41. Will Y2K bug stop flow of milk? http://cnn.com/TECH/computing/9903/25/foodchain.y2k.hln/index.html
42. Yawn2K? Bug mostly a no-show as world returns to work. January 3, 2000. http://www.cnn.com/2000/TECH/computing/01/03/y2k.bug.reports/index.html