Risk Assessment

Risk assessment was first proposed by Bernoulli and Cramer about 250 years ago. Decision-making under risk was formalized by John von Neumann and Oskar Morgenstern in their now classic volume, Theory of Games and Economic Behavior, published in 1944. Many have added to the expected utility model, including R. Duncan Luce and Howard Raiffa in their volume, Games and Decisions: Introduction and Critical Survey, published in 1957. Having read these volumes when they were first published, we have been able to maintain an historical perspective on this technique.

Computational risk assessment in computer security is a controversial subject. Too often we overlook the fact that this type of risk assessment is used in many scientific fields, including the estimation of anticipated life expectancy after exposure to various toxic chemicals. No matter what view you hold about probability-based risk assessment, we strongly recommend a recent article by Mark J. Machina, Decision-Making in the Presence of Risk, which appeared in Volume 236, May 1, 1987 of Science. For six additional articles on this subject, see the April 17th issue of the same volume of Science.

The method is periodically called into question, as it was when a bridge on a major highway in New York State collapsed earlier this year. For a view on this subject we have included in this column a short note by Belden Menkus, a member of our International Board of Editors.

Inherent Weaknesses in Probability-Based Risk Assessment
The collapse on 5 April 1987 of a bridge in New York State illustrates a major weakness of any probability-based risk assessment methodology. Various forms of such a methodology are widely used both in planning for data processing disaster recovery and in assessing the strength of a data encryption algorithm. The failure of the steel-reinforced concrete structure resulted in at least 10 deaths and in a long-term disruption of vehicle movement over this segment of the State's major auto traffic route. The bridge was designed in 1953 and had last been renovated in 1982. Its last regular inspection had taken place approximately a year before it collapsed.

The bridge's design conformed to the appropriate US civil engineering practices. Available rainfall and stream flow data were analyzed. What amounted to worst-case threat projections were made to determine a likely peak for both rainfall accumulation and stream flow intensity. Both figures were estimated to have one chance in 100 of occurring in any given year. Based on these numbers, standard protection factors were applied to determine such things as the depth of the bridge piers and the strength of the reinforcing steel used. The bridge's designers appear to have been following what was considered good practice at the time. The problem is that 100 years did not elapse before the projected worst-case peak rainfall accumulation and stream flow intensity took place. Only 34 years passed before this happened.
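A brief aside on the arithmetic (our illustration, not part of the original engineering analysis): under the usual simplifying assumption that each year independently carries a one percent chance of the design-threshold flood, the event was far from unlikely within the bridge's 34-year life, and even a full century would not have guaranteed its arrival. A minimal sketch in Python:

# Probability that a "1-in-100-year" event occurs at least once,
# assuming independent years with a constant 1% annual probability.
# (A simplifying assumption for illustration; real hydrology is messier.)

def prob_at_least_once(annual_prob: float, years: int) -> float:
    """Chance the event happens at least once over the given span."""
    return 1.0 - (1.0 - annual_prob) ** years

print(f"Within 34 years:  {prob_at_least_once(0.01, 34):.0%}")   # roughly 29%
print(f"Within 100 years: {prob_at_least_once(0.01, 100):.0%}")  # roughly 63%

In other words, there was nearly one chance in three that the 'worst case' would arrive within the structure's actual 34-year lifetime, and no span of years makes its arrival certain.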

This incident demonstrates that life does not operate within a 'bell curve.' It is always unwise to assume that an estimate, of itself, of how often some threat might occur guarantees that such a threat will not occur in the near future. For instance, in data processing disaster recovery planning, it is a common practice to project the probability of certain threats occurring. The most widely used method was developed internally by one of the major computer hardware manufacturers. It has been championed for a number of years by the US National Bureau of Standards, and has been widely copied throughout the computer security field. Typically, material that uses this risk assessment method explains neither its source nor the assumptions under which it should be used.

The basic methodology was derived from consumer product safety experience and relies heavily on essentially unqualified estimates of the probability of certain threats occurring. In the consumer product safety field, however, there is a broad base of experience data on threats and risks. From this experience the probability can be estimated reliably of there being, say, defects in a product batch, or of some component of the batch failing in ordinary use. No such body of experience exists in the data processing community. Thus, the estimates used with this method to define computer-related risks and the likelihood of their occurrence amount, in the real world, to little more than wild-eyed guesses.

Unfortunately, this method has a second defect. It manipulates these guesses in a way that produces results with anywhere from two to four digits beyond the decimal point. This makes these numbers look much more substantial and significant than they really are. Such manipulation results in spurious precision.
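To make the spurious-precision point concrete, here is a minimal sketch of the kind of annualized-loss arithmetic involved; the dollar and frequency figures are invented for the example, and the sketch is ours rather than a reproduction of the particular method Menkus describes.

# Spurious precision: multiplying two rough guesses and reporting the
# product to four decimal places. Both inputs below are invented guesses.

loss_per_event = 250_000      # guessed dollar impact of one computer-room flood
events_per_year = 1 / 34      # guessed frequency: once in roughly 34 years

annualized_loss = loss_per_event * events_per_year
print(f"Annualized loss expectancy: ${annualized_loss:,.4f}")
# Prints: Annualized loss expectancy: $7,352.9412
# Four digits past the decimal point, derived from two order-of-magnitude guesses.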

Incidentally, probability-based risk assessment has a further defect that is not commonly recognized: most disaster recovery planners tend to downgrade or ignore the importance of threats that they have not personally experienced. Robert Jacobson refers to this phenomenon as the 30-year rule. He points out that none of the officers who sailed in 1912 on the maiden voyage of the R.M.S. Titanic had experienced a ship sinking. Thus, as has been amply documented, they did not take seriously the need to prepare for a possible abandonment of the vessel. By the same token, the disaster recovery planner who has not experienced a high-rise building fire, or who has never walked through a flooded basement computer center, does not fully appreciate the seriousness of such an event or the difficulties inherent in recovering from it. And there is no probability estimation process that can compensate for that lack of experience.

Again, it is a common data security practice to project the amount of time that would be required to try in succession all of the possible values of a particular encryption key. Too frequently, it is assumed, quite unrealistically, that it would take that total period to identify successfully an otherwise unknown key in use. In the real world this particular key is very likely to be identified much earlier, for several reasons not associated with algorithm design. These can include poor key choice, unwise message content structure, use of particular key extraction techniques, and pure luck. Actually, the likelihood of successful key identification will be very high during the initial phase of the key extraction process. It will decline during a second phase of the process. Then, it will begin to recover somewhere in the midrange of the process and become progressively more likely to prove successful with each key extraction attempted.

Probability-based risk assessment in any field cannot supplant either experience or intuition. It can help, of course, to identify likely risks. It cannot guarantee when they actually will occur. (c) 1987, Belden Menkus.
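A closing aside on the note's exhaustive-search point: even under the assumption most favorable to the defender, a uniformly random key and none of the shortcuts Menkus lists, the expected effort is only about half the keyspace, so quoting the full search period already overstates the work. A sketch, with key sizes and a trial rate assumed purely for illustration:

# Exhaustive key search under the uniform-key baseline (no shortcuts).
# The trial rate below is an assumed figure for illustration only.

def expected_trials(key_bits: int) -> float:
    """Expected trials to hit a uniformly random key: (N + 1) / 2, N = 2**key_bits."""
    return (2 ** key_bits + 1) / 2

def success_prob(trials_done: float, key_bits: int) -> float:
    """Chance the key has already been found after trials_done attempts."""
    return min(1.0, trials_done / 2 ** key_bits)

RATE = 1e6  # assumed trials per second
for bits in (40, 56):
    days = expected_trials(bits) / RATE / 86_400
    print(f"{bits}-bit key: about {days:,.0f} days expected at {RATE:.0e} trials/s")

print(f"After searching half a 56-bit keyspace: {success_prob(2 ** 55, 56):.0%}")

Menkus's further point is that real-world searches are not even this uniform: poor key choice and predictable message structure skew the odds in the attacker's favor well before the midpoint.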

Very Light Reading

Although we do not include notes about books that do not deal with computer security, we now make an exception because The Tao of Programming by Geoffrey James (Santa Monica, CA: InfoBooks, 1987) is enjoyable reading. To give the reader an entertaining, tongue-in-cheek look at computing through the eyes of ancient masters, the author has included 33 epigrams that have been passed down through the generations since the dawn of the computer age. Recognizing the frustration often encountered by the computer security specialist, the epigrams form a philosophy of humor to live by and work with. The epigrams are divided into several categories, including design, maintenance, management and corporate wisdom. One that many security specialists would appreciate contains the master's statement: 'You can demonstrate a program for an executive, but you can't make him computer literate.'

Another epigram is included in Fig. 2, together with its Chinese character representation; it is directed at those who have come up through the programming ranks to direct a security program.