Book Reviews
No publication of this nature can be absolutely faultless. The Editor had a hard time bringing to a common denominator all the chapters submitted by authors from a wide range of specializations. He was, in general, very successful, the result of his meticulous care definitely exceeding the average. The reader can feel it: the Handbook is readable, indeed. However, some foibles can be found, as always.

First, definitions of basic concepts should have been distributed to the authors beforehand, and possibly also printed in the Handbook. Perhaps the most striking inconsistency of definition is that related to the concept of risk, which itself happens to be the main issue of the work. On p. 183 and elsewhere, risk is defined as equal to the failure probability, while on pp. 191, 201, 393 and others it is defined as the product of the failure probability and the failure consequences. The latter definition is now generally accepted.

Another remark concerns the lists of references attached to each chapter. In the majority of cases, these lists are too detailed. As a rule, common Handbook users are primarily interested in where additional information can be found, i.e. they want to know where to go for details. They do not care about the background to this or that formula, conclusion, or piece of advice. Thus, many research reports and similar kinds of evidence could easily be dropped from the majority of the lists. Moreover, entries should be restricted to those that are easily accessible. On the other hand, considering the range of topics covered by the Handbook, the Index is very slender. Many important concepts dealt with throughout various chapters, e.g. failure probability, beta-index, and vulnerability, escaped the Index entirely, while many others are covered only partially, without indication of all the important places where they appear.

Notwithstanding these drawbacks, which cannot be considered principal, the Handbook can be recommended as an excellent help to engineers involved in "antirisking" and "derisking" of projects of various nature and size. These engineers should not be misled by the somewhat misguiding title of the work.

Milík Tichý
Karolíny Světlé 14
110 00 Praha 1
Czech Republic
Acceptable Risks for Major Infrastructure, P. Heinrichs and R. Fell (eds), Proceedings of the Seminar on Acceptable Risks for Extreme Events in the Planning and Design of Major Infrastructure, Sydney, N.S.W., Australia, 26-27 April 1994, A.A. Balkema, Rotterdam, 1995, vii + 203 pages.

Major infrastructure facilities are vulnerable to low-probability extreme events arising from natural hazards. We are accountable and seek to make self-consistent, defensible designs, so design criteria should be both rational and reasonable. In probabilistic design, one way to think about the problem of rational design criteria is to assume that there is a generic 'acceptable risk' that can be established a priori for a class of systems (e.g. highway bridges). You can then design such that the calculated risk is 'acceptable'. 'Tolerable', by the way, is a better word, because it reflects a critical acceptance only in view of anticipated benefits.
The Australian National Committee on Large Dams (ANCOLD) recognized the need for a discussion of acceptable risks from a broad variety of viewpoints, not just within the dam engineering community. ANCOLD therefore organized a symposium and has published this collection of 20 papers on acceptable risk. The book's perspective is distinctly regional; all authors are from Australia or New Zealand. Yet the contributions cover a broad spectrum within civil engineering (dams, building structures, water resources, bridges, roads); other engineering specialties (chemical, mining, nuclear, petroleum); other professions, officials and public regulators (law, economics, industry, finance, environmental economics); and science (earthquakes, extreme floods). Together the authors provide a varied composite picture of 'acceptable risk'. Unfortunately, the picture is a bit like the blind men's description of an elephant. Acceptable risk is complex and as yet not sufficiently well understood for a synthesis. Only two papers note that acceptable risk is context-dependent. Acceptability depends on the associated benefits and on the available alternatives. Unfortunately, if you ask people about it, you can get answers that depend on context, are self-contradictory, or are manipulative. Moreover, risks and benefits are unevenly distributed among different people in space and time, so what is 'acceptable' depends on information, human psychology, politics, and ethics--and on whether it is you or someone else who is at risk.

The book's focus is on large dams. There is an urgent need worldwide for new guidelines on the design of dams for earthquake and flood. Dam failures are responsible for some of the world's greatest technological disasters. Perhaps the worst so far was the August 7, 1975 failure of the Banqiao and Shimantan Dams in a typhoon in Henan province in central China; estimates of the death toll range from 10,000 to 230,000 (Globe and Mail, March 3, 1995). Decisions must now be made that involve large amounts of money to curb a potential for high losses of life. ANCOLD has therefore produced a set of guidelines, including a set of 'Interim ANCOLD Societal Risk Criteria', presented in the form of an F(n)-graph. Such graphs and graphically represented criteria are discussed in many papers in the book.

Much thinking about risk to life and acceptable risk is currently focussed on these familiar F(n)-diagrams, showing curves that plot the annual frequency of occurrence F(n) of a loss of n or more lives as a function of n. F(n)-diagrams can help you visualize and compare the statistics of accidents of different severity from different kinds of technology and hazards. Their use for this purpose is quite legitimate. F(n)-graphs are, however, inappropriate as a decision-making tool when used to represent an acceptable risk, a tolerable risk or a risk target. The draft ANCOLD Societal Risk Criteria, for example, consist of two curves that divide the diagram into three regions: (1) a 'de minimis' region of acceptable risks, (2) a region of unacceptable risks, and (3) a region labelled 'Risks to be as low as reasonably practicable--the ALARP principle' separating the other two regions.
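To make the reading of such a diagram concrete, here is a minimal sketch of how an F(n) exceedance curve is assembled: F(n) is the total annual frequency of all accident scenarios causing n or more fatalities. The scenario frequencies and loss figures below are invented for illustration and are not taken from the book or from the ANCOLD criteria.

```python
# Hypothetical loss scenarios: (annual frequency, lives lost).
scenarios = [(1e-3, 2), (1e-4, 10), (1e-5, 100), (1e-7, 1000)]

def exceedance_frequency(n, scenarios):
    """Annual frequency F(n) of an accident causing n or more fatalities."""
    return sum(freq for freq, lives in scenarios if lives >= n)

for n in (1, 10, 100, 1000):
    print(f"F({n:>4}) = {exceedance_frequency(n, scenarios):.1e} per year")
```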
To see the fallacy of this formulation, suppose that there are two projects, both already designed to satisfy the requirement that the risks be 'as low as reasonably practicable'. The projects could be two similar dams with different small communities downstream. Project A exposes the public to a risk of 1E-5/a (1/100,000 per year) probability of a loss of 90 lives, and is unacceptable according to the interim ANCOLD graph. Project B exposes the public to the risk of an annual loss of 5, 15, 20, 100, 200 and 2000 lives with respective probabilities of 1E-4, 5E-5, 2E-5, 5E-6, 1E-6 and 1E-8--and is not unacceptable according to the ANCOLD graph. This is absurd, because the expected loss of life with Project B is more than twice that with Project A. It would also be judged absurd by those who merely compare 'worst case' scenarios (90 lives for Project A vs. 2000 for Project B).
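A quick check of this arithmetic (a minimal sketch, assuming the quoted figures are annual probabilities of discrete loss scenarios, as the review presents them) reproduces the factor of more than two:

```python
# Each entry is (annual probability, lives lost), using the figures quoted above.
project_a = [(1e-5, 90)]
project_b = [(1e-4, 5), (5e-5, 15), (2e-5, 20),
             (5e-6, 100), (1e-6, 200), (1e-8, 2000)]

def expected_annual_loss(scenarios):
    """Expected number of lives lost per year, summed over scenarios."""
    return sum(p * n for p, n in scenarios)

e_a = expected_annual_loss(project_a)   # 9.0e-04 lives/year
e_b = expected_annual_loss(project_b)   # about 2.4e-03 lives/year
print(f"A: {e_a:.2e}, B: {e_b:.2e}, ratio B/A: {e_b / e_a:.2f}")  # ratio is roughly 2.6
```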
The two curves y = F_A(n) and y = F_B(n) intersect. You cannot draw a curve that neatly separates an F vs. n diagram into two distinct regions, one containing risks that would reasonably be judged acceptable and the other unacceptable. It is therefore not to be hoped that F(n)-diagrams will become permanently established as representations of risk management criteria.

The tolerability of risk has been, and is being, established in isolation in many applied science disciplines. Major infrastructure covers many diverse technologies; yet many consequences of major infrastructure failure are similar. The ANCOLD initiative to make a broad examination of acceptable risk for major infrastructure is a useful effort, and the proceedings deserve careful study by engineers concerned with general design criteria.

Niels Lind
Institute for Risk Research, Canada
Failures '96--Risk, Economy and Safety, Failure Minimisation and Analysis, R.K. Penny (ed.), A.A. Balkema, Rotterdam, 1996, 377 pages, ISBN 9054108231.

This volume forms the proceedings of the second meeting of the FAILURES international symposium series, held near Johannesburg, South Africa, in July 1996. It contains papers grouped under the following headings: general risk issues (3 papers), management of risk (4), case analysis (4), failure minimisation and analysis (8), plant monitoring (8), plant life assessment (4), and health and safety (1). The contributors come from a wide background, both geographically and professionally, with consultants, practitioners and academics all well represented. According to the publishers, "this book will be of interest to risk and loss prevention engineers, inspectors, maintenance engineers, consultants, insurance assessors involved in risk management and others whose professional capacities are impacted upon by failures and all aspects of safety". In other words, anyone concerned with engineering safety.

To be of interest to a wider readership, the papers presented in a set of conference proceedings should address a common problem while presenting new ideas or applications, and one rôle of the editor is to extract, where possible, an underlying message. The rôle of the reviewer is to give an opinion on the extent to which these objectives are met and whether they constitute a worthwhile contribution to the state of knowledge in the subject area, and to give a flavour of the overall contents.

The title of the proceedings gives an indication of the contents and the diversity of the papers, and these reflect the continuing importance of attempts to understand failure processes. Indeed, the central theme of the conference might well be said to be "learning from failures". The editor, in the Preface, argues that a wider understanding of the techniques of analysis of risk and reliability can help clarify issues in the safe and economical operation of systems, and that even if