The interpretation of probability in probabilistic safety assessments

Reliability Engineering and System Safety 23 (1988) 247-252

Editorial

Most engineers and physical scientists have been trained to regard probability as an objective quantity that is relative-frequency based. This view was challenged about twenty years ago, when the need to quantify the risks from large technological systems was recognized and resources were expended to produce numerical results. Uncertainty is, of course, an integral part of the concept of risk, and probability is the numerical measure of this uncertainty. It would appear, therefore, that the quantification of risks would simply be a straightforward application of the Theory of Probability. It was soon realized, however, that the quantification of risks required the frequencies of rare events, e.g. major accidents, for which data were not available. Even though methods like fault tree analysis were useful in decomposing rare events into more frequent events, the problem persisted. Faced with this situation, engineers resorted to the long-standing tradition of employing engineering judgment to produce probabilities and frequencies. It became evident that the important problems of Probabilistic Safety Assessment (PSA) could not be handled with the methods of traditional statistics.

The Reactor Safety Study (RSS, Ref. 1) was the first comprehensive investigation of nuclear power reactor risks. It developed distributions for the failure rates of equipment and combined these subjective distributions using the rules of the Theory of Probability. This is, of course, the approach that any good Bayesian analyst would take. Yet, the RSS analysts were reluctant to openly admit that they were implementing the subjectivistic, that is, Bayesian, theory. The report states that:

'Treating data as random variables is sometimes associated with the Bayesian approach where the data distributions are treated as priors. The system failure probability and its unavailability are subsequently treated as conditional probabilities and the overall marginal distribution is obtained by integration over the data priors. Because the data distributions were associated with a population (the 100 reactor plants) the data and system characteristics were treated by the study as being simply random variables; however, the Bayesian interpretation can also be used where the data distributions are treated as given Bayesian priors.' (Ref. 1, p. III-3)

The first papers calling for a formal application of the theory of subjective probabilities to safety assessments appeared several years ago (Refs 2-5). Objections were immediately raised (Refs 6-8). The first major studies that explicitly declared that their approach to uncertainty was that of the subjectivistic theory of probability were those for the Zion (Ref. 9) and Indian Point (Ref. 10) nuclear power plants. Papers with methods for specific applications began to multiply (Refs 11-14). Subtle points concerning correlations (Refs 15, 16) and the representation of uncertainty were also investigated (Refs 17-19).

While these papers and studies have had an impact on the way PSA is performed by certain groups or individuals, it is a fact that significant numbers of practitioners have been impatient with these issues, which they have dismissed as 'philosophical' or 'irrelevant' to the real engineering work that a PSA entails. Most of these people have been content to use a strange hybrid of frequentistic and subjectivistic methods. While the word 'Bayes' is avoided, in the tradition of the Reactor Safety Study, Bayesian, i.e. subjectivistic, methods are used extensively. For example, subjective distributions are developed for failure rates, human error rates, and so forth, and are propagated through fault trees (see the sketch below) by practitioners who would be very surprised to find out that frequentist methods would not allow them to do this. This surprise would be partly due to the fact that these analysts are under the mistaken impression that one is a Bayesian only when one uses Bayes' theorem.

Even the most rigorous application of Bayesian methods, however, has not removed all the skepticism and discomfort with which engineers and applied scientists view PSA. These attitudes have more to do with the fact that judgment, often under the name of expert opinions, is abundant in PSA than with the methods themselves. Both the skeptics and the practitioners have contributed to the existing misunderstandings and confusion.
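The following is a minimal, hypothetical sketch of the hybrid practice just described: subjective lognormal distributions for three basic-event probabilities, encoded by a median and an error factor as is common in PSA work, are propagated by Monte Carlo sampling through a toy fault tree, TOP = A AND (B OR C). All numbers, event names, and the tree itself are invented for illustration; nothing here is taken from the RSS or from any of the studies cited. The spread of the resulting top-event probability is the 'overall marginal distribution ... obtained by integration over the data priors' of the RSS quotation, computed here by sampling rather than analytically.

```python
# Hypothetical sketch: Monte Carlo propagation of judgmental (subjective)
# distributions through a toy fault tree, TOP = A AND (B OR C).
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=1988)
N = 100_000  # Monte Carlo sample size

def lognormal_from_median_ef(median, error_factor, size):
    """Sample a lognormal distribution specified by its median and its
    error factor EF = 95th percentile / median, a common way of
    encoding engineering judgment in PSA."""
    mu = np.log(median)
    sigma = np.log(error_factor) / 1.645  # since EF = exp(1.645 * sigma)
    return rng.lognormal(mean=mu, sigma=sigma, size=size)

# Judgmental distributions for the basic-event probabilities (invented).
p_A = lognormal_from_median_ef(1e-3, 3.0, N)  # e.g. pump fails to start
p_B = lognormal_from_median_ef(5e-4, 5.0, N)  # e.g. valve fails to open
p_C = lognormal_from_median_ef(2e-3, 3.0, N)  # e.g. operator error

# Gate logic under an independence assumption: OR -> sum (rare-event
# approximation, clipped to 1), AND -> product.
p_top = p_A * np.clip(p_B + p_C, 0.0, 1.0)

# Summary of the marginal distribution of the top-event probability.
print(f"mean = {p_top.mean():.2e}")
print("5th, 50th, 95th percentiles:",
      [f"{q:.2e}" for q in np.percentile(p_top, [5, 50, 95])])
```

Note that the sketch never invokes Bayes' theorem, yet it is thoroughly subjectivistic: the basic-event probabilities are treated as uncertain quantities described by judgmental distributions. The independence assumption is also exactly where the correlation pitfalls of Refs 15 and 16 enter, for example when the same judgmental distribution feeds several basic events.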

While it is true that the subjectivistic theory of probability provides the framework for the coherent use of judgment, it is also true that it cannot create probabilities out of nothing. The theory requires probabilities as input, and these probabilities are numerical expressions of someone's judgment (an example is the distributions of human error rates). It is well known that these probabilities are subject to a number of possible biases (for a review of these issues in the context of PSA, see Ref. 20). These problems are important and are not eliminated by the use of Bayesian or any other mathematical methods. At the same time, it is inappropriate to identify the subjectivistic theory with the use of judgment and dismiss it as being too subjective. As I state above, the problems with the quantification of judgment are independent of the theory of probability. Expert opinions have to be used not because Bayesians require them, but because, for rare events, they constitute a significant (sometimes the only) part of the evidence that is available to us. To say that the need for expert judgment is a consequence of the use of Bayesian methods is a mistake. In fact, one of the strengths of these methods is that they explicitly recognize the need to work with judgment, they deal with it formally, and they make its use visible.
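To make the formal treatment of judgment concrete, here is a second minimal, hypothetical sketch (again with invented numbers, not drawn from any of the cited studies): an expert-judgment prior for an equipment failure rate is combined with sparse operating experience through Bayes' theorem, using the conjugate gamma-Poisson pair so that the update is explicit. With evidence this thin, the posterior is dominated by the judgmental prior, which is precisely the point made above: for rare events the judgment is most of the evidence, and the theory processes it visibly rather than hiding it.

```python
# Hypothetical sketch: Bayes' theorem combining an expert-judgment prior
# with sparse data via the gamma-Poisson conjugate pair. All numbers are
# invented for illustration.
import numpy as np

# Expert judgment encoded as a gamma prior on the failure rate lambda
# (failures per hour); prior mean = alpha/beta = 1e-5 per hour.
alpha_prior, beta_prior = 0.5, 5.0e4

# Sparse operating experience: r failures in T hours.
r, T = 1, 2.0e5

# Conjugate update: the posterior is gamma(alpha + r, beta + T).
alpha_post = alpha_prior + r
beta_post = beta_prior + T

print(f"prior mean     = {alpha_prior / beta_prior:.2e} per hour")
print(f"posterior mean = {alpha_post / beta_post:.2e} per hour")

# Posterior percentiles by sampling (numpy's gamma takes scale = 1/rate).
rng = np.random.default_rng(seed=2020)
draws = rng.gamma(shape=alpha_post, scale=1.0 / beta_post, size=100_000)
print("posterior 5th, 95th percentiles:",
      [f"{q:.2e}" for q in np.percentile(draws, [5, 95])])
```

The biases discussed in Ref. 20 enter through the choice of the prior parameters, not through the mechanics of the update; no mathematical machinery can launder a poorly elicited judgment.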

A new major study (Ref. 21) has made the use of expert opinions very visible, thus attracting attention, once again, to the issue of how uncertainty in safety assessments should be quantified and what constitutes admissible evidence for this quantification. In the hope that this Journal can serve as a forum for the discussion of the various viewpoints that have been advanced, I have invited a number of knowledgeable analysts and practitioners to express their views. To get the debate going, I have asked them to address the following questions:

1. What is the philosophical basis for your approach to probability as applied to safety assessments?
2. Contrast your approach to alternate approaches. What are the strengths and weaknesses of your approach and alternate approaches?
3. Do your answers to the preceding two questions have any real impact on risk assessment? Any examples?
4. Do these issues affect decision making and risk management? Do you have any examples where 'wrong' decisions have been made because of misunderstandings related to the concept of probability?

As expected, the authors have used these questions as points of departure to discuss what they think is important. The following quotations are representative of the variety of viewpoints that are discussed.

L. R. Abramson: 'While the relative absence of hard data necessitates the extensive use of engineering judgment in these PRAs, very little attention has been paid to validating these judgments or to examining the sensitivity of the results to the often large uncertainties in the engineering judgments.'

C. L. Atwood: 'I remain to be convinced that one can say, "This interpretation of probability is philosophically wrong".'

M. Berman: 'It is fortunate indeed that the financing of Columbus' voyage did not depend on personal opinion probabilities' support for a round world. An old Chinese proverb says that "it is difficult to prophesy, especially about the future".'

V. M. Bier and A. Mosleh: 'Some researchers ... have claimed that fuzzy set theory is desirable for use in risk assessments due to the vagueness of the information typically available ... we believe that it is inappropriate to proliferate new axioms and theories to account for phenomena that are already well-explained by probability theory. (In simpler language, "if it ain't broke, don't fix it.")'

R. M. Cooke: 'Probability provides a mechanism for using expert opinion in a productive and scientifically responsible manner.'

S. Kaplan: 'Although in a fundamental sense this debate is, at bottom, "only" semantic, the resulting misunderstandings and miscommunications have exacted a heavy price indeed.'

R. L. Keeney and D. von Winterfeldt: 'Experts who will not make judgments required for decision making, because they do not have enough knowledge, often implicitly abdicate a possibly important influence on those decisions. Decisions simply cannot and will not be postponed until all the data are available and all the potentially relevant calculations and model runs have been completed.'

H. F. Martz and R. A. Waller: 'Instead of slipping into the Bayesian interpretation most of us would like to make regarding such intervals, we should be honest with ourselves by accepting the consequences of this way of thinking and do the entire analysis the Bayesian way.'

G. Ostberg: 'Even issues which by description appear quite "factual" and straightforward can be found to contain elements based on values.'

G. W. Parry: 'To summarize, I believe we have a methodological framework that, while it may be somewhat cumbersome, can be used, but it must be used with care, particularly with respect to the assumptions of the model, to maintain the distinction between randomness and uncertainty that the methodology has led us to adopt.'

S. D. Unwin: 'For these reasons I am a Bayesian, but I am an uncomfortable Bayesian.'

I hope that it is by now evident that the following technical notes offer interesting ideas. I also hope that they will stimulate further discussions by others who have had to deal with similar issues. Looking at the developments of the last twenty years, I think that it is safe to say that there has been a slow but steady move toward the rigorous use of subjectivistic methods in safety assessments. Perhaps de Finetti's prediction (Ref. 22) that by the year 2020 we will all be Bayesians will actually be confirmed.

References

1. US Nuclear Regulatory Commission, Reactor Safety Study. WASH-1400, NUREG-75/014, Washington, DC, 1975.
2. Apostolakis, G., Probability and risk assessment: the subjectivistic viewpoint and some suggestions. Nuclear Safety, 19 (1978), 305-15.
3. Apostolakis, G. & Mosleh, A., Expert opinion and statistical evidence: an application to reactor core melt frequency. Nuclear Science and Engineering, 70 (1979), 135-49.
4. Parry, G. W. & Winter, P. W., Characterization and evaluation of uncertainty in probabilistic risk assessment. Nuclear Safety, 22 (1981), 28-42.
5. Kaplan, S. & Garrick, B. J., On the quantitative definition of risk. Risk Analysis, 1 (1981), 11-37.
6. Easterling, R. G., Comments on the Bayesian method for estimating reactor core melt frequency. Nuclear Science and Engineering, 75 (1980), 202.
7. Easterling, R. G., Bayesianism. Nuclear Safety, 22 (1981), 464.
8. Abramson, L. R., Some misconceptions about the foundations of risk analysis. Risk Analysis, 1 (1981), 231-6.
9. Pickard, Lowe and Garrick, Inc., Westinghouse Electric Corporation, and Fauske Associates, Inc., Zion probabilistic safety study. Prepared for the Commonwealth Edison Company, Chicago, IL, 1981.
10. Pickard, Lowe and Garrick, Inc., Westinghouse Electric Corporation, and Fauske Associates, Inc., Indian Point probabilistic safety study. Prepared for the Power Authority of the State of New York and Consolidated Edison Company of New York, Inc., 1981.
11. Apostolakis, G., Kaplan, S., Garrick, B. J. & Duphily, R. J., Data specialization for plant specific risk studies. Nuclear Engineering and Design, 56 (1980), 321-9.
12. Kaplan, S., On a 'two stage' Bayesian procedure for determining failure rates from experiential data. IEEE Transactions on Power Apparatus and Systems, PAS-102 (1983), 195-202.
13. Papazoglou, I. A., A methodology for assessing uncertainties in the plant specific frequencies for initiating events in the presence of population variability. Paper presented at the International ANS Meeting on Thermal Nuclear Reactor Safety, Chicago, IL, Aug. 29-Sept. 2, 1982.
14. Apostolakis, G., Data analysis in risk assessments. Nuclear Engineering and Design, 71 (1982), 375-81.
15. Apostolakis, G. & Kaplan, S., Pitfalls in risk calculations. Reliability Engineering, 2 (1981), 135-45.
16. Kafka, P. & Polke, H., Treatment of uncertainties in reliability models. Nuclear Engineering and Design, 93 (1986), 203-14.
17. Parry, G. W., On one type of modeling uncertainty in probabilistic risk assessment. Nuclear Safety, 24 (1983), 624-7.
18. Torri, A., A consistent probabilistic methodology for the Seabrook Station containment event tree analysis. Paper presented at the ANS/ENS International Topical Meeting on Probabilistic Safety Methods and Applications, San Francisco, CA, Feb. 24-Mar. 1, 1985.
19. Parry, G. W., A discussion on the use of judgment in representing uncertainty in PRAs. Nuclear Engineering and Design, 93 (1986), 135-44.
20. Mosleh, A., Bier, V. M. & Apostolakis, G., A critique of current practice for the use of expert opinions in probabilistic risk assessment. Reliability Engineering and System Safety, 20 (1988), 63-85.
21. US Nuclear Regulatory Commission, Reactor Risk Reference Document. NUREG-1150, Washington, DC, 1987.
22. De Finetti, B., Theory of Probability, Vols 1 and 2. John Wiley and Sons, New York, 1974.

G. E. Apostolakis