Nuclear Engineering and Design 93 (1986) 123-134 North-Holland, Amsterdam
ON THE USE OF DATA AND JUDGMENT IN PROBABILISTIC RISK AND SAFETY ANALYSIS

Stan KAPLAN

Pickard, Lowe and Garrick, Inc., 2260 University Drive, Newport Beach, CA 92660, USA
Received August 1985
This paper reviews the line of thought of a nuclear plant probabilistic risk analysis (PRA), identifying the points where data and judgment enter. At the "bottom" of the process, data and judgment are combined, using one and two stage Bayesian methods, to express what is known about the elemental variables. Higher in the process, we see the use of judgment in identifying scenarios, developing plant models, and specifying initiating event categories. Finally, we discuss the judgments involved in deciding to do a PRA and in applying the results.
1. Introduction and purpose

The purpose of this paper is to review the role of data and judgment in probabilistic risk analysis (PRA), particularly in nuclear power plant PRA studies and in the methodology [1] used by Pickard, Lowe and Garrick, Inc. (PLG), for such studies [2-4]. When one mentions data and judgment in connection with risk analysis studies, the subject of Bayes' theorem leaps immediately to mind. The reason for this is that in such studies we are seeking to make inferences, and Bayes' theorem is the fundamental law governing the process of logical inference. We seek in these studies to say what we know about the likelihoods, or frequencies, of certain undesirable events. What we know is based on all the evidence and information at hand. This includes, in the usual case, the judgments of experts and a small to moderate amount of relevant or partially relevant data. Bayes' theorem is then the fundamental conceptual and computational tool for combining this data and judgment into a coherent expression of our net state of knowledge. It will be no surprise, therefore, that a major part of this paper is allocated to describing the use of Bayes' theorem in PRA. I wish to do a bit more than this, however. Since judgment is involved not only in assessing the parameters that appear in a PRA model, but also in the very layout and structuring of the model itself, as well as in the use of the results and, indeed, in
the decision to do a PRA in the first place, I will comment also on these latter uses of judgment. We shall begin, therefore, in the next section by reviewing the overall line of thought underlying a nuclear PRA. This will set the stage for a discussion of Bayes' theorem, of the use of judgment in structuring the model, in making the decision to do a PRA, and in using its results.
2. The PRA line of thought

The overall line of thought involved in PRA is set out in table 1. We comment item by item as follows:

(1) The reason for doing PRA is that we have decisions to make. In this connection, we note that a decision to do nothing is just as important as a decision to do something. Thus, we take the point of view that in operating a plant, we are effectively making decisions at every moment, either to change something or to leave well enough alone.

(2) Point 2 in the table notes that all decisions are made under uncertainty since we can never perfectly foresee the future outcomes of our actions and inactions.

(3) The task of PRA could be said to be to quantify the uncertainties in the decision problem. More specifically, PRA quantifies the uncertainties related to the occurrence of damage to the plant and to the public
0029-5493/86/$03.50 © Elsevier Science Publishers B.V. (North-Holland Physics Publishing Division)
S. Kaplan / On the use of data and judgment
Table 1
The line of thought in a PRA

1. REASON FOR DOING PRA IS DECISIONS
2. DECISIONS INVOLVE UNCERTAINTY
3. TASK OF PRA: TO QUANTIFY UNCERTAINTY
4. PROBABILITY = NUMERICAL SCALE OF UNCERTAINTY
5. RISK = {<si, φi, xi>}
6. PLANT MODELING: φi = Fi(λ1, λ2, ...); λj = "ELEMENTAL" FREQUENCY
7. INPUT TO PRA: PROBABILITY CURVES pj(λj)
8. pj(λj) SHOULD EMBODY ALL EVIDENCE
9. TOOL FOR COMBINING EVIDENCE IS BAYES' THEOREM
10. BAYES' THEOREM IN PRA: ONE STAGE OR TWO STAGE
health. Other uncertainties, such as costs, regulatory actions, etc., are certainly part of the decision problem, but we do not think of them as part of PRA as such.

(4) The fourth point is that if we are going to quantify degrees of certainty, we must, of course, establish a scale of measurement. For this purpose we adopt the probability scale in its "subjective" interpretation. Thus, let A stand for a statement of the type that can be said to be either true or false. If we say, "the probability of A is 1.0," we mean that we are completely certain, completely confident, that A is true. Likewise, if we say, p(A) = 0, we mean that we are completely certain that A is false. If we say, p(A) = 0.5, we communicate thereby complete uncertainty, or equal confidence one way as the other. Note that in this interpretation it is a mildly misleading use of the language to say, "the probability of A is...". Strictly speaking, we should say, "our probability about A is...". Probability is not a property of A. It is a property of us. Probability is the language in which we express our level of confidence about A. This is an essential point for understanding the treatment of data and judgment.

(5) Having defined a scale for measuring uncertainty, we now need similarly a way of measuring risk. In attempting this, however, we quickly discover that risk
is too "big" a concept for a simple linear scale. Risk is not a scalar quantity. Nor is it a vector, a curve, a matrix, etc. The most useful analytical form for expressing the concept of risk is the "set of triplets" [5]. Thus, we adopt the definition that risk "is" the set of triplets:

R = {<si, φi, xi>},
where { } denotes "set of," < > encloses the triplet, and si represents a "scenario," i.e., a description of "what can happen?" or "what can go wrong?" φi denotes the frequency of the ith scenario, and xi denotes one or more measures of the "consequences" of that scenario, i.e., the damages that result from it. In the PLG methodology, frequency is used to express the idea of likelihood. That is, we imagine a thought experiment in which the plant is operated, say, many trillions of years. We ask how frequently, in this experiment, does the postulated scenario occur? We designate this frequency, in occurrences per operating year, by φi. In this way, φi is, in effect, a parameter in our model of the plant's behavior. Now, since we have not run this experiment, we do not know φi exactly. The best we will be able to do is to express our state of knowledge about φi in the form of a probability curve pi(φi). This probability curve should and will encode everything we know about the plant pertinent to φi. What do we know? We may have some years of experience with the specific plant in question, and we certainly have experience with other similar plants. In addition, we know a great deal about how the plant is constructed and operated. In particular, we can identify those "elemental" events or component events, which together constitute the scenario si. The next step then is to express the frequency of si in terms of the frequencies, λj, of these elemental events.

(6) This is the plant modeling step indicated in step 6 of table 1. It is where the bulk of the work of PRA is done. The output of this work may be represented as a set of functions, Fi, which express the relationships between the frequencies, φi, of the scenarios and the λj, the frequencies (in units of occurrences per year, or occurrences per demand, whichever is appropriate) of the elemental events.
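The set-of-triplets definition and the plant-model functions Fi lend themselves to a simple data-structure sketch. The fragment below is illustrative only: the scenario name, the elemental frequency values, and the form of the model function are invented for the example, not taken from the paper; real Fi come from event tree and fault tree models.

```python
from dataclasses import dataclass

@dataclass
class Triplet:
    scenario: str       # s_i: a description of "what can go wrong?"
    frequency: float    # phi_i: occurrences per operating year
    consequence: float  # x_i: one damage measure for the scenario

# An illustrative plant-model function F_i: here the scenario frequency is an
# initiating-event frequency times two (assumed independent) failure
# probabilities per demand.
def F_example(lambda_init, lambda_pump, lambda_valve):
    return lambda_init * lambda_pump * lambda_valve

# Hypothetical elemental frequencies (per year, per demand, per demand):
phi = F_example(lambda_init=0.3, lambda_pump=0.01, lambda_valve=0.05)

risk = [Triplet("loss of cooling with pump and valve failure", phi, 1.0)]
print(risk[0].frequency)  # ~1.5e-4 occurrences per year
```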
(7) Now, of course, we do not know the λj precisely either. Therefore, we must express what we do know in the form of probability curves pj(λj) against λj.

(8) These probability curves, again, should embody all the information and evidence we have. Suppose, for example, that λ1 stands for the failure rate of a particular pump in our plant. We may partition our evidence relevant to λ1 into three categories:
E1 ≡ the experience of this pump in our specific plant.
E2 ≡ the experience of similar pumps in other plants.
E3 ≡ everything else we know, including experts' opinions and engineering knowledge of the pump's design and fabrication, etc.

Clearly, evidence of type E1 here is the most relevant and should be given the most weight. Evidence of type E2 is also relevant and useful, of course, but not so much as E1. E3 is the least convincing type of evidence. It is so-called "subjective" rather than "objective" evidence and the hardest to pin down.

(9) As we shall see, Bayes' theorem provides us a format and process for combining these three types of evidence, each with its proper weighting.

(10) In PLG's methodology, Bayes' theorem is used in two forms, which we will call the "one stage" and "two stage" methods, and which we will describe below. The overall flow of the computational part of the PRA process is shown in fig. 1. The information base of the process is shown at the bottom, distinguished into categories E1, E2, and E3. From this information, using Bayes' theorem, we derive probability curves expressing our knowledge of the elemental frequencies, λj. These curves are then entered into the functions Fi, which constitute the plant model. The process of computational evaluation of these functions is known as "propagation" of the probabilistic input variables into probability curves for the output variables φi. These curves, pi(φi), are then entered into the set of triplets. From this set are then calculated the final risk curves including uncertainty.

Fig. 1. Overall flow of PRA computational process - the role of Bayes' theorem. (From the knowledge base at the bottom - E1: plant-specific experience; E2: other plants' experience; E3: everything else we know, including experts' judgments - Bayes' theorem yields the elemental frequencies; modeling, φi = Fi(λ1, λ2, ...), propagation, and assembly yield the final risk curves, RISK = R = {<si, φi, xi>}.)
3. The incorporation of data and judgment in the PRA process via Bayes' theorem
3.1. Derivation of Bayes' theorem

The derivation of Bayes' theorem is exquisitely simple. Let A and B be two statements capable of being ranked as "true" or "false." Then our probability of both being true can be written:

p(A and B) = p(A) p(B|A),   (1)

and by the same token:

p(A and B) = p(B) p(A|B).   (2)

These equations follow from the fundamental definition of probability as a scale for measuring degrees of confidence. Setting the two right sides equal and dividing by p(B), we obtain:

p(A|B) = p(A) [p(B|A) / p(B)],   (3)

which is Bayes' theorem. It states that our level of confidence, p(A|B), that A is true after we have learned that B is true, is equal to our level of confidence, p(A), before learning about B, times a correction factor, the term in brackets. This factor "updates" our level of confidence in light of the new information B.

3.2. Application to elemental frequencies - The one-stage Bayes approach

Let us now see how Bayes' theorem can be applied to combine data and judgment into an expression of our state of confidence about an elemental frequency. For this purpose, let us again focus, as an example, on the pump in our specific plant and suppose we are interested in the frequency of failure of this pump to start on demand. Refining our notation slightly, let us now use the upper case lambda, Λ, to denote the "exact" value of this frequency measured in failures per demand. Now suppose, for simplicity in this example, that to begin with we have no evidence of type E2 or E1. (The pump, for example, might be the first of its kind.) We have only information of type E3, which includes the judgments of the manufacturer along with those of our own experts. We express or encode these judgments in a "prior" probability curve, p0(λ). By this notation, we shall mean, exactly, that p0(λ) dλ is our level of confidence, i.e., our probability, that the true failure rate Λ lies in the interval dλ about λ. That is,

p0(λ) dλ = probability that (λ ≤ Λ < λ + dλ).   (4)

The significance of the subscript zero is that this probability represents our state of knowledge prior to having any information of type E1 or E2. Now, suppose over a period of time we call on the pump to operate, say, m times, and suppose that of these calls, the pump failed to start k times. Then, the evidence E1 is

E1 = {k failures in m demands}.   (5)

Now, to combine the judgments, eq. (4), with the data of eq. (5), we write Bayes' theorem, eq. (3), in the form

p(λ|E1) = p0(λ) [p(E1|λ) / p0(E1)].   (6)

p(λ|E1) here is the "posterior" distribution, which expresses our state of knowledge after we have become aware of E1. p(E1|λ) in this case is simply the binomial expression:

p(E1|λ) = [m! / (k!(m − k)!)] λ^k (1 − λ)^(m−k),   (7)

and p0(E1) is simply

p0(E1) = ∫ p0(λ) p(E1|λ) dλ.   (8)

Thus, eq. (6) becomes

p(λ|E1) = p0(λ) λ^k (1 − λ)^(m−k) / ∫ p0(λ) λ^k (1 − λ)^(m−k) dλ.   (9)
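Eq. (9) is easy to evaluate numerically on a grid of λ values. The sketch below does so in pure Python for the flat prior used in figs. 2 and 3 (the grid size and the evidence values are choices made for this illustration). It checks the roughly 90% interval quoted in section 3.3 for 5 failures in 100 trials, and also the sequential property of Bayesian updating: updating the flat prior with {k = 5, m = 100} all at once gives the same posterior as updating first with {k = 1, m = 20} and then with {k = 4, m = 80}.

```python
def posterior(prior, lams, k, m):
    """Eq. (9): posterior density on the grid 'lams' given k failures in m demands.
    The binomial coefficient cancels in the ratio, so only lam^k (1-lam)^(m-k) matters."""
    unnorm = [p * lam**k * (1.0 - lam)**(m - k) for p, lam in zip(prior, lams)]
    dlam = lams[1] - lams[0]
    norm = sum(unnorm) * dlam  # a simple Riemann sum is fine for a sketch
    return [u / norm for u in unnorm]

n = 20001
lams = [i / (n - 1) for i in range(n)]
flat = [1.0] * n  # the "flat" prior p0(lam) = 1.0

# Posterior for 5 failures in 100 trials (curve 2 of fig. 3):
post = posterior(flat, lams, k=5, m=100)

# 90% interval: find the 5% and 95% points of the cumulative distribution.
dlam = lams[1] - lams[0]
cum, q05, q95 = 0.0, None, None
for lam, p in zip(lams, post):
    cum += p * dlam
    if q05 is None and cum >= 0.05:
        q05 = lam
    if q95 is None and cum >= 0.95:
        q95 = lam
print(q05, q95)  # roughly 0.025 and 0.10, as stated in section 3.3

# Sequential updating: flat + {k=5, m=100} equals (flat + {k=1, m=20}) + {k=4, m=80}.
post_seq = posterior(posterior(flat, lams, k=1, m=20), lams, k=4, m=80)
assert max(abs(a - b) for a, b in zip(post, post_seq)) < 1e-6
```

The sequential check is the numerical counterpart of the observation in section 3.3 that curve (1) of fig. 3, taken as a prior and updated with {k = 4, m = 80}, yields curve (2).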
3.3. Numerical examples

Eq. (9) is our formula for combining the prior judgments p0(λ) with the data of k failures in m trials. To get a feeling for how this combining operation works out numerically, consider fig. 2. In this figure, we have chosen a "flat" prior, p0(λ) = 1.0. In this way, we say that before we had any operating experience, we considered all values of Λ to be equally likely. It was just as
likely, for example, that Λ have the value 0.9 as it is that it be 0.1, or 0.01, etc.

Fig. 2. Examples of Bayes' theorem with uniform prior. (Curve 0: the "prior"; curve 1: m = 1, k = 0; curve 2: m = 2, k = 0; curve 3: m = 5, k = 0; curve 4: m = 10, k = 0.)

Now, suppose we ask the pump to start and it does. Our evidence is now

E1 = {k = 0 failures in m = 1 demands}.   (10)
The posterior probability curve is then shown as curve (1) in the figure. Observe that the curve shifts significantly to the left compared with the prior. Thus, the evidence of a single successful trial is already sufficient to prove that Λ cannot be 1.0. Similarly, it is sufficient to make it quite unlikely that Λ should be as high as 0.9, for example.

Now, suppose we make another demand and again the pump starts. E1 is now: {k = 0, m = 2}. With this evidence, our state of confidence shifts further to the left as shown by curve (2). As more experience accumulates with no failures, the posterior continues to shift toward zero as shown by curves (3) and (4). Now, suppose that after 19 successful starts, we have a failure on the 20th try. Our posterior now appears as curve (1) in fig. 3. This curve communicates that we are now certain that the failure rate cannot be zero and highly confident that it is not bigger than, say, 0.25. Most likely, it is somewhere between about 0.02 and 0.12.
Fig. 3. Further examples of Bayes' theorem. (Curve 0: the "prior"; curve 1: m = 20, k = 1; curve 2: m = 100, k = 5; curve 3: m = 200, k = 10.)

With further information, 5 failures in 100 trials, the curve narrows to say that we are about 90% confident that the true failure rate, Λ, lies between 0.025 and 0.10. With 10 failures in 200 trials, it narrows still further. These curves are all based on a flat (noninformative) prior. However, if we think incrementally about the family of curves, we can see examples of the effect of nonuniform priors. Thus, suppose in fig. 3 that curve (1) was our prior state of knowledge, and suppose the evidence were E1 = {k = 4, m = 80}; then our posterior probability would be curve (2). As another example, suppose our prior were curve (4) in fig. 2, indicating high confidence that Λ was close to zero, and suppose the evidence was E1 = {k = 1, m = 10}. This evidence would change our state of knowledge to that expressed by curve (1) in fig. 3.

We see in these examples the way Bayes' theorem works. The more data there is, the less is the impact of the prior and therefore of the judgments encoded in the prior. On the other hand, when there is little or no data, the prior dominates the shape of the posterior. This is entirely as it should be. When there is no data, judgment must dominate the expression of our knowledge, for it is all we have.

3.4. The two stage Bayes approach

The previous section gave examples of combining judgment, E3, with data of type E1. In this section, we shall describe a "two-stage" method [6] for using data of type E2 when we have it.

Fig. 4. Frequency distribution for the population (population variability curve). (Horizontal axis: λ, failure rate.)

Suppose, therefore, that we have some experience with similar pumps operating under similar circumstances in other plants. Specifically, suppose that the experience of plant 1 has been k1 failures out of m1 demands, plant 2 has had k2 out of m2, and so on. Then

E2 = {(k1, m1), (k2, m2), ..., (kN, mN)},   (11)
where we have data from a total of N plants, excluding our own. To incorporate these data, we imagine that our N plants are a sample taken at random from a very large population of plants. Each member of this population has its own individual failure rate, λ. If we had measured all these individual failure rates, we would be able to draw a frequency distribution, Φ(λ), for the population as in fig. 4. The area under this curve, between any two points, u and v, would then be the fraction of the population having values of λ falling between u and v. Now, if we knew the shape of this curve, Φ, we would use it as a prior:

p0(λ) = Φ(λ),   (12)

in eq. (9), along with the k and m of our specific plant, to obtain our desired posterior. Of course, we do not know the function Φ exactly. However, we do have some evidence about it in the form of E2, eq. (11). So, the question becomes: "What can we infer about Φ from the evidence E2?" This question is well formulated for a Bayesian approach. To establish a Bayesian framework in which to answer, we imagine Φ to be imbedded in a space of possible "candidate" functions, Φ. We then erect over this space a probability surface that reflects our state of knowledge about where in this space Φ lies (see fig. 5). This probability surface should incorporate the evidence E2 and E3. For this purpose, we write Bayes' theorem in the form

p(Φ|E2, E3) = p(Φ|E3) [p(E2|Φ, E3) / p(E2|E3)],   (13)

where

p(E2|Φ, E3) = Π_{n=1}^{N} ∫_0^1 Φ(λ) [mn! / (kn!(mn − kn)!)] λ^kn (1 − λ)^(mn − kn) dλ,   (14)

and, analogously to eq. (8):

p(E2|E3) = ∫ p(Φ|E3) p(E2|Φ, E3) dΦ.   (15)

Fig. 5. Representation of a probability surface over the space of functions Φ.
Fig. 6. Master logic diagram (left hand and right hand sides). (Recoverable labels include levels I through VI; offsite release; release of core material; insufficient core heat removal; loss of core cooling; reactor coolant boundary failure; and internal, external, indirect, and common cause initiators.)
Eq. (13) is the first stage of our two-stage Bayesian method. Its output distribution, p(Φ|E2, E3), incorporates the evidence E2 and E3. From this output of the first stage, we calculate the input to the second stage as follows:

p(λ|E2, E3) = ∫ Φ(λ) p(Φ|E2, E3) dΦ.   (16)

This distribution serves as our prior, p0(λ), in a "second-stage" application of Bayes' theorem, which then evaluates the evidence E1. That is, we now rewrite eq. (6) as

p(λ|E1, E2, E3) = p(λ|E2, E3) [p(E1|λ) / p(E1|E2, E3)],   (17)

and the bracketed term is evaluated just as in eqs. (7) and (8). Eqs. (13) through (17), thus, embody the basic ideas of the two-stage Bayes approach to combining data and judgment. Details and examples can be found in ref. [6].
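A discretized version of eqs. (13)-(17) can be sketched in a few lines. In the sketch below, the space of candidate population curves Φ is taken, for illustration only, as a small family of beta densities with a flat prior p(Φ|E3) over them; the candidate parameters and the plant data (E2 for three other plants, E1 for our own) are hypothetical values invented for the example, and the grid integration stands in for the integrals of eqs. (14)-(16).

```python
import math

def beta_pdf(lam, a, b):
    """Beta density, used here as a convenient parametric family of candidate
    population curves Phi(lam). The choice of family is an assumption of this sketch."""
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * lam**(a - 1) * (1.0 - lam)**(b - 1)

n = 2001
lams = [(i + 0.5) / n for i in range(n)]  # midpoint grid on (0, 1)
dlam = 1.0 / n

# Candidate population curves Phi and a flat first-stage prior p(Phi | E3):
candidates = [(2, 20), (2, 50), (5, 50), (2, 100), (5, 100)]
prior_phi = [1.0 / len(candidates)] * len(candidates)

# Hypothetical E2: (k_n, m_n) for N = 3 other plants.
E2 = [(1, 50), (0, 80), (2, 40)]

def binom_lik(k, m, lam):
    return math.comb(m, k) * lam**k * (1.0 - lam)**(m - k)

def likelihood_E2(a, b):
    """Eq. (14): likelihood of E2 given a candidate Phi."""
    L = 1.0
    for k, m in E2:
        L *= sum(beta_pdf(lam, a, b) * binom_lik(k, m, lam) for lam in lams) * dlam
    return L

# Eq. (13): first-stage posterior over the candidates (eq. (15) is 'evidence').
likes = [likelihood_E2(a, b) for a, b in candidates]
evidence = sum(p * L for p, L in zip(prior_phi, likes))
post_phi = [p * L / evidence for p, L in zip(prior_phi, likes)]

# Eq. (16): second-stage prior p(lam | E2, E3) as a mixture over candidates.
prior_lam = [sum(w * beta_pdf(lam, a, b) for w, (a, b) in zip(post_phi, candidates))
             for lam in lams]

# Eq. (17): second-stage update with our own plant's data E1 = {k1 failures in m1 demands}.
k1, m1 = 1, 30
unnorm = [p * lam**k1 * (1.0 - lam)**(m1 - k1) for p, lam in zip(prior_lam, lams)]
norm = sum(unnorm) * dlam
post_lam = [u / norm for u in unnorm]

mean = sum(lam * p for lam, p in zip(lams, post_lam)) * dlam
print(round(mean, 4))  # posterior mean failure rate per demand
```

The ref. [6] procedure works over a richer space of candidate curves; restricting to a small beta family here simply keeps the two stages visible in a few dozen lines.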
4. The use of judgment in modeling

In sections 2 and 3, we saw how judgment entered the PRA process through the prior distributions for the elemental frequencies, λj, that are the input parameters to the functions, Fi, of table 1. This is the most obvious and the simplest use of judgment, but, actually, judgment pervades the entire process. The next most obvious use of judgment is in developing the functions Fi themselves and in identifying the corresponding scenarios si. In doing this, judgment enters in many ways, of which we will briefly discuss two in the following subsections.
4.1. Identifying the initiating events - The property of completeness

The first step in a PRA proper is the identification of a set of "initiating events" that could conceivably lead to radioactive release. This is done using fault tree type thinking as expressed in a "master logic diagram," fig. 6. At the top level, we ask ourselves, "How can radioactivity be released?" Then, "How can the core melt?" "How can core cooling be diminished?" etc. This line of thinking leads, at the bottom level, to a set of initiating events or, actually, to a set of categories of initiating events. If the thinking has been done well, the tree has the very important property of "completeness" at every
level so that at the bottom level we may say that unless one (or more) of these initiating events occurs, the core cannot melt, and a large radioactive release cannot occur. This line of thinking and this set of categories are the beginning of the risk model of the plant. Already, at this stage, modeling judgments are made, especially in the definition of categories, in the choice of their number, size, and dividing lines, and in the assertion of completeness. These judgments are based partly on data, i.e., on the operating history of the specific plant being studied, on the experience of other similar plants, and of the industry at large. They are also heavily based on the modeler's knowledge of the plant's design and its operating and maintenance procedures, as well as his knowledge of PRA methodology. For this reason, it is important that the "modeler" actually be a team of experienced people and that this team effectively include people intimately involved in and familiar with plant operations and the plant "as built" construction.
4.2. Laying out the scenarios - The event tree/fault tree controversy

Given that an initiating event has occurred, we desire to lay out all the subsequent scenarios that can emanate from this initiating event. This is done customarily with event trees and fault trees. The "top events" in the event trees represent the performance of various safety systems and safety functions in the plant. The fault trees detail the various ways that these systems or functions can fail. A great deal of engineering knowledge and modelers' judgment goes into the layout and structuring of these trees. In this connection, there exists a very interesting controversy between two schools of thought about how this laying out ought to be done. There is the "event tree" school that uses large, detailed event trees with many top events. In this approach, fault trees play a minor role, are usually small and simple, and solvable by hand. On the other side of the controversy is the "fault tree" school that uses small and simple event trees but links together very large fault trees, which require large computers and elaborate programs to handle them. Although in principle logically equivalent, these schools represent two very different judgments about how a plant risk model should be structured. These judgments have been argued about at great length by many far more qualified than myself. I would like, however, to mention two things about the event tree approach that appeal to me. First, in the event tree,
each scenario, si, is represented explicitly by a path through the tree. In a fault tree, a scenario corresponds to a cutset scattered throughout the tree. Thus, the event tree type of diagram is more visual and intuitive; one can "see" the scenarios more explicitly than one can in a fault tree. Secondly, the event tree diagram has in it, implicitly, a time variable. It provides at least a rudimentary portrayal of the time sequencing of the events in a scenario. The time dimension is entirely absent in a fault tree type of diagram. On this account, also, the event tree provides a fuller representation of the engineering knowledge and judgments of the modelers.
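The scenario-as-path idea can be made concrete with a toy sketch. The top-event names, split fractions, and initiating-event frequency below are invented for illustration; a real event tree would also include branch dependencies and many more top events.

```python
from itertools import product

# Hypothetical top events (safety systems queried in order) with assumed
# failure probabilities per demand; these numbers are illustrative only.
top_events = [("aux feedwater", 1e-3), ("high-pressure injection", 5e-4)]
init_freq = 0.3  # assumed initiating-event frequency, occurrences per year

# Each scenario s_i is an explicit path: a tuple of success/failure outcomes,
# and its frequency phi_i is the product of the branch frequencies along it.
scenarios = []
for outcomes in product((True, False), repeat=len(top_events)):
    freq = init_freq
    for (name, pfail), ok in zip(top_events, outcomes):
        freq *= (1.0 - pfail) if ok else pfail
    scenarios.append((outcomes, freq))

for outcomes, freq in scenarios:
    path = ", ".join("%s %s" % (name, "ok" if ok else "FAILS")
                     for (name, _), ok in zip(top_events, outcomes))
    print("%-50s %.3e / yr" % (path, freq))

# The path frequencies partition the initiating-event frequency exactly:
assert abs(sum(f for _, f in scenarios) - init_freq) < 1e-12
```

Each printed line is one visible path through the tree, whereas recovering the same scenarios from a linked fault tree would require solving for its cutsets.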
5. The first judgment: Deciding to do a PRA

Perhaps the most interesting judgment made in connection with PRA is the judgment of whether to do or not to do such a study in the first place. Why do some choose yes and others no? Some hope, by doing a PRA study, to get regulators or intervenors off their backs. Others hope to avoid incidents like Three Mile Island, Browns Ferry, Bhopal, and Ocean Ranger. Others feel that it is just a good idea in general, that it provides good training for the organization, that it focuses attention and gets people thinking about safety in a more penetrating and rigorous way.

What reasoning supports a decision not to do a PRA study? Such studies cost money and can require the time of key people who are busy elsewhere. The judgment can be made that the study is not worth the cost and the distraction. This may in some cases be a valid judgment. However, in making it we need to be wary and to watch our thoughts, for certain attitudes can creep in which would render such a judgment inappropriate. Among these is the very human tendency to believe that it cannot happen here ("We know our plant is safe!"). Another is the tendency to lose patience under repeated questioning and harassment and to come to the position that "We will not do anything further unless we are forced to." Finally, we must guard against the "Pandora's Box" attitude: "If there is something wrong, I would rather not know about it."

Is there proof that a PRA could have prevented a TMI or a Bhopal? Well, absolute proof of anything is hard to come by, but there is suggestive evidence. There are data and evidence to support a judgment that a PRA can bring to light conditions underlying such disasters. In the PRAs that we know about, almost always something has been discovered that was not fully recognized before. For example, the high likelihood of
reactor trip failures of the Salem type was discovered before the Salem incident occurred (see ref. [4], Vol. 1, p. 1.5-4). Changes to plant design or procedure have been identified that could reduce risk at very modest cost [7]. In some cases, design features or design principles were found that had been built into the plants in the belief that they were safety enhancing but which, when subjected to the quantitative testing of PRA, turned out to be, in fact, detrimental to safety (see ref. [8], p. 277). There is also some supportive evidence in the datum that, as far as we know, no plant for which a PRA has been done has had a disaster or near-miss type of incident *. Finally, we have the general experience of the human race that when we focus our attention on some aspect of design and, particularly, when we apply to it the discipline of quantification, we tend to have a more successful outcome. We have every right to expect that this should also hold true with respect to the safety of our designs.
6. The final judgment: Using a PRA

A final judgment remains to be made after a PRA has been done and the immediate benefits received, the immediate decisions made, and immediate actions taken. Should the PRA study now be put on a shelf and gradually forgotten, or should we attempt to make it a "living document," to continuously update it, to use it as a tool for training and an ongoing risk and availability management program [9]? Should we use it as the basis for a safety assurance program [10,11]? Should we use it to search for "practicable modifications" [12] which can make our plants substantially safer?

These judgments are not so easy to make as they might seem. Although it may be desirable to use the PRA studies in the above ways, practically speaking it has been difficult. The studies have been voluminous, heavy, detailed and, let us acknowledge it, not always as scrutable as we would like. There is a need for computerized tools that enable one to access a PRA readily, to query it, to update it as new data and judgments become available, to explore effects of proposed changes, etc.

* Since the number of plants for which PRAs have been done is still small, this datum, while in the favorable direction, does not yet carry high evidentiary value. While we know this intuitively, it is an interesting exercise to work it out as an example of application of Bayes' theorem.

Fortunately, such tools are now becoming available. With them, we may soon
be able to use PRA studies much more fully as a basis for safety and reliability management.
References

[1] S. Kaplan, G. Apostolakis, B.J. Garrick, D.C. Bley, and K. Woodard, Methodology for probabilistic risk assessment of nuclear power plants, PLG-0209 (June 1981).
[2] Pickard, Lowe and Garrick, Inc., Westinghouse Electric Corporation, and Fauske & Associates, Inc., Zion Probabilistic Safety Study, prepared for Commonwealth Edison Company (September 1981).
[3] Pickard, Lowe and Garrick, Inc., Westinghouse Electric Corporation, and Fauske & Associates, Inc., Indian Point Probabilistic Safety Study, prepared for the Power Authority of the State of New York and Consolidated Edison Company of New York, Inc. (March 1982).
[4] Pickard, Lowe and Garrick, Inc., Seabrook Station Probabilistic Safety Assessment, prepared for Public Service Company of New Hampshire and Yankee Atomic Electric Company, PLG-0300 (December 1983).
[5] S. Kaplan and B.J. Garrick, On the quantitative definition of risk, PLG-0196, Risk Analysis 1, No. 1 (March 1981).
[6] S. Kaplan, On a 'two-stage' Bayesian procedure for determining failure rates from experiential data, PLG-0191, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-102, No. 1 (January 1983).
[7] B.J. Garrick, Lessons learned from first generation nuclear plant probabilistic risk assessments, PLG-0408, prepared for Society for Risk Analysis, presented at the Workshop on Low-Probability/High-Consequence Risk Analysis, Arlington, Virginia, June 15-17, 1982.
[8] B.J. Garrick, Recent case studies and advancements in probabilistic risk assessment, Risk Analysis 4, No. 4 (June 1984).
[9] B.J. Garrick, Examining the realities of risk management, PLG-0409, prepared for Society for Risk Analysis, Knoxville, Tennessee, September 30-October 3, 1984.
[10] F. Rowsome and R. Blond, Testimony at Indian Point hearings, U.S. Nuclear Regulatory Commission, Atomic Safety and Licensing Board, Docket Numbers 50-247-SP and 50-286-SP, pp. 137-141 (October 24, 1983).
[11] W.J. Dircks, Memorandum for the Commissioners (November 2, 1984).
[12] NRC says no imminent risk at Indian Point; Asselstine says look further, Inside NRC 6, No. 22 (October 29, 1984) p. 9.