Acta Psychologica 45 (1980) 301-321
© North-Holland Publishing Company

THE RISKS OF RISK ANALYSIS *

Lennart SJÖBERG

Department of Psychology, University of Gothenburg, Sweden

* The paper was supported by a grant from the Committee for Future Oriented Research. Requests for reprints may be sent to L. Sjöberg, Dept. of Psychology, University of Gothenburg, 400 20 Gothenburg, Sweden.
Widespread public concern about the risks of technology has made it necessary to develop rational means of measuring risks and to suggest procedures for rational decision making concerning risky technology. Such methods of risk analysis can contribute significantly to improved decision making, but they also raise several methodological problems. Risk analysis is always uncertain, and the overall level of risk of a technology is particularly difficult to determine. When results from risk analysis are used in debate and decision making, the different camps tend to twist and distort the information it yields. Extreme elitist or egalitarian viewpoints tend to develop, and risk analysis must be used with great caution if it is not to contribute further to widespread suspicion about the honesty and competence of scientific experts.
In present-day society risky technologies are becoming more and more prevalent. Nuclear power is the most prominent example, but many others could also be mentioned. Risky technology has created a very polarized social climate in many countries of the industrialized western world. It has also created a need for rational procedures for measuring risk: risk analysis. Risk analysis is used in decision making concerning risky technology. The purpose of this paper is to discuss under what conditions risk analysis can most fruitfully be utilized. Risk analysis is not without its faults, however, and some of its weaknesses will be discussed here. It should also be realized that risk analysis is used in a continuous social decision making process in which people from different camps are often in conflict with each other over interpretations and preferences. Risk analysis is discussed in the paper in such a context. Risk debates have a tendency to treat, and to create, emotionally
provocative and threatening content. Thinking under emotional threat is therefore a relevant subject to discuss in connection with the use of risk analysis in decision making. Also, it is well known that values and subjective probabilities may be distorted by underlying motives. Such general human mechanisms are illustrated here with examples from risk debates. What can a psychologist do to improve the situation? The paper concludes with some reflections on possible means of improving the exchange of information among parties in decision making about risky technology.
Definition of risk
The word risk is notoriously ambiguous and many more or less specific meanings have been attributed to it. There are three broad classes of meaning: those concerned with the probability of negative events, those concerned with the negative events themselves, measured in some suitable way, and those concerned with a joint function of probability and consequences, most often their product. Risk analysis purports to analyze both probabilities and consequences. Perceived risk is seldom well captured by the product of probability and consequences; the use of this product is inspired by thinking in economics, and it can often be quite misleading in the context of public decision making. It is unfortunate, therefore, that one often finds it suggested as the definition of risk. In this paper risk will be used to denote probabilities of negative events unless it is clear from the context that a broader denotation is intended.
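For concreteness, the third class of meaning can be written out formally (the symbols are illustrative, not taken from the literature discussed here): for a single negative event with probability p and a consequence of size c, the product definition amounts to the expected loss

R = p \cdot c, \qquad \text{or, over several events,} \qquad R = \sum_{i} p_i \, c_i ,

where p_i is the probability of negative event i and c_i a measure of the size of its consequences.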
Risk analysis

Methodology

Risk analysis is a special case of decision analysis (Keeney and Raiffa 1976). Decision analysis involves trying to determine both the expected cost and the expected value following upon different actions. Risk analysis, however, mostly concerns the 'cost' side, and 'costs' are measured not in
monetary terms but in terms of human life. The general course of a risk analysis is as follows:
(1) Description of all possible events that can lead to negative consequences, given that a certain action is decided upon (e.g. to build and use a new nuclear power plant).
(2) Estimation of the probability of each of the events that can lead to negative consequences.
(3) Estimation of the size of the consequences.
(4) Computation of the expected loss for each event by multiplying the size of the loss by its probability.
(5) Computation of the expected loss for the whole action by summing the expected losses over all events. (A small computational sketch of this procedure is given below.)
If so desired, it is of course possible to measure the utility of an action in a way which is analogous to the measurement of risk, as just described. It is a common opinion that rational decision making involves weighing utility against loss or risk and then choosing the alternative of action which shows the largest positive difference between utility and loss. However, this paper is mainly concerned with risk analysis; the analysis of utility and its combination with risk analysis is not treated to any large extent. I want to emphasize, however, that the method just mentioned for making rational decisions is not the only possible one, nor in any absolute sense the best one. There are many different rules for rational decision making, and they all have their drawbacks and advantages. Many examples are given by Montgomery and Svenson (1976). Also, human decision making can seldom be described by simple rules of rationality of the suggested type (Sjöberg 1978).
Decision analysis and risk analysis are, in principle, relatively simple methods. They build on the oldest and most basic principle of rational decision making (viz. the expected value principle). However, this simplicity may be misleading, since the technical implementation of risk analysis in a particular case may carry with it sizeable practical difficulties.
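The five steps can indeed be made concrete in a few lines of code. The following is a minimal sketch of the computation in steps (4) and (5); the events, probabilities, and consequence figures are invented for illustration and are not taken from any actual analysis:

```python
# A minimal sketch of steps (1)-(5). Event descriptions, probabilities,
# and consequence figures are illustrative only.

# (1) possible negative events for one action, with
# (2) estimated probabilities per year of operation and
# (3) consequences measured, say, in expected fatalities
events = [
    ("small radioactive release", 1e-3, 2.0),
    ("large radioactive release", 1e-6, 3000.0),
    ("construction accident",     1e-2, 0.5),
]

# (4) expected loss per event, (5) summed over all events
expected_losses = [p * c for _, p, c in events]
total_expected_loss = sum(expected_losses)

for (name, p, c), loss in zip(events, expected_losses):
    print(f"{name}: p={p:g}, consequence={c:g}, expected loss={loss:g}")
print(f"total expected loss for the action: {total_expected_loss:g}")
```

A utility analysis of the kind mentioned above would proceed analogously, and a net expected benefit could then be formed as the difference between expected utility and expected loss.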
The generation of action alternatives is a creative process which is external to decision making. If there is at least one good action alternative there is as a rule no great difficulty in making a decision: one simply chooses the good alternative. However, if there are many alternatives of action, all with their advantages and drawbacks and all about equally attractive, one may strive for formalized decision making. Formalization presupposes that it is possible to describe consequences in a quantitative manner. The consequences must be comparable to each other on some common scale, e.g. a monetary or utility scale. One must also be able to state the relevant events for the realization of different consequences, and the probabilities of these events. We will see that these are elements of formalized decision making which all can and should be critically scrutinized. A rule of decision making can in itself be natural and useful, but the result of using it also depends on the quality of the information which the decision rule works with.

The purpose of risk analysis

Risk analysis, then, has as its purpose to determine the probability of disastrous events in technical systems and to describe the consequences of such events. The most famous example of risk analysis is the so-called Rasmussen report, WASH 1400, concerning risks in nuclear power plants. Risk analysis can have as its purpose to provide better information for decision making, or to provide arguments for the implementation of certain technical systems. It is usually not controversial to try to improve the information available for decision making. In particular, it is often argued that risk analysis is quite valuable for improving the safety of technical systems, since one can point to various event sequences that may call for improved safety standards, if the probabilities of disastrous developments are high enough. However, for policy decisions and for public debate risk analysis may be quite destructive if it is used in a non-critical manner. Risk analysis has many weaknesses and the user must be aware of them. The use of risk analysis to provide arguments in a debate can lead to uncertain figures being given too much importance, and to a rule of decision making which is in itself arbitrary coming to be looked upon as the only possible definition of 'rationality'. There are many examples of such misuse of risk analysis and definitions of rationality in the debate about nuclear power. WASH 1400 used the expected value rule as a matter of routine.
Limitations of risk analysis [1]

Risk analysis assumes a kind of neutrality and objectivity which need be neither acceptable nor desirable in social decision making. As pointed out above, psychological research shows that the expected value rule, or some variation of it, is hardly useful as a basis for describing human decision making. It is possible that human decision makers act according to rules that are socially acceptable rather than rational and normatively 'correct'. It is important to understand that the problems of decision making concern justifying a certain decision and arguing for it at least as much as making the decision itself. Risk analysis also assumes rather far-reaching abstractions; one is forced to disregard certain aspects of possible importance. An example of this problem is given by the case where a certain action can bring about increased societal resources at the same time as certain groups in society are exposed to greater risks to health or economy. How should the distribution of utility and risk among different groups be taken into account in decisions made on the basis of risk analysis? And, finally, risk analysis seems to assume that we, in fact, want to be objective and make our decisions on the grounds of facts. But human anxiety is a reality even if it seems unjustified in certain perspectives. It is neither desirable nor realistic to disregard human anxiety.

If the probability of disastrous events in technical systems is to be measured, one must first describe all the possible event sequences leading to them. Then one must try to estimate the probability of each separate step in such a sequence of events. By multiplying the probabilities of all the steps with each other, and adding up the products over all the disastrous sequences of events, one obtains a total measure of the probability of a disaster (see the expression below). The problem with a technique like this is that one cannot be sure of having considered all of the relevant event sequences. It is evident that even improbable and unexpected events do happen. An example is given by Östberg et al. (1977), a project aimed at the study of improbable events in the production of reactor vessels.
[1] An excellent discussion of risk analysis is given by Fischhoff (1977).
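In symbols, the computation just described amounts to the following (a schematic rendering; the notation is mine, not that of the sources cited):

P(\text{disaster}) \;\approx\; \sum_{s \in S} \prod_{i=1}^{n_s} p_{s,i} ,

where S is the set of identified disastrous event sequences, n_s the number of steps in sequence s, and p_{s,i} the probability of step i of sequence s, the steps being treated as independent. Any sequence missing from S makes the sum an underestimate, which is the difficulty discussed next.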
Furthermore, the analysis is made under certain assumptions. It is generally very difficult to describe, and even more difficult to estimate, the probability of human errors or sabotage. These types of risk are therefore often disregarded. The analysis is also generally made under the assumption that the environment of the system functions in a normal way, for example that operators are competent and that certain qualities of replacement components are guaranteed. A technical system of the type under consideration in risk analysis is often very large and complicated. Certain dependencies in the system can be very difficult to anticipate when making a list of all of the possible disastrous event sequences. It can also be the case that scientific knowledge is simply insufficient for predicting everything that could happen in the system. The difficulties in listing all of the disastrous event sequences lead to an underestimation of the probability of a disaster. The size of the underestimation is not easy to measure.

The analysis also assumes that we can determine the probability of each step in the sequence of events. These probabilities can often be very hard to measure. The components of the systems being analyzed are often very reliable, and it is difficult and expensive to determine empirically the exact values of small probabilities. Certain components may be insufficiently tested for such probabilities to be measured. Then we must use expert judgments of these probabilities. Many of the difficulties involved in subjective probability in the area of risk analysis have been discussed elsewhere (Sjöberg 1979b). Subjective probabilities can systematically both under- and overestimate real probabilities, all depending on the circumstances surrounding the judgment and the way of making it. Fischhoff (1977) pointed out that the judgment method used in the Rasmussen report probably expresses too high a level of trust in the technical components of a nuclear power plant and thus further contributes to the underestimation of risk. It is interesting to note in this connection that the recently published major attempt in Germany to estimate the risk of nuclear power plants (Birkhofer 1979) came up with a final estimate of the risk of a core melt-down which was higher than that of the Rasmussen report. The Birkhofer study used empirical probabilities to a larger extent than did the Rasmussen report and was thus, to some extent, free of the bias pointed out by Fischhoff.

The accident at the Three Mile Island (TMI) power plant in March 1979 has provided a wealth of information about risks and risk analysis.
Investigation of the accident was of course motivated for both political and technical reasons. In Sweden the accident caused widespread concern about the risks of nuclear power, and a decision was made to hold a referendum on nuclear power in the spring of 1980. As a preparatory step the government appointed a committee to evaluate information about the risk of nuclear power in the light of the events at Three Mile Island and other recent pieces of information about nuclear power risks. The central task of the committee was to answer the question whether the risk of nuclear power should now be significantly reevaluated. The committee worked intensely for about six months and published its reports in November and December of 1979; they are available in the official series of Swedish government investigations [2]. Serving as a member of this committee, I had the opportunity of observing the process of evaluating risks, and some of this experience will be used here to provide further examples of problems of risk analysis.

The sequence of events leading to the partial core melt-down at TMI had been partly predicted by the Rasmussen report, in the sense that the physical events were foreseen but not the operator mistakes that partly caused them. It is well known, of course, that the prediction of human error is possible only to a very limited extent; more specifically, to the extent that operators follow instructions. When instructions are lacking, ambiguous, or conflicting, or when the operators deviate from established procedures for one reason or another, there is at present no method of predicting their behavior. Still, a partial core melt-down with a subsequent very limited release of radioactivity to the external environment was quite likely to happen in the United States before 1980. The figures cited by the Kemeny commission are a 13-80% probability of a TMI event before 1980. It is a paradoxical and fascinating fact that risk analysis has been utilized rather little to prevent such events in standard regulatory work. At the same time risk analysis has of course been widely cited in the public debate on nuclear power as an argument for the alleged extreme safety of nuclear power plants, in spite of the well-known fact that the uncertainties in the absolute level of probability of risk are quite large.
[2] The official references to these reports are SOU 1979:86 for the main report and DsI 1979:22 for a collection of supplementary appendices, mainly consisting of reports prepared by consultants to the committee. There are plans to publish an English translation of parts of this material.
Several groups reviewing the Rasmussen report have pointed out that the uncertainties of the final probabilities are underestimated by the original report and that they can be as large as a factor of 10². In other words, if the risk of a core melt-down is estimated to be 10⁻⁴ per reactor year, it could be as large as 10⁻³, or as small as 10⁻⁵ (see, e.g., Lewis et al. 1978). The risk analyses of nuclear power plants that have been reported all end up with estimates of the probability of a core melt-down per reactor year in the neighborhood of 10⁻⁴. The uncertainties surrounding these estimates are, however, large, and there has been a tendency in later reports to emphasize this weakness of the method. At the same time, empirical experience of nuclear power technology is accumulating at a rather rapid rate. At the time of writing, the latest estimate is that there is now experience from about 1100 reactor years of large commercial plants in the whole world. Apostolakis and Mosleh (1979) showed that even if rather little confidence is placed in risk analysis, one ends up with a low risk level in the same neighborhood as the analytical estimate when practical experience is integrated into the initial estimate according to Bayes' theorem. Their study was published just before TMI. If TMI is included in the Bayesian estimate the risk level increases, but the final estimate is of roughly the same order of magnitude as before. One could summarize TMI from the standpoint of risk analysis as follows: the event was largely predicted and its probability was considerable. The fact that the event did occur does not call for any major revision of the estimate of risk in nuclear power plants. Experience is accumulating, and integrating it with analytical estimates according to Bayes' theorem provides us with a much improved estimate and should give us considerably more confidence in a low level of risk for nuclear power plants.
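The flavor of such Bayesian integration can be conveyed by a deliberately simplified sketch. It replaces the actual Apostolakis and Mosleh (1979) model with a conjugate gamma-Poisson scheme, and the prior parameters are invented for illustration:

```python
# Simplified gamma-Poisson updating of a core-melt frequency estimate.
# NOT the Apostolakis-Mosleh model; prior parameters are illustrative.

# Prior on the core-melt rate (per reactor year), centered near the
# analytical estimate of 1e-4 but with wide uncertainty:
alpha, beta = 0.5, 5000.0          # prior mean = alpha/beta = 1e-4

# Empirical experience: T reactor years, k observed core melts.
T = 1100.0                          # approximate world experience cited
k = 1                               # counting TMI as one partial core melt

# With a Poisson likelihood the posterior is again a gamma distribution:
alpha_post, beta_post = alpha + k, beta + T

print(f"prior mean rate:     {alpha / beta:.2e} per reactor year")
print(f"posterior mean rate: {alpha_post / beta_post:.2e} per reactor year")
# The posterior mean (about 2.5e-4) stays within the same order of
# magnitude, which mirrors the qualitative conclusion in the text.
```

The point of the exercise is only that roughly a thousand reactor years of experience, including one partial core melt, moves a prior centered on 10⁻⁴ by less than an order of magnitude.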
There are some flaws in this type of reasoning. They can be illustrated by the case of a rupture in a reactor vessel, which is the most serious initiating event that could lead to large releases of radioactivity to the environment. Reactor vessels are made with extreme care, and according to theoretical analyses the risk of rupture is in the neighborhood of 10⁻⁷ per year. This probability is so small that it is usually not further considered in risk analysis. However, if quality assurance procedures in producing and checking a vessel are deficient, human error may yield a larger probability of such disastrous events. A Swedish project was planned to investigate the level of safety in producing reactor vessels, and a preliminary report was published by Östberg et al. (1977). It is interesting, however, to note that the project could not be carried through because of resistance from the producer of the reactor vessel, who did not allow the research group to perform the study. One could ask just how meaningful extremely small probabilities are in the face of the possibility of human error, which should be considerably larger than any of these probabilities. Human error accounts for a very large share of all of the possible event sequences leading to disasters, as is well illustrated by, e.g., Birkhofer (1979). But are not all of these possibilities included in the empirical estimates based on actually accumulating experience? This is an argument with some weight, but one should not forget that social and psychological realities are not as constant as the physical world. A high standard of safety may be achieved in some historical intervals but not in others. The TMI event certainly revealed a number of human malfunctions and deficient operating procedures in the nuclear power industry, examples of which have also been found in Sweden. The fact that the physical principles of nuclear power are well understood, and that risk analysis may be compatible with the events observed, does not mean that the whole system of nuclear power is also socially and psychologically well understood and under control. It seems that some people trained in technology and the natural sciences sometimes forget that a question of risk is not only a technical or natural science question, but a question in a much larger human context, where the possibilities of analysis and control are very considerably smaller.

Still, it seems reasonable to believe that risk analysis is, after all, better than a more intuitive and informal judgment of risk. The chance of missing significant events is probably larger in informal judgment. The task of making a judgment of risk in a complicated system involves the integration of a large amount of information, and it is well known from psychological research that people are not very good at integrating information intuitively (Slovic and Lichtenstein 1971; Tversky and Kahneman 1974). It is therefore probably better to evaluate various pieces of information analytically and explicitly and to integrate them according to some formalized procedure.

Presenting the results of risk analysis involves a number of new problems. The very presentation of a sequence of events may cause its subjective probability to increase in spite of its being very unlikely.
A fault-tree analysis with many branches may make a provocative and threatening impression which may be quite misleading as a picture of the total risk. Fischhoff et al. (1978) showed that people are very insensitive to branches omitted from fault trees, and Fischhoff (1977) discussed several reasons why even experts may fail to consider significant branches in fault trees.
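The insensitivity to omitted branches is easy to demonstrate in miniature (branch names and probabilities are invented for the example):

```python
# Why omitted fault-tree branches bias the total estimate downward.
listed_branches  = {"pump failure": 3e-4, "valve failure": 1e-4}
omitted_branches = {"operator error": 5e-4, "sabotage": 2e-5}

p_est  = sum(listed_branches.values())
p_true = p_est + sum(omitted_branches.values())
print(f"estimated: {p_est:.1e}, actual: {p_true:.1e}")
# A judge who does not notice the missing branches sees no reason to
# adjust the estimate upward (cf. Fischhoff et al. 1978).
```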
Risk and value

Risk analysis assumes that it is possible to measure and compare different scales of value, for example money and human life. Extensive discussions of value problems have been given by Juås and Mattsson (1978) and Blomkvist (1977). Only some major points of that discussion will be given here.

Since our economic resources are limited, societal decision making in risk questions must involve putting, explicitly or implicitly, an upper limit on how much a human life may cost. Juås and Mattsson (1978) studied extensively how much we are in this way prepared to invest in saving human life. The variability between different societal activities is large when it comes to the value of human life. This may of course be because explicit evaluation of human life has not entered the decision making process and the final levels have been arrived at more or less randomly. It may also be because safety policy is not directed only towards saving human life; it also purports to create a feeling of safety and to fulfil certain moral obligations. The Swedish requirements on buildings are, for example, very strict from the standpoint of safety, and there are also very few accidents in which buildings in our country collapse. This fact can of course be seen as an expression of a need for the experience of safety in addition to actual safety. We do, after all, spend most of our lives indoors!

It is also well known, and often mentioned in the risk literature, that we are more willing to invest much money in saving an identified human being from a disastrous situation than we are in planning for the prevention of future accidents. It is often said that this is because in the latter case we are acting towards an unknown and unidentified individual. However, it seems likely that there are certain morally compelling rules guiding our actions which are different in the two cases. A human being who finds himself in a life-threatening situation and who cannot save himself has thereby created a moral obligation for other people to come to his rescue. Such a moral obligation is not present in a similarly clear manner when it comes, for example, to safety in road traffic.
There is no disastrous situation present at the time of decision making. Each person who enters road traffic has a free choice between driving his private car and using, for example, trains or buses. Thus, one enters traffic on one's own responsibility and risk. Svenson (1978) gives an extensive discussion of risk in traffic.

Many people, including leading politicians, consider it immoral and unacceptable to work with explicit values of human life in planning. The argument, seemingly, is that such a philosophy of planning would express, and perhaps also bring about, an inhumane attitude. It is also likely that a politician would feel anxious if pressed to take a stand on the value of human life, among other things because such a stand could easily be criticized in the mass media. He could be seen in the eyes of public opinion as a cold and cynical man. The counter-argument is that we must at any rate always, at least implicitly, put a value on human life, since economic resources are limited. The question then becomes whether explicit values of human life are useful in planning, or whether there are other and better ways to pursue rational planning of safety.

Some suggested methods of measuring the value of life need to be mentioned here. One idea is to present hypothetical gambles to people in which they can state how large a probability of death they will accept for the chance of winning a certain sum of money. The problem with such gambles is that they are completely hypothetical and that it is difficult to know what relation they have to more realistic situations. It is also well known that people find it very difficult to differentiate meaningfully among small probabilities (Sjöberg 1979b). Kahneman and Tversky (1979) showed that the manner of presenting an offer to gamble largely influences the decisions people make. Another suggestion made in the literature is to base the measurement procedure on the salary raises required for people to accept risky jobs. There are many difficulties with this procedure. Firstly, one must ask whether the workers were really aware of the increase in risk. Secondly, it is uncertain what alternatives the workers had to the risky task. It is not likely that this method could lead to good measures of the value of human life.

An economic approach is to estimate a person's value of life by computing his likely future income and deducting from it his likely future consumption. This may be economically reasonable, but it is morally quite indefensible, since it means that many people will have a zero or even a negative value of their lives. The suggestion mostly shows that the value of life is simply something quite different from the extent to which a person is an economic resource in society. It is regrettable that this method is being used in serious practical planning.
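The moral objection is easy to see in a schematic calculation (the formula is the standard discounted 'human capital' expression; the scenario is invented for illustration). With discount rate r, the approach values a life as

V = \sum_{t=1}^{T} \frac{y_t - c_t}{(1 + r)^t} ,

where y_t is expected income and c_t expected consumption in future year t. For a retired person, y_t is near zero while c_t is not, so every term is negative and V < 0: the method literally assigns such a person a negative value of life.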
There are many other value aspects besides money and human life involved in risk problems. There may be different types of environmental effects, such as noise and air pollution, or aesthetic aspects of the environment. The difficulty of measuring these values and integrating them with the others is of course very great. Even if one were able to obtain reasonable measurements for individuals or groups of individuals, there is still the question of how the differences between individuals should be taken into account, and what weight should be given to different dimensions in final decision making. Some of these problems are approached by decision analysis. There is a question, however, whether a decision analyst can succeed in distinguishing between values and facts in the extremely difficult and often emotionally provocative situations that arise in the area of risk.

Suppose that society has to make a choice between various suggested energy programs. The decision analyst is to separate values from judgments of facts. We can describe the values that the different programs purport to fulfil, but in order to decide to what extent the programs do fulfil the different values, we must use experts. Now the question arises to what extent these experts can establish a functioning credibility with all of the groups that suggest various action programs. The typical case in many social and political problem areas is that various groups have different views of reality. It does not seem likely that they would generally be willing to give up their views of reality in the face of a certain expert opinion, particularly not when it comes to complex social and economic questions where different experts usually come up with widely differing recommendations and opinions. In Sweden, the debate on the economic importance of nuclear power before the 1980 referendum nicely illustrates this point. The government committee on this issue, appointed at the same time as the reactor safety committee, split up even before publishing its final report. The report itself was not unanimous. When it was published it was immediately rejected by anti-nuclear groups as a piece of simple propaganda in favor of nuclear power. Since then, the mass media debate has created what amounts to almost complete confusion on the issue.
The chairman of the Conservative Party and Minister of Economics has likened the possible future lack of nuclear power to the effects of the plague in 14th-century Sweden. In the opposite camp one argues, at times, that if we get rid of nuclear power more jobs will be created and we will, in fact, be better off economically! If someone points out an inconsistency between actions and values, it is easy to adjust the connection between reality and values in order to achieve consistency. This is particularly easy when the scientific knowledge about reality is in itself uncertain and ambiguous, as is often the case in risk problems. Considerable correlations between values and beliefs, the latter often used as subjective estimates of probability, have been found in several studies (cf. Sjöberg 1979a). Such correlations suggest that facts and values are dependent. Perhaps treating values and facts independently requires an explorative attitude which is in itself very hard to combine with a strong initial involvement. On the other hand, taking a persuasive attitude to a certain topic may involve creating a consistency between values and beliefs which further contributes to a deteriorated level of debate. The concrete manifestations of risk debate constitute our next topic.
Risk debate

Conditions of risk debate

The nature of risk information

We have already seen that a risk analysis is difficult to carry out. It is self-evident that experts may have different opinions regarding the value of a certain risk analysis. This may partly be due to their having different areas of expertise, but differences of opinion can remain in spite of a common basis of knowledge. The scientific and technological knowledge used by risk analysis is, as pointed out above, often insufficient for an exact judgment of risk. Subjective judgment can be influenced by many personal factors, even if the judgment is performed by skilled experts working with the aim of arriving at a maximally correct and objective view of the risk level. Thus, it is normal for experts to arrive at different opinions. And, when the questions have a large social and human significance, some experts may find it imperative to carry their viewpoint further and to stimulate a general debate in the mass media.
Thus, one can expect that there is room for different interpretations of viewpoints and that there will often be people who contribute to the debate with deviating opinions. This inevitably means that problems under debate will be partly ambiguous. The information can be twisted in many different directions, and there is much leeway for subjective factors.

Debate atmosphere

In many risk questions the climate of debate is very tense. Large values are at stake and the actors are characterized by personal involvement in the questions. One can differentiate different types of motives, from ideological ones to economically vested interests and job security. In turn, such individual motives can be understood in a social and cultural context. It seems particularly important in risk questions that an earlier optimism about development is being replaced by sceptical and pessimistic attitudes in many people, but not in all. This easily leads to a polarized situation where different individuals and groups are in conflict with each other. Basic values and views of the world appear to differ greatly among such groups. Risk questions thus involve very strong motives in people, motives that can be different but are always characterized by their strength.

Risk debates involve both attempts to persuade and attempts to inform. All debates on complex and value-loaded questions are difficult. Polarized opinions easily develop and positions become locked. In such situations in-group feeling and ideological dogmatism sometimes develop. This, in combination with differences in power positions, easily leads to the group in power not being sufficiently sensitive to the viewpoints and needs expressed by the opposition. An emotionally loaded climate where the information is insufficient and ambiguous of course opens the door to interaction between values and beliefs. This interaction will be further discussed below. By adjusting and selecting information to suit the purposes of each party, one tends to contribute to further polarization of the situation, and suspicions of dishonesty easily arise. One can easily see how negative feedback loops develop in this manner.

The structure of risk debate

Debates and emotions

Risk debates are often concerned with new, large-scale, or otherwise
powerful technology. Polarized opinions and values are often very obvious to everyone who takes an interest in questions like these. Often one can speak of relatively easily discriminated proponents and opponents of a certain technology. The correlation between attitudes to different types of technology is worthy of further study. My discussion here will not, however, assume that there is such a correlation. Instead, I try to discern certain general trends that I find most easily discriminable in the nuclear power question. Similar trends can probably be found in discussions about, for example, risks in the work environment. An example is a study of the debate on asbestos in Sweden published by Nordfors (1977).

Proponents of a technology may be found among technical experts and also among politicians and representatives of labor unions. The opponents are likewise both experts and laymen. The contributors to the debate naturally claim to be competent, at least with regard to the facts that they discuss and comment upon. Therefore each argument must contain some judgment of reality. This reality judgment is a part of the debate that I particularly want to comment on.

A lack of consistency between opinions is sometimes painful, especially when it comes to strong wishes and values and the reality judgments relevant to them. By reconstructing one's picture of reality one can often achieve a certain consistency between wishes and beliefs. Such consistency is often efficient in calming down potential anxiety (anticipation of frustration), so that one seldom or never needs to encounter it. However, when people with different viewpoints are encountered, there is a threat to one's adjusted beliefs and the need arises to strengthen positions further. This may contribute to consistencies in opinions that remain after the threat is no longer acute, so that quite considerable interaction between values and beliefs may be expressed also in neutral situations. As mentioned above, strongly consistent belief-value structures are quite common in the area of social attitudes (Sjöberg 1979a).

Certain social psychologists have spent much time studying how we attribute characteristics to other people (Heider 1958; Kelley 1967). If a person expresses opinions that run contrary to what we ourselves believe, and that would require from us a painful and extensive re-evaluation of central opinions, it would be unlikely for us to attribute to that person both honesty and competence. If we did, we would be required to take his opinion seriously and perhaps re-evaluate our own opinions.
It is of course easier to attribute to the other person dishonesty or incompetence or both. For the purposes of discussion, let us simplify the picture considerably. What is said here is not true of proponents and opponents of a technology in general, but by simplifying certain characteristics of debate and debaters one may find an interesting trend. Debates often involve mutual threats. Proponents of a technology are threatened in their basic opinion of the technology as something good, and of material welfare and material expansion as legitimate goals of society. They may also be threatened in their professional roles and job security. Opponents of a technology, on the other hand, experience an existential threat to themselves and perhaps to mankind as a whole from what they see as a forthcoming destruction of the environment and of global resources. Material expansion is no longer accepted by opponents as something good in itself; rather, ecological balance is striven for. At the center of this storm are certain scientific and technological questions that are very hard to answer in a definitive manner. How large, really, is the risk of a nuclear power accident? How large is the risk that a certain substance in the work environment can cause cancer? The experts can in certain cases give informative answers to such questions, but absolute certainty can almost never be achieved. This is partly due to our lack of knowledge about the complex interactions that arise when different substances occur together or, to take a nuclear power example, to the fact that disastrous developments may be due to sabotage, which can hardly be foreseen and the probability of which cannot be estimated. In this situation proponents of a technology tend to develop the extreme attitude of knowledge elitism. Technology opponents, on the other hand, tend to develop the extreme attitude of knowledge egalitarianism. Both extreme attitudes are further discussed below.
Knowledge elitism

The extreme attitude of elitism involves uncritical acceptance of the honesty and competence of one's own group. In an analogous manner, opponents are seen as, in various combinations, incompetent and dishonest. The proponents' extreme attitude is called elitism since it builds on the idea that actual knowledge in risk questions can be found only within a small group of experts, mostly experts in technology and the natural sciences. The objectivity and impartiality of their viewpoint is
seldom or never in doubt. One considers oneself 'rational', but the idea of rationality is typically very superficial. Resistance is seen as emotional and irrational, as politically or commercially opportunistic. Resistance should be counteracted by trying to create a feeling of safety or, if that is not possible, by trying to isolate opponents from more uncertain or indifferent groups by, among other things, pointing out their 'complex' social and political viewpoints.

The typical element of the elitist extreme attitude thus involves 'forgetting' that risk judgments are in fact difficult to make and that the probabilities produced in risk analysis should not be taken too literally. The problem area is restricted to those very problems where the competence of experts seems to be well established. In reality, problems are much more multifaceted. One is therefore guilty of a twisted and simplified view of the problems. The knowledge elitist thus has much confidence in his own group but little confidence in the public, which he tends to think one should not 'worry'. The perspective is restricted to a limited area of knowledge, and further social and psychological perspectives are disregarded. The confidence in experts can also be combined with a moral claim that representative democracy is the only legitimate expression of 'the will of the people'. Representative democracy thus becomes a way of legitimating the right of certain experts to make judgments in an allegedly impartial manner and to guide social decisions by their judgments. The claim that such impartial judgment is in fact generally made is much too superficial an opinion to be realistic. In this way the elitist attitude makes it difficult for the public, and for society as a whole, to profit from the expert knowledge which in fact exists. Scientists often seem to put themselves in the service of certain economic forces which tend to exploit the environment further and to work for development of the type 'more of the same'. Such a development need not be seen as negative in itself. It is, however, not unlikely that certain proponents of a technology would for their own part find it valuable to broaden and deepen their definitions of problems, and that, in doing so, they would find that they had previously, due to restricted definitions of problems, been in the service of forces that they really did not sympathize with.
Knowledge egalitarianism

Among opponents of a technology one can notice a well-established suspicion of experts, a belief that all experts are bought, and in extreme cases a very anti-intellectual attitude. One prefers to 'believe one's own eyes' rather than believe experts. In extreme cases this attitude leads to knowledge egalitarianism, which involves rejecting science altogether. One basis of such an extreme attitude is probably the fact that different experts often have different opinions. The same expert can also hold strongly different opinions at different points in time. The relation between expert opinion and organizational membership and employment is sometimes also all too evident.

Knowledge egalitarianism carries with it many risks. Scientific knowledge is not omnipotent, but it is not therefore powerless. Not exploiting the knowledge which exists is very unwise. One cannot reject science or experts as easily as is sometimes done. If expert knowledge is not used, one risks being guided by more or less temporary mass media interests and opinions rather than by factual considerations about where the protests would best be invested. This leads both to a lack of optimality in the use of resources and to a deterioration of credibility among groups who are not as involved. Vulnerability to criticism from experts increases if opposition is not well grounded in factual knowledge.

Science is a product of human sense; if you wish, of so-called common sense. What differentiates science from the common sense of each human being is simply that science builds on many generations' accumulated insight and experience rather than on the achievements of a single human being. We should not set up an artificial conflict between common sense and science, and we should be careful not to trust too much what 'we can see with our own eyes'. There are certain tendencies in our way of thinking which easily lead to erroneous conclusions if our opinions are not analyzed critically (e.g. Smedslund 1963; Chapman and Chapman 1967).

Can risk debates be improved?

An improvement of debates would require proponents of a technology to be more willing to listen to the arguments of the other side, and vice versa.
Opponents would more easily profit from the scientific information that actually exists, and proponents would avoid debating the wrong questions and could instead direct their energy to the actual conflicts and differences in reality opinions, and perhaps also in values, that exist.

If a risk analysis is to contribute to societal decision making it must of course be made in a competent manner. This means that it must be realistic with regard to the limits of competence when it comes to determining probabilities and consequences. Risk analysis must not be interpreted too literally, and it also seems that this type of 'soft' result does not lend itself very well to use in explicit decision rules of the expected value type. It is, however, not uncommon to come across examples of this kind.

Further problems arise in communicating the results of risk analyses to the public. Some of the cautions and reservations available in the written sources are easily lost in mass media accounts. When, for example, the Swedish government committee published its report on the safety of nuclear power, the committee stressed several points, viz. (a) the difficulty of arriving at a very precise estimate of the level of risk; (b) the disastrous consequences of large accidents in nuclear power plants; and (c) that the TMI accident did not in itself add any significant knowledge to the already existing awareness of the many and complex risks involved in nuclear power. This message was interpreted by many newspapers in Sweden as simply saying that TMI did not call for any revision of opinion concerning the safety of nuclear power, so apparently one need not be further bothered by the issue!

How can risk analysts themselves contribute to counteracting the tendency to oversimplification? The need to counteract misuse of risk analysis is obvious. The very fact that a figure is cited in a report easily lends itself to tremendous over-confidence in that particular figure and to its endless use in mass media and public debate as an argument for one or the other camp. Perhaps the idea of having several parallel risk analyses made by several groups of researchers with different values may have something to contribute.

Some psychologists have argued that conflicts can be solved, given that values are held in common, by showing that the parties differ in reality beliefs and not in values. However, there seem to be three problems with this approach. First, it presumes that we can establish the credibility of some group of experts whose opinions are generally acceptable. Second, it seems unlikely that one can find agreement on a particular set of values, since values in themselves are subject to many twistings and
readjustments under the influence of emotionally provocative debates. Third, expert opinion may be arrived at relatively easily in certain restricted areas of the natural sciences at least, but it certainly cannot be arrived at easily in the social and psychological sciences, and these disciplines deal with a large share of the problems involved in risk questions.

Perhaps the very knowledge of the common tendency to twist information and to select suitable information can to some extent counteract such tendencies and thus contribute to an improved climate of debate. It should also be interesting to study more closely how value structures differ among parties in a risk debate, in order to map possible differences among them. Perhaps one could at least accept the other party if his value structure is exposed and one finds that, although it is different from one's own, it need not be seen as morally unacceptable. It seems likely that communication presupposes having an opinion of the other party which is not too different from one's self-image. One must simply stop making the automatic assumption that the other party is incompetent and corrupt. This, however, assumes a willingness to consider that some of one's own opinions could be subject to debate. Such a willingness is counteracted by all forms of threats; one might therefore be wise to minimize all threatening parts of the arguments put forward.
Conclusion

Risk analysis is a rational procedure for evaluating risks. It can contribute significantly to decision making if its values and limitations are taken into account in a realistic manner. However, the prospects for such realistic use are less than excellent, because the risk of technology is in itself a highly provocative area and because scientific evaluation of risks is difficult and always incomplete. Thus risk analysis, which aims to contribute to rationality, may end up becoming just another means of producing polemical argumentation in the service of vested economic interests. It may even contribute to an ongoing development of distrust in the honesty and competence of science.
References

Apostolakis, G. and A. Mosleh, 1979. Expert opinion and statistical evidence: an application to reactor core melt frequency. Nuclear Science and Engineering 70, 135-149.
Birkhofer, A., 1979. Die deutsche Risikostudie. Gesellschaft für Reaktorsicherheit, FRG.
Blomkvist, A.C., 1977. Value and the measurement of value. (In Swedish.) Department of Psychology, University of Göteborg, Risk Project Report 7-77.
Chapman, L.J. and J.P. Chapman, 1967. Genesis of popular but erroneous psychodiagnostic observations. Journal of Abnormal Psychology 72, 193-204.
Fischhoff, B., 1977. Cost benefit analysis and the art of motorcycle maintenance. Policy Sciences 8, 177-202.
Fischhoff, B., P. Slovic and S. Lichtenstein, 1978. Fault trees: sensitivity of estimated failure probabilities to problem presentation. Journal of Experimental Psychology: Human Perception and Performance 4, 342-355.
Heider, F., 1958. The psychology of interpersonal relations. New York: Wiley.
Juås, B. and B. Mattsson, 1978. Valuation of personal injuries. A comparison of explicit and implicit values. Department of Psychology, University of Göteborg, Risk Project Report 3-78.
Kahneman, D. and A. Tversky, 1979. Prospect theory: an analysis of decision under risk. Econometrica 47, 263-291.
Keeney, R.L. and H. Raiffa, 1976. Decisions with multiple objectives. New York: Wiley.
Kelley, H.H., 1967. Attribution theory in social psychology. In: D. Levine (ed.), Nebraska symposium on motivation. Lincoln: University of Nebraska Press.
Lewis, H.W. et al., 1978. Risk assessment review. Nuclear Regulatory Commission, Washington, D.C., NUREG/CR-0400.
Montgomery, H. and O. Svenson, 1976. On decision rules and information processing strategies for choices among multiattribute alternatives. Scandinavian Journal of Psychology 17, 283-291.
Nordfors, L., 1977. Asbestos - debate, alarm and action. (In Swedish.) Department of Psychology, University of Göteborg, Risk Project Report 19-77.
Östberg, G., H. Hoffstedt, G. Holm, B. Klingenstierna, B. Rydnert, V. Samsonowitz and L. Sjöberg, 1977. Inconceivable events in handling material in a heavy mechanical engineering industry. National Defence Research Institute, Stockholm, FTL A-Report A 16:71D.
Sjöberg, L., 1978. Decision making. (In Swedish.) Stockholm: Natur och Kultur.
Sjöberg, L., 1979a. Beliefs and values as attitude components. Göteborg Psychological Reports 9, no. 6.
Sjöberg, L., 1979b. Strength of belief and risk. Policy Sciences 11, 39-57.
Slovic, P. and S. Lichtenstein, 1971. Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Human Performance 6, 649-744.
Smedslund, J., 1963. The concept of correlation in adults. Scandinavian Journal of Psychology 4, 165-173.
Svenson, O., 1978. Risks of road transportation in a psychological perspective. Accident Analysis and Prevention 10, 267-280.
Tversky, A. and D. Kahneman, 1974. Judgment under uncertainty: heuristics and biases. Science 185, 1124-1131.