Global Environmental Change 17 (2007) 37–46

Assessing model-based and conflict-based uncertainty

Anthony Patt

International Institute for Applied Systems Analysis, Schlossplatz 1, A-2361 Laxenburg, Austria

Received 1 February 2006; received in revised form 11 October 2006; accepted 23 October 2006

Abstract

Assessment panels need to communicate scientific uncertainty, and often face choices about how to simplify or synthesize it. One important distinction is between uncertainty that has been modeled, and that which derives from disagreement among experts. From an economic decision-making perspective the two are in many ways logically equivalent, yet from psychological and social perspectives they are quite different. An experiment on the communication of climate change uncertainty suggests that the two framings of uncertainty differentially influence people's estimates of likelihood and their motivation to take responsive action. It is recommended that assessment panels pay close attention to the social features of uncertainty, such as conflict between experts.

© 2006 Elsevier Ltd. All rights reserved.

Keywords: Climate change; Uncertainty; Assessment; Science communication

1. Introduction

An increasingly important task of scientists working on assessment panels is to communicate the nature of uncertainty (National Research Council, 2006; Webster, 2003). The extent to which they can provide a complete description of uncertainty is constrained, however, and choices must be made about where to simplify and synthesize in the description of uncertainty. The Intergovernmental Panel on Climate Change (IPCC) has been particularly diligent in its attempts to make these choices in ways that are most useful for policy and decision-makers, and has adopted specific guidelines for uncertainty communication. One issue that has received limited attention by the IPCC and other assessment panels, however, is the distinction between uncertainty that is revealed and quantified through the process of modeling, and uncertainty that arises as a result of expert disagreement. The IPCC has taken multiple approaches to coping with this distinction, either making the distinction clear, or synthesizing information about uncertainty in a way that hides the distinction.

This paper examines this distinction—between model-based and conflict-based uncertainty—from the perspective of different models of decision-making under uncertainty. It is an important issue for two reasons. First, the ways that the IPCC currently treats the issue reveal a reliance on a technical model of decision-making, one that may be insensitive to psychological, social, and political nuances of information. Second, people may react to statements about uncertainty differently, depending on whether such statements emphasize the presence of conflicting opinions. It is important to understand the ways in which they do so, in order to design an effective communication strategy.

The paper proceeds as follows. In Section 2, I describe in more detail the difference between conflict-based and model-based uncertainty, and the extent to which the IPCC has considered the issue to date. In Section 3, I show how different models of decision-making carry different implications for how uncertainty can best be analyzed and communicated, with special attention to the distinction between model-based and conflict-based uncertainty. In Section 4, I describe an experiment designed to examine people's differential responses to the two types of uncertainty. In Section 5, I offer some policy-relevant conclusions.

2. Model-based uncertainty, conflict-based uncertainty, and the IPCC

Scientific uncertainty surrounding future projections of climate and other global environmental change derives from two basic sources. First, the climate system is complex, meaning that predictions about how it will change over time are sensitive to small differences in assumptions about initial conditions, as well as to the spatial and temporal resolution at which scientists model the system (Linstone, 1999; Waldrop, 1992). Second, there is an incomplete understanding of many important processes and feedbacks, and different assumptions about those processes can translate into very different projections about how the system will change over time (Andreae et al., 2005; Cox et al., 2000; Penner, 2004). Perhaps the poorest understanding is of how people and societies will develop in the future, and this not only plays an important role in how serious climate change will be, but may also be affected by the very process of seeking to predict it (Webster et al., 2003).

Climate experts may be able to make independent judgments about the extent of uncertainty within their own area of specialization and knowledge. Others, however, need to learn about the existence and magnitude of uncertainty by observing the statements of experts. These statements can reveal uncertainty in two very different ways. First, one can observe a single statement by an expert or group of experts about the existence and magnitude of uncertainty, what I refer to as model-based uncertainty. Including uncertainty in models of climate change is difficult and time-consuming, but progress has been made, including the use of multiple runs of the same model with a range of initial conditions, and ensemble runs using multiple models. Such statements have become more common with advances in modeling techniques (Webster, 2003). These efforts can generate projections of a range of future outcomes, sometimes with probability density functions (PDFs) describing the likelihood of the outcomes within that range, including statements about the likelihood that particular events will occur (e.g., Andronova and Schlesinger, 2001). Often model runs provide ranges of outcomes without probabilities attached to those outcomes, and these serve to quantify uncertainty by revealing the range of possible outcomes (e.g., Stainforth et al., 2005).

Second, one can observe multiple experts making statements that do not agree with each other. In this case conflict-based uncertainty arises. Experts, or groups of experts, reach different conclusions about future outcomes when they make different subjective judgments about the starting conditions and development of the system they are modeling (Morgan and Henrion, 1990). There have been efforts to resolve these differences of opinion through systematic expert elicitation, in which an interviewer asks specific questions designed to force the expert to reveal the particular subjective judgments used, and his or her
opinion about the range of uncertainty (Arnell et al., 2005; Clemen and Winkler, 1999; Risbey et al., 2000). Such elicitations have often revealed that independent judgments about the ranges of uncertainty do not even overlap, leaving no room for agreement between the different experts (Morgan and Keith, 1995). There have also been efforts to use mathematical techniques to reconcile conflicting opinion, and from the multiple pieces of information to generate a single probability estimate or PDF (Morgan and Henrion, 1990). Using Bayes' Rule, for example, it is possible to combine multiple statements that differ in their relative reliability into a single probability estimate.

The IPCC has considered where and how to simplify, synthesize, and communicate information about climate uncertainty. To inform their work in the Third Assessment Report (TAR), the IPCC requested two climate and climate policy experts, Richard Moss and Stephen Schneider, to write a guidance paper on the communication of uncertainty. That paper summarized much of the relevant literature from the field of risk communication, and suggested that the IPCC make every effort to discuss the scientific nature of different uncertainties, such as system complexity or poor data quality (Moss and Schneider, 2000). The paper did not explicitly consider the issue of model-based versus conflict-based uncertainty, however, or call for uniform treatment of this social background to uncertainty.

The IPCC's three working groups chose to incorporate the Moss and Schneider suggestions in different ways. Working Group I chose to describe the scientific sources of uncertainty where space allowed, and also relied on an uncertainty scale, defining particular words and phrases (e.g. likely, very likely) as corresponding to defined ranges of likelihoods (e.g. 67–90%, 91–99%) of a particular event occurring (Houghton et al., 2001). For example, in the Summary for Policymakers, the Working Group I authors stated: "By the second half of the 21st century, it is likely that precipitation will have increased over northern mid-to-high latitudes and Antarctica in winter" (Houghton et al., 2001, p. 13, emphasis added). Behind this statement was a careful analysis of results from five different climate models, with four different runs from the model developed at the Hadley Centre. Over eastern North America, for example, the five models showed increases in precipitation of between about 5% and 35%, when only changes in greenhouse gas concentrations were considered. Factoring in changes in sulfate aerosols, the range was between about −10% and 30%, with two of the five models showing negative values; the four runs of the Hadley Centre model indicated a range of between 15% and 30% increase (Houghton et al., 2001, p. 597, Fig. 10.5). The statement in the Summary for Policymakers effectively combined model-based and conflict-based uncertainty into a single probability statement.

Working Group II also described the sources of uncertainty where space allowed, but relied on a
two-dimensional (2D) method of expressing uncertainty (McCarthy et al., 2001). Like Working Group I, they used a scale to reveal their assessment of confidence, but Working Group II also included, occasionally, qualitative statements about the state of knowledge. For example, in the Summary for Policymakers they stated that "…impacts of climate change on agricultural production and prices are estimated to result in small percentage changes in global income (low confidence) … Most studies indicate that global mean annual temperature increases of a few °C or greater would prompt food prices to increase due to a slowing in the expansion of global food supply relative to growth in global food demand (established, but incomplete)" (McCarthy et al., 2001, p. 11, emphasis in the original). The first of these probability statements—low confidence—corresponded to a 5–33% probability range, whereas the second statement corresponded to several qualifications, such as "current empirical estimates are well founded, but the possibility of changes in governing processes over time is considerable" (McCarthy et al., 2001, p. 24). Other qualitative statements they used to describe the state of knowledge included well-established, competing explanations, and speculative. Working Group II thus preserved for the reader some of the distinction between model-based and conflict-based uncertainty. Finally, Working Group III did not adopt a consistent manner of describing uncertainty (Metz et al., 2001).

In its preparation for the Fourth Assessment Report, the IPCC went further to inform its choices on the communication of uncertainty. They organized a workshop in Maynooth, Ireland, where they invited a range of natural and social scientists with expertise on the issue of climate change. Some of these participants had been engaged in other efforts to inform scientific assessments about uncertainty communication (e.g., Janssen et al., 2004). Participants at the Maynooth workshop offered different views on the best ways of communicating uncertainty. Most participants agreed on the importance of identifying the sources of uncertainty, space allowing. Some participants suggested that the IPCC could be most helpful by generating PDFs, where possible (as in Morgan and Henrion, 1990). Others suggested that full PDFs are not necessary, but rather the probabilities of exceeding particular thresholds are more important to communicate (as in Jones, 2000). Finally, others suggested that much of the ambiguity cannot be resolved into a PDF or probability estimate, and that the best approach is to identify robust decision-making strategies given ranges of ambiguity (as in Lempert, 2002). From these different views, the organizers of the Maynooth meeting drafted a guidance paper, considered comments from the participants, and ultimately made a set of suggestions that quite closely resembled the approach taken by Working Group II in the TAR.

Neither the Moss and Schneider paper nor the participants in the Maynooth meeting specifically considered the distinction between model-based and conflict-based uncertainty, and this is not necessarily surprising. To both
natural scientists and those social scientists with expertise in quantitative policy analysis (who were well-represented in Maynooth), coping with these sources of uncertainty is primarily a technical challenge, and the more social issue of whether uncertainty arises in the form of model results or conflicting statements may go entirely unnoticed.

3. Multiple views on decision-making under uncertainty

Social and behavioral scientists have yet to agree on a single theory of human decision-making, and instead approach the subject from very different perspectives, with different models—simplifications—of the process of choice. In this section, I briefly introduce several different models, and show the role that scientific information plays within each of them. Different roles for information imply different ways in which scientific assessment can present information about uncertainty in the most useful manner.

3.1. Economic models

The basic economic model of decision-making under uncertainty is expected utility theory (von Neumann and Morgenstern, 1944). According to this model, decision-makers derive utility from different patterns of consumption, and those patterns depend on the outcomes of their choices. Uncertainty implies that each choice can lead to a range, or distribution, of future outcomes. Decision-makers decide among the choice options by selecting the option that provides the greatest expected (i.e. average) utility over its distribution of outcomes. To calculate the expected value associated with each choice option, they need to integrate the utility function across the distribution of potential outcomes, and this means that they need to know, or make assumptions about, that distribution. Expected utility theory requires that people have beliefs, either consciously or subconsciously formed, about the probabilities of different outcomes, before they can decide on one choice option or another. If they do not know the precise probabilities, then they will estimate them, using the information available and some basic mathematical tools, such as Bayes' Rule (Fishburn, 1981).

Expected utility theory is best accepted as a prescriptive model for decision-makers. If people know their own utility functions, then by following the procedures of statistical decision-analysis they can effectively compare the different choice options, and assure themselves of obtaining the highest possible reward. Expected utility theory can also be used in a descriptive sense. By observing somebody making a number of different decisions, one can infer features of that person's utility function, and then be able to predict future decisions. Similarly, if one has identified a person's utility function, and observes their choices under conditions of uncertainty, it is possible to infer their beliefs about the distribution of potential outcomes.
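
To make this calculus concrete, the sketch below shows how a decision-maker of the kind assumed by expected utility theory might first revise a probability estimate after hearing an expert statement, using Bayes' Rule, and then compare two choice options by their expected utilities. It is purely illustrative: the prior probability, the expert's assumed hit and false-alarm rates, and the utility numbers are invented for this example and are not taken from the paper or from any assessment.

```python
# Minimal, illustrative sketch of the expected-utility calculus described above.
# All numbers below are assumptions for illustration only.

def bayes_update(prior, expert_predicts_event, hit_rate, false_alarm_rate):
    """One application of Bayes' Rule: revise P(event) after one expert statement.

    hit_rate         = P(expert predicts the event | event occurs)
    false_alarm_rate = P(expert predicts the event | event does not occur)
    """
    if expert_predicts_event:
        like_event, like_no_event = hit_rate, false_alarm_rate
    else:
        like_event, like_no_event = 1 - hit_rate, 1 - false_alarm_rate
    numerator = like_event * prior
    return numerator / (numerator + like_no_event * (1 - prior))

def expected_utility(p_event, utilities):
    """E[U] = P(event) * U(event) + P(no event) * U(no event)."""
    return p_event * utilities["rise"] + (1 - p_event) * utilities["no rise"]

# Model-based framing: a model run reports P(sea level rise) = 0.2 directly.
p_model = 0.2

# Conflict-based framing: treat the same figure as a prior, then revise it after
# hearing one (assumed imperfectly reliable) expert predict the rise.
p_after_expert = bayes_update(p_model, True, hit_rate=0.8, false_alarm_rate=0.3)

# Two stylised options for a coastal city, with utilities over the two outcomes.
act_now = {"rise": -30, "no rise": -30}   # pay for protection either way
wait    = {"rise": -100, "no rise": 0}    # cheap, unless the rise occurs

for label, p in [("model-based", p_model), ("after expert statement", p_after_expert)]:
    eu_act, eu_wait = expected_utility(p, act_now), expected_utility(p, wait)
    choice = "act now" if eu_act > eu_wait else "wait"
    print(f"{label}: P(rise)={p:.2f}  EU(act)={eu_act:.1f}  EU(wait)={eu_wait:.1f}  -> {choice}")
```

Once the expert statement has been reduced to a single probability, this model treats it exactly as it would a model-generated probability, which is why the two framings are logically equivalent from this perspective.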

For many types of markets to work efficiently, people need to be careful with their math. The betting market is a good example. Some of the people participating in this market believe that a particular event (e.g. a certain horse winning a race) will occur. The payoff received for winning a bet decreases proportionately with the number of people who make that same bet. Longshots are the events that very few people bet on, and have a high payoff. Under the hypothesis that markets such as these are efficient processors of information, the proportion of bets made on any given event is equal to the probability of the event occurring (Arrow, 1964). However, several studies have shown that participants are biased in their behavior, such that they place far too many bets on the longshots; longshots go on to win less often than the proportion of bets placed on them would predict, and the people who win those bets receive payoffs that are too low to justify the low probability of winning (Camerer, 2001). The main driver of longshot bias appears to be many people's reluctance to equate minority opinion—revealed in the odds listed for the different possible bets—with a slim chance that opinion is correct (Cain et al., 2003; Camerer, 2001). Longshot bias has implications that go beyond payoffs at horse races: the efficiency of markets depends on analysts' being able to transform a great number of different opinions of the future performance of the market into a probabilistic forecast of that market (Arrow, 1964).

Since the early 1970s, a growing number of economists have paid attention to empirical findings that apparently violate axioms of rationality, such as longshot bias, and developed an alternative set of descriptive models. These so-called behavioral economists propose that the utility people anticipate receiving from the outcomes of choices depends on context-specific issues of perception: the perceived departure of outcomes from the status quo (Kahneman and Tversky, 1979; Munroe and Sugden, 2003; Patt and Zeckhauser, 2000; Samuelson and Zeckhauser, 1988); the agents perceived to be causing those changes (Ritov and Baron, 1992); the perceived fairness of the outcomes (Fehr and Schmidt, 1999; Kahneman et al., 1986; Knetch, 1997); and a long list of other factors. Often, there are consistent and predictable differences between the utility that people anticipate receiving from particular outcomes before they make a decision, and that which they do in fact experience once those outcomes actually occur. Likewise, behavioral economists propose that context-specific perceptions and mental shortcuts influence people's beliefs about probabilities in ways that cause those beliefs to be systematically biased from their true values (Kahneman and Tversky, 1979; Tversky and Kahneman, 1974).

Behavioral economists straddle the line between economics and psychology. Many of the patterns of perception and belief formation they identify are grounded in psychological theory, but behavioral economists hold on to utility maximization as the correct prescriptive model of choice (Zeckhauser and Viscusi, 1996). What they can offer, in contrast to their neo-classical colleagues, is
guidance concerning where many people consistently fail to maximize utility, and thus may need help from experts. The field of risk communication developed out of these efforts, and is based on the idea that the best way to assist decision-makers coping with risk and uncertainty is to give them information in such a way as to correct their mistaken beliefs (Leiss, 1996). In order to do so, the communicator needs to understand how the decision-maker is using information to form beliefs, and become a partner with the decision-maker in working with the new information to arrive at actual decisions (Fischhoff, 1995).

Several of the findings within the field of risk communication are relevant to the processing of different statements of uncertainty into a single likelihood estimate, a necessary prerequisite to making an optimal choice. Where people are given precise probabilities of relatively abstract events occurring (e.g. the probabilities of winning different sums of money in a lottery), most people appear to over-react to especially small probabilities (close to 0), and under-react to especially large ones (close to 1) (Allais and Hagen, 1979; Kahneman and Tversky, 1979). Similarly, where there are two potential outcomes, the probability of each occurring is adjusted within most people's minds towards 0.5 (Bruine de Bruin et al., 2000). In less abstract settings, most people's estimates are heavily influenced by a set of factors closely associated with the emotional impact of the event itself. Events that are more easily remembered are viewed as more likely than those that are not (Tversky and Kahneman, 1973), and events that generate strong emotional reactions of dread or a loss of control (e.g. a shark attack, a plane crash), not coincidentally because they are then more easily remembered, are also seen as more likely (Covello, 1990). Indeed, even when people are told the probabilities of different events occurring, most of them remember those probabilities differently depending on their emotional reaction to the events and how plausible those probabilities seem (Windschitl and Weber, 1999).

Next, decision-makers are sensitive to how decisions are "framed". Framing is the inevitable act of describing a decision and the relevant background information to make it understandable and interesting to decision-makers (Kühberger, 1998). There are often many frames that are logically equivalent, but to which people react very differently. For example, most people show different preferences for risk when decisions are framed as affecting either their gains relative to the status quo, or their losses (Kahneman and Tversky, 1979).

3.2. Psychological models

Many prominent behavioral economists had their training in psychology, and there is substantial overlap between the psychological literature and the behavioral economics literature in the area of developing confidence judgments and estimating likelihoods. There is less agreement about what motivates people while making decisions, and hence how their beliefs actually influence
their choices. The psychological models of people's motivations are too numerous to discuss in detail here, but they share a common feature in that, unlike economic models, they do not assume that individuals make decisions in order to maximize the utility derived from consumption.

Bounded rationality, for example, suggests that people engage in a mental search of available options, and choose the first one that is satisfactory (Simon, 1956). This so-called satisficing is different from optimizing in that it involves comparing not the outcomes of the different choice options with each other, but each choice option against a set of minimum criteria. Closely linked to bounded rationality is the concept of adaptive heuristics: people develop and use mental shortcuts to identify acceptable options quickly, with a minimal amount of necessary information (Payne et al., 1993). One of the clearest examples is of a person trying to catch a ball hit into the sky, such as in a baseball game. A model based on optimization would have the person calculate where the ball will land, based on an estimation of the speed and direction at which the ball was hit, factoring in the effects of gravity and air resistance. To optimize the chances of catching the ball, the person will run to that place as quickly as possible. Actual ball players, however, apparently do not have time for such calculations, and instead rely on the "fast and frugal" gaze heuristic: they keep their eye on the ball and observe the angle at which it appears above the horizon. When that angle appears to be decreasing, they accelerate towards the ball; when the angle is increasing, they accelerate away from the ball. If they can accelerate quickly enough, their path will always intercept that of the ball before it hits the ground, without their ever knowing where that point of interception will be (which is why they sometimes crash into walls while running) (Gigerenzer and Selten, 2001). People continually develop and improve upon such heuristics as they gain familiarity with a decision-domain; they use and refine the techniques that work. An important (and somewhat controversial) claim of some psychologists is that heuristics often outperform attempts at optimization, given actual information and cognitive constraints (Gigerenzer, 2000). This implies that expected utility theory might not even be correct as a prescriptive model.

Information can change people's beliefs and judgments of confidence, but as a result of not only the content of the information, but also its source (Weber et al., 2000). Most people weight information gained from personal experience quite differently than they do information gained from third parties, and the form of the personal experience can also make a difference (Edgell et al., 2004; Griffin and Tversky, 1992). People are more likely to trust expert opinion when they fully understand it, and when they perceive it coming from a source with an obligation to be honest, such as arising out of a previous social relationship (Birnbaum and Mellers, 1983; Birnbaum and Stegner, 1979; Birnbaum et al., 1976; Darr and Kurtzberg, 2000; Patt et al., 2006; Sniezek et al., 2004). Indeed, many people
modify their choices in response to new information not necessarily because they believe the information itself to be true, but rather in order to signal that they have accepted the help that was offered by the information provider (Harvey and Fischer, 1997). Perhaps most importantly, information can affect not only beliefs, but also the motivation to act on the basis of those beliefs. For example, information that ought to be most valuable from the perspective of belief updating—that which is quite different from their prior beliefs—often has little effect on people's actions, either because they reject it out of hand in order to preserve their own self-confidence (Petty and Cacioppo, 1986), or because accepting it reduces their self-confidence and motivation to take any action at all (Prentice-Dunn and Rogers, 1986). By contrast, offering people information that confirms their prior beliefs can provide additional motivation to act.

3.3. Social models

While both economic and psychological theories of decision-making explain people's actions in social settings, their focus is still on the individual. A separate set of models, which one can loosely label social (although they are rooted in a number of disciplines, including sociology, anthropology, and geography), centers on the social context for decision-making and action. Historically, scholars in these fields have reached very different conclusions from economists and risk communicators about the role of scientific information in decision-making processes.

The role that scientific information plays in decision-making depends critically on the social processes through which that information is transmitted and processed (Jasanoff et al., 2002), and Kasperson and Kasperson (1996) show how particular social institutions can amplify or attenuate the perception of risk. Proponents of the cultural theory of risk, for example, suggest that there are several distinct worldviews, or discourses, and that people interpret information in ways that are consistent with their own view (Douglas and Wildavsky, 1982; Thompson et al., 1990). The same piece of information about a particular risk may suggest to a hierarchist a great need for control, to an egalitarian a greater need for caution, to an individualist a greater need for individual autonomy, and to a fatalist greater cause for resignation. People can continue to believe that they are ignorant of a particular subject, even after having received a great deal of information about it, in order to maintain their social identity (Michael, 1996). People see their own type of knowledge as tied to their social identity, and often cannot communicate effectively with scientists, whose social identity is quite different (Wynne, 1996). The fault need not lie with the lay decision-makers, but with the scientists who assume that their own interpretation of evidence is more reliable.

Scientific uncertainty influences decision-making by altering political discourse. Policy-makers rely on scientific
evidence to add legitimacy to their actions (Ezrahi, 1990). When the scientific community admits that it does not know the answer to policy relevant questions, it undercuts the legitimizing function that science can provide, and becomes a publicly accepted justification for postponing action (Funtowicz and Ravetz, 1990, 1993). It is not surprising, then, that groups interested in maintaining the status quo in the climate change policy arena do not simply deny the problem exists, but rather claim that the science is too uncertain to base any actions upon (Gelbspan, 1997). Conflict about issues of science does not necessarily have the same sedative effect (Dryzek, 1997; Lee, 1993). In highly contested issue areas, experts commonly line up on both sides of the political fence, each group playing a legitimizing role, with the media then highlighting these differences of scientific opinion (Boykoff and Boykoff, 2004). Conflict-based uncertainty may be a signal to the public that a particular issue is important and politically contested, and hence that policy actions may be necessary.

3.4. Different models and climate change uncertainty

What, then, are the implications of these different models of decision-making for the choice of whether to maintain the distinction between model-based and conflict-based uncertainty? The economic models do not offer clear guidance. In the expected utility theory model, more information about uncertainty is always better, since it allows people to improve their estimates of the future. However, under the strictest of neo-classical assumptions, it does not matter how that information is provided, as long as it is accurate. Decision-makers will process that information in ways similar to the professional analyst, arriving at a single PDF or probability estimate, from which it is possible to maximize expected utility. Behavioral economics suggests that more information is better as long as it does not trigger particular patterns of thought that will lead to errors of judgment, and that decision-makers make consistent errors when they try to process information about uncertainty. Synthesizing information to arrive at single PDFs or probability estimates—as for example the IPCC Working Group I has done—may improve judgment, since it will take many of the calculations out of the hands of decision-makers. At the same time, the risk communication literature is clear that decision-makers need to be involved in the processing of the information if they are to trust it. Synthesis without stakeholder involvement—which is often the reality of budget-constrained assessments—may undermine the credibility or legitimacy of that information.

Other social scientists reach clearer conclusions. People's responses to information are closely tied to their sense of individual and social identity, and the source of information plays an important role in their implicit decision to trust and use it. The social models suggest that conflict-based uncertainty and model-based uncertainty play very different roles in political discourse. It is reasonable, then,
to believe that people have very different reactions to information about uncertainty, depending on whether it is generated by a computer model, or by argument. Some scholars thus suggest that assessment panels pay close attention to both quantitative and qualitative features of uncertainty (van der Sluijs et al., 2005). Moreover, psychological models suggest that these reactions can include a change in beliefs, a change in motivation, or both. Combined, these models suggest that the choice of whether to communicate uncertainty as model-based or conflict-based could have important effects on how people respond to the information. There have been psychological studies examining how people develop their beliefs and attitudes about climate change (e.g., Krosnick et al., 2006; Weber, 1997, 2006). However, there have not been any studies specifically examining the issue of model-based and conflict-based uncertainty, either within the context of climate change or more generally. It is an area open for empirical research, and the lessons could be of use to climate change assessment.

4. Experiment comparing model-based and conflict-based uncertainty

I conducted an experiment with undergraduate science students at Boston University, to investigate how the difference in uncertainty framing influences probability estimates and motivation for action. I had several hypotheses that were consistent with the psychological and social models I have described. First, I expected to see an overall bias in probability judgments towards 50/50. Second, for reasons consistent with longshot bias, I expected this to be especially so in the case of conflict-based uncertainty. Third, I expected to see an inconsistent relationship between changes in probability judgments and the desire to take policy actions.

4.1. Experimental design

The basic structure of the experiment was to ask students to answer two questions about climate change, given hypothetical information. The hypothetical information concerned the likelihood of a certain degree of sea level rise occurring in the Boston area within the next 100 years. The first question asked students to indicate their subjective probability assessment that the sea level rise would occur. They indicated this probability by marking a point on a horizontal line representing a scale from 0% to 100% likelihood. The second question asked students to indicate whether they thought the City of Boston should (a) begin to take action now to guard against the sea level rise, or (b) wait until more information is known. After they had completed the questionnaire, I informed students about the purpose of the experiment, and that the hypothetical information did not necessarily represent the current scientific state of knowledge.

There were eight versions of the questionnaire, in which the hypothetical information differed in three ways, allowing for between-subject comparisons. The first difference concerned the magnitude of sea-level rise: half of the questionnaires described the possibility of sea level rise of 50 cm, the other half described sea level rise of 1 m, and the first question then asked students to indicate the likelihood of that same amount of sea level rise (either 50 cm or 1 m). The purpose of this variance in the information was to see whether students' answers were based primarily on their prior knowledge, or on the hypothetical information in the survey itself. If students based their probability assessments on their prior knowledge (which included, for these science students, the knowledge that 1 m is twice as much as 50 cm), then they should have indicated the likelihood of 50 cm sea level rise being higher than that of 1 m. In fact, there were no significant differences between the two groups of answers (P>.10), and I was able to combine the results. This left two differences in the hypothetical information, and hence four treatment groups for analysis. The differences were in whether the uncertainty was model-based or conflict-based, and whether the event might generally be described as unlikely or likely. Table 1 presents the language used to describe uncertainty in the four versions.

I conducted the experiment on three occasions, with different students in different introductory science classes. The first round was with 220 students in April 2005. Because of a surprising result, which I describe below, I repeated the experiment with an additional 127 students in September 2005. This round generated results that were qualitatively different from those in April. One potential cause of this difference was the occurrence, immediately prior to this second round, of Hurricanes Katrina and Rita, which may have caused students to view the issue of sea level rise somewhat differently. I waited for the media attention to the hurricanes to die down, and conducted the experiment with a third group of 110 students in December 2005.
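
Before turning to the results, the sketch below illustrates the kind of between-group comparisons reported in Section 4.2. The paper does not state which statistical procedures were used; the two-sample t-test for mean probability estimates, the two-proportion z-test for the action question, and all of the data values are assumptions made for illustration only.

```python
# Illustrative only: invented data, and assumed (not reported) test choices.

import math
from scipy import stats  # used only for the t-test

# Hypothetical probability estimates (0-1) from two of the treatment groups.
model_based_unlikely    = [0.25, 0.30, 0.20, 0.35, 0.40, 0.30]
conflict_based_unlikely = [0.45, 0.50, 0.40, 0.35, 0.55, 0.50]

t_stat, p_value = stats.ttest_ind(conflict_based_unlikely, model_based_unlikely)
print(f"difference in mean probability estimates: t = {t_stat:.2f}, p = {p_value:.3f}")

def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
    """z-test for the difference between two independent proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal distribution
    p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_two_sided

# Hypothetical counts of students favoring immediate action in two treatments.
z_stat, p_value = two_proportion_ztest(hits_a=30, n_a=55, hits_b=18, n_b=55)
print(f"difference in proportions favoring action: z = {z_stat:.2f}, p = {p_value:.3f}")
```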

Table 1
Four experimental treatment groups

                 Unlikely                                    Likely
Model-based      Models indicate that there is a 20%         Models indicate that there is an 80%
                 chance it will occur                        chance it will occur
Conflict-based   Two of ten experts say it will occur        Eight of ten experts say it will occur

4.2. Experimental results

Fig. 1 presents results from the first question. The vertical axis represents the average probability indicated for the described amount of sea level rise occurring within the next 100 years, within each treatment group. Fig. 1A shows results from the April round. For those students in the two unlikely treatments, mean probability estimates were significantly higher (P<.05) in the conflict-based treatment than in the model-based treatment. Among those in the two likely treatments, there was no significant difference (P>.10) between the two groups. Finally, the difference in probability estimates between the unlikely and the likely treatments was significantly greater (P<.05) within the model-based treatments (difference in means = .16) than within the conflict-based treatments (difference in means = .08). Fig. 1B presents results from the September round, which were qualitatively different from those in the April round. There were no significant differences (P>.10) between model-based and conflict-based treatments. Fig. 1C presents results from the December round, and in most ways they were qualitatively the same as the April results: a significant difference between the model-based and conflict-based unlikely treatments (P<.05), no significant difference between the two likely treatments, and a significant difference-in-differences estimator (P<.05).

Fig. 1. Mean probability estimates by experimental treatment.

Fig. 2. Proportion favoring immediate action by experimental treatment.

Fig. 2 shows results from the second question. The vertical axis indicates the proportion of students within each treatment group that favored taking immediate action to guard against sea level rise, rather than waiting for greater certainty or consensus. Fig. 2A shows the results of the April round. Among the unlikely treatments, a significantly greater fraction favored immediate action among those who were in the conflict-based treatment than in the model-based treatment (P<.05). There was no significant difference (P>.10) between the two likely treatments. There was also no significant difference between the unlikely and likely treatments within the model-based treatments (P>.10). Surprisingly, within the two conflict-based groups, students in the unlikely treatment were marginally significantly (P<.10) more
likely to favor immediate action than those in the likely treatment. This was the surprise that led me to repeat the experiment in September. Fig. 2B shows the results of the September round. In this case, there were no significant differences (P>.10) between any of the groups. As with the results of the first question, this was qualitatively different from the April round. Fig. 2C shows the results from the December round, and in some respects they agreed qualitatively with the April round. Within the unlikely treatments, the students in the conflict-based treatment were significantly more likely to favor immediate action than those in the model-based treatment (P<.05). Within the likely treatments, the difference between the model-based and conflict-based treatments was not quite significantly different (P = .11).

4.3. Discussion

If climate information is like a betting market, an analyst learning that two out of ten experts predicted sea level rise, but knowing nothing about the relative reliability of the experts, would be tempted to say that the event is 20% likely to occur. But this is not how the students reacted to the information, at least during the April and December rounds of the experiment. Students' probability estimates were on average closer to 50/50 than the information they were given would have called for, consistent with Bruine de Bruin et al. (2000). This effect was magnified in the case of conflict-based uncertainty, consistent with longshot bias (Camerer, 2001). The change from two of ten to eight of ten hypothetical experts predicting the event to occur had very little impact on students' probability estimates. If indeed the proportion of scientists predicting an event to occur is somewhat indicative of the likelihood of that event, then analysts may improve many people's probability estimates by portraying it as a likelihood or probability.

The results from the second question, however, suggest that doing so will not correspondingly change their motivation to act. In the April round, something about the conflict-based unlikely treatment made students
especially eager to take action, as if they trusted the two experts who predicted sea level rise more than the eight experts who did not. I have no good explanation for this result. In the September round there were no significant effects of the different sets of information on students' answers to question two: all four treatment groups responded qualitatively the same. In the December round, the students reading about conflict-based uncertainty were more likely to favor action, compared to those reading a model-based probability estimate.

I also cannot provide a good explanation for the differences observed in the September round, compared to April and December. It is plausible that the occurrence of the two devastating hurricanes, just before the September experiment, had an effect on students' reactions to the questions or to the hypothetical information. That the December results resembled those from April supports this speculation. Alternatively, it could simply be that the pool of students in September was somehow different, perhaps because they were filling in the questionnaire at the beginning of the semester, rather than at the end. While the differences between rounds make the results inconclusive with respect to general effects from the model-based versus conflict-based distinction, they do indicate that people's responses to descriptions of probability are sensitive to factors that go beyond the mere description of the event's likelihood. This finding is consistent with the psychological and social models of decision-making.

5. Conclusion: framing uncertainty

I have explored a distinction between different types of uncertainty—model-based and conflict-based—that the IPCC and other assessment panels have largely overlooked. I have shown that the very act of overlooking it is not unexpected if one adopts an economic model of decision-making under uncertainty, in which decision-makers translate all information into probability estimates, and use these estimates to better maximize utility. Under such a model, the two ways of describing uncertainty are logically equivalent, and the goal for communicators is to improve the accuracy of people's probability estimates. Other models of decision-making suggest different roles for information about uncertainty, and that the distinction could matter. The experiment explored exactly how the difference could matter in the context of a description of climate change impacts. While the results were inconclusive in terms of indicating precise effects of the distinction, they were consistent with the psychological and social models of decision-making in which framing matters, and in which people are not assumed to optimize on the basis of probability estimates.

Assessment panels need to make information more accessible and useful to decision-makers, and for this they need to engage in simplification and synthesis. By transforming uncertainty from conflict-based to model-based, assessment panels may be helping decision-makers
better understand and make further calculations based on the likelihood of an event. At the same time, however, they may be making decision-makers less motivated to take action. Deciding how to represent model-based and conflict-based uncertainty is essentially a matter of framing, since the two types of uncertainty are, at least from an economic perspective, logically equivalent. But framing is increasingly seen as value-laden and political (Lakoff, 2004), and this leaves assessment panels open to the criticism of making politically-motivated communication choices.

How can the IPCC and other assessment panels best make information about uncertainty understandable and useful, while avoiding making overt political judgments? I offer two recommendations. First, assessment panels need as much as possible to engage in the process of telling complete stories about uncertainty. This is the conclusion that the IPCC has come to in their recent guidance paper, but they envision primarily the scientific and technical side of the uncertainty story. They also need to include the social history of uncertainty, echoing the call of van der Sluijs et al. (2005) to describe both the quantitative and qualitative aspects of uncertainty. The fact that conflict has arisen about particular estimates of the future may signal features not only of the science, but also of the politics of that science, that are relevant for decision-makers to learn. By paying attention to the social side of uncertainty, assessment panels can better avoid misrepresenting it, and misrepresentation is what can draw criticism of political motivation. Second, assessment panels need to continue to incorporate the guidance of a broad range of social scientists to understand the psychological, social, and political nuances of scientific communication. Inevitably, assessment panels will face space limitations making it impossible to tell every story about every aspect of uncertainty, and they will need to simplify and synthesize information. Experts who are experienced in scientific communication, in issue areas that include but go beyond climate change, can help to identify where such simplifications may be most useful and least politically sensitive.

References

Allais, M., Hagen, O. (Eds.), 1979. Expected utility hypotheses and the Allais Paradox. Theory and Decision Library. Kluwer Academic Publishers, Dordrecht.
Andreae, M., Jones, C., Cox, P., 2005. Strong present-day aerosol cooling implies a hot future. Nature 435, 1187–1190.
Andronova, N.G., Schlesinger, M.E., 2001. Objective estimation of the probability density function for climate sensitivity. Journal of Geophysical Research-Atmospheres 106 (D19), 22605–22611.
Arnell, N.W., Tompkins, E., Adger, W.N., 2005. Eliciting information from experts on the likelihood of rapid climate change. Risk Analysis 25 (6), 1419–1431.
Arrow, K., 1964. The role of securities in the optimal allocation of risk-bearing. Review of Economic Studies 31, 91–96.
Birnbaum, M.H., Mellers, B.A., 1983. Bayesian inference: combining base rates with opinions of sources who vary in credibility. Journal of Personality and Social Psychology 45 (4), 792–804.
Birnbaum, M.H., Stegner, S., 1979. Source credibility in social judgment: bias, expertise, and the judge's point of view. Journal of Personality and Social Psychology 37, 48–74.
Birnbaum, M.H., Wong, R., Wong, L.K., 1976. Combining information from sources that vary in credibility. Memory and Cognition 4 (3), 330–336.
Boykoff, M., Boykoff, J., 2004. Balance as bias: global warming and the US prestige press. Global Environmental Change 14, 125–136.
Bruine de Bruin, W., Fischhoff, B., Millstein, S., Halpern-Felsher, B., 2000. Verbal and numerical expressions of probability: "It's a Fifty-Fifty Chance". Organizational Behavior and Human Decision Processes 81 (1), 115–131.
Cain, M., Law, D., Peel, D., 2003. The favorite longshot bias, bookmaker margins, and insider trading in a variety of betting markets. Bulletin of Economic Research 55 (3), 263–273.
Camerer, C., 2001. Prospect theory in the wild: evidence from the field. In: Kahneman, D., Tversky, A. (Eds.), Choices, Values, and Frames. Cambridge University Press, Cambridge, pp. 288–300.
Clemen, R.T., Winkler, R.L., 1999. Combining probability distributions from experts in risk analysis. Risk Analysis 19 (2), 187–203.
Covello, V., 1990. Risk comparisons in risk communication: issues and problems in comparing health and environmental risks. In: Kasperson, R., Stallen, D. (Eds.), Communicating Risks to the Public: International Perspectives. Kluwer Academic Publishers, Dordrecht, pp. 79–124.
Cox, P., Betts, R., Jones, C., Spall, S., Totterdell, I., 2000. Acceleration of global warming due to carbon-cycle feedbacks in a coupled climate model. Nature 408, 184–187.
Darr, E., Kurtzberg, T., 2000. An investigation of partner similarity dimensions on knowledge transfer. Organizational Behavior and Human Decision Processes 82 (1), 28–44.
Douglas, M., Wildavsky, A.B., 1982. Risk and Culture. University of California Press, Berkeley, CA.
Dryzek, J., 1997. The Politics of the Earth: Environmental Discourses. Oxford University Press, Oxford.
Edgell, S.E., Harbison, J.I., Neace, W.P., Nahinsky, I.D., Lajoie, A.S., 2004. What is learned from experience in a probabilistic environment? Journal of Behavioral Decision Making 17, 213–229.
Ezrahi, Y., 1990. The Descent of Icarus: Science and the Transformation of Contemporary Democracy. Harvard University Press, Cambridge, MA.
Fehr, E., Schmidt, K., 1999. A theory of fairness, competition, and cooperation. Quarterly Journal of Economics 114, 817–868.
Fischhoff, B., 1995. Risk communication and perception unplugged: twenty years of process. Risk Analysis 15, 137–145.
Fishburn, P., 1981. Subjective expected utility: a review of normative theories. Theory and Decision 13 (2), 139–199.
Funtowicz, S.O., Ravetz, J.R., 1990. Uncertainty and Quality in Science for Policy. Kluwer Academic Publishers, Dordrecht, the Netherlands.
Funtowicz, S.O., Ravetz, J.R., 1993. Science for the post-normal age. Futures 25 (7), 739–755.
Gelbspan, R., 1997. The Heat is On: The High Stakes Battle over Earth's Threatened Climate. Perseus, Cambridge.
Gigerenzer, G., 2000. Adaptive Thinking: Rationality in the Real World. Oxford University Press, Oxford, UK.
Gigerenzer, G., Selten, R. (Eds.), 2001. Bounded Rationality: The Adaptive Toolbox. MIT Press, Cambridge, MA.
Griffin, D., Tversky, A., 1992. The weighting of evidence and the determinants of confidence. Cognitive Psychology 24, 411–435.
Harvey, N., Fischer, I., 1997. Taking advice: accepting help, improving judgment, and sharing responsibility. Organizational Behavior and Human Decision Processes 70 (2), 117–133.
Houghton, J.T., et al. (Eds.), 2001. Climate Change 2001: The Scientific Basis. Cambridge University Press, Cambridge.
Janssen, P.H.M., Petersen, A.C., van der Sluijs, J.P., Risbey, J.S., Ravetz, J.R., 2004. Towards guidance in assessing and communicating uncertainties. Fourth International Conference on Sensitivity Analysis of Model Output, Santa Fe.
Jasanoff, S., Markle, G.E., Petersen, J.C., Pinch, T. (Eds.), 2002. Handbook of Science and Technology Studies. Sage Publications, Thousand Oaks, CA.
Jones, R.N., 2000. Managing uncertainty in climate change projections: issues for impact analysis. Climatic Change 45 (3–4), 403–419.
Kahneman, D., Tversky, A., 1979. Prospect theory: an analysis of decision under risk. Econometrica 47, 263–291.
Kahneman, D., Knetch, J., Thaler, R., 1986. Fairness and the assumptions of economics. Journal of Business 59, s285–s300.
Kasperson, R.E., Kasperson, J.X., 1996. The social amplification and attenuation of risk. Annals of the American Academy of Political and Social Science 545, 95–105.
Knetch, J., 1997. Reference states, fairness, and the choice of measure to value environmental changes. In: Bazerman, M., Messick, D., Tenbrunsel, A., Wade-Benzoni, K. (Eds.), Environment, Ethics, and Behavior. New Lexington Press, San Francisco, pp. 13–32.
Krosnick, J.A., Holbrook, A.L., Lowe, L., Visser, P.S., 2006. The origins and consequences of democratic citizens' policy agendas: a study of popular concern about global warming. Climatic Change 77, 7–43.
Kühberger, A., 1998. The influence of framing on risky decisions: a meta-analysis. Organizational Behavior and Human Decision Processes 75 (1), 23–55.
Lakoff, G., 2004. Don't Think of an Elephant: Know Your Values and Frame the Debate. Chelsea Green, New York.
Lee, K., 1993. Compass and Gyroscope: Integrating Science and Politics for the Environment. Island Press, Washington, DC.
Leiss, W., 1996. Three phases in the evolution of risk communication practice. Annals of the American Academy of Political and Social Science 545, 85–94.
Lempert, R., 2002. A new decision sciences for complex systems. Proceedings of the National Academy of Sciences of the United States of America 99, 7309–7313.
Linstone, H.A., 1999. Complexity science: implications for forecasting. Technological Forecasting and Social Change 62, 79–90.
McCarthy, J.J., Canziani, O.F., Leary, N.A., Dokken, D.J., White, K.S. (Eds.), 2001. Climate Change 2001: Impacts, Adaptation, and Vulnerability. Published for the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, 1032pp.
Metz, B., Davidson, O., Swart, R., Pan, J. (Eds.), 2001. Climate Change 2001: Mitigation. Cambridge University Press, Cambridge.
Michael, M., 1996. Ignoring science: discourses of ignorance in the public understanding of science. In: Irwin, A., Wynne, B. (Eds.), Misunderstanding Science? The Public Reconstruction of Science and Technology. Cambridge University Press, Cambridge, UK, pp. 107–125.
Morgan, M.G., Henrion, M., 1990. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge University Press, Cambridge.
Morgan, M.G., Keith, D.W., 1995. Subjective judgments by climate experts. Environmental Science and Technology 29, 468A.
Moss, R., Schneider, S., 2000. Uncertainties in the IPCC TAR: recommendations to lead authors for more consistent assessment and reporting. In: Pachauri, R., Taniguchi, T., Tanaka, K. (Eds.), IPCC Supporting Material, Guidance Papers on the Cross-Cutting Issues of the Third Assessment Report of the IPCC. Cambridge University Press, Cambridge, pp. 33–51.
Munroe, A., Sugden, R., 2003. On the theory of reference-dependent preferences. Journal of Economic Behavior and Organization 50, 407–428.
National Research Council, 2006. Completing the Forecast: Characterizing and Communicating Uncertainty for Better Decisions Using Weather and Climate Forecasts. The National Academies Press, Washington.
Patt, A.G., Zeckhauser, R., 2000. Action bias and environmental decisions. Journal of Risk and Uncertainty 21 (1), 45–72.
Patt, A.G., Bowles, H.R., Cash, D., 2006. Mechanisms for enhancing the credibility of an advisor: prepayment and aligned incentives. Journal of Behavioral Decision Making 19 (4), 347–359.
Payne, J.W., Bettman, J.R., Johnson, E.J., 1993. The Adaptive Decision Maker. Cambridge University Press, Cambridge.
Penner, J., 2004. The cloud conundrum. Nature 432, 962–963.
Petty, R.E., Cacioppo, J.T., 1986. The elaboration likelihood model of persuasion. In: Berkowitz, L. (Ed.), Advances in Experimental Social Psychology. Academic Press, New York, pp. 123–205.
Prentice-Dunn, S., Rogers, R.W., 1986. Protection motivation theory and preventative health: beyond the health belief model. Health Education Research 1, 153–161.
Risbey, J.S., Kandlikar, M., Karoly, D.J., 2000. A protocol to articulate and quantify uncertainties in climate change detection and attribution. Climate Research 16 (1), 61–78.
Ritov, I., Baron, J., 1992. Status quo and omission biases. Journal of Risk and Uncertainty 5, 49–61.
Samuelson, W., Zeckhauser, R., 1988. Status quo bias in decision making. Journal of Risk and Uncertainty 1, 7–59.
Simon, H., 1956. Rational choice and the structure of the environment. Psychological Review 63, 129–138.
Sniezek, J., Schrah, G.E., Dalal, R., 2004. Improving judgment with prepaid expert advice. Journal of Behavioral Decision Making 17, 173–190.
Stainforth, D.A., et al., 2005. Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature 433, 403–406.
Thompson, M., Ellis, R., Wildavsky, A.B., 1990. Cultural Theory. Westview Press, New York.
Tversky, A., Kahneman, D., 1973. Availability: a heuristic for judging frequency and probability. Cognitive Psychology 5, 207–232.
Tversky, A., Kahneman, D., 1974. Judgment under uncertainty: heuristics and biases. Science 211, 1124–1131.
van der Sluijs, J.P., et al., 2005. Combining quantitative and qualitative measures of uncertainty in model based environmental assessment: the NUSAP system. Risk Analysis 25 (2), 481–492.
von Neumann, J., Morgenstern, O., 1944. Theory of Games and Economic Behavior. Princeton University Press, Princeton.
Waldrop, M.M., 1992. Complexity: The Emerging Science at the Edge of Order and Chaos. Simon & Schuster, New York.
Weber, E., 1997. Perception and expectation of climate change: precondition for economic and technological adaptation. In: Bazerman, M., Messick, D., Tenbrunsel, A., Wade-Benzoni, K. (Eds.), Environment, Ethics, and Behavior: The Psychology of Environmental Valuation and Degradation. New Lexington Press, San Francisco, pp. 314–341.
Weber, E., 2006. Experience-based and description-based perceptions of long-term risk: why global warming does not scare us (yet). Climatic Change 77, 103–120.
Weber, E., Böckenholt, U., Hilton, D., Wallace, B., 2000. Confidence judgments as expressions of experienced decision conflict. Risk Decision and Policy 5, 69–100.
Webster, M., 2003. Communicating climate change uncertainty to policymakers and the public. Climatic Change 61, 1–8.
Webster, M., et al., 2003. Uncertainty analysis of climate change and policy response. Climatic Change 61, 295–320.
Windschitl, P.D., Weber, E., 1999. The interpretation of "likely" depends on the context, but "70%" is 70%—right? The influence of associative processes on perceived certainty. Journal of Experimental Psychology: Learning, Memory, and Cognition 25 (6), 1514–1533.
Wynne, B., 1996. Misunderstood misunderstandings: social identities and the public uptake of science. In: Irwin, A., Wynne, B. (Eds.), Misunderstanding Science? The Public Reconstruction of Science and Technology. Cambridge University Press, Cambridge, UK, pp. 19–46.
Zeckhauser, R., Viscusi, W.K., 1996. The risk management dilemma. Annals of the American Academy of Political and Social Science 545, 144–155.