Environmental Cognition, Perception, and Attitudes
Baruch Fischhoff, Carnegie Mellon University, Pittsburgh, PA, USA
© 2015 Elsevier Ltd. All rights reserved. This article is a revision of the previous edition article by B. Fischhoff, volume 7, pp. 4596–4602, © 2001, Elsevier Ltd.
Abstract

Our decisions shape the environment, affecting what is preserved, destroyed, restored, and ignored. The environment shapes our decisions, affecting our health, economics, and well-being. These diverse decisions are often complex and uncertain. As a result, there is no unifying theory of environmental decisions. Rather, the social and behavioral sciences offer a repertoire of methods, results, and theories that, together, can describe environmental decisions with the requisite richness. They draw on two distinct research traditions: holistic approaches, seeking organizing patterns in those decisions and the people who make them, and reductionist approaches, seeking to assemble pictures by building on existing results. As a result, environmental decisions offer a venue for applying, testing, and extending the social and behavioral sciences, as well as integrating their work with that in the natural sciences.
Individuals’ primary interaction with the natural environment is through their senses. They form feelings and impressions that both shape and are shaped by their basic cognitive structures and values. At deeper levels of processing, they form more explicit beliefs and attitudes, including theories and ideologies (Atran, 1990; Kahneman, 2011). These cognitions, perceptions, and attitudes affect both their own well-being and that of their environment. People feel better in a healthy environment and feel better knowing that their environment is healthy (De Young and Princen, 2012). Conversely, the health of that environment depends on choices that people make, as consumers, workers, citizens, officials, tourists, or industrialists (Gardner and Stern, 1996; Kates et al., 2012). Indeed, even natural scientists studying the environment are increasingly realizing that understanding human behavior is essential to their work, both to identify the drivers and effects of environmental change, and to make that research useful (Levin and Pacala, 2005; Mooney et al., 2013).

Environment-related decisions are enormously varied, in the options, uncertainties, consequences, and decision-makers involved – not to mention their national, social, cultural, institutional, and economic settings. As a result, there is no simple summary of what people want, believe, feel, or do, relative to the environment. For example, over the past decade, in the United States, overt concern over climate change has increased more or less steadily for Democrats, while drifting downward for Republicans. On the other hand, overt behavior related to climate change has often tracked its own course, independent of those attitudes, with climate change ‘deniers’ often being relatively proactive in adopting energy conservation measures – but reflecting economic concerns, rather than environmental ones (Krosnick et al., 2006; Leiserowitz, 2006).
Given the diversity of environmental decisions, this article focuses on approaches to studying how people address them, rather than on what they do about specific ones – except as illustrative examples.
Research Strategies for Describing Environmental Decisions

Environmental decisions are not only diverse, but also often complex, with many options, outcomes, uncertainties, and
decision-makers. As a result, doing justice to any specific decision requires accommodating the details of its case study. Scholars grounded in the humanities have produced many such accounts, in fields such as environmental writing (Slovic and Dixon, 1992), retrospective technology assessment (Tarr, 1996), environmental history (Cronon, 1992), and cultural anthropology (Douglas, 1966). Their detailed observations shape and constrain the theories of social and behavioral scientists, in their quest for general accounts, capturing regularities in environmental behavior found across individual cases. Those scientists can adopt one of two basic compromises in their research approach.

1. Holistic approaches look for recurrent behavioral archetypes, in the sense of groups of people who respond in predictable ways (e.g., Cotgrove, 1982; Schwartz, 2006). For example, they might attempt to identify ‘ecofreaks,’ who resolutely oppose any intrusion into pristine areas, or ‘technocrats,’ who automatically dismiss such concerns. ‘Cultural theory’ pursues this approach, seeing whether people can be characterized by their positions on two dimensions and, if so, whether that characterization predicts their attitudes and behaviors (Dake, 1992; Douglas and Wildavsky, 1982). The ongoing ‘Six Americas’ project categorizes people by their positions on climate change, finding patterns of beliefs, attitudes, and behaviors, arrayed across the spectrum of Alarmed-Concerned-Cautious-Disengaged-Doubtful-Dismissive (Maibach et al., 2011). Moral Foundations Theory (Haidt, 2012) describes people by their orientation on five or six dimensions, such as their respect for ‘authority’ and their sanctification of ‘purity’ (as those general properties are defined by their social group).

2. Reductionist approaches assume that all people respond to the same factors, but interpret and weigh them differently, sometimes leading to different choices (Edwards, 1954).
For example, all people may sometimes rely on the availability heuristic, inferring the likelihood of future events based on how easily they can remember or imagine them happening. However, they may reach different conclusions, if they have observed different events or have different powers of imagination (Tversky and Kahneman, 1974).
International Encyclopedia of the Social & Behavioral Sciences, 2nd edition, Volume 7
http://dx.doi.org/10.1016/B978-0-08-097086-8.91012-2
Similarly, all people learn how to extract the gist of recurrent events, but may be exposed to different events and receive different instruction in what to observe (Reyna, 2012). All people have sacred values, which they will only compromise under duress; however, which those are depends on their culture and social groups (Atran, 2002). Thus, from a reductionist perspective, ecofreaks and technocrats (if such archetypes exist) are otherwise similar people, who have been shaped by different social pressures and material conditions.

Holistic and reductionist accounts coexist in dual-process theories of attitude and cognition (Kahneman, 2011; Evans, 2003), which posit parallel processes involving a general orientation toward a stimulus (e.g., an environmental cause, scene, or insult) and a detailed analysis of its contents (Slovic, 2010). That general orientation might be guided by holistic general themes or by reductionist general concerns. The more detailed analysis might overturn those impressions or be constrained by them, as can happen when people fall prey to the many cognitive processes that favor initial beliefs. For example, people may disproportionately seek confirmatory, rather than contradictory evidence; resolve ambiguity in favor of current beliefs; or exaggerate the definitiveness of small samples of evidence (Gilovich et al., 2002). How hard people work to analyze an issue should depend on how important it seems, what return they expect from that effort, and how great is their general need for cognition – reflecting and shaping their opportunities to affect the environment.

Some researchers favor holistic accounts in principle, because they lament the richness lost when behavior is reduced to the expression of general processes. They may be especially skeptical of processes identified in the rarefied settings of experiments crafted to highlight phenomena that interest investigators.
Conversely, some researchers favor reductionist accounts in principle, because they prize the rigor of experimental evidence regarding clearly defined processes. They might have little patience with the ‘grounded theories’ that emerge from the immersion that one holistic methodology requires (Charmaz, 2006), unnerved by the limited intersubjectivity that such accounts afford.

When researchers respect both holistic and reductionist approaches, the tension between them can provide rich accounts, while posing healthy challenges to each. The next section discusses how to realize that potential.
Holistic Accounts

A general account of environmental behavior is useful only if researchers can unambiguously translate complex real-world situations into its necessarily abstract terms. For holistic theories, that translation process may itself be holistic, attempting to identify specific cases with theoretical archetypes, such as a particular kind of intergroup conflict, stage of socioeconomic development, or level of civic engagement (Atran, 2002; De Young and Princen, 2012; Fischhoff, 1995; Kates et al., 2012). Holistic interpretation allows observers to use any available data to elaborate their account. Moreover, if those observers write well, the coherence of their narrative should
help readers to retain and integrate the details of their analyses. Archetypes are a natural way to organize evidence, even if that sometimes means neglecting discordant details.

Unfortunately, holistic interpretation also allows observers to exploit the richness of the evidence, in order to fashion accounts that are more convincing than the evidence warrants. For example, they might exploit the privileged position of hindsight to select and interpret facts that affirm their theories (Fischhoff, 1975), perhaps arguing that particular decisions predictably led to success or ruin. Or, they might strategically identify people and groups with archetypes, obscuring inconsistent beliefs, attitudes, and behaviors. Moreover, the very act of categorizing other people is an exercise of power. It may reduce them to stereotypes, while flattering observers as possessing superior insight and even morals, as they reveal others’ failings – feeding the natural tendency to create simplistic views of other people (Dawes, 1988; Nisbett and Ross, 1980).

These possibilities are not lost on the partisans in environmental conflicts, who value opportunities to demean their opponents and flatter their allies. As a result, they may seize on holistic accounts in the research literature, and then spin them to their own ends, perhaps even loosely citing studies (‘X has shown that the public is irrational about nuclear power’). Indeed, scientists can acquire a political following because of the conclusions they reach, independent of the strength of their evidence. For example, the British Conservative government was accused of establishing its Behavioural Insights Team, not because it understood the science (Thaler and Sunstein, 2009), but because it sought cover for antiregulatory policies, by claiming that people could be ‘nudged’ to protect themselves (House of Lords, 2011).
The narrative character of holistic accounts means that both scientific and amateur ones have the same outward form, forcing readers to check their scientific pedigree (Funtowicz and Ravetz, 1990). What training did the authors have? What peer-review screens did their work pass? Who paid for the work (Oreskes and Conway, 2008; Hoffman, 2010)? As in other domains where science enters public discourse, environmental researchers bear a special responsibility to ensure that their work is interpreted appropriately.

If the behavioral archetypes that recur in holistic accounts are clearly defined, then it should be possible to operationalize them in individual-difference measures (e.g., Dake, 1992; Dunlap, 2013). If such measures pass conventional psychometric criteria (e.g., reliability, construct validity), then they allow testing predictions derived from the underlying theory (e.g., “corporate greens have deeper commitment to consumption-related issues”; “environmental pessimists have greater scientific knowledge and institutional distrust”). Like other empirical tests, these are only as sound as the auxiliary assumptions underlying them, such as how people perceive the costs and benefits of possible actions (e.g., Do pessimists and optimists interpret the same facts differently or possess different facts? Are people vulnerable to framing effects, such that their preferences depend on how the issues are presented?). Here, reductionist approaches can help, by assessing the plausibility of those assumptions (“that information is widely distributed, so most people should have seen it”; “people have stable preferences about such things, so framing shouldn’t have much effect”) or by measuring them directly
(e.g., finding similar preferences with alternative frames) (Fischhoff, 2005). If archetypes cannot be measured authoritatively, then individuals may be characterized by observable features, such as their gender, race, income, or nationality – at the price of substituting general social theories for the theoretical force of archetype-based ones. For example, African-Americans typically express relatively great concern over environmental problems (Vaughan, 1993). That could reflect their greater exposure to those problems (meaning that they have more reason for concern), as documented by the environmental justice movement (Schlosberg, 2007). Or, it could reflect greater awareness of their immediate surroundings, greater physiological sensitivity, greater suspicion of social institutions, or poorer resources for self-protection, among other things. A similar array of hypotheses follows from Slovic’s (1999) finding that most Americans have similar environmental attitudes, except for some white males, who trust technology more and worry about the environment less than other people. Characterizing the subset of white males responsible for this overall group difference is a starting point for a theory of archetypes, sorting out the experiences or circumstances that create these distinctive views.
Reductionist Approaches

Analogous measurement issues confront reductionist approaches, looking at how common behavioral patterns emerge in diverse settings. For example, approaches grounded in decision theory ask how individuals evaluate possible actions in terms of their expected effects on valued outcomes (Edwards, 1954; Fischhoff and Kadvany, 2011). Those outcomes might affect the well-being of individuals (e.g., taxes, wilderness opportunities, social status), their society (e.g., public health, justice, property rights), or their environment (e.g., biodiversity, reproductive success). In these accounts, actions are attractive to the extent that they increase the chances of good outcomes and decrease the chances of bad ones. Although they adopt a rational-actor perspective for characterizing the elements of decisions, these approaches vary widely in terms of how far they expect such normative analyses to be descriptively accurate. At one extreme lie economic theories that envision well-informed individuals calculating the discounted lifetime stream of goods flowing from different choices. At the other extreme lie psychological approaches that envision individuals driven by emotions and uncompromising sacred values. The contrast between those choices and analytically optimal ones shows the price that people pay for their nonrationality (Breakwell, 2014).

Such research readily accommodates the heterogeneity of environmental decisions and the importance of specific outcomes in them. For example, social norms may loom large for curbside recycling, where neighbors can see what one does, but not for roadside littering when no one is looking. Thus, while it may be tempting to make general statements of the form ‘what matters to people, when it comes to the environment, is X,’ they always need to be qualified by specifying the context in question. For example, people may not litter even when alone, if they have internalized that behavior as a social
norm, have made it a habit, or always fear watching eyes. Even for as important a topic as valuing human life, studies have found widely varying willingness to pay for reducing the probability of premature death (Tengs, 1995; Viscusi, 1992). As a result, behavior appears most consistent in specific domains, such as home energy conservation (Gallagher and Randell, 2011). There, studies often find that conservation increases when people perceive greater benefits, lower costs, and fewer barriers to change. Although such results might seem just to affirm common sense, they are needed to establish how general behavioral processes play out in specific settings: Are specific incentives (e.g., cost saving) strong enough to motivate specific energy conservation behaviors? How well did consumers understand those incentives (perhaps buried in their electricity bills)? To what extent did extrinsic incentives undermine intrinsic ones (e.g., environmental concern), by diluting the statement made by prosocial actions? Did consumers have realistic ways to save? Were they in a state of energy poverty, so that lower costs allowed them to consume the energy needed for basic well-being (e.g., more tolerable indoor temperatures)?

Thus, as with holistic approaches, reductionist ones must assess the beliefs and values of the individuals whose behaviors they hope to explain, predict, or manipulate. Even a valid model may fail, if its terms are measured poorly. However, that problem is less acute for models with an additive, or ‘compensatory,’ structure, in which good and bad expectations can cancel one another out (e.g., the health belief model, the theory of reasoned action). Because such models are relatively insensitive to errors in measurement (Dawes, 1988), they will have some predictive validity as long as a study includes rough approximations of the factors affecting behavior (or variables correlated with them).
Unfortunately, that very robustness limits these models’ explanatory value, because many versions will produce similar predictions. As a result, it is hard to tell which variables drive behavior and how important each is. That concern applies less with noncompensatory decisions, where one consideration overrides all others, as with sacred values. For example, there may be no way to overcome the stigma from contaminating part of the natural world – or to diminish the moral high ground from protecting it (Douglas and Wildavsky, 1982; Flynn et al., 2002; Baron and Spranca, 1997).
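The robustness of additive models, and the explanatory limitation it brings, can be illustrated with a small simulation. This is a hypothetical sketch, not an analysis from any study cited here: the factors (perceived benefits, costs, barriers), the ‘true’ weights, and the noise level are all assumptions of the illustration. It shows, in the spirit of Dawes’ work on improper linear models, that a compensatory model with crude unit weights predicts a simulated behavior nearly as well as the data-generating weights.

```python
import random

random.seed(1)

# Hypothetical data-generating process (assumed for illustration only):
# intention to conserve energy is an additive function of perceived
# benefits, costs, and barriers, plus noise.
N = 500
people = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
          for _ in range(N)]
behavior = [0.6 * ben - 0.5 * cost - 0.3 * barrier + random.gauss(0, 1)
            for ben, cost, barrier in people]

def predict(w_ben, w_cost, w_barrier):
    """Score each person with an additive (compensatory) model."""
    return [w_ben * b + w_cost * c + w_barrier * r for b, c, r in people]

def corr(xs, ys):
    """Pearson correlation, computed from scratch to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

r_true = corr(predict(0.6, -0.5, -0.3), behavior)  # data-generating weights
r_unit = corr(predict(1.0, -1.0, -1.0), behavior)  # crude unit weights
print(round(r_true, 2), round(r_unit, 2))  # the two are nearly identical
```

Because many weightings predict about equally well, fitting such a model to field data cannot, by itself, reveal which factor drives behavior or how much each matters.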
Evaluating Perceived Risks and Benefits of Environmental Changes

Assessing beliefs and values is a central activity for the disciplines of cognitive and social psychology. Assessing them for environmental issues has not only drawn on that basic science, but also extended it. For example, some environmental decisions involve unfamiliar processes (e.g., radiation, invasive species, climate changes) playing out unevenly over time and species (Levin and Pacala, 2005). Those decisions may pose fateful trade-offs, where individuals cannot rely on tradition or trial and error to decide what is right: What should we pay to protect endangered species, poor children from lead in their homes, or groundwater from contamination? How should we weigh what our children (and grandchildren) might think about our choices? Such decisions may leave people uncertain
about themselves as well as about their world, asking ‘what do I want?’ as well as ‘what can I get?’ Assessing and reducing both kinds of uncertainty has prompted social and behavioral scientists working on environmental issues to break new research ground.

More or less formally, that research begins by analyzing the decisions that people face, in order to get the facts right and to focus on the right facts – in the sense of the ones that matter most (Baron, 2012). Those analyses take an ‘inside view,’ trying to see the world the decision-makers face (Kahneman, 2011), reducing the risk of researchers assuming that others share their environmental concerns or exaggerating their store of common knowledge (Dawes, 1988; Nickerson, 1999). Unlike science education, where experts determine which facts are worth knowing, the study of environmental decisions depends on individuals’ circumstances, goals, and capabilities (Fischhoff, 2013).

Having characterized a decision, the research proceeds to assess how well individuals understand it, contrasting what they know with what they need to know, in order to make sound choices. That contrast may reveal gaps in their understanding about the world or about themselves. One strategy for avoiding decision-specific studies is to create tests of environmental ‘literacy.’ Their usefulness depends on whether they include facts that people need to know or just ones that it would be nice to know. For example, studies of energy conservation have found that people can know about many measures that make some difference without knowing which make a real difference or how to defend their choices against critics or temptations (Moser and Dilling, 2007). Researchers in science education have learned much about the knowledge and skills needed to master a domain (Klahr, 2000).
Analogous studies are needed to identify core understandings that create the mental models that allow individuals to interpret diverse environmental decisions. Those might include having an intuitive grasp of how gases diffuse, how risks mount up through repeated exposure, how to use numbers, or how to apply decision-making rules.

Whereas environmental science can establish the clearest possible picture of how decisions will affect outcomes that people value, it cannot determine which outcomes those should be. That depends on what matters to those making the decisions. Scientists can, however, ask how successful those people have been in constructing stable preferences for specific environmental choices, consistent with their basic values. That test recognizes that, even when a topic is familiar (e.g., declining fish populations, invasive species, suburban sprawl, vector-borne disease), the trade-offs posed by a specific choice often are not (e.g., a ballot measure, a housing decision, a restaurant meal). As a result, individuals cannot just ‘read off’ a response from a universal utility or value function, as naïve rational-actor models might hold. Rather, they must ‘articulate’ or ‘construct’ their preferences among the options on offer – or perhaps even create new ones (e.g., expanding their housing search).

In such cases, where individuals are not sure what they want, researchers seeking to elicit attitudes and preferences face a choice. They can either take people as they are, perhaps with fragmentary views, or help people to reflect on the issues so as to derive fuller positions. The former strategy risks a sin of omission, if people are swept along by the incomplete set of
issues evoked by their initial thinking on the topic. The latter strategy risks a sin of commission, if suggesting alternative perspectives biases decision-makers’ thinking, rather than deepening it. The two strategies draw on complementary research traditions. The former has roots in psychophysical research, the latter in decision theory.
Eliciting Environmental Beliefs and Attitudes

Psychophysical Measurement

Emerging from the natural sciences, early psychology focused on what came to be called psychophysics, determining the psychological equivalent of physical stimuli. These studies envisioned physiological and psychological mechanisms that translated stimuli into internal states of arousal. If asked, individuals can report on that state, with a word, number, or action (e.g., pressing on a handgrip to show their confidence, adjusting a rheostat to show the light intensity equivalent to the loudness of a tone) (Poulton, 1989).

The most straightforward applications of psychophysical approaches to environmental decisions pose questions such as how natural (or attractive or inspiring or calming) a natural scene is. Researchers might elicit direct ratings (Daniel and Vining, 1983) or similarity judgments, if they hope to capture attitudes that people have difficulty expressing, because they lack the words or hesitate to use them (Cantor, 1977). The less familiar a task, the greater the burden on researchers to ensure that it was interpreted as intended: Which features were noticed – and overlooked? What did respondents believe about the ecosystem services (Daily, 1997) that the environment in question provides and the peril it faces? Did respondents sense (intended or inadvertent) hints regarding what to say? How real or hypothetical did the task seem? It takes a suite of converging studies, with suitable manipulation checks, to establish what environmental changes people believe they are evaluating, how deep their understanding is, and what they mean by their responses (Fischhoff and Furby, 1988).
Such interpretative issues have been at the center of the long-running controversy over contingent valuation (CV), a method advanced by some resource economists for monetizing ‘goods’ not traded in marketplaces where their economic value could be observed (e.g., atmospheric visibility, the survival of endangered species) (Carson and Hanemann, 2005; Mitchell and Carson, 1989). In the psychophysical tradition, CV interviewers ask people how much they are willing to pay in order to prevent an adverse environmental change (or how much they will demand as compensation for it). These responses are meant for use in cost–benefit analyses, where they could represent environmental changes that would otherwise be neglected in policy-making dominated by economic analyses. Whether a CV study captures respondents’ preferences depends on how well they can understand and answer questions demanding explicit, quantitative evaluations of complex, novel stimuli with many potentially relevant details.

CV studies illustrate the strengths and weaknesses of eliciting attitudes toward environmental goods. Respondents’ ability to provide some answer to any question suggests that they have some relevant feelings on any environmental issue.
However, feelings alone may not predict what they would pay were there a market for that environmental good (Kahneman et al., 1999). One focal research topic is whether CV judgments are properly sensitive to the amount (or ‘scope’) of the change being evaluated (Arrow et al., 1993). The death of one migratory bird saddens many people, while the death of many birds saddens them even more. However, the differences in those feelings may not be commensurate with the difference in their value (Slovic, 2010). Insensitivity to scope can be cited as evidence of measurement failure. One defense is that people might give as much to ‘adopt-a-bird’ as to an avian conservation program, if they believe the former to be more effective or assume that other people will adopt the other birds.

However orderly such judgments might be, some critics question the propriety of asking them at all. They argue that monetization ‘anaesthetizes moral feeling’ (Tribe, 1972), by reducing everything to putative economic equivalents. As a result, CV may win some battles for the environment, by gaining recognition for some otherwise neglected changes, but lose the war, by failing to defend the environment’s intrinsic value.
Decision Theoretic Measurement

The early days of the modern environmental movement confronted many technologies with public opposition that their proponents could not, and perhaps would not, understand. One natural response was (and still is) to claim that a technology’s opponents overestimate its expected fatalities. Early studies examined that claim by eliciting fatality estimates, finding (1) a strong correlation between lay and statistical fatality estimates; (2) higher estimates for technologies with more ‘available’ deaths, relative to others with similar statistical frequency; (3) consistent ordinal estimates across response modes; and (4) inconsistent absolute estimates across response modes. Thus, people seem to have a fairly robust feeling for relative fatalities, which emerges however such (unusual) questions are asked (Lichtenstein et al., 1978). However, they have a poorer feeling for what numbers to use, which makes their absolute judgments sensitive to the contextual cues that a response mode provides (Poulton, 1994).

However, quite different judgments emerged when lay respondents made judgments of ‘risk,’ rather than of ‘average year fatalities.’ Moreover, those judgments better predicted their attitudes toward technologies (e.g., how strictly they should be regulated) (Slovic et al., 1979). Many studies have examined what other aspects of ‘risk’ affect people. One early candidate was that people give added weight to a technology’s ‘catastrophic potential,’ in the sense that they see greater ‘risk’ when many lives could be lost at once. Anecdotal support for that hypothesis can be found in the attention drawn to plane crashes and public concern over nuclear power – despite lay respondents’ recognition that, in an average year, the death toll from nuclear power is negligible. Studies found, however, that what really bothered people about catastrophic accidents was the uncertainty that made them possible (Slovic et al., 1984).
Those two ‘risk attributes,’ catastrophic potential and uncertainty, embody quite different ethical principles. Avoiding catastrophes means caring how deaths are distributed
(over time and space), hence preferring small-scale technologies (other things being equal). Avoiding uncertainty means being risk averse, hence requiring stronger evidence before accepting new technologies.

The study of attributes that define ‘risk’ can be traced to Starr’s seminal (1969) claim that, for any level of benefit, the public tolerates a higher fatality rate for risks incurred voluntarily (e.g., skiing) than for risks imposed involuntarily (e.g., electric power). As evidence, Starr placed eight activities in a risk–benefit space, based on estimates of their annual fatalities and economic benefits, and then sketched parallel ‘acceptable risk’ lines, an order of magnitude apart, for voluntary and involuntary ones. Such a revealed preference analysis assumes that members of a society are satisfied with the risk–benefit trade-offs that they see around them. A study (Fischhoff et al., 1978) testing this hypothesis asked lay people to judge 30 technologies in terms of their current risks and benefits, as well as rating those risks in terms of their acceptability, voluntariness, and eight other attributes (e.g., catastrophic potential, dread, controllability, known to science, known to the public). It found:

1. a weak correlation between lay judgments of current risks and benefits (indicating that respondents saw no greater benefit from riskier technologies);
2. no greater correlation between judgments of current risks and benefits after controlling for judgments of the risk attributes (indicating no double standards for voluntary and involuntary risks, nor for any other attribute);
3. a belief that most technologies had unacceptably high risks (indicating that current trade-offs did not reveal respondents’ preferences);
4. a strong correlation between judgments of current benefits and acceptable risks (indicating a willingness to make risk–benefit trade-offs);
5. a willingness to accept higher risk levels for voluntary risks, holding benefits constant, and for other risk attributes (indicating a double standard).

In addition to their substantive content, such studies demonstrate the importance of disciplining speculations about environmental beliefs, attitudes, and behaviors with direct empirical evidence.
Applying Behavioral Research

Setting Public Policy

If robust, such social and behavioral science results can guide public policies affecting the environment. The studies just mentioned suggest that the public will accept risk–benefit trade-offs – which might surprise critics who claim that the public demands zero risk. They show where lay and expert judgments of risk and benefit diverge, hence where they must be reconciled through better analysis or better communication. They show risk attributes to consider when designing technologies and regulations. They clarify the ethics embedded in seemingly technical terms. For example, 'mortality risk' can be summarized as 'probability of premature death' or as 'days of lost life expectancy.' The latter measure puts a premium on deaths of young people (where many days are lost with each life), whereas the former treats deaths equally (Crouch and Wilson, 1982; Fischhoff and Kadvany, 2011). Similarly, any measure of mortality risk (and any policy dependent on it) must either consider or ignore how equitably risks are distributed across population groups – and, if it considers equity, decide which groups matter and what 'equitably' means. By raising these questions, behavioral research reveals assumptions hidden in the conventions of risk and benefit analysis, so that they can be actively considered in policy making.

Considering a large set of risk attributes makes policymaking more complicated and, potentially, too unwieldy to include public input. One simplification strategy looks for redundancy among risk attributes, and then focuses on a few canonical concerns. For example, if dread risks also tend to have catastrophic potential, then considering either attribute will lead to similar policies. Studies have, in fact, found that two factors can account for much of the variance in ratings of the attributes. One seems to reflect how well risks are understood, the second how much they are dreaded (Fischhoff and Kadvany, 2011). Characterizing multiple risks in terms of the same attributes allows comparisons across domains and increases the chances for consistency (H.M. Treasury, 2005).

A second strategy accepts the complexity and works with people to master its details. The need for credible resolution of environmental disputes has prompted many consultative bodies to advocate proactive public involvement (e.g., Blue Ribbon Commission, 2012; Presidential-Congressional Commission, 1997). In one important initiative, the US Environmental Protection Agency (Davies, 1996) fostered some 50 state and regional consultations to set priorities among risks. In them, public representatives set the agenda, while technical experts gathered and explained relevant evidence. Dietz and Stern (2008) provide an authoritative summary of the science of public participation.
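The ethical contrast between the two 'mortality risk' summaries discussed above can be made concrete with a small hypothetical calculation. The death counts, ages, and the 80-year life expectancy are all invented for illustration:

```python
LIFE_EXPECTANCY = 80  # assumed average life expectancy, in years

# Two hypothetical hazards with identical death counts but different victims.
hazards = {
    "hazard_A": {"deaths": 100, "mean_age_at_death": 25},
    "hazard_B": {"deaths": 100, "mean_age_at_death": 75},
}

def years_of_life_lost(h):
    """Deaths weighted by remaining life expectancy at the age of death."""
    return h["deaths"] * max(0, LIFE_EXPECTANCY - h["mean_age_at_death"])

# By a count-the-deaths metric ('probability of premature death'),
# the two hazards look identical.
deaths_A = hazards["hazard_A"]["deaths"]  # 100
deaths_B = hazards["hazard_B"]["deaths"]  # 100

# By the 'lost life expectancy' metric, hazard_A is 11 times worse:
# 100 deaths x 55 remaining years vs. 100 deaths x 5 remaining years.
yll_A = years_of_life_lost(hazards["hazard_A"])  # 5500 life-years
yll_B = years_of_life_lost(hazards["hazard_B"])  # 500 life-years
```

The choice between these two summaries is exactly the kind of hidden ethical assumption that the text describes: the arithmetic is trivial, but which number a policy optimizes determines whose deaths count for more.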
Ensuring Citizen Competence

Because behavioral and social science research seeks to understand and overcome the limits to lay decision-making, it also provides answers to a critical policy question: Are individuals competent to make sound choices – or should those choices be left to experts and officials (Lupia and McCubbins, 1998)? Motivated answers to that question are common. A competent public serves the interests of those seeking active public participation or deregulation (and a marketplace where the public fends for itself). An incompetent public serves the interests of those who favor expert opinion and strong regulation (to protect a defenseless public). Behavioral research can provide a disciplined, decision-specific answer, comparing what people know with what they need to know. Sometimes, fragmentary knowledge is enough to identify the best course of action. Sometimes, even extensive knowledge is inadequate and people need help.

Behavioral research can also expand the public's envelope of competence by filling critical gaps in their knowledge. At times, people need specific facts (e.g., the payback period for improved home insulation, the probability of a nuclear core meltdown). In that case, there is a supply curve for information (to use the economic term), arranging facts in order of decreasing usefulness for distinguishing among decision options. Doing so uses recipients' attention wisely and respectfully, starting with the things that they most need to know. At other times, people need
to understand the processes shaping environmental changes. In that case, communications must complete their mental models, so that they can follow the action, see when things have changed, and devise effective responses.

Behavioral research can also assess the competence of experts and provide ways to elicit their judgments more systematically (Kammen and Hassenzahl, 1999; O'Hagan et al., 2006). When even the most knowledgeable experts are highly uncertain, a case may be made for moving slowly – unless doing so allows even more poorly understood processes to go unchecked. For example, the debate over genetically manipulated organisms has revolved around whether anyone understands them well enough to make sound choices. Those who doubt that even the experts know enough to proceed often invoke a 'precautionary principle' – at times as a sign of distrust, at times as a demand for deeper analyses of uncertainty (Löfstedt et al., 2002).
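The 'supply curve for information' idea above can be sketched as a value-of-information calculation: rank facts by how much learning each one would improve the expected outcome of the decision. The toy home-insulation decision below, with invented costs, savings, and equally likely scenarios, shows the ordering:

```python
from statistics import mean

COST = 1500            # upfront insulation cost ($), assumed
SAVINGS = [100, 400]   # equally likely annual energy savings ($), assumed
YEARS = [8, 12]        # equally likely insulation lifetimes (years), assumed

def value(action, savings, years):
    """Net payoff of an action under one (savings, lifetime) scenario."""
    return savings * years - COST if action == "insulate" else 0.0

def best_ev(scenarios):
    """Expected value of the best single action over the given scenarios."""
    return max(
        mean(value(action, s, y) for s, y in scenarios)
        for action in ("insulate", "do_nothing")
    )

all_scenarios = [(s, y) for s in SAVINGS for y in YEARS]
baseline = best_ev(all_scenarios)  # decide now, learning nothing

# Value of learning each fact before deciding: average, over the fact's
# possible values, of the best decision once that value is known.
voi = {
    "annual_savings":
        mean(best_ev([(s, y) for y in YEARS]) for s in SAVINGS) - baseline,
    "lifetime_years":
        mean(best_ev([(s, y) for s in SAVINGS]) for y in YEARS) - baseline,
}

# The 'supply curve': facts in order of decreasing usefulness.
ranked = sorted(voi, key=voi.get, reverse=True)
```

Here learning the annual savings is worth $250 in expectation (it can reverse the decision), while learning the lifetime is worth nothing (the decision to insulate would not change either way), so a respectful communication would lead with the savings figure.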
Conclusion

Environmental decisions are enormously diverse and often enormously complicated and uncertain. Individuals can have a hard time grasping the issues and deciding how they feel about them. Understanding and reducing such difficulties is part of the challenge and opportunity for researchers studying environmental decision-making. Driven by the importance of these problems, environmental social scientists have cast a wide net, adopting and inventing both holistic and reductionist approaches, drawing on diverse research streams. Their work makes it possible to go beyond speculation and anecdotes to systematic evidence regarding what people believe and want regarding their environment. The environment's fate and, with it, their own depend on how well they understand what is happening and how fully they realize their desires for its future. Behavioral and social science research can help them, identifying new research topics along the way.
See also: Attitude Measurement; Attitudes and Behavior; Environmental Attitudes and Behavior: Measurement; Environmental History; Environmental Justice in the United States; Environmental Movements; Environmental Sciences; Environmental Sociology; Public Opinion: Social Attitudes.
Bibliography

Arrow, K., Solow, R., Portney, P., Leamer, E., Radner, R., Schuman, H., 1993. Report of the NOAA panel on contingent valuation. Federal Register 58, 4601–4614.
Atran, S., 1990. Cognitive Foundations of Natural History. Cambridge University Press, Cambridge.
Atran, S., 2002. In Gods We Trust. Oxford University Press, Oxford.
Baron, J., 2012. The point of normative models in judgment and decision making. Frontiers in Psychology 3 (577). http://dx.doi.org/10.3389/fpsyg.2012.00577.
Baron, J., Spranca, M., 1997. Protected values. Organizational Behavior and Human Decision Processes 70, 1–16.
Breakwell, G., 2014. The Psychology of Risk, second ed. Cambridge University Press, Cambridge.
Canter, D., 1977. The Psychology of Place. Architectural Press, London.
Carson, R.T., Hanemann, W.M., 2005. Contingent valuation. In: Mäler, K.-G., Vincent, J. (Eds.), Handbook of Environmental Economics. Elsevier, Amsterdam, pp. 821–936.
Charmaz, K., 2006. Constructing Grounded Theory: A Practical Guide through Qualitative Analysis. Sage Publications, Thousand Oaks, CA.
Cotgrove, S., 1982. Catastrophe or Cornucopia. Wiley, New York.
Cronon, W., 1992. Nature's Metropolis. Norton, New York.
Crouch, E.A.C., Wilson, R., 1982. Risk/Benefit Analysis. Ballinger, Cambridge, MA.
Daily, G.C., 1997. Nature's Services: Societal Dependence on Natural Ecosystems. Island Press, Washington, DC.
Dake, K., 1992. Myths of nature: culture and the social construction of risk. Journal of Social Issues 48 (4), 21–37.
Daniel, T.C., Vining, J., 1983. Methodological issues in the assessment of landscape quality. In: Altman, I., Wohlwill, J.F. (Eds.), Human Behavior and Environment, vol. 6. Plenum Press, New York, NY, pp. 39–84.
Davies, C. (Ed.), 1996. Comparing Environmental Risks. Resources for the Future, Washington, DC.
Dawes, R., 1988. Rational Choice in an Uncertain World. Harcourt Brace Jovanovich, San Diego, CA.
De Young, R., Princen, T. (Eds.), 2012. The Localization Reader: Adapting to the Coming Downshift. MIT Press, Cambridge, MA.
Dietz, T., Stern, P.C. (Eds.), 2008. Public Participation in Environmental Assessment and Decision Making. National Academy Press, Washington, DC.
Douglas, M., 1966. Purity and Danger. Routledge & Kegan Paul, London.
Douglas, M., Wildavsky, A., 1982. Risk and Culture. University of California Press, Berkeley, CA.
Dunlap, R. (Ed.), 2013. Climate Change Skepticism and Denial. American Behavioral Scientist.
Edwards, W., 1954. The theory of decision making. Psychological Bulletin 51, 380–417.
Evans, J.St.B.T., 2003. In two minds: dual-process accounts of reasoning. Trends in Cognitive Sciences 7, 454–459.
Fischhoff, B., 1975. Hindsight ≠ foresight: the effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance 1, 288–299.
Fischhoff, B., 1995. Risk perception and communication unplugged: twenty years of process. Risk Analysis 15, 137–145.
Fischhoff, B., 2005. Cognitive processes in stated preference methods. In: Mäler, K.-G., Vincent, J. (Eds.), Handbook of Environmental Economics. Elsevier, Amsterdam, pp. 937–968.
Fischhoff, B., 2013. The sciences of science communication. Proceedings of the National Academy of Sciences 110, 14033–14039.
Fischhoff, B., Furby, L., 1988. Measuring values: a conceptual framework for interpreting transactions. Journal of Risk and Uncertainty 1, 147–184.
Fischhoff, B., Kadvany, J., 2011. Risk: A Very Short Introduction. Oxford University Press, Oxford.
Fischhoff, B., Slovic, P., Lichtenstein, S., Read, S., Combs, B., 1978. How safe is safe enough? A psychometric study of attitudes towards technological risks and benefits. Policy Sciences 9, 127–152.
Flynn, J., Kunreuther, H., Slovic, P. (Eds.), 2002. Risk, Media and Stigma. Earthscan, London.
Funtowicz, S., Ravetz, J., 1990. Uncertainty and Quality in Science for Policy. Kluwer, London.
Gallagher, K.S., Randell, J.C., 2011. What makes the US energy consumers tick? Issues in Science and Technology 28 (4).
Garner, G.T., Stern, P.C., 1996. Environmental Problems and Human Behavior. Allyn & Bacon, Needham Heights, MA.
Gilovich, T., Griffin, D., Kahneman, D. (Eds.), 2002. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press, New York.
Haidt, J., 2012. The Righteous Mind. Pantheon, New York.
Hoffman, A.J., 2010. Climate change as a cultural and behavioral issue. Organizational Dynamics 39, 295–305.
House of Lords, 2011. Behavior Change. HL Paper 179. The Stationery Office, London.
Kahneman, D., 2011. Thinking, Fast and Slow. Farrar, Straus and Giroux, New York.
Kahneman, D., Ritov, I., Schkade, D., 1999. Economic preferences or attitude expression? Journal of Risk and Uncertainty 19, 203–242.
Kammen, D., Hassenzahl, D., 1999. Should We Risk It? Princeton University Press, Princeton, NJ.
Kates, R.W., Travis, W.R., Wilbanks, T.J., 2012. Transformational adaptation when incremental adaptations to climate change are insufficient. PNAS. www.pnas.org/cgi/doi/10.1073/pnas.1115521109.
Klahr, D., 2000. Exploring Science: The Cognition and Development of Discovery Processes. MIT Press, Cambridge, MA.
Krosnick, J.A., Holbrook, A.L., Lowe, L., Visser, P.S., 2006. The origins and consequences of democratic citizens' policy agendas. Climatic Change 77 (1–2), 7–43.
Leiserowitz, A., 2006. Climate change risk perception and policy preferences. Climatic Change 77, 45–72.
Levin, S.A., Pacala, S.W., 2005. Contingent valuation. In: Mäler, K.-G., Vincent, J. (Eds.), Handbook of Environmental Economics. Elsevier, Amsterdam, pp. 61–95.
Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M., Combs, B., 1978. Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory 4, 551–578.
Löfstedt, R., Fischhoff, B., Fischhoff, I., 2002. Precautionary principles: general definitions and specific applications to genetically modified organisms (GMOs). Journal of Policy Analysis and Management 21, 381–407.
Lupia, A., McCubbins, M.D., 1998. The Democratic Dilemma: Can Citizens Learn What They Need to Know? Cambridge University Press, New York.
Maibach, E., Leiserowitz, A., Roser-Renouf, C., Mertz, C.K., 2011. Identifying like-minded audiences for climate change public engagement campaigns: an audience segmentation analysis and tool development. PLoS ONE 6 (3), e17571.
Mitchell, R.C., Carson, R.T., 1989. Using Surveys to Value Public Goods. Resources for the Future, Washington, DC.
Mooney, H.A., Duraiappah, A., Larigauderie, A., 2013. Evolution of natural and social science interactions in global change research programs. PNAS. www.pnas.org/cgi/doi/10.1073/pnas.1107484110.
Moser, S., Dilling, L. (Eds.), 2007. Creating a Climate for Change: Communicating Climate Change and Facilitating Social Change. Cambridge University Press, Cambridge.
Nickerson, R.S., 1999. How we know – and sometimes misjudge – what others know. Psychological Bulletin 125, 737–759.
Nisbett, R.E., Ross, L., 1980. Human Inference. Prentice-Hall, Englewood Cliffs, NJ.
O'Hagan, A., Buck, C.E., Daneshkhah, A., et al., 2006. Uncertain Judgements: Eliciting Expert Probabilities. Wiley, Chichester.
Oreskes, N., Conway, E.M., 2008. Merchants of Doubt. Bloomsbury Press, New York.
Poulton, E.C., 1989. Bias in Quantifying Judgment. Lawrence Erlbaum, Hillsdale, NJ.
Poulton, E.C., 1994. Behavioral Decision Making. Cambridge University Press, Cambridge.
Reyna, V.F., 2012. A new intuitionism: meaning, memory, and development in fuzzy-trace theory. Judgment and Decision Making 7, 332–339.
Schlosberg, D., 2007. Defining Environmental Justice: Theories, Movements, and Nature. Oxford University Press, Oxford.
Schwartz, S., 2006. A theory of cultural value orientations: explication and applications. Comparative Sociology 5 (2–3), 137–182.
Slovic, P., 1999. Trust, emotion, sex, politics, and science. Risk Analysis 19, 689–701.
Slovic, P., 2010. The Feeling of Risk. Earthscan, London.
Slovic, P., Fischhoff, B., Lichtenstein, S., 1979. Rating the risks. Environment 21 (4), 14–20, 36–39.
Slovic, P., Lichtenstein, S., Fischhoff, B., 1984. Modeling the societal impact of fatal accidents. Management Science 30, 464–474.
Slovic, S., Dixon, T. (Eds.), 1992. Being in the World: An Environmental Reader for Writers. Macmillan, New York.
Starr, C., 1969. Societal benefit versus technological risk. Science 165, 1232–1238.
Tarr, J.A., 1996. The Search for the Ultimate Sink. University of Akron Press, Akron.
Tengs, T., 1995. Five-hundred life-saving interventions and their cost-effectiveness. Risk Analysis 15, 369–390.
Thaler, R., Sunstein, C., 2009. Nudge: Improving Decisions about Health, Wealth and Happiness. Yale University Press, New Haven, CT.
Treasury, H.M., 2005. Managing Risks to the Public. HM Treasury, London.
Tribe, L.H., 1972. Policy science: analysis or ideology? Philosophy and Public Affairs 2, 66–110.
Tversky, A., Kahneman, D., 1974. Judgment under uncertainty: heuristics and biases. Science 185, 1124–1131.
Vaughan, E., 1993. Individual and cultural differences in adaptation to environmental risks. American Psychologist 48, 673–680.
Viscusi, W.K., 1992. Fatal Tradeoffs. Oxford University Press, Oxford.