Organizational Behavior and Human Decision Processes, Vol. 79, No. 3, September 1999, pp. 179–198
Article ID obhd.1999.2844, available online at http://www.idealibrary.com
Conflict Aversion: Preference for Ambiguity vs Conflict in Sources and Evidence
Michael Smithson
The Australian National University, Canberra, Australia
This research investigates preferences and judgments under ambiguous vs conflicting information. Three studies provided evidence for two major hypotheses: (1) Conflicting messages from two equally believable sources are dispreferred in general to two informatively equivalent, ambiguous, but agreeing messages from the same sources (i.e., conflict aversion); and (2) conflicting sources are perceived as less credible than ambiguous sources. Studies 2 and 3 yielded evidence for two framing effects. First, when the outcome was negative, subjects' preferences were nearly evenly split between conflict and ambiguity, whereas a positive outcome produced marked conflict aversion. Second, a high probability of a negative outcome or a low probability of a positive one induced conflict preference. However, no framing effects were found for source credibility judgments. Study 3 also investigated whether subject identification with a source might affect preferences or credibility judgments, but found no evidence for such an effect. The findings suggest cognitive and motivational explanations for conflict aversion as distinct from ambiguity aversion. The cognitive heuristic is that conflict raises suspicions about whether the sources are trustworthy or credible. The motivational explanation stems from the idea that if sources disagree, then the judge not only becomes uncertain but also must disagree with at least one of the sources, whereas if the sources agree then the judge may agree with them and only has to bear the uncertainty. © 1999 Academic Press
The author thanks Ms. Karen Harris and Mr. Thomas Bartos for assistance in conducting the experiments reported here. Thanks also to Dr. Margaret Foddy, Dr. J. Michael Innes, Dr. Neil Thomason, two anonymous reviewers, and the "Groupthink-tank" participants in the Division of Psychology at the Australian National University for helpful discussions and comments on earlier drafts of this paper. Address correspondence and reprint requests to Michael Smithson, Division of Psychology, Australian National University, Canberra, ACT 0200, Australia.
Research concerning the effects of ambiguity on decision making under uncertainty has ignored, for the most part, an important distinction between ambiguous and conflicting information. The connection between them was first suggested more than a decade ago in Einhorn and Hogarth's (1985) pioneering study, when they claimed that conflict can cause ambiguity. While there is ample evidence that conflicting information may engender a sense of uncertainty in a decision maker, it is not clear what kind of uncertainty that might be or whether it would be equivalent to ambiguity arising from some other cause. This provides opportunities for further empirical and theoretical investigations.

Conflict refers to disagreement over states of reality that cannot hold true simultaneously. If one source informs me that a red car is parked in my driveway and another source says the car is blue, then I will take this to be conflicting information about the color of the car if I do not believe it could be red and blue at the same time. According to this usage, conflict is not limited to opposites (e.g., "there is a car in your driveway" vs "there is no car in your driveway").

Ambiguity is a term that has not been used consistently in the judgment and decision making literature, with the modal usage equating it with vagueness. Here, it is used in the same sense as in Max Black's classic paper (1937), in which he defined it as a condition in which a referent has several distinct possible interpretations. For example, the word "hot" in the sentence "This food is hot" could refer to spiciness, high temperature, or stolen food. Since it involves multiple possible interpretations, a particular kind of ambiguity may be analogous to conflict or a mental representation of it. Writers in the classical traditions of literary criticism and linguistics have elaborated typologies of ambiguity (cf. Empson, 1930), and they point out that ambiguity may be conflictive when multiple possible interpretations are mutually exclusive or incompatible.

Ambiguity may range from concision to conflict. Concision occurs when an ambiguous word or phrase may carry multiple meanings simultaneously, none of which conflict with one another. Some multiple-meaning phrases may not carry their meanings simultaneously, yet those meanings still might not be outright conflictive. The words "bagged the game" in "He bagged the game" could refer to putting captured or dead animals in a sack or (at least in the Australian vernacular) to denigrating a sport, and it would be very unlikely that these two meanings could refer to one and the same act. Outright conflict between meanings occurs when they are opposites or incompatible. These varieties of ambiguity not only reflect different senses of uncertainty but also are used for different kinds of rhetorical work. This paper is concerned mainly with the conflictive kind.

A number of metaphors, similes, and proverbs in English and other languages attest to the claim that people draw an analogy between certain kinds of uncertainty and conflict, especially in binary choices. Conflict in this sense is closely related to contrariety, which in turn is linked with equivocalness and thence with ambiguity. We speak of considering "competing" alternatives, assessing "conflicting" evidence, "being in two minds" about alternatives, "equivocating" (i.e., speaking with two voices), "a house divided against itself," and "inner conflict" versus being "at peace with oneself." On the other hand, a
similar analogy is drawn between group consensus or cooperation and a decisive individual. One person may be in two minds about a matter, but a group may be "of one mind," "speak with one voice" (i.e., univocally instead of equivocally), and act "as one."

There is a mathematical literature linking uncertainty and conflict in formalisms for representing and measuring uncertainty. That literature uses terms such as "ambiguity" and "conflict" in different ways from their uses in psychological writings, so some care must be taken to avoid confusing definitions and concepts. While this is not the place for a detailed account, a brief overview is worthwhile since it also suggests that conflict involves a special kind of uncertainty. In the belief function framework (Shafer, 1976), three measures have been developed that arguably distinguish among various kinds of ambiguity in the sense that we are using the term here (cf. Klir & Folger, 1988, Chapter 5, for a lucid and accessible account). The first is nonspecificity, which refers to the size of the subsets whose elements could be true according to the evidence at hand. For instance, if we know that a very large bag of marbles contains some unspecified combination of yellow or green marbles, we nevertheless have more specific information than if we know only that it contains some combination of yellow, red, green, or black marbles. Nonspecificity in this sense is related to Tversky and Koehler's (1994) support theory (see also Rottenstreich & Tversky, 1997), with its attention to unpacking and repacking subsets.

The second and third measures bear more on conflict. Shafer (1976) defined a coefficient of conflict between two or more beliefs, measuring the degree to which nonzero weights of evidence are assigned to two or more disjoint subsets of a universal set. If we know that a bag may contain some combination of yellow or green marbles, then this is not specific information but neither is it conflictive. On the other hand, if one piece of evidence suggests the bag contains only green marbles and another equally reputable piece of evidence suggests it contains only yellow ones, then this information is conflictive because we now have nonzero evidence for two disjoint propositions. The third measure, known as confusion, is distinguishable from conflict only when there are more than two possible alternatives (Dubois & Prade, 1987), and we will ignore it here.

The usage of the term conflict in the mathematical literature ignores the distinction between pieces of evidence and sources of evidence. If laboratory A produces 10 studies supporting an hypothesis (H, say) and laboratory B produces 10 studies disconfirming H, the collection of 20 studies will seem more conflictive in a psychological sense than if laboratories A and B each produce 5 studies supporting H and 5 disconfirming it. Indeed, in the latter instance people may well conclude that laboratories A and B are in agreement; the conflict, if any, will be perceived to be among the studies. However, the laboratories could also be said to be ambiguous with respect to whether H is supported or not. In a third scenario where each lab produces 10 studies, and each study yields evenly split evidence for and against H, we might expect people to surmise that both labs and all 20 studies are agreeing and ambiguous even though the evidence concerning H is just as equivocal as in the first two scenarios.
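To make the conflict measure concrete, one standard formalization (a sketch based on the belief-function treatments cited above, restating the marble example in these terms) is the total weight that two bodies of evidence assign to disjoint possibilities:

```latex
% Degree of conflict K between two mass functions m_1 and m_2 (cf. Shafer, 1976;
% Klir & Folger, 1988): the total mass the two bodies of evidence assign to
% incompatible (disjoint) subsets of the universal set.
\[
  K \;=\; \sum_{A \cap B \,=\, \varnothing} m_1(A)\, m_2(B)
\]
% Marble example: with m_1(\{green only\}) = 1 and m_2(\{yellow only\}) = 1,
% the only focal pair is disjoint, so K = 1 (maximal conflict).  If instead both
% sources place all their mass on the nonspecific set \{some mix of yellow and
% green\}, the focal sets coincide, so K = 0: ambiguous but not conflictive.
```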
I will use source conflict here to refer to disagreement among sources and source ambiguity in cases where a source provides conflicting or uncertain evidence. A source may be a person, organization, or even a non-human provider of information. The main thesis of this paper is that people respond to source conflict as if it is a more severe kind of uncertainty than source ambiguity when both are informatively equivalent. We will restrict our attention to two "pure" cases: source conflict with precise (agreeing) evidence from each source versus source agreement with ambiguous or conflicting evidence from each source.

The analogy between conflict and a special type of ambiguity raises several research questions of interest. First, conflict and ambiguity may each engender feelings of uncertainty. For example, popular discourse concerning risks and hazards often lumps conflict or disagreement together with various other types of uncertainty. Conflicting messages from experts about an issue such as, say, whether there is a link between exposure to powerful EMFs and an increased risk of leukemia, appear to arouse feelings of uncertainty in the lay public, as do inconclusive or ambiguous messages from experts. Moreover, there is a tradition in social psychology going back to Festinger (1957) highlighting the aversiveness of inconsistency, and attribution theory includes consensus among the three major determinants of causal attribution (Kelley, 1967). That tradition has not been integrated with research on uncertainty effects, however, and the conflict–ambiguity connection may provide a path for such a synthesis. Given the heavy reliance by humans on vicarious learning and second-hand, socially constructed knowledge, conflicting evidence about a phenomenon would seem likely to arouse a sense of uncertainty, and a representation of the phenomenon as ambiguous would appear to be a plausible ensuing mental model of that body of evidence.

A second question, as indicated earlier, is whether people (dis)prefer conflict to various kinds of uncertainty. Given their informative equivalence, ambiguity and conflict make a natural pair for such preference tests. Under what conditions do people prefer to receive precise risk messages even though they conflict, rather than ambiguous risk messages even though they agree, or vice versa?

There are at least two psychological arguments for the notion that conflict may engender greater feelings of uncertainty than informatively equivalent ambiguity. First, judgments about the quality of conflicting evidence may be based on a heuristic assumption that evidence for and against a proposition will be proportionally distributed among equally trustworthy, unbiased sources. It seems unlikely that all of the evidence for H would accrue to one source and all of the evidence for not-H to the other, and such an unequal distribution raises suspicions about whether the sources are indeed trustworthy. Conflicting sources appear to be based on two different sets of evidence or two different ways of processing and interpreting the evidence, so that neither has access to all relevant evidence or interpretive methods. They may be somehow biased in favor of H or not-H.
Agreeing sources, on the other hand, seem more likely to be based on all relevant and available material. Conflicting sources either have not shared information with one another or are ignoring one another. Either way, they display greater ignorance and/or bias than agreeing sources and thereby seem less credible.

Second, there is the argument that under uncertainty a commitment to one side or another should be avoided. Given source conflict, the sources have effectively placed their bets, and if one of the sources is completely correct then the other is completely wrong. If the sources agree and are ambiguous, no bets have been placed yet and neither is correct nor incorrect. Moreover, mainly when it involves human sources, conflict overlaps more explicitly with the social (i.e., interpersonal) domain than ambiguity or other kinds of uncertainty, which are predominantly intra-personal. If sources disagree, then the judge becomes uncertain and also must disagree with at least one of the sources, but if the sources agree then the judge may agree with them and then only has to bear the uncertainty. If Lakoff and Johnson's (1980) claim that "argument is war" holds true, then war = disagreement = conflict = conflictive ambiguity, but conflict between sources is more threatening than conflict between pieces of evidence.

Finally, in contrast to the psychological arguments about greater credibility for consensual ambiguous sources, the belief function framework itself allocates nonzero believability to states of reality based on two sources that are precise and conflicting, but zero (vacuous) belief to those states when the sources agree but are ambiguous. Moreover, it identifies nonzero conflict in the former situation and zero conflict in the latter situation (details are available from the author). If any hypotheses could be drawn from these distinctions, perhaps it would be that people might find precise statements more believable or credible in some sense than ambiguous ones, while nevertheless being averse to conflict. However, there appears to be no compelling psychological argument supporting the hypothesis that people will generally find precise statements more believable than imprecise ones, especially when precision is accompanied by conflict with other sources.

Should we expect that explanations of conflict aversion and ambiguity aversion will differ? Current explanations of ambiguity aversion have cognitive and motivational components (Baron & Frisch, 1994). The cognitive component refers to a heuristic that people do better when they have more information, so ambiguity aversion is "created by missing information that is relevant and could be known" (Frisch & Baron, 1988). The idea of ambiguity as missing information is, of course, the dominant class of synonyms, metaphors, and other associations with the term. Dictionary definitions affiliate ambiguity with obscurity, lack of clarity, and similar terms. To "disambiguate" is to clarify or provide the missing information. The motivational component resides in issues of credit- and blameworthiness (Curley, Yates, & Abrams, 1986; Heath & Tversky, 1991).

However, most research on determinants of source credibility has focused on individual source
characteristics (e.g., competence) rather than on the relations among alternative available sources (e.g., conflict). Even in the cases where researchers have manipulated ambiguity via disagreement among sources (e.g., Kunreuther & Hogarth, 1989), they have treated source conflict as merely an instantiation of ambiguity as missing information.

A final consideration is whether conflict aversion will apply regardless of whether the prospect is gain or loss. For ambiguity this has been demonstrated not to be the case (e.g., Einhorn & Hogarth, 1986). In their extension of Ellsberg's (1961) classic study, they found that people are more likely to be ambiguity-averse when the prospect is gain than when it is loss, but that people are generally ambiguity-averse. We might reasonably expect that framing effects of the same kind found for ambiguity or other kinds of uncertainty will hold for conflict as well. Likewise, conflicting messages from two equally believable sources may be more disturbing in general than two informatively equivalent, ambiguous, but agreeing messages from the same sources. This hypothesis is investigated in all of the studies presented in this paper. If true, then any framing effects should be truncated in the sense that they are biased towards conflict aversion.

The hypotheses advanced here indicate cognitive heuristic and motivational explanations for conflict aversion as distinct from ambiguity aversion. The heuristic, described earlier, is that source conflict seems less likely than an even distribution of evidence among sources, and raises suspicions about whether the sources are trustworthy or credible. We already have good evidence that ambiguous messages decrease source credibility, so here is an opportunity to compare the impact of informatively equivalent uncertainties on source credibility. The guiding hypothesis here is that conflict has a worse effect on credibility than ambiguity, although framing effects may also occur. A possible motive for discounting conflicting sources lies in the anticipation of future relations between the subject and those sources. Since the subject must eventually disagree with one or both conflicting sources, it is easier to do this if the subject perceives both sources as discreditable.

Finally, any preference ordering of conflict and ambiguity implies that one may be traded off against the other. Exchangeability provides another plank in the important but incomplete "bridge between the psychology of choice and the psychology of group and organizational decisions" (Camerer & Weber, 1992, p. 339). A literature on politeness and consensus building indicates that vagueness and ambiguity may be used to avoid or defuse conflict (cf. Brown & Levinson's (1978) account of "negative politeness" and a related discussion of these issues in Smithson, 1989), thereby suggesting a social basis for motives behind the construction of ambiguity and vagueness in human communication. Ambiguous assertions are easier to defend than precise ones. Conversely, an insistence on precision or explicit resolution of opinions may be used to incite conflict. For instance, conventional wisdom and anecdotal testimony both have it that experts are under considerable pressure from both public and political stakeholders to provide unambiguous risk messages, but this generalization
ignores contingencies such as disagreement among experts. These considerations indicate a rich set of possible social motivations for and uses of ambiguity and precision in communication. While it is beyond the scope of this paper to explore these possibilities in any real depth, one of the primary purposes of the present research is to prepare the way for an understanding of conflict–ambiguity preferences and tradeoffs that explicitly takes social influences into account. There is a considerable body of research in the social identity tradition suggesting that the conflict aversion effect could be overturned under conditions where the person identifies as a member of a social category containing one expert and not the other. When conflicting opinions are divided along stereotyped group lines (i.e., "consistent" conflict), people may be more likely to make external attributions about why those opinions are held (e.g., Oakes, Turner, & Haslam, 1991).

The studies reported here include an exploratory pilot study (Study 1) and two experiments (Studies 2 and 3). All three studies investigate preferences for conflict or ambiguity under the following conditions:

• Information sources are presented as equivalent in every respect (except for providing conflicting messages) so that subjects should not expect sources to differ in status, ideological orientation, or access to knowledge.
• Subjects are given no reason to identify or align themselves with one source to a greater degree than the other, or to presuppose any prior relationship with either source (with the exception of one task in Study 3).

These conditions are intended as a "nonalignment" baseline for comparisons with future experiments in which social relational aspects are manipulated, such as subjects' identification with a source or interrelations among sources.

Study 1 addresses the question of whether people prefer consensual ambiguity to conflictive precision, and whether there are source credibility effects. Thus, Study 1 is akin to some ideas in Ellsberg's (1961) exploration of ambiguity aversion, but it uses realistic scenarios instead of gambles. Study 2 examines the impact of framing and the balance of evidence on conflict–ambiguity preferences and source credibility effects. The framing involves manipulating perceived consequences to the subject in hypothetical choice situations. The balance of evidence is manipulated to investigate whether conflict aversion will hold when the probability of a good outcome is low, or when the probability of a bad outcome is high. A body of empirical evidence indicates that ambiguity aversion reverses under both of those conditions (e.g., Gardenfors & Sahlin, 1982; Kahn & Sarin, 1988; Hogarth & Einhorn, 1990), so it is possible that conflict aversion might do likewise. Study 3 extends those investigations by manipulating the valence of the decision itself (e.g., deciding which project to oppose versus deciding which project to support), having decisional consequences befall others instead of the decision maker, and introducing a manipulation of subject identification with sources.
STUDY 1: CONFLICT AVERSION EFFECT
Method

Subjects. The subjects were 40 students enrolled in psychology at James Cook University in Queensland, Australia. All were volunteers for this study, which was embedded in another project.

Procedure and stimuli. Subjects were presented with three pairs of scenarios and questions in a computer-controlled judgment task. They were asked questions requiring them to choose a scenario by clicking a radio-button next to its label. Since this study was exploratory in nature, the three tasks were varied in three respects. Two of the tasks involved human sources, but the second one used computer forecasts as the sources, to determine whether subjects would respond to conflict between nonhuman sources in the same way as to conflictive human ones. The questions eliciting credibility choices also necessarily differed in wording, with "reliable" being used for the Eyewitness scenario, "believable" for the Cyclone forecast, and "knowledgeable" for the Alzheimer's risk scenario. Finally, whereas subjects were asked which scenario they preferred in the first two tasks, in the Alzheimer's task they were asked which scenario made them feel more "concerned." These differences in wording turned out to make little difference to subjects' responses. The scenarios are described below.

Eyewitness Testimony (based on Einhorn & Hogarth, 1985)

Suppose you are on a jury, and the court hears from two eyewitnesses to an armed robbery at a nightclub. Each of the witnesses makes a statement separately from the other, and they have not previously communicated with one another. One of the pieces of evidence that needs to be determined is the color of the automobile used by the robbers. Here are two possible scenarios for the witnesses' testimonies:

Scenario A: One witness says the automobile was green, the other says it was blue.
Scenario B: Both witnesses say the automobile was either green or blue.
Subjects were then asked two questions: Q1: Which testimony would you prefer to receive? Q2: Which scenario makes the pair of witnesses sound more reliable?
Cyclone Forecast

Suppose you're in Townsville during the wet season, and a cyclone is approaching the coast and heading this way. The weather bureau says that it will cross the coast in the next 12 hours, and announces that there are two independent computer forecasts of which metropolitan center is most likely to be hit. They then describe the scenario given by the forecasts.

Scenario A: Forecast A predicts the cyclone is most likely to hit Townsville. Forecast B predicts it is most likely to hit Ayr.
Scenario B: Forecasts A and B both predict that the cyclone is most likely to hit either Townsville or Ayr.
Subjects were then asked two questions: Q1: Which message would you prefer to receive? Q2: Which scenario makes the pair of forecasts sound more believable?
Alzheimer's Risk

There is currently controversy over whether aluminum causes Alzheimer's disease or not. Consider the following scenarios:

Scenario A: About half the experts say studies show that there is a link between aluminum and Alzheimer's disease, while the other half say studies do not show there is such a link.
Scenario B: All experts say that about half the studies show that there is a link between aluminum and Alzheimer's disease, while the other half do not show there is such a link.
Subjects were then asked two questions: Q1: Which message makes you feel more concerned about the risk? Q2: Which scenario makes the experts seem more knowledgeable?
Results

Table 1 shows the preference patterns and credibility effects for the three tasks. In the Eyewitness Testimony and Cyclone Forecast tasks, a large majority of subjects showed conflict aversion in the sense that they preferred the ambiguous scenario. In the Alzheimer's Risk scenario, which pitted conflicting sources presenting agreeing evidence against agreeing sources presenting conflicting evidence, the conflict aversion trend was diminished when subjects were asked which scenario would make them more concerned about the risk. However, the trend was still significant at the .05 level, as can be seen from the lower bound of the 95% confidence interval.

Similarly large majorities in all three tasks indicated that they found the eyewitnesses more reliable, forecasts more believable, and experts more knowledgeable in the ambiguous scenario. Conflict appears to affect source credibility more negatively than does ambiguity.
TABLE 1
Preferences and 95% Confidence Limits

Task                       Conflict        Ambiguity       95% C.I.
Preference
  Eyewitness testimony      6 (15.0%)      34 (85.0%)      .739, .961
  Cyclone forecast          6 (15.0%)      34 (85.0%)      .739, .961
  Alzheimer's risk         14 (35.0%)      26 (65.0%)      .502, .798
Credibility
  Eyewitness testimony      7 (17.5%)      33 (82.5%)      .707, .943
  Cyclone forecast          2 (5.0%)       38 (95.0%)      .882, 1(a)
  Alzheimer's risk          5 (12.5%)      35 (87.5%)      .773, .977

(a) The upper bound estimate is out of range at 1.02.
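The confidence limits in Table 1 are consistent with simple normal-approximation (Wald) intervals for a binomial proportion. A minimal sketch of that calculation follows (Python assumed; the helper function is illustrative and not taken from the original study):

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Ambiguity preference in the Eyewitness task: 34 of 40 subjects
print(wald_ci(34, 40))   # approximately (0.739, 0.961)

# Credibility in the Cyclone task: 38 of 40 subjects; the upper bound
# exceeds 1 (about 1.018), which is why Table 1 truncates it
print(wald_ci(38, 40))
```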
STUDY 2: FRAMING AND BALANCE OF EVIDENCE
Method

Subjects. The subjects were 191 students, of whom 104 were volunteer participants enrolled in psychology at James Cook University, while the remaining 87 were first-year psychology students at the Australian National University in Canberra, whose participation was rewarded with credit-points in their first-year psychology subject. They completed the tasks described below as part of a larger study.

Procedure and stimuli. As in Study 1, subjects were presented with pairs of scenarios on a computer screen and were asked questions that required them to choose a scenario by clicking a radio-button next to its label. The study had two tasks, the Diet Decision and Degree Decision, each of which had a different experimental design. In both tasks, subjects were asked to make a choice between two alternatives, and also to indicate in which alternative the experts seemed more knowledgeable.

The Diet Decision involved a one-way design with framing as the factor. Subjects were randomly assigned to one of two framing conditions, and asked to imagine that they had to choose one of two dietary plans for medical reasons. In one framing condition, both of the diets might or might not cause them to gain weight, whereas in the other condition the diets might or might not cause them to lose weight. Subjects were then asked whether the possible effect would constitute a positive or negative effect on their weight, and on that basis were assigned to a "positively framed" or "negatively framed" group for data analysis. Four subjects were excluded from the analysis because they were unable to specify whether they considered the effect on their weight positive or negative. Subjects were then presented with expert testimony about the two dietary plans. In the conflictive source scenario, subjects were told that "half the nutritional experts claim the studies show it [the diet] will significantly reduce (increase) weight, whereas the other half claim the studies show it will not alter weight." In the ambiguous source scenario, subjects were told that "all of the nutritional experts claim that half the studies show it will significantly reduce (increase) weight, whereas the other half of the studies show it will not alter weight." In addition to having to choose one of the two diets, subjects were also asked to indicate in which scenario the experts sounded more "knowledgeable" (the same measure as used for the Alzheimer's task in Study 1).

The Degree Decision asked subjects to imagine that they were employed full-time but considering undertaking full-time study to obtain a university degree. They were asked to choose one of two degree programs and also asked to indicate in which scenario the experts sounded more knowledgeable.
This part of the experiment had a 2 × 2 factorial design. The first factor was framing: whether the scenario is positive (the degree might increase their earning power) or negative (the degree might decrease their earning power). The second factor concerned the balance of evidence for the effect on earning power. In one condition 75% of the evidence indicated an effect and 25% indicated no effect for both degrees, whereas in the other condition the evidence was 50% either way. The reason for introducing the second factor was to vary the strength of apparent conflict or ambiguity of the sources and evidence. Subjects were then presented with conflicting expert testimony about one degree and informatively equivalent ambiguous expert testimony about the other. Under the 50–50 balance condition, in the conflicting scenario subjects were told that "half the career counseling experts claim the studies show you will earn significantly more (less) than you do now; whereas the other half claim the studies show you will earn about the same amount that you do now." In the ambiguous scenario, they were told that "all of the career counseling experts claim that half the studies show you will earn significantly more (less) than you do now; whereas the other half of the studies show you will earn about the same amount that you do now."

Results

Diet decision. The framing effect was significant and in the predicted direction (χ² = 6.481, p = .011). As predicted, the effect was truncated, with neither condition resulting in a majority preference for conflict. The odds ratio for Table 2 is (44/54)/(24/65) = 2.21, so the effect is only moderately strong.

TABLE 2
Preferences and Framing in the Diet Task

                               Choice
Framing        Conflict         Ambiguity        Total
Negative       44 (44.9%)       54 (55.1%)        98
Positive       24 (27.0%)       65 (73.0%)        89
Total          68 (36.4%)      119 (63.6%)       187

Table 3 shows the number of people who felt the experts sounded more knowledgeable in one scenario or the other. Unlike the choice results, there was no framing effect (χ² = 0.050, p = .823), but simply a strong tendency to see the ambiguous experts as more knowledgeable.

TABLE 3
Knowledgeability and Framing in the Diet Task

                               Knowledgeable
Framing        Conflict         Ambiguity        Total
Negative       20 (20.4%)       78 (79.6%)        98
Positive       17 (19.1%)       72 (80.9%)        89
Total          37 (19.8%)      150 (80.2%)       187
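As a check on the Diet results, the reported test statistic and odds ratio for Table 2 can be recovered directly from the cell counts. A minimal sketch (NumPy and SciPy assumed; this is a reconstruction, not the original analysis software):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Table 2 cell counts: rows = framing (negative, positive),
# columns = choice (conflict, ambiguity)
table2 = np.array([[44, 54],
                   [24, 65]])

# Pearson chi-square without Yates' continuity correction
chi2, p, df, expected = chi2_contingency(table2, correction=False)
print(round(chi2, 3), round(p, 3))       # roughly 6.48 and .011

# Odds ratio as reported in the text: (44/54)/(24/65)
print(round((44 / 54) / (24 / 65), 2))   # roughly 2.21
```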
TABLE 4
Preference, Balance, and Framing in the Degree Task

                                       Choice
               Balance: 75/25                    Balance: 50/50
Framing     Conflict      Ambiguity           Conflict      Ambiguity        Total
Negative    22 (61.1%)    14 (38.9%)          28 (47.5%)    31 (52.5%)        95
Positive     5 (16.1%)    26 (83.9%)          23 (35.4%)    42 (64.6%)        96
Total       27 (40.3%)    40 (59.7%)          51 (41.1%)    73 (58.9%)       191
Degree decision. Table 4 shows the choices of degree programs made by subjects under each condition. Hierarchical loglinear analysis indicated that the saturated model was the only acceptable model. The 75–25 condition produced a reflection effect and more extreme conflict aversion/seeking, whereas the 50–50 condition yielded a truncated effect and less extreme conflict aversion. The odds ratio in the 75–25 subtable is (22/14)/(5/26) = 8.17, whereas for the 50–50 subtable it is (28/31)/(23/42) = 1.65. The marginal choice distribution is the same for the 75–25 and 50–50 conditions (collapsed across framing), so there is no effect for balance on choice. Subjects were equally conflict-averse under both balance conditions. The chi-square contribution for the balance-by-choice effect is only χ² = 0.099 with df = 1, which is not significant. However, the effect for framing on choice is significant (its contribution is χ² = 11.086, df = 1, p = .0001) and demonstrates a truncated reflection effect. Subjects were more conflict-averse under the positive condition. The odds ratio for the corresponding marginal table (collapsed across balance) is (50/45)/(28/68) = 2.70.

Table 5 shows the distribution of subjects' indications of which were the more knowledgeable experts. The data are clearly quite skewed, indicating a strong tendency for subjects to see ambiguous experts as more knowledgeable in all conditions. Loglinear analysis indicated that the saturated model for this table was the only acceptable one, solely because of a significant three-way interaction effect. The omnibus test for two-way effects yielded χ² = 2.655 with df = 3, which is not significant (p = .4479). The odds ratio in the 75–25 subtable is (13/23)/(4/27) = 3.82, whereas for the 50–50 subtable it is (7/52)/(14/51) = 0.49.

TABLE 5
Knowledgeability, Balance, and Framing in the Degree Task

                                       Knowledgeable
               Balance: 75/25                    Balance: 50/50
Framing     Conflict      Ambiguity           Conflict      Ambiguity        Total
Negative    13 (36.1%)    23 (63.9%)           7 (11.9%)    52 (88.1%)        95
Positive     4 (12.9%)    27 (87.1%)          14 (21.5%)    51 (78.5%)        96
Total       17 (25.4%)    50 (74.6%)          21 (16.9%)   103 (83.1%)       191
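The hierarchical loglinear analyses reported here can be reproduced in outline by fitting Poisson regressions to the cell counts and comparing nested models. The sketch below does this for the Table 4 choice data, testing the three-way interaction; pandas and statsmodels are assumed, and this is a generic reconstruction rather than the original software or partitioning.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Cell counts from Table 4 (Degree task): framing x balance x choice
cells = [
    ("negative", "75/25", "conflict", 22), ("negative", "75/25", "ambiguity", 14),
    ("positive", "75/25", "conflict", 5),  ("positive", "75/25", "ambiguity", 26),
    ("negative", "50/50", "conflict", 28), ("negative", "50/50", "ambiguity", 31),
    ("positive", "50/50", "conflict", 23), ("positive", "50/50", "ambiguity", 42),
]
df = pd.DataFrame(cells, columns=["framing", "balance", "choice", "count"])

# Hierarchical loglinear models are Poisson regressions on the cell counts.
saturated = smf.glm("count ~ framing * balance * choice",
                    data=df, family=sm.families.Poisson()).fit()
two_way_only = smf.glm("count ~ (framing + balance + choice) ** 2",
                       data=df, family=sm.families.Poisson()).fit()

# Likelihood-ratio chi-square (df = 1) for dropping the three-way interaction
lr_chi2 = 2 * (saturated.llf - two_way_only.llf)
print(round(lr_chi2, 3))
```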
To sum up, both the Diet and the Degree tasks produced truncated framing effects on preferences but none on source credibility. Moreover, the Degree Decision results indicated that framing and balance of evidence have an interactive effect on conflict aversion. Only when the balance of evidence favored a negative outcome did a majority of subjects prefer the conflicting sources. Study 3 extended the investigation of framing effects and introduced the additional variable of identification with a source.

STUDY 3: DECISION VALENCE AND IDENTIFICATION WITH SOURCE
Method

Subjects. The subjects were 169 students, of whom 94 were volunteer participants enrolled in psychology at James Cook University, while the remaining 75 were first-year psychology students at the Australian National University in Canberra, whose participation was rewarded with credit-points in their first-year psychology subject. They completed the tasks described below as part of a larger study.

Procedure and stimuli. As in Study 2, subjects were presented with pairs of scenarios and questions on a computer screen. The study included two tasks, the Candidate Decision and the Resort Decision, each of which had a different experimental design. In both tasks, subjects were asked to make a choice between two alternatives, and also to indicate in which alternative the experts seemed more knowledgeable. However, unlike Study 2, the decisions in Study 3 were framed according to whether consequences were positive or negative for someone other than the subject. There is some evidence that people are more cautious when making decisions for others rather than for the self (Kogan & Zaleska, 1969), although this claim has not been well established. In this study, the object is to ascertain whether the effects found in Study 2 hold when decisional consequences befall others.

In the Candidate Decision, subjects were asked to imagine they were in charge of a small department in a corporation. This part of the experiment had a 2 × 3 factorial design. The first factor was framing. In the negative scenario, subjects were told that "recent downturns in profits have forced you to retrench one of your employees. There are two equally qualified employees, A and B, who are candidates for redundancy. Each has four supervisors who have given you their evaluations, and on that basis you must select one of the candidates for redundancy." In the positive scenario, they were told that "recent increases in profits have enabled you to hire one additional staff. There are two equally qualified applicants, A and B. Each has four referees who have
given you their evaluations, and on that basis you must select one of the candidates for the position." The second factor concerned the balance of evidence for the positive or negative aspects of the candidates. There were three conditions, with the balance being either 50–50 for both candidates or 75% (25%) negative and 25% (75%) positive for both candidates. The aim here is to investigate whether the effect found for balance in Study 2 holds for both positive and negative information about the candidates. As in Study 2, subjects were presented with a choice between a candidate with conflicting referees' reports (e.g., three positive and one negative) and a candidate with ambiguous referees' reports (e.g., each report being 75% positive and 25% negative). The choice being evaluated was which candidate the subjects preferred to hire (or not sack).

The Resort Decision required subjects to consider two proposals for the development of a marine resort in a World Heritage listed coastal area. They were informed that both proposals had environmental impact assessments conducted by qualified marine scientists hired by the State Ministry for the Environment, but both also had been assessed by equally qualified marine scientists from a university in the region who were members of the regional Environmentalist movement. In the conflicting assessment scenario, the groups of scientists disagreed on whether the evidence suggested environmental damage should the resort go ahead; and in the ambiguous scenario the scientists agreed but found the evidence for potential damage ambiguous. This part of the study had a 2 × 2 design. The first factor was framing: whether the subject was asked to choose a proposal to oppose or to permit going ahead. The second was identification with source: whether the subject was asked to take the role of the State Minister or the President of the environmentalist organization.

Results

Candidate decision. Table 6 shows the choice distributions, with the decision to sack a candidate reverse-coded so that it refers to choosing which candidate not to sack. Loglinear modeling indicated that the best model retained both the framing and balance effects on choice. Eliminating the framing effect resulted in a χ² change of 4.947 with df = 1 and p = .0261. Eliminating the balance effect yielded a χ² change of 21.119 with df = 2 and p < .00005. The final model fitted the data quite well and nearly equally so throughout the table, yielding χ² = 1.660 with df = 4 and p = .798.

The balance effect actually produced a sharp reflection effect when the candidate information was 75% negative. Subjects became conflict-seeking under those conditions, which indicates an Einhorn–Hogarth (1985) style of optimism bias for low-probability positive (or high-probability negative) outcomes. The framing effect was in the predicted direction, namely that a positive decision increased conflict aversion. In terms of conflict aversion, the odds ratio in the marginal framing-by-choice table (collapsed across balance) is (41/46)/(25/57) = 2.03. Note the similarity in form and magnitude of this relationship to the one found in the Degree scenario. However, people did not become conflict-seeking when making a negative decision. Moreover, in Table 6 it is clear that the framing effect occurred only when the evidence was balanced at 50–50. The odds ratio for that subtable is (33/8)/(24/18) = 3.09.

TABLE 6
Preference, Balance, and Framing in the Candidate Task

                                                  Choice
            Balance: 75% pos./25% neg.        Balance: 50/50               Balance: 25% pos./75% neg.
Framing     Conflict      Ambig.              Conflict      Ambig.         Conflict      Ambig.
Negative     8 (33.3%)    16 (66.7%)          18 (42.9%)    24 (57.1%)     15 (71.4%)     6 (28.6%)
Positive     5 (20.8%)    19 (79.2%)           8 (19.5%)    33 (80.5%)     12 (70.6%)     5 (29.4%)
Total       13 (27.1%)    35 (72.9%)          26 (31.3%)    57 (68.7%)     27 (71.1%)    11 (28.9%)
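A small sketch (NumPy assumed, counts taken from Table 6) showing how the two odds ratios quoted above are obtained, first by collapsing the three-way table across balance conditions and then within the 50–50 condition alone:

```python
import numpy as np

# Table 6 counts: balance (75% positive, 50/50, 75% negative) x framing (neg., pos.)
#                 x choice (conflict, ambiguity)
table6 = np.array([
    [[8, 16], [5, 19]],    # 75% positive / 25% negative reports
    [[18, 24], [8, 33]],   # 50/50
    [[15, 6], [12, 5]],    # 25% positive / 75% negative reports
])

# Collapse across balance: framing x choice margin = [[41, 46], [25, 57]]
margin = table6.sum(axis=0)
print(round((margin[0, 0] / margin[0, 1]) / (margin[1, 0] / margin[1, 1]), 2))  # ~2.03

# Framing effect within the 50/50 balance condition only: (33/8)/(24/18)
neg, pos = table6[1]
print(round((pos[1] / pos[0]) / (neg[1] / neg[0]), 2))                          # ~3.09
```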
TABLE 7
Knowledgeability, Balance, and Framing in the Candidate Task

                                                  Knowledgeability
            Balance: 75% pos./25% neg.        Balance: 50/50               Balance: 25% pos./75% neg.
Framing     Conflict      Ambig.              Conflict      Ambig.         Conflict      Ambig.
Negative     4 (16.7%)    20 (83.3%)           5 (11.9%)    37 (88.1%)      6 (28.6%)    15 (71.4%)
Positive     5 (20.8%)    19 (79.2%)           4 (9.8%)     37 (90.2%)      4 (23.5%)    13 (76.5%)
Total        9 (18.8%)    39 (81.2%)           9 (10.8%)    74 (89.2%)     10 (26.3%)    28 (73.7%)
The subjects again demonstrated a strong tendency to see ambiguous consensual experts as more knowledgeable than precise conflicting ones. Loglinear analysis revealed that the best model had no effect from either framing or balance on knowledgeability. The final model required only a marginal effect for Knowledgeability and Balance (because of unequal cell sizes), with χ² = 5.441, df = 8, and p = .710. While the data in Table 7 appear to suggest a difference between the balance conditions, this is not a reliable effect.

TABLE 8
Preference, Role, and Framing in the Resort Task

                                       Choice
               Role: Minister                    Role: Environmentalist
Framing     Conflict      Ambiguity           Conflict      Ambiguity        Total
Negative    21 (50.0%)    21 (50.0%)          19 (46.3%)    22 (53.7%)        83
Positive    14 (31.1%)    31 (68.9%)           5 (12.2%)    36 (87.8%)        86
Total       35 (40.2%)    52 (59.8%)          24 (29.3%)    58 (70.7%)       169
TABLE 9
Knowledgeability, Role, and Framing in the Resort Task

                                       Knowledgeability
               Role: Minister                    Role: Environmentalist
Framing     Conflict      Ambiguity           Conflict      Ambiguity        Total
Negative     8 (19.0%)    34 (81.0%)           8 (19.5%)    33 (80.5%)        83
Positive    14 (31.1%)    31 (68.9%)           8 (19.5%)    33 (80.5%)        86
Total       22 (25.3%)    65 (74.7%)          16 (19.5%)    66 (80.5%)       169
Resort decision. The best model was one that included an effect of framing on preference but no role effect. The final model had χ² = 4.936 with df = 4 and p = .294. The role effect showed a trend, but it was too weak to be significant. Table 8 shows that the framing effect was in the predicted direction, namely that a positive decision increased conflict aversion. In terms of conflict aversion, the odds ratio in the marginal table collapsing across role conditions is (40/43)/(19/67) = 3.28. Again, this is of a similar magnitude and form to those found in the Degree and Candidate tasks.

Table 9 shows, once again, a strong tendency for subjects to see ambiguous experts as more knowledgeable than conflicting ones. The best model had no effect from either framing or role on knowledgeability. The final model used only a marginal effect for knowledgeability, with χ² = 2.755 with df = 6 and p = .839. This pattern is similar to that found in the Candidate task.

GENERAL DISCUSSION
Study 1 produced consistent support for the main hypotheses, namely that source conflict is generally dispreferred to source ambiguity and conflicting sources are perceived as less credible than ambiguous sources. The first two scenarios contrasted precise but conflicting sources (the disagreeing eyewitnesses and conflicting computer forecasts) with agreeing but ambiguous ones and found strong preferences for ambiguity. It is noteworthy that subjects responded very similarly to the human and nonhuman sources, since this finding suggests that conflict aversion is not entirely explicable in terms of subjects’ inferences about human dispositions, motivations or the like. The Alzheimer’s risk scenario, on the other hand, provided evidence of a weaker but detectable preference for ambiguous sources in the sense of sources that agree but present conflicting evidence. The tasks for Studies 2 and 3 were intentionally fashioned along the lines of the Alzheimer’s risk scenario on the grounds that they would provide conservative tests of the main hypothesis. They also provided support for the existence of an overall conflict aversion effect. While more studies in other settings and cultures would be desirable, the evidence accumulated here indicates that conflict aversion is a robust, generalizable effect. If it holds generally, conflict aversion has several important implications. First, where there is a forced choice between reducing conflict versus reducing
uncertainty, people are likely to choose to reduce conflict at the expense of reducing uncertainty. Likewise, an ambiguous-consensual message may be used as a reinforcer for accepting a precise-conflictive one. In political matters that are fraught with both conflict and uncertainty, we should expect conflict reduction to be a higher priority than uncertainty reduction. Second, people prefer ambiguous-consensual risk messages to precise-conflictive ones. They also attribute greater credibility to sources providing the former kind of message than sources providing the latter kind. Audience influence therefore may motivate risk communicators to convey less precise, more consensual messages if the only alternative is conflict with other risk communicators. If so, risk communicators may face a dilemma that is structurally equivalent to a Prisoners’ Dilemma, when considering communications strategies while knowing beforehand that other risk assessors hold views contrary to their own. Finally, all other things being equal, people will tend to choose options whose risk assessments are ambiguous-consensual over those whose risk assessments are precise-conflictive. This behavioral effect suggests that under certain conditions, ambiguity may be used to influence choice. Studies 2 and 3 supported the framing effect hypothesis consistently in all tasks. When the evidence is evenly balanced, this effect is truncated toward conflict aversion so it is not a true reflection effect. When the outcome is negative, subjects’ preferences are nearly evenly split between conflict and ambiguity; whereas a positive outcome produces marked conflict aversion. This finding holds for different tasks, outcomes, and decisional consequences. The Diet, Degree, Candidate, and Resort Decisions each varied certain aspects of the outcomes and decisional consequences. The Diet and Degree Decisions had consequences for the decision-maker, while the Candidate Decision had consequences for another person and the Resort Decision had consequences for the environment. Moreover, the Candidate Decision pitted evidence of positive against negative outcomes rather than having a positive (or negative) alternative versus a neutral one. The framing effect under the condition of a 50–50 balance of evidence was nearly identical for all of them. Manipulating the balance of evidence provided consistent support for the hypothesis that a high probability of a negative outcome will induce conflict preference. In the Degree Decision when subjects were presented with a 75–25 evidential split favoring a negative outcome, 61% of the subjects chose the conflictive option whereas in the 50–50 condition 47% of them chose it. On the other hand, when presented with a 75–25 split favoring a positive outcome, 84% of the subjects chose the ambiguous alternative, whereas given a 50–50 split that percentage dropped to 65%. The Candidate Decision separated outcome framing from consequence framing, by crossing good (bad) candidate reports with good (bad) decisional outcomes for the candidates. Moreover, the outcome evidence was positive versus negative rather than one of those versus a neutral verdict. Nevertheless, the balance of evidence effect was consistent with the findings in the Degree task. Under a 75–25 negative–positive split, 71% of the subjects chose the conflictive
option regardless of decisional frame (i.e., whether they were choosing a candidate for hiring or not sacking). Under a 75–25 positive–negative split, decisional framing had a small effect, but overall 73% of the subjects chose the ambiguous option.

With the exception of the Resort Decision, the tasks in all three studies placed the subject in the position of being a nonexpert and a nonaligned outsider. The experts were presented as equally competent and equally socially "distant" from the subjects. The Resort task, however, provided no reliable evidence of a source identification effect. It is possible that the manipulation here simply was not effective, and that will be rectified in future studies. As mentioned in the introductory section, theory and research in the social identity tradition suggests that the conflict aversion effect could be overturned under conditions where the person identifies as a member of a social category containing one expert and not the other. But there are avenues of research along these lines that are more interesting than simple alignment or social categorization effects. For instance, Oakes, Turner, and Haslam (1991) present evidence that social categorization becomes salient to the extent that it matches perceived similarities and differences among members of groups. When conflicting opinions are divided along stereotyped group lines (i.e., "consistent" conflict), people are more likely to make external attributions about why those opinions are held; but when conflict is divided by group and contrary to group stereotypes (i.e., "inconsistent" conflict), then people perceive the group as less influential. It is possible that source credibility and/or conflict aversion effects could be moderated by the consistency of conflict, and future studies will pursue this issue.

Likewise, conflict aversion seems only partly linked to a stronger and considerably more consistent source credibility effect. In all tasks, precise-conflictive sources were viewed as less credible than ambiguous-consensual ones, even when subjects expressed preference for the precise-conflictive alternative. The credibility effect was essentially unchanged by any of the experimental manipulations in the studies reported here, including framing and balance of evidence. The evidence suggests that subjects do employ a heuristic that conflicting sources are less credit-worthy than agreeing but ambiguous ones. This strong effect merits further investigation along at least three lines. One of them has already been suggested, namely an exploration of the impacts of social categorization and source identification on the source credibility effect.

A second, related avenue is suggested by the links between the concepts of credibility and trustworthiness, and the fact that both are at the root of cooperative behavior. If argument = conflict = war, as Lakoff and Johnson (1980) say, then perhaps conflictive ambiguity is redeemed somewhat when it occurs in the context of an agreement between sources because it is interpreted as conciliatory. Hwang and Burgers (1997) observe that trust has at least two bases, one in the predictability (or reliability) of someone's behavior and the second in what Ring and Van de Ven (1994) have called faith in another person's "moral integrity." People may perceive precise-conflictive sources as having less moral integrity because they appear more ignorant and biased, and less
conciliatory than agreeing-ambiguous sources. Hence, the former are judged less trustworthy than the latter because they seem to have less goodwill and are less likely to cooperate.

A third line of research stems from source theory (Foddy, 1988; Webster & Sobieszek, 1974). Source theorists have noted that people may elect to pay more heed to one disagreeing expert than another, or they may average the conflicting judgements, or simply ignore them. Research on that topic thus far has produced findings that are consistent with both averaging and ignoring, but the source credibility finding indicates that disagreeing sources should be discounted more than ambiguous ones. Future studies will address the question of whether the source credibility effect flows on to source influence.

REFERENCES

Baron, J., & Frisch, D. (1994). Ambiguous probabilities and the paradoxes of expected utility. In G. Wright & P. Ayton (Eds.), Subjective probability (pp. 273–294). Chichester, UK: Wiley.
Black, M. (1937). Vagueness: An exercise in logical analysis. Philosophy of Science, 4, 427–455.
Brown, P., & Levinson, S. (1978). Universals in language use: Politeness phenomena. In E. N. Goody (Ed.), Questions and politeness. London: Cambridge Univ. Press.
Camerer, C., & Weber, M. (1992). Recent developments in modeling preferences: Uncertainty and ambiguity. Journal of Risk and Uncertainty, 5, 325–370.
Curley, S. P., Yates, J. F., & Abrams, R. A. (1986). Psychological sources of ambiguity avoidance. Organizational Behavior and Human Decision Processes, 38, 230–256.
Dubois, D., & Prade, H. (1987). Properties of measures of information in evidence and possibility theories. Fuzzy Sets and Systems, 24, 161–182.
Einhorn, H. J., & Hogarth, R. M. (1985). Ambiguity and uncertainty in probabilistic inference. Psychological Review, 92, 433–461.
Einhorn, H. J., & Hogarth, R. M. (1986). Decision making under ambiguity. Journal of Business, 59, S225–S250.
Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics, 75, 643–669.
Empson, W. (1930/1995). Seven types of ambiguity. London: Penguin.
Festinger, L. (1957). A theory of cognitive dissonance. Evanston, IL: Row, Peterson.
Foddy, M. (1988). Paths of relevance and evaluative competence. In M. Webster & M. Foschi (Eds.), Status generalization (pp. 232–247). Stanford, CA: Stanford Univ. Press.
Frisch, D., & Baron, J. (1988). Ambiguity and rationality. Journal of Behavioral Decision Making, 1, 149–157.
Gardenfors, P. E., & Sahlin, N.-E. (1982). Unreliable probabilities, risk-taking, and decision making. Synthese, 53, 361–386.
Heath, C., & Tversky, A. (1991). Preference and belief: Ambiguity and competence in choice under uncertainty. Journal of Risk and Uncertainty, 4, 5–28.
Hogarth, R. M., & Einhorn, H. J. (1990). Venture theory: A model of decision weights. Management Science, 36, 780–803.
Hwang, P., & Burgers, W. P. (1997). Properties of trust: An analytical view. Organizational Behavior and Human Decision Processes, 69, 67–73.
Kahn, B. E., & Sarin, R. K. (1988). Modeling ambiguity in decisions under uncertainty. Journal of Consumer Research, 15, 265–272.
Kelley, H. H. (1967). Attribution theory in social psychology. In D. Levine (Ed.), Nebraska Symposium on Motivation (Vol. 15). Lincoln: Univ. of Nebraska Press.
Klir, G. J., & Folger, T. A. (1988). Fuzzy sets, uncertainty, and information. Englewood Cliffs, NJ: Prentice Hall.
Kogan, N., & Zaleska, M. (1969). Level of risk selected by individuals and groups when deciding for self and others. Proceedings, 77th Annual Convention, APA, 423–424.
Kunreuther, H., & Hogarth, R. M. (1989). Risk, ambiguity, and insurance. Journal of Risk and Uncertainty, 2, 5–35.
Lakoff, G., & Johnson, M. (1980). Metaphors we live by. Chicago: Univ. of Chicago Press.
Oakes, P. J., Turner, J. C., & Haslam, S. A. (1991). Perceiving people as group members: The role of fit in the salience of social categorizations. British Journal of Social Psychology, 30, 125–144.
Ring, P. S., & Van de Ven, A. H. (1994). Development process of cooperative interorganizational relationships. Academy of Management Review, 19, 90–118.
Rottenstreich, Y., & Tversky, A. (1997). Unpacking, repacking, and anchoring: Advances in support theory. Psychological Review, 104, 406–415.
Shafer, G. (1976). A mathematical theory of evidence. Princeton, NJ: Princeton Univ. Press.
Smithson, M. (1989). Ignorance and uncertainty: Emerging paradigms. New York: Springer-Verlag.
Tversky, A., & Koehler, D. J. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review, 101, 547–567.
Webster, M., & Sobieszek, B. (1974). Sources of self-evaluation: A formal theory of significant others and social influence. New York: Wiley.

Received June 3, 1999