Personality and Individual Differences 85 (2015) 60–65
Multiple moral foundations predict responses to sacrificial dilemmas

Damien L. Crone *, Simon M. Laham

Melbourne School of Psychological Sciences, University of Melbourne, Australia

* Corresponding author at: Melbourne School of Psychological Sciences, Redmond Barry Building, University of Melbourne, VIC 3010, Australia. Tel.: +61 3 8344 4185. E-mail address: [email protected] (D.L. Crone).
Article info

Article history:
Received 30 January 2015
Received in revised form 15 April 2015
Accepted 21 April 2015
Available online 15 May 2015

Keywords:
Moral psychology
Trolley problem
Moral foundations
Moral Dyad Theory
Abstract

Moral dilemmas, by definition, demand trade-offs between competing moral goods (e.g., causing one harm to prevent another). Although moral dilemmas have served as a methodological pillar for moral psychology, surprisingly little research has explored how individual differences in moral values influence responses to dilemmatic trade-offs. In a cross-sectional study (N = 307), we tested competing claims regarding the relationship between the endorsement of foundational moral values and responses to sacrificial dilemmas, in which one judges the moral acceptability of causing fatal harm to one person to save multiple others. Inconsistent with Moral Dyad Theory, our results did not support the prediction that Harm concerns would unequivocally be the most important predictor of sacrifice endorsement. Consistent with Moral Foundations Theory, however, multiple moral values are predictive of sacrifice judgments: Harm and Purity negatively predict, and Ingroup positively predicts, endorsement of harmful action in service of saving lives, with Harm and Purity explaining similar amounts of unique variance. The present study demonstrates the utility of pluralistic accounts of morality, even in moral situations in which harm is central.

Crown Copyright © 2015 Published by Elsevier Ltd. All rights reserved.
1. Introduction

A recurring question in moral psychology concerns how different dispositional and situational factors influence responses to moral dilemmas requiring trade-offs between competing moral goods. The most commonly studied moral dilemmas are "sacrificial dilemmas" such as the trolley problem, in which one decides whether to harm one or more people in order to save a larger group (e.g., Petrinovich, O'Neill, & Jorgensen, 1993). To date, studies have examined the influence of broad personality traits (Moore, Stevens, & Conway, 2011), psychopathologies (Bartels & Pizarro, 2011), subtle contextual features (Moore, Clark, & Kane, 2008), and cognitive states (Greene, Morelli, Lowenberg, Nystrom, & Cohen, 2008) on sacrificial dilemma responses. Surprisingly little research, however, has considered how specific moral values shape responses to these dilemmas. To this end, we test two accounts of moral judgment, Moral Foundations Theory (Graham et al., 2013; Haidt, 2007) and Moral Dyad Theory (Gray, Young, & Waytz, 2012), to explore how specific moral values drive responses to sacrificial dilemmas. Each theory makes distinct predictions: the former asserts that moral judgments, even for issues chiefly about
harm (e.g., torture), are determined by multiple moral values (Koleva, Graham, Iyer, Ditto, & Haidt, 2012), while the latter proposes that judgments are determined by a single moral concern: harm. The present study aims not only to clarify the role of different moral values in explaining responses to sacrificial dilemmas, but also to inform debates concerning which theory better accounts for variation in moral judgment.

1.1. Contrasting pluralistic and monistic theories of moral judgment

Moral Foundations Theory (MFT; Graham et al., 2013; Haidt, 2007) is a morally pluralistic theory that attempts to parsimoniously explain variation in moral preferences by positing five discrete "foundational" moral values, which individuals endorse to varying degrees. These values exert pervasive, additive effects on evaluations of people, actions, and issues (Koleva et al., 2012). The five foundations are Harm (concerned with caregiving and alleviating suffering), Fairness (concerned with monitoring cheating, cooperation, and rights violations), Ingroup (concerned with loyalty to and solidarity with ingroup members), Authority (concerned with respect for superiors, tradition, and hierarchy), and Purity (concerned with overcoming human depravity and carnal predispositions through, for example, regulating eating and sexual behavior). Although, to our knowledge, MFT has not been systematically studied in the context of sacrificial dilemmas, one might expect
multiple foundations to shape responses. Most obviously, given the clear involvement of harm in these dilemmas, chronic Harm concerns should attenuate endorsement of sacrificing lives.1 Similarly, stronger Fairness concerns might decrease endorsement of sacrifice, as harming an innocent individual (even to protect others' interests) would constitute a violation of that individual's rights. Conversely, insofar as one endorses Ingroup (and perceives the endangered individuals as ingroup members), one may be more likely to endorse sacrifice (e.g., Petrinovich, O'Neill, & Jorgensen, 1993). As conventional sacrificial dilemmas do not involve clear hierarchical relationships, we do not make any predictions for Authority (although if such relationships were present, they could affect responses; Piazza, Sousa, & Holbrook, 2013). Finally, as Purity has been linked to concern for the sanctity of life via disapproval of suicide, abortion, euthanasia, and stem cell research (e.g., Koleva et al., 2012; Rottman, Kelemen, & Young, 2014), endorsing Purity should correspond to reluctance to endorse (fatally) harmful actions.

In contrast, another account of moral judgment, Moral Dyad Theory (Gray et al., 2012), asserts that all moral judgments are fundamentally driven by perceptions of harm: without (perceived) harm, there is no moral judgment. Even judgments of seemingly non-harmful acts (e.g., harmless Purity violations) should be reducible to (sometimes erroneous) perceptions of harm (Gray, Schein, & Ward, 2014). While at first appearing elegantly parsimonious, this proposition has been criticized as vague and unfalsifiable without a clear definition of harm and an unambiguous specification of the causal role of harm in judgments (Graham & Iyer, 2012). Given the many possible interpretations, we thus formulated multiple hypotheses based on varyingly strict readings of the theory. On the strictest reading, multiple foundations may individually correlate with sacrificial dilemma responses; in a multiple regression, however, foundations other than Harm should explain no unique variance. Failing that, a less strict reading of Moral Dyad Theory would predict Harm to be the most important predictor (potentially operationalized in multiple ways, described below), without specifying the statistical significance (or lack thereof) of the other foundations.

To our knowledge, only one study (Koleva, Selterman, Iyer, Ditto, & Graham, 2014, Study 2) reports findings loosely relating to our predictions. Consistent with MFT, Koleva et al. (2014) presented correlational evidence that standard measures of Harm, Fairness and Purity endorsement each negatively predicted endorsing harm in a set of sacrificial dilemmas. These relationships, however, were not explored further, leaving open the question of whether the effects of Fairness and Purity can be explained by Harm.2

1.2. Aims and hypotheses

To summarize, we aim to test contrasting predictions from pluralistic and monistic3 accounts of morality as they relate to sacrificial dilemmas. Our reading of MFT leads us to predict that endorsement of Harm, Fairness and Purity will decrease, and Ingroup increase, endorsement of sacrificial actions. These predictions, however, contrast with Moral Dyad Theory, which predicts that Harm will be either the only, or the most important, predictor of endorsement of sacrificial action. Importantly, using sacrificial
dilemmas allows a strong test of Moral Dyad Theory, given that harm is a core feature.

2. Methods

2.1. Participants

We recruited 330 participants via Amazon's Mechanical Turk (AMT). Data from 23 participants were excluded based on the following a priori criteria: incomplete surveys (n = 6), failing more than one (of four) attention checks (see below; n = 16), or completing the study more than once (n = 1). This left 307 participants (162 females), aged 18–72 (M = 37.59, SD = 12.70). Using a single-item 9-point political orientation scale (1 = Extreme left (liberal); 9 = Extreme right (conservative)) embedded in a qualification survey, we deliberately recruited relatively even numbers of liberals (n = 115, scoring 1–4), moderates (n = 92, scoring 5), and conservatives (n = 100, scoring 6–9).

2.2. Materials and procedure

All materials were presented via participants' web browsers, after participants had provided informed consent.

2.2.1. Sacrificial dilemmas (Moore et al., 2008)

Participants were presented with six dilemmas portraying hypothetical scenarios requiring the participant to judge the moral acceptability of fatally harming one individual to save the lives of multiple others. Participants were randomly assigned to see either "personal" or "impersonal" versions of each dilemma (see Moore et al., 2008). For all dilemmas, the self was not in danger and the death of the other was avoidable. For each dilemma, participants reported how "morally acceptable" they judged the sacrificial action (1 = Absolutely unacceptable; 6 = Absolutely acceptable). Responses across dilemmas were averaged to create an overall sacrifice rating. Also included were two attention-check "dilemmas" ("Donation" and "Collateral Damage"), for which judging the proposed actions as unacceptable and acceptable, respectively, constituted failure of the attention check. Presentation order was randomized across participants.

2.2.2. Moral Foundations Questionnaire (MFQ; Graham et al., 2011)

Participants then completed the 30-item MFQ, measuring endorsement of each foundation. First, participants rated the relevance of 15 different considerations to their judgments of right and wrong (0 = Not at all relevant; 5 = Extremely relevant; e.g., "Whether or not someone suffered emotionally" for Harm). Second, participants rated their agreement with 15 statements (0 = Strongly disagree; 5 = Strongly agree; e.g., "It is more important to be a team player than to express oneself" for Ingroup). Scores on the two sections are averaged to produce an index of endorsement of each foundation. Also included were two standard MFQ attention-check questions.4 Finally, participants completed a short demographic questionnaire.
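To make the scoring concrete, the sketch below (Python; not the authors' own code) illustrates how the composite scores described above could be computed. All column names and item groupings are hypothetical assumptions, not the actual survey variable names.

import pandas as pd

# Illustrative sketch only: average the six dilemma ratings into an overall
# sacrifice rating, and average each foundation's MFQ items (relevance and
# agreement sections combined) into a foundation score.
DILEMMA_COLS = [f"dilemma_{i}" for i in range(1, 7)]
FOUNDATION_ITEMS = {
    "harm": ["mfq_relevance_harm_1", "mfq_relevance_harm_2", "mfq_relevance_harm_3",
             "mfq_judgment_harm_1", "mfq_judgment_harm_2", "mfq_judgment_harm_3"],
    # ... analogous six-item lists for fairness, ingroup, authority, and purity
}

def score_composites(raw: pd.DataFrame) -> pd.DataFrame:
    scores = pd.DataFrame(index=raw.index)
    scores["sacrifice"] = raw[DILEMMA_COLS].mean(axis=1)
    for foundation, items in FOUNDATION_ITEMS.items():
        scores[foundation] = raw[items].mean(axis=1)
    return scores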
1 One could reasonably predict the opposite, given that the sacrifice is intended to reduce overall suffering. However, Harm as described by MFT (emphasizing caregiving instincts) seemingly prioritizes immediate over distal forms of suffering.
2 The correlations reported by Koleva et al. are insufficient to compare predictor importance as they are based on zero-order correlations from different (but overlapping) samples, and limited to three foundations.
3 We use 'monistic' rather than 'dyadic' because, contra pluralistic accounts, Moral Dyad Theory reduces moral judgments to a single moral concern: harm.
3. Results

3.1. Screening for outliers

Given that previous tests of MFT against Moral Dyad Theory have been susceptible to outliers (Gray, 2014), before testing our predictions we removed five multivariate outliers with significant Mahalanobis distances at α = .001, leaving 302 cases for analyses.
4 See http://www.moralfoundations.org/questionnaires.
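For readers wishing to reproduce this kind of screening, the following minimal sketch (Python; not the authors' code) flags multivariate outliers by comparing squared Mahalanobis distances against a chi-square critical value at α = .001. The variable set and column names are assumptions.

import numpy as np
import pandas as pd
from scipy.stats import chi2

def mahalanobis_outliers(df: pd.DataFrame, cols, alpha=0.001):
    """Boolean mask flagging cases whose squared Mahalanobis distance exceeds
    the chi-square critical value with len(cols) degrees of freedom."""
    X = df[cols].to_numpy(dtype=float)
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)   # squared distances
    return d2 > chi2.ppf(1 - alpha, df=len(cols))

# Hypothetical usage (assumed column names):
# cols = ["harm", "fairness", "ingroup", "authority", "purity", "sacrifice"]
# data_clean = data.loc[~mahalanobis_outliers(data, cols)]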
3.2. Bivariate relationships

We first examined bivariate relationships between MFQ scores and sacrifice ratings. Descriptive statistics, correlations, reliabilities, and standardized regression weights from a model with all moral foundations simultaneously predicting sacrifice ratings are presented in Table 1. As expected, both Harm and Purity correlated negatively with sacrifice ratings. Interestingly, there were no significant correlations between either Fairness or Ingroup and sacrifice ratings.
3.3. Predictor importance

Going beyond bivariate relationships, our follow-up analyses tested the relative importance of the five foundations in predicting sacrifice ratings. Consistent with the zero-order correlations, the regression showed that both Harm and Purity negatively predicted sacrifice ratings. In contrast to the correlational analysis (and consistent with initial predictions), Ingroup positively predicted sacrifice ratings, indicating a suppression effect. Fairness was again non-significant. These findings contradict the strongest prediction of Moral Dyad Theory (i.e., Harm as sole predictor), and fail to provide unequivocal support for the next strictest prediction (i.e., Harm being the most important predictor), as the betas for Purity and Harm were indistinguishable.

However, an important statistical point warrants consideration regarding the latter prediction. The correlations reported in Table 1 suggest considerable collinearity between moral foundations, which can lead to misleading conclusions when relying solely on betas or correlations to draw inferences about predictor importance. Indeed, for the analyses above, correlations and betas suggest different conclusions regarding the most important predictors. Thus, we employed dominance analysis (DA; Budescu, 1993), an analysis strategy that accounts for collinearity when comparing predictor importance, using the yhat package for R (Nimon & Oswald, 2013). Briefly, DA involves pairwise comparisons of the incremental validity of predictors (i.e., comparing the R² increment from adding the omitted predictor Xi to that from adding the omitted predictor Xj) across subsets of all possible models, using the 2^P − 1 combinations of P predictors in which Xi and/or Xj are omitted. These incremental validity comparisons yield inferences about three decreasingly strict kinds of predictor importance (see Nimon & Oswald, 2013). First, complete dominance occurs when the incremental validity of one predictor, Xi, is greater than that of another predictor, Xj, in all models from which both Xi and Xj are omitted. Second, conditional dominance occurs when the average incremental validity of Xi across models containing k predictors is greater than that of Xj at all levels of k (i.e., for each subset of models with k = 1, 2, ..., P predictors, the average incremental validity of Xi is greater than that of Xj). Finally, general dominance occurs when the average incremental validity of Xi is greater than that of Xj when averaging across all 2^P − 1 models. A summary of the DA is presented in Table 2 (an illustrative computational sketch follows Table 2).

First, we tested a strict interpretation of Harm as the most important predictor, namely that the unique variance accounted for by Harm would exceed the unique variance accounted for by all other foundations combined. Essentially, this amounts to comparing three regression models with sacrifice ratings regressed onto: (a) just Harm (Model 1, Table 2), (b) the other four foundations (Model 30, Table 2), and (c) all five foundations (the full model reported in Table 1). To provide a meaningful comparison between models with differing numbers of predictors, we compare adjusted R² (i.e., adjusted for the number of predictors).5

5 Calculated as 1 − ((1 − R²)(n − 1)/(n − k − 1)), where k is the number of predictors.
Subtracting the adjusted R² for one of the partial models (e.g., Harm only) from the full model approximates the unique variance explained by the other partial model (e.g., the other four foundations), and vice versa. On this metric, the unique variance explained by Harm was .03, and the unique variance explained by the other four foundations was .04, contradicting the strong interpretation of Harm being the most important predictor of sacrifice ratings.

Next, we tested the weaker version of the prediction of Harm as the most important foundation by comparing the importance of Harm to each foundation individually. As shown in Table 2, Harm failed to achieve complete dominance over all other foundations, dominating all foundations but Ingroup, which had greater incremental predictive validity in Model 13. Harm, however, did achieve conditional and general dominance over all predictors by having the greatest average R² increment for all levels of k (for conditional dominance, see the Average rows for k = 1 through 3, and Model 30, where Harm dominates the other models for k = 4; for general dominance, see the final row). Importantly, though, the observed conditional and general dominance of Harm was achieved only by a small margin: for all levels of k, the average R² increment of Harm was never more than .01 greater than that of the next best predictor, Purity. Furthermore, for k = 4, there was essentially no difference between the incremental predictive validity of Harm and Purity.6 To summarize, the DA provided only modest support for Moral Dyad Theory.

3.4. Note on the removal of outliers and exploratory analyses

The handling of outliers is recognized as a complex issue, often without clear solutions (Aguinis, Gottfredson, & Joo, 2013). While we opted for deleting outlying cases, this decision may have influenced our results. Thus, we re-ran our analyses while retaining the five outlying (but otherwise valid) cases. Interestingly, their removal may have biased results toward overstating the importance of Harm: Purity, rather than Harm, achieved general dominance over all other moral foundations (by a similarly narrow margin), providing the greatest R² increment in all but one of the models where it was omitted. For brevity, we omit full reporting of these analyses (though they are available upon request). Finally, we explored the possibility of differing relationships between moral foundations and sacrifice ratings on the basis of political orientation and the personal–impersonal distinction. As these analyses do not bear on our hypotheses, we report them in the Supplementary Materials.7

4. Discussion

This study tested competing predictions from two theories regarding the effect of different moral values on responses to sacrificial dilemmas. Specifically, from Moral Foundations Theory, we predicted that Harm, Fairness, and Purity would negatively predict, and Ingroup positively predict, approval of harmful actions in sacrificial dilemmas. In contrast, from Moral Dyad Theory, we predicted that Harm would be the sole significant predictor, or failing that, the most important predictor of moral judgments, compared to the other moral foundations (either in combination, or individually).

6 Before rounding, the R² increment of Harm in model 30 is .001 greater than that of Purity in model 26 (.030 vs. .029, respectively).
7 As an additional side-note, the betas and correlations reported in Table 1 indicate a suppression effect for Ingroup. Inspection of the DA in Table 2 (specifically, models with Ingroup omitted) reveals that Ingroup tended to offer its greatest R² increment in models that included Purity, pointing towards Purity as the source of the suppression effect.
Table 1
Summary of correlations, descriptives and regression weights for moral foundations predicting average sacrifice ratings.

Item                  M      SD     b (SE)         1         2        3        4        5        6
1. Sacrifice rating   2.70   1.12   –              –
2. Harm               3.46   .86    −.25 (.08)**   −.21***   .68
3. Fairness           3.33   .87    .06 (.08)      .09       .69***   .73
4. Ingroup            2.42   .92    .21 (.08)**    .04       .30***   .22***   .73
5. Authority          2.74   .90    .07 (.09)      .04       .32***   .20***   .70***   .71
6. Purity             2.40   1.25   −.25 (.08)**   −.16**    .33***   .14*     .59***   .72***   .84

Notes: N = 302; *** p < .001; ** p < .01; * p < .05; regression weights are standardized coefficients from a model with sacrifice ratings regressed onto all foundations simultaneously (F(5, 296) = 6.25, p < .001, adjusted R² = .08); reliabilities appear on the diagonal; alphas for sacrifice ratings were not calculated because different participants saw different dilemmas.
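As an illustration of how the quantities in Table 1 could be reproduced (a rough Python sketch, not the authors' analysis code; column names are assumptions), zero-order correlations and standardized regression weights with all five foundations entered simultaneously can be obtained as follows.

import numpy as np
import pandas as pd

FOUNDATIONS = ["harm", "fairness", "ingroup", "authority", "purity"]

def standardized_betas(df: pd.DataFrame, predictors, outcome="sacrifice"):
    """Standardized coefficients from regressing the outcome on all predictors
    simultaneously (z-score all variables, then take ordinary least-squares slopes)."""
    cols = list(predictors) + [outcome]
    z = (df[cols] - df[cols].mean()) / df[cols].std()
    X = np.column_stack([np.ones(len(z))] + [z[p].to_numpy() for p in predictors])
    coef, *_ = np.linalg.lstsq(X, z[outcome].to_numpy(), rcond=None)
    return pd.Series(coef[1:], index=list(predictors))

# Hypothetical usage:
# corr = data[["sacrifice"] + FOUNDATIONS].corr()   # correlations, as in Table 1
# betas = standardized_betas(data, FOUNDATIONS)     # b column of Table 1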
Table 2
Model R² and incremental predictive validity for all combinations of moral foundations predicting sacrifice ratings.

Model              R²     R² increase
                          H      F      I      A      P
1. H               .05    –      .01    .01    .00    .01
2. F               .01    .04    –      .00    .00    .02
3. I               .00    .06    .01    –      .01    .05
4. A               .00    .05    .01    .01    –      .04
5. P               .02    .03    .00    .03    .01    –
Average (k = 1)           .04    .01    .01    .01    .03
6. H, F            .05    –      –      .01    .00    .01
7. H, I            .06    –      .01    –      .00    .03
8. F, I            .01    .05    –      –      .01    .05
9. H, A            .05    –      .01    .01    –      .02
10. F, A           .01    .04    –      .01    –      .04
11. I, A           .01    .05    .01    –      –      .04
12. H, P           .05    –      .00    .04    .02    –
13. F, P           .03    .03    –      .03    .01    –
14. I, P           .05    .04    .01    –      .00    –
15. A, P           .04    .04    .01    .02    –      –
Average (k = 2)           .04    .01    .02    .01    .03
16. H, F, I        .06    –      –      –      .00    .03
17. H, F, A        .05    –      –      .01    –      .02
18. H, I, A        .06    –      .01    –      –      .03
19. F, I, A        .02    .05    –      –      –      .04
20. H, F, P        .06    –      –      .04    .02    –
21. H, I, P        .09    –      .00    –      .00    –
22. F, I, P        .06    .03    –      –      .00    –
23. H, A, P        .07    –      .00    .02    –      –
24. F, A, P        .04    .03    –      .02    –      –
25. I, A, P        .05    .04    .01    –      –      –
Average (k = 3)           .04    .01    .02    .01    .03
26. H, F, I, A     .07    –      –      –      –      .03
27. H, F, I, P     .09    –      –      –      .00    –
28. H, F, A, P     .07    –      –      .02    –      –
29. H, I, A, P     .09    –      .00    –      –      –
30. F, I, A, P     .07    .03    –      –      –      –
Average (all)             .04    .01    .02    .01    .03

Notes: N = 302; H = Harm; F = Fairness; I = Ingroup; A = Authority; P = Purity; in the original table, bolded values in the R² column denote the model with k predictors explaining the greatest amount of variance, and bolded values in the R² increase columns denote the omitted predictor that results in the greatest R² increase for that model.
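The sketch below (Python; a rough stand-in for the authors' analysis with the yhat package in R, with hypothetical column names) shows how the all-subsets R² values and incremental R² columns of Table 2 could be computed. Conditional and general dominance then follow from averaging the increments within and across model sizes.

from itertools import combinations
import numpy as np
import pandas as pd

def r_squared(df, predictors, outcome="sacrifice"):
    """R-squared from an ordinary least-squares regression of the outcome on the predictors."""
    X = np.column_stack([np.ones(len(df))] + [df[p].to_numpy() for p in predictors])
    y = df[outcome].to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def dominance_table(df, predictors, outcome="sacrifice"):
    """For every non-empty subset of predictors (2^P - 1 models), record the model
    R-squared and the R-squared increase from adding each omitted predictor."""
    rows = []
    for k in range(1, len(predictors) + 1):
        for subset in combinations(predictors, k):
            base = r_squared(df, list(subset), outcome)
            increments = {p: r_squared(df, list(subset) + [p], outcome) - base
                          for p in predictors if p not in subset}
            rows.append({"model": " + ".join(subset), "k": k, "R2": base, **increments})
    return pd.DataFrame(rows)

# Hypothetical usage: averaging each predictor's increment column within each k
# gives the "Average (k = ...)" rows, and across all models the "Average (all)" row.
# table = dominance_table(data, ["harm", "fairness", "ingroup", "authority", "purity"])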
Consistent with MFT, multiple foundations predicted sacrifice ratings, in the expected direction (except for null findings for Fairness). Harm and Purity negatively predicted sacrifice ratings, consistent with the two foundations respectively underlying an aversion to harmful actions and a desire to uphold the sanctity of human life. Ingroup, once entered into a regression with other foundations, positively predicted sacrifice ratings, consistent with the view that greater Ingroup concerns produce willingness to override the interests of individual group members for group benefit. These findings necessarily preclude support for the strictest reading of Moral Dyad Theory (i.e., Harm as the sole predictor of moral judgments). Testing a less strict reading of Moral Dyad Theory, by comparing the predictor importance of the five moral foundations, again failed to conclusively support Moral Dyad Theory. Harm concerns accounted for less unique variance than the other four foundations combined. Moreover, while dominance analysis suggested that Harm offered the greatest R² increment over each foundation individually, the difference in predictive validity between Harm and Purity was negligible, and contingent on the deletion of five outlying (though otherwise valid) responses that, if included, led to Purity, rather than Harm, being the most important predictor.
4.1. Implications for moral psychology

Our results further demonstrate the value of moral pluralism in explaining moral judgments. Although this value has been demonstrated for many contentious political issues (e.g., Koleva et al., 2012), our findings extend this research in one important way: by removing political context. Many demonstrations of MFT involve polarizing issues for which different ideological groups have deeply entrenched and well-articulated positions (e.g., conservative opposition to abortion). In such cases, it is possible that (at least part of) the observed relationship between moral foundations and moral judgments can be accounted for by people adopting positions that conform with those of their ideological counterparts. Indeed, attitudes can easily be shaped by one's social identity (Cohen, 2003; cf. Koleva et al., 2012). In contrast, our analyses of moral judgments avoided this concern by studying a topic that is not ideologically divisive (i.e., unlike abortion, there are no activist groups advocating specific positions on trolley problems). We thus demonstrate the predictive value of moral pluralism even when a moral question is devoid of ideological significance, notwithstanding the limited external validity of such dilemmas (Mook, 1983). Of course, our results speak to the universality of Moral Dyad Theory (i.e., whether Harm always trumps other concerns), but not its generality (i.e., when Harm trumps other concerns). Such issues are better addressed by Koleva et al. (2012) and similar studies.

Our results concur with the uncontroversial prediction of Moral Dyad Theory (and MFT) that Harm concerns should predict the perceived immorality of harmful acts. Where Moral Dyad Theory departs from MFT, however, is in its assertion that all moral transgressions are fundamentally driven by harm (Gray et al., 2012, p. 107). Depending on one's interpretation, this assertion leads to different predictions. The strictest prediction would be that no moral foundation would explain unique variance beyond that accounted for by Harm. A less strict prediction would be that Harm would account for more unique variance than either all other foundations combined, or each other foundation individually. At best, we found equivocal support for the weakest of these predictions. While Harm was (relatively) strongly predictive of moral judgments, other foundations still explained unique variance. Furthermore, Harm explained less variance than could be uniquely attributed to the other four foundations and, depending on outlier handling procedures, less variance than Purity.
Combined with similar studies (e.g., Koleva et al., 2014; Rottman et al., 2014), these results provide compelling grounds for rejecting the hypothesis that moral judgments are entirely, or even predominantly, driven by Harm concerns, even when harm is a core feature of the dilemma. However, our findings are insufficient to reject Moral Dyad Theory as a whole. Gray et al. (2014) have shown that seemingly harmless acts automatically activate the concept of harm, and argue that other moral values only influence moral judgment insofar as they prompt one to perceive harm (though the causal influence of such associations on judgment has yet to be shown). Our findings show that chronic Harm concerns do not exclusively drive moral judgments (even about harms); however, they do not clearly rule out the possibility that harm perceptions may be separately driving judgments. In the sacrificial dilemmas employed here, the harm-perception account would predict that greater Purity concerns should explain disapproval of fatal sacrifices only insofar as greater Purity concerns lead to greater perceived harm, which in turn leads to moral disapproval. Essentially, the relationship between non-Harm moral foundations (e.g., Purity) and moral judgments (e.g., sacrifice ratings) should be fully mediated by an implicit association between violations of those foundations and harm concepts. Importantly, this would provide a critical test of Moral Dyad Theory: if implicit associations as measured by Gray et al. (2014) do not mediate the relationship between non-Harm concerns and moral judgments, it is doubtful that any strong version of Moral Dyad Theory could be true.8
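To illustrate how such a test could be implemented (a hypothetical sketch in Python, not an analysis conducted in this paper; the implicit-association measure and variable names are assumptions), the indirect effect of Purity concerns on sacrifice ratings via harm associations could be estimated with a percentile bootstrap.

import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate of the indirect effect x -> m -> y."""
    a = np.polyfit(x, m, 1)[0]                           # path a: x predicting m
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]     # path b: m predicting y, controlling for x
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """95% percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        estimates.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(estimates, [2.5, 97.5])

# Hypothetical usage: x = MFQ Purity scores, m = implicit purity-harm association
# strength, y = mean sacrifice rating (NumPy arrays of equal length).
# ci = bootstrap_ci(x, m, y)
# Full mediation would imply the direct effect of x on y shrinks toward zero once
# m is controlled, with the bootstrap CI for the indirect effect excluding zero.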
4.2. Caveats and limitations

While our study speaks to two prominent theories, it of course has limitations. First, although we demonstrate that different moral values predict responses to standard sacrificial dilemmas, we (in agreement with others, e.g., Kahane, Everett, Earp, Farias, & Savulescu, 2015) do not believe these dilemmas measure preferences for different moral codes (e.g., utilitarian vs. deontological preferences). Second, the extent to which predictor importance was influenced by measurement reliability is unclear.9 The MFQ subscales show considerable variation in internal consistency (e.g., Graham et al., 2011), which may have attenuated the magnitude of particular relationships. While this may bear on our findings for the weaker predictions regarding the ordering of predictor importance (possibly underestimating the importance of Harm), it does not negate the finding that multiple foundations are predictive. Importantly, this limitation is by no means unique to our study (e.g., Koleva et al., 2012; Rottman et al., 2014), and calls for urgent progress in the measurement of moral values. Another point demanding consideration is the cross-sectional nature of our study. Our data alone say nothing of the causal role of specific moral values in driving judgments; nonetheless, we describe the observed relationships in causal terms. This is warranted, we believe, given the diverse experimental literature linking manipulations of dilemma features pertinent to particular moral values (e.g., the vividness of harm or the endorsement of authority figures) to corresponding shifts in judgments (Moore et al., 2008; Piazza et al., 2013). As already mentioned, though, much work remains to clarify the causal mechanisms connecting values to judgments.

4.3. Conclusion

Our primary aim was to test contrasting predictions from two prominent theories of moral psychology. Specifically, we pitted Moral Foundations Theory's prediction of multiply determined judgments against Moral Dyad Theory's prediction of the singular importance of Harm to judgments, using canonical harm-related sacrificial dilemmas. Although individual differences in Harm concerns were an important predictor of sacrificial dilemma responses, we failed to find convincing support for Moral Dyad Theory's prediction that Harm would be the only, or most important, predictor of moral judgments. While Harm did predict endorsement of sacrifice, so too did Ingroup and Purity (the latter exhibiting comparable predictive power). The present study thus provides evidence against the hypothesis that Harm is the essence of moral judgment. However, decisive evidence bearing on this hypothesis requires further experimental research, as proposed above, that can establish the causal role of harm perception in moral judgment.

Acknowledgements

This project was funded by an internal grant provided by The University of Melbourne. We are grateful for feedback from Ain Simpson and Margaret Webb, as well as members of the Melbourne Moral Psychology Lab, the Macquarie University Centre for Agency, Values and Ethics, and an anonymous reviewer.

Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.paid.2015.04.041.
8 Alternatively, one may posit an interaction between the implicit association and one's chronic concern for one or both of the relevant moral foundations, such that accounting for the interaction renders the non-Harm foundation non-significant.
9 As an anonymous reviewer suggested, one might also argue that the relatively larger variability in Purity scores also contributes to its predictive advantage, though we believe reliability to be a greater concern.
References

Aguinis, H., Gottfredson, R. K., & Joo, H. (2013). Best-practice recommendations for defining, identifying, and handling outliers. Organizational Research Methods, 16(2), 270–301. http://dx.doi.org/10.1177/1094428112470848
Bartels, D. M., & Pizarro, D. A. (2011). The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition, 121(1), 154–161. http://dx.doi.org/10.1016/j.cognition.2011.05.010
Budescu, D. V. (1993). Dominance analysis: A new approach to the problem of relative importance of predictors in multiple regression. Psychological Bulletin, 114(3), 542–551. http://dx.doi.org/10.1037//0033-2909.114.3.542
Cohen, G. L. (2003). Party over policy: The dominating impact of group influence on political beliefs. Journal of Personality and Social Psychology, 85(5), 808–822. http://dx.doi.org/10.1037/0022-3514.85.5.808
Graham, J., Haidt, J., Koleva, S. P., Motyl, M., Iyer, R., Wojcik, S. P., et al. (2013). Moral Foundations Theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology. http://dx.doi.org/10.1016/B978-0-12-407236-7.00002-4
Graham, J., & Iyer, R. (2012). The unbearable vagueness of "essence": Forty-four clarification questions for Gray, Young, and Waytz. Psychological Inquiry, 23(2), 162–165. http://dx.doi.org/10.1080/1047840X.2012.667767
Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S. P., & Ditto, P. H. (2011). Mapping the moral domain. Journal of Personality and Social Psychology, 101(2), 366–385. http://dx.doi.org/10.1037/a0021847
Gray, K. (2014). Harm concerns predict moral judgments of suicide: Comment on Rottman, Kelemen and Young (2014). Cognition, 133(1), 329–331. http://dx.doi.org/10.1016/j.cognition.2014.06.007
Gray, K., Schein, C., & Ward, A. F. (2014). The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology: General, 143(4), 1600–1615. http://dx.doi.org/10.1037/a0036149
Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101–124. http://dx.doi.org/10.1080/1047840X.2012.651387
Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107(3), 1144–1154. http://dx.doi.org/10.1016/j.cognition.2007.11.004
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998–1002. http://dx.doi.org/10.1126/science.1137651
Kahane, G., Everett, J. A. C., Earp, B. D., Farias, M., & Savulescu, J. (2015). "Utilitarian" judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good. Cognition, 134, 193–209. http://dx.doi.org/10.1016/j.cognition.2014.10.005
Koleva, S. P., Graham, J., Iyer, R., Ditto, P. H., & Haidt, J. (2012). Tracing the threads: How five moral concerns (especially Purity) help explain culture war attitudes. Journal of Research in Personality, 46(2), 184–194. http://dx.doi.org/10.1016/j.jrp.2012.01.006
Koleva, S. P., Selterman, D., Iyer, R., Ditto, P. H., & Graham, J. (2014). The moral compass of insecurity: Anxious and avoidant attachment predict moral judgment. Social Psychological and Personality Science, 5(2), 185–194. http://dx.doi.org/10.1177/1948550613490965
Mook, D. G. (1983). In defense of external invalidity. American Psychologist, 38(4), 379–387. http://dx.doi.org/10.1037/0003-066X.38.4.379
Moore, A. B., Clark, B. A. M., & Kane, M. J. (2008). Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychological Science, 19(6), 549–557. http://dx.doi.org/10.1111/j.1467-9280.2008.02122.x
Moore, A. B., Stevens, J., & Conway, A. R. A. (2011). Individual differences in sensitivity to reward and punishment predict moral judgment. Personality and Individual Differences, 50(5), 621–625. http://dx.doi.org/10.1016/j.paid.2010.12.006
Nimon, K. F., & Oswald, F. L. (2013). Understanding the results of multiple linear regression: Beyond standardized regression coefficients. Organizational Research Methods, 16(4), 650–674. http://dx.doi.org/10.1177/1094428113493929
Petrinovich, L., O'Neill, P., & Jorgensen, M. (1993). An empirical study of moral intuitions: Toward an evolutionary ethics. Journal of Personality and Social Psychology, 64(3), 467–478. http://dx.doi.org/10.1037//0022-3514.64.3.467
Piazza, J., Sousa, P., & Holbrook, C. (2013). Authority dependence and judgments of utilitarian harm. Cognition, 128(3), 261–270. http://dx.doi.org/10.1016/j.cognition.2013.05.001
Rottman, J., Kelemen, D., & Young, L. L. (2014). Tainting the soul: Purity concerns predict moral judgments of suicide. Cognition, 130(2), 217–226. http://dx.doi.org/10.1016/j.cognition.2013.11.007