Decision biases in the context of ethics: Initial scale development and validation


Personality and Individual Differences 153 (2020) 109609




Logan L. Watts (a,*), Kelsey E. Medeiros (b), Tristan J. McIntosh (c), Tyler J. Mulhearn (d)

a Department of Psychology, Baruch College & The Graduate Center, CUNY, 55 Lexington Ave, New York, NY 10010, United States
b Department of Management, The University of Nebraska at Omaha, 6001 Dodge Street, Omaha, NE 68182, United States
c Washington University School of Medicine, 660 S. Euclid Ave., St. Louis, MO 63110, United States
d Neurostat Analytical Solutions, LLC, 2331 Mill Road, Suite 100, Alexandria, VA 22314, United States
* Corresponding author. E-mail address: [email protected] (L.L. Watts).

https://doi.org/10.1016/j.paid.2019.109609
Received 8 July 2019; Received in revised form 7 September 2019; Accepted 8 September 2019
0191-8869/© 2019 Elsevier Ltd. All rights reserved.

Keywords: Decision biases; Ethical decision making; Scale development

ABSTRACT

Two studies were conducted to develop and validate the Biased Attitudes Scale (BiAS)—a self-report measure that assesses an individual's propensity to express three types of decision biases proposed to inhibit ethical decision making. In study one, exploratory factor analysis results supported a three-factor model of decision biases, including simplification, verification, and regulation biases. In study two, the three-factor model demonstrated adequate fit when subjected to cross-validation procedures using an independent sample. In addition, BiAS scores correlated with a battery of individual differences measures, as well as three unique types of ethical decision-making criteria. BiAS scores also demonstrated incremental validity in predicting ethical decision making after accounting for individual differences variables.

"The last thing I would have ever expected to happen to me in my life would be that, in fact, I would be accused of doing something wrong and maybe even something criminal." (Kenneth Lay, former CEO of Enron)

1. Introduction

Following corporate ethics scandals, a common theme may often be observed in explanations provided by the accused—that they did not view their actions as wrong when they were committing them. For example, the CFO of Enron, Andy Fastow, has noted that it never occurred to him at the time that his fraudulent behavior was illegal (Elkind, 2013). The opening quote by Kenneth Lay, former CEO of Enron, demonstrates a similar sentiment (Leung, 2005). How could two senior leaders of a major U.S. corporation be so blind to the potential negative consequences of their decisions? We suggest such blind spots in decision making are common, and that decision biases may be to blame for some of these errors.

It has been recognized for some time that human decision making is fraught with biases (Kahneman & Tversky, 1979), and ethical decision making is no exception (Jones, 1991). Biases broadly refer to "pervasive simplifications or distortions in judgment and reasoning that systematically affect human decision making" (Toet, Brouwer, van den Bosch, & Korteling, 2016, p. 1). With respect to ethical decision making, biases have been described as "cognitive errors" that can manifest when facing ethical dilemmas (Medeiros et al., 2014). Biases have been shown to inhibit rational decision-making processes in a wide variety of contexts, including consumer purchase choices (Park & Lessig, 1981), entrepreneurial risk taking (Dosi & Lovallo, 1997), creative problem solving (Watts, Mulhearn, Todd, & Mumford, 2017), and medical diagnostic decisions (Croskerry, 2003).

Given that the subject of biases has received so much attention across a variety of literatures, it is not surprising that several general and context-specific taxonomies of decision-making biases have been proposed (e.g., Arnott, 1998; Blumenthal-Barby & Krieger, 2015; Carter, Kaufmann, & Michel, 2007; Duffy, 1993; Jones & Roelofsma, 2000; Kerr & Tindale, 2004; Oreg & Bayazit, 2009; Stanovich, Toplak, & West, 2008). However, almost none of these efforts have focused on the context of ethical decision making. Ethical decision making refers to the processes by which individuals make sense of, and formulate responses to, ethical dilemmas in order to arrive at professionally appropriate decisions (Mumford et al., 2008). Ethical dilemmas are ambiguous, high-stakes events in which there is no one right answer (Lefkowitz, 2017). In this context, we define biases as faulty beliefs, attitudes, or behavioral tendencies that constrain cognition and thereby inhibit an individual's ability to make ethical decisions.


Because of the cognitively demanding nature of ethical dilemmas, unique biases may emerge in these types of situations that make unethical decisions more likely (Jones, 1991). Nevertheless, biases have received relatively little theoretical and empirical attention in this domain. One reason for this limited attention may be the lack of an available measure targeted to the context of ethics. The primary objective of the present effort was to develop a measure of biases relevant to this context that can be used to advance research.

2. Decision biases: theory and measurement

Historically, decision biases have been studied almost exclusively in isolation, and some biases have received considerably more attention than others. For example, Kühberger's (1998) meta-analysis showed that in less than two decades after Kahneman and Tversky's (1984) landmark work on the framing effect, over one hundred empirical studies had been conducted on this particular type of decision bias. Another example may be observed with respect to the hundreds of studies investigating hindsight bias (Christensen-Szalanski & Willham, 1991), or the tendency to assume past events were predictable after becoming aware of these events (Blank, Musch, & Pohl, 2007). Given the independent nature of the many separate streams of research on specific decision biases, scholars have only begun to explore how biases might relate to one another, or how biases might emerge in response to particular contexts, such as ethical decision making.

More recently, scholars have sought to determine if the multitude of decision biases appearing across different literatures might be meaningfully categorized, as well as examine the potential relationships among these categories (e.g., Bruine de Bruin, Parker, & Fischhoff, 2007). For example, Oreg and Bayazit (2009) proposed a three-factor model consisting of simplification biases (e.g., insensitivity to base rates, illusory correlations), verification biases (e.g., self-serving bias, illusions of control), and regulation biases (e.g., framing errors, regret avoidance). These three types of decision biases are held to emerge from a complex interplay of individual differences, motivated by three fundamental human needs. First, simplification biases emerge from personal traits bearing on cognitive style or ability (e.g., intelligence, need for cognition). The expression of simplification biases is ultimately motivated by the need to comprehend reality. Next, verification biases are held to emerge from individual differences such as self-efficacy, locus of control, and neuroticism. The expression of verification biases is motivated by the need to achieve consistency. Finally, regulation biases are argued to emerge from approach or avoidance temperaments. The expression of regulation biases is motivated by the need to seek out pleasure and avoid pain.

To our knowledge, Oreg and Bayazit's (2009) model has not yet been subject to empirical examination. However, recent work by Ceschi, Costantini, Sartori, Weller, and Di Fabio (2019) provides some insight into the potential explanatory power of this three-category model of decision biases. Specifically, Ceschi et al. (2019) factor analyzed responses to 17 decision-making tasks traditionally used to examine decision biases and found support converging on a three-factor model. Ceschi et al. labeled these factors mindware gaps (i.e., tendency to rely on readily available information and succumb to errors in logic), valuation biases (i.e., sensitivity to anticipated gains and losses), and anchoring/adjustment (i.e., tendency to be influenced by a reference point). As noted by Ceschi et al. (2019), the valuation bias factor demonstrates strong overlap with the regulation bias category proposed by Oreg and Bayazit. In addition, the mindware gaps factor demonstrates partial overlap with the simplification bias category proposed by Oreg and Bayazit, in that each captures basic errors in information processing. Finally, although Oreg and Bayazit did not explicitly consider anchoring and adjustment biases to fall under the category of verification biases, the verification bias category as conceptualized by Oreg and Bayazit and the anchoring/adjustment bias factor as measured by Ceschi et al. may both be explained by the motivation to achieve consistency with some reference point.

In light of these similarities, and Oreg and Bayazit's detailed articulation and defense of potentially relevant individual differences, we considered Oreg and Bayazit's model a suitable theoretical framework for guiding the present effort. Specifically, we sought to assess whether Oreg and Bayazit's three-factor model of general decision-making biases could be applied to explain the manifestation of decision biases in the context of ethics.

The distinction between biases in general decision making and biases relevant to the context of ethical decision making is important. Most taxonomies of biases have adopted a generalist, rather than a context-specific, perspective. General decision-making biases have traditionally been assessed by asking respondents to solve hypothetical problems designed to detect errors in logic and/or reasoning (e.g., Bruine de Bruin et al., 2007; Ceschi et al., 2019). Ethical dilemmas, however, represent a fundamentally different type of problem. Because ethical dilemmas are by definition ambiguous, ill-defined, and high-stakes events in which there is a potentially infinite range of responses—responses which can have very real and personal impacts on the decision maker and other stakeholders (Watts & Buckley, 2017)—the traditional approach to measuring decision biases may not adequately capture the domain of biases relevant to this particular context. To manage this issue, we sought instead to begin with the end in mind by crafting a measure of decision biases specifically targeted to the context of ethics.

Another challenge involved in the measurement of decision biases is that they are held to be latent in the unconscious (i.e., present but largely unobserved) until activated (Haidt, 2001). A number of methods are available for temporarily activating biases in order to observe them (e.g., conditional reasoning tests; Slugoski, Shields, & Dawson, 1993). However, many of these methods are expensive to develop and deploy, and scores from these measures tend to show poor reliability within persons across time (Scherbaum & Meade, 2013). An alternative approach, though less commonly employed to measure decision biases, is to rely on self-reports of behavioral tendencies. Although the limitations of self-report techniques are well documented (Nisbett & Wilson, 1977), people are capable of providing accurate reports of their mental processes, behavioral tendencies, and emotions (Robinson & Clore, 2002; Smith & Miller, 1978), particularly when items are framed covertly to minimize the threat of social desirability bias. Prior research by Bandura and colleagues has demonstrated that self-report questionnaires are capable of capturing information bearing on cognitive errors even though people may fail to become consciously aware of these errors in the reporting process (Bandura, 1989; Bandura, Barbaranelli, Caprara, & Pastorelli, 1996). This technique has also been applied in the development of self-report measures designed to assess "dark side" personality traits found to predict career derailment (e.g., Foster & Gaddis, 2014). Thus, in the present effort, we investigated whether a covert self-report questionnaire might be used to detect the expression of decision biases relevant to the context of ethics.

3. Study 1: Initial scale development

3.1. Item development

We began constructing a self-report measure by first developing a taxonomy of decision biases. This involved reviewing the literature on general decision-making biases (e.g., Arnott, 1998; Blumenthal-Barby & Krieger, 2015; Carter et al., 2007; Ceschi et al., 2019; Duffy, 1993; Jones & Roelofsma, 2000; Kerr & Tindale, 2004; Oreg & Bayazit, 2009; Stanovich et al., 2008) as well as the literature specifically pertaining to decision biases in the context of ethics (e.g., Christakis & Asch, 1993; Kimmel, 1991; Mecca et al., 2016; Medeiros et al., 2014; Messick & Bazerman, 2001; Novicevic et al., 2008; Ruedy & Schweitzer, 2010; Steele, Johnson et al., 2016; Thiel et al., 2012). Based on these reviews, 60 biases were identified, and operational definitions were written for each decision bias.

Personality and Individual Differences 153 (2020) 109609

L.L. Watts, et al.

This initial list was then refined by removing biases that were judged as conceptually identical to one another or irrelevant to the context of ethics (e.g., base rate fallacy, illusory correlation). Table 1 presents the final list of 42 biases, along with their operational definitions, that the authors agreed were potentially relevant to the context of ethics. Next, we categorized each of the 42 remaining biases into one of three bias categories based on Oreg and Bayazit's (2009) three-factor model of simplification, verification, and regulation biases. Although a number of other models and taxonomies have been proposed, we considered Oreg and Bayazit's the most promising guiding framework due to their careful attention to elaborating the theoretical mechanisms by which decision biases might emerge within individuals via patterns of individual differences. Table 1 presents the final results of this categorization process.1

1 All four authors possessed scholarly and applied expertise related to the topic of ethical decision making, scale development techniques, and the delivery of ethics education and training programs. For example, the authors had collectively delivered over 500 hours of face-to-face instruction in professional ethics training to graduate students across a wide range of fields, including the social sciences, biological sciences, health sciences, engineering and physical sciences, humanities, performing arts, and business. Although we intentionally used a comprehensive approach to try and identify all known biases relevant to the context of ethics, we acknowledge that different scholars may have arrived at a different list of biases or used alternative theoretical models for categorizing biases.

The final bias labels and operational definitions presented in the supplemental appendices were used to generate items for a self-report scale (Mumford, Costanza, Connelly, & Johnson, 1996). In writing these items, careful attention was paid to ensuring the items were not too socially loaded or too obvious, to avoid inducing socially desirable responding. The central challenge was writing items capable of capturing latent decision biases without tipping our hand about what better or worse responses might be. Although including words about ethics might prime respondents to answer questions in a more socially desirable light, we considered it important to balance this concern with the issue of producing item content that was too general (i.e., not specific enough to the domain of ethics). Thus, approximately half of the items explicitly mentioned the domain of ethics by using words like "ethical decision making," "ethical dilemmas," or "ethical issues," while the remaining items, on their surface, expressed biases relevant to complex decision making more generally (e.g., "Once the group has decided on something, it is important to go along with the decision"), but were still expected to be related to ethical decision making. Approximately 30 to 35 items were written for each bias category, producing a final list of 98 items, with about half requiring reverse-scoring.

3.2. Content validity

To assess the content validity of these items, Master's students in industrial and organizational psychology volunteered to serve as subject matter experts (SMEs). After undergoing approximately 5 hours of rater training, 21 SMEs independently sorted each survey item into one of the three bias categories. For approximately 91% of the items, the majority of SMEs agreed on how the item was categorized, providing support for the content validity of these items. We retained the small number of items where SMEs disagreed in order to ensure total coverage of the bias categories for the exploratory factor analysis.

4. Method

4.1. Sample

Undergraduates in psychology at a large university in the southwest United States were recruited for an online study using their departmental web site. Participants were offered credit in their courses. A total of 516 participants began the online study. Data from 82 of these participants were removed because they did not complete the majority of the study, and data from another 90 participants were removed due to careless responding (i.e., they failed to pass at least 4 out of the 5 attention check items). Thus, a final sample of 344 participants remained. Ages ranged from 18 to 42, with an average age of 19 and an average of 2.6 years of work experience. Participants reported an average GPA of 3.47 and an average ACT score of 26.5 (83rd percentile nationally). The majority of participants were women (78%) and reported English as their first language (92%).

4.2. Procedures

Because this was an online study, participants were able to complete the materials at their own pace using whatever computer equipment they chose. All materials were administered using Qualtrics. Participants began the study by reading and responding to an informed consent page. Next, participants completed a demographic form which requested standard information such as participants' age, gender, year in school, and years of work experience. Next, participants were asked to complete the decision bias items using a 1 (disagree strongly) to 5 (agree strongly) scale. Items were presented to participants in a randomized order to control for potential ordering effects. Five "attention check" items were randomly placed throughout the decision bias items to check for careless responding (e.g., Please mark 'slightly disagree' in response to this item). Upon completion of these items, participants read a debriefing page and were awarded their participation credit.

5. Results and discussion

Based on procedures reviewed by Conway and Huffcutt (2003), principal axis factoring with Promax rotation was employed using IBM SPSS v. 21. Promax rotation was used because, drawing on Oreg and Bayazit (2009), we expected the expression of different decision biases to be correlated (i.e., non-orthogonal). In the initial, unconstrained solution (i.e., retain all factors with eigenvalues > 1), 32 factors were extracted that explained approximately 66% of the variance. In search of a more parsimonious factor structure, the scree plot was examined. The scree plot indicated relatively clear break points immediately after the third and fifth factors. Thus, three-, four-, and five-factor solutions were examined.

Several items were removed to further improve the fit and parsimony of the three-, four-, and five-factor models via inspection of the pattern matrix. There is no consensus in the literature regarding the cutoffs for determining whether one should retain or discard items as markers of their intended factors, and thoughtless application of such cutoffs is rarely recommended (Kline, 2011). Costello and Osborne (2005) suggested a .50 and above threshold to identify items with "strong" loadings. However, we first opted for a more liberal .35 threshold as a "first pass" to ensure that we did not discard too many items based on this indicator alone. Second, items that showed cross-loadings on multiple factors above .30 were dropped (Tabachnick & Fidell, 2001). Third, items were removed that decreased internal scale reliability coefficients. Finally, four items that loaded more strongly on an unintended factor than on their theorized factor were removed in order to ensure the content validity of items remaining in the resulting sub-scales (Hardesty & Bearden, 2004). Following these data reduction procedures, 32 items remained.

The resulting four- and five-factor solutions were problematic for three reasons. First, the small number of items loading on the outer factors (i.e., fourth and fifth factors) resulted in unacceptable internal scale reliability estimates (i.e., α < .70). For example, only two items loaded on the fourth factor (with one of these items loading at only .31) while one item loaded on the fifth factor. Second, the four- (40%) and five-factor (43%) models failed to explain much variance beyond the three-factor (36%) model, raising the question of whether the small increase in variance explained was worth the sacrifice of parsimony.



Table 1
Mapping biases relevant to ethical decision making onto Oreg and Bayazit (2009). Each entry lists the bias, its operational definition (with source), and its category mapping in brackets.

1. Advantageous comparison: Comparing one's poor decisions to examples of others' poorer decisions (Bandura, 1989). [Regulation]
2. Anchoring/adjustment: Focusing on a piece of information and then adjusting to allow for the circumstances of the present decision (Bodenhausen et al., 2000). [Simplification]
3. Anecdotal observation: Ignoring information related to events one has not directly observed (Schwenk, 1986). [Simplification]
4. Attribution of blame: Blaming the situation or victim for causing a negative outcome rather than admitting personal responsibility (Bandura, 1989). [Regulation]
5. Black swan: Overly attending to extreme information or examples (Suárez-Lledó, 2011). [Simplification]
6. Close-mindedness: Unwillingness to revise one's opinion in the face of new information (Baron, 2005). [Simplification]
7. Coercion: Demanding participation and agreement by all group members (Duffy, 1993). [Verification]
8. Complacency: Assuming existing trends will continue and underestimating the probability of alternative events (Skitka et al., 2000). [Simplification]
9. Concrete information: Relying on information that is more concrete, simple, or salient in memory (Schwenk, 1986). [Simplification]
10. Confirmation: Seeking evidence that supports one's point of view; downplaying evidence that contradicts one's point of view (Nickerson, 1998). [Verification]
11. Dehumanization: Disassociating people from their humanity to justify poor treatment (Bandura, 1989). [Regulation]
12. Diffusion of responsibility: Avoiding taking personal responsibility for one's decisions (Bandura, 1989). [Regulation]
13. Distorting attachments: Allowing personal attachments to affect one's judgment (Seo & Barret, 2007). [Regulation]
14. Distrusting early information: Considering more recent reports to be more valid than early reports (Schulz-Hardt et al., 2000). [Simplification]
15. Downplaying: Assuming an issue is less pervasive than it actually is (Bandura, 1989). [Regulation]
16. Emotional contagion: Allowing the emotions of coworkers or peers to influence one's emotions (Barsade, 2000). [Regulation]
17. Escalation of commitment: Supporting decisions because they align with choices made in the past; also has been shown to occur in group settings (Bazerman, 1984). [Verification]
18. Euphemistic language: Sanitizing the language used to describe unethical conduct (Bandura, 1989). [Regulation]
19. False consensus: Overestimating the degree of similarity between others' views and one's own (Ross et al., 1977). [Verification]
20. Fundamental attribution error: Attributing others' failures to internal causes (Milgram, 1974). [Verification]
21. Group polarization: Allowing group discussion to escalate one's opinions towards more extreme positions (Lamm, 1988). [Verification]
22. Groupthink: Agreeing with the group to preserve group harmony (Janis & Mann, 1977). [Verification]
23. Hastened response time: Making decisions too quickly before fully processing information (Keinan, 1987). [Simplification]
24. Hindsight: Considering an event to have been obviously predictable after it occurred (Hawkins & Hastie, 1990). [Verification]
25. History effect: Over-focusing on past issues at the expense of recognizing new issues (Hilbert, 2012). [Verification]
26. Ignoring unique information: Focusing on information commonly known instead of information unique to individual members (Stasser, 1992). [Simplification]
27. Law of small numbers: Assuming one or two characteristics of a sample are representative of the population (Schwenk, 1986). [Simplification]
28. Metacognitive: Applying the same problem-solving approach to every dilemma (Spiro, 1988). [Simplification]
29. Moral justification: Viewing unethical conduct as acceptable because it serves a moral purpose (Bandura, 1989). [Regulation]
30. Omission: Assuming that taking action results in more negative consequences than not taking action (Anderson, 2003). [Regulation]
31. Overconfidence: Considering oneself to be incapable of making poor decisions (Klayman et al., 1999). [Verification]
32. Political decision making: Pandering to particular stakeholders (customer, team, boss, etc.) over others (Medeiros et al., 2015). [Regulation]
33. Representativeness/availability: Assuming the ease with which an event is recalled predicts the frequency of the event occurring (Schwenk, 1986). [Simplification]
34. Rule of thumb: Choosing an alternative decision (one that is not the norm) because it has previously been satisfactory (De Martino et al., 2006). [Simplification]
35. Selective awareness: Considering one's own experiences as "normal" and events outside one's experiences as abnormal (Jonas et al., 2001). [Verification]
36. Self enhancement: Believing oneself to be "above average" relative to others (Brown, 1986). [Verification]
37. Social imitation: Imitating the attitudes and behaviors of coworkers and peers (Rilling & Sanfey, 2011). [Verification]
38. Social loafing: "Slacking off" in group settings because one's individual accountability is diminished (Karau & Williams, 1993). [Regulation]
39. Status quo: Reluctance to change (Samuelson & Zeckhauser, 1988). [Verification]
40. Tunnel vision: Overly focusing on or absorbed in one aspect of a decision (Posavac et al., 2010). [Simplification]
41. Uncertainty avoidance: Avoiding decisions that are ambiguous or complex (Anderson, 2003). [Regulation]
42. Wishful thinking: Assuming events with positive outcomes are more likely (Bandura, 1989). [Regulation]

Third, exploratory fit statistics for the four-factor model (χ2 = 541.65, df = 374, RMSEA = 0.036) and five-factor model (χ2 = 481.75, df = 346, RMSEA = 0.034) were not substantially better than the three-factor model (χ2 = 611.02, df = 403, RMSEA = 0.051). Although a larger increase in fit may be observed with the chi-square estimate, this estimate is notoriously unreliable (Henson & Roberts, 2006). As such, relying more heavily on the RMSEA suggests that the increase in fit was too small to warrant a decrease in parsimony. The three-factor solution demonstrated acceptable reliability estimates for each factor, with alphas ranging from .73 to .85. In addition, the three-factor model was supported by the content validity evidence described earlier. Further, item content for each factor generally aligned with expectations based on Oreg and Bayazit's (2009) model, which served as a guiding framework for interpreting the factor structures.

Factor one consisted of simplification biases such as ignoring complex information and adhering to black-and-white thinking. The strongest loading item on this factor was, "It is okay to change one's mind once new information has been presented" (reverse-scored). Factor two consisted of regulation biases, such as rationalizing or downplaying the seriousness of ethical dilemmas and avoiding taking personal responsibility due to feelings of discomfort. The strongest loading item on this factor was, "It is okay to ignore small unethical actions that appear to have no immediate consequences." Factor three consisted of verification biases such as over-attending to the status quo and agreeing with opinions expressed by one's group for the sake of maintaining consensus. The strongest loading item on this factor was, "Maintaining tradition is an important factor in making ethical decisions."
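For readers who wish to reproduce this style of analysis, the sketch below illustrates the item-reduction logic described above using the open-source factor_analyzer package in Python. The DataFrame, column names, and printed output are illustrative assumptions only; the original analyses were run in IBM SPSS v. 21, and no analysis code was published with the article.

```python
# A minimal sketch of the Study 1 exploratory factor analysis, assuming the
# survey responses live in a pandas DataFrame `items` (rows = participants,
# columns = the 98 candidate items; all names here are hypothetical).
# Requires: pip install pandas factor-analyzer
import pandas as pd
from factor_analyzer import FactorAnalyzer

def explore_bias_structure(items: pd.DataFrame, n_factors: int = 3) -> pd.DataFrame:
    # Principal axis factoring with an oblique (promax) rotation, mirroring
    # the choices reported above (biases were expected to be correlated).
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="promax")
    fa.fit(items)

    # Eigenvalues support a scree-plot style inspection for break points.
    eigenvalues, _ = fa.get_eigenvalues()
    print("Leading eigenvalues for scree inspection:", eigenvalues[:6].round(2))

    loadings = pd.DataFrame(
        fa.loadings_, index=items.columns,
        columns=[f"F{i + 1}" for i in range(n_factors)],
    )

    # "First pass": keep items whose strongest loading is at least .35 ...
    strong_enough = loadings.abs().max(axis=1) >= 0.35
    # ... then drop items that cross-load at .30 or above on multiple factors.
    cross_loaded = (loadings.abs() >= 0.30).sum(axis=1) > 1

    # fa.phi_ holds factor intercorrelations under the oblique rotation,
    # comparable to the 0.15-0.36 range reported in the text.
    print("Factor intercorrelations:\n", pd.DataFrame(fa.phi_).round(2))
    return loadings[strong_enough & ~cross_loaded]
```

The remaining reported steps (dropping items that lower subscale alphas or that load on an unintended factor) would follow the same pattern of inspecting the retained loadings against each theorized subscale.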


Table 2
Pattern matrix for three-factor structure of BiAS scores. Each entry gives the item number, its loading, and the item text.

Factor 1: Simplification (α = 0.854)
Item 23 (0.657): It is okay to change one's mind once new information has been presented. (R)
Item 73 (0.652): Feelings of fear and anxiety can cloud one's judgment. (R)
Item 66 (0.631): It is important to keep an open mind to new information even if a solution appears straightforward. (R)
Item 47 (0.612): Before making an important group decision, it is important to consider the opinions of individual group members. (R)
Item 91 (0.566): Sometimes groups need to revisit a decision that led to a suboptimal outcome. (R)
Item 94 (0.542): Sometimes, unethical events occur that I am not aware of. (R)
Item 15 (0.540): I am open to trying different approaches to solving problems depending on the situation. (R)
Item 36 (0.538): Waiting for more information usually improves the quality of a decision. (R)
Item 41 (0.536): Sometimes, people commit unethical acts despite good intentions. (R)
Item 82 (0.529): It is important to take the time to gather information when faced with an ethical issue, even if it prolongs the decision-making process. (R)
Item 24 (0.526): People are more likely to act unethically if their peers act unethically. (R)
Item 27 (0.508): It is important to recognize others' personal opinions when making a decision that affects the group. (R)
Item 59 (0.455): Only bad people make unethical decisions.

Factor 2: Regulation (α = 0.763)
Item 87 (0.630): It is okay to ignore small unethical actions that appear to have no immediate consequences.
Item 32 (0.629): It is okay to commit unethical acts in service of the greater good.
Item 65 (0.598): Sometimes it is necessary to break the rules to get along with those in power.
Item 3 (0.500): If certain unethical acts occur only rarely, then they are not a big deal.
Item 100 (0.481): It is okay to ignore the rules now if they have successfully been ignored in the past.
Item 20 (0.481): It is my responsibility to report unethical behavior committed by my peers. (R)
Item 56 (0.440): Even if it benefits others, it is not okay to commit unethical acts. (R)
Item 44 (0.382): If I know that other people witnessed an unethical act, I wouldn't personally bother to report it.
Item 4 (0.334): Ethical dilemmas tend to resolve on their own.

Factor 3: Verification (α = 0.725)
Item 16 (0.579): Maintaining tradition is an important factor in making ethical decisions.
Item 31 (0.513): It is important to be consistent in the way one solves problems, regardless of the situation.
Item 19 (0.483): Sometimes people just need to accept the decision that has been made rather than putting up a fuss.
Item 48 (0.437): In the end, difficult situations tend to work themselves out for the best.
Item 35 (0.422): Talking to other like-minded people generally results in a more rational decision.
Item 84 (0.414): Dissenting opinions just slow things down.
Item 13 (0.403): Once the group has decided on something, it is important to go along with the decision.
Item 70 (0.401): If everyone in the group agrees, it is probably the right decision.
Item 25 (0.393): Memorable events tend to provide the best information for making decisions.
Item 8 (0.387): It is important to make decisions that align with one's past behaviors.

Note. N = 344; cross-loadings < 0.30 omitted for clarity; "(R)" denotes a reverse-scored item; higher scores represent more bias.
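Scoring the instrument follows directly from the table: reverse-keyed items are reflected on the 5-point scale and each subscale is aggregated so that higher scores represent more bias. The sketch below is a hypothetical illustration (the article does not publish scoring code); the column naming convention and the use of subscale means rather than sums are assumptions.

```python
import pandas as pd

# Hypothetical scoring sketch for the 32-item BiAS, given responses on a
# 1-5 agreement scale stored in columns named like "item_23".
REVERSE = [23, 73, 66, 47, 91, 94, 15, 36, 41, 82, 24, 27, 20, 56]  # "(R)" items
SUBSCALES = {
    "simplification": [23, 73, 66, 47, 91, 94, 15, 36, 41, 82, 24, 27, 59],
    "regulation": [87, 32, 65, 3, 100, 20, 56, 44, 4],
    "verification": [16, 31, 19, 48, 35, 84, 13, 70, 25, 8],
}

def score_bias(responses: pd.DataFrame) -> pd.DataFrame:
    scored = responses.copy()
    for item in REVERSE:
        scored[f"item_{item}"] = 6 - scored[f"item_{item}"]  # reflect 1-5 scale
    # Average the items within each subscale; higher = more bias.
    return pd.DataFrame({
        name: scored[[f"item_{i}" for i in items]].mean(axis=1)
        for name, items in SUBSCALES.items()
    })
```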

Fig. 1. Summary of hypotheses.

Table 2 presents the respective factor loadings (i.e., pattern matrix) for each item in the three-factor model along with item content. Factor intercorrelations were moderately positive, ranging from 0.15 to 0.36. Thus, overall, the three-factor model was considered the most promising. In sum, exploratory factor analysis procedures demonstrated that an underlying factor structure could be identified for scores on the 32 items designed to measure decision biases in the context of ethics—hereafter referred to as the Biased Attitudes Scale (BiAS) for convenience. Three models were examined, and the three-factor model was identified as the most promising for future research. This alignment with Oreg and Bayazit's model provided strong theoretical support for our three-factor model as well as clear avenues for cross-validation and exploratory nomological network testing—the focus of study two.

6. Study 2: Cross-validation and nomological network testing

Nomological network testing involves laying out and testing a web of hypothesized relationships among psychological constructs based on an underlying theoretical rationale (Cronbach & Meehl, 1955). When scores pertaining to some construct of interest (e.g., decision biases) are found to be related to scores pertaining to other constructs (e.g., intelligence) in the direction of their hypothesized relationships, such findings provide evidence for the construct validity of inferences drawn from scale scores (Messick, 1995).


Drawing on Oreg and Bayazit's (2009) individual differences perspective of bias emergence, and prior work on individual differences related to ethical decision making (e.g., Craft, 2013; O'Fallon & Butterfield, 2005), we developed an initial nomological network for decision biases relevant to the context of ethics. Fig. 1 presents a conceptual illustration of the proposed nomological network of other constructs we expected to relate to decision biases and also summarizes our hypotheses. Given the early stages of theory development on this topic, we do not attempt to specify which types of biases are most strongly related to these individual differences variables.

6.1. Constructive individual differences

Three individual differences traits were identified that we expected to be negatively related to decision biases in the context of ethics. We labeled intelligence, need for cognition, and conscientiousness as constructive individual differences because they have generally been shown to facilitate ethical decision making. We generally expected that these three individual differences would be negatively related to decision biases, such that those who are higher in each of these traits would be less prone to express simplification, verification, and regulation biases.

Intelligence, or the speed and depth of information processing capabilities (Hunt, 1980), has long been held to relate to moral reasoning (Kohlberg, 1969; Piaget, 1932; Rest, 1979). Indeed, making sense of ethical dilemmas requires processing large amounts of complex information from a variety of sources, including information bearing on professional guidelines, actors, motives, emotions, causes, and consequences (Martin, Bagdasarov, & Connelly, 2015). Further, people evidencing lower cognitive ability have been shown to be more prone to a variety of biases involving oversimplification errors (Oreg & Bayazit, 2009). Thus, the potential link between intelligence and simplification biases appears clear, such that those with higher levels of intelligence should be less prone to simplification biases. For verification biases, it may be that individuals higher in intelligence will express lower levels of these particular biases because they have more cognitive resources at their disposal to consider alternatives to the existing social norms (Martin et al., 2015). Intelligence may also inhibit the expression of regulation biases, because cognitive ability helps to facilitate the regulation of positive and negative emotions (Schmeichel, Volokhov, & Demaree, 2008). Thus, all other things being equal, individuals higher in intelligence may be less likely to withdraw from an ethical issue due to emotional discomfort.

Need for cognition refers to the extent to which one is motivated to seek out and solve complex problems (Cacioppo & Petty, 1982), including the types of problems characterizing ethical dilemmas (Watts, Ness, Steele, & Mumford, 2018). Prior research by Boyle, Dahlstrom, and Kellaris (1998) has shown need for cognition to be related to contrast biases when judging the ethicality of hypothetical scenarios. We expected a negative relationship between need for cognition and the three types of decision biases. First, individuals high in need for cognition should be less prone to simplification biases, because these individuals are intrinsically motivated by the challenge of solving complex problems. Second, individuals high in need for cognition should also be less prone to verification biases. The drive for consistency and stability held to underlie verification biases is antithetical to the drive for novelty and exploration—key markers of individuals high in need for cognition. Put differently, those high in need for cognition are more likely to subvert norms rather than blindly follow the way things have been done in the past (Watts, Steele, & Song, 2017). Finally, need for cognition may be negatively related to the expression of regulation biases because there is some evidence that individuals higher in need for cognition may be more skilled at emotional self-regulation (Maio & Esses, 2001). Thus, these individuals may be better equipped to navigate the affective discomfort inherent in ethical dilemmas, compared with lower need for cognition individuals who may be more likely to avoid dealing with ethical issues.

Conscientiousness is one of the Big Five personality traits and refers to "socially prescribed impulse control that facilitates task- and goal-oriented behavior, such as thinking before acting, delaying gratification, following norms and rules, and planning, organizing, and prioritizing tasks" (John & Srivastava, 1999, p. 121). Conscientiousness is positively related to performance on ethical decision-making simulations (Antes et al., 2007) as well as followers' ratings of ethical leadership (Kalshoven, Den Hartog, & De Hoogh, 2011). We expected a negative relationship between conscientiousness and oversimplification biases, because conscientious individuals tend to be more skilled at, or at least invest more resources toward, deliberation and planning. Put differently, people who have a tendency to spend more time than others thinking through the detailed steps of a plan are less likely to come up with overly simplistic solutions. The potential link with regulation biases is also clear. The ability to delay gratification is reminiscent of the enhanced emotional self-regulation tendencies of those high in need for cognition. Conscientious individuals are less likely to take shortcuts or cut corners, even if doing so may result in short-term gains. Finally, the potential relationship with verification biases appears less straightforward, and even an opposite trend is possible. For example, the motivation to conform to social norms may actually enhance the likelihood of verification biases among conscientious individuals. To summarize, we generally expected the constructive individual differences examined here to be negatively related to decision biases in the context of ethics.

Hypotheses 1 through 3. Decision biases will be negatively related to intelligence (H1), need for cognition (H2), and conscientiousness (H3).
The drive for consistency and stability held to underlie verification biases is antithetical to the drive for novelty and exploration—key markers of individuals high in need for cognition. Put differently, those high in need for cognition are more likely to subvert norms rather than blindly follow the way things have been done in the past (Watts, Steele, & Song, 2017). Finally, need for cognition may be negatively related to the expression of regulation biases because there is some evidence that individuals higher in need for cognition may be more skilled at emotional selfregulation (Maio & Esses, 2001). Thus, these individuals may be better equipped to navigate the affective discomfort inherent in ethical dilemmas, compared with lower need for cognition individuals who may be more likely to avoid dealing with ethical issues.

6.2. Destructive individual differences

Narcissism, cynicism, and Machiavellianism are three personality traits that have been shown to inhibit ethical decision making. Thus, we refer to these individual differences as destructive. We generally expected these three traits to be positively related to simplification, verification, and regulation biases.

Narcissism refers to inflated perceptions of the self (Penney & Spector, 2002), and high levels of self-reported narcissism are represented by feelings of self-admiration, personal superiority, and psychological entitlement (Emmons, 1987). Narcissism has been shown to negatively predict ethical decision making (Antes et al., 2007; Bergman, Westerman, Bergman, Westerman, & Daly, 2014; Brown, Sautter, Littvay, Sautter, & Bearnes, 2010). Narcissistic individuals may be more likely to oversimplify ethical dilemmas. Because narcissistic individuals have a strong motivation to protect their positive self-view, any information suggesting otherwise gets rejected or reframed via this lens of self-deception. In addition, we expected narcissism to be positively related to verification biases. By maintaining the status quo, narcissistic individuals minimize potential threats to their ego that can arise from changing conditions (Bushman & Baumeister, 1998). Along related lines, narcissism should be positively related to regulation biases. In the context of ethical dilemmas, seeking pleasure and avoiding pain are ultimately self-interested actions in which individuals prioritize their own needs above the needs of others. Given that narcissists are more likely to feel entitled to such conditions, regulation biases should be manifest more strongly among individuals higher in narcissism.

Cynicism refers to hostile attitudes about the intentions or capabilities of others as well as one's ability to effect change (Cook & Medley, 1954; Dean, Brandes, & Dharwadkar, 1998). Cynicism has been argued to be a critical variable influencing ethical decision making for some time (Morris & Sherlock, 1971). Indeed, people reporting higher levels of cynicism also tend to report greater intentions for engaging in unethical behavior (Andersson & Bateman, 1997; Antes et al., 2007). It is plausible that individuals high in cynicism may be more prone to decision biases when faced with ethical dilemmas.


For example, if one believes that all stakeholders are motivated by hostile or self-interested intentions, this belief evidences a tendency to oversimplify information. Further, cynical individuals may be less likely to challenge existing norms—a strategy which often involves personal discomfort—if they believe that doing so is unlikely to result in meaningful change. Along these lines, we expected cynicism to be positively correlated with verification and regulation biases.

Machiavellianism refers to the tendency to view people as "pawns" and the willingness to manipulate others to achieve personal goals (Christie & Geis, 1970). Machiavellianism has been found to influence ethical judgments, such that those high in Machiavellianism are more likely to view unethical behavior as acceptable (Winter, Stylianou, & Giacalone, 2004). Further, Machiavellianism has been found to be negatively related to perceptions of the benefits of whistleblowing and perceptions of personal responsibility, resulting in reduced intentions to report observed misconduct (Dalton & Radtke, 2013). We expected Machiavellianism to be positively related to the expression of decision biases in the context of ethics. Perhaps the strongest rationale concerns the link between Machiavellianism and regulation biases. Indeed, when confronted with ethical dilemmas, we view the expression of common Machiavellian beliefs (e.g., "the ends justify the means") as overt rationalizations for avoiding discomfort or other forms of personal sacrifice that may come with addressing ethical issues. Put differently, regulation biases can serve as a tool used by Machiavellians to further downplay the seriousness of ethical dilemmas or to justify violating ethical standards. Along these lines, a potential link with oversimplification biases is also plausible. Because Machiavellians tend to narrowly focus on outcomes, and ignore information bearing on the morality of one's strategies for obtaining these outcomes, these individuals should also express higher levels of oversimplification biases. Meanwhile, the potential link between Machiavellianism and verification biases may be less straightforward, because Machiavellians may view the status quo as a resource or an impediment depending on the situation.

Hypotheses 4 through 6. Decision biases will be positively related to narcissism (H4), cynicism (H5), and Machiavellianism (H6).

6.3. Ethical decision making

Ethical decision making refers to the process by which an individual arrives at a professionally appropriate decision when facing an ambiguous, high-stakes dilemma in which there is the potential for harm to oneself or other stakeholders (Mumford et al., 2008). Because ethical dilemmas can be complex (Antes et al., 2010), such situations are held to require sensemaking (Weick, 1995). Sensemaking refers to the construction and application of mental models—or cognitive representations of the causes and outcomes involved in complex events—in order to understand the situation at hand (Bagdasarov et al., 2016; Johnson-Laird, 1983). The viability of these mental models for supporting ethical decisions rests upon one's ability to gather, encode, organize, and analyze information from a variety of relevant sources such as experiential knowledge, professional guidelines, trusted colleagues, and field norms (Thiel, Bagdasarov, Harkrider, Johnson, & Mumford, 2012). Clearly, simplification biases that restrict a person's willingness or capacity to process the full range of information bearing on ethical dilemmas are likely to be disruptive to ethical decision making. Of course, ethical decision making depends upon more than making sense of what one should do; it also involves making a decision—decisions that can require personal sacrifice or disrupt the status quo (Watts & Buckley, 2017). Thus, regulation and verification biases may also be disruptive to making ethical decisions. Not surprisingly, several specific decision biases (e.g., status-quo bias, hindsight bias, self-enhancement bias, contrast bias) have been shown to influence people's responsiveness to, and judgments of, ethical issues (Bailey, 2013; Bostrom & Ord, 2006; Guiral, Rodgers, Ruiz, & Gonzalo, 2010; Hom & Kaiser, 2016; Novicevic, Buckley, Harvey, & Fung, 2008; Sligo & Stirton, 1998).

In sum, we expected expression of the three types of biases to be negatively related to ethical decision making, which, in the present study, was measured using three methods—open-ended case studies, a closed-ended situational judgment test (SJT), and a behavioral measure of overt lying. We also considered it critical to evaluate whether the three decision biases might predict ethical decision making above and beyond the individual differences variables discussed in earlier hypotheses.

Hypotheses 7 through 9. Decision biases will be negatively related to case-based (H7), SJT-based (H8), and behavior-based (H9) measures of ethical decision making after controlling for individual differences.

7. Method

7.1. Sample

Undergraduates in psychology at a large university in the southwestern United States were recruited for an online study of problem solving via their departmental web site. Students in this second sample were from a diverse, urban-based university in a different state from the first sample. The participants were offered research credit or extra credit in their courses in exchange for their participation. A total of 303 participants began the online study. However, data from 17 of these participants were removed because they failed to complete a majority of the materials. Additionally, data from 49 participants were removed because of careless responding (i.e., they failed to answer at least 4 out of the 5 attention check items correctly). Finally, data from 5 participants were removed because they chose to withdraw their consent upon being debriefed at the end of the study. Thus, a final sample of 232 participants remained. Ages ranged from 18 to 41, with an average age of 20 and an average of 3.16 years of work experience. Participants reported an average GPA of 3.18 and an average ACT score of 25 (79th percentile nationally). The majority of participants were women (61%) and reported English as their first language (74%).

7.2. Procedures

Because this was an online study, participants were able to complete the materials at their own pace using whatever computer equipment they chose. All materials were administered via the department's Qualtrics survey. Participants began the study by reading an informed consent page. After providing consent, half of the participants were asked to indicate whether they were promised a gift card in exchange for their participation (none were actually offered a gift card). Responses to this task were coded as a behavioral measure of ethical decision making. The other half of participants were presented with the gift card question at the end of the study, immediately before the final debriefing page. Participants were randomly assigned to these two conditions to check for potential ordering effects. Next, all participants completed a timed measure of intelligence, followed by the randomized BiAS items. Five "attention check" items identical to those used in study 1 were randomly placed throughout these items to check for careless responding. Next, participants were asked to read and respond to two short cases presenting ethical dilemmas. Following these cases, participants were asked to complete a battery of untimed covariate measures and standard demographic questions. Upon completion of all materials, participants were presented with a debriefing page.

7.3. Measures

Biased Attitudes Scale (BiAS). The 32-item measure of ethical decision-making biases developed in study one was included in study two.


Table 3
Correlation matrix of BiAS scores, individual differences, and ethical decision making. Each row lists the variable, its mean and standard deviation, and its correlations with the preceding variables in numerical order; reliability coefficients appear in parentheses on the diagonal.

BiAS
1. Simplification biases (M = 1.92, SD = 0.57): (0.88)
2. Verification biases (M = 3.01, SD = 0.52): 0.11, (0.67)
3. Regulation biases (M = 2.45, SD = 0.59): 0.26**, 0.29**, (0.75)

Constructive individual differences
4. Intelligence (M = 18.36, SD = 8.75): −0.12*, −0.11*, 0.05, (0.84)
5. Need for cognition (M = 3.28, SD = 0.55): −0.25**, −0.22**, −0.18**, 0.05, (0.85)
6. Conscientiousness (M = 3.43, SD = 0.51): −0.22**, 0.14*, −0.21**, −0.05, 0.19**, (0.74)

Destructive individual differences
7. Narcissism (M = 2.59, SD = 0.76): 0.20**, 0.38**, 0.28**, −0.16**, −0.17**, 0.01, (0.88)
8. Cynicism (M = 3.04, SD = 0.50): 0.00, 0.27**, 0.17**, −0.22**, −0.18**, −0.08, 0.34**, (0.74)
9. Machiavellianism (M = 2.75, SD = 0.38): 0.16**, −0.08, 0.40**, 0.00, −0.05, −0.35**, 0.24**, 0.28**, (0.69)

Ethical decision making
10. Case-based (M = 2.60, SD = 0.71): −0.26**, −0.25**, −0.16**, 0.07, 0.31**, 0.02, −0.11*, −0.15**, −0.06, (0.76)
11. SJT-based (M = 7.15, SD = 4.01): −0.43**, −0.25**, −0.40**, 0.10, 0.31**, 0.24**, −0.25**, −0.24**, −0.31**, 0.23**, (0.60)
12. Behavior-based (M = 0.74, SD = 0.44): −0.14*, −0.11, 0.02, 0.09, 0.01, −0.11, −0.03, −0.08, 0.05, 0.06, 0.16**, (—)

Note. N = 232; SD = standard deviation; SJT = situational judgment test; BiAS = Biased Attitudes Scale; "—" = unable to estimate reliability; internal scale reliability and interrater agreement coefficients are presented in parentheses on the diagonal. **p ≤ .01; *p ≤ .05 (one-tailed).
flatter important people.” Some construct validity evidence has been provided by Christie and Geis (1970). Internal scale reliabilities for narcissism, cynicism, and machiavellianism were acceptable at 0.88, 0.74, and 0.69. Case-based ethical decision making. Participants were asked to read and respond to two brief cases presenting ethical dilemmas in business that were based on real events. These “mini-cases” were selected from a pool of some 40 scenarios available on Carnegie Mellon's web site, “Arthur Anderson Case Studies in Business Ethics, 2017”. Each case was approximately half a page in length, and case content focused on relatively common ethical issues encountered in the workplace such as discrimination and interpersonal conflicts. After reading each case, participants were asked to respond to the following three prompts: 1) “Describe the ethical issues in this case”, 2) “What biases do the characters have and how might these biases influence their decision making?”, and 3) “How would you resolve this case?” These questions were adapted from prior work (e.g., Thiel et al., 2012) which showed that responses to such questions, when judged by multiple raters using benchmark rating scales, provide reliable markers of ethical decision making. Four judges were trained to apply benchmark rating scales to participants’ responses judging the overall ethicality of their final decision in response to each case. Ratings of decision ethicality for both cases were strongly and positively correlated. To simplify presentation of the results, ratings were averaged across the two cases to create a composite score for each participant. The interrater agreement coefficient was acceptable, with an rwg = 0.76 (LeBreton & Senter, 2008). SJT-based ethical decision making. Ethical decision making was also assessed using Becker's (2005) 20-item employee integrity test. The employee integrity test is a multiple-choice, situational judgment test (SJT) in which different response options have been pre-coded to correspond to varying levels of integrity. Each item presents a unique ethical dilemma. Throughout the 20 scenarios, respondents are asked to take on a number of different roles (e.g., employee, manager, business owner) in a diverse range of industries (e.g., engineering, customer service, education, medicine). Example ethical issues include deciding how to navigate stakeholder conflicts, whether to report a supervisor's misconduct, and responding to potential product safety concerns. After reading each scenario, participants are asked, “Which of the following do you think you would most likely do?”, and then select one of four possible response options. Scores can range from −18 to +18, with higher scores representing more integrity. Becker (2005) presents the full list of scenarios and scoring key. Becker has also provided some evidence for the construct validity of test scores as predictors of positive supervisor ratings concerning the respondent's relationships, career potential, leadership, and in-role performance in a variety of industries.

two. Participants were asked to indicate their agreement with 32 statements using a 1 (strongly disagree) to 5 (strongly agree) scale, such that higher scores indicated more bias. The first subscale, simplification biases, consisted of 13 items such as, “Only bad people make unethical decisions.” The second subscale, verification biases, consisted of 10 items such as, “Dissenting opinions just slow things down.” The third subscale, regulation biases, consisted of 9 items such as, “Ethical dilemmas tend to resolve on their own.” The Cronbach's alpha reliability estimates for simplification biases, verification biases, and regulation biases were acceptable at 0.88, 0.67, and 0.75 (see Table 3). Individual differences variables. Individual differences expected to facilitate ethical decision making included intelligence, need for cognition, and conscientiousness. Intelligence was assessed using the 30-item employee aptitude survey (EAS; Ruch & Ruch, 1980). Participants were given a set of facts and then asked to respond to several statements by indicating whether each statement was true or false. Ruch and Ruch (1980) have provided some evidence of construct validity. Need for cognition was measured using Cacioppo, Petty, and Kao's (1984) 18-item need for cognition scale. Using a 1 to 5 scale, participants indicated how much they agreed with statements such as, “I prefer my life to be filled with puzzles that I must solve.” Cacioppo, Petty, Feinstein, and Jarvis (1996) have provided some construct validity evidence bearing on scores obtained using the scale. Conscientiousness was assessed with Costa and McCrae's (1992) NEOFFI, a 60-item measure of the Big Five personality characteristics (i.e., openness, conscientiousness, extraversion, agreeableness, and emotional stability). Scandell (2000) and McCrae and Costa (2004) have provided some evidence for the construct validity of scale scores. Internal scale reliabilities for intelligence, need for cognition, and conscientiousness were acceptable at .84, .85, and .74. Individual differences measures expected to hinder ethical decision making included narcissism, cynicism, and machiavellianism. Narcissism was assessed using Campbell, Bonacci, Shelton, Exline, and Bushman's (2004) 9-item psychological entitlement scale. Participants used a 1 to 5 scale to indicate their agreement with items such as, “I honestly feel I'm just more deserving than others.” Campbell et al. (2004) provided some evidence for the construct validity of scale scores. Cynicism was measured with Wrightsman's (1964) 20-item measure of trustworthiness and cynicism. Using a 1 to 5 scale, participants were asked to indicate their agreement with statements such as, “Most people would tell a lie if they could gain by it.” Antes et al. (2007) and Wrightsman (1992) have provided some evidence of construct validity. Machiavellianism was assessed using the 20-item Mach IV (Christie & Geis, 1970). Using a 1 to 5 scale, participants indicated their agreement with statements such as, “It is wise to 8


Case-based ethical decision making. Participants were asked to read and respond to two brief cases presenting ethical dilemmas in business that were based on real events. These “mini-cases” were selected from a pool of some 40 scenarios available on Carnegie Mellon's website (“Arthur Anderson Case Studies in Business Ethics,” 2017). Each case was approximately half a page in length, and case content focused on relatively common ethical issues encountered in the workplace such as discrimination and interpersonal conflicts. After reading each case, participants were asked to respond to the following three prompts: 1) “Describe the ethical issues in this case”, 2) “What biases do the characters have and how might these biases influence their decision making?”, and 3) “How would you resolve this case?” These questions were adapted from prior work (e.g., Thiel et al., 2012) which showed that responses to such questions, when judged by multiple raters using benchmark rating scales, provide reliable markers of ethical decision making. Four judges were trained to apply benchmark rating scales to participants' responses, judging the overall ethicality of the final decision made in response to each case. Ratings of decision ethicality for both cases were strongly and positively correlated. To simplify presentation of the results, ratings were averaged across the two cases to create a composite score for each participant. The interrater agreement coefficient was acceptable, with an rwg = 0.76 (LeBreton & Senter, 2008).
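The rwg index reported above can be computed directly from the judges' ratings. Below is a minimal sketch for a single item, assuming the uniform null distribution discussed by LeBreton and Senter (2008); the ratings shown are hypothetical.

```python
import numpy as np

def rwg(ratings, n_options: int) -> float:
    """Single-item interrater agreement against a uniform null distribution."""
    observed_var = np.var(ratings, ddof=1)
    expected_var = (n_options ** 2 - 1) / 12.0  # variance of a uniform null
    return 1.0 - observed_var / expected_var

# Four hypothetical judges rating decision ethicality on a 5-point benchmark scale
print(round(rwg([4, 4, 3, 4], n_options=5), 2))
```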

SJT-based ethical decision making. Ethical decision making was also assessed using Becker's (2005) 20-item employee integrity test. The employee integrity test is a multiple-choice, situational judgment test (SJT) in which different response options have been pre-coded to correspond to varying levels of integrity. Each item presents a unique ethical dilemma. Throughout the 20 scenarios, respondents are asked to take on a number of different roles (e.g., employee, manager, business owner) in a diverse range of industries (e.g., engineering, customer service, education, medicine). Example ethical issues include deciding how to navigate stakeholder conflicts, whether to report a supervisor's misconduct, and responding to potential product safety concerns. After reading each scenario, participants are asked, “Which of the following do you think you would most likely do?”, and then select one of four possible response options. Scores can range from −18 to +18, with higher scores representing more integrity. Becker (2005) presents the full list of scenarios and scoring key. Becker has also provided some evidence for the construct validity of test scores as predictors of positive supervisor ratings concerning the respondent's relationships, career potential, leadership, and in-role performance in a variety of industries. The internal scale reliability for SJT-based ethical decision making in the present study was 0.60.

Behavior-based ethical decision making. The behavioral measure of ethical decision making involved a brief deception task in which participants had the opportunity to lie for financial gain. Participants were asked whether or not they were promised a gift card in exchange for their participation in the study. However, no participants were, in actuality, promised a gift card. Participants who responded “no” were coded with a “1” to indicate that they told the truth, whereas participants who responded with a “yes” were coded with a “0” to indicate they had lied. Thus, higher scores indicated more ethical behavior. To assess the potential impact of demand characteristics, half of participants were randomly assigned to view the gift card question as the first task in the study, while the other half viewed this question as the last task. This check for demand characteristics is included as a control in our analyses where relevant.
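Scoring a pre-coded SJT of the kind described above amounts to summing the integrity weights attached to each chosen option. The sketch below uses an invented two-item key purely for illustration; Becker (2005) supplies the actual scenarios and scoring key.

```python
# Hypothetical scoring key: each item's options carry pre-coded integrity
# weights. These weights are illustrative only -- see Becker (2005) for the
# actual key.
key = {
    1: {"a": 1, "b": 0, "c": -1, "d": 0},
    2: {"a": -1, "b": 1, "c": 0, "d": 0},
    # ... one entry per scenario, 20 in total
}

def score_sjt(responses: dict[int, str]) -> int:
    """Sum the pre-coded integrity weights over the chosen options."""
    return sum(key[item][choice] for item, choice in responses.items())

print(score_sjt({1: "a", 2: "c"}))  # -> 1
```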

8. Results and discussion

Maximum likelihood estimation (Mplus v. 6.12) was used to estimate the fit of the 32-item, three-factor model identified in study one with an independent sample. Each item was constrained to load onto a single factor, according to the final pattern matrix from study one (see Table 2). Not surprisingly, a small reduction in model fit was observed when comparing the fit statistics from sample one (χ2/df = 1.84, CFI = 0.87, TLI = 0.86, RMSEA = 0.05) and sample two (χ2/df = 1.67, CFI = 0.82, TLI = 0.81, RMSEA = 0.05). Nevertheless, the three-factor model fit the data from sample two reasonably well considering the diverse range of biases grouped within each factor (Kline, 2011). Because hypotheses were directional, one-tailed bivariate correlations were calculated between BiAS scores and the six individual differences variables specified in hypotheses 1 through 6. Table 3 presents these results. First, scores on the three constructive individual differences variables were expected to be negatively related to BiAS scores. Intelligence, need for cognition, and conscientiousness were negatively related to all three biases at statistically significant levels (r = −0.11 to −0.25), with the exception of a null relationship between intelligence and regulation biases and a positive relationship between conscientiousness and verification biases. Thus, hypotheses 1 through 3 were partially supported. Next, correlations were estimated among the three destructive individual differences variables and BiAS scores. As predicted, narcissism was positively related to all three BiAS subscales (r = 0.20 to 0.38). Cynicism was positively related to verification (r = 0.27) and regulation biases (r = 0.17), but showed no relationship with simplification biases. Machiavellianism correlated with simplification (r = 0.16) and regulation (r = 0.40) biases but was unrelated to verification biases. Thus, hypotheses 4 through 6 were partially supported.
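The fit indices reported here are simple functions of the model and baseline (null-model) chi-square statistics; RMSEA additionally depends on sample size. The sketch below implements the standard formulas, with hypothetical inputs, since the baseline chi-square values are not reported in the text.

```python
import math

def fit_indices(chi2_m, df_m, chi2_0, df_0, n):
    """CFI, TLI, and RMSEA from model and baseline (null) chi-square values."""
    d_m = max(chi2_m - df_m, 0)  # model noncentrality
    d_0 = max(chi2_0 - df_0, 0)  # baseline noncentrality
    cfi = 1 - d_m / max(d_m, d_0)
    tli = ((chi2_0 / df_0) - (chi2_m / df_m)) / ((chi2_0 / df_0) - 1)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return cfi, tli, rmsea

# Hypothetical chi-square values chosen only to illustrate the computation
print(fit_indices(chi2_m=770, df_m=461, chi2_0=2500, df_0=496, n=250))
```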

8.1. Incremental validity

To test hypotheses 7 through 9, hierarchical regressions were conducted to examine the incremental validity of BiAS scores in predicting three measures of ethical decision making after accounting for individual differences variables and timing of the gift card question. In step one, we included the six individual differences variables tested earlier. Timing of the gift card question was also entered in step one to ensure that the behavioral results were not influenced by ordering effects. In step two, BiAS scores were added. Standardized beta coefficients are interpreted in the text, while unstandardized coefficients are reported in Table 4. For case-based ethical decision making, simplification biases (β = −0.20, t = −3.04, p = .003) and verification biases (β = −0.17, t = −2.37, p = .019) were statistically significant predictors. For SJT-based ethical decision making, simplification biases (β = −0.29, t = −4.83, p < .001), verification biases (β = −0.13, t = −1.98, p = .049), and regulation biases (β = −0.17, t = −2.62, p = .009) were all statistically significant predictors. For behavior-based ethical decision making, only simplification biases was a statistically significant predictor (β = −0.19, t = −2.69, p = .008). Thus, the findings partially supported hypotheses 7 through 9.

To summarize, the individual differences variables correlated with simplification, verification, and regulation biases largely as predicted, with a few exceptions. Each of the six individual differences correlated with at least two of the three decision biases in the predicted direction. Among the constructive individual differences, need for cognition served as the strongest and most consistent predictor of all three types of decision biases, with correlations ranging from −0.18 to −0.25. Among the destructive individual differences, narcissism was the strongest and most consistent predictor of all three decision biases, with correlations ranging from 0.20 to 0.38. Thus, hypotheses 2 and 4 were fully supported, while hypotheses 1, 3, 5, and 6 were partially supported. However, none of the individual differences showed large effect sizes with any of the decision biases, with the largest correlation observed between Machiavellianism and regulation biases (r = 0.40).

After accounting for individual differences, all three decision biases demonstrated statistically significant associations with measures of ethical decision making in the predicted directions, although the results across different measures were inconsistent. The most consistent results were observed for simplification biases: individuals expressing higher levels of simplification biases tended to exhibit lower performance across all three measures of ethical decision making. In other words, they not only performed worse at solving multiple types of hypothetical dilemmas, they were also more willing to lie to the researchers for financial gain. It should be noted, however, that the standardized beta weights were small to moderate in size (−0.19 to −0.29). Regulation and verification biases also proved of some value in predicting ethical decision making in response to hypothetical dilemmas, but not ethical behavior.
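In this design, the increment is simply the difference in R2 between the two nested models, and its significance can be tested with an F test on the added block. Below is a sketch with simulated data and illustrative variable names (the actual study data are not reproduced here), using statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 232  # sample size in study two

# Simulated stand-ins for the study's predictors and criterion (illustrative only)
cols = ["intelligence", "need_for_cognition", "conscientiousness", "narcissism",
        "cynicism", "machiavellianism", "gift_card_timing",
        "simplification", "verification", "regulation"]
df = pd.DataFrame(rng.normal(size=(n, len(cols))), columns=cols)
df["case_edm"] = rng.normal(size=n)

step1 = cols[:7]  # individual differences + gift card timing
step2 = cols      # add the three BiAS subscales

m1 = sm.OLS(df["case_edm"], sm.add_constant(df[step1])).fit()
m2 = sm.OLS(df["case_edm"], sm.add_constant(df[step2])).fit()

f_stat, p_value, df_diff = m2.compare_f_test(m1)  # F test of the R2 increment
print(f"delta R2 = {m2.rsquared - m1.rsquared:.3f}, "
      f"F = {f_stat:.2f}, p = {p_value:.3f}")
```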


Table 4
Incremental validity of BiAS scores in predicting ethical decision making.

                          Case-based EDM          SJT-based EDM             Behavior-based EDM
                          Model 1     Model 2     Model 1      Model 2      Model 1     Model 2
                          b     SE    b     SE    b      SE    b      SE    b     SE    b     SE
Individual differences
  Intelligence            0.00  0.01   0.00  0.01   0.03  0.03   0.02  0.03   0.00  0.00   0.00  0.00
  Need for cognition      0.37⁎⁎ 0.09  0.29⁎⁎ 0.09  1.63⁎⁎ 0.45  0.94* 0.43   0.01  0.06  −0.03  0.06
  Conscientiousness      −0.07  0.10  −0.09  0.10   1.04* 0.51   0.66  0.48  −0.09  0.06  −0.11  0.06
  Narcissism             −0.01  0.07   0.08  0.07  −0.62  0.34   0.04  0.34  −0.00  0.04   0.03  0.04
  Cynicism               −0.12  0.10  −0.12  0.10  −0.68  0.53  −0.83  0.50  −0.07  0.06  −0.08  0.07
  Machiavellianism       −0.08  0.14  −0.10  0.14  −2.12⁎⁎ 0.71 −1.51* 0.71   0.04  0.09   0.03  0.09
  Gift card timing       −0.14  0.09  −0.11  0.09  −0.78  0.47  −0.43  0.44   0.12* 0.06   0.13* 0.06
BiAS
  Simplification biases              −0.25⁎⁎ 0.08              −2.01⁎⁎ 0.42              −0.15⁎⁎ 0.06
  Verification biases                −0.24*  0.10              −0.99*  0.50              −0.07   0.07
  Regulation biases                  −0.01   0.09              −1.17⁎⁎ 0.45               0.02   0.06
R2                        0.12        0.18         0.23        0.36         0.04        0.08
ΔR2                       0.12⁎⁎      0.06⁎⁎       0.23⁎⁎      0.13⁎⁎       0.04        0.04*

Note. N = 232. EDM = ethical decision making; SJT = situational judgment test; gift card timing: 0 = beginning of study, 1 = end of study; BiAS = Biased Attitudes Scale; b = unstandardized regression coefficient; SE = standard error; ΔR2 = change in proportion of variance explained.
⁎⁎ p ≤ .01; * p ≤ .05.

9. General discussion

Although it has been recognized for some time that decision biases may inhibit decision making in the context of ethics (e.g., Jones, 1991), prior work has yet to systematically examine the expression of decision biases in this domain. The present studies advance the literature by providing evidence that the general three-factor model of simplification, verification, and regulation biases proposed by Oreg and Bayazit (2009) can be fruitfully applied to the context of ethical decision making. We found that scores on these three factors correlated positively, but not strongly (rs in the .20s). This pattern of intercorrelations indicates that the three types of biases (i.e., simplification, verification, and regulation) are related but distinct from one another, suggesting it may not be uncommon for individuals to be high in one type of bias while being low in another. Thus, individuals may be expected to evidence unique patterns of these three types of biases. In fact, weak to moderate factor correlations have also been observed in prior research on general decision-making biases (e.g., Bruine de Bruin et al., 2007; Ceschi et al., 2019).

In addition, there are a number of ways in which the findings provide evidence for the construct validity of BiAS scores—that is, the extent to which BiAS scores are representative of decision biases expressed in contexts calling for ethical decision making. First, content validity evidence provided by a pool of SMEs supported the alignment between each item's content and the operational definitions for the three types of decision biases. Second, some evidence for structural validity was demonstrated in that the three-factor solution identified in study one was shown to fit data from an independent sample in study two—although not all of the fit indices indicated a good fit. Third, convergent validity was demonstrated in that BiAS scores correlated with an array of constructive and destructive individual differences.

Fourth, criterion-related validity was demonstrated by the relationships observed between BiAS scores and the three unique types of ethical decision-making criteria. Finally, the overall BiAS and its three subscales evidenced acceptable levels of internal scale reliability. Collectively, these findings provide strong evidence for the construct validity of scores obtained using the BiAS (Messick, 1995). Thus, researchers may employ the BiAS with some confidence as a measure of a person's propensity for expressing three types of decision biases relevant to ethical decision making.

10. Limitations

Two key limitations should be noted. First, the samples for these studies were composed of college students. As a result, some caution is warranted in generalizing the results to non-student samples (e.g., employees). However, the average participant reported several years of work experience, as well as high academic capability and performance as exhibited by average GPA and ACT scores. In other words, participants appeared to be academically motivated and bright and possessed some “real-world” experience—characteristics becoming of future professionals (Judge, Higgins, Thoresen, & Barrick, 1999). Clearly, it would be beneficial to examine how patterns of results might differ among other populations and settings. Such studies could also provide insight into the potential relationships between decision biases and particular demographic characteristics (e.g., gender, age, expertise).

Second, the initial version of the BiAS presented here demonstrated some flaws with respect to its psychometric properties. Although the three-factor model demonstrated superior fit to the alternative models examined in study one, the fit could still be improved. For example, two common fit indices indicated that the three-factor model fit the data well (χ2/df < 2; RMSEA < 0.08), while two other fit indices fell below conventional standards for a good-fitting model (i.e., CFI and TLI > 0.90; Bentler & Bonett, 1980). Thus, the fit could perhaps be improved in future iterations of the scale. Further, while we initially generated a balanced number of reverse-scored items for each subscale, the final subscales were not balanced, suggesting that the wording of the items may be interacting with the content of the decision bias assessed by each subscale. In addition, while the Cronbach's alphas of all three subscales were above 0.70 in study one, in study two the verification biases subscale reached only 0.67. Finally, there are of course limitations to any self-report approach, particularly when socially desirable responding is a concern (Schwarz, 1999). Thus, the scale presented here should be viewed as a “first draft.”

11. Future research

If decision biases do, in fact, inhibit ethical decision making, then a practical question extending from this effort is whether individuals might be trained to become more aware of, and perhaps better manage, the expression of these biases. To examine this question, the BiAS might be employed in future research as a measure for evaluating the effectiveness of ethics education and training programs. Such programs have demonstrated some efficacy at improving a number of ethics-related criteria (e.g., ethical decision making, ethical behavior, knowledge of rules and guidelines, ethical perceptions; Medeiros et al., 2017; Watts et al., 2017). However, few ethics training and education courses have explicitly examined decision biases as a criterion. The small number of studies that have directly examined how decision biases might be reduced, or managed, have shown some promise. For example, Mumford and colleagues (Mecca et al., 2014, 2016) identified a set of compensatory strategies (i.e., cognitive strategies proposed to counteract the negative influence of biases) and found that training in such strategies was effective in facilitating ethical decision making. Additionally, in a series of experiments, Gino, Moore, and Bazerman (2009) demonstrated that adopting a deliberative, analytic mindset reduced the impact of outcome bias on ethical judgments. Further, Noval (2016) found that participating in a self-reflection activity reduced the influence of focal bias (i.e., the tendency to focus excessively on desired outcomes), leading to the production of more ethical intentions. Finally, Tomlin, Metzger, Bradley-Geist, and Gonzalez-Padron (2017) provided qualitative evidence bearing on the effectiveness of a training module aimed at developing students' bias identification skills.

12. Conclusion

One need not look further than the justifications provided by those accused of unethical behavior to observe decision biases in action. Decision biases—such as the tendency to oversimplify complex problems, cling to the status quo, and avoid uncomfortable decisions—appear to inhibit people's ability to make ethical decisions. Thus, developing mechanisms for detecting such decision biases may be critical for supporting the ethical decision making of individuals and the organizations for which they work.


Acknowledgments

We would like to thank Alisha Ness, Logan Steele, Chanda Sanders, and Ashley Jorgensen for allowing us to recruit participants in their courses, and Ethan Rothstein and Kajal Patel for reviewing prior drafts of the manuscript. Earlier versions of this work were presented at the 2018 Conference of the Society for Industrial and Organizational Psychology, Chicago, IL.

References

Andersson, L. M., & Bateman, T. S. (1997). Cynicism in the workplace: Some causes and effects. Journal of Organizational Behavior, 18, 449–469.
Anderson, C. J. (2003). The psychology of doing nothing: Forms of decision avoidance result from reason and emotion. Psychological Bulletin, 129, 139–167.
Antes, A. L., Brown, R. P., Murphy, S. T., Waples, E. P., Mumford, M. D., Connelly, S., et al. (2007). Personality and ethical decision-making in research: The role of perceptions of self and others. Journal of Empirical Research on Human Research Ethics, 2, 15–34.
Antes, A. L., Wang, X., Mumford, M. D., Brown, R. P., Connelly, S., & Devenport, L. D. (2010). Evaluating the effects that existing instruction on responsible conduct of research has on ethical decision making. Academic Medicine, 85, 519–526.
Arnott, D. (1998). A taxonomy of decision biases. Unpublished technical report. Melbourne, Australia: Monash University, School of Information Management and Systems.
Arthur Anderson Case Studies in Business Ethics (2017). Carnegie Mellon, Tepper School of Business. Retrieved August 2, 2017, from http://public.tepper.cmu.edu/ethics/aa/arthurandersen.htm.
Bagdasarov, Z., Johnson, J. F., MacDougall, A. E., Steele, L. M., Connelly, S., & Mumford, M. D. (2016). Mental models and ethical decision making: The mediating role of sensemaking. Journal of Business Ethics, 138, 133–144.
Bailey, J. J. (2013). Judging managerial actions as ethical or unethical: Decision bias and domain relevant experience. International Journal of Business and Social Research, 3, 1–17.
Bandura, A. (1989). Self-regulation of motivation and action through internal standards and goal systems. In L. A. Pervin (Ed.), Goal concepts in personality and social psychology (pp. 19–85). Hillsdale, NJ: Lawrence Erlbaum Associates.
Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996). Mechanisms of moral disengagement in the exercise of moral agency. Journal of Personality and Social Psychology, 71, 364–374.
Baron, R. S. (2005). So right it's wrong: Groupthink and the ubiquitous nature of polarized group decision making. Advances in Experimental Social Psychology, 37, 219–253.
Barsade, S. G. (2000). The ripple effect: Emotional contagion in groups. Yale SOM Working Paper No. OB-01.
Bazerman, M. H. (1984). The relevance of Kahneman and Tversky's concept of framing to organizational behavior. Journal of Management, 10, 333–343.
Becker, T. E. (2005). Development and validation of a situational judgment test of employee integrity. International Journal of Selection and Assessment, 13, 225–232.
Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88, 588–606.
Bergman, J. Z., Westerman, J. W., Bergman, S. M., Westerman, J., & Daly, J. P. (2014). Narcissism, materialism, and environmental ethics in business students. Journal of Management Education, 38, 489–510.
Blank, H., Musch, J., & Pohl, R. F. (2007). Hindsight bias: On being wise after the event. Social Cognition, 25, 1–9.
Blumenthal-Barby, J. S., & Krieger, H. (2015). Cognitive biases and heuristics in medical decision making: A critical review using a systematic search strategy. Medical Decision Making, 35, 539–557.
Bodenhausen, G. V., Gabriel, S., & Lineberger, M. (2000). Sadness and susceptibility to judgmental bias: The case of anchoring. Psychological Science, 11, 320–323.
Bostrom, N., & Ord, T. (2006). The reversal test: Eliminating status quo bias in applied ethics. Ethics, 116, 656–679.
Boyle, B. A., Dahlstrom, R. F., & Kellaris, J. J. (1998). Points of reference and individual differences as sources of bias in ethical judgments. Journal of Business Ethics, 17, 517–525.
Brown, J. D. (1986). Evaluations of self and others: Self-enhancement biases in social judgments. Social Cognition, 4, 353–376.
Brown, T. A., Sautter, J. A., Littvay, L., Sautter, A. C., & Bearnes, B. (2010). Ethics and personality: Empathy and narcissism as moderators of ethical decision making in business students. Journal of Education for Business, 85, 203–208.
Bruine de Bruin, W., Parker, A. M., & Fischhoff, B. (2007). Individual differences in adult decision-making competence. Journal of Personality and Social Psychology, 92, 938–956.
Bushman, B. J., & Baumeister, R. F. (1998). Threatened egotism, narcissism, self-esteem, and direct and displaced aggression: Does self-love or self-hate lead to violence? Journal of Personality and Social Psychology, 75, 219–229.
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42, 116–131.
Cacioppo, J. T., Petty, R. E., & Kao, C. F. (1984). The efficient assessment of need for cognition. Journal of Personality Assessment, 48, 306–307.
Cacioppo, J. T., Petty, R. E., Feinstein, J. A., & Jarvis, W. B. G. (1996). Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin, 119, 197–253.

Campbell, W. K., Bonacci, A. M., Shelton, J., Exline, J. J., & Bushman, B. J. (2004). Psychological entitlement: Interpersonal consequences and validation of a self-report measure. Journal of Personality Assessment, 83, 29–45.
Carter, C. R., Kaufmann, L., & Michel, A. (2007). Behavioral supply management: A taxonomy of judgment and decision-making biases. International Journal of Physical Distribution & Logistics Management, 37, 631–669.
Ceschi, A., Costantini, A., Sartori, R., Weller, J., & Di Fabio, A. (2019). Dimensions of decision-making: An evidence-based classification of heuristics and biases. Personality and Individual Differences, 146, 188–200.
Christakis, N. A., & Asch, D. A. (1993). Biases in how physicians choose to withdraw life support. The Lancet, 342, 642–646.
Christensen-Szalanski, J. J., & Willham, C. F. (1991). The hindsight bias: A meta-analysis. Organizational Behavior and Human Decision Processes, 48, 147–168.
Christie, R., & Geis, F. L. (1970). Studies in Machiavellianism. Academic Press, Incorporated.
Conway, J. M., & Huffcutt, A. I. (2003). A review and evaluation of exploratory factor analysis practices in organizational research. Organizational Research Methods, 6, 147–168.
Cook, W. W., & Medley, D. M. (1954). Proposed hostility and pharisaic virtue scales for the MMPI. Journal of Applied Psychology, 38, 414–418.
Costa, P. T., & McCrae, R. R. (1992). Revised NEO Personality Inventory (NEO-PI-R) and NEO Five Factor Inventory (NEO-FFI) professional manual. Odessa, FL: Psychological Assessment Resources.
Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10, 1–9.
Craft, J. L. (2013). A review of the empirical ethical decision-making literature: 2004–2011. Journal of Business Ethics, 117, 221–259.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.
Croskerry, P. (2003). The importance of cognitive errors in diagnosis and strategies to minimize them. Academic Medicine, 78, 775–780.
Dalton, D., & Radtke, R. R. (2013). The joint effects of Machiavellianism and ethical environment on whistle-blowing. Journal of Business Ethics, 117, 153–172.
Dean, J. W., Brandes, P., & Dharwadkar, R. (1998). Organizational cynicism. Academy of Management Review, 23, 341–352.
Dosi, G., & Lovallo, D. (1997). Rational entrepreneurs or optimistic martyrs? Some considerations on technological regimes, corporate entries, and the evolutionary role of decision biases. Cambridge, UK: Cambridge University Press, 41–70.
Duffy, L. (1993). Team decision-making biases: An information-processing perspective. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 346–359). Westport, CT: Ablex Publishing.
Elkind, P. (2013, July 1). The confessions of Andy Fastow. Fortune.com. Retrieved August 2, 2017, from http://fortune.com/2013/07/01/the-confessions-of-andy-fastow/.
Emmons, R. A. (1987). Narcissism: Theory and measurement. Journal of Personality and Social Psychology, 52, 11–17.
Foster, J. L., & Gaddis, B. H. (2014). Personality derailers: Where do we go from here? Industrial and Organizational Psychology, 7, 148–151.
Gino, F., Moore, D. A., & Bazerman, M. H. (2009). No harm, no foul: The outcome bias in ethical judgments. Harvard Business School NOM Working Paper 08-080.
Guiral, A., Rodgers, W., Ruiz, E., & Gonzalo, J. A. (2010). Ethical dilemmas in auditing: Dishonesty or unintentional bias? Journal of Business Ethics, 91, 151–166.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–834.
Hardesty, D. M., & Bearden, W. O. (2004). The use of expert judges in scale development: Implications for improving face validity of measures of unobservable constructs. Journal of Business Research, 57, 98–107.
Hawkins, S. A., & Hastie, R. (1990). Hindsight: Biased judgments of past events after the outcomes are known. Psychological Bulletin, 107, 311–327.
Henson, R. K., & Roberts, J. K. (2006). Use of exploratory factor analysis in published research: Common errors and some comment on improved practice. Educational and Psychological Measurement, 66, 393–416.
Hilbert, M. (2012). Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making. Psychological Bulletin, 138, 211–237.
Hom, H. L., Jr., & Kaiser, D. L. (2016). Role of hindsight bias, ethics, and self-other judgments in students' evaluation of an animal experiment. Ethics & Behavior, 26, 1–13.
Hunt, E. (1980). Intelligence as an information-processing concept. British Journal of Psychology, 71, 449–474.
Janis, I. L., & Mann, L. (1977). Decision making: A psychological analysis of conflict, choice, and commitment. New York: Free Press.
John, O. P., & Srivastava, S. (1999). The Big Five trait taxonomy: History, measurement, and theoretical perspectives. In L. A. Pervin & O. P. John (Eds.), Handbook of personality: Theory and research (2nd ed., pp. 102–138). New York: Guilford Press.
Johnson-Laird, P. N. (1983). Mental models: Toward a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.
Jonas, E., Schulz-Hardt, S., Frey, D., & Thelen, N. (2001). Confirmation bias in sequential information search after preliminary decisions: An expansion of dissonance theoretical research on selective exposure to information. Journal of Personality and Social Psychology, 80, 557–571.
Jones, T. M. (1991). Ethical decision making by individuals in organizations: An issue-contingent model. Academy of Management Review, 16, 366–395.
Jones, P. E., & Roelofsma, P. H. (2000). The potential for social contextual and group biases in team decision-making: Biases, conditions and psychological mechanisms. Ergonomics, 43, 1129–1152.
Judge, T. A., Higgins, C. A., Thoresen, C. J., & Barrick, M. R. (1999). The big five personality traits, general mental ability, and career success across the life span. Personnel Psychology, 52, 621–652.


Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39, 341–350.
Kalshoven, K., Den Hartog, D. N., & De Hoogh, A. H. (2011). Ethical leader behavior and big five factors of personality. Journal of Business Ethics, 100, 349–366.
Karau, S. J., & Williams, K. D. (1993). Social loafing: A meta-analytic review and theoretical integration. Journal of Personality and Social Psychology, 65, 681–706.
Keinan, G. (1987). Decision making under stress: Scanning of alternatives under controllable and uncontrollable threats. Journal of Personality and Social Psychology, 52, 639–644.
Kerr, N. L., & Tindale, R. S. (2004). Group performance and decision making. Annual Review of Psychology, 55, 623–655.
Kimmel, A. J. (1991). Predictable biases in the ethical decision making of American psychologists. American Psychologist, 46, 786–788.
Klayman, J., Soll, J. B., González-Vallejo, C., & Barlas, S. (1999). Overconfidence: It depends on how, what, and whom you ask. Organizational Behavior and Human Decision Processes, 79, 216–247.
Kline, R. (2011). Principles and practice of structural equation modeling (3rd ed.). New York: Guilford Press.
Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 348–480). New York: Academic Press.
Kühberger, A. (1998). The influence of framing on risky decisions: A meta-analysis. Organizational Behavior and Human Decision Processes, 75, 23–55.
Lamm, H. (1988). A review of our research on group polarization: Eleven experiments on the effects of group discussion on risk acceptance, probability estimation, and negotiation positions. Psychological Reports, 62, 807–813.
LeBreton, J. M., & Senter, J. L. (2008). Answers to 20 questions about interrater reliability and interrater agreement. Organizational Research Methods, 11, 815–852.
Lefkowitz, J. (2017). Ethics and values in industrial-organizational psychology. Routledge.
Leung, R. (2005, March 14). Enron's Ken Lay: I was fooled. 60 Minutes. Retrieved August 2, 2017, from http://www.cbsnews.com/news/enrons-ken-lay-i-was-fooled-11-03-2005/.
Maio, G. R., & Esses, V. M. (2001). The need for affect: Individual differences in the motivation to approach or avoid emotions. Journal of Personality, 69, 583–614.
De Martino, B., Kumaran, D., Seymour, B., & Dolan, R. J. (2006). Frames, biases, and rational decision-making in the human brain. Science, 313, 684–687.
Martin, A., Bagdasarov, Z., & Connelly, S. (2015). The capacity for ethical decisions: The relationship between working memory and ethical decision making. Science and Engineering Ethics, 21, 271–292.
McCrae, R. R., & Costa, P. T. (2004). A contemplated revision of the NEO Five-Factor Inventory. Personality and Individual Differences, 36, 587–596.
Mecca, J. T., Medeiros, K. E., Giorgini, V., Gibson, C., Mumford, M. D., & Connelly, S. (2016). Biases and compensatory strategies: The efficacy of a training intervention. Ethics & Behavior, 26, 128–143.
Mecca, J. T., Medeiros, K. E., Giorgini, V., Gibson, C., Mumford, M. D., Connelly, S., et al. (2014). The influence of compensatory strategies on ethical decision making. Ethics & Behavior, 24, 73–89.
Medeiros, K. E., Mecca, J. T., Gibson, C., Giorgini, V. D., Mumford, M. D., Devenport, L., et al. (2014). Biases in ethical decision making among university faculty. Accountability in Research, 21, 218–240.
Medeiros, K. E., Gibson, C., Mecca, J. T., Giorgini, V., Connelly, S., & Mumford, M. D. (2015). Playing, sitting out, and observing the game: An investigation of faculty members' perspectives on political behavior in ethical decision making. Accountability in Research, 22, 284–300.
Medeiros, K. E., Watts, L. L., Mulhearn, T. J., Steele, L. M., Connelly, S., & Mumford, M. D. (2017). What is working, what is not, and what we need to know: A meta-analytic review of business ethics instruction. Journal of Academic Ethics.
Messick, D. M., & Bazerman, M. H. (2001). Ethical leadership and the psychology of decision making. In The next phase of business ethics: Integrating psychology and ethics (pp. 213–238). Emerald Group Publishing Limited.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.
Milgram, S. (1974). Obedience to authority: An experimental view. New York: Harper & Row.
Mumford, M. D., Connelly, S., Brown, R. P., Murphy, S. T., Hill, J. H., Antes, A. L., et al. (2008). A sensemaking approach to ethics training for scientists: Preliminary evidence of training effectiveness. Ethics & Behavior, 18, 315–339.
Morris, R. T., & Sherlock, B. J. (1971). Decline of ethics and the rise of cynicism in dental school. Journal of Health and Social Behavior, 12, 290–299.
Mumford, M. D., Costanza, D. P., Connelly, M. S., & Johnson, J. F. (1996). Item generation procedures and background data scales: Implications for construct and criterion-related validity. Personnel Psychology, 49, 361–398.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175–220.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231–259.
Noval, L. J. (2016). On the misguided pursuit of happiness and ethical decision making: The roles of focalism and the impact bias in unethical and selfish behavior. Organizational Behavior and Human Decision Processes, 133, 1–16.
Novicevic, M. M., Buckley, M. R., Harvey, M. G., & Fung, H. (2008). Self-evaluation bias of social comparisons in ethical decision making: The impact of accountability. Journal of Applied Social Psychology, 38, 1061–1091.

O'Fallon, M. J., & Butterfield, K. D. (2005). A review of the empirical ethical decision-making literature: 1996–2003. Journal of Business Ethics, 59, 375–413.
Oreg, S., & Bayazit, M. (2009). Prone to bias: Development of a bias taxonomy from an individual differences perspective. Review of General Psychology, 13, 175–193.
Park, C. W., & Lessig, V. P. (1981). Familiarity and its impact on consumer decision biases and heuristics. Journal of Consumer Research, 8, 223–230.
Posavac, S. S., Kardes, F. R., & Brakus, J. J. (2010). Focus induced tunnel vision in managerial judgment and decision making: The peril and the antidote. Organizational Behavior and Human Decision Processes, 113, 102–111.
Penney, L. M., & Spector, P. E. (2002). Narcissism and counterproductive work behavior: Do bigger egos mean bigger problems? International Journal of Selection and Assessment, 10, 126–134.
Piaget, J. (1932). The moral judgment of the child. New York: Free Press.
Rilling, J. K., & Sanfey, A. G. (2011). The neuroscience of social decision-making. Annual Review of Psychology, 62, 23–48.
Rest, J. (1979). Development in judging moral issues. Minneapolis: University of Minnesota Press.
Robinson, M. D., & Clore, G. L. (2002). Belief and feeling: Evidence for an accessibility model of emotional self-report. Psychological Bulletin, 128, 934–960.
Ross, L., Greene, D., & House, P. (1977). The “false consensus effect”: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13, 279–301.
Ruch, F. L., & Ruch, W. W. (1980). Employee aptitude survey. Los Angeles, CA: Psychological Services.
Ruedy, N. E., & Schweitzer, M. E. (2010). In the moment: The effect of mindfulness on ethical decision making. Journal of Business Ethics, 95, 73–87.
Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1, 7–59.
Scandell, D. J. (2000). Development and initial validation of validity scales for the NEO Five Factor Inventory. Personality and Individual Differences, 29, 1153–1162.
Scherbaum, C. A., & Meade, A. W. (2013). New directions for measurement in management research. International Journal of Management Reviews, 15, 132–148.
Schmeichel, B. J., Volokhov, R. N., & Demaree, H. A. (2008). Working memory capacity and the self-regulation of emotional expression and experience. Journal of Personality and Social Psychology, 95, 1526–1540.
Schulz-Hardt, S., Frey, D., Lüthgens, C., & Moscovici, S. (2000). Biased information search in group decision making. Journal of Personality and Social Psychology, 78, 655–669.
Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93–105.
Schwenk, C. H. (1986). Information, cognitive biases, and commitment to a course of action. Academy of Management Review, 11, 298–310.
Seo, M. G., & Barrett, L. F. (2007). Being emotional during decision making—good or bad? An empirical investigation. Academy of Management Journal, 50, 923–940.
Skitka, L. J., Mosier, K., & Burdick, M. D. (2000). Accountability and automation bias. International Journal of Human-Computer Studies, 52, 701–717.
Sligo, F., & Stirton, N. (1998). Does hindsight bias change perceptions of business ethics? Journal of Business Ethics, 17, 111–124.
Slugoski, B. R., Shields, H. A., & Dawson, K. A. (1993). Relation of conditional reasoning to heuristic processing. Personality and Social Psychology Bulletin, 19, 158–166.
Smith, E. R., & Miller, F. D. (1978). Limits on perception of cognitive processes: A reply to Nisbett and Wilson. Psychological Review, 85, 355–362.
Spiro, R. J. (1988). Cognitive flexibility theory: Advanced knowledge acquisition in ill-structured domains. Center for the Study of Reading Technical Report No. 441.
Stanovich, K. E., Toplak, M. E., & West, R. F. (2008). The development of rational thought: A taxonomy of heuristics and biases. Advances in Child Development and Behavior, 36, 251–285.
Stasser, G. (1992). Information salience and the discovery of hidden profiles by decision-making groups: A “thought experiment”. Organizational Behavior and Human Decision Processes, 52, 156–181.
Steele, L. M., Johnson, J. F., Watts, L. L., MacDougall, A. E., Mumford, M. D., Connelly, S., et al. (2016). A comparison of the effects of ethics training on international and US students. Science and Engineering Ethics, 22, 1217–1244.
Tabachnick, B. G., & Fidell, L. S. (2001). Principal components and factor analysis. Using Multivariate Statistics, 4, 582–633.
Thiel, C. E., Bagdasarov, Z., Harkrider, L., Johnson, J. F., & Mumford, M. D. (2012). Leader ethical decision-making in organizations: Strategies for sensemaking. Journal of Business Ethics, 107, 49–64.
Toet, A. T., Brouwer, A. M., van den Bosch, K., & Korteling, J. H. (2016). Effects of personal characteristics on susceptibility to decision bias. International Journal of Humanities and Social Sciences, 8, 1–17.
Tomlin, K. A., Metzger, M. L., Bradley-Geist, J., & Gonzalez-Padron, T. (2017). Are students blind to their ethical blind spots? An exploration of why ethics education should focus on self-perception biases. Journal of Management Education.
Watts, L. L., & Buckley, M. R. (2017). A dual-processing model of moral whistleblowing in organizations. Journal of Business Ethics, 146, 669–683.
Watts, L. L., Medeiros, K. E., Mulhearn, T. J., Steele, L. M., Connelly, S., & Mumford, M. D. (2017). Are ethics training programs improving? A meta-analytic review of past and present ethics instruction in the sciences. Ethics & Behavior, 27, 351–384.
Watts, L. L., Mulhearn, T. J., Todd, E. M., & Mumford, M. D. (2017). Leader idea evaluation and follower creativity: Challenges, constraints, and capabilities. In M. D. Mumford & S. Hemlin (Eds.), Handbook of research on leadership and creativity (pp. 82–99). Cheltenham, UK: Elgar.
Watts, L. L., Ness, A. M., Steele, L. M., & Mumford, M. D. (2018). Learning from stories of leadership: How reading about personalized and socialized politicians impacts performance on an ethical decision-making simulation. The Leadership Quarterly, 29, 276–294.


Watts, L. L., Steele, L. M., & Song, H. (2017). Re-examining the relationship between need for cognition and creativity: Predicting creative problem solving across multiple domains. Creativity Research Journal, 29, 21–28.
Weick, K. E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage.
Winter, S. J., Stylianou, A. C., & Giacalone, R. A. (2004). Individual differences in the acceptability of unethical information technology practices: The case of Machiavellianism and ethical ideology. Journal of Business Ethics, 54, 275–296.

Wrightsman, L. S. (1964). Measurement of philosophies of human nature. Psychological Reports, 14, 743–751.
Wrightsman, L. S. (1992). Assumptions about human nature: Implications for researchers and practitioners. London: Sage Publications.
