The outcome effect – A review and implications for future research


Journal of Accounting Literature 31 (2013) 2–30


Review

Lasse Mertins (Towson University, United States)*, Debra Salbador (Virginia Polytechnic Institute and State University, United States), James H. Long (Auburn University, United States)

Keywords: Outcome effect; Evaluation; Managerial performance evaluation; Audit litigation

Abstract

This paper synthesizes the extant research on the outcome effect in the accounting domain, focusing primarily on the context of performance evaluation. It reviews the current state of our knowledge about this phenomenon, including its underlying cognitive and motivational causes, the contexts in which the outcome effect is observed, the factors that influence its various manifestations, and ways in which undesirable outcome effects can be mitigated. It also considers various perspectives about the extent to which outcome effects represent undesirable judgmental bias, and whether this distinction is necessary to motivate research on this topic. The paper is intended to motivate and facilitate future research into the effects of outcome knowledge on judgment in the accounting context. Therefore, we also identify important unanswered questions and discuss opportunities for future research throughout the paper. These include additional consideration of instances in which the outcome effect is reflective of bias, how this bias can be effectively mitigated, ways in which outcome information influences judgment (regardless of whether this influence is considered normative), and how the underlying causes of the outcome effect operate singly and jointly to bring about the outcome effect. We also consider ways that future research can contribute to practice by determining how to encourage evaluators to retain and incorporate the relevant information conveyed by outcomes, while avoiding the inappropriate use of outcome information, and by enhancing external validity to increase the generalizability of experimental results to scenarios frequently encountered in practice.

© 2013 University of Florida, Fisher School of Accounting. Published by Elsevier Ltd. All rights reserved.

* Corresponding author. Tel.: +1 410 704 4181; fax: +1 410 704 3641. E-mail address: [email protected] (L. Mertins).

0737-4607/$ – see front matter © 2013 University of Florida, Fisher School of Accounting. Published by Elsevier Ltd. All rights reserved.

http://dx.doi.org/10.1016/j.acclit.2013.06.002


1. Introduction

Performance evaluations play a critical role in a number of accounting contexts, including managerial performance evaluation, capital budgeting decisions, and audit litigation arising from allegations of material financial misstatement. These evaluations are generally conducted after outcome information is available. Outcome knowledge can inform evaluations of the actions and decisions leading to that outcome, yet prior research has also found that possession of outcome knowledge systematically influences the evaluation in the direction of the outcome (e.g., Brown & Solomon, 1987; Brown & Solomon, 1993; Fisher & Selling, 1993; Ghosh & Ray, 2000; Ghosh, 2005; Lipe, 1993), a phenomenon known as the outcome effect (Tan & Lipe, 1997).

When outcome information is diagnostic with respect to decision quality, it is of interest to both research and practice to develop an understanding of how and why evaluators incorporate this information into their performance assessments. When outcome information is not diagnostic with respect to decision quality, its incorporation into the evaluation of performance may introduce undesirable bias. Given the high-stakes nature of many performance evaluations, it is important to determine when outcome knowledge biases judgment, and to explore ways in which this bias can be mitigated.

This paper synthesizes the extant research on the outcome effect in the accounting domain to facilitate future research into the effects of outcome knowledge on judgment. We focus on performance evaluation, including managerial performance evaluation and audit litigation scenarios. Although this synthesis focuses on outcome effect research in accounting, we also include relevant papers from the psychology literature when they add significant clarity to our discussion. In general, these papers were cited heavily in the theory development sections of the relevant accounting studies.
A full and complete discussion of the outcome bias literature in psychology is beyond the scope of this synthesis. Furthermore, we exclude related research (including hindsight effect research) unless it is required to inform our theoretical understanding of the outcome effect phenomenon.2

To identify relevant studies, we searched ABI/INFORM, PsycARTICLES, Google Scholar, and the Social Science Research Network (ssrn.com) using the following terms: outcome effect, outcome focus, outcome bias, effects of outcome information, effects of outcome knowledge, hindsight bias,3 outcomes in performance evaluations, and outcomes in performance appraisals. We also reviewed the references from each paper identified during our search to ensure that we captured all relevant studies.

The remainder of this paper is organized as follows: we begin by discussing the importance of outcome effect research in accounting, and consider varying perspectives in the literature related to whether (and under what conditions) the outcome effect represents a judgmental bias. We then review research on (1) the underlying causes of this phenomenon, and (2) manifestations of the outcome effect in the accounting context.4 We conclude by suggesting avenues for future research.

2. The outcome effect and judgmental bias

The accounting environment provides an ideal setting in which to study the impact of outcome information on judgment because a primary purpose of accounting is to capture, synthesize, and present relevant information to decision makers. Much of this information is historical in nature and includes outcome information. There are a number of accounting contexts in which the use of

1 Each author contributed equally to this paper, which is based in part on Dr. Mertins' dissertation, chaired by Dr. Salbador.
2 For instance, Helleloid (1988) focuses on the hindsight bias rather than the outcome effect, and does not add to our theoretical understanding of the outcome effect beyond that provided by the extant outcome effect research in psychology and accounting. Therefore, it has been excluded from this synthesis.
3 We included this search term because some hindsight bias papers also addressed the outcome effect or provided theory that was utilized in outcome effect studies.
4 A number of studies investigate the impact of various factors on the magnitude of the outcome effect using experimental methods. Although they consistently report that various factors do affect the magnitude of the outcome effect, these findings should be interpreted cautiously, as experiments are not an ideal methodology to identify the magnitude of effects in cases where manipulation strength can vary. We acknowledge and thank an anonymous reviewer for this insight.


outcome information is relevant (e.g., managerial and investor decision making, assessment of subsequent events and the entity's ability to continue as a going concern, evaluation of contingencies). However, the outcome effect relates to the impact of outcome information on the judged quality of a decision; therefore, the majority of the extant outcome effect research in accounting has been conducted within the realm of performance evaluation.

Performance evaluations play a critical role in a number of accounting settings, including the traditional employee performance evaluation and audit litigation contexts. These evaluations ultimately involve an assessment of an evaluatee's decision quality. Because they are used to determine whether an evaluatee is rewarded or punished, these evaluations should objectively and accurately assess the evaluatee's decision quality. For instance, managerial performance evaluations are used in bonus and promotion decisions, thereby playing an integral role in managers' career advancement (Moers, 2005). They are also critical in aligning managerial incentives with those of the firm (Banker & Datar, 1989; Lambert, 2001). In the audit litigation context, judges and jurors consider whether the auditor was negligent, failing to provide a sufficient level of audit quality. Auditors who are found negligent are subject to severe financial, reputational, and professional penalties (e.g., fines, the loss of current and potential clients, and revocation of professional licenses).

One threat to objective and accurate performance evaluations identified in the extant literature is under- or overreliance on outcome information, resulting in an outcome effect. Research on the effects of outcome knowledge on evaluation is rooted in the psychology literature.
Fischhoff (1975) found that subjects who had received outcome information were more likely to have an increased perception of the probability of that particular outcome ex post, relative to subjects who had not been given any outcome information. Further, subjects were largely unaware that it was their knowledge of the outcome that led to the increase in their perception of the likelihood of that outcome. The increased perception of the probability of the outcome causes these individuals to perceive that their ability to predict the outcome is greater than it most likely was when the decision was made, affecting their evaluation of the decision ex post. This phenomenon was labeled "hindsight bias," and was formally defined as "the tendency for people with outcome knowledge to believe falsely that they would have predicted the reported outcome of an event" (Hawkins & Hastie, 1990, p. 311).

Although hindsight bias and outcome effects both arise from the possession of outcome knowledge and are sometimes investigated jointly (e.g., Baron & Hershey, 1988; Emby, Gelardi, & Lowe, 2002), they differ conceptually. Outcome information may influence an evaluation in two distinct ways: "(a) an effect on the judged probability of outcomes, which, in turn, affects evaluation; and (b) a direct effect on the judged quality of the decision" (Baron & Hershey, 1988, p. 570). The effect of outcome information on the judged probability of outcomes is hindsight bias; the effect of outcome information on the judged quality of a decision is the outcome effect. Consequently, the outcome effect is concerned with the direct impact of outcome knowledge on performance evaluation.
2.1. Perspectives on the impact of outcome information on evaluation

Although prior research has consistently demonstrated that the possession of outcome knowledge influences ex post evaluations of decision quality (e.g., Baron & Hershey, 1988; Brown & Solomon, 1987; Frederickson, Peffer, & Pratt, 1999; Lipe, 1993; Lipshitz & Barak, 1995; Peecher & Piercey, 2008), significant debate exists over whether (and under what conditions) this influence represents a judgmental bias (e.g., Hershey & Baron, 1995; Lipshitz, 1995; Peecher & Piercey, 2008), as well as whether answering this question is necessary to motivate research on this topic (Lipe, 2008; Lipshitz, 1995).

Central to the first debate is the nature of decision making under uncertainty, which requires an actor to gather and consider information about the possible outcomes resulting from each decision alternative, and the probability and relative desirability of each outcome. The actor should then choose the decision alternative that yields the appropriate expected value given the relevant utility function (Brown & Solomon, 1987). Decision quality may therefore be evaluated as the extent to which the decision maker's chosen alternative maximizes expected utility (or, in the context of audit litigation, achieves a minimum threshold of audit quality). Lipshitz and Barak (1995) provide the


following example (attributed to A. Tversky and conditional on the relevant utility function being positively related to wealth maximization):

"Two persons are offered a choice between (1) winning $100 if the outcome of tossing a fair coin is 'head' and (2) winning $100 if the outcome of rolling a fair die is '6'. Person A chooses the coin and loses. Person B chooses the die and wins. Despite these outcomes, A made the correct decision because he chose the gamble with the greater likelihood of success. Thus, when decisions and outcomes are related probabilistically, the appropriate criterion for their evaluation is not what had happened but what might have happened." [p. 106]

In the context of performance evaluation, the judge is tasked with assessing the quality of the actor's decision making. This entails determining the extent to which the actor's decision maximized expected relevant utility given the information available at the time the decision was made (Hershey & Baron, 1992). Three information sets are relevant in this context (Baron & Hershey, 1988): (1) actor information (information known only to the actor), (2) joint information (information known to both the actor and the judge), and (3) judge information (information known only to the judge, which generally includes outcome information).

The extent to which outcomes are diagnostic of decision quality is often used to determine the extent to which incorporation of this information reflects biased evaluator judgment. Outcome diagnosticity is generally negatively related to the extent to which joint information encompasses all relevant decision process information: when joint information includes (does not include) most or all of the decision relevant information, outcomes are not (are) diagnostic of decision quality. The following sections describe conditions under which the use of outcome information is considered normative.
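The coin-versus-die example can be worked through numerically. The following is a minimal sketch, assuming (as the example's caveat states) that utility is linear in wealth; the `expected_value` helper is illustrative, not taken from the paper:

```python
# Expected-value arithmetic behind the Tversky coin-vs-die example.
# Assumption (from the example's caveat): utility is positively and
# linearly related to wealth, so maximizing expected dollars is correct.

def expected_value(prize: float, p_win: float) -> float:
    """Expected monetary value of a gamble paying `prize` with probability `p_win`."""
    return prize * p_win

coin = expected_value(100.0, 1 / 2)  # Person A: "heads" on a fair coin
die = expected_value(100.0, 1 / 6)   # Person B: a "6" on a fair die

# Ex ante, A's gamble dominates B's regardless of how either turns out:
# the evaluation criterion is what might have happened, not what did.
assert coin > die
print(f"coin: {coin:.2f}, die: {die:.2f}")  # prints "coin: 50.00, die: 16.67"
```

The point of the sketch is that the ranking of the two gambles is fixed before any outcome is realized, which is exactly why Person A's losing choice remains the correct one.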
2.2. Appropriate incorporation of outcome information in performance evaluation

Hershey and Baron (1995) develop a simple model of the conditions under which outcomes can provide relevant information about decision quality. These conditions are that: (1) the probability of a successful outcome (S) as a result of a good decision (G) is greater than the probability of a successful outcome as a result of a bad decision (B), and (2) the probability of a failed outcome (F) after a bad decision is greater than the probability of a failed outcome after a good decision. Positive (negative) outcomes are generally more likely to result from good (poor) decisions (Brown & Solomon, 1987; Hershey & Baron, 1995; Peecher & Piercey, 2008; Tan & Lipe, 1997).5 Therefore, outcomes are a reasonable (although imperfect) signal of decision quality when: (1) actor information dominates joint or judge information, (2) there is little other relevant information, (3) the actor is motivated to deceive the evaluator about the extent or nature of actor information, or (4) the evaluator does not have, or is not certain that they have, all relevant information that was available to the decision maker ex ante6 (Baron & Hershey, 1988; Brown & Solomon, 1987; Hershey & Baron, 1992; Tan & Lipe, 1997). In these cases, outcomes provide the evaluator with relevant information about the actor's private information. Finally, although there may be some information that the actor did not have at the time the decision was made, if it was feasible for the actor to obtain this information, it may still be relevant to the evaluation of decision quality.
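The Hershey and Baron (1995) conditions imply, via Bayes' rule, that observing a success should rationally raise the evaluator's belief that the decision was good. A minimal numeric sketch follows; the specific probabilities (a prior of 0.5, success rates of 0.8 and 0.4) are illustrative assumptions, not values from the literature:

```python
# Bayesian sketch of outcome diagnosticity under the Hershey and Baron
# (1995) conditions. The numbers are illustrative assumptions chosen
# only to satisfy P(S|G) > P(S|B).

def posterior_good(p_good: float, p_s_given_g: float, p_s_given_b: float) -> float:
    """P(good decision | successful outcome) via Bayes' rule."""
    p_success = p_s_given_g * p_good + p_s_given_b * (1 - p_good)
    return p_s_given_g * p_good / p_success

prior = 0.5  # evaluator's prior belief that the decision was good
post = posterior_good(prior, p_s_given_g=0.8, p_s_given_b=0.4)

# When success is more likely after good decisions, observing success
# raises the rational belief that the decision was good -- the sense in
# which the outcome is an (imperfect) signal of decision quality.
assert post > prior
print(round(post, 3))  # prints 0.667
```

Note that if success were equally likely under good and bad decisions, the posterior would equal the prior, which is the formal sense in which the outcome would then be non-diagnostic.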
5 In addition to the traditional view that outcomes can provide negative evidence about decision quality (e.g., that the actor ignored prior probabilities or under-weighted contradictory evidence), outcomes may provide positive evidence about decision quality (e.g., that the actor did take into account relevant information that was unavailable to or unconsidered by the evaluator) (Brown & Solomon, 1987).
6 Relevant information includes the decision maker's: (1) probabilities, (2) assessment of contingent consequences, (3) information search thoroughness, and (4) ability to integrate these assessments (Hershey & Baron, 1992).

Normative models suggest that the actor should obtain information about the relevant expected values for each decision alternative as long as the relative benefits of obtaining this information outweigh the costs (e.g., Lipe's (1993) managers, who were assumed to use historical data and/or consult engineering personnel to determine the relative expected costs and


benefits of investigating system variances). Outcomes may provide the evaluator with information about the extent to which the actor acquired and/or considered the available information about the expected values of each decision alternative.

The perspective that outcome effects may be normative is reflected in the accounting literature (e.g., Anderson, Jennings, Lowe, & Reckers, 1997; Brown & Solomon, 1987; Frederickson et al., 1999; Lipe, 1993; Peecher & Piercey, 2008; Tan & Lipe, 1997). For instance, in the context of audit litigation, Peecher and Piercey (2008) argue that the evaluator's lack of complete and certain information renders outcomes diagnostic with respect to the auditor's decision quality. Thus, incorporating outcome information into the evaluation is normative. Moreover, Lipe (1993) argues that when the intention is to share risk between the principal and agent, and a risk-sharing agreement is explicitly contracted, outcomes provide an appropriate basis for evaluation.

2.3. Inappropriate incorporation of outcome information in performance evaluation

Although circumstances sometimes render outcomes diagnostic with respect to decision quality, the decision maker does not (and cannot) have access to outcome information at the time the decision must be made. Given that outcome information is not available to the actor ex ante, normative models suggest that the judge should not consider outcome information when joint information encompasses all ex ante information relevant to the actor's decision making processes (Brown & Solomon, 1987; Hershey & Baron, 1992; Lipe, 1993; Peecher & Piercey, 2008; Tan & Lipe, 1997). In these circumstances, outcome information is not diagnostic with respect to decision quality, and the outcome effect may be reflective of bias.
Additionally, outcome effects may be non-normative when outcome information: (1) inappropriately dominates decision process knowledge, (2) is used to update prior probabilities, (3) motivates overly harsh evaluations, or (4) is under-weighted.

First, given the probabilistic nature of decision making under uncertainty, bad outcomes can follow from good decisions, and good outcomes can follow from bad decisions; therefore, actors should be judged by the quality of their decision making independent of outcome (Edwards, 1984; Lipshitz, 1995; Lipshitz & Barak, 1995). When the evaluatee's decision making process is highly observable (and therefore joint information encompasses a significant amount of the evaluation relevant information), agency and control theory both support the use of decision process knowledge as the primary input for a performance evaluation (Fisher & Selling, 1993). Although this perspective has at times been interpreted to mean that outcome information is irrelevant with respect to decision quality (e.g., Lipshitz, 1989), it is more appropriate to say that it is suboptimal to judge decision quality solely by outcomes when decision quality information is available (e.g., Baron & Hershey, 1988; Lipshitz, 1995). Given the relative ease with which an evaluator can provide an evaluation based on outcomes, it may be tempting from a cost/benefit perspective to over-rely on this heuristic or over-weight outcome information. However, this is not appropriate when the consequences arising from inaccurate evaluations are severe (Baron & Hershey, 1988).

Second, Hawkins and Hastie (1990) argue that the possession of outcome knowledge may alter the evaluator's perception of the probability of the outcome occurring, such that they over-weight the ex ante probability of the occurrence of the actual outcome.
Brown and Solomon (1987) argue that the use of outcome information to update prior probabilities is reflective of bias, as outcome information was not available to the evaluatee at the time the decision was made. When updated prior probabilities affect performance evaluations, it is likely that these evaluations are biased, at least to some degree.

Third, outcomes can bias performance evaluations when they provide the judge with motivation to punish (or be lenient with) the evaluatee. For instance, in the audit litigation context, possession of adverse outcome information has been shown to lead to harsher evaluations of the likelihood of auditor negligence (Kadous, 2001; Peecher & Piercey, 2008). This is problematic when, instead of being used to infer information about the auditor's decision making process, outcome information motivates the evaluator to judge the auditor harshly (perhaps because of the plaintiff's losses (Kadous, 2000) or the auditor's deep pockets).

Finally, although much of the outcome bias research in accounting is framed in terms of overly harsh evaluations, several studies provide evidence of a reverse outcome bias, in which evaluators underreact to, and consequently under-weight, outcome information (e.g., Grenier, Peecher, & Piercey,


2008; Peecher & Piercey, 2008). Although outcome bias can have severe consequences for evaluatees, reverse outcome bias can affect the corresponding parties (e.g., the company in a managerial performance evaluation context, or the plaintiff in the context of audit litigation). These consequences include rewarding suboptimal managerial decision making or failing to hold an auditor responsible when they were in fact negligent.

2.4. Determining bias or non-bias

Early calls in the behavioral accounting literature encouraged researchers to focus on improving judgment (Lipe, 2008). Therefore, much of the extant literature has been motivated by demonstrating that the outcome effect phenomenon represents a bias, and by exploring ways to mitigate it (e.g., Frederickson et al., 1999; Peecher & Piercey, 2008). Yet researchers have often had difficulty designing conditions under which it can be definitively asserted that the outcome effect represents a bias. Outside of a strictly controlled laboratory setting (and at times, even within the lab), it is often unclear whether joint information encompasses all decision relevant information. Moreover, the evaluation process is unobservable, and it is difficult to determine whether the evaluator is using outcome information appropriately because both appropriate and inappropriate use will be reflected in the overall performance evaluation. Finally, even when both outcome and decision process information are diagnostic of decision quality, it is difficult to prescribe the relative consideration the evaluator should give to each piece of information, or the normative overall judgment that should result (Fischhoff, 1975, p. 293). Thus, determining a normatively correct decision, and/or a normatively correct evaluation of that decision, is problematic. This makes it difficult to show that an evaluation is definitively biased.
To address these concerns, Hershey and Baron (1992) describe three ways to show that outcome information is used non-normatively in evaluations: (1) provide the judge with all of the decision relevant ex ante information (e.g., a priori probability information about each possible outcome and its desirability), plus outcome information, so that there is no actor information; (2) inform the judge that the outcome that resulted was clearly unforeseeable, or that the cost of identifying the outcome and its associated probability was prohibitive; or (3) instruct the judge to make the judgment that they would have made if outcome information were unavailable. However, each option requires the judge to understand and believe that they have perfect knowledge of the relevant ex ante information that was available to the decision maker. Even when researchers are able to demonstrate that the use of outcome information is non-normative, it is unclear whether the experimental conditions are externally valid; therefore, one should be careful not to over-generalize these findings to the "real world" (Hershey & Baron, 1992). The following section discusses some of the more successful accounting efforts to design conditions under which it can be strongly asserted that the observed outcome effect is reflective of bias.

2.5. Perspectives on the outcome effect in the accounting literature

Lipe (1993) and Frederickson et al. (1999) make strong cases that the outcome effects they observed were representative of bias by meeting the first criterion established by Hershey and Baron (1992). They provided evaluators with all ex ante information available to the decision maker, along with outcome information, under conditions that allowed for a determination of a normative decision. The authors argue that in these circumstances, the evaluation should not incorporate information about the outcome, and outcome effects are representative of bias.
However, even under these carefully designed conditions, the determination that outcome information is non-diagnostic with respect to decision quality is contingent on evaluators believing that they were accurately provided with all relevant information. Information accuracy is critical in this context, particularly with respect to the probabilities associated with each outcome. Frederickson et al.’s (1999) outcome-experienced evaluators may have questioned the accuracy of the probability information with which they were provided because when they obtained their experience, there was always a 50% chance that the actual return would equal the good state return (Table 1, Frederickson et al., 1999). However, these individuals were told that ex ante probabilities for these returns ranged


Table 1. Outcome effect research in accounting. (Columns: Author(s); Variables; Methodology; Primary contribution(s); Primary results.)

Anderson et al. (1997)
Variables: IV: outcomes (no outcome / positive outcome / negative outcome / negative outcome with "alternative outcomes" mitigation treatment / negative outcome with "alternative stakeholders" mitigation treatment). DV: evaluation rating of the audit partner's decision.
Methodology: Sample: 157 state judges. Task: participants received financial and non-financial information about a client involved in merger negotiations, including information about inventory obsolescence, and then rated the auditor's decision not to recognize the inventory losses.
Primary contribution(s): first study to examine whether a judge's outcome bias can be mitigated by cognitive mitigation strategies; specifically, tested whether an "alternative outcomes" debiasing method and an "alternative stakeholders" debiasing method mitigate outcome bias in an audit litigation setting.
Primary results: judges are vulnerable to outcome bias in the audit litigation setting; the "alternative outcomes" debiasing method did not mitigate the outcome bias; the "alternative stakeholders" debiasing method [i.e., redirecting "attention away from the plaintiff (to other stakeholders)", p. 21] mitigated the outcome bias.

Brown and Solomon (1987)
Variables: IV: reported outcome of project (no outcome reported / project failure due to a change in a law / project failure due to economic reasons / project success); evaluator involvement with the decision (prior involvement / no involvement). DV: evaluation of the degree of judgment error in the budgeting committee's approval of a project.
Methodology: Sample: 96 upper-level undergraduate and master's level graduate students. Task: participants took on the role of an assistant controller whose main task was to support a corporate committee by appraising capital budgeting proposals and projects.
Primary contribution(s): first study in the accounting area to examine the outcome effect using theory primarily based on hindsight effect studies; examined whether the evaluator's prior involvement in the evaluatee's decision process and the evaluatee's degree of responsibility for anticipating the outcome influenced the occurrence and magnitude of the outcome effect.
Primary results: evaluations of managerial decisions are influenced by outcome information; outcome effects arise when the evaluator was not involved prior to the evaluation and when the evaluatee was responsible for anticipating the outcome.

Brown and Solomon (1993)
Variables: IV: relationship between the subject-adviser's suggestion and the committee's decision about a proposal (agreement / disagreement); relationship between the committee's decisions and project outcomes (no project outcome / the project the committee chose was ex post best / the project the committee chose was not ex post best). DV: evaluation of the division committee's funding decision.
Methodology: Sample: 92 predominantly senior undergraduates. Task: participants took on the role of an adviser consulting with the capital-budgeting and corporate committee; in a post-audit environment, their main task was to assess the division committee's funding decision.
Primary contribution(s): examined several cognitive ("cognitive reconstruction") and motivational ("self-enhancing motive" and "escalation-of-commitment analogue") causes of the outcome effect.
Primary results: the findings support the cognitive reconstruction explanation for the outcome effect; evaluators' appraisals are not influenced by project outcomes when the evaluatee adopts the evaluator's suggested procedures.

Variables: IV: three outcome conditions (no outcome information / capacity cost reporting outcome information / traditional cost reporting outcome information); capacity decision ("appropriate" / "insufficient"). DV: performance evaluation rating.
Methodology: Sample: 82 undergraduate students. Task: participants reviewed and evaluated two hypothetical capacity decisions.
Primary contribution(s): tested whether a capacity cost reporting system is more vulnerable to the outcome effect than a traditional reporting system.
Primary results: capacity reporting leads to evaluator outcome effects not present in traditional reporting systems; capacity reporting provides inconsistent feedback relative to traditional reporting.

Charron and Lowe (2008)
Variables: IV: outcome knowledge (no outcome knowledge / a moderate surprise outcome / a large surprise outcome / a very large surprise outcome). DV: performance evaluation rating.
Methodology: Sample: 92 general jurisdiction judges and 68 prospective jurors. Task: case study in which judges and jurors evaluated an auditor's decision.
Primary contribution(s): tested whether outcome surprise, versus the causal nature of an outcome, decreased the magnitude of the outcome effect.
Primary results: increased outcome surprise decreased the magnitude of the outcome effect for judges but not for jurors.

Clarkson et al. (2002)
Variables: IV: outcome knowledge (general / with weak debiasing instructions / with outcome effect awareness debiasing instructions / with consequences debiasing instructions / no outcome knowledge). DV: evaluation of the auditor's judgment quality.
Methodology: Sample: 122 graduate students. Task: participants received information that the auditee declared bankruptcy after the audit took place, and then rated the auditor's provision of an unqualified audit opinion.
Primary contribution(s): tested whether the outcome effect can be mitigated by certain types of instructions provided to the evaluator.
Primary results: telling participants about a potential biasing impact of outcome knowledge (weak debiasing instructions) did not mitigate the outcome effect; outcome effect awareness instructions and instructions about potential consequences to the auditor in case of a conviction (strong debiasing instructions) reduced the outcome effect.

Cornell, Warne, and Eining (2009)

IV: Apology (absent/present); justification (absent/present). DV: Negligence verdicts; jurors' need to assign blame.

Sample: 139 jury-eligible adults. Task: Adapted Kadous's (2001) litigation case; participants assessed the audit firm's performance by making a verdict.

Primary contribution(s): Examined how "apology" and "first-hand justification" impact jurors' evaluations of auditor performance in situations where the consequences of an audit are available to the juror.

Primary results: Auditors who provide an apology or first-person justification receive fewer negligence verdicts than auditors who do not.

De Villiers (2002)

IV: Financial outcome (above budget/below budget); controllability (controllable/uncontrollable conditions); reasons leading to performance (known/unknown). DV: Performance evaluation rating.

Sample: 69 executive MBA students. Task: Respondents rated the performance of managers of four divisions that achieved below-budget and four that achieved above-budget financial results.

Primary contribution(s): Tested whether the controllability of reasons (provided by evaluatees) that led to an outcome (either known or unknown to the evaluator) influenced the magnitude of the outcome effect.

Primary results: Controllability of reasons (provided by evaluatees) leading to an outcome significantly increases performance evaluation ratings; an outcome effect was detected because performance ratings were higher in the favorable financial condition despite uncontrollable conditions.


Table 1 (Continued). Columns: Variables; Methodology; Primary contribution(s); Primary results.

Emby et al. (2002)

IV: Outcome knowledge that the client company filed for bankruptcy (negative), continued to exist (positive), or no outcome information (none). DV: Evaluation of the audit partner's judgment to render an unqualified audit report.

Sample: 122 audit partners from the US and Canada. Task: Participants acted as members of a peer review committee; they evaluated the judgment of an audit partner who rendered an unqualified audit report on a client firm.

Primary contribution(s): Tested whether the outcome effect exists when participants evaluate an audit partner's judgment (i.e., whether the outcome effect exists in an audit performance evaluation setting).

Primary results: Evaluators without outcome knowledge provide significantly higher evaluations of the auditor's judgment than evaluators with negative outcome knowledge; evaluations by evaluators without outcome knowledge do not differ statistically from those by evaluators with positive outcome knowledge.

Fisher and Selling (1993)

IV: Outcome knowledge (outcome observable/unobservable); decision process observability (observable/unobservable); evaluator–evaluatee outcome agreement (all agree / both mispredict outcome / only evaluatee predicts outcome / only evaluator predicts outcome). DV: Performance evaluation rating.

Sample: 36 subjects – 23 who performed financial analysis as a significant part of their jobs and 13 second-year MBA students. Task: Participants evaluated a decision maker who made bankrupt/survive judgments for sample firms.

Primary contribution(s): Tested whether evaluator–evaluatee outcome agreement and the evaluator's observability of the decision process influence or mitigate the outcome effect.

Primary results: No main effect for outcome observability; the outcome effect was present when the evaluator did not correctly forecast the outcome, and absent when the evaluator correctly forecasted the outcome.

Frederickson et al. (1999)

IV: Prior experience with performance evaluation schemes (decision-based/outcome-based); prior experience with frequencies of outcome feedback (infrequent/frequent). DV: Performance evaluation rating.

Sample: 108 undergraduate students. Task: Participants evaluated 16 decision makers based on the decision makers' investment decisions.

Primary contribution(s): Examined whether prior experience with an outcome-based vs. a decision-based evaluation system increased the magnitude of the outcome effect.

Primary results: Prior experience under outcome-based (decision-based) performance evaluation systems leads to larger (smaller) outcome effects; outcome feedback frequency amplifies the difference in the outcome effect between the two systems.

Ghosh and Lusch (2000)

IV: Store outcomes (target success/target failure for sales and gross profit, based on several financial and nonfinancial measures). DV: Performance evaluation rating.

Sample: Performance-evaluation data on 204 stores and their managers for which data were available for all variables. Task: In a field study in the retail industry, regional supervisors rated managers on several attributes.

Primary contribution(s): Conducted a field study to examine the outcome effect in an organizational setting (new methodology); examined how measures with different levels of controllability impacted the outcome effect in a field setting.

Primary results: There is statistically significant evidence of the outcome effect in performance evaluations in a field setting; evaluation ratings are affected by controllable measures and by uncontrollable central management factors.


Ghosh and Ray (2000)

IV: Outcome (production process in control/out of control); specific measure vs. range of probability that the production process may be out of control (specific value/range of values). DV: Performance evaluation rating.

Sample: 60 undergraduate students in an upper-level accounting course. Task: Subjects evaluated the performance of a manager who chose to perform a variance investigation.

Primary contribution(s): Tested whether the preciseness of outcome data influences the magnitude of the outcome effect in a cost variance investigation setting.

Primary results: The outcome effect can be mitigated by providing the evaluator more accurate information about the nature and extent of uncertainty, in the form of a range of probabilities rather than a specific estimate.

Ghosh (2005)

IV: Outcome of manager's decision (improvement/no improvement); single outcome measures (sales per square foot, ROI, customer satisfaction, employee satisfaction); assessment of measure controllability (present/absent). DV: Performance evaluation rating.

Sample: 308 retail store managers. Task: Case study; subjects (supervisors) evaluated the performance of retail store managers.

Primary contribution(s): Tested whether the use of nonfinancial vs. financial measures increases the magnitude of the outcome effect (as a result of their higher perceived level of controllability).

Primary results: The greater the perceived controllability of an outcome measure, the greater the outcome effect; employee and customer satisfaction measures were perceived to be more controllable than ROI and sales per square foot; assessing the controllability of the outcome measure prior to evaluation mitigates the outcome effect.

Grenier et al. (2008)

IV: Quality of audit performed (lower/higher); five interventions and a control condition that may influence the outcome effect (control, counter-explanation, attribution, warning, re-weighting, and re-weighting-after); Bayesian judgments of auditor negligence (above or below 40%). DV: Participants' judgments of auditor negligence given adverse outcome information; participants' prior beliefs about the frequency of (1) material misstatements, (2) auditor negligence, and (3) material misstatements given auditor negligence.

Sample: 1746 undergraduate students. Task: Case study; in the case, the SEC concluded that a company's financial statements were materially misstated. Participants were asked to assess the probability that the SEC's judgment was correct, and then to provide a revised belief about auditor negligence.

Primary contribution(s): Tested the new theory of reverse outcome bias in outcome effect mitigation scenarios that had been previously examined.

Primary results: Outcome effect debiasing strategies from previous accounting studies decreased the magnitude of the outcome effect for the lower range of Bayesian probabilities but increased the risk of reverse outcome bias for the higher range of Bayesian probabilities.


Jones and Chen (2005)

IV: Perceived benefit of additional audit time (yes/no); frames (loss/cost); outcome of audit (findings/no findings). DV: Performance evaluation rating; desire to work with the auditor in the future.

Sample: 47 auditors from Big Five firms. Task: A computerized test in an audit setting in which an auditor (supervisor) evaluates her/his subordinate.

Primary contribution(s): Tested whether the positive outcome of an over-budget audit (vs. a negative outcome) increased an auditor's performance rating; tested Lipe's (1993) cognitive model (managerial accounting) in an auditing setting (performance evaluation of a subordinate auditor).

Primary results: Performance evaluations are higher when additional audit time leads to financial statement adjustments and a qualified opinion (vs. a clean opinion); participants are more likely to continue their relationship with an auditor who exceeds the time budget when financial statement adjustments are discovered.

Kadous (2001)

IV: Outcome information (not provided/negative); audit quality (higher audit quality – auditor asked a specialist for help; lower audit quality – auditor did not have any help); attribution of negative affect (attribution instruction available/not available). DV: Performance evaluation rating of the auditor; verdicts (in favor of or against the auditor).

Sample: 216 jury-eligible adults. Task: Participants evaluated an auditor and chose a verdict in an audit litigation setting.

Primary contribution(s): Examined whether the outcome effect in auditor litigation settings is caused by negative affect when jurors are informed about a negative audit outcome.

Primary results: Juror focus on the outcome creates negative affect toward the auditor in a litigation setting; the outcome effect occurs when jurors utilize negative affect; attribution instructions prevent jurors from using negative affect and decrease outcome focus, mitigating the outcome effect.

Lipe (1993)

IV: Decision (to investigate/not to investigate); investigation outcome (problem in control/problem out of control); frames (cost/loss); perceived benefit (yes/no). DV: Performance evaluation rating.

Sample: 140 and 116 undergraduate students in the first two experiments; 59 members of the IMA in a third experiment. Task: Participants were asked to evaluate a manager's performance on a variance investigation decision.

Primary contribution(s): Tested a framework (cognitive model) that explains the occurrence of the outcome effect in a variance investigation setting (outcome of investigation → perceived benefit of investigation → framed as "costs" instead of "losses" → higher performance ratings under the "cost" frame than under the "loss" frame).

Primary results: The manager's performance assessment is influenced by ex post information; evaluators who classify an investigation expenditure as a cost assess a manager's performance higher than evaluators who classify it as a loss.


Long et al. (2013)

IV: Measure weights (with weights/without weights); outcome (above/below target). DV: Evaluation rating of the store manager.

Sample: 119 MBA students from two large state universities. Task: Participants took on the role of a regional manager and were asked to evaluate a store manager based on financial measures, non-financial measures, and uncontrollable environmental factors.

Primary contribution(s): Tested whether the utilization of measure weights impacts the occurrence and magnitude of the outcome effect.

Primary results: Individuals who perform evaluations based on measures with predefined weights are more susceptible to the outcome effect than individuals whose measures have no predefined weights.

Lowe and Reckers (1994)

IV: Outcome (no outcome / negative outcome / negative outcome with a debiasing strategy). DV: Evaluation rating of the audit partner's decision; evaluation of the relevance of each information cue.

Sample: 92 prospective jurors. Task: Participants took on the role of jurors and reviewed a going concern case; they then evaluated the auditor's decision that the financial statements were fairly stated.

Primary contribution(s): Examined whether jurors' decisions in an audit litigation context are impacted by outcome bias and whether the bias can be mitigated through a three-step debiasing strategy that emphasized the consideration of alternative outcomes.

Primary results: Jurors' judgments were significantly impacted by outcome information; the three-step debiasing strategy mitigated outcome bias.

Lowe and Reckers (1997)

IV: Decision aid usage (full utilization/partial utilization); outcome information about the reason why the inventory turnover ratio decreased (provided/not provided); tolerance of ambiguity (measured, not manipulated). DV: Evaluation of the audit senior's judgment; promotion decision.

Sample: 147 audit seniors. Task: Participants evaluated an auditor's judgment in deciding to spend further time investigating an unexplained decrease in the inventory turnover ratio.

Primary contribution(s): Examined whether the partial utilization of decision aids and evaluators' intolerance of ambiguity influence the magnitude of the outcome effect.

Primary results: Even when asked to ignore outcome information, participants were influenced by outcome knowledge when the evaluatee only partially utilized the audit decision aid.

Mertins (2010)

IV: Outcome (below/above target); type of evaluation system (subjective/formula-based). DV: Performance evaluation rating.

Sample: 106 professional MBA students. Task: Participants received outcome information for a store either in a more subjective form or as a summary measure (i.e., formula-based); they then evaluated fictional store managers based on financial and non-financial measures.

Primary contribution(s): Tested whether a formula-based performance evaluation system is more prone to outcome effects than a more subjective system.

Primary results: The formula-based performance evaluation system was more prone to the outcome effect than the more subjective system; in a formula-based system, evaluators seem to focus more on outcome information than in a subjective system.


Mertins and Long (2012)

IV: Information order (outcome information provided at the beginning or end of the process); outcome (below/above target); evaluation time horizon (five-day time lag/no time lag). DV: Performance evaluation rating.

Sample: 198 students enrolled in undergraduate management accounting courses. Task: Participants evaluated fictional store managers based on financial and non-financial measures.

Primary contribution(s): Examined how information presentation order (i.e., when outcome information is presented) and a time lag between information presentation and evaluation impact the occurrence of the outcome effect.

Primary results: Outcome information significantly influences the final performance evaluation when it is presented at the end of the evaluation process, but not when it is presented at the beginning of the information sequence; the outcome effect is positively related to the time lag that may exist in the performance evaluation setting.

Peecher and Piercey (2008)

IV: Outcome temporality (material misstatement occurred either in the past or in the future); outcome probability (0%, 25%, 75%, 90%, or 100%; only two levels in the second experiment); order (participants provided their prior beliefs before or after receiving outcome knowledge; not used in the second experiment). DV: Frequency of possible material misstatements; auditor negligence; amount that participants would require the audit firm to pay in compensatory damages (Experiment 2 only).

Sample: 933 undergraduate students (Experiment 1); 168 undergraduate students (Experiment 2). Task: Case study; participants were asked to judge frequencies of material misstatements and auditor negligence in an auditing setting.

Primary contribution(s): Introduced a new theory termed "reverse outcome bias"; examined the impact of Bayesian probabilities on the outcome effect.

Primary results: The outcome effect was present for participants whose Bayesian probabilities of auditor negligence were lower than 40%, while a reverse outcome bias appeared when the Bayesian probabilities of auditor negligence were higher than 40%.

Rose (2004)

IV: Financial outcome (favorable/unfavorable); situational factors (none/neutral/outcome facilitating/outcome inhibiting); cognitive load (high/low). DV: Performance evaluation rating.

Sample: 101 executive MBA students. Task: Subjects evaluated the performance of eight division managers based on budgeted financial data, associated financial outcomes, and a situational factor.

Primary contribution(s): Tested whether and how situational information influences the occurrence and magnitude of the outcome effect.

Primary results: When outcomes are negative, situational factors do not play a significant role in performance evaluation; no statistically significant differences in recall of situational information between favorable and unfavorable outcome conditions; as cognitive load increases, evaluators' cognitive capacity to integrate situational factors decreases.


Tan and Lipe (1997)

IV: Outcome (good/bad); outcome controllability (controllable/uncontrollable); decision process quality (good/poor). DV: Performance evaluation rating.

Sample: 120 undergraduate students and 107 part-time MBA students. Task: Participants evaluated a manager's decision.

Primary contribution(s): Tested whether knowledge of decision quality and controllability of outcome measures influenced the occurrence and magnitude of the outcome effect.

Primary results: Outcome effects are present even when the evaluator receives specific information about the actor's decision process quality; controllability had a significant impact only when outcomes were negative.

Wermert (2002)

IV: Outcome information (no outcome/good outcome/bad outcome). DV: Performance evaluation rating.

Sample: 64 non-business college students. Task: Participants received information about an audit by a CPA firm; they then rated the sufficiency of the evidence gathered by the auditor.

Primary contribution(s): Examined whether jurors' recall, interpretation, and weighting of trial evidence impact the magnitude of the outcome effect.

Primary results: The interpretation and weighting of trial evidence reduced the magnitude of the outcome effect; jurors' recall of trial evidence did not influence the magnitude of the outcome effect.


from 35% to 65%. Their actual experiences could have caused them to question the accuracy of the probability information they were given when they were tasked with evaluating the decision maker, rendering outcome information diagnostic with respect to actual outcome probabilities. Since evaluatees made a number of investment decisions, receiving outcome feedback after each decision, evaluators may have penalized evaluatees for failing to recognize that actual probabilities differed from projected probabilities.

Peecher and Piercey (2008) employ another criterion in an attempt to demonstrate the non-normative use of outcome information. They evaluate whether judges respond to outcome information in a manner consistent with the prescriptions of Bayesian updating. However, they acknowledge that this approach only permits normative statements about the appropriateness of the judge's update to their probability judgment; it does not permit normative statements about whether the evaluator's prior beliefs, final judged probabilities, or overall evaluation are appropriate. They assert that this is an inherent limitation of all outcome effect studies in accounting. Their contribution was in measuring all of the relevant evaluation inputs so that they could speak to the normative nature of the subsequent judgment updating.

Kadous (2001) argued that the incorporation of outcome information into judgments about auditor negligence is not appropriate because the applicable legal standard of care relates to ex ante audit quality rather than ex post outcome (Causey & Causey, 1991). Therefore, normative evaluations should rely on information about audit quality (not outcome) to evaluate the decision maker. This contention was conditioned on judges and jurors having full and complete information about the auditor's decision-making process, a situation which is rare in the audit litigation context (Peecher & Piercey, 2008).
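Peecher and Piercey's Bayesian criterion can be made concrete with a small numerical sketch. The function and all probabilities below are purely hypothetical illustrations of Bayes' rule, not values taken from their study:

```python
# Hedged illustration: hypothetical numbers, not Peecher and Piercey's data.
# Bayes' rule prescribes how a negligence judgment should be updated after
# learning that a material misstatement (the adverse outcome) occurred.

def posterior_negligence(prior, p_mm_given_neg, p_mm_given_care):
    """P(negligence | misstatement) via Bayes' rule.

    prior           -- evaluator's prior P(negligence)
    p_mm_given_neg  -- P(misstatement | negligent audit)
    p_mm_given_care -- P(misstatement | diligent audit)
    """
    p_mm = prior * p_mm_given_neg + (1 - prior) * p_mm_given_care
    return prior * p_mm_given_neg / p_mm

# An evaluator with a 10% prior who thinks misstatements are far more likely
# under negligence (60% vs. 5%) should revise upward, but only to about 0.57:
print(round(posterior_negligence(0.10, 0.60, 0.05), 2))
```

Under this criterion, a judged probability above the Bayesian posterior counts as an outcome effect and one below it as reverse outcome bias; note that the benchmark disciplines only the update, not the reasonableness of the prior itself, which is the limitation the authors acknowledge.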
On one hand, the nature of the litigation environment encourages the auditor to reveal private information that supports the audit opinion, and the plaintiff's attorneys have an incentive to bring to light information indicating that the auditor did not provide sufficient audit quality. It is possible that, between these two parties with competing motivations, all of the information relevant to the quality of the auditor's decision making will be available to the evaluator. On the other hand, the auditor is incentivized to suppress private information that may be indicative of insufficient audit quality. Moreover, judges and juries are not audit experts, and there is often significant debate among professional auditors about whether an audit firm provided sufficient audit quality in the event of audit failure (reflected in the courtroom through the appearance of opposing expert witnesses). The volume of information with which judges and juries are provided (both relevant and irrelevant to the evaluation), and the conflicting motivations and interpretations of this information offered by the plaintiff, the defendant, and the opposing attorneys, make it difficult for evaluators to (1) identify all of the relevant information about the auditor's decision making, and (2) be certain that they have access to all of the relevant information. Therefore, in a real audit litigation setting, outcomes may provide relevant information to jurors about audit quality, and the simple incorporation of outcome information is not necessarily reflective of bias.7

Kadous and Magro (2001) take a different approach to demonstrating non-normative uses of outcome information. Their tax compliance recommendation setting employs a scenario in which the tax authority asserts that outcome is irrelevant with respect to the defensibility of a tax position. However, they find that tax professionals do consider outcome information in these circumstances.
In this case, the non-normative nature of the reliance on outcome information depends on the evaluator's value function. If the tax professional and client intend to minimize the expected value of the client's tax liability, then it may be normative to evaluate the impact of the outcome on the chance that the tax position will be allowed or disallowed, particularly if the taxing authority is reluctant to challenge a subjective position with a good outcome. On the other hand, if the evaluator is the taxing authority, one could make a case that the use of the outcome information is indeed non-normative.

7 One way in which Kadous (2001) offers additional support for her contention that jurors used outcome information inappropriately is by including a mitigation manipulation in which jurors were provided with an instruction attributing the negative affect that they felt to the nature of their role as jurors, rather than to the outcome. Although this attribution instruction should not have affected the extent to which jurors infer information about the auditor's decision making from the outcome, it should mitigate the motivation to punish the auditor arising from the evaluator's negative affect. The success of this manipulation in mitigating the incorporation of outcome information provides support for Kadous's argument that, in this case, evaluators were using outcome information inappropriately.
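The value-function argument above can be illustrated with a back-of-the-envelope expected-value calculation. All figures and parameter names below are hypothetical, chosen only to show how a good outcome that lowers the authority's propensity to challenge a position reduces the client's expected liability:

```python
# Hedged sketch: hypothetical figures, not data from Kadous and Magro (2001).

def expected_liability(cost_if_disallowed, cost_if_allowed,
                       p_challenge, p_lose_if_challenged):
    """Expected tax liability of a position, from the client's perspective."""
    p_disallowed = p_challenge * p_lose_if_challenged
    return (p_disallowed * cost_if_disallowed
            + (1 - p_disallowed) * cost_if_allowed)

# Suppose a good outcome makes a challenge less likely (20% vs. 50%):
base = expected_liability(100_000, 40_000, 0.50, 0.60)  # roughly 58,000
good = expected_liability(100_000, 40_000, 0.20, 0.60)  # roughly 47,200
print(base, good)
```

From the client's value function, the outcome is decision-relevant because it changes this expected cost; from the taxing authority's perspective the same calculation has no normative standing, which is why the bias question turns on whose perspective the evaluator adopts.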


In this context, the question of whether incorporation of outcome information into judgment represents bias may depend on the perspective adopted.

2.6. Is it necessary to demonstrate bias?

Although outcome effect research in accounting has generally sought to demonstrate that outcome effects bias evaluations, this may not be necessary to motivate research on this topic. Scholars in both psychology and accounting argue that it is worthwhile to study and understand the impact of outcomes on judgment, regardless of whether one can conclusively determine that these effects ultimately bias judgment (Lipe, 1993, 2008; Lipshitz, 1995). Lipe (2008) argues that a preoccupation with judgment biases constrains the types of decisions that can be studied to those that can be shown to have a normatively correct answer. This arbitrary criterion excludes many of the important decisions that accountants face on a daily basis without such clear-cut normative criteria. She also argues that accounting researchers should seek to understand the impact of various types of information on judgment and the judgment process, because understanding how processes work is one of the fundamental goals of social science. From a practical perspective, this understanding can be used to design performance evaluation systems and procedures that address specific contextual goals. For instance, the ultimate goal of many performance evaluation systems is to align the agent's incentives with those of the principal. Understanding how information affects evaluator judgment is the first step toward understanding the agent's incentives related to managing this information.

3. Underlying causes of the outcome effect

Prior research has considered a number of cognitive and motivational causes of the outcome effect. These explanations are likely context-specific and may operate jointly (Baron & Hershey, 1988; Hawkins & Hastie, 1990). The following sections discuss each cause.

3.1. Cognitive causes

Initial work in psychology posited several cognitive explanations for the hindsight and outcome effects (Fischhoff, 1975; see Hawkins & Hastie, 1990, for a review). Of these, cognitive reconstruction (e.g., backwards processing) has received the strongest support in the accounting literature (e.g., Brown & Solomon, 1993), and subsequent work in accounting has primarily relied on this explanation.8,9 Cognitive reconstruction can be described as follows: evaluators who receive outcome knowledge start with the outcome and work backward, attempting to construct the causal links leading to it. Their perceptions of decisions and events change to align with actual outcomes, and they use outcome knowledge as a cue to recall information about the actions and factors that led to the outcome. The backwards nature of this process increases the ex post salience of outcome-congruent, decision-relevant factors. In contrast, information that does not align with the outcome is perceived to be less important (Baron & Hershey, 1988; Charron & Lowe, 2008; Wermert, 2002). This alters the evaluator's existing knowledge structure such that the outcome of the decision appears to have been a foregone conclusion. When the outcome is negative, the evaluatee is held responsible for failing to foresee the "obvious" consequences of the decision.

Mertins and Long (2012) determined that in the vast majority of outcome effect studies in accounting, outcome information was presented immediately before the performance evaluation was solicited, under conditions favoring a recency effect. When the information presentation order was reversed under similar conditions, the outcome effect was not observed. These findings demonstrate that the relative salience of outcome information, rather than the simple possession of outcome information, is also important (consistent with Baron and Hershey, 1988).
8 See Table 1 in Mertins and Long (2012) for a classification of each outcome effect study in accounting by the underlying theoretical explanation for the outcome effect.
9 Although cognitive reconstruction is the dominant cognitive explanation for the outcome effect in the literature, a number of studies caution that several other cognitive or motivational factors may also work in concert to produce the outcome effect in various contexts. See Hawkins and Hastie (1990) for a thorough discussion of these considerations.


3.2. Motivational causes

Prior research has also examined motivational explanations for the outcome effect. While not as strongly supported by prior research as cognitive causes, these explanations do account for the phenomenon in certain contexts (Hawkins & Hastie, 1990). They include the extent to which individuals are motivated: (1) to exert sufficient cognitive effort to perform the task appropriately (e.g., Fischhoff, 1977; Hell, Gigerenzer, Gauggel, Mall, & Mueller, 1988), (2) by self-serving biases (e.g., Brown & Solomon, 1993; Hawkins & Hastie, 1990), and (3) to assign blame (Kadous, 2000, 2001).

Within accounting, Brown and Solomon (1993) examines two motivational causes. The "self-enhancing motive" suggests that a manager will react to an outcome as if s/he always expected that outcome to occur (Campbell & Tesser, 1983), motivated by the preservation of self-image and a desire to improve how others appraise them (Ross & Sicoly, 1982). "Escalation of commitment" posits that an evaluator appraises an evaluatee whom they hired more positively (Bazerman, Beekun, & Schoorman, 1982), motivated by the desire to justify their hiring decision.

Negative affect has also been shown to explain the outcome effect in the context of audit litigation, which involves negative outcomes: losses to the plaintiff resulting from audit failure (Kadous, 2000, 2001). The outcome effect arises in this context because jurors interpret negative affect arising from knowledge of negative audit outcomes as evidence relevant to the assignment of auditor responsibility for those outcomes. Specifically, this negative affect is manifested as a need to lay blame, which motivates jurors to interpret ambiguous information in the plaintiff's favor and against the auditor, systematically and negatively affecting their evaluations of the auditor's performance. As described in Kadous (2000), this increases the salience of information cues that indicate poorer decision quality at the expense of cues that indicate sufficient decision quality. This explanation is tied to the cognitive information cue salience explanation described above.

Table 1 includes summaries of the accounting literature on this topic, including authors, independent and dependent variables, methodology, and primary contributions and results.

3.3. Directions for future research examining the underlying causes of the outcome effect

The cognitive and motivational mechanisms described above differ to some degree with respect to their underlying explanations for the outcome effect, yet they generally support the same expectations. Brown and Solomon's (1993) findings are consistent with psychology literature that finds stronger support for a cognitive rather than a motivational explanation for the outcome effect (Connolly & Bukszar, 1990; Hawkins & Hastie, 1990). However, there are contexts in which motivational factors play an important role in the existence of the outcome effect, and it is often difficult to disentangle the competing cognitive and motivational explanations from one another (Brown & Solomon, 1993; Hawkins & Hastie, 1990). This is primarily because judgment and appraisal processes involve cognition (i.e., they require human processing); yet cognition is often strongly affected by specific motivations (Brown & Solomon, 1993; Roch, 2005). Given the interrelated nature of the underlying causes, it is difficult to separate them into purely cognitive and motivational components. Hawkins and Hastie (1990) call for research to identify the conditions under which each of the observed explanations for the outcome effect is operational. Moreover, both Hawkins and Hastie (1990) and Clarkson, Emby, and Watt (2002) believe that many of the findings of the extant literature require the operation of multiple causes simultaneously.
Therefore, they call for the development of a better understanding of how these underlying causes interact. Future research that models the relationships between the causative factors identified above would help advance this research agenda. Brown and Solomon (1993) provides an example of how such factors can be examined jointly.

4. Manifestations of the outcome effect in the accounting literature

In addition to examining the underlying causes of the outcome effect, prior research in accounting has considered various manifestations of this phenomenon in a number of contexts. These manifestations result from the impact of various contextual factors on cognitive and motivational

L. Mertins et al. / Journal of Accounting Literature 31 (2013) 2–30


causes, including: (1) information cue salience, (2) cue interpretation/integration into a judgment, (3) evaluator involvement or agreement with, or observability of, the actor's decision making process, (4) outcome surprise, (5) outcome controllability, and (6) negative affect. We review this literature in the following sections. Table 1 also summarizes these studies.

4.1. Information cue salience

One cognitive explanation for the outcome effect is the increased salience of outcome-congruent information cues (Baron & Hershey, 1988). Researchers have identified factors that influence cue salience, including (1) performance evaluation system experience (Frederickson et al., 1999), (2) ambiguity tolerance (Lowe & Reckers, 1997), (3) information presentation format (Buchheit & Richardson, 2001; Mertins, 2010; Long, Mertins, & Vansant, 2013), and (4) evaluation time horizon (Mertins & Long, 2012). Several mitigation strategies used to reduce the outcome effect may also affect relative cue salience. These include (1) counter-explanation (Lowe & Reckers, 1994), (2) identifying over-reliance on outcome information as non-normative (Clarkson et al., 2002), and (3) stressing consequence severity (Clarkson et al., 2002).

Frederickson et al. (1999) examined the extent to which experience with a particular performance evaluation system (PES) scheme (outcome- vs. decision-based) affects the salience of outcome and decision quality information cues. They found that when evaluators have experience with an outcome-based (decision quality-based) PES, outcome (decision quality) information cues are more salient in their memory and knowledge structures. Cue salience affects evaluator focus. Thus, evaluators experienced with an outcome-based PES focus on outcomes, systematically influencing their overall assessment in the direction of the outcome.
Evaluators experienced with a decision-based PES focused on decision quality information, reducing the influence of outcome information on their assessments of performance. These effects were exacerbated by feedback frequency, which was positively related to the salience of PES-congruent information cues. When the PES was outcome-based, the magnitude of the outcome effect was amplified by more frequent feedback. The study explicitly demonstrated that the influence of experience on the outcome effect is determined by the nature of that experience (the form of the performance evaluation scheme and the frequency of feedback as experience was acquired). These findings provide support for a cognitive explanation for the existence of the outcome effect. While this study documented that the outcome effect was smaller when the evaluator had experience with a decision-based performance evaluation system, the authors noted that implementing a decision-based system can be costly and sometimes infeasible. Future research could explore the conditions under which the marginal benefits of a decision-based system outweigh the costs.

Lowe and Reckers (1997) also provide evidence that individual characteristics can affect the salience of certain information cues. They found that individuals who were intolerant of ambiguity were motivated to reduce ambiguity when outcomes were negative and the available evidence about decision quality was mixed. In these circumstances, evaluators focused on the negative outcome at the expense of other information about decision quality. Consequently, ambiguity-intolerant individuals were more susceptible to outcome effects. Individuals who were tolerant of ambiguity were more likely to consider all of the available information about decision quality, and were less susceptible to outcome effects.
Although not specifically identified as such by the authors, this finding provides evidence supporting a motivational explanation for the outcome effect: individuals were motivated to truncate their decision processes as a coping mechanism to reduce ambiguity. That is, individuals were not motivated to exert sufficient cognitive effort to avoid bias and perform the task appropriately (Fischhoff, 1977; Hell et al., 1988; Hawkins & Hastie, 1990). Future research can focus on ways in which (1) PESs can be designed to reduce the effort required to conduct the performance evaluation, or (2) incentives can be employed to motivate evaluators to exert sufficient cognitive effort.

A third factor that affects the relative salience of information cues is presentation format. This factor is particularly relevant in the accounting literature, which has considered the impact of presentation format on decision making (e.g., Cardinaels, 2008; Maines & McDaniel, 2000). The impact of presentation format on the outcome effect has been considered in two contexts: the choice of cost


reporting system (capacity cost vs. more traditional cost systems) (Buchheit & Richardson, 2001) and the type of performance evaluation system (formulaic vs. subjective) (Long et al., 2013; Mertins, 2010). In both cases, the relative salience of outcome and decision quality information depends on the information presentation format. When the format increased the salience of outcome information (decision quality information), the outcome effect was present (not observed) and/or its magnitude was greater (smaller). Future research can develop information presentation formats that encourage the evaluator to properly consider and weight both outcome and decision quality information cues.

A fourth factor that affects the relative salience of information cues is evaluation time horizon (Mertins & Long, 2012). The time over which relevant information is encountered affects the ease with which the evaluator can access various information cues in memory. When outcome information is encountered at the end of a lengthy evaluation time horizon, it is more easily accessed in memory relative to other information cues, and is thus highly salient. Under these conditions, evaluation time horizon is positively related to the magnitude of the outcome effect (Mertins & Long, 2012). Future research can develop decision aids that make it easier for the evaluator to access relevant information cues in memory, and that help weight each cue appropriately.

Finally, Lowe and Reckers (1994)10 and Clarkson et al. (2002) employed mitigating strategies intended to affect relative cue salience through instructions: (1) to engage in counter-explanation (identifying alternative outcomes (Kennedy, 1995; Koonce, 1992)), (2) that stressed the non-normative aspects of relying solely on outcomes, and (3) that stressed the severity of the consequences resulting from a negative evaluation.
The first strategy was intended to reduce the perceived inevitability of the outcome, allowing the evaluator to reconstruct ex ante outcome probabilities without being overly influenced by outcome information. The second strategy was intended to remind the evaluator to evaluate the evidence more thoroughly, reducing reliance on outcome information. The third strategy was intended to encourage the evaluator to examine alternative perspectives. Each strategy effectively reduced the magnitude of the outcome effect. The authors attribute their findings to the increased (reduced) salience of decision quality (outcome-consistent) information cues. They posit that the change in the salience of information cues inhibited "creeping determinism" (Fischhoff, 1975), reducing the perceived inevitability of the outcome (interfering with cognitive reconstruction). This supports a cognitive explanation of the outcome effect. However, Anderson et al. (1997) did not find that the counter-explanation strategy effectively mitigated the impact of outcome information on judges' evaluations of auditor liability. They attribute these findings to differences between the ways in which judges and jurors process information [p. 23]. Furthermore, Grenier et al. (2008) provide evidence that the counter-explanation strategy can result in reverse outcome bias (e.g., evaluators under-weight outcome information). Therefore, the overall effectiveness and desirability of this strategy remains unclear. Future research can focus on debiasing strategies that do not carry unintended consequences.

4.2. Cue interpretation and integration into a judgment

A second cognitive explanation for the outcome effect is the impact of outcome information on the interpretation and integration of information cues into an overall judgment about the evaluatee's decision quality (Hawkins & Hastie, 1990). Several studies in the accounting literature have considered the extent to which various factors affect these processes.
These factors include (1) framing (Jones & Chen, 2005; Lipe, 1993), (2) cue ambiguity (Lowe & Reckers, 1997; Wermert, 2002), (3) evidence weighting (Wermert, 2002), and (4) probability weighting (Grenier et al., 2008; Peecher & Piercey, 2008).

10 Although this paper is couched in terms of hindsight bias rather than the outcome effect, it does consider the impact of outcome knowledge on evaluation, and is therefore more properly categorized as an outcome effect paper.

Two studies have considered the extent to which framing influences the existence of the outcome effect in the context of individual performance evaluation (Jones & Chen, 2005; Lipe, 1993). In these studies, framing impacted the evaluator's perception of additional expenditures as costs or losses. In both cases, the decision maker considered whether to expend additional resources to investigate a


potentially problematic issue. When the investigation outcome was favorable (i.e., a problem was discovered), the expenditure was framed as a cost; when no issues were noted, the expenditure was framed as a loss. Although both "cost" and "loss" have negative connotations with respect to net overall performance, evaluators generally considered costs (losses) favorably (unfavorably) because costs are (losses are not) associated with benefits. These differences led to different interpretations of the appropriateness of the expenditure across frames, reflected in the overall performance evaluation. Evaluations were more positive in the cost frame than in the loss frame (although Jones and Chen (2005) provide some evidence that this effect may be mitigated by experience in the audit context). These findings suggest that both ex post outcome information and the framing of that information can influence managerial performance evaluation, supporting a cognitive explanation for the outcome effect. Future research on this topic can consider how other frames influence the outcome effect.

Several studies also provide evidence that individuals interpret evidence differentially, in a manner consistent with the outcome (Lowe & Reckers, 1997; Wermert, 2002). These studies provided evaluators with a mechanism to support outcome-consistent evaluations: cues that were ambiguous with respect to decision quality. Evaluators who were not provided with outcome information interpreted ambiguous information cues as positive or neutral signals of audit quality. However, when outcomes were negative, evaluators interpreted the ambiguous cues as evidence of negative audit quality, and they did not consider alternative explanations for the negative outcome (Janoff-Bulman, Timko, & Carli, 1985; Lowe & Reckers, 1997; Wermert, 2002). These findings provide support for a cognitive reconstruction explanation of the outcome effect.
Future research can consider ways in which evaluators can be sensitized to the implications of ambiguous information cues through the creation of decision aids or debiasing instructions.

Wermert (2002) also considered additional factors that could cause differential interpretation and/or integration of information cues: the recall and weighting of information cues. He considered whether memory plays a role in the outcome effect and found that recall did not influence the existence of the outcome effect. However, along with interpretation, cue weighting did. When jurors had negative outcome information (e.g., management fraud was identified), they weighted information that aligned with the negative outcome more heavily than information that did not. These findings support a cognitive reconstruction explanation for the outcome effect. Future research can consider whether it is possible to prescribe normative cue weights for various pieces of information, or to implement performance evaluation systems that explicitly define appropriate weights for outcome and decision process information.

Finally, prior research has considered the extent to which individuals' tendencies to misweight probabilities can affect the integration of information cues (specifically, the integration of ex post outcome knowledge), resulting in outcome and reverse outcome biases (Grenier et al., 2008; Peecher & Piercey, 2008). Peecher and Piercey (2008) use the probability weighting function of Prospect Theory to predict boundary conditions on the outcome effect based on an individual's Bayesian probability, because individuals tend to over-weight probabilities below approximately 40% and under-weight probabilities above that level. They observed an outcome effect when the individual's Bayesian probability of auditor negligence was relatively low (i.e., the evaluator over-reacted to outcome information).
However, when the Bayesian probability of auditor negligence was relatively high, they observed a reverse outcome bias (i.e., the evaluator under-reacted to outcome information). These findings indicate that outcome-induced biases are more complex than previously thought (e.g., rather than eliminating problematic bias, the debiasing mechanisms previously employed in the literature may cause reverse outcome bias). Therefore, one must not over-generalize the results of prior research, which may benefit from a re-examination that accounts for reverse outcome effects. Grenier et al. (2008) introduced a debiasing mechanism to address individuals' tendency to misweight probabilities: re-weighting. This intervention involves providing participants with information "about how and when evaluators tend to over- and under-react to outcome information" [p. 4]. After receiving the re-weighting instructions, individuals revised their initial probability judgments. As expected, this mechanism made evaluators aware of their vulnerability to distorted probabilities, and they exhibited smaller outcome and reverse outcome effects.
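The probability weighting mechanism behind these predictions can be made concrete. The sketch below uses the one-parameter weighting function of Tversky and Kahneman (1992), a common parameterization of the Prospect Theory curve; both the functional form and the parameter value (gamma = 0.61, Tversky and Kahneman's median estimate) are illustrative assumptions on our part, not values taken from the studies reviewed here. With this gamma, the curve crosses the identity line in the 0.3–0.4 range, broadly consistent with the approximately 40% threshold described above.

```python
# Illustrative sketch (not from the reviewed studies): the one-parameter
# probability weighting function of Tversky and Kahneman (1992),
#   w(p) = p^g / (p^g + (1 - p)^g)^(1/g),
# with gamma = 0.61 (T&K's median estimate) assumed for illustration.

def weight(p: float, gamma: float = 0.61) -> float:
    """Decision weight assigned to an objective probability p."""
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

# Low probabilities are over-weighted, w(p) > p ...
assert weight(0.10) > 0.10
# ... while high probabilities are under-weighted, w(p) < p,
# with the crossover near 0.3-0.4 for this gamma.
assert weight(0.70) < 0.70
```

Under this stylized view, an evaluator holding a low Bayesian probability of negligence over-reacts to outcome information (an outcome effect), while one holding a high probability under-reacts (a reverse outcome bias), and debiasing amounts to nudging judgments back toward the identity line.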


Lipe (2008), discussing the Peecher and Piercey (2008) study, posed the following question, which is an excellent starting point for future research in this area: "...the Bayesian updating literature often reports that people respond too conservatively to data... In contrast, Peecher and Piercey (2008) note that outcome bias is a situation in which humans' response to a datum (i.e., the outcome) is too extreme. This interesting difference has not been explored in the literature. Why is outcome information weighted more heavily in belief updating while other information is often underweighted?" (p. 277)

Furthermore, while both reverse outcome bias studies primarily base their theory development on the findings in the outcome effect literature, they are best described as mixed studies of hindsight bias and outcome effects. The participants in these experiments provided both probability judgments and performance evaluation judgments. Probability judgments are more common in hindsight bias studies, which examine how a probability judgment changes after an individual acquires outcome knowledge. Future research could investigate whether the reverse outcome bias also exists in a pure performance evaluation setting. Finally, these studies were conducted in the context of audit litigation, a setting in which outcomes are generally negative. Future research could explore whether reverse outcome bias occurs when outcomes are positive.

4.3. Involvement with, agreement with, and observability of the decision making process

Outcome effect research in accounting has also considered the impact of the evaluator's involvement with, agreement with, and/or the observability of, the evaluatee's decision making process. Brown and Solomon (1987) demonstrated that the evaluator's ex ante involvement with the actor's decision making process can mitigate the outcome effect.
They attributed their findings to reduced reliance on outcome information when the evaluator engages in cognitive reconstruction (i.e., ex ante involvement makes it easier for the evaluator to consider other possible outcomes, and reduces the perceived inevitability of the actual outcome). However, several other explanations for the outcome effect may also operate in this context. For instance, an evaluator who was involved in the decision making process may be motivated to judge decision quality more highly in order to enhance themselves, both in their own eyes and in the eyes of others (Brown & Solomon, 1993), consistent with escalation of commitment (Staw, 1976). In addition, involvement with the decision making process may reduce information asymmetry, thereby reducing the diagnosticity of the outcome information with respect to decision quality. Several studies provide some evidence to support the latter explanation (Fisher & Selling, 1993; Tan & Lipe, 1997). These researchers found that in certain circumstances, the outcome effect was reduced (although not eliminated) when the evaluator observed the decision making process, reducing the diagnosticity of outcomes with respect to performance, decreasing the evaluator’s outcome focus, and reducing the magnitude of the outcome effect. Brown and Solomon (1993) also considered the impact of the agreement between the evaluator’s ex ante recommendation of a course of action and the evaluatee’s ultimate decision. Outcome effects were (were not) observed when the evaluatee’s decision was not (was) congruent with the evaluator’s recommendation. The authors attribute these results to cognitive reconstruction, explaining that the formation of a recommendation creates a well-developed mental representation of the support for that course of action relative to other, non-recommended courses of action. 
When the evaluatee’s decision is congruent with the recommended course of action, the evaluator is able to recall the original mental representation of the decision leading to their recommendation, rather than reconstruct a new mental representation of the original decision, which can be heavily influenced by outcome information. These results also provide support for motivational explanations for the outcome effect. When the parties agreed on the appropriate course of action ex ante, the evaluator was motivated to appraise the evaluatee more favorably. One critical feature of the actor’s ex ante decision making environment is the extent to which possible outcomes are subject to uncertainty. A number of studies have constructed experimental contexts in which outcome probabilities are reduced to point estimates (e.g., Frederickson et al., 1999). This presentation is externally valid when the real world context involves outcomes with ex


ante known probabilities (such as a die roll or coin flip). However, in many contexts, the uncertainty surrounding the probabilities associated with possible outcomes is better represented as a range; presenting it as such provides more accurate information about the conditions under which the actor made the decision. Ghosh and Ray (2000) provide evidence that giving the evaluator information about outcome probabilities as a range reduces the outcome effect. They attribute their findings to the evaluator's utilization of the simulation heuristic, which is altered by more accurate knowledge of the probability data that the evaluatee faced during the decision making process (Kahneman & Tversky, 1982). This information allows the evaluator to more easily consider alternative outcomes that may have occurred as a result of the actor's decision, reducing the perceived inevitability of the actual outcome. These findings provide support for a cognitive reconstruction explanation for the outcome effect.

Given that the outcome effect was reduced (but not eliminated) when the evaluator was provided with additional information about the actor's decision making process, there is some evidence that the outcome effect is not simply a response to a lack of information (Tan & Lipe, 1997). However, this does not necessarily indicate that the consideration of outcome information is representative of bias; in spite of the receipt of additional information about the actor's decision process, the evaluator may continue to believe that the actor retained some relevant private information, rendering outcome information diagnostic with respect to decision quality, at least to some extent.

Collectively, these findings suggest that there are future research opportunities related to the impact of performance evaluation system design and implementation on the outcome effect.
From the design perspective, future research can consider how various information presentation formats can effectively communicate the conditions under which the evaluatee was required to make the decision, reducing information asymmetry between the actor and the judge. From the implementation perspective, there are both pros and cons associated with involving the evaluator in the evaluatee's decision process. This involvement reduces information asymmetry, allowing the evaluator to provide a less noisy performance evaluation based on decision quality factors. However, involvement may also compromise objectivity by motivating the evaluator to assess performance more positively, particularly when the evaluatee's decision is congruent with the evaluator's recommendation. Future research can explore the impact of various evaluator involvement schemes on the outcome effect, and note the costs and benefits of each approach. It can also explore related factors, such as the impact of a decentralized (and thus more visible) work environment, evaluator/evaluatee relationships, and the evaluatee's performance relative to peers.

4.4. Outcome surprise

Prior research has considered how outcome surprise affects the existence and magnitude of the outcome effect. Findings in this area are mixed. Two studies found that as outcome surprise increases, the magnitude of the outcome effect decreases (i.e., Brown & Solomon, 1987; Charron & Lowe, 2008); two other studies found that as outcome surprise increases, the magnitude of the outcome effect increases (i.e., Fisher & Selling, 1993; Emby et al., 2002). These studies were conducted in the contexts of (1) capital budgeting (Brown & Solomon, 1987), (2) audit litigation (Charron & Lowe, 2008), (3) managerial performance appraisal (Fisher & Selling, 1993), and (4) auditor peer review (Emby et al., 2002).
Outcome surprise was operationalized as: (1) the decision maker’s perceived responsibility for anticipating the outcome (Brown & Solomon, 1987), (2) the extent to which subsequent events were foreseeable at the time the decision was made (Charron & Lowe, 2008), (3) the evaluator’s ability to predict the outcome (Fisher & Selling, 1993), and (4) negative outcomes in a going concern scenario (Emby et al., 2002). Brown and Solomon (1987) found that the extent of the decision maker’s perceived responsibility for the outcome was positively related to the magnitude of the outcome effect, relying on attribution theory and Zuckerman’s (1979) findings to explain their results.11 They argue that when an unexpected outcome occurs, the evaluatee is perceived to be less responsible for anticipating the

11 Zuckerman (1979) found that unexpected outcomes are often credited more to luck/chance than to the ability to reason.

outcome. The reduction in responsibility reduces the diagnosticity of outcome information with respect to decision quality, thus mitigating the outcome effect.

In an audit litigation setting (in which outcomes are generally negative), Charron and Lowe (2008) found that judges consistently exhibited significant outcome effects when the outcome was foreseeable; when the outcome was unforeseeable, the magnitude of these outcome effects was reduced.12 They relied on cognitive reconstruction to account for their results. When judges faced only a moderate outcome surprise, they worked backwards from the negative outcome to identify outcome-consistent antecedent factors, and blamed the auditor for not foreseeing the negative outcome. Given the ease with which they were able to engage in cognitive reconstruction, they did not exert additional cognitive effort to search for alternative explanations, and their evaluations were significantly influenced by outcome information. However, as the magnitude of the outcome surprise increased, judges were unable to identify antecedent factors that were consistent with the negative outcome. This required them to exert additional cognitive effort to consider ex ante audit decisions and procedures (rather than just the decisions and procedures that correspond to the negative outcome), increasing the salience of decision quality information relative to outcome information, and reducing or eliminating the outcome effect.

Although they do not specifically consider the impact of outcome surprise on the outcome effect, Fisher and Selling (1993) do provide some evidence about the phenomenon. They found that the outcome effect did not (did) occur when the evaluator correctly (incorrectly) forecasted the outcome ex ante (pp. 70–71). Their results differ from those described above in that when the outcome was a "surprise" (i.e., different from the predicted outcome), the outcome effect was larger.
They attribute their finding to evaluator perceptions that outcomes contain more information (and are thus more diagnostic with respect to decision quality) when evaluators were unable to predict the actual outcome.

Finally, Emby et al. (2002) tested the impact of outcome knowledge on audit partners' peer evaluation judgments. When going concern outcomes were positive, outcome effects were not observed. However, when going concern outcomes were negative, substantial outcome effects occurred. In this context, the negative outcome was viewed as a surprise. The authors attribute these findings to the fact that a negative outcome indicates that the auditor should have modified her/his ex ante judgment in anticipation of the outcome. Additionally, the negative outcome in this case was considered an "event" that was not consistent with the evaluator's expectation. In contrast, a positive outcome was considered to be a "no surprise" outcome and a "nonevent." Since "individuals appear to have cognitive difficulty in processing nonevents" (Emby et al., 2002, p. 90), positive outcomes do not significantly influence an auditor's peer performance evaluation rating.

Given the conflicting evidence on the impact of outcome surprise, future research could focus on determining the conditions and contexts under which it increases and decreases the magnitude of the outcome effect.

4.5. Outcome controllability

Following Brown and Solomon's (1987) concept of decision maker responsibility for anticipating outcomes, a number of studies have considered the impact of outcome controllability on the magnitude of the outcome effect (e.g., Ghosh, 2005; Ghosh & Lusch, 2000; Tan & Lipe, 1997).13 Attribution theory holds that when the success or failure of an outcome is attributed to a person's internal factors (e.g., managerial skills [Kelley, 1967]), the decision maker should be held more responsible for the decisions leading to the outcome (Kelley & Michela, 1980). Research has also shown that an individual is held to be more responsible for a decision when the level of perceived personal control over that action is high (Schlenker, Britt, Pennington, Murphy, & Doherty, 1994).

12 Contrary to the findings related to the judge participants, outcome surprise did not affect the existence or magnitude of the outcome effect exhibited by the juror participants in this study. The authors attribute this finding to judges' sensitivity to the importance of reasonable causal reconstruction as suggested under relevant tort law, whereas jurors tend to be relatively insensitive to realistic causality, consistent with a strict liability regime.

13 Outcome controllability differs from outcome surprise. Controllability is concerned with the evaluatee's responsibility for the outcome itself; outcome surprise is concerned with the evaluatee's responsibility for predicting the outcome.


In the performance evaluation setting, this means that outcome information is diagnostic of decision quality when the evaluatee can exert significant control over the outcome; however, evaluatees should not be held responsible for uncontrollable outcomes, and outcome information should not be used to evaluate decision quality in these circumstances (Tan & Lipe, 1997).

The dichotomous distinction between controllable and uncontrollable outcomes is operationalized easily in the laboratory. However, the extent to which laboratory findings generalize to the real world is unclear, because in most cases, evaluatees exert only partial control over outcomes. Under these conditions, it is difficult to determine the extent to which the evaluator should consider outcomes to be diagnostic of decision quality. Nevertheless, this factor has received extensive attention in the accounting literature, perhaps because it remains of interest whether evaluators consider controllability information at all, even if it is difficult to definitively determine whether they use that information normatively.

The extant literature generally provides evidence that evaluators incorporate information about controllability into their performance assessments and that, as predicted, controllability is positively related to the magnitude of the outcome effect, both in the lab (De Villiers, 2002; Ghosh, 2005; Rose, 2004; Tan & Lipe, 1997) and in a corporate setting (Ghosh & Lusch, 2000). However, there are some subtle nuances associated with these findings.
Although Ghosh and Lusch (2000) documented that evaluators were sensitive to a store manager's control over specific outcome determinants (e.g., inventory, advertising) and lack of control over environmental determinants (e.g., store saturation, economic health of the trade area), evaluators did not consider the controllability of central management determinants (such as the total square footage of the store or the size of the market). The authors attribute these findings to the fact that the assignment of managers to stores is usually not random. The quality of a manager's prior performance can lead to a transfer to a store with a certain location and/or store size. Hence, the location and the size of a store may provide relevant information about the manager's past performance.

Both Rose (2004) and Tan and Lipe (1997) documented that evaluators may consider controllability information asymmetrically. That is, when outcomes were negative and uncontrollable factors were partly responsible for those outcomes, evaluators "discounted" the negative outcomes and adjusted their performance assessments upwards. However, when outcomes were positive, evaluators did not "discount" the positive outcome information. Rose (2004) also considered the impact of cognitive load on the consideration of controllability, providing evidence that under high cognitive load, evaluators ignore controllability information. He attributed these findings to evaluators' limited cognitive resources (i.e., when cognitive load was high, evaluators did not have the cognitive capacity required to integrate situational factors and adjust their initial performance assessment, which was based on the financial outcomes).

Finally, Ghosh (2005) provided evidence that measure controllability affects the magnitude of the outcome effect.
Specifically, he found that the outcome effect was larger when evaluations were based on non-financial measures than when they were based on financial measures, primarily because the non-financial measures used in his study were perceived as more controllable. These findings should nevertheless be interpreted cautiously, as controllability is most likely measure and context specific, rather than strictly associated with the type of measure (financial vs. non-financial). In addition, he found that requiring the evaluator to assess measure controllability prior to assessing performance effectively mitigated the outcome effect. This finding suggests that when evaluators specifically consider controllability, they incorporate it into their evaluations; in essence, the requirement increases the relative salience of controllability information.

The outcome controllability literature discussed above suggests that evaluatees are held partly accountable for outcome measures that they cannot control. Under responsibility accounting, uncontrollable measures should not be part of managerial performance evaluations, yet most performance evaluation systems include measures that are at least partly uncontrollable. Future research could examine whether measure controllability is appropriately reflected in existing performance evaluation systems.

Perhaps the biggest opportunity for future research on this topic relates to the way in which evaluators use outcome information when the evaluatee exercises partial control over the outcome. Although it may be difficult to determine the extent to which the evaluator’s use of outcome information is normative in these circumstances, the first step toward answering this question is to understand how evaluators use this information. Furthermore, as Lipe (1993) argues, obtaining this understanding is valuable for its own sake, even if we cannot make unequivocal statements about normativity.

4.6. The impact of negative affect

Finally, negative affect has also been identified as an underlying cause of the outcome effect, particularly in the context of audit litigation (Kadous, 2001). Given the severity of the consequences facing auditors who are found guilty of negligence, it is important to ensure that judges and juries evaluate auditors objectively. The applicable legal standards related to auditor negligence consider the level of audit quality provided by the auditor; they do not contemplate audit outcomes (Cornell, Warne, & Eining, 2009; Kadous, 2000). Yet these outcomes have been shown to foster negative affect toward auditors, manifested in harsher evaluations of auditor negligence (Kadous, 2001).

Prior accounting research has focused on reducing the impact of negative affect on evaluations of auditor negligence, and has considered five mitigating strategies that may reduce or redirect negative affect: an attribution instruction (Kadous, 2001), apology (Cornell et al., 2009), first-person justification (Cornell et al., 2009), redirecting the evaluator’s attention to non-plaintiff stakeholders and the consequences arising from a Type II error (Anderson et al., 1997), and stressing consequence severity (Clarkson et al., 2002).14

Kadous (2001) demonstrated that an attribution instruction could reduce jurors’ negative affect and, consequently, the outcome effect. Specifically, her attribution instruction informed participants that real jurors often feel anxious and tense, and that they might feel the same way in the role of the juror. The purpose was to direct subjects’ affect toward the juror role (a “neutral source”) rather than toward the assessment process.
It successfully reduced both jurors’ negative affect and the outcome effect. However, the attribution instruction may carry some unexpected costs: Grenier et al. (2008) provide evidence that it may lead to reverse outcome bias (at least when audit quality is high); that is, evaluators may under-react to outcome information.

Cornell et al. (2009) examined two interventions intended to bifurcate the defendant’s actions from the negative consequences arising from audit failure: apology and first-person justification. Apology allows the auditor to communicate regret regarding negative consequences without admitting guilt; first-person justification allows the auditor to explain how and why various audit choices were made, and why those choices were appropriate given the available information. Both tactics effectively reduced negligence verdicts against the auditor. Apology reduced jurors’ need to assign blame to the auditor; first-person justification altered jurors’ perceptions of the auditor’s professional responsibilities and of the reasonableness of the auditor’s decisions given the available information. These findings provide support for both cognitive and motivational explanations of the outcome effect, and demonstrate that both causes may be operational in a single context.

In addition, although not explicitly considered by the authors, two studies employed mitigating strategies that may have directly reduced or redirected negative affect. In the context of hindsight bias, Anderson et al.
(1997) provide evidence that directing the evaluator’s attention to non-plaintiff stakeholders (e.g., stockholders, creditors, and employees), and to the consequences these parties suffer when a qualified audit opinion is issued even though the financial statements are in fact free from material misstatement, can reduce the influence of outcome information on judgment.15 Highlighting the auditor’s responsibilities to multiple stakeholders may allow the evaluator to better understand the auditor’s ex ante decision-making process and environment, and may reduce negative affect by demonstrating that severe, negative consequences could arise regardless of the auditor’s decision about issuing a going concern opinion.

Clarkson et al.’s (2002) instruction highlighting the severity of the consequences for an auditor who is judged negligent may also have served to reduce the impact of negative affect. Given that negative affect has been shown to be positively correlated with the severity of the consequences accruing to the plaintiff (Kadous, 2000), jurors might “net” the consequences for the plaintiff arising from audit failure against the consequences for the defendant arising from a negligence verdict, reducing the negative affect directed toward the auditor. Because Anderson et al. (1997) and Clarkson et al. (2002) did not explicitly consider the impact of their manipulations on negative affect, future research can explore this relationship. Grenier et al. (2008) also provide evidence that some manipulations may give rise to reverse outcome bias; future research can consider whether that is the case for the interventions described above.

14 A number of other studies have been conducted in the context of audit litigation, all of which include scenarios in which outcome information is available to the evaluator (e.g., a material financial misstatement occurred, allegedly as the result of audit failure). However, these studies have not explicitly considered the impact of outcome information on juror negligence verdicts. Furthermore, their findings are consistent with those discussed in this section, although they introduce several moderating factors that affect evaluations of responsibility and the desire to blame and punish, such as client importance/auditor independence (Brandon & Mueller, 2006) or credibility (Grenier, Pomeroy, & Reffett, 2012). Given the extent of this literature, its tangential relationship to the outcome effect literature, and the fact that it does not offer significant incremental contribution to this synthesis beyond that provided by the papers already included, we exclude it from our discussion.

15 This paper is couched in terms of hindsight bias, yet it considers the impact of outcome knowledge on evaluation.

5. Additional paths for future research

In addition to the future research opportunities noted above, several other important topics with practical implications require additional investigation. These include (1) the generalizability of prior findings, (2) the unintended impact of debiasing instructions and boundary conditions on the outcome effect, (3) single- vs. multi-outcome/period evaluation, (4) the influence of the magnitude of the difference between target and outcome, (5) the development of a more fully informed cost/benefit analysis of outcome effects, and (6) the impact of memory on the performance evaluation process. The following sections discuss these topics in more depth.

Many of the studies reviewed in this synthesis examine the occurrence of the outcome effect in very specialized situations or environments, abstracted from many of the important contextual factors found in practice.
This limits generalizability. Therefore, one of the most fruitful areas for future research is to address the disconnect between the experimental settings examined by the extant literature and their real-world analogs. Although there are inherent limitations to experimental research, and it is appropriate for initial work in an area to focus on basic questions that can be simplified for examination in the lab, this research stream has now reached a point at which real value can be derived by focusing on external validity. Future research could (1) employ richer experimental materials that incorporate important contextual variables, (2) conduct more field studies, (3) use analytical methods to develop additional theoretical expectations about motivational causes and incentives, and (4) employ archival methods to draw more generalizable conclusions about the outcome effect in practice.

In addition, future research could extend prior findings to other settings and contexts. For instance, most prior research has been conducted in the context of a uni-dimensional, single-measure evaluation system, yet modern performance evaluation systems provide detailed and multifaceted performance information. These systems may require evaluatees to formally document their decision processes, rate their own performance, and justify their ratings, reducing the degree of information asymmetry between the actor and the judge and reducing the evaluator’s cost of considering decision quality information. Evaluators may also be asked to justify their performance assessments, documenting how and why they weighted outcome and decision quality information. Future research could examine the influence of outcome information in these settings. Furthermore, the Kadous (2001) and Clarkson et al. (2002) studies demonstrate that relatively simple instructions can influence the magnitude of the outcome effect.
More research is required to determine the appropriateness and feasibility of the debiasing instructions considered by prior research, particularly given concerns about reverse outcome bias (Grenier et al., 2008; Peecher & Piercey, 2008). Future research could explore which types of instruction achieve the goal of juror objectivity most effectively, and identify those that result in reverse outcome effects. Future research could also consider additional factors that give rise to boundary conditions on the outcome effect.

In addition, in many instances a manager will not be evaluated on a single decision. Rather, the manager will be evaluated on an outcome that arises as the result of a number of decisions made over the course of the evaluation period. Although outcomes may be noisy measures of decision quality in a single-shot setting, they may provide more accurate measures of decision quality over time as the outcomes of multiple decision periods become available for evaluation. If a manager consistently makes “good” decisions, the outcome trend should be positive over time, rendering multiple outcomes more diagnostic with respect to decision quality. Prior research has focused on high-stakes, single-shot decision evaluations; future research can consider whether its findings apply to settings in which the outcome results from a number of smaller-stakes decisions and/or to evaluations for which multiple outcomes are available.

Prior research has also failed to consider whether the magnitude by which an evaluatee exceeds or falls short of an outcome goal influences the magnitude of the outcome effect, or whether evaluatees are aware of the impact of outcome information on their performance evaluations. This raises additional questions related to evaluatee incentives with respect to the management of outcomes. Researchers might look to the earnings management literature to develop theoretical expectations about appropriate and inappropriate evaluatee incentives under these scenarios.

An additional path for future research relates to developing a more informed cost/benefit perspective on outcome effects. Extant research generally focuses on situations in which outcome knowledge biases performance assessments and/or on outcome effect mitigation strategies. Future research could explore when it is desirable to focus on outcomes in performance evaluations. In addition, evaluators may face significant time pressure, and outcomes may provide them with an efficient (although noisy) means of evaluating performance. Previous research has found that managers confronted with time pressure focus on diagnostic information and ignore nondiagnostic information (Glover, 1997). When outcomes are diagnostic with respect to decision quality, do evaluators take advantage of these efficiencies?
Do they over-rely on this heuristic when outcomes are not diagnostic with respect to decision quality? To determine whether time pressure plays a role in the occurrence of the outcome effect, future research can determine how much time evaluators have to make performance assessments and consider the impact of time pressure on the outcome effect.

Finally, the experimental work conducted to date has largely ignored the longitudinal nature of many performance evaluation processes and the role of memory in the evaluation process (e.g., the acquisition of information over a lengthy time horizon); exceptions include Brown and Solomon (1987) and Mertins and Long (2012). However, these factors play an important role in the evaluation process in practice, and future research should incorporate them in order to enhance external validity.

6. Conclusion

Prior research makes a compelling case that it is important for researchers to develop a thorough understanding of the ways in which outcome information affects judgment, and to determine when and how this influence compromises decision quality. In the latter case, it is also crucial for researchers to consider ways in which the resulting bias can be mitigated. The accounting environment provides an ideal context in which to examine the outcome effect, both because of the role of historical accounting information in performance evaluation and because of the number of diverse and unique settings in which accounting professionals are evaluated. Despite the attention that the outcome effect has garnered in the accounting literature to date, this literature has focused primarily on the outcome effect as bias and on ways to mitigate it.
Although this paper identifies a number of interesting and important questions on this topic that remain to be explored, it also highlights opportunities for accounting researchers to contribute to our theoretical understanding of (1) the ways in which outcome information influences judgment (regardless of whether this influence is considered normative), and (2) how the underlying causes of the outcome effect operate, singly and jointly, to bring it about. Furthermore, it notes that future research can contribute to practice by (1) determining how to encourage evaluators to retain and incorporate the relevant information conveyed by outcomes while avoiding unwanted and unintended bias, and (2) enhancing external validity to increase the generalizability of experimental results to scenarios encountered in practice. Given the critical nature of performance evaluation in the accounting context, it is imperative for future research to address these questions. We hope that this review of the literature facilitates this process.
Acknowledgement

The authors wish to acknowledge Dr. Brian Vansant, an anonymous referee, and the editor for helpful comments and suggestions.

References

Anderson, J. C., Jennings, M. M., Lowe, D. J., & Reckers, P. M. J. (1997). The mitigation of hindsight bias in judges’ evaluation of auditor decisions. Auditing: A Journal of Practice and Theory, 16(2), 20–39.
Banker, R. D., & Datar, S. M. (1989). Sensitivity, precision, and linear aggregation of signals for performance evaluation. Journal of Accounting Research, 27(1), 21–39.
Baron, J., & Hershey, J. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology, 54(4), 569–579.
Bazerman, M., Beekun, R., & Schoorman, F. (1982). Performance evaluation in a dynamic context: A laboratory study of the impact of a prior commitment to the ratee. Journal of Applied Psychology, 873–876.
Brandon, D. M., & Mueller, J. M. (2006). The influence of client importance on juror evaluations of auditor liability. Behavioral Research in Accounting, 18(1), 1–18.
Brown, C. E., & Solomon, I. (1987). Effects of outcome information on evaluations of managerial decisions. The Accounting Review, 62(3), 564–577.
Brown, C. E., & Solomon, I. (1993). An experimental investigation of explanations for outcome effects on appraisals of capital-budgeting decisions. Contemporary Accounting Research, 10(1), 83–111.
Buchheit, S., & Richardson, B. (2001). Outcome effects and capacity cost reporting. Managerial Finance, 27(5), 3–16.
Campbell, J. D., & Tesser, A. (1983). Motivational interpretation of hindsight bias: An individual difference analysis. Journal of Personality, 51, 605–620.
Cardinaels, E. (2008). The interplay between cost accounting knowledge and presentation formats in cost-based decision-making. Accounting, Organizations and Society, 33, 582–602.
Causey, D. Y., Jr., & Causey, S. A. (1991). Duties and liabilities of public accountants (4th ed.). Starkville, MS: Accountant’s Press.
Charron, K. F., & Lowe, D. J. (2008). An examination of the influence of surprise on judges and jurors’ outcome effects. Critical Perspectives on Accounting, 19, 1020–1033.
Clarkson, P., Emby, C., & Watt, V. (2002). Debiasing the outcome effect: The role of instructions in an audit litigation setting. Auditing: A Journal of Practice and Theory, 21(2), 7–20.
Connolly, T., & Bukszar, E. W. (1990). Hindsight bias: Self flattery or cognitive error? Journal of Behavioral Decision Making, 3(3), 205–211.
Cornell, R. M., Warne, R. C., & Eining, M. M. (2009). The use of remedial tactics in negligence litigation. Contemporary Accounting Research, 26(3), 767–787.
de Villiers, C. J. (2002). The effect of attribution on perceptions of managers’ performance. Meditari Accountancy Research, 10, 53–70.
Edwards, W. (1984). How to make good decisions. Acta Psychologica, 56, 5–27.
Emby, C., Gelardi, A. M. G., & Lowe, D. J. (2002). A research note on the influence of outcome knowledge on audit partners’ judgments. Behavioral Research in Accounting, 14, 87–103.
Fischhoff, B. (1975). Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288–299.
Fischhoff, B. (1977). Perceived informativeness of facts. Journal of Experimental Psychology: Human Perception and Performance, 3, 349–358.
Fisher, J., & Selling, T. I. (1993). The outcome effect in performance evaluation: Decision process observability and consensus. Behavioral Research in Accounting, 5, 58–77.
Frederickson, J. R., Peffer, S. A., & Pratt, J. (1999). Performance evaluation judgments: Effects of prior experience under different performance evaluation schemes and feedback frequencies. Journal of Accounting Research, 37(1), 151–165.
Ghosh, D. (2005). Alternative measures of managers’ performance, controllability, and the outcome effect. Behavioral Research in Accounting, 17, 55–70.
Ghosh, D., & Lusch, R. (2000). Outcome effect, controllability, and performance evaluation of managers: Some field evidence from multi-outlet businesses. Accounting, Organizations and Society, 25, 411–425.
Ghosh, D., & Ray, M. (2000). Evaluating managerial performance: Mitigating the “outcome effect”. Journal of Managerial Issues, 2, 247–260.
Glover, S. M. (1997). The influence of time pressure and accountability on auditors’ processing of nondiagnostic information. Journal of Accounting Research, 35(2), 213–226.
Grenier, J. H., Peecher, M. E., & Piercey, M. D. (2008). Judging auditor negligence: De-biasing interventions, outcome bias, and reverse outcome bias. Working paper, presented at the 2008 American Accounting Association Audit Midyear Meeting.
Grenier, J. H., Pomeroy, B., & Reffett, A. (2012). Speak up or shut up? The moderating role of credibility on auditor remedial defense tactics. Auditing: A Journal of Practice and Theory, 31(4), 65–83.
Hawkins, S., & Hastie, R. (1990). Hindsight: Biased judgment of past events after the outcomes are known. Psychological Bulletin, 107(3), 311–327.
Hell, W., Gigerenzer, G., Gauggel, S., Mall, M., & Mueller, M. (1988). Hindsight bias: An interaction of automatic and motivational factors? Memory & Cognition, 16, 533–538.
Helleloid, R. T. (1988). Hindsight judgments about taxpayers’ expectations. The Journal of the American Taxation Association, 10(1), 31–46.
Hershey, J. C., & Baron, J. (1992). Judgment by outcomes: When is it justified? Organizational Behavior and Human Decision Processes, 53, 89–93.
Hershey, J. C., & Baron, J. (1995). Judgment by outcomes: When is it warranted? (Rejoinder). Organizational Behavior and Human Decision Processes, 62(1), 127.
Janoff-Bulman, R., Timko, C., & Carli, L. L. (1985). Cognitive biases in blaming the victim. Journal of Experimental Social Psychology, 21, 161–177.
Jones, K. T., & Chen, C. C. (2005). The effect of audit outcomes on evaluators’ perceptions. Managerial Auditing Journal, 20(1), 5–18.
Kadous, K. K. (2000). The effects of audit quality and consequence severity on juror evaluations of auditor responsibility for plaintiff losses. The Accounting Review, 75(3), 327–341.
Kadous, K. K. (2001). Improving juror evaluation of auditors in negligence cases. Contemporary Accounting Research, 18(3), 425–444.
Kadous, K. K., & Magro, A. M. (2001). The effects of exposure to practice risk on tax professionals’ judgments and recommendations. Contemporary Accounting Research, 18(3), 451–475.
Kahneman, D., & Tversky, A. (1982). The simulation heuristic. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 201–208). Cambridge, MA: Cambridge University Press.
Kelley, H. H. (1967). Attribution theory in social psychology. In D. Levine (Ed.), Nebraska symposium on motivation (Vol. 15, pp. 192–238). Lincoln: University of Nebraska Press.
Kelley, H. H., & Michela, J. L. (1980). Attribution theory and research. Annual Review of Psychology, 31, 457–501.
Kennedy, J. (1995). Debiasing the curse of knowledge in audit judgment. The Accounting Review, 70(2), 249–273.
Koonce, L. (1992). Explanation and counterexplanation during audit analytical review. The Accounting Review, 67(1), 59–76.
Lambert, R. (2001). Contracting theory and accounting. Journal of Accounting and Economics, 32, 3–87.
Lipe, M. (1993). Analyzing the variance investigation decision: The effects of outcomes, mental accounting, and framing. The Accounting Review, 68(4), 748–764.
Lipe, M. (2008). Discussion of “Judging audit quality in light of adverse outcomes: Evidence of outcome bias and reverse outcome bias”. Contemporary Accounting Research, 25(1), 275–282.
Lipshitz, R. (1989). Either a medal or a corporal: The effect of success and failure on the evaluation of decision making and decision makers. Organizational Behavior and Human Decision Processes, 44, 380–395.
Lipshitz, R. (1995). Judgment by outcomes: Why is it interesting? A reply to Hershey and Baron, “Judgment by outcomes: When is it justified?” Organizational Behavior and Human Decision Processes, 62(1), 123–126.
Lipshitz, R., & Barak, D. (1995). Hindsight wisdom: Outcome knowledge and the evaluation of decisions. Acta Psychologica, 88(2), 105–125.
Long, J. H., Mertins, L., & Vansant, B. (2013). The effect of the provision of performance measure weights on evaluators’ subjective performance evaluation ratings. Working paper, Auburn University and Towson University.
Lowe, D. J., & Reckers, P. M. J. (1994). The effects of hindsight bias on jurors’ evaluations of auditor decisions. Decision Sciences, 25(3), 401–426.
Lowe, D. J., & Reckers, P. M. J. (1997). The influence of outcome effects, decision aid usage, and intolerance of ambiguity on evaluations of professional audit judgement. International Journal of Auditing, 1(1), 43–58.
Maines, L. A., & McDaniel, L. S. (2000). Effects of comprehensive income characteristics on nonprofessional investors’ judgments: The role of financial statement presentation format. The Accounting Review, 75(2), 179–207.
Mertins, L. (2010). The impact of the choice of performance evaluation system on the magnitude of the outcome effect. Working paper, presented at the 2010 Annual American Accounting Association Meeting.
Mertins, L., & Long, J. H. (2012). The influence of information presentation order and evaluation time horizon on the outcome effect. Advances in Accounting, 28(2), 243–253.
Moers, F. (2005). Discretion and bias in performance evaluation. Accounting, Organizations and Society, 30(1), 67–80.
Peecher, M. E., & Piercey, M. D. (2008). Judging audit quality in light of adverse outcomes: Evidence of outcome bias and reverse outcome bias. Contemporary Accounting Research, 25(1), 243–274.
Roch, S. G. (2005). An investigation of motivational factors influencing performance ratings: Rating audience and incentive. Journal of Managerial Psychology, 20(8), 695–711.
Rose, J. M. (2004). Performance evaluations based on financial information: How do managers use situational information? Managerial Finance, 30(6), 46–65.
Ross, M., & Sicoly, F. (1982). Egocentric biases in availability and attribution. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 179–189). Cambridge, MA: Cambridge University Press.
Schlenker, B. R., Britt, T. W., Pennington, J., Murphy, R., & Doherty, K. (1994). The triangle model of responsibility. Psychological Review, 101, 632–652.
Staw, B. M. (1976). Knee-deep in the Big Muddy: A study of escalating commitment to a chosen course of action. Organizational Behavior and Human Performance, 16, 27–44.
Tan, H. T., & Lipe, M. G. (1997). Outcome effects: The impact of decision process and outcome controllability. Journal of Behavioral Decision Making, 10, 315–325.
Wermert, J. G. (2002). Outcome information in the evaluation of auditor performance: The role of evidence recall, interpretation and weighting. Advances in Accounting Behavioral Research, 5, 77–113.
Zuckerman, M. (1979). Attribution of success and failure revisited, or: The motivational bias is alive and well in attribution theory. Journal of Personality, 47, 245–287.
