The Effects of Decision Consequences on Auditors' Reliance on Decision Aids in Audit Planning


ORGANIZATIONAL BEHAVIOR AND HUMAN DECISION PROCESSES

Vol. 71, No. 2, August, pp. 211–247, 1997 ARTICLE NO. OB972720

James R. Boatsman, Cindy Moeckel, and Buck K. W. Pei
School of Accountancy, College of Business, Arizona State University, Tempe

This study examines the effects of decision consequences on reliance on a mechanical decision aid. Experimental participants made planning choices on cases with available input from an actual management fraud decision aid. Participants examined each case, made an initial planning judgment, received the decision aid's prediction of the likelihood of management fraud, and made a final planning judgment. This sequence documents two types of nonreliance: (1) intentionally shifting the final planning judgment away from the aid's prediction even though this prediction supports the initial planning judgment and (2) ignoring the aid when its prediction does not support the initial planning judgment. Results are consistent with a pressure-arousal-performance explanation of the effects of decision consequences on intentional shifting, but do not support any particular explanation of the effects of decision consequences on ignoring. © 1997 Academic Press

This research was supported by KPMG Peat Marwick's "Research Opportunities in Auditing" program and by the Dean's Council-of-100 program from the College of Business at Arizona State University. We thank Tim Bell and John Willingham for their generous assistance throughout the study's experiments. The views expressed are ours and do not necessarily reflect those of KPMG Peat Marwick. We also thank the workshop participants at University of Alberta, University of Arizona, Arizona State University, Brock University, University of California—Irvine, Murdock University, University of Saskatchewan, University of Utah, AJS/MARS Audit Research Symposium, and International Conference on Contemporary Accounting Issues II. Comments from Hal Arkes, Barry Cushing, Elizabeth Davis, Joanna Ho, Peter Gilette, Larry Grasso, Robin Keller, Lisa Koonce, Steve Kaplan, Harry Magill, Ted Mock, Kurt Pany, Hal Reneau, Brian Shapiro, Joe Shultz, Ira Solomon, Jeff Wilson, Bill Waller, and Bill Wright were also very helpful. Juliette Webb's very able research assistance is gratefully acknowledged. Address correspondence and reprint requests to Buck K. W. Pei, School of Accountancy, College of Business, Arizona State University, Tempe, AZ 85287-3606. E-mail: [email protected].

0749-5978/97 $25.00 Copyright © 1997 by Academic Press. All rights of reproduction in any form reserved.


I. INTRODUCTION

Auditing firms provide assurance on the veracity of financial reports. The process involves evaluation and assimilation of complex cues to arrive at an overall judgment. Accordingly, auditing firms have increasingly developed decision aids to support audit decision making (Messier, 1995). Decision aids that emphasize mechanical combination of cues have demonstrated ability to reduce judgment bias (Ashton et al., 1988). Given the same information cues, judgment based on mechanical combination is almost certainly superior to unaided judgment (e.g., Dawes, Faust, & Meehl, 1989).

Despite the potential benefits, individual auditors may choose not to rely on decision aids.1 The existing literature offers only limited evidence about why that would be, since most prior research examines the development of the aids rather than their use by auditors (Gibbins & Swieringa, 1995; Messier, 1995). In addition, Kachelmeier and Messier (1990) suggest that auditors tend to circumvent an audit sampling decision aid by manipulating its parameters to fit the sample sizes their intuition supplies. In explaining this "working backward" phenomenon, Kachelmeier and Messier (1990, p. 213) argue that such a strategy allows auditors to create an appearance of reliance on the sampling decision aid, while eliminating the need to justify any overt disagreement with it. In a bond rating task, Ashton (1990) shows that auditors faced with performance pressures such as financial incentives, justification, or outcome feedback tend to reduce reliance on a decision aid. Ashton (1990) argues that auditors' departure from the decision aid reflects the effects of pressure. Taking the factors in turn, auditors may think strict reliance on the aid (1) reduces their chances to outperform others (financial incentives), (2) makes decisions difficult to justify absent evidence of further analysis of their own (justification), or (3) is ill-advised in the face of the aid's less than perfect performance (outcome feedback). Similar results are documented in Arkes, Dawes, and Christensen (1986) and in Bockenholt and Weber (1992).2 A common observation emerges: nonreliance is often intentional or strategic in nature and may be attributable to concerns about the consequences of reliance, rather than only to concerns about the technical features of the decision aid.

1 Unless stated otherwise, decision aids referred to in this study are restricted to those based on mechanical combination designed for highly judgmental tasks.
2 The Wall Street Journal (September 29, 1994) reports that mutual fund managers also show resistance to decision aids. It reports that "as the European bond markets were getting hammered in this year's first quarter, risk management systems at Citicorp's Citibank unit were flashing warnings of European bonds . . . The model said 'get out' but senior managers in Europe overrode the system and stuck to their bets. The results? Big losses."

Motivations of the Study

The aim of this study is to examine how voluntary reliance on a mechanical decision aid is affected by concerns about decision consequences. Firms often hesitate to mandate reliance on the judgment of an aid despite its superiority to unaided judgment.


Thus, auditors' reluctance to rely on such decision aids presents a dilemma in that individual auditors are the ultimate arbiters of whether the aids are actually used. Research such as the current study that identifies possible sources of auditors' reluctance to rely on decision aids is important in solving this dilemma.

While judgment/decision-making research has provided many significant insights on decision making, its findings have sometimes been criticized for relying on tasks of limited realism (Edwards, 1983; Einhorn, 1976; Hogarth, 1987; Kahneman, Slovic, & Tversky, 1982; Smith & Kida, 1991). Fischhoff (1982), for example, argues that many findings of judgment biases are attributable to faulty tasks (asking judges to perform abstract or unfamiliar tasks, often in a context that precludes the use of relevant prior knowledge). Accordingly, increasing task realism is crucial for the generality of these findings (Edwards, 1983; Kleinmuntz, 1985). To this end, it is important to note that prior experimental studies on decision aid reliance have exclusively relied on mock decision aids of unknown origin to judges, tasks of restricted contextual properties (e.g., containing three to five decision cues), and judges who have no prior experience in performing the tasks. In contrast, this study uses a real decision aid with known source credibility, a decision task rich in contextual properties (containing 24 client characteristics based on actual audit engagements), and professional auditors who perform the decision task in the normal course of employment.

An emphasis on task realism as described above has two implications for the judgment/decision-making literature in general and for the issue of nonreliance on decision aids in particular. First, the focus obviates some concern about the generality of prior results by enriching the content of the task and the knowledge of the judges. Both elements are secondary focuses of a typical judgment/decision-making study, which often focuses on the structure of the task (Solomon & Shields, 1995). Second, it provides a stronger test of the proposition that judges resist decision aids (since use of a real decision aid and a task containing a large number of decision cues seemingly favors reliance).

Finally, replacing mental computations with an automated formal model is often considered a promising alternative to enhance judgment quality. That decision aiding approach is based on the robust finding that cue combination based on actuarial judgment is superior to clinical judgment. Thus, research that provides insights about factors associated with nonreliance on decision aids in this specific setting is important.

Outline of the Experiment

Actual decision consequences include continued employment, raises, and promotions for accurate decisions and termination, loss of reputation, stifled career opportunities, and litigation costs for inaccurate decisions. To explore the effects of decision consequences on auditors' reliance, we used monetary payoffs that include payments for accuracy and penalties for inaccuracy as the tangible expression of these consequences. We varied the severity of monetary penalty for specific decision errors made when an aid was available.


As discussed in the following section, the effects of severity of penalties for errors may depend on the decision aid's predictions. Thus we also included the predictions of the decision aid as a part of the research design.

One hundred eighteen senior auditors from one international accounting firm participated in the research. They were asked to make audit planning judgments based on their assessment of the potential for management fraud. They interacted with a decision aid developed by their firm and modified by us as to user interface to allow collection of participant responses. They made planning decisions for five case profiles (including one warm-up case) derived from their firm's actual audit engagements. For each case, participants made an initial planning judgment (plan an extended audit in anticipation that fraud is likely to be present or plan a regular audit in anticipation that no fraud exists), received the decision aid's prediction (fraud or no fraud likely), and made a final judgment. This within-subject design enables us to examine two possible kinds of overt nonreliance (see Fig. 1 and the sketch at the end of this section): (1) if in initial disagreement with the aid's prediction, ignore the decision aid and maintain one's initial planning judgment, even though the aid's prediction indicates otherwise, and (2) if in initial agreement with the aid's prediction, intentionally shift away from the initial planning judgment, even though the aid's prediction supports that initial judgment.

The remaining sections are organized as follows. Section II reviews the prior literature and develops some expectations for the study's research questions. Section III describes the research method and procedures. The principal results are presented in Section IV. Section V discusses the implications of the results and directions for future research.
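The two kinds of overt nonreliance can be summarized as a simple classification of each case from the three recorded responses. The sketch below is our own Python illustration of the taxonomy in Fig. 1; it is not part of the experimental materials.

    def classify_reliance(initial_plan, aid_prediction, final_plan):
        """Classify one case into the reliance categories depicted in Fig. 1.

        initial_plan, final_plan: "extended" or "regular"
        aid_prediction: "fraud" (implying an extended audit) or "no fraud"
        """
        # The audit plan implied by the aid's prediction.
        aid_plan = "extended" if aid_prediction == "fraud" else "regular"

        if initial_plan == aid_plan:                  # initial agreement with the aid
            if final_plan == aid_plan:
                return "stay in agreement (reliance)"
            return "intentional shift (nonreliance)"  # moves away despite confirmation
        else:                                         # initial disagreement with the aid
            if final_plan == aid_plan:
                return "change to rely (reliance)"
            return "ignore the aid (nonreliance)"     # maintains the initial judgment

    # Example: the auditor initially plans a regular audit, the aid predicts fraud,
    # and the auditor keeps the regular plan; the aid has been ignored.
    print(classify_reliance("regular", "fraud", "regular"))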

II. LITERATURE REVIEW

Previous research has generally found that human decision makers hesitate to rely on the judgment output of decision models or mechanical decision rules, regardless of improvements in overall performance made possible by the aids (e.g., Goldberg, 1968; Arkes, Dawes, & Christensen, 1986; see B. Kleinmuntz, 1990, for a literature review). This finding casts doubt on whether and how auditors will rely on decision aids in situations that require substantial professional judgment. The existing literature offers only limited evidence on this issue (Ashton, 1990), despite the prevalence of decision aids in the auditing environment (Ashton & Willingham, 1989).

At least three factors may contribute to an auditor's reluctance to rely on a mechanical decision aid. The first is confidence in one's own ability to perform. Knowledge that no decision aid is perfect prompts reluctance to rely on the aid, particularly when the auditor is confident.


FIG. 1. Possible choices regarding reliance on the decision aid.

For example, Arkes, Dawes, and Christensen (1986, Experiment 2) show that reliance on a decision aid does not rest on whether it substantially outperforms unaided judgment. Rather, reliance depends on users' beliefs that they have extensive knowledge or expertise in a domain. Arkes et al. (1986) find that self-professed experts are less likely to use the output of a decision aid than are novices.

A second reason for nonreliance may involve the technical features of a decision aid.3


For example, a decision aid based on mechanical combination or a statistical formula is unable to recognize situation-specific patterns or trends and thus fails to capture "broken-leg cues" that may arise in any environment (Kleinmuntz, 1990; Dawes et al., 1989). Ashton (1990, Experiment 3) found that auditors rely less on a decision aid when it uses fewer cues for prediction, even though the predictive accuracy remains the same. Using a going concern judgment task, Davis (1994) found evidence consistent with this argument. She showed that in making a going concern judgment, auditors using a statistical model were less willing to rely on the aid than were auditors using an enhanced checklist, even though the two decision aids provided the same level of predictive accuracy. Unlike the statistical model, which used only five variables, the enhanced checklist used 18 variables to make a prediction of a company's going concern status. These findings thus suggest the importance of face validity for decision aid reliance.

In this research we focused on a third possible cause of nonreliance: decision consequences. In audit planning, there are two layers of potential consequences. Auditors are always concerned about the costs of an incorrect planning decision. When a decision aid is available, they are also concerned about the costs of an incorrect reliance decision (i.e., relying on an incorrect prediction by the aid or ignoring a correct prediction). Thus, unlike earlier studies, we do not focus on a decision aid's technical features (e.g., prediction accuracy, or the "comprehensiveness" of the decision variables captured), nor do we focus on the decision makers' confidence in their judgment competence.4 Instead, the focus here is on whether and how auditors' reliance on a decision aid is affected by the preceding two layers of consequences.

One key dimension of the audit plan is whether a set of client characteristics indicates a high likelihood of management fraud. If so, the auditor must plan to modify standard auditing procedures to take account of the increased risk. As with all significant areas of an audit, auditors are concerned with the negative consequences of incorrect planning judgments related to the potential for management fraud. Planning a regular audit that is not extensive enough to detect fraud when it exists can result in audit failure. Conversely, planning an extended audit when fraud does not exist can reduce audit efficiency and result in overauditing. Consequences like litigation and loss of competitive edge make both types of error costly to firms. Consequences to firms also flow through to individual auditors.

How will an increase in penalty for one type of error or another affect an auditor's reliance on a decision aid? Regarding this question, at least four published studies are informative. Ashton (1990) offers a "pressure-arousal-performance" (PAP) hypothesis, which suggests that reliance will decrease when performance pressure is present.5

3 The technical features of a decision aid include ease of use, face validity, and compatibility with users' "preferred" strategies. The focus of the present research is not on the possibility of increasing reliance by improving technical features of decision aids, nor is the focus on comparing auditors' reliance on decision aids with alternative features. Thus, we will not review this line of research extensively.
4 We did, however, collect information pertinent to these other possibilities in the experiment. Such information was used to test the robustness of our results.


The hypothesis has implications for reliance on a decision aid. Ashton (1990) argues that the presence of performance pressures can offset the potential benefits of a decision aid by changing the perceived nature of the task so that a decision maker believes aggressive decision strategies are necessary for successful performance. Pursuing such strategies can result in decreased reliance on a decision aid and attendant diminished performance. Consistent with this argument, he finds that auditors rely on a decision aid less when faced with pressures such as financial incentives, feedback, or justification. Arkes et al. (1986) also provide evidence that pressures cause decision makers to become less tolerant of errors in a probabilistic task and induce lessened reliance on a decision aid. In a related vein, Kleinmuntz (1990) notes that as the severity of negative consequences increases (e.g., loss of life), decision makers will be more reluctant to rely on a decision aid. They may view reliance on a decision aid as "dehumanizing," preferring to intervene in the decision process more as the decision becomes crucial. Kleinmuntz illustrates this point with anecdotal evidence from medical diagnoses: the designer of a particular medical decision aid cautions against reliance in high-risk situations despite the high reliability of the aid. Arkes et al. (1986, p. 94) offer a similar observation: "In predicting outcomes important to us, we want to be able to account for all of the variance. If we use a standardized procedure, we know we cannot predict everything." The severity of penalty also intensifies the need for justification (Tetlock, 1985). This need for justification in turn increases the reluctance to rely on a decision aid (Ashton, 1990).

In our setting, increasing the cost of either audit failure or overauditing should induce greater performance pressure. The PAP hypothesis would predict this increased pressure to result in increased nonreliance on the decision aid. On the other hand, there is the Bayesian perspective. One might view the decision aid as a source of information which Bayesian decision makers use to revise their initial judgment of the probability of fraud. Under this view, the effects of increased costs of audit failure and overauditing depend on whether the aid predicts fraud or no fraud. When the aid predicts fraud, an increase in the cost of audit failure increases the appeal of siding with the aid and planning an extended audit. An increase in the cost of overauditing decreases the appeal of siding with the aid and planning an extended audit. When the aid predicts no fraud, the reliance effects of increases in the costs of audit failure and overauditing are reversed.6

5 The PAP hypothesis predicts that the relationship between performance and pressure is depicted by an inverted U-shaped curve. Thus, pressure can result in improved performance, no change in performance, or diminished performance. We did not test for the presence of an inverted U relation between performance and pressure, but rather for the relationship between reliance and pressure.
6 A formal analysis of Bayesian reliance on a decision aid is available from the authors on request.


A Bayesian view accommodates ignoring the decision aid, but absent any change in decision consequences, it does not accommodate intentional shifting. Moreover, a Bayesian view does not accommodate intentional shifting even in combination with certain changes in consequences. For example, when the initial plan is an extended audit and the decision aid predicts fraud, an increase in the cost of audit failure would never produce an intentional shift to plan a regular audit. Similarly, when the initial plan is a regular audit and the decision aid predicts no fraud, an increase in the cost of an extended audit would never produce an intentional shift to planning an extended audit. (A worked numerical sketch of the Bayesian benchmark appears at the end of this section.)

Other work suggests additional implications of decision consequences for decision aid reliance. Hogarth et al. (1991) observe that when the task environment becomes more exacting for errors (as in increasing the costs of both audit failure and overauditing), people's search for better rules intensifies and they tend to abandon a would-be-successful rule prematurely. In the context of decision aid reliance, this suggests departure from a decision aid since one can neither change nor control the rules used by the aid. With unaided judgment, there is at least an appearance of control and thus hope of producing a better rule. Accordingly, simultaneous increases in the cost of both audit failure and overauditing might be expected to decrease reliance on a decision aid. Such behavior could also be interpreted as a response to increased pressure predicted by the PAP hypothesis. Under a Bayesian view, however, the response to a simultaneous increase in the penalty for both audit failure and overauditing depends on the decision maker's prior probabilities of fraud and no fraud as well as the aid's prediction.

Arkes et al. (1986, Experiment 1) showed that explicit warnings not to depart from a decision aid may be effective in decreasing nonreliance. (To the extent that such warnings imply an incremental cost of error, decreased nonreliance is consistent with a Bayesian view.) Therefore, an increase in the penalty for error contingent on incorrectly overriding a decision aid might be expected to decrease nonreliance. On the other hand, a contingent increase in penalty may simply be an additional source of pressure, arousing the perceived need for justification and (according to the PAP hypothesis) increasing nonreliance on the aid.

Finally, evidence from the judgment/decision-making literature suggests that merely directing a decision maker's focus to alternative aspects of decisions is sufficient to induce differences in decisions (Kahneman, Slovic, & Tversky, 1982). This suggests that directing an auditor's focus to the costs of an extended audit can affect reliance, even if the total costs associated with any final judgment error (e.g., audit failure or overauditing) are unchanged in the process. Such attention-directing effects are alien to a Bayesian view.

The above discussion suggests interest in the effects of the following five manipulations of decision consequences on auditors' propensity to rely on an audit planning decision aid:

1. increasing the cost of audit failure relative to the cost of overauditing,
2. increasing the cost of overauditing relative to the cost of audit failure,


3. making the penalty structure more exacting for errors by increasing the penalty for either error,
4. increasing the penalty for error contingent on incorrectly overriding the decision aid, and
5. increasing the focus on the cost of an extended audit.

Our experiment, described in the following section, involved each of the above five manipulations of decision consequences relative to a control group where the penalty for overauditing equaled that for audit failure.
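To make the Bayesian benchmark referred to above concrete, the following minimal sketch (our own illustration; the authors' formal analysis is available from them on request) treats the aid as a signal with the 81% classification accuracy reported in Section III, updates an assumed prior probability of fraud, and selects the plan with the lower expected penalty. The prior and the penalty amounts are hypothetical.

    # A minimal Bayesian benchmark for reliance on the fraud-prediction aid.
    # Only the 81% accuracy figure comes from the text; the prior and the
    # penalty amounts below are assumed for illustration.

    ACCURACY = 0.81   # P(aid says fraud | fraud) = P(aid says no fraud | no fraud)

    def posterior_fraud(prior, aid_says_fraud):
        """Posterior probability of fraud after observing the aid's prediction."""
        like_fraud = ACCURACY if aid_says_fraud else 1 - ACCURACY
        like_clean = (1 - ACCURACY) if aid_says_fraud else ACCURACY
        evidence = like_fraud * prior + like_clean * (1 - prior)
        return like_fraud * prior / evidence

    def best_plan(prior, aid_says_fraud, cost_audit_failure, cost_overauditing):
        """Choose the plan with the lower expected penalty, given the aid's signal."""
        p = posterior_fraud(prior, aid_says_fraud)
        exp_cost_regular = p * cost_audit_failure         # regular audit misses a fraud
        exp_cost_extended = (1 - p) * cost_overauditing    # extended audit when no fraud exists
        plan = "extended" if exp_cost_extended < exp_cost_regular else "regular"
        return plan, p

    # Example: a low prior (5%), the aid predicts fraud, and audit failure is
    # penalized twice as heavily as overauditing (hypothetical amounts).
    plan, p = best_plan(prior=0.05, aid_says_fraud=True,
                        cost_audit_failure=20, cost_overauditing=10)
    print(plan, round(p, 3))   # posterior is about 0.183; the regular plan still has
                               # the lower expected cost, i.e., the aid is "ignored"

With these assumed numbers, a Bayesian auditor with a very low prior ignores a fraud prediction unless the cost of audit failure is much larger than the cost of overauditing, which is the pattern discussed in Section IV.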

III. RESEARCH METHOD AND PROCEDURES

Decision Setting and Experimental Procedures

One hundred fifty diskettes were distributed to potential participants (audit seniors) in an international accounting firm by the firm's executive office. One hundred eighteen usable responses were returned. The decision task of the experiment involved the assessment of management fraud potential and selection of either a regular or an extended audit plan. Participants were instructed that "the purpose of the study is to improve the usefulness of an actual decision aid developed by [their] firm's national executive office, and to examine whether and how certain characteristics of the audit environment and of a decision aid affect its use by auditors."7

The firm also provided an actual decision aid. The participants were further informed about the development, features, and performance of the decision aid (see the list below for details). The decision aid uses 24 red-flag questions to estimate fraud potential. These questions were distilled statistically from a larger set of red-flag indicators documented in prior research and professional literature (Pincus, 1989; Loebbecke, Eining, & Willingham, 1989; Bell, Szykowny, & Willingham, 1993). Participants were informed that the aid outperformed even firm experts, as will many decision aids across settings, because of a superior ability to weight and combine numerous cues. Based on validation with over 380 audit engagements of the firm, the aid has a classification accuracy of 81% for both fraud and nonfraud audits (cases where fraud was or was not eventually discovered on the actual engagement).8

7 However, given that "The aim of the study is to examine how voluntary reliance on a mechanical decision aid is affected by concerns of decision consequences," we did not hold participants accountable individually but rather informed them that the results would be presented to their employer in the aggregate. Specifically, participants were instructed via an information screen that "Once we have mailed the [payment] checks, we will destroy the address list, and any link between you and your work. Nobody at [your firm] will ever see your individual responses, and all results will be analyzed and presented in the aggregate."
8 Based on the debriefing questions at the end of the program, participants report that they would in fact tolerate a lower accuracy rate. The mean response to the question—"What is the minimum level of accuracy you would tolerate for classification of fraud [no-fraud] cases?"—was 76.83% (71.96%).


The aid is very easy to use. An auditor only needs to respond "yes" or "no" to the 24 red-flag questions and the aid will make an assessment of the likelihood of management fraud. If the auditor is not clear about the rationale for any particular red-flag question, the aid will provide explanations upon demand. We wrote a Hypercard computer program to capture the actual decision aid's user interface. This allowed us to simulate closely the setting in which the aid would be used in practice. However, we made two modifications. First, the responses to the red-flag questions were precoded for the participants for each case. Essentially all the red-flag questions prompt simple "yes" or "no" responses. The questions would normally be addressed from the outset of each audit and the answers would not normally change much from one year to the next. Two such questions are "Have there been instances of material management fraud in prior years?" and "Is the company publicly traded?" Accordingly, providing precoded profiles does not create a dramatic departure from settings in which the aid would be used in practice. Second, the participants were asked to provide an initial explicit unaided selection of a regular or an extended audit (tied to their assessment of fraud potential) and a final judgment after seeing the decision aid's assessment of the likelihood of fraud.9 (A sample of the response screens presented to the participants via the program is presented in Appendix I.)

Inclusion of an explicit unaided initial judgment permits examination of the participants' reactions to the decision aid when it predicts fraud versus no fraud, and when their initial assessment and the aid's agree or disagree. Figure 1 depicts possible reactions. As shown in Fig. 1, nonreliance can be evidenced by one of the following two actions. When an auditor initially disagrees with the aid, that auditor can choose to ignore the aid; when an auditor initially agrees with the aid, that auditor can intentionally shift to the opposite audit plan. Both ignoring and intentional shifting are clear behavioral evidence of nonreliance, even though they occur under opposite conditions of initial agreement. Note that either ignoring or intentional shifting can occur regardless of whether the decision aid predicts fraud or no fraud. (One might argue that intentional shifting would be unlikely to occur since the participants received no information other than confirmation or disconfirmation from the aid. However, intentional shifting may reflect an active attempt to surpass the performance benchmark of the aid (Ashton, 1990; Arkes, 1986).)

The program automatically tracked and recorded the participants' responses, including the amount of time spent on each case. The program also administered the entire experiment and enforced all its applicable rules. For example, a participant had to access the responses to all 24 red-flag questions (and thus review all components of the profile) before recording an initial selection. In order to isolate the influence of the aid on each participant's unaided judgment about each case, the decision aid's assessment was revealed only after the participant made an irrevocable initial choice.

9 This requirement is not unrealistic since auditors are always encouraged to form their own "independent professional judgment" regardless of the accessibility of a decision aid. Asking them to submit their own judgment prior to seeing the decision aid's prediction also helps avoid anchoring bias (see Kinney & Uecker, 1982; Biggs & Wild, 1985).


The program also informed the participants of the following information. (See Appendix II for a sample of the screens presented by the program.)

1. The payoff structure for the decisions. The program displayed the payoff table for the participant's group, as well as related explanations for each decision consequence. To keep the payoff structure salient, a (show payoff) pop-up screen was available for every audit planning decision. The program also contained five self-checking questions (with feedback) to ensure the subject's understanding of the payoff scheme prior to responding to the four experimental case profiles.

2. The characteristics of the decision aid. The following information was also included in the computer screens presented to the auditor participants:
• Prediction accuracy of the aid. Accuracy in classifying both fraud and no-fraud clients accompanied each prediction by the aid. The aid has a classification accuracy of 81% for both fraud and no-fraud cases.
• Twenty-four red-flag questions to estimate fraud potential. The aid uses questions which were distilled statistically from a larger set of red-flag indicators documented in prior research.
• The development of the aid. The aid was developed from over 380 audit engagements of the firm.
• The rationale of the decision aid in making the assessment of management fraud. The assessment is based on analysis of patterns of conditions, motivations, and attitudes as captured by the 24 red-flag questions. Thus, the aid is more than a simple equally weighted combination of cues (see the sketch following this list).
• Benefits of the decision aid. Three generic benefits were mentioned to encourage reliance. Unaided human judgment is subject to biases and imperfections. The decision aid assists attention directing to important client considerations. It also assists information integration through consistent cue combination.

3. Outline of the decision task. Participants were told that they would analyze four case profiles plus a warm-up. Each case profile contains the client characteristics of an actual audit engagement. For each case profile, participants were asked to build the profile by examining each of the 24 responses to red-flag questions, make an initial assessment of fraud potential with a related audit planning selection, see the prediction of the decision aid, and make a final assessment of fraud potential. They were told that the accuracy of their final assessment and audit plan would be used to calculate their compensation according to their payoff structure.
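The paper does not disclose the aid's internal weights. The sketch below only illustrates, with invented weights and an invented threshold, what a mechanical (non-equally-weighted) combination of 24 yes/no red-flag responses might look like; it is not the firm's actual model.

    import math

    # Purely illustrative: a mechanical aid mapping 24 yes/no red-flag answers to a
    # fraud-likelihood score through fixed (here invented) weights, rather than a count.
    WEIGHTS = [0.9, 0.4, 1.3, 0.2] + [0.1] * 20   # hypothetical weight per red flag
    INTERCEPT = -3.0                               # hypothetical

    def fraud_likelihood(answers):
        """answers: list of 24 booleans (True = 'yes' to the red-flag question)."""
        score = INTERCEPT + sum(w for w, a in zip(WEIGHTS, answers) if a)
        return 1.0 / (1.0 + math.exp(-score))      # logistic transform to a probability

    def aid_prediction(answers, threshold=0.5):
        return "fraud likely" if fraud_likelihood(answers) >= threshold else "no fraud likely"

    # Example precoded profile: "yes" to the first three red flags, "no" to the rest.
    profile = [True, True, True] + [False] * 21
    print(aid_prediction(profile), round(fraud_likelihood(profile), 2))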

Research Design

To examine the effects of decision consequences on auditors' reliance on the decision aid, we manipulated the severity of monetary penalty for various errors.


The experiment consisted of a baseline control group and five between-subjects treatment groups. The payoff structure for the control group (group 1) involved identical penalties for audit failure and overauditing, and appears in Fig. 2A. The payoff structures for the treatment groups (groups 2 to 6) appear in Fig. 2B.

The manipulations in groups 2 and 3 were intended to examine the effects on reliance of an increase in the penalty for committing a particular type of error. Group 2 received an increase in the cost of an audit failure. This would occur if an auditor assessed too low a likelihood of fraud, planned a regular audit, and thus failed to detect a fraud that did exist. Group 3 received an increase in the cost of overauditing. This would occur if an auditor assessed too high a likelihood of fraud, planned an extended audit, and no fraud was ever discovered. Group 4 received an increase in the penalty for either planning error, thus making the payoff structure more exacting for errors. Group 5 received a contingent penalty for incorrectly overriding the decision aid. That is, the extra penalty was charged only when auditors who erred had also chosen to override the aid. Finally, the manipulation in group 6 increased focus on the cost of an extended audit, thus adding pressure for audit efficiency (even though, compared to the control, it did not change the resulting total penalty for overauditing in the payoff structure).

Note in Fig. 2 the equal costs of overauditing and audit failure for the control group. Also note that the manipulations in groups 2 and 3 did not introduce a large difference between the costs of overauditing and audit failure. One might argue that such a payoff structure is unrealistic because the personal cost of audit failure is always vastly larger than the cost of overauditing. However, audit seniors cannot realistically expect tenure with their current employer to extend long into the future. Ultimate detection of a current fraud often takes several years and therefore may not occur until after an auditor has left his or her current firm. Litigation takes even longer. Therefore, the expected cost of audit failure may be perceived as relatively small. On the other hand, the cost of overauditing is incurred almost immediately and may well be perceived as quite large. Accordingly, it is unclear whether audit seniors indeed perceive that their personal costs in the event of audit failure vastly exceed personal costs of overauditing. Thus, there is no reason to dismiss our manipulation as too weak or unrealistic. In any event, interpretability of our results only requires that our manipulations are salient to participants. Saliency of experimental payoffs, in turn, only requires that subjects perceive the relation between decisions and payoffs and that any induced payoffs dominate the subjective costs of making decisions (Davis & Holt, 1993, p. 24).

The predictions of the decision aid were controlled within subjects. Of the four case profiles presented to the auditors, the decision aid predicted fraud in two case profiles and no fraud in two others. The sequence of case presentation was randomized by the computer program. Also, no outcome feedback was provided during the experiment, so the actual status of fraud discovery of the cases was not disclosed to the auditors.


FIG. 2. Experimental payoff structure.
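Figure 2 itself is not reproduced in this text. The following sketch is a hypothetical rendering of how the six groups' payoff rules differ in structure; the reward and penalty amounts and the helper name are invented for illustration and are not the experiment's actual figures.

    # Hypothetical illustration of the payoff rules (actual amounts appear only in Fig. 2).
    # Only the structure of the six manipulations follows the text.
    def payoff(group, planned_extended, fraud_present, overrode_aid,
               reward=10, base_penalty=5, extra_penalty=5):
        """Illustrative payoff, in units of experimental currency, for one case."""
        if planned_extended == fraud_present:          # correct planning judgment
            return reward
        audit_failure = fraud_present and not planned_extended   # regular audit missed a fraud
        overauditing = not audit_failure                          # extended audit, no fraud found
        penalty = base_penalty                         # group 1 (control): equal error penalties
        if group == 2 and audit_failure:
            penalty += extra_penalty                   # higher cost of audit failure
        if group == 3 and overauditing:
            penalty += extra_penalty                   # higher cost of overauditing
        if group == 4:
            penalty += extra_penalty                   # both error penalties raised
        if group == 5 and overrode_aid:
            penalty += extra_penalty                   # contingent penalty for incorrectly overriding
        # Group 6 only changes how the extended-audit cost is framed; totals are unchanged.
        return -penalty

    # Example: a group 5 auditor overrides the aid, plans a regular audit, and fraud exists.
    print(payoff(group=5, planned_extended=False, fraud_present=True, overrode_aid=True))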



Recall, however, that each time the decision aid's prediction was presented, auditors were reminded of its classification accuracy.10

Prior to presenting the results, it is worth noting how the present study extends the prior work of Arkes et al. (1986) and Ashton (1990). We improved on the face validity of the decision aid, informed our auditor participants of the aid's development process, and reminded them of the generic benefits of using decision aids for decision making. In contrast, the decision aids used in the prior studies were of unknown origin and participants were not informed how the aids had been developed and validated. Moreover, our decision task should be a familiar one for our participants since it is a required audit procedure. These extensions were mentioned in the prior studies as areas needing further investigation. Next, we directly manipulated decision consequences by varying the severity of penalties for errors and the decision aid's predictions. In contrast, the prior manipulations might be viewed as manipulations of the sources of decision consequences (via incentives, justification, or outcome feedback). The present study thus extends its predecessors by examining whether decision consequences are a common cause of auditors' reluctance to rely on decision aids, exploring the added effects of decision consequences rather than the conditional effects of their presence or absence. Finally, we used a two-stage judgment elicitation; this permits an explicit examination of two possible overt types of nonreliance. This design can thus reveal evidence of possible strategic actions that could only be inferred from prior results.

IV. RESULTS

General Results

The total payout distributed among the 118 participants was $7080; participants were told that the average payout would be $60.11 The highest actual payout earned was $107 and the lowest was $0. (We adjusted the minimum to $10 for the final payout.) We measured time spent on each screen and can thus separate time spent reading instructions, reviewing profiles, and recording judgments. Data collected by the Hypercard program indicate that the participants spent significantly more time with cases in which the decision aid predicted fraud and cases in which the aid's prediction was inconsistent with the initial planning judgment.

10 We could not practically include enough cases to mirror the actual base rate of fraud. We did instruct participants to evaluate each case separately and not to expect any particular number of fraud and no-fraud cases. As shown later, the disparity between the actual base rate of fraud and the 50% rate encountered in the experiment cannot explain away the treatment effects we observed. Also, there was no significant interaction between the aid's predictions and the treatments.
11 We told the participants that their compensation would be calculated based on the amount of "Tijus" earned. The instruction read: "The final worth of each Tijus will depend on how many are earned by all participants. On average, each participant will earn $60; however, those whose performance is better than average will earn more, while some will earn less." See Appendix II for a full description of the compensation scheme.


On average, participants spent 4.3 min formulating responses to cases for which the decision aid predicted fraud. In contrast, participants spent 2.9 min formulating responses to cases for which the decision aid predicted no fraud. On average, participants spent 4.2 min formulating responses to cases in which the aid's prediction disagreed with the initial planning judgment and 2.9 min formulating responses to cases in which the aid's prediction agreed with the initial planning judgment.12 Such time disparities evidence participant sensitivity to the signal from the decision aid. Certain critical differences between groups on this dimension are highlighted later.

Prior participant exposure to management fraud was surprisingly high. Thirty-four out of 118 participants reported direct prior experience with engagements involving management fraud. Participants had a low rate of correct initial unaided planning judgments. Correct initial planning judgments (defined as those in which an extended audit was planned in cases involving actual management fraud, and those in which a regular audit was planned in cases involving no actual management fraud) were made only 58% of the time.

Table 1 reports the numbers and percentages of nonreliance decisions in the control group (group 1) and the treatment groups (groups 2 through 6), by whether the initial planning judgment is in agreement with the decision aid's predictions. All entries in Table 1, panel A involve an initial planning judgment in agreement with the aid's prediction—a scenario providing an opportunity for intentional shifting. In contrast, all entries in panel B involve initial disagreement with the aid's prediction—a scenario providing an opportunity to ignore the aid. The bottom row reports percentages of both types of nonreliance for each group and in total.

Nonreliance was extensive. Thirty-nine percent of the time, participants either shifted their initial planning judgments away from the decision aid's predictions or ignored the aid's predictions. Such a high level of nonreliance is consistent with observations by other researchers. Both intentional shifting and ignoring occurred more frequently when the aid predicted fraud than when it predicted no fraud. Intentional shifting occurred 38% of the time when the aid predicted fraud and 10% of the time when the aid predicted no fraud (Table 1, panel A). Ignoring the aid occurred 77% of the time when the aid predicted fraud and 51% of the time when the aid predicted no fraud (panel B). Both disparities in the incidence of nonreliance are significant at the 5% level, as indicated by the triple asterisk following the 38 and 77% entries in Table 1.13

12 Both differences are significant at 2%.
13 Unless stated otherwise, all reported tests of significance are binomial tests of the null hypothesis that a given incidence of nonreliance equals the incidence of nonreliance in an appropriate reference group. Since there are many potential reference groups with which a given incidence of nonreliance might be compared, only significant results discussed in the text are indicated by asterisks in Table 1.


TABLE 1
Numbers and Percentages of Reliance and Nonreliance Decisions in Control Group and Treatment Groups, by Whether the Decision Aid Predicts Fraud or No Fraud and Whether the Initial Planning Judgment Is in Agreement with the Decision Aid's Prediction (472 Observations)

                                 Group 1  Group 2  Group 3  Group 4  Group 5  Group 6    Total

Panel A: Initial agreement with the aid's prediction
Aid predicts fraud
  Intentionally shift                  2        9       14        3       10       12       50
  Stay in agreement                   11       12       14       16       15       12       80
  Percent shifting                   15%    43%**    50%**      16%    40%**    50%**   38%***
Aid predicts no fraud
  Intentionally shift                  1        3        4        1        3        4       16
  Stay in agreement                   25       21       28       25       23       26      148
  Percent shifting                    4%    13%**    13%**       4%     12%*    13%**      10%
Overall percent shifting              8%    27%**    30%**       9%    25%**    30%**      22%

Panel B: Initial disagreement with the aid's prediction
Aid predicts fraud
  Ignore                              20       13       16        9        7       17       82
  Change to rely                       1        2        4        6        6        5       24
  Percent ignoring                   95%      87%      80%    60%**    54%**    77%**   77%***
Aid predicts no fraud
  Ignore                               5        7        8        4        6        7       37
  Change to rely                       3        5        8        4        6        9       35
  Percent ignoring                   63%      58%      50%      50%      50%      44%      51%
Overall percent ignoring             86%      74%      67%    57%**    52%**    63%**      67%
Overall percent nonreliance          41%      44%      44%      25%      34%      43%      39%

Note. Group 1 is the control group with equal costs of audit failure and overauditing. Group 2 treatment is an increase in the cost of audit failure (relative to the control group). Group 3 treatment is an increase in the cost of overauditing (relative to the control group). Group 4 treatment is an increase in both the cost of audit failure and the cost of overauditing (relative to the control group). Group 5 treatment is a contingent penalty for incorrectly overriding the decision aid. Group 6 treatment is an increase in the focus on the cost of an extended audit.
*Incidence of nonreliance in group is significantly different from incidence of nonreliance in the counterpart control group (10% level of significance).
**Incidence of nonreliance in group is significantly different from incidence of nonreliance in the counterpart control group (5% level of significance).
***Incidence of total nonreliance when the aid predicts fraud is significantly different from incidence of total nonreliance when the aid predicts no fraud (5% level of significance).
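Footnote 13 describes the significance tests flagged in Table 1 as binomial tests against the incidence of nonreliance in a reference group. A minimal sketch of one such test, taking the control group's rate as the null proportion (our reading of the procedure, not the authors' code) and using the panel A "aid predicts fraud" cells:

    from scipy.stats import binomtest

    # Binomial test of whether group 2's incidence of intentional shifting equals the
    # incidence in the control group (Table 1, panel A, aid predicts fraud).
    control_rate = 2 / (2 + 11)        # group 1: 2 intentional shifts out of 13 opportunities
    result = binomtest(k=9, n=9 + 12, p=control_rate, alternative="two-sided")
    print(round(result.pvalue, 4))     # group 2: 9 shifts out of 21 opportunities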


Intentional Shifting Away from the Decision Aid

As reported in Table 1, panel A, intentional shifting occurred in 22% of the total possible instances. Recall that absent a change in costs of either audit failure or overauditing, a Bayesian auditor never alters the initial planning judgment upon receiving a signal confirming the accuracy of that initial judgment. Thus observing intentional shifting more than one-fifth of the time is inconsistent with a Bayesian view, in which a decision aid provides signals with which the auditor revises an initial planning judgment.

The incidence of intentional shifting was comparatively low among the control group (group 1) participants, who faced equal costs of audit failure and overauditing. With a single exception, manipulations of decision consequences significantly increased the incidence of intentional shifting (as indicated by the single and double asterisks in Table 1). That single exception involved equal increases in the costs of audit failure and overauditing (making the penalty structure more exacting for errors) for the group 4 participants. Increasing the cost of audit failure (group 2), increasing the cost of overauditing (group 3), imposing a contingent penalty for incorrectly overriding the decision aid (group 5), and increasing the focus on the cost of overauditing (group 6) all resulted in increased intentional shifting. Note that this general pattern of responses to manipulations of decision consequences is present regardless of whether the decision aid predicts fraud or no fraud. To the extent that our manipulations of decision consequences aroused a sense of pressure among the participants, the observed increases in intentional shifting in all treatment groups (save group 4) relative to the control group can be construed as consistent with the PAP hypothesis.

Ignoring the Decision Aid

The results concerning ignoring are less straightforward. As reported in Table 1, panel B, participants ignored the decision aid's predictions in 67% of the total instances in which they had an opportunity to do so. Also, ignoring the aid occurred more frequently when the aid predicted fraud than when it predicted no fraud. This result is consistent with the behavior of a Bayesian auditor with a very low prior probability of management fraud when the cost of audit failure is not vastly in excess of the cost of overauditing. However, to the extent that a fraud prediction increases pressure, the result is also consistent with the PAP hypothesis.

Other results reported in Table 1, however, are not particularly supportive of either the Bayesian view or the PAP hypothesis. Consider first the results from group 6. Focus on the cost of an extended audit was increased for participants in this group—without changing the total monetary payoff associated with planning an extended audit when fraud is discovered (see Fig. 2). Alterations in the way payoffs are focused would not affect the behavior of a Bayesian auditor. Hence the significant decreases in the incidence of nonreliance among the group 6 participants are inconsistent with a Bayesian view.

Other results in Table 1, panel B neither support nor contradict a Bayesian view. Increasing the cost of audit failure reduced the incidence of ignoring the aid when it predicted fraud (the incidence of ignoring declined from 95% in the control group to 87% in group 2). However, the decline is not significant.


Absence of a significant effect could indicate non-Bayesian behavior; but it could also just indicate that the size of the increased cost of audit failure was not sufficient to induce a change in the initial planning judgment. Similarly, increasing the cost of overauditing reduced the incidence of ignoring the aid when it predicted no fraud (the incidence of ignoring declined from 63% in the control group to 50% in group 3). But again the decline is not significant—possibly indicating only that the size of the increased cost of overauditing was not sufficient to induce a change in the initial planning judgment.14

Although the results in Table 1, panel B are not particularly diagnostic with respect to a Bayesian view, they are inconsistent with the PAP hypothesis. According to the PAP hypothesis, increases in pressure increase the incidence of nonreliance. However, the overall incidence of ignoring the decision aid was highest in the control group (86%). When the incidence of ignoring the aid within a treatment group was significantly different from its counterpart in the control group (as in groups 4, 5, and 6), the difference was decreased nonreliance. Moreover, it is difficult to dismiss the insignificance of the treatments administered to groups 2 and 3 on grounds that the treatments were not strong enough to induce pressure. After all, these same treatments were strong enough to induce significant increases in intentional shifting.

Supplemental Analyses of Results

We evaluated the sensitivity of our results relating to each type of nonreliance by examining five participant-specific variables (see Appendix III for the questions used to elicit the responses to those variables). The five variables were the (1) level of prior exposure to management fraud, (2) perceptions of the aid's actual accuracy, (3) perceptions of the aid's ability to capture "broken-leg cues," (4) level of technical understanding of how the aid aggregates cues, and (5) level of confidence in the initial planning judgment. In particular, we estimated two logistic equations of the following form (one using all observations involving an opportunity for intentional shifting and one using all observations involving an opportunity for ignoring the aid):

CHOICE = b_1 + \sum_{i=2}^{6} b_i D_i + \sum_{j=7}^{10} b_j X_j + \mu,

where CHOICE is an indicator variable coded one if the ultimate planning choice is consistent with the decision aid's prediction and zero otherwise, b1 is the mean of CHOICE in the control group (group 1), Di is an indicator variable coded one if CHOICE is an observation from group i and zero otherwise, Xj is a control variable (e.g., level of prior exposure to management fraud), bi and bj are parameters, and μ is a residual.

14 An experiment capable of discriminating between non-Bayesian and Bayesian responses to increases in the costs of audit failure and overauditing would require a subject- and case-specific cost structure.


Since b1 is forced to equal the mean of CHOICE in the control group, a significance test of bi is a test of whether the level of reliance in group i differs from the level in the control group (after controlling for the effects of Xj). Results of significance tests were not qualitatively different from the results reported in Table 1. (An illustrative estimation sketch appears at the end of this subsection.)

In our experiment, the average per-participant payout was fixed at $60. For this reason, the dollar value per unit of our experimental currency depended on all the participants' actual choices. The uncertain per-unit value of the experimental currency may have induced a tournament mentality among some participants. A possible manifestation of such a mentality might be intentional shifting—attempting to win larger dollar amounts by deliberately making choices believed to be different from the likely choices of competing participants. To evaluate the prospect that a tournament mentality explains our results, we compared the incidence of nonreliance among our control group participants with the incidence of nonreliance among participants in another study. Participants in the other study were audit seniors recruited from the same firm as participants in the study at hand, and they evaluated the same planning cases. But unlike participants in the study at hand, the other study's participants were paid a certain piece rate of $5 per correct planning judgment. If the incidence of nonreliance reported in Table 1 were due to a tournament mentality, then nonreliance among our control group participants should exceed nonreliance among participants in our other study, since the only difference between these two groups of participants was the payment scheme. No such excess nonreliance was observed. In fact, using this comparable group as a control, the resulting comparisons with each of the treatment groups (for both types of nonreliance) revealed substantively identical results to those presented earlier in Table 1. These results indicate that nonreliance in the study at hand was not induced by the competitive payment scheme.

Finally, recall that each of our participants evaluated four planning cases presented in random order. The two types of nonreliance did not occur uniformly over the four-case sequence. Let the term early denote one of the first two case profiles evaluated and the term late denote one of the last two case profiles evaluated. Ignoring tended to occur early regardless of whether the aid predicted fraud or no fraud. However, the tendency was not a particularly strong one. On the other hand, the timing of intentional shifting was strongly related to the aid's predictions. When the decision aid predicted fraud, intentional shifting always occurred late. When the decision aid predicted no fraud, intentional shifting always occurred early. Before concluding that this rather curious pattern is a function of order, it is important to recall that intentional shifting did not occur uniformly across the control and treatment groups. Intentional shifting occurred more often in the treatment groups, where decision consequences were manipulated. If such nonreliance were merely an order effect, one would expect it to occur with the same frequency in the control and treatment groups. The fact that it did not argues against a simple order effect.
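A minimal sketch of how the logistic specification above could be estimated; the statsmodels usage, variable names, and synthetic data are ours and stand in for the study's actual data set.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic stand-in data (ours, for illustration only): one row per planning choice.
    rng = np.random.default_rng(0)
    n = 472
    df = pd.DataFrame({
        "choice": rng.integers(0, 2, n),                # 1 if the final plan agrees with the aid
        "group": rng.integers(1, 7, n),                 # experimental group 1..6
        "prior_fraud_exposure": rng.integers(0, 2, n),  # one example control variable X_j
    })

    # Indicator variables D_2..D_6 for the treatment groups; group 1 (control) is the baseline.
    dummies = pd.get_dummies(df["group"], prefix="D").drop(columns="D_1")
    X = sm.add_constant(pd.concat([dummies, df[["prior_fraud_exposure"]]], axis=1)).astype(float)

    model = sm.Logit(df["choice"], X).fit(disp=0)
    print(model.summary())   # a significant D_i coefficient indicates that reliance in group i
                             # differs from reliance in the control group, given the controls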


The Effects of Judgment Confidence

Prior evidence suggests that a decision maker's confidence in his or her judgment may also affect reliance on a decision aid. Arkes et al. (1986) show that reliance on a decision aid does not rest on whether it substantially outperforms unaided judgment. Rather, reliance is conditioned by users' beliefs about the sufficiency of their knowledge or expertise in a domain. Heath and Tversky (1991) also documented a "feeling of competence" effect on people's choice behavior. They report that people prefer betting on their own judgment over an equivalent chance event only when they consider themselves knowledgeable. Thus, to the extent that an auditor is confident in his or her initial judgment, it may be difficult to convince him or her that s/he cannot outperform the aid. This prediction is supported by our results. When judgment confidence was high, auditors were significantly more likely to ignore the aid (α < .01) and significantly less likely to intentionally shift away from the aid (α < .05).

V. DISCUSSION

The major contention of this study is that an auditor's reliance on a decision aid is a function of the auditor's concern over the severity of anticipated penalties for incorrect decisions. We explicitly examined two types of overt nonreliance: ignoring and intentionally shifting away from the aid. The principal results of the treatment effects relative to the control group are summarized in Table 2. Note from Table 2 that in the control group the occurrence of intentional shifting is rare (3 out of 39 decisions, or 8%), and the occurrence of ignoring the aid is pervasive (25 out of 29 decisions, or 86%). As summarized in Table 2, all treatments but one significantly increased the occurrence of intentional shifting. In contrast, three treatments (making the penalty structure exacting for errors, imposing a contingent penalty for incorrect overriding, and increasing the focus on extended audit costs) significantly decreased the occurrence of ignoring.

Increase penalty for audit failure (Group 2) Incidence of intentional shifting Increased Incidence of ignoring the aid No effect

Increase penalty for overauditing (Group 3)

Increase penalty for both audit failure and overauditing (Group 4)

Impose a contingent penalty for incorrect overriding (Group 5)

Increase the focus on extended audit costs (Group 6)

Increased

No effect

Increased

Increased

No effect

Decreased

Decreased

Decreased


Also note that when the aid predicted fraud as opposed to no fraud, the occurrence of ignoring in treatment groups relative to the control decreased and that of intentional shifting increased.

Ignoring the Aid

With respect to ignoring, there are two important points. First, ignoring was pervasive; it occurred in 67% of possible cases. This result is not out of line with prior research. For example, Ashton (1990, p. 163) found that only 2 of 91 subjects relied completely on the decision aid used in his study. His data also revealed that only 21 of 91 subjects achieved a level of prediction accuracy equal to or higher than that of the aid, showing that most of the remaining subjects decided not to rely completely on the aid. Similar results were found in a nonauditing context by Arkes et al. (1986), who report a reliance rate of 56%, and by Powell (1991), who reports a reliance rate of 52%. Finally, a survey of the use of formal decision tools in medical decision making also revealed low usage of these tools by general clinicians over the past two decades (Bockenholt & Weber, 1992). Collectively, these results suggest that auditors, like other decision makers, are very hesitant to rely on the judgment output of mechanical decision aids for decisions that are highly judgmental.

The second point is the good news: there is evidence that the tendency to ignore the aid can be mitigated by treatments that could be recreated in an actual audit environment. Of particular interest is the treatment that simultaneously increases the penalty for both audit failure and overauditing. This treatment resulted in a lower rate of ignoring without increasing the rate of intentional shifting. It implies that auditors' nonreliance in the form of ignoring may be susceptible to managerial intervention without the negative side effect of increasing intentional shifting. Arkes et al. (1986) found evidence that lends some credence to this suggestion: they show that nonreliance can be reduced when people are explicitly told not to depart from the aid. In this study's context, making the payoff exacting for errors may have acted as a warning signal that discouraged both forms of nonreliance, rather than as a source of pressure leading to at least one form of nonreliance.

What are the underlying reasons for ignoring the aid? One possible explanation is confidence in oneself, as suggested by Arkes et al. (1986). Their evidence shows that self-professed knowledgeable decision makers often rely on a decision aid less than novices do, largely because of confidence in their expertise. Such results are consistent with the "feeling of competence" effect documented by Heath and Tversky (1991), who show that people prefer betting on their own judgment over an equivalent chance event only when they consider themselves knowledgeable. This account of one reason for ignoring is supported by the results of our supplemental analysis of participants' initial judgment confidence: auditors who were high in initial judgment confidence tended to ignore the aid more.

While confidence may help to explain nonreliance in general, it is insufficient to account for the differences in rates of ignoring between the control and the treatments, nor does it explain the differences in ignoring the aid when the aid predicts fraud versus no fraud.


All treatments but one increased the severity of the penalty for planning errors; the other treatment increased the focus on the cost of an extended audit. According to the PAP hypothesis, treatments that increase performance pressure increase the likelihood of nonreliance regardless of the aid's predictions. Under a Bayesian view, the effects of these treatments (except for increasing the focus on extended audit costs) depend on the decision aid's predictions. Our results, depicted in Table 1, panel B, show that the choice to ignore the aid is insensitive to the error penalty treatment if the penalty is for a single type of error only (groups 2 and 3). In fact, we found decreases in ignoring in all conditions; the decreases were significant when the aid predicted fraud (groups 4, 5, and 6). These results are inconsistent with the PAP hypothesis and only partially consistent with the Bayesian view. Clearly, further research is needed to specify the mechanism behind such decreased nonreliance.

Intentional Shifting Away from the Decision Aid

The results concerning intentional shifting clearly support the PAP hypothesis. The severity-of-penalty treatments affected the likelihood of intentional shifting: with the exception of one treatment group, the incidence of this form of nonreliance increased significantly as penalties were increased from the control group's baseline. The results are consistent with the PAP hypothesis, under which pressure would increase strategic nonreliance, and inconsistent with a Bayesian characterization, under which intentional shifting would never occur (an odds-form illustration appears at the end of this subsection). Here, intentional shifting may reflect strategic attempts to surpass the performance benchmark of the decision aid in response to performance pressure.

The participants' decision confidence provides further evidence supporting this explanation. Initial confidence was inversely associated with the likelihood of intentional shifting, and postdecision confidence was lower when intentionally shifting than when staying in agreement with the aid. These response patterns suggest that auditors were trying to outperform the aid and were conscious of their actions.

One competing explanation for intentional shifting is the possibility of a "tournament effect" induced by our payoff schedule. However, this explanation was ruled out for three reasons. First, unlike the "winner-take-all" payment scheme used in Ashton (1990), the payment scheme used in this study was based on the participants' relative performance; thus, a participant received some payment unless s/he was incorrect on all four cases presented. Second, prior evidence shows that the use of piecemeal versus tournament payment schemes has no effect on decision aid reliance (Arkes et al., 1986). Third, our sensitivity analysis revealed no significant difference between the control group and a comparable group paid at a flat rate.
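The Bayesian benchmark invoked above can be made concrete with a minimal odds-form sketch; the notation and conditioning assumptions here are introduced only for illustration. Let F denote the presence of management fraud, J the information underlying the auditor's initial planning judgment, and A the aid's prediction. By Bayes' rule,

\frac{P(F \mid J, A=\text{fraud})}{P(\bar{F} \mid J, A=\text{fraud})} \;=\; \frac{P(A=\text{fraud} \mid F, J)}{P(A=\text{fraud} \mid \bar{F}, J)} \;\times\; \frac{P(F \mid J)}{P(\bar{F} \mid J)}.

If the aid's prediction remains informative after conditioning on J (a likelihood ratio greater than one), an agreeing fraud prediction can only raise the posterior odds of fraud relative to the initial judgment alone; the symmetric argument applies when the aid predicts no fraud. A Bayesian auditor would therefore move the final planning judgment toward an agreeing aid, or leave it unchanged, but never away from it. This is the sense in which intentional shifting has no Bayesian rationale, and why its sensitivity to the penalty treatments points to the pressure-based (PAP) account rather than to belief revision.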


The Effect of the Decision Aid's Prediction

Another interesting angle relates to the decision aid's prediction. No decision aid is perfectly accurate, and relying on the aid's prediction can lead to negative consequences if the prediction later turns out to be incorrect. Our auditor participants tended to ignore and intentionally shift more when the aid predicted fraud than no fraud. The overall percentage of ignoring (intentional shifting) was 77% (38%) when the aid predicted fraud, whereas the rate was 10% (51%) when the aid predicted no fraud. These results suggest two possible explanations. First, auditors may be more concerned about the negative consequences of incorrect reliance when the aid predicts fraud than when it predicts no fraud (i.e., overauditing as opposed to audit failure). Second, participants may have been more convinced of the credibility of the aid when it predicted no fraud than when it predicted fraud. The first explanation is consistent with a decision consequences account; the second is consistent with a prior knowledge effect. Our supplemental analysis tested the validity of the second explanation and failed to find evidence to support it: participants' prior exposure to management fraud had no effect on the likelihood of either type of nonreliance. In contrast, the finding that auditors tended to intentionally shift more under the condition where the costs of overauditing were salient lends support to the first explanation. This conclusion is subject to the limitation that the participants in this study were senior auditors. The finding thus may not be generalizable to auditors at a higher rank, who may be relatively more sensitive to the consequences of audit failure. It may be that the error dimension most closely related to one's own area of responsibility is the one most subject to the effects observed here. This idea deserves further investigation.

Implications and Future Research

In addition to those noted above, the study's results suggest two additional implications worthy of future research. First, the level of intentional shifting signals a source of nonreliance that has not yet been fully investigated in the decision aid literature. While it remains unclear why a subject would engage in shifting, the pattern of occurrence is clear: intentional shifting took place when there was a "single-sided" penalty structure. Such a structure results in ambiguity with respect to the potential for incurring negative consequences of decision errors. With the exception of the exactingness treatment (group 4), every treatment should arouse consideration of a differential penalty for committing a particular type of error (overauditing alone, audit failure alone, incorrectly overriding the aid, or committing to an unnecessarily costly extended audit). In an uncertain environment, it is difficult to find a strategy that insures against any such error ex ante. Thus, in deciding whether to rely, the auditors' apprehension and performance pressure become higher (relative to the control). This may have led our participants into an escalating cycle of attempts to avoid a particular error, followed by attempts to balance against avoidance of the other, and finally to "out-guessing themselves." Clearly, the validity of this explanation, in which ambiguity about the likely incidence of consequences arouses pressure, should be investigated by future studies.

A related possibility is that auditors' nonreliance is likely to be caused by both motivational (anticipation of decision consequences) and cognitive (overconfidence) sources. Anticipation of decision consequences is a motivational source in that it arises from an auditor's concern about how reliance in the presence of an envisioned decision error will be evaluated by others who are granted hindsight (e.g., superiors, client management, a court).


It may be too optimistic to suggest that nonreliance induced by this "fear of being second-guessed" can be mitigated with an accountability treatment. Previous literature suggests that the effect of accountability on performance often depends on many contextual factors, the sources of judgment error, and the preferences of significant others (e.g., Tetlock, 1992; Gibbins & Newton, 1994). For example, if firms had a policy requiring justification of departures from a decision aid, the nature of decision aid reliance would be affected by how a departure would be viewed by supervisors. Furthermore, intentional shifting is only one way of departing from a decision aid, and it may not lend itself to treatment through explicit justification of the departure because a person's initial judgment is always private. The second cause of nonreliance is a cognitive source that results from an association-based judgment error, or inadequate processing (Arkes, 1991). Our results suggest that these two sources are likely to have separate effects on auditors' nonreliance and thus demand different strategies to mitigate them. Further research would be needed to explore this suggestion.

The generality of our results is limited in at least three ways. First, participants were recruited from a single firm, and not all firms emphasize the use of technology in the same way. Second, the decision aid examined in this study and in prior empirical studies (e.g., Arkes et al., 1986; Ashton, 1990; Powell, 1991) reflects only a subset of many possible decision aiding mechanisms. According to Messier (1995, p. 208), there is a wide array of decision aids; a decision aid can be "any explicit procedures for generating, evaluation and selection of alternatives, . . . A decision aid is a technology, not a theory." A mechanical decision aid such as ours is designed to assist the evaluation phase of a decision: it assists cue combination. Hence, our results on reliance may not generalize to decision aids designed for other decision phases.15 A third and related qualification is that a judge may not have formed his or her initial judgment prior to consulting an aid. That is, the judge may simply wish to use the aid for hypothesis generation ("Let me see if the aid gives me a warning sign") rather than hypothesis evaluation ("Let me see if the aid agrees with my own assessment"). Clearly, our setting presumes use of decision aids for hypothesis evaluation rather than hypothesis generation; hence, the results should be interpreted accordingly.16

15 In addition, our results may not generalize to other forms of reliance, such as reliance on an expert. Reliance on an expert often occurs in the context of problem formulation (e.g., what is the essence of a particular issue), information search (e.g., where to find relevant information), or seeking domain-specific knowledge (e.g., how to proceed from here, why this issue is important). In our study, the nature of the advice is a prediction based on cue combination on a judgmental issue. Another key difference is that reliance on an expert often allows for sharing the consequences of decision errors, whereas reliance on a decision aid often does not. In a professional setting such as auditing, it is the decision maker, not the decision aid, that will bear the consequences of the decision outcome.
16 Prior judgment/decision making studies in both auditing and nonauditing contexts suggest that consulting a decision aid prior to forming one's own judgment may not be desirable; the resulting judgment may be subject to an anchoring-and-adjustment effect. See Footnote 9.


APPENDIX I


APPENDIX II


APPENDIX III


REFERENCES

Arkes, H. (1991). Costs and benefits of judgment errors: Implications for debiasing. Psychological Bulletin, 110, 486–498.
Arkes, H., Dawes, R., & Christensen, C. (1986). Factors influencing the use of a decision rule in a probabilistic task. Organizational Behavior and Human Decision Processes, 37, 93–110.
Ashton, R. (1990). Pressure and performance in accounting decision settings: Paradoxical effects of incentives, feedback, and justification. Journal of Accounting Research, 28, 148–180.
Ashton, R., Kleinmuntz, D., & Tomassini, L. (1988). Audit decision making. In A. R. Abdel-khalik & I. Solomon (Eds.), Research opportunities in auditing: The second decade. Sarasota, FL: American Accounting Association.
Ashton, R., & Willingham, J. (1989). Using and evaluating audit decision aids. In R. Srivastava & J. E. Rebele (Eds.), Auditing symposium IX. Lawrence: University of Kansas.
Bell, T., Szykowny, S., & Willingham, J. (1993). Assessing the likelihood of fraudulent financial reporting: A cascaded logit approach. Unpublished working paper, KPMG Peat Marwick, Montvale, NJ.
Biggs, S., & Wild, J. (1985). An investigation of auditor judgment in analytical review. The Accounting Review, 60, 607–633.
Bockenholt, U., & Weber, E. U. (1992). Use of formal methods in medical decision making: A survey and analysis. Medical Decision Making, 12(4), 298–306.
Davis, D., & Holt, C. (1993). Experimental economics. Princeton, NJ: Princeton University Press.
Davis, E. (1994). Effects of decision aid type on auditors' going concern evaluation. Unpublished working paper, Baylor University, Waco, TX.
Dawes, R., Faust, D., & Meehl, P. (1989). Clinical versus actuarial judgment. Science, 243, 1668–1674.
Edwards, W. (1983). Human cognitive capacities, representativeness, and ground rules for research. In P. C. Humphreys, O. Svenson, & A. Vari (Eds.), Advances in psychology: Analyzing and aiding decision processes (Vol. 18, pp. 507–513).
Einhorn, H. J. (1976). A synthesis: Accounting and behavioral science. Journal of Accounting Research, 14(Suppl.), 196–206.
Fischhoff, B. (1982). Debiasing. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 422–444). New York: Cambridge University Press.
Gibbins, M., & Newton, J. D. (1994). An empirical exploration of complex accountability in public accounting. Journal of Accounting Research, 32(2), 165–186.
Gibbins, M., & Swieringa, R. (1995). Twenty years of judgment research in accounting and auditing. In R. Ashton & A. Ashton (Eds.), Judgment and decision making research in accounting and auditing (pp. 231–249). New York: Cambridge University Press.
Goldberg, L. (1968). Simple models or simple processes? Some research on clinical judgments. American Psychologist, 23, 483–496.
Heath, C., & Tversky, A. (1991). Preference and belief: Ambiguity and competence in choice under uncertainty. Journal of Risk and Uncertainty, 4, 5–28.
Hogarth, R. M. (1987). Judgment and choice. New York: Wiley.
Hogarth, R., Gibbs, B., McKenzie, C., & Marquis, M. (1991). Learning from feedback: Exactingness and incentives. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 734–752.
Kachelmeier, S., & Messier, W., Jr. (1990). An investigation of the influence of a nonstatistical decision aid on auditor sample size decisions. The Accounting Review, 65, 209–226.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39, 341–350.

Kinney, W., & Uecker, W. (1982). Mitigating the consequences of anchoring in auditor judgment. The Accounting Review, 57, 55–69.
Kleinmuntz, B. (1990). Why we still use our heads instead of formulas: Toward an integrative approach. Psychological Bulletin, 107, 296–310.
Kleinmuntz, D. (1985). Cognitive heuristics and feedback in a dynamic decision environment. Management Science, 31, 680–702.
Loebbecke, J., Eining, M., & Willingham, J. (1989). Auditors' experience with material irregularities: Frequency, nature, and detectability. Auditing: A Journal of Practice & Theory, 9, 1–28.
Messier, W., Jr. (1995). Research and development of audit decision aids. In R. Ashton & A. Ashton (Eds.), Judgment and decision making research in accounting and auditing (pp. 207–228). New York: Cambridge University Press.
Palmrose, Z. (1987). Litigation and independent auditors: The role of business failures and management fraud. Auditing: A Journal of Practice & Theory, 6, 90–103.
Pincus, K. (1989). The efficacy of a red flags questionnaire for assessing the possibility of fraud. Accounting, Organizations and Society, 14, 153–163.
Powell, J. L. (1991). An attempt at increasing decision rule use in a judgment task. Organizational Behavior and Human Decision Processes, 48, 89–99.
Smith, J., & Kida, T. (1991). Heuristics and biases: Expertise and task realism in auditing. Psychological Bulletin, 109, 472–489.
Solomon, I., & Shields, M. D. (1995). Judgment and decision making research in auditing. In R. Ashton & A. Ashton (Eds.), Judgment and decision making research in accounting and auditing (pp. 137–175). New York: Cambridge University Press.
Tetlock, P. (1985). Accountability: The neglected social context of judgment and choice. Research in Organizational Behavior, 7, 297–332.
Tetlock, P. (1992). The impact of accountability on judgment and choice: Toward a social contingency model. Advances in Experimental Social Psychology, 25, 351–375.

Received: March 7, 1997