ORGANIZATIONAL BEHAVIOR AND HUMAN DECISION PROCESSES
Vol. 72, No. 2, November, pp. 232-255, 1997
Article No. OB972735
The Influence of Decision Aids on User Behavior: Implications for Knowledge Acquisition and Inappropriate Reliance

Steven M. Glover, Douglas F. Prawitt, and Brian C. Spilker

School of Accountancy and Information Systems, Marriott School of Management, Brigham Young University
Structured decision aids are widely used in many organizational settings, and their influence on judgment consistency and accuracy has received considerable attention in the literature. While two benefits commonly associated with decision aid use are improved judgment and enhanced expertise development, relatively little attention has been directed toward the potential influence of decision aids on decision maker behavior. In particular, we argue that the use of structured aids may influence relatively inexperienced decision makers to approach aided tasks mechanistically, without becoming actively involved in the task or judgment. Such passive decision behavior may, in some situations, threaten gains in performance and other intended benefits associated with structured decision approaches. This study examines aided and unaided task execution to investigate whether use of an aid that does not require the active cognitive participation of the user in the task's underlying concepts can impede acquisition of task-related knowledge. This issue is important in organizational settings because on-the-job learning is a primary means through which inexperienced workers develop into seasoned workers. The study also considers whether use of a structured aid can encourage reliance even when the decision maker is aware that the aid does not incorporate all relevant features of the task environment. This issue has implications for individual and organizational task-related judgment effectiveness. © 1997 Academic Press

Helpful comments were received from Lee Beach, Bryan Cloyd, Terry Connolly, Martha Eining, Bill Felix, Audrey Gramling, Eric Hirst, Jane Kennedy, Lisa Koonce, James Loebbecke, Bill Waller, Ron Worsham, Mark Zimbelman, and workshop participants at the University of Arizona, the University of Texas at Austin, the University of Utah, and the AAA Mid-Year Auditing Section Conference. Financial support was provided by the Mary and Ellis fund and by the Marriott School of Management at Brigham Young University.

Address reprint requests to Douglas Prawitt, School of Accountancy and Information Systems, Marriott School of Management, Brigham Young University, 540 TNRB, Provo, UT 84602.

0749-5978/97 $25.00
Copyright © 1997 by Academic Press. All rights of reproduction in any form reserved.
Interest in decision aids was sparked by early studies in psychology demonstrating that statistical models commonly outperform human decision makers (Goldberg, 1965, 1968). Subsequently, researchers from a multitude of disciplines have demonstrated that decision aids can increase judgment consistency and accuracy. Improved judgment performance derives from increasing information-processing capacity or from overcoming human judgment shortcomings and biases (e.g., Dawes, Faust, & Meehl, 1989). In addition to providing performance-related benefits, it has been argued that decision aids facilitate task-related learning (e.g., Libby & Luft, 1993). These arguments are based on the idea that a structured decision aid provides a model of a task's knowledge components, thereby facilitating the process of passing the knowledge and expertise of an organization's experts to entry-level professionals. Interestingly, the facilitation of learning and expertise is associated with aided judgment even though virtually no decision aids have been developed with significant attention to the thought process itself (Einhorn, Kleinmuntz, & Kleinmuntz, 1979; Mackay, Barr, & Kletke, 1992; Simon, 1977).

Structured aids likely provide benefits in many decision settings. However, it is important that implementers understand the ways a decision aid can affect user behavior. While it is widely recognized that decision aids change user behavior and decision processes, either deliberately or inadvertently, very little attention has been paid to these issues in the literature (Silver, 1990, 1991). Recent research suggests structured aids may inadvertently encourage potentially problematic user behavior. In a series of studies, Todd and Benbasat (1991, 1992, 1993, 1994) found, contrary to a traditional assumption in the decision support literature, that subjects provided with a decision aid behaved as if effort minimization were an important consideration. Such behavior may be problematic if inexperienced decision makers treat aided tasks mechanistically, simply "operating" an aid rather than using it as a tool to enhance their judgment. When a decision maker simply operates an aid, the gains commonly associated with aided decision making may be offset by the costs associated with passive involvement. Specifically, mechanistic operation of a decision aid may result in a lack of cognitive participation in the concepts underlying the aid, potentially impeding learning from experience at the task. Further, passive behavior may impair judgment effectiveness when an aid does not accommodate all relevant environmental features. While the benefits of decision aids have been examined empirically, the potentially detrimental effects of passive user behavior on learning and on inappropriate reliance have not been examined.

In an experimental setting we examine the influence of a structured decision aid on inexperienced decision makers' (1) task-related knowledge acquisition and (2) judgment performance.1
In regard to performance, because many aids in practice do not accommodate all relevant cues, we examine the propensity of inexperienced decision makers to inappropriately rely on an aid that they know is incomplete. In the experiments, subjects either had or did not have a decision aid available to assist in the calculation of tax liabilities. Results indicate that aided subjects acquired less task-relevant knowledge than subjects who completed the task without the aid, despite the fact that the aid provided an explicit and detailed model of the task. Even though aided subjects were informed of and experienced the aid's inadequacy, results also indicate that aided subjects often inappropriately relied on the aid, resulting in a lower level of judgment accuracy than was exhibited by unaided subjects in cases for which the aid did not accommodate all relevant environmental features.

We examine these issues in an accounting context because the accounting profession has devoted considerable attention to aided judgment. Accounting practitioners consider the development and implementation of structured decision aids to be a crucial component of their efforts to improve practice effectiveness and efficiency (Elliott & Jacobson, 1987; Messier, 1995). Further, the use of structured decision aids by relatively inexperienced accounting professionals is pervasive in large accounting firms.2 To motivate further decision aid development and use, accounting academics and practitioners cite the benefits commonly associated with aided judgment, namely enhanced judgment performance, reduced judgment variability, and facilitation of knowledge transfer (Ashton & Willingham, 1988; Libby & Luft, 1993).

The influence of structured decision aids on knowledge acquisition and the tendency for decision makers to inappropriately rely on a decision aid are issues of particular relevance to organizations that provide aids to inexperienced professionals. To the extent aids inhibit learning from experience, junior-level professionals may have difficulty developing the cognitive structures that will be required of them as they progress to more advanced levels within the organization. Similarly, if professionals inappropriately rely on an incomplete decision aid, effectiveness may be compromised. Decision aids are undoubtedly appropriate and beneficial for many professional judgment situations. However, the behavioral influences of decision aid use must be considered to properly assess the benefits, limitations, and costs associated with decision aid development and implementation. This paper contributes to the literature by highlighting behavioral influences of decision aids, and by providing evidence of potential behavioral problems associated with their passive use.

1 We use the terms "knowledge acquisition" and "task-related learning" synonymously in this study.

2 Consistent with our concern about mechanistic use of decision aids, we are aware of an international accounting firm that has mandated a reduction in the use of its statistical sampling decision aid. Our conversations with firm personnel suggest the firm was concerned that employees were not actively participating in the decision-making process. Rather than attending to the problem, employees were simply applying the decision aid, which resulted in inefficient and, in some cases, potentially ineffective procedures.
TASK DOMAINS, DECISION MAKERS, AND DECISION AIDS
Relationships among types of problem domain, types of decision aid, and typical levels of decision-maker judgment and expertise are summarized in Fig. 1 (adapted from Abdolmohammadi, 1987, and Messier & Hansen, 1987). In order to clearly delineate appropriate boundary conditions for the issues addressed, this study focuses on a particular combination of aid, decision maker, and task common in public accounting firms and other professional organizations. The impairment of knowledge acquisition is of concern only when the decision maker has yet to acquire the relevant knowledge necessary to successfully complete a particular judgment task. The problem domain and aid situations most likely to be encountered by decision makers of low knowledge and expertise are the highly or semi-structured domains with simple/deterministic decision aids (see Fig. 1).

Silver (1990, 1991) also presents a useful framework for examining the characteristics of decision aids. His framework examines two attributes of Decision Support Systems (DSS): "restrictiveness" and "decisional guidance."3
FIG. 1. Relationship between type of decision aid, problem domain, judgment required, and decision maker’s level of experience. Figure was adapted from Abdolmohammadi (1987) and Messier and Hansen (1987).
To provide structure, encourage use, enhance consistency, and avoid overwhelming users, organizations typically provide inexperienced decision makers with restrictive decision aids. Generally speaking, the more restrictive the aid (e.g., the more its functions require few judgmental inputs from users), the less opportunity there is for decisional guidance. Because we are interested in task-related knowledge acquired by inexperienced decision makers, our experimental setting involves a highly or semistructured domain with a simple/deterministic, restrictive decision aid.

In practice, a decision context that may be particularly conducive to inappropriate reliance on a decision aid is one where: the decision aid provides a solution (i.e., a highly restrictive, deterministic aid); the aid does not accommodate all relevant environmental cues; the aid is perceived as relatively accurate; and the decision maker is not an expert at the task.4 An aid may be purposefully restricted, and therefore not incorporate all relevant cues, for a number of reasons. For example, the aid may be restricted to (1) encourage use, (2) provide structure to the decision-making process, (3) increase judgment consistency within the organization, (4) overcome systematic cognitive biases, and (5) prescribe normative decision-making techniques (Silver, 1990). While there may well be other decision contexts conducive to inappropriate reliance, this study examines the issue within the contextual boundaries outlined above.
RELATED RESEARCH
Both the professional and academic literatures cite "knowledge sharing" as an advantage of decision aids. Knowledge sharing involves incorporating the knowledge of expert decision makers into a decision aid and applying the aid at lower organizational levels. This allows relatively high-level knowledge to be applied by personnel with relatively little experience and training (e.g., Abdolmohammadi, 1987; Elliott & Jacobson, 1987; Libby & Luft, 1993; Prawitt, 1995). Some researchers have speculated that this process fosters structured learning in inexperienced users by exposing them to a model of the interrelationships among a task's underlying concepts (e.g., Ashton & Willingham, 1988; Libby & Luft, 1993), or by marching decision makers through the steps they must follow (Silver, 1990). However, it is not clear whether knowledge sharing involves the actual transfer of knowledge to the user of the decision aid, or whether the knowledge is simply embedded in the aid, which is then applied mechanistically by the user (Prawitt, 1995).5
3 Restrictiveness refers to the degree to which, and the manner in which, DSSs limit users' decision-making processes. Decisional guidance is the degree to which, and the manner in which, DSSs guide users in constructing and executing decision-making processes by assisting them in choosing and using their operators.

4 Previous research has found that the more expertise a decision maker has in making a judgment, the less likely it is that the decision maker will rely solely on a decision aid (e.g., see Arkes, Dawes, & Christensen, 1986; Whitecotton, 1996).
Mechanistic operation of a decision aid can lead to "passive understanding," which is an understanding of the mechanics of the aid's features (e.g., knowing when to push which buttons; Silver, 1990; Stabell, 1983) rather than an understanding of the underlying task concepts.

Another commonly cited advantage of decision aid use is increased effectiveness. This benefit, as well as others commonly associated with decision aids, is based on a traditional assumption in the decision support literature that decision makers will use decision aids to enhance their processing capabilities, leading to better task understanding and problem analysis (Keen & Morton, 1978). Thus, the decision aid frees cognitive resources, allowing the capacity-constrained decision maker to do a more complete problem analysis than would be possible without the aid (Simon, 1976).6 However, the assumption that the decision maker will reapply the cognitive savings provided by an aid conflicts with research suggesting decision makers place value on reducing effort (e.g., Beach & Mitchell, 1978; Payne, 1982). For example, models of contingent decision behavior suggest that minimization of cognitive effort is a key consideration in strategy selection (Beach & Mitchell, 1978; Jungermann, 1985; March, 1978). Consistent with behavioral decision theory, Todd and Benbasat (1991, 1992, 1993, 1994) found that subjects did not process more information, or expend more effort, when provided with a decision aid. Rather, subjects' behavior was consistent with the idea that effort minimization was an important consideration.7 As discussed below, effort reduction encouraged by decision aid use may have implications for knowledge acquisition and inappropriate reliance.8
5 Along these lines, Ashton and Willingham (1988) note that evidence about the effectiveness of aided decision making has largely been acquired on a trial-and-error basis in the field. They suggest that "aided decision making should be evaluated scientifically for the same reasons that unaided decision making should be evaluated—to understand the conditions under which it is effective, and to provide a sound basis for improving it when needed."

6 The suggestion that a model can facilitate learning has some support in the problem-solving literature, where research has shown that providing subjects with a worked example can enhance performance on similar tasks (Cooper & Sweller, 1987; LeFevre & Dixon, 1986; Sweller, 1988, 1989). On the other hand, research suggests that providing a worked example along with written instructions may reduce knowledge acquisition because subjects tend to focus on the example and process the written instructions only superficially (Chandler & Sweller, 1991; LeFevre & Dixon, 1986). While both worked examples and decision aids can provide conceptually complete models of a task's knowledge components, a worked example differs from a decision aid in that it simply provides an analogical reference, while an aid is directly involved in the execution of the task. Thus, a decision aid may provide the decision maker opportunities to reduce cognitive effort in ways not available to the user of a worked example.

7 In terms of Silver's (1990) framework discussed earlier, decisional guidance can be either deliberate or inadvertent. Inadvertent guidance is an unintended consequence of the aid's use (e.g., reduced effort and passive user involvement). Silver (1990) notes that deliberate decisional guidance is rarely found in today's DSSs.

8 Other potentially negative effects of decision aids have been identified in previous research (e.g., Ashton & Willingham, 1988; Jiambalvo & Waller, 1984; Kachelmeier & Messier, 1990; Kottemann, Davis, & Remus, 1994; Kottemann & Remus, 1987, 1991).
HYPOTHESIS DEVELOPMENT
Knowledge Acquisition

The acquisition of new knowledge involves development of relationships among pieces of information and integration of new information into existing knowledge structures (Devine & Kozlowski, 1995; Glaser & Bassok, 1989; Hermanson, 1994; Mayer, 1979, 1980; Reigeluth, Merrill, Wilson, & Spiller, 1980). Developing these interrelationships requires the active participation of the decision maker in the task's underlying concepts (Glaser & Bassok, 1989). Cognitive involvement or participation entails exerting effort to actively consider what information is needed for successful task completion, and to combine and process that information in order to produce a meaningful decision or solution (e.g., see Langer, 1975). The idea that knowledge acquisition involves active participation is consistent with current models in the literature. Libby and Luft (1993) suggest that abilities and experiences result in an internal state of knowledge, and abilities and knowledge together directly affect performance. Building on this model, Libby and Tan (1994) note, "a model of performance is one where experience creates opportunities for the acquisition of knowledge, while ability and effort determine the amount of knowledge acquired given that experience. This acquired knowledge, along with ability and effort, then directly affects performance."

Consistent with decision makers' tendency to minimize cognitive effort, we argue that aided decision makers may use a structured decision aid mechanistically, responding passively to requests for information and simply operating the aid's features (e.g., "pushing the buttons"). Though not typically prohibited from actively participating in the task's underlying concepts, aided decision makers may not fully consider available information and may fail to develop an understanding of how the aid combines and processes the information to successfully solve the problem.9 In contrast, in order to successfully complete a task, unaided decision makers must participate cognitively in the concepts underlying the task. Because a structured decision aid may encourage passive user behavior, and because passive involvement is not conducive to learning task concepts through experience, we propose the following hypothesis:

H1: Subjects provided with a structured decision aid that does not encourage active cognitive participation in the underlying task concepts will acquire less task-related knowledge through experience at the task than will subjects without the aid.
9 The prediction that passive operation of an aid will "hinder" knowledge acquisition and promote overreliance due to effort minimization considerations is also consistent with a body of literature in psychology suggesting that lower psychological involvement can lead to less cognitive effort and to errors in the application of techniques and the processing of information (e.g., lower "personal involvement," Sherif, Kelly, Rodgers, Sarup, & Tiller, 1973; "issue involvement," Kiesler, Collins, & Miller, 1969; "issue relevance," Lastovicka & Gardner, 1979; Petty & Cacioppo, 1979; and "personal responsibility," Harkins & Petty, 1982).
Inappropriate Reliance

In accounting and other professional practice settings, relatively simple and deterministic aids are commonly applied in complex decision environments in which potentially relevant cues are varied and numerous. For example, in financial statement assurance engagements, common aids include checklists and worksheets to aid in risk and internal control assessment, sample-size guidance flowcharts, and various tools to assist in going-concern and materiality assessments. In tax practice, commonly used aids include checklists to assist in issue identification, and computerized worksheets to perform complex combinations of numerical inputs. The most prevalent type of DSS guidance in most areas has been mechanical, in the form of look-ahead menus and status lines (Silver, 1990).

Variation and complexity in the task environment, and the structuredness or restrictiveness of a decision aid, are inversely related to the likelihood of an adequate "fit" between the decision aid and task requirements (Ballew, 1982; Bamber & Bylinski, 1982; Bamber & Snowball, 1988; Sullivan, 1984). In practice, because environments in which professional tasks are performed can vary significantly, decision aids can rarely accommodate all potentially relevant environmental features. Further, as noted previously, aids are often deliberately simplified or restricted in order to encourage use, avoid overwhelming the decision maker, and provide structure to the task while still adequately accommodating most common task environments (Pincus, 1989; Silver, 1990). Accordingly, most decision aids are developed to aid human judgment, not replace it.

When relevant environmental cues are not adequately accommodated by a decision aid, the decision maker must exercise more judgment to adequately perform the task than when the aid accounts for all relevant cues. However, the decision-making process of a user relying on an aid is often constrained by that system's functionality (Silver, 1990). Furthermore, increased decision-maker care is at odds with the passive user involvement potentially encouraged by decision aids. Passive or mechanistic use of a decision aid may reduce the decision maker's sensitivity to environmental conditions, some of which may not be adequately accommodated by the aid. As long as there is an adequate fit between the decision aid and the task situation, a reduction in cognitive effort associated with the use of a relatively deterministic decision aid will not decrease task performance. When there is not an adequate fit, however, passive use of the aid may result in inappropriate reliance, producing lower quality performance than would have resulted from unaided judgment. The following hypothesis summarizes our expectation regarding inappropriate reliance:

H2: In cases where the decision aid does not adequately incorporate relevant task cues, subjects with the aid will rely on the aid, resulting in a lower level of task performance than subjects without the aid, even though aided subjects are aware of the aid's inadequacy.
RESEARCH METHOD
Subjects and Compensation

As discussed previously, because H1 requires measurement of knowledge acquired during task performance, subjects must not already possess the requisite task knowledge. Thus, to achieve a relatively homogeneous subject pool, advanced accounting students were selected as proxies for relatively inexperienced staff accountants. The 44 participants had completed nearly all the accounting courses required for a baccalaureate degree at a major private university, including a course on taxation.10 Over 80% of the accounting graduates from this university typically accept positions with "Big Six" accounting firms. The experimental task, involving determination of individual taxpayer liability, is a task in which the subjects were inexperienced and in which they had received uniform prior instruction. Subjects had a basic understanding of individual taxation, but lacked an in-depth understanding of the issues involved in the taxation of capital gains.

Subjects were told that average total compensation would be approximately $10.00, but that individual compensation would depend on performance. As part of the instructions at the beginning of each experimental session, subjects were informed they would be paid an unspecified base amount plus $.50 for every correctly formulated answer. Subjects were not told how many cases they would be asked to solve and, because the base amount was unspecified, subjects were unable to calculate the number of cases they would need to solve to earn at least the average compensation. Base amounts varied between experimental conditions in accordance with expectations based on pilot subject performance. At the conclusion of both experiments, a lab attendant recorded summary data from the concluding computer screen and filled in a compensation sheet while the subjects completed an exit survey. Each subject then submitted his or her compensation sheet to a cashier for payment.

Task

The experimental task was computerized and involved the determination of tax liabilities, including computation of taxes on net capital gains, for six clients.11 For each case, subjects were given a set of client information relating to the client's taxable income. Subjects either were or were not provided with a decision aid. Other than the availability of a decision aid, screens and procedures were identical for subjects with and without access to the aid.
10 Usable data were available for 43 subjects because the computer program malfunctioned for one subject.

11 In determining the number of cases to use in the experiment, we attempted to balance the need to have enough cases to allow subjects the opportunity to learn underlying task concepts, with having few enough cases to maintain a high level of interest in the task. Based on pilot testing, six cases proved adequate to accomplish these objectives.
All subjects were provided with general instructions, a tax rate table indicating that net capital gains are taxed as ordinary income (but at a maximum rate of 28%), and client-specific information, some of which was irrelevant. Therefore, while all information necessary to complete the task was available on the screen, subjects were required to identify the relevant client information and appropriately combine the information according to the concepts underlying the taxation of capital gains. To successfully complete the task, No-Aid subjects were required to identify and combine relevant information without the help of an aid.

The decision aid (reproduced in Fig. 2) is a computerized worksheet that identifies and appropriately combines the information necessary to calculate a client's tax liability in most situations. While the decision aid guides the user to correctly compute taxes on capital gains in most typical client situations, it does not take into account that capital gains are taxed at a rate less than 28% when the taxpayer is in a lower overall tax bracket (15%).12
FIG. 2. Computer screen with decision aid: Primary experiment. The actual computer screens included a box for subjects to enter their solution. To view the client data and the marginal tax rates, subjects had to point to and click on the cell containing the data.
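To make the aid's critical limitation concrete, the following sketch contrasts the correct computation with the worksheet logic as described above. It is an illustration we have added, not part of the study materials: the bracket thresholds, helper names, and example client figures are hypothetical, and the aid's internal formula is inferred from the description of the adapted IRS worksheet.

```python
# Illustrative sketch only: bracket floors are hypothetical stand-ins for the
# experiment's rate table, which is described as having a 15% bottom bracket
# and a 28% ceiling on net capital gains.
BRACKETS = [(0, 0.15), (24_000, 0.28), (58_150, 0.31)]  # (floor, marginal rate)

def ordinary_tax(taxable_income: float) -> float:
    """Tax ordinary income progressively across the marginal brackets."""
    tax = 0.0
    for i, (floor, rate) in enumerate(BRACKETS):
        ceiling = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if taxable_income > floor:
            tax += (min(taxable_income, ceiling) - floor) * rate
    return tax

def correct_tax(taxable_income: float, net_capital_gain: float) -> float:
    """Correct rule: gains are ordinary income but taxed at no more than 28%,
    so a 15%-bracket client pays only 15% on the gain."""
    capped = ordinary_tax(taxable_income - net_capital_gain) + 0.28 * net_capital_gain
    return min(ordinary_tax(taxable_income), capped)

def aid_suggested_tax(taxable_income: float, net_capital_gain: float) -> float:
    """The experimental aid as described: the gain is always taxed at 28%,
    with no comparison protecting clients in the 15% bracket."""
    return ordinary_tax(taxable_income - net_capital_gain) + 0.28 * net_capital_gain

# A hypothetical "15%" client: the aid overstates the liability.
income, gain = 20_000, 5_000
print(correct_tax(income, gain))        # 3000.0 (= 0.15 * 20,000)
print(aid_suggested_tax(income, gain))  # 3650.0 (= 0.15 * 15,000 + 0.28 * 5,000)
```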
As a result, the decision aid appropriately incorporates all critical client-related cues in four of the experiment's six cases. This incompleteness in the decision aid relative to certain client environments operationalizes the concept that, in practice, aids often do not capture all potentially relevant environmental features.

To use the aid, subjects entered relevant client information into the decision aid template. The decision aid then provided a suggested solution for the client's total tax liability. The aid template by its nature provided subjects an explicitly defined model for determining which client information was relevant to the task, and for determining the appropriate interrelationships among the items in most situations. Similar to many aids in practice, however, the experimental aid and other on-screen resources allowed, but did not require, subjects to actively participate in the task's underlying concepts.

All subjects entered a final number into a solution box designated on the screen, representing a culmination of the task. Subjects could type any number into the box, including, for aided subjects, the aid's suggested answer, if desired. When subjects entered a final solution for a particular case, the computer program asked them to confirm or change their entry. After subjects confirmed, the program indicated whether the solution was correct or incorrect. If the subject's response was incorrect, the program also provided the correct solution.13 Subjects then had the option to review the current case before proceeding to the next case. All subjects had access to blank scratch paper, a pencil, and a calculator.

Procedures

Subjects were randomly assigned to one of the two experimental conditions: "Aid" or "No-Aid." Prior to beginning the experimental task, subjects in both conditions observed a demonstration of the software along with verbal instructions. The concepts underlying the taxation of capital gains were not specifically explained for either subject group. Subjects in both the Aid and No-Aid conditions were instructed that all information necessary to generate a complete and correct solution for all client situations was available on the screen. The first instruction screen on beginning the task reiterated this fact.

12 The aid is an adaptation of the actual IRS "Capital Gain Tax Worksheet." The client circumstance in which the aid is inadequate is relatively unusual because the individual must have sufficient wealth to generate taxable capital gains and yet have total taxable income within the 15% tax bracket.

13 Feedback was given to provide an opportunity to learn from task experience. Accounting decision makers commonly receive outcome feedback for tasks similar to that used in this study (Bonner & Walker, 1994). While prior research indicates that outcome feedback alone is not always effective in promoting knowledge acquisition (see Balzer, Doherty, & O'Connor, 1989), in relatively simple tasks such feedback can allow subjects to reason backwards from the outcome to develop correct explanations (Hirst & Luckett, 1992). Further, subjects in this study not only received feedback on whether their answer was correct, but also received the client's correct tax liability if their answer was incorrect. Subjects were then given unlimited time to review the prior case given this information.
Subjects were also told where on the screen they could locate instructive information regarding the taxation of capital gains. Prior to beginning the experimental task, Aid subjects were told that the aid may not always provide a correct solution. For all subjects, the first case for which the aid was not adequate (i.e., the first "15%" case) was randomly placed in either the second or fourth position. The second "15%" case was last for all subjects.14 The other four cases appeared in random order. Subject data files were collected on computer diskettes.

Dependent Variables

H1 predicts that subjects in the Aid condition will acquire less task-related knowledge than subjects in the No-Aid condition. The measure used to address this hypothesis is an expert's assessment of subjects' free-form written explanations of the concepts underlying how capital gains are taxed.15

H2 proposes that decision makers may rely on the aid in settings for which the aid does not adequately incorporate environmental cues and, as a result, perform at a lower level than unaided decision makers in these settings. The dependent measure for the second hypothesis is accuracy in the sixth (last) case, for which the decision aid is inadequate, as noted above.

RESULTS
Manipulation Check

Prior to the experiment, Aid subjects were told the decision aid may not always provide a correct solution. Additionally, before they were asked to perform the test case, all Aid subjects experienced a case in which the aid did not provide the correct solution. As a check on whether Aid subjects attended to the instructions regarding the aid's inadequacy, a question on the post-experimental questionnaire asked subjects to indicate (1) how accurate they believed the aid would be before they began the experiment and (2) how accurate they believed the aid actually was after completing the experiment.

14 The first "15%" case was inserted to give all subjects experience with the "15%" client situation before attempting to solve the sixth (test) case. Further, the first "15%" case gave aided subjects experience with a case in which the aid did not suggest the correct answer, confirming the verbal instructions that the aid may not always provide a correct solution.

15 Subjects were asked in the post-experimental questionnaire to write their conceptual understanding of how capital gains are taxed. Each explanation was given a score of 0, 1, 2, 3, or 4, depending on the understanding of task concepts demonstrated in the explanation. A response assigned a score of 0 includes no relevant or accurate information relating to the taxation process. A score of 4 indicates a complete understanding, including the idea that capital gains are taxed at less than the maximum rate when the taxpayer is in the 15% tax bracket. A tax expert who is a CPA and a professor of taxation performed the rating. The expert was unaware of the hypotheses or treatment conditions. The authors also collectively performed this rating. The correlation between the independent expert's and the authors' ratings was .95. The expert's ratings are used in the analyses presented.
On average, subjects indicated that, prior to completing any cases, they believed the aid would be accurate 81.9% of the time. After completing all the cases, subjects' average perceived accuracy was 61%, which is close to the aid's actual accuracy during the experiment (67%). These responses indicate subjects were aware the aid did not always provide a correct solution.
Knowledge Acquisition (H1)

To test the knowledge acquisition hypothesis, we compare the expert's assessment of post-experimental knowledge for Aid and No-Aid subjects. Table 1 reports descriptive statistics for the dependent variable and the test statistic for H1. The knowledge measure is significantly greater for No-Aid subjects than for Aid subjects (t = 2.60, p = .01).16 Thus, our results are consistent with H1.
Inappropriate Reliance (H2)

H2 predicts that No-Aid subjects will outperform Aid subjects in the test (last) case, when the client fact pattern is such that it is not adequately accommodated by the decision aid. Table 2 reports accuracy results for all six cases. In the test case, 18 of 22 No-Aid subjects (82%) entered the client's correct tax liability, in contrast to only 4 of 21 Aid subjects (19%).17
TABLE 1
Knowledge Acquisition Hypothesis (H1): Mean Scores (Standard Deviations) and t Test

                                                          Treatment condition
Dependent measure                                    Aid (n = 21)   No-Aid (n = 22)      t      p
Expert-assessed understanding of the
  taxation of capital gains(a)                       1.38 (.87)     2.23 (1.23)        2.60    .01

(a) Understanding of the taxation of capital gains is based on subjects' written descriptions of how capital gains are taxed. Each subject's description was scored as 0, 1, 2, 3, or 4, depending on the extent to which subjects demonstrated an understanding of the taxation of capital gains. A response assigned a score of 0 includes no relevant or accurate information relating to the taxation process. A score of 4 indicates a complete understanding, including the idea that capital gains are taxed at less than the maximum rate when the taxpayer is in the 15% tax bracket.
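As a check on the statistics reported in Table 1, the following sketch recomputes the t test from the table's summary values. It assumes a pooled-variance two-sample test and the availability of SciPy; neither is part of the original paper.

```python
# Recompute Table 1's t statistic from the reported means, SDs, and cell sizes.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=2.23, std1=1.23, nobs1=22,  # No-Aid condition
    mean2=1.38, std2=0.87, nobs2=21,  # Aid condition
    equal_var=True,                   # pooled-variance test (an assumption)
)
print(round(t, 2), round(p, 3))  # ~2.60 and ~0.013, consistent with t = 2.60, p = .01
```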
16 Consistent with concepts underlying the development of H1, results from a MANOVA using three different proxies for effort as dependent variables (self-assessed effort, mean time to complete each of the first five cases, mean time reviewing each of the first five cases) suggest Aid subjects exerted less effort at the task than No-Aid subjects (p = .03).
TABLE 2
Inappropriate Reliance Hypothesis (H2): Accuracy of Response by Condition(a)

Treatment condition     Four "typical" cases(b)    First "15%" case(c)    Second "15%" case (test case)
Aid (n = 21)            85% (71)(d)                10%(e) (2)             19%(e,f) (4)
No-Aid (n = 22)         49% (43)                   68% (15)               81%(g) (18)

(a) Percentages indicate the proportion of cases solved correctly. Numbers in parentheses indicate the total number of correct responses by subjects in that cell.
(b) These are the cases for which the aid provided the correct answer. All subjects received these four cases in the first, second or fourth, third, and fifth positions.
(c) This is the first case in which the aid did not provide the correct answer. All subjects received this case in either the second or fourth position.
(d) Each subject responded to all four cases in which the aid provided the correct answer, resulting in a total of 84 responses for the Aid group (i.e., 21 subjects times 4 cases) and 88 for the No-Aid group.
(e) A Fisher's exact test indicates that the percentage of correct responses is dependent on the decision aid condition (p < .01).
(f) Of the 13 Aid subjects who received the first "15%" case in the second position, two (15%) responded correctly to the second "15%" case (Case 6). Of the eight Aid subjects who received the first "15%" case in the fourth position, two (25%) responded correctly to the second "15%" case.
(g) Of the 12 No-Aid subjects who received the first "15%" case in the second position, nine (75%) responded correctly to the second "15%" case (Case 6). Of the nine No-Aid subjects who received the first "15%" case in the fourth position, all nine responded correctly to the second "15%" case.
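The contingency analysis discussed in the text below can likewise be reproduced from Table 2's test-case cell counts. This is an added illustration, assuming SciPy's two-sided Fisher's exact test:

```python
# Reproduce the test-case comparison: correct vs. incorrect responses by condition.
from scipy.stats import fisher_exact

aid = [4, 21 - 4]       # 4 of 21 Aid subjects correct on the second "15%" case
no_aid = [18, 22 - 18]  # 18 of 22 No-Aid subjects correct

odds_ratio, p_value = fisher_exact([aid, no_aid])
print(p_value)  # well below .01, consistent with the reported p < .01
```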
A Fisher's exact test indicates that the number of correct responses depends on aid condition (p < .01).18 However, this measure is limited by the possibility that an incorrect solution could be entered for a "15%" case even though the subject did not inappropriately rely on the aid. For example, the subject might try to calculate the correct liability without using the aid and still arrive at an incorrect solution. Therefore, we examined Aid subjects' response materials for two conditions before concluding in favor of an "inappropriate reliance" explanation.

17 Note that No-Aid subjects performed better, on average, on the "15%" cases than on the "typical" cases. This is likely the result of two contributing factors. First, the "15%" cases were the least complex computationally. Given an understanding of the taxation of capital gains, all a subject had to do to compute the correct answer for a "15%" case was multiply taxable income by 15%. In contrast, the "typical" cases required subjects to apply different tax rates to different marginal levels of taxable income. Second, the first "15%" case occurred (randomly) in either position 2 or 4, and the second "15%" case was positioned last (sixth). As a result, on average, subjects had completed more trials before they completed the "15%" cases than before completing the typical cases.

18 As indicated in Table 2, results are qualitatively similar for the first "15%" case (encountered in either the second or fourth position). In that case, only 2 of 21 Aid subjects (10%) entered the client's correct liability, while 15 of 22 No-Aid subjects (68%) entered the correct liability (Fisher's exact test, p < .01). Position of the first "15%" case made no qualitative difference in performance on the second "15%" case. As would be expected, in the cases for which the aid suggested a correct solution, Aid subjects were more accurate than No-Aid subjects (85% vs. 49%).
First, the subject must have entered the aid's (incorrect) suggested solution. Second, the solution must have been taken from the decision aid and not calculated independently by the user, as evidenced by the scratch paper collected from each subject. Consistent with an inappropriate-reliance explanation, all but two of the Aid subjects who entered an incorrect solution entered the answer suggested by the aid.19 Also, none of these subjects performed calculations on their scratch paper suggesting that they may have independently arrived at the answer provided by the aid. Thus, we conclude that the results are consistent with H2.

In summary, consistent with prior research indicating that the use of decision aids can improve human decision maker performance, aided subjects' performance in situations where the aid adequately accommodated all environmental cues was superior to that of unaided subjects in the same cases (85% vs. 49%; see Table 2). However, results are also consistent with the argument that enhanced performance in routine tasks may be achieved through use of a decision aid at the cost of reduced knowledge acquisition (H1) and reduced performance in tasks for which the aid is not entirely adequate (H2).

Additional Analysis

Figure 3 depicts subjects' performance accuracy across the six experimental cases. Data from Aid subjects are separated into two groups, based on whether the first "15%" case was received in the second position or in the fourth position.20
FIG. 3. Descriptive data on accuracy by case: Primary experiment. This graph represents subjects' accuracy by case position and by experimental condition (Aid and No-Aid). Aid subjects were split into two groups (Aid 2nd and Aid 4th) based on the position in which they completed the first "15%" case (i.e., either 2nd or 4th). This split was made to more clearly reflect Aid subjects' performance when the aid did not provide the correct solution.
19 Inferences are similar to those presented in the paper when these two subjects are excluded from the analysis.

20 The Aid group was split in this manner to more clearly show the effect of the aid's inadequacy on accuracy for the two "15%" cases.
As illustrated in Fig. 3, subjects in the Aid condition maintained a very high level of accuracy when the aid provided a correct solution. However, consistent with H2, Aid subjects' performance was much lower for the two "15%" cases, when the aid did not adequately account for the client situation. In contrast, No-Aid subjects' performance exhibits a fairly steady improvement, suggesting subjects gained task-related knowledge over time.

Figure 4 presents graphically the total time subjects took to complete each case by experimental condition and case position. As illustrated in Fig. 4, the time subjects spent at the task (time to complete the case plus time to review the case upon completion) generally decreased over time. The most significant drop was between the first and second cases for the No-Aid subjects, suggesting a steep initial learning curve. The overall downward trend, together with the increasing accuracy of No-Aid subjects depicted in Fig. 3, is consistent with the idea that No-Aid subjects gained task-related knowledge over time, requiring less time to accurately complete and review subsequent cases.

Follow-Up Experiment

In the experiment described above, Aid subjects acquired less task-related knowledge than did No-Aid subjects, consistent with H1. However, the aid did not provide a complete model of the task with respect to all relevant environmental cues, and this incompleteness may have impeded subjects' acquisition of the concepts underlying the task. Further, subjects may not have learned from the aid because instructions indicated that the aid may not always be accurate, and in fact, they experienced two cases in which the aid did not provide a correct solution. Finally, because Aid subjects were always provided with the aid, the benefits gained from learning the task's underlying concepts may not have been sufficiently salient, despite the fact that subjects were informed that the aid may not provide an appropriate suggested solution in every client environment.
FIG. 4. Descriptive data on total time on task by case: Primary experiment. This graph represents subjects' total time on task by case position and by experimental condition (Aid and No-Aid). Aid subjects were split into two groups (Aid 2nd and Aid 4th) based on the position in which they completed the first "15%" case (i.e., either 2nd or 4th). This split was made to more clearly reflect Aid subjects' time allocations when the aid did not provide the correct solution. Because subjects were not allowed to review case six, the time for case position six reflects time completing the case only.
These issues were addressed in a follow-up experiment using an aid that represents a complete model of the concepts underlying the task, and that therefore correctly calculates the tax liability in all client cases. Aid subjects were informed that the aid would not always be available to them. In summary, the experimental task and procedures were similar to those in the primary experiment with the following modifications: (1) the aid was complete with respect to all client environments and thus always provided a correct solution (and a complete task model); (2) subjects were told that the aid would not always be available; and (3) in order to obtain an additional measure of the understanding subjects acquired through task experience, all subjects completed the sixth case without access to the decision aid. The aid used for the follow-up experiment is reproduced in Fig. 5.
FIG. 5. Computer screen with decision aid: Follow-up experiment. The actual computer screens included a box for subjects to enter their solution. To view the client data and the marginal tax rates, subjects had to point to and click on the cell containing the data.
Forty-six subjects with backgrounds similar to those described in the primary experiment participated.21 As in the primary experiment, an expert's assessment of subjects' written discussions of capital gains taxation was used to measure knowledge acquisition. Table 3A reports descriptive statistics by condition and the test statistic for H1. In support of H1, No-Aid subjects' written explanations indicated significantly greater learning than Aid subjects' explanations (t = 3.33, p < .01).

Performance in the sixth case is used as an additional proxy for knowledge acquisition. Table 3B reports accuracy in the last case by condition. Five of 21 subjects in the Aid condition (24%), and 16 of 23 subjects in the No-Aid condition (70%), provided a correct response in the last case. A Fisher's exact test indicates that the number of correct responses is dependent on aid condition (p = .02). To the extent performance proxies for knowledge in this experiment, these results provide additional support for H1.22
TABLE 3
Knowledge Acquisition: Follow-Up Experiment

Panel A: Mean scores (standard deviations) and ANOVA results for self-assessed knowledge acquisition and subjects' understanding of the taxation of capital gains

                                                          Treatment condition
Dependent measure                                    Aid (n = 21)   No-Aid (n = 23)      t      p
Understanding of the taxation of
  capital gains(a)                                   1.00 (1.10)    2.30 (1.46)        3.33    .00

Panel B: Percentage of correct responses by condition (number of correct responses in parentheses)

Treatment condition      Cases 1-5        Case 6 (test case)
Aid (n = 21)             89% (93)(b)      24%(c) (5)
No-Aid (n = 23)          65% (75)         70% (16)

(a) Understanding of the taxation of capital gains is based on subjects' written descriptions of how capital gains are taxed. Each subject's description was scored as 0, 1, 2, 3, or 4, depending on the extent to which the description demonstrated an understanding of the taxation of capital gains.
(b) Each subject responded to five cases, resulting in a total of 105 responses for the Aid group (i.e., 21 subjects times 5 cases) and 115 for the No-Aid group.
(c) A Fisher's exact test indicates that the percentage of correct responses is dependent on the decision aid condition (p = .02).
21 None of the subjects in the follow-up experiment had previously participated in the original experiment. Results are based on data from 44 subjects because the computer program malfunctioned for two subjects.

22 As in the first experiment, a MANOVA using three different proxies for effort as dependent variables (self-assessed effort, mean time to complete each of the first five cases, mean time reviewing each of the first five cases) suggests Aid subjects exerted less effort at the task than No-Aid subjects (p < .01).
IMPLICATIONS AND LIMITATIONS
Implications for Decision Aid Design

This study provides evidence that, while decision aids may improve performance in routine tasks, decision aid usage can impair task-related knowledge acquisition and lead to inappropriate reliance when the aid does not adequately accommodate all potentially relevant environmental features. While further research is needed, these results suggest that organizations using structured decision aids for inexperienced workers should attempt to understand and consider the benefits and costs associated with the use of such aids, including potential behavioral effects.

That a decision aid can change decision-maker processes and behavior, either deliberately or inadvertently, is well accepted. Thus, when a desired goal of aided decision making is to facilitate the development of expertise or to maintain a high level of judgment performance, designers of decision aids should carefully consider how their aids affect user behavior. The results of Experiments 1 and 2 together suggest decision aids can encourage passive user involvement, resulting in reduced acquisition of task knowledge and inappropriate reliance on the aid. The results of this study, along with prior research, also support the idea that people learn more effectively by actively participating in the concepts underlying a task, working through the processes involved in the task rather than simply operating a structured aid and receiving outcome feedback.

These findings have potential implications for how decision aids are designed and used in practice. The results of our study suggest that if potential influences on user behavior are not considered in the design of a structured decision aid, use of the aid may have unintended consequences. Fortunately, previous research suggests it may be possible to develop aids that encourage users to be actively engaged in a task or judgment (Alter, 1980; Todd & Benbasat, 1994). However, this will only occur if the designer pays careful attention to how the decision aid changes the cognitive requirements and behavioral implications of different problem-solving approaches. Specifically, in applications where the development of user expertise is an important consideration, designers of decision aids should take into account both how human decision makers acquire knowledge, and the knowledge-related characteristics of experts in those applications. Rennie and Gibbins (1993) note, "decision aids should complement existing thought processes, enhancing an accountant's ability to develop, activate, elaborate, and utilize effective memory structures."

Research in learning and psychology suggests that people acquire knowledge by actively building relationships between new information and current knowledge structures. Expertise depends at least partially on memory that is organized in a way that helps the expert recall and apply information to the task at hand (e.g., see Libby & Frederick, 1990). Thus, a line of research that may be helpful to consider in designing decision aids that facilitate learning is the literature investigating how experts differ from novices, and how experts become experts. In addition, research in education can provide important insights regarding decision-aid design.
For example, Kachelmeier, Jones, and Keller (1992) suggest that analogical learning aids that provide structure, sequence, and monitoring can facilitate knowledge acquisition (also see Wynder & Luckett, 1996). Along these lines, decision aids could be designed to provide process feedback, perhaps even including "understanding rules" (see Bonner & Walker, 1994, and Remus & Kottemann, 1986).23

The ways in which decision aids are introduced and used may also be altered to beneficially affect users' learning through task experience. For example, organizations could provide relevant conceptual instruction, or could require an inexperienced decision maker to perform a task a number of times prior to being given access to an aid, in order to encourage learning through active cognitive participation in the task's underlying concepts.

In tasks for which it is impossible or impractical to design an aid to accommodate all relevant environmental cues, and where inappropriate reliance on the aid is a concern, it seems wise to consider aids that combine the best of what the model and the decision maker can offer. Such an aid would assist the decision maker in overcoming human processing weaknesses and encourage the decision maker to be creatively and actively involved in applying his or her knowledge base. For example, in addition to providing a method to combine and evaluate information, decision aids could assist decision makers in generating a range of possible options, encourage the decision maker to go beyond the immediate information available, and provide assistance in exploring the decision maker's existing knowledge structures. Further, at least in some cases, decision aids could be designed to suggest, or even require, explicit consideration of whether the aid is appropriate and adequate given the environment in which it is to be applied. Consistent with these ideas, Elam and Mead (1987) suggest that a DSS whose purpose is to enhance creativity should provide feedback with "depth and positive tenor" to encourage prolonged alternative generation and delayed judgment.24

23 See Bonner and Walker (1994) for a review of the literature on feedback and learning, as well as some empirical findings regarding the roles of instruction, practice, and explanatory versus outcome feedback in learning. Also see Remus, O'Connor, and Griggs (1996).

24 Interested readers should also consider work by Silver (1990, 1991) on directed or nondirected change, and work by Remus and Kottemann (1986), who suggest that by including components of expert systems in decision support systems, decision making can be improved and human cognitive limitations mitigated.
Limitations

Care should be taken when generalizing from the results of the experiments reported here. The experiments conducted as part of this study involved inexperienced subjects performing a structured task requiring relatively little subjective judgment and a short task-experience horizon. In addition, it should be noted that the measurements obtained in the study do not include a complete set of process data to directly track development of subjects' knowledge structures.
Further, alternative methods of training, feedback, and motivational devices not included in this experiment may play an important role in knowledge acquisition and task performance. With these issues in mind, future research should seek to expand understanding of the issues raised here by using alternative approaches and measures, and by exploring the effects of decision aids on knowledge acquisition and judgment performance in other tasks and environments.

Research on structured decision technologies is also inherently constrained in that the researcher must select a particular aid to examine. Despite this constraint, issues such as the influence of decision aids on user behavior, learning, and performance are sufficiently important to warrant continued research. One of the goals of research in this area should be to identify characteristics of particular decision aids or tools that can be generalized to relatively broad sets of available technologies. Based on the results of this and related studies, a potentially important and possibly generalizable characteristic appears to be whether or not a decision aid requires, or at least encourages, the active participation of the user.

REFERENCES

Abdolmohammadi, M. (1987, Spring). Decision support and expert systems in auditing: A review and research direction. Accounting and Business Research, pp. 173-185.

Alter, S. L. (1980). Decision support systems: Current practice and continuing challenges. Reading, MA: Addison-Wesley.

Arkes, H. R., Dawes, R. M., & Christensen, C. (1986). Factors influencing the use of a decision rule in a probabilistic task. Organizational Behavior and Human Decision Processes, 37, 93-110.

Ashton, R. H., & Willingham, J. J. (1988). Using and evaluating audit decision aids. In R. P. Srivastava & J. E. Rebele (Eds.), Audit Symposium IX: Proceedings of the 1988 Touche Ross/University of Kansas Symposium on Auditing Procedures (pp. 1-25). University of Kansas.

Ballew, V. (1982, January). Technological routineness and intra-unit structure in CPA firms. The Accounting Review, pp. 88-104.

Balzer, W. K., Doherty, M. E., & O'Connor, R. (1989). Effects of cognitive feedback on performance. Psychological Bulletin, 106, 410-433.

Bamber, E. M., & Bylinski, J. (1982). The audit team and the audit review process: An organizational approach. Journal of Accounting Literature, 1, 33-58.

Bamber, E. M., & Snowball, D. (1988, July). An experimental study of the effects of audit structure in uncertain task environments. The Accounting Review, pp. 490-504.

Beach, L. R., & Mitchell, T. R. (1978). A contingency model for the selection of decision strategies. Academy of Management Review, 3, 439-449.

Bonner, S. E., & Walker, P. L. (1994, January). Effects of instruction and experience on the acquisition of knowledge. The Accounting Review, pp. 157-178.

Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8, 292-332.

Cooper, G., & Sweller, J. (1987). The effects of schema acquisition and rule automation on mathematical problem-solving transfer. Journal of Educational Psychology, 79, 347-362.

Dawes, R. M., Faust, D., & Meehl, P. E. (1989, March). Clinical versus actuarial judgment. Science, pp. 1668-1674.
Devine, D. J., & Kozlowski, S. (1995). Domain-specific knowledge and task characteristics in decision making. Organizational Behavior and Human Decision Processes, 64, 294–306.
Einhorn, H. J., Kleinmuntz, D. N., & Kleinmuntz, B. (1979). Linear regression and process tracing models of judgment. Psychological Review, 86, 465–485.
Elam, J. J., & Mead, M. (1987, December). Designing for creativity: Considerations for DSS design. Information and Management, pp. 215–222.
Elliott, R., & Jacobson, P. (1987, May). Audit technology: A heritage and a promise. Journal of Accountancy, pp. 198–218.
Glaser, R., & Bassok, M. (1989). Learning theory and the study of instruction. Annual Review of Psychology, 40, 636–666.
Goldberg, L. R. (1965). Diagnosticians vs. diagnostic signs: The diagnosis of psychosis vs. neurosis from the MMPI. Psychological Monographs, 79 (9, Whole No. 602).
Goldberg, L. R. (1968). Simple model or simple processes? Some research on clinical judgment. American Psychologist, 23, 483–496.
Harkins, S. G., & Petty, R. E. (1982). Effects of task difficulty and task uniqueness on social loafing. Personality and Social Psychology Bulletin, 7, 627–635.
Hermanson, D. R. (1994). The effect of self-generated elaboration on students' recall of tax and accounting material: Further evidence. Issues in Accounting Education, 9, 301–318.
Hirst, M. K., & Luckett, P. F. (1992). The relative effectiveness of different types of feedback in performance evaluation. Behavioral Research in Accounting, 4, 1–22.
Jiambalvo, J., & Waller, W. (1984, Spring). Decomposition and assessment of audit risk. Auditing: A Journal of Practice & Theory, pp. 80–88.
Jungermann, H. (1985). The two camps of rationality. In Decision making under uncertainty. New York: Elsevier Science Publishers.
Kachelmeier, S. J., Jones, J. D., & Keller, J. A. (1992). Evaluating the effectiveness of a computer-intensive learning aid for teaching pension accounting. Issues in Accounting Education, 7, 164–178.
Kachelmeier, S. J., & Messier, W. (1990, January). An investigation of the influence of a nonstatistical decision aid on auditor sample size decisions. The Accounting Review, pp. 209–226.
Keen, P., & Morton, S. (1978). Decision support systems. Reading, MA: Addison-Wesley.
Kiesler, C. A., Collins, B., & Miller, N. (1969). Attitude change: A critical analysis of theoretical approaches. New York: Wiley.
Kottemann, J. E., Davis, F. D., & Remus, W. E. (1994). Computer-assisted decision making: Performance, beliefs, and the illusion of control. Organizational Behavior and Human Decision Processes, 57, 26–37.
Kottemann, J. E., & Remus, W. E. (1987). Evidence and principles of functional and dysfunctional decision support systems. International Journal of Management Science (Omega), 15(2), 135–144.
Kottemann, J. E., & Remus, W. E. (1991). The effects of decision support systems on performance. In H. G. Sol & J. Vecsenyi (Eds.), Environments for supporting decision processes (pp. 203–214). New York: North-Holland.
Langer, E. J. (1975). The illusion of control. Journal of Personality and Social Psychology, 32, 311–328.
Lastovicka, J., & Gardner, D. (1979). Components of involvement. In J. Maloney & B. Silverman (Eds.), Attitude research plays for high stakes. Chicago: American Marketing Association.
LeFevre, J., & Dixon, P. (1986). Do written instructions need examples? Cognition and Instruction, 3, 1–30.
Libby, R., & Frederick, D. M. (1990, Autumn). Experience and the ability to explain audit findings. Journal of Accounting Research, 28, 348–367.
Libby, R., & Luft, J. (1993). Determinants of judgment performance in accounting settings: Ability, knowledge, motivation, and environment. Accounting, Organizations, and Society, 18, 425–450.
Libby, R., & Tan, H. (1994). Modeling the determinants of audit expertise. Accounting, Organizations, and Society, 19, 701–716.
Mackay, J. M., Barr, S. H., & Kletke, M. G. (1992). An empirical investigation of the effects of decision aids on problem-solving processes. Decision Sciences, 23, 648–672.
March, J. G. (1978). Bounded rationality, ambiguity, and the engineering of choice. The Bell Journal of Economics, 9, 587–608.
Mayer, R. E. (1979). Can advance organizers influence meaningful learning? Review of Educational Research, 49, 371–383.
Mayer, R. E. (1980). Elaboration techniques that increase the meaningfulness of technical text: An experimental test of the learning strategy hypothesis. Journal of Educational Psychology, 72, 770–784.
Messier, W. (1995). Research in and development of audit decision aids. In R. H. Ashton & A. H. Ashton (Eds.), Judgment and decision making in accounting and auditing (pp. 207–228). Cambridge University Press.
Messier, W., & Hansen, J. V. (1987). Expert systems in auditing: The state of the art. Auditing: A Journal of Practice & Theory, 7, 94–105.
Payne, J. W. (1982). Contingent decision behavior. Psychological Bulletin, 92, 382–402.
Petty, R. E., & Cacioppo, J. T. (1979). Issue-involvement can increase or decrease persuasion by enhancing message-relevant cognitive responses. Journal of Personality and Social Psychology, 37, 1915–1926.
Pincus, K. V. (1989). The efficacy of a red flags questionnaire for assessing the possibility of fraud. Accounting, Organizations, and Society, 14, 153–163.
Prawitt, D. F. (1995, July). Staffing assignments for judgment-oriented audit tasks: The effects of structured audit technology and environment. The Accounting Review, pp. 443–465.
Reigeluth, C. M., Merrill, M. D., Wilson, B. G., & Spiller, R. T. (1980). The elaboration theory of instruction: A model for sequencing and synthesizing instruction. Instructional Science, 9, 195–215.
Remus, W., & Kottemann, J. (1986, December). Toward intelligent decision support systems: An artificially intelligent statistician. MIS Quarterly, pp. 403–418.
Remus, W., O'Connor, M., & Griggs, K. (1996, April). Does feedback improve the accuracy of recurrent judgmental forecasts? Organizational Behavior and Human Decision Processes, pp. 22–30.
Rennie, M., & Gibbins, M. (1993, May). Expert beyond experience. CA Magazine, pp. 40–46.
Sherif, C. W., Kelly, M., Rodgers, H. L., Sarup, G., & Tittler, B. (1973). Personal involvement, social judgment, and action. Journal of Personality and Social Psychology, 27, 311–327.
Silver, M. S. (1990, March). Decision support systems: Directed and nondirected change. Information Systems Research, pp. 47–70.
Silver, M. S. (1991, March). Decisional guidance for computer-based decision support. MIS Quarterly, pp. 105–122.
Simon, H. A. (1976). Discussion: Cognition and social behavior. In J. S. Carroll & J. W. Payne (Eds.), Cognition and social behavior (pp. 253–267). Hillsdale, NJ: Erlbaum.
Simon, H. A. (1977). The new science of management decisions. Englewood Cliffs, NJ: Prentice-Hall.
Stabell, C. B. (1983). A decision-oriented approach to building DSS. In J. L. Bennett (Ed.), Building decision support systems (pp. 221–260). Reading, MA: Addison-Wesley.
Sullivan, J. (1984). The case for the unstructured audit approach. In Auditing Symposium VII: Proceedings of the 1984 Deloitte & Touche/University of Kansas Symposium on Auditing Problems (pp. 61–71). University of Kansas.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12, 257–285.
Sweller, J. (1989). Cognitive technology: Some procedures for facilitating learning in problem solving in mathematics and science. Journal of Educational Psychology, 81, 457–466.
Todd, P., & Benbasat, I. (1991). The impact of computer-based decision aids on the decision making process. Information Systems Research, 2, 87–115.
Todd, P., & Benbasat, I. (1992). An experimental investigation of the impact of computer-based decision aids on processing effort. MIS Quarterly, 16, 373–393.
Todd, P., & Benbasat, I. (1993, May). An experimental investigation of the relationship between decision makers, decision aids and decision making effort. INFOR, pp. 80–99.
Todd, P., & Benbasat, I. (1994). The influence of decision aids on choice strategies: An empirical analysis of the role of cognitive effort. Organizational Behavior and Human Decision Processes, 60, 36–74.
Whitecotton, S. M. (1996). The effects of experience and a decision aid on the slope, scatter, and bias of earnings forecasts. Organizational Behavior and Human Decision Processes, 66, 111–121.
Wynder, M., & Luckett, P. (1996). The effects of understanding rules and a worked example on the acquisition of procedural knowledge and task performance. Working paper, University of New South Wales.

Received: May 13, 1997