Pergamon

Accting., Mgmt. & Info. Tech., Vol. 4, No. 2, pp. 87-106, 1994
Copyright © 1994 Elsevier Science Ltd. Printed in the USA. All rights reserved.
0959-8022/94 $6.00 + .00
0959-8022(94)00003-4

CAN EXECUTIVE INFORMATION SYSTEMS REINFORCE BIASES?

Arun Rai, Charles Stubbart and David Paper

Southern Illinois University at Carbondale

Abstract - Executive information systems represent a significant departure from traditional computer-based information systems. Advocates of executive systems claim that their systems offer the leading-edge option for making computers truly effective for supporting key management functions. This paper examines the interaction between key characteristics of executive systems and fundamental features of human thinking drawn from the field of Behavioral Decision Theory. We examine the possibility that the operation of these systems may reinforce or intensify certain biases in human information processing. We present three examples of potential bias-intensification: availability, regression effects, and overconfidence. Next, the paper applies the concept of decisional guidance to develop design implications for executive systems. Lastly, the paper offers avenues for theoretical development and empirical research.

Keywords: Executive systems, Information processing biases, Availability, Regression toward the mean, Overconfidence, Decisional guidance.
INTRODUCTION

Cognitive science entered management studies through seminal works such as Simon's book, Administrative Behavior (1957). In that landmark work, Simon offered a pivotal idea called "bounded rationality." He argued that constraints on human information processing, such as serial thinking processes, restricted short-term memory, and limited calculative ability, prevented managers from optimizing decisions as described by neo-classical economics. Simon's idea, that human cognitive constraints set limits on information processing and decision-making rationality, was so important that it eventually propelled the whole field of organization theory to break away from economics. Since Simon offered his theory, a vast literature has been assembled documenting heuristic features of human information processing and their effects upon interpretation, judgment, and decision-making (Hogarth, 1980; Kahneman, 1973; Kahneman & Tversky, 1972; Kahneman, Slovic, & Tversky, 1986; Kiesler & Sproull, 1982; Oskamp, 1965; Sage, 1991; Tversky & Kahneman, 1973). While these heuristics are effective under many circumstances, a vast body of empirical evidence shows that mental heuristics can sometimes produce harmful biases with serious consequences for decision-makers, such as failing to notice a problem, misinterpreting information, or making suboptimal decisions. In the managerial realm, research by Bazerman (1986), Hogarth & Reder (1986), Kiesler and Sproull (1982), and Taylor (1984) has reviewed and encapsulated a large number of empirical studies which show that managers responsible for important economic decisions are generally as vulnerable to certain biases and errors as the "man in the street."

Executive information systems or executive support systems are designed to increase executives' ability to scan their information environment and to aid in assessing and evaluating the strategic situation of their organization. These systems aim to provide executives with meaningful information: when they want it, where they want it, in the form they want it (Ramaprasad, Hill, & Salach, 1993). Although the terms are sometimes used interchangeably, executive information systems generally connotes providing information, whereas executive support systems usually refers to a broader support capability, including communications, data analysis, and organizing tools (Watson, Rainer, & Koh, 1991). In this paper we use executive systems to encompass both terms. Later in the paper we will identify specific features of any technology falling within the scope of our comments. We want to emphasize that our definition of executive systems is not restricted to systems serving only a chief executive officer (Thierauf, 1991). In our view, consistent with reports from the field, these systems can support key executives in carrying out planning, control, coordination, and communication functions. For example, in a recent empirical study investigating use patterns, most organizations that had initially adopted a system for their chief executive reported that several other key executives had also begun using it. Moreover, a majority of the firms studied reported that they planned to expand their executive system user-base (Bajwa & Rai, 1994). These findings show an evolutionary shift of user-groups from limited access toward organization-wide access to corporate information, as predicted by Rockart (1990).
In other words, we expect these systems to spread both vertically and laterally to serve a large group of executives in most firms (Watson, 1990). Despite the broad increase of corporate interest in executive systems, they have been documented as high-risk applications. Executives generally have poor computer skills and are skeptical about the ability of computers to improve their job performance (Watson, 1990). Based on interviews with 21 firms that reported executive system failures, Watson and Glover (1989) discovered some common causes, including use of inadequate or inappropriate technology, lack of executive commitment, failure of the system to meet user needs, and executive resistance to technology. Some of the firms studied learned from experience and found their way to eventual "success." Even so, it is clear from the literature and case studies that there remains a dangerously thin theoretical understanding to guide the design and implementation of executive systems (Walls, Widmeyer, & El Sawy, 1992). The outstanding potential of these systems, combined with weak understanding about what factors actually lead to success, creates a pressing requirement for theory to guide their development. That requirement has not been filled by commercial developers, because commercial software generators and development tools have chiefly evolved out of pragmatic vendor efforts to get computers into the executive's office (Walls, Widmeyer, & El Sawy, 1992). Therefore, "Executive Systems Theory" lags far behind practical developments. As a result, there is now a critical need for researchers to construct and test alternative design theories for executive systems. For example, Walls, Widmeyer, and El Sawy (1992) have proposed one such theory based on the concept of "vigilant information systems." Their ideas constitute a substantial contribution toward a workable theory.
Even so, their design theory does not address the cognitive constraints that Simon described in 1957; it leaves out the user. An adequate theory for the design of executive systems probably comprises three main components: the human, the machine, and the task. What we need at this stage is a theoretical task analysis similar to the strategy that was advocated by Marr (1982) and Simon and Newell (1971). A comprehensive theory of information system design should integrate the machine, the nature of the task, and the features of human cognition. The resulting information system must compensate for the physiological and psychological shortcomings of human information processing while multiplying its strengths. Implemented systems should then augment human capabilities while not diminishing abilities in areas where human abilities exceed those of computers (Dreyfus, 1979). The relationship between management information systems design and human cognition is not a new topic. It is widely accepted that management information systems play a significant role in defining and consequently channeling the manager's view of his decision-making environment (Haskins & Nanni, 1988; Hedberg & Jonsson, 1979; March, 1987). This means that an information system can potentially trigger certain dysfunctional biases under some circumstances. Therefore, any comprehensive theory of executive system design should safeguard against harmful biases, both those brought directly by decision-makers and those inadvertently introduced by the technology or the use of the system. In this paper, we contend that a purely technical orientation toward delivering executive systems, one inattentive to the constraints of human information processing, may result in system-human interactions that actually intensify biases! The remainder of the paper is organized as follows: the next section describes typical functional capabilities of executive systems as well as the primary methods used to design these systems. The third section presents a systematic analysis of how user interaction with the key features of executive systems may reinforce the use of three popular heuristics.
The fourth section enumerates some design considerations to alleviate potentially harmful decision-making biases. The last section concludes with an identification of limitations and implications for future research.

ESSENTIAL FEATURES OF EXECUTIVE INFORMATION SYSTEMS

Information requirements determination

Wetherbe (1991) provided a comparison of three popular information requirements methods: critical success factors, business systems planning, and ends/means analysis. He proposed deploying cross-functional teams, conducting joint applications design, and using structured interviews while applying these methods to determine executives' conceptual information requirements. Using a prototyping development process should enable refinement of the contents of specific screens and reports, the level of aggregation and detail required by users, and presentation formats that appeal to the user. Critical success factors are areas where things must "go right" for the business to succeed (Rockart, 1982). In the executive systems realm, the critical success factors approach is probably the most commonly discussed information requirements determination method (Cottrell & Rapley, 1991; Rockart, 1979; Rockart, 1982; Rockart & DeLong, 1988; Volonino & Watson, 1991; Wang & Turban, 1991). For instance, executive systems at Gillette and British Airways incorporate critical factors (Cottrell & Rapley, 1991; Volonino & Watson, 1991). But Volonino and Watson (1991) found that executive systems designed using the critical success factors approach are having only moderate success, at best. Perhaps the mixed results are caused by an approach that assumes objectives and goals are known and fixed, or because previous executive systems embody a management control orientation rather than a strategic orientation (Walls, Widmeyer, & El Sawy, 1992).


Recent research has investigated some of the alleged shortcomings of the critical success factors approach. Watson and Frolick (1988) advocated using formal and informal methods in tandem. Formal sessions and participation in strategic planning meetings can be combined to identify industry- and organization-related information requirements, while informal discussions pinpoint unique information requirements. Volonino and Watson's (1991) strategic business objectives method incorporates strategic planning into the process of determining executive information requirements, to develop systems that are less calibrated toward individual information needs yet better tailored toward strategic organizational goals. Unfortunately, their method still focuses predominantly on monitoring goals and retains a control perspective (Walls, Widmeyer, & El Sawy, 1992). The critical attention tracker proposed by Walls, Widmeyer, and El Sawy (1992) aims at identifying critical events and measuring their impact on critical success factors. This method has theoretical foundations in both executive cognitive processing and normative models of organizational adaptation. The critical events and factors are elicited from executives using a Delphi technique or a variant of it. This method aims to shift the orientation of executive systems from a control system that purely monitors the state of an organization toward a system that understands cause and effect. From our perspective, it is the only current approach with theoretical foundations. While the orientation of the approach is clearly critical (Watson, 1990), any design approach still needs to address the cognitive constraints of the decision-maker.
To sum up, even though the critical attention tracker method reflects a strategic orientation and is rooted in theory, none of the popular methods now used to design executive systems really tackle the fundamental issues posed by empirical research findings about heuristics and harmful biases reported from behavioral decision theory.

Salient executive systems characteristics

Before we explain how executive systems might interact with normal heuristics to reinforce biases, we want to make clear the functional capabilities or key operators that we are referring to when we use the term executive systems. The impression of dramatic growth of executive systems reported in articles and case studies in both academic and practitioner outlets is reinforced by vendor community reports of rapid increases in the sales of development products for these systems. Commercial generators such as COMMANDER (Comshare) and COMMAND CENTER (Pilot Executive Software) deliver sophisticated capabilities that enable rapid prototyping. The increasing availability of similar products is a key driving force behind the growth of the executive systems market (Watson, 1992). A rapidly expanding literature has delineated several distinctive aspects of executive systems applications, which are listed and described below (Burkan, 1988; Friend, 1986; Kogan, 1986; Paller & Laska, 1990; Rockart & DeLong, 1988; Rockart & Treacy, 1982; Zmud, 1986).

1. Executive systems filter information for executives who do not have time (or patience) to sift through all available company information (Paller & Laska, 1990). Executive users designate an acceptable range of values for the variables being monitored, and the system then creates exception reports that catch their attention. In effect, the detection of exceptions is shifted to the system by incorporating exception identification rules into the design.

2. Another powerful characteristic of an executive system lies in its drill-down capability, enabling executive users to drill down to detailed information supporting text, numbers or graphs (Paller & Laska, 1990; Turban, 1990). Drill-down information can be organized to support multiple views of the data at varying levels of detail. Drill-down stands in stark contrast to the distilled and aggregate information traditionally presented to managers in periodic reports.

3. Executive systems usually incorporate communication capabilities that support environmental scanning (Wang & Turban, 1991). The primary capability reported by studies is access to on-line databases and information sources such as Dow Jones and Compuserve.

4. Decision-making must take place faster and more frequently in the turbulent post-industrial environment (Huber, 1984). Research also shows that most senior executives are impatient in accessing information from a computer (Paller & Laska, 1990; Watson, Rainer, & Koh, 1991). As a result, commercial products now provide the capability to access real-time information from multiple, geographically-dispersed databases.

5. In the past, executives had to rely on messages from subordinates, advice from business contacts, and unwieldy reports for information. Information got filtered, distorted, and politicized as it made its way up through different levels of management. Executive systems products have communication capabilities such as electronic mail and other support functions such as calendaring and scheduling that enable executives to circumvent that filtering process, or even to dispense with it altogether.

6. The importance of cognitive styles and modes of presentation has been recognized in the information systems literature (McKenney & Keen, 1974; Sage, 1981). Executive system applications offer increasingly flexible and colorful presentation formats, including tables and many types of graphs.

7. The use of graphical images to simulate a graphical user interface is also becoming increasingly common. For instance, geographic maps can facilitate information acquisition by an executive.

Reviews of executive system characteristics identified in the literature (Burkan, 1988; Friend, 1986; Kogan, 1986; Zmud, 1986) reveal that one feature noteworthy by its absence is any sophisticated analytical capability, such as time series analysis and other advanced statistical capabilities, simulations or optimizing models. Most systems in operation today do not offer powerful analytical capabilities. Instead, their capabilities have been primarily relegated to simple "what if" analysis (Cottrell & Rapley, 1991). This is a matter of great significance that we will discuss repeatedly in this paper.
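The exception-identification mechanism described in item 1 above can be sketched in a few lines of code. This is our own illustrative sketch, not any commercial generator's actual design; the indicator names and acceptable ranges are hypothetical.

```python
# Illustrative sketch of exception-identification rules (hypothetical
# indicators and ranges; not drawn from any commercial product).

def exception_report(indicators, acceptable_ranges):
    """Flag each monitored indicator whose current value falls outside
    the executive-designated acceptable range."""
    exceptions = {}
    for name, value in indicators.items():
        low, high = acceptable_ranges[name]
        if not (low <= value <= high):
            exceptions[name] = {"value": value, "range": (low, high)}
    return exceptions

# Example: monthly indicators with executive-designated ranges.
indicators = {"units_sold": 820, "defect_rate": 0.07, "on_time_delivery": 0.96}
ranges = {"units_sold": (900, 1500),
          "defect_rate": (0.0, 0.05),
          "on_time_delivery": (0.95, 1.0)}

report = exception_report(indicators, ranges)
# Only units_sold and defect_rate fall outside their designated ranges.
```

The point of the sketch is that the system, not the executive, decides what counts as an exception; whatever the rules capture is what the executive sees.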

To summarize, the distinctive features of executive system products, including drill-down, exception reporting, real-time access to information, integration with external databases, communication, and flexible presentation formats, all combine to offer executives a powerful and exotic technological tool. But the designers and system builders have left a big gap, because they have not considered the vast empirical research from behavioral decision theory. Nor have they given us a systematic account of the relationship between an executive's task, the executive systems technology, and the constraints on executives' information processing and thinking. In the following sections of our paper, we begin to explore the ramifications of that gap by offering some propositions about the relationship between the characteristics of executive systems and three common judgmental heuristics that cause biases and lead to decision-making errors.
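The drill-down capability summarized above can be made concrete with a minimal data-structure sketch. This is our own illustration; the nested structure, region names, and product names are hypothetical, not taken from any product.

```python
# Hypothetical drill-down structure: each aggregate carries its supporting
# detail, so an executive can move from a summary figure to its components.
sales = {
    "total": 1200,
    "detail": {
        "north": {"total": 700, "detail": {"widgets": 400, "gadgets": 300}},
        "south": {"total": 500, "detail": {"widgets": 200, "gadgets": 300}},
    },
}

def drill_down(node, path):
    """Follow a path of keys from an aggregate down to supporting detail."""
    for key in path:
        node = node["detail"][key]
    return node

top = sales["total"]                              # the aggregate view: 1200
north = drill_down(sales, ["north"])              # one region's detail
north_widgets = drill_down(sales, ["north", "widgets"])  # a single product
```

The same structure supports multiple views at varying levels of detail, which is precisely what distinguishes drill-down from a fixed periodic report.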


EXECUTIVE SYSTEM CHARACTERISTICS AND INFORMATION PROCESSING HEURISTICS

The challenging information-processing task for executives

The nature of their information-processing tasks marks one important structural difference between a top executive position and a middle-management position. In large, diversified firms, today's executives constantly grapple with information that is more varied, more complex, and more ambiguous than the information their subordinates handle. Varied, complex, ambiguous information produces high levels of uncertainty for them (Mason & Mitroff, 1973). Moreover, senior executives must use information to make high-visibility strategic decisions with far-reaching consequences for both their organization and their personal careers. Visibility and importance create high levels of impatience, frustration and stress (Janis, 1982, 1989). Taken together, uncertainty and stress create unique challenges in acquiring, integrating, and interpreting information to make strategic decisions. Despite rapid changes in their information environment, certain important aspects of top management information processing, as stressed by Simon, never change. Today's executive never has more than 24 hours in his day. His short-term memory still accommodates the same 5-7 items (Miller, 1956), his long-term memory deteriorates, and he deliberates sequentially. In fact, there is no evidence that executives are less prone to errors, omissions, and biases in information processing than the "man in the street." To sum up, rapid environmental change and the pace of modern global business threaten to bury contemporary executives under an avalanche of complex information, unless their ability to process that information can be constantly strengthened. It is against this modern background of uncertainty, change, stress and constraints that the bold claims made for executive systems must be critically evaluated. To examine these issues, we present three examples of biases that could be intensified by executive systems.
We picked these biases because they are related to pervasive heuristics that are common in many everyday circumstances, estimates and decisions. Each explanation follows a common format: definition of the bias, consequences of the bias, conditions that evoke the bias, and an explanation of why and how an executive system can intensify the bias.

Availability

Definition of availability. "Availability" refers to a judgmental heuristic defined as:

. . . situations in which people assess the frequency of a class or the probability of an event by the ease with which instances or occurrences can be brought to mind. (Kahneman, Slovic, & Tversky, 1986, p. 11)

Due to an automatic, unconscious availability criterion, people inadvertently assume that readily-available instances, examples or images represent unbiased estimates of statistical probabilities. On the whole, availability is a useful, even essential, mechanism for human information processing because frequent instances and examples of many categories of experience are better remembered (Nisbett & Ross, 1980). In most cases, availability in memory is a good indicator of frequent occurrence. But availability becomes a liability in human information processing when factors other than observed frequency increase ease of retrieval from memory.
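The availability mechanism can be given a rough numerical illustration. The simulation below is our own construction with arbitrary recall probabilities: if vivid "failure" events are recalled far more readily than mundane "successes," then a frequency estimate based only on recalled instances overstates the true failure rate.

```python
import random

random.seed(42)

# True process: 10% of 1000 events are vivid "failures", 90% are
# mundane "successes".
events = ["failure"] * 100 + ["success"] * 900

# Retrieval: vivid failures are recalled with probability 0.9, mundane
# successes with probability 0.2 (illustrative figures only).
recalled = [e for e in events
            if random.random() < (0.9 if e == "failure" else 0.2)]

true_rate = events.count("failure") / len(events)           # exactly 0.10
estimated_rate = recalled.count("failure") / len(recalled)  # inflated
```

Because ease of retrieval, not observed frequency, drives the estimate, the simulated judge substantially overestimates how often failures occur.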

Consequences of availability. Availability heuristics can create sizeable errors in decision-makers' estimates about the probability of cases, examples, rates or categories of many kinds of phenomena, such as behavior, events or data structures. In addition to discrete instances, availability can also bias estimates of relationships, such as causal relationships, correlations, and trends. When people rely upon biased estimates resulting from availability to guide their conscious thinking and deliberations, they can introduce large errors and omissions into their important decisions and activities.

Antecedents of availability. Availability heuristics can create biased estimates through several different mechanisms, including salience, familiarity, and memory structure. We now take a brief look at each of these. Some features of experience are especially vivid, colorful, odd, newsworthy, or notable. These features make such experiences especially salient in memory. Salient memories are easily retrieved. Ease of retrieval creates a strong probability that salient memories will serve as the basis for mistaken estimates about probabilities. For example, unemployed executives are likely to overestimate unemployment among executives, whereas employed executives are likely to underestimate it. For each executive, employment estimates are biased by the vivid salience of his own personal situation. The structure of experience (familiarity) affects the ease of retrieval of information from memory. For example, a successful executive who attended Yale is likely to remember fellow alums he encounters in his business circle and his social life. But his success places him in a narrow professional and social stratum. Because of his special, circumscribed range of experiences, he is likely to overestimate the relative proportion of successful Yale graduates (because he meets them all) and to underestimate the proportion of unsuccessful Yale graduates (because he never meets them). The concept of a search set introduces the structure of memory into the availability equation. A search set refers to the procedural mechanisms used to search through memory. Although psychologists, cognitive scientists, and expert system builders debate the best method for characterizing memory search, it is clear that every method has the potential for producing availability biases.
For example, it is easier to find books about "history" in the library than it is to find books about "strategic failures," because history is part of the library cataloguing system whereas "strategic failures" is not. In other words, any kind of categorization scheme favors some kinds of searches over others. What is true for the public library also holds for personal memory: the retrieval structure can bias estimates.

Availability and executive systems. The graphics and exception reporting features of executive systems contribute toward developing salient memories. Commercial products offer many powerful options for the designer to present data, and executive system applications take advantage of these options to produce active screens (Cottrell & Rapley, 1991; Paller & Laska, 1990). Both designers and executive users often praise the system's capability for exception reporting (Paller & Laska, 1990; Walls, Widmeyer, & El Sawy, 1992). In other words, the systems and the designers aim for vivid, salient impact. Executive system applications call attention to designated indicators. Variations in values on key indicators are portrayed with red, green, or yellow lights. As with any other management support system, it is important that decision-makers properly define the items to be monitored (Judd, Paddock, & Wetherbe, 1981). In the case of executive systems, the issue is further complicated, as features such as motion and color can render certain data especially salient to the user and inadvertently accentuate availability biases. In other words, these systems have a large capacity to create salient effects. As a result, executive users may make substantial errors in interpreting exception reports and take inappropriate actions because of them.


The personal design element of executive systems, based on critical success factors, contributes to familiarity and retrieval structures, as discussed earlier. One of the chief differences between executive and traditional systems lies in the role of "personal design." Instead of designing problem-driven or theory-driven systems, advocates of executive systems extol the virtues of mirroring a specific manager's preferences, skills and habits (Paller & Laska, 1990; Rockart & DeLong, 1988; Violano, 1989). In the same way that the system leverages preferences, skills, and habits, it could also leverage biases. From an executive's viewpoint, systems that accurately incorporate their own preferences are easy to understand, easy to use, and easy to evaluate; so they receive high approval ratings from executive users. For example, executive systems often focus executives' attention upon critical success factors, those indicators that the chief executive officer regards as uniquely informative strategic indicators of the firm's position and performance. But these critical factors represent a limited set of easily remembered, overlearned items extracted from executives' long-term memory. As a result, using approaches such as critical success factors as an essential information-processing tool multiplies the effects of salience, familiarity, and the retrieval structures described above, possibly increasing, reinforcing or introducing biases. Executive system applications offer a fast, powerful extension of human memory. But application generators do not incorporate analytical features to compensate for users' heuristics that can produce biases. Reported applications developed using popular commercial generators never provide statistical analysis of probabilities designed to offset human information-processing heuristics that produce biases.
On the contrary, it is perhaps not an exaggeration to regard availability as an unstated design criterion for an executive system, because designers regard "ease of retrieval" as an unmitigated benefit of the system, not as a potential problem to be guarded against! This leads to the proposition:

P1: Characteristics of executive systems such as salient graphics, critical success factors, and exception-reporting increase judgmental biases associated with availability heuristics for executives using these systems (compared to standard reporting systems).

Regression effects

Definition of regression effects. "Regression effects" occur when:

Extreme values of a variable are used to predict extreme values of the next observation of the variable (thus failing to allow for regression to the mean). (Hogarth, 1980, p. 169)

In everyday situations, people do not realize that extreme values on a variable (outliers) tend to revert toward the mean on subsequent trials (Bazerman, 1990; Campbell, 1969; Sage, 1981). Suppose a salesman’s performance (monthly sales targets) is plotted across time (see Figure 1). Even though scores “a” and “b” are outliers, there is a high probability that the next performance score will revert toward the mean. In other words, outlying scores represent normal variations, not changes in the underlying trend.
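The reversion of outliers toward the mean can be demonstrated with a small simulation. This is our own sketch with arbitrary parameters: each monthly score is an independent draw around a stable mean, so the month following an extreme high is, on average, close to the mean rather than close to the outlier.

```python
import random

random.seed(0)

# Simulated monthly performance: a stable mean plus independent noise.
mean_perf, sd, n_months = 100.0, 15.0, 20000
scores = [random.gauss(mean_perf, sd) for _ in range(n_months)]

# Pair each month with the next, then condition on extreme highs
# (more than two standard deviations above the mean).
pairs = list(zip(scores, scores[1:]))
highs = [(cur, nxt) for cur, nxt in pairs if cur > mean_perf + 2 * sd]

avg_of_highs = sum(cur for cur, _ in highs) / len(highs)      # well above 130
avg_after_high = sum(nxt for _, nxt in highs) / len(highs)    # back near 100
```

The extreme months average well above the mean by construction, yet the months immediately following them average close to 100: the outliers were normal variation, not a change in the underlying trend.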

Consequences of regression. Regression is a ubiquitous phenomenon that affects the predictive relationship between two events (Campbell & Stanley, 1963). Regression phenomena affect the accuracy of predictions (Bazerman, 1990) when observed performance varies irregularly around some average value and people misinterpret random variation as trends. For instance, from a statistical perspective, policy makers often get overly excited about minor changes in unemployment rates or GNP growth (Campbell, 1969; Hogarth, 1980).


[Figure 1: a scatter of performance scores over time, varying irregularly around the mean performance level, with a low outlier "a" and a high outlier "b" marked.]

Fig. 1. Performance scores plotted across time. From Judgement and choice by R. M. Hogarth, 1980, New York: John Wiley & Sons, Ltd. Copyright 1980 John Wiley & Sons, Ltd. Adapted by permission.

Decision-makers' lack of appreciation for the inherent randomness of the environment leads toward predictions based on an assumption of perfect correlation with past data (Bazerman, 1990; Tversky & Kahneman, 1974), even though past and present events are usually imperfectly correlated (Campbell, 1969). In general, people fail to anticipate and adjust for regression effects in their environment (Kahneman & Tversky, 1972). Failure to appreciate the nature of regression can create sizeable errors in decision-makers' estimates about the relationships between events. The resulting errors include overreaction to misleading cues from the environment (Hogarth, 1980), false attributions about their own efficacy (Hogarth, 1980), misperception of the true causes of events (Tversky & Kahneman, 1974), naive estimates (Bazerman, 1990), and inappropriate planning (Bazerman, 1990). Regression effects cause managers to overestimate the effectiveness of punishments and underestimate the effectiveness of rewards when chance alone causes changes in employee performance (Kahneman & Tversky, 1972; Tversky & Kahneman, 1974).

Antecedents of regression. A major factor explaining regression effects is a human tendency to place too much emphasis on exceptions (outliers). Outlier-emphasis occurs when people do not recognize a probabilistic process, random fluctuations, or the presence of variation (Hogarth, 1980). Instead, people often believe that an outlier score represents a drastic change and that it is a clear precursor to future outliers in the same direction. Even when people recognize random variation, they often invent self-serving causal explanations for it (Kahneman & Tversky, 1972). For instance, if a manager's subordinates are performing well, he may attribute their high performance to his superior managerial skills, whereas if their performance is down, he tends to blame those employees. In addition, people seek simple explanations (Hogarth, 1980). Blaming "poor performers" is much easier to understand than a complex statistical explanation.
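To make the regression effect concrete, consider a small simulation (our illustrative sketch, not an example from the paper; the ability and noise parameters are hypothetical). Each observed performance score is a stable "true ability" plus chance variation, so periods that follow an extreme low score tend to look like improvements even though nothing real has changed:

```python
import random

random.seed(42)

def performance(true_ability=100.0, noise_sd=10.0):
    """One observed performance score: stable ability plus chance variation."""
    return true_ability + random.gauss(0, noise_sd)

# Simulate many pairs of consecutive periods for the same employee.
pairs = [(performance(), performance()) for _ in range(100_000)]

# Look only at periods an exception report would flag as "poor" (bottom tail).
flagged = [(first, second) for first, second in pairs if first < 85]

mean_flagged = sum(first for first, _ in flagged) / len(flagged)
mean_next = sum(second for _, second in flagged) / len(flagged)

print(f"mean score when flagged: {mean_flagged:.1f}")
print(f"mean score next period:  {mean_next:.1f}")
# The next-period mean moves back toward the true ability purely by chance,
# so a manager who "punished" after the flag would wrongly credit the punishment.
```

The next-period average regresses toward 100 with no intervention at all, which is exactly the pattern the punishment-versus-reward misattribution feeds on.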


Regression and executive systems. Executive systems are expressly designed to facilitate scanning for outliers and to focus attention upon them. For example, exception reporting, drill-down, and graphics capabilities are features of executive system applications that facilitate quick and easy performance tracking. As described earlier, commercial products enable the construction of applications that call attention to extreme values with key indicators such as red and yellow lights. Once identified, extreme values can be investigated further through drill-down; the system makes it easy to locate the information behind aggregates. In other words, executive systems enhance managers' ability to identify exceptions. These systems also offer easy-to-use graphics capabilities: employee performance can be viewed graphically to help executives visually inspect outliers, and commercial generators enable rapid construction of such applications. All in all, identifying outlying performance values is one of the chief objectives of executive systems. On the other hand, commercially available executive system generators do not offer specific analytical capabilities to deal with regression effects. Useful modeling capabilities such as time-series analysis, robust regression, and other forms of outlier analysis have not been incorporated into most reported applications. In other words, although products can highlight outlying values with drill-down, graphics, and exception reporting, they offer no built-in safeguards against regression effects. (Exception reports cannot alleviate regression effects; they just make it easier for executives to misinterpret information acquired through their executive systems.) As a result, executive users may miscalculate the importance of an exception, which may lead to inappropriate actions.
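A sketch of the kind of exception logic these generators make easy to build (the thresholds, figures, and function name are our hypothetical choices): each value is compared against fixed percentage bands around a target and colored accordingly, with no test of whether the deviation exceeds ordinary random variation — precisely the missing safeguard described above.

```python
def exception_flag(value, target, yellow_band=0.05, red_band=0.10):
    """EIS-style traffic light: fixed percentage bands around a target."""
    deviation = abs(value - target) / target
    if deviation >= red_band:
        return "RED"
    if deviation >= yellow_band:
        return "YELLOW"
    return "GREEN"

# Monthly sales for one region; the swings are ordinary noise around the target.
target = 1000
monthly_sales = [1012, 988, 1090, 897, 1003, 941, 1102, 996]

flags = [exception_flag(v, target) for v in monthly_sales]
print(list(zip(monthly_sales, flags)))
# Values like 897 and 1102 get flagged even if fluctuations of this size are
# routine; the report gives no hint of that, inviting a regression-effect misread.
```

Nothing in the flag tells the user how likely a deviation of that size is by chance alone, so a "RED" month followed by a normal month can look like a response to intervention.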
This leads to our second proposition:

P2: Characteristics of executive systems such as exception reporting, drill-down, and graphical functions increase judgmental biases associated with regression heuristics for executives using these systems (compared to standard reporting systems).

Overconfidence

Definition of overconfidence.

People generally ascribe more credibility to data than is warranted and hence overestimate the probability of success merely due to the presence of an abundance of data. (Sage, 1981, p. 648)

Oskamp (1965) found that predictive accuracy reaches a ceiling at an early point in an information gathering process. In contrast to accuracy, however, confidence in decisions continues to climb as more and more information is obtained (Einhorn & Hogarth, 1978; Hogarth, 1980; Koriat, Lichtenstein, & Fischhoff, 1980; Oskamp, 1965; Schwenk, 1986). In other words, confidence increases but accuracy does not. Executives who rely on the quantity of available data to judge the probability that a decision is correct will make mistakes. Researchers have identified overconfidence as a common judgmental pattern, and it has been established in a wide variety of settings (Bazerman, 1990).

Consequences of overconfidence. Overconfidence can have serious consequences. Studies of clinical psychologists (Oskamp, 1965) and Big Eight accountants (Joyce & Biddle, 1981) show that while individuals' confidence in their decisions soared with the receipt of more information, the predictive accuracy of their estimates failed to keep pace — an unfortunate combination of increasing overconfidence allied with drastically suboptimal decisions.


Overconfidence causes many deleterious effects in the information processing activities of individuals (Bazerman, 1990; Hogarth, 1980; Langer, 1983; Lichtenstein & Fischhoff, 1977, 1980). For example, overconfident managers stop gathering and processing information about an issue sooner than they otherwise would. Executives who rely upon the amount of data to gauge their confidence will not use models and quantitative aids to perform a systematic analysis. Their overconfidence can lead them to unwarranted conclusions (Sage, 1981). On the whole, overconfidence preempts the collection of disconfirming evidence, reduces analysis of data, and discourages the examination of alternative ideas and solutions.

Antecedents of overconfidence. Today's executives operate in a complex, unstructured, and uncertain information environment. Their jobs require constant attention to an overwhelming range and detail of information. As Simon pointed out, executives are constantly threatened by information overload. The danger that information overload can lead toward overconfidence is indirectly supported by other findings. For example, individuals have been observed selectively seeking out information that supports their position while disregarding contradictory information (Langer, 1983). Moreover, the extent to which information is repetitious and redundant adds to overconfidence, while questions about reliability and validity tend to fade into the background. With all these processes — selectivity, repetition, and redundancy — the stress is on "quantity" of information: as the accumulation of information increases, confidence goes up without an accompanying increase in predictive accuracy.

Overconfidence and executive systems. An executive system is designed to make a vast range and scope of both internal and external information more accessible to executives. Providing access to vast realms of internal and external information, plus capabilities such as drill-down and flexible direct queries, dramatically increases the volume and scope of data that executives have at their fingertips. With powerful technology, executives can now race through incredible amounts of information at remarkable speeds. Some specific features of executive systems may contribute to an environment that encourages overconfidence. For example, drill-down enables top executives to pin down small specific details that were previously difficult to access, and flexible capabilities enable executives to move around their databases rapidly and easily.
Real-time information plus new communications capabilities now encourage top managers to take rapid action when they detect trouble spots, possibly leading to the misguided confidence that they can control everything and everyone in their company. Case studies reporting success stories of executive systems often include stories such as the "salesman in Kansas City who receives a call from the chairman 24 hours after he lost the big contract." In this way, the system encourages more intervention by top executives. Decision-makers mainly search for data that confirm their beliefs and find it difficult to integrate information that is not aligned with their belief system (Einhorn & Hogarth, 1978). When personal-design approaches (such as the critical success factors method) are incorporated, the system becomes permanently geared toward providing more and more of the information the executive considers important — and only that information. A vast sea of data is collected, organized, and presented according to an executive's preferences. An inherent selective filtering of information takes place, based on the worldview of the executive(s) participating in the system design. All in all, an executive can be lulled into overconfidence because he is sitting in front of a system that tells him only what he believes is important. Overconfidence can also be reinforced by what executive systems do not do. Because these systems incorporate few analytical capabilities, executives only have access to


simple descriptive statistics such as mean values. As mentioned above, relying on undigested data in the absence of sample sizes, confidence intervals, or other measures of information validity increases the chances that users will ascribe more credibility to the data than is warranted. This, too, can lead toward an overestimation of the probability of success. Based on these points, we find that some features of executive systems may encourage overconfidence. This leads to our third proposition:

P3: Characteristics of an executive system such as drill-down, real-time data, direct electronic communications, and critical success factors (especially in the absence of analytical information) increase judgmental biases associated with overconfidence heuristics for executives using these systems (compared to standard reporting systems).
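The analytical safeguard whose absence is noted above — a mean reported together with its sample size and a confidence interval — is straightforward to sketch (our hypothetical illustration; the figures and function name are invented):

```python
import statistics

def mean_with_ci(sample, z=1.96):
    """Mean plus an approximate 95% confidence interval (normal approximation)."""
    n = len(sample)
    mean = statistics.fmean(sample)
    half_width = z * statistics.stdev(sample) / n ** 0.5
    return mean, (mean - half_width, mean + half_width), n

# Eight weekly sales figures - the kind of small sample behind many EIS screens.
weekly_sales = [96, 104, 88, 112, 99, 107, 91, 103]
mean, (low, high), n = mean_with_ci(weekly_sales)
print(f"mean {mean:.1f}, 95% CI ({low:.1f}, {high:.1f}), n={n}")
# A screen showing only the bare mean invites more confidence than the data support.
```

Seeing how wide the interval is for a small sample is exactly the kind of validity cue that, per the argument above, current exception screens withhold.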

SUMMARY

Heuristics are a natural and essential characteristic of human information processing. We cannot simply get rid of them, because we need them and they are generally useful. But, under some circumstances, heuristics can result in serious errors. In this section we have explained why certain characteristics of executive systems may interact with three specific heuristics to produce unfavorable results. Of course, the question is an empirical one. Whether our propositions hold or not, there remain important general theoretical questions about the design of executive systems. These general questions are discussed in the next section.

DECISIONAL GUIDANCE: DESIGN IMPLICATIONS FOR EXECUTIVE SYSTEMS

In the previous sections we have built a case around the idea that certain executive system features might inadvertently interact with certain heuristics, increasing the chances that specific kinds of biases might result in poor decisions. Naturally, the next step is empirical research that puts our propositions to the test. But, aside from research projects, our questions also involve theoretical issues about the design principles behind executive systems. In a general context, our reservations about certain aspects of these systems arise from what we regard as a fundamental misalignment among the task requirements, the decision maker, and the system. In that regard, the questions we have raised about executive systems dovetail nicely with the recent work of Silver (1988, 1990, 1991).

Decisional guidance

In writing about decision support systems, Silver (1991) defined:

Decisional Guidance: how a decision support system enlightens or sways its users as they structure and execute their decision-making processes - that is, as they choose among and use the system's functional capabilities. (p. 107)

Several aspects of decisional guidance are especially relevant to the design of executive systems. First, Silver made the important point that "decisional guidance is not limited to instances where a designer purposefully includes guidance mechanisms in a system . . . Inadvertent guidance is the unintended consequence of the system's design" (p. 107). In other words, even when designers take a "hands-off," "user-driven" approach to designing


a system, the combination of technology capabilities and user preferences can produce significant yet unintended decisional effects. In essence, if designers help executives "design their own system," the executives can unintentionally multiply decisional side-effects by building their own biases directly into the system - biases compounding biases - as we have argued in previous sections. This potential is heightened because few top executives know how heuristics can create errors, and therefore they do not safeguard against them when they design systems.

Second, Silver pointed out that:

Much of the meta-support available today helps users with mechanics of operating a system's features . . . In contrast, decisional guidance helps users deal with the decision making concepts involved in choosing among and interacting with a system's information-processing capabilities. For example, where a mechanical help screen simply lists each available option and how to invoke it, decisional guidance might identify the strengths and weaknesses of each alternative. (p. 108)

In this regard, executive systems are noteworthy because, as previously discussed, these systems do not provide any decisional guidance at all. In other words, if decisional guidance is important in some situations or circumstances, then executive systems as they are currently designed will be totally unable to provide it. The provision of guidance in executive systems has been approached in a technologically oriented manner, with emphasis placed on mechanical guidance that helps users operate these systems.

Third, Silver listed the circumstances where decisional guidance would be most important:

1. Since the opportunity to provide decisional guidance arises when users need to make discretionary judgements, how much guidance a system can provide depends upon how much discretion that system grants its users. (p. 108)

Strictly speaking, users of executive systems are not making judgements within the system, but they are making judgements about how to use the system's capabilities and how to organize, reorganize, view, and communicate information from the system for strategic purposes. Therefore, since an executive system should be minimally restrictive, the need for decisional guidance is decidedly high.

2. As the frequency, complexity, and importance of the judgments demanded from them increase, decision makers may require, or at least desire, computer-based facilities that provide meta-support for the judgements they must make . . . the greater the motivation for providing (decisional) guidance.

We pointed out in the beginning of this paper how commercial products are targeted at top executives who must monitor critical success factors in a turbulent, uncertain, and complex information environment. So with regard to Silver's point (2) mentioned above, executive systems seem to fit exactly into the category where he finds the greatest need for decisional guidance. Later in his paper, Silver points out several additional circumstances that indicate a need for decisional support, such as:

- The lesser the degree of structure for the task, the greater the need for decisional guidance.
- The lesser the availability of standard solutions, the greater the need for decisional guidance.
- The less well-defined the objectives, the greater the need for decisional guidance.
- The greater the uncertainty and risk facing decision-makers, the greater the need for decisional guidance.

Each of these indicators reinforces the main point we made in the beginning of the paper: there is a dangerous misalignment among task, user, and executive systems' technology, resulting from pell-mell technological development without theoretical development. In particular, according to Silver's rationale, executive systems need decisional guidance mechanisms because the systems are used for complex tasks with strategic consequences in the context of bounded rationality.

Executive systems: Structuring the decision-making process

What kinds of decisional guidance apply to the design of executive systems? Silver's framework for approaching decision guidance problems is comprised of two main components, "Target of Guidance" and "Form of Guidance" (as illustrated in Figure 2). Silver (1991) distinguished between guidance directed toward structuring the decision-making process and guidance for executing the decision-making process. Structuring the process involves "selecting a problem representation and then defining and ordering the set of information-processing and problem-solving activities to be performed" (p. 111).

[Figure 2 is a two-by-two table: Target of Guidance (rows) by Form of Guidance (columns).]

Structuring the Process
  Suggestive guidance: recommended operator; set of recommended operators; ordered list of recommended operators; set of operators not recommended.
  Informative guidance: description/analysis of operators; comparison of operators; map of relationships among operators; record of behavior in similar contexts; history of activity this session.

Executing the Process
  Suggestive guidance: recommended values; set of recommended values; ordered list of recommended values; set of values not recommended.
  Informative guidance: definitions of required input values; description of how inputs will be used; tables, graphs, or analyses of data; record of behavior in similar contexts; history of activity this session.

Fig. 2. Examples of decisional guidance. From "Decisional guidance for computer-based decision support" by M.S. Silver, 1991, MIS Quarterly, March. Copyright 1991 MIS Quarterly. Adapted by permission.


Executing the process entails "actually performing the various information-processing and problem-solving activities" (p. 111). He envisioned two forms of guidance - suggestive versus informative - to guide the decision-maker in structuring and executing the decision-making process. Suggestive guidance makes specific recommendations on the use of an executive system, whereas informative guidance informs users without prescribing specific action. In the following sections, we apply this framework to generate some examples that illustrate how decisional guidance could improve an executive system. Because the difficulty of providing guidance escalates with an increase in the number of distinct contexts requiring such support, an executive systems designer faces a dual challenge. First, he needs to provide the necessary functionality to support the decision-making tasks of top executives. Second, he needs to provide appropriate guidance for users, to ensure that the wide array of system options is appropriately used. The remainder of this section focuses on how designers of executive systems can provide decisional guidance to alleviate judgmental biases.

Suggestive guidance. Designers of executive systems could concentrate on suitable operators or an ordered list of operators. For example, guiding prompts could make recommendations about time-series analysis or robust regression when an executive is probing performance data. Designers could also focus on operators that complement the use of exception reports. Operators may be designed to urge users to follow a standard method for interpreting data based on rigorous statistical analysis. Recommendations should lead to the appropriate deployment of techniques that counterbalance human limitations in detecting outliers, in recognizing random variation, and in estimating probability accurately - the major antecedents of regression and overconfidence biases.

The designer could also guide the user by recommending sampling techniques suitable for a particular decision-making context. Building guidance on sampling strategies should alleviate availability and overconfidence biases that may otherwise be caused by sampling errors such as small sample sizes and selective filtering of data.
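One of the operators named above - robust regression - can be illustrated with a minimal Theil-Sen estimator (our sketch under stated assumptions; the data and function name are hypothetical, and no commercial generator of the period offered such a capability). The fitted slope is the median of all pairwise slopes, so a single outlying performance value barely moves the trend:

```python
import statistics

def theil_sen(xs, ys):
    """Theil-Sen robust trend: median of pairwise slopes, median-based intercept."""
    slopes = [
        (ys[j] - ys[i]) / (xs[j] - xs[i])
        for i in range(len(xs))
        for j in range(i + 1, len(xs))
        if xs[j] != xs[i]
    ]
    slope = statistics.median(slopes)
    intercept = statistics.median([y - slope * x for x, y in zip(xs, ys)])
    return slope, intercept

# Quarterly performance with one wild outlier in quarter 5.
quarters = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [10, 12, 11, 13, 40, 14, 15, 16]  # quarter 5 is the outlier

slope, intercept = theil_sen(quarters, scores)
print(f"robust trend: {slope:.2f} per quarter (outlier largely ignored)")
```

An ordinary least-squares line through the same data would be dragged sharply upward by quarter 5; the median-based fit stays close to the underlying trend, which is the point of recommending such an operator alongside exception reports.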

Informative guidance. Designers of executive systems can also provide a detailed analysis of operators. They could caution and remind users about the information processing biases that may result in a given context from sole reliance on specific operators such as exception reports or drill-down. The system could inform users about the complementary and substitute capabilities of various operators by laying out a "map" of their purposes and interrelationships. Such information should caution users about the potential dangers and shortcomings of specific operators in particular situations. In addition, users could be informed about the advantages and shortcomings of various sampling techniques. Such an approach can be used to inform users about how the decision-making process might be structured using the capabilities of the system, and about the potential biases that may be introduced by using certain operators in isolation or in conjunction with other operators.

Executing the decision-making process

Suggestive guidance. The outputs of an executive system are designed to support judgement, while the inputs to the system require human judgement. However, exception conditions are often specified with insufficient analysis of the underlying data. Systems could recommend statistical procedures for analyzing the data (mean, standard deviation, and skewness) to guide the establishment of exception values. Designers could also include procedures that indicate when existing exception conditions need to be changed. This is necessary when


discrepancies separate present conditions from the actual distribution of the underlying data. A shift in the distribution of the data may lead to a recommendation that exception conditions be redefined. Such analytical attention to exception monitoring should counter bias triggers such as salience, selective filtering, and the human inability to recognize random variation. Along another line, designers could study how an executive system was used in the past. Collection of historical data series could be used to build a rich "action" repository on the execution of operators in similar contexts, the decisions taken, and the associated outcomes of those decisions. While this kind of analysis represents a large, challenging undertaking, appropriate analysis of historical data might lead toward recommendations about the usefulness of selected execution approaches.
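The suggestion above - deriving exception values from the statistics of the underlying data, and signaling when a distribution shift makes them stale - can be sketched as follows (a hypothetical illustration; the two-sigma rule, the data, and the function names are our choices, not a documented EIS feature):

```python
import statistics

def exception_limits(history, k=2.0):
    """Derive exception thresholds from the data: mean +/- k standard deviations."""
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    return mean - k * sd, mean + k * sd

def limits_stale(history, recent, k=2.0):
    """Recommend redefining limits when the recent mean drifts away from the old mean."""
    mean, sd = statistics.fmean(history), statistics.stdev(history)
    recent_mean = statistics.fmean(recent)
    # Flag when the recent average sits more than k standard errors from the old mean.
    return abs(recent_mean - mean) > k * sd / len(recent) ** 0.5

history = [100, 98, 103, 97, 101, 99, 102, 100]
low, high = exception_limits(history)
print(f"exception band: {low:.1f} .. {high:.1f}")

# A sustained shift in the process, not just a one-period outlier:
recent = [108, 110, 107, 109]
print("redefine limits?", limits_stale(history, recent))
```

Tying the thresholds to the observed distribution, rather than to a number typed in once, is what makes the exception report track the process instead of the designer's memory of it.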

Informative guidance. Executive systems can be designed to inform users about the role of human judgement-based input in executing specific operators. Information on alternative methods (such as analysis of statistical distributions) that can be deployed to establish exception conditions can be made available to users. The information would be geared toward explaining how user input interacts with specific operators and the available options for determining values of input parameters. Wheelwright and Makridakis (1985) recognized action-outcome-feedback links as useful learning mechanisms and emphasized the importance of timely feedback. Feedback must be carefully designed to present information pertinent to a specific decision-making context. As part of a feedback strategy, users could be informed about how they executed decision-making processes in past situations and the associated outcomes.

SUMMARY

Current system design methods and characteristics of executive systems pay little attention to the issue of decisional guidance. None of the present requirements determination approaches (those with either a strategic or management control orientation) address the constraints documented in behavioral decision theory. In the future, designers must include steps in the requirements determination process that identify decisional information that the system ought to incorporate. Group processes such as cross-functional teams, joint applications design, structured interviews, and prototyping have been suggested as possible ways to improve the determination of information requirements (Wetherbe, 1991). This line of inquiry should be extended to identify characteristics of group processes that lead to the identification of information that should alleviate judgmental biases.
In addition to incorporating specific debiasing information identified for a given context through the requirements determination process, system designers clearly need to focus on how the system's operators are deployed by users. In other words, the executive system design problem should not be approached in a purely technical manner, but should rather take a more holistic perspective that recognizes the complex interactions between the user, the task, and the technology.

CONCLUSIONS AND DIRECTIONS FOR RESEARCH

Limitations and objections

We recognize at least three main objections to our ideas. First, there is the implicit opinion that top officials' intelligence, experience, and intuition protect them from biases


that affect ordinary people, such as sophomores in laboratory experiments (Sears, 1986). But research from a broad array of disciplines, such as cognitive psychology (Hogarth, 1980; Kahneman, Slovic, & Tversky, 1986; Taylor, 1984), international relations (Janis, 1982, 1989; Jervis, 1976; Vertzberger, 1989), history (Neustadt & May, 1986), and strategic management (Bazerman, 1990; Schwenk, 1986; Zajac & Bazerman, 1991), documents the existence and importance of systematic biases in strategic decision-making by top managers and officials. The empirical case, therefore, shows that these biases apply to top executives too. Second, some readers may complain that we have "selectively" chosen just three "unrepresentative" heuristics from the dozens of heuristics that have been studied. Obviously, we do not claim that executive systems intensify all biases; a much more systematic examination would be required to determine just how many heuristics may be intensified (or alleviated) by these systems. We justify our selection because these three heuristics affect all top-level executives, they operate constantly, and their documented consequences are serious for decision-makers. It will be important to explore similar associations with other heuristics and biases, such as anchoring and adjustment (Slovic & Lichtenstein, 1971; Tversky & Kahneman, 1974), illusions of correlation (Fischhoff, 1978), and fundamental attribution errors (Nisbett & Wilson, 1977). Lastly, while we realize that we have not presented a comprehensive theory for the design of executive systems, we do maintain that we have moved the field a step further toward genuine theory.

Implications for empirical testing

We have examined the possible association between judgmental heuristics and key characteristics of executive systems. Our discussion was limited to three common information processing heuristics that intensify decision-making biases. We offered three propositions to the effect that certain characteristics of executive systems could intensify judgmental biases. Obviously, our propositions cannot be proven a priori by theoretical argument. Even so, they are important conceptually, because they run counter to most "optimistic" accounts of these systems. They call for empirical tests, which we are planning. We think that our propositions can be tested through laboratory experiments. We have our own COMMANDER system at this institution and plan to use it to develop executive system applications for a specific decision-making context. We propose to design appropriate experiments to examine the effect of specific capabilities of executive systems on judgmental biases.

Implications for theory

Whether or not our propositions withstand empirical testing, we believe that we are correct in our overall observation about executive systems-without-theory. Theoretical analysis and extension is especially relevant because the technology is designed to mirror the user as well as to complement his abilities. In some ways, executive systems work like a multiplier of an executive's viewpoint, not just a facilitator. To a large extent, when an executive looks at a screen today, he may be looking at himself in a "technological mirror." The overall effects of this technological mirror are yet to be fully explored. An analysis of the executive systems design literature indicated that there has been some progress in moving the orientation of these systems from a control to a strategic perspective. This movement represents a recognition of the task-technology interaction, but it overlooks the user. Toward this end, we have drawn upon the work of Silver on decisional guidance and provided some design guidelines to address this gap. We hope that the cumulative effort of researchers will lead to the development and validation of a comprehensive design theory for executive systems. Such a theory should be the basis for guiding both the development methods and the product features of executive systems.

Acknowledgement - The authors would like to express their thanks to the editor and the two anonymous reviewers for their excellent comments. We hold only ourselves responsible for any oversights, errors, and omissions. This project was partially supported by the Pontikes Center for the Management of Information at Southern Illinois University at Carbondale.

REFERENCES Ansoff, I. (1984). ImpLunting struregic management. Englewood Cliffs, NJ. Prentice-Hall. Bajwa, D.S., & Rai, A. (1994). An empirical investigation between top management support, IS management support, vendor/consultant support and EIS success. Proceedings of the 27/h Annual Hawaii International

Conference on Systems and Sciences, 3, 145-154. Bazerman, M.H. (1990). Judgment in managerial decision making. New York: John Wiley & Sons. Burkan, W.C. (1988). Making EIS work. DSS 88 Transactions (pp. 121-136). Providence, RI: The Institute of Management Sciences. Campbell, D.T. (1969). Reforms as experiments. American Psychologist, 24(4), 409-429. Campbell, D.T., & Stanley, J.C. (1963). Experimentul and quasi-e.uperimentul designs for research. Boston: Houghton Mifflin Company. Cottrell, N., & Rapley, K. (1991). Factors critical to the success of executive information systems in British Airways.

European Journal of Information Systems, 1(1), 65-71.
Dreyfus, H.L. (1979). What computers can't do: The limits of artificial intelligence. New York: Harper and Row.
Einhorn, H.J., & Hogarth, R.M. (1978). Confidence in judgment: Persistence of the illusion of validity. Psychological Review, 85, 395-416.
Fischhoff, B. (1978). Fault trees: Sensitivity of estimated failure probabilities to problem presentation. Journal of Experimental Psychology: Human Perception and Performance, 4, 342-355.
Friend, D. (1986). Executive information systems: Successes, failures, insights, and misconceptions. DSS 86 Transactions (pp. 35-40). Providence, RI: The Institute of Management Sciences.
Haskins, M.E., & Nanni, A.J. Jr. (1988). MIS influences on managers: Hidden side effects. Management Decision, 26(3), 25-31.
Hedberg, B., & Jonsson, S. (1978). Designing semi-confusing information systems for organizations in changing environments. Accounting, Organizations and Society, 3(1), 47-64.
Hogarth, R.M. (1980). Judgement and choice. New York: John Wiley & Sons.
Huber, G. (1984). The nature and design of post-industrial organizations. Management Science, 30(8), 928-951.
Janis, I.L. (1982). Victims of groupthink (2nd ed.). Boston: Houghton Mifflin.
Janis, I.L. (1989). Crucial decisions. New York: Free Press.
Jervis, R. (1976). Perception and misperception in international politics. Princeton: Princeton University Press.
Joyce, E.J., & Biddle, G.C. (1981). Anchoring and adjustment in probabilistic inference in auditing. Journal of Accounting Research, 19, 120-145.
Judd, P., Paddock, C., & Wetherbe, J.C. (1981). Decision impelling differences: An investigation of management by exception reporting. Information and Management, 4(5), 259-267.
Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice-Hall.
Kahneman, D., Slovic, P., & Tversky, A. (1986). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.
Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3, 430-454.
Kiesler, S., & Sproull, L. (1982). Managerial response to changing environments: Perspectives on problem sensing from social cognition. Administrative Science Quarterly, 548-570.
Kogan, J. (1986). Information to motivation: A key to executive information systems that translate strategy into results for management. DSS 86 Transactions (pp. 6-13). Providence, RI: The Institute of Management Sciences.
Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6, 107-118.
Kydd, C.T. (1989). Cognitive biases in the use of computer-based decision support systems. OMEGA, 17(4), 335-344.
Langer, E.J. (1983). The psychology of control. Beverly Hills, CA: Sage.

Lichtenstein, S., & Fischhoff, B. (1977). Do those who know more also know more about how much they know? Organizational Behavior and Human Performance, 20, 159-183.
Lichtenstein, S., & Fischhoff, B. (1980). Training for calibration. Organizational Behavior and Human Performance, 25, 149-171.
March, J.G. (1987). Ambiguity and accounting: The elusive link between information and decision making. Accounting, Organizations and Society, 12(2), 153-168.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: W.H. Freeman.
Mason, R.O., & Mitroff, I.I. (1973). A program for research on management information systems. Management Science, 19(5), 475-485.
McKenney, J.L., & Keen, P.G.W. (1974). How managers' minds work. Harvard Business Review, 52(3), 79-90.
Miller, G.A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
Neustadt, R., & May, E.R. (1986). Thinking in time. New York: Free Press.
Nisbett, R.E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice Hall.
Nisbett, R.E., & Wilson, T.D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231-259.
Oskamp, S. (1965). Overconfidence in case-study judgments. The Journal of Consulting Psychology, 29, 261-265.
Paller, A., & Laska, R. (1990). The EIS book. Homewood, IL: Dow Jones-Irwin.
Ramaprasad, A., Hill, M.E., & Salach, D.A. (1993). Mental models, cognitive dissonance and executive information systems' effectiveness. Journal of Information Systems, 3, 1-15.
Rockart, J.F. (1979). Chief executives define their own data needs. Harvard Business Review, 57, March-April.
Rockart, J.F. (1982). The changing role of the information systems executive: A critical success factors perspective. Sloan Management Review, 24(1), 3-13.
Rockart, J.F. (1990). Executive support systems: Yesterday, today and tomorrow. Plenary speech at DSS-90 Conference, May, Cambridge, MA (cited in Volonino, L., & Watson, H.J. (1991). The strategic business objectives method for guiding executive information systems development. Journal of Management Information Systems, 7(3), 27-39).
Rockart, J.F., & DeLong, D.W. (1988). Executive support systems: The emergence of top management computer use. Homewood, IL: Dow Jones-Irwin.
Rockart, J.F., & Treacy, M.E. (1982). The CEO goes on-line. Harvard Business Review, January-February.
Sage, A.P. (1981). Behavioral and organizational considerations in the design of information systems and processes for planning and decision support. IEEE Transactions on Systems, Man, and Cybernetics, 11(9), 640-678.
Sage, A.P. (1991). Decision support systems engineering. New York: John Wiley & Sons.
Schwenk, C.R. (1986). Information, cognitive biases, and commitment to a course of action. Academy of Management Review, 11(2), 298-310.
Sears, D.O. (1986). College sophomores in the laboratory: Influences of a narrow data base on social psychology's view of human nature. Journal of Personality and Social Psychology, 51(3), 515-530.
Silver, M.S. (1988). Descriptive analysis for computer-based decision support. Operations Research, 36(6), 904-916.
Silver, M.S. (1990). Decision support systems: Directed and nondirected change. Information Systems Research, 1(1), 47-70.
Silver, M.S. (1991). Decisional guidance for computer-based decision support. MIS Quarterly, March, 105-122.
Simon, H.A. (1957). Administrative behavior (2nd ed.). New York: The Free Press.
Simon, H.A., & Newell, A. (1971). Human problem solving: The state of the theory in 1970. American Psychologist, 26, 145-159.
Slovic, P., & Lichtenstein, S. (1971). Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Human Performance, 6, 649-744.
Taylor, R.N. (1984). Behavioral decision making. Glenview, IL: Scott, Foresman and Company.
Thierauf, R.J. (1991). Executive information systems: A guide for senior management and MIS professionals. New York: Quorum Books.
Turban, E. (1990). Decision support and expert systems. New York: Macmillan Publishing Company.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207-232.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Vertzberger, Y. (1990). The world in their minds: Information processing, cognition, and perception in foreign policy decision making. Stanford, CA: Stanford University Press.
Violano, M. (1989). I was terrified of technology. Bankers Monthly, 106(10), 57-62.


Volonino, L., & Watson, H.J. (1991). The strategic business objectives method for guiding executive information systems development. Journal of Management Information Systems, 7(3), 27-39.
Walls, J.G., Widmeyer, G., & El Sawy, O. (1992). Building an information system design theory for vigilant EIS. Information Systems Research, March, 36-59.
Wang, P., & Turban, E. (1991). Filtering strategic environmental information processing using EIS. Proceedings of the Hawaii International Conference on System Sciences, 126-133.
Watson, H.J. (1990). Avoiding hidden EIS pitfalls. Computerworld, June 25, 90-91.
Watson, H.J. (1992). How to fit EIS into a competitive context. Information Strategy: The Executive's Journal, 8(2), 2-10.
Watson, H.J., & Frolick, M. (1988). Determining information requirements for an executive information system. Unpublished working paper, Department of Management, University of Georgia, Athens, GA.
Watson, H.J., & Glover, H. (1989). Common and avoidable causes of EIS failure. Computerworld, December 4, 90-91.
Watson, H.J., Rainer, K.R., & Koh, C.E. (1991). Executive information systems: A framework for development and a survey of current practices. MIS Quarterly, March, 13-31.
Wetherbe, J.C. (1991). Executive information requirements: Getting it right. MIS Quarterly, 15(1), 51-65.
Wheelwright, S.C., & Makridakis, S. (1985). Forecasting methods for management (4th ed.). New York: Wiley.
Zajac, E.J., & Bazerman, M.H. (1991). Blind spots in industry and competitor analysis: Implications of interfirm misperceptions for strategic decisions. Academy of Management Review, 16(1), 37-57.
Zmud, R.W. (1986). Supporting senior executives through decision support technologies: A review and directions for future research. In Decision Support Systems: A Decade in Perspective (pp. 87-101). Amsterdam: North-Holland.