
An Evaluation of Empirical Research in Managerial Support Systems *

Izak BENBASAT and Barrie R. NAULT
Faculty of Commerce and Business Administration, University of British Columbia, Vancouver, B.C., Canada V6T 1Y8

Decision Support Systems 6 (1990) 203-226. North-Holland.

This paper describes, summarizes and comments on the empirical studies in the use of three information technologies to support managerial activities: decision support systems (DSS), group decision support systems (GDSS), and expert systems (ES). These are collectively labelled as managerial support systems (MSS). A classification scheme to organize empirical research in MSS is proposed. An overview of empirical work on two major research themes, namely MSS "design" and "effects of use" of MSS, is then presented for the years 1981-1988. Following this overview, the research strategies suitable for empirical research in MSS are discussed. The paper concludes with suggestions about future research directions in the field.

Keywords: Decision support systems, Group decision support systems, Expert systems, Decision support aids, Effects of support systems use, Design of support systems, Review of empirical DSS research, DSS research strategies.

* This work was supported by the Natural Sciences and Engineering Research Council of Canada under operating grant OGP2421. We would like to thank the anonymous referees for their valuable comments.

1. Introduction

The purpose of this paper is to describe, summarize and comment on the empirical studies in the use of three information technologies to support managerial activities: decision support systems (DSS), group decision support systems (GDSS), and expert systems (ES). While each of these is at a different level of maturity in managerial application and empirical research, the commonalities in their objectives make it possible to evaluate their contributions within a common classification scheme. In this paper, they will be collectively labelled as managerial support systems (MSS), defined by Scott Morton [56] as "the use of information technologies to support management." The paper begins by providing definitions of the components of MSS, followed by the discussion of a classification scheme used to organize empirical research in MSS. An overview of empirical research in MSS is then presented. Next, the appropriate research strategies for empirical research in MSS are discussed. The paper concludes with several comments about future research directions in the field.

2. Definitions: DSS, GDSS, ES

A DSS is a computer-based system used by managers as an aid to decision making in semi-structured decision tasks through direct interaction with data and models. Though research in DSS has been going on since the early seventies [55], except for a few authors who have surveyed the field in general terms [24,35,37,56], there have been no papers that have critically examined the empirical research efforts in DSS in an integrated fashion.

GDSS are extensions of the DSS concept to decision making groups. Although GDSS have received increased attention only recently, Scott Morton's seminal study [55] was designed to support a group of three decision makers. DeSanctis and Gallupe [18] define a decision making group as two or more people jointly responsible for a task, not necessarily situated in the same physical location, who perceive themselves as part of a team working on the task. Kraemer and King [45] provide a broader definition of GDSS as any computer and communication based support of group work including, but not limited to, decision making. Their definition includes computer supported cooperative work (CSCW). Bair [6] states that CSCW is concerned with people working together in organizations engaged in cooperation, collaboration, communication, sharing, etc. We confine our discussion to the decision making aspects of GDSS.

DeSanctis and Gallupe [18] classify group support into three levels. A "level 1" GDSS provides features to remove communication barriers in the group. Examples are large screens, voting management, anonymous input, and message exchange. A "level 2" GDSS provides decision modelling and group decision techniques such as multiattribute utility methods, risk analysis, and automated Delphi. A "level 3" GDSS provides machine-induced communication patterns such as automated parliamentary procedures. These levels are not necessarily cumulative; a GDSS can provide machine-induced communication patterns (level 3) without necessarily having any level 2 decision modelling techniques.

While work in ES had been going on since the mid-seventies, interest in such systems by academic researchers in information systems is quite new. Luconi, Malone and Scott Morton [49] define ES as computer programs that use specialized knowledge about a particular problem area rather than just general purpose knowledge, use symbolic reasoning rather than only numerical calculations, and perform at a level of competence that is better than that of nonexpert humans. They differentiate between an ES, a DSS, and an expert support system (ESS). These distinctions are described in terms of the following four categories: data, procedures, goals/constraints, and strategies. In a DSS, the responsibility for providing, modifying, and managing data, procedures, and goals/constraints is shared by the person and the computer, while the flexible strategies (i.e., procedures to explore and analyze the problem and possible solutions) are the sole responsibility of the person. In an ES the computer is responsible for all four categories, whereas for an ESS there is joint human-computer responsibility for each of these categories.

There are similarities in what ES and DSS are designed to accomplish; both provide computer-based support which could assist a manager or a professional. Most of the generic categories of ES applications [34], such as prediction and planning, are applicable to managerial settings. Turban and Watkins [66,67] consider ES a special class of DSS. They also propose several alternative ways to integrate ES with DSS, thus practically eliminating any distinctions between the two. Following these arguments, we consider ES for managerial support to be akin to DSS and GDSS, and will discuss examples of research efforts in these fields on similar issues.

3. A Classification Scheme for MSS Research

The classification scheme shown in table 1 will be used to structure our examination of MSS research. This scheme represents our view of how to organize the literature in order (1) to point out the salient features of the area under study, and (2) to identify the similarities and differences among the empirical work [for other classification schemes see 24,37,56]. The categories of the classification scheme are discussed below.

(A) Research Themes

The first level of the classification scheme deals with the research issues of how to design MSS, how to provide a technological base to build them, and how to evaluate their contributions (see [35] for similar issues). Research themes associated with MSS are categorized into three areas: (i) Design refers to the concepts and methods utilized for the development and implementation of MSS. Examination of the efficacy of the design process and comparisons of alternate design methods are subsumed under this topic. (ii) Effects of use refers to the examination of the consequences of utilizing a MSS and the value derived from using a MSS. Comparative evaluations of various MSS features and capabilities are covered under this topic. (iii) Technology focuses on the tools used to build and/or use a system. This category includes tools such as DSS generators, prototyping tools, ES shells, dialogue management tools, etc.

Our examination of empirical research is concerned with the "design" and "effects of use" categories. While it is recognized that there is another set of work corresponding to the "technology" category, this topic is beyond the scope of this paper.

Table 1
Categories of the MSS Empirical Research Classification Scheme.

I. Research Themes
   (1) Design
   (2) Effects of Use
   (3) Technology (a)
II. User Categories
   (1) Individual Support
   (2) Group Support
III. Component Parts
   (1) Modelling and Inference Component
       (i) Process Models
       (ii) Choice Models
       (iii) Analysis and Reasoning Methods
       (iv) Judgement Refinement/Amplification Techniques
       (v) Structured Group Decision Techniques
   (2) Data and Knowledge Component
       (i) Information Control Techniques
   (3) Interface and Communication Component
       (i) Representational Aids
       (ii) Group Collaboration Support
IV. Research Strategies
   (1) Methods
       (i) Case Studies
       (ii) Field Studies
       (iii) Field Experiments
       (iv) Laboratory Experiments
   (2) Focus of Research
       (i) Input-Output
       (ii) Process

(a) The Technology literature is not addressed in this paper.

(B) User Categories

The users of an MSS are classified as either individuals or groups. Individual support focuses on one user who is performing a task independently of others. While the decision will eventually be communicated to others, the individual MSS does not necessarily support such communication. Group support deals with two or more individuals working on an interrelated task as a team, interacting during performance of the task. Such tasks are similar to what Thompson [63] refers to as pooled interdependence. The study of group support becomes more complex due to the additional factors that need to be investigated over and above those relevant to the individual case. These factors include member proximity, group size, group cohesion, emergence of leaders, power, and influence relationships [18]. In addition, as will be discussed later, the measurement issues associated with studying processes are more complex for groups than for individual MSS.

(C) Component Parts

An integrated view of MSS can best be illustrated through an examination of their component parts, namely, a modelling-inference component, a data-knowledge component, and an interface-communication component. The major components of a DSS are the data subsystem, the model subsystem, and the user-system interface [60]. There are some parallels between these DSS components and those of an ES. The data component in an ES is a knowledge base which contains qualitative and symbolic knowledge represented using rules, frames or semantic nets, in addition to the factual data found in data bases [35]. The model component consists of inference engines providing generalized reasoning methods, such as forward and backward chaining, and explanation facilities as to why a certain piece of information was requested, or why a given action was taken. The interface component, generally made up of a "system-queries-user" portion and an explanation portion, controls the interaction between user and system.


Table 2
Decision Aiding Techniques and Their Functions.

1. Process Models: Computational models that predict the behavior of real-world processes (e.g., 'what if' capabilities).
2. Choice Models: Integration of individual criteria across aspects or alternative choices (e.g., multiattribute, multialternative utility models).
3. Information Control Techniques: Storage, retrieval, and organization of data, information and knowledge (e.g., database management systems).
4. Representational Aids: Expression and manipulation of a specific representation of a decision problem (e.g., visual, spatial, and matrix data and model representation methods).
5. Analysis and Reasoning Methods: Perform problem-specific reasoning based on a representation of a decision problem (e.g., expert systems, mathematical programming).
6. Judgement Refinement/Amplification Techniques: Quantification of heuristic judgement processes (e.g., bootstrapping, Bayesian analysis).
7. Structured Group Decision Techniques: Methods for facilitating and structuring group decision making (e.g., automated Delphi, nominal group techniques, electronic brainstorming).
8. Group Collaboration Support: Facilities for information and idea generation, collection, and compilation for groups (e.g., data and voice transmission, electronic chalkboards, electronic voting, computer conferencing).

A GDSS is also made up of these three components. However, there is an additional infrastructure needed for dealing with multiple individuals and the joint problem solving process. This will be referred to as a communications component (see also [32], which makes the same point). Kraemer and King [45] identify several elements of the communications component under the following categories: (i) hardware, e.g., telecommunications facilities, local area networks, special meeting rooms, and large screen viewing facilities; (ii) software, e.g., facilities for electronic voting and networking, and for managing private as well as shared files; and (iii) people, e.g., a facilitator to manage the group process in a specific orderly way, and/or to act as an assistant or troubleshooter.

As it is important to our integrated view of MSS, this discussion can be summarized by noting that although the specifics of each component in the three support systems are certainly different, the general functions of the three components are not. Therefore, at some level these different support systems are homogeneous with respect to research issues and the set of empirical research methods which can be used to examine them.

The functions available from an MSS can be classified into a number of decision aiding techniques (see table 2). The first six are suggested by Zachary [74]; the last two were added to describe those functions associated with groups. These decision aiding techniques can be classified generically into the three MSS components (see table 1). Decision aiding techniques 1, 2, 5, 6, and 7 relate mainly to the model-inference component. Decision aiding technique 3 relates directly to the data-knowledge component. Decision aiding technique 4 is associated with the interface component; decision aiding technique 8 includes additional interface-communication features for supporting groups.

Zachary [74] mentions ES capabilities under several of his categories, for example under choice models, and analysis and reasoning methods. Thus, his scheme treats ES as a subset of DSS. This is consistent with the view of ES as being focused on a narrower domain, which could therefore be integrated into DSS components [67].
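As a concrete illustration of the first technique in table 2, the sketch below shows a process model with a simple 'what if' capability: outcomes are recomputed under changed assumptions. The profit model and all figures are invented for illustration and are not taken from any system reviewed here.

```python
# A minimal 'what if' process model sketch (all numbers invented).

def projected_profit(units, price, unit_cost, fixed_cost):
    """Simple profit model: revenue minus variable and fixed costs."""
    return units * (price - unit_cost) - fixed_cost

base = dict(units=10_000, price=12.0, unit_cost=7.5, fixed_cost=30_000)
print("base case:", projected_profit(**base))

# 'What if' analysis: vary one assumption while holding the rest constant.
for price in (11.0, 12.0, 13.0):
    scenario = {**base, "price": price}
    print(f"price={price}:", projected_profit(**scenario))
```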

Table 3
Summary of Studies on MSS Design: Individual Support.

Authors (Year) | Research Purpose | Research Strategy | Focus of Research | Decision Aiding Techniques | Results
Alavi and Henderson (1981) | Comparison of two design approaches | Lab Experiment | Input-Output | Analysis and Reasoning Method | An evolutionary design creating "felt need" led to higher DSS use
King and Rodriguez (1981) | Assessment of participative design on implementation success | Lab Experiment | Input-Output | Process Model; Information Control Technique | Performance did not vary due to participative design, but perception of DSS worth increased
Alavi (1982) | Identify executives' support needs and desired benefits | Field Study (Survey) | - | Process Model | Users should be involved in definition and evaluation of DSS to increase perceived usefulness; use of prototypes is recommended
Alavi and Napier (1984) | Examine the effectiveness of the adaptive design approach | Case | Process | - | Adaptive design resulted in high levels of user participation and involvement, and reduced need for user training
Mahmood and Medewitz (1985) | Comparison of three design methods | Lab Experiment | Input-Output | - | Designer-led iterative design method was marginally superior
Burton et al. (1987) | Comparison of four knowledge acquisition techniques | Lab Experiment | Input-Output | Analysis and Reasoning Method | Protocol analysis technique took longer and produced a less complete set of information
Henrion and Cooley (1987) | Comparison of two approaches to knowledge acquisition | Case | Process | Judgement Refinement/Amplification Technique | Rule based models led to faster development but later required more extensive testing compared to the decision analytic approach
Shaw and Woodward (1987) | Evaluation of the reliability of an automated knowledge acquisition tool | Case | Input-Output | Analysis and Reasoning Method | Automated knowledge acquisition tool showed intra- and inter-expert reliability


Table 4a
Summary of Studies on Effects of MSS Use: Individual Support. Effects of MSS Availability.

Authors (Year) | Research Purpose | Research Strategy | Focus of Research | Decision Aiding Techniques | Results
Chakravarti et al. (1979) | Examine the value of a decision calculus model in decision making | Lab Experiment | Input-Output | Process Model; Analysis and Reasoning Method | Subjects made better decisions before being exposed to the DSS
Benbasat and Dexter (1982) | Effect of DSS availability on performance | Lab Experiment | Input-Output | Process Model; Representational Aid | Subjects with the DSS had higher performance
McIntyre (1982) | Assess the impact of a decision calculus model on decision quality | Lab Experiment | Input-Output | Process Model; Analysis and Reasoning Method | Subjects with the DSS had superior decision quality
Dickmeyer (1983) | Effect of DSS use on individuals' preference functions | Lab Experiment | Input-Output | Process Model | Subjects who used the DSS had larger changes in their preference functions
Aldag and Power (1986) | Effects of DSS on performance and perceptions of performance | Lab Experiment | Input-Output; Process | Choice Model | DSS availability did not improve performance, but did improve perceptions of decision confidence and satisfaction
Goslar et al. (1986) | Effect of DSS availability, training and level of detail in the data on decision making | Lab Experiment | Input-Output; Process | Information Control Technique | Not having a DSS, or not being trained in its use, increased the number of alternatives considered; no other differences (performance, speed, processes) were observed
Elam and Mead (1987) | Effects of the availability of creativity enhancing DSS on decision processes and levels of creativity | Lab Experiment | Input-Output; Process | Judgement Refinement/Amplification Technique | DSS availability influenced decision making processes; effects of DSS on creativity enhancement were mixed
Chu and Elam (1988) | Assess the influence of DSS use on decision processes | Lab Experiment | Process | Choice Model | Subjects with DSS followed an incremental decision making strategy, those without chose a synoptic approach; the DSS group outperformed the "no DSS" group for less complex tasks, results were reversed for more complex tasks


Table 4b
Summary of Studies on Effects of MSS Use: Individual Support. MSS Features: Interface Component.

Authors (Year) | Research Purpose | Research Strategy | Focus of Research | Decision Aiding Techniques | Results
Ghani and Lusk (1982) | Impact of information presentation on decision performance | Lab Experiment | Input-Output | Process Model; Representational Aid | Changes in representation format increased decision time but had no effect on performance
Benbasat et al. (1986) | Assess the influence of color and information presentation differences on user perceptions and decision making | Lab Experiment | Input-Output | Representational Aid | Influence of presentation format on performance depends on task; color had some positive, but no detrimental, effects
Dickson et al. (1986) | Effects of different presentation formats and task complexity on performance | Lab Experiment | Input-Output | Representational Aid | Influence of presentation format depends on task characteristics
Liang (1986) | Effects of presentation format and model accuracy on decision performance and user satisfaction | Lab Experiment | Input-Output | Process Model; Representational Aid | Increased model accuracy and tabular reports each improved performance and satisfaction
Jarvenpaa (1989) | Effects of task and format on information processing and evaluation strategies | Lab Experiment | Process | Representational Aid | Presentation format influences information acquisition strategies; presentation format and task requirements jointly influence evaluation strategies


Table 4c
Summary of Studies on Effects of MSS Use: Individual Support. MSS Features: Model Component.

Authors (Year) | Research Purpose | Research Strategy | Focus of Research | Decision Aiding Techniques | Results
Cats-Baril and Huber (1987) | Effects of computer, interactive aid, and availability of a decision aid heuristic on decision making effectiveness | Lab Experiment | Input-Output | Analysis and Reasoning Method | Computer availability had no effect; the heuristic aid and interactive aid both improved performance
Dos Santos and Bariff (1988) | Effects of model features on performance | Lab Experiment | Input-Output | Process Model; Information Control Technique | Systematic manipulation of model variables, and results displaying differences from planned values, improved performance
Silver (1988) | Explore user perceptions of DSS restrictiveness | Lab Experiment | Input-Output | Choice Model | Majority of subjects did not rank DSS restrictiveness the same way as the objective ranking based on system capabilities, indicating the influence of perceptions
Todd and Benbasat (1988) | Effects of DSS availability and model features on decision strategies | Lab Experiment | Process | Choice Model | DSS availability and model features influenced the decision strategies used



(D) Research Strategies

Our scheme to classify empirical research in MSS includes the strategies applicable to such studies. These strategies fall into the broad categories of laboratory experiments, field experiments, field studies, and case studies. For each of these strategies, it is recognized that different types of phenomena can be the focus of research. In input-output studies the interest is in examining the values assumed by the dependent variables based on a systematic manipulation of independent variables. The emphasis is on outcomes, such as the quality of the final decision; the processes intervening between input and output are treated as a "black box" and left unexplored [65]. In contrast, process data could be collected to investigate how and why questions, such as how a MSS influences selection of decision strategies or why a particular choice was made. Of course, a researcher could examine both processes and outcomes in the same study.

The classification scheme will be applied in two ways. First, our discussions of empirical research will be partitioned by research theme, and then by user categories. Second, the other elements of the scheme are referenced in tables 3 through 5, which provide summary information for all the empirical studies identified. The tables are broken down by research theme, user categories, decision aiding techniques and research strategies.

In the next two sections we provide a retrospective overview of MSS research in design and effects of use, mainly based on work published between 1981 and 1988. The focus is on empirical work, i.e., research that uses qualitative or quantitative data as a basis for the investigation of research questions. Nonempirical work, although important, is beyond the scope of this paper.

4. Empirical Research in MSS Design

(A) MSS Design for Individual Support

Table 3 summarizes the findings and salient features of the studies discussed in this section.


(1) Description of Empirical Studies

Research on design has mainly dealt with the comparison of alternative methods of developing support systems or acquiring and structuring the knowledge to build these systems. The extent and the role of users in developing these systems has been another area of interest. Since design methods are sometimes differentiated by the degree of user involvement, these topics are closely related.

There is an association between the design methods for ES and DSS. Turban and Watkins [66] and Waterman [70] suggest that the design principles usually recommended for DSS, iterative design and prototyping, are also applicable to ES, mainly because both areas deal with problems that are semi-structured. There are also cases where specific DSS design methods are used to develop ES [31], providing additional justification for our considering both systems as part of MSS.

Alavi and Henderson [2] investigated the influence of alternative implementation strategies (traditional and evolutionary) on the utilization of a DSS in a laboratory study. In the traditional strategy, the DSS was portrayed to the users as a valuable tool that could be shown theoretically to help them solve their problems. The evolutionary strategy attempted to create a "felt need" [29]. By comparing the normative model (built into the DSS) to descriptive models of subjects' actual behavior, subjects were made aware of their biases, inconsistencies, and the costs of their decision making strategies. Data from the experimental sessions indicated that optional DSS usage was highest when it had been implemented in an evolutionary fashion.

Alavi and Napier [3] examined the adaptive design method in a case study. With the adaptive design method, traditional systems development steps are combined into a single step that is repeated at short intervals, the concept of iterative design. The findings indicated that the adaptive approach required high levels of user participation and involvement, and reduced the requirements for formal user training.

Table 5
Summary of Studies on Effects of MSS Use: Group Support.

Authors (Year) | Research Purpose | Research Strategy | Focus of Research | Decision Aiding Techniques | Results
Steeb and Johnston (1981) | Compare the effects of a GDSS on decision performance and behavior | Lab Experiment | Input-Output | Structured Group Decision Techniques; Group Collaboration Support; Choice Model | GDSS increased decision content, decision breadth, consensus, and satisfaction
Turoff and Hiltz (1982) | Compare GDSS (computerized conferencing) vs. face to face group decision making | Lab Experiment | Input-Output | Group Collaboration Support | GDSS did not improve decision quality
Sviokla (1986) | Examine effects of ES on organizations | Multiple Case Study | Process | Analysis and Reasoning Method | ES helps in managing complexity, reducing uncertainty, and increasing structure
Bui and Sivasankaran (1987) | Compare the effects of GDSS on decision quality, decision time, and satisfaction | Lab Experiment | Input-Output | Group Collaboration Support | GDSS improved decision quality for the more complex task; GDSS groups took longer and had less satisfaction for the low complexity task
Le Blanc and Kozar (1987) | Measure the effects of GDSS on river safety | Longitudinal Case Study | Input-Output | Representational Aid; Group Collaboration Support | GDSS utilization decreased the number of accidents
Gallupe et al. (1988) | Effect of GDSS on group problem finding | Lab Experiment | Input-Output; Process | Group Collaboration Support | GDSS improved decision quality, especially for the more difficult task
Jarvenpaa et al. (1988) | Compare two group support systems (networked workstations and electronic blackboard technology) to "no GDSS" | Field Experiment | Input-Output; Process | Group Collaboration Support | Networked workstations did not provide better support than electronic blackboards or "no GDSS"; significant team (group) differences dominated the outcomes
Sharda et al. (1988) | Effects of GDSS availability on performance | Lab Experiment | Input-Output | Process Model; Group Collaboration Support | GDSS improved performance
Watson et al. (1988) | Compare effects of GDSS on consensus and influence | Lab Experiment | Input-Output; Process | Group Collaboration Support | GDSS did not increase group consensus; there were no effects on equality of influence
Zigurs et al. (1988) | Examine GDSS effects on influence behavior | Lab Experiment | Process | Group Collaboration Support | GDSS did not affect total expressed influence behavior; GDSS increased nonverbal communication


Using a laboratory study, Mahmood and Medewitz [52] compared three DSS design methods: representation-based, evolutive, and adaptive. In representation-based design the DSS designer has the decision maker identify the way in which the information is to be represented, which operations are to be applied to the information, what memory aids are required, and the control aids needed to manage the interaction of the decision maker with the DSS. Adaptive design consists of short progressive cycles where each development is made immediately operational, similar to the evolutive method, except that the decision maker is the leader in the design process. The evolutive design method has the designer leading the design process. The evolutive method was superior to both of the other methods with respect to DSS usage and user satisfaction, and slightly better in terms of DSS attitudes and perceptions.

In a laboratory experiment King and Rodriguez [43] examined the value of participative design for a DSS used to identify opportunities in a competitive environment. They observed that those managers who were involved in the development effort tended to perceive the DSS as more worthwhile than those who simply had the system delivered to them. In addition, the participants were more likely to utilize parts of the system that they recommended in the design phase. However, overall system use was not different and decision maker performance did not vary due to participative design.

Alavi [1], in interviews with senior level executives, found that the perceived usefulness of DSS is associated with involvement in the definition and evaluation of the DSS. Prototypes were also recommended as a way to reduce adoption apprehension and risk.

A key part in the design of ES is knowledge acquisition. This term refers to the extraction, formalization, and computerization of expert knowledge. Henrion and Cooley [36] conducted a case study to compare two approaches to knowledge acquisition, decision analytic and rule-based, for developing a diagnostic ES. The decision analyst employed an influence diagram to represent the relationships between causes and effects. This diagram incorporated a Bayesian model to diagnose problems based on the evidence available. The knowledge engineer used a rule-based system to represent diagnostic relationships, which included the degree of belief in intermediate hypotheses and the final diagnosis based on Boolean combinations of evidential data. During the construction of the decision analysis based model, the use of conditional distributions to encode diagnostic rules forced the expert to consider the impact of all possible combinations of evidence. While the rule-based model was less demanding initially, it later required more extensive testing since the expert was more likely to encounter combinations of events that were not considered explicitly during design.
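The decision analytic approach described above can be illustrated with a minimal sketch of Bayesian diagnosis. The faults, evidence variables, and probabilities below are invented, not taken from Henrion and Cooley's system; the point is that the conditional distributions must assign a likelihood to every combination of evidence.

```python
# A minimal Bayesian diagnosis sketch (all probabilities invented).

priors = {"pump_fault": 0.2, "valve_fault": 0.8}

# P(evidence pattern | fault); every combination of evidence must be specified.
# Evidence pattern is (low_pressure, high_vibration).
likelihood = {
    "pump_fault":  {(True, True): 0.70, (True, False): 0.20,
                    (False, True): 0.08, (False, False): 0.02},
    "valve_fault": {(True, True): 0.10, (True, False): 0.30,
                    (False, True): 0.40, (False, False): 0.20},
}

def posterior(evidence):
    """P(fault | evidence) by Bayes' rule over the two candidate faults."""
    joint = {f: priors[f] * likelihood[f][evidence] for f in priors}
    total = sum(joint.values())
    return {f: p / total for f, p in joint.items()}

# With both symptoms present, the pump becomes the leading diagnosis
# despite its lower prior probability.
print(posterior((True, True)))
```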

In a laboratory study Burton et al. [13] compared four knowledge acquisition techniques: formal interviews, protocol analysis, goal decomposition and card sort (multidimensional analysis). These methods were compared in terms of the time taken for elicitation, the number of rules elicited, and the completeness of the rule set. While there were several problems associated with the study (e.g., the use of undergraduate students as experts and no feedback received from the subjects after the initial acquisition of knowledge), results indicated that knowledge acquisition based on protocol analysis took longer to perform and analyze and produced a less complete set of information compared to the other techniques.

Shaw and Woodward [58] compared the rules and constructs generated by experts in a given domain with the use of an automated knowledge acquisition tool. The findings showed that the tool performed satisfactorily in terms of the similarity of the constructs elicited from different experts as well as the reliability of the constructs generated by the same expert at different points in time.

(2) Comments on Empirical Studies

Empirical research on DSS design for individuals suggests that DSS usage is higher, and user satisfaction, attitudes, and perceptions are more favorable, when prototyping and iterative design are used. These methods necessitate a higher degree of user involvement in system design. Consistent with these results, studies of user involvement in design have found that higher user participation results in favorable perceptions of usefulness as well as lower rates of system rejection. However, this research does not show that actual decision maker performance with a support system is affected by user participation in design.

In general, we have found that interest in DSS design research has waned in recent years. Contributions to theory building and testing have been limited. As will be discussed in the next section, measuring the effects of use has attracted more attention in the empirical literature since 1980.

This could reflect the view that, in the opinion of researchers, DSS is now a mature technology and there are few new questions to be answered, or that design methodologies for new emerging technologies such as ES are the important ones to investigate.

Compared to DSS research, ES design research is in its early stages. Hence, issues specific to ES, such as knowledge acquisition, have not been sufficiently studied. While Welbank [72] provides a comparative analysis of nonautomated knowledge acquisition methods (e.g., interviews, protocol analysis, observation), this analysis is not based on empirical work. In general, formal evaluations of these methods are rare, as indicated by the small number of studies discussed earlier. There are several automated knowledge acquisition tools reported in the literature, e.g., KSSO [25] and KNACK [44]. We are not aware of any work that has compared these tools against each other or against a knowledge engineer. Each tool is designed for different task domains and problem types; thus a direct comparison is probably not feasible. The empirical studies discussed earlier provide a beginning for ES research, but there is not enough work yet to develop design implications. The main sources of knowledge about ES design are accounts in which the developers of the systems describe their experiences in dealing with an expert, or other knowledge sources, to build an ES. These research strategies are typical of early work in a new area.

At this stage there is not a critical mass of empirical knowledge which provides normative guidelines for MSS design. In an area central to MSS, such as design methods, results are based mostly on perception rather than objective evaluations of design alternatives. Consequently, MSS design research is incomplete.

(B) MSS Design for Group Support

Although we did not find any published empirical work examining the design of group support, Kraemer and King [45] point to several potential design problems as possible barriers to successful use of GDSS. These problems include incomplete understanding of the decision making process and shortcomings in areas of GDSS technologies (communication, graphics, modelling and analysis software).


Gallupe [26] suggests that an important issue in the design of GDSS is the question of what features to include in the system. There seem to be two alternative approaches. The first would be to include support for specific types of problems. The alternative would be to include a basic array of support features for multiple users. This is similar to the notion of a GDSS shell put forward by DeSanctis and Gallupe [18], where useful features can be selected in the GDSS during the course of working on a specific task.

DeSanctis and Gallupe [18] also point to GDSS design research issues. They suggest that the different hardware, software, and decision room configurations will likely influence implementation and eventual GDSS use. They also contend that, due to the novelty of GDSS, user participation may not be relied upon as the primary input to implementation analysis. Nonetheless, there is presently no empirical support for any of these views on design of group support. As such, this should be a prime area for future research.

5. Empirical Research into Effects of MSS Use

(A) Effects of MSS Use by Individuals

Tables 4a, 4b and 4c summarize the findings and salient features of the studies on the effects of MSS use discussed in this section.

(1) Description of Empirical Studies

Virtually all the studies on effects of MSS use by individuals are based on DSS work. We have not found any comparable studies in ES. Research into the effects of DSS use has focused on two main topics: (1) DSS availability and (2) evaluation of DSS features.

Several studies examined whether the availability of a DSS influenced the outcomes of decision making and decision making strategies. Benbasat and Dexter [8] investigated whether various aids would assist decision makers in production and inventory tasks. All decision makers had higher performance with the decision aids. The most significant result was that low analytic decision makers without the aid had the worst performance, but with the aid their performance level equalled that of the high analytics.

McIntyre [51] studied the impact of DSS availability on decision quality in a promotion budget allocation task. Using a decision calculus model he found that the group with the DSS had significantly higher decision quality. In contrast, Chakravarti et al. [15], in an advertising allocation task using a decision calculus model, found subjects made better decisions before being exposed to the DSS; the use of the DSS was detrimental. However, debate in the marketing literature has questioned the suitability of the decision calculus model used.

Dickmeyer [19] examined the effect of a DSS on subjects' preference functions. In a university budget planning task employing an interactive financial planning model, he observed that preference functions changed more when subjects used the DSS to achieve a greater understanding of the tradeoffs between variables than when subjects only received a printed report on long range forecasts.

Goslar, Green, and Hughes [30] examined the effects of applying a DSS to an ill-structured marketing problem. The findings indicated that not having a DSS was associated with the consideration of more alternatives. No differences in decision speed, perceived confidence, amount of data considered, decision processes, or overall performance were due to DSS availability.

Aldag and Power [4] investigated the use of a DSS which helps the user with brainstorming, managing pro and con arguments, applying a multiattribute utility model, and evaluating the user's choice. They found that users' attitudes towards decision aids were favorable. There was only limited support for the hypothesis that, compared to unaided users, those with decision aids will exhibit more confidence in, and satisfaction with, their decision processes and recommendations. There was no support for the hypothesis that users of a decision aid will perform more systematic and complete analyses, and make better decisions.

In a similar vein, Elam and Mead [23] examined the use of creativity enhancing software on the decision making process and the "creativity" of the solutions proposed in two planning tasks. There were three treatments: "no DSS" and two versions of a creativity enhancing DSS. The first version encouraged the subjects to look backwards for causes and depth of understanding in developing plans; the second focused on looking ahead at each step to practical solutions. Protocol analysis was used to categorize the decision making process as either single step (immediate decision) or multi-step (several steps to reflect about the task). All subjects with a DSS followed a multi-step process; only 40% of those without a DSS did so. However, these DSS effects could be due to the fact that the DSS users were given an explicit process model to follow, rather than to the support system itself. The version 1 DSS generated more creative responses than the "no DSS" group, whereas the other DSS group generated less.

Using protocol analysis in a laboratory experiment, Chu and Elam [16] studied the effect of a DSS on decision processes. A spreadsheet program was used as the DSS, and subjects were asked to perform a resource allocation task. Contrary to expectations, in the DSS group more subjects used an incremental decision process (marginal analysis, satisficing) than a synoptic approach (unbiased search, maximizing), and the reverse was true for the "no DSS" group. In addition, the "no DSS" group had higher decision quality than the DSS group on the high complexity task; the results were reversed on the lower complexity task. There was no significant difference in the number of alternatives generated.

Another focus has been examining the influence of different DSS features on the quality and process of decision making. Two main categories of features were studied: those dealing with the interface component and those dealing with the model component. The following studies investigated the interface component.

Ghani and Lusk [28] examined changes in information presentation format on decision performance. Laboratory results indicated that a change in information format part way through the experimental task (e.g., from tabular to graphical, or vice versa) increased decision time but did not alter profit performance. Subjects generally preferred the representation they initially used. Neither representation format was found to be superior to the other.

In three interrelated studies, Benbasat, Dexter, and Todd [9] assessed the influence of color and information presentation differences (tabular reports and line graphs) on user perceptions and decision making. The major finding of these studies was that decision making effectiveness improved depending on how well the presentation format supported the task solution. Only when graphical reports were in a form that provided an additional perspective on the task solution compared to tabular reports did they exhibit dominance, and they were then given significantly higher ratings on all evaluation dimensions except accuracy. Color-enhanced reports had no detrimental effects. The benefits of color were more evident for the use of graphical reports than tabular ones.

Liang [48] found that presentation format and levels of model accuracy affected both decision performance and user satisfaction, with increased accuracy having the larger positive effect on performance. Presentation format was the main contributor to user attitudes towards the DSS; tabular reports were superior to graphs. This outcome is a consequence of the nature of the task; accurate numbers, as opposed to trend information, were required.

Dickson, DeSanctis, and McBride [20] conducted three laboratory experiments investigating the effectiveness of different presentation formats. The overall conclusion they reached was that the task environment (content, complexity, and structure) affected the effectiveness of a given presentation, and this influence seems to be based on the volume of data and the precision required. Line plots outperformed tables in the task with higher levels of complexity and structure. Graphics were superior when large amounts of information were presented and either time dependent patterns or recall of specific facts were required.

Jarvenpaa [39] examined how individuals select processing strategies in a multiattribute-multialternative problem. Presentation formats were organized either by showing all alternatives within an attribute (e.g., displaying the costs of all the various options in one report) or by showing all attributes within an alternative (e.g., displaying cost, sales, location, etc. for one option). Task was varied by providing subjects with task instructions which in theory would elicit specific processing strategies. These strategies were identified using verbal protocol analysis. Report format influenced information acquisition, i.e., reports organized by alternative induced information acquisition by alternative. The demands of the task reduced some of the influence of format, i.e., for graphs organized by alternative, tasks eliciting attribute processing led to less processing by alternatives (as compared to tasks eliciting alternative processing).

The following studies examined the influence of the model component. Cats-Baril and Huber [14] looked at the effects of interactive versus noninteractive aids and the use of a heuristic. The heuristic consisted of a methodology to define the problem and generate alternative strategies. Subjects with the heuristic and the interactive aid produced more ideas, increased their performance level, and experienced a positive attitude change towards the task. On the other hand, they had less confidence in their work and found the process of using the heuristic to be rigid.

Dos Santos and Bariff [22] examined the effects of model features. Two features led to superior performance in problem finding and problem ranking: (1) system guided queries, where the subject was directed by the system to systematically manipulate the variables (versus no system guidance), and (2) results displaying differences from planned, rather than actual, values.

Todd and Benbasat [64] utilized protocol analysis to investigate the influence of different decision support tools on the choice of problem solving strategies in a multiattribute-multialternative problem [39]. Comparing DSS to "no DSS", it was found that strategy selection (elimination by aspects or additive difference) was influenced by the presence of decision aids and their specific features. Subjects with DSS followed the least effortful strategy (those utilizing and processing less information) as determined by the type of decision aid provided. In another phase of the study, it was observed that the use of more effortful strategies took place only when decision aids that substantially reduced the cognitive costs of employing such strategies were available.

Silver [59] used a laboratory experiment to explore user perceptions of system restrictiveness (the degree and manner in which the DSS capabilities restrict the decision making processes which can be used). Subjects were asked to rank three different DSS in order of restrictiveness. The researcher also ranked the DSS objectively by restrictiveness, based on the capabilities of the DSS. A majority of subjects did not rank the DSS the same way as the objective ranking, indicating that user perspectives play a role in assessing system restrictiveness. However, much of this discrepancy could be explained by the fact that the functions in the least restrictive DSS were more complicated to use than the other two.
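For readers unfamiliar with the two choice strategies examined by Todd and Benbasat, the following is a minimal sketch of elimination by aspects and additive difference. The alternatives, weights, and cutoffs are invented for illustration; on these made-up data the two strategies even disagree, which is why strategy selection matters.

```python
# Minimal sketches of two choice strategies (all data invented).

weights = {"rent": 0.5, "size": 0.3, "location": 0.2}
cutoffs = {"rent": 0.4, "size": 0.3, "location": 0.2}

apartments = {
    "apt_1": {"rent": 0.9, "size": 0.2, "location": 0.8},
    "apt_2": {"rent": 0.6, "size": 0.7, "location": 0.5},
    "apt_3": {"rent": 0.3, "size": 0.9, "location": 0.9},
}

def elimination_by_aspects(alts, cutoffs, order):
    """Drop alternatives failing each attribute cutoff, most important attribute first."""
    remaining = dict(alts)
    for attr in order:
        survivors = {a: s for a, s in remaining.items() if s[attr] >= cutoffs[attr]}
        if survivors:               # if every alternative fails, keep the current set
            remaining = survivors
        if len(remaining) == 1:
            break
    return list(remaining)

def additive_difference(a, b, weights):
    """Pairwise comparison: positive favors a, negative favors b."""
    return sum(w * (a[attr] - b[attr]) for attr, w in weights.items())

order = sorted(weights, key=weights.get, reverse=True)  # screen by importance
print("EBA keeps:", elimination_by_aspects(apartments, cutoffs, order))
print("apt_1 vs apt_2:", additive_difference(apartments["apt_1"], apartments["apt_2"], weights))
```

On this data, elimination by aspects retains apt_2, while the additive difference comparison favors apt_1 over apt_2: the less effortful screening strategy and the compensatory strategy can lead to different choices.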

(2) Comments on Empirical Studies

Empirical investigations into the performance effects of DSS availability have been inconclusive. While in some studies use of a DSS improved performance, in several studies unaided users performed as well as aided users. In hindsight, this points to the obvious idea that some DSS are more useful than others. This clearly is not very informative unless one can explain why this has taken place. Unfortunately, based on the current empirical data, it is not possible to resolve these conflicting outcomes. These results persist across decision aiding techniques. This might be a reflection of differences in the level of support provided, the complexity of using the DSS features, or the lack of pilot tests to assure the quality of the decision aids, or they could be a function of poor experimental procedures, including the amount of training given to the subjects on DSS use.

Some of the research on model features has shown that model accuracy, interactive aids, systematic manipulation of the exogenous variables, and the availability of a heuristic improved performance. Other studies have shown that manipulating model features which alter the cognitive costs associated with different strategies can affect the process of decision making.

Results from studies examining the effects of presentation format on performance and decision processes have generated converging evidence that task effects are important in evaluating presentation format. This is an encouraging development given that until recently reviews of this field inevitably concluded with the comment that the findings were equivocal. Task characteristics, and the match between them and presentation format, could be a point of focus for future studies and could lead to the development of theory in an area which has attracted much research interest with little result.

We find there has been a thorough examination of one decision aiding technique, representational aids. In terms of the classification scheme in table 1, we can also link decision calculus models [15,51] to judgement refinement/amplification techniques, the multiattribute utility models [4,39,59,64] to choice models, and inventory and financial planning models [8,19] to process models.

Nonetheless, our understanding of the effectiveness of all the applicable decision aiding techniques is incomplete. The researchers did not investigate differences among decision aiding techniques.

From our review of the research literature, it becomes clear that there is no strong theoretical foundation on which most of these studies are based. The variables investigated often appear to have been chosen according to the interests of the researchers, more of an ad hoc selection than one following the precepts of a research paradigm. There is a need for theories to predict how MSS influence decision making, to formulate hypotheses, to conduct research in a directed and parsimonious manner, and to interpret and integrate findings. Furthermore, increased use of process tracing techniques is required to better understand the underlying reasons for these varying effects of MSS use.

Due to their relative novelty, the contribution of ES in terms of improving decision maker performance has not yet been fully determined. The aspect of evaluation that has attracted a lot of conceptual attention in the literature is testing the validity of an ES, that is, the degree to which the ES behaves and performs like the expertise sources after which it was modelled [10,54,73]. According to Jaikumar and Bohn [38], showing the validity of an ES is only one way of determining quality. The other is to study ES in the field or in the laboratory so that the effects of their use can be understood. The lack of such work is a function of the relative infancy of the field. It appears that almost all of the empirical data we have about the value of ES use is based on the validation of these systems performed by their developers. Again, as these systems move from prototype development to regular use there will be opportunities for researchers to evaluate them more extensively.

(B) Effects of MSS Use by Groups

Table 5 summarizes the findings and salient features of the studies on the effects of MSS use discussed in this section.

(1) Description of Empirical Studies

In general, these studies compare groups with and without computer support. They are divided into two categories: studies that focus on outcomes of the group process (e.g., decision quality, consensus, number of alternatives) and those that attempt to understand behavior in the group process (e.g., influence behavior).

A number of studies have examined outcome effects of GDSS use. Steeb and Johnston [61] used a GDSS with a video projection unit and an intermediator (for overall GDSS direction and translation of the group's requests into computer entries). The GDSS was an implementation of a multiattribute utility model. Groups with the GDSS exhibited higher decision comprehensiveness, considered a larger range of options, more attributes, actions and events, and had more consensus and satisfaction with the group process. Although the aided groups took 12% longer, this difference appeared to be below the magnitude of the quality differences. Group size in this study was three.

Turoff and Hiltz [68] studied the use of a computerized conferencing system (CCS) as a GDSS. They compared groups of students, operating either in a face to face mode or linked by a CCS, solving an "arctic survival problem" which involved rank ordering a list of survival aids. Results indicated no differences in decision quality and no relationship between decision quality and consensus. Face to face groups were more likely to reach consensus; however, the CCS allowed for greater opinion giving. A second phase of the study, conducted in the field, examined groups made up of people working for the same organization. They found that groups with a designated leader, or groups who were provided statistically aggregated feedback of group members' opinions, had higher group consensus. Group size in both experiments was five.

Two experiments with groups of three using a GDSS were conducted by Bui and Sivasankaran [12] to examine decision quality, decision time, and user satisfaction on tasks (criteria determination and selection) of differing complexity. The GDSS allowed subjects to generate solution criteria, establish weights, and aggregate inputs to determine a final outcome. A human facilitator was also available. Results indicated that decision quality was superior for GDSS groups on the higher complexity task and no different on the lower complexity task. GDSS groups took longer and were slightly less satisfied with the group decision on the lower complexity task; no differences in decision time or satisfaction were observed for the higher complexity task.
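A minimal sketch of the aggregation step performed by a GDSS of the kind Bui and Sivasankaran describe, in which members' criterion weights and alternative ratings are combined into group scores, is shown below. The member names, criteria, and numbers are invented for illustration and are not taken from their system.

```python
# A minimal group-aggregation sketch (all data invented).

members = {
    "m1": {"weights": {"cost": 0.6, "impact": 0.4},
           "ratings": {"plan_a": {"cost": 0.7, "impact": 0.5},
                       "plan_b": {"cost": 0.4, "impact": 0.9}}},
    "m2": {"weights": {"cost": 0.3, "impact": 0.7},
           "ratings": {"plan_a": {"cost": 0.6, "impact": 0.4},
                       "plan_b": {"cost": 0.5, "impact": 0.8}}},
}

def group_scores(members):
    """Average each member's weighted score to obtain a group score per alternative."""
    alts = next(iter(members.values()))["ratings"].keys()
    scores = {}
    for alt in alts:
        member_scores = [
            sum(m["weights"][c] * m["ratings"][alt][c] for c in m["weights"])
            for m in members.values()
        ]
        scores[alt] = sum(member_scores) / len(member_scores)
    return scores

print(group_scores(members))  # plan_b wins for this (made-up) group
```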

219

satisfaction were observed for the higher complexity task. Gallupe, DeSanctis and Dickson [27] in a laboratory experiment examined the effects of using a GDSS for groups of three to support tasks of different levels of difficulty. The experimental task was to identify problem cause. The GDSS managed individual entries, provided some aggregate information and incorporated a voting mechanism. Results showed that the GDSS improved decision quality, especially for the more difficult task, and increased the number of alternatives generated. However, the groups using the GDSS were found to be less confident, had more group conflict, and less consensus and satisfaction. Sharda, Barn and McDonnell [57] used an IFPS based financial analysis model with groups of three during one semester in a business task which simulated upper management decisions. The study found that groups with the G D S S had higher performance and there was less variability in performance. There were no differences in terms of time, alternatives generated, and confidence. However, in this experiment it is not clear if the dependent variables measured the performance of the group or the dominant individual in the group. In a longitudinal case study Le Blanc and Kozar [47] examined the impact of a GDSS used to manage vessel traffic from different locations. The GDSS provided communications and information from the different locations and this information was displayed on a m a p of the portion of the waterway being covered. Using an objective measure of G D S S effectiveness, higher proportions of vessel traffic tracked by the GDSS decreased the number of accidents in the system. Three recent studies have examined influence behavior in groups using GDSS. Watson et al. [71] used a GDSS with three to four person groups working on a resource allocation problem. The G D S S recorded, stored and displayed each individual's problem definitions, provided methods to enter and aggregate weights for solution criteria~ and included a voting mechanism. The resource allocation task had no analytical solution and depended on personal preference. Three treatments were compared: no decision aid (baseline), pencil and paper decision aid (manual), and computerized decision aid (GDSSt. The GDSS did not increase group consensus, perceived deci-


Three recent studies have examined influence behavior in groups using GDSS. Watson et al. [71] used a GDSS with three to four person groups working on a resource allocation problem. The GDSS recorded, stored and displayed each individual's problem definitions, provided methods to enter and aggregate weights for solution criteria, and included a voting mechanism. The resource allocation task had no analytical solution and depended on personal preference. Three treatments were compared: no decision aid (baseline), pencil and paper decision aid (manual), and computerized decision aid (GDSS). The GDSS did not increase group consensus, perceived decision quality, solution satisfaction, or equality of influence when compared to either unaided or manually aided groups. Both consensus and influence were measured objectively from individuals' and groups' allocations.

Zigurs et al. [76] studied influence behavior within the context of GDSS use with three to four person groups. The manual and GDSS treatments were the same as in Watson et al., except the GDSS also allowed for exchanges of comments. The group task was to develop admittance criteria and make choices. Process tracing of verbal interaction, nonverbal interaction, and electronic/written communication was used to measure influence behavior. There were no differences in total expressed influence behavior between treatment groups; however, GDSS groups had more nonverbal communication behavior. There was partial support for the distribution of individual group members' influence being more even in GDSS groups, and some support for the proposition that patterns of influence behavior differ between treatments.

Jarvenpaa et al. [41] examined both outcomes and behavior (process) in an exploratory field experiment using subjects from the same organization. This study differs from others in that the groups were larger (seven members) and two different GDSS types (networked workstations and an electronic blackboard (EBB)) were used in addition to a "no GDSS" treatment. Each group performed the same three tasks (conceptual software design), and groups had three sessions with the same GDSS (as opposed to one) for a total of nine sessions together. In general, there was no disadvantage to using the EBB compared to "no GDSS", and results comparing networked workstations to "no GDSS" were inconclusive. However, differences between groups provided the most significant effects, overshadowing other results.

A different emphasis on the effects of MSS use supporting multiple individuals is found in [62], which investigated the effects of ES on organizations. The criteria focused on how the organization, its information processing capacity, and the organizational programs (a highly complex and organized set of responses to environmental stimuli) change in response to the use of ES. A comparative, pre-post exploratory design was used to examine the impact of three commercially used systems: PlanPower (financial planning), XCON (computer configurations), and MUDMAN (a drilling ES). The findings revealed that ES augmented the information processing capacity of the firm, altered the process of task execution, and increased the scope and formality of data used in the task. ES helped in managing complexity, improving representation, improving standards, and providing a rigorous understanding of some of the previously uncertain parts of the task. On the other hand, by imposing structure, ES introduced rigidity into task performance.

(2) Comments on Empirical Studies

All the GDSS studies used group collaboration support as their major decision aiding technique; this is similar to a level 1 GDSS as outlined by DeSanctis and Gallupe [18]. Only the Steeb and Johnston [61] GDSS can be considered as providing structured group decision techniques. Overall, research into GDSS indicates that group decision quality can be enhanced by a GDSS; no study has found a GDSS reducing decision quality. In studies that varied task difficulty, GDSS have been found to enhance decision quality on the more difficult tasks. GDSS availability has increased the number of alternatives considered by groups. Effects of GDSS on group consensus and group conflict have been mixed. Total influence behavior does not seem to be changed by GDSS, although the distribution of influence and patterns of behavior may be affected.

Of the studies examined here, most experimental work on GDSS use has been face to face and with either three or four person groups. An examination of research on groups indicates a need for studies with larger group sizes, non-face-to-face GDSS setups, and a clearer understanding of the impact of different hardware and decision room configurations. Moreover, research into the effects of GDSS use over time is required. In addition, studies of GDSS in the field are needed to provide external validity to laboratory results. The recent work by Jarvenpaa et al. [41] provides a step in the right direction.

Perhaps due to experience gained studying individual support, and the substantial amount of work in reference disciplines such as organizational behavior and communications, the research into the effects of GDSS use has been more systematic. GDSS researchers have paid careful attention to the theoretical and empirical research in


these fields, obtaining a solid base for their work. For example, both DeSanctis and Gallupe [18] and Dennis et al. [21] view information exchange theory as a basis for understanding GDSS effects. In contrast to studies with individuals, typical GDSS research incorporates dependent variables which cover a wider range of phases of decision making: generating alternatives, negotiating, and making a choice. One thus finds a large number of different variables investigated. There appears to be some agreement on a common set of important outcome variables; however, there is no consensus on how to measure them. Neither is there agreement on which process variables to investigate. Research models [21,46], as well as standardized constructs and measurement methods, will assist cumulative research endeavors. To date we have not seen empirical work using an ES for group support, nor an ES integrated with a GDSS. Sviokla's work provided some preliminary understanding of how ES support multiple individuals.

6. Research Strategies

The strategies (methods) and data collection techniques best suited to the study of managerial support systems have not been addressed extensively. Some descriptive articles have surveyed the types of research strategies employed [24,37,56]. Other authors have prescribed the empirical strategies best suited for studying the field [7,65] and discussed the types of methodological problems that frequently occur [26,40]. There is no single best research strategy. The selection of a particular strategy depends, among other things, on the amount of existing knowledge, the resources available to the researchers, the purpose of the research, and the nature of the topic researched.

Many of the pioneering researchers in DSS employed action research, i.e., research in which the investigator was also involved in the development of the system, or case methods [5,55]. There are two reasons for the use of these qualitative research strategies. First, they are suited for exploration and discovery when our understanding of the phenomenon is new [11]. Second, when the dynamics of the phenomenon might be a major determinant of outcomes, they allow the


investigator to answer how and why questions to understand the nature and complexity of the processes that have taken place. Subsequently, as DSS research has matured, researchers have tested models of the design and implementation process derived from the organizational behavior literature [2,29], and hypotheses about factors generated from case studies, in both the field and the laboratory.

In studies of ES we observe that action research studies have again been part of the groundbreaking work [17,50]. The use of a qualitative approach, which attempts to provide an in-depth understanding of a few examples, is consistent with the fact that not much previous research has taken place on the effects of ES. However, a recent study by Sviokla [62] utilized a case strategy for testing hypotheses, providing an arm's-length examination of the phenomenon as compared to action research approaches.

Interestingly, in GDSS the studies of design and effects of use have originated in industry research or university laboratories; well known examples are the University of Arizona and the University of Minnesota. In these projects, implementation and evaluation have been closely intertwined: development of prototype GDSS was followed by evaluation of the consequences of using these systems. The approach taken by the Arizona researchers was to bring in larger groups (15+) from various organizations to use their facilities [53]. The research strategy utilized was similar to a case study, where the activities of a natural working group are examined in depth. In contrast, the Minnesota researchers followed a more controlled laboratory experimentation approach. They examined a larger number of smaller groups (4-5 members, mainly students) dealing with sets of problems assigned to them. While the nature and size of the groups chosen were intended partly to increase the number of groups, and thus statistical power, it is still difficult to obtain an adequate number of groups to reach the desired level of power [76]. These two contrasting approaches to studying GDSS allow a degree of triangulation by providing data about the internal and external validity of the findings associated with some of the key GDSS design variables.
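The power problem can be made concrete. When the group, rather than the individual, is the unit of analysis, each additional observation requires recruiting and scheduling an entire group, yet the number of observations needed is driven by the same standard power calculations. Below is a rough Python illustration using the statsmodels power routines; this is our choice of tool for the illustration, not one reported in the studies reviewed, and the effect sizes and targets are hypothetical.

# Rough illustration of the sample-size burden when groups are the unit
# of analysis. Effect sizes, alpha, and power target are hypothetical.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for d in (0.5, 0.8):  # Cohen's d: "medium" and "large" effects
    n_groups = power_analysis.solve_power(effect_size=d, alpha=0.05,
                                          power=0.8)
    # n_groups is per treatment; each observation is an entire group,
    # not a single subject.
    print(f"d={d}: about {n_groups:.0f} groups per treatment")

For a medium effect (d = 0.5) this calls for roughly 64 groups per treatment; with four-member groups and two treatments, that is over 500 subjects.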


Looking specifically at the effects of MSS use, we and other researchers [57] have observed that within the last few years laboratory experimentation has been the predominant research method, with few exceptions [41,62]. Within laboratory experiments, researchers can focus on outcomes and/or processes of decision making with MSS. Most of the research to date has examined outcome variables, such as profit and time performance, demonstrating only whether these systems influence decision outcomes or not (i.e., input-output studies). It is ironic that even though the fundamental studies in this field have placed emphasis on understanding the influence of computerized support on changes in decision processes [42,55,60], few MSS studies that examine these relationships have emerged.

The input-output paradigm which is widely used in cognitive psychology experimental research is not always best suited for MSS research. The types of tasks of interest to management support system researchers are richer and more complex. Consequently, there are many intervening variables that can potentially influence the final outcome. While it is important to measure the influence of a decision aid, it is also important to understand how the changes in decision processes caused by the use of support tools lead to changes in outcomes. Furthermore, in most cases, given that the problems studied are semistructured in nature, it is not possible to determine a priori what the best outcomes should be. When the evaluation cannot be made by looking at inputs and outputs alone, examining the quality of the decision processes becomes important. Todd and Benbasat [65] provide a detailed analysis suggesting why concurrent verbal protocol analysis is an appropriate process tracing technique for examining the effects of MSS use by individuals. Recent examples of such use of protocol analysis include the works of Elam and Mead [23], Chu and Elam [16], Jarvenpaa [39], and Todd and Benbasat [64]. Process tracing methods have also been utilized for capturing and analyzing group interactions [33,76].

In a recent paper, Zmud [77] developed a recommended measurement method for examining behavior in a nonrepeating multiple-individual decision context. His schema to identify the important variables in a group context consists of the following major components:
(i) the situational context (characteristics of the task and the group),
(ii) group processes (relation-building and decision resolution activities), and
(iii) interactions among the group members (communication network characteristics and discrete communication exchanges).
Zmud [77] also suggested associated measurement methods (e.g., direct observation, video and audio tapes) to collect data on the three components of his schema.

From the research to date, it appears that the analysis of interaction behaviors in GDSS research has not reached the same level of sophistication and understanding as the process tracing methods for individuals, though this situation is changing. In the initial process tracing studies in GDSS, process data extracted from tape recordings were presented in an anecdotal/qualitative manner and as secondary to the more "objective" quantitative findings used for hypothesis testing [27,71]. More recent studies have placed a higher emphasis on process data for hypothesis testing [41,76]. There is a need for continued development of better techniques to analyze the "observational" process tracing data dealing with groups. One such technique, suggested by Zigurs [75], is comprised of the following:
(i) coding of verbal acts into one of five procedural message categories representing verbal influence behavior,
(ii) an aggregate-level analysis by two observers who rate the behavior of group members in terms of the prominence and achievement dimensions of influence, and
(iii) measurement of influence by counting acts of nonverbal procedural behavior, and acts of electronic behavior (support system use) and written communication behavior.
This methodology thus consists of a coding scheme and a variety of data capturing schemes, namely audiotapes, written transcripts, videotapes, and computer log files, for the raters and the researchers to utilize in analyzing the data.
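The counting component (iii) of such a scheme is straightforward to mechanize once acts have been coded. The following is a minimal Python sketch of the tallying step only; the category labels, channels, and data below are invented for illustration, and the actual five procedural message categories are defined in [75].

# Minimal sketch of the tallying step in an influence-coding scheme:
# coded acts are counted per member and per channel, and the evenness
# of the resulting distribution is summarized. All labels and data are
# invented; see Zigurs [75] for the actual coding categories.
from collections import Counter

# Each coded act: (member, channel, category) from a hypothetical session.
acts = [
    ("m1", "verbal",     "propose_procedure"),
    ("m1", "electronic", "enter_alternative"),
    ("m2", "verbal",     "call_for_vote"),
    ("m3", "nonverbal",  "gesture_agreement"),
    ("m1", "verbal",     "propose_procedure"),
    ("m2", "written",    "summarize"),
]

acts_per_member = Counter(m for m, _, _ in acts)
acts_per_channel = Counter(ch for _, ch, _ in acts)

# Crude evenness index: least over most active member (1.0 = fully even),
# one simple way to operationalize "distribution of influence".
evenness = min(acts_per_member.values()) / max(acts_per_member.values())
print(acts_per_member, acts_per_channel, f"evenness={evenness:.2f}")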


It should be emphasized that we are not advocating the exclusive use of process tracing techniques. Qualitative and quantitative measures of outcome, such as time, quality of performance, and perceptions of decision makers, are important aspects of evaluating effects of use which cannot be obtained from process data alone. Our discussion of process tracing methods is motivated by their relatively low utilization to date, given their potential for increasing our understanding of the effects of MSS use. However, it is also important to note that some process tracing methods are expensive, cumbersome, and time consuming from both a data collection and a data analysis point of view. The difficulties of applying this method to individuals are magnified in the case of groups [75]. Whether the investment required for this type of analysis is justified remains an open question. Furthermore, a comparative evaluation of the different process tracing approaches that could be used in research dealing with groups has not yet been conducted.

7. Concluding Comments

The central theme of our discussion has been empirical research on MSS design and the effects of MSS use. A classification scheme to organize and categorize empirical research was proposed to provide a view of the accumulated body of knowledge.

It is clear from our examination of the literature that there are substantial topical and methodological differences between research issues dealing with individual support as opposed to support for groups. The variables of interest in group support are more numerous, varied and complex. Interestingly, most of these differences are due not to the underlying technology but to the study of interaction among individuals. Therefore, independent variables such as group size and group members' characteristics, and dependent variables such as group consensus and influence behavior, are added to the list of issues to be investigated. Furthermore, the study of groups requires the development of measures for analyzing group processes. GDSS researchers have utilized common views and definitions borrowed from reference disciplines; as a result, their work can be integrated more easily. On the negative side, very little work has been done researching GDSS


design, and most of the research into effects of use has been conducted in the laboratory. There is also a lack of studies that investigate how these systems influence organizations.

One area in which we did not find any work is the link between individual and group support. Typically, group researchers compare the best individual decision (prior to the group meeting) with the supported and unsupported group decisions. However, the individual whose decision is considered in these comparisons is not supported by an MSS. Moreover, the issue of support to individuals within the group meeting, such as individuals accessing their own models and data, has not been investigated.

We have argued that DSS and ES can be viewed as complementary MSS. The ES field has developed an underlying technology that extends the types of concepts and tools that are part of MSS. We would also speculate that ES bring some additional methodological problems to the study of MSS, though the paucity of empirical research in ES makes it difficult to identify any patterns in the methodologies used. From Sviokla's [62] work we have learned that ES have influences similar to those of any computerized support system as well as their own unique characteristics. The study of the ES design process could be more complex because of the involvement of users and knowledge engineers as well as experts in the design process. It could even follow a different model than DSS implementation, since current ES design practices show that users do not participate in the initial design; this is left to the knowledge engineer and the expert. In addition, some cases require representing the knowledge of multiple experts [69], thus making ES design more complex. When considering the effects of use, added features brought about by ES, such as explanation capabilities, and some of their unique effects, such as an increase in the structure of decision making and less decision maker control, necessitate investigating a larger set of variables as compared to DSS studies. Similarly, the study of how ES influence decision quality is closely related to the quality of the expertise that went into developing the ES. This requires that a large amount of time and effort be spent (more so than for a DSS) in developing and validating a "good" ES before such a study can be conducted. There is thus a need to discuss how these factors might


influence the way MSS research should be conducted.

We have identified two problems in the recent work associated with individual MSS. The attention paid to theoretical development, and the use of organizational change and development models to guide design and implementation research, found prior to 1980 is not evident in the more recent work. Furthermore, the research evaluating the effects of MSS use has mostly been ad hoc in its selection of variables and devoid of theoretical linkages. We have pointed out the need for underlying theories to provide a fuller understanding of the relationship between decision support and performance.

Over the last 30 years we have seen improvements in the applications of computers for decision support, brought about by advances in hardware and software technology as well as by a better conceptual understanding of how to put them into use. It is obvious to us that, whether carefully researched or not, these technologies will impact and support decision making in organizations. The important issue for researchers is to study these in greater depth, understand and catalog their influences, so that we move a step closer to understanding how, when and where to best apply managerial support system technologies to improve the effectiveness of individuals and organizations [56].

References

[1] Alavi, M., An Assessment of the Concept of Decision Support Systems as Viewed by Senior-Level Executives, MIS Quarterly 6, No. 4 (December 1982) 1-9.
[2] Alavi, M. and Henderson, J., An Evolutionary Strategy for Implementing a Decision Support System, Management Science 27, No. 11 (November 1981) 1309-1323.
[3] Alavi, M. and Napier, H.A., An Experiment in Applying the Adaptive Design Approach to DSS Development, Information and Management 7, No. 1 (February 1984) 21-28.
[4] Aldag, R.J. and Power, D.J., An Empirical Assessment of Computer-Assisted Decision Analysis, Decision Sciences 17, No. 4 (Autumn 1986) 572-588.
[5] Alter, S.A., A Study of Computer-Aided Decision Making in Organizations, Ph.D. Dissertation, Sloan School, MIT, 1975.
[6] Bair, J.H., CSCW '86 Report, SIGOIS Bulletin 8, No. 3 (Summer 1987) 3-13.
[7] Benbasat, I., An Analysis of Research Methodologies, in The Information Systems Research Challenge, F. Warren McFarlan (ed.), Harvard Business School Press, Boston, MA, 1984.
[8] Benbasat, I. and Dexter, A.S., Individual Differences in the Use of Decision Support Aids, Journal of Accounting Research 20, No. 1 (Spring 1982) 1-11.
[9] Benbasat, I., Dexter, A.S., and Todd, P., An Experimental Program Investigating Colour-Enhanced and Graphical Information Presentation: An Integration of the Findings, Communications of the ACM 29, No. 11 (November 1986) 1094-1105.
[10] Benbasat, I. and Dhaliwal, J.S., A Framework for the Validation of Knowledge Acquisition, Proceedings of the 3rd Knowledge Acquisition for Knowledge-Based Systems Workshop, Boose, J.H. and Gaines, B.R. (eds.), 1988, 1.1-1.18.
[11] Benbasat, I., Goldstein, D., and Mead, M., The Case Research Strategy in Studies of Information Systems, MIS Quarterly 11, No. 3 (September 1987) 369-386.
[12] Bui, T. and Sivasankaran, T.R., GDSS Use Under Conditions of Group Task Complexity, Working Paper, US Naval Postgraduate School, 1987.
[13] Burton, A.M., Shadbolt, N.R., Hedgecock, A.P., and Rugg, G., A Formal Evaluation of Knowledge Elicitation Techniques for Expert Systems: Domain 1, Proceedings of the First European Workshop on Knowledge Acquisition for Knowledge-Based Systems, Reading University, September 1987, 1-20.
[14] Cats-Baril, W.L. and Huber, G.P., Decision Support Systems for Ill-Structured Problems: An Empirical Study, Decision Sciences 18, No. 3 (Summer 1987) 350-372.
[15] Chakravarti, D., Mitchell, A., and Staelin, R., Judgement Based Marketing Decision Models: An Experimental Investigation of the Decision Calculus Approach, Management Science 25, No. 3 (March 1979) 251-263.
[16] Chu, P.C. and Elam, J.J., Decision Process, Task Complexity, and Decision Support System Effectiveness, Working Paper, Ohio State University, July 1988.
[17] Davis, R., Buchanan, B., and Shortliffe, E., Production Rules as a Representation for a Knowledge-Based Consultation Program, Artificial Intelligence 8 (1977).
[18] DeSanctis, G. and Gallupe, R.B., A Foundation for the Study of Group Decision Support Systems, Management Science 33, No. 5 (May 1987) 589-609.
[19] Dickmeyer, N., Measuring the Effects of a University Planning Decision Aid, Management Science 29, No. 6 (June 1983) 673-685.
[20] Dickson, G.W., DeSanctis, G., and McBride, D.J., Understanding the Effectiveness of Computer Graphics for Decision Support: A Cumulative Experimental Approach, Communications of the ACM 29, No. 1 (January 1986) 40-47.
[21] Dennis, A.R., George, J.F., and Nunamaker, J.F., Group Decision Support Systems: The Story Thus Far, Working Paper, Department of MIS, University of Arizona, 1988.
[22] Dos Santos, B.L. and Bariff, M.L., A Study of User Interface Aids for Decision Support Systems, Management Science 34, No. 4 (April 1988) 461-468.
[23] Elam, J.J. and Mead, M., Can Software Influence Creativity?, forthcoming in Information Systems Research.
[24] Elam, J.J., Huber, G.P., and Hurt, M.E., An Examination of the DSS Literature (1975-1985), in Decision Support Systems: A Decade in Perspective, E.R. McLean and H.G. Sol (eds.), Elsevier Science Publishers B.V. (North-Holland), IFIP, 1986.
[25] Gaines, B.R., Rapid Prototyping for Expert Systems, in Oliff, M. (ed.), Proceedings of the International Conference on Expert Systems and the Leading Edge in Production Planning and Control, University of South Carolina, 1987, 213-241.
[26] Gallupe, R.B., Experimental Research into Group Decision Support Systems: Practical Issues and Problems, Proceedings of the Nineteenth Annual Hawaii International Conference on System Sciences, 1986, 515-523.
[27] Gallupe, R.B., DeSanctis, G., and Dickson, G.W., Computer-Based Support for Group Problem-Finding: An Experimental Investigation, MIS Quarterly 12, No. 2 (June 1988) 277-296.
[28] Ghani, J. and Lusk, E.J., The Impact of a Change in Information Representation and a Change in the Amount of Information on Decision Performance, Human Systems Management 3, No. 4 (December 1982) 270-278.
[29] Ginzberg, M.J., A Study of the Implementation Project, in Implementing Operations Research/Management Science, R.L. Schultz and D.P. Slevin (eds.), Amsterdam: North-Holland, 1979.
[30] Goslar, M.D., Green, G.I., and Hughes, T.H., Decision Support Systems: An Empirical Assessment for Decision Making, Decision Sciences 17, No. 1 (Winter 1986) 79-91.
[31] Goul, M. and Tonge, F., Project IPMA: Applying Decision Support System Design Principles to Building Expert-Based Systems, Decision Sciences 18, No. 3 (Summer 1987) 448-467.
[32] Gray, P., Group Decision Support Systems, Decision Support Systems 3 (1987) 233-242.
[33] Hastie, R., Penrod, S., and Pennington, N., Inside the Jury, Cambridge: Harvard University Press, 1983.
[34] Hayes-Roth, F., Waterman, D.A., and Lenat, D.B., Building Expert Systems, Reading, MA: Addison-Wesley, 1983.
[35] Henderson, J.C., Finding Synergy Between Decision Support Systems and Expert Systems Research, Decision Sciences 18, No. 3 (Summer 1987) 333-349.
[36] Henrion, M. and Cooley, D.R., An Experimental Comparison of Knowledge Engineering for Expert Systems and for Decision Analysis, Proceedings AAAI-87 Sixth National Conference on Artificial Intelligence, Seattle, WA, 1987, Vol. 2, 471-476.
[37] Hurt, M.E., Elam, J.J., and Huber, G.P., The Nature of DSS Literature Presented in Major IS Conference Proceedings (1980-1985), Proceedings of the Seventh International Conference on Information Systems, December 1986, 27-41.
[38] Jaikumar, R. and Bohn, R., The Development of Intelligent Systems for Industrial Use: A Conceptual Framework, Working Paper 9-786-025, Harvard Business School, Boston, 1986.
[39] Jarvenpaa, S.L., The Effect of Task Demands and Graphical Format on Information Processing Strategies, Management Science 35, No. 3 (March 1989) 285-303.
[40] Jarvenpaa, S.L., Dickson, G.W., and DeSanctis, G., Methodological Issues in Experimental IS Research: Experiences and Recommendations, MIS Quarterly 9, No. 2 (June 1985) 141-156.


[41] Jarvenpaa, S.L., Rao, V.S., and Huber, G.P., Computer Support for Meetings of Groups Working on Unstructured Problems: A Field Experiment, MIS Quarterly 12, No. 4 (December 1988) 645-666.
[42] Keen, P.G.W. and Scott Morton, M.S., Decision Support Systems: An Organizational Perspective, Addison-Wesley, Reading, MA, 1978.
[43] King, W.R. and Rodriguez, J.I., Participative Design of Strategic Decision Support Systems: An Empirical Assessment, Management Science 27, No. 6 (June 1981) 717-726.
[44] Klinker, G., Boyd, C., Genetet, S., and McDermott, J., A KNACK for Knowledge Acquisition, Proceedings AAAI-87 Sixth National Conference on Artificial Intelligence, Seattle, WA, 1987.
[45] Kraemer, K.L. and King, J.L., Computer-Based Systems for Cooperative Work and Group Decision Making, ACM Computing Surveys 20, No. 2 (June 1988) 115-146.
[46] Kraemer, K.L. and Pinsonneault, A., The Impact of Technological Support on Groups: An Assessment of the Empirical Research, Working Paper, Graduate School of Management and Public Policy Research Organization, University of California at Irvine, 1988.
[47] Le Blanc, L.A. and Kozar, K.A., The Impact of Group Decision Support System Technology on Vessel Safety, Working Paper, Indiana University, 1987.
[48] Liang, T.P., Critical Success Factors of Decision Support Systems: An Experimental Study, Data Base (Winter 1986) 3-16.
[49] Luconi, F.L., Malone, T.W., and Scott Morton, M.S., Expert Systems: The Next Challenge for Managers, Sloan Management Review 27, No. 4 (Summer 1986) 7-14.
[50] McDermott, J., R1: A Rule-Based Configurer of Computer Systems, Artificial Intelligence 19, No. 1 (1982).
[51] McIntyre, S.H., An Experimental Study of the Impact of Judgement-Based Marketing Models, Management Science 28, No. 1 (January 1982) 17-33.
[52] Mahmood, M.A. and Medewitz, J.N., Impact of Design Methods on Decision Support Systems Success: An Empirical Assessment, Information and Management 9, No. 3 (October 1985) 137-151.
[53] Nunamaker, J.F., Applegate, L.M., and Konsynski, B.R., Facilitating Group Creativity: Experience with a Group Decision Support System, Journal of Management Information Systems 3, No. 4 (Spring 1987) 5-19.
[54] O'Leary, D.E., Validation of Expert Systems with Applications to Auditing and Accounting Expert Systems, Decision Sciences 18, No. 3 (Summer 1987) 468-486.
[55] Scott Morton, M.S., Management Decision Systems, Boston, MA: Division of Research, Graduate School of Business Administration, Harvard University, 1971.
[56] Scott Morton, M.S., The State of the Art of Research, in The Information Systems Research Challenge, F. Warren McFarlan (ed.), Harvard Business School Press, Boston, MA, 1984.
[57] Sharda, R., Barr, S.H., and McDonnell, J.C., Decision Support System Effectiveness: Review and Empirical Test, Management Science 34, No. 2 (February 1988) 139-159.
[58] Shaw, M.L.G. and Woodward, J.B., Validation of a Knowledge Support System, in Boose, J.H. and Gaines, B.R. (eds.), Proceedings of the Second AAAI Knowledge Acquisition for Knowledge-Based Systems Workshop, October 1987, 18.0-18.15.


[59] Silver, M.S., User Perceptions of Decision Support System Effectiveness: An Experiment, Journal of Management Information Systems 5, No. 1 (Summer 1988) 51-65.
[60] Sprague, R.H., A Framework for the Development of Decision Support Systems, MIS Quarterly 4, No. 4 (December 1980) 1-26.
[61] Steeb, R. and Johnston, S.C., A Computer-Based Interactive System for Group Decisionmaking, IEEE Transactions on Systems, Man, and Cybernetics SMC-11, No. 8 (August 1981) 544-552.
[62] Sviokla, J.J., PLANPOWER, XCON, and MUDMAN: An In-Depth Analysis Into Three Commercial Expert Systems in Use, Unpublished D.B.A. Dissertation, Graduate School of Business Administration, Harvard University, 1986.
[63] Thompson, J.D., Organizations in Action, New York: McGraw-Hill, 1967.
[64] Todd, P. and Benbasat, I., An Experimental Investigation of the Impact of Computer Based Decision Aids on the Process of Preferential Choice, Working Paper 88-MIS-026, University of British Columbia, June 1988.
[65] Todd, P. and Benbasat, I., Process Tracing Methods in Decision Support System Research: Exploring the Black Box, MIS Quarterly 11, No. 4 (December 1987) 493-512.
[66] Turban, E. and Watkins, P.R., Integrating Expert Systems and Decision Support Systems, Transactions of the Fifth International Conference on Decision Support Systems, San Francisco, April 1985, 52-63.
[67] Turban, E. and Watkins, P.R., Integrating Expert Systems and Decision Support Systems, MIS Quarterly 10, No. 2 (June 1986) 121-136.
[68] Turoff, M. and Hiltz, S.R., Computer Support for Group Versus Individual Decisions, IEEE Transactions on Communications COM-30, No. 1 (January 1982) 82-91.
[69] Vedder, R.G. and Mason, R.O., An Expert System Application for Decision Support in Law Enforcement, Decision Sciences 18, No. 3 (Summer 1987) 400-414.
[70] Waterman, D.A., A Guide to Expert Systems, Reading, MA: Addison-Wesley, 1986.
[71] Watson, R., DeSanctis, G., and Poole, M.S., Using a GDSS to Facilitate Group Consensus: Some Intended and Unintended Consequences, MIS Quarterly 12, No. 3 (September 1988) 463-478.
[72] Welbank, M., A Review of Knowledge Acquisition Techniques for Expert Systems, Martlesham Consultancy Services, British Telecom Research Laboratories, England, 1987.
[73] Yu, V.L., Fagan, L.M., Wraith, S.M., Clancey, W.J., Scott, A.C., Hanigan, J.F., Blum, R.L., Buchanan, B.G., and Cohen, S.N., Antimicrobial Selection by Computer: A Blinded Evaluation by Infectious Disease Experts, Journal of the American Medical Association 242 (1979) 1279-1282.
[74] Zachary, W., A Cognitively Based Functional Taxonomy of Decision Support Techniques, Human-Computer Interaction 2, No. 1 (1986) 25-63.
[75] Zigurs, I., Interaction Analysis in GDSS Research: Description of an Experience and Some Recommendations, Working Paper MISRC-WP-88-04, Management Information Systems Research Center, Carlson School of Management, University of Minnesota, November 1987.
[76] Zigurs, I., Poole, M.S., and DeSanctis, G., A Study of Influence in Computer-Mediated Decision Making, MIS Quarterly 12, No. 4 (December 1988) 645-666.
[77] Zmud, R.W., A Specification Structure and Measurement Strategies for the Nonrepeating Group Decision Context, Working Paper, Information and Management Science, College of Business, Florida State University, November 1987.