J. SYSTEMS SOFTWARE 1994; 24:267-275

Use of a Group Support System to Evaluate Management Information System Effectiveness

Abdelhaleem Ashqar, Brian J. Reithel, Milam W. Aiken, and Ashraf Shirani
Department of Management and Marketing, School of Business Administration, University of Mississippi, University, Mississippi

Group support systems (GSSs) have been used by a variety of organizations to perform a variety of tasks, including generating ideas about business problems, resolving group conflicts, planning organizational goals, editing group documents, and voting on issues. Although GSSs have rarely been used for information systems development, a GSS can be used throughout the development life cycle. This article focuses on the last phase of the traditional development life cycle by showing how a GSS may be used for evaluating the effectiveness of a management information system. A case study involving the evaluation of two computer systems shows that participants were significantly more satisfied with the GSS than with a verbal meeting and that, although fewer comments were generated in the GSS session, more high-quality comments were generated. Synergy may be achieved by integrating traditional evaluation techniques with a GSS.

1. INTRODUCTION
The capital investment required to design and implement an organizational computer system, otherwise known as a management information system (MIS), and the expenditures in personnel and supplies necessary to use these systems have increased sharply (Appleton, 1986; McFarlan, 1981). Yet, the increased expenditures have been associated with a decrease in managerial satisfaction with the results of these large investments, and increasing economic costs often have not been matched by increasing
Address correspondence to Brian Reithel, Dept. of Management and Marketing, School of Business Administration, University of Mississippi, University, MS 38677.

© Elsevier Science Inc.
655 Avenue of the Americas, New York, NY 10010
economic returns (Ahituv, 1980; Appleton, 1986; DeSanctis and Gallupe, 1985; King and Rodriguez, 1978; Lederer and Sethi, 1991; Mahmood, 1991; Matlin, 1979; Srinivasan, 1985). Managerial satisfaction and economic returns are not the only measures of system success, however. Evaluation of an organization's computer system effectiveness is often difficult because of the many complex, tangible, and intangible effectiveness measurement factors. In addition, there are multiple and conflicting viewpoints of system evaluators. This latter problem, the differing perspectives of end users and evaluators, is perhaps one of the most difficult to overcome. Any technique or technology that enhances communicating and sharing of these perspectives may improve not only the evaluation of an MIS, but also the definition of system requirements. One technology that improves communication among organization personnel is a group support system (GSS) (Chervany and Dickson, 1970; Nunamaker et al., 1991). The purpose of this article is to demonstrate how a GSS can be used to improve evaluation of an MIS. GSSs have been used most frequently to support business meetings; the systems have been used only rarely for software engineering. Although a GSS can be used at several different stages in the software development life cycle, this article concentrates on its use in the final stage of system evaluation. First, we provide an overview of typical, nonsupported approaches for evaluating system effectiveness. Next, we describe major functional groups of personnel that are involved in system evaluation. Finally, GSSs are introduced as an adjunct technology to support system evaluation. A case study involving the evaluation of two information systems demonstrates the advantages of using a GSS.
2. EVALUATION OF MANAGEMENT INFORMATION SYSTEMS

Evaluation of an MIS can be conducted from two different perspectives: the usage-centered view or the system resource view (Chandler, 1982). In general, both views should converge in assessing system effectiveness. The usage-centered view focuses primarily on the user domain. Here, effectiveness is determined by comparing system performance to objectives of the user; throughput, reliability, and response time are common measures. Therefore, we have to determine the task objectives and then develop criterion measures to assess the degree of achievement of the objectives, e.g., comparing actual costs and benefits with budgeted costs and benefits. The system resource view focuses on the computer system domain. Effectiveness is determined by attainment of a normative state and is conceptualized in terms of resource viability rather than in terms of specific task objectives (Hamilton and Chervany, 1981a). For example, in terms of technological resources, effectiveness might be indicated by the quality of the system or service levels, or performance may be measured in terms of resource utilization, cost, and efficiency. The many approaches currently used to assess system effectiveness include the following (Hamilton and Chervany, 1981a):

1. Compliance audit. This focuses on assessing the adequacy and completeness of controls for system inputs, processing, security, and access.
2. Budget performance review. This focuses on compliance with a predetermined budget expenditure level for the MIS development or operations process.
3. Personnel productivity measurement. This is "the production capability of MIS personnel, typically assessed in terms of productivity," e.g., lines of code per unit time for programmers.
4. Computer performance evaluation. This typically assesses computer hardware in terms of performance efficiencies and bottlenecks that limit production, e.g., actual throughput.
5. Service level monitoring.
This focuses on assessing the information and support provided to the user based on the terms established between MIS and user personnel. It includes turnaround times, response times, and error rates.
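The service-level measures just described (turnaround times, response times, and error rates) reduce to simple arithmetic over a transaction log. A minimal sketch in Python, in which the log format is a hypothetical assumption rather than anything specified in the article:

```python
# Sketch: computing service-level measures from a hypothetical transaction log.
# The record format (response time in seconds, error flag) is an illustrative
# assumption, not part of the original article.

def service_level_summary(transactions):
    """Return average response time and error rate for a list of
    (response_seconds, had_error) tuples."""
    n = len(transactions)
    avg_response = sum(t[0] for t in transactions) / n
    error_rate = sum(1 for t in transactions if t[1]) / n
    return avg_response, error_rate

log = [(1.2, False), (0.8, False), (3.5, True), (1.0, False)]
avg, err = service_level_summary(log)
print(f"average response: {avg}s, error rate: {err:.0%}")
```

In practice these figures would be compared against the service levels agreed between MIS and user personnel.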
6. User attitude survey. This focuses on assessing the user's perceptions of the information and support provided by the MIS function. These surveys assess many aspects, such as quality of reports, timeliness, quality of service, and MIS-user communication.
7. Postinstallation review. The postinstallation review (PIR) focuses on assessing whether the system meets the requirements definition. Does the system do what it is designed to do? The scope of the PIR may include an assessment of the information and support provided, an analysis of the actual use process, and cost/benefit analysis of the system effects on user performance.
8. Cost/benefit analysis. This quantifies the system's effect on performance in terms of dollars, e.g., direct cost savings. It is often used in capital budgeting to assess the return on investment.
9. Multiple criteria approach. This accepts the infeasibility of an optimal solution for a conflicting goal structure and produces a satisfactory result (Chandler, 1982). The approach is based on three stages, and system evaluation is viewed as being interactive, with each iteration involving the invocation of the three stages to improve system performance:
   a. System evaluation stage. This produces performance statistics for the resources in the aggregate and for them with respect to identified users.
   b. User goal evaluation stage. This ascertains the degree of user goal achievement and then determines guidelines for altering the current system configuration to improve performance.
   c. Design evaluation stage. This ascertains the satisfaction of the current design with respect to both user and system criteria.

The multiple criteria approach is flexible enough to handle the dynamics of the information system environment and map their influence into the model. It provides a tool to analyze trade-offs between goals, applications, and performance.
Furthermore, it allows for the investigation of the impact of environmental and design policy decisions on information system performance and user goal attainment. System evaluation is complicated not only because of a large selection of evaluation techniques, but also because of other factors, including the following (Chandler, 1982; Hamilton and Chervany, 1981b):

1. An expanding range of users and applications with a corresponding expanding set of diverse performance goals and resource requirements.
2. A growing demand to achieve conflicting performance objectives, e.g., time versus cost versus effectiveness.
3. Objectives and measures of accomplishment are often inadequately defined initially. Furthermore, the stated objectives frequently do not represent the real ones because underlying aims of the users involved may go unstated.
4. Efficiency-oriented and easily quantified objectives and measures are typically used, whereas effectiveness-oriented and qualitative objectives and measures are ignored. In many cases, measures of intangible qualitative effects of systems are not available.
5. Individual perceptions may differ on what the objectives and measures are. Different evaluator viewpoints may arise during assessment of system effectiveness because no mutual agreement among the participants has been established concerning MIS objectives and measures.

From the foregoing discussion, it can be seen that most problems in evaluating system effectiveness result from the varying opinions, objectives, and motives of all of the personnel involved. Therefore, one way to improve system evaluation (as well as system definition) is to improve communication among all of the stakeholders in the system. These stakeholders (people who have some stake in the system) are described in the following section.

3. SYSTEM STAKEHOLDERS AND THEIR VIEWPOINTS
Evaluations of an MIS tend to be subjective and are influenced by the perception of objectives. Therefore, evaluation of a system can be a source of disagreement and conflict among different system stakeholders involved in MIS implementation. A presentation of each stakeholder functional group and a short discussion of its viewpoints in terms of MIS evaluation are presented below.

1. User personnel include those for whom the system is being developed and maintained. They can be classified into primary users, who make decisions based on the MIS outputs and intermediaries, and secondary users, who provide and maintain data for the system. Users are most concerned with accuracy, reliability, and timeliness in responding to a request, and assistance in using the system (Hamilton and Chervany, 1981b). Neumann and Segev (1979) suggest that content is the dominant information characteristic, compared with accuracy, frequency, and recency of information. In general, users are concerned with organizational validity, or how well the information system actually meets the needs of an organization.
2. MIS development personnel include system analysts, programmers, and users who have been assigned the task of developing and modifying the system. According to Hamilton and Chervany (1981b), a system is often viewed as effective or successful by developers when it is developed, is installed, and works. King (1978) suggests that developers assess system effectiveness by

   - its effect on the information provided
   - the logical organization of the functions it performs to meet the requirement definitions
   - narrowly defined cost-effectiveness criteria.
King (1978) also suggests that designers do not consider the effect of the system on the user's job. In general, they are concerned with technical validity.
3. Management personnel are responsible for planning, organizing, and controlling the system, and also for the development and maintenance effort. Based on a survey of 305 MIS managers, Hallam and Striven (1976) found that the five highest ranking objectives, in descending order of importance, were to:

   - meet deadlines
   - minimize costs
   - minimize turnaround time
   - maximize training of MIS personnel
   - maintain a stable workload
Norton and Rau (1978) characterized the concerns of senior management about system effectiveness: “General management values the measures of product effectiveness and of the output of the system development process more highly than process efficiency measures, and they are more keenly aware of security and cost performance than documentation compliance and system development schedules compliances.” 4. Internal audit personnel assist management in evaluating the effectiveness of the system and controls. Their concerns are more oriented toward application control and compliance, rather than general controls. Schwarzbach (1976) identified the most important areas of concern to internal auditors as follows (in descending order of importance):
   - the accuracy, timeliness, and usefulness of information provided by MIS reports
   - the MIS design process, e.g., whether the system was designed with users in mind, or whether the design specifications were met.
4. USE OF A GSS TO EVALUATE MIS EFFECTIVENESS
MIS users, development personnel, management personnel, and internal audit personnel all have different perspectives on an organization's computer system. Several evaluation techniques are available for judging the success of these systems, but most of these techniques are restricted to certain viewpoints of what makes a system effective or successful. Until a consensus is reached among all of the system stakeholders on what is meant by "success," such evaluation techniques may be misapplied. Therefore, one goal of system evaluation should be to reach this consensus before judging the system. One technology that has been proven to enhance group communication and consensus is the group support system (Aiken et al., 1991; Dennis et al., 1988; DeSanctis and Gallupe, 1987; Kraemer and King, 1988). Groups using GSSs have experienced increased satisfaction, better decision outcomes, and less meeting time over verbal, nonsupported meetings (Nunamaker et al., 1991). Figure 1 shows that the presence of a GSS, along with many other factors, can influence group processes (participation, conflict, anonymity, etc.) and outcomes (satisfaction, meeting time, number of comments, consensus, etc.). The same theoretical justification can be used to supplant traditional verbal software engineering meetings with GSS meetings.

Figure 1. A GDSS research framework (Dennis et al., 1988).

A GSS is typically based on a local area network in a decision room environment where group members can see each other. Such meeting rooms allow groups ranging from 4 or 5 people (as with the University of Minnesota's SAMM GSS) to 55 people (as with the University of Mississippi's Electronic Meeting Room; Figure 2) to exchange comments, edit group documents, create a list of solution alternatives, vote on alternatives, generate plans, and conduct many other group tasks (Nunamaker et al., 1991). Participants can communicate with each other via the computer and can present their viewpoints easily without being identified by their superiors or workmates, and they can refute or discuss others' ideas or viewpoints. A large video screen in the front of the room may be used to display a summary and analysis of data. A GSS can be used to allow all functional groups to participate in the evaluation of MIS effectiveness. The participation of all functional groups allows for a comprehensive evaluation and gives the MIS developers a chance to closely understand the needs of all concerned parties. The user's attitude and postinstallation review evaluation techniques are particularly suited to the use of a GSS. However, a GSS
may also be used in some respect with any of the evaluation techniques. By using a GSS, many advantages can be achieved in system evaluation, and some of the main issues facing management can be resolved. According to Huber (1984), there is a need for the development of improved means of aiding group processes: decision makers find themselves faced with an increasing number of lengthy meetings needed to discuss information-laden issues, but they tend to resist attending such meetings because they take time away from other critical activities. Others have stated the problem differently, saying that the productivity loss in verbal group meetings results from information loss, information distortion, or suboptimal decision making (Nunamaker et al., 1991). A GSS can reduce these losses by allowing anonymity of participants' contributions to the discussion, facilitating data base searches and analyses in order to answer questions, and enabling individual inputs to be displayed on the public screen for open discussion (Chervany and Dickson, 1970). Turoff and Hiltz (1982) found that the anonymity of electronic communication increases the degree of interpersonal exchange and reduces the probability of any one member dominating the meeting. It also reduces extreme influence of high-status members, lack of acknowledgement of the ideas of low-status members, low tolerance of minority or controversial opinions, inability to access information that is down the hall or in the computer during the course of the group meeting, and undue attention to social activities relative to the task activities of the group. Although anonymity reduces evaluation apprehension and the pressure to conform, it may also increase "free riding" (not participating in the discussion because nobody knows who has submitted a particular comment). Anonymity encourages members to challenge others, thereby increasing process gains by catching errors and thus leading to a more objective evaluation.
Finally, anonymity provides a low-threat environment in which less-skilled members can contribute and learn. The GSS aims to improve the process of group collaborative work by removing common communication barriers, providing techniques for structuring decision analysis, and systematically directing the pattern, timing, or content of the discussion. Also, a GSS changes the interpersonal exchange that occurs as a group proceeds through the problem-solving process. A GSS provides the group with opportunities to speed up, change the content, or change the direction of message exchange. It presents groups with new possibilities and approaches for making decisions by removing common barriers, using systematic techniques, and imposing rules that control the pattern, timing, or content of information exchange. GSS technology aims to improve the outcome of meetings; this could be measured on many dimensions, including decision quality, timeliness, satisfaction with the decision, cost or ease of implementation, member commitment to implementation, or the group's willingness to work together in the future. Briefly, the objective of GSS technology should be to aid in the selection of either the correct solution or the socially preferred solution. Rutter and Robinson (1981) found that when people do not meet in the same room, the discussion encourages open input of creative ideas, discovery of optimal solutions, and selection of an alternative based on its merits rather than on compromise. Finally, a GSS might help in avoiding the hidden agendas that may be promoted at the expense of better alternatives. There may be some disadvantages to using a GSS, however. Turoff and Hiltz (1982) concluded that there is more task-focused communication and less joking and laughing in GSS-supported groups. People may feel more comfortable talking than typing on a keyboard. Also, people may tend to be excessively critical of each other's ideas when they communicate electronically. Rutter and Robinson (1981) indicated that social cues are lost when people do not meet in the same room. In general, Hiltz and Turoff (1985) pointed out that high satisfaction and high decision quality cannot be simultaneously achieved. Therefore, the group must choose which goal is more important: the quality of the decision or the satisfaction of the group. However, the long-term purpose of GSSs is to improve the quality and efficiency of organizational meetings.
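The anonymity mechanism discussed above can be sketched in a few lines: comments are stored without author identity and shuffled before display, so submission order cannot reveal who typed what. The class and sample comments are illustrative assumptions, not a description of any particular GSS product.

```python
# Sketch: how a GSS can keep contributions anonymous. The author is known
# at submission time but never stored, and the display order is shuffled
# so it cannot be matched to submission order.
import random

class AnonymousDiscussion:
    def __init__(self, seed=None):
        self._comments = []
        self._rng = random.Random(seed)

    def submit(self, author, text):
        # Deliberately discard the author; only the text is kept.
        self._comments.append(text)

    def public_view(self):
        # Shuffle a copy so the stored order is never exposed.
        view = list(self._comments)
        self._rng.shuffle(view)
        return view

d = AnonymousDiscussion(seed=1)
d.submit("manager", "Response times are too slow at month end.")
d.submit("clerk", "Error messages are hard to understand.")
print(d.public_view())
```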
5. A SCENARIO ILLUSTRATING GSS USE
The following hypothetical scenario illustrates how a GSS can be used to evaluate MIS effectiveness. A group of users decides to evaluate the effectiveness of their inventory control system by holding a discussion on a GSS. The group consists of primary users (those making decisions based on MIS outputs, and intermediaries, e.g., staff who filter or interpret the output) and secondary users (personnel who provide and maintain data for the system, e.g., data feeders, data entry clerks, operators, and control clerks, but do not benefit from MIS outputs in performing their tasks). In addition, one manager will attend the meeting.
The manager elects to use the University of Mississippi's Electronic Meeting Room (Figure 2). The manager holds a presession meeting with the GSS facilitator (a person who is experienced with the system and will conduct the meeting) to establish the best match and use of the electronic support tools to meet the group's needs, clarify the role of the GSS, and manage group expectations. During the GSS session, participants interact with a variety of tools to support the inventory control system evaluation. The group begins with Brainstorm, a tool that allows participants to exchange comments anonymously and simultaneously. Each participant enters his or her ideas about how well the inventory control system meets personal expectations. In addition, each participant can read all of the others' comments about the MIS. Next, the group uses Organizer to identify particular issues or categories of ideas that arose in the discussion with Brainstorm and put those comments into the appropriate categories. For example, comments dealing with system accuracy could go into the "Accuracy" category. Finally, each member in the group uses Rank to privately rank the issues identified with Organizer.
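The Brainstorm-to-Organizer step, sorting free-form comments into issue categories, can be sketched as a simple keyword match. The categories, keywords, and sample comments below are hypothetical; in practice, participants assign comments to categories by judgment rather than by keyword.

```python
# Sketch: grouping brainstormed comments into issue categories, as a group
# might do with the Organizer tool. Keywords and comments are illustrative
# assumptions, not taken from the article.

categories = {
    "Accuracy":   ["accurate", "wrong", "error"],
    "Timeliness": ["slow", "late", "response"],
}

def organize(comments):
    buckets = {name: [] for name in categories}
    buckets["Miscellaneous"] = []  # catch-all, as in the scenario's rankings
    for comment in comments:
        text = comment.lower()
        for name, keywords in categories.items():
            if any(k in text for k in keywords):
                buckets[name].append(comment)
                break
        else:
            buckets["Miscellaneous"].append(comment)
    return buckets

comments = ["Stock counts are often wrong.",
            "Reports arrive too late.",
            "Screens are cluttered."]
result = organize(comments)
```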
The facilitator shows a summary of the group's rankings on the public display in the front of the room and then allows the group to rank the issues again to try to reach a consensus. The meeting lasts for 90 minutes, during which time the group generates 385 comments concerning the success or effectiveness of the inventory control system. As a result of the vote, the group has ranked the following objectives according to their importance (greatest importance first):

1. Accuracy
2. Reliability
3. Timeliness in responding to a request
4. Assistance
5. Cost
6. Miscellaneous (a conglomeration of all other issues)
By use of the GSS, the group has reached a consensus on how the inventory control system should be improved, and all of the stakeholders can "buy in" to the decision. In addition, group members had a chance to anonymously express feelings that may have been previously unspoken.
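The Rank tool's group summary can be thought of as ordering alternatives by their average rank across members. A minimal sketch with hypothetical individual rankings (not the scenario's actual data):

```python
# Sketch: aggregating private rankings into a group summary by average rank,
# as the Rank tool's summary does. The sample rankings are illustrative
# assumptions.

def average_ranks(rankings):
    """rankings: list of dicts mapping alternative -> rank (1 = best).
    Returns alternatives sorted by average rank, best first."""
    totals = {}
    for ranking in rankings:
        for alt, rank in ranking.items():
            totals[alt] = totals.get(alt, 0) + rank
    n = len(rankings)
    return sorted(totals, key=lambda alt: totals[alt] / n)

member_rankings = [
    {"Accuracy": 1, "Reliability": 2, "Cost": 3},
    {"Accuracy": 1, "Reliability": 3, "Cost": 2},
    {"Accuracy": 2, "Reliability": 1, "Cost": 3},
]
print(average_ranks(member_rankings))
```

Re-ranking after the summary is displayed simply repeats this aggregation on a new set of individual rankings.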
Figure 2. The University of Mississippi’s Electronic Meeting Room.
Figure 3. Brainstorm screen showing the edit feature. The screen lists comments such as:

1. How should the telephone registration system be improved?
2. They need to add a 1-800 line so that we can use it without great expense.
3. Sometimes when I sign up for a class it takes a long time to verify my schedule.
4. It should be open longer.
5. It should confirm changes made to your schedule without having to go back in the menu, perhaps reciting to you your entire schedule at the end of all your changes. It should also be able to check if we fulfill all the prerequisites.
Once the list of evaluation issues has been identified (as shown above), the group can use the Vote tool to assess whether the system performs adequately in each of the areas on the list. Each group member would vote "yes" or "no" on the system's performance in each of the areas, and the voting results could be used as the basis for prioritizing future modifications to the inventory control software. As group members become accustomed to the use of the GSS software for evaluation of information systems that have already been delivered, the GSS could also be used to develop a list of system evaluation issues to guide the construction of new systems. After a new system has been delivered, another GSS session could be conducted to come up with a postdelivery list of system evaluation issues. The postdelivery list could be compared with the predelivery list to identify modifications needed based on the users' actual hands-on experience with the working software and/or modifications driven by changes in the business environment. Thus, the use of the GSS for system evaluation can become the critical component in establishing the type of feedback loop needed in today's demanding quality-conscious environment.

6. A SYSTEM EVALUATION CASE STUDY
Twenty undergraduate students from a management information systems class used the University of Mississippi's GSS and a verbal meeting to discuss possible improvements to two different information systems: Lotus 1-2-3 and the University's telephone-based class registration system (Figure 3). The students were divided into two groups, Group I and Group II, in order to control the effect of the order of discussion environment (GSS first and then verbal; verbal first and then GSS) on the evaluation discussions. While Group I used the GSS to discuss improvements to Lotus 1-2-3, Group II met in the classroom to verbally discuss improvements to Lotus 1-2-3. Both groups spent 20 minutes evaluating the Lotus 1-2-3 software, followed by 5 minutes to try to rank the five most important suggested improvements that had emerged during the evaluation session. Next, the two groups exchanged rooms (Group I now in a verbal meeting, Group II in a GSS meeting) and discussed improvements to the telephone-based class registration system for 20 minutes. Then each group spent 5 minutes trying to rank the five most important changes for the class registration system.

In both verbal meetings, the students were unable to complete a ranking of the five most important changes in the time allocated. On the other hand, the students were able to complete their rankings when they used the GSS software. Tables 1 and 2 show the group summary rankings generated by the GSS software for each system discussed in the GSS meeting room. In Table 1, six people in the group ranked "speed" first in importance, two people ranked "speed" third in importance, one person ranked it fourth, and one person ranked it fifth. Two people ranked "time periods" first in importance, and so on. The total for each row is 10, the number of people in each GSS group session. The ranking summary lists each alternative in order of its average ranking. On average, the group ranked "speed" highest and "sound" lowest in importance. If everybody in the group were in perfect agreement on the rankings, the summary would show a diagonal line of numbers from the top left to the bottom right. However, as is frequently the case, there is considerable disagreement in both tables.

Table 1. Ranking of Changes for Telephone Registration System (alternatives, listed by average ranking: Speed, Time periods, User friendly, Human on phone, Sound; columns give the number of members assigning each alternative rank 1 through 5, with each row summing to 10)

Table 2. Ranking of Changes for Lotus 1-2-3 (alternatives, listed by average ranking: Performance, User friendly, Functions, Price, Help; columns give the number of members assigning each alternative rank 1 through 5, with each row summing to 10)

Table 3. Satisfaction with Verbal and GSS Meetings

                        Lotus 1-2-3 (n = 10)   Telephone (n = 10)   Total (n = 20)
GSS average                  4.0                    4.3                  4.15
  Standard deviation         0.471                  0.675                0.587
  Minimum-maximum            3-5                    3-5                  3-5
Verbal average               2.3                    2.7                  2.5
  Standard deviation         0.823                  0.823                0.827
  Minimum-maximum            1-3                    2-4                  1-4

Scale: 1, extremely dissatisfied; 2, moderately dissatisfied; 3, neutral; 4, moderately satisfied; 5, extremely satisfied.

Students were asked to record their satisfaction with the GSS and verbal meetings for use in system evaluation on a scale from 1 (extremely dissatisfied) to 5 (extremely satisfied). As shown in Table 3,
students expressed significantly more satisfaction with the GSS than with verbal meetings (P < 0.0001). The average satisfaction with the GSS was 4.15, and the average satisfaction with the verbal meeting was 2.5. Table 4 shows the number of comments generated in each meeting. In both cases, students expressed more comments in verbal meetings than in the GSS meetings. However, students made higher quality comments (as evaluated by an objective reviewer) when using the GSS. Students generated fewer comments when using the GSS because of their relatively slower typing speeds (as compared to talking). However, as group sizes increase, the parallel communication allowed by a GSS should compensate for the inefficiency of typing rather than talking.
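The averages, standard deviations, and ranges reported in Table 3 are ordinary summary statistics over the 1-to-5 ratings. A sketch using hypothetical ratings, since the study's individual responses are not reported in the article:

```python
# Sketch: summarizing 1-5 satisfaction ratings as in Table 3.
# The rating lists below are illustrative assumptions, not the study's data.
import statistics

def summarize(ratings):
    return (statistics.mean(ratings),
            statistics.stdev(ratings),  # sample standard deviation
            min(ratings), max(ratings))

gss_ratings = [4, 4, 5, 4, 3]
verbal_ratings = [2, 3, 2, 3, 1]
print("GSS:", summarize(gss_ratings))
print("Verbal:", summarize(verbal_ratings))
```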
7. CONCLUSION

Evaluation of effectiveness is an integral part of the information resources management function. MIS effectiveness is of concern not only to top management, but also to the user, the developer, and the internal audit personnel involved in MIS implementation. This article has shown how a GSS can be used for evaluating an MIS. With this technique, all functional groups may become involved in the evaluation, which allows top management to get an accurate assessment of the system as well as a realistic measurement of the system's goals and objectives. In addition, a case study shows that group members prefer the GSS over verbal meetings and may generate more high-quality comments in a GSS.
Table 4. Comments in Verbal and GSS Meetings

                          Lotus 1-2-3    Telephone
GSS comments                  51            67
  High-quality comments       18            23
Verbal comments              100            70
  High-quality comments       15             6
REFERENCES

Ahituv, N., A Systematic Approach Toward Assessing the Value of an Information System, MIS Quart. 4, 61-75 (1980).
Aiken, M., Liu Sheng, O., and Vogel, D., Integrating Expert Systems with Group Decision Support Systems, ACM Trans. Inf. Syst. 9, 75-95 (1991).
Appleton, D., Very Large Projects, Datamation, 63-70 (January 15, 1986).
Chandler, J., A Multiple Criteria Approach for Evaluating Information Systems, MIS Quart. 6, 61-74 (1982).
Chervany, N., and Dickson, G., Economic Evaluation of Management Information Systems: An Analytical Framework, Decis. Sci. 1, 296-308 (1970).
Dennis, A., George, J., Jessup, L., Nunamaker, J., and Vogel, D., Information Technology to Support Electronic Meetings, MIS Quart. 12, 591-624 (1988).
DeSanctis, G., and Gallupe, R., GDSS: A New Frontier, Data Base, 3-10 (1985).
DeSanctis, G., and Gallupe, R., A Foundation for the Study of GDSS, Manag. Sci. 33, 589-609 (1987).
Hallam, S., and Striven, D., EDP Objectives and the Evaluation Process, Data Manag. 14, 40-42 (1976).
Hamilton, S., and Chervany, N., Evaluating Information System Effectiveness, Part 1: Comparing Evaluation Approaches, MIS Quart. 5, 55-69 (1981a).
Hamilton, S., and Chervany, N., Evaluating Information System Effectiveness, Part 2: Comparing Evaluator Viewpoints, MIS Quart. 5, 79-86 (1981b).
Hiltz, S., and Turoff, M., Structuring Computer-Mediated Communication Systems to Avoid Information Overload, Commun. ACM 28, 680-689 (1985).
Huber, G., Issues in the Design of Group Decision Support Systems, MIS Quart. 8, 195-204 (1984).
King, R., Automated Welfare Client-Tracking and Service Integration: The Political Economy of Computing, Commun. ACM 21, 484-493 (1978).
King, W. R., and Rodriguez, J. I., Evaluating Management Information Systems, MIS Quart. 2, 43-51 (1978).
Kraemer, K., and King, J., Computer-Based Systems for Cooperative Work and Group Decision Making, ACM Comput. Surv. 20, 117-146 (1988).
Lederer, A. L., and Sethi, V., Critical Dimensions of Strategic Information Systems Planning, Decis. Sci. 22, 104-119 (1991).
Mahmood, M. A., A Comprehensive Model for Measuring the Potential Impact of Information Technology on Organizational Strategic Variables, Decis. Sci. 22, 869-897 (1991).
Matlin, G., What Is the Value of Investment in Information Systems? MIS Quart. 3, 5-34 (1979).
McFarlan, M., Portfolio Approach to Information Systems, Harvard Bus. Rev. 59, 142-159 (1981).
Neumann, S., and Segev, E., User Evaluation of Information Characteristics, UCLA Working Paper #4.79, University of California at Los Angeles, Los Angeles, California, 1979.
Norton, D., and Rau, K., A Guide to EDP Performance Management, QED Information Sciences, Wellesley, Massachusetts, 1978.
Nunamaker, J., Dennis, A., Valacich, J., Vogel, D., and George, J., Electronic Meeting Systems to Support Group Work, Commun. ACM 34, 40-61 (1991).
Rutter, D., and Robinson, B., An Experimental Analysis of Teaching by Telephone: Theoretical and Practical Implications for Social Psychology, in Progress in Applied Social Psychology, Wiley, New York, 1981.
Schwarzbach, H., Auditing Management Information Systems, Ph.D. Thesis, University of Colorado, Boulder, Colorado, 1976.
Srinivasan, A., Alternative Measures of System Effectiveness: Associations and Implications, MIS Quart. 9, 243-253 (1985).
Tripp, R., Managing the Political and Cultural Aspects of Large-Scale MIS Projects: A Case Study of Participative Systems Development, Inf. Res. Manag. J. 4, 2-13 (1991).
Turoff, M., and Hiltz, S., Computer Support for Group versus Individual Decisions, IEEE Trans. Commun. 30, 82-90 (1982).