Evaluation and Program Planning, Vol. 16, pp. 119-129, 1993
Printed in the USA. All rights reserved.
0149-7189/93 $6.00 + .00
Copyright © 1993 Pergamon Press Ltd.

EVALUATING PUBLIC SECTOR INFORMATION SYSTEMS:
Satisfaction Versus Impact

BRUCE ROCHELEAU
Northern Illinois University
ABSTRACT

Police information systems are evaluated using both satisfaction and impact measures. Police reported high rates of satisfaction with their systems, but, with the exception of reporting, little or no impact in areas such as productivity, personnel allocation, and other forms of decision making. The dynamics of satisfaction are explored. Nearly half of the departments report that their satisfaction has changed. A new classification is developed that takes into consideration both satisfaction and the nature of the change. It is argued that this measure would help to identify organizations where users are in the process of changing their attitudes toward the system. Implications for the evaluation of information systems are discussed.
The evaluation of information systems is still in its infancy. Until the development of the microcomputer and end-user computing, information systems were viewed as sources of information for evaluating the direct services of organizations but were rarely evaluated themselves.¹ For example, the community mental health field developed some excellent guidelines for integrated information management systems in the 1970s. These systems were viewed as essential to improving management by making possible cost-effectiveness and other forms of accountability (Smith & Sorensen, 1974). Even in the private sector, evaluation of information systems was rare, perhaps because information systems had low prestige and were not viewed as having an important impact on organizational missions (Lucas, 1984). One reason for this low prestige was that information systems were limited mostly to routine transaction processing and reporting functions, and thus were not viewed as relevant to the major strategic concerns of private firms (Saunders & Scamel, 1986). The situation has changed dramatically over the last decade in the private sector. The development of the concept of a decision support system (Keen & Morton, 1978) created the basis for a strategically important information system. More importantly, the growth of microcomputers and end-user computing has increased access to computers so that they are no longer viewed as remote systems. Indeed, in the private sector, it is clear that many view information systems as strategic factors in obtaining competitive advantage (Synnott, 1987; Johnson & Vitale, 1988). In many businesses, information systems contribute importantly to the speed of bringing new products to market. As the length of the product life cycle continues to shorten (Noori, 1990), information systems become crucial to success. Although information systems have shed their low status in the private sector, the situation is not so clear in the public sector. Although it is clear that public sector organizations are spending large amounts of money on computer technology, government organizations appear to lack major motivating factors that would lead to information systems being viewed as strategic and valuable assets. Public sector organizations lack the competition and the need for rapidly changing new products and services that are the driving forces behind the rising importance of information systems in the private sector.
A previous version of this article was delivered at the 1991 Meeting of the American Evaluation Association, Chicago, Illinois. Requests for reprints should be sent to Bruce Rocheleau, Division of Public Administration, Northern Illinois University, DeKalb, IL 60115.

¹For a review of early approaches to evaluating information systems, see Hamilton & Chervany (1981a, 1981b). For a review of public sector evaluation, see Newcomer & Caudle (1991).
The goal of this paper is to contribute to understanding how to evaluate public sector information systems. We explore both user satisfaction and perceived impacts as indicators of successful information systems. We also assess the dynamics of satisfaction with information systems. We test our concepts by studying how police agencies in Illinois evaluate their information systems. Are users satisfied with the systems? What kind of impacts do they have? Is there an association between satisfaction and perceived impacts? There are relatively few evaluative studies of public sector information systems, and many of these were conducted during the mainframe era.² By studying how police agencies in Illinois evaluate their information systems, we hope to contribute to an understanding of the role of information systems in public organizations and how they can be evaluated.
BACKGROUND

There is no consensus on how to evaluate information systems in the public or private sectors. Many researchers question whether we can evaluate information systems by their impact on ultimate organizational goals. Information systems are support services, and their effects are indirect; they depend on how they are used by people. For example, we might view information systems as potentially valuable tools in solving crimes, but many other variables (e.g., the quality of the police personnel, unemployment, and other environmental conditions) may limit or outweigh any information system impacts on crime rates. The same principle operates in the private sector: other organizational variables may overwhelm the effects of a good information system (Weil & Olson, 1989). For example, the Mutual Benefits Life Insurance company has recently experienced serious problems despite having an information system viewed as one of the best and most innovative in the private sector, proving that good information systems cannot save bad organizations (Currid, 1991). Indeed, because of the difficulty in assigning quantitative measures of impact to information systems, Keen has argued for "value analysis," in which the perceived value of the system is weighted more heavily than costs (Keen, 1981). In several early evaluations of information systems, the frequency of use was viewed as a good measure of the success of information systems (e.g., Lucas, 1975; Wynne, 1977). The basic assumption was that a successful system would be heavily used. However, more recent assessments point out some flaws in this assumption (Srinivasan, 1985). First of all, use of the system may be involuntary. While voluntary usage of the system may be related to satisfaction (Robey, 1979; Galetta & Lederer, 1989), being required to use a system is negatively related to satisfaction (Hiltz & Johnson, 1990). Moreover, it is possible that a great deal of time spent on systems may be due to the inefficacy of the system (Ginzberg, 1978); a good system might require less time to get desired output. Indeed, Hiltz (1988) found only a low correlation between use and productivity. Finally, the concept of the use of a system is not simple: use can be indirect as well as direct (Danziger & Kraemer, 1986; Barki & Hartwick, 1989). Decision makers may spend no time on the system but benefit indirectly from it. It may have no direct impacts on their decisions but help
broaden their understanding of the issues and thus still be worthwhile (Ginzberg, 1978). User satisfaction with information systems has been the most popular measure of success over the last decade (Bailey & Pearson, 1983; Ives, Olson, & Baroudi, 1983; Rushinek & Rushinek, 1986). Most people would agree that, if users are dissatisfied with a system, it would be difficult to consider the system a success. Melone (1990) argues that it is possible to have an effective information system without user satisfaction if the system is tightly linked to the worker's activities. However, this would seem to be limited to routine types of activities and jobs. User satisfaction has some major advantages as an evaluation measure. Baroudi, Olson, and Ives (1986) found that user satisfaction leads to usage of the system. It can be measured by surveys and does not require detailed examination of organizational goals and development of different measures for different organizations. Thus, user satisfaction evaluations allow us to compare systems whose specific goals differ. User satisfaction is not without its weaknesses and limitations. Previous research shows that consumers can be satisfied with services despite the fact that the services may not be accomplishing their ultimate goals (Goyne & Ladoux, 1973). A general difficulty is that most consumer evaluations in the health and mental health areas tend to find high rates of satisfaction. A meta-analysis of satisfaction studies found the average level at 75 percent (Lehman & Zastowny, 1983). In these fields, it is a rare study that finds less than 70 percent satisfied (Lebow, 1983a). While there is nothing wrong with high rates of satisfaction if they reflect reality, it means (if the same skewness exists in ratings of information systems) that the instrument does not do a very good job of assessing quality differences among most systems. User satisfaction is subjective, and it can be difficult to interpret dissatisfaction. Expectations can influence satisfaction with systems. For example, Hiltz and Johnson (1990) found that expectations were a good predictor of a user's subsequent evaluation.

²Much of the pioneering work on public sector information systems was done by Kenneth Kraemer and associated researchers. In a number of their works (e.g., Kraemer et al., 1981; Danziger & Kraemer, 1986; Kraemer et al., 1989), they touch on issues related to evaluation.
Others have found that expectations that are too high can lead to subsequent dissatisfaction with a system if it fails to meet lofty goals, as occurred with the Florida Department of Law Enforcement (Sloane, 1991):

Earlier this year, the Florida Department of Law Enforcement announced its $21.7 million AFIS [Automated Fingerprint Identification System] was able to handle less than half the workload originally anticipated. The problem stemmed not from system malfunctions but from an all-too-common problem of expecting too much from a computer.

Because of their importance, emphasis is now being given to managing user expectations so that they are positive but not too high (Ginzberg, 1981; Doll & Ahmed, 1983). Kraemer and King (1986) point out that satisfaction is a dynamic concept. What seems like high performance at one point in time becomes the routine expectation at a later time. Discussions with police and other city officials supported our belief that many people become more positive or negative through time in their evaluations; only a fraction remain steady. If this is true, then it has important implications for the way we evaluate information systems. We test the hypothesis that satisfaction is dynamic.

Given the advantages of using surveys as data-gathering instruments in evaluating information systems, is there any way of expanding on the use of surveys to broaden our evaluations beyond user satisfaction? In our study, we assumed that users could distinguish between their satisfaction with their system and its impact on the organization. We asked police to rate their satisfaction with information systems but also to assess the impact of these information systems on critical organizational issues, such as police productivity, allocation of resources, police satisfaction with their jobs, the identification of problems in the organization, the setting of realistic goals, and reporting capacity. Our basic assumption (which we test) is that users are capable of distinguishing between subjective satisfaction and impact on critical organizational goals and performance. If this is the case, it would allow researchers to expand survey-based evaluations of information systems beyond satisfaction to other issues. By studying perceived impacts as well as satisfaction, we can gain insight into whether police view their information systems as critical to their success. Perceived impact measures may be an important addition to user satisfaction as a vehicle for making comparisons among the effectiveness of public information systems.

Why Use Police to Test Impact of Information Systems?
Although information systems may have only indirect impacts on organizational goals in many organizations, they may be critical success factors in some (Cerullo, 1980; Bergeron & Begin, 1989). A critical success factor refers to an aspect of an organization's activity that has a significant impact on its goals. In a critical success factor approach, evaluators assess the impact of information systems on access to critical information needed by management. For example, Le Blanc (1987) claims that a public sector decision support system (DSS) of the U.S. Coast Guard on the Mississippi River led to a significant reduction in vessel casualties. There are limits to the capacity of information systems to influence the achievement of organizational goals positively. According to Daft and Lengel (1986), computerized information lacks the "richness" needed to make many ambiguous decisions. This argument is supported by research such as Olson's (1982), which found that even data processing managers showed a strong preference for face-to-face or phone contact even when electronic mail was available. It is also possible for information systems to have negative impacts. For example, computer terminals in cars may cause police to overemphasize certain kinds of offenses (e.g., outstanding warrants, traffic tickets) at the expense of other, more important activities (Laudon, 1986). Also, the most important information needed to be effective may not reside in the information system. Or, even if it is supposed to be part of the system, it may not be collected. James Q. Wilson (1984) argues that the best information to help police would involve gathering data on serious offenders, such as their known associates and the places the offenders are likely to be found, but that such information is not likely to be gathered:

. . . there is no immediate apparent benefit to the officer. On the contrary, getting such information often means leaving the squad car on a cold unpleasant day and talking to people who are at best suspicious and then writing down something the officer may never see again.
In an earlier work, Wilson (1968) also pointed out that much police activity is not aimed at crime at all but rather at peacekeeping and service functions that are not emphasized in computerized police information systems. Nevertheless, police organizations are among the best public sector organizations in which to study the impacts of information systems. Police agencies have been (along with budget/finance) the earliest city departments to computerize (Gurwitt, 1988). Moreover, it is widely recognized that "information is the lifeblood of the police" (Maltz, Gordon, & Friedman, 1991). Data base searches are a primary method of finding criminals. For example, searching for matches by descriptions, fingerprints, modus operandi, and many other variables can be greatly facilitated by computerized information systems. Indeed, the resources and effort to develop computerized data bases on individuals by criminal justice agencies have been so extensive that many have been concerned with violation of privacy (e.g., Marchand, 1980). Empirical research has shown that computers can
assist police detectives in improving their searches (Danziger & Kraemer, 1985). In short, police information systems have a better chance of being critical success factors than those of most other public agencies. Thus, it is feasible, if not likely, that police information systems have had a significant impact on police and thus make possible a more complete analysis of the relationship between satisfaction and impact.
METHODOLOGY

Our theoretical population of interest included all Illinois police departments in cities with a population of 10,000 or higher (based on 1980 census figures). We limited the study to these cities because there are more than 600 local police agencies in Illinois, and the smaller ones are much less likely to have computerized systems. This yielded 178 departments, of which 149 responded to our survey (83.7 percent). Sixteen of the responding departments had no computer at all other than a dumb terminal to enter data into the state LEADS (Law Enforcement Agency Data System). We compared the 149 responding and 29 non-responding agencies and found no statistically significant differences on the following dimensions: (a) size of the department in terms of sworn police officers and number of civilians; (b) crime index, rates, and number of arsons; (c) population size and geographical location of the police agencies. The survey was endorsed by the Illinois Association of Chiefs of Police, which helps to explain our favorable response rate. This Association, as well as selected local police officers, was asked to review the preliminary questionnaire, which was revised based on their comments.

We used two measures of user satisfaction. One was Doll and Torkzadeh's (1988) index of end-user satisfaction (see Appendix for a listing of questions related to this article). This measure consists of 11 questions concerning the system, such as accuracy, ease of use, timeliness, and value of the output. For a detailed discussion of the methods used to develop this index, see Doll and Torkzadeh (1988) and Doll and Torkzadeh (1991), as well as a critique of the index by Etezadi-Amoli and Farhoomand (1991). We varied the response items slightly, from Doll and Torkzadeh's "almost never, some of the time, about half of the time, most of the time, almost always" to "rarely, sometimes, usually, almost always." In our study, the internal consistency (alpha coefficient) for this index was .93. We also asked questions concerning overall satisfaction and whether satisfaction had changed with time. This global satisfaction question is useful because it provides us with a direct question about whether the user is satisfied. Until a large number of studies using instruments like Doll and Torkzadeh's are reported, there is no standard to determine what the results mean. Also, it is possible that the global question will elicit different responses because it takes into account the user's expectations. For example, a user might rate the instrument as only providing timely and valuable output "sometimes," but still be satisfied because they have low
expectations. Likewise, it is possible for someone to rate a system very high on the end-user instrument but be dissatisfied because they have even higher expectations that are not met. We were also interested in studying factors that could influence user satisfaction and perceptions of impact. Thus, we asked questions about their system's capability on the following aspects: access to data bases, analysis of incidents and crimes,³ mapping, analysis of officer activities, system flexibility, and vendor support. For most of our analyses, we treat these as discrete variables. However, in one of our analyses, we use an index of overall capability as a control variable. We formed the index of system capability by totaling the individual items and taking their mean. The alpha coefficient for the internal consistency of the resulting capability index was .84.

To measure impact, we asked a series of questions in which police rated the impact of the information systems on a scale ranging from very negative to very positive. The areas of impact studied were: productivity, allocation of resources, police satisfaction with their job, providing information for supervisors about performance of subordinates, identification of organizational problems, and information for setting realistic goals. On a few response items (e.g., number of sworn officers in the department), we were able to compare survey responses with data recently provided by the state, and they were very closely correlated. A number of phone calls were made to individual departments (about one-fourth of the respondents), primarily to clarify responses about hardware and software. These calls allowed us to check on the general accuracy of the surveys, which proved to be excellent. They also allowed us to gather qualitative data through open-ended discussions over the phone about their systems. Note that, prior to the survey, we made a number of site visits to cities and talked with police about their information systems, which provided some of the insights we are testing here, such as the potential discrepancy between ratings of capability and satisfaction with the systems. The survey was mailed to the police chiefs, who filled it out themselves in 15 percent of the cases but gave it to someone else to fill out in the other cases. We analyzed the data on all of the key dependent and independent variables but found no significant impact of the position of the person filling out the survey (e.g., chief vs. other police, police vs. civilian, etc.).

³An incident is a more general term than a crime. For example, it could include getting an animal out of a tree, peacefully solving family or neighborhood disputes, etc.
Note that our data reflect the total target population (minus non-respondents). Consequently, any differences are real. However, in order to provide indicators of the importance of the differences or associations, we present
certain statistics (e.g., the Wilcoxon Matched-Pairs Signed-Ranks Test). Thus, the significance tests reported in the paper can be interpreted as follows: if this were a sample rather than a population, would these differences be considered statistically significant?
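For readers who want to reproduce this style of index construction, the sketch below shows one way a mean-item index and its alpha coefficient could be computed. This is not the authors' code; the item names and toy response matrix are purely illustrative, and the same approach applies to the Doll and Torkzadeh end-user items.

```python
# Illustrative sketch: building a mean-item capability index and computing
# Cronbach's alpha for its internal consistency. Item names and data are
# hypothetical; in the study each item is a 1-6 capability rating.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total)."""
    items = items.dropna()
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Toy responses: rows = departments, columns = capability items (1-6 scale).
capability = pd.DataFrame({
    "reporting":        [5, 6, 4, 3, 5, 2],
    "crime_analysis":   [4, 5, 3, 3, 4, 2],
    "officer_activity": [4, 6, 3, 2, 5, 1],
    "mapping":          [2, 3, 1, 1, 2, 1],
})

alpha = cronbach_alpha(capability)              # the study reports alpha = .84 for its index
capability_index = capability.mean(axis=1)      # index = mean of the items, as in the text
print(f"alpha = {alpha:.2f}")
print(capability_index.round(2).tolist())
```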
FINDINGS

Assessing System Capability
Table 1 shows that there is substantial variation in police departments' ratings of key system capabilities. The non-parametric Wilcoxon Matched-Pairs Signed-Ranks tests help to determine whether one variable is consistently ranked higher or lower than another (Siegel, 1956). By far the most highly rated capability is reporting; only about 24 percent rate their system's reporting capability as less than good (i.e., fair, poor, or very poor). Most of the other capabilities, including system flexibility, ability to analyze officer activities, vendor support, crime analysis, and ability to analyze incidents, are rated as only fair or worse by over 40 percent of the departments. Well over 50 percent rate their systems as only fair, poor, or very poor on ability to access neighboring departments' records and ability to map crime.
TABLE 1
POLICE ASSESSMENT OF SYSTEM CAPABILITIES

Type of Capability                          Mean   SD     Percent Rating System as Less Than Good   N of Cases
V1: Reporting capability                    4.21   1.41   24.2                                       128
V2: Access federal/state databases          3.84   1.85   32.8                                       128
V3: Ability to analyze officer activities   3.82   1.54   41.4                                       128
V4: Flexibility of system                   3.81   1.59   42.2                                       128
V5: Vendor support                          3.73   1.69   43.5                                       124
V6: Crime analysis                          3.66   1.51   43.0                                       128
V7: Analyze incidents                       3.54   1.60   49.6                                       127
V8: Access neighboring departments          2.59   1.84   65.6                                       128
V9: Mapping of crime                        2.01   1.59   80.3                                       127

Scale: 1 = Very Poor; 2 = Poor; 3 = Fair; 4 = Good; 5 = Very Good; 6 = Outstanding.

Wilcoxon Matched-Pairs Signed-Ranks Tests
Variable Pair   2-Tailed Probability   N of Cases
V1 vs. V2       .08                    128
V1 vs. V3       .001                   128
V2 vs. V3       .71                    128
V3 vs. V4       .87                    124
V4 vs. V5       .78                    124
V5 vs. V6       .48                    124
V6 vs. V7       .08                    127
V7 vs. V8       <.001                  127
V8 vs. V9       .003                   127
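The pairwise comparisons reported with Table 1 use the Wilcoxon matched-pairs signed-ranks test. A minimal sketch of that computation, using scipy with illustrative data rather than the study's responses, might look like this:

```python
# Sketch of the Wilcoxon matched-pairs signed-ranks comparisons in Table 1:
# does one capability consistently receive higher ratings than another across
# the same departments? Data and column names are illustrative only.
import pandas as pd
from scipy.stats import wilcoxon

ratings = pd.DataFrame({
    "reporting":       [5, 6, 4, 3, 5, 2, 6, 4],
    "mapping":         [2, 3, 1, 1, 2, 1, 3, 2],
    "neighbor_access": [3, 4, 1, 2, 2, 1, 5, 2],
})

for a, b in [("reporting", "mapping"), ("neighbor_access", "mapping")]:
    paired = ratings[[a, b]].dropna()          # the test requires complete pairs
    stat, p = wilcoxon(paired[a], paired[b])   # two-tailed probability, as in Table 1
    print(f"{a} vs. {b}: W = {stat:.1f}, p = {p:.3f}, n = {len(paired)}")
```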
The ability to access neighboring departments' records is significant in areas such as the suburbs of Chicago, where there is a large number of small departments and criminals are likely to cross jurisdictional boundaries with great frequency. To summarize, only one system capability, reporting, achieved an average rating of good.

Satisfaction Measures
Table 2 shows that nearly 79 percent of the police expressed overall satisfaction with their information systems. At face value, this rate of satisfaction appears quite favorable. For example, a recent survey found only 65 percent of respondents satisfied with the quality of their software (Soat, 1990). However, as pointed out earlier, it is very much in keeping with the findings from the health and mental health fields. Indeed, in the health field, a dissatisfaction rate above 10 percent is rare and could be a danger signal (Lebow, 1983b). However, it is quite possible that rates of satisfaction with information systems will be lower. In the health and mental health fields, some helper (physician or counsellor) is attempting to aid the consumer. Under such conditions, it may seem to be an ungrateful act to express dissatisfaction even if the treatment was unsuccessful. It can be argued that expressing dissatisfaction with computer hardware and software would be much easier. Although useful, the overall satisfaction rating needs to be supplemented by other measures. For example, the significance of being "somewhat satisfied" is unclear: the person could have significant reservations about the system (Rocheleau & Mackesey, 1981). Table 3 employs the end-user satisfaction index (Doll & Torkzadeh, 1988), which shows the response of police to questions about key characteristics of their systems. Users are usually satisfied with their system's performance concerning its speed, accuracy, utility, and ease of use. But there clearly is room for improvement.
TABLE 2
OVERALL SATISFACTION WITH INFORMATION SYSTEM

How Satisfied?          Percent   Cumulative Percent
Very satisfied          43.2      43.2
Somewhat satisfied      35.6      78.8
Somewhat dissatisfied   12.9      91.7
Very dissatisfied        8.3      100.0
N = 132
TABLE 3
END USER SATISFACTION MEASURES

Satisfaction With System Performance   Mean   SD     N of Cases   Percent Almost Always
V1: Up-to-date information?            3.54   .673   129          61.2
V2: Accuracy of system?                3.47   .626   129          53.5
V3: Satisfaction with accuracy?        3.33   .741   130          46.9
V4: Information clear?                 3.26   .755   129          41.9
V5: Get information in time?           3.26   .832   129          46.5
V6: Easy to use?                       3.23   .752   129          39.5
V7: User friendly?                     3.17   .861   128          43.0
V8: Content meets needs?               3.15   .808   130          36.2
V9: Precise information needed?        3.12   .758   130          33.1
V10: Output useful?                    3.12   .835   129          35.7
V11: Provide reports needed?           2.98   .887   128          30.5

Scale: 1 = Rarely; 2 = Sometimes; 3 = Usually; 4 = Almost Always.

Wilcoxon Matched-Pairs Signed-Ranks Tests
Variable Pair    2-Tailed Probability   N of Cases
V1 vs. V2        .23                    128
V2 vs. V3        .008                   129
V3 vs. V4        .41                    129
V4 vs. V5        .90                    129
V5 vs. V6        .70                    129
V6 vs. V7        .42                    128
V7 vs. V8        .74                    128
V8 vs. V9        .62                    130
V9 vs. V10       1.0                    129
V10 vs. V11      .04                    128

TABLE 4
CHANGE IN SATISFACTION: HAS YOUR SATISFACTION WITH YOUR INFORMATION SYSTEM CHANGED WITH TIME?

Nature of Change             Percent   Cumulative Percent
Yes, improved through time   25.4      25.4
No, stayed the same          52.3      77.7
Yes, declined                22.3      100.0
N = 130
Satisfaction with only two performance areas was seen as being almost always true by more than half of the sample: the up-to-date nature of the information and the accuracy of the system. The lowest ratings were assigned to the nature of the information provided (providing needed reports, the precise information needed, and the content of the system meeting needs). The relatively low ratings given to providing needed reports appear to conflict with the information in Table 1, which shows that police rate their system's reporting capability as its most outstanding characteristic. This apparent discrepancy could be explained by expectations: police view their computerized information systems primarily as vehicles for fulfilling reporting requirements. Their expectations concerning this aspect are high, and consequently they are least satisfied with this aspect of their systems even though they rate reporting higher than other capabilities.
The Dynamics of Satisfaction
Some researchers (e.g., Kraemer & King, 1986) have suggested that satisfaction with information systems may be dynamic. We wanted to test how changeable satisfaction was. Consequently, we asked police (Table 4) whether their satisfaction has changed with time.
In nearly half the cases (47 percent), their satisfaction had changed. Roughly equivalent percentages found their satisfaction had improved (25 percent) or declined (22 percent). Only about half (52 percent) have remained the same. This supports the argument that satisfaction with information systems is dynamic. The reasons why their views change deserve research. There are a number of possible reasons for changes. Changing expectations are one major possibility. Also, as users become more familiar with systems, they find positive or negative aspects that they were unaware of in the early stages. Note that our measure of change was taken at one point in time and thus might be regarded as a static measure. Clearly, in future studies, it will be important to employ longitudinally based measures of changing satisfaction. Nevertheless, our measure does reflect the police's perception of change and thus is a valid and useful indicator too. It is instructive to combine the change and satisfaction measures to provide an analysis of the dynamics of satisfaction with the system. In Table 5, we propose three major categories of users: clearly positive, clearly negative, and the unclears.
The positive group ranges from those who are very satisfied to those who are somewhat satisfied but steady (about 67 percent). The negative group includes those users who are very or somewhat dissatisfied and either steady or declining in their assessment of systems (only about 18 percent of the total). A third group contains those who are difficult to categorize as supporters or opponents of the system: those who are either somewhat dissatisfied but improving or somewhat satisfied but declining. Together they make up 17 percent of the users. In our case, since the majority of the users were in the clearly positive category, the results do not alter our primarily positive perception of police information systems presented by the static measure in Table 2. However, if attitudes were in a major state of flux, the dynamic approach to measuring satisfaction would be able to capture this movement. The group of unclear users would be much larger, which could alert the organization to a major flux in attitudes towards the information system.

TABLE 5
DYNAMIC MEASURE OF OVERALL SATISFACTION

Positive Group                       Percent
Very satisfied & improving           13.8
Very satisfied & steady              27.7
Very satisfied & declining            2.3
Somewhat satisfied & improving        8.5
Somewhat satisfied & steady          15.4
Totals: Positive group               67.7

Unclear Group
Somewhat dissatisfied & improving     3.1
Somewhat satisfied & declining       11.5
Totals: Unclear group                14.6

Negative Group
Dissatisfied & steady                 9.2
Dissatisfied & declining              8.5
Totals: Negative group               17.7

Overall Total                        100.0 (N = 130)
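The grouping in Table 5 is straightforward to operationalize. The sketch below is one reading of the classification rules described in the text; it is not the authors' code, and the label strings are assumptions.

```python
# Sketch of the dynamic satisfaction classification behind Table 5:
# combine overall satisfaction with the reported direction of change.
def classify(satisfaction: str, change: str) -> str:
    """satisfaction: 'very satisfied', 'somewhat satisfied',
                     'somewhat dissatisfied', or 'very dissatisfied'
       change:       'improving', 'steady', or 'declining'"""
    # The "unclear" cells named in the text: somewhat dissatisfied but improving,
    # or somewhat satisfied but declining.
    if satisfaction == "somewhat dissatisfied" and change == "improving":
        return "unclear"
    if satisfaction == "somewhat satisfied" and change == "declining":
        return "unclear"
    if satisfaction in ("very satisfied", "somewhat satisfied"):
        return "positive"
    return "negative"

print(classify("very satisfied", "declining"))      # positive (per Table 5)
print(classify("somewhat satisfied", "declining"))  # unclear
print(classify("very dissatisfied", "steady"))      # negative
```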
TABLE 6
IMPACTS OF INFORMATION SYSTEM

Type of Impact                     Mean   SD     % Negative or No Impact   N of Cases
V1: Reporting capability           6.02   1.23   10.4                      125
V2: Information on performance     5.34   1.27   25.8                      124
V3: Productivity (arrests)         5.10   1.20   41.6                      125
V4: Allocate resources             4.89   1.18   45.5                      123
V5: Set goals                      4.84   1.11   41.5                      123
V6: Identify problems              4.77   1.16   41.5                      123
V7: Police satisfaction with job   4.57   1.04   59.3                      125

Scale: 1 = Very Negative Impact; 2 = Somewhat Neg.; 3 = Slightly Neg.; 4 = No Impact; 5 = Slightly Pos.; 6 = Somewhat Pos.; 7 = Very Positive Impact.

Wilcoxon Matched-Pairs Signed-Ranks Tests
Variable Pair   2-Tailed Probability   N of Cases
V1 vs. V2       <.001                  124
V2 vs. V3       .07                    124
V3 vs. V4       .14                    123
V4 vs. V5       .70                    122
V5 vs. V6       .60                    123
V6 vs. V7       .04                    122
Perception of Impacts
We asked the police to rate their systems on seven areas of impact on a scale of 1 (very negative impact) to 7 (very positive impact). Table 6 shows that the most positive impact was on reporting capacity. Only about 10 percent of the departments reported negative or zero impact on their reporting. A substantial percentage also reported a positive impact on the provision of information about the performance of subordinates; only about 25 percent reported negative or zero impact for this issue. Otherwise, information systems were viewed as having little impact on productivity (arrests), the allocation of resources, the setting of realistic goals for units, the identification of problems, and police satisfaction with their job. Indeed, despite the high rate of satisfaction with the computer system, most departments reported either no impact or a negative impact of the systems on police satisfaction with their jobs. Thus the data on perceived impacts add a significant dimension to our evaluation that would have been missing if we had limited it to questions concerning satisfaction. In Table 7, we show the correlations between our two satisfaction variables and the impact measures. There is a significant positive zero-order correlation between both measures and all of the impact variables. Thus, despite the low ratings of impact, those departments that are most satisfied also tend to report the most impact in relative terms. We hypothesized that both satisfaction and impact would be affected by the capability of the information systems: high capability would lead to both satisfaction and impact. Consequently, we controlled for perceived system capability. Because both the capability and impact measures contained similar items concerning reporting, we eliminated this item from the capability index for this analysis.
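The impact items were answered on the -3 to +3 scale shown in the Appendix (question 5), while Table 6 reports them on a 1 to 7 scale with 4 as the no-impact midpoint. A hypothetical sketch of that recoding and of the "% negative or no impact" column follows; the column names and data are illustrative only.

```python
# Sketch: recode -3..+3 impact ratings to the 1..7 scale of Table 6 and
# compute the "% negative or no impact" column. Data are illustrative only.
import pandas as pd

raw = pd.DataFrame({                      # toy -3..+3 responses per department
    "impact_reporting":    [3, 2, 2, 1, 0, 3],
    "impact_productivity": [1, 0, 0, -1, 2, 0],
})

for item in raw.columns:
    scores = raw[item].dropna() + 4                 # -3..+3 -> 1..7 (4 = no impact)
    pct_neg_or_none = (scores <= 4).mean() * 100    # negative or zero impact
    print(f"{item}: mean = {scores.mean():.2f}, "
          f"% negative or no impact = {pct_neg_or_none:.1f}")
```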
TABLE 7
CORRELATIONS BETWEEN SATISFACTION AND IMPACT MEASURES

                              Global Satisfaction Question                 End User Satisfaction Index
Type of Impact                Zero Order        Control for Capability     Zero Order        Control for Capability
Productivity                  .26 (p = .003)    .03 (p = .37)              .32 (p < .001)    .12 (p = .11)
Allocation of personnel       .22 (p = .008)    .15 (p = .05)              .26 (p = .002)    -.07 (p = .24)
Police satisfaction           .31 (p = .001)    -.04 (p = .34)             .29 (p = .001)    .08 (p = .20)
Information on subordinates   .42 (p < .001)    .13 (p = .08)              .34 (p < .001)    .08 (p = .19)
Identify problems             .37 (p < .001)    .19 (p = .02)              .28 (p = .001)    .09 (p = .17)
Set goals                     .33 (p < .001)    .21 (p = .01)              .23 (p = .007)    .06 (p = .28)
Reporting capacity            .47 (p < .001)    .23 (p = .008)             .46 (p < .001)    .19 (p = .02)

Number of Cases = 114
The partial correlation coefficients (controlling for system capability) are much reduced. The partials for the global satisfaction question are still significant for the impact on the identification of problems, setting goals, and reporting capacity, but the associations are modest (in the .20 range). The end-user index's partials are only significant for reporting capacity. Tables 6 and 7 support our argument that asking users about perceived impacts obtains valuable additional information beyond that of satisfaction. Also, the fact that the global question is closely associated with more impact measures than the end-user index suggests that the global question may get at some aspects (e.g., expectations) that are missing in the end-user index. Although perceived impacts may not be related to objective impact measures, it can be argued that perception is often more important than reality. If users don't think that an information system is having any impact, they may be less likely to use or support the system.
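The partialling step can be reproduced with the standard first-order partial correlation formula. The sketch below is not the original analysis code; the variable names and data are illustrative. It computes a zero-order correlation between satisfaction and a perceived-impact measure and then removes the portion attributable to rated system capability.

```python
# Sketch of the Table 7 analysis: zero-order correlation between satisfaction
# and perceived impact, and the first-order partial controlling for the
# capability index. Data and variable names are illustrative only.
import numpy as np

def partial_corr(x, y, z):
    """r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))"""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

rng = np.random.default_rng(0)
capability = rng.normal(size=200)                   # rated system capability
satisfaction = capability + rng.normal(size=200)    # satisfaction tracks capability
impact = capability + rng.normal(size=200)          # so does perceived impact

r_zero = np.corrcoef(satisfaction, impact)[0, 1]
r_partial = partial_corr(satisfaction, impact, capability)
print(f"zero-order r = {r_zero:.2f}, partial r (capability removed) = {r_partial:.2f}")
```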
DISCUSSION

What do these results mean? Taken together, it appears that information systems are not viewed by most departments as a source of decision support. For most police, the role of information systems is to produce routine, required reports, not to assist in decision making. Reporting is viewed as their strongest aspect and the one that has the most impact. On the other hand, perhaps because their expectations are highest concerning reporting, they are least satisfied with the system's performance on this dimension. Despite the reported potential for police information systems to play important roles in productivity, allocation of resources, and other key management decisions, our data suggest that most departments do not currently use their systems for these purposes. Studies like those of the Chicago police department reveal that even young officers are highly skeptical of the utility of information systems (Maltz et al., 1991). In short, the high level of satisfaction with information systems is misleading. It is possible to view information systems as consisting of three levels of activity (Huxhold, 1991; Sacco & Ostrowski, 1991):
1. Operational: structured, routine activities such as transactions and routine reports;
2. Semi-structured: problems related to the management of resources;
3. Unstructured: policy issues concerning the direction of the organization.

Based on our results, it does not appear that information system use has passed beyond Level 1 in most police departments. Huxhold (1991) argues that systems must provide information for all three levels to be effective. Evaluators of information systems need to take these hierarchies of use into consideration in their assessments. The types of decisions influenced by the information system provide a good perspective on its overall value. As police and other public administrators become more accustomed to viewing information systems as management tools, the situation is likely to change. Evaluators of public information systems can help to detect this change by gathering time series data on perceptions of information system impact on organizational functions.
CONCLUSIONS AND IMPLICATIONS
We found that satisfaction is dynamic. It is important to incorporate questions that allow us to determine if satisfaction is on the rise or fall. Although we did not study expectations in this research directly, our data suggest that their role needs to be given more attention in evaluation research on information systems. Thus it is quite possible for a system with superior capability to be rated lower because of differing expectations. If a system assists only in routine, operational activities, this does not necessarily denote a failure of information systems, but it does show important limitations. Our study suggests that expectations about information systems are low among police. Police information systems are not viewed as critical success factors or as means of decision support. They are viewed primarily as mechanisms for handling the demands for reports and other paperwork.
Police demonstrated an ability to differentiate between satisfaction and impact. Although perceived measures of impact need to be supplemented by more direct measures, our research suggests that users of police systems can and should be queried about their perception of impacts. In addition, we need to ask police directly about their expectations of information systems. It is quite possible that, due to rising expectations in the future, satisfaction with information systems may decline as system capability and performance improve. The above suggestions also point out the need for longitudinal studies that can capture directly changing expectations and assessments. By giving more attention to expectations of systems, as well as the dynamics of satisfaction and perceived impacts, we will be able to build a richer data base on information system successes and failures.
REFERENCES

BAILEY, J.E., & PEARSON, S.W. (1983). Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29(5), 530-545.

BARKI, H., & HARTWICK, J. (1989). Rethinking the concept of user involvement. MIS Quarterly, 13(March), 53-63.

BAROUDI, J.J., OLSON, M.H., & IVES, B. (1986). An empirical study of the impact of user involvement on system usage. Communications of the ACM, 29(3), 232-238.

BERGERON, F., & BEGIN, C. (1989). The use of critical success factors in evaluation of information systems: A case study. Journal of Management Information Systems, 5(4), 111-124.

CERULLO, M.J. (1980). Information systems success factors. Journal of Systems Management, 31(December), 10-19.

CURRID, C. (1991, August 12). 'Good' information systems can't save 'bad' firms. PC Week, p. 63.

DAFT, R.L., & LENGEL, R.H. (1986). Organizational information requirements, media richness, and structural design. Management Science, 32(5), 554-571.

DANZIGER, J.N., & KRAEMER, K.L. (1985). Computerized data-based systems and productivity among professional workers: The case of detectives. Public Administration Review, 45(1), 196-209.

DANZIGER, J.N., & KRAEMER, K.L. (1986). People and computers: The impacts of computing on end users in organizations. New York: Columbia University Press.

DOLL, W., & AHMED, M.U. (1983). Managing user expectations. Journal of Systems Management, 34(6), 6-11.

DOLL, W.J., & TORKZADEH, G. (1988). The measurement of end-user computing satisfaction. MIS Quarterly, 12(June), 259-274.

DOLL, W.J., & TORKZADEH, G. (1991). The measurement of end-user computing satisfaction: Theoretical and methodological issues. MIS Quarterly, 15(1), 5-12.

ETEZADI-AMOLI, J., & FARHOOMAND, A.F. (1991). On end-user computing satisfaction. MIS Quarterly, 15(1), 1-4.

GALETTA, D.F., & LEDERER, A.L. (1989). Some cautions on the measurement of user information satisfaction. Decision Sciences, 20(3), 419-438.

GINZBERG, M.J. (1978). Finding an adequate measure of OR/MS effectiveness. Interfaces, 8(4), 59-62.

GINZBERG, M.J. (1981). Early diagnosis of MIS implementation failure: Promising results and unanswered questions. Management Science, 27(4), 459-478.

GOYNE, J.B., & LADOUX, P. (1973). Patients' opinions of outpatient clinic services. Hospital and Community Psychiatry, 24, 627-628.

GURWITT, R. (1988). The computer revolution: Microchipping away at the limits of government. Governing, 1(May), 35-42.

HAMILTON, S., & CHERVANY, N.L. (1981a). Evaluating information system effectiveness, Part I: Comparing evaluation approaches. MIS Quarterly, 5(September), 55-69.

HAMILTON, S., & CHERVANY, N.L. (1981b). Evaluating information system effectiveness, Part II: Comparing evaluator viewpoints. MIS Quarterly, 5(December), 79-86.

HILTZ, S.R. (1988). Productivity enhancement from computer-mediated communication: A systems contingency approach. Communications of the ACM, 31(12), 1438-1454.

HILTZ, S.R., & JOHNSON, K. (1990). User satisfaction with computer-mediated communication systems. Management Science, 36(6), 739-764.

HUXHOLD, W.E. (1991). An introduction to urban geographic information systems. New York: Oxford University Press.

IVES, B., OLSON, M.H., & BAROUDI, J.J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26(10), 785-793.

JOHNSON, H.R., & VITALE, M.R. (1988). Creating competitive advantage with interorganizational information systems. MIS Quarterly, 12(June), 153-165.

KEEN, P.G.W. (1981). Value analysis: Justifying decision support systems. MIS Quarterly, 5(1), 1-15.

KEEN, P.G.W., & MORTON, S.S. (1978). Decision support systems: An organizational perspective. Reading, MA: Addison-Wesley Publishing Company.

KRAEMER, K.L., DUTTON, W.H., & NORTHROP, A. (1981). The management of information systems. New York: Columbia University Press.

KRAEMER, K.L., & KING, J.L. (1986). Computing and public organizations. Public Administration Review, 46(special issue), 488-496.

KRAEMER, K.L., KING, J.L., DUNKLE, D.E., & LANE, J.P. (1989). Managing information systems. San Francisco: Jossey-Bass.

LAUDON, K.C. (1986). Dossier society. New York: Columbia University Press.

LE BLANC, L. (1987). An analysis of critical success factors for public sector decision support. Evaluation Review, 11(1), 73-83.

LEBOW, J.L. (1983a). Similarities and differences between mental health and health care evaluation studies assessing consumer satisfaction. Evaluation and Program Planning, 6, 237-245.

LEBOW, J.L. (1983b). Research assessing consumer satisfaction with mental health treatment: A review of findings. Evaluation and Program Planning, 6, 211-236.

LEHMAN, A.F., & ZASTOWNY, T.R. (1983). Patient satisfaction with mental health services: A meta-analysis to establish norms. Evaluation and Program Planning, 6, 265-274.

LUCAS, H.C. (1975). Why information systems fail. New York: Columbia University Press.

LUCAS, H.C. (1984). Organizational power and the information services department. Communications of the ACM, 27(1), 58-65.

MALTZ, M.D., GORDON, A.C., & FRIEDMAN, W. (1991). Mapping crime in its community setting: Event geography analysis. New York: Springer-Verlag.

MARCHAND, D.A. (1980). The politics of privacy, computers, and criminal justice records. Arlington, VA: Information Resources Press.

MELONE, N.P. (1990). A theoretical assessment of the user-satisfaction construct in information systems research. Management Science, 36(1), 77-91.

NEWCOMER, K.E., & CAUDLE, S.L. (1991). Evaluating public sector information systems: More than meets the eye. Public Administration Review, 51(5), 377-384.

NOORI, H. (1990). Managing the dynamics of new technology. Englewood Cliffs, NJ: Prentice-Hall.

OLSON, M.H. (1982). New information technology and organizational culture. MIS Quarterly, 6(special issue), 71-92.

ROBEY, D. (1979). User attitudes and management information system use. Management Science, 22(3), 527-538.

ROCHELEAU, B., & MACKESEY, T. (1981). What, consumer feedback surveys again? Evaluation & the Health Professions, 3(4), 405-419.

RUSHINEK, A., & RUSHINEK, S.F. (1986). What makes users happy. Communications of the ACM, 29(7), 594-598.

SACCO, J.F., & OSTROWSKI, J.W. (1991). Microcomputers and government management. Pacific Grove, CA: Brooks/Cole Publishing Company.

SAUNDERS, C.S., & SCAMEL, R.W. (1986). Organizational power and the information services department: A reexamination. Communications of the ACM, 29(2), 142-147.

SIEGEL, S. (1956). Non-parametric statistics. New York: McGraw-Hill.

SLOANE, T. (1991, July 29). Book 'em by computer. City and State, pp. GM1-GM11.

SMITH, T.S., & SORENSEN, J.E. (1974). Integrated management information systems for community mental health centers. Rockville, MD: National Institute of Mental Health, DHEW Publication No. (ADM) 75-165.

SOAT, J. (1990, December 20). Lawyers, guns, and software. Information Week, Issue 300, pp. 17-23.

SRINIVASAN, A. (1985). Alternative measures of system effectiveness: Associations and implications. MIS Quarterly, 9(3), 243-253.

SYNNOTT, W.R. (1987). The information weapon. New York: John Wiley and Sons.

WEIL, P., & OLSON, M.H. (1989). An assessment of the contingency theory of management information systems. Journal of Management Information Systems, 6(1), 59-85.

WILSON, J.Q. (1968). Varieties of police behavior. Cambridge: Harvard University Press.

WILSON, J.Q. (1984). Problems in the creation of adequate criminal justice information systems. In A.P. Westin (Ed.), Information, police and crime control strategies. Washington, D.C.: U.S. Department of Justice, Bureau of Justice Statistics, NCH 93926.

WYNNE, B. (1977). Measuring the immeasurable or credibility in the public sector. Interfaces, 8(1), 106-109.

APPENDIX: SURVEY QUESTIONS

1. Please rate your current information system with respect to the capabilities listed below (Scale of 1-6; 1 = Very Poor; 2 = Poor; 3 = Fair; 4 = Good; 5 = Very Good; 6 = Outstanding; NA = Not Applicable.):
   ___ Ability to access state and federal databases
   ___ Ability to access neighboring police department records
   ___ Reporting capability
   ___ Ability to do crime analysis
   ___ Ability to do mapping
   ___ Ability to analyze officer activities
   ___ Flexibility of the system
   ___ Support from software vendor

2. Please rate your computer system on the criteria below (Scale of 1-4; 1 = Rarely; 2 = Sometimes; 3 = Usually; 4 = Almost Always.):
   ___ Does the system provide the precise information you need?
   ___ Is the system accurate?
   ___ Does the information content meet your needs?
   ___ Are you satisfied with the accuracy of the system?
   ___ Does the system provide up-to-date information?
   ___ Does the system provide reports that seem to be just about exactly what you need?
   ___ Is the system user-friendly?
   ___ Is the system easy to use?
   ___ Is the information clear?
   ___ Do you get the information you need in time?

3. Overall, how satisfied are you with the system? (Check One Only):
   ___ Very dissatisfied
   ___ Somewhat dissatisfied
   ___ Somewhat satisfied
   ___ Very satisfied

4. Has your satisfaction with your information system changed with time? (Check One Only):
   ___ No, stayed about the same.
   ___ Yes, declined through time.
   ___ Yes, improved through time.

5. Please rate the impact of the computer system on each of the following (Scale of -3 to +3; -3 = Very Negative Impact; -2 = Somewhat Negative Impact; -1 = Slightly Negative Impact; 0 = No Impact; +1 = Slightly Positive Impact; +2 = Somewhat Positive Impact; +3 = Very Positive Impact.):
   ___ Police productivity (e.g., arrests)
   ___ Allocating personnel
   ___ Police satisfaction with job
   ___ Providing information about performance of subordinates
   ___ Identifying problems, abuses, or inefficiencies in the units
   ___ In setting realistic goals for the units or individuals you supervise
   ___ Reporting capacity