ARE FREQUENT COMPUTER USERS MORE SATISFIED?

RAVINDER NATH
Department of Management Information Systems and Decision Sciences, Fogelman College of Business and Economics, Memphis State University, Memphis, TN 38152, USA

(Received 18 July 1988; accepted in final form 18 November 1988)

Information Processing & Management, Vol. 25, No. 5, pp. 557-562, 1989. Printed in Great Britain. Copyright © 1989 Pergamon Press plc.

Abstract: Whether increased usage of computer-based information systems (CBIS) results in improved end-user satisfaction is investigated. Data collected from 98 end users in 12 organizations indicate that, in the aggregate, both the frequency of on-the-job CBIS use and the proportion of total time spent using CBIS on the job are positively correlated with end-user satisfaction. For upper level managers, the higher the frequency of computer use per week, the more satisfied they are with their information systems. However, for lower level managers, it is not the frequency but the amount of time spent using computers that correlates significantly with their satisfaction level. This information, in conjunction with the nature of the information systems (monitor, exception, inquiry, and analysis) that an organization provides to its users, could be pivotal in designing improved measures of end-user satisfaction.

INTRODUCTION

Do computer-based information systems (CBIS) improve white collar productivity? Decision-making effectiveness? These are some of the questions management has raised while deciding the spending level for information systems. Gauging such performance for CBIS is extremely difficult [1,2]. A widely used surrogate measure of CBIS effectiveness is user satisfaction with the information systems being used [1,3]. Despite some obvious shortcomings in using satisfaction as a surrogate for CBIS effectiveness, this measure has gained widespread popularity in the management information systems (MIS) community. Low satisfaction scores should warrant further investigation by the MIS management of the firm so that problem areas can be identified and appropriate corrective actions taken.

Several instruments that measure computer-user satisfaction exist and have been tested empirically [3-5]. Prominent among such measures is that proposed by Bailey and Pearson [3] and subsequently modified by Ives, Olson, and Baroudi [5]. By factor analyzing the instrument, these researchers identified factors that underlie the "satisfaction" construct. Ives, Olson, and Baroudi [5] provide these four factors: EDP staff and services, information product, vendor support, and knowledge or involvement. Nath [6] developed an instrument specifically suited for measuring satisfaction in an end-user computing (EUC) environment. He identified six factors that were significant in determining EUC success:

1. System Characteristics. This factor encompasses issues relating to the existing and potential capabilities of the system, like electronic communication with other users and ease of uploading and downloading information from the mainframes.
2. User Training. This factor relates to the relevance, availability, and effectiveness of computer-related training for users.
3. System Output. This factor focuses on accuracy, reliability, and timeliness of system-generated information.
4. User and Management Involvement. This factor relates to the involvement of users as well as managers in end-user computing issues such as acquiring software and hardware.
5. System Efficiency. This factor focuses on software and hardware reliability and the cost effectiveness of the system.
6. Documentation. This factor concerns the availability and usefulness of the system and applications documentation available to users.

The above satisfaction measures can be adapted for studying end users of specific types of systems. One such example from the library sciences is bibliographic retrieval systems, in which end users typically access online bibliographic databases. Mischo and Lee [7] review the existing literature in the area of end-user searching of bibliographic databases. They contend that ". . . little has been reported about user satisfaction search performance" [7]. In subsequent studies, Saracevic and Kantor [8,9] and Saracevic, Kantor, Chamis, and Trivison [10] investigated the searchers (end users), their cognitive traits and decision-making processes, and search effectiveness measures. One of the effectiveness measures they employed is the "degree of satisfaction" expressed by the user with the result of the search, gauged on a 5-point Likert scale (1 = dissatisfied and 5 = satisfied). Their statistical analysis revealed that several user and task characteristics affect the success of the search as indicated by the effectiveness measures.

In the same vein, MIS researchers have investigated whether organizational, user, or system characteristics such as firm size, user's age, type of information system being used, type of tasks performed, and computer-related training influence end-user satisfaction [11,12]. As expected, several of these characteristics have been found to correlate significantly with user satisfaction [12].

Building on the existing research, in this paper we first investigate whether patterns of CBIS usage differ across management levels.
Second, we examine whether the frequency of on-the-job CBIS use and the percent of time spent using computers are related to information satisfaction. Specifically, the following two hypotheses are tested:

H1: A positive association exists between the frequency of on-the-job CBIS usage by a user and his/her satisfaction level.

H2: A positive association exists between the amount of time a user spends using CBIS and his/her satisfaction level.

RESEARCH METHODOLOGY

To collect data to test these hypotheses, 12 organizations in a large city in the United States were selected in 1988. Data were gathered by administering a questionnaire to randomly selected end users in 16 functional areas of these companies. The criterion for including a functional area in the study was the length of time end-user computing (EUC) activities had been occurring in that area; a cutoff value of two years was used, assuring sufficient technology adaptation time. Of the 250 questionnaires distributed, 98 were returned, an overall return rate of 39%. Return rates for individual departments varied from 15% to 60%. Table 1 describes the characteristics of the sampled end users and the organizations.

Measuring research variables

End-user satisfaction was measured via a questionnaire instrument consisting of 12 items [6], where each item measures the performance of a certain attribute of the computing environment. Table 2 presents the end-user satisfaction instrument that was used. A numeric satisfaction score for each user was computed by averaging his/her responses to the 12 questionnaire items.

A self-reported measure of frequency of on-the-job CBIS use was obtained by giving each user a list of applications (e.g., spreadsheet analysis, database inquiry, statistical analysis) he/she uses while performing job-related activities. Each user checked those applications that were applicable and provided the frequency of each application's use per week by circling the appropriate number (1 = low, 2 = medium, 3 = high). Users who performed applications not on the list were requested to add them, which resulted in several additional applications. Besides giving the frequency of information system usage, each user was asked to estimate the time he/she spends per week on each activity. These data allowed us to calculate the percent of weekly time a user spends on each application; for each end user, the average of these percentages over all applications served as a measure of the time spent on CBIS-related activities.

Table 1. End-user and organizational characteristics

End-user characteristics        Percent
Position
  Upper management                18
  Middle management               31
  Lower management                51
Age
  30 or below                     26
  31-40                           44
  41-50                           22
  Over 51                          8

Organizational characteristics  Percent
Organization type
  Distribution                    12
  Government                      15
  Manufacturing                   29
  Service                         44
Department
  Accounting                      13
  Customer Service                18
  Engineering                     12
  Human Resources                 12
  Marketing & Sales               15
  Management Support               8
  Planning                         4
  Research & Development          18

Table 2. End-user information satisfaction instrument

Please evaluate the PERFORMANCE attained by the Computer Information System within your department by circling the number that in your opinion most represents your evaluation of the performance of each of the factors listed below. Use the 7-point scale anchored Very Poor (1) to Excellent (7), with intermediate anchors Poor and Good; each item below is rated by circling a number from 1 to 7.

1. Availability/timeliness of reports and information provided by the system
2. Accuracy of information provided by the system
3. Volume of output
4. Your participation in system development
5. Ease of upgrading the system for new applications
6. Planning for future applications
7. System's responsiveness to changing user needs
8. Attitude of managers towards end-user computing
9. Top management involvement in defining user needs
10. Application of modern database technology
11. Orientation of user training toward higher job description/future promotion
12. Suitability of computer training for user needs
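The scoring just described can be sketched in a few lines. The data below are hypothetical, not the study's, and the 40-hour week used to turn reported hours into percentages is our assumption, since the paper does not state how hours were normalized.

```python
# Sketch of the scoring described above, on hypothetical data (not the
# study's). A user's satisfaction score is the mean of his/her 12 item
# ratings on the 1-7 scale.
ratings = [5, 6, 4, 5, 7, 3, 5, 6, 4, 5, 5, 6]   # one user's 12 ratings
satisfaction = sum(ratings) / len(ratings)

# Percent of weekly time on CBIS: reported hours per application divided
# by a 40-hour week -- the 40 hours is our assumption, not the paper's.
hours_per_app = {"spreadsheets": 3.0, "database inquiry": 2.0, "text processing": 1.0}
pct_time = 100 * sum(hours_per_app.values()) / 40.0

print(satisfaction, pct_time)
```

For this invented user the satisfaction score is about 5.08 and the percent-time measure is 15.0%.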

SUMMARY OF RESULTS

Time spent on computer-related activities

A breakdown of the average times by application type and management level revealed some interesting but expected patterns. Upper level managers reported spending, on average, 16% of their weekly time using CBIS. This jumped to 30% for middle level managers and 31% for lower level managers. This is in line with the fact that top level executives spend considerably less time using computerized information systems than their counterparts at the middle and lower levels; a reason, perhaps, is that traditional information systems are not designed to satisfy the information needs of top level executives. Table 3 reports the CBIS usage data broken down by management level and application type. Upper level managers spend 4.5% of their time performing statistical analysis and 3.8% on text processing. The two dominating activities for middle level managers are programming (7.2%) and database inquiry (7.2%); the same two activities are also reported by lower level managers as dominating their CBIS usage.

To find out whether, on average, the three groups of managers spend the same amount of time on CBIS activities, a nonparametric analysis-of-variance technique, the Kruskal-Wallis test, was performed. We selected a nonparametric technique because the normality of the data could not be assured. The results were significant (chi-square = 5.79, p = 0.055) at the .10 level, indicating that the three groups of managers indeed spend significantly different amounts of time on CBIS-related activities. Further analysis using multiple comparisons showed that the means for the middle and lower managers do not differ significantly from each other, whereas the mean for upper managers was significantly lower than those of the other two groups. So, in our sample, top level managers did not spend as much time using CBIS as the managers below them.
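The Kruskal-Wallis statistic used above is simple to compute by hand. The sketch below implements it in pure Python (without the tie correction) and runs it on made-up percent-time values for three illustrative groups; the study's raw data were not published, so these numbers are purely for demonstration. With no ties, `scipy.stats.kruskal` would return the same H.

```python
# Minimal Kruskal-Wallis H statistic: rank the pooled observations, then
# measure how far each group's mean rank sits from the overall mean rank.
def kruskal_wallis_h(*groups):
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)

    def avg_rank(v):
        # average rank of v in the pooled sample (handles ties)
        first = pooled.index(v) + 1
        count = pooled.count(v)
        return first + (count - 1) / 2

    h = 0.0
    for g in groups:
        rbar = sum(avg_rank(v) for v in g) / len(g)
        h += len(g) * (rbar - (n + 1) / 2) ** 2
    return 12.0 / (n * (n + 1)) * h

# Invented percent-of-week values for three management levels.
upper  = [10, 12, 15, 18, 20]
middle = [25, 28, 30, 33, 35]
lower  = [26, 29, 31, 34, 36]
print(kruskal_wallis_h(upper, middle, lower))  # 9.5
```

For these invented groups H = 9.5, which exceeds the chi-square critical value of 5.99 (df = 2, alpha = .05), so the three groups would be judged to differ.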

Frequency of CBIS use

On the surface it appears that the higher the frequency of computer use, the greater the time spent on CBIS activities. But this may not be the case for some users. For example, consider a top level manager who queries a corporate database very often to examine key pieces of data, but whose sessions each last only a few minutes. This end user will have a high frequency of use but a small value for the "percent of time" variable. On the other hand, certain end users may use the information system very rarely but for prolonged periods of time; such users will have a low frequency of use and, perhaps, a high value for the "amount of time spent" variable. On the whole, however, when data were aggregated over all groups of managers, we found the correlation between the two variables to be significant (r = .69, p = .001). Also, our data suggest that middle managers use CBIS with the highest frequency, followed by upper level managers and lower level managers.

Table 3. Percent weekly time spent on CBIS activities

                         Management level
Application           Lower    Middle    Upper
Spreadsheets           5.3%     4.5%     2.6%
Text processing        3.6      4.5      3.8
Programming            8.1      7.2      1.8
Database inquiry       6.8      7.2      3.0
Graphics               1.6      3.0      0.3
Statistical analysis   5.6      3.6      4.5
Total                 31%      30%      16%

Relationships between frequency of use, time spent, and information satisfaction

Table 4 shows the correlations between the frequency of on-the-job computer use, the percent of time spent using CBIS, and end-user satisfaction. Hypothesis H1, stating that there is a positive association between on-the-job CBIS use and satisfaction, is accepted at the .01 level for upper management end users, and at the .10 level for middle level managers. We fail to accept this hypothesis for lower level managers (r = .168). Overall, however, there is significant evidence to support H1 (r = .238, p < .05) at α = .05. Also, at the .05 level, hypothesis H2 is accepted for lower and middle level managers but not for upper management. It appears that an increase in time spent on CBIS results in higher satisfaction for lower and middle level managers but not for upper level managers.

To further assess the relationships between the CBIS usage variables (frequency of CBIS use and time spent) and the satisfaction levels of the users, we correlated the usage variables with the 12 components constituting the "satisfaction" scale. These results are given in Table 5. An examination of the correlation values shows that a significant positive association exists between how much time a user spends using CBIS and the rating (1 = very poor; 7 = excellent) of each component of the satisfaction scale. Similar associations are indicated for the frequency of CBIS usage variable, except for two components: accuracy of information provided by the CBIS, and the suitability of computer training for user needs.

Table 4. Correlations between "satisfaction" and usage variables

                       Correlation between "satisfaction" and
Management level       "Frequency"    "Percent time spent"
Upper (n = 16)           .591*             .272
Middle (n = 23)          .319***           .370**
Lower (n = 47)           .168              .482*
Overall                  .238**            .399*

*p < .01. **p < .05. ***p < .10.

Table 5. Correlations between CBIS usage variables and components of "satisfaction" scale

                                                          CBIS usage variables
Scale component                                        Frequency of use   Time spent
1. Availability/timeliness of reports and
   information provided by the system                      .213*            .399*
2. Accuracy of information provided by the system          .128             .187**
3. Volume of output                                        .162***          .238**
4. Your participation in system development                .163***          .250**
5. Ease of upgrading the system for new applications       .167***          .315*
6. Planning for future applications                        .233**           .263**
7. System's responsiveness to changing user needs          .164***          .322*
8. Attitude of managers towards end-user computing         .153***          .186***
9. Top management involvement in defining user needs       .251**           .356*
10. Application of modern database technology              .242**           .319*
11. Orientation of user training toward higher job
    description/future promotion                           .284*            .441*
12. Suitability of computer training for user needs        .107             .240**

*p < .01. **p < .05. ***p < .10.

DISCUSSION

We have shown that the higher the frequency of CBIS use at the upper and middle management levels, the more satisfied the users are with their information systems. However, this is not the case for users at the lower management level; at this level, how long a user uses the system per week is more significant. When correlations are computed without regard to the end user's position in the management hierarchy, both frequency of use and percent of time using the system are found to be significant in determining satisfaction levels. Therefore, the patterns of CBIS use should be taken into account when developing measures of end-user satisfaction. More research needs to be done in this area by sampling a large number of firms over a period of time; we encourage MIS researchers to replicate the research presented here.

Acknowledgment: The author appreciates the comments and suggestions made by the two anonymous referees. Their input significantly improved this paper.

REFERENCES

1. Ein-Dor, P.; Segev, E. Organizational context and the success of management information systems. Management Science, 24(10): 1064-1077; 1978.
2. King, W.R.; Rodriguez, J.J. Evaluating management information systems. MIS Quarterly, 2(3): 107-124; 1978.
3. Bailey, J.E.; Pearson, S.W. Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29(5): 530-545; 1983.
4. Baroudi, J.J.; Orlikowski, W.J. A short-form measure of user information system satisfaction: A psychometric evaluation and notes on use. Journal of Management Information Systems, 4(4): 44-59; 1988.
5. Ives, B.; Olson, M.H.; Baroudi, J.J. The measurement of user information satisfaction. Communications of the ACM, 26(10): 785-793; 1983.
6. Nath, R. A measure of end-user computing success. Working paper, MIS Department, Memphis State University, Memphis, TN; 1988.
7. Mischo, W.H.; Lee, J. End-user searching of bibliographic databases. Annual Review of Information Science and Technology, 22: 227-263; 1987.
8. Saracevic, T.; Kantor, P. A study of information seeking and retrieving. II. Users, questions, and effectiveness. Journal of the American Society for Information Science, 39(3): 177-196; 1988.
9. Saracevic, T.; Kantor, P. A study of information seeking and retrieving. III. Searchers, searches, and overlap. Journal of the American Society for Information Science, 39(3): 197-216; 1988.
10. Saracevic, T.; Kantor, P.; Chamis, A.Y.; Trivison, D. A study of information seeking and retrieving. I. Background and methodology. Journal of the American Society for Information Science, 39(3): 161-176; 1988.
11. Nelson, R.R.; Cheney, P.H. Training end users: An exploratory study. MIS Quarterly, 11(4): 547-559; 1987.
12. Turner, J.A. Firm size, performance, and computer use. Proceedings of the Third International Conference on Information Systems; 1982 December; University of Michigan, Ann Arbor.