Life in the middle: An analysis of information centers from the perspective of their major stakeholders


Information & Management 30 (1996) 101-109


Regina F. Bento
Merrick School of Business, University of Baltimore, 1420 N. Charles Street, Baltimore, MD 21201, USA

Abstract

Information Centers (ICs), in their mission of supporting end-user computing, find themselves in the middle of two different constituencies: users and Information Systems (IS) managers. A conceptual model, based on role theory, is presented here to explore the special challenges of "life in the middle", such as different expectations about the roles that should be performed by IC professionals, different criteria for evaluating their performance, and different perceptions of their success. These challenges were studied empirically through in-depth interviews with users, IS managers and IC managers in a random national sample of forty-seven Fortune 500 companies. The results indicate that, given the multiple expectations surrounding Information Centers, IC professionals need to be flexible in adopting different roles, skilled in coping with different sets of performance criteria, and keenly aware of the highly subjective nature of the evaluations received from their diverse constituents.

Keywords: Information centers; End-user computing management; Role expectations; Role behaviors; Role performance; Evaluation criteria

1. Introduction

Information Centers (ICs) represent a bridge between two very different sub-cultures in organizations: users and computing specialists [26]. This paper examines the special challenges of "life in the middle" of these two constituencies. These challenges influence the very notion of why ICs exist and what role they play in the management of end-user computing. Not surprisingly, there is a wide range of perceptions about ICs [9,23,25], which can be simultaneously seen as doomed to extinction [1,34], or as direct contributors to the success of end-user computing and to overall company performance [11,16].

Before the advent of ICs, the relationship between computing specialists and users was rarely peaceful [21,27]. When ICs were created, with the dual mission of supporting end-user computing and alleviating the pressure on Information Systems (IS) departments, IC professionals found themselves poised between two worlds. They were expected to speak the "language" of both IS specialists and users, reconcile their values, and find common ground between their occupational subcultures. This "in-betweenness", which is so important for understanding the challenges faced by ICs, has not yet received adequate attention in the literature. Studies on ICs (see, for example, [4,5,12,13,24,33,36]) have typically looked at ICs through the eyes of a single type of stakeholder - in some studies, the IC manager; in others, the IS manager; in still others, the users. This study departs from the single-stakeholder approach by analyzing ICs directly from the perspective of representatives of all three groups.

0378-7206/96/$15.00 © 1996 Elsevier Science B.V. All rights reserved
SSDI 0378-7206(95)00051-8

2. The conceptual model and research questions

The functions performed by ICs involve supporting users in a variety of areas: application development, consulting, data base support, training, and hardware and software acquisition [6,37]. There are two basic ways for IC professionals to help users in any of these areas: they may help by applying their knowledge in doing something for the users (Doers) or by transferring part of that knowledge to the users, thus facilitating self-help (Facilitators). The role of the IC professional will be characterized, over time, by particular combinations of the two types of behavior.

This taxonomy is inspired by Schein's analysis of the role of the consultant and his discussion of the "helping dilemma" [28,29]: how to choose between direct intervention (e.g., giving expert advice) and catalyst intervention (e.g., facilitating the help seeker's own problem-solving processes) in situations where both types of behavior are possible. Doer roles are inspired by the logic of specialization: when dealing with problems that fall within an area of expertise, someone who is a specialist in that area is more likely to achieve the desired results, with less expenditure of time, energy and other resources. Facilitator roles are inspired by the logic of self-reliance, as expressed in the old saying, "Give me a fish, and I eat for one day; teach me how to fish and I'll eat for a lifetime."

Based on role theory [3,17,19], we can assume that all the people who have a stake in the performance of IC professionals may hold expectations favoring one or the other type of helping behavior. The preferences, or normative role expectations, of IS managers and users are communicated to IC professionals. IC professionals receive those communications, with greater or lesser accuracy, and then engage in behaviors that can be characterized by a predominance of either Doing or Facilitating helping modes. These behaviors will result in some level of performance. The evaluation of performance will influence role expectations, and the cycle will repeat.

As Graen [10] points out, several "discrepancies" can disrupt this cycle; e.g., there can be problems in the processes of communication, leading to inaccurate perceptions of expectations [15]. The IC professional may be unable or unwilling to translate expectations into actual behaviors, and the behaviors may be ignored or monitored with greater or lesser accuracy. These considerations inspired the first set of research questions addressed here: What are the normative expectations surrounding the role of IC professionals? Do IS managers and users agree, or disagree, on which helping mode (Doer or Facilitator) should characterize the role of IC professionals? Which helping mode actually describes the behavior of the IC staff, according to IC managers?

When heterogeneous sets of people evaluate the performance of a focal person, they are likely to use different criteria, depending on their particular stakes. The computing specialists in the IS department have a stake in the buffering and control functions [20]: IS is likely to perceive IC professionals as successful if they shield IS from the demands of users, and if they make sure that users abide by IS standards and policies for data security, accuracy, systems documentation, and hardware and software compatibility. In contrast, users are likely to want IC professionals to respond to requests in a timely manner, while providing reliable and effective service [18,25,30]. By the same token, users are not likely to care for rules and policies that curtail their freedom. Consequently, IC professionals may find themselves trying to satisfy the conflicting criteria of their two masters: IS departments, which sometimes have formal authority over the ICs; and users, whose needs justify the very creation, and continued existence, of ICs.

Discrepancies in the evaluation of role performance may also occur even if stakeholders use the same criteria. The theories of person perception [31] make reference to a similarity bias: the tendency to like the behavior of people whom the evaluator perceives as similar to him or herself. This would imply a bias for a more favorable evaluation of ICs on the part of IS computing specialists, and a less favorable one on the part of users [7,14].

The difficulties inherent in performance appraisal in organizational contexts [22] are likely to be more pronounced for Information Centers [32]. An empirical investigation of ICs [24] found that only 25% of the ICs in their study had formal evaluation procedures. The various possible sources of discrepancies in evaluating role performance led to the second set of research questions: Do IS managers, IC managers and users apply different criteria when evaluating IC performance? If so, what are these criteria? Do different criteria lead to the same overall evaluation of the IC, from the point of view of the different stakeholders? And, when they use the same criteria, do they arrive at similar, or different, conclusions?

The two sets of research questions in this study are critical for ICs, as they face the new challenges of end-user computing management in the 1990s. The ability of Information Centers to respond to the evolving needs of their multiple constituencies will determine their role in the second half of this decade, and even their very survival. How well they are able to "live in the middle" may define whether they will end up as dynamic catalysts of the growth of end-user computing, or as quaint relics of the mid-1980s end-user computing explosion.

3. Methodology

The research questions described above were investigated in a random, cross-sectional U.S. national sample of forty-seven Fortune 500 companies, using structured interviews with the IS manager, the IC manager and typical users in each organization. Three different questionnaires were used to provide structure, one for each type of respondent. Data collection design and procedures followed Dillman's [8] "Total Design Method", which consists of a system of articulated rules intended to increase the rewards and decrease the costs of response.

Following the recommendations of Bank et al. [2] and Van Sell, Brief and Schuler [35], a clear distinction was kept between expectations of the IC roles and actual behavior. IS managers and users were asked about several situations involving a broad range of tasks performed by IC staff. For each situation, they were presented with two alternative ways for the IC staff to perform that service - one a Doer and the other a Facilitator mode - and asked which would be the most appropriate behavior. This was preceded by a disclaimer on how people vary in their expectations for user support, in order to decrease any preoccupation with the social desirability of different responses. The tasks covered areas such as application development, consulting and data-base support, helping users make decisions about training programs, and acquisition of end-user computing hardware and software. IC managers were asked about the same situations. They were confronted, however, with a different question: IC managers were asked to indicate what their staff actually did, typically, in each circumstance, regardless of the IC manager's personal feelings.

IS managers and users were asked to describe the evaluation criteria that would be most appropriate for measuring successful IC performance, and how they would rate the staff of the IC along these criteria (on a five-point ordinal scale: excellent/good/satisfactory/unsatisfactory/failing). For comparison purposes, IC managers were asked what criteria were actually used. In addition, IS managers and users were asked what grade they would give the IC if there were "report cards" for evaluating the way it was supporting EUC in their organization (where "A" was excellent, "B" was good, "C" was satisfactory, "D" was unsatisfactory and "F" was fail).

Information was also collected about the context within which role performance and evaluation took place, including: industry, size, and characteristics of the IC and users. The industry was identified by the Fortune 500 classification of the organization. Size was measured in terms of the whole organization (number of employees), the IS function (full-time IS personnel) and the Information Center (full-time staff and users supported). Questions on user characteristics involved frequency of interaction, types of services used, and level of user expertise in computing (menu, command, programming and expert functional support levels). Finally, characteristics of the IC were assessed in terms of the type of user support offered (microcomputer, timesharing or both); the background of the IC manager (Data Processing or Information Systems); IC problems and types of services; and the age of the IC and its reporting relationships.

In each organization, the data were collected in three rounds of interviews: first IS managers, then IC managers, and finally users. As expected, the real obstacles were faced in the first round of interviews. Of the 100 IS managers in the original sample, 47 interviews were completed. The non-respondents included 23 who could not be reached (for reasons like vacations, travel, other appointments, fiercely protective secretaries, wrong or disconnected numbers, etc.); 7 who were ineligible because they had no IC; and 20 who chose not to participate (typically providing reasons like "I'm sorry, I have no time for surveys"). In the second and third rounds of interviews, response rate problems were minimal. Of the 47 organizations where interviews had been obtained with IS managers, a total of 38 were completed with IC managers and 36 with users.

In order to estimate the possibility of sampling bias, the demographic characteristics of non-respondents were obtained from secondary sources and compared to those of the respondents. No evidence of bias was found by this procedure. As a further check, phone interviews were conducted with other members of the IS and IC staff and users in organizations where the intended respondents had refused to participate; the results were compared with the sample and did not show any evidence of sampling bias from non-response. The sample was finally composed of 47 organizations, in 32 of which there was complete information on all items for all types of respondents. The organizations in the sample were located all over the continental United States and represented 21 different areas of economic activity, according to the Fortune 500 classification.
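The coding of the "report card" responses onto the five-point ordinal scale can be sketched in a few lines. The numeric values (5 down to 1) and the sample grades below are illustrative assumptions, not values taken from the study's instrument.

```python
# One plausible coding of the report-card grades described above onto the
# five-point ordinal scale; the numeric mapping is an assumption for
# illustration, not the paper's own coding scheme.
GRADE_SCALE = {"A": 5, "B": 4, "C": 3, "D": 2, "F": 1}

def code_grades(grades):
    """Map letter grades to ordinal scores, skipping unrecognized entries."""
    return [GRADE_SCALE[g] for g in grades if g in GRADE_SCALE]

sample = ["A", "B", "B", "C", "F"]   # hypothetical respondent grades
scores = code_grades(sample)
print(scores)
```

Coding the grades this way preserves their order while making them usable in the descriptive comparisons reported below.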

4. Results

The ICs in the sample typically provided support for the use of microcomputers (36.8%) or for both mainframe and microcomputers (60.5%); had an IC manager with an Information Systems background (73.7% of the ICs), rather than a background in other functional business areas or the social sciences; and were relatively "mature" (55.3% were between 6 and 9 years old). The users in the sample reported frequent contacts with the IC (63.9% of the cases) and were at least at the command level of expertise; only 5.6% classified themselves as "menu" users, who simply access data through menus and do not create tailored procedures or use report generators.

Table 1
Percent of Facilitator normative role expectations and behavior, by area of IC activity

Area of IC activity                           % Facilitator     N
1. Application development
   IS managers' normative expectations            68.9          45
   Users' normative expectations                  80.6          36
   IC staff's reported behavior                   76.3          38
2. Consulting
   IS managers' normative expectations            84.4          45
   Users' normative expectations                  83.3          36
   IC staff's reported behavior                   60.5          38
3. Data base support
   IS managers' normative expectations            84.4          45
   Users' normative expectations                  80.6          36
   IC staff's reported behavior                   81.6          38
4. Training
   IS managers' normative expectations            33.3          45
   Users' normative expectations                  58.3          36
   IC staff's reported behavior                   44.7          38
5. Hardware, software acquisition
   IS managers' normative expectations            35.6          45
   Users' normative expectations                  36.1          36
   IC staff's reported behavior                   13.2          38

4.1. Role expectations and behavior

Table 1 presents the frequency distributions of IS managers' and users' normative expectations for the IC staff, as well as the IC members' reported actual behaviors, in various areas of IC activity. IS managers and users, although typically favoring Facilitator roles, varied depending on the area of IC activity: both strongly preferred Facilitator behaviors in application development, consulting, and data base support; but their preferences tilted towards Doer behaviors in the areas of management of training activities (particularly for IS managers) and recommendations for hardware and software acquisition (both IS managers and users).

In spite of the fact that the total frequency of Facilitator expectations in application development was higher for users than for IS managers, paired t-tests showed no significant difference. Paired t-tests also did not show the difference between actual behavior and IS and users' expectations to be significant. IS expectations in application development were positively associated (r = 0.4587, p < 0.01) with users' expectations in this area, but the IC managers' reports on actual behaviors were not related to either. Given that the role-making models in the literature typically picture actual role behavior as being influenced by expectations, this suggests that further studies are needed to investigate and possibly explain such findings.

In terms of consulting activities, IS managers and users displayed an even more pronounced preference for Facilitator behaviors. Paired t-tests indicated that the difference between actual behavior in consulting and IS expectations was significant at the 0.02 level (two-tailed probability). No significant associations came up between IS and users' normative expectations in consulting, nor between expectations and actual behavior.

In the area of data base support, IS managers and users again preferred Facilitator over Doer approaches. This was the area where Facilitator roles were most typically reported by IC managers as being actually performed. No significant associations were found between IS and end-users' expectations here.

Training activities showed IS managers reversing their normative expectations, favoring Doer roles about twice as much as Facilitator roles. Users also showed less preference for Facilitator roles in this area than in the previous ones. Paired t-tests showed a two-tailed probability of 0.09 that the difference between IS and users' normative expectations in this area could be due only to sampling error. Doer approaches to the management of training activities were reported by ICs as being slightly more typical, in practice, than Facilitator ones. Paired t-tests showed no significant differences with either IS or users' expectations. Again, no significant correlations were found between IS and users' expectations for the handling of training activities, nor between these expectations and actual role performance.

The reversal of normative expectations and behaviors towards Doer roles was even stronger in regard to recommendations for hardware and software acquisition. This was the area where IC managers most typically reported that Doer behaviors were actually taking place. Their tendency towards Doer roles was significantly higher than IS and end-users' expectations; paired t-tests showed a two-tailed probability of only 0.018, in both cases, that the differences could be due to mere sampling error. As in the above areas, the data showed no significant association between IS expectations, users' expectations and actual behavior in recommendations for hardware and software acquisition.

Comparing expectations and behaviors across areas of IC activity, correlations were found between IS expectations in the areas of application development and data base support (r = 0.4771, one-tailed prob. < 0.01); between IS expectations for training and users' expectations for application development (r = -0.5270, one-tailed prob. < 0.001); and between IC managers' reports of actual behaviors in application development and data base support (r = 0.4623, one-tailed prob. < 0.01).
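The paired-comparison logic used throughout this section can be sketched in code. The example below computes a paired t statistic and a Pearson correlation for two hypothetical sets of binary expectation scores (1 = Facilitator preferred, 0 = Doer preferred) from the same ten organizations; the data are invented for illustration and are not the study's.

```python
import math
from statistics import mean, stdev

# Hypothetical paired preferences from ten organizations
# (1 = Facilitator preferred, 0 = Doer preferred); invented data,
# NOT the values observed in this study.
is_mgr = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
users  = [1, 0, 1, 1, 1, 1, 1, 0, 1, 1]

# Paired t statistic: mean within-pair difference over its standard error.
diffs = [a - b for a, b in zip(is_mgr, users)]
t = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Pearson correlation between the two sets of expectations.
def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

r = pearson(is_mgr, users)
print(f"paired t = {t:.3f}, r = {r:.3f}")
```

With SciPy available, `scipy.stats.ttest_rel` and `scipy.stats.pearsonr` yield the same statistics together with the two-tailed probabilities reported in the text.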

Table 2
Criteria mentioned as most appropriate for IC evaluation

Performance criteria              Examples of responses in this category
IC responsiveness                 Speed or timeliness of response to requests; turnaround time; keeping up with demand.
Service quality                   Accuracy, quality of work, good application development.
Volume of output                  Number of completed assignments; percent use of IC vs. external consultants.
User satisfaction                 Satisfaction, good feedback, number of complaints or colossal failures.
User productivity                 Making users more productive, more efficient in their jobs.
User knowledge                    Level of computer literacy, self-sufficiency.
Interpersonal relations           Quality of interactions, good communications, rapport.
IC staff's technical knowledge    Knowledgeable staff, ability to solve problems, keeping up with technology.
Control of end-user computing     Compliance with standards and guidelines, documentation, coordination, non-redundancy of existing capabilities, cost effectiveness.


The fact that expectations for Doer and Facilitator behaviors were found to depend on the type of task may be explained by their relative frequency. Overall, Facilitator behaviors were found more appropriate for high frequency tasks. It is important for the IC to control long-term demand for such services by helping users become self-sufficient. In contrast, preferences moved towards Doer behaviors for tasks of a relatively low-frequency, non-repetitive nature, where the logic of specialization is likely to prevail over the logic of self-reliance.

4.2. Evaluation of role performance

Criteria related to controlling EUC were very important for IS managers and irrelevant for users, with IC managers falling somewhere in between. Criteria related to IC responsiveness to user needs were much more important for users than for either IS or IC managers. Table 2 shows the nine categories of answers into which the open-ended questions were reclassified. Table 3 shows the frequency distributions of these categories.

Table 3
Frequency distribution of criteria mentioned as most appropriate for IC evaluation

                                  Percent of times cited as best criterion by
Performance criteria              IS managers    End-users    IC managers
IC responsiveness                     12.0          39.8          11.1
Service quality                       11.0          17.2          23.6
Volume of output                      11.0           3.6           8.3
User satisfaction                     18.0           9.6          29.2
User productivity                     15.0           3.6           0.0
User knowledge                        10.0           4.8           4.2
Interpersonal relations                5.0           7.2           9.7
IC staff's technical knowledge         0.0           7.2           6.9
Control of end-user computing         18.0           6.0           6.9
Total                                 100%          100%          100%
                                   (N = 100)      (N = 83)      (N = 72)

Note: more than one criterion was mentioned by each respondent.

Control was not an important concern for the users, and this supports the widespread assumption in the trade literature that users value the freedom made possible by the advent of EUC. Users did not seem to be as inclined as IS to choose "buffering from user dissatisfaction" issues as performance criteria for the IC.

Compared to IS managers and the users, the IC respondents seem to be truly "living in the middle". The criteria they most frequently mentioned fell in the category of prevention of user dissatisfaction, showing more concern in this area than IS did. This makes it particularly ironic, however, to see that IC managers were as far from matching the users' preoccupation with responsiveness as IS managers were. On the other hand, IC managers are generally attuned to users' concerns for service quality; this was the second most mentioned category of criteria for both IC and users. IC managers also seem to be quite attuned to the users in the relatively low importance they attach to the control function of Information Centers. Given that ICs are typically part of the formal structure of IS departments, and that the control function was found to be so important for IS managers, this discrepancy may be a source of conflict and misunderstanding.

Table 4 presents descriptive statistics for the evaluations given by IS managers and users to the IC staff of their organizations, in terms of their chosen top two criteria for performance appraisal and of the general grade they would give the IC on a report card for performance evaluation. IS specialists and users differed in their evaluation of the performance of the same IC staff. Interestingly enough, no significant correlations were found between the views from each of these two constituencies. Even though the evaluations were significantly correlated within subjects, IS evaluations were not found to be associated with users' evaluations of the same ICs. Paired t-test analysis revealed that, for any given IC, IS evaluations of the IC along IS-chosen criteria tended to be more favorable than the users' evaluations along user-chosen criteria (two-tail prob. = 0.056).

Table 4
Frequency distribution of IC evaluation by IS managers and end-users

                        Percent of IS managers' evaluations of IC
Evaluation              1st criterion   2nd criterion   Overall grade
(A) Excellent                28.9            31.1            20.0
(B) Good                     48.9            40.0            60.0
(C) Satisfactory             22.2            26.7            20.0
(D) Unsatisfactory            0.0             2.2             0.0
(F) Fail                      0.0             0.0             0.0
Total                        100%            100%            100%
                           (N = 45)        (N = 45)        (N = 45)

                        Percent of end-users' evaluations of IC
Evaluation              1st criterion   2nd criterion   Overall grade
(A) Excellent                33.3            12.8             8.5
(B) Good                     36.1            31.9            46.8
(C) Satisfactory             16.7            23.4            17.0
(D) Unsatisfactory           11.1             6.4             4.3
(F) Fail                      2.8             0.0             0.0
Total                        100%            100%            100%
                           (N = 36)        (N = 35)        (N = 36)

5. Conclusions

This study revealed that IC professionals face many expectations from their various stakeholders, where some behaviors are deemed more appropriate than others, depending on the nature of the task. It also proposed that the two main IC constituencies, users and computing specialists in IS departments, prefer different yardsticks for measuring role performance: IS managers want to tap into the IC's buffering and control functions, while users focus on IC responsiveness. Finally, it was found that "beauty is in the eye of the beholder", i.e., the same performance of IC professionals was seen more favorably by one set of stakeholders (computing specialists) than by others (users). The results emphasize the importance of multiple perspectives when studying ICs, rather than relying on a single type of stakeholder's perceptions of how other, possibly very different, constituencies perceive the IC.

There are several significant implications. First, IC managers should consider that role expectations favored Facilitator behaviors, but not for all tasks. This implies a need for flexibility on the part of IC staff. Moreover, given that many IC staffers come from an IS tradition of Doer behaviors, this may also signify the need for the development of the interpersonal and communication skills required for Facilitator behaviors. Second, the results should alert IC managers to the fact that they may be responding to different criteria for performance evaluation, and that these may even differ from the ones the IC would consider most appropriate. IC managers should seek to identify the evaluation criteria used by their stakeholders and use this information when establishing IC goals and priorities. Third, IC managers should realize that the evaluation of IC performance may be highly subjective, as no significant associations were found between IS's and users' evaluations of the same ICs. The apparent lack of an objective basis for performance evaluation highlights the influence that "soft", subjective factors (such as role expectations, outcome expectancies, and power balance) may have on the evaluation of performance.

Acknowledgements

Financial support from M.I.T.'s Center for Information Systems Research and the Sloan School of Management is gratefully acknowledged.

References

[1] Anderson, P., "IS Groups: Dinosaur in an End-User World?", Datamation, Vol. 40, No. 6, March 15, 1994, p. 88.
[2] Bank, B.J., Biddle, B.J., Keats, D.M. and Keats, J.A., "Normative, Preferential and Belief Modes in Adolescent Prejudice", Sociological Quarterly, Vol. 18, 1977, pp. 574-588.
[3] Biddle, B.J. and Thomas, E.J., Role Theory: Concepts and Research, Wiley, New York, 1966.
[4] Brancheau, J.C., Vogel, D.R. and Wetherbe, J.C., "An Investigation of the Information Center from the Users' Perspective", Data Base, Vol. 17, 1985, pp. 4-19.
[5] Carr, H.H., Managing End User Computing, Prentice-Hall, Englewood Cliffs, NJ, 1988.
[6] Christoff, K.A., Managing the Information Center, Scott, Foresman/Little Brown, Glenview, IL, 1990.
[7] Crepeau, R.G., Crook, C.W. and Goslar, M.D., "Career Anchors of Information Systems Personnel", Journal of Management Information Systems, Vol. 9, No. 2, Fall 1992, pp. 145-160.
[8] Dillman, D.A., Mail and Telephone Surveys: The Total Design Method, Wiley, New York, 1978.
[9] Fuller, M.K. and Swanson, E.B., "Information Centers as Organizational Innovation: Exploring the Correlates of Implementation Success", Journal of Management Information Systems, Vol. 9, No. 1, Summer 1992, pp. 47-68.
[10] Graen, G., "Role-Making Processes within Complex Organizations", in: M.D. Dunnette (Ed.), Handbook of Industrial and Organizational Psychology, Rand McNally, Chicago, 1976, pp. 1201-1245.
[11] Guimaraes, T. and Igbaria, M., "Exploring the Relationship between IC Success and Company Performance", Information and Management, Vol. 26, No. 3, March 1994, pp. 133-142.
[12] Head, R.V., Planning and Implementing Information Resource Centers for End-User Computing, QED Information Sciences, Wellesley, MA, 1985.
[13] Henry, L., Cassidy, J. and Malley, J., "The Information Resource Center: Control Mechanism for the End-User Environment", Journal of Computer Information Systems, Vol. 34, No. 2, Winter 1993-1994, pp. 47-52.
[14] Igbaria, M., Greenhaus, J. and Parasuraman, S., "Career Orientations of MIS Employees: An Empirical Analysis", MIS Quarterly, Vol. 15, No. 2, June 1991, pp. 151-170.
[15] Igbaria, M. and Guimaraes, T., "Antecedents and Consequences of Job Satisfaction among Information Center Employees", Journal of Management Information Systems, Vol. 9, No. 4, Spring 1993, pp. 145-174.
[16] Khan, E.H., "The Effects of Information Centers on the Growth of End-User Computing", Information and Management, Vol. 23, No. 5, November 1992, pp. 279-290.
[17] Kahn, R.L., Wolfe, D.M., Quinn, R.P., Snoek, J.D. and Rosenthal, R.A., Organizational Stress: Studies in Role Conflict and Ambiguity, Wiley, New York, 1964.
[18] Karten, N., Mind Your Business: Managing the Impact of End-User Computing, QED Information Sciences, Wellesley, MA, 1990.
[19] Katz, D. and Kahn, R.L., The Social Psychology of Organizations, Wiley, New York, 1966.
[20] Leitheiser, R.L., "MIS Skills for the 1990s: A Survey of MIS Managers' Perceptions", Journal of Management Information Systems, Vol. 9, No. 1, Summer 1992, pp. 69-92.
[21] Markus, M.L. and Bjorn-Andersen, N., "Power over Users: Its Exercise by Systems Professionals", Information Systems Working Paper No. 4-87, University of California, Graduate School of Management, Los Angeles, 1986.
[22] Murphy, K.R. and Cleveland, J.N., Performance Appraisal: An Organizational Perspective, Allyn and Bacon, Boston, 1991.
[23] Nardi, B.A., A Small Matter of Programming: Perspectives on End-User Computing, MIT Press, Cambridge, MA, 1993.
[24] O'Shea, K. and Muralidhar, K., "The Function and Management of Information Centers", Journal of Systems Management, Vol. 41, No. 12, December 1990, pp. 7-10.
[25] Regan, E.A. and O'Connor, B.N., End-User Information Systems: Perspectives for Managers and Information Systems Professionals, Macmillan, New York, 1994.
[26] Robey, D., Smith, L.A. and Vijayasarathy, L.R., "Perceptions of Conflict and Success in Information Systems Development Projects", Journal of Management Information Systems, Vol. 10, No. 1, Summer 1993, pp. 123-140.
[27] Rockart, J.F. and Flannery, L.S., "The Management of End-User Computing", Communications of the ACM, Vol. 26, 1983, pp. 776-784.
[28] Schein, E.H., "The Role of the Consultant: Content Expert or Process Facilitator?", Personnel and Guidance Journal, February 1978, pp. 339-343.
[29] Schein, E.H., Process Consultation: Lessons for Managers and Consultants, Vol. II, Addison-Wesley, Reading, MA, 1987.
[30] Schiffman, S.J., Meile, L.C. and Igbaria, M., "An Examination of End-User Types", Information and Management, Vol. 22, No. 4, April 1992, pp. 207-216.
[31] Schneider, D.J., Hastorf, A.H. and Ellsworth, P., Person Perception, Addison-Wesley, Reading, MA, 1979.
[32] Stoak Saunders, C. and Williams Jones, J., "Measuring Performance of the Information Systems Function", Journal of Management Information Systems, Vol. 8, No. 4, Spring 1992, pp. 63-82.
[33] Sumner, M., "Organization and Management of the Information Center: Case Studies", ACM Proceedings of the Computer Personnel and Business Data Processing Research Conference, Minneapolis, MN, 1985, pp. 38-49.
[34] Tayntor, C.B., "New Challenges or the End of EUC?", Information Systems Management, Vol. 11, No. 3, Summer 1994, pp. 86-89.
[35] Van Sell, M., Brief, A.P. and Schuler, R.S., "Role Conflict and Role Ambiguity: Integration of the Literature and Directions for Future Research", Human Relations, Vol. 34, 1981, pp. 43-71.
[36] Wetherbe, J.C. and Leitheiser, R.L., "Information Centers: A Survey of Services, Decisions, Problems and Successes", Journal of Information Systems Management, Vol. 2, 1985, pp. 3-10.
[37] Zwass, V., Management Information Systems, Wm. C. Brown, Dubuque, IA, 1992.


Regina F. Bento is an Assistant Professor at the Merrick School of Business, University of Baltimore. She started her career as a psychiatrist, with an M.D. degree from the Federal University of Rio de Janeiro (Brazil). Interested in learning more about the relationship between work behaviors and psychological processes, she went on to pursue graduate studies in administration (M.S., Federal University of Rio de Janeiro and Ph.D., Sloan School of Management, M.I.T.). Her research focuses on organizational culture and change, looking at the psychological processes that surround innovations in information technology, performance evaluation and compensation.
