Information & Management 38 (2001) 499–506
IS service performance: self-perceptions and user perceptions

James J. Jiang (a,1), Gary Klein (b,*), Jinsheng Roan (c), Jim T.M. Lin (d)

(a) Department of Management Information Systems, University of Central Florida, P.O. Box 161400, Orlando, FL 32816-1400, USA
(b) College of Business and Administration, The University of Colorado, 1420 Austin Bluffs Parkway, Colorado Springs, CO 80933-7150, USA
(c) Department of Information Management, College of Management, National Chung Cheng University, Min-Hsiung, Chia-Yi 62117, Taiwan
(d) Department of Information Management, College of Management, National Central University, Chung-Li, Taiwan

* Corresponding author. Tel.: +1-719-262-3157; fax: +1-719-262-3494. E-mail addresses: [email protected] (J.J. Jiang), [email protected] (G. Klein), [email protected] (J. Roan), [email protected] (J.T.M. Lin).
1 Tel.: +1-407-823-3174; fax: +1-407-823-2389.

Accepted 18 December 2000
Abstract

User evaluation of the quality of an information system (IS) and its service is often a major factor in the performance evaluation of the IS staff. While the views of users are critical, user evaluation may be incomplete and prompt inappropriate decisions regarding the delivery of the IS service. Three hundred and sixty degree evaluation techniques strive to avoid unjustified actions by eliciting feedback from multiple stakeholders to calibrate expectations more effectively and set the most appropriate future goals. Evidence from a survey of 193 IS users and IS staff members clarifies the helpfulness of the principles of 360° feedback with regard to IS staff performance. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: IS staff evaluation; 360° feedback; Social perception; IS service
1. Introduction

There are at least two groups of stakeholders in any information system (IS) service or system: staff and users. When there are multiple stakeholders, the stakeholder model argues that there is no prima facie priority of one set of interests and benefits over another [7,9]. As a result, Glass [15] argues that a new success measure is necessary: a consonance measure that incorporates the organization's needs, IS user expectations, and IS staff needs. Understanding the differences between groups of stakeholders is essential in securing the success of the system service and other projects.

IS success in the eyes of IS staff often has to do with their self-perception of their job performance and learning experiences [23]. Users, however, evaluate performance in terms of how well their needs are satisfied. There is a conflict in this case, and the final evaluation of either a system or service depends solely on the criteria identified by the evaluators. So, who should evaluate the service provided by the IS staff? Should it be IS users alone, or IS staff as well? Many researchers argue that IS users are the customers, and thus the legitimate ones to make the final judgment. Linberg's findings do not dispute the "effectiveness" of user evaluation, but rather raise a question about performance-related criteria.
Does the IS user perception of IS staff performance match the perceptions of the IS staff on the same success criteria? If so, incorporating IS staff job performance criteria into the final success measure will not make any difference to the IS user evaluation: IS users will continue to perceive IS staff performance in a fashion consistent with their perceptions on certain success criteria (e.g., user satisfaction, service quality).

Three hundred and sixty degree feedback processes hold that job performance ratings must represent the key constituencies from the full circle of relevant stakeholder viewpoints: subordinates, peers, supervisors, and customers [24]. This method has grown in popularity in a variety of disciplines, because organizations recognize the multidimensional nature of jobs as seen by different constituencies, as well as the value of constituent perceptions as guides in job performance evaluations [4,30]. In short, 360° feedback practice suggests that IS staff should be included in the evaluation process.

An organization must be convinced that the use of a feedback process warrants the time and expense. If there is a relationship between the evaluations of the service product by both users and staff, there is less need for a complex evaluation process. To this end, we examine whether IS user perceptions of IS staff job performance are predicted solely by user evaluations of service quality and satisfaction, or whether IS staff self-perceptions also serve as predictors. Likewise, we check whether user evaluations serve as predictors of IS staff self-evaluations. If the evaluators from one perspective serve as predictors for the other perspective, there would seem to be no need for 360° evaluative procedures. If there are differences, then this evaluation is important and can be used to create more realistic expectations on the part of the user, as well as to develop a more accurate sense of goal achievement on the part of the IS staff.

2. Research hypotheses

Many studies identify factors that contribute to IS success [12]. Six major dimensions of success can be distinguished: system quality, information quality, system use, user satisfaction, individual impact, and organizational impact. These dimensions
mainly lie in the IS user view and are oriented toward a delivered product. Other researchers argue that a system consists of two components: the IS service and the IS product [20,26]. Zmud [33] suggests that IS service is of increasing importance in today's system development. User satisfaction is often the most widely adopted measure of system success in the IS literature; it includes the service components of user involvement and the IS staff relationship. Many researchers have argued that today IS needs to look beyond system building as its major contribution to organizational productivity [21,32]. IS units must examine how they can improve the quality of IS service to increase IS user productivity and, consequently, that of the organization. We therefore consider additional items of service quality in this study. The specific service quality measure includes five dimensions: tangibles, reliability, responsiveness, assurance, and empathy. Overall, an effective measure of performance for the IS staff is a rating of job performance.

Different stakeholders can be expected to differ in their evaluations on these measures. Social perception theory posits a cognitive process in which individuals notice, encode, store, and later retrieve information about others [5,27,28]. According to the social perception model, people develop their own schema, a cognitive framework for understanding the external world [29]. Schemas play an important part in our understanding and interpretation of information about other groups and other individuals whom we encounter. Several decades of research on social perception leave little doubt that this is a complex process. Furthermore, studies suggest that different working environments (e.g., an IS vs. an accounting department) and individual differences (e.g., education, knowledge, and position) often influence one's schema development; this is one reason that people perceive things differently [1,3,17]. In other words, unless two individuals have similar schemas about the same object, they will probably differ in their perceptions.

We would expect systematic differences in the perceptions of IS users and IS staff; they have different working environments, positions, education, and skills. For example, Jiang and Klein [19] found that IS users and IS staff place different weights on system success criteria. Klein et al. [22] discovered a
gap between IS user and IS staff perceptions of IS staff performance. We therefore hypothesize:

H1: The IS user perception of IS staff job performance is predicted by the user evaluation of service quality and satisfaction, and not by the IS staff evaluation of service quality or satisfaction.

H2: The IS staff perception of their job performance is predicted by the IS staff evaluation of service quality and satisfaction, and not by the IS user evaluation of service quality or satisfaction.
3. Research methodology

3.1. Respondents

The sample involved interaction with 200 IS users and 200 IS professionals in different organizations in the US. The authors or graduate assistants first contacted the IS users by telephone. Each user was asked to complete a slightly modified version of a well-known questionnaire, described below, and to find an IS professional in their organization to complete a paired response. The forms and self-addressed return envelopes for each respondent were then mailed. An identification code allowed the matching of each user/IS professional pair. A total of 193 questionnaire pairs were returned, for a response rate of 96% (follow-up calls were made if subjects did not mail back the questionnaires within 3 weeks).

Demographic information on the respondents is shown in Table 1. A variety of organizational sizes are represented, as are levels of position and authority within the organizations. There are more men than women, and the respondents are relatively young. To be certain that these two traits do not affect responses, the scales on the instruments were treated as dependent variables in a MANOVA, with gender and age categories in turn as independent variables. No relationships were found. The same tests were conducted for the position and organization size variables, and again no relationship was found. This gives us some confidence that the results are not biased by the responding sample. The external validity of the findings would be threatened if the sample itself were systematically biased, e.g., if the responses were obtained largely from better or more poorly performing IS staff.
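To illustrate the kind of bias check described above, the sketch below runs a one-way MANOVA with the instrument scales as dependent variables and a demographic category as the grouping factor. This is our reconstruction, not the authors' code; the data file and column names (service_quality, satisfaction, job_performance, gender) are hypothetical placeholders, assuming statsmodels' MANOVA implementation.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical survey data: one row per respondent, with overall scale
# scores and demographic categories (all column names are placeholders).
df = pd.read_csv("survey_responses.csv")

# Instrument scales as dependent variables, gender as the grouping factor.
fit = MANOVA.from_formula(
    "service_quality + satisfaction + job_performance ~ C(gender)",
    data=df,
)
print(fit.mv_test())  # Wilks' lambda and related statistics; non-significant
                      # results suggest the trait does not bias responses
```

Re-running the model with age category, position, or organization size as the factor would mirror the full set of checks reported above.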
Table 1
Demographics

                                     IS staff    IS user
Gender
  Female                                   64         79
  Male                                    127        112
  No response                               2          2
Age
  35 and under                             99        131
  Over 35 and under 45                     53         33
  45 and over                              38         27
  No response                               3          2
Work experience
  10 years and under                       89        112
  Over 10 and under 20 years               72         57
  20 years and over                        30         21
  No response                               2          3
Managerial position
  Department/division manager              71         63
  Assistant manager                        29         39
  Professional                             88         89
  No response                               5          2
The mean of IS staff performance is 4.24, with a median of 4.22, skewness of 1.38, and kurtosis of 4.19. The responses indicate a good distribution of IS staff performance, because the mean and the median are similar, skewness is less than 2, and kurtosis is less than 5 [14]. Overall, IS staff performance-related and user satisfaction bias seems unlikely.

3.2. Constructs

3.2.1. Perceived delivered service quality

We modified the 1991 SERVQUAL instrument of Parasuraman et al. [25] slightly, so that it could measure the respondents' perceived IS service level. Although SERVQUAL has been validated in previous studies, we examined the construct using confirmatory factor analysis (CFA). That is, if the model provides a reasonably good approximation of reality, it should provide a good fit to the data. The CFA for the perceived delivered IS service quality measure results in a χ²/d.f. of 2.16 (<5 is recommended), a comparative fit index (CFI) of 0.92 (>0.90 recommended), an adjusted goodness-of-fit index (AGFI) of 0.77 (>0.80 recommended), and a root mean square residual (RMR) of 0.04 (<0.10 recommended) [2]. Thus, the measures represent a reasonably good fit for the measurement model.
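For illustration only, a CFA of this kind can be specified in a few lines. The sketch below is our reconstruction, not the authors' analysis, assuming the semopy package and hypothetical item column names (tan1 through emp5); the exact set of fit indices reported by calc_stats may vary with the package version.

```python
import pandas as pd
from semopy import Model, calc_stats

# Five-factor SERVQUAL measurement model in lavaan-style syntax; each
# latent dimension loads on its questionnaire items (placeholder names).
MODEL_DESC = """
Tangibles      =~ tan1 + tan2 + tan3
Reliability    =~ rel1 + rel2 + rel3 + rel4 + rel5
Responsiveness =~ res1 + res2 + res3 + res4
Assurance      =~ ass1 + ass2 + ass3 + ass4
Empathy        =~ emp1 + emp2 + emp3 + emp4 + emp5
"""

items = pd.read_csv("servqual_items.csv")  # one row per respondent
model = Model(MODEL_DESC)
model.fit(items)

# Fit statistics such as chi-square, degrees of freedom, CFI, and AGFI,
# to be compared against the thresholds cited in the text.
print(calc_stats(model).T)
```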
Convergent validity is demonstrated when different instruments are used to measure the same construct and the scores from these different instruments are strongly correlated [8]. Convergent validity can be assessed by reviewing the t-tests for the factor loadings (each loading should be greater than twice its standard error). The t-tests for the indicator loadings are shown in Table 2. The results show that the construct demonstrates high convergent validity, since all t-values are significant at the 0.01 level.

Discriminant validity is implied when measures of each construct converge on their respective true scores, which are unique from the scores of other constructs [10]. Discriminant validity is assessed using the confidence interval test [13]: for each pair of factors, a confidence interval of ±2 standard errors is calculated around the correlation between the two factors, and discriminant validity is demonstrated if this interval does not include 1.0. Discriminant validity for the service quality construct is supported, since no interval includes 1.0 for any pair of factors. We also examined the internal consistency reliability of the construct using the Cronbach α value, which will be high if the various items that constitute the construct are strongly correlated with one another [11]. All values exceed the recommended level of 0.70.
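Both checks reduce to short calculations. A minimal sketch, assuming item scores in a NumPy array and an estimated factor correlation with its standard error (the numbers shown are illustrative, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def discriminant_valid(phi: float, se: float) -> bool:
    """Confidence interval test [13]: discriminant validity is supported
    when the interval phi +/- 2*SE around the factor correlation excludes 1.0."""
    return not (phi - 2 * se <= 1.0 <= phi + 2 * se)

# Illustrative values: a correlation of 0.62 with SE 0.07 gives the
# interval (0.48, 0.76), which excludes 1.0, so validity is supported.
print(discriminant_valid(0.62, 0.07))  # True
```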
Table 2
Perceived delivered service quality (standardized loadings)

Tangibles (Cronbach α = 0.77)
  IS physical facilities are visually appealing: loading 0.70, t = 9.93*
  IS employees are well dressed and neat in appearance: loading 0.59, t = 8.16*
  The appearance of the physical facilities of IS is in keeping with the kind of services provided: loading 0.88, t = 12.99*

Reliability (Cronbach α = 0.90)
  When IS promises to do something by a certain time, it does so: loading 0.81, t = 13.27*
  When users have a problem, IS shows a sincere interest in solving it: loading 0.82, t = 13.47*
  IS is dependable: loading 0.84, t = 14.04*
  IS provides its services at the times it promises to do so: loading 0.86, t = 14.70*
  IS insists on error-free records: loading 0.65, t = 9.75*

Responsiveness (Cronbach α = 0.83)
  IS tells users exactly when services will be performed: loading 0.68, t = 10.35*
  IS employees give prompt service to users: loading 0.83, t = 13.63*
  IS employees are always willing to help users: loading 0.78, t = 12.58*
  IS employees are never too busy to respond to users' requests: loading 0.65, t = 9.84*

Assurance (Cronbach α = 0.85)
  The behavior of IS employees instills confidence in users: loading 0.83, t = 13.49*
  Users feel safe in their transactions with IS's employees: loading 0.85, t = 14.04*
  IS employees are consistently courteous with users: loading 0.72, t = 11.05*
  IS employees have the knowledge to do their job well: loading 0.66, t = 9.78*

Empathy (Cronbach α = 0.86)
  IS gives users individual attention: loading 0.74, t = 11.33*
  IS has operation hours convenient to all their users: loading 0.61, t = 8.87*
  IS has employees who give users personal attention: loading 0.78, t = 12.31*
  IS has the users' best interest at heart: loading 0.77, t = 11.97*
  Employees of IS understand the specific needs of its users: loading 0.81, t = 13.01*

* Indicates significance at the p < 0.05 level.
3.2.2. User satisfaction

The instrument used to measure user satisfaction is adapted from Baroudi and Orlikowski [6]. While the validity and reliability of the UIS measure are widely recognized in the IS field, we also subjected the UIS measure to a confirmatory factor analysis. The final model included three latent variables (with 11 items). CFA results indicate a reasonably good fit of the model, with χ²/d.f. = 4.39, CFI = 0.85, AGFI = 0.72, and RMR = 0.06. Similarly, convergent validity was assessed by reviewing the t-tests for the factor loadings. The results show that the construct demonstrates high convergent validity, since all t-values are significant at the 0.01 level (see Table 3). In addition, Cronbach α reliability tests for the UIS dimensions and items were conducted. The user satisfaction construct maintains acceptable reliability coefficients (α above 0.70). Next, a formal test of discriminant validity was performed in the same way as for the SERVQUAL measurement model. Discriminant validity for the construct is supported, since no confidence interval includes 1.0 for any pair of factors.

3.2.3. Job performance

The job performance scale (JPS) was developed by Touliatos et al. [31]. Using a five-point Likert scale (1 = unsatisfactory, 3 = neutral, 5 = very satisfactory), personnel self-rated their performance on such characteristics as cooperation/attitude, job knowledge/skills, and quality of work.
This instrument has also been widely adopted by IS personnel researchers [16,18]. Before applying the JPS, we examined the measure's reliability and validity by conducting a CFA. The measurement model showed a reasonably good fit, as indicated by χ²/d.f. = 2.37, CFI = 0.94, AGFI = 0.83, and RMR = 0.03. The Cronbach α internal consistency reliability values for each factor exceeded the recommended minimum (0.70). Furthermore, the significance of the loading coefficients (t-value > 2.00 and p-value < 0.05) supported the convergent validity of the scale (see Table 4). Discriminant validity was tested using confidence interval tests; the results support the discriminant validity of the job performance scale.

3.2.4. Summary

This series of tests provides strong support for the reliability and validity of the constructs we used to measure perceived IS service quality, user satisfaction, and IS staff job performance.

4. Results

To test the hypotheses, two independent regressions were conducted. The dependent variable for H1 is IS user perceptions of IS staff job performance.
Table 3
User satisfaction (standardized loadings)

Knowledge and involvement (Cronbach α = 0.76)
  Processing of request for changes to existing systems: loading 0.57, t = 6.49*
  Degree of IS training provided to users: loading 0.65, t = 7.69*
  Users' understanding of systems: loading 0.73, t = 8.94*
  User feelings of participation: loading 0.60, t = 6.94*
  Time required for new systems development: loading 0.68, t = 8.13*

IS staff service/relation (Cronbach α = 0.79)
  Relationship with IS professionals: loading 0.70, t = 8.34*
  Attitude of the IS professionals: loading 0.74, t = 9.07*
  Communication with IS professionals: loading 0.80, t = 9.98*

Information product (Cronbach α = 0.83)
  Reliability of output information: loading 0.85, t = 11.08*
  Precision of output information: loading 0.75, t = 9.42*
  Completeness of the output information: loading 0.79, t = 10.05*

* Indicates significance at the p < 0.05 level.
Table 4
Job performance (standardized loadings)

IS staff commitment (Cronbach α = 0.88)
  Cooperation: loading 0.67, t = 7.78*
  Loyalty to organization: loading 0.87, t = 9.34*
  Commitment to job: loading 0.72, t = 8.19*
  Loyalty to supervisor: loading 0.72, t = 8.31*
  Commitment to organization: loading 0.85, t = 9.25*

Work quality (Cronbach α = 0.88)
  Quality of work: loading 0.83, t = 5.13*
  Accuracy: loading 0.79, t = 5.07*
  Ability: loading 0.76, t = 5.03*
  Promotability: loading 0.74, t = 5.01*
  Job knowledge: loading 0.73, t = 4.99*

Job skill (Cronbach α = 0.77)
  Communications skills: loading 0.77, t = 5.79*
  Creativity: loading 0.69, t = 5.64*
  Planning: loading 0.72, t = 5.71*

* Indicates significance at the p < 0.05 level.
Table 5
Regression results

Dependent variable: IS user perception of IS staff job performance (R² = 0.32)

  Independent variable                            Coefficient  t-value  p-value
  IS staff's evaluation of IS service quality        0.04        0.52     0.60
  IS staff's evaluation of user satisfaction         0.04        0.56     0.57
  IS user's evaluation of IS service quality         0.43        5.72     0.00*
  IS user's evaluation of user satisfaction          0.18        2.14     0.03*

Dependent variable: IS staff perception of IS staff job performance (R² = 0.29)

  Independent variable                            Coefficient  t-value  p-value
  IS staff's evaluation of IS service quality        0.45        6.22     0.00*
  IS staff's evaluation of user satisfaction         0.20        2.09     0.04*
  IS user's evaluation of IS service quality         0.02        0.26     0.80
  IS user's evaluation of user satisfaction          0.03        0.32     0.75

* Indicates significance at the p < 0.05 level.
The dependent variable for H2 is IS staff perceptions of their own job performance. The independent variables for both regression models are the IS user evaluations of service quality and satisfaction and the IS staff evaluations of service quality and satisfaction. The results are shown in Table 5.

The results indicate that the IS user perception of IS staff job performance (H1) is significantly related to the user's own evaluation of service quality (coefficient 0.43, p < 0.05) and satisfaction (0.18, p < 0.05), but not significantly related to the IS staff evaluation of service quality or satisfaction. Therefore, H1 is supported. The IS staff's perception of their own job performance is significantly related to their own evaluation of service quality (0.45, p < 0.05) and user satisfaction (0.20, p < 0.05), but not to the IS user evaluation of service quality or satisfaction. Thus, H2 is also supported.
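The two regressions are standard ordinary least squares models and could be reproduced along the following lines; this is a sketch assuming statsmodels and hypothetical column names for the paired evaluation scores, not the authors' original analysis code.

```python
import pandas as pd
import statsmodels.api as sm

pairs = pd.read_csv("paired_evaluations.csv")  # one row per user/staff pair

# The same four predictors enter both models: each party's evaluation of
# IS service quality and of user satisfaction (placeholder column names).
X = sm.add_constant(pairs[["staff_service_quality", "staff_satisfaction",
                           "user_service_quality", "user_satisfaction"]])

# H1: IS user rating of IS staff job performance as the dependent variable.
h1 = sm.OLS(pairs["user_rated_performance"], X).fit()
# H2: IS staff self-rating of job performance as the dependent variable.
h2 = sm.OLS(pairs["staff_rated_performance"], X).fit()

print(h1.summary())  # significant user-side coefficients only would support H1
print(h2.summary())  # significant staff-side coefficients only would support H2
```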
5. Conclusions

Our purpose was to provide empirical evidence on the most effective constituencies to provide IS staff evaluation, by examining IS staff job performance from both the IS staff and IS user views. The data included results from 193 paired IS user and IS staff evaluations. The results indicate that IS staff perceptions of their own job performance could be predicted by their own evaluations of service quality and user satisfaction, but not by IS user evaluations of service quality and satisfaction. Similarly, IS user perceptions of IS staff job performance could be predicted by their own evaluations of service quality and satisfaction, but not by IS staff evaluations of service quality and satisfaction. The results imply consistency of measures within evaluation groups, but not across evaluation groups.

The implications are clear. If IS staff are not involved in the evaluation process, IS users will judge the service either a success or a failure consistent with their evaluations of other criteria. The problem that Linberg identifies will persist: a service (or system) that one group considers poor could be considered successful by the IS staff. Without a 360° process, we have improper feedback (and may not recognize some quality characteristics in the performance of the IS staff). IS staff dissatisfaction may result if their evaluations of success are not taken into account. This problem is amplified by evidence showing that there is often a disconnect between IS staff and IS users in their evaluations of criteria [22]. Unless two people or groups have the same schema, one can expect a gap between those two people (or groups) in their perceptions. Three hundred and sixty degree feedback attempts to close this gap by providing information on how well (or poorly) one party is viewed by others, in order to develop a more accurate sense of goal accomplishment and improvement. Providing a better measure of IS success, a consonance of IS user, IS management, IS staff, and organizational goals and needs, may be one step toward improving system services. To achieve consonance through the development process, a 360° feedback evaluation system and supporting communications may be necessary to reflect all the relevant stakeholder concerns and positions.

References

[1] S.C. Ainley, G. Becker, L. Coleman (Eds.), The Dilemma of Difference, Plenum Press, New York, NY, 1986.
[2] J.C. Anderson, D.W. Gerbing, Structural equation modeling in practice: a review and recommended two-step approach, Psychological Bulletin 103 (3), 1988, pp. 411–423.
[3] R.D. Arvey, J.E. Campion, The employment interview: a summary of recent research, Personnel Psychology 35, 1982, pp. 281–322.
[4] S.J. Ashford, Self-assessments in organizations: a literature review and integrative model, in: L.L. Cummings, B.M. Staw (Eds.), Research in Organizational Behavior, Vol. 11, JAI Press, Greenwich, CT, 1989, pp. 133–174.
[5] R.A. Baron, D. Byrne, Social Psychology: Understanding Human Interaction, Allyn and Bacon, Boston, MA, 1991.
[6] J.J. Baroudi, W.J. Orlikowski, A short form measure of user information satisfaction: a psychometric evaluation and notes on use, Journal of Management Information Systems 4 (4), 1988, pp. 45–59.
[7] A.G. Bedeian, R.F. Zammuto, Organizations: Theory and Design, Dryden, Chicago, IL, 1991.
[8] D.T. Campbell, D.W. Fiske, Convergent and discriminant validation by the multitrait–multimethod matrix, Psychological Bulletin 56, 1959, pp. 81–105.
[9] A.H. Church, D.W. Bracken, Advancing the state of the art of 360-degree feedback, Group and Organization Management 22 (2), 1997, pp. 149–161.
[10] G.A. Churchill, A paradigm for developing better measures of marketing constructs, Journal of Marketing Research 16, 1979, pp. 64–73.
[11] T.D. Cook, D.T. Campbell, Quasi-experimentation, Houghton Mifflin, Boston, MA, 1979.
[12] W.H. DeLone, E.R. McLean, Information systems success: the quest for the dependent variable, Information Systems Research 3 (1), 1992, pp. 60–95.
[13] C. Fornell, D.F. Larcker, Evaluating structural equation models with unobservable variables and measurement error, Journal of Marketing Research 18 (1), 1981, pp. 39–50.
[14] E.E. Ghiselli, J.P. Campbell, S. Zedeck, Measurement Theory for the Behavioral Sciences, Freeman, San Francisco, CA, 1981.
[15] R.L. Glass, Evolving a new theory of project success, Communications of the ACM 42 (11), 1999, pp. 17–19.
[16] J.H. Greenhaus, S. Parasuraman, W.M. Wormley, Race effects of organizational experience, job performance evaluation, and career outcomes, Academy of Management Journal 33 (1), 1990, pp. 64–96.
[17] R.L. Heneman, D.B. Greenberger, C. Anonyuo, Attributions and exchanges: the effects of interpersonal factors on the diagnosis of employee performance, Academy of Management Journal 32, 1989, pp. 466–476.
[18] M. Igbaria, J.J. Baroudi, The impact of job performance evaluations on career advancement prospects: an examination of gender differences in the IS workplace, MIS Quarterly 19 (1), 1995, pp. 107–123.
[19] J.J. Jiang, G. Klein, User perceptions of evaluation criteria for three system types, Data Base for Advances in Information Systems 27 (3), 1996, pp. 49–53.
[20] W.J. Kettinger, C.C. Lee, Perceived service quality and user satisfaction with the information service function, Decision Sciences 25 (5), 1994, pp. 737–766.
[21] W.J. Kettinger, C.C. Lee, S. Lee, Global measures of information service quality: a cross-national study, Decision Sciences 26 (5), 1995, pp. 569–588.
[22] G. Klein, J.J. Jiang, M. Sobol, A new view of IS personnel evaluation, Communications of the ACM, in press.
[23] K.R. Linberg, Software developer perceptions about software project failure: a case study, Journal of Systems and Software 49, 1999, pp. 177–192.
[24] M. London, J.W. Smither, Can multi-source feedback change perceptions of goal accomplishment, self-evaluation and performance-related outcomes? Theory-based applications and directions for research, Personnel Psychology 48 (4), 1995, pp. 803–839.
[25] A. Parasuraman, V.A. Zeithaml, L.L. Berry, Refinement and reassessment of the SERVQUAL scale, Journal of Retailing 70 (3), 1991, pp. 201–229.
[26] L.F. Pitt, R.T. Watson, C.B. Kavan, Service quality: a measure of information systems effectiveness, MIS Quarterly 19 (2), 1995, pp. 173–187.
[27] M. Ross, G.J.O. Fletcher, Attribution and social perception, in: G. Lindzey, E. Aronson (Eds.), Handbook of Social Psychology, Random House, New York, NY, 1985.
[28] H.R. Schiffmann, Sensation and Perception: An Integrated Approach, 3rd Edition, Wiley, New York, NY, 1990.
[29] T.K. Srull, R.S. Wyer, Not just another end-user liaison, Computerworld 22 (12), 1988, pp. 95–97.
[30] W.W. Tornow, Perceptions or reality: is multi-perspective measurement a means or an end? Human Resource Management 32, 1993, pp. 221–230.
[31] J. Touliatos, A.G. Bedeian, K.W. Mossholder, A.I. Barkman, Job-related perceptions of male and female government, industrial, and public accountants, Social Behavior and Personality 12, 1984, pp. 61–68.
[32] R.T. Watson, L.F. Pitt, C.B. Kavan, Measuring information systems service quality: lessons from two longitudinal case studies, MIS Quarterly 22 (1), 1998, pp. 61–79.
[33] R.W. Zmud, Design alternatives for organizing information systems activities, MIS Quarterly 8 (2), 1984, pp. 79–93.

James J. Jiang is a Professor of Management Information Systems at the University of Central Florida. He obtained his PhD in Information Systems at the University of Cincinnati. His research interests include IS project management and IS personnel management. He has published over 70 refereed papers in journals such as IEEE Transactions on Systems, Man, and Cybernetics, Decision Support Systems, IEEE Transactions on Engineering Management, Decision Sciences, Journal of Management Information Systems (JMIS), Communications of the ACM, Information & Management, Journal of Systems and Software, Data Base, and Project Management Journal. He is a member of IEEE, ACM, AIS, and DSI.
Gary Klein is the Couger Professor of Information Systems at the University of Colorado in Colorado Springs. He obtained his PhD in Management Science from Purdue University. Before that time, he served with Arthur Andersen in Kansas City and was the Director of the Information Systems Department for a regional financial institution. His research interests include project management, system development, and mathematical modeling, with over 70 academic publications in these areas. He teaches programming and knowledge management courses. In addition to being an active participant in international conferences, he has made professional presentations on decision support systems in the US and Japan, where he once served as a Guest Professor at Kwansei Gakuin University. He is a member of IEEE, ACM, the Society of Competitive Intelligence Professionals, the Decision Sciences Institute, and the Project Management Institute.
Jinsheng Roan is an Associate Professor of Information Management at the National Chung Cheng University. He received his PhD in Management Information Systems from Purdue University. His research interests include IS development and implementation, accounting information systems, distributed systems, and intelligent agents. He has published papers in Contemporary Accounting Research, IEEE Transactions on Engineering Management, Communications of the ICISA, Soochow Journal of Economics and Business, and International Journal of Electronic Commerce. He is a member of The Chinese Association of Information Management.
Jim T.M. Lin is the Chairman of the Department of Information Management at the National Central University in Taiwan. He obtained his PhD in Information Management at the University of Western Ontario in Canada. He has been invited to teach the international MIS course at many universities in Taiwan, such as National Jiao-tong, Central Police, Open, and Chinese Cultural University. He has written four books and published extensively in Taiwan's journals, newspapers, and magazines. He has also been invited by the government and many local businesses to provide advice on e-government and e-commerce.