Risks to different aspects of system success

Information & Management 36 (1999) 263–272

James J. Jiang a,1, Gary Klein b,*

a Department of Computer Information Systems & Analysis, College of Administration and Business, Louisiana Tech University, Ruston, LA 71272, USA
b College of Business and Administration, The University of Colorado at Colorado Springs, 1420 Austin Bluffs Parkway, P.O. Box 7150, Colorado Springs, CO 80933-7150, USA

Received 8 December 1998; accepted 21 April 1999

* Corresponding author. Tel.: +719-262-3157; fax: +719-262-3494. E-mail addresses: [email protected] (J.J. Jiang), [email protected] (G. Klein).
1 Tel.: +318-257-3445; fax: +318-257-4253.

Abstract

System success is related to many risks associated with information system (IS) development. A common view treats all risks as contributors to a single aspect of success; this view permits controlled research of development complexity. However, others consider system success to be multidimensional, with each dimension possibly affected differently by the various risks. In a survey of 86 IS project managers, the relationship between risk and success is explored. Four IS success measures were found to relate to different risk factors. Project managers can view achievement of a particular aspect of success as requiring control of certain risks. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: System success; Development risks; Project management

1. Introduction

Information system (IS) development projects have been plagued by budget overruns and unmet user requirements [4,25,30]. Much research effort has focused on identifying the potential risk factors that threaten successful project development. Identified project development risks include nonexistent or unwilling users, multiple users or implementers, turnover, inability to specify purpose or usage, inability to cushion the impact, lack of management support, lack of user experience, technical complexity, and cost-effectiveness problems [1,7,18,31]. Unfortunately,


resources tend to be limited in a systems project. Controlling every identified risk factor would be costly and perhaps unmanageable, and complete control may not be necessary in all settings. System success is a multidimensional trait that is not properly described by a single measure [10]. If only the more important dimensions of success need to be targeted, or if the risk-controlling effort can be distributed among the team members, then the overall effort expended may be reduced. To target attention to the different components of success, the risk factors that contribute to each dimension of system success must be identified. Before one can determine actions to take in response to potential project risks, the effects of the various risks on project outcomes need to be assessed. Unfortunately, the relationship between the various project risk variables and project outcomes has


received limited attention in the literature, largely because the early IS literature lacked accepted means of assessing both project risks and system success [4]. Recently, Barki et al. [4] proposed an initial measure for a project risk construct, and Saarinen [28] proposed a multidimensional system success measure. With these constructs, this study explored the link between the various project risks and the dimensions of system success. Establishing such links is important because different risk variables may be differentially significant to project outcomes [4]. If the links exist, the risk variables influencing each project outcome can be identified, making knowledge of project risks more actionable.

2. Background

Even with widespread use of such advanced tools and concepts as prototyping, data modeling, structured design, and computer-assisted software engineering (CASE), software development projects still suffer an alarmingly high failure rate [23]. Billions of dollars are lost on canceled projects, late delivery, over-budget delivery, and limited functionality. The Standish Group's survey showed that 52.7% of software projects miss their budgeted times and financial targets, 31.1% of all projects are canceled, and only 16.2% of projects are completed on time and within budget [16]. Estimates of the failure rate in large-scale software development since the early 1980s run as high as 85% [2]. To avoid this high rate of system failure, IS researchers have attempted to identify factors threatening successful IS project development. Zmud [31] showed that project size, technological change, novelty of the application area, and personnel change could significantly influence IS development success. Anderson [3] stated that lack of team expertise on the application and task, unwilling users, lack of top management support, and conflicting preferences between users and IS developers could determine the outcomes of project development efforts. Boehm [6] recommended that IS project managers avoid the following situations in software development: unrealistic schedules and budgets, incorrect user interfaces, wrong software functions, a continual stream of requirement changes, and personnel

shortfalls. More recently, Jiang et al. [18] identified a set of critical success factors for system development, including clearly defined goals, top management support, sufficient resources, competent team members, and adequate communication. A major step in unifying these disparate concepts in the IS literature was a proposed multidimensional measure of risk factors [4].

Meanwhile, researchers had developed a large number of system success criteria. Many had been empirically tested, including system quality [12], user information satisfaction (UIS) [11], quality of decision making [8], IS usage [27], and productivity from a cost/benefit standpoint [24]. User perceptions had become particularly prominent within the IS literature [8,13]; the use of these psychometric measures was due to the difficulty of quantifying costs and benefits and linking them to particular IS innovations. Much of this work pointed to the conclusion that IS effectiveness is a multidimensional concept [10]. Saarinen [28] proposed a measure with four dimensions of system success: (1) satisfaction with the system development process, (2) satisfaction with system use, (3) satisfaction with system quality, and (4) the impact of the IS on the organization (benefits of the investment). These four dimensions allowed a more comprehensive assessment of system development outcomes.

2.1. Study procedures

Of interest to this study was whether different project risk variables had different effects on system success. For example, which project risk variable(s) were more important in influencing system quality (or system use)? Despite the importance of these questions, the link between project risks and system success had received only limited attention in the literature [4], perhaps because sound project risk and system success measures were lacking in the early IS literature. With such measures now available, we related the various project risk variables to system success and explored whether certain risk variables were more significant than others in influencing


the outcome of system development efforts. Specifically, we looked at the following questions:


1. Which risk variables are more important in achieving overall success?
2. Which risk variables are more important in influencing satisfaction with the system development process?
3. Which risk variables are more important in influencing satisfaction with system use?
4. Which risk variables are more important in influencing satisfaction with system quality?
5. Which risk variables are more important in influencing system impacts on the organization?

2.2. Sample

Questionnaires were sent via US mail to 500 IS project managers in the US. The mailing list was created by randomly selecting current members of the Project Management Institute (PMI). The PMI was selected because its members generally practice in the field, have expertise in project management, represent a wide variety of organizational settings, and have been widely used in project management research. Returns were made via self-addressed return envelopes, and subjects were assured that responses would remain confidential. A total of 86 questionnaires were returned, for a response rate of 18%, which is consistent with other mail surveys. Table 1 summarizes the demographic characteristics of the sample; they were similar to those of other studies involving PMI membership [20].

Table 1. Sample demographics (86 responses in total; categories not summing to 86 had omitted responses)

Position: IS manager, 20; IS project leader, 44; other, 18
Work experience: under 11 years, 17; 11–15 years, 20; 16–20 years, 20; 21 years or above, 25
Gender: male, 65; female, 18
Age: 25–30 years old, 5; 31–40, 31; 41–50, 29; 51 or above, 14
Organization size (number of employees): under 100, 7; 100–1000, 19; 1000–10,000, 38; 10,000 or more, 20
Number of application areas with experience: 1–4 areas, 40; 5–8 areas, 27; 9 areas or more, 17
Average number of members in IS projects: 2–5 members, 14; 6–10 members, 22; 11–20 members, 27; 21–25 members, 18

2.3. Constructs

The instrument used to measure the project development risks was adopted from Barki et al. [4]. The questionnaire asked respondents about problems related to each risk item in their most recently completed IS project. Each risk was rated on a scale ranging from 'not at all a risk' (1) to 'extreme risk' (7). Scores were computed as averages of the retained items, and all risks were anchored so that a greater score represented a greater risk.
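As a concrete illustration of this scoring scheme, the sketch below averages the retained items of one risk construct for each respondent. It is a minimal example, assuming hypothetical column names for the 7-point items; the paper does not publish its data file.

```python
import pandas as pd

# Hypothetical responses to the two retained items of the "resources
# insufficient" construct, each rated from 1 ('not at all a risk') to
# 7 ('extreme risk'); column names are illustrative, not from the paper.
responses = pd.DataFrame({
    "insufficient_hours":   [2, 6, 4],
    "insufficient_dollars": [3, 7, 4],
})

# Construct score = mean of the retained items, so a greater score
# represents a greater risk, as in the paper.
responses["resources_insufficient"] = responses.mean(axis=1)
print(responses["resources_insufficient"])  # 2.5, 6.5, 4.0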

Project risk scales were examined with factor analysis, one of the more powerful methods for testing construct validity [19]. If items load onto factors in accordance with a priori expectations, then significant aspects of construct validity have been assessed [26]. A principal component analysis was first conducted; a 12-factor solution emerged that accounted for 76% of the variance. While there is no generally accepted standard for the significance of factor loadings, a cutoff of 0.50 was chosen, provided the items did not also load greater than 0.40 on other factors [19]. The results were consistent with the Barki et al. [4] framework, except that two factors (intensity of conflicts and extent of changes brought by the system) did not have uniquely loading items and were not considered further. Cronbach's alpha [9] was calculated to assess measurement reliability. Table 2 lists the items retained for measuring each construct along with each construct's alpha; all alphas were greater than the recommended 0.70 level, implying an adequate level of internal consistency.
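The retention rule and reliability check just described can be sketched as follows. This is an illustration under stated assumptions, not the authors' code: the loadings matrix is assumed to come from a prior principal component analysis (e.g., sklearn or the factor_analyzer package), with rows indexed by item and columns by factor.

```python
import pandas as pd

def retain_items(loadings: pd.DataFrame, primary=0.50, cross=0.40):
    """Keep an item when it loads at least `primary` on one factor and
    below `cross` on every other factor (the retention rule above)."""
    retained = {}
    for item, row in loadings.iterrows():
        top = row.abs().idxmax()  # factor with the largest loading
        if abs(row[top]) >= primary and (row.drop(top).abs() < cross).all():
            retained.setdefault(top, []).append(item)
    return retained

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha [9]: k/(k-1) * (1 - sum of item variances /
    variance of the summed scale), for one construct's item columns."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))
```

Applied to each construct in Table 2, `cronbach_alpha` would reproduce the reported alphas from the raw item responses.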


Table 2. Project risk items

Technological newness (Cronbach's alpha = 0.75): amount of new hardware required; amount of new software required; large number of hardware vendors; large number of software vendors.

Project size (alpha = 0.80): large number of different 'stakeholders' on the team; large number of users; large number of hierarchical levels occupied by system users.

Lack of team's general expertise (alpha = 0.84): ability to work with management; ability to work well as a team; ability to effectively carry out tasks.

Lack of team's expertise with the task (alpha = 0.87): in-depth knowledge of the user department; knowledge of operations; overall administrative skill; expertise in the application area; familiarity with the application.

Lack of team's development expertise (alpha = 0.83): development methodology; development support tools; project management tools; implementation tools.

Lack of user support (alpha = 0.90): users do not think the system meets their needs; users are not enthusiastic about the project; users are not available; users are not ready to accept the changes; users respond slowly to development team requests; users do not actively participate in requirement specification.

Resources insufficient (alpha = 0.78): insufficient person hours are budgeted; insufficient dollars are budgeted.

Lack of clarity of role definitions (alpha = 0.85): role of each team member is not clearly defined; role of each person involved in the project is not clearly defined; communications between project stakeholders are unpleasant.

Application complexity (alpha = 0.83): large number of links to existing systems; large number of links to future systems.

Lack of user experience (alpha = 0.89): users are not familiar with system development; users have little experience with the activities to be supported; users are not familiar with this application; users are not aware of the importance of their roles; users are not familiar with information systems as a tool.

Table 3. Split-half reliabilities for project risks

Technological newness: 0.64
Project size: 0.90
Lack of team's general expertise: 0.92
Lack of team's expertise with task: 0.86
Lack of team's development expertise: 0.81
Lack of user support: 0.89
Resources insufficient: 0.79
Lack of clarity of role definitions: 0.92
Application complexity: 0.83
Lack of user experience: 0.82

To further assess the questionnaire's reliability, split-half reliability measures were calculated. If the questionnaire is a reliable instrument, the two halves of each factor should correlate highly, since the questions are presumed to measure the same attribute. Gulliksen's [14] formula was used to calculate the split-half reliabilities (see Table 3). All reliability measures were significantly different from 0 at the p < 0.01 level. Thus, the project risk items had acceptable reliability and validity for testing the research questions.

Saarinen [28] provided four metrics of system success: (1) satisfaction with the development process, (2) satisfaction with system use, (3) satisfaction with the quality of the IS product, and (4) impact of the IS on the organization. Our questionnaire adopted the metrics from the original source, with the questions worded to ask about satisfaction in the most recently completed IS project. Each item was scored on a scale ranging from 'not satisfied at all' (1) to 'extremely satisfied' (7), presented so that the greater the score, the greater the satisfaction with the particular item. To examine the validity of the system success measures, we conducted a principal components analysis. The analysis produced four factors with eigenvalues greater than 1.0 that accounted for 72% of the total variance. The four factors were retained for a factor analysis with varimax rotation and were examined against the four dimensions of system success proposed by Saarinen [28]. The results matched the expected structure well.
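For the split-half reliabilities reported in Table 3 (and later in Table 5), a minimal computational sketch follows. Gulliksen [14] presents several split-half formulas and the paper does not say which variant was applied, so the familiar Spearman-Brown corrected half-test correlation used here is an illustrative stand-in; the DataFrame of item responses is assumed.

```python
import numpy as np
import pandas as pd

def split_half_reliability(items: pd.DataFrame) -> float:
    """Split a construct's items into odd/even halves, correlate the
    half scores, and step the correlation up with the Spearman-Brown
    correction, r_full = 2r / (1 + r). One of several split-half
    formulas in Gulliksen's Theory of Mental Tests; treat this as an
    illustrative stand-in for the exact variant used in the paper."""
    half_a = items.iloc[:, 0::2].sum(axis=1)  # odd-numbered items
    half_b = items.iloc[:, 1::2].sum(axis=1)  # even-numbered items
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * r / (1 + r)
```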


The criteria used to identify, distinguish, and interpret factors were that a given item should load 0.50 or higher on a specific factor and 0.40 or lower on the other factors.

Table 4. Items in the system success measures (a retained subset of those in [28])

Satisfaction with the development process (Cronbach's alpha = 0.84): IS staff's commitment; users' commitment; requirement specifications; analysis and design; technical implementation; meeting the budget; overall satisfaction with the development process.

Satisfaction with system use (alpha = 0.92): user training; user participation during system support functions; IS staff's communication with users; responses to changes; responses to new requirements; overall satisfaction with the system use stage.

Satisfaction with the system quality (alpha = 0.97): system performance; user interface; response time; user friendliness; ease of use; precision of the output information; accuracy of the output information; reliability of the output information; relevance of the output contents; completeness of the output contents; timeliness of the output contents; currency of the output contents; format of the output information; clarity of the output information; overall quality of the IS product.

Impact of the system on the organization (alpha = 0.95): extent of use; improvements to operations; work process; price/performance gains; profitability; cost saving; decision-making enhancement; effectiveness; control of the decision-making process; organizational structure; internal communication; interorganizational communication; effects on data processing; overall impact of the system on the organization.

Table 5. Split-half reliabilities for system success

System development: 0.91
System use: 0.89
System quality: 0.93
System impacts: 0.70

The items retained appear in Table 4. The Cronbach alpha values for satisfaction with the system development process, satisfaction with system use, satisfaction with the system quality, and system impact on the organization were 0.84, 0.92, 0.97, and 0.95, respectively. An overall measure of success was taken as the average of all retained items. Split-half reliability measures for these constructs were also computed (Table 5); all were significantly different from 0 at the p < 0.01 level.

3. Results

The sample population added to the credibility of the study: over 80% of the respondents held managerial IS positions, over 80% had more than 10 years of IS work experience, and about 90% worked in organizations with 100 employees or more. To address concerns about bias introduced by the sample, each demographic variable was run as the independent variable in a separate MANOVA with the system success measures as the dependent variables; a significant relationship would indicate a possible bias. No demographic variable was found to have a significant relation to any system success measure, leading us to believe the sample introduced no bias into the study.

To explore the research questions, multiple regression analyses were applied to investigate the relationship between the success measures and the 10 project risks. The first regression involved overall success (see Table 6). The R² indicated that 32% of the variance in overall system success was explained by the regression model, a high level for a model assessing subjective perceptions of behavioral constructs. The overall p-value (0.00) indicated a significant relationship between the project risk variables and overall system success.
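The MANOVA bias check described above can be run with statsmodels; a minimal sketch follows, in which the DataFrame and all column names (dev_process, system_use, system_quality, org_impact, and the demographic column) are hypothetical stand-ins for the survey data.

```python
from statsmodels.multivariate.manova import MANOVA

# One MANOVA per demographic variable: the demographic is the
# independent variable and the four success measures are the joint
# dependent variables. A significant multivariate test would flag a
# possible sample bias. Column names here are hypothetical.
def demographic_bias_test(df, demographic: str):
    model = MANOVA.from_formula(
        "dev_process + system_use + system_quality + org_impact"
        f" ~ C({demographic})", data=df)
    return model.mv_test()

# e.g. print(demographic_bias_test(survey, "position"))
```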


Table 6. Overall system success: results of multiple regression analysis (dependent variable: overall system success)

Independent variable                      t-value   p-value
Technological newness                     -0.46     0.65
Project size                              -0.09     0.93
Lack of team's general expertise          -1.11     0.27
Lack of team's expertise with the task     0.07     0.95
Lack of team's development expertise       1.08     0.28
Lack of user support                      -0.16     0.87
Resources insufficient                    -1.12     0.26
Lack of clarity of role definitions       -2.14     0.03*
Application complexity                    -2.00     0.05*
Lack of user experience                   -2.02     0.04*

Model: F-value = 3.04; Prob. > F = 0.00*; R² = 0.31
* Significant at the p < 0.05 level.
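For readers reproducing an analysis like Table 6, a sketch of the regression setup follows, using statsmodels OLS with the 10 risk construct scores as predictors. The variable names are illustrative placeholders, not from the paper.

```python
import statsmodels.formula.api as smf

# Illustrative names for the 10 risk construct scores.
RISKS = ["tech_newness", "project_size", "general_expertise",
         "task_expertise", "dev_expertise", "user_support",
         "resources", "role_clarity", "app_complexity",
         "user_experience"]

# Regress a success score on the 10 risk scores; the same form is
# refit with each of the four specific success measures as the
# dependent variable to obtain results like Table 7.
def risk_regression(df, success: str):
    model = smf.ols(f"{success} ~ {' + '.join(RISKS)}", data=df).fit()
    return model  # model.tvalues and model.pvalues mirror Table 6

# e.g. print(risk_regression(survey, "overall_success").summary())
```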

The p-values of the individual project risk variables indicated that lack of clarity of role definitions among team members (p = 0.03), application complexity (p = 0.05), and lack of user experience with applications (p = 0.04) were the most significantly related to overall system success. These results support the notion that certain project risk variables are more important than others in influencing system success.

To address the remaining questions, four independent multiple regression analyses were conducted, with satisfaction with the system development process, satisfaction with system use, satisfaction with system quality, and system impact on the organization as the respective dependent variables and the project risk variables as the independent variables. The results are shown in Table 7. Lack of the team's general expertise, application complexity, and lack of user experience were the most important factors for satisfaction with the system development process. Technological newness was the single most important factor in determining satisfaction with system quality. Lack of user support was most important to the impact a system had on an organization. Finally, user experience with the application and clarity of role definitions significantly influenced satisfaction with system use.

4. Conclusions

Based on the analysis of the 10 project risk variables against overall system success, we found that the

various project risk variables are not equally important in influencing system success. While lacking specific prior empirical evidence for answering the above questions, we expected that different risk variables might be differentially important in influencing system success, and that expectation held. According to the literature, user involvement is probably the most important predictor of system success [17]. User involvement is often positively related to user familiarity with the type of application and to user awareness of the importance of their roles in successfully completing the project [21]. We therefore expected that lack of user support and the associated lack of user experience with applications would be more important in influencing overall system success than the other project risk variables. That only one of the two proved significant may be due to the emphasis on non-user-related items in three of the four satisfaction measures, or to having only IS personnel in the sample. Application complexity, clarity of role definition, and user experience with applications were more important in determining overall system success than the other risk factors. This and the other relations are summarized in Table 8.

Which project risk variables are the most influential for satisfaction with the system development process? A successful system development process meets the project schedule and budget, has effective communication between users and the project development staff, and results in a successful system analysis and design [28]. The survey revealed that application complexity, lack of user experience, and a general lack of team


Table 7. Specific system success: multiple regression analysis results

Dependent variable: satisfaction with the development process
Independent variable                      t-value   p-value
Technological newness                      0.00     0.99
Project size                               0.18     0.91
Lack of team's general expertise          -2.00     0.05*
Lack of team's expertise with the task    -0.84     0.41
Lack of team's development expertise       1.34     0.18
Lack of user support                       0.89     0.37
Resources insufficient                    -1.70     0.08
Lack of clarity of role definitions       -1.79     0.07
Application complexity                    -3.56     0.00*
Lack of user experience                   -3.29     0.00*
Model: F-value = 3.39; Prob. > F = 0.00; R² = 0.40

Dependent variable: satisfaction with system use
Independent variable                      t-value   p-value
Technological newness                     -0.48     0.64
Project size                              -0.36     0.72
Lack of team's general expertise          -1.18     0.24
Lack of team's expertise with the task     0.63     0.53
Lack of team's development expertise      -1.65     0.09
Lack of user support                      -0.27     0.79
Resources insufficient                    -1.06     0.29
Lack of clarity of role definitions       -2.20     0.03*
Application complexity                     0.65     0.52
Lack of user experience                   -2.00     0.05*
Model: F-value = 2.5; p-value = 0.01; R² = 0.33

Dependent variable: satisfaction with system quality
Independent variable                      t-value   p-value
Technological newness                     -2.01     0.04*
Project size                               0.31     0.76
Lack of team's general expertise          -1.50     0.14
Lack of team's expertise with the task    -0.12     0.91
Lack of team's development expertise       0.54     0.31
Lack of user support                      -0.58     0.57
Resources insufficient                    -0.05     0.96
Lack of clarity of role definitions       -1.3      0.19
Application complexity                    -0.02     0.98
Lack of user experience                   -0.50     0.62
Model: F-value = 20.3; Prob. > F = 0.02; R² = 0.30

Dependent variable: impact on the organization
Independent variable                      t-value   p-value
Technological newness                     -0.87     0.39
Project size                               0.66     0.51
Lack of team's general expertise           0.02     0.98
Lack of team's expertise with the task     0.38     0.70
Lack of team's development expertise       0.30     0.76
Lack of user support                      -2.75     0.01*
Resources insufficient                     0.52     0.61
Lack of clarity of role definitions       -1.07     0.28
Application complexity                     1.19     0.23
Lack of user experience                   -0.93     0.35
Model: F-value = 1.70; Prob. > F = 0.09; R² = 0.25

* Significant at the p < 0.05 level.

expertise were perceived to be the more critical risks. This is consistent with the literature, which stresses the assurance of resources sufficient to meet the project schedule and budget [18], the sufficiency of users'

experience with applications for successfully specifying user requirements [17], and effective communication channels in the system development process [29]. Users' lack of experience creates great difficulty in determining


Table 8. Summary of significant relationships between risk factors and success measures

Overall success: lack of role clarity; application complexity; lack of user experience
Development process satisfaction: lack of team's general expertise; application complexity; lack of user experience
System use satisfaction: lack of role clarity; lack of user experience
System quality satisfaction: technological newness
Organizational impact: lack of user support
(Project size, lack of team's expertise with the task, lack of team's development expertise, and insufficient resources were not significant for any measure.)

system requirements. Complexity must be handled during development to ensure a viable system; otherwise, no product will survive to be delivered to the user.

Which project risk variables are most important in influencing satisfaction with system use? The results indicated that satisfaction with system use was most significantly affected by user experience with the system and by the clarity of the team members' role definitions. This finding was not surprising. Researchers have argued that user experience and familiarity with the application increase user involvement and user control [21], which in turn lead to a higher level of user satisfaction with the service provided by the system [5]. The evaluation of system use is based on outcomes of the IS services provided to the users. Using an IS service effectively requires that the users have appropriate knowledge of the application and that the project team understands the user requirements. IS management should be aware of the extent of experience the users have with the proposed new applications; the effect of user training on effective system use cannot be overemphasized.

Which project risk variable was most important in influencing satisfaction with system quality? The results indicated that technological newness was the most important factor in determining the quality of the system. System quality, covering such attributes as performance, flexibility of changes, response time, and ease of use, is a technical issue. This result confirmed the conventional wisdom that the pursuit of state-of-the-art technology


is a risky proposition. Different aspects of system quality, such as response time, ease of use, system reliability, and flexibility, have been examined by IS researchers [12]. Most of these measures are fairly straightforward, reflecting the more engineering-oriented (technical) performance characteristics of the system, and researchers have found such performance measures to be significantly related to the technical issues of the proposed projects [15].

Which project risk variable is most important in influencing system impacts on organizations? The analysis suggested that the extent of user support was the most significant factor influencing the impact of a system on an organization. This finding has an important implication for management: to achieve the desired levels of organizational productivity and effectiveness through system automation, management must ensure that users have a positive attitude toward the new system and are ready to accept the changes the new system will entail. User support has always been proclaimed a major factor in achieving success, and it seems to manifest itself most during the productive life of the software product. Researchers have also argued that user acceptance is the most important factor in determining the extent to which such changes can be implemented [22]. A lack of user support often indicates that users are not enthusiastic about the project, are not ready to accept the changes the new system will entail, and may have a negative opinion about whether the system will succeed in meeting their needs.


The results of this study point out how critical the involvement of users is to more than one form of satisfaction. The experience risks dictate the importance of good staffing and team selection, and technology is important to the final perceived quality of the system. Yet the measures are clearly segmented, which allows the project manager to target various risks with the proper personnel or training rather than concentrating effort on all risks at all times. One limitation of this study is the sampling of strictly information system professionals; their view of the problems caused by the various risks may be biased against the users, limiting the qualitative conclusions that can be drawn. Studies of other stakeholder groups will be important in piecing together the entire risk picture. Still, the data indicated strongly that the different aspects of success were not impacted by all components of risk. The ability to segment the risk categories allows targeting resources and attention to mitigating the areas of highest concern or difficulty.

References

[1] S. Alter, Implementation risk analysis, TIMS Studies in Management Sciences 13(2) (1979), pp. 103–119.
[2] S. Ambler, Comprehensive approach cuts project failure, Computing Canada 25(1) (1999), pp. 15–16.
[3] J. Anderson, R. Narasimhan, Assessing implementation risk: a methodological approach, Management Science 25(6) (1979), pp. 512–521.
[4] H. Barki, S. Rivard, J. Talbot, Toward an assessment of software development risk, Journal of MIS 10(2) (1993), pp. 203–225.
[5] J.J. Baroudi, M. Olson, B. Ives, An empirical study of the impact of user involvement on system usage and information satisfaction, Communications of the ACM 29(3) (1986), pp. 232–238.
[6] B.W. Boehm, Software Risk Management, IEEE Computer Society Press, Los Alamitos, CA, 1989.
[7] R. Cafasso, How to control risk and effectively reduce the chance of failure, Management Review 73(6) (1984), pp. 50–54.
[8] W.L. Cats-Baril, G.P. Huber, Decision support systems for ill-structured problems: an empirical study, Decision Sciences 18(3) (1987), pp. 350–372.
[9] L.J. Cronbach, Coefficient alpha and the internal structure of tests, Psychometrika 16 (1951), pp. 297–334.
[10] W.H. DeLone, E.R. McLean, Information systems success: the quest for the dependent variable, Information Systems Research 3(1) (1992), pp. 60–95.


[11] W.J. Doll, W. Xia, G. Torkzadeh, A confirmatory factor analysis of the end-user computing satisfaction instrument, MIS Quarterly 18(4) (1994), pp. 453–461.
[12] C.R. Franz, D. Robey, Organizational context, user involvement, and the usefulness of information systems, Decision Sciences 17(3) (1986), pp. 329–356.
[13] D.F. Galletta, A.L. Lederer, Some cautions on the measurement of user information satisfaction, Decision Sciences 20(3) (1989), pp. 419–438.
[14] H. Gulliksen, Theory of Mental Tests, Wiley, New York, NY, 1950.
[15] S. Hamilton, N.L. Chervany, Evaluating information system effectiveness - Part I: comparing evaluation approaches, MIS Quarterly 5(3) (1981).
[16] F. Hayes, Managing user expectations, Computerworld 31(4) (1997), pp. 8–9.
[17] B. Ives, M. Olson, User involvement and MIS success: a review of research, Management Science 30(5) (1984), pp. 586–603.
[18] J.J. Jiang, G. Klein, J. Balloun, Ranking of system implementation success factors, Project Management Journal 27(4) (1996), pp. 50–55.
[19] F.N. Kerlinger, Foundations of Behavioral Research, Holt, Rinehart and Winston, New York, NY, 1986.
[20] E. Larson, Partnering on construction projects: a study of the relationship between partnering activities and project success, IEEE Transactions on Engineering Management 44(2) (1997), pp. 188–195.
[21] M. Lawrence, G. Low, Exploring individual user satisfaction within user-led development, MIS Quarterly 17(2) (1993), pp. 195–208.
[22] M.L. Markus, Power, politics, and MIS implementation, Communications of the ACM 26(6) (1983), pp. 430–444.
[23] R.L. Meyer, Avoiding the risk in large software system acquisitions, Information Strategy 14(4) (1998), pp. 18–33.
[24] A. Money, D. Tromp, T. Wegner, The quantification of decision support benefits within the context of value analysis, MIS Quarterly 12(2) (1988), pp. 223–236.
[25] S. Nidumolu, The effect of coordination and uncertainty on software project performance: residual performance risk as an intervening variable, Information Systems Research 6(3) (1995), pp. 191–219.
[26] J.C. Nunnally, Psychometric Theory, 2nd edn., McGraw-Hill, New York, NY, 1978.
[27] L. Raymond, Organizational characteristics and MIS success in the context of small business, MIS Quarterly 9(1) (1985), pp. 37–52.
[28] T. Saarinen, An expanded instrument for evaluating information system success, Information & Management 31 (1996), pp. 103–118.
[29] H.J. Thamhain, D.L. Wilemon, Building high performance engineering project teams, IEEE Transactions on Engineering Management EM-34(3) (1987), pp. 130–137.
[30] P. Weill, M. Broadbent, Leveraging the New Infrastructure: How Market Leaders Capitalize on Information Technology, Harvard Business School Press, Boston, MA, 1998.
[31] R.W. Zmud, Management of large software development efforts, MIS Quarterly 4(2) (1980), pp. 45–55.


Dr. James J. Jiang holds the Max Watson Professorship of Computer Information Systems at Louisiana Tech University, Ruston, LA. He obtained his Ph.D. in Information Systems at the University of Cincinnati in 1992. His current research interests include software project management and the IT infrastructure of knowledge-based organizations. He has published over 50 academic articles in these and other areas in IEEE Transactions on Systems, Man, and Cybernetics, IEEE Transactions on Engineering Management, Decision Support Systems, Communications of the ACM, Information & Management, and other outlets. He is a member of IEEE, ACM, DSI, and AIS.

Dr. Gary Klein is the Couger Professor of Information Systems at the University of Colorado in Colorado Springs. He obtained his Ph.D. in Management Science at Purdue University. Before that, he served with Arthur Andersen & Company in Kansas City and was director of the information systems department of a regional financial institution. His interests include information system development, knowledge management, and mathematical modeling, with over 50 publications in these areas. In addition to being an active participant in international conferences, he has made professional presentations on decision support systems in the US and Japan, where he once served as a guest professor at Kwansei Gakuin University.