Standardization, requirements uncertainty and software project performance


Information & Management 31 (1996) 135-150

Research

Sarma R. Nidumolu

Department of Management Information Systems, Karl Eller Graduate School of Business and Public Administration, University of Arizona, Tucson, AZ 85721, USA

Abstract

A risk-based model of software project management was developed to explain the effect of software development standards on software process and product performance in the presence of uncertainty in requirements. Residual performance risk, defined as the extent of difficulty in estimating performance outcomes during the later stages of the project, was modeled as a mediating variable. Based on prior theory, six hypotheses were derived and empirically tested using a survey design. Data from 64 projects suggested support for the model in general, and for five of the six hypotheses in particular. Increases in standardization were directly associated with decreases in residual performance risk, which in turn led to increases in both process and product performance.

Keywords: Performance risk; Process performance; Product performance; Requirements uncertainty; Software development standards

1. Introduction

In spite of thirty years of experience in managing software development projects, cost and schedule overruns continue to plague many organizations. One key problem in completing projects on time and within budget is the uncertainty associated with software development, broadly defined as the absence of complete information about the organizational phenomenon being studied [2]. In particular, this study focuses on uncertainty regarding user requirements, because requirements analysis is the most important of all phases [44] and has the greatest impact on future phases [39].

In the Information Systems (IS) field, the use of software development standards has long been recommended as a way to provide information to developers and to improve software development performance (e.g. [13, 16, 21, 41, 42]). Standardization refers to the use of methodologies, tools, and techniques specified above the level of individual projects [12]. The research issue can therefore be posed as follows: how do we explain the effect of requirements uncertainty and standardization on the performance of software development projects? This study draws on an important recent stream of research in software engineering and IS that emphasizes the risks of software development (e.g. [7, 26, 27]). Such risk-based approaches suggest that the primary mechanism affecting the performance of software projects is their performance risk, which is the difficulty in estimating performance outcomes, such as costs or schedules. The purpose of this study is, therefore, to develop and test a theoretical model that describes the mediating role of performance risk in explaining the effects of requirements uncertainty and standardization on performance.

2. Theory

The central focus of this study is the software development project, i.e. a temporary organization which "produce(s) software and documentation in exchange for claims on organizational resources" [5]. While risk-based approaches provide a rich source of insights into the role of risk, there is a need for a theory that relates project risk, management practices and project performance. To undertake such theory development, it is important that the theoretical constructs be clearly defined.

2.1. Theoretical constructs

2.1.1. Project performance

This study considers the performance of both the software development process and the product (i.e. the software and documentation) developed at the conclusion of the project. It is important to measure both process and product outcomes, because there is a potential conflict between the efficiency of the process and its quality. For example, processes that are tightly controlled and adhere strictly to time and cost estimates may sometimes explore product functionality inadequately, thereby sacrificing the long-term flexibility of the software for short-term user needs. Process performance can be described by the following three dimensions [31]:

1. Learning. The increase in knowledge acquired during the course of the project [11].
2. Control. The degree to which the development process was managed [5, 11].
3. Quality of interactions. The quality of the relationship between IS staff and users during the development process [3, 28].

Product performance can be described by the following three dimensions [31]:

1. Operational efficiency. The technical performance of the software [29].
2. Responsiveness. How well the software meets the needs of its users.
3. Flexibility. The software's ability to support distinctly new products or functions, and its adaptability to changing business needs [29].

2.1.2. Requirements uncertainty

This concept has been widely studied by IS researchers, partly because of the importance of identifying users' requirements for software development projects. Proper management of requirements can have the single biggest impact on project performance, and frequent changes in them create major problems. Unsatisfactory requirements make it difficult to manage the software development process and to validate the software product. Often, information concerning organizational values and beliefs is difficult to elicit during requirements analysis [25]. As Zmud [44] notes: "since the requirements ultimately derive from constantly evolving organizational realities, the tasks associated with requirements analysis are characterized by high levels of uncertainty". From an information processing viewpoint, requirements uncertainty refers to the difference between the information necessary to identify user requirements and the amount of information possessed by the developers (cf. [14, 15, 40]). Three important dimensions of uncertainty can be identified [31, 32]:

1. Requirements instability. The extent of changes in user requirements over the course of the project.
2. Requirements diversity. The extent to which users differ among themselves in their requirements.
3. Requirements analyzability. The extent to which the process for converting user needs to a set of requirements specifications can be reduced to mechanical steps or objective procedures.

2.1.3. Standardization

Software developers have recently begun to experiment with process approaches above the level


of individual projects. For example, the Application Software Factory described by Swanson et al. [37] is based on the use of disciplined practices and standard methods. The SEI's capability maturity model also stresses the importance of defining a standard software process for the organization; it describes an integrated set of engineering and management processes [35]. The use and enforcement of standards has been viewed as one of the most important solutions to the problems of software development. Standardization of software development is a complex construct, because a number of its aspects could be specified above the project level. This study is restricted to the issue of standardized control, because of the importance of studying the control of software development. Organizational control depends on processes that align individual actions with organizational goals. Drawing from both organization theory and software engineering research, two dimensions of standardized control can be derived:

1. Output controls standardization. In organization theory, output control occurs in situations where subordinates are given discretion over the means they use to accomplish the work, but the targets or outcomes are established by their superiors [34, 36]. In software engineering research, the importance of standardizing software development outputs has been identified. Outcomes, such as milestones, are commonly defined as a way of identifying the various phases of development [24]. Henderson and Lee [18] describe how software development projects can be implemented by decomposing the project's goals into a sequence of milestones, which serve as a basis for output control.
2. Behavior controls standardization. In organization theory, behavior controls refer to control over the actions of subordinates and the transformation processes used to undertake work, either in the form of standard operating procedures [8] or through superiors' close monitoring of subordinates' actions. In software development projects, behavior controls are often implemented by a standardized definition of how individual software development tasks should be performed [19]. For example, the tools and techniques to be used for accomplishing development phases may be specified at a level above that of the project.

2.2. Risk-based approaches

Risk-based approaches, derived from software engineering research, suggest that an important determinant of performance is the risk of the project [4, 17]. For example, Boehm identifies two key aspects of risk management: (a) risk assessment or estimation, concerned with assessing risk sources that are likely to affect the project's outcomes, and (b) risk control, or ways in which the risks can be resolved. Gilb also emphasizes the importance of estimation as a critical aspect of risk management. In this study, risk is conceptualized as the extent of difficulty in estimating the consequences of the project, regardless of the specific estimation technique used by the project team [31, 32]. A variety of project outcomes could be difficult to estimate. This study focuses, in particular, on performance-related outcomes, because of their emphasis in the literature and because performance is the critical dependent variable. Performance consequences include the actual project cost, project completion time, system benefits, the system's compatibility with its environment, and the technical performance of the resulting system. It is important to note that performance risk changes over the life of the project: performance consequences typically become easier to estimate as the project progresses, because risk resolution methods have been applied and the requirements have become clarified. An important issue therefore arises: at what point during the project should performance risk be measured, in order to explain the effects of standardization and requirements uncertainty on project performance? If measured at the beginning of the project, performance risk could not have been affected by software development standards, because they would not yet have been applied to the project.
If measured at the end of the project, performance consequences, such as elapsed time and project costs, are known and consequently need not be estimated. Here, performance risk was measured during the later stages of the project, after project planning and requirements analysis had been completed. Sufficient time would, therefore, have elapsed for software development standards to have had some impact, but the estimation of performance consequences of the completed project would still be meaningful. The performance risk assessed during the later stages of the project is labeled residual performance risk, to distinguish it from risk measured at other times. It refers to the difficulty experienced during the later stages of the project (after project planning and requirements analysis) in estimating performance consequences. The risk-based model for standardization developed in this study is illustrated in Fig. 1. Residual performance risk is modeled as an intervening variable mediating the effects of standardization and requirements uncertainty on process and product performance. Requirements uncertainty is also modeled to have a direct effect on the performance constructs, over and above its indirect effect via residual performance risk, to capture impacts unrelated to risk. The model also includes a covariation between requirements uncertainty and standardization, since the nature and direction of the effects between them is not the focus of this study.

Fig. 1. Risk-based model of standardization.

2.3. Risk-based hypotheses

Software engineering and IS research have identified a number of sources of uncertainty relating to requirements which increase the risk of a project. For example, difficulties in specifying the purpose of the system [1], or tasks that are unstructured, increase the risk of the project. Incomplete, ambiguous or inconsistent requirements [38], or frequent changes in them, often lead to redoing the requirements analysis [22]; these increase the difficulty of estimating performance outcomes. Requirements uncertainty could also have a direct adverse impact on performance that is unrelated to risk. For example, difficulties in analyzing requirements can lead to software that does not respond to the needs of its users or is inflexible in coping with changing business needs. See [32] for other arguments relating requirements uncertainty to residual performance risk and performance. The above arguments suggest the following hypotheses:

H1: Increases in requirements uncertainty will be directly associated with increases in residual performance risk.

H2: Increases in requirements uncertainty will be directly associated with decreases in process performance.

H3: Increases in requirements uncertainty will be directly associated with decreases in product performance.

While uncertainties increase the difficulty of estimating performance outcomes, risk-based research has repeatedly stressed the negative relationship between risk and performance. For example, Charette claims that efforts to reduce performance risk can lead to over 50% gains in productivity, because 40-50% of the cost of developing software is spent on fixing problems. Difficulties in specifying correct requirements are argued to be among the factors most responsible for overruns in project cost and schedule [38]. In terms of the Software Engineering Institute's process maturity framework [20, 23], immature processes that are high-risk are likely to have cost, schedule, and quality problems, while mature processes have consistently good results because their risks are under control and management can estimate resources more accurately, and can plan and implement efforts at improving the process. This suggests the following hypotheses:

H4: Increases in residual performance risk will be directly associated with decreases in process performance.


H5: Increases in residual performance risk will be directly associated with decreases in product performance.

Software engineers and IS researchers have identified a number of ways in which standards can help reduce the performance risks of software development. For example, their use enables IS staff to review and understand each other's work more easily and consistently, and to undertake system and integration testing more effectively, thus reducing the likelihood of errors in the software. With standards at their disposal, it is also easier to fill sudden personnel shortfalls, a source of risk which Boehm [6] identifies as extremely important. Standards also permit large groups of developers to coordinate their activities more easily, reducing the likelihood of project delays and cost overruns. They promote better communication among the participants in a project, and between the project team and the managers they report to. This, in turn, leads to a cohesive organizational culture, where organizational members speak "the same technical language, sharing common practices and procedures, and referring to organizational goals as their own" [20]. For example, a common set of management controls provides guidelines for project planning, review and evaluation, and for undertaking quality assurance. Overall, well-defined software processes provide control and predictability over software development, increasing the likelihood of obtaining acceptable results [9, 43]. To summarize, software engineering and IS research suggests the following hypothesis:

H6: Increases in standardization are associated with decreases in residual performance risk.

3. Research method

The data used in this study were collected as part of a broader survey of software project management, described in [30]. The sample selection and questionnaire design are described fully in [31], which studied the effects of coordination mechanisms and uncertainty on performance risk and project performance.


3.1. Sample selection

Two samples of software projects were used in this study. The first was drawn from firms that were members of an Industry Partners group at the author's university, while the second was drawn from firms in the banking industry. For each firm, a liaison, such as the Vice President of Data Processing, was asked to select both successful and unsuccessful projects, to ensure sufficient variation in the sample with regard to performance. For each project, the Data Processing Manager was expected to complete the survey. See [31] for further details on sample selection.

3.2. Questionnaire development

A pretest stage was used to validate the questionnaire items derived from prior research, and to generate new items for the scales developed for this study. Subsequently, a pilot test was conducted using subject matter experts from academia and industry. At the end of this stage, a number of modifications were made: 1) more background information on the project and company was added to the questionnaire; 2) some items were added; and 3) less important ones were deleted. The items used to measure each construct are given in Appendix A.

3.3. Analytical techniques

Principal components analysis (PCA) was used to create the key variables used in the study, i.e. standardization, requirements uncertainty, process performance and product performance. Each variable was given by the first principal component formed by combining its respective dimensions. For example, standardization was formed by combining behavior controls standardization and output controls standardization. Once the constructs were created, the theorized model and hypotheses were tested through path analysis using EQS, a structural equation modelling package that can also be used to estimate a simultaneous set of regression equations. The steps involved in using EQS for path analysis are described fully in [31]. Briefly, they included evaluation of the p-value of the chi-square statistic (which should exceed 0.05), the normed fit index (which should exceed 0.90) and a comparison of the theorized model with alternative models.
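The composite-variable construction described above can be sketched as follows. This is an illustrative reconstruction in Python, not the software used in the study; the function name and data layout are assumptions.

```python
import numpy as np

def first_principal_component_scores(items: np.ndarray) -> np.ndarray:
    """Score a composite as the first principal component of its dimensions.

    `items` is an (n_projects, n_dimensions) array, e.g. each project's
    behavior-controls and output-controls standardization scores.
    """
    # Standardize each dimension, as PCA on the correlation matrix implies.
    z = (items - items.mean(axis=0)) / items.std(axis=0)
    # Eigen-decompose the correlation matrix of the dimensions.
    corr = np.corrcoef(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    first = eigvecs[:, np.argmax(eigvals)]  # loadings of the first component
    # Fix the arbitrary sign so the composite tracks the item mean positively.
    if np.corrcoef(z @ first, z.mean(axis=1))[0, 1] < 0:
        first = -first
    return z @ first
```

With two highly correlated dimensions, the first component is close to their standardized average, which is why the composite can stand in for the underlying dimensions in the path analysis.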

4. Findings

4.1. Preliminary analysis

The preliminary analysis of the responses addressed the external and construct validity, and the reliability, of the study. It also included additional tests on the performance dimensions. A detailed description of this analysis is given in [31]; it is summarized in the ensuing sections.

4.1.1. External validity

Responses received from 32 projects in seven firms from the first sample gave a response rate of 33% for projects and 50% for firms; those from 32 projects and 19 firms in the second sample gave a response rate of 6% for projects and 7.6% for firms [31]. Because of the low response rates, it was important to assess the external validity of the study, which describes the extent to which the findings can be generalized to or across times, persons and settings [10]. A variety of tests for non-response bias and intersample differences, described in [31], suggested that external validity was unlikely to be a problem and that the samples were generalizable to the population of software development projects. The two samples were therefore pooled (N = 64) for subsequent analysis. A profile of the projects is given in Table 1.

Table 1
Profile of projects (N = 64 projects)

Attribute                      Mean     Std. dev.  Minimum  Maximum
Cost a                         1309     2057       20       8600
Effort (person months)         122.3    230.6      2        1100
Lines of code b                339.9    748.3      0        4000
Project duration (months)      16       11.9       1        51
Number of user departments     5.6      7.4        1        50

a In thousands of dollars. b In thousands.

4.1.2. Construct validity

This describes the validity of the construct's operationalization [10] and can be assessed using factor analysis. A confirmatory factor analysis using varimax rotation suggested that two of the requirements uncertainty dimensions, requirements diversity and requirements analyzability, did not need any modification; however, two items had to be deleted from requirements instability because they loaded at less than the 0.5 level (see Table 2). One item in the output controls standardization dimension loaded at less than the 0.5 level and was deleted from subsequent analysis; all the behavior controls standardization items loaded, as predicted, above the 0.5 level (see Table 3). For the three process performance dimensions, all items loaded as predicted (see Table 4). However, the analysis suggested that two of the three product performance dimensions needed modification: one item had to be deleted from each of the scales for software responsiveness and flexibility, because they loaded at less than the 0.5 level, whereas all items loaded, as predicted, on operational efficiency (see Table 5). Finally, all the residual performance risk items loaded above 0.5 on one factor, confirming that the construct was unidimensional (see Table 6).

4.1.3. Additional tests on process and product performance

Because of their importance as dependent variables, the process and product performance dimensions were additionally measured by a sub-sample of 19 user managers, drawn from the 64 projects in the study. Because multiple types of respondents can be used to represent multiple methods [18], a multi-trait multi-method (MTMM) matrix was constructed for the six performance dimensions. As described in [31], the matrix suggested that two of the three process performance dimensions (process control and quality of interactions) and two of the three product performance dimensions (operational efficiency and long-term flexibility) had adequate convergent and discriminant validity. They were therefore retained for subsequent analysis. Moreover, the process control and software responsiveness dimensions correlated negatively, as

141

Table 2
Factor analysis - requirements uncertainty

Scale item                                             Diversity  Instability  Analyzability
Fluctuation in earlier phases a                        0.383      0.431        -0.208
Fluctuation in later phases                            0.373      0.695        0.175
Difference in requirements between beginning and end   0.283      0.827        0.118
Expected future fluctuations a                         0.450      0.474        0.249
Differences among users regarding requirements         0.711      0.360        -0.112
Effort required for reconciling user differences       0.774      0.436        0.032
Difficulty in identifying common set of requirements   0.766      0.141        -0.169
Use of a clearly known way                             0.294      -0.246       0.576
Use of available knowledge                             -0.117     0.132        0.734
Use of existing procedures and practices               -0.022     0.186        0.643
Use of understandable sequence of steps                -0.037     -0.246       0.728

Variance explained                                     2.36       2.09         2.00
Percentage of total variance                           36.6%      32.4%        31.0%

a Items deleted from subsequent analysis.

Table 3
Factor analysis - standardization

Scale item                                     Output controls    Behavior controls
                                               standardization    standardization
The development phases during project          0.65               0.29
The milestones completed during project        0.69               0.23
The documents prepared at milestones           0.64               0.09
The approval procedures at milestones          0.81               0.05
The procedures to control changes *            0.37               0.36
Tools/techniques for project management        0.24               0.60
Tools/techniques for generating requirements   0.43               0.52
Tools/techniques for source selection          0.21               0.54
Tools/techniques for system design             0.27               0.77
Tools/techniques for coding software           0.07               0.80
Tools/techniques for testing software          0.01               0.80
Tools/techniques for data administration       0.17               0.74
Tools/techniques for installing software       0.29               0.66

Variance explained                             2.58               4.03
Percentage of total variance                   39.0%              61.0%

* Deleted from subsequent analysis.

predicted, with objective measures of project performance (the percentage by which actual costs, time, and effort exceeded initial projections). For example, process control correlated negatively at the 0.05 level with the percentage by which the actual cost exceeded the projected cost (r = -0.53), and with the percentage by which the actual time exceeded the projected time (r = -0.49). Software responsiveness correlated negatively at the 0.05 level with the percentage by which the actual time exceeded the projected time (r = -0.52), and with the percentage by which the actual effort exceeded the projected effort (r = -0.58). All other correlations were also in the predicted directions.
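The kind of correlation check described above can be reproduced in a few lines of Python. The project figures below are hypothetical illustrations, not the study's data; a negative r would mirror the paper's findings.

```python
import numpy as np
from scipy import stats

# Hypothetical per-project figures (the study used 64 real projects).
projected_cost = np.array([100.0, 250.0, 80.0, 400.0, 150.0])
actual_cost = np.array([120.0, 240.0, 110.0, 520.0, 150.0])
process_control = np.array([3.0, 4.5, 2.0, 1.5, 4.0])  # 5-point scale

# Percentage by which the actual cost exceeded the initial projection.
overrun_pct = 100.0 * (actual_cost - projected_cost) / projected_cost

# Pearson correlation between the subjective dimension and the objective
# overrun measure; a negative r parallels the paper's r = -0.53 result.
r, p = stats.pearsonr(process_control, overrun_pct)
```

With only five illustrative projects the p-value carries little weight; the point is the direction of the association between better process control and smaller overruns.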

4.1.4. Reliability

This describes the stability of a scale and has typically been measured by Cronbach's alpha, which indicates the amount of error in the measurement. A minimum value of 0.70 is generally


Table 4
Factor analysis - process performance

Scale item                                      Interaction  Learning  Control
Knowledge about use of key technologies         0.011        0.768     0.059
Knowledge about use of development techniques   0.145        0.676     0.138
Knowledge about supporting user's business      0.051        0.637     0.098
Control over project costs                      0.239        0.213     0.781
Control over project schedule                   0.117        0.077     0.774
Users' feelings of participation                0.678        0.324     0.093
Completeness of user training                   0.791        0.029     0.152
Quality of communications with the DP staff     0.818        -0.032    0.181

Variance explained                              1.85         1.61      1.31
Percentage of total variance                    38.8%        33.8%     27.4%

Table 5
Factor analysis - product performance

Scale item                                Flexibility  Efficiency  Responsiveness
Reliability of software                   0.152        0.568       0.082
Response time of software                 0.443        0.571       0.252
Cost of software operations               0.145        0.713       0.031
Range of outputs generated                0.248        0.162       0.677
Ease of use of software *                 0.064        0.709       0.398
Ability to customize outputs              0.107        0.121       0.712
Cost of adapting to business changes      0.848        0.140       0.163
Speed of adapting to business changes     0.817        0.183       0.183
Cost of maintaining software over life *  0.579        0.526       0.174

Variance explained                        2.04         2.03        1.28
Percentage of total variance              38.1%        37.9%       24.0%

* Items deleted from subsequent analysis.

Table 6
Factor analysis - residual performance risk

Scale item                                                                       Residual performance risk
Difficulty in estimating what would be the costs of the project                  0.79
Difficulty in estimating what would be the project completion time               0.77
Difficulty in estimating what would be the benefits from the software            0.70
Difficulty in estimating whether software would be compatible with environment   0.76
Difficulty in estimating whether software would meet user needs                  0.70
Difficulty in estimating what would be the costs of operating the software       0.60

Variance explained                                                               3.13
Percentage of total variance                                                     100.0%

recommended [33]. The reliabilities of the scales are given in Table 7. One item had to be deleted from the scale for residual performance risk (difficulty in estimating the costs of operating the software) because of a low item-scale correlation. The revised scale had

an alpha coefficient of 0.81. The reliabilities of all other constructs or dimensions exceeded the 0.70 level, except for the scale for software responsiveness, which had an alpha coefficient of 0.65. However, this scale was retained because: 1) it was very close to the


Table 7
Reliability of scales

Composite variable          Variable                            No. of items  Cronbach's alpha  Mean  Std. dev.
Residual performance risk                                       5             0.81              2.35  0.85
Standardization             Output controls standardization     4             0.73              3.42  0.74
                            Behavior controls standardization   8             0.86              3.20  0.77
Requirements uncertainty    Requirements instability            2             0.85              2.99  1.30
                            Requirements diversity              3             0.87              2.70  1.28
                            Requirements analyzability          4             0.79              2.73  0.88
Process performance         Learning a                          3             0.76              3.58  0.81
                            Process control                     2             0.85              3.06  1.15
                            User-IS interactions                3             0.78              3.55  0.88
Product performance         Operational efficiency a            3             0.71              3.84  0.73
                            Responsiveness                      2             0.65              3.54  0.84
                            Flexibility                         2             0.86              3.46  0.99

N = 64 projects. a Deleted from subsequent analysis.

guideline; and 2) the correlation between the two responsiveness items was significant even at the 0.001 level.
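Cronbach's alpha, used throughout this section, is computed from the item variances and the variance of the scale total. A minimal sketch follows; the function name and data layout are assumptions for illustration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(scale total))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Highly intercorrelated items inflate the variance of the total relative to the sum of item variances, which pushes alpha toward 1; items that barely covary pull it toward 0, which is why the 0.70 guideline screens out noisy scales.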

4.2. Testing the risk-based hypotheses

The correlations between the study's variables (see Table 8) suggested that there were several significant relationships. The first issue to be addressed in testing the risk-based model is the adequacy of the sample size. Bentler recommends at least a 5:1 ratio of sample size to the number of parameters estimated. In an EQS model, the parameters to be estimated are the path coefficients, and the variances and covariances of the independent variables. In the risk-based model for standardization, the number of parameters can be

shown to be nine. Given a sample size of 64 projects in this study, the ratio of sample size to parameters was over 7:1, which exceeded Bentler's recommendation. The sample size was therefore considered adequate. The results of testing the risk-based model for process performance are given in Fig. 2. The model fits the data very well, with a p-value of chi-square (0.96) that well exceeded the recommended minimum cutoff of 0.05. Moreover, the normed fit index value (1.0) exceeded the 0.9 minimum. Finally, the Wald and Lagrange Multiplier (LM) tests indicated that no further changes could be made in the model to improve its fit significantly. The unstandardized path coefficients for the individual effects are also shown.
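The two fit criteria applied here (chi-square p-value above 0.05 and normed fit index above 0.90) can be sketched as a small helper. The Bentler-Bonett NFI compares the model's chi-square against that of an independence (null) model; the function name and example values below are illustrative assumptions, not the study's EQS output.

```python
from scipy.stats import chi2

def fit_summary(chisq_model: float, df_model: int, chisq_null: float) -> dict:
    """Apply the paper's two cutoffs to a fitted path model.

    A non-significant chi-square (p > 0.05) means the model is not
    rejected; NFI = (chisq_null - chisq_model) / chisq_null should
    exceed 0.90.
    """
    p_value = chi2.sf(chisq_model, df_model)  # right-tail probability
    nfi = (chisq_null - chisq_model) / chisq_null
    return {"p_value": p_value,
            "nfi": nfi,
            "acceptable": p_value > 0.05 and nfi > 0.90}
```

For example, a model chi-square of 1.0 on 5 degrees of freedom against a null-model chi-square of 100.0 would pass both cutoffs comfortably, which is the pattern the paper reports for its process performance model.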

Table 8
Correlations between variables

Row  Variable                    1        2        3        4        5
1    Process performance         1.00
2    Product performance         0.39 d   1.00
3    Requirements uncertainty    -0.55 e  -0.24 a  1.00
4    Residual performance risk   -0.59 e  -0.30 b  0.43 e   1.00
5    Standardization             0.28 b   0.27 a   -0.29 b  -0.38 a  1.00

N = 64 projects.
a p<0.10, b p<0.05, c p<0.01, d p<0.005, e p<0.001.


Fig. 2. Process performance model.

Fig. 3. Product performance model.

The path coefficient for the effect of requirements uncertainty on residual performance risk was positive and significant (z score = 3.03, p < 0.005), which suggested support for hypothesis H1; i.e. increased requirements uncertainty was associated with increased residual performance risk. The path coefficient for the direct effect of requirements uncertainty on process performance was negative and significant (z score = -3.53, p < 0.001), which suggested support for hypothesis H2; i.e. increased requirements uncertainty directly led to reduced process performance. Also, the direct effect of residual performance risk on process performance was similarly negative and significant (z score = -4.2, p < 0.001), which suggested support for hypothesis H4; i.e. increased residual performance risk led to reduced process performance. Finally, standardization had a significant negative effect on residual performance risk (z score = -2.48, p < 0.05), which suggested support for hypothesis H6; i.e. increased standardization was associated with reduced residual performance risk. The theorized product performance model fit the data well, with a p-value of chi-square (0.21) that exceeded the 0.05 minimum. Moreover, the normed fit index (0.95) also exceeded the 0.9 minimum. The LM test suggested that no additional effects could be added to the theorized model to improve its fit. However, the Wald test suggested that a significant decrease in the chi-square value could be obtained if the direct effect of requirements uncertainty on product performance were dropped.

The revised product performance model, which excluded this direct effect, fit the data well, with an improved p-value of chi-square (0.27). The normed fit index (0.92) again exceeded the recommended minimum. Moreover, both the Wald and LM tests indicated that no further changes could be made to the model. The results corresponding to this revised model are shown in Fig. 3. As in the case of the process performance model, the direct effect of requirements uncertainty on residual performance risk was positive and significant (z value=3.03, p<0.005), reinforcing support for hypothesis H1. However, requirements uncertainty did not have a direct effect on product performance, since the Wald test suggested that this effect was insignificant and could be dropped from the model. This suggested a lack of support for hypothesis H3; i.e. increased requirements uncertainty did not seem to have a significant effect on product performance. On the other hand, residual performance risk did have a significantly negative effect on product performance (z value=-2.46, p<0.05), which suggested support for hypothesis H5; i.e. increased residual performance risk was associated with reduced product performance. Also, as in the process performance model, standardization had a significant negative effect on residual performance risk (z value=-2.48, p<0.05), suggesting support for hypothesis H6; i.e. increased standardization was associated with reduced residual performance risk.
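The mediated structure tested here (standardization and requirements uncertainty acting on performance through residual performance risk) can be illustrated on simulated data. The sketch below estimates the paths by ordinary least squares on synthetic variables whose true effects follow the hypothesized signs; the sample and coefficients are invented for illustration, not drawn from the study:

```python
import random
import statistics

random.seed(1)
n = 64  # same sample size as the study; the data are simulated

# Hypothesized structure: standardization lowers residual performance risk,
# requirements uncertainty raises it, and risk in turn lowers performance.
std  = [random.gauss(0, 1) for _ in range(n)]
unc  = [random.gauss(0, 1) for _ in range(n)]
risk = [-0.4 * s + 0.5 * u + random.gauss(0, 0.7) for s, u in zip(std, unc)]
perf = [-0.5 * r + random.gauss(0, 0.8) for r in risk]

def ols2(x1, x2, y):
    """OLS slopes for y regressed on two predictors, via the 2x2 normal
    equations on centered variables."""
    mx1, mx2, my = map(statistics.fmean, (x1, x2, y))
    a = sum((v - mx1) ** 2 for v in x1)
    b = sum((v1 - mx1) * (v2 - mx2) for v1, v2 in zip(x1, x2))
    d = sum((v - mx2) ** 2 for v in x2)
    c1 = sum((v1 - mx1) * (v - my) for v1, v in zip(x1, y))
    c2 = sum((v2 - mx2) * (v - my) for v2, v in zip(x2, y))
    det = a * d - b * b
    return (d * c1 - b * c2) / det, (a * c2 - b * c1) / det

b_std, b_unc = ols2(std, unc, risk)  # paths into the mediator
b_risk, _ = ols2(risk, unc, perf)    # path from the mediator to performance
print(b_std < 0, b_unc > 0, b_risk < 0)  # expected signs under the simulation
```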


4.3. Effects of output controls standardization

The output controls standardization model for process performance fit the data well, since the p-value of chi-square was 0.50 and the normed fit index was 0.99. However, the Wald test suggested that the covariance between output controls standardization and requirements uncertainty was insignificant and could be dropped from the model. The LM test did not identify any additional effects to be added to the model. The revised model also had a good p-value of chi-square (0.36) and a high normed fit index (0.97). Moreover, the Wald and LM tests suggested that no further changes could be made to the model. The path coefficients were all significant at the p=0.05 level, suggesting that:

1. Output controls standardization had a significant negative effect on residual performance risk.
2. Requirements uncertainty had a significant positive effect on residual performance risk and a direct negative effect on process performance.
3. Residual performance risk had a significant negative effect on process performance.

The output controls standardization model for product performance also fit the data well, with a reasonably high p-value of chi-square (0.15) and normed fit index (0.93). However, the Wald test suggested that two effects could be dropped from the model: the covariance between output controls standardization and requirements uncertainty, and the direct effect of requirements uncertainty on product performance. The LM test did not identify any effect to be added. The revised model, which dropped the effects suggested by the Wald test, also fit the data well, with a p-value of chi-square of 0.19, although the normed fit index was reduced (0.85). The Wald and LM tests did not suggest any further modification to the model. The path coefficients for the revised model were all significant, suggesting that:

1. Output controls standardization has a significant negative effect on residual performance risk.
2. Requirements uncertainty has a significant positive effect on residual performance risk.
3. Residual performance risk has a significant negative effect on product performance.

4.4. Effects of behavior controls standardization

The behavior controls standardization model for process performance fit the data well, with a high p-value of chi-square (0.56) and normed fit index (0.99). The LM test suggested that no further effect could be added to the model. However, the Wald test suggested that the effect of behavior controls standardization on residual performance risk could be dropped from the model. This revised model also fit the data well, with a high p-value of chi-square (0.37) and normed fit index (0.97). Neither the Wald nor the LM test suggested any further change in the model to improve its fit significantly. All the path coefficients in the revised model were significant, which suggested that:

1. Requirements uncertainty had a significant positive effect on residual performance risk and a direct negative effect on process performance.
2. Residual performance risk had a significant negative effect on process performance.
3. Behavior controls standardization did not have any significant effect on residual performance risk.

The behavior controls standardization model for product performance also fit the data well, with a high p-value of chi-square (0.52) and normed fit index (0.99). The LM test suggested that no further effect could be added to the model. However, the Wald test suggested that two effects could be dropped from the model:

1. The effect of behavior controls standardization on residual performance risk.
2. The direct effect of requirements uncertainty on product performance.

This revised model also fit the data well, with a p-value of chi-square of 0.38 and a normed fit index of 0.89. Moreover, the Wald and LM tests suggested no further change in the revised model. All the path coefficients in the revised model were significant, suggesting that:

1. Requirements uncertainty has a significant positive effect on residual performance risk.


2. Residual performance risk has a significant negative effect on product performance.
3. Behavior controls standardization does not have any significant effect on residual performance risk.

4.5. Summary of findings

In our study, a path model was developed and tested, which demonstrated why software development standards improve software project performance, even in the presence of requirements uncertainty. By drawing on risk-based approaches, the model suggested that residual performance risk mediated the effects of standardization and requirements uncertainty on both process and product performance. In particular, while requirements uncertainty increased residual performance risk, software development standards reduced such risk, as suggested by the risk-based approaches and supported by the empirical evidence. However, further analysis suggested that the effects were more complex than originally envisioned. Overall, the findings suggested that organization-wide standards for output controls, such as the phases to be observed during the project, the milestones to be completed at each phase, the documents to be prepared at milestone completion, and the approval procedures to be followed at each milestone, have an important role in reducing performance risk and improving process and product performance, even when there are high levels of requirements uncertainty in the project. However, organization-wide standardization of the actual tools and techniques to be used in developing software did not significantly decrease performance risk or improve process and product performance.
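The Wald-test-driven model trimming described above amounts to comparing nested models: if dropping one path barely increases the chi-square, the simpler model is preferred. A sketch of the corresponding 1-df chi-square difference test; the statistic values are illustrative, not the study's:

```python
import math

def chi2_diff_test(chi_sq_restricted: float, chi_sq_full: float) -> float:
    """p-value for a 1-df chi-square difference between nested models.
    A chi-square variable with 1 df is a squared standard normal, so its
    survival function is erfc(sqrt(x / 2))."""
    diff = chi_sq_restricted - chi_sq_full
    return math.erfc(math.sqrt(diff / 2.0))

# Illustrative: dropping one path raises the chi-square by only 0.8, so the
# restricted model is not significantly worse and can be retained.
p = chi2_diff_test(11.3, 10.5)
print(p > 0.05)  # True
```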

5. Conclusion

5.1. Limitations

Because the study required the respondents to reconstruct their project experience, the findings may have been subject to recall bias. Such recall problems were reduced, to some extent, in this study by collecting data only on recently completed projects and ensuring that the performance scores were cross-validated by a subset of user manager responses. The likelihood of a recall bias was also tested by measuring the performance risk recalled as present at the initial stages of the project, and verifying whether it differed from the performance risk recalled as present at the later stages (residual performance risk). The analysis showed that initial performance risk and residual performance risk were indeed recalled differently by the respondents. Their mean levels were significantly different (t=5.15, p<0.001), and their correlation (r=0.52) was significantly less than one. Moreover, the correlations between initial performance risk and all the other constructs in the study were considerably smaller than the corresponding correlations involving residual performance risk. This suggests that initial performance risk had less of an effect on project performance, and that the respondents could discriminate between the performance risk present during the initial and later stages of the project.

5.2. Contributions

The support for the risk-based hypotheses lends credence to the importance of studying the performance risk of projects, and of the role of standardization in reducing risks and improving performance. Traditionally, software development has been considered a craft activity, where teams of highly skilled developers have virtually total control over their means of development. In practice, however, the process has often been accused of being undisciplined and characterized by low productivity, inadequate quality, and poor conformance to schedule. The findings suggest that standardized approaches to software development help increase both process and product performance. Interestingly, this positive effect on performance is primarily due to the use of standardized output controls, rather than behavior controls.

Appendix A

Questionnaire Items

This appendix describes the questionnaire items that pertained to the constructs used in the study.


Standardization


For each item listed below, to what extent did the project follow procedures or techniques specified, in advance, by the DP department? (1 - completely ad hoc, 3 - followed guidelines drawn up by the DP department, 5 - followed detailed procedures/ specific tools or techniques established by the DP department)

Output controls standardization
1. The development phases observed during the project.
2. The milestones completed during the project.
3. The documents prepared at milestone completion.
4. The approval procedures followed at the milestones.
5. The procedures used to control software changes.

Behavior controls standardization
1. Tools or techniques for project management.
2. Tools or techniques for generating requirements.
3. Tools or techniques for software source selection.
4. Tools or techniques for system design.
5. Tools or techniques for coding software.
6. Tools or techniques for testing software.
7. Tools or techniques for data administration.
8. Tools or techniques for installing software.

Requirements uncertainty

How much do you disagree or agree with each of the following statements about the project? (1 - strongly disagree, 3 - neither disagree nor agree, 5 - strongly agree)

Requirements instability
1. Requirements fluctuated quite a bit in the earlier phases.
2. Requirements fluctuated quite a bit in the later phases.
3. Requirements identified at the beginning of the project were quite different from those existing at the end.
4. Requirements will fluctuate quite a bit in the future.

Requirements diversity
1. Users of this software differed a great deal among themselves in the requirements to be met by it.
2. A lot of effort had to be spent in reconciling the requirements of various users of this software.
3. It was difficult to customize software to one set of users without reducing support to other users.

Requirements analyzability
1. There was a clearly known way to convert the user needs to requirements specifications.
2. Available knowledge was of great help in converting the user needs to requirements specifications.
3. Established procedures and practices could be relied upon to generate requirements specifications.
4. An understandable sequence of steps could be followed for converting the user needs to requirements specifications.

Residual performance risk

How difficult was it to estimate each of the following during the later phases of the project? (1 - very easy, 3 - in between, 5 - very difficult) (Later phases: system design, coding and testing, installation.)

1. What would be the costs of the project?
2. What would be the project completion time?
3. What would be the benefits of the software?
4. Whether the software would be compatible with the environment?
5. Whether the software would meet user needs?
6. What would be the costs of operating the software?
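Multi-item scales such as these are conventionally scored by averaging the item responses into a single construct score. A minimal sketch; the item responses below are hypothetical:

```python
from statistics import fmean

# Hypothetical 1-5 responses to the six residual-performance-risk items
responses = {
    "costs": 4,            # difficulty of estimating project costs
    "completion_time": 5,  # difficulty of estimating completion time
    "benefits": 3,         # difficulty of estimating software benefits
    "compatibility": 2,    # difficulty of judging environment compatibility
    "user_needs": 4,       # difficulty of judging fit to user needs
    "operating_costs": 3,  # difficulty of estimating operating costs
}

# Construct score = mean of the item responses
residual_performance_risk = fmean(responses.values())
print(residual_performance_risk)  # 3.5
```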

Project performance How do you rate the project and the software that was delivered on each of the following? (1 - very poor, 3 - OK, 5 - very good)


Process performance

Process control
1. Control over project costs.
2. Control over project schedule.
3. Adherence to auditability and control standards.
4. Overall control exercised over the project (overall item).

Quality of interactions
1. Completeness of training provided to users.
2. Quality of communication between the DP and users.
3. Users' feelings of participation in the project.
4. Overall quality of interactions with the users (overall item).

Learning
1. Knowledge acquired by the firm about the use of key technologies.
2. Knowledge acquired by the firm about the use of development techniques.
3. Knowledge acquired by the firm about supporting users' business.
4. Overall knowledge acquired by the firm through the project (overall item).

Product performance

Operational efficiency
1. Reliability of the software.
2. Cost of the software operations.
3. Response time.
4. Overall operational efficiency of software (overall item).

Responsiveness
1. Ease of use of the software.
2. Ability to customize outputs to various user needs.
3. Range of outputs that can be generated.
4. Overall responsiveness of the software to its users (overall item).

Flexibility
1. Cost of adapting the software to changes in business.
2. Speed of adapting the software to changes in business.
3. Cost of maintaining the software over lifetime.
4. Overall long-term flexibility of the software (overall item).

Objective measures of performance

Cost overrun
By approximately what percentage, if any, did the actual costs for the project overrun the originally budgeted costs? (indicate underrun by negative sign) ____

Schedule overrun
By approximately what percentage, if any, did the actual completion time for the project overrun the originally budgeted completion time? (indicate underrun by negative sign) ____

Effort overrun
By approximately what percentage, if any, did the actual systems and programming effort for the project overrun the originally budgeted effort? (indicate underrun by negative sign) ____
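The objective overrun measures reduce to percentage deviations of actuals from budget. A small sketch; the figures are hypothetical:

```python
def overrun_pct(actual: float, budgeted: float) -> float:
    """Percentage overrun relative to budget; negative values indicate
    an underrun, as in the questionnaire's scoring convention."""
    return 100.0 * (actual - budgeted) / budgeted

# A project budgeted at 100,000 that cost 130,000 overran by 30%
print(overrun_pct(130_000, 100_000))  # 30.0
# A 12-month schedule finished in 9 months: a 25% underrun
print(overrun_pct(9, 12))             # -25.0
```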

References

[1] Alter, S., "Development Patterns for Decision Support Systems", MIS Quarterly, 2(3), 1978, 33-41.
[2] Argote, L., "Input Uncertainty and Organizational Coordination in Hospital Emergency Units", Administrative Science Quarterly, 27, 1982, 420-434.
[3] Baroudi, J. and Orlikowski, W., "A Short-form Measure of User Information Satisfaction: A Psychometric Evaluation and Notes on Use", Journal of Management Information Systems, 4(4), 1988, 44-59.
[4] Barki, H., Rivard, S. and Talbot, J., "Toward an Assessment of Software Development Risk", Journal of Management Information Systems, 10(2), 1993, 203-225.
[5] Beath, C.M., Managing the User Relationship in Management Information Systems Projects: A Transaction Governance Approach, unpublished Ph.D. dissertation, Graduate School of Management, UCLA, 1986.
[6] Boehm, B.W., Software Risk Management, IEEE Computer Society Press, Washington, D.C., 1989.
[7] Charette, R.N., Software Engineering Risk Analysis and Management, McGraw-Hill, New York, 1989.
[8] Cheng, J.L.C. and McKinley, W., "Toward an Integration of Organization Research and Practice: A Contingency Study of Bureaucratic Control and Performance in Scientific Settings", Administrative Science Quarterly, 29, 1983, 85-100.
[9] Cobb, R.H. and Mills, H.D., "Engineering Software under Statistical Quality Control", IEEE Software, 7, 1990, 44-54.
[10] Cook, T.D. and Campbell, D.T., Quasi-Experimentation, Houghton Mifflin, Boston, MA, 1979.
[11] Cooprider, J.G. and Henderson, J.C., "Technology-Process Fit: Perspectives on Achieving Prototyping Effectiveness", Journal of Management Information Systems, 7(3), 1990, 67-87.
[12] Cusumano, M.A., Japan's Software Factories: A Challenge to U.S. Management, Oxford University Press, New York, 1991.
[13] Cusumano, M.A., "Shifting Economies: From Craft Production to Flexible Systems and Software Factories", Research Policy, 21, 1992, 453-480.
[14] Daft, R.L. and Macintosh, N.B., "A Tentative Exploration into the Amount and Equivocality of Information Processing in Organizational Work Units", Administrative Science Quarterly, 26, 1981, 207-224.
[15] Galbraith, J., Organizational Design, Addison-Wesley, Reading, MA, 1977.
[16] Gane, C. and Sarson, T., Structured Systems Analysis: Tools and Techniques, Prentice-Hall, Englewood Cliffs, NJ, 1979.
[17] Gilb, T., "Estimating the Risk", in Software Risk Management (Boehm, B.W., ed.), IEEE Computer Society Press, Washington, D.C., 1989.
[18] Henderson, J.C. and Lee, S., "Managing I/S Design Teams: A Control Theories Perspective", Management Science, 38(6), 1992, 757-777.
[19] Humphrey, W.S., Managing the Software Process, Addison-Wesley, Reading, MA, 1989.
[20] Humphrey, W.S., Snyder, T.R. and Willis, R.R., "Software Process Improvement at Hughes Aircraft", IEEE Software, 1991, 11-23.
[21] Jackson, M.A., Principles of Program Design, Academic Press, New York, 1975.
[22] Jenkins, A.M. and Wetherbe, J.C., "Empirical Investigation of Systems Development Practices and Results", Information and Management, 7, 1984, 73-82.
[23] Krasner, H., "Continuous Software Process Improvement", in Total Quality Management for Software (Schulmeyer, G.G. and McManus, J.I., eds.), Van Nostrand Reinhold, New York, 1992.
[24] Kydd, C., "Understanding the Information Content in MIS Management Tools", MIS Quarterly, 1989.
[25] Leifer, R., Lee, S. and Durgee, J., "Deep Structures: Real Information Requirements Determination", Information and Management, 27, 1994, 275-285.
[26] McFarlan, F.W., "Portfolio Approach to Information Systems", Harvard Business Review, 59(4), 1981, 26.
[27] McGaughey, R.E., Jr., Snyder, C.A. and Carr, H.H., "Implementing Information Technology for Competitive Advantage: Risk Management Issues", Information and Management, 26, 1994, 273-280.
[28] Miller, J. and Doyle, B.A., "Measuring the Effectiveness of Computer-Based Information Systems in the Financial Services Sector", MIS Quarterly, 1987.
[29] Mookerjee, A.S., Global Electronic Wholesale Banking Delivery-System Structure, unpublished D.B.A. dissertation, Harvard University, 1988.
[30] Nidumolu, S.R., The Effect of Structure and Uncertainty on Software Project Performance: Theory Testing and Development, unpublished Ph.D. dissertation, University of California, Los Angeles, 1991.
[31] Nidumolu, S.R., "The Effect of Coordination and Uncertainty on Software Project Performance: Residual Performance Risk as an Intervening Variable", Information Systems Research, 6(3), 1995, 191-219.
[32] Nidumolu, S.R., "A Comparison of the Structural Contingency and Risk-based Perspectives on Coordination in Software Development Projects", Journal of Management Information Systems (forthcoming).
[33] Nunnally, J.C., Psychometric Theory, McGraw-Hill, New York, 1978.
[34] Ouchi, W.G. and Maguire, M.A., "Organizational Control: Two Functions", Administrative Science Quarterly, 20(4), 1975, 559-569.
[35] Paulk, M.C., Curtis, B., Chrissis, M.B. and Weber, C.V., "Capability Maturity Model, Version 1.1", IEEE Software, 10(4), 1993, 18-27.
[36] Snell, S.A., "Control Theory in Strategic Human Resource Management: The Mediating Effect of Administrative Information", Academy of Management Journal, 35(2), 1992, 292-327.
[37] Swanson, K., McComb, D., Smith, J. and McCubbrey, D., "The Application Software Factory: Applying Total Quality Techniques to Systems Development", MIS Quarterly, December 1991, 567-579.
[38] Thayer, R.H. and Lehman, J.H., "Software Engineering Project Management: A Survey Concerning U.S. Aerospace Industry Management of Software Development Projects", in Tutorial: Software Management (Reifer, D.J., ed.), IEEE Computer Society Press, Washington, D.C., 1979.
[39] Turner, J.A., "A Comparison of the Process of Knowledge Elicitation with that of Information Requirements Determination", Chapter 22 in Challenges and Strategies for Research in Systems Development (Cotterman, W.W. and Senn, J.A., eds.), John Wiley and Sons, New York, 1992.
[40] Tushman, M.L. and Nadler, D.A., "Information Processing as an Integrating Concept in Organizational Design", Academy of Management Review, 3, 1978, 613-624.
[41] Yourdon, E., Modern Structured Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[42] Yourdon, E. and Constantine, L., Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design, Prentice-Hall, Englewood Cliffs, NJ, 1979.
[43] Zultner, R.E., "TQM for Technical Teams", Communications of the ACM, 36(10), 1993, 79-91.
[44] Zmud, R.W., "Management of Large Software Development Efforts", MIS Quarterly, September 1980, 45-55.

Sarma R. Nidumolu is an Assistant Professor of MIS at the University of Arizona. He received a Ph.D. in Information Systems from UCLA in 1991. Prior education includes a Post Graduate Diploma in Management from the Indian Institute of Management, Calcutta, India and a B.S. in Electronics and Communications Engineering from the Indian Institute of Science, Bangalore, India. His current research interests include software process management, and the processes surrounding the adoption of information technologies in both domestic and international settings. Current projects include studying business process change management issues in a multi-million dollar project on computer-aided business engineering, and the impacts of software process management approaches on business performance. He has publications in Management Science, Information Systems Research, Communications of the ACM, MIS Quarterly, and others.