An empirical analysis of the impact of software development problem factors on software maintainability

Jie-Cherng Chen, Sun-Jen Huang *
Department of Information Management, National Taiwan University of Science and Technology, 43, Section 4, Keelung Road, Taipei, Taiwan

Article history: Received 7 October 2008; received in revised form 11 December 2008; accepted 22 December 2008; available online 3 January 2009.
Keywords: Software development problem factors; Software maintainability; Software process improvement

Abstract

Many problem factors in the software development phase affect the maintainability of the delivered software systems. Understanding these problem factors can therefore help not only to reduce the incidence of project failure but also to ensure software maintainability. This study focuses on those software development problem factors which may possibly affect software maintainability. Twenty-five problem factors were classified into five dimensions; a questionnaire was designed and 137 software projects were surveyed. A K-means cluster analysis was performed to classify the projects into three groups of low, medium and high maintainability projects. For projects with a higher level of severity of problem factors, the influence on software maintainability was more obvious. The influence of software process improvement (SPI) on project problems and the associated software maintainability was also examined in this study. Results suggest that SPI can help reduce the level of severity of the documentation quality and process management problems, and is only likely to enhance software maintainability to a medium level. Finally, the top 10 list of higher-severity software development problem factors was identified, and implications were discussed.

© 2009 Elsevier Inc. All rights reserved.

* Corresponding author. Tel.: +886 2 27376779; fax: +886 2 27376777. E-mail address: [email protected] (S.-J. Huang).
0164-1212/$ - see front matter © 2009 Elsevier Inc. All rights reserved. doi:10.1016/j.jss.2008.12.036

1. Introduction

There are two major phases in the software life cycle: the software development phase and the software maintenance phase. A previous study conducted by Yip and Lam (1994) showed that about 66% of the total software life cycle costs were spent on software maintenance. A study conducted by Sousa (1998) showed that only 2.7% of information technology specialists considered the software maintenance process as being very efficient, whereas 70.2% considered it as being of a very low level of efficiency. These studies reveal the fact that software maintenance has been recognized as the most costly and difficult phase in the software life cycle.

Many studies on the exploration of software maintenance problems exist in the literature (Lientz and Swanson, 1981; Nosek and Palvia, 1990; Dekleva, 1992; Swanson and Beath, 1992; Yip, 1995; Tan and Gable, 1998). However, they only focus on the problem factors which occur during the software maintenance phase. In comparison to the software development phase, the problems associated with the software maintenance phase are somewhat different. For example, some of the software maintenance problem factors proposed by Lientz and Swanson (1981) and Dekleva (1992) may not be suitable for the software development phase,

such as a lack of recognition and respect, a lack of motivation, a lack of support for re-engineering, and system hardware changes.

Previous studies have shown that one of the causes of software maintenance problems is that software maintainability is not often a major consideration during software design and implementation (Bendifallah and Scacchi, 1987; Schneidewind, 1987). Lee (1998) and Balci (2003) suggested that a reduction in software maintenance costs could be achieved by a more controlled design and implementation process early in the life cycle. Therefore, understanding software development problem factors will help in ensuring the maintainability of delivered software systems. However, there is a dearth of research which explores patterns between software development problem factors and software maintainability.

Furthermore, software process improvement (SPI) is well known today. The concept of SPI is that well-defined and clearly documented processes are able to effectively solve project problems and eventually result in high quality products (Humphrey, 1992; McGarry et al., 1994; Haley, 1996; Diaz and Sligo, 1997; Kuilboer and Ashrafi, 2000; Ashrafi, 2003). Hence, understanding the problems which can be solved by SPI, and the nature of the various software development problem factors and their relationship with software maintainability, has become both important and necessary in order to improve the maintainability of the delivered software systems.

The present paper explores the relationship between SPI, software development problem factors and the maintainability of the


delivered software systems by conducting an empirical analysis of a dataset comprising 137 historical software projects. The purposes of this study are: (1) to explore patterns in software development problem dimensions across low, medium, and high maintainability projects; (2) to determine how SPI affects software development problem dimensions and the associated software maintainability; and (3) to explore the top 10 higher-severity software development problem factors affecting software maintainability.

2. Background

2.1. Software maintainability

Software quality is one of the key user requirements in most software development projects. ISO/IEC 9126-1 (2001) specifies six characteristics for software product external quality: functionality, reliability, usability, efficiency, maintainability and portability. As more than half of the total software life cycle costs are spent on software maintenance, as described in the previous section, this study aims to improve the maintainability of the delivered software systems. According to ISO/IEC 9126-1 (2001), software maintainability is defined as the capability of the software systems to be modified. Modifications may include corrections, improvements or adaptation of the software systems to changes in the environment, and in requirements and functional specifications. Software maintainability in ISO/IEC 9126-1 includes five sub-characteristics: analyzability, changeability, stability, testability, and maintainability compliance. Based on this standard, five items were used to operationalize the "software maintainability (MA)" dimension, as shown in Table 1.

2.2. Software development problem factors

Software development processes are highly complicated and unpredictable. Even today, software development projects have a high rate of failure. According to The Standish Group's CHAOS Report (2004), more than 80% of 9236 completed worldwide IT projects were delivered late, more than 50% of these did not include the required features, and more than 15% ran over budget. Many problem factors in the software development phase possibly affect the maintainability of the delivered software systems. Through a literature survey, as shown in Table 2, we have identified 25 major problem factors in the development phase of software projects. These problem factors are classified into five dimensions: documentation quality problems, programming quality problems, system requirements problems, personnel resources problems and process management problems. The rationale and a brief description of the five problem dimensions are presented below.

Table 1. Instrument items of the "software maintainability (MA)" dimension.

Sub-characteristics          Abbr.  Instrument items
Analysability                MA1    The delivered software systems were easy to diagnose and/or analyse
Changeability                MA2    The delivered software systems were easy to change and/or modify
Stability                    MA3    The delivered software systems were stable and were able to avoid unexpected effects from modifications
Testability                  MA4    The delivered software systems were easy to test
Maintainability compliance   MA5    Overall, the delivered software systems were easy to maintain

2.2.1. Documentation quality problems

Documentation is very important to both the understandability and modifiability of software (Etzkorn et al., 2001). In a survey of 487 data processing organizations, Lientz and Swanson (1981) found "documentation quality" ranked 3rd in a list of 26 maintenance problems. Low quality or missing documentation is a major cause of errors in software development and maintenance (Visconti and Cook, 1993). Arthur and Stevens (1989) also pointed out that "documentation quality" can be defined as the consistency among the code and all documentation of the code for all requirements. During the software development phase, software engineers typically do not update documentation in a timely or complete fashion. This results in documentation being untrustworthy (Lethbridge et al., 2003). Hayes et al. (2007) pointed out that a number of important tasks in software maintenance require an up-to-date requirements traceability matrix (RTM), such as change impact analysis and the determination of test cases to execute for regression testing. In the software industry, documented bi-directional traceability needs to be maintained over the entire life cycle of the software systems. Based on the above literature survey results, five items were used to operationalize the "documentation quality problems (DOC)" dimension, as shown in Table 2.

2.2.2. Programming quality problems

A low level of programming quality will cause maintenance problems (Lientz and Swanson, 1981). A high level of program complexity density will also affect software maintenance productivity (Gill and Kemerer, 1991). Various approaches to this issue have been proposed, such as modularization and comments (Woodfield et al., 1981), module restructuring (Tonella, 2001), and refactoring (Fowler et al., 1999). A good style of programming should make source code easy to read, understand and modify (Kernighan and Plauger, 1982). Low coupling between modules and high cohesion inside each module are the key features of good software design (Tonella, 2001). Fowler et al. (1999) stated that refactoring is one method of improving the design of code without affecting its external behavior. They also urged that the code must be kept as simple as possible and ready for any change that comes along. Based on the above literature survey results, five items were used to operationalize the "programming quality problems (PGM)" dimension, as shown in Table 2.

2.2.3. System requirements problems

Most of the defects in software systems can be traced back to the requirements phase of the development life cycle. Previous studies have shown that the majority of errors in software systems are due to incorrect, incomplete or unclear system requirements. Incorrect requirements mean that the requirements are not valid statements of the customer's needs, due to either a misinterpretation or a failure to document the requirements appropriately. Dethomas and Anthony (1987) pointed out that the errors generated in the early phases of the development process were the most difficult to detect and also costly to correct; these errors include missing, misinterpreted or incorrect requirements. Apfelbaum and Doyle (1997) pointed out that incorrect requirements specifications are one of the root causes of 60–80% of all defects. Monkevich (1999) pointed out that 36% of defects are caused by incorrect requirements translation, and 5% of defects happen because of incomplete requirements. Mogyorodi (2001) pointed out that of the bugs rooted in requirements, roughly half are due to poorly written, ambiguous, unclear and incorrect requirements, and the remaining half are due to requirements that were completely omitted. Furthermore, many studies have shown that unrealistic or conflicting system requirements, as well as requirements which continually change, will cause project risk and negatively affect software product quality (Nidumolu, 1996; Rai and Al-Hindi, 2000; Hofmann and Lehner, 2001; Wallace et al., 2004; Han and Huang, 2007).


Table 2. Five dimensions of software development problems (dimension, definition and survey items with sources).

Documentation quality problems (DOC): Inadequacy or poorness of documentation which would cause software to be hard to understand, change and modify.
  DOC1: Documentation is obscure or untrustworthy (Lientz and Swanson, 1981; Etzkorn et al., 2001; Lethbridge et al., 2003)
  DOC2: Inadequate system documentation; incomplete or non-existent system documentation (Dekleva, 1992; Visconti and Cook, 1993)
  DOC3: Lack of traceability; difficult to trace back to design specifications and user requirements (Pigoski, 1996; Juristo et al., 2002; Hayes et al., 2007)
  DOC4: Changes are not adequately documented (Schneidewind, 1987; Pigoski, 1996)
  DOC5: Lack of integrity/consistency (Lientz and Swanson, 1981; Martin and McClure, 1983; Ghods and Nelson, 1998)

Programming quality problems (PGM): Improper manner of programming which causes source code to be hard to read, diagnose, and analyze.
  PGM1: Lack of adherence to programming standards (Lientz and Swanson, 1981; Kernighan and Plauger, 1982)
  PGM2: Inadequacy of source code comments (Woodfield et al., 1981; Pigoski, 1996; Kramer, 1999)
  PGM3: Lack of modularity to divide the program into functionally and operationally independent components (Woodfield et al., 1981; Martin and McClure, 1983; Ghods and Nelson, 1998)
  PGM4: Lack of refactoring/restructuring; high level of program complexity (Gill and Kemerer, 1991; Fowler et al., 1999; Tonella, 2001)
  PGM5: Improper usage of programming techniques so that degrading self-documentation of source code occurs (Martin and McClure, 1983; Gellerich and Plodereder, 2001)

System requirements problems (SYS): Incorrect, incomplete, unrealistic, unclear or uncertain system requirements which would cause software products to fail to meet the customer's needs.
  SYS1: Incorrect system requirements (Dethomas and Anthony, 1987; Apfelbaum and Doyle, 1997; Monkevich, 1999; Mogyorodi, 2001)
  SYS2: Unclear or incomplete system requirements (Monkevich, 1999; Wallace et al., 2004; Han and Huang, 2007)
  SYS3: Unrealistic or conflicting system requirements (Nidumolu, 1996; Hofmann and Lehner, 2001; Wallace et al., 2004)
  SYS4: Lack of consideration for software quality requirements (Martin and McClure, 1983; Schneidewind, 2002)
  SYS5: Continually changing system requirements (Nidumolu, 1996; Rai and Al-Hindi, 2000; Wallace et al., 2004; Han and Huang, 2007)

Personnel resources problems (PER): The personnel-related factors which would cause project risk and negatively affect project performance.
  PER1: Frequent turnover within the project team (Jiang and Klein, 1999; Wallace et al., 2004)
  PER2: Insufficient skills or experience (Jiang and Klein, 2000; Wallace et al., 2004)
  PER3: Lack of proper training (Wallace et al., 2004; Han and Huang, 2007)
  PER4: Lack of resources and time (Wilson and Hall, 1998)
  PER5: Lack of commitment to the project (McDonough, 2000; Wallace et al., 2004)

Process management problems (PM): Poor management in the software development process, which would cause project risk and negatively affect project performance.
  PM1: Lack of policy and management support (Jiang et al., 1998; Jiang and Klein, 1999; Wallace et al., 2004)
  PM2: Ineffective project planning and control (Sumner, 1999; Jones, 2004; Wallace et al., 2004)
  PM3: Inadequate estimation of cost and schedule (Jones, 2004; Wallace et al., 2004)
  PM4: Ineffective configuration management to control software changes (Leblang and McLean, 1985; Berczuk and Appleton, 2003; Jones, 2004)
  PM5: Ineffective quality control audits to ensure level of quality (Galin, 2003; Jones, 2004; Schulmeyer, 2007)

Based on the above literature survey results, five items were used to operationalize the "system requirements problems (SYS)" dimension, as shown in Table 2.

2.2.4. Personnel resources problems

Balci (2003) pointed out that project quality can be assessed by evaluating human resource management and personnel capability maturity. The Standish Group's CHAOS reports (2004) found that "lack of resources" ranked 3rd in the list of the top 10 project problem factors. Wilson and Hall (1998) pointed out that the significant impediments to software quality initiatives are a lack of resources and a lack of time. Furthermore, many studies have also shown that a large number of software project failures are caused by personnel factors such as frequent turnover within the project team (Wallace et al., 2004), insufficient skills or experience (Jiang and Klein, 2000), lack of proper training (Han and Huang, 2007), and lack of commitment to the project (McDonough, 2000). These personnel factors increase project risk and negatively affect software product quality. Based on the above literature survey results, five items were used to operationalize the "personnel resources problems (PER)" dimension, as shown in Table 2.

2.2.5. Process management problems

Jones (2004) pointed out that an analysis of approximately 250 large software projects between 1995 and 2004 shows that when

comparing large projects that successfully achieved their cost and schedule estimates against those that ran late, were over budget or were cancelled without completion, six common problems were observed: poor project planning, poor cost estimating, poor milestone tracking, poor change control, poor measurement and analysis, and poor quality control. Furthermore, previous studies have also shown that a lack of management support will affect project risk (Jiang and Klein, 1999) and cause software development failure (Jiang et al., 1998). Based on the above literature survey results, five items were used to operationalize the "process management problems (PM)" dimension, as shown in Table 2.

Therefore, based on the above descriptions of software maintainability and software development problem dimensions, the first research question of this study is identified below.

RQ1: What patterns in software development problem dimensions can be observed across low, medium and high software maintainability projects?

2.3. Software process improvement (SPI)

Paulish and Carleton (1994) pointed out that improving software development processes can enhance the quality of software systems and the overall performance of the software project and organization. The ISO 9001 Quality Management System (QMS) (2000) and the Capability Maturity Model Integration (CMMI)


(2002) are two well-known process-oriented methodologies that provide guidelines for SPI. Many studies have shown that SPI is able to effectively solve some of the software development problems and eventually result in quality products (Humphrey, 1992; McGarry et al., 1994; Haley, 1996; Diaz and Sligo, 1997; Kuilboer and Ashrafi, 2000; Ashrafi, 2003; Huang and Han, 2006). For example, after performing SPI activities for four years, the error rate of the on-board software for the NASA space shuttle, which was developed and supported by IBM's development group, decreased from 2.0 to 0.11 errors per 1000 lines of code (Humphrey, 1992). Furthermore, a significant amount of credible quantitative evidence on SPI performance results, based on CMMI models, has been published on the CMMI web site. However, there has been no empirical effort to examine the relationship between SPI and software development problem factors. Therefore, the second research question of this study is identified below.

RQ2: How does software process improvement (SPI) affect software development problem factors and the associated software maintainability?

2.4. Top 10 list of higher-severity problem factors

Several previous studies have identified a "top 10 list" in different areas of software engineering practice. For example, the Standish Group (1995) provided the top 10 list of project impairment factors, and Han and Huang (2007) provided the top 10 list of software risks. A "top 10 list" can provide project managers with a better idea of how to deal properly with project problems when faced with the constraints of limited project resources. Therefore, the third research question of this study is identified below.

RQ3: What are the top 10 higher-severity software development problem factors which affect software maintainability?

3. Research approach

3.1. Scale construction

Initially, this study developed a preliminary questionnaire consisting of 32 items relating to problem factors. Three senior project managers and three senior software engineers in the software industry in Taiwan, all of whom were familiar with information systems (IS) development and maintenance, were invited to verify the content validity and readability of the original version of the questionnaire. After deleting four items, a 28-item set was approved and 40 pilot forms were sent to participants working in an IT-intensive company in Taiwan for a pilot test. After this pilot test, Cronbach's alpha and item-to-total correlation coefficients were used to test the reliability and internal consistency of the constructs of the problem factors. Any item with an item-to-total correlation below 0.40 was deleted, as suggested by Park and Kim (2003). Finally, after deleting three more items, a 25-item set remained and was used to operationalize the five dimensions of software development problems, as described in the previous sections.

As for the five sub-characteristics of software maintainability, as shown in Table 1, a five-point Likert-type scale, ranging from 1 (strongly disagree) to 5 (strongly agree), was utilized to indicate the extent to which each survey item characterized the recently delivered software systems. As for the 25 problem factors of software development, as shown in Table 2, the same five-point Likert-type scale was utilized to indicate the extent to which each survey item characterized the respondent's most recently completed software project. Four options, "ISO 9001", "CMMI", "none" and "others", were used to indicate whether an SPI methodology or some other method (e.g., agile methods) was adopted to improve software processes and product quality in the respondent's organization or software project.

3.2. Data collection

This study is based on data collected via an e-mail survey. The questionnaire contained four sections. The first section explained the purpose of the study, encouraged project managers or software engineers to participate in this investigation, and emphasized our commitment to protect the confidentiality of the study participants. The second section asked the respondents to provide profile information on their most recently completed software project and the SPI methodology adopted in their project or organization. The third section listed the 25 items on the various problem factors. Finally, the fourth section listed the five measures used to evaluate the software maintainability of the most recently delivered software systems.

A total of 400 project managers and software engineers, all staff of company members of the Chinese Information Service Industry Association of Taiwan, were invited to participate in the e-mail questionnaire. One hundred and sixty surveys were returned within a span of four weeks. After checking the data received, 23 incomplete questionnaires were discarded. As a result, 137 effective questionnaires remained, yielding a response rate of 34.25%. The profile of these 137 surveyed projects is shown in Table 3.

It is noted that this study only sent a follow-up e-mail to those who did not respond to the survey to re-invite them to participate in this investigation, and did not make follow-up phone calls to a random selection of the non-responsive group. However, a t-test was used in this study to check for potential non-response bias by comparing the demographics of the early respondents with those of the later ones (Armstrong and Overton, 1997). The results yielded no significant difference (p > 0.05), which further strengthens the validity of the collected data.

This study also employed Cronbach's alpha and item-to-total correlation coefficients as indices to test the reliability and internal consistency of the constructs of the software development problem dimensions and software maintainability.
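The non-response bias check described above amounts to a two-sample t-test on the demographics of early versus late respondents (Armstrong and Overton, 1997). The sketch below is a minimal illustration only; it assumes the survey data sit in a pandas DataFrame with a hypothetical return_week column and demographic columns, none of which are named in the original instrument.

    # Minimal sketch of an early-vs-late respondent comparison; column names are hypothetical.
    import pandas as pd
    from scipy import stats

    def nonresponse_bias_check(survey: pd.DataFrame, demographics: list) -> pd.DataFrame:
        """Welch t-test of each demographic variable between early and late respondents."""
        early = survey[survey["return_week"] <= 2]   # returned in the first two weeks
        late = survey[survey["return_week"] > 2]     # returned in weeks three and four
        rows = []
        for col in demographics:
            t, p = stats.ttest_ind(early[col], late[col], equal_var=False)
            rows.append({"variable": col, "t": round(t, 3), "p": round(p, 3)})
        return pd.DataFrame(rows)

    # p > 0.05 on every demographic would suggest no detectable non-response bias, as reported above.
    # report = nonresponse_bias_check(survey, ["project_size", "duration_months", "avg_experience"])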

Table 3. Profile of the 137 surveyed projects.

Project characteristics                   Frequency   Percentage (%)
Project size (average no. of members)
  1-2                                     4           2.9
  3-5                                     50          36.5
  6-10                                    50          36.5
  11-20                                   28          20.4
  >20                                     5           3.6
Project duration
  <6 months                               31          22.6
  6-12 months                             83          60.6
  13-24 months                            21          15.3
  >24 months                              2           1.5
Average experience of project members
  <1 year                                 6           4.4
  1-3 years                               57          41.6
  4-6 years                               53          38.7
  7-9 years                               20          14.6
  >9 years                                1           0.7
Software process improvement (a)
  None                                    47          34.3
  ISO 9001                                71          51.8
  CMMI ML2/ML3                            58          42.3
  Others                                  0           0

(a) Some projects performed multiple SPI methodologies.

Table 4. Test of reliability and internal consistency of the constructs for each dimension of software development problems and software maintainability.

Construct (items) (a)                         Eigenvalue (% explained)  Cronbach's alpha  Factor loadings (items 1-5)          Item-to-total correlations (items 1-5)
Documentation quality problems (DOC1-DOC5)    3.599 (71.97%)            0.898             0.827, 0.815, 0.839, 0.873, 0.886    0.730, 0.716, 0.737, 0.794, 0.811
Programming quality problems (PGM1-PGM5)      3.170 (63.41%)            0.846             0.673, 0.689, 0.817, 0.882, 0.893    0.535, 0.552, 0.672, 0.772, 0.776
System requirements problems (SYS1-SYS5)      3.426 (68.53%)            0.877             0.759, 0.861, 0.813, 0.763, 0.930    0.632, 0.759, 0.690, 0.644, 0.866
Personnel resources problems (PER1-PER5)      3.008 (60.16%)            0.833             0.763, 0.785, 0.751, 0.755, 0.822    0.614, 0.646, 0.612, 0.615, 0.684
Process management problems (PM1-PM5)         3.456 (69.11%)            0.883             0.814, 0.947, 0.769, 0.816, 0.799    0.703, 0.917, 0.646, 0.694, 0.674
Software maintainability (MA1-MA5)            3.437 (68.74%)            0.885             0.859, 0.827, 0.758, 0.821, 0.875    0.764, 0.721, 0.638, 0.711, 0.786

(a) Refer to Tables 1 and 2 for the meaning of the abbreviations in this table.

The results are shown in Table 4. Cronbach's alpha coefficients all exceeded the 0.70 threshold suggested by Nunnally (1978), and the item-to-total correlation coefficients were all greater than 0.40, as suggested by Park and Kim (2003). This indicates that the internal consistency of the constructs of this study is acceptable.
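For readers who want to reproduce the two reliability indices reported in Table 4, the sketch below shows one way they can be computed. It assumes each construct's items are columns of a pandas DataFrame (one row per project) and uses the corrected item-to-total correlation (each item against the sum of the remaining items), since the paper does not state which variant was applied.

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    def item_total_correlations(items: pd.DataFrame) -> pd.Series:
        """Corrected item-to-total correlation: each item vs. the sum of the other items."""
        return pd.Series({col: items[col].corr(items.drop(columns=col).sum(axis=1))
                          for col in items.columns})

    # Items whose item-to-total correlation falls below 0.40 would be dropped,
    # mirroring the criterion of Park and Kim (2003).
    # doc_items = responses[["DOC1", "DOC2", "DOC3", "DOC4", "DOC5"]]
    # print(cronbach_alpha(doc_items), item_total_correlations(doc_items))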

4. Data analysis and results

4.1. Patterns in problem dimensions across different levels of software maintainability

The first research question of this study concerns the patterns in software development problem dimensions across low, medium and high software maintainability projects. In order to classify different levels of software maintainability, a K-means cluster analysis (Hartigan, 1975; Hartigan and Wong, 1979; SPSS, 2004) based on the "software maintainability" variable was performed. This resulted in three clusters representing low (n = 62), medium (n = 63) and high (n = 12) levels of software maintainability. Table 5 provides descriptive statistics of software maintainability and the associated problem dimensions across each cluster. The box plots, as shown in Fig. 1, provide a clear graphical view of the data distribution of each variable (MA, DOC, PGM, SYS, PER, PM) across each cluster.

In the low maintainability cluster, the values of software maintainability range from 2.2 to 3.2. In the medium maintainability cluster, the values range from 3.4 to 4.0. In the high maintainability cluster, the values range from 4.2 to 5.0. It is noted that three outlier samples (Case IDs 9, 36 and 83) exist in the box plot of the low maintainability cluster, as shown in Fig. 1a. However, even when the statistics are re-computed on the remaining 59 samples after deleting those three outliers from the low maintainability cluster, the three ranges of values across the maintainability clusters still do not overlap. This indicates that these three outlier samples do not distort the results of the subsequent analyses (Aldenderfer and Blashfield, 1984).

As shown in Fig. 1b, the ranges of values of the DOC variable across the clusters of software maintainability overlap. This indicates that even a high maintainability project may possibly have a higher level of severity on the DOC dimension than a low maintainability project. The ranges of values of the other four problem dimensions (PGM, SYS, PER and PM) across the clusters, as shown in Fig. 1c-f, also overlap, indicating that even a high maintainability project may possibly have a higher level of severity on a specific problem dimension than a low maintainability project. Several outlier samples exist in the box plots of the five problem dimensions across the clusters, as shown in Fig. 1b-f. In order to avoid distortion of the analysis results, those outlier samples were eliminated in the subsequent analyses, as shown in Table 6.

In order to explore the patterns in software development problem dimensions across low, medium and high maintainability projects, the differences between the mean scores of each problem dimension across the different levels of software maintainability projects were computed. The results are shown in Table 6. For the problem dimensions, a higher mean score indicates a higher level of severity; for software maintainability, a higher mean score indicates a higher level of maintainability. The baseline threshold, computed by averaging the five problem dimensions, is a baseline value for relative comparison: for each cluster, if a mean score is greater than the baseline threshold, that problem dimension has a relatively high level of severity in that cluster. The "Dist 1" column represents the differences in the mean scores of each problem dimension between the low and medium maintainability projects. The "Dist 2" column represents the differences in the mean scores of each problem dimension between the medium and the high maintainability projects. The line chart, as shown in Fig. 2, provides a clear graphical view of the mean scores of each problem dimension across the different levels of software maintainability.
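The clustering step can be illustrated with the short sketch below. It is not the SPSS procedure the authors used, but an equivalent K-means run with scikit-learn on the per-project maintainability score (the mean of MA1-MA5); the relabelling at the end only orders the clusters as low/medium/high by their centres.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_maintainability(ma_scores: np.ndarray, seed: int = 0):
        """Partition projects into three clusters (low/medium/high) on the MA score."""
        km = KMeans(n_clusters=3, n_init=10, random_state=seed)
        labels = km.fit_predict(ma_scores.reshape(-1, 1))
        centres = km.cluster_centers_.ravel()
        order = np.argsort(centres)                        # ascending mean MA
        remap = {int(old): new for new, old in enumerate(order)}
        return np.array([remap[int(l)] for l in labels]), np.sort(centres)

    # labels, centres = cluster_maintainability(ma_scores)  # 0 = low, 1 = medium, 2 = high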


Table 5. Descriptive statistics of software maintainability and the associated problem dimensions across each cluster.

Variable                               Cluster     N     Mean    SD      95% CI for mean   Min     Max
Software maintainability (MA)          Cluster 1   62    2.971   0.256   2.906–3.036       2.200   3.200
                                       Cluster 2   63    3.797   0.226   3.740–3.854       3.400   4.000
                                       Cluster 3   12    4.533   0.299   4.343–4.724       4.200   5.000
                                       Total       137   3.488   0.568   3.392–3.584       2.200   5.000
Documentation quality problems (DOC)   Cluster 1   62    4.045   0.566   3.901–4.189       2.600   5.000
                                       Cluster 2   63    3.625   0.634   3.466–3.785       2.200   4.800
                                       Cluster 3   12    3.000   0.585   2.629–3.372       2.400   3.800
                                       Total       137   3.761   0.672   3.647–3.874       2.200   5.000
Programming quality problems (PGM)     Cluster 1   62    3.842   0.552   3.702–3.982       2.600   5.000
                                       Cluster 2   63    3.568   0.556   3.428–3.708       2.200   4.600
                                       Cluster 3   12    2.767   0.626   2.369–3.164       1.800   3.800
                                       Total       137   3.622   0.630   3.516–3.728       1.800   5.000
System requirements problems (SYS)     Cluster 1   62    3.797   0.493   3.672–3.922       2.600   5.000
                                       Cluster 2   63    3.311   0.658   3.145–3.477       1.600   4.600
                                       Cluster 3   12    2.817   0.581   2.447–3.186       2.000   3.600
                                       Total       137   3.488   0.657   3.377–3.599       1.600   5.000
Personnel resources problems (PER)     Cluster 1   62    3.813   0.478   3.692–3.934       2.800   4.800
                                       Cluster 2   63    3.257   0.710   3.078–3.436       1.600   5.000
                                       Cluster 3   12    2.267   0.735   1.800–2.734       1.200   3.600
                                       Total       137   3.422   0.759   3.294–3.550       1.200   5.000
Process management problems (PM)       Cluster 1   62    3.597   0.640   3.434–3.759       2.200   4.600
                                       Cluster 2   63    3.140   0.809   2.936–3.343       1.600   5.000
                                       Cluster 3   12    2.317   0.646   1.906–2.727       1.400   3.600
                                       Total       137   3.275   0.808   3.138–3.411       1.400   5.000

Five important findings are presented as follows:

(1) The mean scores of each problem dimension obviously decrease from the low to the high software maintainability clusters, as shown in Fig. 2. This indicates that the severity of software development problems needs to be decreased if the maintainability of software projects is to be improved.
(2) The DOC and PGM problem dimensions exceed the baseline threshold in all three levels of software maintainability. This suggests that successfully dealing with these two problem dimensions is a key requirement to achieve the desired software maintainability.
(3) The PER problem dimension has the maximum value (0.584) of "Dist 1". This indicates that the PER problem dimension requires the most attention in relation to achieving improvement as we move from the low to the medium maintainability cluster. In contrast, the PGM problem dimension has the minimum value (0.252) of "Dist 1", indicating that it requires the least attention in moving from the low to the medium maintainability cluster.
(4) The PER problem dimension has the maximum value (0.962) of "Dist 2". This indicates that the PER problem dimension requires the most attention in relation to achieving improvement as we move from the medium to the high maintainability cluster. In contrast, the SYS problem dimension has the minimum value (0.494) of "Dist 2", indicating that it requires the least attention in moving from the medium to the high maintainability cluster.
(5) For all five problem dimensions, the values of "Dist 2" are greater than the values of "Dist 1". This indicates that as higher levels of software maintainability are required, a corresponding increase in the cost and effort of improvement is also required.

This study also explored the relationship between the problem dimensions and software maintainability. A Pearson correlation analysis was performed (SPSS, 2004) and the results are shown in Table 7. The results suggest that there is a negative, medium-level linear correlation (r = -0.41 to -0.62, p < 0.01) between the project problem dimensions and software maintainability.
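A Table 7-style correlation matrix can be computed directly from the per-project dimension means. The sketch below assumes a DataFrame with one column per dimension mean (DOC, PGM, SYS, PER, PM) plus MA; this layout is simply a natural aggregation of the survey scores, not a structure prescribed by the paper.

    import pandas as pd
    from scipy import stats

    def dimension_correlations(df: pd.DataFrame) -> pd.DataFrame:
        """Pearson correlations between the problem-dimension means and maintainability."""
        cols = ["DOC", "PGM", "SYS", "PER", "PM", "MA"]
        return df[cols].corr(method="pearson").round(2)

    # Per-pair significance, e.g. PER vs. MA (reported above as r = -0.62, p < 0.01):
    # r, p = stats.pearsonr(df["PER"], df["MA"])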

Furthermore, in order to explore the influence of all problem factors on software maintainability, the 25 problem factors were combined into one variable, namely "software development problems". As shown in Table 8, the Cronbach's alpha coefficient exceeded the threshold (0.70) suggested by Nunnally (1978), and the item-to-total correlation coefficients were all greater than 0.40, as suggested by Park and Kim (2003). This indicates that the internal consistency of the construct of "software development problems" is acceptable. A regression analysis was then conducted (SPSS, 2004) to examine the relationship between software development problems and software maintainability. As shown in Table 9, the R square value (0.397) indicates that approximately 39.7% of the variance in software maintainability is explained by software development problems. The p-value (0.000) indicates that there is a significant relationship between software development problems and software maintainability. The standardized coefficient (-0.630) indicates that software development problems have a negative impact on software maintainability. This provides empirical evidence that software development problems can negatively affect software maintainability, and it indicates that decreasing the level of severity of software development problems in projects can improve the maintainability of the delivered software systems.

4.2. The relationship between project demographics and software maintainability

This study also explored the possible confounding effect of project demographics (i.e., project size, project duration, and average experience of project members) on software maintainability. Table 10 presents the frequency distribution of low, medium, and high maintainability projects on each project demographic item. Several important patterns in the 137 surveyed projects were found: (1) the high maintainability projects are not all the smallest ones; (2) the high maintainability projects are not all long-duration projects; and (3) the high maintainability projects are not all ones where project members have vast experience.
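The simple regression summarized in Table 9 can be sketched as below. With a single standardized predictor, the standardized coefficient equals the Pearson correlation between the aggregated problem score and maintainability, and R-squared is its square; the variable names here are illustrative.

    import numpy as np
    from scipy import stats

    def standardized_regression(problem_score: np.ndarray, maintainability: np.ndarray) -> dict:
        """Regress maintainability on the aggregated problem score using z-scored variables."""
        zx = (problem_score - problem_score.mean()) / problem_score.std(ddof=1)
        zy = (maintainability - maintainability.mean()) / maintainability.std(ddof=1)
        fit = stats.linregress(zx, zy)
        return {"std_coefficient": fit.slope,      # about -0.63 in the paper
                "r_squared": fit.rvalue ** 2,      # about 0.40
                "p_value": fit.pvalue}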


[Fig. 1. Box plots of software maintainability and the associated problem dimensions across each cluster: (a) software maintainability (MA); (b) documentation quality problems (DOC); (c) programming quality problems (PGM); (d) system requirements problems (SYS); (e) personnel resources problems (PER); (f) process management problems (PM). Values are plotted on the five-point scale for each cluster (1 = low, 2 = medium, 3 = high maintainability).]

All project demographics, as shown in Table 10, are ordinal scale variables, so one-way ANOVA analyses were conducted (Girden, 1992; SPSS, 2004) in this study to examine the relationship between project demographics and software maintainability.


Table 6. Mean scores of the five problem dimensions and software maintainability across each cluster.

Problem dimensions and                    Cluster 1: low           Cluster 2: medium        Cluster 3: high
software maintainability                  Mean 1      N            Mean 2      N            Mean 3     N       Dist 1 (b)   Dist 2 (c)
Documentation quality problems (DOC)      4.045       62           3.671 (a)   59           3.000      12      0.374        0.671
Programming quality problems (PGM)        3.842       62           3.590 (a)   62           2.767      12      0.252        0.823
System requirements problems (SYS)        3.780 (a)   59           3.311       63           2.817      12      0.469        0.494
Personnel resources problems (PER)        3.813 (a)   60           3.229 (a)   62           2.267      12      0.584        0.962
Process management problems (PM)          3.597       62           3.140       63           2.317      12      0.457        0.823
Baseline threshold                        3.815       -            3.388       -            2.634      -       -            -
Software maintainability (MA)             2.971       62           3.797       63           4.533      12      -            -

(a) The mean score is obtained after eliminating the outlier samples.
(b) Dist 1 = Mean 1 - Mean 2.
(c) Dist 2 = Mean 2 - Mean 3.

[Fig. 2. Profile plot of clusters: mean scores (y-axis) of each problem dimension and software maintainability (x-axis: DOC, PGM, SYS, PER, PM, Maintainability) for the low, medium and high maintainability clusters.]

Table 8. Test of reliability and internal consistency of the construct for the variable of software development problems.

Variable and items (a)                     Item-to-total correlation            Cronbach's alpha
Software development problems                                                   0.939
  DOC1, DOC2, DOC3, DOC4, DOC5             0.581, 0.613, 0.521, 0.671, 0.671
  PGM1, PGM2, PGM3, PGM4, PGM5             0.498, 0.477, 0.597, 0.551, 0.565
  SYS1, SYS2, SYS3, SYS4, SYS5             0.670, 0.601, 0.526, 0.583, 0.633
  PER1, PER2, PER3, PER4, PER5             0.572, 0.695, 0.651, 0.515, 0.661
  PM1, PM2, PM3, PM4, PM5                  0.689, 0.716, 0.652, 0.646, 0.515

(a) Refer to Table 2 for the meaning of the abbreviations in this table.

The 137 surveyed projects were classified into two or three groups (Models 1-3), as shown in Tables 11-13. Each group contains more than 20 projects to ensure that the analysis results are robust (Girden, 1992). The results suggest that there is no significant relationship (P > 0.05) between project demographics and software maintainability.

4.3. Impact of SPI on problem dimensions

The second research question of this study concerns how SPI affects software development problem factors and the associated software maintainability. In this study, the 137 surveyed projects were classified into two groups: (1) organizations certified with ISO 9001 and/or CMMI; and (2) organizations without ISO 9001 and CMMI certification. This study has no intention to compare or distinguish between ISO 9001 and CMMI, so if an organization has either one of these two certifications, its projects are regarded as being part of the SPI group. In order to explore the relationship between SPI and the problem dimensions, a univariate F-test was performed (SPSS, 2004) in this study. The results are shown in Table 14.
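Both the one-way ANOVA checks on the project demographics (Tables 11-13) and the univariate F-test comparing the SPI and non-SPI groups (Table 14) reduce to the same F statistic for a grouped comparison. The sketch below shows the general form; the grouping column names are hypothetical, not from the original dataset.

    import pandas as pd
    from scipy import stats

    def oneway_f_test(projects: pd.DataFrame, group_col: str, value_col: str = "MA"):
        """One-way ANOVA of value_col across the groups defined by group_col."""
        groups = [g[value_col].to_numpy() for _, g in projects.groupby(group_col)]
        return stats.f_oneway(*groups)   # returns (F-value, p-value)

    # Examples (hypothetical columns):
    # oneway_f_test(projects, "size_group")                 # Table 11-style check
    # oneway_f_test(projects, "spi_group", value_col="PM")  # SPI vs. non-SPI, cf. Table 14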

The results reveal that SPI projects have a significantly lower level of severity on the DOC dimension (P = 0.040) and the PM dimension (P = 0.000) than non-SPI projects. Possible explanations of this finding are: (1) SPI methodologies attach importance to software documentation writing, which can contribute to problem-solving (Humphrey, 1992; Kuilboer and Ashrafi, 2000); and (2) SPI methodologies place an emphasis on improving the capability of process management, which can contribute to achieving project objectives (Hoyle, 2005; Chrissis et al., 2006). Therefore, these results suggest that SPI can reduce the level of severity on the DOC and PM dimensions.

Furthermore, the results, as shown in Table 14, also reveal that SPI projects have significantly higher levels of software maintainability (P = 0.045) than non-SPI projects. One possible explanation of this finding is that the well-defined and clearly documented processes provided by SPI methodologies can eventually result in quality products (McGarry et al., 1994; Haley, 1996; Diaz and Sligo, 1997; Ashrafi, 2003). However, the SPI group mean value of software maintainability (3.56) does not yet reach the high maintainability level on the five-point scale used to aggregate maintainability. Therefore, this suggests that SPI is able to increase levels of software maintainability, but is only likely to enhance it to a medium level.

Table 7. Correlation coefficients between problem dimensions and software maintainability (n = 137).

Variable                                    Mean (SD)      (1)       (2)       (3)       (4)       (5)       (6)
(1) Documentation quality problems (DOC)    3.76 (0.67)    1.00
(2) Programming quality problems (PGM)      3.62 (0.63)    0.54**    1.00
(3) System requirements problems (SYS)      3.49 (0.66)    0.44**    0.46**    1.00
(4) Personnel resources problems (PER)      3.42 (0.76)    0.51**    0.51**    0.65**    1.00
(5) Process management problems (PM)        3.27 (0.81)    0.54**    0.44**    0.53**    0.65**    1.00
(6) Software maintainability (MA)           3.49 (0.57)    -0.45**   -0.41**   -0.52**   -0.62**   -0.47**   1.00

** Correlation is significant at the 0.01 level (2-tailed).

Table 9. Regression analysis: software development problems on software maintainability.

Dependent variable: software maintainability

Independent variable              Standardized coefficient   t-value    p-value
Software development problems     -0.630***                  -9.434     0.000

Model: R2 = 0.397; Adjusted R2 = 0.393; F-value = 88.991; P-value = 0.000.
*** Indicates significance at the 0.001 level.

Table 10. Frequency distribution of low, medium, and high maintainability projects on each project demographic item.

Project demographic items                Low    Medium   High   Total
Project size
  1-2 persons                            2      1        1      4
  3-5 persons                            26     21       3      50
  6-10 persons                           19     26       5      50
  11-20 persons                          13     12       3      28
  >20 persons                            2      3        0      5
  Total                                  62     63       12     137
Project duration
  <6 months                              10     18       3      31
  6-12 months                            42     37       4      83
  13-24 months                           9      7        5      21
  >24 months                             1      1        0      2
  Total                                  62     63       12     137
Average experience of project members
  <1 year                                0      6        0      6
  1-3 years                              24     25       8      57
  4-6 years                              28     22       3      53
  7-9 years                              9      10       1      20
  >9 years                               1      0        0      1
  Total                                  62     63       12     137

(Low/Medium/High = frequency of low, medium, and high maintainability projects.)

Table 12. The results of one-way ANOVA analysis between project duration and software maintainability.

Model     Independent variable: project duration (N)                                       Dependent variable         F-value   P-value (a)   Notes
Model 1   Group 1: <6 months (31); Group 2: 6-12 months (83); Group 3: >12 months (23)     Software maintainability   0.665     0.516         No significant relationship
Model 2   Group 1: <6 months (31); Group 2: ≥6 months (106)                                Software maintainability   0.151     0.698         No significant relationship
Model 3   Group 1: ≤12 months (114); Group 2: >12 months (23)                              Software maintainability   0.922     0.339         No significant relationship

(a) P > 0.05 indicates no significant relationship.

Table 11. The results of one-way ANOVA analysis between project size and software maintainability.

Model     Independent variable: project size (N)                                           Dependent variable         F-value   P-value (a)   Notes
Model 1   Group 1: 1-5 persons (54); Group 2: 6-10 persons (50); Group 3: >10 persons (33) Software maintainability   2.046     0.133         No significant relationship
Model 2   Group 1: 1-5 persons (54); Group 2: >5 persons (83)                              Software maintainability   2.942     0.089         No significant relationship
Model 3   Group 1: 1-10 persons (104); Group 2: >10 persons (33)                           Software maintainability   0.030     0.864         No significant relationship

(a) P > 0.05 indicates no significant relationship.

Table 13. The results of one-way ANOVA analysis between average experience of project members and software maintainability.

Model     Independent variable: average experience of project members (N)                  Dependent variable         F-value   P-value (a)   Notes
Model 1   Group 1: <4 years (63); Group 2: 4-6 years (53); Group 3: >6 years (21)          Software maintainability   1.032     0.359         No significant relationship
Model 2   Group 1: <4 years (63); Group 2: ≥4 years (74)                                   Software maintainability   1.680     0.197         No significant relationship
Model 3   Group 1: ≤6 years (116); Group 2: >6 years (21)                                  Software maintainability   0.004     0.947         No significant relationship

(a) P > 0.05 indicates no significant relationship.


Table 14. Univariate F-test results with SPI toward the five problem dimensions and software maintainability.

Problem dimensions and software maintainability   Group 1: SPI = "Yes" (n = 90)   Group 2: SPI = "No" (n = 47)   F-value   P-value    Group mean difference
Documentation quality problems (DOC)              3.68                            3.92                           4.31      0.040*     Group 1 < Group 2
Programming quality problems (PGM)                3.59                            3.69                           0.82      0.367      -
System requirements problems (SYS)                3.44                            3.57                           1.13      0.289      -
Personnel resources problems (PER)                3.34                            3.58                           3.28      0.072      -
Process management problems (PM)                  3.10                            3.60                           12.98     0.000**    Group 1 < Group 2
Software maintainability (MA)                     3.56                            3.35                           4.10      0.045*     Group 1 > Group 2

* P-value < 0.05. ** P-value < 0.01.

4.4. A list of the top 10 higher-severity problem factors

The third research question of this study concerns the identification of the top 10 higher-severity software development problem factors which affect software maintainability. In order to determine the top 10 higher-severity problem factors, the mean values of each problem factor were computed and sorted in descending order. Table 15 presents a list of the top 10 higher-severity problem factors and their associated mean values. Four problem factors of the DOC dimension are included in the top 10 list: DOC1, DOC3, DOC4 and DOC5. Three problem factors of the PGM dimension are included: PGM1, PGM2 and PGM5. Two problem factors of the SYS dimension are included: SYS4 and SYS5. One problem factor of the PER dimension is included: PER1. It is noted that no problem factors of the PM dimension are included in the top 10 list.

Table 15. The top 10 higher-severity problem factors.

Rank   Software development problem factors                       Abbreviation   Mean value (n = 137)
1      Inadequacy of source code comments                         PGM2           4.05
2      Documentation is obscure or untrustworthy                  DOC1           4.04
3      Changes are not adequately documented                      DOC4           3.87
4      Lack of traceability                                       DOC3           3.85
5      Lack of adherence to programming standards                 PGM1           3.69
6      Lack of integrity/consistency                              DOC5           3.66
7      Continually changing system requirements                   SYS5           3.58
8      Frequent turnover within the project team                  PER1           3.55
9      Improper usage of programming techniques                   PGM5           3.51
10     Lack of consideration for software quality requirements    SYS4           3.51
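The ranking behind Table 15 is a straightforward sort of the 25 item means. The sketch below assumes the 137 responses are held in a DataFrame with one column per problem-factor item (the codes of Table 2); it is an illustration of the computation, not the original analysis script.

    import pandas as pd

    def top_problem_factors(responses: pd.DataFrame, factor_cols: list, n: int = 10) -> pd.Series:
        """Mean severity of each problem factor, sorted in descending order."""
        return responses[factor_cols].mean().round(2).sort_values(ascending=False).head(n)

    # factor_cols would be the 25 item codes (DOC1-DOC5, PGM1-PGM5, SYS1-SYS5, PER1-PER5, PM1-PM5);
    # applied to the survey data, the first ten rows reproduce the ordering of Table 15.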

5. Discussion and implications

The results of this study show that DOC and PGM are the principal problem dimensions affecting software maintainability. These two problem dimensions can be regarded as "software comprehension problems". In the software maintenance phase, good documentation and programming quality contribute to software comprehension (Woodfield et al., 1981; Arunachalam and Sasso, 1996) and make software easy to maintain. Therefore, project managers should develop an effective management strategy to deal properly with these two problem dimensions in the software development phase in order to achieve higher levels of software maintainability.

The results of this study show that SPI can positively affect software maintainability. Project managers should consider properly utilizing SPI to improve the maintainability of the delivered software systems. The results also show that SPI only helps reduce the level of severity of two specific problem dimensions (the DOC and PM dimensions), whereas no statistically significant difference is observed for the other three problem dimensions (PGM, SYS and PER). Therefore, project managers should carefully control the levels of severity of the PGM, SYS and PER problem dimensions through proper management and the associated software engineering activities to ensure project performance.

This study also presents a list of the top 10 higher-severity problem factors, as shown in Table 15. The "top 10 list" reveals the critical problem factors in software projects and can provide project managers with a better idea of how to deal effectively with these problem factors when facing the constraints of limited resources within the organization or project. For example, although the DOC and PGM problems scored worse than the baseline threshold for all levels of maintainability, not all problem factors in the DOC and PGM dimensions are included in the "top 10 list". Organizations or project managers should consider putting these problem factors into a risk management system or issue tracking system during the software development phase to enhance the maintainability of the delivered software systems.

The results derived from this empirical study do not overturn the conventional wisdom existing in software engineering practice, but rather expound it in more detail. For example, the most well-known conventional wisdom holds that poor quality of documentation and of the original programming will cause maintenance problems (Lientz and Swanson, 1981), and that software project problems will negatively affect software product quality (Nidumolu, 1996). The results of this study reveal that successfully dealing with the documentation quality and programming quality problem dimensions is a key requirement to achieve the desired software maintainability, and that software development problem factors can negatively affect software maintainability. Furthermore, the results also reveal several important findings across low, medium and high maintainability projects, as depicted in Section 4.1. Conventional wisdom also holds that SPI can effectively solve project problems and eventually result in high quality products (Humphrey, 1992). The results of this study reveal that SPI can help reduce the level of severity of the documentation quality and process management problems, and is only likely to enhance software maintainability to a medium level, as depicted in Section 4.3.

6. Conclusions

The problem factors of software projects in the software development phase are different from problem factors in the software


maintenance phase. This study has provided empirical evidence that problem factors in the software development phase can negatively affect software maintainability. Therefore, the requirement of software maintainability should be taken into account, and the associated problem factors should also be properly dealt with when performing analysis, design and implementation activities during the software development phase in order to achieve higher levels of software maintainability. This study also provided empirical evidence that SPI can help in reducing the level of severity of the documentation quality and process management problems, and can help enhance software maintainability to a medium level. Such knowledge can help project managers develop an effective management strategy in order to achieve higher levels of project performance. It is important to understand the potential limitations of this study. It is noted that this study did not investigate the non-responsive group by making a follow-up phone call to a random selection of them. Meanwhile, the problem known as ‘‘regression toward the mean” (Raiffa and Schlaifer, 2000) also exists in this study where groups are derived empirically from a dataset based on subjective assessments. Finally, there are no projects in the 137 surveyed projects of this study which were developed using agile methods. Agile software development projects have a completely different approach to documentation and also do not conform to documentation-oriented standards (Holcombe, 2008). Meanwhile, this study only investigates the effects of SPI on software maintainability, which has been identified in earlier research as having a possible influence on project performance. There are still other potential characteristics that may also possibly affect the development problem factors and the associated maintainability of the delivered software. The above limitations of this study are also deserving of future research. References Aldenderfer, M.S., Blashfield, R.K., 1984. Cluster Analysis – Quantitative Applications in the Social Sciences. Sage Publications. Apfelbaum, L., Doyle, J., 1997. Model based testing. In: 10th International Software Quality Week Conference, San Francisco. Armstrong, J., Overton, T., 1997. Estimating non-response bias in mail survey. Journal of Marketing Research 15, 396–402. Arthur, J.D., Stevens, K.T., 1989. Assessing the adequacy of documentation through document quality indicators. In: Proceedings of the Conference on Software Maintenance. IEEE CS Press, pp. 40–49. Arunachalam, V., Sasso, W., 1996. Cognitive processes in program comprehension: an empirical analysis in the Context of software reengineering. Journal of Systems and Software 34 (3), 177–189. Ashrafi, N., 2003. The impact of software process improvement on quality: in theory and practice. Information and Management 40 (7), 677–690. Balci, O., 2003. Verification, validation and certification of modeling and simulation applications. In: Proceedings of the 2003 Simulation Conference, vol. 1, pp. 150– 158. Bendifallah, S., Scacchi, W., 1987. Understanding software maintenance work. IEEE Transactions on Software Engineering 13 (3), 311–323. Berczuk, S.P., Appleton, B., 2003. Software Configuration Management Patterns: Effective Teamwork, Practical Integration. Addison-Wesley Professional. Chrissis, M.B., Konrad, M., Shrum, S., 2006 (CMMI: Guidelines for Process Integration and Product Improvement), second ed. Addison-Wesley Professional. CMMI Product Team, 2002. 
CMMI Product Team, 2002. Capability Maturity Model Integration, Version 1.1, CMMI–SW/SE/IPPD/SS, Continuous Representation. Software Engineering Institute Technical Report CMU/SEI-2002-TR-012.
Dekleva, S., 1992. Delphi study of software maintenance problems. In: Proceedings of the 1992 Conference on Software Maintenance. IEEE Computer Society, pp. 10–17.
Dethomas, A., 1987. Technology requirements of integrated, critical digital flight systems. In: AIAA Guidance, Navigation and Control Conference, Monterey, CA, Technical Papers, vol. 2. American Institute of Aeronautics and Astronautics, New York, pp. 1579–1583.
Diaz, M., Sligo, J., 1997. How software process improvement helped Motorola. IEEE Software 14 (5), 75–81.
Etzkorn, L.H., Hughes Jr., W.E., Davis, C.G., 2001. Automated reusability quality analysis of OO legacy software. Information and Software Technology 43 (5), 295–308.


Fowler, M., Beck, K., Brant, J., Opdyke, W., Roberts, D., 1999. Refactoring: Improving the Design of Existing Code. Addison-Wesley Professional.
Galin, D., 2003. Software Quality Assurance: From Theory to Implementation. Addison-Wesley.
Gellerich, W., Plodereder, E., 2001. Parameter-induced Aliasing in Ada. Springer, Berlin/Heidelberg.
Ghods, M., Nelson, K.M., 1998. Contributors to quality during software maintenance. Decision Support Systems 23, 361–369.
Gill, G.K., Kemerer, C.F., 1991. Cyclomatic complexity density and software maintenance productivity. IEEE Transactions on Software Engineering 17 (12), 1284–1288.
Girden, E.R., 1992. ANOVA: Repeated Measures – Quantitative Applications in the Social Sciences. Sage Publications.
Haley, T.J., 1996. Software process improvement at Raytheon. IEEE Software 13 (6), 33–41.
Han, W.M., Huang, S.J., 2007. An empirical analysis of risk components and performance on software projects. The Journal of Systems and Software 80 (1), 42–50.
Hartigan, J.A., 1975. Clustering Algorithms. John Wiley & Sons, New York, NY, USA.
Hartigan, J.A., Wong, M.A., 1979. A K-means clustering algorithm. Applied Statistics 28 (1), 100–108.
Hayes, J.H., Dekhtyar, A., Sundaram, S.K., Holbrook, E.A., Vadlamudi, S., April, A., 2007. REquirements TRacing On target (RETRO): improving software maintenance through traceability recovery. Innovations in Systems and Software Engineering 3 (3), 193–202.
Hofmann, H.F., Lehner, F., 2001. Requirements engineering as a success factor in software projects. IEEE Software 18 (4), 58–66.
Holcombe, M., 2008. Running an Agile Software Development Project. Wiley.
Hoyle, D., 2005. ISO 9000 Quality Systems Handbook, fifth ed. Butterworth-Heinemann.
Huang, S.J., Han, W.M., 2006. Selection priority of process areas based on CMMI continuous representation. Information and Management 43, 297–307.
Humphrey, W.S., 1992. Introduction to Software Process Improvement. Software Engineering Institute Technical Report CMU/SEI-92-TR-7.
ISO 9001, 2000. Quality Management Systems – Requirements. International Standard Organization, ISO 9001.
ISO/IEC 9126-1, 2001. Software Engineering – Product Quality – Part 1: Quality Model. International Standard Organization.
Jiang, J.J., Klein, G., 1999. Risks to different aspects of system success. Information and Management 36 (5), 263–272.
Jiang, J.J., Klein, G., 2000. Software development risks to project effectiveness. The Journal of Systems and Software 52 (1), 3–10.
Jiang, J.J., Klein, G., Balloun, J., 1998. Perceptions of system development failures. Information and Software Technology 39 (14–15), 933–937.
Jones, C., 2004. Software project management practices: failure versus success. CrossTalk: The Journal of Defense Software Engineering, 5–9.
Juristo, N., Moreno, A., Silva, A., 2002. Is the European industry moving toward solving requirements engineering problems? IEEE Software 19 (6), 70–77.
Kernighan, B.W., Plauger, P.J., 1982. The Elements of Programming Style, second ed. McGraw-Hill, Inc., New York, NY, USA.
Kramer, D., 1999. API documentation from source code comments: a case study of Javadoc. In: Proceedings of the 17th Annual International Conference on Computer Documentation. SIGDOC: ACM Special Interest Group for Design of Communications, pp. 147–153.
Kuilboer, J.P., Ashrafi, N., 2000. Software process and product improvement: an empirical assessment. Information and Software Technology 42 (1), 27–34.
Leblang, D.B., McLean, D.G., 1985. Configuration management for large-scale software development efforts. In: GTE Workshop on Software Engineering Environments for Programming in the Large, pp. 122–127.
Lee, M.L., 1998. Change impact analysis of object-oriented software. Doctoral Dissertation, George Mason University, Fairfax, Virginia, USA.
Lethbridge, T.C., Singer, J., Forward, A., 2003. How software engineers use documentation: the state of the practice. IEEE Software 20 (6), 35–39.
Lientz, B.P., Swanson, E.B., 1981. Problems in application software maintenance. Communications of the ACM 24 (11), 31–37.
Martin, J., McClure, C., 1983. Software Maintenance: The Problem and its Solutions. Prentice-Hall.
McDonough, E.F., 2000. Investigation of factors contributing to the success of cross-functional teams. Journal of Product Innovation Management 17 (3), 221–235.
McGarry, F., Pajerski, R., Page, G., Waligora, S., Basili, V., Zelkowitz, M., 1994. Software Process Improvement in the NASA Software Engineering Laboratory. Software Engineering Institute Technical Report CMU/SEI-94-TR-22.
Mogyorodi, G., 2001. Requirements-based testing: an overview. In: 39th International Conference and Exhibition on Technology of Object-Oriented Languages and Systems (TOOLS39), p. 286.
Monkevich, O., 1999. SDL-based specification and testing strategy for communication network protocols. In: Proceedings of the 9th SDL Forum, Montreal, Canada.
Nidumolu, S.R., 1996. Standardization, requirements uncertainty and software project performance. Information and Management 31, 135–150.
Nosek, J.T., Palvia, P., 1990. Software maintenance management: changes in the last decade. Journal of Software Maintenance: Research and Practice 2 (3), 157–174.
Nunnally, J.C., 1978. Psychometric Theory. McGraw-Hill, New York, NY.
Park, C.H., Kim, Y.G., 2003. Identifying key factors affecting consumer purchase behavior in an online shopping context. International Journal of Retail and Distribution Management 31 (1), 16–29.


Paulish, D.J., Carleton, A.D., 1994. Case studies of software-process-improvement measurement. Computer 27 (9), 50–57.
Pigoski, T.M., 1996. Practical Software Maintenance: Best Practices for Managing Your Software Investment. Wiley.
Rai, A., Al-Hindi, H., 2000. The effects of development process modeling and task uncertainty on development quality performance. Information and Management 37 (6), 335–346.
Raiffa, H., Schlaifer, R., 2000. Applied Statistical Decision Theory. Wiley-Interscience.
Schneidewind, N.F., 1987. The state of software maintenance. IEEE Transactions on Software Engineering 13 (3), 303–310.
Schneidewind, N.F., 2002. Body of knowledge for software quality measurement. Computer 35 (2), 77–83.
Schulmeyer, G.G., 2007. Handbook of Software Quality Assurance, fourth ed. Artech House Publishers.
Sousa, M.J., 1998. A survey on the software maintenance process. In: 14th IEEE International Conference on Software Maintenance (ICSM'98), p. 265.
SPSS Inc., 2004. SPSS 13.0 Base User's Guide. Prentice-Hall.
Standish Group International, Inc., 2004. Third Quarter Research Report.
Sumner, M., 1999. Critical success factors in enterprise wide information management systems projects. In: Proceedings of the 1999 ACM SIGCPR Conference on Computer Personnel Research. SIGCPR: ACM Special Interest Group on Computer Personnel Research, pp. 297–303.
Swanson, E.B., Beath, C.M., 1992. Maintaining Information Systems in Organizations. John Wiley and Sons, Ltd.
Tan, W.-G., Gable, G.G., 1998. Attitudes of maintenance personnel towards maintenance work: a comparative analysis. Journal of Software Maintenance: Research and Practice 10 (4), 59–74.
Tonella, P., 2001. Concept analysis for module restructuring. IEEE Transactions on Software Engineering 27 (4), 351–363.
Visconti, M., Cook, C., 1993. Software system documentation process maturity model. In: Proceedings of the 1993 ACM Conference on Computer Science, pp. 352–357.
Wallace, L., Keil, M., Rai, A., 2004. Understanding software project risk: a cluster analysis. Information and Management 42, 115–125.

Wilson, D.N., Hall, T., 1998. Perceptions of software quality: a pilot study. Software Quality Journal 7 (1), 67–75.
Woodfield, S.N., Dunsmore, H.E., Shen, V.Y., 1981. The effect of modularization and comments on program comprehension. In: Proceedings of the 5th International Conference on Software Engineering, pp. 215–223.
Yip, S.W.L., Lam, T., 1994. A software maintenance survey. In: Proceedings of the First Asia–Pacific Software Engineering Conference, pp. 70–79.
Yip, S.W., 1995. Software maintenance in Hong Kong. In: Proceedings of the 1995 International Conference on Software Maintenance. IEEE Computer Society Press, Los Alamitos, CA, pp. 88–95.

Jie-Cherng Chen is currently working toward his Ph.D. degree in the Department of Information Management, National Taiwan University of Science and Technology (NTUST), Taiwan. He received his Bachelor's degree in Computer Science from Tamkang University (TKU), Taiwan, in 1983, and his Master's degree in Information Engineering, also from TKU, in 1988. He is a member of the Software Engineering and Management Laboratory (SEML) at NTUST, where he has participated in several projects from the National Science Council, Taiwan. His main research interests include software engineering, software process improvement, software quality management, software project management, and software metrics.

Sun-Jen Huang received his B.A. in Industrial Management in 1988 and his M.S. in Engineering and Technology in 1991, both from the National Taiwan Institute of Technology, Taiwan, and his Ph.D. degree from the School of Computer Science and Engineering, La Trobe University, Melbourne, Australia, in 1999. He is currently an associate professor in the Department of Information Management, National Taiwan University of Science and Technology (NTUST), Taiwan. He is also the head of the Software Engineering and Management Laboratory at NTUST, which hosts several research projects from the software industry and the National Science Council, Taiwan. Dr. Huang is also the chairman of the Software Quality Promotion Committee at the Chinese Society for Quality. His main research interests are software engineering and project management-related topics, especially measurement and analysis, software quality management, software process improvement, and software project estimation.