A field study of the relationship of information flow and maintainability of COBOL programs


Information and Software Technology 1995 37 (4) 195-202

Michael M Pickard, Department of Computer Science, Stephen F. Austin State University, Box 13063, Nacogdoches, TX 75962, USA

Bradley D Carter, Department of Computer Science, Mississippi State University, Drawer CS, Miss. State, MS 39762, USA

This paper reports the results of a field study of the relationship of information flow to the maintainability of COBOL modules in a data processing environment. The study, which considers 238 modules from three different organizations, examines the correlation of information flow metrics and subjective maintainability ratings. Results of the study show that, for the environments included in the study, information flow does have a significant correlation with maintainability, and information flow metrics can be used effectively to help identify modules with low (poor) maintainability.

Keywords: software metrics, information flow, maintainability

For most software products, the maintenance phase dominates the costs (40% to 70%) of the software lifecycle, and, therefore, such products should be designed with high maintainability as an important objective. This has motivated study of various software metrics that show promise of predicting some aspect of a product's maintainability. Unfortunately, many of these metrics are not available until detailed design or coding is complete, while most decisions affecting maintainability must occur during architectural design. There are, however, structure-oriented properties that can be measured during architectural design. One such property frequently cited as having potential as an indicator of a product's maintainability is information flow, first studied by Henry and Kafura [1]. Of course, no study has indicated that information flow, or for that matter any single measure, will consistently predict maintainability or any other quality attribute. Researchers believe, and studies confirm, that design decisions must be examined using measures of several properties, any of which could indicate that more study of a particular decision is needed or that a design entity should be examined more closely. This paper describes an empirical study to investigate the effectiveness of measuring information flow as one indicator of potential 'problem modules' in a COBOL-oriented business data processing environment. The empirical study examines 238 COBOL modules from three different organizations.


Information flow

The information flow of a software component is the amount of information (or control) flowing to and from other components of the program or system. It represents one dimension of the complexity of the component, and the sum of the information flow of each component is the information flow of the program or system. In the past decade, several metrics have been used to characterize information flow. An information flow metric was first suggested by Henry and Kafura as

IF = length * (fan-in * fan-out)^2,

where length is used as a simple measure of procedure complexity, and fan-in and fan-out measure the complexity of the procedure's connections to its environment. Fan-in is defined to be the number of local flows into a procedure plus the number of data structures from which the procedure retrieves information, while fan-out is defined as the number of local flows from a procedure plus the number of data structures which the procedure updates. For two modules M1 and M2, local flows of information are said to occur from M1 to M2 when M1 calls M2, when M2 calls M1 and M1 returns a value that M2 subsequently uses, or when a third module M3 calls both M1 and M2 and passes an output value from M1 to M2 [1].


Studying modules of the UNIX operating system, Henry and Kafura produced an information flow complexity ranking of modules that significantly correlated with rankings of the number of changes to the module. This indicated a relationship between complexity (as measured by information flow) and the requirement for change. Elimination of the length factor from the information flow metric yielded an even higher correlation, leading them to revise their metric definition to

IF = (fan-in * fan-out)^2,

where fan-in and fan-out are as defined earlier [1]. Shepperd and Ince found problems with Henry and Kafura's claims that information flow is a useful measure of complexity and also described problems in using it as an indicator of maintainability. However, they did suggest that their information flow metric could be valuable in outlier analysis to identify potential problem areas, such as lack of cohesion [2]. Shepperd later investigated the ability of various methods of measuring information flow to assist software developers in making distinctions between alternative designs. In his study, he identified one metric that has a strong relationship with development time, and therefore may be a more useful complexity indicator. His information flow metric, designated IF4, is defined as

IF4 = (unique-fan-in * unique-fan-out)^2,

where unique-fan-in is the total of local and global flows into a procedure with all duplicates removed, and unique-fan-out is all the local and global flows from a procedure, excluding duplicates. For modules M1 and M2, a local flow occurs when M1 invokes M2 and passes a parameter, or when M1 invokes M2 and M2 returns a parameter; a global flow takes place when a module updates a global data structure or retrieves data from the structure [3]. Henry has investigated the use of a hybrid form of the original metric defined as

HC_P = Ci_P * (fan-in_P * fan-out_P)^2,

where HC_P is the complexity of module P, Ci_P is the internal complexity of module P, fan-in_P is the number of local flows into P plus the number of global structures from which the module retrieves information, and fan-out_P is the number of local flows from P plus the number of global data structures that P updates. Ci_P can be any code metric used to indicate internal complexity [4]. Kitchenham et al. [5] compared the ability of several design metrics and code metrics to predict problem programs. Their study examined the relationship between various metrics and the following: number of changes due to enhancement; number of changes due to faults; and subjective assessment of complexity. In particular, information flow metrics were considered for their predictive quality. Citing variations that Henry and Kafura have used in their definitions of an information flow metric, they chose to define another metric, named information flow complexity, as

IFC = (ifi * ifo)^2,


where ifi is informational fan-in and ifo is informational fan-out. The informational fan-in and informational fan-out of a module M are defined as

ifi = cp + (dr + db) and

ifo = pc + op + (dw + db),

where cp is the number of procedures that call M, pc is the number of procedures called by M, op is the number of output parameters of M, dr is the number of data structures from which M reads but does not write, dw is the number of data structures to which M writes but does not read, and db is the number of data structures that are both read and written by M. They concluded that large informational fan-out may imply excessive complexity, that large fan-in may imply excessive size and complexity, and that large values for both ifi and ifo should flag modules for immediate review [5]. A later, similar study by Kitchenham and Linkman confirmed these conclusions, but further indicated that all the size and structure metrics studied, except for informational fan-in, appeared to act as indicators of future maintenance problems [6]. Others have developed composite design metrics including elements of information flow that have been effective at predicting maintainability problems. David Card's system complexity metric and a metric, D(G), proposed by Zage and Zage are composite design metrics with good results from initial studies [7,8]. A study is currently underway at NASA's Software Engineering Laboratory to examine measures available at the end of architectural design to assess maintainability of Ada programs in an object-oriented design environment. Early results show measures of information flow as potential predictors [9].
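To make the arithmetic of these definitions concrete, the following is a minimal Python sketch of Kitchenham's informational fan-in, fan-out, and IFC computed from raw counts for a single module; the ModuleCounts container and its field names are illustrative assumptions, not part of the original studies.

```python
from dataclasses import dataclass

@dataclass
class ModuleCounts:
    """Hypothetical raw counts for a module M (field names are illustrative)."""
    cp: int  # procedures that call M
    pc: int  # procedures called by M
    op: int  # output parameters of M
    dr: int  # data structures M reads but does not write
    dw: int  # data structures M writes but does not read
    db: int  # data structures M both reads and writes

def informational_fan_in(m: ModuleCounts) -> int:
    # ifi = cp + (dr + db)
    return m.cp + (m.dr + m.db)

def informational_fan_out(m: ModuleCounts) -> int:
    # ifo = pc + op + (dw + db)
    return m.pc + m.op + (m.dw + m.db)

def ifc(m: ModuleCounts) -> int:
    # IFC = (ifi * ifo)^2, as defined by Kitchenham et al. [5]
    return (informational_fan_in(m) * informational_fan_out(m)) ** 2

# Example: a module called by 3 procedures, calling 2 others, with 1 output
# parameter, reading 4 structures, writing 2, and both reading and writing 1:
print(ifc(ModuleCounts(cp=3, pc=2, op=1, dr=4, dw=2, db=1)))  # (8 * 6)^2 = 2304
```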

The cases

This research examines over 230 modules from three operational data processing systems that are the products of three quite different environments. The functional applications of the systems, and the machines on which they were developed and now operate, are different. The organizations that house the respective development environments are not in the same industries and are very dissimilar in other respects. Organization A is a public sector agency; Organization B is a major firm in the petroleum industry; and Organization C is a multinational chemical corporation. There are, however, underlying similarities among the systems and the environments in which they were produced and are maintained. They are all data processing production systems with all modules written in COBOL. Perhaps now more than ever it is true that there is really no 'typical' data processing environment, but these systems are consistent in many respects with operational COBOL systems that the authors have encountered during more than 20 years of non-academic experience.


Case Study I

From Organization A, 57 COBOL modules were taken for study. These modules, which are part of a larger system, ranged in age from approximately nine years to approximately one year. The system from which the modules are taken was developed as the organization's means of coping with one facet of its multiple responsibilities to end users and to various external entities. A number of different programmers contributed to the development and maintenance of the older programs. Although some documentation of user requirements existed, no documentation of the architectural design was available. Some documentation of the maintenance history of each module was available as part of the program documentation. However, the maintenance documentation usually did not indicate whether a particular maintenance action affected more than one module. The constraints of the environment within which the system operates are in a constant state of rapid flux; hence, demands for new or changed features in the system are continual, and maintenance activity is frequent and heavy. The programming environment for the system from Organization A used no CASE tools, code generators, or similar development tools. Some reuse of code was employed within the system. The basic system of which the programs are a part began as a batch system that accepted input data in card image format from a dumb terminal. It evolved over a period of approximately nine years into a much larger system that includes batch and on-line programs; input is received via on-line transactions, magnetic tape from external sources, and interfaces with other in-house systems. The underlying file structures that serve as the main repositories of data for the system are a mixture of old and new, with some data remaining in ordinary sequential files, and other data managed by a data base management system. Some programs use vendor extensions to COBOL to allow access to the data base management system. The programs studied appear to be fairly representative of the larger, comprehensive system.

Case Study II

Case Study II consists of 31 modules from a system at Organization C. The purpose of this system is to maintain records and calculate commissions for an international sales force. Evolution of the analysis, functional specification, and architectural design was depicted in several documents that utilized narratives, data flow diagrams, structure charts, and system flow charts. These available design documents reflected the design at several different stages of refinement; earlier documents had not been updated to show the system design evolution. Consequently, it was difficult to extract from them meaningful design measures that could be linked to the programs that eventually were produced. Another complicating factor was that parts of the system had been modelled after a similar system in use in another country, and, while some documentation of the original system was available, it was unclear which programs were descendants of this model. At the time of this study, the system construction had begun approximately six years earlier.


System implementation had taken place approximately four years before this study. Documentation of the maintenance history of each module was also supplied for analysis. A log of changes to the modules had been maintained; the information potentially useful to this study contained in the log includes a narrative description of the modification required, module identification, and date of change. Because the log is organized by module rather than by change, it was difficult to determine the number of modules affected by a given change.

Case Study III and Case Study IV

Case Study III and Case Study IV are composed of more than 150 modules written in COBOL that form all of a system in use at Organization B. The basic purpose of this system is to maintain information about certain corporate investments. Many of the programs interface with a data base management system. There is a mixture of batch and on-line programs. All of the modules are stand-alone; that is, there are no calls to COBOL subprograms. Development of the system was aided by a COBOL code generator. Considerable documentation of the architectural design in the form of data flow diagrams and process narratives was available for study, as was a maintenance history and some documentation of the implemented system. The maintenance history included, for each maintenance action, a brief narrative description of the nature of the change, a list of the modules affected, and an indication of the estimated person-hours required by the change. This arrangement allowed identification of the changes that affected multiple programs. The implementation of numerous programs differed slightly from what had appeared in the original design. (For example, a number of programs actually had one more input file than was indicated in the design documents.) One could view the latter quality as a normal effect of stepwise refinement. However, these differences between the design and the implemented system also could be attributed to a failure of the implementors to adhere faithfully to the design, or to an incompleteness of the design. Because of the inconsistencies between the design and the set of programs constituting the implemented system, data was gathered and metrics were computed in two different ways, and the modules were divided into two cases: Case III includes 77 modules that appeared both in the original design and in the current system documentation and uses metrics computed from the artefacts of the original design; Case IV consists of the remaining 73 modules for which primitives were available, and metric computation is based on the characteristics of the implemented modules.

Comparison of the Case Study Characteristics

Tables 1, 2, and 3 provide a basis for comparison of the surface characteristics of the four case studies. As can be seen, the systems may be characterized as stable, by virtue of age and volatility. The systems run the gamut from no use of data base management systems to heavy use thereof. Design documentation, where present, does not provide a basis for drawing conclusions about all of the programs in the systems.


Table 1. A qualitative comparison of case study software characteristics

                                    Case I     Case II           Case III   Case IV
DBMS use                            moderate   none              heavy      heavy
Approximate age of design           9 years    6 years           3 years    3 years
Approximate age of implementation   8 years    4 years           2 years    2 years
Code reuse                          minimal    moderate          moderate   moderate
Design reuse                        none       substantial       none       none
Volatility                          high       low to moderate   low        low

Table 2. A qualitative comparison of case study design documentation

                                               Case I          Case II                                                    Case III            Case IV
Amount of design documentation                 none            moderate                                                   extensive           minimal
Available design artefacts                     none            structure charts; DFD's; system flow charts; narratives    DFD's; narratives   narratives
Traceability of design to implementation       none to trace   very difficult to connect                                  usually possible    difficult or impossible
Percentage of modules seen clearly in design   0%              < 39%                                                      100%                0%

Table 3. A qualitative comparison of case study maintenance data

                                          Case I   Case II   Case III    Case IV
Dated program comments for each change    yes      no        no          no
Dated records external to programs        few      yes       yes         yes
Man-hours per change                      no       no        estimated   estimated
Category of change                        yes      no        some        some

Table 4. A comparison of case study software sizes

                               Case I   Case II   Case III   Case IV   Total
Number of modules              57       31        77         73        238
Source lines of code (SLOC)    22 929   20 402    49 509     33 817    126 657
Average SLOC per module        402      658       643        463       532

Available maintenance information does include data that would be helpful to a maintenance programmer, but very little that could be used to characterize maintainability. The kinds of data items available from the three organizations vary widely. Although the systems have some similarities, the information about them reflects the widely varying environments in which they were developed.


Table 4 compares the data sets available for study from the three organizations. Figure 1 shows the distribution of the module sizes for the case studies. In this figure, module size is measured in source lines of code (SLOC).

Procedure

Assessment of maintainability

'Maintainability is the ease with which a program can be corrected if an error is encountered, adapted if its environment changes, or enhanced if the customer desires a change in requirements', according to a popular software engineering text [10]. Although most practising software developers and maintainers would probably have no difficulty with this definition, it suggests no inherent means to measure the quality called maintainability. Objective measures, such as mean time to change (MTTC), are highly situationally and environmentally dependent; that is, it is difficult to isolate all variables except the variable of interest. Other metrics suggested for use in capturing this quality generally tend to share this problem. If it is true that most software organizations operate at maturity level 1 [11], which may be characterized as lacking in any meaningful measurement activity, then one could expect little objective data to be available that might be related to maintainability. Faced with these difficulties, researchers have historically relied on subjective measurement of maintainability and other qualities. Kitchenham used a 1 to 5 rating scale of complexity by members of development teams, and she has used the subjective evaluation of maintainability by a system expert [6,12]. In one of Shepperd's studies, members of the technical staff placed modules into one of four categories according to the perceived complexity of making a maintenance change; and the US Air Force Operational Test and Evaluation Center (AFOTEC) uses a questionnaire to allow evaluators to judge factors contributing to maintainability [13,14,15]. The tree structure of maintainability metrics suggested by Oman and Hagemeister includes a number of metrics based on subjective judgements, including subjective product appraisals and subjective appraisals of the supporting documentation [16]. An abridged model of the AFOTEC questionnaire has also been used at Hewlett-Packard to evaluate software maintainability [17]. For this research, system experts were asked to complete a questionnaire that required a numerical rating of the understandability, modifiability, and testability of system modules. These three characteristics have been put forth as components of maintainability [10,14]. The questionnaire allowed ratings of one (worst) to six (superior) for each of the three attributes, with each rating category described on the questionnaire instrument. When more than one system expert rated a module, the ratings were averaged. The subjective maintainability score, M, of each module was then calculated as the sum of the understandability, testability, and modifiability ratings, allowing a minimum score of three and a maximum score of 18.
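As an illustration of the scoring procedure, the sketch below computes M from questionnaire ratings in the way described above; the dictionary layout and the example ratings are assumptions made for the example, not the authors' actual instrument.

```python
from statistics import mean

def maintainability_score(expert_ratings):
    """Subjective maintainability score M for one module.

    expert_ratings: one dict per system expert with 1-6 ratings for
    understandability, modifiability, and testability (layout assumed).
    Ratings are averaged across experts and then summed, so M ranges from 3 to 18.
    """
    attributes = ("understandability", "modifiability", "testability")
    return sum(mean(r[attr] for r in expert_ratings) for attr in attributes)

# Two experts rate the same module:
ratings = [
    {"understandability": 4, "modifiability": 3, "testability": 5},
    {"understandability": 5, "modifiability": 3, "testability": 4},
]
print(maintainability_score(ratings))  # 4.5 + 3.0 + 4.5 = 12.0
```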



[Figure 1. Distribution of module sizes: a histogram of the number of modules per SLOC range (bins of 100 SLOC from 1-99 up to 1500+), shown separately for Cases I-IV.]

Measurement of information flow

As discussed earlier, a number of information flow metrics have been examined for applicability in various environments. These variations were all considered for study of the data available from the three organizations that contributed to this research. It became clear, however, that not all components necessary to calculate the metrics as specified were readily available from the data for this study. The most elusive component was the number of local flows, which contributes to the original information flow metric, as well as to later variations developed by Henry, and to metrics proposed by Shepperd. Nor did the organizations' documentation in all cases provide detailed information pertaining to 'parameters' passed to a module, a component of Kitchenham's informational fan-out. In order to use components that could be derived from each of the available data sets without undue difficulty, two information flow metrics were chosen for this study:

IFs = (gdsr + gdsw)^2 and

IF5 = ((ucm + gdsr) * (ucbm + gdsw))^2,

where gdsr is the number of global data structure reads, gdsw is the number of global data structure writes,


ucm is the number of unique calls to the module, and ucbm is the number of unique calls made by the module. The first metric (IFs), which can be referred to as simplified information flow, can be obtained very easily from the architectural design. The second metric (IF5) should also be obtainable with little difficulty. Both are essentially less complex versions of measures that have been used in previous studies, but they recognize the heavy reliance, in the environments under study, on transferring information from module to module via global variables and files. IFs, however, includes no primitive measure of control flow nor of the associated information flow that would be counted in the local flows components of IF. IF5, on the other hand, includes control flow information that is relatively easy to count early in the design phase. To validate that the IFs and IF5 metrics are representative measures of information flow for this environment and consistent with accepted metrics, Shepperd's IF4 metric was calculated for the 57 modules of Case I and compared to IFs and IF5. Table 5 shows that both metrics correlate well with IF4. This, of course, is because much of the information flow in the data processing modules of all four cases is via global variables and shared files, and there are limited local flows. Table 5 also shows that the three metrics correlate consistently with the subjective maintainability ratings.
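Both metrics reduce to simple arithmetic over four counts that can be read from a design document; a minimal Python sketch follows (the argument names simply mirror the primitives defined above).

```python
def if_s(gdsr, gdsw):
    """Simplified information flow: IFs = (gdsr + gdsw)^2."""
    return (gdsr + gdsw) ** 2

def if_5(gdsr, gdsw, ucm, ucbm):
    """IF5 = ((ucm + gdsr) * (ucbm + gdsw))^2."""
    return ((ucm + gdsr) * (ucbm + gdsw)) ** 2

# Example: a module that reads 3 global data structures, writes 2,
# is called by 1 other module, and itself calls 2 modules:
print(if_s(gdsr=3, gdsw=2))                 # (3 + 2)^2 = 25
print(if_5(gdsr=3, gdsw=2, ucm=1, ucbm=2))  # ((1 + 3) * (2 + 2))^2 = 256
```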


Table 5. Spearman correlation coefficients of IF4, IFs, IF5, and maintainability for Case I

       IF4     IFs     IF5
IF4    *
IFs    0.79    *
IF5    0.81    0.97    *
M      -0.56   -0.62   -0.63

Table 6. Spearman correlation coefficients of IFs and IF5 with the subjective maintainability rating

Case   IFs     IF5
I      -0.62   -0.63
II     -0.60   -0.56
III    -0.44   -0.58
IV     -0.74   -0.92

Table 7. Problem modules

Case   Total modules (count)   Problem modules (count)   Problem modules (percent)
I      57                      17                        29.8
II     31                      9                         29.0
IV     73                      25                        34.2
All    161                     51                        31.7

Although this study was constrained somewhat by the available data, the fact that IFs and IF5 can be expected to be generally available early in the development process enhances their viability as predictive metrics. Table 5 shows that IFs and IF5 are, as expected, highly correlated with each other, because so much of the information flow is via global data. As will be seen later, the additional information flow due to calls (ucm and ucbm) included in the IF5 metric only slightly improved the relationship to maintainability for the cases in this study.

Results

Maintainability and the information flow metrics

As an initial step in the analysis of the results, it is important to establish the statistical relationship between the information flow metrics (IFs and IF5) and maintainability. The objective, of course, is to determine whether 'high information flow' implies 'low maintainability'. It is the relationship of the order of the information flow metrics to the order of the subjective maintainability scores that is of primary interest, rather than the relationship between the magnitudes of the values of the metrics and scores. Therefore, the Spearman correlation coefficient, which compares the relative order or rank of two sets of values, is used to analyse the relationship of the metrics to maintainability. The use of Spearman correlations is also consistent with a number of other similar studies [1-3] that have examined information flow and maintainability.
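For readers wishing to reproduce this kind of analysis, a minimal sketch using SciPy's Spearman rank correlation is given below; the metric values and maintainability scores are invented purely for illustration.

```python
from scipy.stats import spearmanr

# Invented example data: one IF5 value and one subjective maintainability
# score M per module; higher information flow is expected to go with lower M.
if5_values = [256, 4, 1024, 16, 0, 81, 625]
m_scores = [7, 14, 5, 12, 17, 10, 6]

rho, p_value = spearmanr(if5_values, m_scores)
# A negative rho indicates that modules ranking high on information flow
# tend to rank low on maintainability.
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```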


Correlations of 0.62 and 0.63 between IFs and IF5, respectively, and maintainability are significant and consistent with the results of the other studies. Table 6 shows the Spearman correlation coefficients between the IFs and IF5 metrics and subjective maintainability for each case. All, except IFs for Case III, appear to be significant relationships, and Case III is later dropped from the study because of the lack of variability of its subjective maintainability ratings.

Assessment of information flow as a predictor of problem modules

A primary question of this study is: 'How effective is information flow as an indicator of low maintainability or of problem modules?' Of course, it is well known that a set of metrics (rather than a single metric) must be selected as software quality indicators for a particular environment, and it is not reasonable to believe that information flow alone could be used as such an indicator. This study examines the potential of information flow metrics as one of a class of metrics useful as an indicator of the maintainability of the software. To fully assess the effectiveness of the information flow metrics as maintainability indicators, it was necessary to identify the problem modules. Each module in the study has a subjective maintainability score (from 3 to 18) assigned by software professionals responsible for maintaining the software. Using these scores, the modules with the lowest 30% of the subjective maintainability scores from each case were identified as problem modules. Ties in scores prevented a delineation at exactly 30%, but a division very close to 30% could be made for Cases I, II, and IV. Unfortunately, the subjective maintainability scores of Case III had little variability, which made further analysis unreasonable. The subjective maintainability scores fell into only three groups, and they could be ranked to show either a very high correlation or a very poor correlation with the information flow metrics. Furthermore, there was not a 'division' of maintainability scores such that the 'worst 30%' could be identified. For this reason, Case III was dropped from the remainder of the analysis. Table 7 and Figure 2 show a summary of the number and percentage of problem modules. Seventeen such modules were found for Case I, nine for Case II, and 25 for Case IV, for a total of 51 of the 161 modules from Cases I, II, and IV. The information flow metrics IFs and IF5 were then used as indicators of the problem modules. The modules with the highest 30% of the information flow values were marked as potential problem modules. (Again, ties in the information flow metric scores prevented this from being exactly 30%.) The IFs metric identified 35 of the 51 problem modules, while the IF5 metric identified 40 of the problem modules (see Figure 3). The primitive counts of unique calls present in the IF5 metric added little to the effectiveness of IF5 in these cases. As expected, several modules that were not problems were incorrectly identified as such. As can be seen in Table 8, the efficiency of the IFs and IF5 measures ranged from 44.4% to 96%.
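The identification procedure itself is a straightforward ranking exercise. The sketch below, using invented module names and values, flags the lowest 30% of modules by M as problem modules and the highest 30% by an information flow metric as predicted problems, then reports the overlap; it is an illustration of the procedure under stated assumptions, not the study's actual tooling or data.

```python
def flag_problem_modules(m_scores, if_values, fraction=0.30):
    """Return (actual problem modules, flagged modules) as sets of names.

    m_scores and if_values map module name -> value (assumed layout).
    Problem modules are the lowest `fraction` by maintainability score M;
    flagged modules are the highest `fraction` by information flow.
    Ties can push the split slightly away from exactly 30%, as noted above.
    """
    k = round(len(m_scores) * fraction)
    problems = set(sorted(m_scores, key=m_scores.get)[:k])
    flagged = set(sorted(if_values, key=if_values.get, reverse=True)[:k])
    return problems, flagged

# Invented data for ten modules:
m_scores = {"A": 5, "B": 16, "C": 8, "D": 12, "E": 17,
            "F": 6, "G": 14, "H": 9, "I": 11, "J": 15}
if5 = {"A": 900, "B": 4, "C": 400, "D": 49, "E": 1,
       "F": 256, "G": 16, "H": 625, "I": 36, "J": 9}

problems, flagged = flag_problem_modules(m_scores, if5)
hits = problems & flagged
print(f"identified {len(hits)} of {len(problems)} problem modules")  # 2 of 3
print(f"false positives: {len(flagged - problems)}")                 # 1
```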



[Figure 2. Distribution of problem modules: 51 of the 161 total modules in Cases I, II, and IV were rated as problem modules.]

Table 8. Summary of problem module identification using IFs and IF5

                        Problem modules                       Non-problem modules
Case          Metric    Total count   Identified   (%)        Total count   Misidentified   (%)
I             IFs       17            12           70.6       40            7               17.5
              IF5       17            12           70.6       40            6               15.0
II            IFs       9             4            44.4       22            5               22.0
              IF5       9             4            44.4       22            5               22.0
IV            IFs       25            19           76.0       48            0               0.0
              IF5       25            24           96.0       48            1               2.1
All modules   IFs       51            35           68.6       110           12              10.9
              IF5       51            40           78.4       110           12              10.9

Also of interest is the fact that the percentage of 'false-positive' identifications ranged from 0% to 22%.

Summary

Various studies have found that information flow metrics can be useful indicators of some qualities of software, notably complexity and maintainability. It is particularly beneficial when these indicators can be used early in the design phase to help identify modules with potential maintainability problems. The intent of this study was to explore the effectiveness of an information flow metric that can be easily determined early in the design phase in a COBOL data processing environment to help identify problem modules.



A total of 238 COBOL modules from three disparate environments were selected for an empirical field study, and subjective maintainability ratings by system experts were determined for each module. Two easily measured information flow metrics, IFs and IF5, were defined, and values were determined for each module in the study. Spearman correlational analysis of these two measures and the subjective maintainability rating indicates a significant relationship between maintainability and each of the measures. The two information flow metrics are shown to be useful in correctly identifying problem modules, i.e. modules with maintainability ratings falling into the worst 30%. Overall, the success rate was 68.6% for IFs and 78.4% for IF5, with only 10.9% of the non-problem modules incorrectly identified as problems.



We do not interpret these findings as supporting the use of either IFs or IF5 as the 'magic' metric that will give advance warning of maintenance problems. Rather, this study lends support to the more modest idea that information flow metrics can be valuable predictors. In fact, we believe that information flow is just one indicator of potential maintainability problems and should be used with other metrics (size, function points, internal complexity, etc.) that may also indicate such problems, much as a medical doctor might use blood pressure, weight, and temperature as indicators of health problems in a human. In the environments considered by this study, IFs and IF5 were consistently derivable from the systems that formed the basis of the study, and the results show that either metric could have predicted maintenance problems with a reasonable degree of accuracy for these systems. An effort to incorporate metrics into the software process in a manner that allows continual collection and analysis could eventually result in the selection of a set of design metrics that are valuable in predicting future difficulties. The results reported here confirm those of other studies indicating that information flow metrics are one class of software design measures with potential for use as predictors of the maintainability of software products. More specifically, this study shows that IFs and IF5 are two metrics from that class that can be valuable in many COBOL data processing environments.

References

1 Henry, S M and Kafura, D G 'Software structure metrics based on information flow' IEEE Trans. Soft. Eng. Vol 7 No 5 (September 1981) pp 510-518


2 Shepperd, M and Ince, D 'Metrics, outlier analysis and the software design process' Inf. and Soft. Tech. Vol 31 No 2 (March 1989) pp 91-98
3 Shepperd, M 'Design metrics: an empirical analysis' Soft. Eng. J. Vol 5 No 1 (January 1990) pp 3-10
4 Henry, S M and Selig, C 'Predicting source-code complexity at the design stage' IEEE Soft. Vol 7 No 2 (March 1990) pp 36-44
5 Kitchenham, B A, Pickard, L M and Linkman, S J 'An evaluation of some design metrics' Soft. Eng. J. Vol 5 No 1 (January 1990) pp 50-58
6 Kitchenham, B A and Linkman, S J 'Design metrics in practice' Inf. and Soft. Tech. Vol 32 No 4 (May 1990) pp 304-310
7 Card, D N and Glass, R L Measuring software design quality Prentice-Hall (1990)
8 Zage, W M and Zage, D M 'Evaluating design metrics on large-scale software' IEEE Soft. Vol 10 No 4 (July 1993) pp 75-81
9 Briand, L C, Morasca, S and Basili, V R 'Measuring and assessing maintainability at the end of high level design' in Proc. Conf. on Soft. Maintenance (Montreal, September 27-30, 1993) IEEE Computer Society Press, Los Alamitos, CA, USA, pp 88-97
10 Pressman, R S Software engineering: a practitioner's approach, 2nd edn, McGraw-Hill (1992)
11 Humphrey, W S Managing the software process Addison-Wesley (1989)
12 Kitchenham, B A 'An evaluation of software structure metrics' in Proc. 12th Int. Computer Soft. and Appl. Conf. (Chicago, October 5-8, 1988) IEEE Computer Society Press, Los Alamitos, CA, USA, pp 369-376
13 Shepperd, M 'Early life-cycle metrics and software quality models' Inf. and Soft. Tech. Vol 32 No 4 (May 1990) pp 311-316
14 Peercy, D E 'A software maintainability evaluation methodology' IEEE Trans. Soft. Eng. Vol 7 No 4 (July 1981) pp 343-351
15 US Air Force Operational Test and Evaluation Center Software maintainability evaluation guide, AFOTEC Pamphlet 800-2, Vol 3, US Air Force, Kirtland AFB, NM, USA (1991)
16 Oman, P and Hagemeister, J 'Metrics for assessing a software system's maintainability' in Proc. Conf. on Soft. Maintenance (Orlando, November 9-12, 1992) IEEE Computer Society Press, Los Alamitos, CA, USA, pp 337-344
17 Coleman, D, Ash, D, Lowther, B and Oman, P 'Using metrics to evaluate software system maintainability' IEEE Computer Vol 27 No 8 (August 1994) pp 44-49
