INFORMATION AND SOFTWARE TECHNOLOGY
ELSEVIER
Information and Software Technology 39 (1997) 297-305
Factors affecting design inspection effectiveness
in software development
Tzvi Raz (a), Alan T. Yaung (b)
(a) Faculty of Management, Leon Recanati Graduate School of Business Administration, Tel Aviv University, Ramat Aviv 69978, Israel
(b) IBM Corporation, 1 East Kirkwood Boulevard, Roanoke, TX 76299, USA
Received 29 March 1996; revised 17 September 1996; accepted 25 September 1996
Abstract
We report the results of an analysis of defect-escape data from design inspections in two maintenance releases of a large software product. We found that the less effective inspections were those with the largest time investment, the likelihood of defect escapes being clearly affected by the way in which the time was invested and by the size of the work product inspected. Regression analysis of the data resulted in logarithmic and exponential equations relating these variables to defect-escape probability. We conclude with some practical guidelines for improving inspection effectiveness.

Keywords: Design inspection; inspection effectiveness; software development
1. Introduction
In software development, as in the development of most other types of products, decisions made at the design stages of the development process have the largest impact on key product characteristics such as cost, reliability, usability, and overall quality. It is, moreover, much easier to correct defects - deviations from requirements - at the design stage than at subsequent stages. Kan, Basili and Shapiro [1] found that the ratio of the costs of finding and correcting a defect during design, test, and field use was about 1 to 13 to 92, respectively.

Examining design documents is probably the most common method of detecting design defects in general; see for example Ackerman [2], Jones [3], Russell [4], Kelly, Sherif and Hops [5], Weller [6] and Barnard and Price [7]. One of the most widely used methods for detecting defects in software work products in particular is Fagan's [8] formal inspection process. Basically, this process consists of five steps: overview, preparation, inspection, rework and follow-up. Fagan suggested that formal inspections should be carried out as the product moves from one stage to the next in the development process. Jones [9] reported that formal inspections consistently succeeded in removing at least 60% of the defects present in the software.

Formal inspections do add to the overall project cost insofar as they require the investment of resources, mainly in the preparation and inspection steps, which involve a number of development staff. Formal inspections may
also affect the overall duration, as they introduce time dependencies to allow for adequate preparation and for scheduling of the various meetings involved.

There have been some questions regarding the overall cost-effectiveness of inspections in software development. Votta [10] argued that inspection meetings are wasteful and should be replaced by a series of deposition meetings involving one inspector each. He claimed that these deposition meetings are easier to schedule and make better use of the participants' time. Porter, Votta and Basili [11] carried out a controlled experiment to compare several variations of formal inspection meetings. Their findings suggest that inspection meetings (fault collection meetings, in their words) did not contribute to defect detection effectiveness. Nevertheless, formal inspections based on some variation of Fagan's process remain a widely applied technique for quality assurance in software development.

In order to justify its costs, the inspection process must be effective; that is, it must indeed detect all the defects present in the work product being inspected, with none escaping detection. From a quality and process-management perspective, one needs to be concerned with the factors that affect inspection effectiveness, for two related reasons. First, it is useful to initiate remedial action and to prevent residual defects from carrying on to the next stages in the process. Second, management is interested in verifying that the factors influencing inspection effectiveness are set at levels that will maximize the likelihood of effectiveness. In this paper we report the results of a study carried out on
defect-escape data from two maintenance releases of a large software product. The paper is organized as follows. In the next section we describe the inspection process applied in the project studied. Then, we describe the available data and the manner in which it was manipulated in order to obtain escape observations. Next, we present a detailed analysis of the data. Finally, we offer several recommendations that aim to improve design inspection effectiveness.
2. Inspection process

The development project we studied started around 1988 and involved a product that provides an integrated solution to office productivity enhancement on mid-range computers. A new release is shipped to market every 12-14 months, the size of the 1994 release being approximately 2 million lines of code. The development process basically consists of the following steps: requirements analysis, high-level design, low-level design, code, component test, and system test. In addition to these development steps, an inspection process is used for the verification of design, code, test plans, and test cases. An inspection is conducted when the work product moves from one step of the process to the next.

The inspection process is essentially a variant of Fagan's inspection method [8], with three types of role players: the author of the work product to be inspected, the moderator of the inspection meeting, and the inspectors of the work product who participate in the inspection meeting. The inspection process starts with a planning step, in which the author meets with the moderator to determine the schedule for the inspection. Then, the author calls an overview meeting and explains the content of the work product to be inspected. After the overview meeting, the participants study the work product on their own in order to prepare for the inspection meeting. The inspection meeting is facilitated by the moderator, who may designate a scribe to record the inspection defects and a reader to present the work product. Following the inspection meeting, the author corrects the defects found during the inspection and verifies that the errors have been corrected in the follow-up step.
3. Inspection data

The data for the study came from two consecutive maintenance releases, consisting of new or enhanced functions developed in response to formal requests, called Design Change Requests (DCRs). Upon approval for implementation, each DCR is assigned to a change team that is responsible for making the necessary changes in the high-level design, low-level design (logic), and code of the product. The change team, in effect, follows an abridged version of the software development process, with inspections scheduled after each of the phases. The development
organization used a database to capture data about the various activities involved in the process, including inspections. From this database we extracted the following information for each inspection that had been reported:
- Inspection identifier, which consisted of the concatenation of the DCR identifier, the product and release identifiers, the component identifier, and the inspection sequence number. This composite identifier was needed because a single DCR might require changes to several components in different releases of the product, and the work on a given DCR for a given component may have been divided into a number of pieces which were implemented and inspected separately.
- The number of person-hours spent preparing for the inspection.
- The number of person-hours spent conducting the inspection meeting.
- The size, measured in thousands of lines of code, of the work product inspected.
- The type of inspection, which depends on the development phase after which it is carried out. In this study there were three types: inspections of the high-level design, inspections of the low-level design, and code inspections.
- The total number of defects found during the inspection.
- The breakdown of defects by phase of origin, which is in effect the number of defects that should have been found at a previous inspection.
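The fields listed above map naturally onto a simple record type. The following sketch is illustrative only: the class and field names are ours, not the schema of the project's database.

```python
from dataclasses import dataclass, field

@dataclass
class InspectionRecord:
    """One inspection observation, with the fields described in the text."""
    dcr_id: str            # Design Change Request identifier
    product_release: str   # product and release identifiers
    component: str         # component identifier
    sequence: int          # inspection sequence number within the piece of work
    prep_hours: float      # person-hours spent preparing (PREP)
    meeting_hours: float   # person-hours in the inspection meeting (INSP)
    size_kloc: float       # size of the inspected work product, in KLOC
    insp_type: str         # 'HLD', 'LLD' or 'CODE'
    defects_found: int     # total defects found at this inspection
    defects_by_origin: dict = field(default_factory=dict)  # phase -> count

    @property
    def key(self):
        """Records sharing this key belong to the same inspection sequence."""
        return (self.dcr_id, self.product_release, self.component)
```

The composite identifier becomes a tuple-valued `key`, which makes grouping records into inspection sequences straightforward.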
4. Escape data

In order to ascertain the occurrence of defect escapes, we had to identify inspection sequences on the same part of the product. Thus, the records were sorted by inspection identifier, and only those observations that were part of an inspection sequence were retained. Next, for each inspection sequence, we examined the origin of the defects detected in the later inspection(s) to determine whether or not they had escaped from the earlier inspection(s). In this way, we were able to ascertain the occurrence of defect escapes for each inspection. Since no data were available regarding inspections that took place after the code inspections, it was not possible to determine whether any defects escaped from them, and they were excluded from the study.

Even though it was painstakingly collected and collated, the data gathered suffered from certain limitations. A high-level design defect found at a low-level design inspection or at a code inspection is a clear indication that the high-level design inspection was not fully effective. Similarly, a low-level design defect detected at the code inspection suggests that the low-level design inspection was not entirely effective. However, it is possible that some design defects may have escaped not only the design inspection but also the code inspection, only to be detected later on at the system
Table 1
Breakdown of the observations by inspection type and by escape occurrence

                                        High-level          Low-level
                                        design inspections  design inspections  Total
Inspections without escapes (ESC = 0)   26                  16                  42
Inspections with escapes (ESC = 1)      20                  15                  35
Total                                   46                  31                  77
Fig. 1. Distribution of the measured variables. (a) Size in KLOC, (b) preparation effort in person-hours, (c) inspection effort in person-hours.
Table 2
Comparison of means (α: level of significance of the t-test)

            High-level design (N = 46)   Low-level design (N = 31)    All inspections (N = 77)
            ESC=0    ESC=1    α          ESC=0    ESC=1    α          ESC=0    ESC=1    α
PREP         8.68    19.28    0.0086     11.36    16.87    0.1360      9.70    18.25    0.004
INSP         4.55    21.96    0.0032      6.01    17.53    0.0060      5.10    20.06    0.0001
EFF         13.23    41.24    0.0035     17.37    34.40    0.0160     14.80    38.30    0.0001
SIZE          656     2610    0.0054       869     1690    0.0367       738     2216    0.0008
EFF/SIZE    0.082    0.035    0.1067     0.035    0.040    0.350      0.064    0.037    0.130
PREP/INSP    3.68     2.16    0.0630      4.42     1.80    0.0200      3.96     2.05    0.004
integration test step or by the customer. Since we had no data beyond code inspections, we could not measure the incidence of such events. However, the change team claimed that such occurrences were extremely rare in DCR work, and that most of the defects encountered downstream from the code inspection were code defects rather than design defects. Thus, it seemed justified to proceed under the assumption that if, for a particular work product, no design defects were detected at the code inspection, then none remained.

A second limitation stems from the very large variance in the numbers of detected defects reported. Further discussions with the inspection moderators revealed a certain lack of consistency in counting defects. A single design issue that appeared as multiple defects throughout the work product was sometimes reported as one defect, and at other times as a number of defects close or equal to the number of appearances. Thus, rather than carrying out a statistical analysis on the actual number of defects and defect escapes reported, we classified each design inspection as either having or not having escaped defects, and this binary classification became the indicator of effectiveness.

Obviously, an inspection of a work product that is free from defects cannot yield an escape. Thus, we retained for the analysis only the observations pertaining to those inspections that found at least one defect. Altogether, we had 77 usable observations, which are shown in the Appendix. The breakdown of the observations by inspection type and by occurrence of escapes is summarized in Table 1. Histograms describing the distribution of the three measured variables (size, preparation effort and inspection effort) appear in Fig. 1.
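The classification procedure described above (sort by identifier, retain only inspection sequences, and mark an inspection as having escapes when a later inspection in the same sequence reports defects originating in its phase) can be sketched as follows. The record layout and function name are our own illustrative choices, not the authors' tooling.

```python
from collections import defaultdict

# Each record: (sequence_key, seq_no, phase, defects_by_origin)
# where defects_by_origin maps a phase name ('HLD', 'LLD', ...) to a count.

def classify_escapes(records):
    """Return {(key, phase): 0 or 1} - 1 if any later inspection in the same
    sequence found defects originating in that phase (i.e. an escape)."""
    by_key = defaultdict(list)
    for key, seq, phase, origins in records:
        by_key[key].append((seq, phase, origins))
    esc = {}
    for key, insps in by_key.items():
        insps.sort(key=lambda t: t[0])    # order by inspection sequence number
        if len(insps) < 2:
            continue                      # keep only inspection sequences
        for i, (_, phase, _) in enumerate(insps):
            if phase == "CODE":
                continue  # no data downstream of code inspections: excluded
            # Did any later inspection report defects originating in `phase`?
            escaped = any(later_orig.get(phase, 0) > 0
                          for _, _, later_orig in insps[i + 1:])
            esc[(key, phase)] = 1 if escaped else 0
    return esc
```

For example, a sequence in which the low-level design inspection finds two defects of high-level design origin yields ESC = 1 for the high-level design inspection.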
5. Analysis of the data

In the first step of our analysis, we looked for differences between inspections with escapes (ESC = 1) and inspections without escapes (ESC = 0). Based on the data available we carried out a comparison of means for the following variables:

- PREP: number of person-hours invested in preparing for the inspection meeting
- INSP: number of person-hours invested in the actual inspection meeting
- EFF: total effort invested in the inspection: EFF = PREP + INSP
- SIZE: size of the module inspected, measured in thousands of lines of code (KLOC)
- EFF/SIZE: number of person-hours invested per KLOC of module size
- PREP/INSP: ratio of preparation time to actual inspection meeting time

The comparison of means was done for each type of inspection (high-level design and low-level design) separately, and for the two types combined. The results, along with the significance levels based on the t-test, are summarized in Table 2. Analysis of Table 2 reveals several interesting findings, which are described in the following sections.
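A comparison of means like the one in Table 2 rests on a two-sample t statistic. The sketch below uses Welch's unequal-variance form and made-up illustrative values; the paper does not state which t-test variant was applied, so this is an assumption.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for the difference in means of two samples with
    unequal variances. A significance level like the α column of Table 2
    would come from the t distribution with Welch-Satterthwaite degrees
    of freedom, omitted here to stay within the standard library."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = (va / na + vb / nb) ** 0.5                  # standard error of diff
    return (ma - mb) / se

# Hypothetical PREP values (person-hours), for illustration only:
prep_with_escapes = [20, 15, 24, 34, 30, 44]
prep_without      = [4, 4, 2.5, 12, 3.5, 7]
t = welch_t(prep_with_escapes, prep_without)  # positive: escapes cost more prep
```

A large positive t with a small significance level is what Table 2 reports for PREP, INSP, EFF and SIZE.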
6. Time invested

Intuitively, one would expect that as more time is invested in the inspection process, more defects are detected and the likelihood of escaped defects decreases. However, the data clearly tell us that more time was spent on the less effective inspections. Inspections with escapes had significantly more time invested in them. This was true for preparation time and inspection time, both separately (the PREP and INSP variables) and combined (the EFF variable). It was also true for high-level design inspections and for low-level design inspections, separately and combined.

Conversely stated, this finding suggests that inspections that are easier to carry out are also the ones that turn out to be more effective. Apparently there are other factors that affect inspection effectiveness, and just increasing the amount of time invested does not necessarily improve inspection effectiveness.
7. Size

The next finding pertains to the size of the work products inspected. Clearly, inspections with escapes are associated
Fig. 2. Cumulative probability of escape as a function of work product size.
with larger average work products. This finding, which holds for both the separate and the combined design inspections, suggests that it is more difficult to detect all the defects in larger work products than in smaller ones. To further clarify this point, we plotted the cumulative probability of escape against the size of the work product being inspected. For any given work product size, the cumulative probability was estimated as the fraction of inspections with escapes in the sample out of the number of inspections in the sample that were carried out on work products of equal or smaller size. The plot, which appears in Fig. 2, suggests some type of logarithmic relationship between the probability of escape and the work product size.

In order to estimate the magnitude of the relationship, the 77 observations were divided into nine groups according to work product size. The limits of the groups were determined on the basis of gaps in the SIZE values. For each group, the escape probability was calculated as the number of observations with escapes in the group divided by the number of observations in the group. The data appear in Table 3. Fig. 3 shows a plot of the probability of escape for each group as a function of the average group size in KLOC. A
regression analysis on the probability of escape as a function of the logarithm of the average group size produced the following equation (log denoting the base-10 logarithm):

ProbEsc = -0.59 + 0.37 * log(SIZE)    (1)

This regression equation has an R^2 of 0.64 with a level of significance of 0.009, based on the F-test. Clearly, the size of the work product inspected has an effect on the likelihood that the inspection will fail to detect all the defects. The data in this study suggest that the relationship is logarithmic, meaning that the escape probability grows at a rate slower than the increase in work product size.
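The grouped-data fit behind equation (1) can be reproduced with ordinary least squares on the (log10 of average size, escape probability) pairs from Table 3. This sketch assumes, consistently with the Log(average size) column of Table 3, that logarithms are base 10.

```python
from math import log10

# (average group size in KLOC, escape probability) from Table 3
groups = [(62, 0.182), (163, 0.125), (351, 0.250), (513, 0.375),
          (763, 0.667), (1117, 0.222), (1920, 0.900), (3010, 0.667),
          (7291, 0.800)]

def ols(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

a, b = ols([log10(s) for s, _ in groups], [p for _, p in groups])
# a and b should come out close to the -0.59 and 0.37 of equation (1)
```

Running this recovers the published coefficients to two decimal places, which is a useful sanity check on the reconstructed table.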
8. Effort-size ratio

Next, we examined the effect that the ratio of effort to size (EFF/SIZE) has on escape probability. Although the average number of hours invested per KLOC was larger in high-level design inspections without escapes than in high-level design inspections with escapes (0.082 vs. 0.035), the level of statistical significance (0.1067) was not nearly as good as for the other findings. Further, for
Table 3
Average size and probability of escape for each group of observations

Size range (KLOC)   Average size (KLOC)   Log(average size)   No. of obs.   No. with escapes   Prob. of escape
10-100                    62                  1.79                11              2                0.182
125-250                  163                  2.21                 8              1                0.125
300-450                  351                  2.54                 8              2                0.250
500-550                  513                  2.71                 8              3                0.375
600-894                  763                  2.88                 9              6                0.667
1000-1200               1117                  3.05                 9              2                0.222
1600-2000               1920                  3.28                10              9                0.900
2190-3900               3010                  3.48                 9              6                0.667
4725-11700              7291                  3.86                 5              4                0.800
Fig. 3. Probability of escape as a function of the logarithm of work product size.
low-level design inspections the averages for the two populations were very close (0.035 for inspections without escapes versus 0.040 for inspections with escapes), with the inequality in the reverse direction. For the two types of inspections combined, the level of statistical significance of the difference was not sufficient to allow any conclusions to be drawn. Other analyses carried out on the ratio between EFF and quadratic, exponential and logarithmic functions of SIZE also turned out to be inconclusive.
9. Preparation-inspection ratio

Finally, we examined the ratio of the amount of preparation time to the amount of inspection time. For both the separate and combined design inspections, the average ratio was higher for inspections without escapes than for inspections that allowed defects to escape. This finding clearly suggests
that adequate preparation does indeed influence the effectiveness of the inspection process. Fig. 4 shows a plot of the cumulative escape probability as a function of the PREP/INSP ratio. The escape probability for a given value of PREP/INSP was estimated as the fraction of inspections with escapes out of all the inspections in the sample with an equal or smaller PREP/INSP value.

The plot in Fig. 4 suggests a negative exponential relationship. In order to explore this possibility further, we applied the same approach that was used in the analysis of the SIZE variable. The 77 observations were sorted in ascending order of PREP/INSP and were divided into groups of about 10 observations each. The limits of the groups were determined on the basis of gaps in the values of PREP/INSP. Table 4 shows the ranges, average PREP/INSP values and escape probability for each group. Fig. 5 shows a plot of the probability of escape as a function of the average PREP/INSP ratio for each group
Fig. 4. Cumulative probability of escape as a function of PREP/INSP ratio.
Table 4
Average PREP/INSP and probability of escape for each group of observations

PREP/INSP range   Average PREP/INSP   No. of obs.   No. with escapes   Prob. of escape
0.000-0.500            0.372              10               9               0.900
0.548-0.750            0.676              10               5               0.500
0.880-1.056            0.974              13               6               0.462
1.164-1.625            1.380               8               5               0.625
2.000-2.500            2.213               8               3               0.375
3.200-4.250            3.729              10               2               0.200
4.500-8.000            6.402              11               3               0.273
9.714-14.250          11.199               7               2               0.286
of observations. This plot also clearly suggests an exponential relationship. Fig. 6 shows the same data plotted on logarithmic axes. Mathematically, an exponential relationship between PREP/INSP and escape probability would be expressed as:

ProbEsc = A * (PREP/INSP)^B    (2)

After taking logarithms of both sides of equation (2), we obtain:

log(ProbEsc) = log A + B log(PREP/INSP)    (3)

To estimate the coefficients A and B, regression coefficients were calculated from the logarithms of the values in Table 4. The resulting regression line, which had an R^2 of 0.71 and a significance of 0.009, was:

log(ProbEsc) = -0.29 - 0.36 log(PREP/INSP)    (4)

In the exponential formulation, we obtain the following relationship:

ProbEsc = 0.516 * (PREP/INSP)^-0.36    (5)
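The coefficients of equations (4) and (5) can be recovered from the grouped values in Table 4 by ordinary least squares on the log-log data. As with the SIZE fit, this sketch assumes base-10 logarithms.

```python
from math import log10

# (average PREP/INSP, escape probability) from Table 4
groups = [(0.372, 0.900), (0.676, 0.500), (0.974, 0.462), (1.380, 0.625),
          (2.213, 0.375), (3.729, 0.200), (6.402, 0.273), (11.199, 0.286)]

def fit_power_law(pairs):
    """Fit ProbEsc = A * ratio**B by least squares on the log-log data,
    i.e. log(ProbEsc) = log A + B log(ratio). Returns (A, B)."""
    xs = [log10(r) for r, _ in pairs]
    ys = [log10(p) for _, p in pairs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    log_a = my - slope * mx
    return 10 ** log_a, slope

A, B = fit_power_law(groups)
# A and B should come out close to the 0.516 and -0.36 of equation (5)
```

The negative exponent is the quantitative form of the 'do your homework' finding: doubling the preparation-to-meeting ratio lowers escape probability, but with diminishing returns.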
10. Discussion

An immediate and intuitive response to the need to improve inspection effectiveness by decreasing defect escapes would be to invest more in the inspection process. Our findings in this study indicate a different direction, based on two complementary strategies: 'divide and conquer' and 'do your homework'.
10.1. Divide and conquer

Since inspections of larger work products appear to be more susceptible to defect escapes, software designers and architects should consider designs based on smaller modules as a way to reduce the likelihood of defect escapes and to improve quality. From the quality perspective, conducting multiple inspections on smaller work products is preferable to a single inspection on a large one. There could, however, be a price to pay for following this strategy: the possibility of missing defects at the interfaces when the work products inspected in separate inspections are coupled. These defects are likely to appear at the integration test step, when the cost of removing them is higher. To avoid this potential drawback, work products should be defined with the least amount of external coupling possible, and inspections should also cover the external interfacing structures and procedures of each work product.

Clearly, the smaller the work products are, the larger the number of interfaces between them. As a follow-on study, it would be interesting to develop a mathematical relationship between the size of the design work products and the extent of inspection required for their interfaces. This relationship could then be used in conjunction with the logarithmic relationship in equation (1) to find the optimal work product size.
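As a toy reading of the 'divide and conquer' argument, equation (1) can be used to compare per-inspection escape probabilities at different work product sizes. This sketch deliberately ignores the interface-defect cost discussed above, and clamps the linear-in-log fit to [0, 1], since the fitted line can stray outside that range at extreme sizes.

```python
from math import log10

def prob_escape(size_kloc):
    """Escape probability per equation (1): ProbEsc = -0.59 + 0.37*log10(SIZE).
    Clamped to [0, 1] because the fit is only valid over the observed range."""
    return min(1.0, max(0.0, -0.59 + 0.37 * log10(size_kloc)))

# Four 500 KLOC pieces: each inspection is individually less likely to let
# a defect escape than a single inspection of one 2000 KLOC work product.
small, large = prob_escape(500), prob_escape(2000)
```

Any realistic optimisation would have to trade this per-piece gain against the growing number of interfaces, which is exactly the follow-on study the text proposes.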
Fig. 5. Probability of escape as a function of PREP/INSP.

Fig. 6. Probability of escape as a function of PREP/INSP on logarithmic axes.
10.2. Do your homework
This means increasing the ratio of preparation time to inspection time. Inspections without sufficient preparation are not fully effective. The inspection meeting itself should serve mainly as a forum to verify the potential defects found in the preparation step, with most of the defect detection work having been done during the preparation.

The implementation of this strategy affects the planning and scheduling of the project. When planning for inspection activities, sufficient time for adequate preparation should be allowed, and participants should be motivated to actually use that time to prepare thoroughly for the inspection meeting. This may require some adjustment on the part of both management and the technical staff, in order to give higher priority to preparation work. The exponential equation (5) suggests that increasing the PREP/INSP ratio provides marginally decreasing improvements in inspection effectiveness. Here again, it would be interesting to find the optimal balance between the benefits of more preparation and the costs to the project in terms of resources utilized and increased turnaround time.

Management can implement these strategies by educating the design and inspection personnel, by issuing guidelines regarding recommended work product size and preparation and inspection efforts, and by allowing sufficient time and resources in the development plan to accommodate these efforts. The summary results in Table 2 can serve as an initial benchmark for evaluating the effectiveness of specific inspections and for suggesting early corrective actions. For instance, let us assume that at the conclusion of a particular high-level design inspection the PREP/INSP ratio was calculated and found to be less than 2. This would be an indication that the inspection is more likely to belong to the population of inspections with escapes (average PREP/INSP = 2.16) than to the population of inspections without escapes (average PREP/INSP = 3.68). In such a case it might be advisable to invest more time in defect detection, in order to remove any residual defects, before allowing the work product to proceed to the next development stage. Similarly, the regression equations (1) and (5) can serve to estimate escape probability based on size or on the PREP/INSP ratio. Those work products for which the estimates exceed a threshold determined by management should be flagged for additional inspection.
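The screening rule suggested above can be sketched as a small helper that turns equations (1) and (5) into a rough escape-probability estimate. The 0.5 default threshold and the choice to take the more pessimistic of the two estimates are our hypothetical illustrations, not the authors' prescriptions.

```python
from math import log10

def estimated_escape_prob(size_kloc, prep_hours, insp_hours):
    """Combine equations (1) and (5) into two rough escape-probability
    estimates and return the more pessimistic one. Illustrative only."""
    p_size = -0.59 + 0.37 * log10(size_kloc)              # equation (1)
    p_prep = 0.516 * (prep_hours / insp_hours) ** -0.36   # equation (5)
    clamp = lambda p: min(1.0, max(0.0, p))
    return max(clamp(p_size), clamp(p_prep))

def flag_for_reinspection(size_kloc, prep_hours, insp_hours, threshold=0.5):
    """Flag work products whose estimated escape probability exceeds a
    management-chosen threshold (0.5 here is a hypothetical default)."""
    return estimated_escape_prob(size_kloc, prep_hours, insp_hours) > threshold
```

For a 100 KLOC work product, an inspection with equal preparation and meeting time is flagged (the PREP/INSP estimate alone exceeds 0.5), while one prepared at a 10:1 ratio is not.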
11. Concluding remarks

We found that the probability of a defect escaping a design inspection is affected by variables that are easy to measure: work product size, preparation time and inspection time. The logarithmic equation for SIZE and the exponential equation for PREP/INSP provide useful insights regarding the marginal benefits that can be achieved by manipulating these variables. In both these cases we obtained highly significant statistical relationships. The failure to obtain unequivocal results regarding the effort-size ratio can be attributed to the small sample size relative to the variance in the data.

Other factors that may affect inspection effectiveness were not addressed in this study due to limitations of the data available. These include the number of participants in the process and their qualifications, the type and severity of the defects found and of those that escaped, the manner in which the inspection meeting was conducted, and the complexity of the designs inspected. However, assuming that the sample of 77 inspections was not biased in any particular way, the analysis of the variables available allowed us to formulate two straightforward strategies for improving inspection effectiveness. Of course, these strategies serve as guidelines, and when implementing them management should also consider the specifics of the situation at hand. An important extension of this work would be the development of guidelines for selecting the optimal values of the variables examined.

Although our results and recommendations are related to a specific software development project, we believe that they can be generalized to other software development processes and conditions, and potentially to other types of design activities as well. Finally, we should mention that studies such as this one illustrate the value to be derived from collecting data on the various process activities in software development and having it available for analysis and control purposes.

References

[1] S.H. Kan, V.R. Basili and L.N. Shapiro, Software quality: an overview from the perspective of total quality management, IBM Systems Journal, 33 (1) (1994) 4-19.
[2] F. Ackerman, L. Buchwald and F. Lewski, Software inspections: an effective verification process, IEEE Software, May 1989, pp. 31-36.
[3] C. Jones, Applied Software Measurement, McGraw-Hill, New York, 1991.
[4] G. Russell, Experiences with inspections in ultralarge-scale development, IEEE Software, January 1991, pp. 25-31.
[5] J. Kelly, J. Sherif and J. Hops, An analysis of defect densities found during software inspections, Journal of Systems and Software, 17 (2) (1992) 111-117.
[6] E.F. Weller, Lessons from three years of inspection data, IEEE Software, September 1993, pp. 38-45.
[7] J. Barnard and A. Price, Managing code inspection information, IEEE Software, March 1994, pp. 59-69.
[8] M. Fagan, Design and code inspections to reduce errors in program development, IBM Systems Journal, 15 (3) (1976) 219-248.
[9] C. Jones, Programming Productivity, McGraw-Hill, New York, 1986.
[10] L.G. Votta, Does every inspection need a meeting?, in Proc. ACM SIGSOFT '93 Symposium on Foundations of Software Engineering, December 1993, pp. 107-114.
[11] A.A. Porter, L.G. Votta and V.R. Basili, Comparing detection methods for software requirements inspection: a replicated experiment, IEEE Transactions on Software Engineering, 21 (6) (1995) 563-575.
Appendix. Raw data

Columns: Type (HLD/LLD); SIZE (KLOC); PREP (person-hours); INSP (person-hours); ESC (0: no escapes, 1: escapes)
HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD HLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD
10 11 70 100 100 100 150 1.50 175 250 300 300 330 315 450 500 500 500 500 600 1000 1200 1200 2190 3000 3000 50 164 300 300 550 700 894 1200 1600 2000 2000 2000 2000 2000 2000 3000 4725 7030 8000 11700 50 50 100 125 125 164 450 550 600 650 1000 1100 1150 1200 1600
PREP (person-hours) 4 4 2.5 12 3.5 1 7 3 7.2 1.2 6 2.1 2.2 2.5 4.5 20 15 4 4 8.5 4.5 7.5 1.5 24 34 34 1 9 24 0 6 8.5 9 15 16 16 10 20 25 12 32 34 30 44 5 71 2 4.5 1 4 4 2.5 2.2 9.5 8 14 13 8 10 42 29
INSP (person-hours) 4 2 0.5 1 1 0.2 1 3 4.8 0.8 8.5 2.3 2.5 2.8 6 28 20 1 1 2 2 1 1 15 3.5 3.5 0.5 18 36 4 0.6 9 18 2 16 39 8.5 16 24 21 28 3.5 47 36 2 110 0.6 2 0.5 1 1.1 5 1 16 2.5 3 1 1 10 46 2
ESC: 0: no; 1: yes
Type: HLD/LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD LLD
SIZE (KLOC) 5000 40 500 500 750 887 888 894 1000 2000 2000 2400 2500 3500 3600 3900
PREP (person-hours) 30 5 19 25 6 11 12 27 4 10 64 17 13 29 6.5
INSP (person-hours) 4 1 2 18 24 14 30 30 36 36 20 12 4 4 28 4
ESC: 0: no; 1: yes 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1