Approaches to measuring size of application products with CASE tools

G Tate and J M Verner*

Information Systems Department, City Polytechnic of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong. *School of Information Systems, University of New South Wales, PO Box 1, Kensington, NSW 2033, Australia

Computer-aided software engineering (CASE) tools offer an unprecedented opportunity to measure automatically the sizes of products developed using them. The sizes of CASE application products are required for the measurement of the productivity of CASE technologies themselves and of developers using those technologies. A series of development phase sizes of a CASE application product is needed, from an initial abstract size of the job to be done, through various specification and design phase sizes, down to the size of the code ultimately generated. Several partial sizes may also be useful, for example, the size of the data model alone. The paper examines issues concerned with the dependency of size metrics on the instrumented CASE tools that measure them, and the search for a technology-independent measure of job size. Candidate units of size measure are examined, and it is suggested that dictionary token counts and function metrics are the most promising. Problems of the calibration of metrics across different CASE technologies are examined.

Keywords: computer-aided software engineering, CASE, CASE application products, dictionary token counts, function metrics, size measurement, software technology dependence

This paper is about size metrics for application products developed using computer-aided software engineering (CASE) tools. To avoid frequent use of the rather long-winded expression 'CASE application product', it shall be abbreviated to CAP (plural CAPs). CASE tools offer unprecedented opportunities for the automatic measurement of aspects of both CAPs themselves, including a variety of sizes of representations of CAPs, and also of what the developers of CAPs actually do. However, before thinking about instrumenting CASE tools to measure relevant aspects of both the development process and its products, it is important to be clear about the purposes of such measurements. As the attention in this paper is limited to software sizing matters, the question is: why measure the size of CAPs? It is done in an attempt to measure one of two main types of productivity, namely:

• Technology productivity: how good is a specific CASE technology at getting the job done?

• Developer productivity: how good is a developer at getting the job done with a specific CASE tool?

In both of the above cases the concern is with the size of the particular job to be done. Productivity in this context will typically be measured in cost, effort, or time per unit size of the job to be done, e.g., in $/function point, if job size is measured in function points 1. Developer productivity is likely to be more important in terms of its spread, and the factors influencing that spread, i.e., in environment management, than in individual developer management. Key questions with which this paper is concerned are:

• What CAP sizes are required?
• How should CAP sizes be (automatically) measured?
• In what units?

These form a related set of surprisingly difficult questions. There is nothing absolute about the size of any software product at any stage of its development. Only by careful modelling of the development process, and equally careful definition of product states and sizes at various stages of development, is it possible to get reliable and comparable size measures within a specific environment. Comparison between CASE environments, however, requires a CASE technology-independent measure of size, which may be regarded as an idealized abstract size of the job to be done. Approaches to obtaining size measures with some degree of technology independence are briefly examined. The question of developer productivity is not separately addressed. However, the sizes and size measures discussed should be suitable for this purpose, as they apply to all the intermediate and final CAPs produced by developers, at whatever stage of development is of interest. Candidate units of size measure are investigated, and it is suggested that dictionary token counts and function metrics are the most promising. Problems of calibration, both of different size metrics with each other and of similar metrics across different CASE technologies, are also examined.
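To make these productivity measures concrete, the calculation itself is simple. The sketch below (modern Python, purely for illustration; all figures are hypothetical) computes cost per function point and the spread of developer productivity across an environment:

```python
# Sketch only: productivity as cost per unit job size, here $/function point.
def cost_per_fp(total_cost, job_size_fp):
    """Cost per function point for a job; lower means more productive."""
    return total_cost / job_size_fp

# Hypothetical per-developer costs for similar jobs in one CASE environment;
# the spread matters more for environment management than any single value.
developer_costs = [cost_per_fp(124_000, 400), cost_per_fp(99_000, 360),
                   cost_per_fp(151_200, 360)]
mean = sum(developer_costs) / len(developer_costs)
spread = max(developer_costs) - min(developer_costs)
print(f"mean ${mean:.0f}/FP, spread ${spread:.0f}/FP")  # mean $335/FP, spread $145/FP
```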

WHAT CASE APPLICATION PRODUCT SIZES ARE NEEDED?

It is not the purpose of this paper to model CASE development processes or CASE life-cycles. It is the claim of many CASE tool suppliers that their products will in fact fit many different life-cycle models. However, regardless of the life-cycle model used in any particular environment, there will be a sequence, or more precisely a network, of sizes of interest during the development of a CAP. For the moment, the question of how a CAP size might be measured is set aside, and the application aspects that some sizes of interest might include are considered instead. The particular sizes of interest will depend, among other factors, on the application category to which the CAP belongs. To be specific, a particular application category, that of data-centred business systems, shall be considered. Applications in this category can be characterized briefly as consisting of a database, together with a set of relatively simple transactions that update, and report on, the database contents. It must be emphasized that this application category has been chosen only as an example and that the principles described herein are considered to be of much more general applicability.

The sizes of interest in the development of a data-centred business system may include some or all of the following. The relationships between these sizes are illustrated in Figure 1. Note that only the sizes of selected CASE products depicted in Figure 1 are examined in greater detail below.

[Figure 1. CASE application product components whose sizes may be of interest (data-centred application category example). Letters refer to paragraphs in the text: (a) general data model, (b) general dataflow model, (c) outline system model, (d) detailed data model, (e) detailed dataflow model, (f) system model, (g) user interface, (h) detailed functional specification, (i) design products such as database design, (j) generated code.]

(a) General data model size

This is an early measure of the size of the data model, or of a subview of interest, say in entity-relationship terms. It should contain all entities and relationships of concern to the application. Note, however, that at this general level entities and relationships are merely named and the cardinalities of the latter identified; their attributes or components are not listed. This gives a first indication of the size of an application in the data dimension.

(b) General dataflow model size

This is an early measure of the size of the dataflow diagram exploded several levels to an agreed, rather general, level of process and/or dataflow detail. The flows and processes are merely named and their attributes or components are not identified. There are, of course, potential problems with such a general measure, mostly related to determining and maintaining a consistent level of generality. Guidelines for a general dataflow diagram at other than the trivial top level may be difficult to establish. One possible guideline is that data flows should be broken down to transactions, e.g., screens or windows, reports, inputs, and inquiries, rather than to individual records, groups, or fields. This gives a first indication of the size of an application in the process or transaction dimension.

(c) Outline system model size

This combines both (a) and (b) above, namely, a general data model and a general dataflow diagram. Such an outline system model size is likely to be the earliest size obtainable from the analysis level of a CASE environment that can give some basis for an overall estimate of the size of the application as a whole.

(d) Detailed data model size

This includes (a) above, revised as necessary due to new information becoming available as development proceeds, together with all named attributes of all entities and relationships. It is a matter of availability, choice, and local standards whether data typing, formatting, and related data element information is included at this stage or not. This size should provide a good basis for estimates of all further database design and implementation activities.

(e) Detailed dataflow diagram size

This includes the fully exploded dataflow diagram, with the full structure of each flow specified and all data elements within flows named, though not necessarily typed. The level of the leaf processes in the explosion tree is not so difficult to determine as it is in (b), though it is still a matter for debate. A breakdown to the level of DeMarco's functional primitives 2 may be used, or a breakdown based on the concept of complete record-level dependence of outputs on inputs. The latter terminates the explosion process when all outputs of a process are dependent on all inputs to that process, there being no redundant inputs for any output and this dependency being at the record level rather than the data element level. The term record could alternatively be defined as a relation in a fully normalized relational model. Note that the attributes or data elements need only be named. Typing, formatting, and related data element information does not need to be included, though it can be if it is available. This size should provide a good basis for transaction design and implementation estimates.
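The record-level dependence rule just described is automatable. Below is a minimal sketch, assuming each process is described by its set of input records and a mapping from each output to the input records on which it depends (all names are hypothetical):

```python
def is_leaf_process(inputs: set, output_deps: dict) -> bool:
    """Terminate the explosion when every output depends on every input,
    i.e., there are no redundant inputs for any output (record level)."""
    return all(deps == inputs for deps in output_deps.values())

# Example: PICKING-LIST does not use CUSTOMER, so this process would be
# exploded further rather than treated as a leaf.
inputs = {"ORDER", "CUSTOMER"}
output_deps = {
    "INVOICE": {"ORDER", "CUSTOMER"},
    "PICKING-LIST": {"ORDER"},
}
print(is_leaf_process(inputs, output_deps))  # False
```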

(f) System model size

This is a measure of the size of the data model, including all named attributes (data elements), together with the size of the fully exploded dataflow diagram, with the full structure of each flow specified and all attributes named. It is essentially (d) combined with (e), and as such provides a good basis for estimates of all further design and implementation.

(g) User-interface size

This takes a 'black box' view of the (sub)system of interest, being concerned solely with its inputs from, and outputs to, the user. Menu, screen, and window formats and contents, including all prompts, help messages, and error messages, together with report formats and contents and any other user interactions, are included in complete detail. A user interface may be simple (comparatively small), or complex with extensive help, experience levels, and so on (comparatively large), for essentially the same underlying application. The user interface may be a substantial contributor to implementation effort in business applications. It can be regarded as another dimension of an application, in addition to those of data and processing. This size should provide a good basis for estimates of user-interface implementation effort.

(h) Detailed functional specification size

This includes all the information in (f) and (g), together with complete dictionary entries for all data elements, structures, and flows; structured English 3, action diagram 4, or similar process descriptions; and access controls, accounting requirements, and other system administration functions of relevance to the user. As this size combines the data, transaction, and user-interface dimensions of an application, it should be a good predictor of both total design effort and implementation effort.

(i) One or more design sizes

These will depend on the structure of the design process and the design method used, e.g., structured design 5 or data structure-driven design 6,7.

(j) Generated code size

For example, the size of the Cobol source code generated, if the CASE environment used generates this.
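A sizing tool might hold these phase sizes as a simple record that is filled in as development proceeds. The sketch below (Python, with invented field names, units, and values) is one possible representation:

```python
# One CAP's phase sizes, keyed by the paragraph letters above; None marks
# a size not yet measurable at the current stage. Units: dictionary tokens,
# except (j), which might be lines of generated Cobol. Values invented.
cap_sizes = {
    "a_general_data_model": 46,
    "b_general_dataflow_model": 118,
    "c_outline_system_model": 164,   # combines (a) and (b)
    "d_detailed_data_model": 590,
    "e_detailed_dataflow_diagram": None,
    "j_generated_code": None,
}

# Sizes measured so far, e.g., as input to a revised effort estimate.
available = {name: size for name, size in cap_sizes.items() if size is not None}
print(available)
```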

The sizes referred to above may be measured for subsystems or increments, rather than for whole systems. Because all the elements making up any particular size are automatically countable, or measurable, by CASE tools themselves, or associated tools, the number of possible sizes is large, depending on just what is included in a particular size. As development practices vary, there is a strong argument for allowing users to define their own size requirements, i.e., using generic sizing tools, in addition to building specific sizing tools for measuring some well defined sizes that emerge as commonly used sizes of importance for comparative purposes, e.g., function points according to widely accepted function point analysis guidelines 8 or function weight 9. A higher-level CASE planning tool might include business characteristics and parameters, or sizing-by-analogy data, which enable an even earlier ball-park size to be obtained.

The large number of potential sizes might raise the question: are all these sizes needed? This is not an easy question to answer until there has actually been an opportunity to use them all, but it can be discussed a priori. Certainly an early size is needed as a basis for effort, cost, and schedule estimates on which to make go/no-go or resource allocation and scheduling decisions as soon as possible in a project. Almost all cost-estimation models require a size estimate as their most important cost driver. If progress is to be monitored, actual time, effort, and productivity need to be checked against estimates or targets. For monitoring productivity, size is needed. Care must be exercised in all productivity measurements, however, to ensure that size measures are not used that can easily be inflated to give a false impression of greater productivity. If revised estimates are to be made from time to time on the basis of actual work-to-date, sizes of intermediate products are needed on which to base those estimates.

It would also be useful to be able to look separately at different component sizes, or size aspects, of a system. In the data-centred business applications category chosen as an example, these aspects could include:

• The data dimension: an examination of the data model alone can give some measure of the amount and complexity of the application data.
• The transformation dimension: an examination of the dataflow diagram, exploded to some convenient level(s), can enable the assessment of functional aspects of a system in terms of the numbers and types of transformations required to derive output flows from input flows, and also which inputs particular outputs depend on.
• The user-interface dimension: a separate examination of the size and complexity of the user interactions with a system, perhaps in relation to their degree of friendliness.

For practical purposes of project management, sizes (b), (f), (h), (i), and possibly (j) may be of most interest initially. For those who want to study a CASE development environment in detail with a view to its better understanding and control, all the above sizes are potentially of interest, particularly if tools are available that can produce them automatically.

UNITS OF SIZE MEASURE FOR CASE

Having established that a number of sizes are needed for the effective development of CAPs, it is now necessary to look at ways of measuring these different CAP sizes. The first issue is what unit or units of measure to use. CASE tools are concerned with a wide variety of object types (e.g., entities, relationships, data types, data flows, processes, and many others), which are represented in a number of different ways, including:

(i) graphically
(ii) as user entries in screen forms
(iii) as dictionary entries
(iv) as items in reports

A few objects may be represented in all of these different ways. Most, however, will be represented either in forms (i), (iii), and (iv) or in forms (ii), (iii), and (iv). There are clearly problems in measuring the size of a diagram as a diagram, and combining it with the size of a number of user entries in a screen form. It is not the purpose of this paper to ask, or to answer, philosophical questions about the nature of size itself. There are, however, fundamental problems in determining the size of any software product or representation that consists of a large number of different objects. Traditionally, these problems have been solved in one of two ways:

• by finding a common measure for all objects; traditionally, this common measure for software has been lines of code
• by constructing a composite measure that assigns different weights to different objects, appropriate for the purpose in hand, and then adds the weighted object counts or measures together; this is the basis of function measures such as function points 8 and function weight 9

To these must be added another approach that has recently emerged, namely, the use of a vector of size metrics rather than a single size value. This will be called the metric vector approach. Basili and Rombach in the TAME project 10 note that a vector of metrics is necessary to capture the characteristics of a development environment and its software products. Similar comments apply to the narrower field of software size. This approach recognizes the fact that systems are made up of different kinds of objects that are, in some senses at least, incommensurate. Thus a vector of counts or measures of several key software object classes may be more useful for some purposes than a single common or composite measure.

The advent of CASE makes it necessary to take a new look at units of size measure. In doing so, first possible common measures, then vector metrics, and finally composite metrics will be looked at.

Common units of size measure for CASE

Traditionally, the most popular common unit of software size measure has been lines of code, often also equated with delivered source instructions. In most CASE developments, however, lines of code are quite uncommon. For example, in data-centred business CAPs, lines of code as such may only occur in parts of process descriptions, process module code, and the final generated code itself. The final generated code may be source code for some compiler. Other objects are either graphical or entries in screen forms. It would seem to be unnatural to look for lines-of-code equivalents, except perhaps for the purposes of comparison with earlier conventional developments. All objects, however, are entered into a dictionary as tokens or fields. (At this stage of CASE development, possibilities such as bit-mapped user graphical objects or voice information shall not be considered.) The tokens in the dictionary description of the objects of interest for a particular CAP size can be counted and added together, and a total size in dictionary tokens obtained. As with lines of code, there are advantages and disadvantages in the tokens approach.

The advantages include:

• Tokens are simple, universal, and easily counted.
• It can be argued, as it has been for lines of code, that, although all tokens are not equal, on average they can provide a useful common measure comparable across many CASE tools, languages, and other software representations.
• Each token can be thought of as a kind of atomic decision that the developer must make concerning the CAP under development.

The disadvantages include:

• Possible alternative dictionary representations may use different numbers of tokens for the same objects. For example, an arithmetic expression may be stored in the dictionary in either parenthesized or parenthesis-free notation.
• The correspondence between tokens and developer decisions may not be one-to-one and may depend on the CASE interface. For example, a screen position may be represented in the dictionary as two coordinates, whereas a user with a mouse may regard choosing a position as a single decision.
• Some important objects, such as entities, may be represented by relatively few tokens, whereas some less important objects, such as data types, may require rather more tokens to describe them completely.
• CAP token counts are dependent on the CASE technology employed.
• Questions arise as to whether all tokens within the CASE dictionary files should be counted, including those in developer notes, comment text, etc., or whether some should be omitted; if so, which, and why? In some situations some comments may in fact contain meaningful code, such as calls to nonstandard software products.
• There are potential difficulties, at least initially, in assigning a meaning to a size in tokens, or thousands of tokens, and relating it to existing size measures in lines of code and function metrics.

The advantages and disadvantages are not dissimilar to those usually adduced for lines of code. Indeed, the advantages seem rather more, and the disadvantages rather fewer, than those for lines of code. It is salutary to reflect that, in spite of its many difficulties, lines of code has proved to be surprisingly workable and durable over almost 30 years of changing software technologies. It is strongly advocated that dictionary token counts be adopted as the standard common measure of CAP sizes. This should be done in the full recognition that there are disadvantages and that ways must be found to overcome them. Wrinkles of this sort will only be ironed out by actually using tokens in practice over a period.
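As an indication of what an automated dictionary token count might involve, the following sketch counts tokens in a hypothetical flat dictionary export; the record format, the one-token-per-field rule, and the comment-handling policy are all invented for illustration:

```python
import re

# Hypothetical flat dictionary export: one object per line, with type, name,
# and a body of fields, e.g. "ENTITY;CUSTOMER;CUST-NO NAME ADDRESS".
def token_count(dictionary_lines, count_comments=False):
    total = 0
    for line in dictionary_lines:
        obj_type, _name, body = line.split(";", 2)
        if obj_type == "COMMENT" and not count_comments:
            continue  # the open policy question noted above: count comments or not?
        # Invented rule: the object name and each field count as one token each.
        total += 1 + len(re.split(r"[,\s]+", body.strip()))
    return total

sample = [
    "ENTITY;CUSTOMER;CUST-NO NAME ADDRESS",
    "RELATIONSHIP;PLACES;CUSTOMER ORDER 1:N",
]
print(token_count(sample))  # (1+3) + (1+3) = 8 tokens
```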

Size metric vectors for CASE

The metric vector approach recognizes that a concept such as size has more than one dimension and therefore cannot be embodied completely in a single value. Just as two different people may weigh the same, although one is tall and thin, whereas the other is short and stout, so two systems may have the same token or function-point count, but one may be data-heavy, whereas the other is function-heavy. Systems, as well as people, may be better described by weight, height, and girth, as it were, than by weight alone. Figure 1 indicates an implicit recognition of this fact by its isolation of three different size aspects of a data-centred business system, namely, the data model, dataflow diagram, and user interface, each of which (suitably defined) can readily be sized separately in a CASE environment. CAPs involve so many different types of objects that several levels of metric vectors may be of interest. For example, for data-centred systems, the following vectors may be useful:

• (data model size, dataflow diagram size, user-interface size, functional specification size), where sizes are measured in tokens and the fourth entry in the vector is the sum of the other three
• (entities, relationships, attributes, data model token count), where the first three entries are counts of the respective data model components
• (data flows, processes, data stores, external entities, dataflow diagram token count)
• (menus, windows, reports, user-interface token count)

It must be emphasized that the above are only examples and that experience with the descriptive power of metric vectors will undoubtedly suggest other metric vectors and metric vector components. As databases of completed systems are built up, metric vectors can also be used to match the characteristics of an outline system model with completed systems for predictions by analogy. Size metric vectors do not replace common or composite size metrics. They complement them, adding extra dimensions and filling out the sizing picture.
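A minimal sketch of the first of these vectors as a data structure (Python, with hypothetical token counts) shows how two systems of similar composite size can differ in shape:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SizeVector:
    """First metric vector above: token-based sizes for a data-centred system."""
    data_model: int
    dataflow_diagram: int
    user_interface: int

    def functional_specification(self) -> int:
        # Fourth entry of the vector: the sum of the other three.
        return self.data_model + self.dataflow_diagram + self.user_interface

a = SizeVector(590, 1420, 880)   # function-heavy in its dataflow dimension
b = SizeVector(1900, 640, 310)   # data-heavy
# Nearly equal composite sizes hide quite different shapes.
print(a.functional_specification(), b.functional_specification())  # 2890 2850
```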

Composite units of size measure for CASE

Composite units of size measure can be regarded as functions of metric vectors. The most common composite measures are function points (FPs) 1, which are a complex function of module type (file, input, output, inquiry, or external interface) and the counts of files, record types, and data elements for each module, summed over modules. There are several variants of FPs, including extensions for real-time and scientific systems 11,12. DeMarco's function weight 2,9 is a composite measure, which is a function of functional primitive type and data element counts for each functional primitive, summed over functional primitive instances. DeMarco has also proposed an analogous composite data measure, which is a function of the number of objects in a database and the number of relations in which each of them is involved.

There are some problems in implementing function point analysis (FPA) automatically, as the rules for identifying FPA components and counting FPs may require some measure of human judgement. These problems can be overcome to some extent by adhering to strict, automatable rules. The result will be a modified FP-like measure, using the same principles but having slightly different, though consistent, values. Other difficulties in automating function measures, such as FPs or function weight, using a CASE tool are that all the necessary information, such as functional primitive type, may not be available automatically, and the module types of FPA may not correspond to natural CASE tool objects in most cases. Several questions arise as a result of considering these or other component size measures in a CASE environment:

• Are function metrics, i.e., FPs or function weight, natural CASE metrics?
• What do function metrics actually measure in a CASE environment? Is it something a bit different from what they measure in a conventional environment, or not?
• What do these composite metrics mean in a CASE environment, and how can they be used? Does this shed light on their meaning in general?

The relative difficulty of automatically implementing FPA in its traditional form 8, in particular the inappropriateness of the FPA file and record type concepts for entity-relationship models and the difficulty of clearly separating CASE processes into FPA input, output, or inquiry processes, would suggest that FPA requires substantial modification before it will fit easily and naturally into a CASE environment. Symons 13 has suggested modifications that would make his Mark II version of FPA more automatable and natural than traditional FPA in a CASE environment. In terms of Figure 1, for the data-centred business application category, FPA provides a measure of the parts of the detailed dataflow model that have flows to or from external entities (including interfaces), together with a measure of the detailed data model in terms of files, record types, and data elements, depending on a suitable mapping of these to the data model components. Symons 13 and Verner et al. 14 have shown that FPA is technology dependent. Verner 15 has also shown that specially tailored FPA-like methods, which take advantage of the technological dependencies in a particular environment, can give much better size estimates than FPA or Symons' Mark II FPA in that environment. Technology-dependent FPA-like methods such as these, when specially tailored to particular CASE environments, may be considered to give 'natural' CASE FP-like measures. These can be calibrated to traditional FPs for comparative purposes, as outlined below.

What does function weight measure in a CASE environment? In terms of Figure 1, for the data-centred business application category, it provides a measure of the size of the detailed dataflow diagram and possibly, separately, of the general data model, though the data measure is seldom used. The composite measures, whether FPs, function weight, or other FP-like metrics, all give early size measures, based on a partial system model. Do they, however, give a better basis for size estimates than direct CASE measures of the system model and its components in tokens? Clearly, they have the potential to do so, as they give different weights to different component types and also to different elements in the metric vectors for those component types. The success of composite measures depends on the purpose for which they are constructed and their fitness for that purpose. Possible purposes include the prediction of:

• downstream effort/cost for particular stages and CAP parts, or for a completed CAP as a whole
• downstream development time, for similar categories; this has been separated from effort/cost because it might be different
• downstream size, in tokens and/or, where appropriate, lines of source or machine code

Thus, for particular purposes, composite metrics should be better estimators, because the weights they give to different component types and object counts can be tailored to the purposes in hand. For general purposes and for objective target sizes, tokens are suitable.
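The following sketch shows the general form of such a composite measure as a weighted sum over component counts. The counts are hypothetical; the weights shown resemble the average-complexity weights of traditional FPA, which a tailored, technology-dependent CASE measure would repartition and recalibrate:

```python
# Hypothetical component counts for one CAP, with average-complexity
# FPA-style weights; a tailored measure would use its own component
# partition and recalibrated weights.
counts  = {"inputs": 12, "outputs": 9, "inquiries": 5, "files": 7, "interfaces": 2}
weights = {"inputs": 4,  "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

composite_size = sum(counts[c] * weights[c] for c in counts)
print(composite_size)  # 48 + 45 + 20 + 70 + 14 = 197 (unadjusted FP-like size)
```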

CALIBRATION

Calibration is necessary to achieve some comparability between tokens, FPs, and function weight, and also between these and traditional lines of code. To do this it is necessary to have a sample of a sufficient number of common systems that can be measured in each of the desired ways. Once this has been done, there is no difficulty in obtaining calibration ratios of tokens/FP, function weight/FP, or whatever, for the particular application class represented by the chosen sample of systems and for the software development technologies on which the metrics are implicitly or explicitly dependent. It is important to realise, however, that these are merely overall system-level equivalences in most cases and are not necessarily applicable to individual components. For example, the distribution of tokens between FPA component types may not be uniform, due to the different weightings applied to different FPA component types. This aspect of calibration is often forgotten by those who use conversion factors between size measures, such as 100 lines of Cobol code per FP. While this may be an overall average, for some module types there may be only 80 lines per FP, while for others there may be 115. In extreme cases particular modules may be far from the average.

Calibration is also necessary for establishing comparability of similar FP-like metrics, using different component partitions and weights, across different CASE technologies 14. The principles and caveats are similar. Calibration is related to the question of technology dependence, which is examined briefly below.
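A calibration ratio of the kind described can be derived directly from such a sample. In the sketch below (all figures hypothetical), three systems measured in both tokens and FPs yield an overall tokens/FP ratio:

```python
# Each pair: (dictionary token count, function point count) for one system
# in the sample, measured in both ways. All figures are hypothetical.
sample = [(48_000, 410), (92_500, 790), (31_000, 265)]

total_tokens = sum(tokens for tokens, _ in sample)
total_fps = sum(fps for _, fps in sample)
print(f"{total_tokens / total_fps:.0f} tokens/FP overall")  # about 117
# Caveat from the text: a system-level ratio such as this need not hold
# for individual components or module types.
```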

TECHNOLOGY DEPENDENCE AND TECHNOLOGY PRODUCTIVITY

To compare the productivities of two different technologies, some measure of the size of the job to be done is needed that is independent of both technologies. Ideally, an abstract size of the job is required that is independent of any software technology or methodology. However, there is no canonical measurable description or specification language available for this purpose. The best to be hoped for is some common measurable model or other descriptive basis that will fit both of two CASE technologies whose productivities are to be compared, though this may not always be possible if the technologies are different.

The nature of technology dependence, or independence, depending on one's point of view, in CASE tools is illustrated in Figure 2. Any particular CASE tool uses particular models and notations and tends to be associated with one or more preferred families of software development methodologies. At the top level only the models are important. These models, such as dataflow diagrams and entity-relationship models, are both fundamental and general, but not universal. For example, a tool based on Jackson Structured Design 16 would not use dataflow diagrams, and a tool based on ISAC 17 would use different dataflow notations in its A-graphs and I-graphs. Tools based on object-oriented concepts may use quite different models, based on hierarchies of objects and messages that pass between them. As CASE tool use moves from analysis to design to implementation, more specific methods are used, and the process, and hence the developing product, becomes progressively more technology dependent. Eventually, expressed in generated source or machine code, the product and its size are highly dependent indeed on the target technology. Even the common measurable descriptive basis, if such exists, which can be mapped onto the technologies of both CASE tools A and B, will not be completely independent of any software technology; it is merely more independent than any of the lower-level aspects in Figure 2.

[Figure 2. CASE software technology dependence: for two CASE tools A and B, technology dependence increases through successive phases, from model dependency in upper CASE, through methodology dependency of design products in lower CASE, to language dependency of the final products.]

CONCLUSIONS

CASE tools offer an unprecedented opportunity to measure the process of software development from many points of view, including the measurement of a wide variety of sizes of CASE application products at many phases during their development, both in total and broken down into their important parts and aspects. Among other uses, this range of sizes is important for the measurement of productivity in many CASE processes and activities.

Different units of measure will be needed for CASE tools from those that have traditionally been used for software measurement and estimation. In particular, lines of code are less appropriate in a CASE environment than dictionary token counts. Also, the traditional FPA approach does not fit most CASE models well. Both vector metrics and new function metrics are likely to have important roles in CASE sizing. The former give additional dimensions to the complex concept of size. The latter can potentially provide better technology-dependent early estimators of downstream size, effort, and schedule.

Calibration using sample sets of common systems can be used to establish some broad relationships between new CASE units of size measure and existing units of measure, such as lines of code and traditional FPs. Such relationships may not hold at a component level, however. Metrics such as FPs and function weight are based on, or have correspondences to, system models constructed early in CASE development. Being model dependent in this sense, they are not independent of CASE technologies, though they are more independent than metrics based on later design or implementation methods. The search for a technology-independent measure of the size of a software job to be done seems to be an elusive one.

REFERENCES

1 Albrecht, A J and Gaffney, Jr, J E 'Software function, source lines of code and development effort prediction: a software science validation' IEEE Trans. Soft. Eng. Vol 9 No 6 (November 1983) pp 639-648
2 DeMarco, T 'In the land of function metrics' in Proc. Fifth Int. COCOMO Users' Group Meeting SEI, Carnegie Mellon University, Pittsburgh, PA, USA (October 1989)
3 Gane, C and Sarson, T Structured systems analysis: tools and techniques Prentice Hall (1979)
4 Martin, J and McClure, C Diagramming techniques for analysts and programmers Prentice Hall (1985)
5 Yourdon, E and Constantine, L Structured design Yourdon Press (1978)
6 Jackson, M A Principles of program design Academic Press (1975)
7 Warnier, J D Logical construction of programs Van Nostrand Reinhold (1974)
8 Zwanzig, K (ed) Handbook for estimation using function points GUIDE Project DP-1234, GUIDE Int. (November 1984)
9 DeMarco, T Controlling software projects: management, measurement and estimation Yourdon Press (1982)
10 Basili, V R and Rombach, H D 'The TAME project: towards improvement-oriented software environments' IEEE Trans. Soft. Eng. Vol 14 No 6 (June 1988) pp 758-773
11 Reifer, D J 'An introduction to RCI's resource estimation tools (RCI-TN-302)' presentation at COCOMO Users' Group meeting, Pittsburgh, PA, USA (November 1987)
12 IIT Research Institute 'A descriptive evaluation of software sizing models' prepared for Headquarters USAF/Air Force Cost Centre, Washington, DC 20330-5018, USA by IIT Research Institute, 4550 Forbes Boulevard, Suite 300, Lanham, MD 20706-4324, USA (September 1987)
13 Symons, C R 'Function point analysis: difficulties and improvements' IEEE Trans. Soft. Eng. Vol 14 No 1 (January 1988) pp 2-11
14 Verner, J, Tate, G, Jackson, B and Hayward, R 'Technology dependence in function point analysis: a case study and critical review' in Proc. 11th Int. Conf. Software Engineering Pittsburgh, PA, USA (May 1989)
15 Verner, J 'A generic model for software size estimation based on component partitioning' PhD thesis, Massey University, New Zealand (1989)
16 Jackson, M A System development Prentice Hall (1983)
17 Lundeberg, M, Goldkuhl, G and Nilsson, A Information systems development Prentice Hall (1981)
