A new metric for object-oriented design

J-Y Chen and J-F Lu

Department of Computer Science and Information Engineering, National Chiao-Tung University, Hsinchu, Taiwan

The paper presents a new metric for object-oriented design. The metric measures the complexity of a class in an object-oriented design. The metrics include operation complexity, operation argument complexity, attribute complexity, operation coupling, class coupling, cohesion, class hierarchy, and reuse. An experiment is conducted to build the metric system. The approach is to derive a regression model of the metrics based on the experimental data. Moreover, the subjective judgement of experts is incorporated in the regression model. This ensures that the metric system is pragmatic and flexible for the software industry.

Keywords: software metrics, software experimentation, object-oriented design, software engineering

KEY CONCERNS IN SOFTWARE INDUSTRY

Software quality is a key concern in the software industry, and reducing software complexity will definitely improve software quality. Software quality, software complexity, and complexity metrics are briefly covered below.

Software quality

Software quality has been interpreted in many ways. For example, Card and Glass discussed its relationship to reliability and safety¹. McCall defined a software quality measure which is widely used within the software engineering community². McCall breaks down the quality factors into criteria. The factors consist of general requirements that a specific customer may or may not want to emphasize in a given software product. Each criterion has specific metrics associated with it. These metrics include both subjective judgements and objective measures. The complete structure involves several hundred data items. These quality factors, however, capture characteristics of the final product (code) rather than its design. Boehm³ also proposed a set of metrics developed from an intuitive clustering of software characteristics. Basili proposed a three-step approach for selecting measures in general, not just quality measures: (1) organizational goals are identified; (2) questions relevant to the goals are defined; and (3) measures that answer the questions are selected. This approach is called the 'Goal-Question-Metric' (GQM) model⁴, which results in measurements suitable for organizational needs and capabilities. The data collection in this paper follows this approach.

Software complexity

The term 'complexity' may be used with different meanings. It is common to classify algorithm metrics according to their 'computational complexity', which means the efficiency of the algorithm in terms of its use of machine resources. The perceived complexity of software, on the other hand, is often called 'psychological complexity' because it concerns characteristics of the software which affect programmer performance in composing, comprehending, and modifying the software. A definition of software complexity suggested by Curtis is: 'Complexity is a characteristic of the software system which influences the resources machines, people, or other software expend to complete their interactions with it'⁵.

Complexity metrics

Many metrics for measuring software complexity have been explored. Traditionally, code metrics such as lines of code, Halstead's programming effort equation, and McCabe's cyclomatic number have been proposed. Realizing that design is more crucial than coding, researchers have explored design metrics. For example, Henry and Kafura proposed a design metric for the structured method, which was later empirically validated by Kitchenham et al.⁶. Similarly, Shepperd analysed the information flow design metric⁷. With the emergence of the object-oriented method, object-oriented design metrics began to be investigated. For instance, Chidamber and Kemerer proposed a set of metrics⁸, and Henderson-Sellers discussed the concerns for metrics for the object-oriented method in general⁹. Quite a few researchers validate their metrics by correlating metric values against historically collected project data such as cost, changes made, etc. However, as Fenton points out, many such studies show that 'all the metrics correlate with each other, but none correlate with the (project) data significantly better than lines of code'¹⁰. As a result, Fenton proposed a two-level metric validation scheme to ensure that: (1) a metric is well-defined, consistent, and based on measurement theory; and (2) the metric is specifically related to, or contributing to, a quality such as maintainability or




complexity of the software. (1) and (2) above are discussed below:

(1) Regarding the theory of well-defined metrics, Weyuker proposed a formal validation approach to ensure that a metric possesses certain properties, such as being neither 'too coarse' nor 'too fine', monotonicity, etc¹¹. The validation has been applied to several well-known code metrics. Chidamber and Kemerer validated their design metrics using this approach⁸.

(2) Regarding the contribution of an individual metric to a quality, this paper takes a new approach. Instead of correlating the metric values to historical data as mentioned above, we correlate them to experts' subjective judgements (see the section headed 'Class score evaluation'). Through experimentally and incrementally implementing our metric system, we may be able to establish an effective quality control system in a software development organization.

Next, we depict the object-oriented design method upon which our metrics are used.

OBJECT-ORIENTED DESIGN METHOD

Many object-oriented design methods have been developed, such as 'Object-oriented design' by Booch¹², Extended object-oriented design (EOOD) by Jalote¹³, General object-oriented design (GOOD) by Seidewitz¹⁴,¹⁵, Object-oriented structured design (OOSD) by Wasserman¹⁶, etc. The method proposed by Booch has been chosen in this paper. Booch defines the steps of object-oriented design as follows¹²:

(1) Identifying the objects and their attributes. This step involves the recognition of the major actors, agents, and servers in the problem space, plus their roles in our model of reality. These objects are typically derived from the nouns used in the description of the problem space. Objects of interest that are similar should be characterized as a class.

(2) Identifying the operations (methods) suffered by and required of each object. This step serves to characterize the behaviour of each object. The semantics of the object are established by deciding the operations that may be performed on the object or by the object. In this step, the dynamic behaviour of each object is also established by identifying the constraints on execution time and storage space.

(3) Establishing the visibility of each object. The visibility of each object is established in this step in relation to other objects. The static dependencies among objects and classes are identified here. Restated, how an object accesses, and is accessed by, other objects is identified. This step captures the topology of objects from our model of reality.

(4) Establishing the interface of each object. A module specification is produced here using some suitable formal notation. This step captures the static semantics of each object or class of objects. The specification also serves as a contract between the clients of an object and the object itself. Restated, the interface forms the boundary between the outside view and the inside view of an object.

(5) Implementing each object. The final step involves the implementation of the interface. As this paper focuses on design, this implementation step is skipped in our experiment.

Why do we need metrics for object-oriented design? An object-oriented design metric can specifically aid in evaluating the complexity of a software system early in the design phase. Complex objects or classes can therefore be located and redesigned. Software complexity will then be reduced, and quality thus improved. Moreover, the metric may eventually be a useful objective criterion in setting the design standard for a software development organization. It can also serve as a learning criterion for staff members who are new to this approach. The metric is therefore essential for an organization utilizing object-oriented development. A new metric for the Booch object-oriented design method is proposed next.

NEW METRICS FOR OBJECT-ORIENTED DESIGN

The new metrics we developed for object-oriented design are:

(1) operation complexity metric;
(2) operation argument complexity metric;
(3) attribute complexity metric;
(4) operation coupling metric;
(5) class coupling metric;
(6) cohesion metric;
(7) class hierarchy metric; and
(8) reuse metric.

These metrics are used to measure the complexity of a class. They are described in turn below.

Operation complexity metric

This metric measures the operation complexity of a class. It is defined as:

Σ O(i)

where O(i) is operation i's complexity value. O(i) is evaluated from Table 1, which is similar to that proposed by Boehm³. Summing up O(i) for each operation i in the class gives this metric value.

Table 1. Operation complexity value

Rating      Complexity value
Null        0
Very low    1-10
Low         11-20
Nominal     21-40
High        41-60
Very high   61-80
Extra high  81-100
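Table 1 gives bands rather than single numbers, so some convention is needed to resolve each rating to a value. Below is a minimal sketch of the operation complexity metric under our own assumption that each band resolves to its midpoint; only the band boundaries come from the paper.

```python
# A minimal sketch: resolve each Table 1 rating band to a single O(i) value.
# The midpoint convention is our assumption; an organization could just as
# well use band minimums or expert-assigned values within the band.
RATING_BANDS = {
    "null": (0, 0),
    "very low": (1, 10),
    "low": (11, 20),
    "nominal": (21, 40),
    "high": (41, 60),
    "very high": (61, 80),
    "extra high": (81, 100),
}

def operation_complexity_value(rating):
    """Return O(i) for one rated operation, using the band midpoint."""
    low, high = RATING_BANDS[rating.lower()]
    return (low + high) / 2

def operation_complexity_metric(ratings):
    """Sum O(i) over all operations in a class."""
    return sum(operation_complexity_value(r) for r in ratings)

# A class whose four operations are all rated 'low' scores 4 * 15.5:
print(operation_complexity_metric(["low", "low", "low", "low"]))  # 62.0
```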



Operation argument complexity metric

The operation argument complexity is defined as:

Σ P(i)

where P(i) is the value of argument i in each operation in the class. P(i) is evaluated from Table 2. Summing up all P(i) in the class gives this metric value. Note that flexibility is provided here to tailor the metric to particular organizational needs. For example, 'Array' in Table 2 can be assigned either the value 3 or 4: one organization may decide to assign the value 3 to one-dimensional arrays and 4 to multi-dimensional arrays, while another organization may decide otherwise.

Table 2. Argument/attribute value

Type                       Value
Boolean or integer         0
Char                       1
Real                       2
Array                      3-4
Pointer                    5
Record, Struct, or Object  6-9
File                       10

Attribute complexity metric

The attribute complexity metric is defined as:

Σ R(i)

where R(i) is the value of each attribute used in the class. R(i) is again evaluated from Table 2. Summing up all R(i) in the class gives this metric value.

Example 1

Class USER has the following attributes and operations:

Attributes:
ID: string 17
Password: string 18

Operations:
User(ID, Password)
Get_ID() return ID
Get_Password() return Password
Get_Command() return Command

Consider the following case: if the complexities for the operations USER, GET_ID, GET_PASSWORD, and GET_COMMAND are 20 each, and, according to Table 2, the complexities for the arguments ID and PASSWORD are 3 each, then, for Class USER, its operation complexity is 20 + 20 + 20 + 20 = 80; its operation argument complexity is 6 + 3 + 3 + 3 = 15; and its attribute complexity is 3 + 3 = 6.
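As a cross-check of Example 1, the following sketch reproduces its arithmetic. The data layout is our own; the O(i), P(i), and R(i) values (20 per operation, 3 per string argument or attribute) are the ones the example assigns, with return values counted as arguments, which is what the example's sums imply.

```python
# Reproduce the arithmetic of Example 1. The O(i), P(i), and R(i) values are
# those the example assigns from Tables 1 and 2; only the summations below
# are the paper's definitions.
CLASS_USER = {
    "operations": {  # name: (O(i), [P(i) for each argument, returns included])
        "USER": (20, [3, 3]),        # User(ID, Password)
        "GET_ID": (20, [3]),         # return ID
        "GET_PASSWORD": (20, [3]),   # return Password
        "GET_COMMAND": (20, [3]),    # return Command
    },
    "attributes": [3, 3],            # R(i) for ID and Password
}

operation_complexity = sum(o for o, _ in CLASS_USER["operations"].values())
argument_complexity = sum(sum(p) for _, p in CLASS_USER["operations"].values())
attribute_complexity = sum(CLASS_USER["attributes"])

print(operation_complexity, argument_complexity, attribute_complexity)  # 80 15 6
```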

Operation coupling metric

This metric measures the coupling between operations in the class and operations in other classes. It is defined as the summation of the following:

(1) The number of operations which access other classes.
(2) The number of operations which are accessed by other classes.
(3) The number of operations which are co-operated with other classes. A 'co-operated operation' means that an operation accesses some other class's operation, and vice versa.

Class coupling metric

This metric measures the coupling between the class and other classes. It is defined as the summation of the following:

(1) The number of accesses to other classes.
(2) The number of accesses by other classes.
(3) The number of co-operated classes.

A 'co-operated class' means that a class accesses some other class, and vice versa. Low coupling means good encapsulation of the class, which is expected to bring forth ease of modular design, testing, and reuse.

[Figure 1. Class coupling example: Class USER accesses Class BOOKHANDLING.]

Example 2

The difference between metric 4 and metric 5 above lies in their different viewpoints. For example, from the viewpoint of class coupling (metric 5), Class USER accesses Class BOOKHANDLING (see Figure 1). In contrast, from the viewpoint of operation coupling (metric 4), Class USER's 'Get_Password' operation accesses Class BOOKHANDLING's 'Check' operation (see Figure 2).

[Figure 2. Operation coupling example: Class USER (attributes ID: string 17, Password: string 18; operations USER(ID, Password), GET_ID() return ID, GET_PASSWORD() return Password, GET_COMMAND() return Command) and Class BOOKHANDLING (no attributes; operations INPUT_MENU(), CHECK(S1: string 17, S2: string 17), SALES(), STOCKIN(), STOCKOUT(), QUERY()); GET_PASSWORD accesses CHECK.]

Cohesion metric

Let us first define two terms, N and M. In a class, assume there are N operations, F(1), F(2), ..., F(N), and correspondingly, there are N sets of arguments, I(1), I(2), ..., I(N). M is the number of disjoint sets formed by the intersection of the N sets of arguments. The cohesion metric for the class is then defined as:

(M/N) × 100%

The lower the value is, the more cohesive the operations are in the class.

Example 3

In Class USER in Example 1, there are four operations as follows:

1. User(ID, Password)
2. Get_ID() return ID
3. Get_Password() return Password
4. Get_Command() return Command

The four sets of arguments for the operations above are:

1. [ID, Password]
2. [ID]
3. [Password]
4. [Command]

Thus N is 4. The disjoint sets formed by the intersection of the above four argument sets are:

1. [ID, Password]
2. [Command]

Thus M is 2, as illustrated in Figure 3. The cohesion metric then is:

(M/N) × 100% = (2/4) × 100% = 50%

[Figure 3. Cohesion example: the four argument sets merge into two disjoint sets, SET 1 = [ID, Password] and SET 2 = [Command], so M/N = 50%.]

The idea behind the cohesion metric is that operations with overlapping arguments tend to be related, thus making a cohesive class. If the value of N is much larger than M, we know that: (1) the argument sets of these operations are related; and (2) these operations manipulate the related attributes of the class. That is, a highly cohesive class encapsulates the 'operations' and the 'attributes' rather well. On the contrary, a class of low cohesion implies high complexity, which is prone to errors. We thus suggest that such a class should probably be split into several subclasses. Chidamber and Kemerer proposed a similar cohesion concept⁸, except that they measure the intersection of instance variables used by operations, instead of the arguments as in this metric. But the instance variables used may be unavailable in the design phase.

Example 4

Class BOOK has attributes and operations as follows:

Attributes:
no: string 18
name: string 6
cost: integer

Operations:
BOOK(no, name, cost)
Get_Book_Name() return name
Get_Book_No() return no
Get_Book_Cost() return cost

Each operation has its arguments as follows:

1. [no, name, cost]
2. [name]
3. [no]
4. [cost]

Thus N is 4. The disjoint set is [no, name, cost]. M is thus 1. The metric value therefore is (1/4) × 100% = 25%. This low metric value (i.e. 25%) implies that the cohesion of Class BOOK is strong.
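On our reading of the definition, computing M amounts to merging argument sets that share any argument until the remaining sets are disjoint. A minimal sketch of that computation, checked against Examples 3 and 4:

```python
# Compute the cohesion metric (M/N) * 100%: the N argument sets are merged
# whenever they share an argument; M is the number of resulting disjoint sets.
def cohesion_metric(argument_sets):
    n = len(argument_sets)
    merged = []
    for args in argument_sets:
        group = set(args)
        keep = []
        for existing in merged:
            if existing & group:   # overlapping sets belong to the same group
                group |= existing
            else:
                keep.append(existing)
        keep.append(group)
        merged = keep
    m = len(merged)
    return m / n * 100

# Example 3 (Class USER): N = 4, M = 2 -> 50.0
print(cohesion_metric([{"ID", "Password"}, {"ID"}, {"Password"}, {"Command"}]))
# Example 4 (Class BOOK): N = 4, M = 1 -> 25.0
print(cohesion_metric([{"no", "name", "cost"}, {"name"}, {"no"}, {"cost"}]))
```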

Class hierarchy metric

The class hierarchy metric for a class is defined as the summation of the following:

(1) The depth of the class in the inheritance tree.
(2) The number of sub-classes of the class.
(3) The number of direct super classes of the class.
(4) The number of local or inherited operations available to the class.

The rationale for this metric is as below:

(1) The deeper a class is in the hierarchy, the more classes are likely to be inherited by the class. This, as a result, makes the class more complex.
(2) The number of children gives an idea of the extent of influence a class has. A class may require substantially more testing effort for the operations in it if it has many children.
(3) A class may be easily affected by its super classes if the class has many direct super classes (multiple inheritance). As most studies reveal, the use of multiple inheritance greatly increases complexity.
(4) A class may access either a local operation within it or an inherited operation in another class through class inheritance. The absolute number of possible (available) operations for a class affects complexity.

Example 5

Class USER_1 and Class USER_2, which inherit Class USER in Example 3, are designed here. Similarly, Class USER_11 and Class USER_12 inherit Class USER_1 (see Figure 4). For Class USER_1, its depth in the inheritance tree is 1, the number of its subclasses is 2 (Class USER_11 and Class USER_12), and the number of its direct super classes is 1 (Class USER). The number of its available operations is not shown in this example.

[Figure 4. Class hierarchy example: Class USER is the direct super class of Class USER_1 and Class USER_2; Class USER_1 in turn is the direct super class of Class USER_11 and Class USER_12.]

Reuse metric

The reuse metric measures whether the class is a reused one. The metric value is 1 if the class is reused from either the current project or a previous one; otherwise, the metric value is 0. Moreover, a reused class can be either fully reused or partially reused through the inheritance technique.
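The four terms of the class hierarchy metric can be computed from a simple encoding of the inheritance graph. A sketch using the hierarchy of Example 5; the encoding is our own, and we assume USER_1 inherits Class USER's four operations (the example leaves the operation count unstated):

```python
# Class hierarchy metric: depth in the inheritance tree, plus the number of
# sub-classes, plus the number of direct super classes, plus the number of
# local or inherited operations available to the class.
CLASSES = {
    # name: (direct super classes, local operations)
    "USER": ([], ["User", "Get_ID", "Get_Password", "Get_Command"]),
    "USER_1": (["USER"], []),
    "USER_2": (["USER"], []),
    "USER_11": (["USER_1"], []),
    "USER_12": (["USER_1"], []),
}

def depth(name):
    supers, _ = CLASSES[name]
    return 0 if not supers else 1 + max(depth(s) for s in supers)

def available_operations(name):
    supers, local = CLASSES[name]
    ops = set(local)
    for s in supers:                      # inherited operations, transitively
        ops |= available_operations(s)
    return ops

def class_hierarchy_metric(name):
    supers, _ = CLASSES[name]
    subclasses = [c for c, (sup, _) in CLASSES.items() if name in sup]
    return (depth(name) + len(subclasses) + len(supers)
            + len(available_operations(name)))

# Example 5: for USER_1, depth 1 + 2 subclasses + 1 direct super class
# + 4 operations inherited from USER = 8 (under our assumption above).
print(class_hierarchy_metric("USER_1"))
```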

EXPERIMENT

In this section, experiments are conducted to validate the metrics proposed in the previous section.

Experimental procedure

Related research⁵,¹⁸⁻²⁴ was referenced in designing the experimental procedure carried out here. The experimental procedure is as follows:

(1) Choose the object-oriented design method.
(2) Design the experimental materials.
(3) Train the subjects.
(4) Proceed with the experiment.
(5) Data collection.
(6) Data quantification.
(7) Class score evaluation.
(8) Data analysis.

Experiment setup

Each step in the procedure above is described below.

Method

Booch's object-oriented design method, depicted earlier, was used in the experiment for the following reasons:

(1) Booch's method, first proposed in 1986, is a widely used method in object-oriented design.
(2) Its excellent discussion of classification and accessing relationships is particularly worth referencing.
(3) All subjects in the experiment had studied Booch's method¹²,²⁵ for about one year.

Booch's design method, however, does not include the inheritance relationship. Therefore, in this experiment, a step of identifying class inheritance has been added to Booch's method as the fourth step.

Experimental materials

A training material, 'Bank management system', and two experimental materials, 'Bookstore management system' and 'Library management system', were designed for this experiment. These materials are all similar projects from the same data processing domain.


Table 3. Data form

Designer:        Design Date:
System Name:
Class Name:

Item  Description                                                          Score
1     Operation complexity
2     Operation argument complexity
3     Attribute complexity
4     The number of operations which access other classes
5     The number of operations which are accessed by other classes
6     The number of operations which are co-operated with other classes
7     The number of accesses to other classes
8     The number of accesses by other classes
9     The number of co-operated classes
10    Cohesion
11    The depth of the class in the inheritance tree
12    The number of subclasses
13    The number of direct super classes
14    The number of available operations
15    Reuse
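For later analysis, each completed data form translates naturally into a plain record. A hypothetical sketch mirroring Table 3's fifteen items (the field names are ours):

```python
from dataclasses import dataclass

# Hypothetical record mirroring the 15 items of the data form (Table 3).
@dataclass
class ClassDataForm:
    designer: str
    design_date: str
    system_name: str
    class_name: str
    operation_complexity: float              # item 1
    operation_argument_complexity: float     # item 2
    attribute_complexity: float              # item 3
    ops_accessing_other_classes: int         # item 4
    ops_accessed_by_other_classes: int       # item 5
    ops_cooperating_with_other_classes: int  # item 6
    accesses_to_other_classes: int           # item 7
    accesses_by_other_classes: int           # item 8
    cooperated_classes: int                  # item 9
    cohesion: float                          # item 10, (M/N) * 100%
    inheritance_depth: int                   # item 11
    subclass_count: int                      # item 12
    direct_superclass_count: int             # item 13
    available_operation_count: int           # item 14
    reuse: int                               # item 15, 1 if reused else 0
```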

Subjects training

Six subjects participated in the experiment. Their training is described below:

(1) Design method. All subjects are familiar with Booch's design method. Note that the step of inheritance identification has been included in the method.
(2) Training material. It took about four hours to teach the training material ('Bank management system') to all subjects.
(3) Document preparation. The subjects had to complete the design documents (see the sub-section 'Data collection' below) using the Microsoft Word 5.0 and MacDraw 2.0 packages installed on the Macintosh Classics in our laboratory. Since all subjects had used the two packages for over one year, they had no problem preparing the documents.

Proceed with the experiment

After the subjects' training, the requirement documents of the two experimental materials ('Bookstore management system' and 'Library management system') were handed over to the subjects. Each subject worked on both materials. The experiment on 'Bookstore management system' was conducted first, followed by that on 'Library management system' about two months later. No hints were given to subjects during the experiment, and no discussions between subjects were allowed. The subjects, however, could refer to the training material. Also, all work had to be conducted in the laboratory so that the experimenters could keep track of the progress of the experiment.

Data collection

Given the goal of reducing design complexity, questions were asked with regard to what measurements should be collected in order to achieve the goal. As a result, five document forms and one data form (next sub-section) were designed. The document forms are:

(1) The objects and their attributes.
(2) The operations of each object.
(3) The visibility of each object.
(4) The interface of each object.
(5) The description of each class, class operations, and class attributes.

Also, each subject was interviewed to validate his or her documents.

Data quantification

The experimenters quantify the documents above and fill in the data form shown in Table 3. Items 1-3 in Table 3 require human judgement to quantify the documents. The other items can be counted directly from the documents.

Table 4. Evaluation items for class score

Item  Description
1     The class packages a set of operations which are strongly related to each other
2     The class packages a set of data which are strongly related to its operations
3     The class can be easily understood and explained
4     The class can fully express the property of a problem domain
5     The class is easy to implement, test, and maintain
6     The class's operations need not access other classes to complete its operations
7     The designer fully understands the object-oriented design
8     The designer fully understands the requirement document
9     The class can be reused in the application domain


Class score evaluation

Class score measures the complexity of a class according to nine evaluation items, as depicted in Table 4. It is a subjective measurement estimated by the experts. The experts evaluate the design of a class without referring to, or even knowing about, the metrics in the experiment. They then give a score to each evaluation item on a 0 to 10 scale, as shown in Table 5. The scores of the nine items are finally summed up to become the class score.

Table 5. Score for each evaluation item

Score  Behaviour of each item
8-10   Very good
6-7    Good
4-5    General
2-3    Poor
0-1    Very poor

Example 6

Class USER has attributes and operations as follows:

Attributes:
ID: string 17
Password: string 18

Operations:
User(ID, Password)
Get_ID() return ID
Get_Password() return Password
Get_Command() return Command

The expert evaluates Class USER according to the items in Table 4, and assigns a score to each item based on the criteria in Table 5. These scores are finally summed up to become the class score, as shown in Table 6. For example, item 1 is assigned a score of 10, and the class score of Class USER is 74.

Table 6. Class score of Class USER

Item         Score
1            10 (Very good)
2            10 (Very good)
3            8 (Very good)
4            10 (Very good)
5            6 (Good)
6            5 (General)
7            10 (Very good)
8            10 (Very good)
9            5 (General)
Class score  74

Recognizing that expert knowledge has no substitute in software engineering, it is assumed that the class score estimated by the experts is correct. We then statistically examine how our metrics contribute to the class score. By so doing, a regression model can be built to estimate the class score from the metrics without the assistance of experts. Even though the experts may leave the organization later on, we can still apply the regression model to obtain the class score rather accurately. This implies that the quality control practice in the organization will not be severely affected by the turnover of experts.

Data analysis

This experiment used multiple-variable regression analysis to obtain a regression model from the experimental data. The SAS statistical package was used for the analysis. An equation in the multiple-variable regression model²⁶ is:

Y = β₀ + β₁X₁ + β₂X₂ + ... + βₙXₙ

where Y is the dependent variable; Xᵢ is the independent variable for i = 1, 2, ..., n; and βᵢ is the coefficient of each independent variable for i = 1, 2, ..., n. In this experiment, Y denotes the class score, and Xᵢ denotes each metric, such as the operation complexity metric. Notice that the sample size is the number of classes designed in the 12 projects (each of the six subjects worked on two projects).
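The SAS step can be reproduced with any least-squares routine. A sketch on synthetic placeholder data (the paper's raw measurements are not published here), fitting class score against the eight metrics:

```python
import numpy as np

# Fit Y = beta_0 + beta_1*X1 + ... + beta_8*X8 by ordinary least squares,
# standing in for the SAS analysis. The rows below are synthetic
# placeholders, not the paper's measurements.
rng = np.random.default_rng(42)
X = rng.uniform(0, 100, size=(30, 8))   # 30 classes x 8 metric values
y = rng.uniform(40, 100, size=30)       # expert class scores (synthetic)

A = np.column_stack([np.ones(len(X)), X])   # prepend 1s for the intercept
beta, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("intercept:", beta[0])
print("coefficients:", beta[1:])
```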

EXPERIMENTAL RESULT

The experimental results are depicted in Table 7.

Table 7. Results

Metric                            Mean   S      Prob (F > F0)  Coefficient
1. Operation complexity           76.48  19.38  0.0001         -0.49*
2. Operation argument complexity  12.82  17.75  0.0001         -0.35*
3. Attribute complexity           2.18   2.12   0.0019         -0.43
4. Operation coupling             3.79   1.54   0.0001         -7.35*
5. Class coupling                 2.04   0.89   0.0001         -9.61*
6. Cohesion                       0.58   0.39   0.0879         0.43
7. Class hierarchy                4.28   5.53   0.3948         -1.50
8. Reuse                          0.28   0.44   0.0401         6.37

Constant value = 153.95. *Significant under α = 0.01.
Mean: each metric's sample mean. S: each metric's sample deviation. Prob (F > F0): the probability of the null hypothesis. Coefficient: the coefficient of the regression model. Constant value: the intercept of the regression model.



The regression model for the class complexity thus is:

Class score = 153.95
              - 0.49 × (the value of metric 1)
              - 0.35 × (the value of metric 2)
              - 7.35 × (the value of metric 4)
              - 9.61 × (the value of metric 5)
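Applied to a new design, the fitted model is a simple affine function of the four significant metrics. A small illustration using the published coefficients; the coupling values for Class USER below are hypothetical:

```python
def estimated_class_score(op_cx, arg_cx, op_coupling, class_coupling):
    """Class score estimated from the paper's fitted regression model
    (metrics 1, 2, 4, and 5; the other metrics were not significant)."""
    return (153.95 - 0.49 * op_cx - 0.35 * arg_cx
            - 7.35 * op_coupling - 9.61 * class_coupling)

# Class USER from Example 1 (operation complexity 80, argument complexity 15);
# the coupling values 4 and 2 are hypothetical.
print(estimated_class_score(80, 15, 4, 2))  # approximately 60.88
```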

The regression model above shows that, based on the data in this experiment, four metrics contribute significantly to the class complexity: (1) the operation complexity metric; (2) the operation argument complexity metric; (3) the operation coupling metric; and (4) the class coupling metric. Some classes were reused in the experiment. Because the two projects are in the same domain, subjects working on the second project tended to reuse classes, such as Class BOOK, designed in the first project. Further experiments are needed on this. Note that some metric distributions show high sample deviations compared with their sample means. Take metric 2, 'Operation argument complexity', in Table 7, for example. Its high sample deviation (i.e. 17.75) results from the fact that some classes contain several 'file' arguments, which carry high metric values (10 each), while many other classes contain arguments with low metric values, such as Boolean or integer (0 each). This experiment has demonstrated that this metric system is a feasible and promising approach.

CONCLUSIONS

In this paper, we presented a new metric system for object-oriented design, along with its experimental validation. This metric system is a preliminary proposal which requires further research based on: (1) theoretical research on the individual metrics; and (2) experimental work on the contribution of the individual metrics to a software quality. At this stage, the metric system shows the following advantages:

(1) It incorporates experts' insights and judgements. In other words, the subjective judgement of experts and the objective measurement by our metrics are naturally merged in this metric system. We feel that this approach is more pragmatic than purely objective metrics. Moreover, after we have built the metric system, we can utilize it to control design quality for a long period of time without constantly relying on the experts. This characteristic may substantially assist an organization in establishing an effective long-term quality control system.

(2) It is flexible enough to meet the diversified nature of software projects. Since different application domains and different project staff normally require somewhat different metrics, it would be unwise to design a rigid metric for the entire organization. Also, it would be cumbersome to devise instead a set of complicated metrics for the different situations in the organization.

On the contrary, this metric system is capable of generating different regression models for the different situations mentioned above. Note that the regression model will vary according to various data from different application domains and different project teams. That demonstrates the flexibility of this metric system. Furthermore, with the continued use of this metric system, the amount of metrics data will gradually increase. Based on statistical theory, the increased data will incrementally improve the accuracy of the regression models.

ACKNOWLEDGEMENT

The authors are thankful to the reviewers for their helpful suggestions. The work described here was partially supported by the SEED project of the Institute for Information Industry (III) in Taiwan, and by The National Science Council (NSC) Grant NSC 81-0408-E009-27.

REFERENCES

1 Card, D N and Glass, R L Measuring software design quality Prentice-Hall (1990)
2 McCall, J A, Richards, P K and Walters, G F Factors in software quality Rome Air Development Center, RADC-TR-369 (November 1977)
3 Boehm, B W Software engineering economics Prentice-Hall (1981)
4 Basili, V R 'Improving the software process and product in a measurable way' IEEE Video Notes, IEEE Computer Society Press (1989)
5 Curtis, B 'Measurement and experimentation in software engineering' Proc. IEEE Vol 68 No 9 (September 1980) pp 1144-1157
6 Kitchenham, B A, Pickard, L M and Linkman, S J 'An evaluation of some design metrics' Soft. Eng. J. (January 1990) pp 50-58
7 Shepperd, M 'Design metrics: an empirical analysis' Soft. Eng. J. (January 1990) pp 3-10
8 Chidamber, S R and Kemerer, C F 'Towards a metrics suite for object oriented design' in Proc. OOPSLA '91 pp 197-211
9 Henderson-Sellers, B 'Some metrics for object-oriented software engineering' in Proc. TOOLS (1991) pp 131-139
10 Fenton, N E 'Software metrics: theory, tools and validation' Soft. Eng. J. (January 1990) pp 65-78
11 Weyuker, E J 'Evaluating software complexity measures' IEEE Trans. Soft. Eng. Vol 14 No 9 (September 1988) pp 1357-1365
12 Booch, G 'Object-oriented development' IEEE Trans. Soft. Eng. Vol SE-12 No 2 (February 1986) pp 211-221
13 Jalote, P 'Functional refinement and nested objects for object-oriented design' IEEE Trans. Soft. Eng. Vol 15 No 3 (March 1989) pp 264-270
14 Seidewitz, E and Stark, M 'Towards a general object-oriented software development methodology' SIGAda Ada Letters Vol 7 No 4 (July/August 1987) pp VII.4-54-VII.4-67
15 Seidewitz, E 'General object-oriented software development: background and experience' J. Syst. and Soft. Vol 9 No 2 (September 1989) pp 95-108
16 Wasserman, A I, Pircher, P A and Muller, R J 'Concepts of object-oriented structured design' Soft. Eng. Notes Vol 14 No 1 (January 1989) pp 29-53
17 Card, D N, Church, V E and Agresti, W W 'An empirical study of software design practices' IEEE Trans. Soft. Eng. Vol SE-12 No 2 (1986) pp 264-271
18 Chen, J Y 'The evaluation of software engineering environment: with the example of KANGA' J. Nat. Inst. Composition and Translation Vol 19 No 1 (1991) pp 37-77 (Taiwan, in Chinese)
19 Basili, V R and Katz, E E 'Metrics of interest in an Ada development' in Proc. IEEE Workshop Soft. Eng. Tech. Trans., Miami, FL (April 1983) pp 22-29
20 Basili, V R, Selby, R W and Hutchens, D H 'Experimentation in software engineering' IEEE Trans. Soft. Eng. Vol SE-12 No 7 (July 1986) pp 733-743
21 Chen, J Y and Wang, J J 'Comparing object-oriented design methods experimentally' in Proc. 3rd Int. Conf. on Tech. of Object-Oriented Languages and Systems (TOOLS Pacific '90), Sydney, Australia (November 1990)
22 Conte, S D, Dunsmore, H E and Shen, V Y Software engineering metrics and models Benjamin/Cummings (1986)
23 Curtis, B 'Productivity factors and programming environments' in Proc. IEEE Workshop on Soft. Eng. Tech. Trans. (April 1984) pp 143-152
24 Kudo, H, Sugiyama, Y, Fujii, M and Torii, K 'Quantifying a design process based on experiments' J. Syst. and Soft. Vol 9 No 2 (February 1989) pp 129-136
25 Booch, G Object-oriented design with applications Benjamin/Cummings (1991)
26 Sandy, R Statistics for business and economics McGraw-Hill (1990)