J. SYSTEMS SOFTWARE 1993; 20:115-124
Knowledge-Based Test Planning: Framework for a Knowledge-Based System to Prepare a System Test Plan from System Requirements

Dolly Samson
Research Affiliate, Jet Propulsion Laboratory, Pasadena, California, and Computer Information Systems Department, Weber State University, Ogden, Utah
Early planning for system acceptance testing will result in better prepared and executed tests of a system being delivered. Since system acceptance testing is tightly coupled to system requirements, a test plan can be derived directly from requirements statements. For complex systems with hundreds of requirements, knowledge-based support has the potential to improve the quality and timeliness of a test plan document. This article discusses a framework for such knowledge-based support. First, it describes a taxonomy for classifying requirements and two exercises given to testers to validate the taxonomy and the classification concept. Then, the test-types taxonomy, a rudimentary knowledge base, and matching algorithms are presented. The article gives a complete example of matching a requirements statement to an appropriate test type for demonstrating system compliance. An overview is presented of ongoing efforts to construct a robust, knowledge-based product that will implement this framework to automatically prepare a system test plan from requirements.
INTRODUCTION
Test engineering is an integral part of every system project, and test activities occur in all development phases. The most visible portions of system testing occur during actual execution of system tests. However, that portion is only a culmination of test planning and preparation activities that began during the initial system requirements phase. Test planning during the requirements phase contributes to early problem detection in three ways.
First, if a requirement is determined to be untestable, then there is a problem with it. Second, a system must be amenable to tests; that is, a system must reveal the behavior being tested, or special equipment must be built to observe the behavior. And third, the process of examining requirements for testability can reveal other problems such as conflict, incompleteness, and ambiguity, which should be resolved as early as possible.

This article describes a framework that has been developed in ongoing research and development supported by Weber State University and the Jet Propulsion Laboratory (JPL) to provide automated assistance for early test planning. The objective of this work is to build a knowledge-based test planner that identifies high-level system tests based on a system's requirements. Thus, the main contribution of this work to date is using requirements classification as a vehicle for system test planning. Significant test planner features described here include classification of user requirements statements, classification of test types, a domain-specific testing knowledge base, and an inferencing mechanism to match requirements with test types. Also, this article provides experimental evidence that approximate understanding of natural language requirements through classification can be used to support early system test planning.
SIGNIFICANCE OF RESEARCH SUPPORTING AUTOMATED TEST PLANNING
The two novel features of the test planning system described here are that it provides support for test planning through classification of natural language requirements and that this support can be automated. During the system testing cycle, the test plan must be made visible to users, developers, and managers. Test activities begin with test plan preparation in the requirements definition phase, continue through the design and implementation phases, and culminate in system test execution [1]. These two notions, early test planning and integral development of a test plan within the system development life cycle, provide an impetus for development of an automated test planning tool given the distinctive name Test Planner.

The high penalty of discovering system defects is well known. Recent studies show that requirements analysis absorbs 5% of development costs and provides 50% of the leverage to influence improved system quality [2]. Another way to look at this is that it requires 100 times more effort to correct errors in requirements discovered at implementation than it does to correct these same errors if they are discovered during the requirements engineering process [3]. A recent study conducted at JPL strongly supports Boehm's data. Kelly [4] collected data on 203 inspections performed on five software-intensive projects at JPL over a three-year period. The average cost to fix defects during inspection was 0.5 hours, while the average cost to fix defects during testing ranged from 5 to 17 hours among the projects studied.

Automated support for system-level test planning from requirements fills a void in current productivity tool offerings and opens the door to many requirements-level planning and management activities. Few tools work directly from natural language requirements; most tools that use requirements statements require a formal language translation of those requirements. For example, Cadre Technologies, Inc.'s Teamwork tool automates object-oriented analysis methods via its own formal specification language.
Test planning reveals needs for instrumentation, testbeds, and interface simulators that must be developed in support of test. Early planning facilitates integral development of these artifacts to ensure that they will be available when the system is ready to test. Integration of system test-planning activities with requirements-definition activities facilitates traceability of system features back to the requirements. Traceability between requirements and test sets ensures demonstration of compliance for each requirement in a system acceptance test. Traceability also supports revalidation of a test plan when changes are made to requirements during development.

A not uncommon approach to system testing is to "throw the system over the wall to see what happens."¹ Today's complex systems cannot risk this approach. Early test planning results in more diligent and focused testing along with increased resource and productivity benefits.

The framework for early test planning presented here is based on the design of Test Planner, a knowledge-based system. Figure 1 gives an overview of Test Planner's architecture. The purpose of Test Planner is to prepare a high-level system test plan based on analysis of a system requirements document. Three major components are a requirements taxonomy, a test-types taxonomy, and a knowledge-based process that matches requirements to test types. These components are described in the following sections.
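As a minimal sketch of how these three components might fit together in code (the type names and the build_test_plan composition are hypothetical illustrations, not Test Planner's published interfaces):

# Illustrative skeleton of the Figure 1 pipeline (hypothetical names).
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class RequirementVector:        # one term per facet of the requirements taxonomy
    feature: Optional[str]
    obj: Optional[str]          # the Object facet
    function: Optional[str]
    performance: Optional[str]
    quality_goal: Optional[str]

@dataclass
class TestType:                 # one term per facet of the test-types taxonomy
    name: str
    structural: str
    functional: str
    environment: str
    condition: str

def build_test_plan(
    requirements: List[str],
    classify: Callable[[str], RequirementVector],
    match: Callable[[RequirementVector, List[TestType]], TestType],
    test_types: List[TestType],
) -> List[Tuple[str, str]]:
    """High-level plan: classify each requirement, then recommend the closest test type."""
    return [(text, match(classify(text), test_types).name) for text in requirements]

The classification and matching steps themselves are sketched in later sections.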
DEVELOPMENT OF A REQUIREMENTS TAXONOMY
The foundation of Test Planner is a requirements taxonomy to facilitate requirements classification.
¹ This definition was written by a system tester as a comment in the classification exercise described later.
Figure 1. Conceptual architecture of Test Planner. [Figure: the Requirements Taxonomy and the Test-Types Taxonomy feed a Knowledge-Based Test Planning process, which produces a High-level System Test Plan.]
Classification supports problem solving by providing a framework for systematically relating data (requirements) to a preenumerated set of solutions (test tools and methods). Abstraction, heuristic association, and refinement are processes that relate data to a solution.

Classification of natural-language requirements statements was chosen over a more traditional translation-to-formal-language approach as a means of deriving tests for three reasons. Most importantly, user requirements frequently contain some vagueness and ambiguity because the system is in its early stages of development and is likely not completely understood. Restating requirements in a formal, restricted language risks misinterpretation and invalid assumptions. Second, Test Planner uses classification to do approximate processing of natural-language text to characterize requirements for test planning, not to produce design documents or an executable system specification (cf. the PAISLey language, described in [5]). This is because the difficulty of classifying requirements according to their type is not as great as understanding them to demonstrate system functionality. Finally, with a classification approach, a tester does not need to learn a new language, with its restricted vocabulary and formal syntax, into which original requirements are transformed.

A classification scheme chosen to characterize system requirements must be flexible to accommodate change as the knowledge base matures and be adaptable to new system domains. These criteria led to a faceted scheme suggested by Pfleeger [6] and based on work by Prieto-Diaz and Freeman [7, 8], who classified software modules for reuse. The faceted scheme very much resembles a relational data base architecture, where each facet describes a system characteristic. Each facet has a set of terms, along with term synonyms, that describe particular instances of the characteristic. Five facets were identified and, taken together, can be used to form a requirements vector that characterizes a system requirement:

1. Feature - the basic service described by a requirement. Examples: data acquisition, input, security, transformation.
2. Object - the artifact that facilitates a feature. Examples: data, procedure, status, user interface.
3. Function - what is being done to, with, or by the feature. Examples: access, display, link, validate.
4. Performance - behavioral characteristic in an operational environment. Examples: batch, parallel, random, static.
5. Quality goal - nonfunctional subjective characteristic. Examples: accessibility, maintainability, reliability.
The facets for this taxonomy and some of the terms were developed from analysis of how CASE tools use requirements (e.g., dataflow diagrams, which highlight entities and processes) and scrutiny of the requirements analysis process. Over 100 requirements statements from three software-based systems were indexed (keywords were selected as characteristic indicators), and classification exercises were given to professional testers to provide most of the taxonomy's terms. Appendix A lists the structure and content of the requirements taxonomy developed for data management systems.

The following requirement statement gives an example of classifying a requirement. The sample requirement is from a ground data system, specifically, an earth-based receiving system that is capable of acquiring radar signals, processing those signals into images, and storing, disseminating, and archiving images [9]. The main component of this system is a data base containing images accompanied by descriptive information about these images.

Requirement. Load and maintain a directory that is searchable from a computer terminal.

Classification.
load and maintain indicates a feature of data management
directory is a specific object named directory
searchable describes a function of search
computer terminal indicates interactive performance
no quality goal is explicitly stated

As shown here, classification of individual requirements is accomplished through manual analysis of nouns and verbs and is supported by an engineering thesaurus [10] and a rudimentary thesaurus for ground data systems. Automation of the requirements classification process will be essential for useful application of the taxonomy to very large systems. A separate research effort is focusing on automated classification of natural language requirements. Palmer and Liang [11] present an excellent discussion of this automated classification process, which is based on a two-tiered clustering algorithm.
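To illustrate how classification by nouns and verbs might be supported in code, here is a minimal keyword-lookup sketch applied to the sample requirement. The keyword table is a toy assumption (Test Planner relies on an engineering thesaurus and a domain thesaurus rather than a hand-coded table), and the function name classify is hypothetical.

# Hypothetical sketch: keyword-driven faceted classification of one requirement.
KEYWORDS = {
    "feature":     {"load": "data management", "maintain": "data management",
                    "store": "data storage", "secure": "security"},
    "object":      {"directory": "directory", "catalog": "catalog", "report": "report"},
    "function":    {"searchable": "search", "display": "display", "sort": "sort"},
    "performance": {"terminal": "interactive", "batch": "batch", "realtime": "realtime"},
    "quality":     {"reliable": "reliability", "portable": "portability"},
}

def classify(requirement: str) -> dict:
    """Return a requirement vector: zero or more terms per facet."""
    words = requirement.lower().split()
    vector = {}
    for facet, table in KEYWORDS.items():
        hits = {term for word in words for key, term in table.items() if key in word}
        vector[facet] = sorted(hits)      # an empty list means the facet is not explicitly stated
    return vector

req = "Load and maintain a directory that is searchable from a computer terminal."
print(classify(req))
# -> {'feature': ['data management'], 'object': ['directory'],
#     'function': ['search'], 'performance': ['interactive'], 'quality': []}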
The process of indexing and classifying system-level requirements for a project can provide user interaction opportunities not presently available and can reveal several types of potential problems for a new system:

User interaction. Utilization of classification provides the opportunity for user and analyst to interact at an early stage to sort out potential issues and reach a good understanding of essential requirements.

Identifying conflict among requirements. Requirements that classify with known conflicts can be reported (e.g., batch versus interactive data input).

Providing different views. Requirements can be sorted according to similar terms or grouped on user-selected attributes (e.g., list all requirements with a feature of data security).

Detecting ambiguity and vagueness. If a known ambiguous (e.g., "power") or vague (e.g., "friendly") word is discovered, it can be flagged for resolution.

Translation of a natural-language requirement into a restricted, formal syntax forces removal of all ambiguity and vagueness. Because of the training necessary to understand a formal syntax, the user typically will not be inclined or able to discuss the requirements with an analyst in order to resolve differences or come to an understanding of user needs. While ambiguity and vagueness are to be eliminated before implementation, some of those characteristics may exist in early requirements because of innovative or unusual system characteristics. Forcing these problems out through translation to a formal language usually conceals the problem and contributes to later difficulties when user needs are not matched by a formal statement of requirements.
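These uses can be illustrated with a small sketch over a few classified requirements: grouping by a selected facet term and reporting known conflicts or vague words. The requirement records, the conflict set, the vague-word list, and the helper names are invented for the example.

# Hypothetical sketch: simple views and checks over classified requirements.
classified = [
    {"id": "R1", "feature": "data security",   "performance": "interactive"},
    {"id": "R2", "feature": "data management", "performance": "batch"},
    {"id": "R3", "feature": "data security",   "performance": "batch"},
]

VAGUE_WORDS = {"friendly", "power"}                  # flagged for resolution with the user
CONFLICTS = {frozenset({"batch", "interactive"})}    # known conflicting performance terms

def view_by(facet, term, reqs):
    """e.g., list all requirements whose feature is 'data security'."""
    return [r["id"] for r in reqs if r.get(facet) == term]

def conflicting_pairs(reqs):
    """Report pairs of requirements whose performance terms are known to conflict."""
    pairs = []
    for i, a in enumerate(reqs):
        for b in reqs[i + 1:]:
            if frozenset({a["performance"], b["performance"]}) in CONFLICTS:
                pairs.append((a["id"], b["id"]))
    return pairs

def flag_vague(text):
    return sorted(w for w in VAGUE_WORDS if w in text.lower())

print(view_by("feature", "data security", classified))              # ['R1', 'R3']
print(conflicting_pairs(classified))                                 # [('R1', 'R2'), ('R1', 'R3')]
print(flag_vague("The interface shall be friendly and easy to use."))  # ['friendly']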
VALIDATION OF THE REQUIREMENTS TAXONOMY

Most terms in the requirements taxonomy were obtained through study of requirements documents and then validated for usability by two groups of testers. Three high-level requirements documents for ground data systems were used: two from JPL [9, 12] and one from Hill Air Force Base [13]. The first describes a satellite system, its data products, and interactive data access; its 47 functional requirements statements were classified. The second ground data system had half of its 130 requirements statements classified. The third document defines requirements for a telemetry data base supporting the Peacekeeper weapon system; 27 of its 37 requirements were classified. This taxonomy was applied to a very different domain when requirements [14] for an instrument data processor embedded in a spacecraft were classified. The result was that 30% of the requirements taxonomy terms were applicable and an equal number of new terms were added. This indicates that the original taxonomy was not sufficiently robust or that significant work must be done to classify new domains.

A classification exercise was developed to test the appropriateness and usefulness of the requirements taxonomy. The exercise was performed by nine system testers at JPL using excerpts from a JPL requirements document. It consisted of 10 ground data system requirements to be classified, the requirements taxonomy containing the five facets and 88 terms, and classification instructions, including written instructions with an example. Approximately fifteen minutes was spent with each tester to explain the classification process and review the example. Six of the nine testers returned the exercise, with the following results:

Three completed the classification exercise as given.
One indexed 90% of the requirements.
One indexed 40% of the requirements.
One created his own set of terms.
Ten new terms were added to the taxonomy.

The testers had an average of 6.8 years of system test experience and 5 years of ground data systems experience. There was little agreement among the testers in actually applying terms to describe requirements. In only two out of 50 cases (five facets for each of ten requirements) did all five testers who used the terms given in the exercise agree on the choice of a term to classify one facet of a requirement. In 17 cases, more than half (i.e., three or more) agreed on the use of a term. Figure 2 shows the results of the JPL classification exercise; data points 1X-5X represent the number of times the same term was used by different people to classify the same requirement. (Actual counts are listed in Appendix B.) Note that each row may sum to over 50 terms because more than one term could be used by one person for classifying a single requirement. The feature facet showed the most consistency based on the number of times a single term was used; 33 different terms were used an average of 1.9 times for any single requirement and 15 terms were used only once.

The smallness of the tester sample and the variety of classifications in this exercise do not validate the correctness of the taxonomy. However, the exercise shows that testers can classify requirements statements, that the taxonomy was usable, and that some agreement exists among the testers in the use of terms to index a requirements statement.
Figure 2. JPL results of classification exercise. [Figure: bar chart by facet of the number of times the same term was used by 1-5 testers (1X-5X); counts are listed in Appendix B.]
Remarkably different results were obtained from a similar exercise given to 10 programmer/analysts at Hill Air Force Base (AFB). The format of their exercise was identical to the JPL exercise except that subject requirements came from a requirements document developed at Hill AFB and the taxonomy was expanded to 103 terms, including new terms
added from the JPL exercise and new terms specific to the Hill AFB document. The exercise was distributed by a colleague; the programmer/analysts were given the same written instructions as the JPL testers but no verbal instructions. The Hill AFB group had an average of 9.5 years of system test experience (40% more than JPL) and 6 years of data base development experience. Seven of the ten people completed and returned the exercise.
Figure 3. Hill AFB results of classification exercise. [Figure: bar chart by facet of the number of times the same term was used by 1-7 analysts (1X-7X); counts are listed in Appendix B.]
Figure 3 shows a tabulation of results of the Hill AFB exercise; data points 1X-7X represent the number of times the same term was used by different people to classify a facet for the same requirement. (Actual counts are given in Appendix B.) The degree of agreement is much greater than with the JPL group: the same term was applied to describe a requirement by all seven programmer/analysts eight times (cf. two for JPL). The same term was used by more than half the people (i.e., four times or more) 34 times (cf. 17 for JPL). One requirement had agreement among six people for the feature, object, and performance facets and among all seven for the function facet. In fact, the function facet elicited the most agreement across all 10 requirements, while the performance and quality facets were the most diverse.

The source of the difference in agreement of term usage between the JPL and Hill AFB groups is not clear, and more investigation is needed. Possible reasons include the fact that Hill AFB analysts follow a more specific requirement statement format than do JPL analysts, or that the verbal instructions given to JPL testers were more confusing than helpful. These exercises present experimental evidence that the faceted requirements classification can be performed by computing professionals and may be used with minimal training. Further research is necessary to identify the sources of disagreement as well as the impact of the disagreement on the resulting test plan.
CLASSIFICATION OF TEST TYPES AND METHODS
In planning acceptance testing during the requirements definition phase of system development, it is likely that only high-level, generic types of tests can be identified. It is usually too early to identify specific data items or algorithms to be tested. System-level testing must include tests to cover hardware, personnel, documentation, and procedures in addition to software tests. Examples of these test types include the boundary value test, which explores boundary conditions of a requirement, the documentation test, which tests the adequacy and accuracy of user documentation, and the installability test, which examines correctness of installation procedures.

Just as requirements are classified according to a requirements taxonomy, test types are classified according to a testing taxonomy. This classification facilitates matching requirements types with appropriate test types. The testing taxonomy also uses a faceted structure consisting of four facets:

Structural technique - how the test relates to details of system implementation. Examples: stress, recovery, security.
Functional technique - how the test relates to examination of system functions. Examples: algorithm analysis, control, regression.
Operational environment - describes the environmental support needed by the test. Examples: manual, simulation, testbed, live.
Conditions tested - defines conditions measured or validated by the test. Examples: existence, accuracy, timing, typing.

Identification of facets and terms for the testing taxonomy was more straightforward than for the requirements. There is a body of literature which reviews and classifies test tools and methods (see [15, 16] for examples). The testing taxonomy applies a faceted structure to the informal classifications; terms were defined by identifying features among the test types. The structural and functional facets follow common testing breakdowns. Operational environment and conditions tested were added to more finely differentiate among test types (e.g., different tools would test the conditions existence and accuracy). Appendix A shows the structure and content of the testing taxonomy.

A small selection of test types has been classified to serve as the data base of test types which will be matched to requirements according to the requirement's classification. This test types data base is static, and so the classification was done manually from inspection. Test types in the current data base are boundary value analysis, checklist, modeling, parallel simulation, snapshot, system log, and volume testing. As an example of classifying a test type according to this taxonomy, consider the following test type definition:

Test type. Volume testing uses specific test data sets to test predetermined system limits and verify how a system performs when limits are reached or exceeded.

Classification.
Structural technique: execution (software must be executed to use it)
Functional technique: stress (tests system limits)
Operational environment: live or simulation (either environment acceptable)
Conditions tested: boundary (looks for volume boundaries)
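For illustration, the sketch below records classified test types as simple Python dictionaries, with the volume-testing entry taken from the classification just given. The record layout, the helper function, and the checklist entry's facet values are assumptions made for this sketch rather than details published in the article; the query helper anticipates the sorting benefit discussed next.

# Sketch of a small test-types data base (checklist facet values are placeholders).
TEST_TYPES = [
    {
        "name": "volume testing",
        "structural": "execution",              # software must be executed to use it
        "functional": "stress",                 # tests system limits
        "environment": {"live", "simulation"},  # either environment acceptable
        "condition": "boundary",                # looks for volume boundaries
    },
    {
        "name": "checklist",                    # placeholder values, for illustration only
        "structural": "inspection",
        "functional": "requirements",
        "environment": {"manual"},
        "condition": "existence",
    },
]

def by_condition(condition):
    """Filter or sort test types by one facet of the testing taxonomy."""
    return [t["name"] for t in TEST_TYPES if t["condition"] == condition]

print(by_condition("boundary"))   # ['volume testing']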
As with the requirements taxonomy, another benefit of this classification scheme is that a tester can
sort recommended test types according to different characteristics. For example, a tester can determine what kind of timing analysis tools are needed by listing all requirements that require timing analysis. Work remains to have system testers validate the testing taxonomy, although there is precedent given in the references cited above for the classification used.

MATCHING REQUIREMENTS TO TEST TYPES
This section describes the matching process framework for the prototype Test Planner. Classified requirements drive the matching process, which is rule based. A rudimentary set of rules has been defined to associate particular requirements types with certain test characteristics. With an average of 13 terms in each facet of the requirements taxonomy, approximately 370,000 distinct requirements can be described. However, many of those combinations are nonsensical, for example, "archiving the documentation of the personnel in realtime emphasizing interoperability" (terms in bold). Metarules eliminate nonsensical combinations and group combinations that are similar in relation to their test strategy to greatly diminish the rule space. In addition, context dependencies provide rules that take precedence over basic rules. To direct rule processing, the requirements function facet will be the first discriminator because it is the most descriptive. (Palmer and Liang [11] also use verbs as the first discriminator in their two-tiered clustering algorithm.)

The first step of matching produces a set of terms to describe the "ideal" test type, that is, characteristics of a test type that would provide the best validation of each requirement. The second step searches the test type data set to find the most closely matching test type. It may appear to be more efficient to simply associate each requirement type with a "best" test type and provide the match in one step; however, uncoupling requirements and test types provides more system flexibility and extensibility. This way, new terms can be added to the requirements or test types taxonomy and new rules can be added to the knowledge base without explicitly linking requirements to test types.

The knowledge-based inferencing mechanism that matches requirements to tests must have a domain-specific component because context dependency is an issue in classifying and testing requirements. Examples of context dependency include such words as "remove", where "remove a file from a disk" is different from "remove a disk from a drive", and "power", where "computing power" refers to
millions of instructions per second and "electrical power" refers to watts. Different test strategies will be applied according to the context of a term. In the ground data system, domain knowledge is applied to classifying use of particular words in context. For example, ground data system requirements frequently describe a product, which is actually an image of Earth or another planet or star; this would require a different test strategy than a requirement for the product of two variables.

A COMPLETE EXAMPLE
The following example illustrates what Test Planner does to match a requirement with a test type. Activities of the two major processes of Test Planner are classifying requirements and matching them to test types. Recall the ground data system requirement used earlier: "Load and maintain a directory that is searchable from a computer terminal." The requirement has been classified as:

Feature: data management
Object: directory
Function: search
Performance: interactive
Quality goal: nil

Next, a rule base is consulted for matching requirements to test type characteristics. The rules will yield the test type characteristics needed to test a requirement with the terms (data management, directory, search, and interactive) used to classify it. A substantial rule base has not been developed yet; however, some example rules have been derived through interviews with system testers:

IF (feature = dataManagement) AND (function = search)
THEN conditionTested := location;

Some rules will be application dependent, so a ground data system rule is:

IF (systemType = groundData) AND (object = directory)
THEN functionalTechnique := intersystem;
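The two example rules above could be recorded and applied along the following lines. This is only a sketch under the assumption that a rule maps a condition on the requirement vector (and system type) to one facet of the ideal test profile; the Python names (rule_1, ideal_profile, and the dictionary keys) are illustrative, not Test Planner's.

# Hypothetical encoding of the two example rules and their application (step 1 of matching).
def rule_1(req, system_type):
    if req["feature"] == "data management" and req["function"] == "search":
        return {"condition_tested": "location"}
    return {}

def rule_2(req, system_type):                      # application (domain) dependent rule
    if system_type == "ground data" and req["object"] == "directory":
        return {"functional_technique": "intersystem"}
    return {}

RULES = [rule_1, rule_2]

def ideal_profile(req, system_type):
    """Accumulate the facet terms describing the 'ideal' test type for one requirement."""
    profile = {}
    for rule in RULES:
        profile.update(rule(req, system_type))
    return profile

req = {"feature": "data management", "object": "directory",
       "function": "search", "performance": "interactive", "quality": None}
print(ideal_profile(req, "ground data"))
# -> {'condition_tested': 'location', 'functional_technique': 'intersystem'}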
The combination of terms produced by the rules provides a faceted definition of the ideal tool as well as facet weights derived from the rule base along with domain knowledge. For the ground data system classified above, the most important facet to match with a test type is structural technique; the least important facet to match is the condition tested. A complete set of domain weights has not been developed. Weights are given in parentheses (lowest weight indicates the most important to match):

Structural technique: operations (4)
Functional technique: intersystem (3)
Operational environment: live (2)
Condition tested: location (1)

Finally, the data base of classified test types which was described earlier is searched and conceptual difference scores are computed. Scores are the sum of the weighted absolute differences between facet weights of the ideal characteristic and the actual characteristic. (See [7, 17, 18] for a discussion of matching algorithms.) Some scores that are computed for test types from the data base are as follows:

System log: 0
Checklist: 20
Parallel simulation: 55
Modeling: 70
Snapshot: 91
Volume testing: 270
Boundary value analysis: 321

The lowest scoring test type is recommended for this requirement and will appear in a preliminary test plan. Thus, Test Planner's recommendation can be interpreted to mean that a system log should be used to record user accesses to directory, catalog, and inventory contents. This information tells the system designer that a logging mechanism must be included in system development in order to test the requirement under scrutiny. Classification and matching processes are applied to all functional requirements for the system under review. The result is a high-level test plan that recommends a generic test type for each requirement.
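A simplified sketch of this scoring step is shown below. The facet terms and the 4-3-2-1 weights are the ones listed for this example, but the 0/1 term-mismatch distance and the candidate classifications other than volume testing are assumptions of the sketch, so its toy scores do not reproduce the article's values.

# Simplified sketch of step 2: score candidate test types against the ideal profile.
# The 0/1 distance and the 'system log' facet values are illustrative assumptions; the
# article's scores use weighted absolute differences over facet weights instead.
WEIGHTS = {"structural": 4, "functional": 3, "environment": 2, "condition": 1}

IDEAL = {"structural": "operations", "functional": "intersystem",
         "environment": "live", "condition": "location"}

CANDIDATES = {
    "system log":     {"structural": "operations", "functional": "intersystem",
                       "environment": "live", "condition": "location"},
    "volume testing": {"structural": "execution", "functional": "stress",
                       "environment": "live", "condition": "boundary"},
}

def score(candidate):
    """Weighted mismatch count; lower scores indicate a closer conceptual match."""
    return sum(WEIGHTS[f] * (0 if candidate[f] == IDEAL[f] else 1) for f in WEIGHTS)

for name in sorted(CANDIDATES, key=lambda n: score(CANDIDATES[n])):
    print(name, score(CANDIDATES[name]))
# system log 0        (closest match, so it would be recommended)
# volume testing 8    (toy score; the article reports 270 under its own metric)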
FUTURE WORK ON THE AUTOMATED TEST PLANNER PROTOTYPE

Significant work remains to complete a prototype system test planner and develop a more robust matching rule base. In addition to constructing a working prototype from the framework described here, effort is focused on two areas:

further improvement and validation of the matching process between requirements and test type characteristics;

definition of test strategies for the ground data systems domain.

Continued involvement of system testers and programmer/analysts at JPL and Hill AFB provides system testing expertise for the domain knowledge base. Mapping system requirements to test types is a very complex task; there is a separate effort underway at JPL to elicit and record the knowledge of 13 senior test leaders and engineers. Information obtained from this effort will be codified in a rule base that will be used to train and support testers as well as to provide mapping heuristics for Test Planner. Although this particular elicitation effort focuses on the spacecraft domain (i.e., hardware), techniques and lessons learned will be transferred to knowledge acquisition for ground data systems testing. Based on recent work to classify requirements in both instrument and ground data systems domains, it appears that techniques used for classifying requirements and mapping requirements to test types can be applied to diverse system domains.

Software engineers acknowledge the importance of early test planning; however, development pressures frequently delay planning activities until implementation begins. Consequences of not having tools and techniques available early in the life cycle include a continued high cost to fix defects discovered late, resulting in increased development costs or unfixed, potentially fatal, system defects. An automated tool to build a high-level test plan for large systems will facilitate testing activities early in the system development life cycle.

ACKNOWLEDGMENTS
This research has been supported by a NASA/ASEE Summer Faculty Fellowship at the Jet Propulsion Laboratory, Pasadena, California, and by Weber State University. I am grateful for the comments of the two anonymous reviewers, which strengthened the presentation of this work.

REFERENCES
1. W. L. Bryan and S. C. Siegel, Software Product Assurance Techniques for Reducing Software Risk, Elsevier Science Publishing Co., New York, 1988.
2. A. S. Shumskas, Software-TQM, T&E, OSD and YOU, Proceedings of the National Symposium on TQM for Software, Washington, DC, 1991.
3. B. W. Boehm, Software Engineering Economics, Prentice-Hall, Englewood Cliffs, New Jersey, 1981.
4. J. C. Kelly, J. S. Sherif, and J. Hops, An Analysis of Defect Densities Found During Software Inspections, Proceedings of the 15th Software Engineering Workshop, Goddard Space Flight Center, Maryland, 1990.
5. P. Zave, An Operational Approach to Requirements Specification for Embedded Systems, IEEE Trans. Software Eng. 8:250-269 (1982).
6. S. L. Pfleeger, personal correspondence, 1989.
7. R. Prieto-Diaz and P. Freeman, Classifying Software for Reusability, IEEE Software 4:6-16 (1987).
8. R. Prieto-Diaz, Implementing Faceted Classification for Software Reuse, Commun. ACM 34:88-97 (1991).
9. J. E. Hilland, Archive and Operations System Functional Requirements Document, JPL Internal Document D-4738, Rev. A, Jet Propulsion Laboratory, Pasadena, California, 1988.
10. DTIC, DTIC Retrieval and Indexing Technology, DTIC, Cameron Station, Alexandria, VA, 1987.
11. J. D. Palmer and Y. Liang, Indexing and Clustering of Software Requirements Specifications, Info. Decis. Technol. 18:283-299 (1992).
12. J. M. Gunn, Venus Radar Mapper, Mission Operations System Requirements, Ground Data System, JPL Internal Document VRM-MOS-3-200, Rev. A, Jet Propulsion Laboratory, Pasadena, California, 1989.
13. TRW Defense Systems Group, Software Requirements for the Peacekeeper Telemetry Analysis System Database Subsystem, PTAD-SRS-1, TRW, Ogden, Utah, 1988.
14. D. A. Geer, NASA Scatterometer Digital Subsystem Software Requirements Document, JPL Internal Document D-3863, Jet Propulsion Laboratory, Pasadena, California, 1988.
15. B. Beizer, Software Testing Techniques, Van Nostrand Reinhold, New York, 1990.
16. W. C. Hetzel, The Complete Guide to Software Testing, QED Information Sciences, Wellesley, Massachusetts, 1988.
17. G. Salton, Another Look at Automatic Text Retrieval Systems, Commun. ACM 29:648-656 (1986).
18. G. Salton and C. Buckley, Term-Weighting Approaches in Automatic Text Retrieval, Info. Proc. Manag. 24:513-523 (1988).
APPENDIX A: Taxonomies

Requirements Taxonomy

Feature
1. Budget  2. Data acquisition  3. Data communication  4. Data compression  5. Data management  6. Data retrieval  7. Data storage  8. Data validation  9. Documentation  10. Format  11. Imaging  12. Input  13. Media  14. Output  15. Process status  16. Protocol  17. Scheduling  18. Security  19. System status  20. Transformation

Object
21. Data  22. Data source  23. Directory  24. Documentation  25. Entire system  26. External interface  27. Firmware  28. Identification  29. Image record  30. Hardware  31. Monitor  32. Operations  33. Performance  34. Personnel  35. Procedures  36. Record  37. Report  38. Request  39. Schedule  40. Software  41. Standards  42. Status  43. Tape  44. Text  45. User interface

Function
46. Access  47. Archive  48. Acquire  49. Browse  50. Calculate  51. Catalog  52. Connect  53. Convert  54. Decode  55. Display  56. Distribute  57. Encode  58. Enforce  59. Index  60. Insert  61. Link  62. Manage  63. Predict  64. Process  65. Produce  66. Query  67. Recover  68. Report  69. Restart  70. Route  71. Search  72. Select  73. Sort  74. Startup  75. Store  76. Track  77. Transfer  78. Update  79. Validate

Performance
80. Batch  81. Dynamic  82. Interactive  83. Intersystem  84. Near realtime  85. Parallel  86. Periodic  87. Prioritized  88. Random  89. Realtime  90. Sequential  91. Static

Quality attribute
92. Accessibility  93. Adequacy  94. Availability  95. Compatibility  96. Compliance  97. Consistency  98. Efficiency  99. Interoperability  100. Maintainability  101. Portability  102. Reliability  103. Timeliness

Test-Types Taxonomy

Structural technique
1. Compliance  2. Execution  3. External  4. Inspection  5. Operations  6. Path  7. Recovery  8. Security

Functional technique
9. Algorithm analysis  10. Control  11. Error handling  12. Intersystem  13. Parallel  14. Regression  15. Requirements  16. Stress

Operational environment
17. Computer supported  18. Live  19. Manual  20. Prototype  21. Simulator  22. Testbed

Conditions tested
23. Accuracy  24. Adequacy  25. Boundary  26. Compliance  27. Existence  28. Load  29. Location  30. Logic  31. Quality  32. Sequence  33. Size  34. Timing  35. Typing  36. Utilization

APPENDIX B: Classification Results

JPL Classification Exercise
Counts of the number of times the same term was used (1-5 testers):

Facet         1    2    3    4    5    Weighted Average   New Terms   Left Blank
Feature       15   9    7    2    0    1.9                1           8
Object        29   8    0    1    0    1.3                3           8
Function      20   14   1    0    1    1.4                11          8
Performance   23   4    2    1    1    1.5                2           8
Quality       34   6    1    0    0    1.2                1           17
Hill AFB Classification Exercise

Counts of the number of times the same term was used (1-7 analysts):

Facet         1    2    3    4    5    6    7    Weighted Average   New Terms   Left Blank
Feature       11   6    1    0    1    5    1    2.7                2           0
Object        14   5    4    4    2    1    0    2.3                1           0
Function      7    1    3    0    1    1    6    3.7                0           0
Performance   21   7    2    1    0    3    1    2.0                0           0
Quality       21   3    1    6    0    1    0    1.9                0           11