World Abstracts on Microelectronics and Reliability

… to identify potential hazards in requirements and design. As hazards are identified, software defenses can be developed using fault-tolerant or self-checking techniques to reduce the probability of their occurrence once the program is implemented. Critical design features can also be demonstrated analytically a priori, using proof-of-correctness techniques prior to their implementation, if warranted by cost and criticality.

Fault-tolerant software. HERBERT HECHT. IEEE Trans. Reliab. R-28, (3) 227 (August 1979). Limitations in the current capabilities for verifying programs by formal proof or by exhaustive testing have led to the investigation of fault-tolerance techniques for applications where the consequence of failure is particularly severe. Two current approaches, N-version programming and the recovery block, are described. A critical feature in the latter is the acceptance test, and a number of useful techniques for constructing these are presented. A system reliability model for the recovery block is introduced, and conclusions derived from this model that affect the design of fault-tolerant software are discussed.
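The recovery-block structure and its acceptance test, as described in the Hecht abstract above, can be sketched as follows (a minimal illustration only; the routine names, the square-root example and the acceptance predicate are hypothetical and are not taken from the paper):

def recovery_block(inputs, alternates, acceptance_test):
    """Run alternate routines in order until one passes the acceptance test."""
    checkpoint = dict(inputs)                       # save input state once
    for alternate in alternates:
        try:
            result = alternate(dict(checkpoint))    # each alternate starts from the checkpoint
        except Exception:
            continue                                # an exception counts as a failed alternate
        if acceptance_test(result):
            return result                           # first acceptable result is delivered
    raise RuntimeError("all alternates failed the acceptance test")

# Hypothetical use: two square-root routines guarded by a reasonableness test on x = 2.0.
primary  = lambda state: {"root": state["x"] ** 0.5}
fallback = lambda state: {"root": state["x"] / 2}   # crude alternate; normally rejected
accept   = lambda result: abs(result["root"] ** 2 - 2.0) < 1e-6
print(recovery_block({"x": 2.0}, [primary, fallback], accept))

The point of the scheme is that the acceptance test judges only the result, so it can be much simpler, and hence more trustworthy, than the alternates it guards.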
Time-dependent error-detection rate model for software reliability and other performance measures. AMRIT L. GOEL and KAZU OKUMOTO. IEEE Trans. Reliab. R-28, (3) 206 (August 1979). This paper presents a stochastic model for the software failure phenomenon based on a nonhomogeneous Poisson process (NHPP). The failure process is analyzed to develop a suitable mean-value function for the NHPP; expressions are given for several performance measures. Actual software failure data are analyzed and compared with a previous analysis.
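For reference, the mean-value function most often associated with this NHPP model (the standard Goel-Okumoto form; the paper's own expressions are not reproduced here) is

    m(t) = a\,(1 - e^{-bt}), \qquad \lambda(t) = m'(t) = a b\, e^{-bt},

where a is the expected total number of faults eventually detected and b is the per-fault detection rate.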
Asymptotic distribution of a standby system with delayed repair. Z. KHALIL. IEEE Trans. Reliab. R-28, (3) 265 (August 1979). This paper derives an asymptotic distribution for the time to system failure. The system consists of several elements with one repair facility, which remains idle until a queue of failed units has built up.
Application of program graphs and complexity analysis to software development and testing. NORMAN F. SCHNEIDEWIND. IEEE Trans. Reliab. R-28, (3) 192 (August 1979).
Several research studies have shown a strong relationship between program complexity, as measured by the structural properties of a program, and its error properties, as measured by number and types of errors and error detection and correction times. This research applies to: a) the setting of threshold values of complexity in software production in order to avoid undue difficulty with program debugging; b) the use of complexity as an index for allocating resources during the test phase of software development; c) the use of complexity for developing test strategies and the selection of test data. Application c) uses the directed-graph representation of a program and its complexity measures to decompose the program into its basic constructs. The identification of the constructs serves to identify a) the components of the program which must be tested, and b) the selection of test data which are needed to exercise these components. Directed-graph properties which apply to program development and testing are defined; examples of the application of graph properties for program development and testing are given; the results of program complexity and error measurements are presented; and a procedure for complexity measurement and its use in programming and testing is summarized.
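As an illustration of a directed-graph complexity measure of the kind the abstract refers to (McCabe's cyclomatic number is used here as a stand-in; the paper's own metrics may differ), a minimal sketch:

def cyclomatic_complexity(nodes, edges, components=1):
    """McCabe's cyclomatic number V(G) = E - N + 2P for a control-flow graph."""
    return len(edges) - len(nodes) + 2 * components

# Hypothetical control-flow graph of a single if/else construct.
nodes = ["entry", "decision", "then", "else", "exit"]
edges = [("entry", "decision"), ("decision", "then"), ("decision", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(nodes, edges))   # 5 - 5 + 2 = 2 independent paths to exercise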
Optimal time intervals for testing hypotheses on computer software errors. ERNEST H. FORMAN and NOZER D. SINGPURWALLA. IEEE Trans. Reliab. R-28, (3) 250 (August 1979). This paper discusses certain stochastic aspects of the software reliability problem. First an empirical stopping rule for debugging and testing computer software is discussed. Then some results are presented on choosing a time interval for testing the hypothesis that a software system contains no errors, given certain cost and risk constraints.
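A back-of-the-envelope version of such an interval choice (a generic Poisson-based sketch, not the rule derived in the paper): if undetected faults would produce failures at rate at least \lambda_1, and the risk of wrongly accepting the hypothesis of no errors must not exceed \beta, then a failure-free test period T must satisfy

    e^{-\lambda_1 T} \le \beta, \qquad\text{i.e.}\qquad T \ge \frac{-\ln \beta}{\lambda_1}.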
Modern analytical techniques for failure analysis. J. R. SHAPPIRIO and C. F. COOK, JR. Solid-St. Technol. p. 89 (September 1979). Sophisticated electron and ion beam techniques frequently used by the modern failure analyst are reviewed and compared from the standpoint of the kind of information generated and the relative cost of analytical services.
Empirical validation of three software error prediction models. ALAN N. SUKERT. IEEE Trans. Reliab. R-28, (3) 199 (August 1979). From 1974 Aug to 1978 May a study to validate several mathematical models for predicting the reliability and error content of a software package against error data extracted from four large U.S.A. Department of Defense software development projects was undertaken by Rome Air Development Center. This paper describes the results of this empirical study for three such models: Jelinski-Moranda, Schick-Wolverton, and modified Schick-Wolverton. Model predictions are compared on a total project, functional, and error severity basis, and on a daily vs. weekly basis for defining model time intervals. The question of when to begin applying these models is addressed, and general conclusions are drawn as to model applicability.

Software quality metrics for life-cycle cost-reduction. GENE F. WALTERS and JAMES A. MCCALL. IEEE Trans. Reliab. R-28, (3) 212 (August 1979). This paper identifies factors or characteristics, of which reliability is one, which comprise the quality of computer software. It then discusses their impact over the life of a software product, and describes a methodology for specifying them quantitatively, including them in system design, and measuring them during development. The methodology is still experimental, but is rapidly evolving toward application to all types of software. This paper emphasizes those factors of software quality which have greatest importance at the later stages of a software product's life.
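For orientation, the hazard-rate forms usually quoted for two of the models Sukert compares (standard textbook statements, not reproduced from the abstract): the Jelinski-Moranda model assumes a constant hazard between failures,

    z(t_i) = \phi\,[N - (i - 1)],

while the Schick-Wolverton model lets the hazard grow with the time elapsed since the last failure,

    z(t_i) = \phi\,[N - (i - 1)]\, t_i,

where N is the initial number of faults, \phi a proportionality constant, and t_i the time measured from the (i-1)th failure.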
Evaluating single point failures for safety and reliability. ROBERT A. KIRKMAN. IEEE Trans. Reliab. R-28, (3) 259 (August 1979). Many system specifications today specify that the design shall be fail-safe or that two or more failures or errors shall be required to cause a serious accident. As a part of compliance, the safety-reliability engineer performs hazard and failure mode analyses which give rise to questions concerning failure mode credibility and s-independence, failure modes in computer and abort systems, and the type and adequacy of techniques to satisfy the requirements. The real world of competition, schedules, and rapidly developing and changing designs precludes elaborate statistical studies of each of the large numbers of hazards, failure modes, and related factors which collectively determine the accident rate. Instead, rationally based, free-flowing analytic techniques with built-in conservatism must be used if the system design is to be affected, and if the available time and effort are to be concentrated in areas of maximum payoff. This paper discusses these questions in this context and provides a practical rationale for the value judgments the safety/reliability engineer must make to perform his analysis.
Validity of execution-time theory of software reliability. JOHN D. MUSA. IEEE Trans. Reliab. R-28, (3) 181 (August 1979). This paper investigates the validity of the execution-time theory of software reliability. The theory is outlined, along with appropriate background, definitions, and assumptions …
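The relationship at the heart of the execution-time theory is usually summarized as follows (the basic execution-time model as commonly stated in later expositions; the notation of the 1979 paper may differ): the failure intensity declines linearly in the expected number of failures experienced,

    \lambda(\mu) = \lambda_0 \left(1 - \frac{\mu}{\nu_0}\right),

which, as a function of cumulative execution time \tau, gives

    \mu(\tau) = \nu_0 \left(1 - e^{-\lambda_0 \tau / \nu_0}\right),

where \lambda_0 is the initial failure intensity and \nu_0 the total expected number of failures.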