Software reliability estimation: a realization of competing risk





World Abstracts on Microelectronics and Reliability

replacement process for an N-module parallel structure is characterised by an imperfect nature. Intermediate series states are added to the state transition diagram such that a second procedure is now required to fully attain a complete regenerative process. This is quantified in terms of the steady-state availability. As an extension of the analysis, the effect of hard-core components is presented. It is proved that, under certain circumstances, this is not a necessary condition for degradation.
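As a rough illustration of the kind of steady-state availability computation the abstract above refers to, the sketch below solves a plain birth-death Markov chain for an N-module parallel structure. This is only a minimal sketch: the intermediate series states and hard-core components of the actual model are not represented, imperfect replacement is approximated by a repair that succeeds only with probability p, and all rates are invented.

```python
# Hedged sketch (all parameters hypothetical): steady-state availability
# of an N-module parallel structure as a birth-death Markov chain.
# State k = number of failed modules; the system is up while at least
# one module works (k < N).  Imperfect replacement is approximated by a
# repair succeeding with probability p, i.e. effective repair rate p*mu.

def steady_state_availability(N, lam, mu, p):
    # Unnormalised state probabilities from the detailed-balance
    # relations (N - k) * lam * pi_k = p * mu * pi_{k+1}.
    pi = [1.0]
    for k in range(N):
        pi.append(pi[k] * (N - k) * lam / (p * mu))
    total = sum(pi)
    return sum(pi[:N]) / total   # probability that at least one module is up

# Invented rates: failure rate 0.01/hr, repair rate 1.0/hr,
# repair success probability 0.9.
A = steady_state_availability(N=3, lam=0.01, mu=1.0, p=0.9)
```

With repair much faster than failure, the availability comes out very close to 1, as expected for a redundant structure.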

More realistic reliability analysis by conditional distributions. KLAUS D. HEIDTMANN. Microelectron. Reliab. 23 (2) 261 (1983). It is shown how the distribution and expectation of component life change under various stresses. Subsequently, the modified life distributions of components serve as input to the reliability analysis of systems. In the case where the modification is caused by handing over the workload of failing components to functioning ones, the new results are compared with those assuming statistically independent components.

Software reliability estimation: a realization of competing risk. WAY KUO. Microelectron. Reliab. 23 (2) 249 (1983). The software reliability model presented here assumes a time-dependent failure rate and that debugging can remove as well as add faults with a nonzero probability. Based on these assumptions, the expected number of faults and the mean standard error of the estimated faults remaining in the system are derived. The model treats the capability of correcting errors as a random process under which most existing software reliability models become special cases of the proposed one. It therefore serves to realize a competing-risk problem and to unify much of current software reliability theory. The model deals with the non-independence of error correction and should be extremely valuable for large-scale software projects.

A review of error propagation analysis in systems. WAY KUO and V. R. R. UPPULURI. Microelectron. Reliab. 23 (2) 235 (1983). Error propagation analysis in reliable systems has been studied widely. Unlike classical sensitivity analysis, which investigates the range of system performance, error propagation analysis studies the distribution function of system performance; it is therefore essentially a statistical analysis. This paper reviews and classifies current research articles on error propagation. An overview is presented of error propagation applied to various systems and models. A standard analysis procedure is also given. Finally, several conclusions are drawn. It is recommended that (i) basic research on error propagation be carried out, (ii) an efficient (least-cost) method be developed to analyze large-scale problems, and (iii) human error be included in the system modeling. It is our opinion that error propagation analysis should be treated as part of decision-making procedures in system analysis. Error propagation analysis is extremely important for expensive or rare-event systems. This report can benefit those who analyze such systems.
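The imperfect-debugging idea in Kuo's abstract above, where debugging can remove as well as add faults, can be illustrated with a small Monte Carlo sketch. This is not Kuo's actual estimator; the probabilities p_remove and p_add, the initial fault count, and the number of debugging attempts are all invented for illustration.

```python
import random

# Hedged sketch (not Kuo's estimator; all parameters hypothetical) of
# imperfect debugging: each debugging attempt removes an existing fault
# with probability p_remove and, independently, introduces a new fault
# with probability p_add.  Averaging many runs estimates the expected
# number of faults remaining in the system.

def simulate_faults(n0, attempts, p_remove, p_add, runs=5000, seed=1):
    random.seed(seed)
    total = 0
    for _ in range(runs):
        n = n0
        for _ in range(attempts):
            if n > 0 and random.random() < p_remove:
                n -= 1                      # a fault is corrected
            if random.random() < p_add:
                n += 1                      # debugging introduces a new fault
        total += n
    return total / runs

# When removal is more likely than introduction, the fault count drifts
# down towards a small equilibrium rather than to exactly zero.
mean_left = simulate_faults(n0=50, attempts=100, p_remove=0.7, p_add=0.1)
```

The nonzero equilibrium is the point of the model: because error correction itself can inject faults, the remaining-fault estimate never collapses to the perfect-debugging special case.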

Implications of a model for optimum burn-in. A. J. WAGER, D. L. THOMPSON and A. C. FORCIER. IEEE 21st Ann. Proc. Reliab. Phys. 286 (1983). The importance of burn-in as a screen to reduce failure rates (particularly early fails) of LSI/VLSI devices, and thereby achieve reliability objectives, is discussed. A mathematical model is described which predicts the burn-in duration required to achieve any desired field reliability improvement. Based on a Weibull temporal fail distribution, the effects of voltage and temperature acceleration and of escapes are included. Escapes (those devices which are delivered incompletely burned-in or with undetected fails due either to inadequate stress or test procedures) are shown to be the key factor limiting improvement in early failure rates (EFR). The model is compared to 1980 field data which reflect EFR sensitivity to escape levels. The usefulness of the model for examining the sensitivity of burn-in parameters is demonstrated. Finally, a strategy of in situ test as a means of controlling escapes is discussed.

Reliability of GaAs MESFET logic circuits. M. R. NAMORDI and W. A. WHITE. IEEE 21st Ann. Proc. Reliab. Phys. 312 (1983). Twelve depletion-mode GaAs MESFET logic circuits were subjected to a step-stress life test. Each circuit consisted of an 11-stage ring oscillator with one buffer stage. Both BFL and SDFL circuits with Vp ≈ -2.0 V and -1.0 V were represented. The circuits were operated for 500 hr at each of eight temperature stress steps ranging between 25° and 200°C with 25°C increments. No chip failures occurred upon completion of the life test. Reasonable interpretation of these results suggests that GaAs MESFET logic circuits should be very reliable.

Unique on-chip test structures enhance E-PROM manufacturability. B. MOORE, G. DENES, K. RAO and N. TANDAN. Electronics 135 (22 September 1983). Circuits for testing threshold margin, checking charge gain or loss, and 4-byte programming reduce test costs and also boost system reliability.

Full cycle corrective action (FCCA) for improved warranty service. THOMAS M. CAPPELS. Proc. Ann. Reliab. Maintainab. Symp. 20 (1983). When failures occur in the field, implied or written warranties generally require that the failed hardware be repaired or replaced, either fully or on a prorated basis. When a failure mode is determined, appropriate corrective action (CA) to prevent future failures should be taken. For the financial success of a company, the CA must formally involve the budget administration organization. This paper presents an overview of the well-founded Closed Loop Corrective Action, and then examines the innovative Full Cycle Corrective Action (FCCA) approach in light of improved aftermarket service. FCCA allows sound financial adjustments when negotiating/pricing, funding, and budgeting for procurement, manufacturing and quality assurance functions within most manufacturing concerns. FCCA is outlined and provides managers with potential tools for solving budgetary problems within their own work environments.

Characterization and screening of SiO2 defects in EEPROM structures. R. E. SHINER, N. R. MIELKE and R. HAQ. IEEE 21st Ann. Proc. Reliab. Phys. 248 (1983). Papers dealing with charge gain and charge loss, the two modes of data retention failure of floating-gate memories, have been presented at this and previous Reliability Physics Symposiums. One result of that work is that the high-temperature bake (typically 250°C) has become an accepted means of predicting EPROM data retention failure rates down to normal operating temperatures. In order to determine whether this stress was also a valid test of data retention for electrically erasable floating-gate memories (EEPROMs), the behavior of this memory type at high temperature was also characterized. This work led to the discovery that previously good cells of the FLOTOX type could become data retention failures as a result of extended exposure to high temperature. This new failure mode was found to be due to partial breakdown of the thin tunnel dielectric caused by the high electric field generated by the charge stored on the floating gate after programming or erasing. This paper discusses the results of the work done to characterize the breakdown mechanism and the resulting conduction. It also discusses the relevance of some design and process parameters towards minimizing the impact of this effect. All of the work reported in this paper involved the Intel 2816, a 16,384-bit EEPROM.
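The Weibull burn-in idea in the Wager et al. abstract can be sketched roughly as follows. This is an assumption-laden illustration, not the authors' actual model: burn-in of duration t_b at an acceleration factor AF is taken to consume AF*t_b of field-equivalent life, so survivors enter service with part of their infant-mortality (beta < 1) failures already screened out, while escapes bypass burn-in entirely. The values of eta, beta, AF and the escape fraction are invented.

```python
import math

# Hedged sketch (all parameters hypothetical) of a Weibull burn-in model.
# Burn-in of duration t_b at acceleration factor AF ages devices by
# AF*t_b of field-equivalent time; escapes receive no burn-in at all.

def weibull_cdf(t, eta, beta):
    return 1.0 - math.exp(-((t / eta) ** beta))

def field_fail_prob(T, t_b, AF, eta, beta, escapes=0.0):
    """Probability of failing within field time T for the shipped mix."""
    t0 = AF * t_b                                # equivalent age at shipment
    survivors = 1.0 - weibull_cdf(t0, eta, beta)
    burned = (weibull_cdf(t0 + T, eta, beta)
              - weibull_cdf(t0, eta, beta)) / survivors
    raw = weibull_cdf(T, eta, beta)              # escapes see no burn-in
    return (1.0 - escapes) * burned + escapes * raw

# One year of field use (8760 hr), with and without a 48 hr burn-in at an
# invented acceleration factor of 80 and infant-mortality shape beta = 0.4.
p_no  = field_fail_prob(T=8760, t_b=0.0,  AF=80.0, eta=1e6, beta=0.4)
p_48h = field_fail_prob(T=8760, t_b=48.0, AF=80.0, eta=1e6, beta=0.4)
```

Raising the escapes fraction pulls the shipped-mix failure probability back towards the no-burn-in value, which is the abstract's point about escapes being the key factor limiting EFR improvement.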