World Abstracts on Microelectronics and Reliability
observed if it occurs before a randomly chosen inspection time and the failure was signaled; otherwise, the experiment is terminated at the instant of inspection. An explicit expression for the posterior pdf of the parameter is derived and a normal approximation to it, based on a Taylor expansion near the maximum likelihood estimate, is suggested. The results of an extensive simulation showed that the reparameterization θ₁ = log θ appreciably increases the accuracy of the normal approximation. Highly accurate HPD-intervals for θ₁ are derived in closed form for a normal prior on θ₁ or, equivalently, a lognormal prior on θ.
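As a hedged sketch of the log-reparameterization idea above: for simple exponential life data with d failures in total time T under a diffuse prior (an illustrative assumption; the paper's inspection-and-signaling model is more involved), the posterior of θ₁ = log θ is approximately normal at the MLE with standard error 1/√d, giving a closed-form interval:

```python
from statistics import NormalDist
import math

def hpd_log_theta(failures, total_time, conf=0.95):
    """Normal-approximation interval for theta1 = log(theta), theta an
    exponential failure rate.  Illustrative only: assumes a diffuse prior
    and plain exponential data, not the paper's inspection model."""
    z = NormalDist().inv_cdf(0.5 + conf / 2.0)
    theta_hat = failures / total_time      # MLE of the rate theta
    theta1_hat = math.log(theta_hat)       # MLE on the log scale
    se = 1.0 / math.sqrt(failures)         # observed information for log theta is d
    lo, hi = theta1_hat - z * se, theta1_hat + z * se
    return math.exp(lo), math.exp(hi)      # back-transform to theta
```

Because the interval is symmetric on the log scale, the back-transformed interval for θ is log-symmetric around the MLE, which mirrors the lognormal-prior equivalence noted in the abstract.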
Reliability of modified designs: a Bayes analysis of an accelerated test of electronic assemblies. Louis HART. IEEE Trans. Reliab. 39(2), 140 (1990). Accelerated tests on two similar assemblies suggest that their lifetimes are in excess of 30,000 hr. A Bayes approach to reliability estimation used life test data from the two assemblies and those from an earlier evaluation of a third assembly, similar to the two under study. The Bayes approach permitted a reduction in the number of test samples, compared with the earlier evaluation. Reliability of the two new assemblies was comparable with or better than that of the older one.
Error log analysis: statistical modeling and heuristic trend analysis. TING-TING Y. LIN and DANIEL P. SIEWIOREK. IEEE Trans. Reliab. 39(4), 419 (1990). Most error log analysis studies perform a statistical fit to the data assuming a single underlying error process. This paper presents the results of an analysis demonstrating that the log is composed of at least two error processes: transient and intermittent. The mixing of data from multiple processes requires many more events to verify a hypothesis using traditional statistical analysis. Based on the shape of the interarrival-time function of the intermittent errors observed in actual error logs, a failure prediction heuristic, the dispersion frame technique (DFT), is developed. The DFT was implemented in a distributed on-line monitoring and predictive diagnostic system for the campus-wide Andrew file system at Carnegie Mellon University. Data collected from 13 file servers over a 22-month period were analyzed using both the DFT and conventional statistical methods. It is shown that the DFT can extract intermittent errors from the error log and uses only one fifth of the error log entry points required by statistical methods for failure prediction. The DFT achieved a 93.7% success rate in failure prediction of both electromechanical and electronic devices.
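The abstract gives no formulas for the DFT's dispersion-frame rules, but the underlying idea of watching for shrinking interarrival times between logged errors can be illustrated with a toy predicate; the window and threshold below are invented for illustration and are not the DFT's actual rules:

```python
def warn_on_clustering(event_times, window=5, threshold=3600.0):
    """Toy interarrival-time warning (NOT the actual DFT rules): flag a
    device when its last `window` error interarrival times all fall below
    `threshold` seconds, i.e. the errors are clustering as an intermittent
    fault develops.  event_times: sorted timestamps in seconds."""
    gaps = [t2 - t1 for t1, t2 in zip(event_times, event_times[1:])]
    if len(gaps) < window:
        return False
    return all(g < threshold for g in gaps[-window:])
```

A monitor in the spirit of the paper's diagnostic system would evaluate such a predicate per device each time a new error line arrives.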
Detection of multiple input bridging and stuck-on faults in CMOS logic circuits using current monitoring. NIRAJ K. JHA and QIAO TONG. Comput. Elect. Engng 16(3), 115 (1990). Current monitoring is a well-established technique for detecting stuck-on and bridging faults in CMOS logic circuits. When such faults are activated by an appropriate vector, the circuit draws current which is much larger than normal and the fault is detected. In this paper we first show that any test set, which detects all single stuck-at faults in any irredundant combinational CMOS logic circuit, also detects all multiple stuck-on faults in it using current monitoring. If the constituent gates of the circuit are all primitive CMOS gates (NAND, NOR, NOT) then we show that the test set detects all multiple stuck-on and input bridging faults (even if the two types of faults occur simultaneously) with current monitoring. Even when the CMOS circuit is redundant we have found that in most cases a test set that detects all detectable single stuck-at faults also has very high coverage for the multiple stuck-on and input bridging faults and their combinations.
Nonparametric confidence bounds, using censored data, on the mean residual life. FRANK GUESS and DONG HO PARK. IEEE Trans. Reliab. 40(1), 78 (1991). Mean residual life has been applied in a wide variety of areas, e.g. burn-in, annuities, and strike-duration modeling. Assuming a parametric pdf, it is possible to estimate both the mean residual life function and the failure rate function, even with censored data. For type I & II censoring, many nonparametric estimators and confidence bounds for the failure rate exist; however, similar estimation of the mean residual life seems difficult. We develop an approach of inverting confidence bounds on the failure rate to obtain conservative nonparametric confidence statements about the mean residual life for these and random right (left) censoring.
Calculation of the binomial survivor function. PAUL N. BOWERMAN and ERNEST M. SCHEUER. IEEE Trans. Reliab. 39(2), 162 (1990). This method calculates the binomial Sf (cumulative binomial distribution), binfc(k; p, n), especially for large n beyond the range of existing tables, where: (1) conventional computer programs fail because of underflow and overflow, and (2) Gaussian or Poisson approximations yield insufficient accuracy for the purpose at hand. The method calculates and sums the individual binomial terms while using multiplication factors to avoid underflow; the factors are then divided out of the partial sum whenever it has the potential to overflow. A computer program uses this technique to calculate the binomial Sf for arbitrary inputs of k, p, n. Two other algorithms are presented to: (1) determine the value of p needed to yield a specified Sf for given values of k and n, and (2) calculate the value where p = Sf for given k and n. Reliability applications of each algorithm/program are given, e.g. the value of p needed to achieve a stated k-out-of-n:G system reliability and the value of p for which k-out-of-n:G system reliability equals p.
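The scaled-summation idea in the binomial-Sf abstract above (sum successive binomial terms by recurrence, rescaling the partial sum before it can overflow) can be sketched as follows; this is a hedged reconstruction of the general technique, not Bowerman and Scheuer's actual program:

```python
import math

def binfc(k, p, n):
    """Binomial survivor function Sf = P[X >= k] for X ~ Binomial(n, p).
    Sums the terms t_i = C(n,i) p^i (1-p)^(n-i), i = k..n, by recurrence,
    dividing a scale factor out of the partial sum before it can overflow.
    A sketch of the scaled-summation idea, not the authors' exact code."""
    if k <= 0:
        return 1.0
    if k > n or p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    q = 1.0 - p
    # log of the first term t_k, so the summation starts at a safe scale
    log_scale = (math.lgamma(n + 1) - math.lgamma(k + 1)
                 - math.lgamma(n - k + 1)
                 + k * math.log(p) + (n - k) * math.log(q))
    t = 1.0          # current term, held at scale exp(log_scale)
    s = t            # scaled partial sum
    for i in range(k, n):
        t *= (n - i) / (i + 1) * (p / q)   # ratio t_{i+1} / t_i
        s += t
        if s > 1e300:                      # divide the factor back out
            s *= 1e-300
            t *= 1e-300
            log_scale += math.log(1e300)
    return s * math.exp(log_scale)
```

For example, binfc(2, 0.5, 4) gives 11/16 = 0.6875, and the routine stays finite for n in the thousands, where naive factorials overflow.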
Selecting, under type-II censoring, Weibull populations that are more reliable. SHENG-TSAING TSENG and HUEY-JANE WU. IEEE Trans. Reliab. 39(2), 193 (1990). The problem of selecting more-reliable Weibull populations is complex. The main result is that there is no simple selection rule. Under type-II censoring, this paper proposes a locally optimal selection rule when the shape parameters are known. When the unknown shape parameters have some prior distributions, a modified selection rule is proposed. The performance of this modified rule was tested extensively by simulation; the rule is quite robust for a variety of beta prior distributions.
Reliability optimization in generalized stochastic-flow networks. HUA CHEN and JIAQI ZHOU. IEEE Trans. Reliab. 40(1), 92 (1991). There is almost no practical reliability optimization technique for modern large systems because of the complexity and tremendous computation associated with these systems. This paper presents a model and an algorithm for reliability optimization in generalized stochastic-flow networks. The algorithm uses the relationship of the k-weak-link sets and sets of failure events to the parameters of the generalized stochastic-flow networks. This facilitates fast location of the optimal capacity expansion for a system from the information obtained in the latest iteration and alleviates the dimension calamity in computation. Consequently, the reliability optimization method is powerful for large systems. As an example, the IEEE Reliability Test System with 24 nodes and 70 components has been tested, and the components whose capacity values should be enhanced are revealed. The computation results show that the algorithm can be applied to practical problems.
Implementing fault-tolerance via modular redundancy with comparison. YINONG CHEN and TINGHUAI CHEN. IEEE Trans. Reliab. 39(2), 217 (1990). NMR (N-Modular Redundancy) is one of the most widely used fault-tolerant techniques.
The NMRC (NMR with Comparison) system presented here covers all existing NMR systems in fault
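Plain NMR, the starting point for the NMRC scheme, can be sketched as a majority voter over N module outputs; this is a minimal illustration of NMR itself, not of the comparison-based NMRC system:

```python
from collections import Counter

def nmr_vote(outputs):
    """Majority voter for N-modular redundancy: return the value produced
    by a strict majority of the N replicated modules, or None when no
    majority exists (an unmasked fault condition).  A minimal sketch of
    plain NMR, not the NMRC comparison scheme."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None
```

With N = 3 (TMR), any single faulty module is masked: nmr_vote([1, 1, 0]) still yields the correct value 1.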