World Abstracts on Microelectronics and Reliability

On the analysis of accelerated life-testing experiments. ALAN J. WATKINS. IEEE Trans. Reliab. 40(1), 98 (1991). The analysis of data from accelerated life-testing experiments via the method of maximum likelihood must, in certain cases, be performed numerically, whence it becomes important to exploit fully any potential simplifications. This paper considers one such simplification for a particular class of accelerated life-testing experiments, and compares its efficacy with other, recent alternatives. The procedure herein is preferred, and evidence from simulation experiments supports this thesis. Several measures of precision for maximum likelihood estimators of the model parameters are analysed, so that the degree of precision can be reasonably gauged.

Comment on: an efficient non-recursive algorithm for computing the reliability of k-out-of-n systems. ALI M. RUSHDI. IEEE Trans. Reliab. 40(1), 60 (1991). The iterative structure of the Sarje and Prasad algorithm for computing k-out-of-n system reliability duplicates that of an earlier algorithm by Rushdi. Furthermore, the Sarje and Prasad algorithm has neither the best spatial nor the best temporal complexity among existing algorithms.

Characterization of the Pearson family of distributions. N. UNNIKRISHNAN NAIR and P. G. SANKARAN. IEEE Trans. Reliab. 40(1), 75 (1991). This paper characterizes the Pearson family of distributions in terms of failure rates, and presents analogous results for the case in which failure "time" is discrete. The theorems proved here generalize the results of Osaki and Li concerning the gamma and negative binomial distributions.

Automated analysis of phased-mission reliability. JOANNE BECHTA DUGAN. IEEE Trans. Reliab. 40(1), 45 (1991). The methodology for automated analysis of phased missions is based on the solution of a discrete-state, continuous-time Markov model in which the phase-change times are deterministic. A method is presented for combining the models for the individual phases into a single model, which can be substantially smaller than those required by other methods. A unified framework is used for defining the separate phases with fault trees, and for constructing and solving the resulting Markov model; the usual solution technique is altered to account for the phased nature of the problem. The framework is described for a simple 3-component, 3-phase system that has appeared often in the literature, and a hypothetical 2-phase mission is solved involving the fault-tolerant parallel processor under development at the C. S. Draper Laboratory. This approach is especially useful where several phases are repeated many times, because each phase need be described only once. It applies where the transition rates (failure and repair rates) are constant and the phase-change times are deterministic; if any of these criteria are not met and the system is not very large, then the approach proposed by Smotherman (1989) is appropriate.
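One grounding remark on the Watkins abstract above (standard theory, not the paper's own simplification): under Type-II censoring, the exponential model is the rare case with a closed-form maximum likelihood estimate. Given r observed failures among n units on test, with ordered failure times t_(1) <= ... <= t_(r), the mean-life estimate is

  \hat{\theta} = \frac{\sum_{i=1}^{r} t_{(i)} + (n - r)\, t_{(r)}}{r},

whereas Weibull-type accelerated-life likelihoods must be maximized iteratively, which is where simplifications of the kind the paper studies pay off.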
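For readers placing the Rushdi comment above: the algorithms at issue compute k-out-of-n:G system reliability, and the standard iterative recurrence can be sketched as follows. This is a minimal illustration of the technique, not a transcription of either published algorithm, and the component probabilities are invented.

```python
def k_out_of_n_reliability(k, p):
    """P(at least k of the n independent components work); p[i] = P(component i works)."""
    n = len(p)
    if k <= 0:
        return 1.0
    if k > n:
        return 0.0
    # r[j] = P(exactly j working among the components processed so far);
    # states j >= k need not be tracked, since only P(fewer than k) is needed.
    r = [1.0] + [0.0] * (k - 1)
    for pi in p:
        for j in range(k - 1, 0, -1):   # descend so each component is counted once
            r[j] = r[j] * (1.0 - pi) + r[j - 1] * pi
        r[0] *= 1.0 - pi
    return 1.0 - sum(r)                 # 1 - P(at most k-1 working)

print(k_out_of_n_reliability(2, [0.9, 0.8, 0.95]))   # 2-out-of-3 example -> 0.967
```

Updating in place and truncating the state vector at k gives O(nk) time and O(k) space, which is the kind of spatial and temporal complexity comparison the comment makes.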
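For orientation on the Nair and Sankaran abstract above, the failure rate in which such characterizations are expressed is the standard hazard function, whose continuous and discrete forms are

  r(t) = \frac{f(t)}{1 - F(t)}, \qquad r(k) = \frac{\Pr(T = k)}{\Pr(T \ge k)},

with f and F the density and distribution function, and k running over the discrete support.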
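The Dugan abstract describes solving one continuous-time Markov model phase by phase, with deterministic phase-change times. A minimal sketch of that solution pattern, under invented rates and the deliberate simplification that every phase shares the same state space (a full treatment would remap states according to each phase's fault tree at every phase change):

```python
import numpy as np
from scipy.linalg import expm

def generator(lam):
    """Generator of a 1-out-of-2 parallel pair; states: 0 = both up, 1 = one up, 2 = failed."""
    return np.array([[-2 * lam, 2 * lam, 0.0],
                     [0.0,      -lam,    lam],
                     [0.0,       0.0,    0.0]])

p = np.array([1.0, 0.0, 0.0])            # both units working at mission start
for lam, duration in [(1e-4, 10.0),      # phase 1: benign rates (invented)
                      (1e-3, 2.0),       # phase 2: harsher stress (invented)
                      (1e-4, 10.0)]:     # phase 3: benign again (invented)
    p = p @ expm(generator(lam) * duration)   # transient CTMC solution over the phase

print("mission reliability ~", 1.0 - p[2])
```

The final distribution of one phase becomes the initial distribution of the next, which is why each repeated phase need be described only once.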
Event-tree analysis by fuzzy probability. RASOOL KENARANGUI. IEEE Trans. Reliab. 40(1), 120 (1991). A method is explained for dealing with event-tree analysis under uncertainty. In conventional event-tree analysis, probabilities and consequences are treated as exact values. In many engineering applications, however, it is difficult to evaluate the probabilities and consequences from past experience, because of the dynamic environments of systems, and especially because there are situations where no past experience exists. Fuzzy-set logic is used to account for imprecision and uncertainty in the data while employing event-tree analysis. Fuzzy event-tree logic allows the probabilities and consequences to be stated verbally, for example as very high, moderate, or low probability. The technique enables a qualitative evaluation of the event tree to yield quantitative results. The application of fuzzy event-trees is further demonstrated on a set of event-trees for an electric power protection system, to assess the viability of the method in complex situations. Such an analysis cannot be performed by hand owing to the complexity of the trees; hence a computer algorithm has to be developed.
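A hedged sketch of the idea in the Kenarangui abstract: branch probabilities are given as triangular fuzzy numbers (low, modal, high) standing in for verbal grades, and are combined along an event-tree path. The verbal-to-number mapping below is invented, and the componentwise product is the usual triangular approximation (the exact product of two triangular fuzzy numbers is not itself triangular).

```python
VERBAL = {                       # hypothetical membership assignments
    "low":       (0.00, 0.05, 0.10),
    "moderate":  (0.20, 0.40, 0.60),
    "very high": (0.80, 0.95, 1.00),
}

def tfn_mul(a, b):
    """Triangular approximation to the product of two positive TFNs."""
    return tuple(x * y for x, y in zip(a, b))

def path_probability(grades):
    """Fuzzy probability of one event-tree path from its branch grades."""
    out = (1.0, 1.0, 1.0)
    for g in grades:
        out = tfn_mul(out, VERBAL[g])
    return out

# e.g. initiating event "moderate", protection failure "low":
print(path_probability(["moderate", "low"]))   # -> (0.0, 0.02, 0.06)
```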
Introduction and certification of a quality assurance system. HELMUT ENSSLIN. Quality Europe. Industrial Quality Assurance. 34th EOQ Annual Conference, Dublin, Ireland, 13 (1990). This article traces the course taken at a man-made fibres company in moving from conventional quality inspection, first to a plant-specific quality assurance (QA) system based on DIN ISO 9001, and then to a "DQS" certificate. An executive-level quality steering group chaired by the works manager devised the directives for introducing the QA system in the works and for motivating the workforce. A quality manager was appointed; his duties included drawing up the QA documentation jointly with a team of specialists, and introducing and implementing internal quality audits. The aim was to arrange for, and pass, the "DQS" certification audit in the works.

Experimental evaluation of the fault tolerance of an atomic multicast system. JEAN ARLAT et al. IEEE Trans. Reliab. 39(4), 455 (1990). This paper presents a study that contributes to the validation of a dependable local area network providing multipoint communication services based on an atomic multicast protocol. The protocol is implemented in specialized communication servers that exhibit the fail-silent property, i.e. a kind of halt-on-failure behavior enforced by self-checking hardware. The tests carried out use physical fault injection and have two objectives: (1) to estimate the coverage of the self-checking mechanisms of the communication servers, and (2) to test the properties that characterize the service provided by the atomic multicast protocol in the presence of faults. An appreciable part of the paper is devoted to describing the testbed developed to carry out the fault-injection experiments. The major results are presented and analyzed.
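A brief aside on objective (1) in the Arlat abstract: under the simplest binomial model (an assumption here; the paper's estimators may be more refined), injecting n faults and observing d detections gives the coverage estimate and approximate standard error

  \hat{c} = \frac{d}{n}, \qquad \widehat{\operatorname{se}}(\hat{c}) \approx \sqrt{\frac{\hat{c}\,(1 - \hat{c})}{n}}.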
The effect of statically and dynamically replicated components on system reliability. DAR-REN LEU, FAROKH B. BASTANI and ERNST L. LEISS. IEEE Trans. Reliab. 39(2), 209 (1990). System reliability can be maximized by using redundant resources optimally. One way to use redundant resources is to invoke them on demand, as in standby redundancy or recovery blocks (dynamic redundancy); an alternative is to pre-allocate them, as in triple modular redundancy (static redundancy). We derive the reliability of general systems under dynamic and static redundancy schemes and then consider communication protocols as a representative example, studying in detail the system reliability of three broadcast protocols under various redundancy-allocation (e.g. retransmission) policies. The analytic and simulation results show that in some cases static redundancy yields a more reliable system than dynamic redundancy. This is essential for distributed system applications: in some cases the failure-detection time is substantial, so that the hardware reliability and, hence, the system reliability are adversely affected when dynamic redundancy is used. This can be a critical factor for distributed systems, since a large communication overhead can be required for error detection. In such cases unreliable protocols can provide better system reliability than reliable protocols, especially when the communication network is highly reliable and the machine failure rate is relatively large; since unreliable protocols generate less load and less resource contention, they are preferable there.
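The textbook extremes behind the static-versus-dynamic comparison in the Leu, Bastani and Leiss abstract (the paper's general derivation covers far more than these idealized cases): for three identical components of reliability R with a perfect voter, triple modular redundancy gives

  R_{\mathrm{TMR}} = 3R^2 - 2R^3,

while one active unit with one cold spare, perfect failure detection and switching, and exponential lifetimes with rate \lambda gives

  R_{\mathrm{standby}}(t) = e^{-\lambda t}(1 + \lambda t).

The paper's comparison turns on exactly the imperfect-detection and communication overheads that these idealized formulas ignore.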