Achieving high reliability in sonar power supplies

902   World Abstracts on Microelectronics and Reliability

equal, the procedures for scale-alternative models and for proportional-hazards models are valid. If the shape parameters are not equal, none of the procedures is appropriate and some more complicated method should be used.

Optimization of maintained systems. P. K. W. CHAN and T. DOWNS. IEEE Trans. Reliab. R-29 (1) 42 (April 1980). This paper formulates the problem of optimization of maintained systems with constraints on both availability and mean cycle-time. The objective function is the sum of recurring and nonrecurring costs. An example is solved using Fletcher's ideal-penalty-function algorithm, which is also briefly described.

Achieving high reliability in sonar power supplies. JEROME C. BOBROWSKI. Proc. Annual Reliability and Maintainability Symposium, San Francisco, p. 55 (22-24 January 1980). This paper presents a case history of the design process, testing, environmental screening and resultant measured reliability of the high-reliability, militarized low-voltage power supplies (LVPS) used in the Navy's AN/BQQ-5 Sonar Set. There are twelve types and a total of 90 LVPS per Sonar Set. The development tasks were conducted from March 1974 through November 1977 under contract with the US Navy at IBM's Federal Systems Division facility in Owego, New York. A key requirement was to design and build an LVPS capable of exceeding a Mean Time Between Failure (MTBF) of 100,000 hours.

Ground-hypotheses for beta distribution as Bayesian prior. A. G. COLOMBO and D. COSTANTINI. IEEE Trans. Reliab. R-29 (1) 17 (April 1980). The paper discusses the problem of a reasonable basis for choosing a prior distribution for a probability. The beta distribution is derived on the basis of some ground-hypotheses. The limitations of the proposed approach and a simple application are discussed with reference to reliability.

Reliability bounds for decomposable multi-component systems. N. SINGH and S. KUMAR. IEEE Trans. Reliab. R-29 (1) 22 (April 1980). This paper obtains lower and upper bounds for decomposable multi-component complex systems. Some particular cases are discussed.

Log-rank vs χ² test for exponentiality. CRAYTON C. WALKER, DENNIS W. MCLEAVEY and WARREN ROGERS. IEEE Trans. Reliab. R-29 (1) 45 (April 1980). This paper appraises a convenient test sometimes recommended to determine whether a set of observations has been drawn from an exponential distribution with unknown mean. The test uses simple linear regression techniques. Historically, it has been used in an intuitive manner. The intuitive procedure usually involves plotting logarithms of the empirical CDF against corresponding observed values, then "eyeballing" the plotted points for linearity, or intuitively determining whether r² calculated for the bivariate distribution is "high enough" or not. Using the objective procedure introduced in this paper, one regresses logarithms of ranks against observed values, calculates a standardized slope statistic, and checks this value against the tabled rejection region(s) provided. Our appraisal of the s-power of the objective log-rank test suggests that it is less s-powerful than competing tests (W, S*, D*) at larger sample sizes. Its relative performance appears to improve somewhat for smaller sample sizes. It seems fair to describe the objective log-rank test as a medium-grade test. Therefore, the practitioner should use the competing tests, unless samples are small, or practical considerations, such as convenience, are decisive in some particular situation. If convenience is important, then the log-rank test with the standardized slope used as the test statistic is an attractive option. The use of the log-rank test in its intuitive form is not recommended at all, since it very likely inclines the practitioner too often to accept the exponential hypothesis when false.

On the specification of repair time requirements. S. E. EMOTO and R. E. SCHAFER. IEEE Trans. Reliab. R-29 (1) 13 (April 1980). When specifying maintainability requirements, it is widely accepted practice (virtually universal in US military and DoD specifications) to specify the mean-time-to-repair and a "maximum" quantile of the repair time distribution. This practice is unsatisfactory for the lognormal distribution of repair times because: (1) As is well known, it may be that for a given pair (mean, quantile) there is no lognormal distribution. (2) As is not so well known, for a given pair (mean, quantile) there will be two lognormal distributions, if there are any. In short, the specification of a mean and a quantile does not uniquely determine a single lognormal distribution.
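The constrained-optimization setting of the Chan and Downs abstract can be illustrated with a generic sketch. This is not Fletcher's ideal-penalty-function algorithm itself (an exact-penalty method described in their paper); it is a plain exterior quadratic penalty applied to an invented one-variable problem and minimized by grid search.

```python
def penalized_min(f, g, r, xs):
    """Minimize f(x) + r * max(0, -g(x))^2 over the grid xs (exterior penalty)."""
    return min(xs, key=lambda x: f(x) + r * max(0.0, -g(x)) ** 2)

f = lambda x: x * x            # cost to minimize (illustrative)
g = lambda x: x - 1.0          # constraint g(x) >= 0, i.e. x >= 1
xs = [i / 1000.0 for i in range(0, 3001)]

# As the penalty weight r grows, the unconstrained minimizer of the
# penalized objective is pushed toward the constraint boundary x = 1
sol = [penalized_min(f, g, r, xs) for r in (1.0, 10.0, 100.0)]
```

With a simple quadratic penalty the constrained optimum is only approached in the limit of large r; an exact (ideal) penalty function reaches it at a finite penalty weight, which is the attraction of Fletcher's formulation.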
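The 100,000-hour MTBF requirement in the AN/BQQ-5 abstract translates directly into survival probabilities under the usual constant-failure-rate (exponential) model. The one-year mission time and the s-independence of the 90 supplies below are illustrative assumptions, not figures from the paper.

```python
import math

MTBF = 100_000.0   # required Mean Time Between Failure, hours
t = 8_760.0        # one year of continuous operation, hours (assumed mission time)

# Exponential model: probability a single LVPS runs the full year without failure
r_one = math.exp(-t / MTBF)

# Expected failures per year across the 90 LVPS in one Sonar Set,
# assuming s-independent, identical units
expected_failures = 90 * t / MTBF

print(f"per-unit 1-year reliability: {r_one:.4f}")            # ≈ 0.9161
print(f"expected failures per set per year: {expected_failures:.2f}")  # ≈ 7.88
```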
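The ground-hypotheses behind the beta prior are the subject of the Colombo and Costantini paper itself, but the practical appeal of the beta family in reliability work is its conjugacy for pass/fail data, sketched below with made-up numbers.

```python
def beta_update(a, b, successes, trials):
    """Conjugate update: Beta(a, b) prior + binomial data -> Beta posterior."""
    return a + successes, b + (trials - successes)

# Uniform Beta(1, 1) prior on a survival probability;
# 9 successes in 10 trials observed (illustrative data)
a_post, b_post = beta_update(1.0, 1.0, 9, 10)
posterior_mean = a_post / (a_post + b_post)   # (1 + 9) / (2 + 10)
```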
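A rough version of the Walker-McLeavey-Rogers regression step (logarithms of survivor ranks regressed on the observed values) can be sketched as follows. The standardized-slope statistic and its tabled rejection regions are in their paper, so this sketch stops at the raw least-squares slope, whose magnitude estimates the exponential rate.

```python
import math
import random

def log_rank_slope(xs):
    """Least-squares slope of ln((n - i) / n) against the i-th order
    statistic, for i = 1..n-1 (the last point, ln 0, is dropped)."""
    xs = sorted(xs)
    n = len(xs)
    pts = [(x, math.log((n - i) / n)) for i, x in enumerate(xs[:-1], start=1)]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    return sxy / sxx

random.seed(1)
sample = [random.expovariate(2.0) for _ in range(200)]  # true rate 2.0
slope = log_rank_slope(sample)  # should be near -2 for exponential data
```

For genuinely exponential data the plotted points are close to a line through the origin with slope minus the failure rate; the objective test's contribution is replacing "eyeballing" that line with a tabled decision rule.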
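The Emoto and Schafer point can be made concrete. With lognormal repair times, mean m = exp(μ + σ²/2) and p-quantile q = exp(μ + z_p σ); eliminating μ leaves the quadratic σ² − 2 z_p σ − 2 ln(m/q) = 0, which has zero, one, or two admissible roots, so a (mean, quantile) pair may pick out no lognormal or two of them. The repair-time numbers below are illustrative, and z_0.95 ≈ 1.645 is used for the 0.95 quantile.

```python
import math

def lognormal_params(mean, q, z):
    """All (mu, sigma) pairs giving a lognormal with the stated mean and
    p-quantile q, where z is the standard-normal p-quantile."""
    disc = z * z + 2.0 * math.log(mean / q)
    if disc < 0.0:
        return []                                  # no lognormal fits the pair
    params = []
    for sigma in (z - math.sqrt(disc), z + math.sqrt(disc)):
        if sigma > 0.0:
            mu = math.log(mean) - 0.5 * sigma * sigma
            params.append((mu, sigma))
    return params

Z95 = 1.645                                  # approximate 0.95 normal quantile
two = lognormal_params(30.0, 60.0, Z95)      # mean 30 min, Mmax(0.95) 60 min
none = lognormal_params(30.0, 200.0, Z95)    # mean 30 min, Mmax(0.95) 200 min
```

Here `two` contains two distinct (μ, σ) pairs and `none` is empty, which is exactly the abstract's complaint: the specification practice either over- or under-determines the distribution.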

A generalized computer program for the estimation of the optimum number of trials for establishing a system reliability. V. SWAMINATHAN, S. RAJAGOPALAN and C. M. CHACKO. QR J. India, 17 (January 1980). A problem of importance in proving the reliability of a system is the optimization of the number of trials (each with constant probability of success) to be carried out. This paper is concerned with (i) a numerical technique for determining the optimum number of trials, with all successes or with a few failures, to be carried out with a view to establishing a pre-assigned overall system reliability figure at a certain confidence level, and (ii) a computer program developed for doing the relevant calculations. Graphs, exhibiting the optimum number of trials to be made in such cases as a function of the reliability and confidence level percentages, are also included in the paper.

Difficulties in fault-tree synthesis for process plant. P. K. ANDOW. IEEE Trans. Reliab. R-29 (1) 2 (April 1980). This paper identifies a number of related difficulties, some of which are still unsolved. Attention is drawn to failings in the type of pressure-flow model commonly used in the literature. Difficulties also exist when published algorithms are applied to control loops. These are illustrated for simple and cascade control applications and discussed in some detail. Eight general conclusions are: 1. The concept of 2-way flow of information in failure models is important in certain situations, e.g. fluid flow. 2. The accuracy of failure models is generally low. This reflects the fact that much of the effort expended in systematic failure analyses has been heavily oriented towards algorithms. 3. Models used in failure analyses do not have to be comprehensive. Only the credible set of events is needed. 4. No always-satisfactory algorithm has been published for fault-tree synthesis where control loops are encountered. 5. The control loop problem is inextricably interlinked with the general difficulty that fault-tree methodology is primarily oriented to binary systems where the time dimension can be ignored. 6. Fault-tree methodology uses simple models to approximate system failures. If these failures are complex then fault trees might not be suitable. The results of analyses involving complex failures must be treated with great care. 7. When fault-tree methodology is not completely suitable one ought to consider using a different technique altogether. The cause-consequence diagram might be appropriate since it can be used to study failure modes where time is important. 8. Algorithms must be carefully examined and properly validated before widespread use of computer-aided fault-tree synthesis.
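The zero-failure case of the Swaminathan-Rajagopalan-Chacko problem has the closed form n ≥ ln(1 − C)/ln(R), and the few-failures case can be solved by stepping n until the binomial tail drops below 1 − C. The sketch below is a guess at the kind of calculation their program performs, not a port of it.

```python
import math

def required_trials(R, conf, failures=0):
    """Smallest n such that observing at most `failures` failures in n trials
    demonstrates reliability R at confidence level `conf` (binomial model)."""
    n = failures + 1
    while True:
        tail = sum(math.comb(n, i) * (1.0 - R) ** i * R ** (n - i)
                   for i in range(failures + 1))
        if tail <= 1.0 - conf:
            return n
        n += 1

# Zero-failure case agrees with the closed form ceil(ln(1 - C) / ln(R)):
n0 = required_trials(0.90, 0.95)              # all-success demonstration
n1 = required_trials(0.90, 0.95, failures=1)  # one failure tolerated costs more trials
```

For R = 0.90 at 95% confidence this gives 29 all-success trials, rising to 46 if one failure is to be tolerated, reproducing the kind of trade-off the paper's graphs display.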
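Andow's conclusion 5, that fault-tree methodology is oriented to binary systems where time can be ignored, refers to the basic static gate algebra sketched below for s-independent basic events; the tree and the event probabilities are invented for illustration.

```python
def and_gate(*ps):
    """Output event occurs only if every input event occurs (s-independent)."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def or_gate(*ps):
    """Output event occurs if at least one input event occurs (s-independent)."""
    miss = 1.0
    for p in ps:
        miss *= (1.0 - p)
    return 1.0 - miss

# Toy tree: TOP = OR( AND(pump fails, standby fails), control-loop fault )
top = or_gate(and_gate(0.01, 0.02), 0.001)
```

A static tree like this has no notion of event ordering or duration, which is why the abstract points to cause-consequence diagrams when failure modes depend on time.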