A REVIEW OF ERROR PROPAGATION ANALYSIS IN SYSTEMS

WAY KUO
Engineering Physics Division, Union Carbide Corporation, Nuclear Division; currently with Bell Telephone Laboratories, Inc., Crawfords Corner Road, Holmdel, New Jersey 07733, United States of America

and

V. R. R. UPPULURI
Mathematics and Statistics Research Division, Union Carbide Corporation, Nuclear Division

(Received for publication 2nd November 1982)

Microelectronics and Reliability, © 1983 Pergamon Press Ltd

ABSTRACT

Error propagation analysis in reliable systems has been widely studied. Unlike classical sensitivity analysis, which investigates the range of system performance, error propagation analysis studies the distribution function of system performance; it is therefore essentially a statistical analysis. This paper reviews and classifies current research articles on error propagation. An overview is presented of error propagation applied to various systems and models, and a standard analysis procedure is given. Finally, several conclusions are drawn. It is recommended that i) basic research on error propagation be carried out, ii) an efficient (least-cost) method be developed to analyze large-scale problems, and iii) human error be included in system modeling. In our opinion, error propagation analysis should be treated as part of the decision-making procedure in system analysis. It is extremely important for expensive or rare-event systems, and this report can benefit those who analyze such systems.

Keywords: Error Propagation, System Performance

1. INTRODUCTION

An analyst is often faced with predicting the performance of a system prior to its construction and use. The complexity of nuclear power plants, for example, has led to concern among the public and scientific researchers. Various techniques for the reliability evaluation of complex systems have been widely studied; a recent survey by Hwang, Tillman, and Lee [19] reviews several of them. For low-failure-rate nuclear systems, logic diagrams have been used extensively.[13] Whether or not we are concerned with a nuclear system, the system failure rate evaluation, although important, may not be the only consideration in decision making. An analysis in the design stage is generally necessary to determine whether the reliability code is structurally stable or whether any physical or empirical constraints can be exceeded.[18]

The effect on system performance (reliability is one kind of performance measure) of uncertainties in its components is also of interest. The sources of these uncertainties include the possibilities that i) the model used has been incorrectly specified, ii) the correct values of the components are not known with confidence, iii) the system performance is evaluated differently because of changes in environmental conditions, and iv) human error is heavily involved at various stages. System performance uncertainties contributed by its components have been termed "propagation of uncertainties" or, equivalently, "error (variance) propagation." Other terms, such as "imprecision analysis," "tolerance analysis," and "function of random variables," have also been adopted in various studies. The term used in this report is error propagation analysis, which is different from classical sensitivity analysis. In classical sensitivity analysis of a system, we are interested in evaluating the effect on system performance of variations in its components' specifications.

In error propagation analysis, however, we are interested in determining the range of system performance, given the ranges of the components' specifications. Both component and system performance measures are treated as random variables. Therefore, error propagation is a statistical estimation problem, while classical sensitivity analysis is an algebraic one. In addition, while classical sensitivity analysis has been widely studied ([17, 39] contain good discussions of the subject), error propagation analysis has not. Error propagation analysis is also different from perturbation theory, which is well known in physics. Perturbation theory discusses the stability of physical phenomena under the condition that the variables of interest are perturbed by small values. In error propagation, however, we investigate the variation in the distributions of both the components' errors and the output performance. The objective of error propagation analysis is to estimate the system performance variation resulting from components treated as random variables.

This report surveys recent work in error propagation, classifies and reviews articles and reports, sets up analysis procedures, outlines application problems, and suggests future investigations. The systems considered here are very general. However, nuclear systems are emphasized because the induced top-event error is of primary interest to scientists in nuclear safety analysis. The analysis procedures are also applicable to many other systems as long as the variation of output performance is important. Examples of non-nuclear problems include long-term economic plans and modern telecommunication systems.


2. PROBLEMS

Suppose that a system performance, Y, is a function of its component performances, Xi, i = 1, 2, ..., n, through

Y = f(X1, X2, ..., Xn).    (1)

In equation (1), Y may be regarded as system reliability (unreliability) [19], system availability (unavailability) [2, 21, 42], the product of availability and reliability [38], a physical measurement such as a cross-section evaluation [12, 14], a biological measurement such as a dose measurement [35], a management information system quantity such as a long-term profit prediction [34], the top-event performance in fault-tree analysis [13, 32], or any performance measure of interest. Similar interpretations apply to Xi, i = 1, 2, ..., n. The function f in equation (1) can be in either analytical or empirical functional form. It is formulated from various system structures (such as a network, a fault tree, or a complex system configuration), economic or physical linkages, biological flow paths, or a computer code. The number of components involved in the construction of f is denoted by n.

For error propagation analysis, we assume that the Xi's are random variables with analytical or empirical distribution functions. As a function of the Xi's, Y is also a random variable. Typically n is a large number. A typical problem in error propagation analysis is what the variation of Y would be, given the uncertainties of the Xi, i = 1, 2, ..., n. Specifically, we may want to determine the following quantities:

i) Pr[Y > c], where c is a constant, given the Xi's,

ii) the probability density function (pdf) of Y, or

iii) a confidence interval for Y.

Essentially, Y is treated as a random variable. Note that in classical reliability analysis, Y is no more than an undetermined constant.

A second problem in error propagation is to evaluate the relative importance of the Xi's. An important Xi has a great effect on the distribution of Y, rather than on Y itself. For example, an Xi may have a large variation and enter prominently into Y, yet cause little variation in Y even when Xi changes significantly; such an Xi is of negligible importance. On the other hand, a small variation in some Xi may cause significant variation in Y, and that Xi will rank highly even if it plays a small role in constructing f. Since, for every Xi and Xj, i ≠ j, the pair may be s-dependent as well as s-independent, and since n is usually a large number, ranking the Xi's is pertinent and presents a problem.
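The three quantities above can be illustrated with a small Monte Carlo sketch. The function f, the component distributions, and the threshold c below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def f(x1, x2, x3):
    # Hypothetical system structure function (illustration only).
    return x1 * x2 + x3

n_samples = 100_000
x1 = rng.normal(1.0, 0.1, n_samples)   # component performances as random variables
x2 = rng.normal(2.0, 0.2, n_samples)
x3 = rng.normal(0.5, 0.05, n_samples)
y = f(x1, x2, x3)                      # Y is itself a random variable

c = 3.0
prob_exceed = float(np.mean(y > c))                   # i)   Pr[Y > c]
pdf, edges = np.histogram(y, bins=50, density=True)   # ii)  empirical pdf of Y
lo, hi = np.quantile(y, [0.025, 0.975])               # iii) 95% interval for Y

print(prob_exceed, lo, hi)
```

Each quantity is a statistical estimate whose precision grows with the number of samples, which is exactly why the paper treats error propagation as a statistical rather than algebraic problem.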

Given the maximum allowed variation in Y, a third problem confronting decision makers is how to specify tolerable variations of the Xi's by economically utilizing current knowledge about them. In the design stage this is of the utmost importance. No solution to this problem has yet been studied.

3. ANALYSIS PROCEDURES AND STATISTICAL MODELS

To approach the problems presented in the previous section, four basic analysis procedures are recommended:

i) model setup,

ii) screening of the Xi's, i = 1, 2, ..., n, leaving only the important Xi's, i = 1, 2, ..., N, where hopefully N << n,

iii) application of statistical design models, and

iv) construction of the pdf of the calculated consequence.

In situations where n is small, step ii) is often skipped.[15, 23, 38] It is also common for steps ii) and iii) to be combined in the analysis; for a typical example of such a combined analysis, see Cox.[7] When Monte Carlo simulation is selected, steps iii) and iv) are sometimes also combined.[7, 8] In all cases, construction of the pdf and its associated consequences is the goal for inferences made on Y.

Various statistical models have been adopted for these steps in the analysis of error propagation. These models are outlined and classified below.

Model Setup

As the first step in investigating error propagation in a system, model setup includes construction of the function f and establishment of the uncertainties of the Xi's. Both tasks require a thorough understanding of the system; extensive experience and theoretical judgement are equally important. The selection of f differs from system to system and is called system modeling. Given the function f, the uncertainties of the Xi's have often been assigned subjectively by researchers, for example by assigning a lognormal distribution.[34] Human error is largely involved in model setup because of the uncertainties in selecting f and in assigning the components' variations. To simplify the discussion, the following steps assume a well-set-up model.

Screening

Screening involves selection of important variables and deletion of unimportant ones.

Today, the most reliable, but least elegant, screening procedure remains the direct method.[29] This method selects as the important variables those Xi's whose variations produce the largest change in Y. This steepest-ascent approach is written mathematically as

Important Xi = large |Xi (∂Y/∂Xi)|, for all i,    (2)

for the continuous case, or

Important Xi = large |Xi (ΔY/ΔXi)|, for all i,    (3)

for the discrete case. Taking the relative variation Xi ∂Y/∂Xi (or Xi ΔY/ΔXi) makes this approach depend on both the location and the distribution of Xi. Because of the number of computer runs required, the direct method is costly for large n and/or steep ascents of f at various locations of the Xi's.

Several recent methods attempt to determine the importance of the Xi's for large n using fewer computer runs. One of them is the matrix approach. Let b be the vector of sensitivity coefficients with respect to the n components. The solution of the set of linear equations

b = X⁻¹ ΔY    (4)

leads to the determination of the important components. In equation (4), ΔY is the variation vector containing the changes in Y over N computer runs. The disadvantages of this approach are that i) a solution is not guaranteed, and ii) one may end up solving an ill-conditioned linear system.

A third screening method applies statistical regression techniques. Through a stepwise correlation analysis (backward or forward regression), the components important to the value of Y are selected. Again, the method is costly and inefficient for large n.

The most promising fast-running screening method is the adjoint method. In this approach, the function f is no longer treated as a black box, as it is in the previous three screening approaches, and the exact sensitivity coefficients can be evaluated. While one run of the previous screening methods gives the sensitivity of all responses to one parameter, one run of the adjoint method gives the sensitivities of one response to all parameters. This method is still under development and hard to follow. These four screening approaches are compared in Table 1.
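The direct method of equation (2) can be sketched with central finite differences around nominal component values; the system function and nominal values below are hypothetical:

```python
import numpy as np

def f(x):
    # Hypothetical system function of three components (illustration only).
    return x[0] ** 2 + 10.0 * x[1] + 0.1 * x[2]

def importance(f, x0, h=1e-6):
    """Rank components by |Xi * dY/dXi|: two runs of f per component."""
    scores = np.empty(len(x0))
    for i in range(len(x0)):
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        dy_dxi = (f(xp) - f(xm)) / (2.0 * h)   # central difference
        scores[i] = abs(x0[i] * dy_dxi)        # relative sensitivity of eq. (2)
    return scores

x0 = np.array([1.0, 1.0, 1.0])  # nominal component values
scores = importance(f, x0)
ranking = np.argsort(scores)[::-1]  # most important component first
print(scores, ranking)
```

The 2n evaluations of f per ranking are exactly the cost the text warns about: when each run of f is an expensive computer code and n is large, this brute-force loop becomes prohibitive.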

Statistical Design

Once screening is done, the problem of n components is reduced to one of N components, with presumably N << n. The n − N components that are determined to be of negligible importance to the variation of the top consequence Y can be treated as constants in the statistical design. Unless n is a small number or the screening procedure has not been employed, only these N variables need be dealt with in the statistical design stage.

Table 1. Classification of Various Screening Methods in the Study of Error Propagation

Method                  Comments                                                      References
Direct Method           Gives exact importance ranking of independent Xi's,           3, 5, 26, 29
                        but very expensive for large n.
Matrix Approach         Deals with a highly underdetermined system of linear          1, 12, 14, 21
                        equations.
Statistical Regression  Importance ranking is not uniquely determined. Extremely      29
                        expensive and inefficient for large n.
Adjoint Method          Not easy to follow, but requires fewer runs, and exact        1, 30, 31
                        importance ranking is guaranteed.

Several approaches are available. First of all, a classical approach has been widely used which applies the basic idea of accumulating the uncertainties contributed by the Xi's. In this approach, the variation of Y is formulated through the physical structure of f. For example, whenever f consists of simple summations, the variance of Y is the sum of the variances of the Xi's. In the classical approach, the standard deviation (or variance), coefficient of variation, kurtosis, and skewness are regarded as indices of the variations. The biggest disadvantage of this approach is that the variation evaluation fails if the physical structure f relating Y to the Xi's is not available. The approach also does not yield accurate information on the shape of the variation. Nevertheless, it does provide a preliminary feeling for the range of the variation. Many textbooks (for example, [10, 39]) deal with the classical approach in solving engineering problems, and the approach is also well known in chemical analysis.

A second approach applies the Taylor series expansion to determine the mean and variance of Y.

Tukey [40], probably the first to use this technique in error propagation, approximated f by an rth-degree polynomial through a Taylor series expansion. The lower-order terms of the expansion (up to first or second order are the popular choices) are used in the calculation. This approach can, to a certain extent, deal with dependence among the Xi's. However, because differentiation of Y with respect to the Xi's is required, an analytical form of f is usually needed, and typically only the mean and variance of Y can be obtained by this approach.

A third approach applies response surface methodology. Because the uncertainty of Y originates from the statistical variations of the Xi's, it is essential that the scheme prescribing these variations maximize their effects on the calculated Y. With the response surface method of uncertainty analysis, the perturbations of the components are carried out according to an experimental design that enables an efficient empirical exploration of the response surface. This is an effective procedure as long as it is augmented by a foldover design and star points.[20] The combination accounts for the linear and quadratic effects of the components, as well as those of the two-factor interactions between the Xi's, while requiring a small number of computer runs.[28]

The Monte Carlo method can also be used to obtain an empirical response surface equation. This method employs computerized synthetic sampling of the component information. Through repeated random procedures to generate various component information, a sufficient number of responses, Y, can be computed.

The four statistical designs discussed above are classified and listed in Table 2.

Table 2. Classification of Various Statistical Designs Used in the Study of Error Propagation

Design                   Comments                                                References
Classical Approach       Can deal with a limited number of situations, which     5, 8, 10, 14, 34, 35, 39
                         are restricted by the functional form of f.
Taylor Series Approach   Analytical structure of f is necessary, and only        2, 9, 36, 40
                         limited information on Y is obtained.
Response Surface Method  The only method that can account for the dependent      3, 10, 28, 29
                         relationships among the Xi's.
Monte Carlo Method       Expensive, but can test the independence of the Xi's.   7, 13
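The response-surface idea can be sketched as fitting a quadratic surrogate to a small set of designed computer runs and then propagating uncertainty through the cheap surrogate. The "expensive code," the nominal point, and the design below are all illustrative assumptions:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def expensive_code(x1, x2):
    # Stand-in for a costly computer code run (illustration only).
    return np.sin(x1) + x1 * x2

# Three-level full factorial design in coded units around nominal (1.0, 2.0):
# only 9 runs of the expensive code are needed.
levels = [-1.0, 0.0, 1.0]
design = np.array(list(itertools.product(levels, levels)))
runs = np.array([expensive_code(1.0 + 0.1 * a, 2.0 + 0.2 * b) for a, b in design])

# Quadratic response surface: terms 1, a, b, ab, a^2, b^2.
A = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                     design[:, 0] * design[:, 1], design[:, 0] ** 2, design[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, runs, rcond=None)

# Propagate component uncertainty through the fitted surface, not the code.
a = rng.uniform(-1.0, 1.0, 50_000)
b = rng.uniform(-1.0, 1.0, 50_000)
y = (coef[0] + coef[1] * a + coef[2] * b + coef[3] * a * b
     + coef[4] * a ** 2 + coef[5] * b ** 2)
print(y.mean(), y.std())
```

The design choice matches the point made later in the text: once the surrogate coefficients are estimated, the input densities can be changed and the uncertainty analysis repeated without any further runs of the expensive code.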

Distribution of Y

Since Y is a function of the random variables Xi, i = 1, 2, ..., N, deriving the distribution of Y from the assigned distributions of the Xi's is actually a Bayesian-type problem. The distribution of Y is also the foundation for the statistical inferences made for the decision makers. Two different approaches can be used to construct the Y-distribution.

The most popular approach is to use the moment matching technique to match the mean, variance, skewness, and kurtosis to those of some member of a family of density functions, such as the Pearson family of distributions. The moments may be derived by i) evaluating the mean of the powers of the response surface equation over the density functions of all the Xi's, or ii) direct calculation from the results of a Monte Carlo simulation. In the first method, the components' variations are most often obtained by assigning analytical distributions [28], whereas in the second method empirical distributions of the components' variations are sufficient. After the first four moments are calculated, the histogram of Y can be drawn. The moment matching technique yields a rough estimate of the Y distribution but does not identify the true distribution. Moment matching techniques and the selection of the matched distribution functions are discussed by Bowman [4] and McGrath et al.[25]

Monte Carlo simulation obtains the distribution function itself and evaluates the response surface. To apply simulation to generate the distribution function, a stratified sampling procedure is recommended, which will adequately cover all statistical fluctuations. Simulation always provides an independent statistical check of precision. Cox [7, 8] comments that Monte Carlo simulation provides no reliable way of determining whether any of the components are dominant or more important than others. Furthermore, if a change is made in the density of any input, the entire uncertainty analysis must be redone. The response surface method does not suffer this disadvantage, because the response surface equation is estimated independently of the uncertainty densities. References dealing with moment matching techniques and Monte Carlo methods in this last analysis step of error propagation are classified in Table 3.

Once the distribution function of Y is specified, statistical inferences on Y, such as lower or upper confidence intervals, can be made. On the other hand, even if the distribution function of Y is not specified, conservative confidence intervals can be obtained from the lower moments of Y. Apostolakis and Lee [2] and Gonzalez-Urdaneta and Cory [15] adopted the Chebyshev inequality to determine such conservative confidence intervals.

Table 3. Classification of Various Approaches to the Determination of the Y Distribution in the Study of Error Propagation

Approach                   Comments                                        References
Moment Matching Technique  Relatively inexpensive and can shape the        2, 4, 5, 6, 7, 8, 9, 15, 22, 27, 29, 34, 35, 36, 37
                           tails more accurately.
Monte Carlo Methods        Relatively expensive.                           7, 13, 15, 23, 25, 33, 36
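The conservative-interval idea can be illustrated with the two-sided Chebyshev inequality applied to the first two moments of Y; the sample of Y below is synthetic and the lognormal choice is only an illustrative assumption:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
y = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)  # synthetic top-event sample

mean_y = float(y.mean())
std_y = float(y.std())

# Two-sided Chebyshev: Pr[|Y - mu| >= k*sigma] <= 1/k^2, so a conservative
# interval with coverage at least 1 - alpha uses k = sqrt(1/alpha).
alpha = 0.05
k = math.sqrt(1.0 / alpha)
lo, hi = mean_y - k * std_y, mean_y + k * std_y

coverage = float(np.mean((y >= lo) & (y <= hi)))  # guaranteed >= 1 - alpha
print(lo, hi, coverage)
```

The interval holds for any distribution of Y with finite variance, which is precisely why it is conservative: the realized coverage is typically much higher than the nominal 95%.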

A Hierarchical Structure of the Analysis Model

The procedures recommended in error propagation analysis, along with the various statistical models, are outlined in Fig. 1. Notice that in Fig. 1 screening, statistical design, and determination of the Y distribution are not always in separate forms or always in sequence.

FIG. 1 - A HIERARCHICAL STRUCTURE OF ERROR PROPAGATION ANALYSIS

    MODEL SETUP
        |
    SCREENING  (1. direct method; 2. matrix approach; 3. regression technique;
        |       4. adjoint method)
    STATISTICAL DESIGN  (1. classical approach; 2. Taylor series expansion;
        |                3. response surface method; 4. Monte Carlo method)
    DISTRIBUTION OF Y  (1. moment matching technique, via the response surface
        |               method or the Monte Carlo method; 2. Monte Carlo method)
    STATISTICAL INFERENCE ON Y
        |
    DECISION MAKING

Combinations of some of these procedures are common. Also, the Monte Carlo method can be used at several stages and can be combined with analytical methods whenever necessary. In summary, the purpose of utilizing and comparing a variety of models is to make statistical inference on Y, due to the variations of the Xi's, to serve as a decision-making attribute.

4. COMPUTER CODES

Manual analysis of error propagation problems for a large system is almost impossible, and several computer codes have been developed to handle the problem. Each of these codes was designed for a special demand and a specific problem; hence all the available codes have limited usage. Some codes are listed as follows.

i) Second-Order Error Propagation Code (SOERP) [9]
This code presents the development and use of second-order error propagation equations for the first four moments of a function of independently distributed random variables.

ii) BOUNDS [24, 2]
This code calculates the propagation of moments when the underlying distributions of the components in a tree are lognormal.

iii) SAMPLE [32]
This code uses a Monte Carlo simulation model to evaluate the top-event unavailability.

iv) MARCH/CORRAL [3, 44]
Uncertainty evaluation is programmed into both the computer codes and the input data, and the uncertainties propagate through to the code output for a critical path.[32]

v) SCORE [5, 8]
A linearized procedure is used in SCORE to approach error propagation problems. This code considers the combination of random variables consisting of a systematic combination of varied components. An empirical distribution of the top-event performance in a fault tree may be obtained.

5. APPLICATIONS

In an expensive or safety-related system, variation of the top-event performance is undesirable.

Nevertheless, in the operating stage, variation of the top-event performance

Table 4 Classification of Major System Problems Approached by Error Propagation Analysis Major System Problem

References

Computer Code Analysis

26, 40

Energy Modeling

1, 31

General Engineering Systems

10, 16, 43, 45, 49

Management Information Systems

34

Nuclear Cross-Section Measurement

12, 14

Nuclear Fault-Tree Analysis

2, 5, 13, 48

Nuclear Reactor Analysis

3, 7, 8, 11, 14, 29, 30, 41, 46, 50

Radiological Assessment

14, 34, 35, 47

System Effectiveness Modelinl~

15, 23, 27, 37

Error propagation analysis

245

propagated from the uncertainties of its components performance often cannot be prevented. Hence, in the design stage, top-event and component variations deserve great concern. Analysis of error propagation, which discusses these variations, has been applied to several types of problems. Table 4 classifies the problems and their references. The table does not cover all problems and references in this field; but it presents major applications.

6. CONCLUSIONS AND DISCUSSIONS

Although extensive research has been undertaken on error propagation in various system modeling problems, great uncertainty still exists. Current methodologies are limited to the determination of the top-event error propagated from the variations of the components in a predefined model, i.e., f in this study. Even if f is deterministic, current approaches i) do not handle dependence among the components Xi (which is extremely important in long-term economic plans), ii) do not consider time-dependent performance (in a combat mission system, time is probably the key factor in evaluating performance), iii) impose too many assumptions restricting the errors incurred in the Xi's (assumptions which have never been justified), and iv) offer no discussion of the influence on decision makers of the error that propagates into the top-event performance. In addition, whenever the number of components in a system of interest is large, existing methods for error propagation are very costly. Furthermore, the structure of f is not always known: there is always uncertainty about the model itself, and quite possibly that uncertainty in f propagates major variation into Y.

In conclusion, much research needs to be done. Some general directions are recommended as follows.

1. Basic statistical research on error propagation should be started so that concrete information about Y, through different structures of f and the Xi's, may be obtained. Decision theory also needs to be extended to combine this concrete information with a possible loss function associated with the top-event performance.

2. An efficient (least-cost) method is required to estimate the variation of Y. There is no guarantee that current screening methods can significantly delete unimportant Xi's. There is a risk that an important Xi will be removed from consideration, and that even after screening the number of components in a system will still be large. Current screening methods do not provide answers to these problems.

3. In practical system modeling, human error may be the biggest contributor to the variation of the output performance. Therefore, a method should be developed to account for human error. Although the analytical form of human error is unknown (and may never be known), recognition of the scattering of human error into the top-event problem is extremely important.


7. ACKNOWLEDGEMENTS

Uppuluri's research is sponsored by the Division of Risk Analysis, Nuclear Regulatory Commission, under Interagency Agreement No. DOE 40-551-75 with the U.S. Department of Energy, under contract W-7405-eng-26 with the Union Carbide Corporation. Kuo acknowledges the support of the 1981 Summer Faculty Research Participant project of the Oak Ridge National Laboratory for the U.S. Department of Energy.

REFERENCES

[1] Alsmiller, R. G., Jr., et al., Interim Report on Model Evaluation Methodology and the Evaluation of LEAP, Report ORNL/TM-7245, Union Carbide Corporation, Nuclear Division, 1980.

[2] Apostolakis, G., and Lee, Y. T., "Methods for the Estimation of Confidence Bounds for the Top-event Unavailability of Fault Trees," Nuclear Engineering and Design, Vol. 41, pp. 411-419 (1977).

[3] Baybutt, P., and Kurth, R. E., Uncertainty Analysis of Light Water Reactor Meltdown Accident Consequences: Methodology Development, Battelle Columbus Laboratories, 1978.

[4] Bowman, K. O., One Aspect of the Statistical Evaluation of a Computer Model, Report ORNL/CSD-52, Union Carbide Corporation, Nuclear Division, 1980.

[5] Colombo, A. G., "Uncertainty Propagation in Fault-tree Analysis," in Synthesis and Analysis Methods for Safety and Reliability Studies (Apostolakis, G., Garribba, S., and Volta, G., eds.), N.Y.: Plenum Pub., 1980.

[6] Colombo, A. G., and Jaarsma, R. J., "A Powerful Numerical Method to Combine Random Variables," to appear as EUR report.

[7] Cox, N. D., "Comparison of Two Uncertainty Analysis Methods," Nucl. Sci. Eng., Vol. 64, pp. 258-265 (1977).

[8] Cox, N. D., and Cermak, J. O., "Uncertainty Analysis of the Performance of Complex Systems," Energy Sources, Vol. 1, pp. 339-359 (1974).

[9] Cox, N. D., and Miller, C. F., User's Description of Second-order Error Propagation (SOERP) Computer Code for Statistically Independent Variables, Report TREE-1216, Idaho National Engineering Laboratory, 1978.

[10] Crandell, K. C., and Seabloom, R. W., Engineering Fundamentals, N.Y.: McGraw-Hill, 1970.

[11] Denning, R. S., Cybulskis, P., Wooten, R. O., Baybutt, P., and Plummer, A. M., "Methods for the Analysis of Hypothetical Reactor Meltdown Accidents," Proc. ANS Topical Meeting on Thermal Reactor Safety, Idaho, 1977.

[12] Dragt, J. B., Dekker, J. W. M., Gruppelaar, H., and Janssen, A. J., "Methods of Adjustment and Error Evaluation of Neutron Capture Cross Sections; Application to Fission Product Nuclides," Nucl. Sci. Eng., Vol. 62, pp. 117-129 (1977).

[13] Fault Tree Handbook, NUREG-0492, U.S. Nuclear Regulatory Commission, 1981.

[14] Gerstl, S. A. W., Dudziak, D. J., and Muir, D. W., "Cross-Section Sensitivity and Uncertainty Analysis with Application to a Fusion Reactor," Nucl. Sci. Eng., Vol. 62, pp. 137-156 (1977).

[15] Gonzalez-Urdaneta, G. E., and Cory, B. J., "Variance and Approximate Confidence Limits for Probability and Frequency of System Failure," IEEE Trans. Reliability, Vol. R-27, pp. 289-293 (1978).

[16] Herd, G. R., Madison, R. L., and Gottfried, P., "The Uncertainty of Reliability Assessments," Proc. 10th Natl. Symp. on Rel. and Q.C., Washington, D.C., pp. 33-40 (1964).

[17] Hillier, F. S., and Lieberman, G. J., Introduction to Operations Research, San Francisco: Holden-Day, 1980.

[18] Himmelblau, D. M., and Bischoff, K. B., Process Analysis and Simulation: Deterministic Systems, N.Y.: Wiley, 1968.

[19] Hwang, C. L., Tillman, F. A., and Lee, K. H., "A Review of Complex System Reliability Evaluation," submitted to IEEE Trans. Reliability, 1981.

[20] Kleijnen, J. P. C., Statistical Techniques in Simulation, Part II, N.Y.: Marcel Dekker, 1975.

[21] Krieger, T. J., et al., "Statistical Determination of Effective Variables in Sensitivity Analysis," Trans. Am. Nuclear Science, Vol. 28, p. 515 (1977).

[22] Ku, H. H., "Notes on the Use of Propagation of Error Formulas," J. Research Natl. Bur. Stand., pp. 263-273 (1966).

[23] Kuo, Way, "Simulation on Bayesian Availability," 1981 Fall ORSA/TIMS National Meeting, October 11-14, 1981.

[24] Lee, Y. T., and Apostolakis, G., "Probability Intervals for the Top Event Unavailability of Fault-Trees," Report UCLA-ENG 7663, University of California, Los Angeles, 1976.

[25] McGrath, E. J., et al., "Techniques for Efficient Monte Carlo Simulation," Vol. 1, Report ORNL-RSIC-38, Oak Ridge National Laboratory (1975).

[26] McKay, M. D., et al., "Report on the Application of Statistical Techniques to the Analysis of Computer Codes," Report LA-NUREG-6526-MS, U.S. Nuclear Regulatory Commission (1976).

[27] Murchland, J. D., and Weber, "A Moment Method for the Calculation of a Confidence Interval for the Failure Probability of a System," Proc. Annual Rel. & Maint. Symp., 1972.

[28] Myers, R. H., Response Surface Methodology, N.Y.: Allyn and Bacon, 1975.

[29] Nguyen, D. H., "The Uncertainty in Accident Consequences Calculated by Large Codes due to Uncertainties in Input," Nucl. Tech., Vol. 49, pp. 80-91 (1980).

[30] Oblow, E. M., "Sensitivity Theory for Reactor Thermal Hydraulics Problems," Report ORNL/TM-6303, Oak Ridge National Laboratory, 1978.

[31] Perey, F. G., Contribution to Screening Methodology, Report ORNL-5744, Union Carbide Corporation, Nuclear Division, 1981.

[32] Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants, WASH-1400 (NUREG-75/014), 1975.

[33] Rettig, W. H., Radd, M. E., Vesely, W. E., and Ybarrondo, L. J., "A Monte Carlo Uncertainty Estimate for a Complex System of Equations," Proc. ANS Meeting, Michigan, 1973.

[34] Schaeffer, D. L., and Hoffman, F. O., "Uncertainties in Radiological Assessments - A Statistical Analysis of Radioiodine Transport via the Pasture-Cow-Milk Pathway," Nuclear Technology, Vol. 45, pp. 99-106 (1979).

[35] Schwarz, G., and Hoffman, F. O., "Imprecision of Dose Predictions for Radionuclides Released to the Environment: An Application of a Monte Carlo Simulation Technique," Environment International, Vol. 4 (1980).

[36] Smith, D. E., "A Taylor's Theorem - Central Limit Theorem Approximation: Its Use in Obtaining the Probability Distribution of Long-Range Profit," Management Science, Vol. 18, pp. B214-B219 (1971).

[37] Thompson, M., "Lower Confidence Limits and a Test of Hypotheses for System Availability," IEEE Trans. Reliability, Vol. R-15, pp. 32-36 (1966).

[38] Tillman, F. A., Kuo, Way, and Hwang, C. L., "A Numerical Simulation of the System Effectiveness - A Renewal Theory Approach," Proc. 1982 Annual Reliability and Maintainability Symposium, Los Angeles, pp. 252-261 (1982).

[39] Tomović, R., and Vukobratović, M., General Sensitivity Theory, N.Y.: American Elsevier, 1972.

[40] Tukey, J. W., "The Propagation of Errors, Fluctuations, and Tolerances," Statistical Techniques Research Group, Department of Mathematics, Princeton University, 1954.

[41] Vaurio, J. K., "Reactor Development Program Progress Report," Report ANL-RDP-76, Argonne National Laboratory, 1978.

[42] Vesely, W. E., "Reliability Quantification Techniques Used in the Rasmussen Study," Reliability and Fault Tree Analysis, pp. 775-803 (1975).

[43] Watson, I. A., "The Rare Event Dilemma and Common Cause Failures," Proc. 1982 Annual Reliability and Maintainability Symposium, pp. 5-10 (1982).

[44] Wooten, R. O., and Avci, H. I., "MARCH Code Description and User's Manual," Report NUREG/CR-1711, 1980.

[45] Barry, B. A., Errors in Practical Measurement in Science, Engineering and Technology, N.Y.: Wiley, 1978.


[46] Canali, S., and Olivi, L., "A Statistical Library for the Response Surface Methodology: Application to Nuclear Safety," COMPSTAT 1980, pp. 1-7 (1980).

[47] Goldstein, R. A., and Kaolo, F. R., "Biological Risk Uncertainty Analysis," presented at Intl. Symp. Energy and Ecol. Modeling, Kentucky, April 20-23, 1981.

[48] The Kuljian Corporation, Sensitivity Analysis for Fault Tree/Event Tree Calculated Risk, Report to EPRI, contract No. TSA 81-425, 1981.

[49] Mazumdar, M., Marshall, J. A., and Chay, S. C., "Propagation of Uncertainties in Problems of Structural Reliability," Nuclear Engineering and Design, Vol. 50, pp. 163-167 (1978).

[50] Olivi, L., "Response Surface Methodology in Risk Analysis," in Synthesis and Analysis Methods for Safety and Reliability Studies (Apostolakis, G., Garribba, S., and Volta, G., eds.), N.Y.: Plenum Pub., pp. 313-327, 1980.