Journal Pre-proof

A Monte Carlo framework for probabilistic analysis and variance decomposition with distribution parameter uncertainty

John McFarland, Erin DeCarlo

PII: S0951-8320(19)30744-6
DOI: https://doi.org/10.1016/j.ress.2020.106807
Reference: RESS 106807

To appear in: Reliability Engineering and System Safety

Received date: 7 June 2019
Revised date: 14 November 2019
Accepted date: 18 January 2020

Please cite this article as: John McFarland, Erin DeCarlo, A Monte Carlo framework for probabilistic analysis and variance decomposition with distribution parameter uncertainty, Reliability Engineering and System Safety (2020), doi: https://doi.org/10.1016/j.ress.2020.106807

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2020 Published by Elsevier Ltd.

Highlights

• Limited data for input distributions creates uncertainty in probabilistic results
• Distribution parameter uncertainty isolated through standard normal representation
• Relative contributions to uncertainty in probability of failure are evaluated
• Data prioritization for fatigue crack growth analysis is demonstrated


A Monte Carlo framework for probabilistic analysis and variance decomposition with distribution parameter uncertainty

John McFarland*, Erin DeCarlo

Southwest Research Institute, 6220 Culebra Rd, San Antonio, TX, 78238, US

Abstract

Probabilistic methods are used with modeling and simulation to predict variation in system performance and assess risk due to randomness in model inputs such as material properties, loads, and boundary conditions. It is common practice to assume that the input distributions are known. However, this discounts the epistemic uncertainty in the values of the distribution parameters, which can be attributed to the availability of limited data to define the input distributions. This paper proposes a Monte Carlo framework for unified treatment of both aleatory and epistemic uncertainty types when assessing system performance and risk. A Bayesian philosophy is adopted, whereby epistemic uncertainty is characterized using probability theory. Several computational approaches are outlined for propagation and sensitivity analysis with distribution parameter uncertainty. As a result of the outlined framework, the overall influence of epistemic uncertainties can be quantified in terms of confidence bounds on statistical quantities such as failure probability, and the relative influence of each source of epistemic uncertainty is quantified using variance decomposition. The proposed methods are demonstrated using both an analytical example and a fatigue crack growth analysis.

Keywords: Probabilistic analysis, uncertainty quantification, epistemic uncertainty, Monte Carlo, sensitivity analysis, variance decomposition

* Corresponding author. Email address: [email protected] (John McFarland)

Preprint submitted to Reliability Engineering & System Safety, January 22, 2020

1. Introduction

Engineering design and analysis often employ modeling and simulation to predict the performance of a component or system. It is well known that inputs to such models are subject to a variety of sources of uncertainty. Developing an understanding of how these uncertainties influence model-based predictions provides a foundation for improved understanding and decision-making. This includes risk and reliability analysis, where the objective is to determine the probability that a system will achieve satisfactory performance, in view of the associated uncertainties. A related area is sensitivity analysis, which has been defined as "the study of how the uncertainty in the output of a model (numerical or otherwise) can be apportioned to different sources of uncertainty in the model input" [1, p. 45].

Uncertainty is typically classified using two categories: aleatory (irreducible) and epistemic (reducible). Aleatory uncertainty is associated with inherent randomness and is manifested by repeated measurements of the same physical quantity that do not yield the same value, whereas epistemic uncertainty is associated with having imperfect information or limited data. For example, in fatigue and fracture analysis, the fatigue lives of nominally identical material specimens exposed to nominally identical loading scenarios can

show as much as one or two orders of magnitude of variation in the number of cycles to failure. This can be due to material variations occurring on the microstructural level, including the sizes, boundaries, and crystallographic orientations of individual grains, voids, inclusions, and surface imperfections due to specimen preparation [2, 3]. This variation may be referred to as irreducible because it is inherent to a particular class of materials or manufacturing process: it cannot be reduced by performing more fatigue tests. On the other hand, fatigue life is also subject to epistemic uncertainty, due to having imperfect information in the form of limited test data. The key distinction is that the epistemic uncertainty can be reduced by performing additional tests. The aim of this paper is to demonstrate a unified framework for uncertainty propagation and variance decomposition that is capable of separating the two types of uncertainty. Separation of the two types of uncertainty can provide additional information to better support decisions that require quantified confidence in model predictions. Further, the separation enables a novel application of variance-based sensitivity methods that provides an intuitive interpretation in terms of which sources of reducible uncertainty have the greatest contribution towards imperfection in one’s state of knowledge about the predicted performance. This paper adopts a Bayesian approach to modeling epistemic uncertainty. The key elements of the Bayesian philosophy are that (a) because we are uncertain about the true value of parameters, we will consider them to be random variables, and (b) probability statements about parameters must be interpreted as degree of belief [4]. This viewpoint provides several practical


advantages for uncertainty quantification. First, Bayes’ theorem provides a formal mechanism for quantifying how one’s state of knowledge changes as new observations become available. Second, the use of probabilistic representation of uncertainty makes it possible to apply a variety of established computational tools such as Monte Carlo simulation for uncertainty propagation and variance decomposition for sensitivity analysis. We focus specifically on epistemic uncertainty that is associated with probability distribution parameters, although the framework can also be applied towards other model parameters that are subject to epistemic uncertainty. Central to our proposed Monte Carlo framework is the representation of the performance function in terms of standard normal random variables, which has several advantages. First, the standard normal representation establishes a functional relationship between the model output and all uncertain variables, including the probability distribution parameters. This makes it possible to consider decomposition of variance with respect to the distribution parameters. Second, this representation facilitates an unambiguous description of the nested Monte Carlo procedure for treatment of distribution parameter uncertainty. Other previous work [5, 6] has employed similar transformations for the purpose of computing sensitivity indices for dependent variables, whereas we use the transformation to establish a functional relationship between probability distribution parameters and model output. The contributions of this work are as follows. First, the treatment of probability distribution parameter uncertainty with Monte Carlo methods for reliability analysis is placed within a unified framework through the use of a standard normal representation of the performance function. Through


this formulation, we draw connections to two (single- and double-loop) Monte Carlo procedures for uncertainty propagation as well as two corresponding procedures for decomposition of variance. Second, we expand on our previous work [7, 8, 9] for variance decomposition with aleatory and epistemic uncertainties to provide a more formal development and explicit outline of the Monte Carlo computational procedure for decomposition of the variance in a statistical quantity (e.g., probability of failure). Finally, we present a detailed engineering application that illustrates the treatment of probability distribution parameter uncertainty due to limited data for both uncertainty propagation and decomposition. Previous research has recognized that summaries of system performance such as probability of failure depend on the value of the uncertain distribution parameters [10, 11, 12]. Der Kiureghian [12, 13] adopted a Bayesian approach towards representation of epistemic uncertainty, observing that when epistemic uncertainty is modeled using random variables, then summaries such as the probability of failure and reliability index are themselves random variables: “because of the uncertainties arising from estimation error and model imperfection, pf [probability of failure] and β [reliability index] are themselves uncertain and can only be assessed in a probabilistic sense, i.e., through probability distributions.” This leads to the observation that characterization of the distribution of the reliability index requires a “nested application” of the reliability methods. Point estimators of the reliability index that reflect both the aleatory and epistemic uncertainties were proposed. Zhang and Mahadevan [10] also showed that under parameter uncertainty, the probability of failure can be modeled as a random variable. As with Der


Kiureghian, they also observed that characterization of the distribution of probability of failure is a nested reliability analysis problem. They proposed an approach for Bayesian updating of the epistemic uncertainties based on reliability testing performed to directly estimate failure probability. Borgonovo et al. [11] addressed epistemic uncertainties from the perspective of probabilistic safety assessment, which involves characterizing system risk in terms of basic event probabilities. They showed explicit dependence of the risk metric on model parameters that are subject to epistemic uncertainty (i.e., parameters defining probabilities associated with the basic events). They also applied global sensitivity analysis to decompose the variance in the risk metric with respect to the model parameters. Other than the published solutions to the NASA Uncertainty Quantification Challenge (see Section 5), this is the only direct application of decomposition of variance in probability of failure that we are aware of. The consideration and separation of aleatory and epistemic uncertainties has been especially prevalent within the area of probabilistic risk assessment for nuclear power. Prominent examples include the NUREG-1150 study for accident risk at nuclear power plants [14], as well as performance assessments for the Waste Isolation Pilot Plant [15, 16, 17] and the Yucca Mountain waste disposal facility [18]. These studies have employed double-loop Monte Carlo sampling approaches to evaluate the effect of epistemic uncertainty, presenting results in terms of a suite of possible cumulative distribution functions for consequences such as radionuclide release. These efforts have also emphasized the use of expert elicitation to quantify epistemic uncertainties in probabilistic terms [19].


Because there is, at least in principle, the possibility of reducing epistemic uncertainties over time by collecting new information, having an understanding of the relative importance of various sources of epistemic uncertainty would enable one to make more informed decisions regarding resource allocation for uncertainty reduction. Previous work by Bae et al. [20] and Helton et al. [21] proposed methods for sensitivity analysis with epistemic uncertainty using evidence theory [22]. These investigations did not consider aleatory uncertainty or probabilistic modeling. Other research [23, 24] employed the unified uncertainty analysis framework [25], in which aleatory uncertainties are modeled using probability theory and epistemic uncertainties are modeled using evidence theory. Guo and Du [23] used one-at-a-time parameter variation to explore the impacts of the epistemic uncertainties. Li et al. [24] developed a visual technique based on regional sensitivity analysis in order to analyze contributions of both aleatory and epistemic variables to failure probability. Probability distribution parameter uncertainty was not addressed in either of these works. Sankararaman and Mahadevan [26] developed a fully probabilistic auxiliary variable method that employs variance-based global sensitivity analysis to evaluate the relative contributions of both epistemic and aleatory uncertainty to the overall uncertainty in a model output quantity. We will show connections to this auxiliary variable method in Section 3. The work within the nuclear power community has also emphasized sensitivity analysis that distinguishes between aleatory and epistemic uncertainties. In particular, Helton and others [14, 27] have utilized sampling-based techniques such as scatterplots and partial correlation coefficients [28] to


evaluate the relationship between parameters subject to epistemic uncertainty and analysis results such as event frequency. Helton et al. [18] also employed similar techniques to evaluate the effect of epistemic uncertainties on expectations taken over the aleatory random variables, such as expected radioactive dose.

The remainder of this paper is organized as follows: Section 2 presents a Monte Carlo framework for treatment of distribution parameter uncertainty, using a standard normal representation of the aleatory random variables. Section 3 describes a computational procedure for application of variance-based sensitivity analysis in the presence of aleatory and epistemic uncertainties. In Section 4, two numerical examples are given: a simple "R minus S" problem is used for comparison with exact results, and then the methods are demonstrated using a small crack growth fatigue analysis. Section 5 discusses previously published work on the NASA Langley Uncertainty Quantification Challenge, which poses specific questions about factors prioritization in the presence of aleatory and epistemic uncertainty. Finally, conclusions and suggestions for future work are given in Section 6.

2. Monte Carlo analysis of multiple uncertainty types

It is common practice in the literature to use capital letters to denote random variables and lower-case letters to denote specific values of those random variables. In this paper, we follow Ref. [29] and use a tilde to denote random variables: $\tilde{x}$ is a random variable and $x$ is a specific value of that variable. This notation provides some added flexibility for working in the Bayesian context, in which many quantities may be considered random variables.

Consider a deterministic performance model with output $y$ and input vector $x = (x_1, \ldots, x_d)$, which is represented as a function: $y = g(x)$. Suppose that the inputs are random variables described by a joint probability density function $f_{\tilde{x}|\theta}(x)$, which is expressed in terms of a set of distribution parameters $\theta$. Note that the inverse distribution transform can be used to express each individual random variable $\tilde{x}_i$ in terms of a standard normal random variable $\tilde{u}_i$ as:

$$\tilde{x}_i = F_{\tilde{x}_i|\theta}^{-1}\left(\Phi(\tilde{u}_i)\right) \qquad (1)$$

where $F_{\tilde{x}_i|\theta}^{-1}(\cdot)$ is the inverse of the marginal cumulative distribution function for $\tilde{x}_i$, $\Phi(\cdot)$ is the standard normal cumulative distribution function, and $\tilde{u}_i$ is a normally distributed random variable with zero mean and unit variance. If the components of $\tilde{x}$ are statistically independent, then the random vector $\tilde{x}$ can be expressed in terms of a corresponding random vector $\tilde{u}$ of statistically independent normal random variables:

$$\tilde{x} = h(\tilde{u}, \theta) = \left( F_{\tilde{x}_1|\theta}^{-1}(\Phi(\tilde{u}_1)), \ldots, F_{\tilde{x}_d|\theta}^{-1}(\Phi(\tilde{u}_d)) \right) \qquad (2)$$

We use the function notation $h(\tilde{u}, \theta)$ to show explicitly that the transformation depends on the values of the distribution parameters $\theta$. Extension of Eq. (2) to treatment of correlated random variables is discussed in Appendix A. Through function composition of $g(\cdot)$ with $h(\cdot)$, the model output $y$ can be expressed as:

$$\tilde{y} = g\left(h(\tilde{u}, \theta)\right) \equiv g'(\tilde{u}, \theta) \qquad (3)$$
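As an illustrative aside (our sketch, not from the original paper), Eqs. (1)–(3) can be written in a few lines of Python; the two-input performance function `g` and the normal marginals assumed here are hypothetical stand-ins:

```python
import numpy as np
from scipy import stats

def g(x):
    # Hypothetical performance function y = g(x); stands in for a
    # (possibly expensive) simulation model.
    return x[..., 0] - x[..., 1]

def h(u, theta):
    # Eq. (2): map independent standard normals u_i to physical inputs via
    # x_i = F^{-1}_{x_i|theta}(Phi(u_i)); both marginals assumed normal here.
    mu1, sigma1, mu2, sigma2 = theta
    p = stats.norm.cdf(u)                          # Phi(u_i)
    x1 = stats.norm.ppf(p[..., 0], mu1, sigma1)    # F^{-1} for x_1
    x2 = stats.norm.ppf(p[..., 1], mu2, sigma2)    # F^{-1} for x_2
    return np.stack([x1, x2], axis=-1)

def g_prime(u, theta):
    # Eq. (3): composed transfer function g'(u, theta) = g(h(u, theta)),
    # making the dependence on the distribution parameters theta explicit.
    return g(h(u, theta))
```

For normal marginals the composition collapses to $\tilde{x}_i = \mu_i + \sigma_i \tilde{u}_i$, but the ppf/cdf form carries over unchanged to any marginal distribution with an invertible CDF.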

Observe that the new transfer function, $g'(\tilde{u}, \theta)$, shows explicit dependence on the distribution parameters $\theta$. Summaries of $\tilde{y}$ can now be expressed in terms of $g'(\cdot)$. For example, the expectation:

$$E[\tilde{y}] = \int_{\mathbb{R}^d} g'(u, \theta)\,\phi(u)\,du \qquad (4)$$

and the cumulative distribution:

$$F_{\tilde{y}}(y) = \int_{\mathbb{R}^d} I\left[g'(u, \theta) < y\right] \phi(u)\,du \qquad (5)$$

where $I[\cdot]$ is the indicator function, and $\phi(u)$ is the joint density function of $\tilde{u}$, given by

$$\phi(u) = \prod_{i=1}^{d} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} u_i^2} \qquad (6)$$

This formulation makes it clear that summaries of $\tilde{y}$ depend on the distribution parameters $\theta$. Typically, the distribution parameters are assumed to have known values. The Bayesian approach expresses the epistemic uncertainty in the distribution parameters using probability, so that $\theta$ is considered to be a random variable, denoted $\tilde{\theta}$. In this case, summaries of $\tilde{y}$ that depend on $\tilde{\theta}$ are themselves random variables. For example, Eq. (5) becomes:

$$\tilde{F}_{\tilde{y}}(y) = \int_{\mathbb{R}^d} I\left[g'(u, \tilde{\theta}) < y\right] \phi(u)\,du \qquad (7)$$

Next, we consider numerical methods for making inference about these summaries in light of the uncertainty in $\tilde{\theta}$. We will focus on the cumulative distribution function, $\tilde{F}_{\tilde{y}}(y)$. First, we note that the expected value of $\tilde{F}_{\tilde{y}}(y)$, also referred to as the predictive distribution, is given by:

$$E\left[\tilde{F}_{\tilde{y}}(y)\right] = \int_{\Theta} \left( \int_{\mathbb{R}^d} I\left[g'(u, \theta) < y\right] \phi(u)\,du \right) f_{\tilde{\theta}}(\theta)\,d\theta \qquad (8)$$

where $\Theta$ is the sample space of $\tilde{\theta}$ and $f_{\tilde{\theta}}(\theta)$ denotes its probability distribution.¹ The Monte Carlo estimator for this quantity is:

$$E\left[\tilde{F}_{\tilde{y}}(y)\right] \approx \frac{1}{N} \sum_{i=1}^{N} I\left[g'(u_i, \theta_i) < y\right] \qquad (9)$$

where $(u_1, \ldots, u_N)$ is a random sample drawn from the joint density function $\phi(u)$ and $(\theta_1, \ldots, \theta_N)$ is a random sample from $f_{\tilde{\theta}}(\theta)$.

¹ $f_{\tilde{\theta}}(\theta)$ is being used in a general sense to denote the probability distribution describing the current state of knowledge about $\theta$. The state of knowledge for some components of $\theta$ might be based on Bayesian updating with observed data, whereas for other components, the state of knowledge might be based solely on "prior" knowledge. As noted by Lindley [30, Section 7], the terminology is unfortunate because prior and posterior are relative terms: "today's posterior is tomorrow's prior".

The following procedure, adapted from Ref. [31], can be used to make other inferences about $\tilde{F}_{\tilde{y}}(y)$ by drawing random realizations from its probability distribution. Begin with a random sample $(u_1, \ldots, u_{N_a})$, referred to as the aleatory sample matrix, and a random sample $(\theta^{(1)}, \ldots, \theta^{(N_e)})$, referred to as the epistemic sample matrix. For each realization $\theta^{(i)}$ from the epistemic sample matrix, the Monte Carlo estimator for Eq. (5) is computed as:

$$F_{\tilde{y}}^{(i)}(y) = \frac{1}{N_a} \sum_{j=1}^{N_a} I\left[g'(u_j, \theta^{(i)}) < y\right] \qquad (10)$$

Repeated for each realization from the epistemic sample matrix, this produces $N_e$ realizations $F_{\tilde{y}}^{(1)}(\cdot), \ldots, F_{\tilde{y}}^{(N_e)}(\cdot)$, and this sample of random distribution functions can be used to make inference about $\tilde{F}_{\tilde{y}}(\cdot)$. For example, probability bounds for $\tilde{F}_{\tilde{y}}(\cdot)$ can be obtained through order statistics of the $F_{\tilde{y}}^{(i)}(\cdot)$'s. This procedure can be viewed as a nested or double-loop Monte Carlo simulation, where the Monte Carlo estimator given by Eq. (10) accounts for the aleatory random variables ($\tilde{u}$), and the random sampling of the $F_{\tilde{y}}^{(i)}(\cdot)$'s accounts for the epistemic random variables ($\tilde{\theta}$). In total, $N_a \times N_e$ evaluations of the performance function $g(\cdot)$ are required.
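As an illustrative aside (our sketch, not code from the paper), the double-loop procedure can be written compactly; the linear transfer function and the epistemic sampler below are hypothetical stand-ins for $g'(\cdot)$ and $f_{\tilde{\theta}}(\theta)$:

```python
import numpy as np

def double_loop_cdf(g_prime, sample_theta, y, d, N_a=5000, N_e=1000, seed=0):
    # Returns N_e realizations F^(i)(y) of the random CDF at the level y.
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((N_a, d))       # aleatory sample matrix (reused)
    F = np.empty(N_e)
    for i in range(N_e):                    # outer (epistemic) loop
        theta_i = sample_theta(rng)         # draw theta^(i) from f_theta
        F[i] = np.mean(g_prime(U, theta_i) < y)   # inner loop: Eq. (10)
    return F

# Hypothetical stand-ins: a linear transfer function and a parameter sampler.
gp = lambda U, th: (th[0] + th[1] * U[:, 0]) - (th[2] + th[3] * U[:, 1])
st = lambda rng: (rng.normal(15.0, 0.14), 2.0, rng.normal(10.0, 0.22), 1.0)

F = double_loop_cdf(gp, st, y=0.0, d=2)
lo, hi = np.quantile(F, [0.025, 0.975])     # order-statistic probability bounds
```

Each outer iteration consumes $N_a$ inner evaluations, for the $N_a \times N_e$ total noted above; probability bounds on $\tilde{F}_{\tilde{y}}(y)$ then follow from order statistics of the returned realizations.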

The procedure can be adapted to make inference about $\tilde{F}_{\tilde{y}}^{-1}(\cdot)$, i.e., quantiles of the distribution function. Define $y_\alpha$ to be the $100\alpha$ percentile, such that $F_{\tilde{y}}(y_\alpha) = \alpha$. For each realization $\theta^{(i)}$, we compute $y_\alpha^{(i)}$, the $100\alpha$ percentile of $\tilde{y}$. As above, the realizations $y_\alpha^{(1)}, \ldots, y_\alpha^{(N_e)}$ can be used to make inference about $\tilde{y}_\alpha$ by using, for example, order statistics to determine probability bounds.

3. Variance decomposition for multiple uncertainty types

The previous section outlines two viewpoints for treatment of multiple uncertainty types. First, the cumulative distribution function itself can be viewed as a random quantity, due to the effect of epistemic uncertainty, as shown in Eq. (7). In addition, one may consider the predictive distribution of the model output $\tilde{y}$, which accounts for both sources of uncertainty, as shown in Eq. (8). In this section, we address application of variance-based sensitivity analysis towards each of these formulations, accounting for both aleatory and epistemic uncertainty. Specifically, we focus on calculation of the Sobol' indices [32, 33] (many other sensitivity approaches have also been developed; see, for example, Refs. [34, 35, 36] for a summary).

Consider a function $q = f(w)$, where


$\tilde{w}$ is a random vector.² We are interested in quantities such as

$$S_i = \frac{V\left[E(\tilde{q} \mid \tilde{w}_i)\right]}{V(\tilde{q})} \qquad (11)$$

and

$$S_i^T = \frac{E\left[V(\tilde{q} \mid \tilde{w}_{\sim i})\right]}{V(\tilde{q})} = 1 - \frac{V\left[E(\tilde{q} \mid \tilde{w}_{\sim i})\right]}{V(\tilde{q})} \qquad (12)$$

which are referred to as the main and total effect sensitivity indices, respectively, where $w_{\sim i}$ denotes the subvector of $w$ containing all elements except $w_i$. More generally, if $w$ is partitioned into sets $s$ and $t$, the main and total effects for the set $s$ are given by $V[E(\tilde{q} \mid \tilde{s})]$ and $E[V(\tilde{q} \mid \tilde{t})]$, respectively.

² The notation is changed from Section 2 because the decomposition does not necessarily need to be performed with respect to the inputs and outputs of the performance model $g(\cdot)$.

In the context of multiple uncertainty types, one approach is to decompose the variance of $\tilde{y}$ under its predictive distribution. This approach follows naturally from the standard normal representation of the performance function, $\tilde{y} = g'(\tilde{u}, \tilde{\theta})$, in which both $u$ and $\theta$ are treated as random variables. This is equivalent to the auxiliary variable method proposed by Ref. [26], where we have chosen the auxiliary variables to be normally distributed instead of uniformly distributed. With this approach, the sensitivity indices for the components of $\tilde{u}$ represent the relative influence of aleatory randomness in the corresponding components of $\tilde{x}$,³ and the sensitivity indices for the components of $\tilde{\theta}$ represent the influence of epistemic uncertainty about the value of the distribution parameters.

³ The correspondence between $\tilde{u}_i$ and $\tilde{x}_i$ does not hold when the components of $\tilde{x}$ are correlated. In the case of correlated variables, we recommend calculation of sensitivity indices for subvectors that collect all correlated variables together. Alternatively, see Refs. [5, 6] for discussion of the calculation of sensitivity indices for dependent variables.

The second formulation is based on the viewpoint that the summaries of $\tilde{y}$ such as the expectation or cumulative distribution function are themselves random variables. Thus, the variance in these summaries can also be decomposed. In this case, the decomposition is only with respect to the components of $\tilde{\theta}$. This can be seen from Eq. (7), which depends on the random vector $\tilde{\theta}$ but is not a function of $\tilde{u}$ ($u$ appears only as a variable of integration).

Here, we briefly outline a computational procedure for estimating main and total effect indices associated with the variance decomposition of $\tilde{F}_{\tilde{y}}(y)$ with respect to the components of $\tilde{\theta}$. We suppose that $\theta$ is partitioned into mutually independent subsets $(\theta_{U_1}, \ldots, \theta_{U_T})$. The use of subsets is particularly relevant when working with distribution parameter uncertainty, as distribution parameters associated with the same random variable (e.g., its mean and standard deviation) will typically not be independent after Bayesian updating with observed data. Computing sensitivity indices for such parameters considered as a group admits simpler computational schemes, reduces the required sample sizes, and provides an intuitive interpretation in terms of the overall influence of groups of related distribution parameters.

As in Section 2, we begin by generating a random sample of the aleatory random variables, $(u_1, \ldots, u_{N_a})$. Next, we follow the "matrix exchange" procedure outlined in Ref. [1]. Two random samples of size $N$ are generated according to $f_{\tilde{\theta}}(\theta)$, which we label:

$$\Theta_A = \begin{pmatrix} \theta^{(1)} \\ \vdots \\ \theta^{(N)} \end{pmatrix} = \begin{pmatrix} \theta_{U_1}^{(1)} & \cdots & \theta_{U_T}^{(1)} \\ \vdots & \ddots & \vdots \\ \theta_{U_1}^{(N)} & \cdots & \theta_{U_T}^{(N)} \end{pmatrix} \qquad (13)$$

and

$$\Theta_B = \begin{pmatrix} \theta^{(1')} \\ \vdots \\ \theta^{(N')} \end{pmatrix} = \begin{pmatrix} \theta_{U_1}^{(1')} & \cdots & \theta_{U_T}^{(1')} \\ \vdots & \ddots & \vdots \\ \theta_{U_1}^{(N')} & \cdots & \theta_{U_T}^{(N')} \end{pmatrix} \qquad (14)$$

From $\Theta_A$ and $\Theta_B$, a "re-sample" matrix can be constructed for each subvector $\theta_{U_j}$ as

$$\Theta_j = \begin{pmatrix} \theta_{U_1}^{(1')} & \theta_{U_2}^{(1')} & \cdots & \theta_{U_j}^{(1)} & \cdots & \theta_{U_T}^{(1')} \\ \vdots & & & \vdots & & \vdots \\ \theta_{U_1}^{(N')} & \theta_{U_2}^{(N')} & \cdots & \theta_{U_j}^{(N)} & \cdots & \theta_{U_T}^{(N')} \end{pmatrix} \qquad (15)$$

Note that all columns of $\Theta_j$ are taken from $\Theta_B$, except for those columns associated with subvector $\theta_{U_j}$, which are taken from $\Theta_A$. For each row $\theta^{(i)}$ from each of these matrices, a corresponding realization $F_{\tilde{y}}^{(i)}(y)$ can be computed using Eq. (10). Using this approach, it is possible to obtain estimates for all subset main and total effect indices using $N(T+2)$ total evaluations of Eq. (10); see, for example, Refs. [1, 37, 33]. Since each evaluation of Eq. (10) requires $N_a$ evaluations of the performance function $g(\cdot)$, the procedure requires a total of $N_a N(T+2)$ evaluations of $g(\cdot)$. It is worth noting that Eq. (10) can be computed for different critical values of $y$ using the same function evaluations, so the procedure can produce estimates of sensitivity of $F_{\tilde{y}}(\cdot)$ for multiple critical performance levels using the same set of function evaluations.

Note that according to the above description, the same aleatory sample matrix is used for each calculation of $F_{\tilde{y}}^{(i)}(y)$. This establishes a functional dependence on $\theta$ that would not hold if the aleatory sample matrix were changed for each calculation. For example, two different aleatory sample matrices could produce different estimates of $F_{\tilde{y}}(y)$ for the same value of $\theta$. As shown in Ref. [7], this increases the variance in the estimates of the sensitivity indices. The use of the standard normal representation makes it possible to clearly specify what would otherwise be an ambiguous sampling scheme.
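As an illustrative aside (our sketch, not from the paper), the matrix-exchange scheme can be outlined as follows. Here `F_of_theta` is assumed to wrap the inner-loop estimator of Eq. (10) with a fixed aleatory sample matrix, `sample_subsets` is a hypothetical sampler of the $T$ independent subsets, and each subset is represented by a single column for simplicity (in general a subset spans several columns that are exchanged together); the estimators used are one common pair of pick-freeze forms:

```python
import numpy as np

def subset_indices(F_of_theta, sample_subsets, N=1000, T=2, seed=0):
    # Matrix-exchange estimates of main (S) and total (ST) effect indices of
    # a scalar statistic with respect to T mutually independent subsets.
    rng = np.random.default_rng(seed)
    A = sample_subsets(N, rng)                 # Theta_A, Eq. (13): shape (N, T)
    B = sample_subsets(N, rng)                 # Theta_B, Eq. (14)
    fA = np.array([F_of_theta(row) for row in A])
    fB = np.array([F_of_theta(row) for row in B])
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(T), np.empty(T)
    for j in range(T):
        Cj = B.copy()
        Cj[:, j] = A[:, j]                     # re-sample matrix Theta_j, Eq. (15)
        fC = np.array([F_of_theta(row) for row in Cj])
        S[j] = np.mean(fA * (fC - fB)) / var           # main effect estimator
        ST[j] = 0.5 * np.mean((fB - fC) ** 2) / var    # total effect (Jansen)
    return S, ST
```

The evaluation count works out to $N(T+2)$ calls to the statistic ($2N$ for the base matrices plus $TN$ for the re-sample matrices), matching the total quoted above.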

4. Numerical examples

4.1. R minus S problem

This example is based on the traditional "R minus S" problem in structural reliability analysis [38]. The objective is to analyze the probability that the load will exceed the capacity, where both are random variables. This probability is referred to as the probability of failure. In this example, the random variables representing load and capacity are both defined based on observed data. Since the data are limited in nature, there is epistemic uncertainty associated with the distribution parameters. The two aleatory random variables are assumed to be normally distributed. In order to facilitate comparison against exact solutions, we assume that the standard deviation for each random variable is known.

The performance function $g(x)$ is written as $y = x_1 - x_2$, and we denote the marginal probability distributions of the components of $\tilde{x}$ as $\tilde{x}_1 \sim N(\mu_1, \sigma_1^2)$ and $\tilde{x}_2 \sim N(\mu_2, \sigma_2^2)$. Because $\tilde{x}_1$ and $\tilde{x}_2$ are normally distributed and $g(\cdot)$ is a linear function, it is easy to show that $\tilde{y} \sim N(\mu_1 - \mu_2, \sigma_1^2 + \sigma_2^2)$. With $\sigma_1$ and $\sigma_2$ known, we take standard reference prior distributions for the means: $f(\mu_1, \mu_2) \propto 1$. The posterior distributions for the means are given by $\tilde{\mu}_1 \mid d \sim N(\bar{x}_1, \sigma_1^2/n_1)$ and $\tilde{\mu}_2 \mid d \sim N(\bar{x}_2, \sigma_2^2/n_2)$, where $\bar{x}_1$ and $\bar{x}_2$ denote the sample means of the observations for $\tilde{x}_1$ and $\tilde{x}_2$, and $n_1$ and $n_2$ denote the sample sizes.

For this example, the following parameters are used: $n_1 = 200$, $\bar{x}_1 = 15$, $\sigma_1 = 2$, $n_2 = 20$, $\bar{x}_2 = 10$, and $\sigma_2 = 1$.

First, we consider inference about the 2.5th percentile of $\tilde{y}$, denoted $y_{0.025}$. Since $\tilde{y}$ is normally distributed, $y_{0.025}$ can be expressed analytically in terms of the distribution parameters as

$$\tilde{y}_{0.025} = \tilde{\mu}_1 - \tilde{\mu}_2 + \Phi^{-1}(0.025)\sqrt{\sigma_1^2 + \sigma_2^2}$$

Recall that $\sigma_1$ and $\sigma_2$ are known, whereas $\tilde{\mu}_1$ and $\tilde{\mu}_2$ are random (due to epistemic uncertainty). Since $\tilde{y}_{0.025}$ is a linear function of $\tilde{\mu}_1$ and $\tilde{\mu}_2$, which are normally distributed, the variance decomposition of $\tilde{y}_{0.025}$ is given by:

$$V(\tilde{y}_{0.025}) = V(\tilde{\mu}_1) + V(\tilde{\mu}_2) = \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}$$

Thus, the main and total effect indices associated with the epistemic uncertainty in $\tilde{\mu}_1$ and $\tilde{\mu}_2$ are: $S_1 = S_1^T = 0.29$ and $S_2 = S_2^T = 0.71$. The results indicate that in order to reduce uncertainty in the 2.5th percentile of $\tilde{y}$, reducing uncertainty in $\tilde{\mu}_2$ should be prioritized over reducing uncertainty in $\tilde{\mu}_1$. In other words, collection of additional data for $\tilde{x}_2$ should be prioritized. For comparison, the numerical procedure described in Section 3 was applied with $N_a = 1,000$ and $N = 10,000$, which produced the estimates of the main and total effects shown in Table 1.

Table 1: Estimated global sensitivity indices for $\tilde{y}_{0.025}$

Random variable     Main Effect     Total Effect
$\tilde{\mu}_1$     0.270           0.286
$\tilde{\mu}_2$     0.712           0.715

Next, we consider inference about the probability of failure, $F_{\tilde{y}}(0)$. We compute 95% probability bounds on $\tilde{F}_{\tilde{y}}(0)$ in order to characterize the effect of the distribution parameter uncertainty. We employ the full double-loop Monte Carlo procedure outlined in Section 2, even though simplification would be possible, since $F_{\tilde{y}}(0)$ can be computed exactly for a given realization of $\theta$. Using $N_a = 5,000$ and $N_e = 1,000$, we obtain upper and lower 95% probability bound estimates of (0.0055, 0.0216) from the 2.5th and 97.5th percentiles of the realizations $F_{\tilde{y}}^{(1)}(0), \ldots, F_{\tilde{y}}^{(N_e)}(0)$.
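These results can be cross-checked with a few lines of Python (our sketch, exploiting the closed-form posteriors above); the small differences between the exact bounds and the quoted Monte Carlo values are attributable to the inner-loop sampling noise of the double-loop estimates:

```python
import numpy as np
from scipy import stats

n1, xbar1, s1 = 200, 15.0, 2.0
n2, xbar2, s2 = 20, 10.0, 1.0

# Exact variance decomposition of y_0.025: V = sigma1^2/n1 + sigma2^2/n2.
v1, v2 = s1**2 / n1, s2**2 / n2
print(v1 / (v1 + v2), v2 / (v1 + v2))      # 0.286 and 0.714, i.e., S1 and S2

# Posterior realizations of the failure probability F(0): sample the means
# and evaluate the exact normal CDF for each realization.
rng = np.random.default_rng(1)
mu1 = rng.normal(xbar1, s1 / np.sqrt(n1), size=100_000)
mu2 = rng.normal(xbar2, s2 / np.sqrt(n2), size=100_000)
pf = stats.norm.cdf(0.0, loc=mu1 - mu2, scale=np.sqrt(s1**2 + s2**2))
print(np.quantile(pf, [0.025, 0.975]))     # roughly (0.0068, 0.0226)
```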

4.2. Small crack growth

This numerical example is based on Ref. [39], which considered probabilistic life prediction for small fatigue crack growth in Ti-6Al-2Sn-4Zr-6Mo. Small fatigue cracks (dimensions on the order of the microstructural size scale) are recognized to grow below the long-crack growth threshold and to exhibit faster growth rates than long cracks under the same stress intensity factors [40, 41]. A significant portion of the crack growth life after crack initiation can be spent in the small crack regime [42, 43]. Further, small crack growth rates can exhibit greater variability in nominally identical materials, as compared to long cracks. Thus, the small crack growth regime can be an important consideration for probabilistic modeling of fatigue life.

Based on Ref. [39], we consider data for stress ratio $R = -0.5$ (minimum over maximum stress) for small crack growth at 260 °C. Additional details regarding the experimental procedure and data are given in Ref. [39]. The goal of this numerical example is to quantify the effect of epistemic uncertainty on prediction of crack growth, based on the use of finite data to characterize crack growth rate and initial flaw size.

The following power-law equation is used to express crack growth rate:

$$\frac{da}{dN} = e^{C}\, \Delta K^{n} \qquad (16)$$

where $a$ is crack length, $N$ is the number of fatigue cycles, $\Delta K$ is the stress intensity factor range, and $C$ and $n$ are the crack growth rate parameters. For the selected stress ratio, Ref. [39] provides $(C, n)$ pairs for seven small crack growth tests under nominally identical conditions. We follow the same approach as Ref. [39] and use a bivariate normal distribution to describe the inherent variability (aleatory uncertainty) in crack growth rate.

The bivariate normal distribution for $(C, n)$ is characterized by the following distribution parameters, which must be estimated from the data: $\mu_C$, $\mu_n$, $\sigma_C$, $\sigma_n$, and $\rho$, where $\mu$ denotes the mean, $\sigma$ denotes the standard deviation, and $\rho$ denotes the correlation coefficient between $C$ and $n$. Bayesian inference is used to quantify the epistemic uncertainty in these five distribution parameters. We use the following non-informative prior distribution for the parameters:

$$f(\mu_C, \mu_n, \sigma_C, \sigma_n, \rho) \propto \frac{1}{\sigma_C \sigma_n} \qquad (17)$$

This prior is independently uniform in $\mu_C$, $\mu_n$, $\log\sigma_C$, $\log\sigma_n$, and $\rho$. This approach for specification of the prior on the parameters of the covariance matrix is related to the "separation strategy" proposed in Ref. [44]. The slice sampling algorithm [45] is used to generate samples from the joint posterior distribution, a summary of which is shown in Figure 1, with the marginal histograms of the parameters on the diagonal and pair-wise scatter plots on the off-diagonals.

Figure 1: Posterior sample summary of the bivariate normal distribution parameters for $\tilde{C}$ and $\tilde{n}$
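As an illustration of this calibration step (our sketch, not the authors' code), the log posterior below combines the bivariate normal likelihood with the prior of Eq. (17), and a simple random-walk Metropolis sampler is used in place of the slice sampler [45]; `data` is a placeholder for the seven $(C, n)$ pairs from Ref. [39]:

```python
import numpy as np

def log_post(p, data):
    # theta = (mu_C, mu_n, sigma_C, sigma_n, rho); prior f ∝ 1/(sigma_C sigma_n).
    muC, mun, sC, sn, r = p
    if sC <= 0 or sn <= 0 or not -1 < r < 1:
        return -np.inf
    cov = np.array([[sC**2, r * sC * sn], [r * sC * sn, sn**2]])
    dev = data - np.array([muC, mun])
    quad = np.sum(dev * np.linalg.solve(cov, dev.T).T)
    # Bivariate normal log likelihood (up to an additive constant).
    loglik = -0.5 * (quad + len(data) * np.log(np.linalg.det(cov)))
    return loglik - np.log(sC * sn)

def metropolis(target, data, p0, step, n, seed=0):
    # Random-walk Metropolis; a stand-in for the paper's slice sampler.
    rng = np.random.default_rng(seed)
    p, lp = np.asarray(p0, float), target(p0, data)
    out = np.empty((n, len(p0)))
    for k in range(n):
        q = p + step * rng.standard_normal(p.size)
        lq = target(q, data)
        if np.log(rng.random()) < lq - lp:
            p, lp = q, lq
        out[k] = p
    return out

# Placeholder values standing in for the seven observed (C, n) pairs.
data = np.array([[-20.5, 1.8], [-21.2, 2.1], [-20.9, 2.0], [-21.5, 2.2],
                 [-20.7, 1.9], [-21.1, 2.05], [-20.8, 1.95]])
samples = metropolis(log_post, data, p0=(-21.0, 2.0, 1.0, 0.5, 0.5),
                     step=0.1, n=20_000)
```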

Next, variation in crack initiation depth, $a_0$, is modeled using a lognormal distribution:

$$f_{\tilde{a}_0|(\lambda,\zeta)}(a_0) = \frac{1}{a_0 \zeta \sqrt{2\pi}} \exp\left[ -\frac{(\ln a_0 - \lambda)^2}{2\zeta^2} \right] \qquad (18)$$

The distribution parameters $\lambda$ and $\zeta$ are characterized based on 16 experimental measurements presented in Ref. [39]. As before, Bayesian inference is used to quantify the epistemic uncertainty associated with the distribution parameters. The non-informative prior distribution $f(\lambda, \zeta) \propto 1/\zeta$ is used. Figure 2 shows a summary of the posterior samples (after conversion of the lognormal distribution parameters $\lambda$ and $\zeta$ to the mean and standard deviation of $a_0$).

Figure 2: Posterior sample summary of the converted mean and standard deviation for lognormally distributed $\tilde{a}_0$
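For this particular prior, the posterior can be sampled exactly, with no MCMC required; the sketch below (ours) uses the standard conjugate result for a normal model on $y = \ln a_0$ with prior $\propto 1/\zeta$, and `a0_data` is a placeholder for the 16 measurements:

```python
import numpy as np

def sample_lognormal_params(a0_data, n_draws, seed=0):
    # Posterior draws for (lambda, zeta) of Eq. (18) under f(lambda, zeta) ∝ 1/zeta:
    # with y = ln(a0), zeta^2 | data ~ (n-1) s^2 / chi2_{n-1}, and
    # lambda | zeta, data ~ N(ybar, zeta^2 / n).
    rng = np.random.default_rng(seed)
    y = np.log(a0_data)
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
    zeta2 = (n - 1) * s2 / rng.chisquare(n - 1, size=n_draws)
    lam = rng.normal(ybar, np.sqrt(zeta2 / n))
    return lam, np.sqrt(zeta2)

# Conversion to the mean and standard deviation of a0, as plotted in Figure 2:
# mean_a0 = np.exp(lam + zeta**2 / 2)
# sd_a0 = mean_a0 * np.sqrt(np.exp(zeta**2) - 1)
```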

Fatigue crack growth analysis is carried out using the NASGRO software [46]. In order to enable use of the Monte Carlo-based uncertainty propagation techniques outlined in Sections 2 and 3, a fast-running response surface model is trained using 100 NASGRO simulations. The training data are generated using a uniformly distributed Latin Hypercube design [47] for $a_0$, $C$, and $n$. It is important to account for both the aleatory and epistemic uncertainty in selection of the training data ranges. Here, the lower and upper bounds for each variable are chosen such that $E[\tilde{F}(\cdot)] = 3 \times 10^{-5}$ and $E[\tilde{F}(\cdot)] = 0.99997$, respectively, where $E[\tilde{F}(\cdot)]$ includes the effect of both aleatory and epistemic uncertainties, and can be computed using the Monte Carlo procedure given by Eq. (9). The resulting bounds are given in Table 2.

Table 2: Parameter bounds for design of experiments

Variable    Lower bound    Upper bound
$a_0$       1.5 µm         33.6 µm
$C$         -26.20         -17.74
$n$         0.3475         3.66

Due to the strong correlation between $C$ and $n$, there is the potential of selecting training points with unlikely combinations of $C$ and $n$. In order to address this, the Iman and Conover method [48] is used to adjust the sample correlation of the generated Latin Hypercube design to match the maximum likelihood estimate of the correlation coefficient between $C$ and $n$ obtained from the observed data. The resulting design points are then analyzed using NASGRO to compute the cycles to failure for each. A Gaussian process response surface model is fit to the logarithm of cycles to failure as a function of the three random variables. The leave-one-out cross-validation $R^2$ value [49] is found to be 0.999998.

The Monte Carlo approach described in Section 2 is used to compute the cumulative distribution function (CDF) of fatigue crack growth life, with quantified uncertainty. Note that the performance function $g(\cdot)$ is given by the response surface model, and the input vector is $x = (a_0, C, n)$. The transformation described in Appendix A is used to express the correlated random variables $\tilde{C}$ and $\tilde{n}$ in terms of uncorrelated standard normal variables $\tilde{u}$. The sizes of the aleatory and epistemic sample matrices are determined using the general rule of thumb of having at least $20/p$ samples, where $p$ is the lowest probability to be captured in the analysis. For instance, the lowest probability evaluated in the aleatory portion of the analysis is $1/1000$ (0.1%), resulting in $N_a = 20,000$ as the size of the aleatory sample matrix. Similarly, an epistemic sample matrix size of 800 is needed to compute the 95% confidence bounds on the CDF (probability of $1/40$ at the 2.5th percentile); however, this is rounded to $N_e = 1,000$ since the analysis is performed with a fast-running response surface. Figure 3 shows 95% confidence bounds for the CDF of cycles to failure computed using an aleatory sample matrix size $N_a = 20,000$ and an epistemic sample matrix size of $N_e = 1,000$.
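As an illustrative aside (ours, not the authors' code), the surrogate-construction step might look like the following; the Iman–Conover correlation adjustment is omitted, the NASGRO output is replaced by an arbitrary placeholder, and the conversion of $a_0$ to meters is our assumption:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# 100-point Latin Hypercube design over (a0 [m], C, n) within the Table 2 bounds.
design = qmc.LatinHypercube(d=3, seed=0).random(100)
X = qmc.scale(design, [1.5e-6, -26.20, 0.3475], [33.6e-6, -17.74, 3.66])

# Placeholder for log cycles-to-failure; in the paper this comes from NASGRO runs.
log_Nf = 10.0 - 0.2 * X[:, 1] - 2.0 * X[:, 2] + 1e5 * X[:, 0]

gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[1.0] * 3,
                                  length_scale_bounds=(1e-8, 1e8)),
    normalize_y=True)
gp.fit(X, log_Nf)   # fast-running stand-in for g(.) in the Monte Carlo loops
```

In practice the inputs would be standardized before fitting, and the surrogate validated (the paper reports a leave-one-out cross-validation $R^2$ of 0.999998) before it replaces the physics model in the nested loops.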

[Figure: probability of failure (0.001 to 0.999) versus cycles to failure (roughly 10³ to 10⁵, log scale), with 95% confidence intervals.]

Figure 3: CDF with confidence bounds due to epistemic uncertainty in $a_0$, $C$, and $n$

Finally, the variance decomposition method described in Section 3 is used to compute the relative contributions of epistemic uncertainties to the uncertainty in a given percentile of fatigue life. For this example, variance decomposition is performed for the 2.3 percentile value of fatigue life (i.e., the percentile selected at the "$u = -2$" level on the CDF, which corresponds to a probability of $\Phi(-2) = 0.023$). There are two statistically independent groups of epistemic parameters, $a_0$ and $(C, n)$, since $C$ and $n$ are calibrated from the same dataset and are not independent. According to the rule of thumb for sample sizes discussed above, an aleatory sample matrix size of $N_a = 1,000$ is needed to adequately capture the $1/43$ failure probability at $u = -2$. Following the notation in Section 3, the size of the random sample matrix needed to capture the variability over the epistemic sources of uncertainty is chosen as $N = 1,000$, bringing the total number of evaluations of the response surface to $N_a N(T+2) = 4 \times 10^6$, with $T = 2$ for the two independent sets of parameters. Figure 4 shows the sensitivity indices of the CDF at $u = -2$ to the epistemic parameter groups.⁴ The conclusion from the variance decomposition analysis is that the epistemic uncertainty in the crack growth rate parameters is the most significant contributor to the uncertainty in cycles to failure at $u = -2$. Therefore, in order to increase the confidence in cycles to failure, obtaining more fatigue crack growth data should be prioritized.

[Figure: bar chart of main and total effect sensitivity indices (0 to 1) for the epistemic parameter groups $a_0$ and $(C, n)$.]

Figure 4: Variance decomposition of cycles to failure at $u = -2$ with respect to epistemic uncertainties

⁴ Due to numerical sampling error, the estimated value for the total effect index of $(C, n)$ is slightly lower than the estimate for the main effect index.

5. Discussion

Most of the methods that we describe have to some extent been previously reported in the literature. However, there is only limited discussion of the application of variance-based sensitivity analysis towards statistical quantities, such as probability of failure, that are subject to epistemic uncertainty. One particular forum that has generated discussion in this area is the NASA Langley Multidisciplinary Uncertainty Quantification Challenge [50]. The Challenge provided a set of black box models together with information for characterizing aleatory and epistemic uncertainties and posed (among others) a question of what factors to prioritize in order to reduce uncertainty in a probability of failure metric.

The challenge problem elicited a variety of responses, which are collected in a special edition of the Journal of Aerospace Information Systems [51]. Among the responses, Refs. [7, 52, 53, 54] all employed decomposition of the variance in the probability of failure metric with respect to the epistemic parameters. Among these, only Refs. [7, 52] computed the sensitivity indices with respect to parameter subsets, in accordance with the problem formulation. Ref. [52] did not compute the sensitivity indices using Saltelli's matrix exchange procedure but instead used direct double-loop (three loops, accounting for the aleatory uncertainties) Monte Carlo estimates of the Sobol' indices. Refs. [55, 56] both employed variance-based approaches to evaluate the sensitivity of the probability of failure metric, but sensitivities were computed with respect to both aleatory and epistemic uncertainties. Ref. [57] did employ variance decomposition for some aspects of the problem, but the authors claimed that it was not appropriate for application to the probability of failure metric. Ref. [58] proposed an extension of the Sobol' indices for which the use of variance is a special case; they employed a non-probabilistic interval representation of the epistemic uncertainties and used interval length as a replacement for variance in their sensitivity indices. Ref. [59] also employed an interval representation for epistemic uncertainty; they developed a sensitivity metric based on reduction of interval length but not as closely related to the Sobol' indices. Ref. [60] performed a traditional analysis of variance (ANOVA) using five-level full factorial designs, but they addressed only sensitivity of the model outputs, not sensitivity of the probability of failure metric.

The variety of approaches that were taken for the Challenge Problem highlights the lack of clear guidance or consensus for how to deal with multiple uncertainty types within a "factors prioritization" setting for reduction of uncertainty in a probability of failure. We argue that the main effect sensitivity indices of the probability of failure with respect to appropriate subsets of epistemic parameters provide the best guidance for factors prioritization. The formulation of the Challenge Problem itself motivates the need to use subsets: factors to be prioritized include aleatory random variables subject to distribution parameter uncertainty. By grouping those distribution parameters together into a subset, one is able to determine main effect sensitivity indices associated with the requested factors. More generally, this type of grouping is attractive because it links the sensitivity index to collection of new data for the aleatory random variable, as opposed to the decidedly less realistic scenario of reducing uncertainty in an individual distribution


parameter (e.g., mean or standard deviation).

6. Conclusions

We have presented a framework for the use of Monte Carlo-based methods for separating the influence of epistemic and aleatory uncertainty in both uncertainty propagation and decomposition (sensitivity analysis). This separation makes it possible to explicitly account for the effects of statistical uncertainty in the determination of probability distribution parameters. The effects of these epistemic uncertainties can be summarized and communicated in terms of confidence bounds on computed probabilistic quantities, such as the probability of failure. This provides a quantitative basis to support decisions about whether a given level of epistemic uncertainty is satisfactory, e.g., whether more information/data are needed. Additionally, we have shown how the proposed framework can be used to evaluate the relative contributions of epistemic uncertainties, by using variance-based sensitivity analysis methods. This is a novel approach that has an intuitive appeal: by definition, it is only the epistemic uncertainties that are reducible, so decisions about what new data to collect should be informed by relative contributions of epistemic uncertainties.

The key enabler for the proposed methods is the adoption of the Bayesian viewpoint, in which probability is used to describe degree of belief. The use of probability theory to model both aleatory and epistemic uncertainty makes possible the application of a variety of well-established methods for probabilistic analysis, such as Monte Carlo simulation and variance decomposition. One of the primary limitations of the approach is the use of nested Monte Carlo sampling, which requires a large number of model evaluations. This can be addressed through the use of fast-running response surface models, although the selection of training points becomes more challenging due to the need to consider both aleatory and epistemic uncertainties in assessment of parameter ranges. Similarly, selection of the Monte Carlo sample sizes for each of the nested loops ($N_a$ and $N_e$) can be challenging. We have adopted simple rules of thumb, but further work is recommended for developing guidance for selection of these sample sizes.

The generality of the proposed framework suggests a variety of possible avenues for future work. Although we have focused specifically on epistemic uncertainty in probability distribution parameters, the framework can be applied to other types of epistemic uncertainty as well. One area that would be especially interesting to consider would be application of the framework to address the influence of response surface model uncertainty. In particular, the Gaussian Process model provides a probabilistic representation of the state of knowledge about the underlying response function, in light of the available training data. Although there would be additional computational complexity associated with treatment of random processes, conceptually the Gaussian Process uncertainty could be included within the framework.

Declaration of interests

The authors, John McFarland and Erin DeCarlo, declare that there are no financial or personal relationships with other people or organizations that could inappropriately influence the work being submitted.

Acknowledgments

The authors gratefully acknowledge support from DARPA/DSO (contract number HR0011-17-C-0024) and AFRL (contract OAI-PACE-170003). The views, opinions, and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

Appendix A. Correlated variables

This section describes treatment of correlated random variables within the proposed Monte Carlo framework. In particular, we show how the standard normal transformation of Eq. (2) can be extended to the case in which the components of the random vector $\tilde{x}$ are not statistically independent.

One possibility is to use the Nataf transformation [61], in which the probability distribution of $\tilde{x}$ is defined in terms of marginal distributions and a correlation matrix, $R$. The Nataf transformation consists of two steps. First, the vector of uncorrelated standard normal variables, $\tilde{u}$, is transformed into a vector of correlated normal variables $\tilde{z}$ having correlation matrix $R_0$:

$$\tilde{z} = h_1(\tilde{u}, \theta) = L\tilde{u} \qquad (A.1)$$

where $L$ is a matrix square root such that $LL^T = R_0$. The matrix $L$ is commonly chosen to be the Cholesky factor of $R_0$. The so-called fictive correlation matrix $R_0$ is not necessarily equal to $R$. The relation between the two is discussed in, for example, [62]. Note that dependence on $\theta$ has been explicitly shown to emphasize that $L$ depends on the distribution parameters $\theta$.

In the second step of the Nataf transformation, the marginal distributions of $\tilde{z}$ are transformed to match the marginal distributions of $\tilde{x}$:

$$\tilde{x} = h_2(\tilde{z}, \theta) = \left( F_{\tilde{x}_1|\theta}^{-1}(\Phi(\tilde{z}_1)), \ldots, F_{\tilde{x}_d|\theta}^{-1}(\Phi(\tilde{z}_d)) \right) \qquad (A.2)$$

The transformation from $\tilde{u}$ to $\tilde{x}$ can then be expressed in terms of a function composition as:

$$\tilde{x} = h(\tilde{u}, \theta) = h_2(h_1(\tilde{u}, \theta), \theta) \qquad (A.3)$$

As a simple example, consider the transformation in the case of a bivariate normal distribution, which can be characterized in terms of the parameters $\theta = (\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$, where $\rho$ is the correlation coefficient between $\tilde{x}_1$ and $\tilde{x}_2$. For normally distributed variables, $R_0 = R$, and it is easy to show that in the two-dimensional case the Cholesky factor of $R$ is

$$L = \begin{pmatrix} 1 & 0 \\ \rho & \sqrt{1-\rho^2} \end{pmatrix}$$

Thus, we have

$$\tilde{z} = \left( \tilde{u}_1,\; \rho\tilde{u}_1 + \sqrt{1-\rho^2}\,\tilde{u}_2 \right)$$

Since the marginal distributions of $\tilde{x}$ and $\tilde{z}$ are both normal, the transformation $h_2(\tilde{z}, \theta)$ is a linear function of $\tilde{z}$:

$$\tilde{x} = \left( \mu_1 + \sigma_1\tilde{z}_1,\; \mu_2 + \sigma_2\tilde{z}_2 \right)$$

Thus, we have the full transformation:

$$\tilde{x} = h(\tilde{u}, \theta) = \left( \mu_1 + \sigma_1\tilde{u}_1,\; \mu_2 + \sigma_2\left(\rho\tilde{u}_1 + \sqrt{1-\rho^2}\,\tilde{u}_2\right) \right)$$
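A small sketch of this bivariate normal special case (ours, with illustrative parameter values):

```python
import numpy as np

def nataf_bivariate_normal(u, theta):
    # Eq. (A.3) specialized to the bivariate normal example: independent
    # standard normals u = (u1, u2) -> correlated normals x = (x1, x2).
    mu1, mu2, s1, s2, rho = theta
    z1 = u[..., 0]                                            # Eq. (A.1), z = L u
    z2 = rho * u[..., 0] + np.sqrt(1.0 - rho**2) * u[..., 1]
    return np.stack([mu1 + s1 * z1, mu2 + s2 * z2], axis=-1)  # linear h2

# Check: the empirical correlation of a large sample approaches rho.
U = np.random.default_rng(0).standard_normal((100_000, 2))
X = nataf_bivariate_normal(U, (0.0, 0.0, 2.0, 1.0, 0.6))
print(np.corrcoef(X[:, 0], X[:, 1])[0, 1])   # ~0.6
```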

References

[1] A. Saltelli, S. Tarantola, F. Campolongo, M. Ratto, Sensitivity analysis in practice: a guide to assessing scientific models, Wiley, 2004.
[2] S. K. Jha, J. M. Larsen, A. H. Rosenberger, Towards a physics-based description of fatigue variability behavior in probabilistic life-prediction, Engineering Fracture Mechanics 76 (2009) 681–694.
[3] S. K. Jha, M. J. Caton, J. M. Larsen, Mean vs. life-limiting fatigue behavior of a nickel-based superalloy, in: Superalloys, The Minerals, Metals & Materials Society, 2008, pp. 565–572.
[4] W. M. Bolstad, Introduction to Bayesian Statistics, 2nd Edition, Wiley-Interscience, 2007.
[5] S. Kucherenko, S. Tarantola, P. Annoni, Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications 183 (2012) 937–946.
[6] T. A. Mara, S. Tarantola, Variance-based sensitivity indices for models with dependent inputs, Reliability Engineering & System Safety 107 (2012) 115–121.
[7] J. M. McFarland, Variance decomposition for statistical quantities of interest, Journal of Aerospace Information Systems 12 (1) (2015) 204–218.
[8] J. M. McFarland, D. S. Riha, Variance decomposition in the presence of epistemic and aleatory uncertainty, in: Proceedings of the International Modal Analysis Conference XXIX, Vol. 2, Springer, 2011, pp. 417–430.
[9] J. McFarland, B. Bichon, D. Riha, A probabilistic treatment of multiple uncertainty types: NASA UQ Challenge, in: Proceedings of the 55th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, National Harbor, MD, 2014.
[10] R. Zhang, S. Mahadevan, Integration of computation and testing for reliability estimation, Reliability Engineering & System Safety 72 (1) (2001) 13–21.
[11] E. Borgonovo, G. Apostolakis, S. Tarantola, A. Saltelli, Comparison of global sensitivity analysis techniques and importance measures in PSA, Reliability Engineering & System Safety 79 (2) (2003) 175–185.
[12] A. D. Kiureghian, Measures of structural safety under imperfect states of knowledge, Journal of Structural Engineering 115 (5) (1989) 1119–1140.
[13] A. D. Kiureghian, Analysis of structural reliability under parameter uncertainties, Probabilistic Engineering Mechanics 23 (2008) 351–358.
[14] R. J. Breeding, J. C. Helton, E. D. Gorham, F. T. Harper, Summary description of the methods used in the probabilistic risk assessments for NUREG-1150, Nuclear Engineering and Design 135 (1992) 1–27.
[15] J. C. Helton, D. R. Anderson, H.-N. Jow, M. G. Marietta, G. Basabilvazo, Conceptual structure of the 1996 performance assessment for the Waste Isolation Pilot Plant, Reliability Engineering & System Safety 69 (2000) 151–165.
[16] J. C. Helton, F. J. Davis, J. D. Johnson, Characterization of stochastic uncertainty in the 1996 performance assessment for the Waste Isolation Pilot Plant, Reliability Engineering & System Safety 69 (2000) 167–189.
[17] J. C. Helton, M.-A. Martell, M. S. Tierney, Characterization of subjective uncertainty in the 1996 performance assessment for the Waste Isolation Pilot Plant, Reliability Engineering & System Safety 69 (2000) 191–204.
[18] J. C. Helton, C. W. Hansen, C. J. Sallaberry, Uncertainty and sensitivity analysis in performance assessment for the proposed repository for high-level radioactive waste at Yucca Mountain, Nevada, Reliability Engineering & System Safety 107 (2012) 44–63.
[19] S. C. Hora, R. L. Iman, Expert opinion in risk analysis: The NUREG-1150 methodology, Nuclear Science and Engineering 102 (1989) 323–331.
[20] H.-R. Bae, R. V. Grandhi, R. A. Canfield, Sensitivity analysis of structural response uncertainty propagation using evidence theory, Structural and Multidisciplinary Optimization 31 (4) (2006) 270–291.
[21] J. C. Helton, J. D. Johnson, W. L. Oberkampf, C. J. Sallaberry, Sensitivity analysis in conjunction with evidence theory representations of epistemic uncertainty, Reliability Engineering & System Safety 91 (10–11) (2006) 1414–1434.
[22] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, 1976.
[23] J. Guo, X. Du, Sensitivity analysis with mixture of epistemic and aleatory uncertainties, AIAA Journal 45 (9) (2007) 2337–2349.
[24] G. Li, Z. Lu, Z. Lu, J. Xu, Regional sensitivity analysis of aleatory and epistemic uncertainties on failure probability, Mechanical Systems and Signal Processing 46 (2014) 209–226.
[25] X. Du, Uncertainty analysis with probability and evidence theories, in: Proceedings of the ASME 2006 International Design Technical Conferences & Computers and Information in Engineering Conference, American Society of Mechanical Engineers, Fairfield, NJ, 2006.
[26] S. Sankararaman, S. Mahadevan, Separating the contributions of variability and parameter uncertainty in probability distributions, Reliability Engineering & System Safety 112 (2013) 187–199.
[27] J. C. Helton, Uncertainty and sensitivity analysis in performance assessment for the Waste Isolation Pilot Plant, Computer Physics Communications 117 (1–2) (1999) 156–180.
[28] J. P. C. Kleijnen, J. C. Helton, Statistical analyses of scatterplots to identify important factors in large-scale simulations, 1: Review and comparison of techniques, Reliability Engineering & System Safety 65 (1999) 147–185.
[29] P. Lee, Bayesian Statistics, an Introduction, 3rd Edition, Oxford University Press, Inc., New York, 2004.
[30] D. V. Lindley, The philosophy of statistics, Journal of the Royal Statistical Society D 49 (3) (2000) 293–337.
[31] J. Oakley, A. O'Hagan, Bayesian inference for the uncertainty distribution of computer model outputs, Biometrika 89 (2002) 769–784.
[32] I. M. Sobol', Sensitivity analysis for non-linear mathematical models, Mathematical Modelling & Computational Experiment 1 (1993) 407–414.
[33] T. Homma, A. Saltelli, Importance measures in global sensitivity analysis of model output, Reliability Engineering & System Safety 52 (1) (1996) 1–17.
[34] P. Wei, Z. Lu, S. Song, Variable importance analysis: A comprehensive review, Reliability Engineering & System Safety 142 (2015) 399–432.
[35] E. Borgonovo, E. Plischke, Sensitivity analysis: A review of recent advances, European Journal of Operational Research 248 (2016) 869–887.
[36] J. C. Helton, J. D. Johnson, C. Sallaberry, C. B. Storlie, Survey of sampling-based methods for uncertainty and sensitivity analysis, Reliability Engineering & System Safety 91 (2006) 1175–1209.
[37] I. M. Sobol', Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates, Mathematics and Computers in Simulation 55 (2001) 271–280.
[38] A. Haldar, S. Mahadevan, Probability, Reliability, and Statistical Methods in Engineering Design, John Wiley and Sons, Inc., New York, 2000.
[39] S. K. Jha, R. John, J. M. Larsen, Incorporating small fatigue crack growth in probabilistic life prediction: Effect of stress ratio in Ti-6Al-2Sn-4Zr-6Mo, International Journal of Fatigue 51 (2013) 83–95.
[40] S. Suresh, R. O. Ritchie, Propagation of short fatigue cracks, International Metals Reviews 29 (1) (1984) 445–475. doi:10.1179/imtr.1984.29.1.445.
[41] J. Lankford, D. L. Davidson, K. S. Chan, The influence of crack tip plasticity in the growth of small fatigue cracks, Metallurgical and Materials Transactions A 15 (8) (1984) 1579–1588. doi:10.1007/BF02657797.
[42] M. J. Caton, J. W. Jones, H. Mayer, S. Stanzl-Tschegg, J. E. Allison, Demonstration of an endurance limit in cast 319 aluminum, Metallurgical and Materials Transactions A 34 (1) (2003) 33–41. doi:10.1007/s11661-003-0206-x.
[43] J. C. Newman Jr., E. P. Phillips, M. H. Swain, Fatigue-life prediction methodology using small-crack theory, International Journal of Fatigue 21 (2) (1999) 109–119.
[44] J. Barnard, R. McCulloch, X.-L. Meng, Modeling covariance matrices in terms of standard deviations and correlations, with application to shrinkage, Statistica Sinica 10 (2000) 1281–1311.
[45] R. M. Neal, Slice sampling, The Annals of Statistics 31 (3) (2003) 705–767.
[46] Southwest Research Institute, San Antonio, TX, NASGRO Reference Manual, Version 9.0 Final (May 2018).
[47] M. D. McKay, R. J. Beckman, W. J. Conover, A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics 21 (1979) 239–245.
[48] R. L. Iman, W. J. Conover, A distribution-free approach to inducing rank correlation among input variables, Communications in Statistics: Simulation and Computation B11 (3) (1982) 311–334.
[49] J. D. Martin, T. W. Simpson, Use of Kriging models to approximate deterministic computer models, AIAA Journal 43 (4) (2005) 853–863.
[50] L. G. Crespo, S. P. Kenny, D. P. Giesy, The NASA Langley multidisciplinary uncertainty quantification challenge, in: Proceedings of the 55th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, National Harbor, MD, 2014.
[51] L. G. Crespo, S. P. Kenny, Special edition on uncertainty quantification of the AIAA Journal of Aerospace Computing, Information, and Communication, Journal of Aerospace Information Systems 12 (1) (2015) 9–9.
[52] S. Sankararaman, Sequential refinement of uncertainty through Bayesian inference and global sensitivity analysis, Journal of Aerospace Information Systems 12 (1) (2015) 49–72.
[53] C. Safta, K. Sargsyan, H. N. Najm, K. Chowdhary, B. Debusschere, L. P. Swiler, M. S. Eldred, Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem, Journal of Aerospace Information Systems 12 (1) (2015) 219–234.
[54] E. Patelli, D. A. Alvarez, M. Broggi, M. de Angelis, Uncertainty management in multidisciplinary design of critical safety systems, Journal of Aerospace Information Systems 12 (1) (2015) 140–169.
[55] R. Ghanem, I. Yadegaran, C. Thimmisetty, V. Keshavarzzadeh, S. Masri, J. Red-Horse, R. Moser, T. Oliver, P. Spanos, O. J. Aldraihem, Probabilistic approach to NASA Langley Research Center multidisciplinary uncertainty quantification challenge problem, Journal of Aerospace Information Systems 12 (1) (2015) 170–188.
[56] C. Liang, S. Mahadevan, Bayesian sensitivity analysis and uncertainty integration for robust optimization, Journal of Aerospace Information Systems 12 (1) (2015) 189–203.
[57] A. Srivastava, A. K. Subramaniyan, L. Wang, Hybrid Bayesian solution to NASA Langley Research Center multidisciplinary uncertainty quantification challenge, Journal of Aerospace Information Systems 12 (1) (2015) 114–139.
[58] N. Pedroni, E. Zio, Hybrid uncertainty and sensitivity analysis of the model of a twin-jet aircraft, Journal of Aerospace Information Systems 12 (1) (2015) 73–96.
[59] A. Chaudhuri, G. Waycaster, N. Price, T. Matsumura, R. Haftka, NASA uncertainty quantification challenge: An optimization-based methodology and validation, Journal of Aerospace Information Systems 12 (1) (2015) 10–34.
[60] K. L. Van Buren, F. M. Hemez, Robust decision making applied to the NASA multidisciplinary uncertainty quantification challenge problem, Journal of Aerospace Information Systems 12 (1) (2015) 35–48.
[61] A. M. Hasofer, N. C. Lind, An exact and invariant first order reliability format, Journal of Engineering Mechanics 100 (1974) 111–121.
[62] A. D. Kiureghian, P.-L. Liu, Structural reliability under incomplete probability information, Journal of Engineering Mechanics 112 (1) (1986) 85–104.