Augmented sample-based approach for efficient evaluation of risk sensitivity with respect to epistemic uncertainty in distribution parameters
Journal Pre-proof

Zhenqiang Wang, Gaofeng Jia

PII: S0951-8320(19)31128-7
DOI: https://doi.org/10.1016/j.ress.2019.106783
Reference: RESS 106783

To appear in: Reliability Engineering and System Safety

Received date: 5 September 2019
Revised date: 25 December 2019
Accepted date: 28 December 2019

Please cite this article as: Zhenqiang Wang, Gaofeng Jia, Augmented sample-based approach for efficient evaluation of risk sensitivity with respect to epistemic uncertainty in distribution parameters, Reliability Engineering and System Safety (2019), doi: https://doi.org/10.1016/j.ress.2019.106783

© 2019 Published by Elsevier Ltd.
Highlights

• Proposes an augmented sample-based approach for evaluation of risk sensitivity
• Efficiently calculates risk sensitivity to epistemic uncertainty
• Uses only one set of simulations in the augmented parameter and variable space
• Approach is highly efficient for systems with high-dimensional uncertainty
Zhenqiang Wang, Gaofeng Jia∗

Department of Civil and Environmental Engineering, Colorado State University, Fort Collins, CO, USA
Abstract

This paper proposes a novel augmented sample-based approach for efficient evaluation of risk sensitivity with respect to epistemic uncertainty. Calculation of the risk sensitivity (i.e., Sobol' indices) with respect to uncertain distribution parameters entails significant computational challenges due to the need to evaluate multi-dimensional integrals, e.g., using Monte Carlo simulation (MCS). The proposed approach addresses these challenges by defining a joint auxiliary density in the augmented space of both the uncertain distribution parameters and the input random variables. It first generates one set of samples from the joint auxiliary density, and then, based on the corresponding marginal samples, estimates the marginal auxiliary densities for the uncertain distribution parameters using kernel density estimation (KDE). The KDE estimates are then used to efficiently calculate the Sobol' indices. The approach relies on only one set of simulations to estimate the Sobol' indices for all uncertain distribution parameters, without the need to repeat MCS for each distribution parameter. It is especially useful and efficient for evaluation of risk sensitivity for systems with expensive models and a large number of inputs and uncertain distribution parameters. The good accuracy and high efficiency of the proposed approach are demonstrated through two illustrative examples and for different risk definitions.

∗Corresponding author. Email address: [email protected] (Gaofeng Jia)
Preprint submitted to Elsevier, January 3, 2020
Keywords: Sensitivity analysis, Risk, Sobol’ index, Sample-based approach, Kernel density estimation, Epistemic uncertainty
1. Introduction

Sensitivity analysis (SA) plays an important role in probabilistic studies [1] and has been routinely used to understand the behavior of complex system models, to facilitate model reduction, and to guide engineering design and decision making [1, 2, 3, 4]. SA examines how the uncertainty in the system inputs impacts the system output or performance quantity of interest. When the performance quantity of interest corresponds to some probabilistic performance or risk (e.g., reliability, life-cycle performance, or resilience), we can define the so-called risk sensitivity.

In terms of uncertainty in the system inputs, two types of uncertainty are commonly differentiated: aleatory uncertainty and epistemic uncertainty. The former corresponds to the natural and inherent variability of the input and is irreducible, while the latter corresponds to lack of knowledge and can be reduced if more data is collected [5]. To quantify both types of uncertainty, imprecise probability models have been used [6, 7, 8, 9, 10], where probability distribution models are selected to model the aleatory uncertainty in the inputs, while the epistemic uncertainty (for the selected probability model) is quantified through the uncertain distribution parameters (i.e., hyperparameters for the distributions), which can themselves be modeled by probability distributions. Note that there could also be additional epistemic uncertainty related to the model form (e.g., the true probability model is unknown); further discussion on model-form uncertainty can be found in [11, 12]. In this context, for a given distribution parameter value, the corresponding risk can be quantified by propagating the uncertainties in the inputs with the corresponding distributions conditional on that distribution parameter value. Therefore, the uncertainty in the distribution parameters (i.e., the epistemic uncertainty) will lead to variability in the corresponding risk.
In such cases, it is important to examine how the epistemic uncertainty or uncertainty in the distribution parameters impacts
the variability in the risk, i.e., the risk sensitivity with respect to the epistemic uncertainty.

For SA, different global sensitivity measures have been proposed and used [13], mainly including variance-based sensitivity measures such as the Sobol' index [2, 14, 15], entropy-based sensitivity indices [4, 16, 17, 18], and distribution-based sensitivity indices (e.g., the moment-independent sensitivity indicator) [19, 20, 21]. Among them, the Sobol' index is one of the most commonly used. However, evaluation of each sensitivity index (i.e., Sobol' index) typically requires a separate Monte Carlo simulation (MCS) with many evaluations of the system model [14, 15], which poses major challenges for applications with a large number of inputs and computationally expensive system models. To reduce the computational burden, researchers have investigated alternative frameworks for approximating Sobol' indices, for example, efficient simulation methods (e.g., importance sampling, extended MCS) [9, 10, 22, 23, 24], single-loop sampling procedures [25, 26], and metamodel-based approaches such as Gaussian processes (or kriging) [27, 28, 29] and polynomial chaos expansions (PCE) [8, 15, 30, 31, 32, 33]. Recently, a sample-based approach [34] has also been proposed that generates one set of samples from some auxiliary density and uses kernel density estimation (KDE) to efficiently approximate the required conditional expectations and subsequently the Sobol' indices, without the need to repeat the system model evaluations for different Sobol' indices.

For risk sensitivity analysis with respect to distribution parameters, compared to typical global sensitivity analysis, there is an additional layer of integration, corresponding to the risk integral for any given value of the distribution parameters [24, 35, 36, 37]. The direct estimation (e.g., using MCS) of this risk integral requires repeated evaluation of the system model.
When combined with the computational challenges in evaluating the Sobol' index itself, the overall computational effort for risk sensitivity analysis is even higher, requiring the evaluation of a triple-loop integration for each sensitivity index. When the number of distribution parameters and/or the number of input random variables is large, the computational effort to calculate all the sensitivity indices of interest would be huge. Also, direct application of the sample-based approach mentioned earlier faces challenges in generating samples from the auxiliary density, due to the need to evaluate the corresponding risk integral for any given value of the distribution parameters.

To address the above challenges, this paper proposes a novel and efficient augmented sample-based approach for risk sensitivity analysis with respect to uncertain distribution parameters. It extends the sample-based approach in [34] for efficient estimation of Sobol' indices. However, directly applying the approach in [34] for risk sensitivity analysis and working with the uncertain distribution parameters is still challenging, because of the need to evaluate the multi-dimensional risk integral for any given value of the distribution parameters. To address this, instead of directly working with the uncertain distribution parameters, the proposed approach first defines an augmented problem including both the uncertain distribution parameters and the input random variables, and then generates samples from a joint auxiliary density that is proportional to the integrand of the augmented risk integral. Based on the corresponding marginal samples, the marginal auxiliary densities for the uncertain distribution parameters can be efficiently approximated using KDE. The marginal auxiliary density is then used to directly approximate the conditional expectations in the Sobol' index, which support the calculation of the Sobol' indices. Using only one set of simulations/samples, the proposed approach can estimate the Sobol' indices for all uncertain distribution parameters without the need to run a separate MCS for each distribution parameter.
Once the required set of samples is generated, the computational burden of the approach is small and practically independent of the dimension of the distribution parameters, which means the proposed approach is expected to provide more benefits for problems with a large number of uncertain distribution parameters. The accuracy and efficiency of the proposed approach for estimation of risk sensitivity with respect to distribution parameters are demonstrated through two examples.
2. Risk sensitivity analysis with respect to epistemic uncertainty

2.1. Risk and its sensitivity to epistemic uncertainty

To formulate the risk sensitivity analysis, consider a system model with input x = [x_1, ..., x_j, ..., x_{n_x}] ∈ X, where x_j is the j-th input, n_x is the total number of inputs, and X denotes the space of possible values of x. Let the output (or risk measure) be y = h(x), where h(x) represents the mathematical function characterizing the system output. The joint PDF for the input random variables is denoted f(x|θ), where θ = [θ_1, ..., θ_i, ..., θ_{n_θ}] ∈ Θ denotes the uncertain distribution parameters with joint PDF f(\theta) = \prod_{i=1}^{n_\theta} f(\theta_i), and Θ represents the space of possible values of θ. The uncertainty in the distribution parameters θ can represent the epistemic uncertainty, which will lead to imprecise probability distributions for x. The PDF of the distribution parameters θ may be estimated using an interval estimation approach or a Bayesian updating approach when the data of x is incomplete, or using a non-parametric Bayesian updating procedure if the distribution type of x is not available [38, 39]. The epistemic uncertainty in the distribution parameters can be reduced when the available data is sufficient and accurate, i.e., the PDF of the distribution parameters will approach the true distribution. For a given value of θ, using the risk measure and propagating the uncertainties in the inputs, we can establish the corresponding risk

H(\theta) = E_{x|\theta}[h(x)] = \int_X h(x) f(x|\theta)\, dx \qquad (1)
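For a fixed θ, the risk integral in Eq. (1) can be approximated by plain Monte Carlo over x. The sketch below is illustrative only: the risk measure `h` and the assumption that x|θ is componentwise Gaussian with mean θ are ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(x):
    # hypothetical scalar risk measure of a 2-D input (illustrative only)
    return x[..., 0] ** 2 + np.abs(x[..., 1])

def H_mc(theta, n=100_000):
    # Plain MC estimate of H(theta) = E_{x|theta}[h(x)] for one fixed theta,
    # assuming (for illustration) x | theta ~ N(theta, I) componentwise.
    x = rng.normal(loc=theta, scale=1.0, size=(n, len(theta)))
    return h(x).mean()

est = H_mc(np.array([0.0, 0.0]))  # true value: 1 + sqrt(2/pi) ~= 1.80
```

Note that each new θ value requires a fresh batch of n model evaluations, which is the root of the computational burden discussed below.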
For risk sensitivity analysis with respect to distribution parameters, we are interested in how the uncertainty in each of the distribution parameters in θ impacts the variability in the risk H(θ). Compared to typical global sensitivity analysis, for risk sensitivity analysis the output H(θ) corresponds to some probabilistic performance measure (e.g., risk, reliability, life-cycle performance, or resilience). The evaluation of the probabilistic performance integral for a given value of θ introduces another layer of integration (i.e., double-loop integration for calculation of global sensitivity measures, as will be shown later) and additional computational challenges beyond those of typical global sensitivity analysis. Note that when h(x) corresponds to the indicator function I_F(x), which takes the value 1 when the system fails and 0 otherwise, the risk H(θ) corresponds to the failure probability P_F(θ), and the risk sensitivity then corresponds to the failure probability or reliability sensitivity. This work proposes an efficient augmented sample-based approach for risk sensitivity analysis with respect to uncertain distribution parameters. In terms of sensitivity measures, this work focuses on the commonly used Sobol' index, but the approach can be easily extended to efficiently evaluate other sensitivity measures, e.g., the moment-independent delta index.

2.2. Global sensitivity measure: Sobol' index

The first order Sobol' index S_i for θ_i (also referred to as the main effect of θ_i) is defined as [14]
S_i = \frac{V_i}{V_H} = \frac{E_i[\xi_i(\theta_i)^2] - \mu_H^2}{V_H} \qquad (2)
where Vi represents the expected reduction in variance VH due to fixing θi , and
\xi_i(\theta_i) = E_{\sim i}[H(\theta)|\theta_i] = \int_{\Theta_{\sim i}} H(\theta_{\sim i}, \theta_i) f(\theta_{\sim i}|\theta_i)\, d\theta_{\sim i} \qquad (3)
where θ_{∼i} represents the remainder of the distribution parameter vector after excluding θ_i. Here, the conditional density f(θ_{∼i}|θ_i) is used to represent the most general case, in which the distribution parameters could be dependent (i.e., dependence between θ_{∼i} and θ_i). For example, the distribution parameters might be dependent when they are inferred from the same data for the random variables x. When θ_{∼i} and θ_i are independent, f(θ_{∼i}|θ_i) reduces to f(θ_{∼i}). µ_H and V_H correspond to the mean and variance of the risk H(θ)
\mu_H = E_{\theta}[H(\theta)] = \int_{\Theta} H(\theta) f(\theta)\, d\theta = \int_{\Theta} \left[ \int_X h(x) f(x|\theta)\, dx \right] f(\theta)\, d\theta \qquad (4)
V_H = E_{\theta}\big[(H(\theta) - \mu_H)^2\big] = \int_{\Theta} \left[ \int_X h(x) f(x|\theta)\, dx \right]^2 f(\theta)\, d\theta - \mu_H^2 = \int_{\Theta} H(\theta)^2 f(\theta)\, d\theta - \mu_H^2 \qquad (5)
Besides the first order Sobol' index, Sobol' indices for higher order interactions, as well as the total sensitivity index, can also be defined [2, 14].

2.3. Computational challenges in calculation of risk-based Sobol' indices

Plugging the expression for ξ_i(θ_i) in Eq. (3) into the term E_i[ξ_i(θ_i)^2] needed to calculate S_i, we have

E_i[\xi_i(\theta_i)^2] = \int_{\Theta_i} \left[ \int_{X, \Theta_{\sim i}} h(x) f(x|\theta_{\sim i}, \theta_i) f(\theta_{\sim i}|\theta_i)\, d\zeta \right]^2 f(\theta_i)\, d\theta_i \qquad (6)
where the integration in the brackets corresponds to the expanded version of ξ_i(θ_i) = E_{∼i}[H(θ)|θ_i] and is with respect to ζ = [x, θ_{∼i}]. This is different from typical sensitivity analysis, where the integration would be with respect to θ_{∼i} alone; here, to evaluate the conditional expectation E_{∼i}[H(θ)|θ_i], we need to additionally integrate with respect to x. As can be seen, evaluation of every S_i requires knowledge of E_i[ξ_i(θ_i)^2], which involves a double-loop integration. The outer loop corresponds to the calculation of the expectation E_i[ξ_i(θ_i)^2] with respect to θ_i, and the inner loop corresponds to the calculation of the conditional expectation ξ_i(θ_i) = E_{∼i}[H(θ)|θ_i], or more specifically the expectation with respect to the vector [x, θ_{∼i}]. To calculate Sobol' indices for higher order interactions, the integration in Eq. (6) needs to be repeated for different parameter combinations and parameter values. To evaluate the double-loop integral, the general approach is Monte Carlo simulation [14]. However, direct adoption of MCS entails significant computational burden, especially for systems with expensive models (i.e., calculation of h(x) for given x is expensive) and a large number of inputs (n_x is large) and distribution parameters (n_θ is large).
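To make this cost concrete, the following sketch spells out the naive nested-loop MCS estimation of E_i[ξ_i(θ_i)^2] and counts the model evaluations. The densities (uniform θ, Gaussian x|θ) and the cheap stand-in model are our assumptions for illustration; the point is that each index consumes the full outer × middle × inner budget of calls.

```python
import numpy as np

rng = np.random.default_rng(1)
n_evals = 0  # counts calls to the (expensive) system model

def h(x):
    global n_evals
    n_evals += x.shape[0]
    return x[:, 0] + x[:, 1] ** 2  # illustrative cheap stand-in model

def direct_E_xi_sq(i, N_outer=50, N_mid=50, N_x=50):
    # Naive nested MCS for E_i[xi_i(theta_i)^2] with made-up densities:
    # theta_i ~ U(0,1) (outer), theta_~i | theta_i ~ U(0,1) (middle),
    # x | theta ~ N(theta, I) (inner).  Cost: N_outer * N_mid * N_x calls.
    acc = 0.0
    for _ in range(N_outer):
        ti = rng.uniform()
        xi = 0.0
        for _ in range(N_mid):
            tmi = rng.uniform()
            theta = np.array([ti, tmi]) if i == 0 else np.array([tmi, ti])
            x = rng.normal(theta, 1.0, size=(N_x, 2))
            xi += h(x).mean()
        acc += (xi / N_mid) ** 2
    return acc / N_outer

val = direct_E_xi_sq(0)
# estimating this single quantity already consumed 50*50*50 = 125,000 calls
```

Repeating this for every first- and higher-order index is what the proposed approach avoids.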
3. Augmented sample-based approach for efficient estimation of risk sensitivity with respect to distribution parameters

To address the above challenges, this work proposes an efficient augmented sample-based approach for risk sensitivity analysis with respect to uncertain distribution parameters. It extends the sample-based approach in [34] for efficient estimation of Sobol' indices. Instead of directly working with the uncertain distribution parameters, the proposed approach first defines an augmented problem including both the uncertain distribution parameters and the input random variables, and then generates samples from a joint auxiliary density that is proportional to the integrand of the integral for µ_H. Based on the marginal samples from the generated samples, the marginal auxiliary densities for the uncertain distribution parameters can be efficiently approximated using kernel density estimation (KDE), which is then used to estimate the conditional expectation ξ_i(θ_i) and further the Sobol' indices.

3.1. Proposed augmented sample-based approach

To develop the augmented sample-based approach by extending the sample-based approach in [34], we first define an augmented problem in terms of both the inputs x and the distribution parameters θ, i.e., [x, θ]. Then we define a joint auxiliary density π(x, θ) for [x, θ] that is proportional to the integrand of µ_H in Eq. (4),
\pi(x, \theta) = \frac{h(x) f(x|\theta) f(\theta)}{\mu_H} \propto h(x) f(x|\theta) f(\theta) \qquad (7)
Then π(θ) simply corresponds to the marginal distribution of the joint auxiliary density π(x, θ), i.e.,
\pi(\theta) = \int_X \pi(x, \theta)\, dx = \frac{f(\theta) \int_X h(x) f(x|\theta)\, dx}{\mu_H} \qquad (8)
Similarly, the marginal auxiliary density for θ_i, π(θ_i), can be established by integrating out x and θ_{∼i} (i.e., ζ),

\pi(\theta_i) = \int_{X, \Theta_{\sim i}} \pi(x, \theta)\, d\zeta = \frac{f(\theta_i) \int_{X, \Theta_{\sim i}} h(x) f(x|\theta) f(\theta_{\sim i}|\theta_i)\, d\zeta}{\mu_H} = \frac{f(\theta_i)}{\mu_H}\, \xi_i(\theta_i) \qquad (9)
This means the conditional expectation ξ_i(θ_i) can be written as a function of the two marginal PDFs for θ_i as

\xi_i(\theta_i) = \mu_H \frac{\pi(\theta_i)}{f(\theta_i)} \qquad (10)
From the derivation of Eq. (9), it is clear that π(θ_i) corresponds to the marginal auxiliary density for θ_i of the joint auxiliary density π(x, θ). Therefore, we can first sample from the joint auxiliary density π(x, θ), and the θ_i components of the samples will correspond to samples from the marginal auxiliary density π(θ_i). Compared to directly sampling from π(θ), where the evaluation of π(θ) for each θ requires evaluation of the integral \int_X h(x) f(x|\theta)\, dx, sampling from the joint auxiliary density π(x, θ) only requires, based on Eq. (7), evaluation of the system model h(x) for each given value of [x, θ], which is much more efficient.

To sample from π(x, θ), first, a set of candidate samples {[x_c^k, θ_c^k], k = 1...N_c} is generated from some joint proposal density q_s(x, θ), and the corresponding h(x) are evaluated; then a stochastic sampling algorithm is used to obtain n_s samples from the joint PDF π(x, θ), denoted {[x^k, θ^k], k = 1...n_s}. Since these samples will be used within KDE to approximate the underlying density, independent samples are needed; any stochastic sampling algorithm that provides independent samples can be used. In the current paper, the standard accept-reject method is used. To improve the sampling efficiency and thus the performance of the proposed approach, an appropriate proposal density q_s(x, θ) needs to be selected. Some considerations on how to select the
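As a concrete sketch of this sampling step, suppose (as in the examples later) the prior f(x|θ)f(θ) is used as the proposal; then π/q_s is proportional to h(x), and accept-reject reduces to accepting each candidate with probability h(x)/h_max. The bounded risk measure and the specific densities below are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def h(x):
    # illustrative non-negative risk measure, bounded above by h_max
    return np.clip(x[:, 0] ** 2, 0.0, 4.0)

h_max = 4.0  # an upper bound on h is what accept-reject needs here

# candidate samples [x, theta] from the proposal q_s = f(x|theta) f(theta):
# theta ~ U(-1, 1) (prior) and x | theta ~ N(theta, I) (assumed model)
Nc = 20_000
theta_c = rng.uniform(-1.0, 1.0, size=(Nc, 1))
x_c = rng.normal(theta_c, 1.0, size=(Nc, 2))

h_c = h(x_c)  # one system-model evaluation per candidate

# accept-reject: with the prior as proposal, pi/q_s is proportional to h(x),
# so each candidate is accepted with probability h(x)/h_max
accept = rng.uniform(size=Nc) < h_c / h_max
x_s, theta_s = x_c[accept], theta_c[accept]  # samples from pi(x, theta)
```

The accepted θ components are exactly the marginal samples from π(θ_i) that feed the KDE step.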
proposal density will be discussed in detail in Section 3.1.3. Then, based on the marginal samples from π(θ_i), KDE can be used to efficiently approximate π(θ_i) and then ξ_i(θ_i) through Eq. (10). The same set of samples can be used for any θ_i, which contributes to the high efficiency of the sample-based approach. Using Eq. (2) and Eq. (10), the first order Sobol' index S_i can be written as

S_i = \frac{\mu_H^2}{V_H} \left( \int_{\Theta_i} \left( \frac{\pi(\theta_i)}{f(\theta_i)} \right)^2 f(\theta_i)\, d\theta_i - 1 \right) = \frac{\mu_H^2}{V_H} \left( \int_{\Theta_i} \frac{\pi(\theta_i)}{f(\theta_i)}\, \pi(\theta_i)\, d\theta_i - 1 \right) \qquad (11)
whose estimator can be obtained either through numerical integration or using Monte Carlo integration (MCI) with the n_s samples from π(θ_i)

\hat{S}_i \approx \frac{\hat{\mu}_H^2}{\hat{V}_H} \left( \frac{1}{n_s} \sum_{k=1}^{n_s} \frac{\tilde{\pi}(\theta_i^k)}{f(\theta_i^k)} - 1 \right) \qquad (12)

where \tilde{\pi}(\cdot) denotes the KDE estimate of \pi(\cdot).
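A minimal sketch of Eq. (12): SciPy's plain `gaussian_kde` stands in for the boundary-corrected KDE used in the paper, and the θ_i samples, µ̂_H, and V̂_H are stand-in values chosen purely for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# stand-ins: pretend these theta_i samples came from the marginal auxiliary
# density pi(theta_i), and that mu_H, V_H were already estimated elsewhere
theta_i_samples = rng.beta(2.0, 2.0, size=2000)
mu_H_hat, V_H_hat = 0.5, 0.1  # assumed values, for illustration only

def f_theta_i(t):
    return np.ones_like(t)  # prior f(theta_i): U(0, 1)

# Eq. (12): plain Gaussian KDE stands in for the boundary-corrected KDE
pi_kde = gaussian_kde(theta_i_samples)
ratio = pi_kde(theta_i_samples) / f_theta_i(theta_i_samples)
S_i_hat = mu_H_hat ** 2 / V_H_hat * (ratio.mean() - 1.0)
```

Note that the KDE is evaluated at the same samples it was fit on, so the sum in Eq. (12) is an MCI average under π(θ_i) itself; no further model evaluations are needed for any additional θ_i.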
The coefficient of variation (CoV) for MCI approximation can be conveniently calculated and used as the statistical error of the estimator. As can be seen, the estimation accuracy of Si relies on the accuracy of KDE. Since the accuracy of KDE improves as the number of samples increases, it is expected that using more samples (i.e., larger value of ns ) will lead to more accurate estimation of Si . Based on Eq. (4), the unbiased estimator of µH can be established directly using MCI based on all the candidate samples from the proposal density qs (x, θ),
\hat{\mu}_H \approx \frac{1}{N_c} \sum_{k=1}^{N_c} h(x_c^k) \frac{f(x_c^k|\theta_c^k)\, f(\theta_c^k)}{q_s(x_c^k, \theta_c^k)} \qquad (13)
As to V_H, it involves double-loop integration; direct estimation would involve first generating, for example, N_θ samples for θ, and then for each θ, generating N_x samples for x and evaluating the corresponding h(x). This would require a total of N_θ × N_x model evaluations. Instead of repeating the model evaluations as required by direct estimation of the double-loop integration, here we use the same set of simulations for the N_c candidate samples {[x_c^k, θ_c^k], k = 1...N_c} and the corresponding N_c model evaluations h(x) to efficiently estimate V_H. This approach uses the concept of importance sampling, where the same proposal density is used to estimate the integral \int_X h(x) f(x|\theta)\, dx under different f(x|θ) by re-weighting the same set of samples [40, 41]. We can rewrite the joint proposal density as q_s(x, θ) = q_s(x|θ)q_s(θ), use q_s(θ) as the proposal density for the outer loop (i.e., integration with respect to θ), and use q_s(x|θ) as the proposal density for the inner loop (i.e., integration with respect to x). For a given θ value (e.g., θ = θ_c^k), the integral H(\theta_c^k) = \int_X h(x) f(x|\theta_c^k)\, dx can be estimated using the candidate sample pairs [x_c^j, θ_c^j], j = 1...N_c,
\hat{H}(\theta_c^k) \approx \frac{1}{N_c} \sum_{j=1}^{N_c} h(x_c^j) \frac{f(x_c^j|\theta_c^k)}{q_s(x_c^j|\theta_c^j)} \qquad (14)
where the x_c^j are generated according to the proposal density q_s(x_c^j|θ_c^j). A special case would be q_s(x_c^j|θ_c^j) = q_s(x_c^j), where the proposal density for x does not depend on θ. Using Eq. (14), the same candidate sample pairs [x_c^j, θ_c^j], j = 1...N_c, can be used for evaluation of H(θ_c^k) for different θ_c^k. In the end, using the information in the same set of N_c simulations used for generating samples from the joint auxiliary density, the estimator of V_H can be established through the following equation,

\hat{V}_H \approx \frac{1}{N_c} \sum_{k=1}^{N_c} \frac{f(\theta_c^k)}{q_s(\theta_c^k)} \left[ \frac{1}{N_c} \sum_{j=1}^{N_c} h(x_c^j) \frac{f(x_c^j|\theta_c^k)}{q_s(x_c^j|\theta_c^j)} \right]^2 - \hat{\mu}_H^2 \qquad (15)
In the end, Eq. (12) can be used to estimate S_i. Note that the statistical errors of the estimators µ̂_H and V̂_H are assessed using the CoVs of the corresponding MCI approximations.

3.1.1. Higher order sensitivity indices

Besides the first order sensitivity indices, using the same set of samples, the proposed approach can be directly used to estimate sensitivity indices for higher order interactions. Let S_{[ij]} represent the second order sensitivity index
for the interaction between θ_i and θ_j, and S_{ij} represent the joint second order sensitivity index for the subset θ_{ij} = [θ_i, θ_j]. S_{ij} can be estimated using Eq. (12) by changing θ_i to θ_{ij}, where the KDE needs to be the corresponding multivariate KDE. More specifically, S_{ij} can be estimated through

\hat{S}_{ij} \approx \frac{\hat{\mu}_H^2}{\hat{V}_H} \left( \frac{1}{n_s} \sum_{k=1}^{n_s} \frac{\tilde{\pi}(\theta_{ij}^k)}{f(\theta_{ij}^k)} - 1 \right) \qquad (16)
Based on the relationship S_{[ij]} = S_{ij} - S_i - S_j, S_{[ij]} can be estimated through

\hat{S}_{[ij]} = \hat{S}_{ij} - \hat{S}_i - \hat{S}_j \qquad (17)
Besides second order interactions, other higher order sensitivity indices can be established similarly [34]. As to the KDE, to address bounded distribution parameters, which are frequently encountered in engineering applications, the multivariate boundary-corrected KDE in [18, 34] can be used instead of regular KDE, which suffers from boundary bias for densities with large weights near the boundaries; this yields more accurate estimates of the density and the resulting sensitivity indices. Due to the curse of dimensionality for KDE and the associated estimation errors, in general sample-based estimation of sensitivity indices for higher order interactions (e.g., larger than four) should be avoided [34].

3.1.2. Summary of steps of the proposed approach

The proposed augmented sample-based approach has the following steps:

Step 1: choose a proposal density q_s(x, θ) in the augmented space of [x, θ] for stochastic sampling, and generate N_c candidate samples, denoted {[x_c^k, θ_c^k], k = 1...N_c}, from the proposal density;

Step 2: evaluate the system model to calculate the corresponding h(x) for each {x_c^k, k = 1...N_c}, which gives {h(x_c^k), k = 1...N_c};

Step 3: use stochastic sampling (e.g., rejection sampling) and the {h(x_c^k), k = 1...N_c} information to generate samples from the joint auxiliary density π(x, θ), denoted {[x^k, θ^k], k = 1...n_s};
Step 4: with the information in the candidate sample set, use Eq. (13) to calculate µ̂_H and Eq. (15) to calculate V̂_H;

Step 5: based on the θ_i component of the samples {[x^k, θ^k], k = 1...n_s}, denoted {θ_i^k, k = 1...n_s}, estimate the marginal auxiliary density π(θ_i) using boundary-corrected KDE;

Step 6: use Eq. (12) to estimate the Sobol' index S_i for θ_i;

Step 7: using the same set of simulations, repeat Step 5 and Step 6 to calculate the sensitivity index for all uncertain distribution parameters.

Note that higher order interactions can also be estimated by using the proper equations in Step 5 and Step 6. All of this can be done using only one set of simulations.

3.1.3. Computational efficiency and stochastic sampling

The proposed sample-based approach is conceptually different from direct estimation of sensitivity indices using stochastic simulation. The former uses stochastic simulation to generate samples from the joint auxiliary density π(x, θ), while the latter uses stochastic simulation to directly calculate the integrals involved in the sensitivity indices. The former requires only one set of simulations or samples to estimate all sensitivity indices, while the latter typically needs to be repeated for each index that needs to be calculated. Regarding computational efficiency, some of the discussion on the sample-based approach in [34] also holds for the augmented sample-based approach proposed here. Since the computational effort of KDE is negligible, the overall computational effort should be primarily attributed to the stochastic sampling for generating samples from the joint auxiliary density π(x, θ). The efficiency of stochastic sampling is typically problem dependent and will depend on how good the chosen proposal densities are [42, 43, 44]; this means a direct theoretical comparison against direct estimation of sensitivity indices by MCS is impossible. The comparison will also depend on how many sensitivity indices are estimated [34]. Conceptually, if many sensitivity indices need to be estimated, the proposed sample-based approach is expected to be more efficient than direct estimation. For example, for a problem with n_θ uncertain distribution parameters, the total number of first order sensitivity indices and second order interaction indices is n_θ(n_θ + 1)/2. Suppose the sampling efficiency is s_e; then the total number of model evaluations for the sample-based approach is n_s/s_e to estimate all n_θ(n_θ + 1)/2 sensitivity indices. On the other hand, suppose N samples or model evaluations are used for direct estimation of each index using MCS; then the total number of model evaluations will be N n_θ(n_θ + 1)/2. If n_s/s_e < N n_θ(n_θ + 1)/2, or equivalently if the sampling efficiency s_e > n_s/(N n_θ(n_θ + 1)/2), the sample-based approach will outperform direct estimation. Since N n_θ(n_θ + 1)/2 scales quadratically with n_θ, for larger values of n_θ the condition on s_e is typically easy to meet. If higher sampling efficiency is achieved, the benefits of the sample-based approach become even more substantial. To improve the sampling efficiency, advanced stochastic sampling techniques can be used. For example, the adaptive kernel sampling density (AKSD) has been proposed to build better proposal densities with the explicit objective of maximizing the sampling efficiency [44]. When the risk measure h(x) corresponds to the indicator function I_F(x), taking advantage of the property of the indicator function (i.e., taking values of either 0 or 1), the modified Metropolis-Hastings algorithm [45] and modified rejection sampling [44] have been used to reduce the number of unnecessary model evaluations during the sampling process. Also, for rare events or small failure probabilities, when directly generating samples is not efficient, sequentially and adaptively generating samples (e.g., in the context of Subset Simulation [44, 45]) can significantly improve the sampling efficiency as well.
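The break-even condition above is easy to evaluate numerically; the values of n_s, s_e, and N below are assumed illustrative numbers, not from the paper.

```python
# Break-even check between the sample-based approach (one set of simulations)
# and direct MCS (one run per index); ns, se, N are assumed illustrative values
ns, se, N = 2000, 0.5, 10_000

def break_even(n_theta):
    n_indices = n_theta * (n_theta + 1) // 2  # all 1st- + 2nd-order indices
    cost_sample_based = ns / se               # model evaluations, fixed
    cost_direct = N * n_indices               # grows quadratically in n_theta
    return cost_sample_based, cost_direct

for n_theta in (2, 5, 10):
    sb, d = break_even(n_theta)  # sb stays at 4000; d reaches 550,000 at 10
```

The sample-based cost is flat in n_θ, while the direct-MCS cost grows quadratically, which is the source of the claimed advantage for many-parameter problems.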
Since the choice of stochastic sampling algorithm is not the focus of this paper, it will not be discussed here in detail.
4. Illustrative examples

The performance of the proposed augmented sample-based approach is illustrated through two examples. To fully demonstrate the accuracy and efficiency of the proposed approach, different risk measures are considered, i.e., the case when h(x) corresponds to a general risk measure and the case when h(x) corresponds to the indicator function I_F(x), with the risk sensitivity for the latter corresponding to reliability sensitivity. Also, for reliability sensitivity, different reliability levels (or failure probability levels) are considered, including the case of low failure probability (or rare events).

4.1. Single-degree-of-freedom oscillator

The first example considers the single-degree-of-freedom (SDOF) oscillator (Fig. 1) studied in [8, 46]. The system output y(x) is given by

y(x) = 3r - \frac{2F_1}{m\omega_0^2} \sin\!\left(\frac{\omega_0 t_1}{2}\right) \qquad (18)
where x = [m, c_1, c_2, r, F_1, t_1] corresponds to the vector of inputs. More specifically, m is the mass; c_1 and c_2 are the spring constants of the two springs, respectively; r denotes the displacement at which the spring with spring constant c_2 yields; F_1 represents the amplitude of the force; t_1 is the duration of the force; and \omega_0 = \sqrt{(c_1 + c_2)/m} is the natural frequency of the oscillator.
Figure 1: SDOF oscillator under loading (adapted from [8])
All the input variables are considered to be probabilistic to account for the limited knowledge of them. Gaussian distributions are used to model these variables. The means and standard deviations (SD) of the distributions for m, c_1, and c_2 are considered to be constant. To account
Table 1: Input variables and associated distribution parameters for the SDOF oscillator.

x_i    Distribution    Mean            SD
m      Gaussian        1               0.05
c1     Gaussian        1               0.1
c2     Gaussian        0.1             0.01
r      Gaussian        [0.4, 0.6]      0.05
F1     Gaussian        [0.8, 1.2]      0.2
t1     Gaussian        [0.85, 1.15]    0.2
for the epistemic uncertainty in the distribution parameters, the mean of the distribution of each of the other three variables is assumed to follow a uniform distribution within the given interval, while the SD is considered to be constant. Therefore, the uncertain distribution parameters θ in this example are the means of the input variables [r, F_1, t_1]. Table 1 describes the input variables and associated distribution parameters. The augmented sample-based approach is applied to calculate the sensitivity indices (i.e., Sobol' indices) for the SDOF oscillator with different definitions of risk measures. More specifically, three cases are considered. Case 1: h(x) = y(x) + constant, where the constant is added to ensure that h(x) > 0, and the risk H(θ) corresponds to the expected value of h(x). Case 2: h(x) = I_F(x), where I_F(x) = 1 if y(x) > y_thres (corresponding to failure) and 0 otherwise, and the risk H(θ) corresponds to the failure probability P_F(θ). Case 3: same as Case 2, but with a different selection of y_thres so that failure corresponds to rare events. To validate the accuracy of the proposed approach and demonstrate its efficiency, the sensitivity indices are also calculated using MCS as reference values, due to the lack of an analytical solution.

4.1.1. Case 1

To calculate the sensitivity indices using the augmented sample-based approach for Case 1, samples from π(x, θ) first need to be generated. Here, the prior density is selected as the proposal density (i.e., q_s(x, θ) = f(x|θ)f(θ))
(i.e., no specific effort is made to select a proposal density for higher sampling efficiency). This selection results in a sampling efficiency of around 50%, i.e., on average ns = 500 samples from π(x, θ) are obtained from 1,000 model evaluations. Table 2 shows the first, second, and third order Sobol' indices for the considered distribution parameters, averaged over 50 different runs of the augmented sample-based approach. To demonstrate the convergence of the KDE accuracy, different numbers of samples ns are used for the KDE estimation of the marginal auxiliary densities (e.g., π̃(θi) for first order, π̃(θij) for second order), and the corresponding sensitivity indices are reported. The value in parentheses below ns is the corresponding average number of model evaluations (i.e., N) required to estimate all the listed Sobol' indices. The CoVs for all MCI approximations in the estimation of the Sobol' indices are below 5% even with only ns = 500. As reference, the Sobol' indices are also calculated using MCS and reported in Table 2. For MCS, to evaluate the double-loop integration in Eq. (6) (e.g., Ei[ξi(θi)²] for first order, and Eij[ξij(θij)²] for second order), direct MCS with 2,000 samples is used for each of the inner and outer loops to establish good estimation accuracy, i.e., the CoVs for the MCI approximations of µH, VH and the Vi's are below 5%. The number of model evaluations used for the estimation of all listed Sobol' indices is reported in parentheses. As more samples are used for KDE (ns increases from 500 to 2,000), the estimation accuracy of the sensitivity indices improves and converges to the reference values. With only 2,000 samples, the proposed augmented sample-based approach already provides relatively accurate estimates of both the first order and higher order sensitivity indices.
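To make the two steps just described concrete — accept/reject sampling from the joint auxiliary density with the prior as proposal, then KDE of a marginal auxiliary density — here is a minimal sketch. The response function y(x) is a hypothetical stand-in, not the paper's SDOF model, and first_order_ratio only illustrates the ratio π̃(θi)/f(θi) entering the partial variance, not the paper's full normalized Sobol' estimator:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Intervals for the uncertain means of r, F1, t1 (Table 1)
LO = np.array([0.4, 0.8, 0.85])
HI = np.array([0.6, 1.2, 1.15])
FIXED_MEANS = np.array([1.0, 1.0, 0.1])            # means of m, c1, c2
SDS = np.array([0.05, 0.1, 0.01, 0.05, 0.2, 0.2])  # SDs of m, c1, c2, r, F1, t1

def y(x):
    # hypothetical stand-in response, NOT the paper's SDOF model
    m, c1, c2, r, F1, t1 = x.T
    return F1 * t1 / (m + c1 * r + c2)

def h(x):
    return y(x) + 10.0  # constant shift keeps h(x) > 0 (case 1 risk measure)

def sample_joint_auxiliary(n_prop):
    """Accept/reject sampling from pi(x, theta) ~ h(x) f(x|theta) f(theta),
    using the prior f(x|theta) f(theta) itself as the proposal."""
    theta = rng.uniform(LO, HI, size=(n_prop, 3))
    means = np.hstack([np.tile(FIXED_MEANS, (n_prop, 1)), theta])
    x = rng.normal(means, SDS)
    w = h(x)
    # empirical max used as the bound for illustration only; a true bound
    # on h over the whole domain would be needed in general
    accept = rng.uniform(0.0, w.max(), n_prop) < w
    return x[accept], theta[accept]

def first_order_ratio(theta_i, lo, hi, n_mc=20_000):
    """KDE of the marginal auxiliary density pi~(theta_i); the squared
    deviation of pi~(theta_i)/f(theta_i) from 1, averaged over the
    uniform prior, is proportional to the partial variance V_i."""
    kde = gaussian_kde(theta_i)
    t = rng.uniform(lo, hi, n_mc)
    xi = kde(t) / (1.0 / (hi - lo))  # uniform prior density f(theta_i)
    return np.mean((xi - 1.0) ** 2)

x_s, theta_s = sample_joint_auxiliary(5000)
ratios = [first_order_ratio(theta_s[:, i], LO[i], HI[i]) for i in range(3)]
```

One set of accepted (x, θ) samples serves all three marginals, which is the source of the efficiency gain discussed in the text.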
Based on the values of the first order sensitivity indices, the importance ranking of the three distribution parameters identified by the proposed approach is consistent with that identified by MCS. Compared to the first order sensitivity indices, the estimates for the second and third order sensitivity indices have relatively larger errors. This is attributed to the facts that (i) the sensitivity values for higher order interactions are relatively small for the current problem and thus more sensitive to estimation error and random error (e.g., from random sampling); (ii) to estimate higher order sensitivity indices, KDE in higher dimensions (e.g., π̃(θij) or π̃(θijk)) needs to be established, yet KDE suffers from the curse of dimensionality, where significantly more samples are needed to establish good KDE accuracy (as also mentioned in Section 3.1.1); and (iii) estimation errors accumulate from lower order sensitivity estimates; for example, as shown in Eq. (17), to estimate the second order interaction Ŝ[ij], the quantities Ŝij, Ŝi, and Ŝj need to be estimated first, all of which are subject to errors. Note that when implementing the proposed approach, additional constraints (such as requiring the sensitivity indices to be larger than or equal to zero) can be incorporated to make the estimates more robust (e.g., to avoid negative values). In terms of efficiency, using the proposed augmented sample-based approach, only around 4,000 model evaluations on average are needed to calculate all the sensitivity indices for this example when ns = 2,000 samples are used. On the other hand, direct use of MCS requires 4×10^6 model evaluations to calculate each sensitivity index and needs to be repeated for different sensitivity indices (e.g., 2.8×10^7 model evaluations to calculate all seven sensitivity indices in this example). This high efficiency of the proposed approach stems from the fact that the same set of samples from the joint auxiliary density can be used to calculate all sensitivity indices, while MCS needs to evaluate a double-loop integral and must be repeated for each index. Furthermore, the efficiency advantage of the proposed approach becomes even more evident when the number of distribution parameters is large and each model evaluation is expensive.

4.1.2. Case 2

For case 2, a failure threshold ythres is selected such that the average failure probability (i.e., µH) is around 10% (corresponding to a relatively high failure probability level). The augmented sample-based approach is applied to calculate the reliability sensitivity indices. To obtain the samples from π(x, θ), similar to case 1, the prior density is chosen as the proposal density, and the sampling efficiency is around 10% (i.e., equal to µH). As in case 1, different numbers of
Table 2: Sensitivity indices for SDOF oscillator: Case 1.

Sensitivity   ------ Augmented sample-based approach ------      MCS
index         ns=500      ns=1000     ns=1500     ns=2000
              (1×10^3)    (2×10^3)    (3×10^3)    (4×10^3)       (2.8×10^7)
S1            0.6881      0.6707      0.6808      0.6605         0.6520
S2            0.2958      0.2617      0.2489      0.2469         0.2449
S3            0.2111      0.1554      0.1384      0.1285         0.1164
S[12]         0.0754      0.0602      0.0510      0.0491         0.0095
S[13]         0.1297      0.1075      0.0860      0.0821         0.0084
S[23]         0.1683      0.1258      0.1058      0.0938         0.0024
S[123]        0.0000      0.0004      0.0000      0.0020         0.0001
Table 3: Sensitivity indices for SDOF oscillator: Case 2.

Sensitivity   ------ Augmented sample-based approach ------      MCS
index         ns=500      ns=1000     ns=1500     ns=2000
              (5×10^3)    (1×10^4)    (1.5×10^4)  (2×10^4)       (1.4×10^9)
S1            0.6502      0.6371      0.6368      0.6322         0.6212
S2            0.1708      0.1737      0.1704      0.1695         0.1662
S3            0.0812      0.0773      0.0785      0.0775         0.0757
S[12]         0.1245      0.1179      0.1144      0.1108         0.0859
S[13]         0.0708      0.0614      0.0573      0.0552         0.0386
S[23]         0.0209      0.0155      0.0138      0.0121         0.0066
S[123]        0.0626      0.0456      0.0388      0.0356         0.0057
Table 4: Sensitivity indices for SDOF oscillator: Case 3.

Sensitivity   ------ Augmented sample-based approach ------      IS
index         ns=500      ns=1000     ns=1500     ns=2000
              (3×10^4)    (6×10^4)    (9×10^4)    (1.2×10^5)     (1.75×10^9)
S1            0.4164      0.4218      0.4168      0.4184         0.4034
S2            0.1033      0.1049      0.1002      0.1018         0.0986
S3            0.0478      0.0446      0.0455      0.0446         0.0440
S[12]         0.2915      0.2882      0.2722      0.2798         0.2689
S[13]         0.1422      0.1246      0.1264      0.1334         0.1187
S[23]         0.0274      0.0235      0.0221      0.0228         0.0178
S[123]        0.0796      0.0806      0.0759      0.0582         0.0486
samples are used to estimate the sensitivity indices, and the values are reported in Table 3. Similar to case 1, the CoVs for all MCI approximations are below 5% even with ns = 500. The reference values calculated using MCS are also reported. For MCS, 1×10^5 and 2×10^3 samples are used for the inner and outer loops, respectively, for the calculation of each sensitivity index with good estimation accuracy (i.e., CoVs for the MCI approximations are below 5%). Ultimately, a total of 1.4×10^9 model evaluations is needed to estimate all the sensitivity indices. When ns = 2,000 samples are used, the proposed approach only requires around 2×10^4 model evaluations to estimate all the sensitivity indices. In terms of accuracy, the sensitivity indices estimated using the proposed approach with 2,000 samples already show good accuracy for both the first order and second order sensitivity indices. Among the second order sensitivity indices, S[12] is the largest, even larger than S3, which indicates that strong interaction exists between the distribution parameters for r and F1. Overall, the results further validate the accuracy and efficiency of the proposed augmented sample-based approach.
4.1.3. Case 3

For case 3, a failure threshold ythres is selected such that the average failure probability (i.e., µH) is around 10^-3, corresponding to a rare event. The augmented sample-based approach is applied to calculate the reliability sensitivity indices. Due to the low failure probability, directly using the prior density as the proposal density to generate samples from π(x, θ) would have low sampling efficiency (i.e., around 0.1%). Instead, AKSD proposed in [44] is used to sequentially and adaptively build a better proposal density and efficiently generate samples from π(x, θ), which in this case corresponds to the joint failure distribution. Details about AKSD can be found in [44]. In the end, an average sampling efficiency of around 1.7% is achieved. Similar to case 2, different numbers of samples are used to estimate the sensitivity indices, and the values are reported in Table 4. The CoVs for all MCI approximations are below 5% even with only ns = 500. When ns = 2,000 samples are used, the proposed approach only requires around 1.2×10^5 model evaluations to estimate all the sensitivity indices. To obtain the reference values for the sensitivity indices, importance sampling (IS) is used instead of direct MCS to improve the estimation accuracy without using a prohibitively large number of simulations. Keep in mind that while the average failure probability (i.e., µH) is around 10^-3, the conditional failure probability for given θ values could be much lower than 10^-3, which poses challenges to its MCS-based estimation and requires a relatively large number of samples to establish good accuracy. Here, IS is implemented for both the inner and outer loops, in which the proposal density is built for each uncertain distribution parameter and input random variable; 5×10^4 and 5×10^3 samples are used for the inner and outer loops, respectively. The CoVs for the calculation of all integrals using IS are below 5%.
Ultimately, 2.5×10^8 model evaluations are required for the evaluation of each sensitivity index, and a total of 1.75×10^9 model evaluations for all sensitivity indices. In comparison, the proposed approach only requires around 1.2×10^5 model evaluations to estimate all the sensitivity indices, showing significantly higher computational efficiency.
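The IS reference computation above can be illustrated in miniature. The sketch below is plain importance sampling with a shifted Gaussian proposal for a hypothetical scalar limit state g(x) = β − x with X ~ N(0, 1); it is not the SDOF model and not the AKSD or IS densities used in the paper:

```python
import numpy as np
from scipy.stats import norm

def failure_prob_is(beta_t, n=20_000, seed=3):
    """Importance sampling estimate of P[X >= beta_t] for X ~ N(0, 1):
    sample from a proposal centered at the failure boundary and reweight
    each sample by the density ratio f/q."""
    rng = np.random.default_rng(seed)
    z = rng.normal(beta_t, 1.0, n)               # proposal q centered at beta_t
    w = norm.pdf(z) / norm.pdf(z, beta_t, 1.0)   # importance weights f/q
    return np.mean((z >= beta_t) * w)

# exact value for comparison is Phi(-beta_t) = norm.cdf(-beta_t)
```

With β = 3 this recovers a failure probability near 1.35×10^-3 from 2×10^4 samples, whereas direct MC at that probability level would need far more samples for a comparable CoV.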
Similar to case 2, as ns increases, the sensitivity indices estimated by the proposed augmented sample-based approach get closer to the reference values. The proposed approach shows good convergence and accuracy in the evaluation of all sensitivity indices. Regarding the sensitivity index values, one interesting observation in this case, compared to case 2, is that the second order sensitivity indices S[12] and S[13] (especially S[12]) are larger than the first order sensitivity indices S2 and S3. This demonstrates strong interaction between the distribution parameters for r and F1, as well as between those for r and t1, and this interaction contributes more to the variability of the failure probability (or risk) than in case 2. The strong interaction can be partially attributed to the larger importance of r as the failure probability decreases. Overall, the proposed augmented sample-based approach demonstrates good accuracy and high efficiency in the evaluation of risk sensitivity under different risk measures.

4.2. Simply supported truss

The second example considers the reliability sensitivity to epistemic uncertainty with a relatively larger number of distribution parameters, to further highlight the computational efficiency of the proposed augmented sample-based approach. It considers the deflection y at mid-span of a simply supported truss (Fig. 2) studied in [7]. The truss, with constant material properties, is subjected to seven uncertain loads (i.e., Pi with i = 1, 2, ..., 7), which leads to x = [P1, ..., Pi, ..., P7]. The modulus of elasticity is E = 200×10^9 Pa.
The cross-section area is A = 0.004 m^2, except for the bars of type I with A = 0.00535 m^2 and type II with A = 0.0068 m^2. The loads are considered to follow lognormal distributions, in which the mean (µi) and SD (σi) are defined by uniform distributions on the intervals [95, 105] kN and [13, 17] kN, respectively. In this case, the uncertain distribution parameters θ consist of the means and SDs of the lognormal distributions for the input variables Pi with i = 1, 2, ..., 7, i.e., θ = [θ1, θ2, ..., θ14] = [µ1, ..., µi, ..., µ7, σ1, ..., σi, ..., σ7]. For the definition of failure, the same failure threshold of ythres = 29 mm as
Figure 2: Simply supported truss subjected to uncertain loads (adapted from [7])
in [7] is selected. This selection leads to an average failure probability of around 1% (i.e., µH = 1%). The augmented sample-based approach is implemented to calculate the reliability sensitivity indices. To obtain the samples from π(x, θ), the prior density is selected as the proposal density; the sampling efficiency is around 1% in this case. As in the first example, different numbers of samples are used to estimate the sensitivity indices, and the average values over 50 different runs for all 14 first order indices are reported in Table 5. With the same set of samples, higher order sensitivity indices (i.e., 91 second order sensitivity indices) are also calculated; however, they are not reported here considering their small values (i.e., they are less important). The above results for the sensitivity indices are obtained with low CoVs (i.e., below 5% even with ns = 500) for all MCI approximations. When ns = 2,000 samples are used, only around 2×10^5 model evaluations are needed to calculate all the sensitivity indices. The reference values for the reported sensitivity indices (i.e., first order) are calculated using IS, considering the relatively low failure probability level (i.e., 1%), and the results are also presented in Table 5. Here, 5×10^4 and 5×10^3 samples are used for the inner and outer loops, respectively. This leads to CoVs below 5% for the calculation of both the inner and outer integrals. For this example, using IS, 2.5×10^8 model evaluations are required to estimate each sensitivity
index, and 2.625×10^10 model evaluations in total for all the first order and second order sensitivity indices (i.e., 14 first order and 91 second order indices). Note that the 3.5×10^9 reported in parentheses corresponds to the number of model evaluations required to estimate only the 14 first order sensitivity indices. In comparison, the proposed approach shows a much higher computational efficiency, since it only requires 2×10^5 model evaluations for the estimation of all the reported sensitivity indices. The benefit in computational efficiency of the proposed approach would be even higher when the evaluation of higher order sensitivity indices (e.g., the third order indices) using the same set of simulations is considered.

Table 5: The first order sensitivity indices for the truss.

Sensitivity   ------ Augmented sample-based approach ------      IS
index         ns=500      ns=1000     ns=1500     ns=2000
              (5×10^4)    (1×10^5)    (1.5×10^5)  (2×10^5)       (3.5×10^9)
S1            0.0463      0.0367      0.0351      0.0330         0.0235
S2            0.1207      0.1062      0.1038      0.0982         0.0874
S3            0.1883      0.1849      0.1759      0.1758         0.1800
S4            0.2892      0.2772      0.2710      0.2679         0.2529
S5            0.2141      0.2011      0.1890      0.1877         0.1766
S6            0.1018      0.0946      0.0961      0.0987         0.0939
S7            0.0506      0.0420      0.0383      0.0363         0.0247
S8            0.0280      0.0208      0.0172      0.0164         0.0007
S9            0.0378      0.0217      0.0177      0.0175         0.0043
S10           0.0482      0.0330      0.0302      0.0293         0.0185
S11           0.0677      0.0520      0.0491      0.0482         0.0358
S12           0.0563      0.0405      0.0331      0.0315         0.0169
S13           0.0426      0.0247      0.0197      0.0188         0.0071
S14           0.0362      0.0223      0.0171      0.0158         0.0008

As in the first example, the proposed augmented sample-based approach demonstrates good convergence and accuracy in the estimation of the sensitivity indices, which can be clearly seen from Table 5. As ns increases, the values of the sensitivity indices calculated using the proposed approach become closer to the reference values. When ns = 2,000 samples are used for KDE, the proposed approach already demonstrates good accuracy for the estimation of the reported sensitivity indices. Based on the values of the sensitivity indices, the means (µi) of the distributions for the input variables are overall more important than the SDs (σi) in this example. The values for both µi and σi show good symmetry about the mid-span (e.g., S1 is close to S7, and S8 is close to S14). The results are also consistent with the effect of the loading location on the deflection y at mid-span, i.e., the same load located closer to the mid-span leads to larger y (or risk) than one located further away. More specifically, among the first order sensitivity indices for µi, S4, which corresponds to µi for the load at mid-span, is the largest; the other values decrease as the distance of the load from the mid-span increases (i.e., S4 > S3 > S2 > S1 and S4 > S5 > S6 > S7). The same trend can be seen in the first order sensitivity indices for σi. In terms of the values, the summation of all the first order indices is already 0.93 based on the reference values, which indicates that the total impact of the individual distribution parameters dominates the variability of the failure probability.
The large importance of the individual distribution parameters is consistent with the fact that the higher order indices are overall very small in this case.
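For readers reproducing this kind of load model, note that the uncertain (µi, σi) describe the mean and SD of the lognormal loads themselves, which must be converted to the underlying normal parameters before sampling. A minimal sketch under the stated intervals (variable names are illustrative, not from the paper):

```python
import numpy as np

def lognormal_params(mean, sd):
    """Convert the mean/SD of a lognormal variable into the underlying
    normal parameters (mu, sigma) expected by numpy's lognormal sampler."""
    sigma2 = np.log(1.0 + (sd / mean) ** 2)
    return np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2)

rng = np.random.default_rng(4)
theta_mu = rng.uniform(95e3, 105e3, 7)      # uncertain means, [95, 105] kN
theta_sd = rng.uniform(13e3, 17e3, 7)       # uncertain SDs, [13, 17] kN
mu, sig = lognormal_params(theta_mu, theta_sd)
P = rng.lognormal(mu, sig, size=(1000, 7))  # one batch of load samples
```

Each draw of (theta_mu, theta_sd) corresponds to one θ in the outer loop; the batch P corresponds to the inner-loop samples of x conditional on that θ.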
5. Conclusions

This paper proposed an augmented sample-based approach for efficient risk sensitivity analysis with respect to epistemic uncertainty in distribution parameters. The proposed approach only needs one set of samples from a joint auxiliary density in the augmented space of uncertain distribution parameters and input random variables, a density proportional to the integrand of the augmented risk integral. Using this set of samples and the corresponding marginal samples, it can efficiently estimate the Sobol' indices for all uncertain distribution parameters (including both first order main effects and higher order interactions). The efficiency of the proposed approach can be further improved when advanced stochastic sampling algorithms are used to improve the sampling efficiency. The proposed approach is general and can be applied to different risk measures, including the calculation of reliability sensitivity to epistemic uncertainty. The results of the two illustrative examples demonstrated the good accuracy and high efficiency of the proposed approach. With a relatively small number of samples, kernel density estimation (KDE) established good approximations of the marginal auxiliary densities and the sensitivity indices. Larger errors were reported for distribution parameters with small sensitivity index values; however, these challenges are also faced by other computational methodologies such as Monte Carlo simulation (MCS). On the other hand, the proposed approach would face challenges when estimating sensitivity indices for higher order interactions (e.g., higher than third order), which would require more samples to establish good KDE accuracy due to the curse of dimensionality of KDE. Overall, the proposed approach is especially useful for efficient evaluation of risk sensitivity for problems with a large number of uncertain distribution parameters and computationally expensive models.
References

[1] A. Saltelli, Sensitivity analysis for importance assessment, Risk Analysis 22 (3) (2002) 579–590.
[2] T. Homma, A. Saltelli, Importance measures in global sensitivity analysis of nonlinear models, Reliability Engineering & System Safety 52 (1996) 1–17.
[3] W. Chen, R. Jin, A. Sudjianto, Analytical variance-based global sensitivity analysis in simulation-based design under uncertainty, Journal of Mechanical Design 127 (5) (2005) 875.
[4] H. Liu, W. Chen, A. Sudjianto, Relative entropy based method for probabilistic sensitivity analysis in engineering design, Journal of Mechanical Design 128 (2) (2006) 326.
[5] A. D. Kiureghian, O. Ditlevsen, Aleatory or epistemic? Does it matter?, Structural Safety 31 (2) (2009) 105–112.
[6] S. Sankararaman, S. Mahadevan, Separating the contributions of variability and parameter uncertainty in probability distributions, Reliability Engineering & System Safety 112 (2013) 187–199.
[7] J. E. Hurtado, Assessment of reliability intervals under input distributions with uncertain parameters, Probabilistic Engineering Mechanics 32 (2013) 80–92.
[8] R. Schöbi, B. Sudret, Global sensitivity analysis in the context of imprecise probabilities (p-boxes) using sparse polynomial chaos expansions, Reliability Engineering & System Safety 187 (2019) 129–141.
[9] P. Wei, J. Song, S. Bi, M. Broggi, M. Beer, Z. Lu, Z. Yue, Non-intrusive stochastic analysis with parameterized imprecise probability models: I. Performance estimation, Mechanical Systems and Signal Processing 124 (2019) 349–368.
[10] P. Wei, J. Song, S. Bi, M. Broggi, M. Beer, Z. Lu, Z. Yue, Non-intrusive stochastic analysis with parameterized imprecise probability models: II. Reliability and rare events analysis, Mechanical Systems and Signal Processing 126 (2019) 227–247.
[11] J. Zhang, M. D. Shields, On the quantification and efficient propagation of imprecise probabilities resulting from small datasets, Mechanical Systems and Signal Processing 98 (2018) 465–483.
[12] J. Zhang, M. D. Shields, The effect of prior probabilities on quantification and propagation of imprecise probabilities resulting from small datasets, Computer Methods in Applied Mechanics and Engineering 334 (2018) 483–506.
[13] B. Iooss, P. Lemaître, A review on global sensitivity analysis methods, in: Uncertainty Management in Simulation-Optimization of Complex Systems: Algorithms and Applications, 2015, pp. 101–122.
[14] I. Sobol', Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates, Mathematics and Computers in Simulation 55 (1-3) (2001) 271–280.
[15] B. Sudret, Global sensitivity analysis using polynomial chaos expansions, Reliability Engineering & System Safety 93 (7) (2008) 964–979.
[16] B. Krykacz-Hausmann, Epistemic sensitivity analysis based on the concept of entropy, in: Proceedings of SAMO2001, 2001, pp. 1–6.
[17] H. Liu, W. Chen, A. Sudjianto, Probabilistic sensitivity analysis methods for design under uncertainty, in: 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 2004, pp. 1–15.
[18] G. Jia, A. A. Taflanidis, Sample-based evaluation of global probabilistic sensitivity measures, Computers & Structures 144 (2014) 103–118.
[19] E. Borgonovo, A new uncertainty importance measure, Reliability Engineering & System Safety 92 (6) (2007) 771–784.
[20] Q. Liu, T. Homma, A new computational method of a moment-independent uncertainty importance measure, Reliability Engineering & System Safety 94 (7) (2009) 1205–1211.
[21] W. Yun, Z. Lu, X. Jiang, L. Zhang, Borgonovo moment independent global sensitivity analysis by Gaussian radial basis function meta-model, Applied Mathematical Modelling 54 (2018) 378–392.
[22] P. Wei, Z. Lu, W. Hao, J. Feng, B. Wang, Efficient sampling methods for global reliability sensitivity analysis, Computer Physics Communications 183 (8) (2012) 1728–1743.
[23] P. Wei, Z. Lu, J. Song, Extended Monte Carlo simulation for parametric global sensitivity analysis and optimization, AIAA Journal 52 (4) (2014) 867–878.
[24] Z.-C. Tang, Z. Lu, P. Wang, Y. Xia, P. Yang, P. Wang, Efficient numerical simulation method for evaluations of global sensitivity analysis with parameter uncertainty, Applied Mathematical Modelling 40 (1) (2016) 597–611.
[25] B. Krzykacz-Hausmann, An approximate sensitivity analysis of results from complex computer models in the presence of epistemic and aleatory uncertainties, Reliability Engineering & System Safety 91 (10-11) (2006) 1210–1218.
[26] W. Yun, Z. Lu, Y. Zhang, X. Jiang, An efficient global reliability sensitivity analysis algorithm based on classification of model output and subset simulation, Structural Safety 74 (2018) 49–57.
[27] J. E. Oakley, A. O'Hagan, Probabilistic sensitivity analysis of complex models: a Bayesian approach, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 66 (3) (2004) 751–769.
[28] G. A. Banyay, M. D. Shields, J. C. Brigham, Efficient global sensitivity analysis for flow-induced vibration of a nuclear reactor assembly using Kriging surrogates, Nuclear Engineering and Design 341 (2019) 1–15.
[29] G. Jia, R.-Q. Wang, M. T. Stacey, Investigation of impact of shoreline alteration on coastal hydrodynamics using dimension reduced surrogate based sensitivity analysis, Advances in Water Resources 126 (2019) 168–175.
[30] L. Le Gratiet, S. Marelli, B. Sudret, Metamodel-based sensitivity analysis: polynomial chaos expansions and Gaussian processes, in: Handbook of Uncertainty Quantification, 2017, pp. 1289–1325.
[31] K. Konakli, B. Sudret, Global sensitivity analysis using low-rank tensor approximations, Reliability Engineering & System Safety 156 (2016) 64–83.
[32] G. Blatman, B. Sudret, Efficient computation of global sensitivity indices using sparse polynomial chaos expansions, Reliability Engineering & System Safety 95 (11) (2010) 1216–1229.
[33] M. Li, G. Jia, R.-Q. Wang, Surrogate modeling for sensitivity analysis of models with high-dimensional outputs, in: 13th International Conference on Applications of Statistics and Probability in Civil Engineering, Seoul, South Korea, 2019.
[34] G. Jia, A. Taflanidis, Efficient evaluation of Sobol' indices utilizing samples from an auxiliary probability density function, Journal of Engineering Mechanics 142 (5) (2016) 1–11.
[35] S. Nannapaneni, S. Mahadevan, Reliability analysis under epistemic uncertainty, Reliability Engineering & System Safety 155 (2016) 9–20.
[36] V. Chabridon, M. Balesdent, J.-M. Bourinet, J. Morio, N. Gayton, Reliability-based sensitivity estimators of rare event probability in the presence of distribution parameter uncertainty, Reliability Engineering & System Safety 178 (2018) 164–178.
[37] S. Au, Reliability-based design sensitivity by efficient simulation, Computers & Structures 83 (14) (2005) 1048–1061.
[38] P. Wei, F. Liu, Z. Lu, Z. Wang, A probabilistic procedure for quantifying the relative importance of model inputs characterized by second-order probability models, International Journal of Approximate Reasoning 98 (2018) 78–95.
[39] S. Sankararaman, S. Mahadevan, Likelihood-based representation of epistemic uncertainty due to scarce point data and/or interval data, Reliability Engineering & System Safety 96 (7) (2011) 814–824.
[40] Z. Wang, G. Jia, Stochastic sampling for efficient seismic risk assessment of transportation networks, in: 13th International Conference on Applications of Statistics and Probability in Civil Engineering, 2019, pp. 1–8.
[41] Z. Wang, G. Jia, Efficient sample-based approach for effective seismic risk mitigation of transportation networks, Sustainable and Resilient Infrastructure (2019) 1–16.
[42] C. P. Robert, G. Casella, Monte Carlo Statistical Methods, 2nd Edition, Springer, New York, 2004.
[43] G. Jia, A. A. Taflanidis, Non-parametric stochastic subset optimization utilizing multivariate boundary kernels and adaptive stochastic sampling, Advances in Engineering Software 89 (2015) 3–16.
[44] G. Jia, A. A. Taflanidis, J. L. Beck, A new adaptive rejection sampling method using kernel density approximations and its application to subset simulation, ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering 3 (2) (2017) 1–12.
[45] S. Au, J. Beck, Subset simulation and its application to seismic risk based on dynamic analysis, Journal of Engineering Mechanics 129 (8) (2003) 901–917.
[46] L. Schueremans, D. Van Gemert, Benefit of splines and neural networks in simulation based structural reliability analysis, Structural Safety 27 (3) (2005) 246–261.
CRediT author statement

Zhenqiang Wang: Methodology, Formal analysis, Visualization, Writing - Original draft preparation. Gaofeng Jia: Conceptualization, Methodology, Writing - Reviewing and Editing.
Declaration of interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.