Development of a localized probabilistic sensitivity method to determine random variable regional importance


Reliability Engineering and System Safety 107 (2012) 3–15


Harry Millwater, Gulshan Singh, Miguel Cortina
Department of Mechanical Engineering, University of Texas at San Antonio, San Antonio, TX, USA

Article info

Abstract

Article history: Received 14 October 2010 Received in revised form 17 March 2011 Accepted 29 April 2011 Available online 7 May 2011

There are many methods to identify the important variable out of a set of random variables, i.e., "inter-variable" importance; however, to date there are no comparable methods to identify the "region" of importance within a random variable, i.e., "intra-variable" importance. Knowledge of the critical region of an input random variable (tail, near-tail, and central region) can provide valuable information towards characterizing, understanding, and improving a model through additional modeling or testing. As a result, an intra-variable probabilistic sensitivity method was developed and demonstrated for independent random variables that computes the partial derivative of a probabilistic response with respect to a localized perturbation in the CDF values of each random variable. These sensitivities are then normalized in absolute value with respect to the largest sensitivity within a distribution to indicate the region of importance. The methodology is implemented using the Score Function kernel-based method such that existing samples can be used to compute sensitivities for negligible cost. Numerical examples demonstrate the accuracy of the method through comparisons with finite difference and numerical integration quadrature estimates. © 2011 Elsevier Ltd. All rights reserved.

Keywords: Probabilistic sensitivities; Score Function sensitivities; Localized sensitivity

1. Introduction

Probabilistic sensitivity analysis is often a critical component of a risk assessment. Its purpose is traditionally to identify the important variables in an analysis in order to focus resources, computational, experimental, or both, on the parameters that most affect the system response. The word "important" has different meanings in different contexts. Within a probabilistic analysis, it usually refers to variables that are modeled as random and whose variation has the largest effect on the response. Frey and Patil [1] provide an overview article discussing ten sensitivity methods, both probabilistic and deterministic, such as automatic differentiation, regression, scatter plots, ANOVA, and others. Similarly, Hamby [2] discusses fourteen different sensitivity methods including partial derivatives, variation of inputs by one standard deviation, regression, the Smirnov test, the Cramér-von Mises test, and others. Scatter plots and correlation coefficients are a straightforward and low-cost method to define importance [1,3]. Variables that

The results indicate that accurate localized sensitivities can be obtained for the dominant random variables as long as sufficient samples are available.
Corresponding author: Department of Mechanical Engineering, University of Texas at San Antonio, One UTSA Circle, San Antonio, TX 78249, USA. Tel.: +1 210 458 4481; fax: +1 210 458 6504. E-mail address: [email protected] (H. Millwater).

0951-8320/$ - see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.ress.2011.04.003

show a clear "relationship" between the variable and the response are important. Variables for which the scatter plots largely reflect the marginal distribution are not important. Often, the correlation coefficient is used to obtain a numerical value of the relationship. Linear regression is a well-known method to assess the importance of a random variable [3,4]. The standardized regression coefficients indicate the amount of variance of the response explained by each variable, as well as the amount of the response variance explained by the entire linear model and by groups of variables. Stepwise regression is particularly useful to determine the parsimonious model that best accounts for the response variance given a fixed number of variables, e.g., the best linear regression model given k-out-of-N variables, where N is the total number of random variables. Variance-based sensitivity methods are powerful in that they identify the amount of the total variance that can be attributed to each input random variable and the amount the variance would be reduced if a particular random variable were fixed at a specific value [4-6]. Main effects, higher-order effects, and interaction effects can be explored. Other sensitivity methods based on the Kullback-Leibler divergence are similar in spirit to variance-based sensitivity metrics but allow consideration of differences in moments higher than second order [7]. A number of sensitivity methods are available for the First Order Reliability Method (FORM). Sensitivity factors (derivatives

of the safety index with respect to the random variables) [8], derivatives of the probability-of-failure with respect to the random variable parameters, e.g., ∂P_f/∂μ, ∂P_f/∂σ [8], and omission factors [9] are computed as by-products of an analysis. Various authors develop and discuss the "Score Function (SF)" method for the computation of partial derivatives of a probabilistic performance function (probability-of-failure or response moment) with respect to parameters of the underlying input probability distributions [10-18]. This method provides local partial derivatives of the probability-of-failure or response moments with respect to the parameters of the input PDFs, e.g., ∂P_f/∂μ, ∂P_f/∂σ. Implementation of the methodology is convenient using sampling methods. A significant advantage is that negligible additional computing time is required to determine the sensitivities, since the same samples used to compute the probabilistic response can be reused to compute the sensitivities; however, this assumes that sufficient samples are already available in order to obtain convergence of the sensitivity estimates. Wu and Mohanty [15] use the SF method to compute the partial derivative of the response mean with respect to the inputs and combine this information with hypothesis testing to identify the important variables. Sues and Cesare [16] use the SF method to compute the partial derivative of the response standard deviation with respect to the parameters of the input PDFs. Millwater et al. [18] have extended the method to input random variables with arbitrarily dimensioned correlated multivariate normal distributions, including providing sensitivities with respect to the correlation coefficients. The Score Function method has some advantages and disadvantages relative to variance-based importance measures. One advantage is that the quantity of interest can be defined and focused upon, e.g., the probability-of-failure or the mean response. For example, the sensitivities of the mean of the response may be completely different than the sensitivities of the probability-of-failure. Although the results are local and depend on the values used when computing the sensitivities, sometimes this is just what is needed, for example, in reliability-based design. A disadvantage is that a large number of samples may be needed in order to obtain convergence of the sensitivity estimates.

In all the methods listed above, the purpose is to identify the important variable or groups of variables among all variables. There is no focus on which part of a variable is important, such as the left or right tail, the central region, or the near-central region. This information can be useful in a number of contexts. For example, in experimental design it would be useful to know where to tailor experiments if possible, how much data is needed to characterize the important region, and what computational strategies may suffice during analysis. Therefore, a methodology was developed and is presented here to compute the partial derivative of the probabilistic response (probability-of-failure, response mean, or standard deviation) with respect to a set of discretized CDF values that span the distribution. The methodology is developed for independent random variables and folded into the Score Function approach such that existing samples can be reused to estimate the sensitivities.

Fig. 1. Schematic of the effect of the localized sensitivity method on the CDF.

Fig. 2. Schematic of the effect of the localized sensitivity method on the PDF.

2. Methodology

The basic concept is to discretize each random variable CDF into regions using discretization points x_{i,j} (the jth location of random variable i), introduce a local disturbance into the CDF for each region centered at x_{i,j}, then determine the relative change in a probabilistic response: the probability-of-failure, P_f, or the response moments (mean, μ_g, and standard deviation, σ_g), for each regional disturbance.

The concept is shown in Fig. 1, whereby the CDF, F_i, for random variable X_i is discretized at discrete points x_{i,j}; a local disturbance is then input into the CDF at a discretization point (dashed line in Fig. 1), yielding F̂_i. The perturbation at x_j for random variable i only extends over the range x_{j-1} < x < x_{j+1}. The points at which to discretize the CDF are arbitrary and user defined. The effect of the perturbation on the PDF is shown in Fig. 2. An estimate of the partial derivative of a probabilistic response, L, with respect to the CDF of random variable X_i at a specific location x_j, F_i(x_j), can then be obtained using the finite difference method, namely

∂L/∂F_i(x_j) ≈ ΔL/ΔF_i(x_j) = [L(F̂_i(x_j)) - L(F_i(x_j))] / ΔF_i(x_j)    (1)

where L denotes either the probability-of-failure, P_f, the mean of the response, μ_g, or the standard deviation of the response, σ_g. The response is defined by an arbitrary function g(x) of the random variables X, with failure defined when g(x) ≤ 0. Calculation of the sensitivity using the finite difference method, while possible, is arduous in that multiple analyses are required (K+1 for K discretized regions for each random variable) and, if sampling is used, a large number of samples are required
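The finite difference estimate of Eq. (1) can be sketched numerically for one random variable: perturb the CDF with a small tent-shaped bump centered at a discretization point, recompute the response mean as a Riemann-Stieltjes sum, and form the difference quotient. This is a minimal illustration, not the paper's implementation; the response g(x) = x, the grid, and the perturbation size are illustrative choices:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tent(x, xl, xc, xr):
    """Unit tent centered at xc, zero outside (xl, xr): the local CDF bump."""
    if xl < x <= xc:
        return (x - xl) / (xc - xl)
    if xc < x < xr:
        return (xr - x) / (xr - xc)
    return 0.0

def mean_from_cdf(cdf_vals, grid, g):
    """mu_g = integral of g dF, approximated by a midpoint Riemann-Stieltjes sum."""
    return sum(g(0.5 * (a + b)) * (Fb - Fa)
               for a, b, Fa, Fb in zip(grid, grid[1:], cdf_vals, cdf_vals[1:]))

g = lambda x: x                                  # illustrative response
grid = [i * 0.001 - 5.0 for i in range(10001)]   # fine grid on [-5, 5]
dF = 1e-4                                        # CDF perturbation size
F0 = [normal_cdf(x) for x in grid]
F1 = [normal_cdf(x) + dF * tent(x, -1.0, 0.0, 1.0) for x in grid]
# Eq. (1): difference quotient of the response mean w.r.t. F(0)
sens = (mean_from_cdf(F1, grid, g) - mean_from_cdf(F0, grid, g)) / dF
```

Raising the CDF at x = 0 shifts probability mass to the left, so the sensitivity of the mean is negative (-1 in the continuous limit for g(x) = x). With sampling instead of quadrature, the subtraction of two nearly equal estimates makes this approach expensive, which motivates the kernel-based formulation developed next.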


Fig. 3. Examples of the kernel function for a standard normal distribution: (a) points {-1,0,1} and (b) points {2,3,4}.


Fig. 4. Examples of the kernel function times the PDF for a standard normal distribution: (a) points {-1,0,1} and (b) points {2,3,4}.

due to the subtraction of two nearly equal numbers in Eq. (1). Therefore, the Score Function method was employed in order to increase the efficiency such that the existing samples can be reused to compute the local sensitivities.

Consider the derivative of the response mean, μ_g, with respect to a parameter θ_i of variable X_i. The Score Function approach to compute the sensitivity for independent random variables is [17]

∂μ_g/∂θ_i = ∂/∂θ_i ∫ g(x) f_X(x) dx
          = ∫ g(x) [∂f_{X_i}(x_i)/∂θ_i] [1/f_{X_i}(x_i)] f_X(x) dx
          = ∫ g(x) κ_{θ_i}(x_i) f_X(x) dx = E[g(x) κ_{θ_i}(x_i)]    (2)

(all integrals over the full range of x), where E denotes the expectation operator and κ is the kernel function, defined generically for an independent random variable as

κ_θ(x) = [∂f_X(x)/∂θ] [1/f_X(x)]    (3)

Now consider the case where θ_i represents F_i(x_j), a CDF value for X_i at location x_j. Assuming the CDF is linearized between discretization points, the PDF for random variable X_i is

f_j = (F_j - F_{j-1}) / (x_j - x_{j-1}),       x_{j-1} < x < x_j
f_{j+1} = (F_{j+1} - F_j) / (x_{j+1} - x_j),   x_j < x < x_{j+1}    (4)

where the shortcut notation f_i(x_j) = f_j and F_i(x_j) = F_j has been used since it is obvious from the context that f and F relate to X_i. The derivative of the PDF with respect to the parameter of interest F_j is

∂f_j/∂F_j = 1/(x_j - x_{j-1}),         x_{j-1} < x < x_j
∂f_{j+1}/∂F_j = -1/(x_{j+1} - x_j),    x_j < x < x_{j+1}
0,                                     otherwise    (5)

and the resulting kernel function for the local disturbance per Eq. (3) is

κ_{F_j}(x) =  1/(F_j - F_{j-1}),     x_{j-1} < x < x_j
             -1/(F_{j+1} - F_j),     x_j < x < x_{j+1}
              0,                     otherwise    (6)
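A basic property of the Eq. (6) kernel, formalized in Section 3, is that it integrates to zero against the PDF. A quick midpoint-rule check for a standard normal with discretization points {-1, 0, 1}; the function names are illustrative:

```python
import math

def phi(x):  # standard normal PDF
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def kernel(x, xl=-1.0, xc=0.0, xr=1.0):
    """Localized kernel of Eq. (6) for a perturbation of the CDF value at xc."""
    if xl < x < xc:
        return 1.0 / (Phi(xc) - Phi(xl))     # positive, constant
    if xc < x < xr:
        return -1.0 / (Phi(xr) - Phi(xc))    # negative, constant
    return 0.0                               # zero outside the local support

# Midpoint-rule evaluation of E[kappa] over the kernel's support (-1, 1)
n, lo, hi = 4000, -1.0, 1.0
h = (hi - lo) / n
ek = sum(kernel(lo + (i + 0.5) * h) * phi(lo + (i + 0.5) * h)
         for i in range(n)) * h
```

The positive and negative lobes each integrate to exactly 1 in magnitude, so ek vanishes to quadrature precision, consistent with the requirement E[κ(x)] = 0 discussed in Section 3.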

Thus, the kernel function is a localized disturbance over the range x_{j-1} < x < x_{j+1}: positive and constant over x_{j-1} < x < x_j, negative and constant over x_j < x < x_{j+1}, with the value in each region a function of F. Fig. 3 shows two examples of the kernel function defined in Eq. (6) for a standard normal distribution with discretization points at x_j = {-1,0,1} and x_j = {2,3,4}. The kernel function depends on x_j as a step function. That is, for any particular region x_{j-1} < x < x_j or x_j < x < x_{j+1}, the kernel function is independent of x, since its numerical value is a constant, but dependent on the region in which x is located. The kernel function also has local support in that it is only non-zero up to each neighboring discretization point. Fig. 4 shows corresponding examples of the kernel function times the standard normal PDF for the two cases in Fig. 3. The sensitivity of the response mean with respect to the CDF value F_j can be computed for random variable X_i as

∂μ_g/∂F_j = E[g(x) κ_{F_j}(x_i)] = ∫ g(x) κ_{F_j}(x_i) f_X(x) dx    (7)

with κ_{F_j}(x_i) from Eq. (6), i.e., the integrand is g(x)/(F_j - F_{j-1}) for x_{j-1} < x_i < x_j, -g(x)/(F_{j+1} - F_j) for x_j < x_i < x_{j+1}, and 0 otherwise, and estimated using sampling as

∂μ_g/∂F_j ≈ (1/N) Σ (k = 1..N) g(x_k) κ_{F_j}(x_{i,k})    (8)
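A minimal sampling sketch of Eqs. (6) and (8): each existing sample contributes g(x_k) weighted by the constant kernel value of the region it falls in, so no additional model evaluations are needed. The response g(x) = x and the discretization are illustrative, not from the paper:

```python
import math
import random

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_sensitivity(g_vals, xi_vals, x_pts, j, cdf=normal_cdf):
    """Sampling estimate of d(mu_g)/d(F_j), Eq. (8): reweight existing
    response samples by the localized kernel of Eq. (6)."""
    F_prev, F_j, F_next = cdf(x_pts[j - 1]), cdf(x_pts[j]), cdf(x_pts[j + 1])
    total = 0.0
    for g, xi in zip(g_vals, xi_vals):
        if x_pts[j - 1] < xi < x_pts[j]:
            total += g / (F_j - F_prev)      # positive kernel lobe
        elif x_pts[j] < xi < x_pts[j + 1]:
            total -= g / (F_next - F_j)      # negative kernel lobe
    return total / len(g_vals)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
gs = xs                                      # illustrative response g(x) = x
# Sensitivity of the mean to a CDF perturbation at x_j = 0, neighbors at -1, 1
s = mean_sensitivity(gs, xs, [-1.0, 0.0, 1.0], 1)
```

Raising F(0) moves probability mass toward smaller x, so the estimate is negative; direct integration of g(x) κ(x) against the standard normal PDF gives approximately -0.92 for this choice of g.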


where x_k denotes the kth realization of the random variables and N is the number of sampling points. Eq. (8) is repeated for each random variable by modifying F_j, i.e., F_j = F_i(x_j). The equations for the localized sensitivities of the standard deviation of the response and the probability-of-failure can also be formulated in terms of an expected value integral and estimated using sampling as follows [16,17]:

∂σ_g/∂F_j = ( E[g(x)² κ_{F_j}(x_i)] - 2 μ_g E[g(x) κ_{F_j}(x_i)] ) / (2 σ_g)    (9)

∂P_f/∂F_j = E[I(x) κ_{F_j}(x_i)]    (10)

where I(x) denotes the indicator function, defined as I(x) = 1 if g(x) ≤ 0 and I(x) = 0 otherwise.

3. Properties of kernel functions

Previous publications using the Score Function method have identified that the kernel functions must satisfy certain conditions; most prominently, the expected value of the kernel function must equal zero, E[κ(x)] = 0 [11,17]. This property is necessary to ensure, for example, that if the limit state is a constant, the sensitivity with respect to any parameter is zero, or, if the probability-of-failure is equal to 1, the sensitivity of the probability-of-failure with respect to any parameter is zero. Using the kernel functions from Eq. (6), the expected value of the kernel function is

E[κ(x)] = ∫ { 1/(F_j - F_{j-1}),  x_{j-1} < x < x_j;  -1/(F_{j+1} - F_j),  x_j < x < x_{j+1};  0, otherwise } f_X(x) dx    (11)

Since all random variables besides X_i integrate out,

E[κ(x)] = ∫ (x_{j-1} to x_j) [1/(F_j - F_{j-1})] f_X(x) dx - ∫ (x_j to x_{j+1}) [1/(F_{j+1} - F_j)] f_X(x) dx = 1 - 1 = 0    (12)

and Eq. (12) is satisfied for any distribution.

4. Relaxation of the linearized CDF requirement

The development of the kernel function with local support required the use of a non-parametric distribution; the simplest case, a linear CDF, was chosen. However, in a probabilistic analysis one wants to use the actual parametric CDF (normal, lognormal, Weibull, exponential, etc.). It is shown heuristically through numerical examples that the kernel function developed with respect to a linear CDF can be applied with samples from the parametric CDF to produce accurate localized sensitivities. It is clear from Eq. (12) that the requirement E[κ(x)] = 0 is satisfied for any distribution as long as the CDF values at F_j are used to construct the kernel functions.

5. Variance estimates

If the localized sensitivity is estimated using sampling, the sensitivity results will depend on the number of samples used. As a result, variance estimates of the sensitivities are useful. The calculation of the variance is straightforward using the equation derived by Millwater and Osborn [14]:

V(∂L/∂θ) ≈ (1/N²) Σ (k = 1..N) (L(x_k) κ(x_k))² - (∂L/∂θ)²/N    (13)

where x_k represents a realization of the vector X and N represents the number of sampling points. Given the variance estimated from Eq. (13), the standard deviation, coefficient of variation (COV), and the confidence bounds can be computed. Numerical experiments indicate that the sensitivity estimates follow a normal distribution.

6. Numerical examples

Several numerical examples are presented to demonstrate the method. The localized sensitivities are formulated as an expected value integral; thus, any suitable numerical method can be used, such as numerical integration quadrature, the First Order Reliability Method, importance sampling, conditional expectation, or Monte Carlo sampling. Monte Carlo sampling and numerical integration quadrature estimates are given below. The formulation was originally developed using the linearized CDF, then generalized to a parametric form, e.g., a normal distribution. Therefore, results are given for the linearized CDF using sampling (finite difference and kernel-based) and the parametric CDFs (sampling and numerical integration quadrature).

6.1. Academic example

The problem consists of two random variables with the limit state

g(x) = x1³ - 15 x1 + x2³ - 1    (14)

with X1 ~ N[0,1] and X2 ~ N[2,1]. Each random variable CDF was discretized at 9 points spanning ±4 standard deviations, and the size of the kernel function was ±1 standard deviation. Thus, the discretization points were X1,j = {-4,-3,-2,-1,0,1,2,3,4} and X2,j = {-2,-1,0,1,2,3,4,5,6}. As a result, sensitivity estimates are provided over the range of ±3 standard deviations.

Table 1. Academic example: localized sensitivity results ∂μ_g/∂F_j: finite difference, normal CDF and linearized CDF based on 100,000 samples.

            Linearized CDF       Linearized CDF    Parametric CDF    Parametric CDF
CDF point   finite difference    kernel-based      kernel-based      kernel-based
            (num. integration)   (sampling)        (sampling)        (num. integration)
X1
-3          -12.50               -15.68            -12.12            -8.12
-2            2.50                 1.78              3.34             4.09
-1           11.50                11.36             10.99            11.10
 0           14.50                14.58             13.44            13.37
 1           11.50                11.29             10.91            11.10
 2            2.50                 2.28              3.91             4.09
 3          -12.50                -8.80             -5.73            -8.12
X2
-1           -3.47                -5.38             -4.07            -2.10
 0           -0.50                -0.65             -0.60            -0.46
 1           -3.50                -3.44             -3.59            -3.66
 2          -12.50               -12.61            -11.57           -11.46
 3          -27.50               -27.57            -24.07           -23.99
 4          -48.50               -47.84            -41.26           -41.72
 5          -75.50               -77.95            -64.68           -65.16
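The Table 1 entry for X1 at the mean can be reproduced with a short script that also forms the Eq. (13) variance estimate from the same samples. The seed and sample count are illustrative; with the parametric (normal) samples the estimate should land near the Table 1 value of approximately 13.4:

```python
import math
import random

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(1)
N = 100_000
x1 = [random.gauss(0.0, 1.0) for _ in range(N)]  # X1 ~ N[0,1]
x2 = [random.gauss(2.0, 1.0) for _ in range(N)]  # X2 ~ N[2,1]

# Localized kernel for X1 at x_j = 0 with neighbors at -1 and 1 (Eq. (6))
Fm, F0, Fp = normal_cdf(-1.0), normal_cdf(0.0), normal_cdf(1.0)
def kernel(x):
    if -1.0 < x < 0.0:
        return 1.0 / (F0 - Fm)
    if 0.0 < x < 1.0:
        return -1.0 / (Fp - F0)
    return 0.0

# Eq. (14) response times the kernel, per sample
summands = [(a**3 - 15*a + b**3 - 1) * kernel(a) for a, b in zip(x1, x2)]
sens = sum(summands) / N                                  # Eq. (8)
var = sum(s * s for s in summands) / N**2 - sens**2 / N   # Eq. (13)
cov = math.sqrt(var) / abs(sens)
```

The COV from Eq. (13) (on the order of 0.01 here, consistent with Table 2) indicates whether the estimate at a given discretization point can be trusted without rerunning the analysis.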


Fig. 5. Academic example: local sensitivities of ∂μ_g/∂F_j for X1.


6.1.1. Response mean sensitivities

Table 1 shows the results for the localized sensitivities of ∂μ_g/∂F_j with the linearized CDF and the normal parent CDFs using both numerical integration quadrature and Monte Carlo sampling with 100,000 samples. The results were computed in four ways: (a) finite difference using the linearized CDF with numerical integration quadrature, (b) the Score Function kernel-based method using the linearized CDF and sampling, (c) the Score Function kernel-based method using the parametric normal CDF and sampling, and (d) the Score Function kernel-based method using the parametric normal CDF and numerical integration quadrature. Several conclusions were obtained from the numerical results: (a) the Score Function kernel-based and the finite difference results for the linearized CDF were consistent, thereby verifying the accuracy of the formulation (compare columns 3 and 4 of Table 1); (b) the numerical results using sampling and numerical integration with the parametric normal CDFs were consistent (compare columns 5 and 6); (c) the sensitivities obtained using the original parent distribution (normal in this case) and the linearized CDF were consistent, thereby justifying (at least in this case) the use of samples from the parent distribution for all statistical calculations including the localized sensitivities (compare columns 4 and 5).

Fig. 5 shows a comparison plot of the normalized sensitivities (normalized using the largest absolute sensitivity value) among all four methods for X1, and Fig. 6 shows the corresponding results for X2. It is interesting that for X1 the sensitivity is largest at the mean value, whereas for X2 the sensitivity is largest in the right tail. The numerical results from Table 1 indicate that the sensitivity of the response mean to the right tail of X2 (∂μ_g/∂F_j ≈ -65) is larger in magnitude than the sensitivity with respect to the mean of X1 (∂μ_g/∂F_j ≈ 13), and is of opposite sign. Hence the mean response is most sensitive to variations in the right tail of X2.

The net effect of the results listed in Table 1 is to indicate that the parametric kernel-based approach provides accurate localized sensitivities. This method is computationally efficient in that the samples used to compute the probabilistic response can be reused to compute the sensitivities. Table 2 shows the standard deviation estimates of the sampling-based sensitivities computed using Eq. (13) from a single analysis for X1 and X2, compared with the empirical results obtained from 100 reanalyses. That is, the empirical results were obtained by rerunning the sensitivity analysis 100 times using 100,000 samples each time but with a different sequence of random numbers; the standard deviation was then obtained from


Fig. 6. Academic example: local sensitivities of ∂μ_g/∂F_j for X2.

Table 2. Academic example: variance estimates for ∂μ_g/∂F_j based on 100,000 samples.

            Linearized CDF                            Parametric CDF
CDF point   Std. dev.     Std. dev.     COV           Std. dev.     Std. dev.     COV
            (reanalysis,  (Eq. (13),    (Eq. (13))    (reanalysis,  (Eq. (13),    (Eq. (13))
            100 runs)     1 analysis)                 100 runs)     1 analysis)
X1
-3          2.665         2.171         0.138         3.387         2.991         0.247
-2          0.9242        0.9409        0.529         0.9167        0.9382        0.281
-1          0.3577        0.3603        0.032         0.3408        0.3282        0.030
 0          0.2128        0.1911        0.013         0.1854        0.1715        0.013
 1          0.1910        0.2049        0.018         0.1907        0.1834        0.017
 2          0.5380        0.4871        0.214         0.4539        0.4361        0.112
 3          1.601         1.762         0.200         1.628         1.722         0.301
X2
-1          1.130         1.253         0.233         1.074         1.023         0.251
 0          0.2766        0.2986        0.459         0.2591        0.2870        0.478
 1          0.1303        0.1311        0.038         0.1266        0.1253        0.035
 2          0.1164        0.1264        0.010         0.1189        0.1223        0.011
 3          0.4163        0.4002        0.015         0.3621        0.3639        0.015
 4          1.973         2.075         0.043         1.780         1.813         0.044
 5          15.81         14.56         0.187         11.03         11.74         0.182

the 100 reanalyses. The results clearly indicate that Eq. (13) does a good job of predicting the standard deviation and, hence, can be used to estimate the confidence bounds of the sensitivities from a single analysis. The coefficient of variation (COV = standard deviation/mean) results are also shown in Table 2. The COV results provide a good indication of whether the sensitivity results can be trusted. The standard deviation and COV results shown in Table 2 are largest in the tails of the distribution, as expected, since the number of samples that fall within a sensitivity bin in the tails is significantly smaller than for the central region. Thus, the number of samples to use depends in part on how far into the tails one wishes to compute the sensitivities and on the desired confidence in the estimated sensitivities.

6.1.2. Response standard deviation sensitivities

Table 3 shows the results for the localized sensitivities of ∂σ_g/∂F_j using the same four methods used to estimate ∂μ_g/∂F_j. As before, the results using all four methods were consistent and close. Fig. 7 shows a comparison plot of the normalized sensitivities among all four methods for X1 and Fig. 8 shows the corresponding


Table 3. Academic example: local sensitivity results for ∂σ_g/∂F_j: finite difference, normal CDF and linearized CDF based on 100,000 samples.

            Linearized CDF       Linearized CDF    Parametric CDF    Parametric CDF
CDF point   finite difference    kernel-based      kernel-based      kernel-based
            (num. integration)   (sampling)        (sampling)        (num. integration)
X1
-3           -6.36                -7.19             -9.27             -8.23
-2            3.69                 2.05              2.13              3.67
-1            6.13                 6.15              5.73              5.80
 0            0.0017               0.01              0.20              0.17
 1           -6.12                -6.21             -5.87             -5.89
 2           -3.69                -2.06             -2.50             -4.05
 3            6.36                 7.06             10.28              8.96
X2
-1            2.62                 4.35              3.29              1.53
 0            0.32                 0.26              0.21              0.30
 1            1.85                 1.80              1.88              1.93
 2            2.09                 2.18              1.77              1.72
 3          -20.18               -19.59            -17.01            -16.72
 4         -187.01              -111.93            -92.22            -93.46
 5         -234.46              -397.08           -315.61           -308.76

results for X2. For X1, the sensitivity is largest at ±1 and ±3, whereas for X2, the sensitivity is largest in the far right tail. Numerical values indicate that the sensitivity of the response standard deviation to the right tail of X2 (∂σ_g/∂F_j ≈ -308) is much higher in magnitude than the maximum sensitivity for X1 (∂σ_g/∂F_j ≈ 6). Table 4 shows the standard deviation estimates from 100 reruns versus the results from a single analysis using Eq. (13), again verifying that Eq. (13) provides a good estimate.

Fig. 7. Academic example: local sensitivities of ∂σ_g/∂F_j for X1.

Fig. 8. Academic example: local sensitivities of ∂σ_g/∂F_j for X2.

6.1.3. Probability-of-failure sensitivities

Table 5 shows the results for the localized sensitivities of ∂P_f/∂F_j with failure defined as g(x1, x2) ≤ 0. As before, the results from all four methods are consistent and close. Fig. 9 shows a plot of the normalized results for X1 and Fig. 10 shows the corresponding results for X2. The results indicate that the sensitivities are zero in the left tail for X1 until a sudden jump at the mean, where the sensitivity is maximum. For X2 the sensitivities are largest at the mean and zero in the right tail. Table 6 shows the standard deviation results that verify the accuracy of Eq. (13). The explanation for the X1 sensitivities can be observed in Fig. 11, which shows a plot of the joint PDF in the failure domain, and Fig. 12, which shows the joint PDF in the failure domain times a kernel function for X1 with range {-1,0,1}. Clearly, a localized disturbance for X1 with a range entirely within the region X1 < 0 will yield zero sensitivity, as the indicator function is zero in that region. Once the kernel function is applied at X1 = 0, a negative sensitivity results, since the positive portion of the kernel function lies in the safe domain; only the negative portion lies in the failure domain (see Fig. 12). Similarly, from Fig. 11 and Eq. (14), the limit state is positive for X2 ≥ 3 independent of X1; hence the sensitivities for X2 ≥ 3 are zero.

6.2. Fracture mechanics example

A fracture mechanics fatigue example is demonstrated. The problem consists of an edge crack in a semi-infinite plate subject to constant amplitude loading. The edge crack grows in mode I until the fracture toughness is exceeded. The cycles-to-failure are computed assuming the Paris crack growth law of the form

da/dN = C (ΔK)^m    (15)

where a is the crack size, C and m are the Paris constants, N denotes cycles, and ΔK = 1.12 Δσ √(πa). The critical crack size is given by the relationship K_I(a_c) = K_IC, where K_IC denotes the fracture toughness. The response is defined as the cycles-to-failure. The parameter values are given in Table 7 and are representative of titanium. The problem contains three random variables: initial crack size (a_i), lognormal; fracture toughness (K_IC), normal; and log10(C) of the Paris constant C, normal, as shown in the table.
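Because the geometry factor and Paris exponent are constant, Eq. (15) integrates in closed form, making each Monte Carlo evaluation of the cycles-to-failure response cheap. A sketch using the Table 7 values for Δσ, m, K_IC, and log10(C); the initial crack size (in meters) is an illustrative value, not taken from the paper:

```python
import math

def cycles_to_failure(a_i, K_IC, log10C, d_sigma=675.0, m=3.81):
    """Closed-form integration of the Paris law, Eq. (15):
    da/dN = C * (1.12 * d_sigma * sqrt(pi * a))**m, grown from a_i to the
    critical size a_c where 1.12 * d_sigma * sqrt(pi * a_c) = K_IC.
    Units assumed: stress in MPa, crack size in m, K in MPa*sqrt(m)."""
    C = 10.0 ** log10C
    a_c = (K_IC / (1.12 * d_sigma)) ** 2 / math.pi   # critical crack size
    p = 1.0 - m / 2.0             # exponent left after integrating a**(-m/2)
    geo = 1.12 * d_sigma * math.sqrt(math.pi)
    return (a_c ** p - a_i ** p) / (p * C * geo ** m)

# Mean-value inputs: K_IC = 55 MPa*sqrt(m), log10(C) = -11.8 (Table 7);
# a_i = 1e-4 m is an illustrative initial crack size.
n_f = cycles_to_failure(1e-4, 55.0, -11.8)
```

A larger initial crack or lower toughness reduces the computed life, as expected; sampling a_i, K_IC, and log10(C) from their distributions and calling this function produces the response samples that the localized sensitivity method reuses.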

6.2.1. Response mean sensitivities

Table 8 shows the sensitivity results for ∂μ_g/∂F_j for the three random variables using the data given in Table 7. The sensitivities were computed four ways: (a) finite difference using a linearized CDF and sampling, (b) the Score Function kernel-based approach using a linearized CDF and sampling, (c) the Score Function kernel-based approach using the parametric distributions (normal and lognormal) and sampling, and (d) the Score Function kernel-based approach using the parametric CDFs and numerical integration. Fig. 13 shows the comparison between finite difference and the kernel-based approach for each random variable. It is clear from the table and figures that the results are in good agreement for a_i and C but significantly in error for K_IC. For both a_i and C, the left tail is dominant with the effect slightly


Table 4. Academic example: variance estimates for ∂σ_g/∂F_j based on 100,000 samples.

            Linearized CDF                            Parametric CDF
CDF point   Std. dev.     Std. dev.     COV           Std. dev.     Std. dev.     COV
            (reanalysis,  (Eq. (13),    (Eq. (13))    (reanalysis,  (Eq. (13),    (Eq. (13))
            100 runs)     1 analysis)                 100 runs)     1 analysis)
X1
-3          3.170         2.171         0.234         3.345         2.99          0.363
-2          1.024         0.9409        0.442         1.014         0.9382        0.256
-1          0.4051        0.3616        0.063         0.3640        0.3294        0.057
 0          0.2527        0.1965        0.982         0.2212        0.1767        1.040
 1          0.2332        0.2075        0.035         0.2006        0.1857        0.032
 2          0.5808        0.4870        0.195         0.5886        0.4361        0.108
 3          2.036         1.762         0.171         1.561         1.722         0.192
X2
-1          1.095         1.253         0.288         1.554         1.023         0.311
 0          0.2234        0.2986        1.148         0.2857        0.2870        1.367
 1          0.0985        0.1314        0.073         0.1265        0.1257        0.067
 2          0.0472        0.1322        0.061         0.0771        0.1276        0.072
 3          0.2459        0.4053        0.021         0.1975        0.3676        0.022
 4          2.723         2.048         0.018         2.523         1.793         0.019
 5          49.89         14.52         0.037         33.52         11.71         0.037

Table 5. Academic example: local sensitivity results for ∂P_f/∂F_j: finite difference, normal CDF and linearized CDF based on 100,000 samples.

            Linearized CDF       Linearized CDF    Parametric CDF    Parametric CDF
CDF point   finite difference    kernel-based      kernel-based      kernel-based
            (num. integration)   (sampling)        (sampling)        (num. integration)
X1
-3           4.04E-02             1.48E-02          6.94E-03          2.94E-03
-2           0.00E+00             0.00E+00          4.67E-04          0.00E+00
-1          -8.40E-03            -8.99E-03         -9.08E-03         -3.11E-04
 0          -4.74E-01            -4.76E-01         -4.58E-01         -4.62E-01
 1          -2.55E-01            -2.47E-01         -2.69E-01         -2.79E-01
 2          -3.82E-02            -4.26E-02         -5.59E-02          1.36E-05
 3           2.85E-01             3.45E-01          2.22E-01          1.47E-01
X2
-1           1.46E-01             1.70E-01          1.61E-01          5.52E-02
 0           1.56E-02             9.18E-03          1.04E-02          1.23E-02
 1           8.08E-02             8.16E-02          9.78E-02          9.66E-02
 2           2.78E-01             2.77E-01          2.60E-01          2.61E-01
 3           1.59E-01             1.58E-01          1.60E-01          1.60E-01
 4           0.00E+00             0.00E+00          0.00E+00          0.00E+00
 5           0.00E+00             0.00E+00          0.00E+00          0.00E+00

Fig. 9. Academic example: local sensitivities of ∂P_f/∂F_j for X1.

−1

0

1

2 x2

3

4

5

Fig. 10. Academic example: local sensitivities of @Pf/@Fj for X2.

10

H. Millwater et al. / Reliability Engineering and System Safety 107 (2012) 3–15

Table 6
Academic example: variance estimates for ∂Pf/∂Fj based on 100,000 samples.

                 Linearized CDF                                Parametric CDF
CDF      Std. dev.     Std. dev.     COV           Std. dev.     Std. dev.     COV
point    (reanalysis,  (Eq. (13),    (Eq. (13))    (reanalysis,  (Eq. (13),    (Eq. (13))
         100 runs)     1 analysis)                 100 runs)     1 analysis)

X1
-3       0.0182        0.0166        1.119         0.0128        0.0196        2.824
-2       0.0000        0.0000        –             0.0000        0.0000        0.0000
-1       0.0005        0.0005        0.0585        0.0004        0.0005        0.0554
 0       0.0036        0.0035        0.0073        0.0033        0.0034        0.0075
 1       0.0084        0.0083        0.0335        0.0078        0.0082        0.0304
 2       0.0203        0.0201        0.4713        0.0220        0.0205        0.3668
 3       0.0630        0.0575        0.1668        0.0746        0.0747        0.3365

X2
-1       0.0612        0.0735        0.4326        0.0637        0.0580        0.3604
 0       0.0168        0.0170        1.849         0.0159        0.0166        1.597
 1       0.0080        0.0071        0.0875        0.0068        0.0071        0.0721
 2       0.0045        0.0041        0.0148        0.0040        0.0040        0.0155
 3       0.0023        0.0021        0.0133        0.0019        0.0021        0.0131
 4       0.0000        0.0000        –             0.0000        0.0000        –
 5       0.0000        0.0000        –             0.0000        0.0000        –

Table 7
Fracture mechanics example.

Variable       Value
ΔK             1.12 Δσ √(πa)
Δσ (= σmax)    675 MPa
ai             L[15.1, 8.48] μm
KIC            N[55, 5.5] MPa·m^1/2
log10(C)       N[-11.8, 0.157]
m              3.81

Fig. 11. Plot of joint PDF in failure region.

Fig. 12. Plot of kernel function times joint PDF in failure region with a perturbation on X1 at {-1, 0, 1}.
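The cycles-to-failure response implied by Table 7 can be obtained by integrating the Paris law da/dN = C·ΔK^m in closed form from the initial crack size to the critical crack size at which ΔK reaches KIC. The sketch below is an illustrative Monte Carlo reconstruction, not the paper's implementation; in particular, the lognormal parameterization of ai (mean 15.1 μm, standard deviation 8.48 μm read off Table 7) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Random inputs per Table 7 (a_i parameterization is an assumption:
# lognormal with mean 15.1 um and standard deviation 8.48 um).
mean_a, std_a = 15.1e-6, 8.48e-6
sig2 = np.log(1.0 + (std_a / mean_a) ** 2)
a_i = rng.lognormal(np.log(mean_a) - 0.5 * sig2, np.sqrt(sig2), n)
K_ic = rng.normal(55.0, 5.5, n)              # fracture toughness, MPa*sqrt(m)
C = 10.0 ** rng.normal(-11.8, 0.157, n)      # Paris constant
m, dsig = 3.81, 675.0                        # Paris exponent, stress range (MPa)

# Critical crack size: Delta K = 1.12 * dsig * sqrt(pi * a) reaches K_IC
a_c = (K_ic / (1.12 * dsig)) ** 2 / np.pi

# Closed-form Paris-law integration from a_i to a_c (valid for m != 2):
# N_f = (a_i**p - a_c**p) / ((m/2 - 1) * C * (1.12*dsig*sqrt(pi))**m), p = 1 - m/2
p = 1.0 - m / 2.0
N_f = (a_i ** p - a_c ** p) / ((m / 2.0 - 1.0) * C * (1.12 * dsig * np.sqrt(np.pi)) ** m)

# Probabilistic responses whose sensitivities the tables report
mu_g, sigma_g = N_f.mean(), N_f.std(ddof=1)
P_f = np.mean(N_f < 3000.0)                  # failure defined as g(x) = N_f - 3000 < 0
```

The same sample set {a_i, C, K_ic, N_f} is what the Score Function approach reuses to form the localized sensitivities at no additional model cost.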

Table 8
Fracture mechanics example: localized sensitivity results for ∂μg/∂Fj: finite difference, normal CDF and linearized CDF based on 100,000 samples.

CDF      Linearized CDF,     Linearized CDF,   Parametric CDF,   Parametric CDF,
point    finite difference   kernel-based      kernel-based      kernel-based
         (sampling)          (sampling)        (sampling)        (numerical integ.)

Initial crack size
2.7E-6    48,780              47,811            42,973            40,100
4.6E-6    26,822              27,004            22,828            22,800
7.1E-6    19,326              19,627            18,191            17,900
1.3E-5    12,509              12,352            11,401            11,600
2.2E-5     7058                7034              6798              6810
3.8E-5     3538                3610              4202              4100
5.0E-5     1387                2246              2781              2130

Paris constant
-12.27    28,678              32,522            29,819            23,093
-12.11    18,459              18,475            16,349            16,254
-11.96    12,749              13,113            11,812            11,521
-11.80     8922                8705              7997              8237
-11.64     6261                6171              5893              5933
-11.48     4404                4103              4026              4285
-11.33     1502                3642              3918              3090

Fracture toughness
38.50     -134.5              -4079             -4047             -127.0
44.00     -103.6               714.7             737.4             -88.9
49.50      -73.3               332.4             344.9             -65.0
55.00      -54.1              -346.8            -351.8             -49.3
60.50      -41.4               -69.6             -40.5             -38.7
66.00      -32.4               818.6             767.4             -31.1
71.50      -12.6             -4320.6           -3269.0             -25.4

stronger for ai than C, that is, for both random variables, characterization of the distribution in the left tail is most important. Table 9 shows the standard deviation and coefficient of variation (COV) results from reanalysis based on 100 runs compared to Eq. (13) evaluated from a single analysis. It is clear that the variance estimates for ai and C are sufficiently small whereas the variance estimate for KIC is too large for the sensitivity estimate to be useful since the COV values for KIC vary from 0.65 to 4.7. The standard deviation results indicate that the number of samples needs to be increased by approximately 100 times in order to capture the sensitivities for KIC. This situation arises because KIC is not an important random variable; hence its sensitivity variance from sampling is large. For this reason, it is recommended that other sensitivity methods such as global sensitivity analysis [6] be used to determine the important random variables first, and then the localized sensitivity method applied to the important random variables.
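Eq. (13) itself is not reproduced in this excerpt, but the single-analysis quality check it enables amounts to forming the sensitivity as a sample mean of per-sample scores and reporting the standard error, and hence the COV, of that mean. A generic sketch of that check, using a hypothetical response g(X) = exp(X) with X standard normal and a step kernel centered at x = 0 with neighbors at ±1 (all illustrative choices, not the paper's example):

```python
import numpy as np
from math import erf, sqrt

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(1)
n = 100_000
x = rng.standard_normal(n)
g = np.exp(x)                          # hypothetical response function

# Step kernel for discretization point x_j = 0, neighbors x_lo = -1, x_hi = +1:
# +1/dF_left on (x_lo, x_j], -1/dF_right on (x_j, x_hi], zero elsewhere
x_lo, x_j, x_hi = -1.0, 0.0, 1.0
dF_l, dF_r = Phi(x_j) - Phi(x_lo), Phi(x_hi) - Phi(x_j)
k = np.where((x > x_lo) & (x <= x_j), 1.0 / dF_l,
             np.where((x > x_j) & (x <= x_hi), -1.0 / dF_r, 0.0))

# Sensitivity estimate as a sample mean, with its standard error and COV
scores = g * k
est = scores.mean()
se = scores.std(ddof=1) / np.sqrt(n)   # single-run standard-error estimate
cov = se / abs(est)                    # small COV => the estimate is trustworthy
```

A COV near 1 or larger, as seen for KIC, signals that the sample mean is dominated by noise and the sample size must grow (roughly) by the square of the desired COV reduction.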

6.2.2. Response standard deviation sensitivities

The results for ∂σg/∂Fj are given in Table 10, with variance results given in Table 11. Fig. 14 shows the normalized sensitivities. As before, accurate results were obtained for ai and C but not for KIC.

Fig. 13. Fracture mechanics example: normalized sensitivities for ∂μg/∂Fj. [Figure: three panels of normalized sensitivities vs. initial crack size ai (m), Paris constant log10 C, and fracture toughness KIC (MPa·m^1/2); curves for finite difference, Score fun. (linearized CDF), Score fun. (standard CDF), and Score fun. (numerical int.).]

Table 9
Fracture mechanics example: variance estimates for ∂μg/∂Fj based on 100,000 samples.

                 Linearized CDF                                Parametric CDF
CDF      Std. dev.     Std. dev.     COV           Std. dev.     Std. dev.     COV
point    (reanalysis,  (Eq. (13),    (Eq. (13))    (reanalysis,  (Eq. (13),    (Eq. (13))
         100 runs)     1 analysis)                 100 runs)     1 analysis)

Initial crack size
2.7E-6   8717          8888          0.1859        10,181        11,098        0.2583
4.6E-6   1855          1872          0.0693        1490          1710          0.0749
7.1E-6   519.6         477.9         0.0244        487.6         452.1         0.0249
1.3E-5   202.3         202.7         0.0164        170.8         201.1         0.0176
2.2E-5   151.1         149.7         0.0213        149.5         154.2         0.0227
3.8E-5   206.4         195.3         0.0541        223.0         210.6         0.0501
5.0E-5   171.2         163.2         0.0727        183.6         178.3         0.0641

Paris constant
-12.27   8081          9245          0.2843        7139          7441          0.2495
-12.11   1486          1547          0.0837        1357          1434          0.0877
-11.96   458.9         450.0         0.0343        440.2         433.2         0.0367
-11.80   246.4         221.5         0.0255        205.4         216.6         0.0271
-11.64   201.8         188.5         0.0306        192.5         192.4         0.0327
-11.48   306.2         280.3         0.0683        304.4         296.5         0.0737
-11.33   740.3         747.9         0.2054        796.7         847.3         0.2163

Fracture toughness
38.50    2595          2786          0.6832        2705          2815          0.6956
44.00    764.9         741.9         1.038         653.8         760.8         1.032
49.50    312.8         326.9         0.9836        310.7         316.4         0.9174
55.00    229.1         248.8         0.7176        232.6         240.6         0.6840
60.50    330.9         329.0         4.7274        305.5         317.0         7.827
66.00    691.8         773.4         0.9448        638.2         722.5         0.9415
71.50    3068          2839          0.6571        2447          2517          0.7700

Again, the left tails of ai and C most affect the standard deviation of the cycles-to-failure, with the effect slightly stronger for ai than for C.

6.2.3. Probability-of-failure sensitivities

The sensitivity results for ∂Pf/∂Fj, with failure defined as g(x) = Nf − 3000, are given in Table 12, with variance results given in Table 13. Fig. 15 shows the normalized sensitivities. From the table and figures, it is clear that the probability-of-failure is most sensitive to the extreme right tail of the initial crack size and the crack growth constant, with the effect slightly stronger for ai than for C. The variance results for KIC are too large for the sensitivities to be useful.
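The probability-of-failure sensitivities are formed the same way as the moment sensitivities, with the response replaced by the failure indicator. A toy sketch (the limit state g(x) = 2 − x with X standard normal, and a kernel point at x = 2 with neighbors at 1 and 3, are illustrative choices of ours, not the paper's example):

```python
import numpy as np
from math import erf, sqrt

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(2)
n = 200_000
x = rng.standard_normal(n)
fail = x > 2.0                       # indicator of g(x) = 2 - x < 0

# Step kernel centered at x_j = 2 with neighbors at 1 and 3
dF_l, dF_r = Phi(2.0) - Phi(1.0), Phi(3.0) - Phi(2.0)
k = np.where((x > 1.0) & (x <= 2.0), 1.0 / dF_l,
             np.where((x > 2.0) & (x <= 3.0), -1.0 / dF_r, 0.0))

# dPf/dF_j ~ E[1{failure} * kernel]; negative here, since raising the CDF
# at x = 2 shifts probability mass leftward, out of the failure region
dPf = np.mean(fail * k)
```

Note that only the samples landing in the kernel's support (here, 1 < x ≤ 3) contribute, which is why tail sensitivities need more samples than the Pf estimate itself.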

7. Discussion

The localized sensitivity method presented here has the ability to discern the regions of a random variable for which a


perturbation will have a large effect on a probabilistic response of a model such as mean or standard deviation of the response or the probability-of-failure. The recommended usage is to normalize the numerical results with respect to the largest value within a CDF to highlight the important region; however, the numerical

Table 10
Fracture mechanics example: local sensitivity results for ∂σg/∂Fj: finite difference, normal CDF and linearized CDF based on 100,000 samples.

CDF      Linearized CDF,     Linearized CDF,   Parametric CDF,   Parametric CDF,
point    finite difference   kernel-based      kernel-based      kernel-based
         (sampling)          (sampling)        (sampling)        (numerical integ.)

Initial crack size
2.7E-6   240,251             203,510           175,285           175,000
4.6E-6    71,336              72,927            59,343            54,200
7.1E-6    21,331              21,698            19,730            18,300
1.3E-5     2078                2093              1869              1430
2.2E-5    -3264               -3225             -3193             -3350
3.8E-5    -2771               -2846             -3534             -3450
5.0E-5    -1326               -2260             -3171             -2260

Paris constant
-12.27   105,966              99,811            88,284            77,080
-12.11    39,413              39,419            32,612            31,764
-11.96    14,137              14,389            12,386            11,249
-11.80     3001                3227              2809              2294
-11.64    -1167               -1175             -1154             -1330
-11.48    -2434               -2093             -2126             -2520
-11.33    -1120               -3004             -3472             -2629

Fracture toughness
38.50     -32.3               1281              1698              -20.3
44.00     -23.5              -943.9            -840.6             -14.6
49.50     -16.3                16.7             -30.7             -10.9
55.00     -12.4               237.2             202.3              -8.9
60.50     -10.1              -717.3            -560.6              -7.2
66.00      -7.3               556.4             311.7              -5.8
71.50      -3.3              -2170             -2086               -5.0

values can also be used to assess the strength of the sensitivity across variables.

The kernel function developed is a step function with local support between the discretization points on either side of the point of interest. The function is constant between discretization points and changes sign: positive in the left half-interval and negative in the right, as shown in Fig. 3. Its value depends on the inverse of the difference in CDF values between neighboring discretization points. The kernel function was shown to satisfy the essential condition that its expected value be zero; thus, for example, in regions where the response is not changing, the sensitivity will be zero. This condition holds independent of the distribution type.

The development of the kernel function assumed a linearized CDF, but it was shown that, when applied, samples from either the linearized CDF or the parametric CDF gave consistent results in the sense that the normalized sensitivities were in agreement. Thus, the same samples used to compute the response of the model can be reused to compute the sensitivities.

The discretization points are arbitrary and user-defined. The numerical examples used an even distribution in X; however, it should be clear that a different discretization can be analyzed without additional sampling; that is, multiple passes can be invoked to provide a more focused look at any particular region. A finer discretization, however, may require additional samples in order to obtain results with a sufficiently small variance.

The method will clearly require a larger number of samples than needed to compute the moments or the probability-of-failure of the limit state, since only the samples that fall within the non-zero portion of the kernel function are used. This is a drawback of the method.
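The step kernel just described, its zero-expected-value property, and equal-probability discretization points via the inverse CDF can be sketched as follows (the function names are ours, and the standard normal distribution is only an example; the property holds for any distribution):

```python
import numpy as np
from math import erf, sqrt
from statistics import NormalDist

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def step_kernel(x, x_lo, x_j, x_hi, cdf):
    """Step kernel for discretization point x_j: constant on each
    half-interval, sign change at x_j, scaled by inverse CDF increments."""
    dF_l = cdf(x_j) - cdf(x_lo)
    dF_r = cdf(x_hi) - cdf(x_j)
    return np.where((x > x_lo) & (x <= x_j), 1.0 / dF_l,
                    np.where((x > x_j) & (x <= x_hi), -1.0 / dF_r, 0.0))

rng = np.random.default_rng(3)
x = rng.standard_normal(200_000)

# Zero mean: E[kernel] = dF_l*(1/dF_l) - dF_r*(1/dF_r) = 0 for any CDF,
# so the sample mean of k should be near zero
k = step_kernel(x, -1.0, 0.0, 1.0, Phi)

# Near-equal-probability discretization points (uniform in F, not in x)
# obtained from the inverse CDF, one way to even out the estimator variance
pts = [NormalDist().inv_cdf(p) for p in np.linspace(0.1, 0.9, 9)]
```

Because the kernel takes only three values (one positive constant, one negative constant, and zero), evaluating it on existing samples is essentially free.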
However, a variance estimate was derived and verified that can provide the analyst an estimate of the quality of the sensitivity obtained from a single analysis and provide guidance on how many samples are required. In addition, the discretization points can be defined to provide near-equal

Table 11
Fracture mechanics example: variance estimates for ∂σg/∂Fj based on 100,000 samples.

                 Linearized CDF                                Parametric CDF
CDF      Std. dev.     Std. dev.     COV           Std. dev.     Std. dev.     COV
point    (reanalysis,  (Eq. (13),    (Eq. (13))    (reanalysis,  (Eq. (13),    (Eq. (13))
         100 runs)     1 analysis)                 100 runs)     1 analysis)

Initial crack size
2.7E-6   21,939        8886          0.0437        35,503        11,076        0.0632
4.6E-6    3318         1860          0.0255         2556          1701         0.0287
7.1E-6     312         477.6         0.0220         272.7         452.2        0.0229
1.3E-5     123         205.7         0.0983         132.2         203.7        0.1090
2.2E-5     143         151.1         0.0469         149.4         155.4        0.0487
3.8E-5     227         195.5         0.0687         269.7         210.7        0.0596
6.3E-5     197         162.5         0.0719         225.8         177.4        0.0559

Paris constant
-12.27   25,108        9239          0.0926        22,754        7438          0.0843
-12.11    2732         1543          0.0391         2478         1431          0.0439
-11.96    426.2        449.5         0.0312         428.6        433.0         0.0350
-11.80    160.4        223.2         0.0692         166.1        217.8         0.0775
-11.64    158.1        189.5         0.1613         159.1        193.3         0.1676
-11.48    282.9        280.5         0.1341         306.1        296.7         0.1396
-11.33    810.5        747.9         0.2490         943.7        847.3         0.2441

Fracture toughness
38.50    2642          2786          2.176         3132          2815          1.658
44.00    774.1         741.9         0.7861        750.0         760.8         0.9051
49.50    403.0         326.9         19.58         311.3         316.4         10.31
55.00    253.7         248.8         1.049         219.7         240.6         1.190
60.50    329.7         329.0         0.4587        356.9         317.0         0.5655
66.00    969.8         773.4         1.390         845.2         722.5         2.318
71.50    2956          2839          1.308         3524          2516          1.207

Fig. 14. Fracture mechanics example: normalized sensitivities for ∂σg/∂Fj. [Figure: three panels of normalized sensitivities vs. initial crack size ai (m), Paris constant log10 C, and fracture toughness KIC (MPa·m^1/2); curves for finite difference, Score fun. (linearized CDF), Score fun. (standard CDF), and Score fun. (numerical int.).]

Table 12
Fracture mechanics example: local sensitivity results for ∂Pf/∂Fj: finite difference, normal CDF and linearized CDF based on 100,000 samples.

CDF      Linearized CDF,     Linearized CDF,   Parametric CDF,   Parametric CDF,
point    finite difference   kernel-based      kernel-based      kernel-based
         (sampling)          (sampling)        (sampling)        (numerical integ.)

Initial crack size
2.7E-6    0.00E+00            0.00E+00          0.00E+00          0.00E+00
4.6E-6    0.00E+00            0.00E+00         -3.15E-04          0.00E+00
7.1E-6    0.00E+00            0.00E+00          1.84E-04          0.00E+00
1.3E-5    0.00E+00            0.00E+00         -1.58E-05          0.00E+00
2.2E-5   -1.19E-03           -1.25E-03         -5.89E-04         -2.15E-04
3.8E-5   -8.19E-03           -8.56E-03         -5.03E-03         -6.10E-03
6.3E-5   -3.62E-02           -6.23E-02         -2.20E-02         -6.72E-02

Paris constant
-12.27    0.00E+00            0.00E+00          0.00E+00          0.00E+00
-12.11    0.00E+00            0.00E+00          0.00E+00          0.00E+00
-11.96    0.00E+00            0.00E+00          0.00E+00         -1.94E-08
-11.80    0.00E+00            0.00E+00         -8.79E-05         -1.03E-04
-11.64    0.00E+00           -2.28E-03         -1.46E-03         -1.17E-03
-11.48   -7.16E-03           -1.13E-02         -5.93E-03         -8.29E-03
-11.33   -2.00E-02           -8.28E-02         -5.18E-02         -2.38E-02

Fracture toughness
38.50     0.00E+00           -1.87E-03         -1.87E-03          1.62E-05
44.00     0.00E+00            1.06E-03          1.50E-03          2.75E-05
49.50     0.00E+00            1.06E-04         -1.89E-04          5.30E-05
55.00     0.00E+00            2.93E-05          1.46E-04          1.34E-05
60.50     0.00E+00            8.52E-05          1.16E-04          4.19E-05
66.00     0.00E+00           -8.13E-04         -6.40E-04          2.13E-06
71.50     0.00E+00            1.40E-03          9.35E-04          7.68E-07

probability bins so that the variance is more uniform. A more sophisticated approach is to apply Latin hypercube sampling, quasi-Monte Carlo, or importance sampling methods in order to provide a more balanced allocation of samples into the tails of the distribution and thereby improve the variance of the sensitivity estimates in the tails.

Analysis of the fracture mechanics problem showed that the quality of the sensitivities for the fracture toughness was very


Table 13
Fracture mechanics example: variance estimates for ∂Pf/∂Fj based on 100,000 samples.

                 Linearized CDF                                Parametric CDF
CDF      Std. dev.     Std. dev.     COV           Std. dev.     Std. dev.     COV
point    (reanalysis,  (Eq. (13),    (Eq. (13))    (reanalysis,  (Eq. (13),    (Eq. (13))
         100 runs)     1 analysis)                 100 runs)     1 analysis)

Initial crack size
2.7E-6   0.00E+00      0.00E+00      –             1.28E-03      4.68E-04      –
4.6E-6   0.00E+00      0.00E+00      –             3.20E-04      4.73E-04      1.503
7.1E-6   0.00E+00      0.00E+00      –             1.49E-04      9.40E-05      0.5110
1.3E-5   0.00E+00      0.00E+00      –             1.06E-04      1.21E-04      7.6493
2.2E-5   3.29E-04      3.20E-04      0.2564        1.90E-04      2.21E-04      0.3759
3.8E-5   4.50E-03      4.03E-03      0.4711        2.33E-03      2.85E-03      0.5666
6.3E-5   4.47E-03      4.02E-03      0.0645        2.32E-03      2.84E-03      0.1292

Paris constant
-12.27   0.00E+00      0.00E+00      –             0.00E+00      0.00E+00      –
-12.11   0.00E+00      0.00E+00      –             1.45E-05      0.00E+00      –
-11.96   0.00E+00      0.00E+00      –             2.68E-05      0.00E+00      –
-11.80   0.00E+00      0.00E+00      –             6.94E-05      5.07E-05      0.5773
-11.64   5.14E-04      4.65E-04      0.2041        3.36E-04      3.56E-04      0.2442
-11.48   3.19E-03      2.84E-03      0.2515        2.21E-03      2.41E-03      0.4061
-11.33   2.88E-02      3.07E-02      0.3704        2.20E-02      2.47E-02      0.4765

Fracture toughness
38.50    3.11E-03      8.09E-04      0.4328        2.47E-03      6.61E-04      0.3534
44.00    8.73E-04      8.42E-04      0.7945        6.38E-04      6.81E-04      0.4540
49.50    3.49E-04      2.97E-04      2.806         2.42E-04      2.24E-04      1.186
55.00    2.51E-04      2.33E-04      7.936         1.62E-04      1.99E-04      1.361
60.50    3.07E-04      3.26E-04      3.828         2.29E-04      2.65E-04      2.288
66.00    7.38E-04      5.52E-04      0.6793        4.45E-04      2.33E-04      0.3635
71.50    2.74E-03      4.67E-04      0.3338        2.01E-03      0.00E+00      0.0000

Fig. 15. Fracture mechanics example: normalized sensitivity of ∂Pf/∂Fj. [Figure: three panels of normalized sensitivities vs. initial crack size ai (m), Paris constant log10 C, and fracture toughness KIC (MPa·m^1/2); curves for finite difference, Score fun. (linearized CDF), Score fun. (standard CDF), and Score fun. (numerical int.).]

poor, even for 100,000 samples. This occurs because KIC was not an important variable. For this reason, it is recommended that other sensitivity methods, such as global sensitivity analysis, be used to determine the important random variables first, and then the localized sensitivity method applied to the important random variables.
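One common choice for that first screening step is a variance-based (Sobol') first-order index [4–6], sketched here with a Saltelli-style pick-and-freeze estimator on a toy linear model (the model, sample size, and function names are illustrative, not taken from the paper):

```python
import numpy as np

def first_order_indices(model, d, n, rng):
    """Saltelli-style pick-and-freeze estimate of first-order Sobol' indices."""
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # replace only column i of A with B's
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy model g = 2*X1 + X2 with independent standard normals,
# for which S1 = 4/5 and S2 = 1/5 analytically
rng = np.random.default_rng(4)
S = first_order_indices(lambda X: 2 * X[:, 0] + X[:, 1], d=2, n=100_000, rng=rng)
```

Variables with negligible indices (the role KIC plays here) can then be excluded before the more sample-hungry localized analysis is run.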


8. Conclusion

A sensitivity method was developed and verified for independent random variables that provides the sensitivity of a probabilistic response (probability-of-failure, mean, or standard deviation of an arbitrary response function) to a localized perturbation in a random variable CDF. This information can be used to ascertain the region of a random variable that most affects the probabilistic response, e.g., the left tail, near-left tail, central region, near-right tail, or right tail.

The methodology is an extension of the Score Function method, whereby the sensitivities are formulated as an expected value integral. Evaluation of the integral using sampling is attractive in that the existing samples used to characterize the probabilistic response can be reused to compute the localized sensitivities; therefore, the sensitivities are obtained at negligible cost. There are no restrictions on the random variable distributions or the limit state formulations, and component or system problems can be considered.

Numerical examples show that accurate sensitivities can be computed using sampling, as compared with numerical integration or finite difference estimation, for the important variables. Results for variables of lesser importance are not accurate; therefore, global sensitivity methods should be used a priori to determine the important random variables.

References

[1] Frey HC, Patil SR. Identification and review of sensitivity analysis methods. Risk Analysis 2002;22(3):553–78.
[2] Hamby DM. A comparison of sensitivity analysis techniques. Health Physics 1995;68(2):195–204.
[3] Helton JC, Johnson JD, Sallaberry CJ, Storlie CB. Survey of sampling-based methods for uncertainty and sensitivity analysis. Reliability Engineering and System Safety 2006;91:1175–209.
[4] Sobol' IM. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation 2001;55(1–3):271–80.
[5] Homma T, Saltelli A. Importance measures in global sensitivity analysis of nonlinear models. Reliability Engineering and System Safety 1996;52:1–17.
[6] Saltelli A, Ratto M, Andres T, Campolongo F, Cariboni J, Gatelli D, Saisana M, Tarantola S. Global sensitivity analysis: the primer. John Wiley & Sons, Ltd.; 2008.
[7] Liu H, Chen W, Sudjianto A. Relative entropy based method for probabilistic sensitivity analysis in engineering design. ASME Journal of Mechanical Design 2006;128:326–36.
[8] Madsen HO, Krenk S, Lind NC. Methods of structural safety. Dover Publications; 2006.
[9] Madsen HO. Omission sensitivity factors. Structural Safety 1988;5(1):35–45.
[10] Kleijnen JPC, Rubinstein RY. Optimization and sensitivity analysis of computer simulation models by the Score Function method. European Journal of Operational Research 1996;88:413–27.
[11] Rubinstein RY, Shapiro A. Discrete event systems: sensitivity analysis and stochastic optimization by the Score Function method. Chichester, England: John Wiley & Sons; 1993.
[12] Karamchandani AK. New approaches to structural system reliability. PhD thesis. Department of Civil Engineering, Stanford University; 1990.
[13] Wu Y-T. Computational methods for efficient structural reliability and reliability sensitivity analysis. AIAA Journal 1994;32(8):1717–23.
[14] Millwater HR, Osborn RW. Probabilistic sensitivities for fatigue analysis of turbine engine disks. International Journal of Rotating Machinery 2006; Article ID 28487, doi:10.1155/IJRM/2006/28487.
[15] Wu Y-T, Mohanty S. Variable screening and ranking using sampling-based sensitivity measures. Reliability Engineering and System Safety 2006;91:634–47.
[16] Sues RH, Cesare MA. System reliability and sensitivity factors via the MPPSS method. Probabilistic Engineering Mechanics 2005;20(2):148–57.
[17] Millwater HR. Universal properties of kernel functions for probabilistic sensitivity analysis. Probabilistic Engineering Mechanics 2009;24:89–99, doi:10.1016/j.probengmech.2009.01.005.
[18] Millwater HR, Bates A, Vazquez E. Probabilistic sensitivity methods for correlated normal variables. International Journal of Reliability and Safety 2011;5(1).

Acknowledgments

This research effort was funded in part under grants from the Air Force Office of Scientific Research (AFOSR award FA9550-09-1-0452, through Ohio State University), the National Science Foundation (HRD-0932339, through the CREST Center for Simulation, Visualization & Real Time Computing), and the Federal Aviation Administration (09-G-016).