A software framework for probabilistic sensitivity analysis for computationally expensive models

Advances in Engineering Software 100 (2016) 19–31

N. Vu-Bac c, T. Lahmer c, X. Zhuang e,f,∗, T. Nguyen-Thoi d,b, T. Rabczuk a,b,∗

a Division of Computational Mechanics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
b Faculty of Civil Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam
c Institute of Structural Mechanics, Bauhaus-Universität Weimar, Marienstr. 15, D-99423 Weimar, Germany
d Division of Computational Mathematics and Engineering, Institute for Computational Science, Ton Duc Thang University, Ho Chi Minh City, Vietnam
e State Key Laboratory for Disaster Reduction in Civil Engineering, College of Civil Engineering, Tongji University, Shanghai, China
f Institute of Continuum Mechanics, Leibniz University Hannover, Germany


Article history: Received 22 August 2015; Revised 30 May 2016; Accepted 5 June 2016.

Keywords: Uncertainty quantification; Random sampling; Penalized spline regression; Sensitivity analysis; Matlab toolbox.

Abstract: We provide a sensitivity analysis toolbox consisting of a set of MATLAB functions that offer utilities for quantifying the influence of uncertain input parameters on uncertain model outputs. It allows the determination of the key input parameters for an output of interest. The results are based on probability density functions (PDFs) provided for the input parameters. The toolbox for uncertainty and sensitivity analysis methods consists of three ingredients: (1) a sampling method, (2) surrogate models, and (3) a sensitivity analysis (SA) method. Numerical studies based on analytical functions with additional noise, and on industrial data, are performed to demonstrate the usefulness and effectiveness of the framework.

1. Introduction

In many fields, such as structural reliability [30–32], material modeling [30,31,33], and finance, mathematical (numerical) models are used for predicting the response of a system. Due to increasing computer power, the complexity of these models is growing. Generally, the more complex the models are, the larger the uncertainty in the model outputs becomes due to randomness in the input parameters. It is essential to determine how much the model output is changed by variations in the input parameters, as well as to calibrate and validate the mathematical models; sensitivity analysis (SA) is a great help for these purposes [1]. Therefore, uncertainty and sensitivity analysis have recently received widespread interest from researchers in many fields, such as material modeling and structural design. Numerous SA approaches have been developed to quantify models with uncorrelated parameters [2]. However, engineering systems are complex and frequently contain correlated input parameters, such that if one parameter varies, other parameters vary as well. The variation in the output of models with correlated input parameters (e.g., composition constraints in material modeling [3]) is not only contributed by the



∗ Corresponding authors.
E-mail addresses: [email protected] (N. Vu-Bac), [email protected] (X. Zhuang), [email protected] (T. Rabczuk).
http://dx.doi.org/10.1016/j.advengsoft.2016.06.005
0965-9978/© 2016 Elsevier Ltd. All rights reserved.

variations in the input parameter itself, but also by the correlated variations in other parameters [4]. Hence, it is more realistic to estimate the effects of changing more than one parameter on the model outputs simultaneously. It is essential to understand the relations among the uncertain input parameters when designing a SA. A few methods have been developed to quantitatively assess the effect of correlated input parameters on the model outputs. For instance, Xu and Gertner [4] improved the original Fourier amplitude sensitivity test (FAST), combined with the Iman and Conover method [18] used to generate correlated samples, to properly measure the sensitivity index for a model with correlated input parameters. They then developed another method to evaluate the sensitivity indices for the uncorrelated and correlated contributions, see [5]. Nevertheless, those methods are limited to estimating the first-order sensitivity indices, and the latter is based on the restrictive assumption that the model output is linearly related to the input parameters. Later, SA methods were proposed to quantitatively assess the total-effect sensitivity index, which is essential for model simplification, see [1,6,7]; however, most of them deal with analytical functions rather than with experimental or simulation data. Hence, a unified framework that links the different steps, from generating samples to constructing the surrogate model and implementing the sensitivity analysis method, is needed. In this article, a review and computer implementation of uncertainty and sensitivity analysis and its application in engineering analysis is carried out to provide a robust and powerful modeling tool that supports the design of uncertainty and sensitivity analyses. We employ a sensitivity analysis (SA) method for the case of correlated parameters [6], whose formulas were derived similarly to Sobol's formulas for the case of uncorrelated parameters [8]. For the estimation, the sample data are generated from the joint and conditional probability distribution functions of the input parameters, which are required to account for the constraints in the input space. A Gaussian copula is used to generate a joint cumulative distribution function (CDF) (multivariate normal distribution) that requires only the marginal distributions and the covariance matrix of the input parameters. Complex models are often very time-consuming and computationally expensive, so that they cannot be used directly to compute sensitivity indices. Thus, the so-called surrogate-based approach is employed as an approximation of the real model for the sensitivity analysis. In [10], the authors presented a penalized spline regression model for a single continuous predictor. Since the predictor variables have nonlinear relationships with the model output, regression models considering multiple smooth functions [11] are adopted in this article to approximate the observed data. Subsequently, the SA indices are computed based on the penalized spline regression models. The objective of this work is to provide a MATLAB toolbox consisting of a set of functions that can be used to randomly generate samples, construct the surrogate model and carry out the SA. The computer implementation is presented in this article. The supporting MATLAB code can be found at http://www.uni-weimar.de/Bauing/rabczuk/.


Nomenclature

β      Vector of regression coefficients
ν      Vector of regression coefficients for the penalized spline regression model
λ      Smoothing parameter
ε      Vector of errors
C      Truncated power basis of degree p
R²     Coefficient of determination (COD)
X      Random variable vector X = [X1, X2, ..., XN]^T
x      Vector of design variables
Y      Vector of responses at sampled design points
μ      Mean vector of variables
ρij    Coefficient of correlation
Σ      Covariance matrix
σ      Standard deviation (std. dev.) vector of variables
d      Number of variables (parameters)
E(·)   Expected value of the quantity
F(·)   Cumulative distribution function
f(·)   True function to be modeled
N      Sample size
p      Degree of spline basis
p(·)   Probability density function
Prob   Probability value
ru     Uniformly distributed random number
RSS    Residual sum of squares
Si     First-order sensitivity index of the ith variable Xi
Sy     First-order sensitivity index of the group y
STi    Total-effect sensitivity index of the ith variable Xi
STy    Total-effect sensitivity index of the group y
V      Total variance
V(·)   Variance value of the quantity
Vi     Partial variance of the ith variable Xi
Xi     Generic variable (parameter)

The article is outlined as follows. In the next section, we briefly describe the flow chart and structure of the framework. The sampling technique is shown in Section 3. Surrogate models, comprising polynomial and penalized spline regression models, are presented in Section 4. The SA method is described in Section 5. The application of the SA method to models with correlated input parameters is presented in Section 6, including two analytical models with additional noise and an industrial example. Finally, we close the manuscript with concluding remarks.

2. Matlab toolbox: A flowchart and structure of the code

The framework, including the sampling of correlated input parameters, the construction of the surrogate model, and the implementation of the sensitivity analysis, is described in Fig. 1. The structure and purpose of the code are briefly depicted in Table 1. In the first step, the sampling technique is used to randomly generate correlated input values, which are then inserted into the computational model to obtain the model response in the second step. In the third step, the regression model is used to approximate the training data obtained from the previous step. Finally, the SA approach is employed to estimate the sensitivity indices based on the regression model. Accordingly, the MATLAB subroutines for each step are given in Table 1.

3. Sampling method

Latin Hypercube Sampling (LHS) [14,15] is an improved sampling strategy that enables a reliable approximation of the stochastic properties even for a small number of samples N. LHS is used to provide design points which are spread throughout the design space. The LHS can be summarized as follows:




• Divide the cumulative distribution curve of each parameter into N equal intervals. A probability value is then randomly selected from each interval of the parameter distribution,

$\mathrm{Prob}_i = \frac{1}{N} r_u + \frac{i-1}{N}, \qquad (1)$

for the ith interval, where $r_u$ is a uniformly distributed random number varying over the range [0, 1], see [16].
• Use the inverse cumulative distribution function (CDF) to map the probability value Prob_i into the design space:

$x = F^{-1}(\mathrm{Prob}_i), \qquad (2)$

where $F^{-1}$ denotes the inverse CDF.

LHS reduces the computational effort due to the dense stratification across the range of each sampled variable [17]. Random number generators are widely used in sampling strategies for uncorrelated parameters. The MATLAB program lhsu.m, improved based on the Kriging toolbox [13] and on [14,18,19], is used to obtain the locations of the design points.
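As an illustration of these two steps, a minimal LHS sketch in MATLAB is given below; a standard normal marginal is assumed for Eq. (2) as an example, and the sketch is not the toolbox routine lhsu.m (whose interface differs and which produces uniform variates).

```matlab
% Minimal LHS sketch following Eqs. (1)-(2); illustrative only.
function x = lhs_sketch(N, d)
x = zeros(N, d);
for j = 1:d
    % Eq. (1): one uniformly distributed probability value per interval,
    % with the N intervals visited in random order (the LHS permutation).
    prob = (rand(N, 1) + (randperm(N)' - 1)) / N;
    % Eq. (2): map probabilities into the design space via the inverse CDF;
    % here F is the standard normal CDF, so F^{-1}(p) = -sqrt(2)*erfcinv(2p).
    x(:, j) = -sqrt(2) * erfcinv(2 * prob);
end
end
```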


Fig. 1. Schematic diagram for sensitivity analysis.

Table 1. Structure of the code.

Main code:
  main.m: main code.
Sampling:
  lhsu.m: LHS; generates uniformly distributed random variables in [0, 1].
  multivar_corr_unif_sampl.m: generates uniformly distributed random variates that exhibit a prespecified linear correlation; adapted from Schumann [9].
  multvar_norm_sampl.m: transforms uncorrelated uniform pseudo-random variables into correlated normal variables with specified mean and std. dev. values.
  main_branin_input_scatter.m: plots scatter plots of the columns of Xi against the columns of Xj.
Regression model (adapted from [11,12]):
  fsrsaddmodel.m: constructs a penalized spline regression model to fit observed data.
  powerbasis.m [11]: returns the power basis functions of a spline of given degree.
  quantileknots.m [11]: creates knots at sample quantiles.
  addbasis.m [12]: outputs derivatives w.r.t. Xj of the spline basis.
  addmodelpredct.m: constructs a penalized spline regression model with a common global penalty.
  addmodelpredctsepglobal.m: constructs a penalized spline regression model with separate global penalties.
  addmodelpredctseplocal.m: constructs a penalized spline regression model with separate local penalties.
Quadratic regression model:
  quadr_regsn.m: constructs a quadratic regression model without mixed terms.
Sensitivity analysis:
  cond_dist_samp_y_zbarprime.m: generates the vector (y, z̄′) from the conditional probability distribution function p(y, z̄′|y).
  cond_dist_samp_ybarprime_z.m: generates the vector (ȳ′, z) from the conditional probability distribution function p(ȳ′, z|z).
  cond_dist_samp_yprime_zhat.m: generates the vector (y′, ẑ) from the conditional probability distribution function p(y′, ẑ|y′).
  si_mc_estimates.m: computes the sensitivity indices.


4. Surrogate model

4.1. Polynomial regression model

Given a computational model with d predictor variables, the general multiple regression model is

$Y_j = \beta_0 + \sum_{i=1}^{d} \beta_i x_{ji} + \epsilon_j. \qquad (3)$

The model response Y at the N observed data points can be written as

$Y = X\beta + \epsilon, \qquad (4)$

in which Y is an N × 1 vector, $X = [1\ X_1 \dots X_d]$, ε is an N × 1 error vector and β are the regression coefficients. By minimizing

$\sum_{i=1}^{N} (Y_i - x_i^T \beta)^2 = \|Y - X\beta\|^2, \qquad (5)$

the least-squares estimate of β is determined as

$\hat{\beta} = (X^T X)^{-1} X^T Y, \qquad (6)$

with $x_j = [1, x_{j1}, \dots, x_{jd}]$, $j = 1, \dots, N$. To test the model, we simply compute either the root mean square error (RMSE) [20], given by

$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} [Y(x_i) - \hat{Y}(x_i)]^2}, \qquad (7)$

or the coefficient of determination (COD) R², given by

$R^2 = 1 - \frac{\sum_{i=1}^{N} (Y_i - \hat{Y}_i)^2}{\sum_{i=1}^{N} (Y_i - \bar{Y})^2}, \quad \bar{Y} = \frac{1}{N}\sum_{i=1}^{N} Y_i, \qquad (8)$

where $Y_i$ and $\hat{Y}_i$ are the ith observed and predicted values of the model output, respectively. R² describes how well the surrogate model approximates the observed data; the smaller the RMSE, the more closely the surrogate model approximates the observed data. A surrogate model is considered a good description of the observed data if R² ≥ 0.8, as stated in [21]. The quadratic regression model without mixed terms is illustrated by the subroutine quadr_regsn.m as follows:
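The listing was lost in the extraction; the following is a minimal sketch of what such a routine computes, assuming X is an N × d matrix of inputs and Y an N × 1 response vector (the interface of the actual toolbox routine may differ).

```matlab
function [beta, R2, RMSE] = quadr_regsn(X, Y)
% Quadratic regression without mixed terms: Y ~ 1 + X + X.^2 (a sketch).
N = size(X, 1);
C = [ones(N, 1), X, X.^2];          % design matrix [1, x_j, x_j^2], cf. Eq. (3)
beta = (C' * C) \ (C' * Y);         % least-squares estimate, Eq. (6)
Yhat = C * beta;                    % fitted responses
RMSE = sqrt(sum((Y - Yhat).^2)/N);  % Eq. (7)
R2 = 1 - sum((Y - Yhat).^2) / sum((Y - mean(Y)).^2);  % Eq. (8)
end
```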

4.2. Nonparametric regression model

As indicated in [11], traditional parametric regression models are impractical for handling nonlinear effects in experimental data, in terms of both expertise and computational expense, e.g., the data obtained from the Branin function with additional noise shown in Section 6.

4.2.1. Cross-validation (CV)

As indicated by [11], the residual sum of squares (RSS) criterion that measures the predictive ability of a regression model is defined by

$\mathrm{RSS} = \sum_{i=1}^{N} (Y_i - \hat{Y}_i)^2 = \|Y - \hat{Y}\|^2. \qquad (9)$

Using this criterion, where the RSS is minimized, may lead to an approximately zero smoothing parameter for the penalized spline regression; zero smoothing can result in overfitting the data. Cross-validation (CV) is a technique that assesses the accuracy and validity of a statistical model: all training data except the ith point are used for the prediction of $Y_i$, so the prediction $\hat{Y}_i$ is determined without using the ith data point. The idea of cross-validation is to split the available data into two subsets. Data in one set are used to construct the predictive model, which is then employed to predict the data in the second set; we measure the difference between the predicted and observed values in the second set. This process can be repeated with different splits of the data. If we denote the nonparametric regression estimator by $\hat{f}(x; \lambda)$, the RSS in Eq. (9) can be rewritten as

$\mathrm{RSS}(\lambda) = \sum_{i=1}^{N} \{Y_i - \hat{f}(x_i; \lambda)\}^2. \qquad (10)$

The CV criterion is given by

$\mathrm{CV}(\lambda) = \sum_{i=1}^{N} \{Y_i - \hat{f}_{-i}(x_i; \lambda)\}^2, \qquad (11)$

with $\hat{f}_{-i}$ being the nonparametric regression estimator applied to the data with $(x_i, Y_i)$ omitted. The optimal smoothing parameter $\hat{\lambda}_{CV}$ is determined by minimizing CV(λ) for λ ≥ 0. In practical applications, an efficient algorithm known as the generalized cross-validation (GCV) criterion [23] is utilized. The smoothing parameter $\hat{\lambda}_{GCV}$ is obtained by minimizing

$\mathrm{GCV}(\lambda) = \frac{\mathrm{RSS}(\lambda)}{\{1 - \mathrm{df}_{fit}(\lambda)/N\}^2}, \qquad (12)$

in which $\mathrm{df}_{fit}$ denotes the degrees of freedom of the linear smoother.
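As a sketch of how Eq. (12) can be evaluated for a linear smoother, here the penalized smoother introduced later in Section 4.2.2 (Eq. (19)), assuming C is the spline basis matrix and D the penalty matrix; this is an illustration, not the toolbox implementation.

```matlab
function gcv = gcv_criterion(C, D, Y)
% Evaluate the GCV criterion of Eq. (12) for a linear smoother (sketch).
N = numel(Y);
S = C * ((C' * C + D) \ C');     % linear smoother matrix, cf. Eq. (19)
Yhat = S * Y;
RSS = sum((Y - Yhat).^2);        % Eq. (10)
df_fit = trace(S);               % degrees of freedom of the smoother
gcv = RSS / (1 - df_fit / N)^2;  % Eq. (12)
end
```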

4.2.2. Penalized spline regression

The respective 1, ..., d directions are smoothed by the smoothing parameters λ1, ..., λd, which are introduced to penalize the knot coefficients $u_{1,k}, \dots, u_{d,k}$. Given the vector of d predictor parameters for the ith design point, $x_i = (x_{i,1}, \dots, x_{i,d})^T$, the additive model is defined as

$Y_i = \beta_0 + \sum_{j=1}^{d} f_j(x_{i,j}) + \epsilon_i. \qquad (13)$

Assume that $K_j$ knots, $\kappa_{1,j}, \dots, \kappa_{K_j,j}$, are assigned to the jth parameter; the additive spline model is then rewritten as

$f(x, \nu) = \beta_0 + \sum_{j=1}^{d} \left( \beta_{1,j} x_j + \dots + \beta_{p,j} x_j^p + \sum_{k=1}^{K_j} u_{k,j} (x_j - \kappa_{k,j})_+^p \right), \qquad (14)$

with $\nu \equiv [\beta^T\ u^T]^T = [\beta_0, \beta_{1,1}, \dots, u_{K_1,1}, \dots, u_{K_d,d}]^T$ being the vector of regression coefficients. If we denote the penalty function for the jth parameter by λj(·), then ν is chosen to minimize

$\sum_{i=1}^{N} \{Y_i - f(x_i; \nu)\}^2 + \sum_{j=1}^{d} \sum_{k=1}^{K_j} \lambda_j^2(\kappa_{k,j})\, u_{k,j}^2. \qquad (15)$


Minimizing Eq. (15) w.r.t. ν yields

$\hat{\nu}(\lambda) = \{C^T C + D(\lambda)\}^{-1} C^T Y, \qquad (16)$

in which

$C = \left[1,\ x_{i,1}, \dots, x_{i,d},\ \dots,\ x_{i,1}^p, \dots, x_{i,d}^p,\ (x_{i,1} - \kappa_{k,1})_+^p\ (1 \le k \le K_1),\ \dots,\ (x_{i,d} - \kappa_{k,d})_+^p\ (1 \le k \le K_d)\right]_{1 \le i \le N}, \qquad (17)$

$D(\lambda) = \mathrm{diag}(0, \dots, 0,\ \lambda_1^2 \mathbf{1}_{K_1 \times 1}, \dots, \lambda_d^2 \mathbf{1}_{K_d \times 1}), \qquad (18)$

where the leading zeros correspond to the unpenalized polynomial coefficients. The vector of fitted values is obtained with the above penalized estimate of ν:

$\hat{Y} = C\{C^T C + D(\lambda)\}^{-1} C^T Y. \qquad (19)$

According to the complexity of the model, three levels of the penalty are chosen as proposed by [11]:

• λj(·) ≡ λ (a common global penalty);
• λj(·) ≡ λj (separate global penalties);
• λj(·) is a linear spline (separate local penalties).

Due to the computational expense, it is not recommended to search for the separate local penalties over an M-dimensional grid. Instead, a search over a one-dimensional grid minimizing the generalized cross-validation (GCV) criterion is proposed in [24], as follows:

• At first, GCV is minimized to compute a common global penalty. In this step, the input parameters should be standardized.
• Then, we use the common global penalty as the initial value to compute the separate global penalties by minimizing GCV. The d global penalty estimates are computed during this minimization.
• Finally, the separate global penalties are set as initial values for minimizing GCV to search for the separate local penalties. In this way, a total of M1 + ... + Md penalty estimates is found according to the jth local penalty.

The search strategy for the local penalties involves the following steps:

1. First, $\lambda^*_1, \dots, \lambda^*_M$ are selected as starting values, each equal to the global smoothing parameter λ minimizing GCV as presented in Section 4.2.1.
2. Then each $\lambda^*_k$ is varied, while keeping the others fixed, over a one-dimensional grid with the current value of $\lambda^*_k$ located at the center. After each iteration over this grid, $\lambda^*_k$ is updated with the λ-value minimizing GCV. The minimization is repeated $N_{iter}$ times and $\lambda^*_k$ is obtained accordingly.

In this work, we use the program fsrsaddmodel.m in the MATLAB Spline regression toolbox developed by Ruppert [12] to illustrate the above-mentioned algorithm. The program fsrsaddmodel.m calls the program addbasis.m, which fits a 10-knot quadratic spline to the observed data.
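For concreteness, the following is a minimal sketch of the fit of Eqs. (14)–(19) for a single predictor with a common global penalty λ; fsrsaddmodel.m, powerbasis.m, quantileknots.m and addbasis.m generalize this to d predictors and the three penalty levels. Knot placement here is equally spaced for brevity, whereas quantileknots.m uses sample quantiles.

```matlab
% Sketch: 10-knot quadratic penalized spline for one predictor x (N x 1),
% response Y (N x 1) and a given smoothing parameter lambda. Illustrative only.
p = 2; K = 10;
kappa = linspace(min(x), max(x), K + 2);
kappa = kappa(2:end-1);                                % interior knots
C = [ones(size(x)), x, x.^2, ...
     max(bsxfun(@minus, x, kappa), 0).^p];             % truncated power basis, Eq. (17)
D = diag([0, 0, 0, lambda^2 * ones(1, K)]);            % penalty matrix, Eq. (18)
nu = (C' * C + D) \ (C' * Y);                          % penalized estimate, Eq. (16)
Yhat = C * nu;                                         % fitted values, Eq. (19)
```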

5. Sensitivity analysis method

5.1. Extension of Sobol' approach for models with correlated inputs (ESACIs)

We consider a computational model f(X1, X2, ..., Xd) defined in R^d with a realization matrix X = (X1, ..., Xd). If the realization matrix X is partitioned into $y = (X_{i_1}, \dots, X_{i_s})$, 1 ≤ s < d, and $z = (X_{i_{s+1}}, \dots, X_{i_d})$, the total variance of f(X1, X2, ..., Xd) can be written as follows:

$V = V_y[E_z(f(y, \bar{z}'))] + E_y[V_z(f(y, \bar{z}'))]. \qquad (20)$

We can estimate the first-order index of the group y by

$S_y = \frac{V_y[E_z(f(y, \bar{z}'))]}{V}, \qquad (21)$

and the total-effect index of the group z by

$S_{Tz} = \frac{E_y[V_z(f(y, \bar{z}'))]}{V}. \qquad (22)$

Similarly, the total-effect index of the group y can be computed by

$S_{Ty} = \frac{E_z[V_y(f(\bar{y}', z))]}{V}, \qquad (23)$

where the respective random vectors z and z̄′ are sampled based on the joint PDF p(y, z) and the conditional probability distribution p(y, z̄′|y). The first-order index is expressed as [6]:

$S_y = \frac{1}{V}\left[\int_{R^s} p(y) \left(\int_{R^{d-s}} f(y, \bar{z}')\, p(y, \bar{z}'\,|\,y)\, d\bar{z}'\right)^2 dy - f_0^2\right], \qquad (24)$

in which p(y) is a marginal distribution and $f_0 = \frac{1}{N}\sum_{i=1}^{N} f(x_i)$. Eq. (24) can be written as

$S_y = \frac{1}{V}\left[\int_{R^s} p(y)\, dy \int_{R^{d-s}} f(y, \bar{z}')\, p(y, \bar{z}'\,|\,y)\, d\bar{z}' \int_{R^{d-s}} f(y, \bar{z}'')\, p(y, \bar{z}''\,|\,y)\, d\bar{z}'' - f_0^2\right], \qquad (25)$

in which the random vectors (y, z) and (y′, z′) are sampled based on the joint PDF p(y, z), while (y, z̄′) and (y, z̄″) are sampled based on the conditional probability distributions p(y, z̄′|y) and p(y, z̄″|y), respectively. Then, Eq. (25) can be used to compute the first-order indices

$S_y = \frac{1}{V}\int_{R^s}\int_{R^{d-s}} f(y, z)\, p(y, z)\left[\int_{R^{d-s}} f(y, \bar{z}')\, p(y, \bar{z}'\,|\,y)\, d\bar{z}' - \int_{R^s}\int_{R^{d-s}} f(y', z')\, p(y', z')\, dy'\, dz'\right] dy\, dz, \qquad (26)$

or

$S_y = \frac{1}{V}\int_{R^s}\int_{R^{d-s}} f(y, z)\, p(y, z)\left[\int_{R^{d-s}} f(y', \hat{z})\, p(y', \hat{z}\,|\,y')\, d\hat{z} - \int_{R^s}\int_{R^{d-s}} f(y', z')\, p(y', z')\, dy'\, dz'\right] dy\, dz. \qquad (27)$

Note that the random vectors z and z′ are sampled based on the joint probability distribution p(y, z), and the random vector ẑ is sampled based on the conditional probability distribution function p(y′, ẑ|y′). By combining

$S_{Ty} = \frac{V - V_z[E_y(f(\bar{y}', z))]}{V} \qquad (28)$

with Eq. (25), $S_{Ty}$ can be written in terms of the following formula:

$S_{Ty} = \frac{1}{2V}\int_{R^s}\int_{R^s}\int_{R^{d-s}} [f(y, z) - f(\bar{y}', z)]^2\, p(y, z)\, p(\bar{y}', z\,|\,z)\, dy\, d\bar{y}'\, dz. \qquad (29)$

5.2. Correlated normal distribution

Without loss of generality, let us consider a d-dimensional multivariate normal distribution in R^d characterized by the mean vector μ and covariance matrix Σ. We can write the mean vector and


covariance matrix Σ as

$\mu = \begin{pmatrix} \mu_y \\ \mu_z \end{pmatrix}, \qquad (30)$

$\Sigma = \begin{pmatrix} \Sigma_y & \Sigma_{yz} \\ \Sigma_{zy} & \Sigma_z \end{pmatrix}. \qquad (31)$

The conditional distribution $\phi_{d-s}(y, \bar{z}'\,|\,y)$ is also a normal distribution,

$\phi_{d-s}(y, \bar{z}'\,|\,y) = \frac{1}{(2\pi)^{\frac{d-s}{2}}\, |\Sigma_{zc}|^{\frac{1}{2}}}\, e^{-\frac{1}{2}(\bar{z}' - \mu_{zc})^T \Sigma_{zc}^{-1} (\bar{z}' - \mu_{zc})}, \qquad (32)$

with the mean vector and covariance matrix expressed as

$\mu_{zc} = \mu_z + \Sigma_{zy} \Sigma_y^{-1} (y - \mu_y), \qquad (33)$

$\Sigma_{zc} = \Sigma_z - \Sigma_{zy} \Sigma_y^{-1} \Sigma_{yz}. \qquad (34)$

Since the correlated input variables may follow arbitrary distributions in practice, we employ a Gaussian copula to transform arbitrarily correlated random variables, given their covariance matrix Σ and marginal cumulative distribution functions, to a correlated multivariate normal distribution. Consequently, Monte Carlo estimates are straightforwardly applied to measure the main and total-effect indices of the correlated multivariate normal distribution.

5.2.1. Monte Carlo estimates

In practical applications, the indices $S_y$ and $S_{Ty}$ in Section 5.1 are estimated by an MC algorithm. Eq. (25) is rewritten in terms of the MC estimator as follows:

$S_y = \frac{\frac{1}{N}\sum_{i=1}^{N} f(y_i, z_i)\, f(y_i, \bar{z}'_i) - \left(\frac{1}{N}\sum_{i=1}^{N} f(y_i, z_i)\right)^2}{V}. \qquad (35)$

Eq. (29) is given by

$S_{Ty} = \frac{\frac{1}{N}\sum_{i=1}^{N} \left[f(y_i, z_i) - f(\bar{y}'_i, z_i)\right]^2}{2V}, \qquad (36)$

while Eqs. (26) and (27) are obtained by

$S_y = \frac{\frac{1}{N}\sum_{i=1}^{N} f(y_i, z_i)\left[f(y_i, \bar{z}'_i) - f(y'_i, z'_i)\right]}{V} \qquad (37)$

and

$S_y = \frac{\frac{1}{N}\sum_{i=1}^{N} f(y_i, z_i)\left[f(y'_i, \hat{z}_i) - f(y'_i, z'_i)\right]}{V}, \qquad (38)$

with the total variance estimated by

$V = D - f_0^2, \qquad (39)$

where $f_0$ and D are evaluated at $N_{yz}$ points of the vector (y, z) by

$f_0 = \frac{1}{N_{yz}} \sum_{l=1}^{N_{yz}} f(y_l, z_l), \quad D = \frac{1}{N_{yz}} \sum_{l=1}^{N_{yz}} \left(f(y_l, z_l)\right)^2. \qquad (40)$

In order to compute $S_y$ and $S_{Ty}$, the functions f(y, z), f(y, z̄′), f(ȳ′, z), f(y′, z′) and f(y′, ẑ) at each design point are evaluated based on the penalized spline regression model using the subroutine addmodelpredct.m, adapted from Ruppert et al. [12], as presented in the following script:
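The script itself is not reproduced in this extraction; a sketch of the evaluations it performs is given below, where the call signature of addmodelpredct.m and the variable names are assumptions.

```matlab
% Evaluate the surrogate at the sampled sets needed by Eqs. (35)-(40);
% fit is the penalized spline model, y/z/yp/zp/zbarp/ybarp/zhat are the
% sampled blocks described in Section 5.3 (names assumed).
f_yz   = addmodelpredct(fit, [y,     z    ]);   % f(y, z)
f_yzb  = addmodelpredct(fit, [y,     zbarp]);   % f(y, zbar')
f_ybz  = addmodelpredct(fit, [ybarp, z    ]);   % f(ybar', z)
f_ypzp = addmodelpredct(fit, [yp,    zp   ]);   % f(y', z')
f_ypzh = addmodelpredct(fit, [yp,    zhat ]);   % f(y', zhat)
f0 = mean(f_yz); D = mean(f_yz.^2); V = D - f0^2;   % Eqs. (39)-(40)
Sy  = (mean(f_yz .* f_yzb) - f0^2) / V;             % Eq. (35)
STy = mean((f_yz - f_ybz).^2) / (2 * V);            % Eq. (36)
```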







5.3. Generation of correlated parameters

In order to implement the MC estimates presented in Section 5.2.1, MC sampling algorithms are required to generate (1) d-dimensional random normal variables and (2) conditionally normally distributed variables. The algorithms are described subsequently.

5.3.1. d-dimensional random normal parameters

A d-dimensional multivariate normal distribution is generated by the following steps:

• A uniformly distributed random matrix u of d parameters at sample size N is generated in [0, 1];
• Map the elements of the ith column $u_i$ into a standard normal vector $\tilde{X}$ by means of $\tilde{X}_i = F^{-1}(u_i)$, where F(·) is the CDF of the standard normal distribution;
• Compute the lower triangular matrix A using the Cholesky decomposition

$A A^T = \Sigma; \qquad (41)$

• Obtain the d-dimensional multivariate normal distribution X by means of

$X = A \tilde{X} + \mu. \qquad (42)$

The following sample code generates Nd realisations of d correlated random variates, where the variates are normally distributed.
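The original listing is lost in this extraction; a minimal sketch following Eqs. (41)–(42) is given below, where the variable names are assumptions, mu is d × 1 and Sigma is d × d.

```matlab
% Generate N correlated d-dimensional normal samples, Eqs. (41)-(42).
u = rand(N, d);                       % or LHS samples from lhsu.m
Xt = -sqrt(2) * erfcinv(2 * u);       % map to standard normal via inverse CDF
A = chol(Sigma, 'lower');             % Cholesky factor, Eq. (41)
X = (A * Xt' + repmat(mu, 1, N))';    % X = A*Xtilde + mu, Eq. (42)
```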

The sample code generating Nd realisations of d correlated random variates, where the variates are uniformly distributed, can be found in the toolbox.

5.3.2. Conditional normal random distribution

In order to construct the conditional random normal distribution, we consider the generation of the set (y, z̄′) as an example. The other sets, (ȳ′, z) and (y′, ẑ), are generated in the same fashion.

• Two different uniformly distributed random matrices u and u′ of d parameters at sample size N are generated in [0, 1];
• The matrix u′ is split into $v' = (u'_{i_1}, \dots, u'_{i_s})$, 1 ≤ s < d, and w′ containing the d − s remaining parameters, so that u′ = (v′, w′);
• Similar to the transformation technique described above, we map the columns of the matrix u into a normal vector X with mean vector μ and covariance matrix Σ;
• The vector X is then divided into two vectors y and z, so that X = (y, z);
• Use the inverse normal CDF to map the ith column $u'_i$ of the matrix w′ into a standard normal variate $\tilde{z}_i$;
• Compute the mean vector $\mu_{zc}$ whose elements are the mean values of the columns in the matrix y;
• Obtain the Cholesky decomposition of $\Sigma_{zc}$ using

$A_{zc} A_{zc}^T = \Sigma_{zc}; \qquad (43)$

• Compute the (d − s)-dimensional random vector z̄′ by means of

$\bar{z}' = A_{zc} \tilde{z} + \mu_{zc}; \qquad (44)$

• Obtain (y, z̄′) by merging the vectors y and z̄′.

The algorithm is implemented by the MATLAB program cond_dist_samp_y_zbarprime.m as follows:
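The listing is not reproduced here; a sketch under the partitioning of Eq. (31) is given below. The names and dimensions are assumptions: y is N × s, ztilde is N × (d − s) standard normal, and mu is d × 1.

```matlab
% Sketch of sampling zbar' ~ p(zbar' | y) via Eqs. (33), (34), (43), (44).
Sy  = Sigma(1:s, 1:s);      Syz = Sigma(1:s, s+1:d);
Szy = Sigma(s+1:d, 1:s);    Sz  = Sigma(s+1:d, s+1:d);
Szc = Sz - Szy / Sy * Syz;          % conditional covariance, Eq. (34)
Azc = chol(Szc, 'lower');           % Cholesky factor, Eq. (43)
zbarp = zeros(N, d - s);
for i = 1:N
    mu_zc = mu(s+1:d) + Szy / Sy * (y(i, :)' - mu(1:s));  % Eq. (33)
    zbarp(i, :) = (Azc * ztilde(i, :)' + mu_zc)';         % Eq. (44)
end
```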

6. Application

In this article, the presented method is applied to analytical models in the first two examples and to industrial data in the last one. If the input parameters follow a correlated multivariate normal distribution, the application of the method is straightforward. In the general case, arbitrary multivariate data need to be transformed to a correlated multivariate normal distribution through copula-based methods. With regard to the first two analytical models, random noise is added to the model output to mimic real data from practical applications. Hence, the surrogate model is employed to approximate either the analytical models with additional noise or the industrial data. The robustness of the smoothed spline regression models compared to the polynomial regression model is illustrated in the first two examples, while the detailed computational procedure is explained in the last example. Since the first-order $S_j$ and total-effect $S_{Tj}$, j = 1, ..., d, indices are computed based on the surrogate model, they are reduced by the COD; that is, only the fraction R² of the observed data can be explained by the surrogate model:

$\hat{S}_j = R^2 S_j, \qquad (45a)$

$\hat{S}_{Tj} = R^2 S_{Tj}. \qquad (45b)$

6.1. Branin function with additional noise

Let us consider the Branin function [25,26]

$f(x_1, x_2) = a(x_2 - b x_1^2 + c x_1 - r)^2 + s(1 - t)\cos(x_1) + s, \qquad (46)$

where a = 1, b = 5.1/(4π²), c = 5/π, r = 6, s = 10 and t = 1/(8π). The input variables are uniformly distributed on the square [−5, 10] × [0, 15] and the following correlation matrix is assumed:

$\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}, \qquad (47)$

with the correlation between the variables X1 and X2 being ρ = 0.8.


Fig. 2. Scatterplot of the correlated inputs for Branin function.

Table 2. Quadratic polynomial (without mixed terms) regression model result summary.

β0      β1       β2       β11    β22    R²     RMSE
93.09   −12.45   −23.91   1.62   2.04   0.72   35.06

To mimic real data, white noise N(0, 1) has been added to the model output Y, which is evaluated by the Branin function at each design point. The input and output data can be found in the file branin.dat. A scatterplot of the uniformly distributed parameters X2 versus X1, produced by main_branin_input_scatter.m, is shown in Fig. 2 to depict the correlation between the two random parameters. A three-dimensional scatter plot and the associated regression models versus the variables X1 and X2 are shown in Fig. 3. The sensitivity indices computed by the corresponding equations are summarized in Table 4. Since the input parameters X1 and X2 follow a correlated multivariate uniform distribution on [−5, 10] and [0, 15], respectively, they are transformed into a correlated multivariate standard normal distribution using the Gaussian copula approach. Mathematically, all variables $X_i$ are individually transformed to normally distributed variables in standard normal space by marginal transformations, written in [27] as

$Z_i = \Phi^{-1}[F_{X_i}(X_i)], \qquad (48)$

where $\Phi^{-1}$ denotes the inverse cumulative distribution function of a standard normal variable and $F_{X_i}$ denotes the cumulative distribution function of $X_i$. The mapping technique is performed by the following algorithm:
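A sketch of this transformation for the Branin inputs is given below, assuming X(:, i) is uniform on [lb(i), ub(i)]; the toolbox routine multvar_norm_sampl.m covers the general case.

```matlab
% Eq. (48) for uniform marginals: Z_i = Phi^{-1}[F_Xi(X_i)].
lb = [-5, 0]; ub = [10, 15];
U = bsxfun(@rdivide, bsxfun(@minus, X, lb), ub - lb);  % uniform CDF values
Z = -sqrt(2) * erfcinv(2 * U);                         % standard normal variables
```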

Table 3. Penalized spline regression model result summary.

        Common global penalty   Separate global penalties   Separate local penalties
R²      0.94                    0.94                        0.94
RMSE    16.81                   16.81                       16.81

At first, the quadratic regression without mixed terms [10,22], quadr_regsn.m, is used to approximate the Branin data. As shown in Table 2, the COD R² is less than 0.8. Therefore, the quadratic penalized spline regression model with 10 knots should be used to approximate the observed data, using the subroutine fsrsaddmodel.m. The values of R² and RMSE computed by the smoothed regression splines using the common global penalty (type = 1), the separate global penalties (type = 2) and the separate local penalties with 3 subknots (type = 3) are listed in Table 3. The obtained values R² ≈ 0.94 for the smoothed spline regressions indicate that the penalized spline regressions can be used to approximate complex models (or data). Based on the surrogate models, the sensitivity indices are estimated by the formulas in Section 5 through the subroutine si_mc_estimates.m.


Fig. 3. Scatter plot of the observed data and surface of the surrogate model: (a) the penalized quadratic spline using the common global penalty, (b) the penalized quadratic spline using the separate global penalties, (c) the penalized quadratic spline using the separate local penalties, and (d) the quadratic regression model without mixed terms.

Table 4. First-order and total-effect sensitivity indices computed on the penalized quadratic spline regression model.

               Common global    Separate global   Separate local
               X1      X2       X1      X2        X1      X2
Ŝᵢᵃ Eq. (25)   0.86    0.74     0.87    0.72      0.86    0.75
Ŝᵢᵇ Eq. (26)   0.87    0.74     0.86    0.71      0.86    0.75
Ŝᵢᶜ Eq. (27)   0.87    0.74     0.86    0.72      0.87    0.75
ŜTᵢ Eq. (29)   0.17    0.05     0.18    0.04      0.17    0.05

The reader can refer to main_branin.m, which addresses the steps involved in the Branin example. Note that the proposed copula-based method, implemented in the subroutine multvar_norm_sampl.m described in Section 5.3, was used to numerically transform the uniform distributions to a multivariate normal distribution. Since the parameters X1 and X2 are strongly correlated (ρ = 0.8), the main (first-order) indices $\hat{S}_i$ are much higher than the total-effect indices $\hat{S}_{Ti}$, and the total-effect indices are close to zero, see Table 4. Besides, it is worth noting

that the sum of the main indices in models with correlated input parameters can be higher than one, as illustrated in [6].

6.2. Ishigami function with additional noise

The next test case is the Ishigami function, which is highly nonlinear and nonmonotonic:

$f(x_1, x_2, x_3) = \sin(x_1) + 7\sin^2(x_2) + 0.1\, x_3^4 \sin(x_1), \qquad (49)$

in which the input variables are uniformly distributed: −π ≤ xi ≤ π, i = 1, 2, 3 [28].


Fig. 4. Scatterplot of the correlated inputs for Ishigami function.

Table 5. Quadratic polynomial (without mixed terms) regression model of the Ishigami function: result summary.

β0     β1     β2     β3     β11     β22     β33    R²     RMSE
4.21   0.77   0.05   0.28   −0.03   −0.22   0.06   0.26   3.11

With the same purpose of mimicking real data, random noise N(0, 0.5) is added to the model output Y.

Table 6. Penalized spline regression model of the Ishigami function: result summary.

        Common global penalty   Separate global penalties   Separate local penalties
R²      0.83                    0.83                        0.83
RMSE    1.48                    1.48                        1.48

For the sake of comparison, the correlation between the variables x1 and x3, ρ13 = 0.5, is assumed as in [6]. The correlation between the input parameters is illustrated in Fig. 4. The correlation matrix is given by:

$\Sigma = \begin{pmatrix} 1 & 0 & \rho \\ 0 & 1 & 0 \\ \rho & 0 & 1 \end{pmatrix}. \qquad (50)$

By repeating the same computational steps as for the Branin function, the surrogate model and the main and total-effect sensitivity indices are obtained, as presented in Tables 5–7. The sensitivity indices in Table 7 are in quantitative agreement with the analytical solution. It is worth noting that the numerical results in this article differ from the analytical ones due to the additional noise and the error, reflected in the reduced value of R², between the surrogate model and the real function. The reader can refer to main_ishigami.m for the Ishigami function with additional noise. From the same table, the respective main indices $\hat{S}_1$ and $\hat{S}_3$ are higher than the total-effect indices $\hat{S}_{T1}$ and $\hat{S}_{T3}$, as the correlation between X1 and X3 (ρ13 = 0.5) is quite high, while $\hat{S}_2 \approx \hat{S}_{T2}$ as this parameter is uncorrelated with the others.

6.3. Industrial example

We illustrate the method with a study of the net hourly electrical energy output of a power plant [29]. In this example, the hourly full-load electrical power output (PE) of a combined cycle power plant is affected by the input variables in the dataset: ambient temperature (AT), atmospheric pressure (AP), relative humidity (RH), and exhaust steam pressure (V). The dataset comprises 9568 samples accumulated from a combined cycle power plant from 2006 to 2011. The model uncertainty of the dataset is summarized in Table 8. The scatter plot of the input parameters is illustrated in Fig. 5. The input parameters are obviously correlated and their cross-correlations are indicated in Table 9.

Table 7. First-order and total-effect sensitivity indices computed on the penalized quadratic spline regression model.

               Common global          Separate global        Separate local
               X1     X2     X3       X1     X2     X3       X1     X2     X3
Ŝᵢᵃ Eq. (25)   0.29   0.48   0.15     0.30   0.47   0.16     0.29   0.45   0.16
Ŝᵢᵇ Eq. (26)   0.29   0.47   0.15     0.30   0.46   0.16     0.30   0.45   0.17
Ŝᵢᶜ Eq. (27)   0.30   0.47   0.15     0.30   0.47   0.16     0.30   0.45   0.17
ŜTᵢ Eq. (29)   0.21   0.47   0.06     0.21   0.47   0.07     0.21   0.45   0.08



Fig. 5. Scatterplot of the correlated inputs of a combined cycle power plant (CCPP).

Table 8. Model uncertainty.

           AT (X1)   V (X2)   AP (X3)   RH (X4)   PE (Y)
Min        1.81      25.36    992.89    25.26     420.26
Max        37.11     81.56    1033.30   100.16    495.76
Mean       19.65     54.31    1013.26   73.31     454.37
Variance   55.54     161.49   35.2      213.17    291.28

This example is considered as an arbitrary case of correlated parameters: the 1st (AT) and 2nd (V) input parameters follow truncated Gaussian mixture distributions, while the 3rd (AP) and 4th (RH) input parameters can be characterized by truncated normal distributions.

Table 9. Correlation matrix.

           AT (X1)   V (X2)   AP (X3)   RH (X4)
AT (X1)    1         0.84     −0.51     −0.54
V (X2)     0.84      1        −0.41     −0.31
AP (X3)    −0.51     −0.41    1         0.1
RH (X4)    −0.54     −0.31    0.1       1

For the sake of utilizing the MC estimates, which are implemented upon the multivariate normal random distribution, the input parameters are transformed into correlated standard normal random parameters using the Gaussian copula approach. The transformation procedure is illustrated by the following script:
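The script is not reproduced in this extraction; a sketch using empirical marginal CDFs is given below. The use of ecdf (Statistics Toolbox) and the variable names are assumptions about the lost listing.

```matlab
% Copula-based normalization of the CCPP inputs, cf. Eq. (48);
% Xdata is the 9568 x 4 matrix of input samples.
Z = zeros(size(Xdata));
for i = 1:size(Xdata, 2)
    [F, xs] = ecdf(Xdata(:, i));            % empirical marginal CDF of X_i
    U = interp1(xs(2:end), F(2:end), Xdata(:, i), 'linear', 'extrap');
    U = min(max(U, 1e-6), 1 - 1e-6);        % keep CDF values inside (0, 1)
    Z(:, i) = -sqrt(2) * erfcinv(2 * U);    % Z_i = Phi^{-1}[F_Xi(X_i)]
end
```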

Obviously, we cannot perform the SA, which requires tens of thousands or even millions of samples, directly on the data, as the samples available in this example (9568 samples), as for other computational models and experiments in practical applications, are limited.


Table 10. Penalized spline regression model of the CCPP data.

        Common global penalty   Separate global penalties   Separate local penalties
R²      0.94                    0.94                        0.94
RMSE    4.17                    4.17                        4.17

Table 11. First-order and total-effect sensitivity indices computed on the penalized quadratic spline regression model.

               Common global penalty
               X1     X2     X3     X4
Ŝᵢᵃ Eq. (25)   0.92   0.76   0.29   0.17
Ŝᵢᵇ Eq. (26)   0.92   0.76   0.28   0.16
Ŝᵢᶜ Eq. (27)   0.91   0.77   0.29   0.17
ŜTᵢ Eq. (29)   0.13   0.02   0.00   0.00

Thus, surrogate models such as polynomial or penalized spline regression models have to be utilized as alternative models, based on which the sensitivity indices are estimated. The penalized spline regression model is used to approximate the new normalized data. In brief, the procedure of the MC estimates is performed as follows:

• Generate uncorrelated, normally distributed pseudo-random variables u and u′ using the LHS method lhsu.m.
• Correlate the vectors u and u′ with the covariance matrix Σ using the subroutine multvar_norm_sampl.m. By doing so, the correlated standard multivariate normal samples x and x′ are obtained.
• Split the vectors x and x′ into the vectors (y, z) and (y′, z′), respectively.
• Generate the set (y, z̄′) of normal random variables from the conditional distribution p(y, z̄′|y) employing cond_dist_samp_y_zbarprime.m. Likewise, the sets (ȳ′, z) and (y′, ẑ) are obtained. Hence, f(y, z), f(y, z̄′), f(ȳ′, z), f(y′, z′) and f(y′, ẑ) at each design point can be evaluated.
• Estimate the sensitivity indices ($S_y$, $S_{Ty}$) using the subroutine si_mc_estimates.m.

As observed in Table 10, RMSE ≈ 4.17, which is in good agreement with the value RMSE ≈ 3.8 obtained in [29]. Note that in that reference, the author applied another regression method to the best subset of the data, so that the resulting RMSE is slightly smaller than ours. Table 11 shows that the main indices $\hat{S}_1 = 0.92$ and $\hat{S}_2 = 0.77$ are much higher than $\hat{S}_{T1} = 0.13$ and $\hat{S}_{T2} = 0.02$, which results from the strong correlation between AT (X1) and V (X2), while the weak correlation between X3 and X4 results in $\hat{S}_3 \approx \hat{S}_{T3}$ and $\hat{S}_4 \approx \hat{S}_{T4}$. Also, the ambient temperature AT (X1) is estimated to be the most important parameter, followed by the exhaust steam pressure V (X2). The other parameters, atmospheric pressure AP (X3) and relative humidity RH (X4), have an insignificant influence on the electrical power output PE.

6.4. Discussion

The results in Tables 2 and 5 show that the quadratic regression model fails to approximate the observed data, as COD < 0.8. Therefore, the penalized spline regression model should be used to approximate the complex observed data of the examples studied in this article. Fig. 3 (for the Branin function) and Tables 3 and 6 show that the penalized spline regression models adequately described the data in those examples, with R² = 0.94 and 0.83 for the data generated from the Branin and Ishigami functions, respectively. Furthermore, the results show that the penalized spline regression models using either the separate global penalties or the separate local penalties are nearly identical to the model using a common global penalty. However, for models including many parameters, it is necessary to take the penalized spline regression models using separate global or separate local penalties into account in order to approximate the observed data.

7. Conclusions

A unified method for probabilistic sensitivity analysis of computationally expensive models with correlated inputs has been presented. The framework comprises sequential steps: generating random sample data by LHS, transforming arbitrary distributions into a multivariate normal distribution, approximating the observed data via surrogate models, and estimating the sensitivity indices for the model with correlated parameters. In summary, the computer implementation involves the following steps:

• A method for estimating variance-based sensitivity indices for models with dependent variables.

The computer implementation has been presented in this article. The supporting MATLAB code can be found at http://www.uni-weimar.de/Bauing/rabczuk/.

Two different test functions, to which additional noise was added in order to represent real experimental data from practical applications, and industrial data were employed to illustrate the computer implementation. It is shown that the spline regression model is more robust than the polynomial regression model, i.e., the quadratic regression model in this article: the spline regression model can approximate the observed data with R² larger than 0.8, while the quadratic regression without mixed terms fails to account for them. Furthermore, the sensitivity indices show good agreement with the analytical results, see [1,6]. The difference between the numerical and analytical results, caused by the noise and the reduced COD, is insignificant.

Acknowledgements

We gratefully acknowledge the support from the National Basic Research Program of China (973 Program: 2011CB013800), NSFC (41130751), the Ministry of Science and Technology of China (SLDRCE14-B-31), the Science and Technology Commission of Shanghai Municipality (16QA1404000), and IRSES-MULTIFRAC. The support from the Alexander von Humboldt Foundation in the framework of the Sofja Kovalevskaja Award, endowed by the Federal Ministry of Education and Research, is acknowledged by X. Zhuang.

Appendix. Source code

The main code implemented for the Branin function with additional noise (represented by the branin.dat data) is shown below. First, the model uncertainty is measured. Then, the quadratic penalized spline regression model is used to approximate the data. Finally, based on the established regression model, the effects of the correlated input parameters on the model output are quantified.
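The listing itself is lost in this extraction; a hypothetical outline of the call sequence is given below, with all subroutine signatures assumed from Table 1 and the text.

```matlab
% Hypothetical outline of main.m for the Branin example; interfaces assumed.
data = load('branin.dat');              % assumed layout: columns X1, X2, Y
X = data(:, 1:2);  Y = data(:, 3);
fprintf('Y: min %g, max %g, mean %g, var %g\n', ...
    min(Y), max(Y), mean(Y), var(Y));   % model uncertainty
fit  = fsrsaddmodel(X, Y);              % 10-knot quadratic penalized spline
Yhat = addmodelpredct(fit, X);          % fitted values
R2 = 1 - sum((Y - Yhat).^2) / sum((Y - mean(Y)).^2);  % COD, Eq. (8)
[S, ST] = si_mc_estimates(fit);         % MC sensitivity indices, Section 5.2.1
S_hat = R2 * S;  ST_hat = R2 * ST;      % reduction by the COD, Eqs. (45a)-(45b)
```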


References

[1] Mara TA, Tarantola S. Variance-based sensitivity indices for models with dependent inputs. Rel. Eng. Syst. Safety 2012;107:115–21.


[2] Saltelli A, Annoni P, Azzini I, Campolongo F, Ratto M, Tarantola S. Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index. Comput. Phys. Commun. 2010;181:259–70.
[3] Vu-Bac N, Rafiee R, Zhuang X, Lahmer T, Rabczuk T. Uncertainty quantification for multiscale modeling of polymer nanocomposites with correlated parameters. Compos. Part B: Eng. 2015;68:446–64. http://dx.doi.org/10.1016/j.compositesb.2014.09.008
[4] Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput. Stat. Data Anal. 2007;51(12):5579–90.
[5] Xu C, Gertner GZ. Uncertainty and sensitivity analysis for models with correlated inputs. Rel. Eng. Syst. Safety 2008;93:1563–73.
[6] Kucherenko S, Tarantola S, Annoni P. Estimation of global sensitivity indices for models with dependent variables. Comput. Phys. Commun. 2012;183(4):937–46.
[7] Most T. Variance-based sensitivity analysis in the presence of correlated input variables. In: Proceedings of the 5th international conference on reliable engineering computing (REC), Brno; 2012.
[8] Sobol' IM. Sensitivity analysis for non-linear mathematical models. Math. Model. Comput. Exp. 1993;1:407–14. Translated from Russian: Sobol' IM. Sensitivity estimates for nonlinear mathematical models. Matematicheskoe Modelirovanie 1990;2:112–18.
[9] Schumann E. http://comisef.wikidot.com/tutorial:correlateduniformvariates.
[10] Vu-Bac N, Silani M, Lahmer T, Zhuang X, Rabczuk T. A unified framework for stochastic predictions of mechanical properties of polymeric nanocomposites. Comput. Mat. Sci. 2015;96:520–35. http://dx.doi.org/10.1016/j.commatsci.2014.04.066
[11] Ruppert D, Wand MP, Carroll RJ. Semiparametric regression. Cambridge University Press; 2003. ISBN 0521785162.
[12] Ruppert D. http://people.orie.cornell.edu/davidr/matlab/.
[13] Lophaven SN, Nielsen HB, Søndergaard J. DACE: a MATLAB Kriging toolbox. Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby; 2002.
[14] McKay MD, Conover WJ, Beckmann RJ. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 1979;21:239–45.
[15] Iman RL, Conover WJ. Small sample sensitivity analysis techniques for computer models with an application to risk assessment. Commun. Stat. 1980;A9:1749–874.
[16] Wyss GD, Jorgensen KH. A user's guide to LHS: Sandia's Latin hypercube sampling software. Technical Report SAND98-0210. Sandia National Laboratories, Albuquerque, NM; 1998.
[17] Helton JC, Davis FJ. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Rel. Eng. Syst. Safety 2003;81:23–69.
[18] Iman RL, Conover WJ. A distribution-free approach to inducing rank correlation among input variables. Commun. Stat. Simul. Comput. 1982;11:311–34.
[19] Stein M. Large sample properties of simulations using Latin hypercube sampling. Technometrics 1987;29:143–51.
[20] Forrester AIJ, Sóbester A, Keane AJ. Engineering design via surrogate modelling: a practical guide. Chichester: John Wiley & Sons; 2008. ISBN 978-0-470-06068-1.
[21] Vu-Bac N, Lahmer T, Keitel H, Zhao J, Zhuang X, Rabczuk T. Stochastic predictions of bulk properties of amorphous polyethylene based on molecular dynamics simulations. Mech. Mat. 2014;68:70–84. http://dx.doi.org/10.1016/j.mechmat.2013.07.021
[22] Vu-Bac N, Lahmer T, Zhang Y, Zhuang X, Rabczuk T. Stochastic predictions of interfacial characteristic of polymeric nanocomposites (PNCs). Compos. Part B: Eng. 2014;59:80–95. http://dx.doi.org/10.1016/j.compositesb.2013.11.014
[23] Hutchinson MF, de Hoog FR. Smoothing noisy data with spline functions. Numer. Math. 1985;47:99–106.
[24] Carroll RJ. Spatially-adaptive penalties for spline fitting. Aust. N. Z. J. Stat. 2000;42:205–23.
[25] Dixon LCW, Szego GP. The global optimization problem: an introduction. Towards Global Optim. 1978;2:1–15.
[26] Surjanovic S, Bingham D. Virtual library of simulation experiments: test functions and datasets. Retrieved September 7, 2014, from http://www.sfu.ca/~ssurjano.
[27] Bucher C. Computational analysis of randomness in structural mechanics. Structures and Infrastructures Book Series, vol. 3. CRC Press; 2009.
[28] Ishigami T, Homma T. An importance quantification technique in uncertainty analysis for computer models. In: Proceedings of ISUMA, first international symposium on uncertainty modelling and analysis; 1990. p. 398–403.
[29] Tüfekci P. Prediction of full load electrical power output of a base load operated combined cycle power plant using machine learning methods. Int. J. Electr. Power Energ. Syst. 2014;60:126–40.
[30] Ghasemi H, Brighenti R, Zhuang X, Muthu J, Rabczuk T. Optimum fiber content and distribution in fiber-reinforced solids using a reliability and NURBS based sequential optimization approach. Struct. Multidiscip. O. 2015;51(1):99–112.
[31] Ghasemi H, Rafiee R, Zhuang X, Muthu J, Rabczuk T. Uncertainties propagation in metamodel-based probabilistic optimization of CNT/polymer composite structure using stochastic multi-scale modeling. Comp. Mater. Sci. 2014;85:295–305.
[32] Wu D, Yu L. Torsional vibrations of a cylindrical foundation embedded in a saturated poroelastic half-space. Front. Struct. Civ. Eng. 2015;9(2):194–202.
[33] Quayum MdS, Zhuang X, Rabczuk T. Computational model generation and RVE design of self-healing concrete. Front. Struct. Civ. Eng. 2015;9(4):383–96.
Part B: Eng. 2014b;59:80–95. http://dx.doi.org/10.1016/j.compositesb.2013.11.014 [23] Hutchinson MF, de Hoog FR. Smoothing noisy data with spline functions. Numer. Math. 1985;47:99–106. [24] Carroll RJ. Spatially-adaptive penalties for spline fitting. Aust. N. Z. J. Stat. 20 0 0;42:205–23. [25] Dixon LCW, Szego GP. The global optimization problem: an introduction. Towards Global Optim. 1978;2:1–15. [26] Surjanovic S., Bingham D.. Virtual library of simulation experiments: test functions and datasets. Retrieved September 7, 2014, from http://www.sfu.ca/ ∼ssurjano. [27] Bucher C. Computational analysis of randomness in structural mechanics. Structures and Infrastructures. Book Series, 3. CRC Press; 2009. [28] Ishigami T, Homma T. An importance quantification technique in uncertainty analysis for computer models. In: Proceedings of ISUMA, first international symposium on uncertainty modelling and analysis; 1990. p. 398–403. [29] Tüfekci P. Prediction of full load electrical power output of a base load operated combined cycle power plant using machine learning methods. Int. J. Electr. Power Energ. Syst. 2014;60:126–40. [30] Ghasemi H, Brighenti R, Zhuang X, Muthu J, Rabczuk T. Optimum fiber content and distribution in fiber-reinforced solids using a reliability and NURBS based sequential optimization approach. Struct. Multidiscip. O. 2015;51(1):99–112. [31] Ghasemi H, Rafiee R, Zhuang X, Muthu J, Rabczuk T. Uncertainties propagation in metamodel-based probabilistic optimization of CNT/polymer composite structure using stochastic multi-scale modeling. Comp. Mater. Sci. 2014;85:295–305. [32] Wu D, Yu L. Torsional vibrations of a cylindrical foundation embedded in a saturated poroelastic half-space. Front. Struct. Civ. Eng. 2015;9(2):194–202. [33] Quayum Md S, Zhuang X, Rabczuk T. Computational model generation and RVE design of self-healing concrete. Front. Struct. Civ. Eng. 2015;9(4):383–96.