Chapter 15
Design of experiments for uncertainty quantification based on polynomial chaos expansion metamodels

Subhrajit Dutta¹,², Amir H. Gandomi³,⁴

¹ Assistant Professor, Department of Civil Engineering, National Institute of Technology Silchar, Silchar, Assam, India; ² National Institute of Technology Silchar, Department of Civil Engineering, Silchar, Assam, India; ³ Professor, University of Technology Sydney, Faculty of Engineering and IT, Ultimo, Australia; ⁴ Stevens Institute of Technology, School of Business, Hoboken, NJ, United States
Chapter points
- This chapter gives an overview of the experimental designs used to construct metamodels for uncertainty quantification of complex systems.
- Polynomial chaos expansion (PCE) is adopted as the metamodeling technique.
- The accuracy of the PCE metamodel relative to the true computational model is estimated using several error measures.
- Numerical examples based on uncertainty quantification of structural systems are implemented with increasing levels of complexity.
Handbook of Probabilistic Models. https://doi.org/10.1016/B978-0-12-816514-0.00015-1
Copyright © 2020 Elsevier Inc. All rights reserved.

1. Introduction

Uncertainties in real-life engineering problems are unavoidable. As an engineer or a scientist, it is important to identify the various sources of uncertainty, characterize them, and finally propagate them through the system or model to obtain meaningful statistical estimates of the responses under consideration. To this end, uncertainty quantification (UQ) frameworks are developed to tackle uncertainties in complex systems probabilistically. In most cases, a computational/numerical (say, finite element) model is used to evaluate the system responses. However, for large-scale systems, UQ may become prohibitive when many runs of a computationally expensive-to-evaluate model are required (Dutta et al., 2017). In this context, surrogate models, or metamodels, have gained popularity in the recent past. A computationally inexpensive surrogate model mathematically approximates the original numerical model, thereby reducing the overall computational effort in UQ problems. Popular metamodels include polynomial chaos expansion (PCE), Kriging or Gaussian process modeling, support vector regression (SVR), and radial basis functions (RBF). Here, PCE is used as the metamodeling tool because it has been found to be one of the most efficient methods for computing the stochastic responses of complex engineering systems (Dutta et al., 2018; Blatman and Sudret, 2011a; Dutta and Gandomi, 2019).

In general, the construction of a PCE metamodel consists of the following steps: choosing the design of experiments (DoE) to generate the training data set; building the metamodel, i.e., computing the metamodel parameters from the training samples; assessing the quality of the built metamodel; and finally evaluating statistical estimates for the system responses of interest using the metamodel. PCE metamodels can be built using intrusive or nonintrusive approaches (Dutta et al., 2018). Here, a nonintrusive approach based on least-square minimization is adopted, in which the computational/numerical model is treated as a black-box function. To compute the metamodel parameters (the PCE coefficients) by the least-square technique, a DoE of the input random variables is performed. In computer experiments, the input (random) variables are sampled from a given probability distribution to investigate the system responses. For correlated random variables, joint probability distributions are required to characterize the input uncertainty.
The accuracy of any metamodel is greatly influenced by the sampling technique used to generate the experimental designs (EDs) of the input random variables. Studies on the influence of EDs on metamodel predictions are reported by Goel et al. (2008) and Simpson et al. (2001). In this line, researchers have suggested the use of space-filling sampling schemes, wherein the design space of the input random variables is filled according to some criterion. Two broad categories of space-filling criteria exist: uniformity-based criteria and distance-based criteria. A uniformity-based design is one in which the number of samples in a subspace of the domain is proportional to the volume of that subspace. Distance-based criteria, on the other hand, sample by maximizing the minimum distance between sample points (maximin) or by minimizing the maximum distance (minimax). In this chapter, some of the well-known space-filling DoEs are reviewed in the context of metamodel (PCE) construction. The performance of these DoEs in PCE-based UQ problems is compared using analytical and numerical benchmark problems.

This chapter is organized as follows. In Section 2, the basic concept of PCE is introduced. In Section 3, some of the well-known DoEs are reviewed. In Section 4, a comparative study of the different DoEs is performed on benchmark problems of increasing computational complexity. Section 5 summarizes the chapter.
2. Polynomial chaos expansions

2.1 Basic formulation

Let us consider a physical system that can be idealized by a computational model M. Consider a vector of random variables x with support D_x, described by the marginal/joint probability density function (PDF) f_x. Assuming the physical model response Y = M(x) has finite variance, i.e.,

E[Y²] = ∫_{D_x} M²(x) f_x(x) dx < +∞   (15.1)

the PCE of M(x) is defined as (Ghanem, 1991; Soize and Ghanem, 2011)

Y = M(x) = Σ_α y_α ψ_α(x)   (15.2)
where the ψ_α are multivariate orthonormal polynomials, α is a multi-index that maps each multivariate polynomial ψ_α to its basis coordinates, and the y_α are the deterministic PCE coefficients. In PCE, the model uncertainty is characterized in a decoupled form: the random basis functions carry the randomness, while the deterministic coefficients are given by the expansion in Eq. (15.2). A square-integrable random variable, random vector, or random process can be written as a mean-square convergent series in random orthonormal polynomial bases, known as PCE for Hermite bases and generalized PCE for other bases. For Hermite bases, the random variables should be normally distributed; nonnormal random variables must first be mapped to the standard normal space using suitable transformation schemes. For practical purposes, the expansion of Y must be truncated to a limited number of polynomial terms. For M random variables, the complete N-order PCE is defined over the set of all multidimensional Hermite polynomials ψ whose degree does not exceed N (Ghanem, 1991):

Y(x₁, x₂, ..., x_M) = Σ_{α=0}^{P−1} y_α ψ_α(x)   (15.3)
where P is the number of terms whose degree does not exceed N, such that

P = Σ_{k=0}^{N} C_{M+k−1}^{k} = (M + N)! / (M! N!)   (15.4)
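The size of the truncated basis in Eq. (15.4) grows rapidly with the input dimension M and order N, which can be checked with a few lines of Python (an illustrative sketch; the chapter's own computations use MATLAB):

```python
from math import comb, factorial

def pce_num_terms(M: int, N: int) -> int:
    """Number of terms P in a total-degree-N PCE of M variables, Eq. (15.4)."""
    return comb(M + N, N)  # equals (M + N)! / (M! * N!)

# The truncated basis grows quickly with dimension and order:
print(pce_num_terms(3, 3))   # e.g., M = 3, N = 3 -> 20 terms
print(pce_num_terms(10, 3))  # e.g., M = 10, N = 3 -> 286 terms
```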
The key point in characterizing uncertainty by PCE lies in the determination of its coefficients, which can be achieved by techniques such as the Galerkin method, spectral projection (e.g., Monte Carlo sampling, importance sampling, Gauss quadrature, Smolyak's sparse tensorization), or collocation (Blatman and Sudret, 2011a; Soize and Ghanem, 2011). Recent studies on PCE have shown that nonintrusive methods are computationally efficient for coefficient determination (Blatman and Sudret, 2011a). Therefore, the nonintrusive method is discussed here, considering its robustness and efficiency.
2.2 Polynomial chaos expansion coefficient computation

In general, the PCE coefficients y_α are computed using two classes of approaches: intrusive (e.g., the Galerkin scheme) and nonintrusive (e.g., projection, least-square regression). Here, the focus is on least-square methods, known in statistics as regression. The term "nonintrusive" indicates that the chaos coefficients are evaluated from a set of input realizations x = {x₁, x₂, ..., x_M}, referred to as the experimental design (ED). With Y = {M(x₁), M(x₂), ..., M(x_M)} denoting the corresponding outputs of the computational model, the set of coefficients y_α is calculated by minimizing the least-square residual of the polynomial approximation of the "true" computational model:

ŷ = argmin_{y_α} (1/M) Σ_{i=1}^{M} ( M(x_i) − Σ_{α=0}^{P−1} y_α ψ_α(x_i) )²   (15.5)
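The minimization in Eq. (15.5) reduces to an ordinary linear regression on the polynomial basis. A minimal one-dimensional Python sketch for a Gaussian input with probabilists' Hermite polynomials (the toy model and function names are illustrative, not from the chapter):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

def herm_norm(n, x):
    """Orthonormal probabilists' Hermite polynomial He_n(x) / sqrt(n!)."""
    c = np.zeros(n + 1); c[n] = 1.0
    return hermeval(x, c) / np.sqrt(factorial(n))

def fit_pce_1d(model, x_ed, degree):
    """Nonintrusive least-square PCE fit (Eq. 15.5) for one Gaussian input."""
    Psi = np.column_stack([herm_norm(n, x_ed) for n in range(degree + 1)])
    y_hat, *_ = np.linalg.lstsq(Psi, model(x_ed), rcond=None)
    return y_hat

rng = np.random.default_rng(0)
x_ed = rng.standard_normal(50)                  # experimental design
model = lambda x: 1.0 + 2.0 * x + 0.5 * x**2    # toy 'computational model'
coeffs = fit_pce_1d(model, x_ed, degree=3)
print(coeffs.round(3))  # coeffs[0] is the PCE estimate of the mean response
```

Since the toy model is itself a polynomial, the degree-3 PCE reproduces it exactly; in general the fit is only an approximation whose quality is assessed as in the next section.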
2.3 Polynomial chaos expansion model accuracy

Once the PCE metamodel is constructed, its accuracy must be assessed against the "true" computational model. Several statistical error estimators exist; an effective measure is the mean square error (MSE)

MSE = E[ (M(X) − M_PCE(X))² ]   (15.6)

where the validation set X = {x₁, x₂, ..., x_val} consists of samples of the input variables that do not belong to the ED used to construct the PCE metamodel M_PCE. A lower value of MSE corresponds to a more accurate metamodel. A variant of the MSE is the relative mean square error (RMSE)

RMSE = MSE / σ_Y²   (15.7)

with σ_Y² being the variance of the computational model response. In practice, a sufficiently large validation set (typically in the range of 10⁴ to 10⁵ samples) is needed to achieve a stable estimate of the MSE. This error estimate is computationally demanding except for analytical functions with closed-form expressions, as it requires computing model responses for a large sample set. To overcome this issue, an error estimate based on the ED used for PCE construction has been developed. In statistical learning, the leave-one-out error (Err_LOO), or its relative form ε_LOO, performs very well in terms of estimation bias and MSE (Blatman and Sudret, 2011a, 2011b). Statistically, ε_LOO gives a measure of the coefficient of determination (R² ≈ Q² ≈ 1 − ε_LOO), while Err_LOO can be related to the
well-known error estimator, the predicted residual sum of squares (PRESS). The lower the value of ε_LOO, the higher the prediction accuracy, and vice versa.
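For a least-square fit such as Eq. (15.5), ε_LOO can be computed from the ED alone, without refitting the model for each left-out point, using the diagonal of the hat matrix. A hedged Python sketch (the design matrix and response below are synthetic):

```python
import numpy as np

def loo_error(Psi, y):
    """Relative leave-one-out error eps_LOO for a least-square fit y ~ Psi @ c.

    Uses the closed-form LOO residual r_i / (1 - h_i), where h_i is the i-th
    diagonal entry of the hat matrix H = Psi (Psi^T Psi)^-1 Psi^T, so no
    refitting is required.
    """
    coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    h = np.einsum('ij,ji->i', Psi, np.linalg.pinv(Psi))  # diagonal of hat matrix
    resid = y - Psi @ coeffs
    return np.mean((resid / (1.0 - h)) ** 2) / np.var(y)

# Toy regression problem (made-up design matrix and noisy response)
rng = np.random.default_rng(1)
Psi = rng.standard_normal((30, 4))
y = Psi @ np.array([1.0, 2.0, 3.0, 4.0]) + 0.1 * rng.standard_normal(30)
print(loo_error(Psi, y))
```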
3. Design of experiments

3.1 Monte Carlo sampling

Monte Carlo sampling (MCS) is a popular method, developed by Metropolis and Ulam, for generating random samples of a random vector. Here, each sample value x_i is generated following the probability distributions of the input variables X_i so as to fill the design space. MCS uses a "pseudo" random number generator (PRNG) with the objective of space filling for the input variables under consideration. The term "pseudo" is used because computer systems use a deterministic formula to generate the sequence of random numbers. The steps used in MCS to generate N random samples from an input vector X are listed in Box 15.1.

MCS is a robust DoE approach; however, for practical problems its use is often inefficient. Some of the issues with MCS are: (1) the PRNG is reproducible and repeats after a long interval; (2) clustering occurs for small sample sizes; (3) uniform space filling often requires a large sample size, making the simulation computation intensive. To circumvent these issues, researchers have proposed variants of MCS such as quasi-Monte Carlo sampling (QMCS).

BOX 15.1 MCS scheme
Step 1. Define the input vector X, with the probability density function f_Xi of each variable
Step 2. Draw one random sample of each input variable (using the PRNG): {x₁, x₂, ..., xₙ}
Step 3. Feed these samples into the deterministic computational model M
Step 4. Calculate the output: y = M(x₁, x₂, ..., xₙ)
Step 5. Repeat from Step 2 for a large number (N) of samples
Step 6. The propagated random variable Y, with {y1, y2, ..., yN} as its random samples, is then characterized for uncertainty quantification

MATLAB built-in functions for generating pseudorandom numbers:
- rand(m,n) gives an m × n matrix of uniform random numbers
- normrnd(m,s) gives normal random numbers
- [exp/bino/nbin/gam/geo/logn/wbl]rnd(par1, par2, ...) for other types of random numbers

where m is the mean, s the standard deviation, and par1, par2 distribution parameters.
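The steps of Box 15.1 can be sketched in a few lines of Python (the toy model and input distributions here are illustrative assumptions, not from the chapter):

```python
import numpy as np

def mcs_propagate(model, dists, n_samples, seed=0):
    """Box 15.1 as code: draw N input samples and push them through the model.

    `dists` is a list of callables, one per input variable, each mapping an
    np.random.Generator and a sample count to an array of samples.
    """
    rng = np.random.default_rng(seed)
    X = np.column_stack([draw(rng, n_samples) for draw in dists])
    return np.array([model(x) for x in X])

# Toy model with one normal and one uniform input (names are illustrative)
dists = [
    lambda rng, n: rng.normal(0.0, 1.0, n),
    lambda rng, n: rng.uniform(-1.0, 1.0, n),
]
Y = mcs_propagate(lambda x: x[0] + x[1] ** 2, dists, n_samples=10_000)
print(Y.mean())  # should be near E[X1] + E[X2^2] = 0 + 1/3
```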
3.2 Quasi-Monte Carlo sampling

QMCS, also known as quasi-random low-discrepancy sequence (QRLDS) sampling, uses a deterministic sampling scheme to fill the space uniformly. QMCS quantifies uniformity in terms of discrepancy, i.e., closeness to the uniform distribution, and thereby avoids the localization/clustering of sampling points. The discrepancy-based error bound holds independently of the input dimensionality. The advantage of a deterministic sequence with low discrepancy is a faster convergence rate of random estimators compared with the standard MCS method. Several QRLD sequences, such as Halton, Hammersley, and Sobol, and their variants exist in the literature (Sobol, 1967; Halton, 1960). It is noteworthy that even though these methods inherently aim at space filling, they do not quantify space-filling measures during the process. Hence, other deterministic methods based on stratified space-filling criteria have been proposed.
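As an illustration, the Halton sequence can be generated from the radical-inverse function with one prime base per dimension. A minimal sketch (not the chapter's implementation):

```python
import numpy as np

def halton(n_samples, dim):
    """Quasi-random low-discrepancy Halton sequence on [0, 1)^dim.

    Uses the radical-inverse function with one prime base per dimension.
    """
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    def radical_inverse(i, base):
        result, f = 0.0, 1.0 / base
        while i > 0:
            result += f * (i % base)
            i //= base
            f /= base
        return result
    return np.array([[radical_inverse(i + 1, primes[d]) for d in range(dim)]
                     for i in range(n_samples)])

pts = halton(100, 2)  # 100 well-spread points in the unit square
```

Unlike PRNG output, the sequence is fully deterministic: rerunning it always yields the same points, and adding points refines the filling rather than clustering it.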
3.3 Latin hypercube sampling

The MCS and QMCS techniques discussed in the preceding sections are powerful and robust for quantifying uncertainty, albeit at a high computational cost. Latin hypercube sampling (LHS) is a stratified sampling scheme used to reduce the number of simulations needed to quantify response uncertainty. In this ED method, the input space is partitioned into different "strata," and a representative value is selected from each stratum. The representative values are then combined, checking that no stratum is repeated over the complete simulation. The steps used in LHS to generate random samples from an input vector X are listed in Box 15.2.
3.4 Importance sampling

Importance sampling (IS) is a popular variance reduction technique that uses additional a priori information about the problem at hand. The basic idea of IS is to sample only in the region of interest. For example, when estimating a low probability of failure (reliability), the sampling region of interest is close to the failure/safe boundary. In the IS technique, an expectation with respect to the target density function f_X(x) is approximated by a weighted average of random samples drawn from another distribution h_V(x), termed the "importance sampling" density function. The selection of the importance sampling function is crucial to produce a good estimate of the uncertainty in the system responses. An efficient importance sampling function h_V(·) should have the following properties: (1) h_V(·) should be positive wherever the target distribution is nonzero; (2) h_V(·) should be close in shape to |f_X(·)|; (3) h_V(·) must be simple to evaluate for any random sample. Some of the features of the IS scheme of experimental design include:
BOX 15.2 LHS scheme
Step 1. Partition the input sample space of each random variable (RV) into L ranges of equal probability 1/L. (It is not strictly necessary to divide the domain into equal-probability ranges.)
Step 2. Generate one representative random sample from each range. Sometimes the mid-value is used instead of a random sample from the range.
Step 3. Randomly select one value from the L values of each RV to get the first sample s₁.
Step 4. Randomly select one value from the remaining L − 1 values of each RV to get the second sample s₂, and so on up to the Lth sample s_L.
Step 5. Repeat Steps 1 to 4 for all the RVs.
Step 6. The rest of the sampling procedure is the same as in MCS.

MATLAB built-in functions for LHS design:
- X = lhsdesign(L,K) gives L random samples of each of X₁, X₂, ..., X_K in an L × K matrix
- X = lhsdesign(L,K,'smooth','off') gives the median value of each stratum
- X = lhsdesign(L,K,'iterations',J) runs the design optimization for J iterations
- lhsnorm generates Latin hypercube samples from a multivariate Gaussian distribution
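The stratify-then-shuffle idea of Box 15.2 can be sketched in Python (a basic LHS on the unit hypercube; the marginal transforms to the actual input distributions are omitted):

```python
import numpy as np

def latin_hypercube(n_samples, dim, seed=0):
    """Basic LHS on [0, 1)^dim: one sample per equal-probability stratum per
    dimension, with strata randomly paired across dimensions."""
    rng = np.random.default_rng(seed)
    # One jittered point inside each of the n strata, per dimension
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, dim))) / n_samples
    # Shuffle the strata independently in each dimension
    for d in range(dim):
        u[:, d] = rng.permutation(u[:, d])
    return u

pts = latin_hypercube(10, 2)
# Each column contains exactly one point per stratum [k/10, (k+1)/10)
```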
- The sampling scheme has a far lower variance than MCS.
- The number of IS random samples needed is on the order of 10², compared with 10⁵ to 10⁶ MCS samples.
- The estimate is not affected by
  - the distribution of the input RVs,
  - the correlation among the RVs.
- Unlike MCS, sampling with correlation does not enter the scheme (as long as f_X(·) is available).
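A minimal sketch of IS for a small tail probability, using a standard normal target and an importance density shifted to the failure boundary (the example problem is illustrative, not from the chapter):

```python
import numpy as np
from math import erf, sqrt

def norm_pdf(x, mu=0.0):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

def failure_prob_is(threshold, n=1000, seed=0):
    """Importance-sampling estimate of P[X > threshold] for X ~ N(0, 1).

    The IS density h_V is a standard normal shifted to the failure boundary,
    so most samples land in the region of interest.
    """
    rng = np.random.default_rng(seed)
    v = rng.normal(threshold, 1.0, n)            # samples from h_V
    w = norm_pdf(v) / norm_pdf(v, mu=threshold)  # importance weights f_X / h_V
    return np.mean((v > threshold) * w)

exact = 0.5 * (1 - erf(4.0 / sqrt(2)))  # P[X > 4], about 3.2e-5
print(failure_prob_is(4.0), exact)
```

A crude MCS estimate of this probability would need millions of samples to see even a handful of failures; the shifted density recovers it with about a thousand.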
4. Examples

In this section, PCE-based metamodels are built for the uncertainty quantification (propagation) of the responses of one analytical problem and two numerical examples from the literature. A nonintrusive approach is adopted to find the PCE coefficients using designs of experiments.
4.1 Analytical problem: Ishigami function

The Ishigami function is a well-known benchmark problem for uncertainty quantification. It is a three-dimensional, nonmonotonic, and highly nonlinear analytical function, given by

f(x) = sin(x₁) + a sin²(x₂) + b x₃⁴ sin(x₁)   (15.8)
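For reference, Eq. (15.8) and a crude Monte Carlo check of its mean (for these coefficients the analytical mean is a/2 = 3.5) can be written as:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami benchmark function, Eq. (15.8)."""
    return np.sin(x[0]) + a * np.sin(x[1]) ** 2 + b * x[2] ** 4 * np.sin(x[0])

# Inputs are i.i.d. uniform on (-pi, pi)
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, (5000, 3))
Y = ishigami(X.T)
print(Y.mean())  # should be near the analytical mean a/2 = 3.5
```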
FIGURE 15.1 Error estimates for various DoE: (A) Mean square error; (B) Leave-one-out error. DoE, design of experiment; IS, importance sampling; LHS, Latin hypercube sampling; MCS, Monte Carlo sampling; MSE, mean square error.
The input vector consists of three independent and identically distributed (i.i.d.) uniform random variables X_i ~ U(−π, π). The coefficient values a = 7 and b = 0.1 are chosen for this example. The performances of the various sampling strategies are compared with respect to the error estimates introduced in the previous section. For validation purposes, an MCS-based approach with a validation set of size N_v = 1000 is considered. Fig. 15.1A and B show the MSE and ε_LOO calculated for ED sizes from N = 50 to 200. As expected, the LHS ED shows a decrease in both MSE and ε_LOO across all ED sizes. All the other design schemes also show a consistent decrease in the error estimates, except MCS, which shows steady values. In addition, LHS generally shows a more stable behavior, with smaller variability between repetitions, especially as the size of the ED increases. In general, metamodel accuracy is higher for larger EDs. Comparable results are obtained with IS and the Sobol sequence.
4.2 Numerical problems: finite element models 4.2.1 Truss structure A simply supported truss structure shown in Fig. 15.2 is considered next. This planar truss has 23 members and 23 degrees of freedom with horizontal and
FIGURE 15.2 A planar truss structure with 23 members.
vertical displacements along the global directions. The input random variables considered in this case are the vertical loads P, the cross-sectional areas A, and the Young's moduli E. Ten independent and identically distributed random variables (M = 10) are considered, whose distributions and parameters are given in Table 15.1 (Fajraoui et al., 2017).

TABLE 15.1 Characterization of input random variables.
RV           | Distribution | Mean     | CoV
A1 (m²)      | Lognormal    | 0.002    | 0.10
A2 (m²)      | Lognormal    | 0.001    | 0.10
E1, E2 (MPa) | Lognormal    | 2.1 × 10⁵ | 0.10
P1–P6 (kN)   | Gumbel       | 50       | 0.15

In the underlying deterministic problem, the truss system response considered is the deflection V (Fig. 15.2). This deflection is computed using a finite element MATLAB code assuming linear elastic and isotropic material properties. A PCE approach with degree in the range N = 3 to 10 is chosen. The accuracy of the PCE is assessed against an MCS-based validation set of size N_v = 250. Fig. 15.3A and B depict the performance of the different sampling methods in terms of the MSE and ε_LOO for ED sizes varying from N = 50 to 200. This comparison is made with respect to the MCS-based validation set. The error estimates show that the Latin hypercube design outperforms the other sampling schemes. Again, the PCE metamodel accuracy improves with larger ED sizes.
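Sampling the input variables of Table 15.1 requires converting the tabulated mean and coefficient of variation into distribution parameters. A hedged Python sketch of this conversion (the chapter itself uses MATLAB):

```python
import numpy as np

def lognormal_from_mean_cov(mean, cov, rng, n):
    """Lognormal samples parameterized by mean and coefficient of variation."""
    sigma = np.sqrt(np.log(1.0 + cov ** 2))
    mu = np.log(mean) - 0.5 * sigma ** 2
    return rng.lognormal(mu, sigma, n)

def gumbel_from_mean_cov(mean, cov, rng, n):
    """Gumbel (extreme type I) samples parameterized by mean and CoV."""
    std = mean * cov
    beta = std * np.sqrt(6.0) / np.pi      # scale
    mu = mean - np.euler_gamma * beta      # location
    return rng.gumbel(mu, beta, n)

rng = np.random.default_rng(0)
n = 100_000
A1 = lognormal_from_mean_cov(0.002, 0.10, rng, n)  # area A1 of Table 15.1
P1 = gumbel_from_mean_cov(50.0, 0.15, rng, n)      # load P1 of Table 15.1
print(A1.mean(), P1.mean())  # should be close to 0.002 and 50
```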
FIGURE 15.3 Error estimates for various DoE: (A) Mean square error; (B) Leave-one-out error. DoE, design of experiment; IS, importance sampling; LHS, Latin hypercube sampling; MCS, Monte Carlo sampling; MSE, mean square error.
4.2.2 Tensile membrane structure

The next example is a real conic tensile membrane structure, adopted from recent studies on tensile membrane structure (TMS) design optimization under uncertainty (Dutta et al., 2017, 2018). The details of this structure are given in Fig. 15.4. Additional details for the analysis are:
- The membrane yarn directions are warp along the radial direction and fill along the circumferential direction.
- The membrane thickness is 1.0 mm.
- Material properties of the membrane: modulus of elasticity E = 600.0 kN/m; Poisson's ratio ν = 0.4; design/nominal yield stress f_y = 40.0 kN/m.
- Design wind load intensity W_n = 1.0 kN/m².
The membrane is discretized with constant-strain triangular (CST) elements that have three translational degrees of freedom per node {UX,UY,UZ} along the global {X,Y,Z} directions. Proper symmetric boundary conditions are applied for the quarter part of the membrane in addition to the support boundary conditions. The finite element analysis of TMS is implemented in MATLAB to obtain the nodal deformations in global directions. The initial prestress is applied along the element warp (radial) and fill (circumferential) directions as shown in Fig. 15.4A. The quarter part of TMS meshed with triangular CST elements is shown in Fig. 15.4D. The flexible membrane remains in stable condition because of the existing initial prestress applied along yarn (warp and fill) directions. Membrane structures are primarily designed to resist the action of gusty winds. A TMS
FIGURE 15.4 (A) Frame-supported tensile membrane structure; (B) side elevation; (C) plan; (D) finite element model.
subjected to a design wind load needs to remain in a stable shape and avoid wrinkling and tearing failures. Such failures are governed by the membrane principal stress values. For optimal performance of these structures, the membrane deformation must be minimized (Dutta et al., 2018). Hence, the structural response parameters of interest for the optimum design of the TMS are the absolute maximum principal stress (p1_max), to ensure no tearing; the absolute minimum principal stress (p2_min), to ensure no wrinkling/slackness; and the total nodal deformation (f_d). These are defined as

f_d = Σ_j d_j = Σ_j √(U²_Xj + U²_Yj + U²_Zj),  j = 1, 2, ..., J_node   (15.9)

p1_max = max_l p1_l,  l = 1, 2, ..., L_elem   (15.10)

p2_min = min_l p2_l,  l = 1, 2, ..., L_elem   (15.11)

where L_elem and J_node are the total numbers of elements and nodes, respectively, in the finite element model of the TMS, and U_Xj is the jth nodal displacement along the X direction. Owing to the inherent uncertainty in wind load, there is a definite need to quantify the uncertainty that propagates to the aforementioned structural responses. For this uncertainty quantification, the wind load intensity W, which needs to be characterized probabilistically, is the only input random variable considered here. The cumulative distribution function (CDF) for W is obtained from past statistical studies: W follows an extreme type I/Gumbel distribution with a coefficient of variation V_W = 0.37 and a bias factor λ_W = 0.78 (Ellingwood and Tekie, 1999).

The PCE-based metamodel (as in Eq. 15.3) is built to propagate the uncertainty. For a selected set of initial prestress values, the PCE metamodel is constructed using the experimental designs of the wind load intensity (i.e., a single random variable, M = 1). Here, a third-order PCE (N = 3) is constructed for each response parameter using the ordinary least-square technique, and the PCE coefficients are obtained. The PCE metamodels created this way have to be verified against results from the more robust but computation-intensive MCS. In this case, an MCS-based validation set of size N_v = 200 is chosen. The PCE metamodel performance of the different experimental designs is compared in Fig. 15.5A and B. Comparable results are observed for LHS and the Sobol sequence, as opposed to MCS and IS. Next, the density functions are plotted to compare the system responses obtained from the PCE-based metamodel with their "true" computational model counterparts. Fig. 15.6A and B show comparisons of the PCE-based PDF with the "true" PDF (obtained from a validation set of 200 MCS samples) for f_d and
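Eqs. (15.9) to (15.11) amount to simple reductions over the finite element results. A Python sketch with made-up displacement and stress arrays:

```python
import numpy as np

def response_params(U, p1, p2):
    """Structural response parameters of Eqs. (15.9) to (15.11).

    U  : (J_node, 3) array of nodal displacements [UX, UY, UZ]
    p1 : (L_elem,) array of element maximum principal stresses
    p2 : (L_elem,) array of element minimum principal stresses
    """
    f_d = np.sqrt((U ** 2).sum(axis=1)).sum()  # total nodal deformation
    return f_d, p1.max(), p2.min()

# Tiny synthetic example (displacements and stresses are made up)
U = np.array([[3.0, 4.0, 0.0], [0.0, 0.0, 1.0]])
f_d, p1_max, p2_min = response_params(U, np.array([10.0, 12.0]), np.array([2.0, 1.5]))
print(f_d, p1_max, p2_min)  # 6.0 12.0 1.5
```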
FIGURE 15.5 Error estimates for various DoE: (A) Mean square error; (B) Leave-one-out error. DoE, design of experiment; IS, importance sampling; LHS, Latin hypercube sampling; MCS, Monte Carlo sampling; MSE, mean square error.
FIGURE 15.6 Comparison of PCE metamodel with the “true” model PDF. MCS, Monte Carlo sampling; PCE, polynomial chaos expansion; PDF, probability density function.
p1_max, respectively, for a particular combination of initial prestress. In both cases, the PCE-based metamodel provides satisfactory accuracy, with a leave-one-out error ε_LOO of 0.01461 for f_d and 0.03166 for p1_max.
5. Summary

This chapter introduced the DoE schemes used for quantifying uncertainties in physical system responses. For uncertainty quantification, a metamodel-based approach using PCE was selected, with a nonintrusive least-square technique used to determine the PCE parameters/coefficients. For verification of the PCE metamodels, a few widely used error estimates were explained. State-of-the-art experimental design schemes were introduced, and their computer implementations and features were discussed. Comparative studies of the well-known sampling strategies were performed on numerical/analytical models with varying input dimensionality and computational complexity.
References

Blatman, G., Sudret, B., 2011a. Adaptive sparse polynomial chaos expansion based on least angle regression. Journal of Computational Physics 230 (6), 2345–2367.
Blatman, G., Sudret, B., 2011b. An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis. Probabilistic Engineering Mechanics 25 (2), 183–197.
Dutta, S., Ghosh, S., Inamdar, M.M., 2017. Reliability-based design optimization of frame-supported tensile membrane structures. ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering 3 (2), G4016001.
Dutta, S., Ghosh, S., Inamdar, M.M., 2018. Optimisation of tensile membrane structures under uncertain wind loads using PCE and kriging based metamodels. Structural and Multidisciplinary Optimization 57 (3), 1149–1161.
Dutta, S., Gandomi, A.H., 2019. Surrogate model-driven evolutionary algorithms: theory and applications. In: Banzhaf, W., et al. (Eds.), Evolution in Action: Past, Present, and Future. A Festschrift in Honor of Erik Goodman's 75th Birthday. Springer-Verlag.
Ellingwood, B.R., Tekie, P.B., 1999. Wind load statistics for probability-based structural design. Journal of Structural Engineering, ASCE 125 (4), 453–463.
Fajraoui, N., Marelli, S., Sudret, B., 2017. Sequential design of experiment for sparse polynomial chaos expansions. SIAM/ASA Journal on Uncertainty Quantification 5 (1), 1061–1085.
Ghanem, R., Spanos, P.D., 1991. Stochastic Finite Elements: A Spectral Approach. Springer-Verlag, Berlin, Germany.
Goel, T., Haftka, R.T., Shyy, W., Watson, L.T., 2008. Pitfalls of using a single criterion for selecting experimental designs. International Journal for Numerical Methods in Engineering 75, 125–155.
Halton, J.H., 1960. On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals. Numerische Mathematik 2 (1), 84–90.
Simpson, T.W., Poplinski, J., Koch, P.N., Allen, J.K., 2001. Metamodels for computer-based engineering design: survey and recommendations. Engineering Computations 17, 129–150.
Sobol, I.M., 1967. On the distribution of points in a cube and the approximate evaluation of integrals. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki 7 (4), 784–802.
Soize, C., Ghanem, R., 2011. Physical systems with random uncertainties: chaos representations with arbitrary probability measure. SIAM Journal on Scientific Computing 26 (2), 395–410.