An evolutionary nested sampling algorithm for Bayesian model updating and model selection using modal measurement

Engineering Structures 140 (2017) 298–307

Feng Qian *, Wei Zheng
Department of Civil and Environmental Engineering, Jackson State University, 1400 John R. Lynch St, Jackson, MS 39217, USA
* Corresponding author. E-mail address: [email protected] (F. Qian).
http://dx.doi.org/10.1016/j.engstruct.2017.02.048

Article history: Received 23 October 2015; Revised 3 September 2016; Accepted 21 February 2017

Keywords: Nested sampling Bayesian inference Model updating Model selection Evolutionary algorithm Modal measurement

Abstract: Nested sampling (NS) is a highly efficient and easily implemented sampling algorithm that has been successfully incorporated into Bayesian inference for model updating and model selection. The key step of this algorithm is to propose, at each iteration, a new sample whose likelihood exceeds that of the lowest-likelihood sample evaluated in the previous iteration. This process, also regarded as a constrained sampling step, has a significant impact on the efficiency of the algorithm. This paper presents an evolutionary nested sampling (ENS) algorithm that promotes the proposal of effective samples for Bayesian model updating and model selection by introducing evolutionary operators into standard NS. Instead of randomly drawing new samples from the prior space, the ENS algorithm proposes new samples from previously evaluated samples in light of their likelihood values, without any evaluation of gradients. The main contribution of the presented algorithm is to greatly improve the sampling speed in the constrained sampling step by reusing previous samples. The performance of the proposed ENS algorithm for model updating and model selection is examined through two numerical examples. Published by Elsevier Ltd.

1. Introduction

Model updating techniques fall mainly into two categories: deterministic and probabilistic methods. Deterministic methods usually cast model updating as an optimization problem, in which an objective function based on the discrepancy between model predictions and measured data is defined and minimized to improve the model plausibility. Optimization algorithms, including the genetic algorithm [1,2], particle swarm optimization [3], the simulated annealing algorithm and their derivatives [4], have been applied to model updating. However, deterministic methods can only give one optimal solution, while model updating, like most ill-posed inverse problems, may have more than one potential solution due to insufficient observations, contaminated data or lack of prior knowledge [5,6]. Bayesian probabilistic inference has been demonstrated to be relatively promising in handling the practical difficulties of model updating problems [5]. One challenge in implementing the Bayesian probabilistic framework is evaluating the multidimensional integrals over the unknown parameter space, which are analytically intractable in general [7]. Thus, stochastic simulation methods such as importance sampling [8], Gibbs sampling [9], Markov chain Monte Carlo (MCMC) [10] and their derivative algorithms [11–13] have been proposed

to avoid computing these complex integrals. However, most sampling methods fail to obtain the model evidence that is critical to evaluating model plausibility, and are unable to efficiently propose samples in the high-probability region of the posterior distribution for high-dimensional problems. The transitional Markov chain Monte Carlo (TMCMC) algorithm proposed by Ching and Chen [11] has been proven to outperform other approaches thanks to its ability to estimate model evidence and its smooth convergence. However, it has potential problems in tackling higher-dimensional parameter spaces [12], since the number of intermediate stages grows with the need for more samples, while the accuracy of the estimators may decrease with increasing parameter dimension [11]. The hybrid Monte Carlo (HMC) method is capable of effectively solving higher-dimensional problems under the guidance of the gradient of the potential energy [12,13]. Nevertheless, numerically estimating the gradient over all unknown parameters of a large complex system is computationally expensive and inaccurate, even infeasible in problems involving a large sample size or streaming data [14]. Furthermore, the leapfrog algorithm involved in evaluating the Hamilton equations does not always conserve the energy of the system when the step size is large [15], leading to instability of the simulated Hamiltonian dynamics and thereby causing the chain of samples to be trapped in unexpected regions. In comparison, NS, developed by Skilling [16], is a highly efficient and more convenient sampling algorithm. NS primarily aims to evaluate the Bayesian evidence by converting the high-dimensional


integral into an easily evaluated one-dimensional integral [15,16]. As a by-product, the final set of samples of the NS algorithm can be further used to estimate the posterior distribution. It has been successfully applied to cosmology [17], finite element model updating and model selection [18], and subsurface flow problems [15,19]. There is still an increasing demand for the NS algorithm in statistical inference and Bayesian model selection because of its easy and convenient implementation. The main problem of the standard NS algorithm lies in the constrained step, in which a new sample with higher likelihood is proposed to replace the sample with the lowest likelihood. This step becomes even more challenging after several iterations, once the likelihood has reached a high value and the parameter space has shrunk to a very sharp region. To address this problem, Elsheikh [15] incorporated HMC into the standard NS algorithm. However, the estimation of the energy gradient required by the HMC method is computationally expensive and inaccurate, as mentioned above. Therefore, more efficient and feasible methods still need to be explored to improve the standard NS algorithm by effectively proposing valid samples. This paper presents an ENS algorithm for Bayesian model updating and model selection. ENS is a combination of an evolutionary algorithm (EA) and the standard NS method. Instead of randomly proposing new samples from the prior distribution as in standard NS, ENS generates new samples from previous samples by evolutionary operations, i.e. selection, crossover and mutation. The samples evaluated in former steps are ranked by their likelihood values and probabilistically selected to evolve toward higher regions of the posterior distribution. The advantage of the proposed ENS algorithm over standard NS is that it makes full use of previous samples to draw new ones without any evaluation of gradients. Furthermore, numerical results show that ENS can deal with multimodal problems given sufficient initial samples. The sampling efficiencies of ENS and standard NS are compared, and the performance of the presented ENS algorithm for model updating and model selection is examined on two numerical examples: a clamped-clamped beam and a truss structure.

2. Theory of finite element model updating

The essence of finite element model updating is to determine or calibrate the unknown structural parameters so that the predictions of a numerical model match field measurements as closely as possible. Such parameter calibration can be realized by minimizing the difference between the field measurements and the predictions of the theoretical model. In practice, structural modal characteristics (e.g. modal frequencies and mode shapes) can be identified from time history responses through stochastic subspace identification [20], the Hilbert-Huang transform [21] and empirical mode decomposition methods [22]. Let $D = \{\hat{\omega}_r^k, \hat{\phi}_r^k \in \mathbb{R}^{N_0};\ r = 1, \ldots, m;\ k = 1, \ldots, N_d\}$ denote the extracted modal data at the selected $N_0$ observation DOFs, where $\hat{\omega}_r^k$ and $\hat{\phi}_r^k$ are the $r$-th circular natural frequency and mode shape, $N_d$ is the number of available data sets, and $m$ is the number of observed modes. Suppose $\theta = \{\theta_0, \theta_1, \theta_2, \ldots, \theta_s\}$ is the unknown parameter vector characterizing the theoretical model of a real structure, where $s$ is the number of unknown structural parameters. The prediction of the parameterized model can be obtained from its eigenvalue equation and denoted as $\{\omega_r(\theta), \phi_r(\theta) \in \mathbb{R}^{N_0};\ r = 1, \ldots, m\}$. The differences between the measured modal data $D$ and the model predictions for the $r$-th modal frequency and mode shape can be respectively defined as [23,24]

$$J_{\omega_r}(\theta) = \frac{1}{N_d} \sum_{k=1}^{N_d} \frac{[\omega_r(\theta) - \hat{\omega}_r^k]^2}{[\hat{\omega}_r^k]^2} \quad \text{and} \quad J_{\phi_r}(\theta) = \frac{1}{N_d} \sum_{k=1}^{N_d} \frac{\|\beta_r^k \phi_r(\theta) - \hat{\phi}_r^k\|^2}{\|\hat{\phi}_r^k\|_{N_0}^2} \tag{1}$$


where $\|\cdot\|^2$ denotes the squared Euclidean norm and $\|\cdot\|_{N_0}^2 = \|\cdot\|^2 / N_0$. $\beta_r^k = \hat{\phi}_r^{k\mathrm{T}} \phi_r(\theta) / \|\phi_r(\theta)\|^2$ is a normalizing factor that brings $\beta_r^k \phi_r(\theta)$ closest to the measured mode shape $\hat{\phi}_r^k$. $J_{\omega_r}(\theta)$ and $J_{\phi_r}(\theta)$ in Eq. (1) give the mean errors of the $r$-th frequency and mode shape between the measured modal data and the model predictions. The overall error is then obtained from the weighted combination of the contributions of all $m$ modal frequencies and mode shapes:

$$J(\theta) = \sum_{r=1}^{m} J_{\omega_r}(\theta) + w \sum_{r=1}^{m} J_{\phi_r}(\theta) \tag{2}$$

where $w$ is a weighting factor that reflects the relative importance of the error contributed by the mode shapes. Goller et al. [25] studied the optimal selection of the weighting factors in model updating from the viewpoint of Bayesian model evidence.
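A minimal Python sketch of Eqs. (1) and (2) may make the objective concrete; the function name and array shapes are illustrative assumptions, not part of the original formulation:

```python
import numpy as np

def modal_objective(omega_pred, phi_pred, omega_meas, phi_meas, w=1.0):
    """Modal residual J(theta) of Eqs. (1)-(2) (sketch; shapes are assumptions).

    omega_pred: (m,) predicted frequencies      phi_pred: (m, N0) predicted shapes
    omega_meas: (Nd, m) measured frequencies    phi_meas: (Nd, m, N0) measured shapes
    """
    Nd, m = omega_meas.shape
    N0 = phi_pred.shape[1]
    J = 0.0
    for r in range(m):
        # Frequency residual of Eq. (1), averaged over the Nd data sets
        J_w = np.mean((omega_pred[r] - omega_meas[:, r]) ** 2 / omega_meas[:, r] ** 2)
        J_p = 0.0
        for k in range(Nd):
            phi_hat = phi_meas[k, r]
            # Scaling beta_r^k that brings the predicted shape closest to the measured one
            beta = (phi_hat @ phi_pred[r]) / (phi_pred[r] @ phi_pred[r])
            # Mode-shape residual; denominator uses the N0-normalized norm of Eq. (1)
            J_p += np.sum((beta * phi_pred[r] - phi_hat) ** 2) / ((phi_hat @ phi_hat) / N0)
        J += J_w + w * J_p / Nd
    return J  # overall weighted error of Eq. (2)
```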

3. Bayesian model updating and model selection

The main idea of applying the Bayesian approach to model updating is to obtain the posterior probability density function (PDF) $p(\theta|D, M_j)$ of the uncertain parameters $\theta$ for the model class $M_j$ based on the available measurement data $D$. A model class refers to a group of different models resulting from different possible model parameters $\theta$ due to the associated uncertainties. The key step is to establish the likelihood function, which describes the probability of obtaining the measurement $D$ from the model defined by $\theta$. The error between the model prediction and the measurement $D$ is usually assumed to be a zero-mean stationary normally distributed stochastic process with a shared standard deviation $\sigma$. Thus, the likelihood function can be expressed as the joint normal distribution of multiple independent variables:

$$p(D|\theta, M_j) = \frac{1}{(\sqrt{2\pi}\sigma)^{m(N_0+1)}} \exp\left(-\frac{J(\theta)}{2\sigma^2}\right) \tag{3}$$

The shared standard deviation $\sigma$ is also unknown and needs to be determined, extending the unknown parameter vector to $x = \{\theta_0, \theta_1, \theta_2, \ldots, \theta_s, \sigma\}^{\mathrm{T}} = \{\theta^{\mathrm{T}}, \sigma\}^{\mathrm{T}}$. Thus, the posterior PDF of the uncertain parameters can be obtained from Bayes' theorem [5] as

$$p(x|D, M_j) = \frac{p(D|x, M_j)\, p(x|M_j)}{p(D|M_j)} \tag{4}$$

where $p(D|x, M_j)$, $p(x|M_j)$ and $p(D|M_j)$ are the likelihood function, the prior PDF and the model evidence, respectively. The model evidence $p(D|M_j)$ is the marginal likelihood, which describes the quality of the model class specified by the unknown parameters. It can be determined as

$$p(D|M_j) = \int p(D|x, M_j)\, p(x|M_j)\, dx = \int \frac{1}{(\sqrt{2\pi}\sigma)^{m(N_0+1)}} \exp\left(-\frac{J(\theta)}{2\sigma^2}\right) p(x|M_j)\, dx \tag{5}$$

Commonly, the multidimensional integral (a summation for discrete models) over the whole space of the unknown parameter vector $x$ in Eq. (5) cannot be evaluated analytically because of its high dimensionality. Thus, the ENS method is elaborated in this paper to estimate the model evidence in Eq. (5) and to approximate the posterior PDF of the uncertain model parameters in Eq. (4) with the by-product samples.
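Because $J(\theta)$ enters Eq. (3) through $\exp(-J/(2\sigma^2))$, the likelihood is best evaluated in log space when $J(\theta)$ is large. A minimal sketch, assuming J_of_theta is a user-supplied callable such as the modal_objective sketch above:

```python
import numpy as np

def log_likelihood(x, J_of_theta, m, N0):
    """Logarithm of the likelihood in Eq. (3); x stacks (theta, sigma).

    Working in logs avoids the numerical underflow of exp(-J / (2 sigma^2)).
    """
    theta, sigma = x[:-1], x[-1]
    return (-m * (N0 + 1) * np.log(np.sqrt(2.0 * np.pi) * sigma)
            - J_of_theta(theta) / (2.0 * sigma ** 2))
```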


Because of the uncertainties associated with model classes, there may be other competitive model classes that more accurately represent the real system given the measurements. Bayesian model selection can then be adopted to determine the most plausible model class among a finite set of model classes $M = \{M_j: j = 1, 2, \ldots, N_M\}$ by comparing their posterior model probabilities. With the model evidence $p(D|M_j)$, the posterior probability of each model class $M_j$ given the data $D$ follows from Bayes' theorem [11,18]:

$$p(M_j|D, M) = \frac{p(D|M_j)\, P(M_j|M)}{\sum_{j=1}^{N_M} p(D|M_j)\, P(M_j|M)} \tag{6}$$

where $p(M_j|M)$ is the prior probability over the candidate model classes. Equal prior probabilities $p(M_j|M) = 1/N_M$ can be assigned to each model class if there is no prior knowledge.
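Since the evidences are obtained as log-values, Eq. (6) is best normalized in log space; a minimal sketch (the function name and interface are assumptions):

```python
import numpy as np

def model_posteriors(log_evidences, log_priors=None):
    """Posterior model probabilities of Eq. (6), computed in log space."""
    logZ = np.asarray(log_evidences, dtype=float)
    if log_priors is None:
        # Equal priors p(Mj|M) = 1/NM when no prior knowledge is available
        log_priors = np.full_like(logZ, -np.log(len(logZ)))
    s = logZ + log_priors
    s -= s.max()          # shift so the largest term is exp(0), avoiding overflow
    w = np.exp(s)
    return w / w.sum()
```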

The most plausible model can then be found by Bayesian selection at the model class level through comparison among the alternative candidate model classes.

4. Evolutionary nested sampling

This section first presents the principle of the standard NS algorithm and its steps for estimating model evidence. Three evolutionary operations, as well as how they are incorporated into standard nested sampling to effectively propose new samples based on previously evaluated samples and their likelihood values, are then introduced. A detailed description of the resulting ENS algorithm is given at the end of the section.

4.1. Standard nested sampling

Nested sampling, a Monte Carlo but not Markov chain sampling method [18], is essentially a process of forcing samples to gradually approach the target distribution under the constraint of increasing likelihood. In this process, it divides the prior volume with prior mass $X$ into a number of "equal mass" elements $dX = p(x|M)\,dx$ and sorts them by their corresponding likelihood $L(X) = p(D|x, M)$. The subscript $j$ is dropped from $M_j$ hereafter for notational convenience. The prior mass $X$ can be accumulated from its tiny elements $dX$ as below:

$$X(\lambda) = \int_{L(x) > \lambda} p(x|M)\, dx \tag{7}$$

where $\lambda \in [0, 1]$ can be interpreted as the constraining likelihood value. When $\lambda = 0$, the accumulated prior mass covers all likelihood values, i.e. $X = 1$; as $\lambda$ increases toward 1, the enclosed mass $X$ decreases from 1 to 0. Thus, the Bayesian evidence integral in Eq. (5) can be written as

$$Z = p(D|M) = \int p(D|x, M)\, p(x|M)\, dx = \int_0^1 L(X)\, dX \tag{8}$$

In this way, the calculation of the model evidence is transformed into a one-dimensional integral with a positive integrand. The standard NS algorithm assumes that $q$ samples $x_1, x_2, \ldots, x_q$ are drawn from the prior parameter space under the constraint that their likelihood values gradually increase, i.e. $L_1 < L_2 < \cdots < L_{q-1} < L_q$, so that their corresponding prior masses are in decreasing order, $1 > X_1 > X_2 > \cdots > X_{q-1} > X_q > 0$, as illustrated in Fig. 1. With the ordered prior masses $X_i$ and their corresponding likelihoods $L_i = L(X_i)$, the evidence in Eq. (8) can be estimated by any convenient numerical method, such as the trapezoidal rule:

$$Z = p(D|M) \approx \sum_{i=1}^{q} Z_i, \qquad Z_i = \frac{L_i}{2}\,(X_{i-1} - X_i) \tag{9}$$

The prior masses $X_i$ in the above equation can be readily attained from a set of samples $x_k$ $(k = 1, 2, \ldots, N)$ drawn from the prior distribution under the constraint $L(x_k) > L_{i-1}$. $L_i$ is then taken as the worst likelihood of all samples $x_k$ in the active set, i.e. $L_i = L(x_{\mathrm{worst}})$, where $x_{\mathrm{worst}}$ is the corresponding sample. The prior mass shrinkage factor $t_i = X_i / X_{i-1}$ is expected to be a large constant within $(0, 1)$; it follows the Beta distribution $P(t_i) = N t_i^{N-1}$, which is the probability distribution of the largest of $N$ random numbers drawn uniformly from $(0, 1)$ [16]. This distribution $t \sim \mathrm{Beta}(N, 1)$ has the properties $E(\log t) = -1/N$ and $\sigma(\log t) = 1/N$. Since the individual $t_i$ are independent, the prior mass is expected to shrink as $\log X_i \approx -(i \pm \sqrt{i})/N$; approximately, it can be estimated as $X_i = \exp(-i/N)$ [15,16]. In this context, standard nested sampling can be stated as Algorithm 1 [15,19].

Algorithm 1: Standard NS Algorithm
Initialize algorithm parameters: sample space of unknown parameters x, number of samples N, number of NS iterations q.
Draw N samples xk (k = 1, 2, . . ., N) from the prior distribution p(x|M) and evaluate the likelihood of each sample.
Start standard NS:
1  Set Z0 = 0, X0 = 1, Sactive = {x1, x2, . . ., xN}, Sposterior = {}
2  for i = 1, . . ., q do
3    Find the worst likelihood p(D|xworst, M) and its corresponding sample xworst
4    Set Li = p(D|xworst, M)
5    Set Xi = exp(-i/N)
6    Set Zi = Zi-1 + Li (Xi-1 - Xi)/2
7    Set Sposterior ← Sposterior ∪ {xworst, Li (Xi-1 - Xi)/2}
8    Draw a new sample xnew from the prior distribution p(x|M) and evaluate its likelihood p(D|xnew, M)
9    while p(D|xnew, M) ≤ Li do
10     resample xnew ~ p(x|M) and evaluate its likelihood p(D|xnew, M)
11   end while
12   Sactive ← (Sactive \ {xworst}) ∪ {xnew}
13 end for
14 Set Z = Zq + (Xq/N)(L(x1) + L(x2) + ... + L(xN))
15 Set Sposterior ← Sposterior ∪ {xk, (Xq/N) Lk} for all xk ∈ Sactive
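A compact Python transcription of Algorithm 1 may help, with like and prior_rvs as placeholder callables for an arbitrary problem; working with raw likelihoods rather than logs is a simplification of this sketch:

```python
import numpy as np

def standard_ns(like, prior_rvs, N=200, q=5000, seed=None):
    """Standard nested sampling (Algorithm 1) with rejection from the prior.

    like(x) returns the likelihood of a sample; prior_rvs(rng) draws one
    prior sample. Returns the evidence estimate Z and the final active set.
    """
    rng = np.random.default_rng(seed)
    active = [prior_rvs(rng) for _ in range(N)]     # initial active set
    L = np.array([like(x) for x in active])
    Z, X_prev = 0.0, 1.0
    for i in range(1, q + 1):
        worst = int(np.argmin(L))                   # lowest-likelihood sample
        Li = L[worst]
        Xi = np.exp(-i / N)                         # expected prior-mass shrinkage
        Z += Li * (X_prev - Xi) / 2.0               # trapezoidal increment, Eq. (9)
        X_prev = Xi
        while True:                                 # constrained step (lines 8-12):
            x_new = prior_rvs(rng)                  # rejection sampling from the prior,
            L_new = like(x_new)                     # increasingly wasteful as Li grows
            if L_new > Li:
                break
        active[worst], L[worst] = x_new, L_new
    Z += Xi / N * L.sum()                           # remaining mass of the active set
    return Z, active
```

The inner while loop is exactly the bottleneck discussed below: as Li rises, the acceptance probability of a fresh prior draw collapses.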

A remarkable feature of standard NS, compared with MCMC, is that only the sample xworst with the lowest likelihood is replaced by the newly drawn sample xnew with a larger likelihood at each step, while the other N−1 samples in the active set Sactive remain unchanged. In this way, the samples in Sactive gradually move toward the high-likelihood region and finally reach the region of maximum likelihood. Although standard NS aims to evaluate the model evidence Z, the discarded samples stored in Sposterior as well as the final samples in the active set Sactive can be used to approximately represent the posterior distribution p(x|D, M). For a more detailed theoretical treatment of nested sampling, interested readers are referred to [16,26,27]. The key point of the standard NS algorithm lies in lines 8–12 of the pseudo code, which propose new samples under the constraint p(D|xnew, M) > Li. The algorithm efficiency heavily depends on the proposal of effective new samples at each step. However, it is challenging and inefficient to randomly draw samples from the entire prior space under the constraint condition to replace the worst sample in Sactive. This problem becomes even more severe as Li approaches a high value and the potential parameter space shrinks to a sharp peak region, in which case numerous samples that fail the constraint may have to be drawn, evaluated and discarded before an effective sample is found within an iteration. MCMC has been recommended and used to address this problem [16,19]. However, efficiently proposing new samples in the high-likelihood region is still hard due to the random-walk behavior of MCMC, especially for high-dimensional problems. HMC was also incorporated into the standard NS algorithm for generating valid


Fig. 1. Nested sampling algorithm. (a) Gradually increasing likelihood L(X) with corresponding prior masses X ∈ [0, 1] and (b) drawn samples (q = 6) in the 2-D parameter space.

samples [15]; however, the requirement of the energy gradient over the unknown parameters makes the algorithm complicated and computationally expensive, sometimes even infeasible for large engineering problems. This paper incorporates an evolutionary algorithm into the constrained sampling procedure of standard NS, in which effective samples are efficiently generated from the previously evaluated samples in the active set by evolutionary operations. The evolutionary algorithm takes full advantage of samples evaluated in previous steps to create new samples and avoids the computational burden of gradient calculation.

4.2. Drawing new samples by the evolutionary algorithm

Three evolutionary operators of the genetic algorithm (GA), selection, crossover and mutation, which imitate natural evolution according to Darwinian principles, are utilized in this paper to efficiently generate new samples. In the selection procedure, only samples in the active set with the higher likelihood values evaluated in the previous iteration are selected to form a mate pool; no new samples are created in this operation. For instance, if the selection rate is Rs, the top Rs × N samples, in descending order of likelihood, are selected to constitute the mate pool. Crossover and mutation are the two most important operators; they continually bring in new samples and maintain diversity. In the crossover procedure, two parent samples v1 and v2 are randomly chosen by the roulette wheel method from the mate pool and combined to give birth to two new samples vnew1 and vnew2 by the blending method

$$v_{\mathrm{new1}} = \beta v_1 + (1 - \beta) v_2, \qquad v_{\mathrm{new2}} = (1 - \beta) v_1 + \beta v_2 \tag{10}$$

where β is a random number within (0, 1). Crossover achieves an information interchange between good samples, so the resulting new samples can reasonably be deemed more desirable than those obtained by random draws. The mutation operation replaces the parameters, chosen according to a certain probability Pm, in the new samples vnew1 and vnew2 with values randomly generated from the parameter space, so that new regions of possible solutions can be reached. There are various mutation operators in evolutionary algorithms, such as Gaussian and exponential mutation and their variants. In this study, Gaussian mutation is adopted for its easy implementation to make the samples vnew1 and vnew2 evolve into xnew1 and xnew2 with higher likelihood. The mutated value of each selected parameter θi in vnew1 and vnew2 is drawn from the Gaussian distribution θ̃i ~ N(θi, σ̃²), where σ̃² is a preset variance that controls the mutation range.
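Both operators take only a few lines; a minimal sketch under the settings used later in the paper (Pm = 0.3, σ̃² = 0.06), with NumPy arrays assumed as the sample representation:

```python
import numpy as np

def blend_crossover(v1, v2, rng):
    """Blending crossover of Eq. (10)."""
    b = rng.random()                               # beta ~ U(0, 1)
    return b * v1 + (1 - b) * v2, (1 - b) * v1 + b * v2

def gaussian_mutation(v, rng, Pm=0.3, var_m=0.06):
    """Gaussian mutation: each parameter is perturbed with probability Pm."""
    v = v.copy()
    mask = rng.random(v.size) < Pm                 # parameters chosen for mutation
    v[mask] = rng.normal(v[mask], np.sqrt(var_m))  # draw from N(theta_i, var_m)
    return v
```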

The new sample xnew1 is evaluated first to obtain its likelihood p(D|xnew1, M). If p(D|xnew1, M) > Li, then xnew1 enters the active set to replace the worst sample xworst, in which case the sample xnew2 is discarded without needing evaluation. Otherwise, p(D|xnew2, M) is evaluated and xworst is replaced by xnew2 if p(D|xnew2, M) > Li. If both p(D|xnew1, M) and p(D|xnew2, M) fail the constraint, i.e. p(D|xnew1, M) ≤ Li and p(D|xnew2, M) ≤ Li, the algorithm returns to the crossover stage and re-chooses samples from the mate pool to perform crossover and mutation; this process continues until the constraint is satisfied. The pseudo code of the presented ENS algorithm is given in Algorithm 2.

Algorithm 2: ENS Algorithm
Initialize algorithm parameters: sample space of unknown parameters x, number of samples N, number of NS iterations q, selection rate Rs, mutation probability Pm and variance σ̃².
Draw N samples xk (k = 1, 2, . . ., N) from the prior distribution p(x|M) in the sample space and evaluate the likelihood of each sample.
Start ENS:
1  Set Z0 = 0, X0 = 1, Sactive = {x1, x2, . . ., xN}, Sposterior = {}
2  for i = 1, . . ., q do
3    Find the worst likelihood p(D|xworst, M) and its corresponding sample xworst
4    Set Li = p(D|xworst, M)
5    Set Xi = exp(-i/N)
6    Set Zi = Zi-1 + Li (Xi-1 - Xi)/2
7    Set Sposterior ← Sposterior ∪ {xworst, Li (Xi-1 - Xi)/2}
     Sampling from the evolutionary algorithm:
8    Select the top Rs × N samples from Sactive to constitute the mate pool
9    Choose v1 and v2 by the roulette wheel method from the mate pool
10   Generate vnew1 and vnew2 by the crossover operation in Eq. (10) with random number β
11   Draw new samples xnew1, xnew2 by applying Gaussian mutation to vnew1 and vnew2, respectively
12   Evaluate xnew1
13   if p(D|xnew1, M) > Li then
14     xnew ← xnew1 and go to step 22
15   else (p(D|xnew1, M) ≤ Li)
16     evaluate xnew2
17     if p(D|xnew2, M) > Li then
18       xnew ← xnew2 and go to step 22
19     else go back to step 9
20     end if
21   end if
22   Sactive ← (Sactive \ {xworst}) ∪ {xnew}
23 end for
24 Set Z = Zq + (Xq/N)(L(x1) + L(x2) + ... + L(xN))
25 Set Sposterior ← Sposterior ∪ {xk, (Xq/N) Lk} for all xk ∈ Sactive
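A sketch of the constrained sampling step of Algorithm 2 (lines 8–22), reusing blend_crossover and gaussian_mutation from the previous sketch; the rank-proportional roulette weights are an assumption, since the paper does not specify the wheel's weighting:

```python
import numpy as np

def evolve_new_sample(active, L, like, Li, rng, Rs=0.6, Pm=0.3, var_m=0.06):
    """Propose a sample beating the likelihood floor Li (Algorithm 2, lines 8-22)."""
    order = np.argsort(L)[::-1]                     # rank active samples by likelihood
    pool = [active[j] for j in order[: max(2, int(Rs * len(active)))]]
    w = np.arange(len(pool), 0, -1, dtype=float)    # rank-based roulette weights (assumption)
    p = w / w.sum()
    while True:
        i1, i2 = rng.choice(len(pool), size=2, replace=False, p=p)  # roulette wheel
        v1, v2 = blend_crossover(pool[i1], pool[i2], rng)           # Eq. (10)
        for v in (v1, v2):
            x = gaussian_mutation(v, rng, Pm, var_m)
            Lx = like(x)                            # evaluate candidates one at a time
            if Lx > Li:                             # accept the first candidate beating
                return x, Lx                        # the floor; xnew2 is skipped if
                                                    # xnew1 already succeeds
```

This routine drops into the standard_ns sketch in place of its rejection loop, which is the whole point of ENS: the proposal cost no longer grows as Li rises.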


5. Accuracy evaluation of model updating

To investigate the performance of the presented algorithms for finite element model updating, criteria need to be defined for evaluating the accuracy of the updated results. Two error measures between the assigned exact values of the unknown parameters and their identified counterparts are introduced here to evaluate the precision and robustness of the algorithms. The first one is the weighted average error (WAE) [28]

$$\mathrm{WAE} = \sum_{i=0}^{s} \left( \frac{a_i - \theta_i}{\theta_i} \right)^2 \tag{11}$$

where $a_i$ and $\theta_i$ are the assigned exact value and the updated value of the $i$-th unknown parameter, respectively. This criterion gives an overall evaluation of the difference between the assigned and identified parameter values. The other one is the maximum difference between the exact and identified values of the unknown parameters [28]

$$\mathrm{MAE} = \max_i |a_i - \theta_i| \tag{12}$$

6. Numerical examination

Two numerical examples are employed to verify the performance of the presented ENS algorithm for Bayesian model updating and model selection. The first is a 15-element clamped-clamped beam and the second is a 2-D truss structure consisting of 31 bars. The sampling efficiencies of ENS and NS are tested and compared on the first example; in addition, an ill-posed problem caused by insufficient observation data is designed to test the capability of the ENS algorithm on problems with multiple solutions. In the second example, several competitive model classes are defined based on different uncertainties associated with structural material, assembly and construction defects to test the ability of the proposed ENS algorithm to estimate model evidences. The uncertain parameters to be updated are modeled as relative reduction factors on the elasticity modulus of the corresponding elements. The algorithm parameters of the evolutionary stage are set identically for the two examples: selection rate Rs = 0.6, mutation probability Pm = 0.3 and variance σ̃² = 0.06.

6.1. Clamped-clamped beam

The finite element model of a clamped-clamped beam with 15 elements and 16 nodes, as shown in Fig. 2, is established in Matlab based on Euler–Bernoulli beam theory. Each node has three degrees of freedom: displacements in the x and y directions and rotation about the z axis. The detailed geometry of the beam is also presented in Fig. 2. The mass density and elasticity modulus are 7810 kg/m³ and 210 GPa, respectively. The assigned exact reduction factors of the elasticity modulus of elements 3, 5, 8 and 13 are θ1 = 0.5, θ2 = 1.0, θ3 = 0.8 and θ4 = 1.0, respectively, as illustrated in Fig. 2. In the context of model updating, those factors, together with the shared standard deviation σ of the errors between model prediction and measurement, are taken as the uncertain parameters x = {θ1, θ2, θ3, θ4, σ}ᵀ, which need to be updated to best fit the model to the observed measurement D. The simulated modal responses at nodes 2, 6, 10 and 14 of the model specified by the assigned parameters are taken as the field measurement. Environmental noise is not included in this example for simplicity. Two cases of different measurements are considered as follows. Case I: only the first m = 6 frequencies are collected in the measurement. In this case, an ill-posed problem arises from the inadequate observations and the symmetry of elements 3 and 13

Fig. 2. Clamped-clamped beam model: 15 elements, 16 nodes (n1–n16); L = 2.5 m, h = 0.02 m, b = 0.03 m; reduction factors 0.5, 1.0, 0.8 and 1.0 assigned to elements 3, 5, 8 and 13.

around the central element 8. This geometric symmetry results in two potential solutions for θ1 and θ4, namely (θ1 = 0.5, θ4 = 1.0) and (θ1 = 1.0, θ4 = 0.5). Case G: the first m = 6 frequencies and modal vectors are collected in the measurement. The weight factor w in Eq. (2) is set to one to weight the frequency and mode shape error contributions equally. The algorithm is initialized with N = 200 samples, and the number of NS iterations q is set to 5000 for both cases I and G. To fairly compare the sampling efficiency of ENS with that of standard NS, both are implemented with the same algorithm parameters (N = 200, q = 5000). However, standard NS had to be terminated at i = 3000 because drawing an effective sample becomes extremely slow as the iteration i increases. The cumulative time consumed by standard NS at each iteration is presented in Fig. 3(a): standard NS takes up to 467 hours to draw 3000 effective samples, and the time grows exponentially as the iterations go on. Most of this time burden is attributable to the sampling step, because the time spent on likelihood evaluation and structural modal analysis is negligible for such a simple finite element model. This indicates that standard NS faces an even more severe efficiency challenge when it is used to update large-scale structures. The cumulative cost-time curve of the presented ENS is illustrated in Fig. 3(b): it takes only 0.0132 hours to draw 3000 effective samples and 0.025 hours for 5000 effective samples, showing that the ENS algorithm is computationally far more efficient than standard NS. The approximately linear relationship between cumulative time and iteration number i in Fig. 3(b) implies that ENS maintains its high efficiency as the iterations increase. The updated samples from ENS and standard NS for case I, as well as the initial samples uniformly drawn from the prior distribution over the range [0.4, 1.1], are plotted in Fig. 4, in which (a) shows (θ1, θ4) and (b) shows (θ2, θ3), and CT denotes the cumulative time expended by the algorithm up to the corresponding iteration. Fig. 4(a) clearly shows that the samples of (θ1, θ4) obtained by standard NS at i = 3000 have a zonal distribution between the two potential solutions (θ1 = 0.5, θ4 = 1.0) and (θ1 = 1.0, θ4 = 0.5), indicating that the problem is unidentifiable by standard NS, while the ENS samples are gradually evolving toward the two potential solutions at i = 3000 and eventually converge on them at i = 5000. This demonstrates that the presented ENS algorithm is able to deal with ill-posed model updating problems, because the proposed methodology generates new samples from the evolution of the previous samples and replaces the worst samples in the active set Sactive. Therefore, if the initial samples are numerous enough to cover all the potential solution areas, the samples in Sactive updated by ENS will gradually move to the high-likelihood region and finally reach the multiple potential solution areas. Fig. 5 shows the sorted logarithmic likelihood values of the 200 samples before and after the implementation of ENS and standard NS. Comparing Fig. 5(a) and (b), one can observe that


Fig. 3. Comparison of the sampling efficiency between ENS and standard NS.

Fig. 4. Evolution of samples from ENS and standard NS for case I. (a) (θ1, θ4). (b) (θ2, θ3).

Fig. 5. Sorted likelihood values. (a) Initial samples. (b) Samples from ENS and standard NS.

even the lowest log-likelihood value of the final samples from ENS is much higher than those of the initial samples; that is, the presented ENS algorithm successfully makes the initial samples evolve into samples of higher likelihood. It can also be observed from Fig. 5(b) that the log-likelihood values of all the ENS samples concentrate in the range [−0.09, −0.08], while those of the standard NS samples spread over [−0.105, −0.08]. This agrees well with the result above that the ENS samples converge more tightly to the solutions than the standard NS samples, as shown in

Fig. 4. It is noteworthy that the number of samples from standard NS with log-likelihood values in the range [−0.085, −0.08] is greater than that from the proposed ENS algorithm. Fig. 6(a) illustrates the log-likelihood curve of the sequentially drawn samples from ENS and standard NS with respect to the corresponding prior mass. The log-likelihood of the ENS samples converges to a stable value after log(X) drops to −10 (i.e. i = 2000), slightly faster than that of the standard NS samples. This convergence relationship between


Fig. 6. (a) Log-likelihood values of sequentially drawn samples with respect to prior mass and (b) Log-Bayesian evidence versus iterations.

the log-likelihood of the samples and their corresponding prior mass provides a reference for determining the termination condition of the algorithm. Fig. 6(b) depicts the estimated log-Bayesian evidence for varying numbers of nested sampling iterations q. The estimated log-evidence becomes stable for q > 500, with stable values of log Z = −11.92 and −12.45 for ENS and standard NS, respectively. The initial and updated samples from both ENS and standard NS for case G are presented in Fig. 7 ((a) for (θ1, θ4) and (b) for (θ2, θ3)). All the updated parameters of both algorithms gradually move toward the correct parameter region as the iterations go on, and the eventually updated posterior parameter samples in the active set Sactive are quite close to the assigned exact values. This indicates that the mode shapes included in the measurement provide enough information to make the model updating an identifiable problem, in contrast to the ill-posed problem of case I. Standard NS takes more than 392 hours to draw 3000 effective samples under the constraint condition, while ENS needs only 0.0136 hours for 3000 samples and 0.024 hours for 5000 effective samples, once again showing that ENS is far superior to standard NS in efficiency. Fig. 8 shows the convergence of the mean values of the parameter samples in the active set Sactive at different iteration steps of ENS; the means of the parameter samples gradually evolve to the assigned exact values.

6.2. A 2-D truss structure

A 31-bar planar truss that has been studied by many researchers [29–31], as shown in Fig. 9, is selected to examine the performance

of the presented ENS algorithm for model selection by evaluating Bayesian evidence, and to test its robustness to noise. The finite element model of the planar truss is built with 2-D truss elements in Matlab and has 25 degrees of freedom in total. The material density and elasticity modulus are 2770 kg/m³ and 70 GPa, respectively. The unknown parameters to be updated in this example are the relative reductions in the elasticity modulus of selected bars. Fifteen uncertain reduction factors of the elasticity modulus are associated with the bars numbered 2, 3, 6, 9, 12, 13, 16, 19, 20, 22, 23, 26, 27, 29 and 30, so in total 16 unknown parameters need to be updated, given the introduction of the shared standard deviation σ in Eq. (3). The assigned exact values of the 15 reduction factors are given in the third column of Table 1, and the model specified by those exact values is regarded as the reference model. The prior distribution of each of the 16 unknown parameters is assumed uniform over the identical interval [0.4, 1.1]. Five sensors are assumed to be deployed at nodes n2, n4, n6, n10 and n12 to capture the measurement of the reference model and the responses of the candidate models parameterized by the unknown parameters. The first eight frequencies and the modal vectors at the five observation points compose the simulated measurement data D. Considering the higher dimension of the unknown parameters in this problem, the algorithm is initialized with N = 400 samples and runs q = 20,000 nested sampling iterations in total; that is, 20,000 new samples are generated by the evolutionary algorithm. The robustness and reliability of the presented ENS algorithm for model updating in a noisy environment is tested first. To this end, noise of different levels is added to the

Fig. 7. Evolution of samples from ENS and standard NS for case G. (a) (θ1, θ4). (b) (θ2, θ3).


Fig. 8. Convergence of the mean traces of updated parameters at different iteration steps.

Fig. 9. A planar truss structure: 31 bars, 14 nodes (n1–n14); span L = 6 × 1.52 m, height 1.52 m.

simulated measurement, resulting in the contaminated measurement $\bar{\phi}_{ij} = \phi_{ij}(1 + \eta \cdot \mathrm{rand})$, where $\bar{\phi}_{ij}$ is the $ij$-th noisy modal response, $\phi_{ij}$ is the $ij$-th measured modal component without noise, $\eta$ is the noise level and rand is a random number within $[-1, 1]$. Three noise levels $\eta_1 = 0\%$, $\eta_2 = 5\%$ and $\eta_3 = 20\%$ are considered in this study.
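A minimal sketch of this contamination, assuming phi holds the simulated modal data as an array:

```python
import numpy as np

def contaminate(phi, eta, seed=None):
    """Multiplicative noise: phi_bar = phi * (1 + eta * rand), rand ~ U[-1, 1]."""
    rng = np.random.default_rng(seed)
    return phi * (1.0 + eta * rng.uniform(-1.0, 1.0, size=np.shape(phi)))
```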

The sample means and estimation errors of the unknown structural parameters updated by the presented ENS algorithm for the three noise levels are presented in Table 1. The fourth column shows that the updated parameters obtained with noise-free measurement are very close to the assigned exact values, with relatively low errors listed in the fifth column. The estimated mean values of the updated samples for the 5% noise case are listed in the sixth column, and the errors between these means and the exact values are given in the seventh column; they are all below 10%, indicating that the algorithm provides favorable results under 5% noise. Even for the larger 20% noise case, the eighth and ninth columns show that the performance of the ENS algorithm is still acceptable, demonstrating the robustness and reliability of the presented model updating approach in strongly noisy environments. The results also show that the WAE and MAE of the updated samples gradually become larger and the estimated log-evidence decreases as the noise level increases, implying that increasing noise contamination of the measurement reduces the accuracy of the presented method. Seven competitive model classes are defined based on the reference model described previously to investigate the ability of the presented ENS algorithm to effectively estimate the Bayesian evidence for model selection. Model errors due to manufacturing flaws, assembly defects in joints and imperfect support joints, which are very common in practical engineering, are considered in the seven candidate model classes. To simulate the effects of these fabrication and construction errors, the following changes to the reference model are assumed for the truss structure: a reduction in the cross-sectional area of bar members models the manufacturing flaws, while a discount in the elastic modulus of the members simulates the assembly defects in the connected joints and the imperfect support joints. The detailed configurations and assumptions of the competitive model classes are enumerated in Table 2. The measurement with 5% noise generated from the reference model is used to update the seven competitive model classes, with all algorithm parameters unchanged. The WAE and MAE of the updated samples for all the model classes are presented in Table 3. M1 has the smallest WAE and MAE, which indicates that M1 outperforms the other six model classes and is the most accurate model class. The estimated log-evidences of all the defined model classes obtained by the ENS algorithm are given in the last row of Table 3. M1 has the largest log-evidence, which identifies M1 as the most plausible model class for the real structure among all the candidates. This result agrees well with the findings from the WAE and MAE. The accordance between the evaluation of the model classes by log-evidence and by WAE and MAE proves that the

Table 1. Updated results of the structural parameters for different noise levels (Error = |θi − μi| / θi).

Bar No. | Parameter | Exact θi | η1 = 0%: Mean μi | Error | η2 = 5%: Mean μi | Error | η3 = 20%: Mean μi | Error
2  | θ1  | 0.500 | 0.496 | 0.872% | 0.523 | 4.624% | 0.476 | 4.763%
3  | θ2  | 0.500 | 0.480 | 4.046% | 0.515 | 3.065% | 0.486 | 2.804%
6  | θ3  | 0.600 | 0.614 | 2.263% | 0.587 | 2.144% | 0.570 | 5.068%
9  | θ4  | 0.600 | 0.602 | 0.264% | 0.611 | 1.869% | 0.649 | 8.181%
12 | θ5  | 0.800 | 0.819 | 2.403% | 0.818 | 2.258% | 0.627 | 21.575%
13 | θ6  | 0.800 | 0.828 | 3.525% | 0.851 | 6.345% | 0.841 | 5.151%
16 | θ7  | 1.000 | 0.992 | 0.824% | 1.081 | 8.078% | 1.064 | 6.358%
19 | θ8  | 1.000 | 1.020 | 1.954% | 1.094 | 9.425% | 1.039 | 3.935%
20 | θ9  | 0.500 | 0.491 | 1.750% | 0.478 | 4.456% | 0.408 | 18.417%
22 | θ10 | 0.600 | 0.600 | 0.217% | 0.620 | 3.406% | 0.642 | 7.036%
23 | θ11 | 0.800 | 0.802 | 0.285% | 0.802 | 0.228% | 0.831 | 3.915%
26 | θ12 | 1.000 | 1.062 | 6.231% | 1.000 | 0.017% | 0.905 | 9.502%
27 | θ13 | 0.500 | 0.496 | 0.767% | 0.509 | 1.746% | 0.431 | 13.871%
29 | θ14 | 0.600 | 0.603 | 0.462% | 0.587 | 2.173% | 0.531 | 11.559%
30 | θ15 | 0.800 | 0.760 | 4.98%  | 0.818 | 2.308% | 0.679 | 15.153%
WAE          | | – | 0.011        | | 0.025        | | 0.237        |
MAE          | | – | 0.062        | | 0.094        | | 0.173        |
Log-evidence | | – | −8.344 × 10⁴ | | −8.947 × 10⁴ | | −8.988 × 10⁴ |

Table 2. Candidate model classes.

Model class | Error type | Members | Changes
M1 | Reference model | – | –
M2 | Manufacturing flaws | 27, 28 | 15% reduction in area
M3 | Manufacturing flaws | 29, 30 | 15% reduction in area
M4 | Assembly defects in n3 | 14, 16, 18 | 5% discount in elastic modulus
M5 | Assembly defects in n5 | 10, 11, 14, 15, 18, 24 | 10% discount in elastic modulus
M6 | Imperfect support joint n1 | 1, 4, 5 | 5% discount in elastic modulus
M7 | Imperfect support joint n7 | 28, 31 | 5% discount in elastic modulus

Table 3. Log-evidences and updated errors of the different model classes.

             | M1           | M2           | M3           | M4           | M5           | M6           | M7
WAE          | 0.025        | 0.154        | 0.199        | 0.114        | 0.138        | 0.168        | 0.084
MAE          | 0.094        | 0.176        | 0.251        | 0.168        | 0.177        | 0.282        | 0.158
Log-evidence | −8.947 × 10⁴ | −9.536 × 10⁴ | −9.957 × 10⁴ | −9.310 × 10⁴ | −9.257 × 10⁴ | −9.594 × 10⁴ | −9.220 × 10⁴

presented ENS algorithm can effectively estimate the Bayesian evidence for model selection.

7. Limitations and future directions

Only three classical evolutionary operators, selection, crossover and mutation, are considered in this study to improve the sampling efficiency of the standard NS algorithm. There are many other advanced optimization techniques that could serve the same goal, and incorporating some of them may yield even more efficient sampling algorithms. The performance of the ENS algorithm is verified only on numerical models: all the measurements used in model updating and model selection are numerically simulated from finite element models. The effect of structural model complexity on the algorithm, which can significantly affect the efficiency of Bayesian model updating, is not considered. Future research will involve validating the presented ENS algorithm on experimental data or real-world structures. Other advanced optimization methods, such as multi-objective genetic algorithms, bee colony optimization and particle swarm optimization, will be introduced into the standard NS algorithm, and comparative studies will be conducted between the derivative algorithms. Model reduction techniques or surrogate models can be adopted to simplify large complex structural models and accelerate the model analysis involved in the likelihood evaluation.

8. Summary and conclusions

This paper incorporates an evolutionary algorithm into the standard NS algorithm to improve the constrained sampling step and thus produces a new ENS algorithm. The resulting ENS algorithm takes full advantage of the previously evaluated samples to efficiently draw new samples, and thereby avoids both random-walk behavior and any evaluation of gradients. Both the standard NS and the proposed ENS algorithms, together with their pseudo code, are elaborated in the paper for comparison and ease of understanding. Accuracy evaluation criteria are introduced to measure the performance of the presented ENS algorithm for Bayesian model updating and model selection. The numerical results demonstrate that the proposed ENS algorithm is computationally far more efficient than the standard NS algorithm: it takes only 0.0132 hours to draw 3000 effective samples and 0.025 hours for 5000 effective samples, whereas standard NS needs up to 467 hours to draw 3000 effective samples. The

ENS algorithm can evolve all the possible solutions from the initial samples toward the high-likelihood regions and maintain them there. The convergence relationship between the log-likelihood of the samples and their corresponding prior mass can provide a reference for the algorithm termination condition. The robustness and ability of the presented ENS algorithm for model updating and Bayesian evidence estimation under different noise levels are investigated through a 31-bar planar truss model. For the 5% noise case, the errors between the estimated means of the updated samples and the exact values are less than 10%, indicating that the algorithm provides favorable results in noisy environments; the updated results remain acceptable even under 20% noise. The WAE and MAE of the updated samples gradually increase and the estimated log-evidence decreases as the noise level goes up. Several competitive model classes are simulated to represent practical manufacturing flaws, assembly defects in joints and imperfect support joints. The estimated model evidence for the candidate model classes agrees well with the findings from the WAE and MAE, proving that the ENS algorithm can effectively estimate Bayesian evidence for model selection.

References

[1] Zimmerman DC, Yap K, Hasselman T. Evolutionary approach for model refinement. Mech Syst Signal Process 1999;13(4):609–25.
[2] Deng L, Cai C. Bridge model updating using response surface method and genetic algorithm. J Bridge Eng 2010;15(5):553–64.
[3] Marwala T. Finite-element-model updating using computational intelligence techniques: applications to structural dynamics. Springer Science & Business Media; 2010. p. 67–84.
[4] Levin RI, Lieven NAJ. Dynamic finite element model updating using simulated annealing and genetic algorithms. Mech Syst Signal Process 1998;12(1):91–120.
[5] Beck JL, Katafygiotis LS. Updating models and their uncertainties. I: Bayesian statistical framework. J Eng Mech 1998;124(4):455–61.
[6] Kouchmeshky B, Aquino W, Billek AE. Structural damage identification using co-evolution and frequency response functions. Struct Control Health Monit 2008;15(2):162–82.
[7] Cheung SH, Beck JL. Bayesian model updating using hybrid Monte Carlo simulation with application to structural dynamic models with many uncertain parameters. J Eng Mech 2009;135(4):243–55.
[8] Yuan C, Druzdzel MJ. Importance sampling algorithms for Bayesian networks: principles and performance. Math Comput Model 2006;43(9):1189–207.
[9] Ching J, Muto M, Beck JL. Structural model updating and health monitoring with incomplete modal data using Gibbs sampler. Comput Aided Civ Infrastruct Eng 2006;21(4):242–57.
[10] Beck JL, Au SK. Bayesian updating of structural models and reliability using Markov chain Monte Carlo simulation. J Eng Mech 2002;128(4):380–91.
[11] Ching J, Chen YC. Transitional Markov chain Monte Carlo method for Bayesian model updating, model class selection, and model averaging. J Eng Mech 2007;133(7):816–32.

[12] Cheung SH, Beck JL. Bayesian model updating using hybrid Monte Carlo simulation with application to structural dynamic models with many uncertain parameters. J Eng Mech 2009;135(4):243–55.
[13] Boulkaibet I, Mthembu L, Marwala T, Friswell MI, Adhikari S. Finite element model updating using the shadow hybrid Monte Carlo technique. Mech Syst Signal Process 2015;52:115–32.
[14] Chen T, Fox EB, Guestrin C. Stochastic gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st international conference on machine learning; 2014. p. 1683–96.
[15] Elsheikh AH, Wheeler MF, Hoteit I. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems. J Comput Phys 2014;258:319–37.
[16] Skilling J. Nested sampling for general Bayesian computation. Bayesian Anal 2006;1(4):833–59.
[17] Mukherjee P, Parkinson D, Liddle AR. A nested sampling algorithm for cosmological model selection. Astrophys J Lett 2006;638(2):L51.
[18] Mthembu L, Marwala T, Friswell MI, Adhikari S. Model selection in finite element model updating using the Bayesian evidence statistic. Mech Syst Signal Process 2011;25(7):2399–412.
[19] Elsheikh AH, Hoteit I, Wheeler MF. Efficient Bayesian inference of subsurface flow models using nested sampling and sparse polynomial chaos surrogates. Comput Methods Appl Mech Eng 2014;269:515–37.
[20] Peeters B, De Roeck G. Reference-based stochastic subspace identification for output-only modal analysis. Mech Syst Signal Process 1999;13(6):855–78.
[21] Xu YL, Chen SW, Zhang RC. Modal identification of Di Wang building under Typhoon York using the Hilbert-Huang transform method. Struct Des Tall Special Build 2003;12(1):21–47.
[22] Yang JN, Lei Y, Pan S. System identification of linear structures based on Hilbert-Huang spectral analysis. Part 1: normal modes. Earthquake Eng Struct Dynam 2003;32(9):1443–68.
[23] Christodoulou K, Ntotsios E, Papadimitriou C, Panetsos P. Structural model updating and prediction variability using Pareto optimal models. Comput Methods Appl Mech Eng 2008;198(1):138–49.
[24] Christodoulou K, Papadimitriou C. Structural identification based on optimally weighted modal residuals. Mech Syst Signal Process 2007;21(1):4–23.
[25] Goller B, Beck JL, Schueller GI. Evidence-based identification of weighting factors in Bayesian model updating using modal data. J Eng Mech 2011;138(5):430–40.
[26] Evans M. Discussion of nested sampling for Bayesian computations by John Skilling. Bayesian Stat 2007;8:491–524.
[27] Chopin N, Robert CP. Properties of nested sampling. Biometrika 2010;97(3):741–55.
[28] Kokot S, Zembaty Z. Damage reconstruction of 3D frames using genetic algorithms with Levenberg–Marquardt local search. Soil Dyn Earthquake Eng 2009;29(2):311–23.
[29] Messina A, Williams EJ, Contursi T. Structural damage detection by a sensitivity and statistical-based method. J Sound Vibr 1998;216(5):791–808.
[30] Seyedpoor SM. A two stage method for structural damage detection using a modal strain energy based index and particle swarm optimization. Int J Non-Linear Mech 2012;47(1):1–8.
[31] Nobahari M, Seyedpoor SM. An efficient method for structural damage localization based on the concepts of flexibility matrix and strain energy of a structure. Struct Eng Mech 2013;46(2):231–44.