Bayesian state and parameter estimation of uncertain dynamical systems


Probabilistic Engineering Mechanics 21 (2006) 81–96 www.elsevier.com/locate/probengmech

Jianye Ching a, James L. Beck b,*, Keith A. Porter c

a Department of Construction Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, ROC
b Department of Applied Mechanics and Civil Engineering, California Institute of Technology, Mail Code 104-44, Pasadena, CA 91125, USA
c George W. Housner Senior Researcher in Civil Engineering, Mail Code 104-44, California Institute of Technology, Pasadena, CA 91125, USA

Received 4 August 2004; received in revised form 16 May 2005; accepted 6 August 2005. Available online 4 November 2005.

Abstract

The focus of this paper is Bayesian state and parameter estimation using nonlinear models. A recently developed method based on stochastic simulation, the particle filter, is studied. Unlike the well-known extended Kalman filter, the particle filter is applicable to highly nonlinear models with non-Gaussian uncertainties. Recently developed techniques that improve the convergence of the particle-filter simulations are introduced and discussed. Comparisons between the particle filter and the extended Kalman filter are made using several numerical examples of nonlinear systems. The results indicate that the particle filter provides consistent state and parameter estimates for highly nonlinear models, while the extended Kalman filter does not. © 2005 Elsevier Ltd. All rights reserved.

Keywords: Bayesian analysis; State estimation; Parameter estimation; Dynamical systems; Monte Carlo simulation; Importance sampling; Particle filter; Stochastic simulation; Extended Kalman filter

1. Introduction

1.1. Applications of state estimation in civil engineering

State estimation is the process of using dynamic data from a system to estimate quantities that give a complete description of the state of the system according to some representative model of it. State estimation has the potential to be widely applied in civil engineering. For instance, it can be used in structural health monitoring to detect changes in the dynamical properties of structural systems during earthquakes and, more generally, it can be used for system identification to better understand the nonlinear behavior of structures subject to seismic loading. For structural control, the ability to estimate system states in real time may help to accomplish an efficient control strategy. For performance-based earthquake engineering, state estimation can provide crucial information to assess the seismic performance of an instrumented structure in terms of repair costs, casualties and repair duration (dollars, deaths and downtime) shortly after the cessation of strong motion.

* Corresponding author. Tel.: +1 626 395 4132; fax: +1 626 568 2719. E-mail addresses: [email protected] (J. Ching), [email protected] (J.L. Beck), [email protected] (K.A. Porter).


Because of their wide applicability, state estimation and identification methods have been studied in civil engineering for various purposes. Beck [1] used an invariant-embedding filter for modal identification. Yun and Shinozuka [2] used an extended Kalman filter to study nonlinear fluid-structure interaction. Hoshiya and Saito [3] used the extended Kalman filter for structural system identification. Lin et al. [4] developed an identification methodology for better understanding of the degrading behavior of structures subject to dynamic loads. Koh and See [5] developed an adaptive-filter algorithm that also updates uncertainty estimates. Ghanem and Shinozuka [6] presented several adaptive estimation techniques (e.g. the extended Kalman filter, recursive least squares and recursive prediction-error methods) and verified them using experimental data [7]. Glaser [8] used the Kalman filter to identify the time-varying natural frequency and damping of a liquefied soil to gain insight into liquefaction.

1.2. Development of Bayesian state-estimation algorithms

Among state-estimation methodologies, those founded on the Bayesian framework are powerful because: (a) they are rigorously based on the probability axioms and therefore preserve information; and (b) they give the probability density function (PDF) of the model state conditioned on the available information, which may then be used for any probability-based structural health monitoring, system identification, reliability


assessment and control techniques. With the PDF available, we can not only estimate the state but also give a description of the associated uncertainties. The most well-known Bayesian state-estimation algorithm is the Kalman filter (KF) [9,10], which is applicable to linear models with Gaussian uncertainties. Later, the KF was modified to give the extended Kalman filter (EKF) [11] to accommodate lightly nonlinear systems and, despite several noteworthy contributions since that time [12–18], it has basically remained the dominant Bayesian state-estimation algorithm for nonlinear models and non-Gaussian uncertainties for the last 30 years. Although the EKF has been widely used, it is only reliable for models that are almost linear on the time scale of the updating intervals [12,13]. However, civil-engineering systems are often highly nonlinear when subject to severe loading events, in which case the applicability of the KF and EKF is often questionable. These older techniques have been used by civil-engineering researchers for decades [1–3,5], although their applicability for nonlinear models of structures and non-Gaussian uncertainties is seldom verified either empirically or theoretically.

Several recent important developments [15–19,22,23] have produced stochastic-simulation algorithms for Bayesian state estimation that are applicable to highly nonlinear models of a system. These breakthroughs have not yet had a significant impact in civil engineering, although two recent publications [20,21] present implementations of an improved version of Kitagawa's approach [16] for system identification and damage detection.

1.3. Scope of this paper

This paper focuses on some recent methods for Bayesian state estimation that use stochastic simulation, especially a technique called the particle filter (PF). These methods have the following advantages: (a) they are applicable to highly nonlinear models with non-Gaussian uncertainties; (b) they are not limited to estimating the first two moments of the state, as in the KF and EKF; and (c) as the sample size approaches infinity, the resulting estimates of the state, or of any function of the state, at each time converge to their expected values conditional on the model and the dynamic data up to that time. However, the simulation is usually computationally expensive, and sometimes the state estimates can be inaccurate because of insufficient samples. Several algorithmic improvements that address the afore-mentioned difficulties are introduced. The performance of the EKF and PF methods is then compared through several numerical examples.

This paper has the following structure. In Section 2, the general problem of Bayesian state estimation for nonlinear dynamical models is defined. In Section 3, the particle filter is introduced. In Section 4, several numerical examples are presented to compare the older and newer techniques.

2. Bayesian state estimation

Let $x_k \in \mathbb{R}^n$ denote the model state at time k, while $u_k \in \mathbb{R}^p$ and $y_k \in \mathbb{R}^q$ denote the observed input and observed output for

the actual system at time k. The discrete-time state-space model of the dynamical system can then be written as:

$$x_k = f_{k-1}(x_{k-1}, u_{k-1}, w_k)$$
$$y_k = h_k(x_k, u_k, v_k) \qquad k = 1, 2, \ldots, T \qquad (1)$$

where $w_k \in \mathbb{R}^l$ and $v_k \in \mathbb{R}^m$ are introduced to account for unknown disturbances exciting the system and unknown prediction errors (from the modeling and measurement processes), respectively; $f_k$ is the prescribed state-transition function at time k; and $h_k$ is the prescribed observation function at time k. The two equations in (1) are called, respectively, the state-transition (or evolution) equation and the observation (or output) equation. The values of the variables $x_k$, $y_k$, $w_k$ and $v_k$ are uncertain, while $u_k$ is assumed to be a known excitation. For each time k, the dynamical-system input $u_k$ and output $\hat{y}_k$ are measured. Note that $y_k$ denotes the uncertain value of the observed output (before it is measured), whereas $\hat{y}_k$ denotes the measured value of $y_k$ that becomes available at time k. Also, $\{\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_k\}$ and $\{u_1, u_2, \ldots, u_k\}$ are denoted by $\hat{Y}_k$ and $U_k$, respectively.

Our goal is to evaluate sequentially the PDF $p(x_k \mid \hat{Y}_k)$ for the state $x_k$ at every time k, i.e. to perform a sequential update of this conditional PDF using the measured system input and output up to the current time, based on prescribed probabilistic models for $w_k$ and $v_k$. Note that the conditioning of every PDF on $U_k$ and the chosen model in (1) is left implicit. The basic equations for updating $p(x_{k-1} \mid \hat{Y}_{k-1})$ to $p(x_k \mid \hat{Y}_k)$ are the predictor and updater (or corrector) equations, which follow from the Theorem of Total Probability and Bayes' Theorem, respectively:

$$p(x_k \mid \hat{Y}_{k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid \hat{Y}_{k-1})\, dx_{k-1}$$
$$p(x_k \mid \hat{Y}_k) = \frac{p(\hat{y}_k \mid x_k)\, p(x_k \mid \hat{Y}_{k-1})}{\int p(\hat{y}_k \mid x_k)\, p(x_k \mid \hat{Y}_{k-1})\, dx_k} = \frac{p(\hat{y}_k \mid x_k)\, p(x_k \mid \hat{Y}_{k-1})}{p(\hat{y}_k \mid \hat{Y}_{k-1})} \qquad (2)$$

where $\hat{Y}_{k-1}$ is dropped in $p(x_k \mid x_{k-1})$ and $p(\hat{y}_k \mid x_k)$ because the models for the state-transition and observation PDFs make it irrelevant.

An alternative formulation of the basic equations will prove useful in the next section. If the history of the state up to time k is denoted by $X_k = \{x_0, x_1, \ldots, x_k\}$, then from Bayes' Theorem:

$$p(X_k \mid \hat{Y}_k) = \frac{p(X_k \mid \hat{Y}_{k-1})\, p(\hat{y}_k \mid X_k, \hat{Y}_{k-1})}{p(\hat{y}_k \mid \hat{Y}_{k-1})} = p(X_{k-1} \mid \hat{Y}_{k-1}) \cdot \frac{p(x_k \mid X_{k-1}, \hat{Y}_{k-1})\, p(\hat{y}_k \mid X_k, \hat{Y}_{k-1})}{p(\hat{y}_k \mid \hat{Y}_{k-1})} = p(X_{k-1} \mid \hat{Y}_{k-1}) \cdot \frac{p(\hat{y}_k \mid x_k)\, p(x_k \mid x_{k-1})}{p(\hat{y}_k \mid \hat{Y}_{k-1})} \qquad (3)$$

where we have used the fact that $p(\hat{y}_k \mid X_k, \hat{Y}_{k-1}) = p(\hat{y}_k \mid x_k)$ and that $p(x_k \mid X_{k-1}, \hat{Y}_{k-1}) = p(x_k \mid x_{k-1})$, based on (1) and the fact that the PDFs for $v_k$ and $w_k$ are prescribed.
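To make the recursion in (2) concrete, the following is a minimal numerical sketch for a scalar state, in which the predictor and updater integrals are evaluated by quadrature on a grid. The toy linear-Gaussian model, noise levels and measurement values are illustrative assumptions, not taken from the paper; the same brute-force approach becomes infeasible as the state dimension grows, which motivates the simulation-based methods of Section 3.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical scalar model (an assumption for this sketch):
#   x_k = 0.8 x_{k-1} + w_k,  w_k ~ N(0, 0.5^2)
#   y_k = x_k + v_k,          v_k ~ N(0, 0.3^2)
grid = np.linspace(-5.0, 5.0, 401)        # discretized state space
dx = grid[1] - grid[0]
pdf = norm.pdf(grid, loc=0.0, scale=1.0)  # prior p(x_0)

def bayes_step(pdf_prev, y_hat):
    # Predictor: p(x_k|Y_{k-1}) = integral of p(x_k|x_{k-1}) p(x_{k-1}|Y_{k-1}) dx_{k-1}
    trans = norm.pdf(grid[:, None], loc=0.8 * grid[None, :], scale=0.5)
    predicted = trans @ pdf_prev * dx
    # Updater (Bayes' Theorem): p(x_k|Y_k) proportional to p(y_k|x_k) p(x_k|Y_{k-1})
    posterior = norm.pdf(y_hat, loc=grid, scale=0.3) * predicted
    return posterior / (posterior.sum() * dx)

for y_hat in [0.4, -0.1, 0.7]:            # made-up measurements
    pdf = bayes_step(pdf, y_hat)
print("posterior mean:", np.sum(grid * pdf) * dx)
```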


Evaluating the recursive equation in (3), it is clear that

$$p(X_k \mid \hat{Y}_k) = p(x_0) \cdot \prod_{m=1}^{k} \frac{p(\hat{y}_m \mid x_m)\, p(x_m \mid x_{m-1})}{p(\hat{y}_m \mid \hat{Y}_{m-1})} = \frac{p(x_0)}{p(\hat{Y}_k)} \cdot \prod_{m=1}^{k} p(\hat{y}_m \mid x_m)\, p(x_m \mid x_{m-1}) \qquad (4)$$

The main challenge in Bayesian state estimation for nonlinear models is that these basic equations cannot be readily evaluated because they involve high-dimensional integrations. When $f_k$ and $h_k$ in (1) are linear in $u_k$, $x_k$, $w_k$ and $v_k$ and all uncertainties are Gaussian, the conditional PDF $p(x_k \mid \hat{Y}_k)$ is also Gaussian and it can be updated analytically in a sequential manner. The updating algorithm is the well-known Kalman filter (KF). In the case that the adopted dynamical model is slightly nonlinear, an approximation to the KF can be derived by linearizing the model. The resulting filter is the well-known extended Kalman filter (EKF). However, both KF and EKF can only update the first two moments of the conditional PDF for the state. For nonlinear models with non-Gaussian uncertainties, it is desirable to update the conditional PDF itself because the first two moments may not give an adequate description of the state; however, doing so completely requires an infinite number of parameters to represent the functional form of the conditional PDF. An alternative approach is to conduct stochastic simulation by drawing samples from the conditional PDF so that the conditional expectation of any function of $x_k$ can be consistently estimated. In the next section, we focus on stochastic simulation techniques for state estimation and use the term particle filter (PF) to denote the resulting algorithms (following the terminology of [18]), although similar algorithms have been called Monte Carlo filters [16] and sequential Monte Carlo Bayesian filters [17,23].

3. Particle filter

3.1. Stochastic simulation for state estimation

Our interest is to adopt a stochastic simulation algorithm for the conditional PDF $p(X_k \mid \hat{Y}_k)$ that is Markovian, in the sense that information is required only from time steps k−1 and k, and the earlier state and observation data can be forgotten. In other words, if $\hat{X}_{k-1}$ is a sample from $p(X_{k-1} \mid \hat{Y}_{k-1})$, the sample from $p(X_k \mid \hat{Y}_k)$ must have the form $\hat{X}_k = \{\hat{X}_{k-1}, \hat{x}_k\}$, where $\hat{x}_k$ is the new sample. However, such a stochastic simulation algorithm cannot be directly implemented, because $p(X_{k-1} \mid \hat{Y}_{k-1})$ is different from $p(X_{k-1} \mid \hat{Y}_k) = p(X_{k-1} \mid \hat{Y}_{k-1}, \hat{y}_k)$.

The basic idea of PF is to draw N samples $\{\hat{X}_k^i : i = 1, \ldots, N\}$ randomly from a chosen importance sampling PDF $q(X_k \mid \hat{Y}_k)$, where $q(X_k \mid \hat{Y}_k)$ is chosen so that it can be readily sampled; then the expectation of any function $r(X_k)$ conditioned on $\hat{Y}_k$ can be estimated using the importance sampling technique as follows:

$$E[r(X_k) \mid \hat{Y}_k] \approx \frac{\sum_{i=1}^{N} \tilde{\beta}_k^i \, r(\hat{X}_k^i)}{\sum_{j=1}^{N} \tilde{\beta}_k^j} \qquad (5)$$

where

$$\tilde{\beta}_k^i = \frac{p(\hat{x}_0^i) \cdot \prod_{m=1}^{k} p(\hat{y}_m \mid \hat{x}_m^i)\, p(\hat{x}_m^i \mid \hat{x}_{m-1}^i)}{q(\hat{X}_k^i \mid \hat{Y}_k)} \qquad (6)$$

is the non-normalized importance weight of the ith sample. Note that the likelihood functions $p(\hat{y}_m \mid \hat{x}_m^i)$ and $p(\hat{x}_m^i \mid \hat{x}_{m-1}^i)$ can be readily evaluated using the prescribed PDFs for $v_m$ and $w_m$ if the mappings in (1) uniquely specify $v_m$ and $w_m$, given $y_m$, $x_m$ and $x_{m-1}$. It is shown in [24] that this estimator is asymptotically unbiased if the support region for $p(X_k \mid \hat{Y}_k)$ is a subset of that for $q(X_k \mid \hat{Y}_k)$. Any quantity of interest can be estimated with the appropriate r(·) function in (5); for instance, if $r(X_k) = X_k$, then $E[r(X_k) \mid \hat{Y}_k]$ is simply the conditional expectation $E[X_k \mid \hat{Y}_k]$; if $r(X_k) = X_k X_k^T$, then $E[r(X_k) \mid \hat{Y}_k]$ is the conditional second moment $E[X_k X_k^T \mid \hat{Y}_k]$. In practice, the quantity of interest might be any facility performance metric such as peak interstory drift, repair cost, repair duration, casualties, etc.

The chosen $q(X_k \mid \hat{Y}_k)$ must also be Markovian, since a Markovian PF algorithm is desired. In other words, the structure of $q(X_k \mid \hat{Y}_k)$ must be such that $X_{k-1}$ is independent of $y_k$ conditioned on $Y_{k-1}$. The selection of such an importance sampling PDF is discussed in [23,25]. The conclusion is that the following importance sampling PDF is optimal:

$$q(X_k \mid \hat{Y}_k) = p(x_0) \cdot \prod_{m=1}^{k} p(x_m \mid x_{m-1}, \hat{y}_m) \qquad (7)$$

With this choice, one can see that the importance weights can be computed sequentially:

$$\tilde{\beta}_k^i = \tilde{\beta}_{k-1}^i \cdot \frac{p(\hat{y}_k \mid \hat{x}_k^i)\, p(\hat{x}_k^i \mid \hat{x}_{k-1}^i)}{p(\hat{x}_k^i \mid \hat{x}_{k-1}^i, \hat{y}_k)} \qquad (8)$$

Note that sampling from $q(X_k \mid \hat{Y}_k)$ requires the ability to sample from $p(x_m \mid x_{m-1}, \hat{y}_m)$, which is generally not Gaussian. A solution is to find a Gaussian PDF $p_G(x_m \mid x_{m-1}, \hat{y}_m)$ whose first two moments are close to those of this PDF. Such a task can be accomplished by using a single-time-step EKF algorithm [18,23]. Because of the structure of the algorithm, at any time k we are only required to store the sampled states and weights at time steps k and k−1, provided the quantity of interest is $r(x_k)$ and so depends only on the current state (although, clearly, additional dependence on the previous state $x_{k-1}$ can also be treated). As a result, the following recursive algorithm can be used:

Algorithm 3.1: basic PF algorithm

1. Initialize the N samples: draw $\hat{x}^i$ from $p(x_0)$ and set $\beta^i = 1/N$, i = 1, ..., N.


2. At time k, store the previous samples and weights:

$$\tilde{x}^i = \hat{x}^i, \qquad \tilde{\beta}^i = \beta^i \qquad (9)$$

For i = 1, ..., N, draw $\hat{x}^i$ from $p_G(x_k \mid x_{k-1} = \tilde{x}^i, \hat{y}_k)$ and update the importance weight:

$$\beta^i = \tilde{\beta}^i \cdot \frac{p(\hat{y}_k \mid x_k = \hat{x}^i)\, p(x_k = \hat{x}^i \mid x_{k-1} = \tilde{x}^i)}{p_G(x_k = \hat{x}^i \mid x_{k-1} = \tilde{x}^i, \hat{y}_k)} \qquad (10)$$

3. For i = 1, ..., N, $E[r(x_k) \mid \hat{Y}_k]$ can be approximated by:

$$E[r(x_k) \mid \hat{Y}_k] \approx \sum_{i=1}^{N} \left( \frac{\beta^i}{\sum_{j=1}^{N} \beta^j} \right) r(\hat{x}^i) \qquad (11)$$

where r(·) is a function that maps from $x_k$ to the quantity of interest.

4. Do Steps 2 and 3 for time steps k = 1, ..., T.
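As an illustration, the following is a minimal sketch of one recursion of Algorithm 3.1 (Steps 2 and 3) for a generic model. The proposal, transition and likelihood PDFs are user-supplied callables and their names are assumptions for this sketch; in particular, the paper constructs the proposal $p_G$ with a single-time-step EKF, which is not reproduced here.

```python
import numpy as np

def pf_step(samples, weights, y_hat, draw_prop, prop_pdf, trans_pdf, lik_pdf, rng):
    """One recursion of Algorithm 3.1 (Steps 2 and 3, weight update of Eq. (10)).

    draw_prop / prop_pdf play the role of the proposal p_G(x_k | x_{k-1}, y_k);
    trans_pdf and lik_pdf are p(x_k | x_{k-1}) and p(y_k | x_k).  All four are
    user-supplied callables (assumptions for this sketch)."""
    new_samples, new_weights = [], []
    for x_prev, b_prev in zip(samples, weights):
        x_new = draw_prop(x_prev, y_hat, rng)
        b_new = b_prev * lik_pdf(y_hat, x_new) * trans_pdf(x_new, x_prev) \
                / prop_pdf(x_new, x_prev, y_hat)            # Eq. (10)
        new_samples.append(x_new)
        new_weights.append(b_new)
    return np.asarray(new_samples), np.asarray(new_weights)

def pf_estimate(samples, weights, r=lambda x: x):
    """Self-normalized estimate of E[r(x_k) | Y_k], Eq. (11)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(w_i * r(x_i) for w_i, x_i in zip(w, samples))
```

A full filter would call pf_step once per time step and pf_estimate whenever an estimate of $E[r(x_k) \mid \hat{Y}_k]$ is required.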

3.2. Reducing sample degradation: recursive resampling and parallel particle filters

Note that it is desirable to have the importance weights $\{\beta^i : i = 1, 2, \ldots, N\}$ be approximately uniform so that all samples contribute significantly in (5), but they become far from uniform as k grows, owing to the recursion in (8) and the fact that $q(X_k \mid \hat{Y}_k) \neq p(X_k \mid \hat{Y}_k)$. Ultimately, a few weights become much larger than the rest, so the effective number of samples is small. Nevertheless, this degradation can be reduced, as described next.

Instead of letting the N samples evolve through time independently (Algorithm 3.1), one can resample when the importance weights become highly non-uniform [16,23,25]. After the resampling, the importance weights become uniform, so the degradation problem is alleviated. The resampling step tends to terminate small-weight samples and duplicate large-weight samples and therefore forces the N samples to concentrate in the high-probability region of $p(x_k \mid \hat{Y}_k)$. Although the resampling step sets the weights back to uniform, the price to pay is that the samples become dependent and therefore collectively carry less information about the state. As a result, the resampling procedure should only be executed when the importance weights become highly non-uniform. This paper proposes the following approach: monitor the coefficient of variation (c.o.v.) of the importance weights, and execute the resampling procedure only when this c.o.v. exceeds a certain threshold, indicating that the variability in the importance weights is large.

Another novel way proposed by this research to alleviate the dependency induced by the resampling step is to conduct several independent PF algorithms and combine all of the obtained samples. Although the samples obtained in a single algorithm can be highly dependent, the samples from different algorithms are completely independent. The resulting algorithm is as follows (a minimal code sketch of the resampling and combination steps is given at the end of Section 3.3):

Algorithm 3.2: parallel PF algorithm with resampling

1. Initialize N samples for each of the L parallel PFs: draw $\hat{x}^{i,j}$ from $p(x_0)$ and set $\beta^{i,j} = 1/N$ for i = 1, ..., N, j = 1, ..., L.

2. Perform the following Steps 3–5 for j = 1, ..., L independently. Since the processes are completely independent, they can be conducted in parallel.

3. At time k, store the previous samples and weights:

$$\tilde{x}^{i,j} = \hat{x}^{i,j}, \qquad \tilde{\beta}^{i,j} = \beta^{i,j} \qquad (12)$$

For i = 1, ..., N, draw $\bar{x}^{i,j}$ from $p_G(x_k \mid x_{k-1} = \tilde{x}^{i,j}, \hat{y}_k)$ and update the importance weight:

$$\bar{\beta}^{i,j} = \tilde{\beta}^{i,j} \cdot \frac{p(\hat{y}_k \mid x_k = \bar{x}^{i,j})\, p(x_k = \bar{x}^{i,j} \mid x_{k-1} = \tilde{x}^{i,j})}{p_G(x_k = \bar{x}^{i,j} \mid x_{k-1} = \tilde{x}^{i,j}, \hat{y}_k)} \qquad (13)$$

4. Compute the c.o.v. of $\{\bar{\beta}^{i,j} : i = 1, \ldots, N\}$. If the c.o.v. is larger than the prescribed threshold, execute the resampling step for i = 1, ..., N:

$$\hat{x}^{i,j} = \bar{x}^{i,j} \quad \text{w.p.} \quad \frac{\bar{\beta}^{i,j}}{\sum_{i=1}^{N} \bar{\beta}^{i,j}} \qquad (14)$$

and set $\beta^{i,j} = 1/N$ for i = 1, ..., N. Otherwise, for i = 1, ..., N:

$$\hat{x}^{i,j} = \bar{x}^{i,j}, \qquad \beta^{i,j} = \frac{\bar{\beta}^{i,j}}{\sum_{i=1}^{N} \bar{\beta}^{i,j}} \qquad (15)$$

5. Store

$$\hat{r}^j_{k,N} = \sum_{i=1}^{N} r(\hat{x}^{i,j}) \cdot \beta^{i,j}$$

6. $E[r(x_k) \mid \hat{Y}_k]$ can then be approximated by

$$E[r(x_k) \mid \hat{Y}_k] \approx \frac{1}{L} \sum_{j=1}^{L} \hat{r}^j_{k,N} \qquad (16)$$

7. Do Steps 2–6 for k = 1, ..., T.

3.3. Advantages and disadvantages of the PF technique

The advantages of the PF technique include: (a) as N (the number of samples per algorithm) approaches infinity, the value of any function of the state $x_k$ estimated by PF converges to its expected value; therefore, the PF technique can be used to validate other methodologies; and (b) parallel computation is possible for PF algorithms. A disadvantage of the PF technique is that it is computationally expensive, especially when the degradation is severe, so that large N and L are needed for the algorithm to converge. In general, the required N and L grow with the size of the effective support region of $p(x_k \mid \hat{Y}_k)$. A simple test for convergence is to add parallel particle filters until the estimated quantity of interest, $r(x_k)$, does not change significantly. It is noted that for linear models with time-varying unknown parameters, a more efficient PF algorithm can be derived, as shown in [24].
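The degradation-control ingredients of Algorithm 3.2 — the c.o.v. trigger, the resampling step in (14)-(15) and the averaging over the L independent filters in (16) — can be sketched as follows, building on the pf_step/pf_estimate helpers sketched above. The function names and the default threshold are assumptions for this sketch (the examples in Section 4 use a 200% threshold, i.e. 2.0).

```python
import numpy as np

def weight_cov(weights):
    """Coefficient of variation of the importance weights (the resampling trigger)."""
    w = np.asarray(weights, dtype=float)
    return w.std() / w.mean()

def resample_if_needed(samples, weights, threshold=2.0, rng=None):
    """Step 4 of Algorithm 3.2, Eqs. (14)-(15): resample with probability proportional
    to the weights when the weight c.o.v. exceeds the threshold; otherwise only
    normalize the weights."""
    rng = rng or np.random.default_rng()
    samples = np.asarray(samples)
    w = np.asarray(weights, dtype=float)
    if weight_cov(w) > threshold:
        idx = rng.choice(len(samples), size=len(samples), p=w / w.sum())
        return samples[idx], np.full(len(samples), 1.0 / len(samples))
    return samples, w / w.sum()

def combine_parallel_estimates(per_filter_estimates):
    """Step 6, Eq. (16): average the estimates of E[r(x_k) | Y_k] over the L filters."""
    return np.mean(per_filter_estimates, axis=0)
```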


4. Examples to compare EKF and PF methods

We present three examples in this section. We generate data that are contaminated by noise for three simulated dynamical systems. With the simulated data, we use identification models derived from these dynamical systems to conduct EKF and PF, and we compare their performance. The goal of these examples is to see whether these techniques produce consistent results.

4.1. Example 1: planar four-story shear building with time-varying system parameters

4.1.1. Data generation

We first describe the system that generates the simulated data. Consider an idealized planar four-degrees-of-freedom (DOF) shear-building system with known time-invariant masses equal to $m_1 = m_2 = m_3 = m_4 = 250{,}000$ kg (the subscript denotes the story number). The inter-story stiffnesses $k_1$, $k_2$, $k_3$ and $k_4$ change through time as shown in Fig. 1 ($k_1$, $k_3$ and $k_4$ drift around certain values, while $k_2$ significantly decreases and then partially recovers). The inter-story viscous damping coefficients $c_1$, $c_2$, $c_3$ and $c_4$ are also time-varying (Fig. 1). The time evolutions of $k_1$, $k_3$, $k_4$, $c_1$, $c_2$, $c_3$ and $c_4$ are Brownian motions with standard deviation of the drift equal to 2% of their mean values during each sampling interval described later.

Fig. 1. Time evolutions of the actual inter-story stiffnesses and dampings (example 1). (Panels show $k_{1,t}$–$k_{4,t}$ (MN/m) and $c_{1,t}$–$c_{4,t}$ (kN/m·sec) versus time (sec).)

The governing equation of this system subject to base excitation is

$$M \ddot{x}_t + C_t \dot{x}_t + K_t x_t = F u_t \qquad (17)$$

where

$$x_t = \begin{bmatrix} x_{4,t} \\ x_{3,t} \\ x_{2,t} \\ x_{1,t} \end{bmatrix}, \quad
M = \begin{bmatrix} m_4 & 0 & 0 & 0 \\ 0 & m_3 & 0 & 0 \\ 0 & 0 & m_2 & 0 \\ 0 & 0 & 0 & m_1 \end{bmatrix}, \quad
F = \begin{bmatrix} -m_4 \\ -m_3 \\ -m_2 \\ -m_1 \end{bmatrix},$$

$$C_t = \begin{bmatrix} c_{4,t} & -c_{4,t} & 0 & 0 \\ -c_{4,t} & c_{4,t}+c_{3,t} & -c_{3,t} & 0 \\ 0 & -c_{3,t} & c_{3,t}+c_{2,t} & -c_{2,t} \\ 0 & 0 & -c_{2,t} & c_{2,t}+c_{1,t} \end{bmatrix}, \quad
K_t = \begin{bmatrix} k_{4,t} & -k_{4,t} & 0 & 0 \\ -k_{4,t} & k_{4,t}+k_{3,t} & -k_{3,t} & 0 \\ 0 & -k_{3,t} & k_{3,t}+k_{2,t} & -k_{2,t} \\ 0 & 0 & -k_{2,t} & k_{2,t}+k_{1,t} \end{bmatrix} \qquad (18)$$

Here $x_{i,t}$ is the displacement of the (i+1)-th floor (the fifth floor is the roof) relative to the ground at time t; $u_t$ is the acceleration at the base (the first floor of the building); and $c_{i,t}$ and $k_{i,t}$ are the inter-story damping coefficient and stiffness of the i-th story, respectively, at time t.
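For reference, the matrices in (17)-(18) can be assembled as in the sketch below; the function name and argument conventions are assumptions for illustration, with the DOFs ordered $(x_4, x_3, x_2, x_1)$ as in (18). With time-varying $k_{i,t}$ and $c_{i,t}$, the matrices are simply re-assembled at each time step.

```python
import numpy as np

def shear_building_matrices(k, c, m=(250e3, 250e3, 250e3, 250e3)):
    """Assemble M, C_t, K_t and F of Eqs. (17)-(18) for the 4-DOF shear building.

    k = (k1, k2, k3, k4) and c = (c1, c2, c3, c4) are the current inter-story
    stiffnesses and damping coefficients, m the story masses.  Illustration only."""
    k1, k2, k3, k4 = k
    c1, c2, c3, c4 = c
    m1, m2, m3, m4 = m
    M = np.diag([m4, m3, m2, m1])
    F = -np.array([m4, m3, m2, m1])
    C = np.array([[ c4,     -c4,      0.0,      0.0],
                  [-c4,  c4 + c3,     -c3,      0.0],
                  [0.0,     -c3,  c3 + c2,      -c2],
                  [0.0,     0.0,     -c2,  c2 + c1]])
    K = np.array([[ k4,     -k4,      0.0,      0.0],
                  [-k4,  k4 + k3,     -k3,      0.0],
                  [0.0,     -k3,  k3 + k2,      -k2],
                  [0.0,     0.0,     -k2,  k2 + k1]])
    return M, C, K, F
```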


We generate the data using white noise for the excitation $u_t$. The observed output $y_t$ consists of the absolute acceleration time histories at the four stories:

$$y_t = \begin{bmatrix} \ddot{x}_{1,t} + u_t \\ \ddot{x}_{2,t} + u_t \\ \ddot{x}_{3,t} + u_t \\ \ddot{x}_{4,t} + u_t \end{bmatrix} + G \cdot v_t = -M^{-1}\left[ C_t \dot{x}_t + K_t x_t \right] + G \cdot v_t \qquad (19)$$

where $v_t \in \mathbb{R}^4 \sim N(0, I)$ are the (stationary) measurement uncertainties for $y_t$, and $G = \mathrm{diag}(g_1, \ldots, g_4)$ is such that the overall signal/noise rms amplitude ratio for each channel is roughly equal to 10. Both the excitation and the observations are sampled at an interval of 0.02 s and are shown in Fig. 2. With the excitation and observations of the system, i.e. $\{u_t : t = 1, \ldots, T\}$ and $\{\hat{y}_t : t = 1, \ldots, T\}$, the goal is to estimate the system states (displacements and velocities) as well as the system parameters (dampings, stiffnesses and the uncertainty parameters) in real time.

Fig. 2. The simulated excitation and observation data (example 1). (Panels show the base input and the absolute accelerations at the 2nd, 3rd and 4th floors and the roof (m/sec²) versus time (sec).)

4.1.2. Identification model

The identification model is now described. The following time-varying linear state-space model is used as the identification model:

$$\frac{d}{dt}\begin{bmatrix} x_t \\ \dot{x}_t \\ \theta_t \end{bmatrix} = \begin{bmatrix} \dot{x}_t \\ -M^{-1} K_t x_t - M^{-1} C_t \dot{x}_t \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ M^{-1} F \\ 0 \end{bmatrix} u_t + \begin{bmatrix} 0 \\ 0 \\ G \end{bmatrix} w_t$$
$$y_t = -M^{-1} K_t x_t - M^{-1} C_t \dot{x}_t + H_t \cdot v_t \qquad (20)$$

where $w_t \sim N(0, I)$; $v_t \sim N(0, I)$; $G \in \mathbb{R}^{12 \times 12}$ is a diagonal matrix whose diagonal entries must be specified (the reason will be explained later); $H_t = \mathrm{diag}(h_{1,t}, h_{2,t}, h_{3,t}, h_{4,t})$, where the diagonal entries are unknown parameters (i.e. the four uncertainty parameters); and $\theta_t \in \mathbb{R}^{12}$ is the vector containing the system parameters (four stiffnesses, four dampings and four uncertainty parameters). The dimension of the state of the identification model is 20 (four displacements, four velocities, four stiffnesses, four dampings and four uncertainty parameters), whereas the dimension of the state in (17) is only eight, since it consists of $x_t$ and $\dot{x}_t$.

To complete the probabilistic identification model, one must also specify the prior PDF for the entire (augmented) state trajectory $\{[\,x_t^T \ \dot{x}_t^T \ \theta_t^T\,]^T : t = 0, \ldots, T\}$. More specifically, for the model in (20), one must specify the following: the prior PDF of $x_0$ and $\dot{x}_0$, the prior PDF of $\theta_0$, and the diagonal entries of the G matrix. Note that the identification model in (20) uses a Brownian-motion prior PDF for the parameter evolution $\{\theta_t : t = 0, \ldots, T\}$ because of the following dynamical equation in (20):

$$\dot{\theta}_t = G \cdot w_t \qquad (21)$$

A diagonal G matrix means that all system parameters are known a priori to drift independently. The effect of the G matrix is similar to that of the forgetting factor often used in adaptive filtering [26]. When the entries of G are large, the system parameters are allowed to drift more freely, relaxing the dependency between parameter values at adjacent time steps; therefore, the identified parameters reflect only the most recent data. The converse is true when the entries of G are small; in this case, the identified parameters can reflect data from the remote past.

In this example, the prior PDF for $x_0$ and $\dot{x}_0$ is taken to be zero-mean Gaussian with large variances; the prior PDF of $\theta_0$ is taken to be Gaussian with mean equal to the actual value of $\theta_0$ and large variances; and the diagonal entries of G are chosen such that, in each time step, each parameter drifts with a coefficient of variation (c.o.v., defined as the standard deviation divided by the mean value) equal to 2%. Recall that for $k_1$, $k_3$, $k_4$, $c_1$, $c_2$, $c_3$ and $c_4$ the actual evolutions (see Fig. 1) are Brownian motions with the same 2% drift c.o.v., i.e. there is no modeling error for the evolutions of $k_1$, $k_3$, $k_4$, $c_1$, $c_2$, $c_3$ and $c_4$. But for $k_2$ and the four uncertainty parameters, the actual evolutions are not Brownian motions (the actual evolution of $k_2$ is shown in Fig. 1; the four uncertainty parameters are actually constant), while the identification model uses a Brownian-motion prior on their evolutions.
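One simple discrete-time reading of this Brownian-motion parameter prior is sketched below: per sampling interval, each parameter receives an independent Gaussian increment whose standard deviation is 2% of the parameter value, which is how the diagonal of G is scaled here. This interpretation and the function name are assumptions for illustration, not the authors' code.

```python
import numpy as np

def drift_parameters(theta_prev, cov_per_step=0.02, rng=None):
    """Illustrative discrete-time reading of the random-walk prior of Eq. (21):
    each parameter drifts per sampling interval with standard deviation equal to
    2% of its value, i.e. the diagonal of G is scaled as g_i ~ 0.02 * theta_i.
    An assumption for this sketch, not the authors' code."""
    rng = rng or np.random.default_rng()
    theta_prev = np.asarray(theta_prev, dtype=float)
    g = cov_per_step * np.abs(theta_prev)      # per-step drift standard deviations
    return theta_prev + g * rng.standard_normal(theta_prev.shape)
```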


Before we can proceed, we first convert (20) to the following discrete-time system by numerical integration over each time step:

$$\begin{bmatrix} x_k \\ \dot{x}_k \\ \theta_k \end{bmatrix} = f_{k-1}\!\left( \begin{bmatrix} x_{k-1} \\ \dot{x}_{k-1} \\ \theta_{k-1} \end{bmatrix}, u_{k-1}, w_{k-1} \right), \qquad
y_k = h_k\!\left( \begin{bmatrix} x_k \\ \dot{x}_k \\ \theta_k \end{bmatrix}, u_k, v_k \right) \qquad (22)$$

where $f_{k-1}$ is evaluated using the Matlab command ODE23.
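The discretization in (22) amounts to integrating the continuous-time model (20) over one sampling interval with the input held fixed. Below is a minimal sketch using SciPy's RK23 integrator (the counterpart of Matlab's ODE23); the function name and the zero-order-hold treatment of the input are assumptions for this sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

def discretize_step(rhs, z_prev, u_prev, dt=0.02):
    """Evaluate f_{k-1} in Eq. (22): propagate the augmented state z = [x; x_dot; theta]
    of Eq. (20) over one sampling interval by numerical integration, holding the input
    at u_prev over the step.  `rhs(t, z, u)` is a user-supplied function returning dz/dt."""
    sol = solve_ivp(lambda t, z: rhs(t, z, u_prev), (0.0, dt),
                    np.asarray(z_prev, dtype=float),
                    method="RK23", rtol=1e-6, atol=1e-9)
    return sol.y[:, -1]
```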

4.1.3. Results

The EKF and PF methods are applied to the generated simulated data. The stiffness, damping and uncertainty-parameter estimates, and the associated 95% confidence intervals from EKF and PF (with N = 200 samples for each of L = 10 parallel PFs and an importance-weight c.o.v. threshold of 200%, using Algorithm 3.2), are shown in Figs. 3–8. (EKF does not provide confidence intervals for the uncertainty-parameter estimates.) For this example, using more samples in PF than NL = 2000 gives little improvement in the convergence, indicating that the results are close to convergence. One can treat the results from PF as a comparison standard, since it gives asymptotically consistent estimates of the conditional means and variances.

In Figs. 3–8, the thick lines indicate the actual parameter evolutions, the thin dashed lines are the conditional means of the identified system parameters, and the thin dotted lines indicate the 95% confidence intervals. The results from EKF are similar to those of PF. Both algorithms successfully track the system parameters. For most parameters, the actual parameter evolutions lie within the 95% confidence bounds. Notice that although the Brownian-motion priors for $k_2$ and the uncertainty parameters do not exactly match their actual evolutions, both Bayesian algorithms can still appropriately track $k_2$ and the uncertainty parameters. Compared with the accuracy of the stiffness estimates, the estimates of the damping and uncertainty parameters are worse and the associated uncertainties are larger. Although EKF and PF perform roughly equally in this example, there is a noticeable difference in the variances of the identified damping from PF, which are slightly larger than those from EKF. For this example, EKF and PF perform approximately equally because there is no highly nonlinear behavior.

Fig. 3. The EKF estimates of inter-story stiffness (example 1). (Four panels: $k_{1,t}$–$k_{4,t}$, stiffness (MN/m) versus time (sec); actual value and EKF mean estimate.)

Fig. 4. The PF estimates of inter-story stiffness (example 1). (Four panels: $k_{1,t}$–$k_{4,t}$, stiffness (MN/m) versus time (sec); actual value and PF mean estimate.)

Fig. 5. The EKF estimates of inter-story damping (example 1). (Four panels: $c_{1,t}$–$c_{4,t}$ versus time (sec); actual value and EKF mean estimate.)

Fig. 6. The PF estimates of inter-story damping (example 1). (Four panels: $c_{1,t}$–$c_{4,t}$ versus time (sec); actual value and PF mean estimate.)

Fig. 7. The EKF estimates of uncertainty parameters $h_{1,t}$, $h_{2,t}$, $h_{3,t}$, $h_{4,t}$ (example 1). (Four panels: actual uncertainty magnitude for $y_{1,t}$–$y_{4,t}$ and EKF estimate versus time (sec).)

Fig. 8. The PF estimates of uncertainty parameters $h_{1,t}$, $h_{2,t}$, $h_{3,t}$, $h_{4,t}$ (example 1). (Four panels: actual uncertainty magnitude for $y_{1,t}$–$y_{4,t}$ and PF mean estimate versus time (sec).)

4.2. Example 2: nonlinear hysteretic damping system with unknown system parameters

4.2.1. Data generation

The previous example is a time-varying linear system. In the current example, consider a time-varying nonlinear system consisting of a single-DOF (SDOF) Bouc-Wen hysteretic damping system [27]. The purpose of this example is to compare the performance of the different methods in tracking the state and unknown parameters of a nonlinear system. The system that generates the data can be described by the following governing equation:

$$\frac{d}{dt}\begin{bmatrix} x_t \\ \dot{x}_t \\ r_t \end{bmatrix} = \begin{bmatrix} \dot{x}_t \\ -\frac{1}{m} r_t + \frac{1}{m} u_t \\ \theta_{1,t}\,\dot{x}_t - \theta_{2,t}\,|\dot{x}_t|\,|r_t|^{\theta_{4,t}-1}\, r_t + \theta_{3,t}\,\dot{x}_t\,|r_t|^{\theta_{4,t}} \end{bmatrix}$$
$$y_t = -\frac{1}{m} r_t + \frac{1}{m} u_t + v_t \qquad (23)$$

where $r_t$ is the restoring force of the SDOF system; m is the mass, which is set to unity during the data generation; $u_t$ is a white-noise excitation force on the mass; $y_t$ is the acceleration measured on the mass; $v_t$ is stationary and such that the overall signal/noise amplitude ratio is 10; and $\theta_{1,t}$, $\theta_{2,t}$, $\theta_{3,t}$, $\theta_{4,t}$ are time-varying system parameters (their actual fluctuations are shown in Figs. 10 and 11, and they are Brownian motions with drift c.o.v. equal to 2% during each sampling interval): $\theta_{1,t}$ is the stiffness, while $\theta_{2,t}$, $\theta_{3,t}$ and $\theta_{4,t}$ are parameters that fine-tune the shape of the hysteresis loop. Note that the Bouc-Wen hysteretic damping system is Markovian in the sense that one can define a finite-dimensional state such that the current system status is completely


characterized by this state. Both the excitation $u_t$ and the observation $\hat{y}_t$ (shown in Fig. 9) are sampled at an interval of 0.5 s (roughly five sample points per oscillation cycle of the system).

Fig. 9. The excitation force $u_t$ and the observed acceleration $\hat{y}_t$ (example 2). (Panels: input force and measured acceleration versus time (sec).)
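A minimal sketch of the right-hand side of (23), suitable for passing to a numerical integrator when simulating the data, is given below. The default parameter values are only placeholders of roughly the magnitudes seen in Figs. 10-11 (the actual parameters are time-varying), and the small offset on $|r_t|$ is a numerical safeguard added for this sketch.

```python
import numpy as np

def bouc_wen_rhs(t, state, u_of_t, m=1.0, theta=(5.0, 1.0, -0.1, 1.0)):
    """Right-hand side of the SDOF Bouc-Wen model of Eq. (23); state = [x, x_dot, r].

    theta = (theta1, theta2, theta3, theta4) are placeholder values; u_of_t is a
    callable returning the excitation force at time t (assumptions for this sketch)."""
    th1, th2, th3, th4 = theta
    x, xd, r = state
    xdd = -r / m + u_of_t(t) / m
    ar = abs(r) + 1e-12            # small offset to avoid 0**(negative) at r = 0
    rd = th1 * xd - th2 * abs(xd) * ar ** (th4 - 1.0) * r + th3 * xd * ar ** th4
    return np.array([xd, xdd, rd])
```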


4.2.2. Identification model

Given the data $u_t$ and $\hat{y}_t$, the goal is to estimate the means and variances of the identified system state and system parameters using the following identification model:

$$\frac{d}{dt}\begin{bmatrix} x_t \\ \dot{x}_t \\ r_t \\ \theta_{1,t} \\ \theta_{2,t} \\ \theta_{3,t} \\ \theta_{4,t} \\ h_t \end{bmatrix} = \begin{bmatrix} \dot{x}_t \\ -\frac{1}{m} r_t + \frac{1}{m} u_t \\ \theta_{1,t}\,\dot{x}_t - \theta_{2,t}\,|\dot{x}_t|\,|r_t|^{\theta_{4,t}-1}\, r_t + \theta_{3,t}\,\dot{x}_t\,|r_t|^{\theta_{4,t}} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ G \end{bmatrix} w_t$$
$$y_t = -\frac{1}{m} r_t + \frac{1}{m} u_t + h_t \cdot v_t \qquad (24)$$

where $w_t \in \mathbb{R}^5 \sim N(0, I)$; $v_t \in \mathbb{R} \sim N(0, 1)$; and m is assumed to be known, so it is not considered one of the uncertain parameters. The prior PDF for $x_0$ and $\dot{x}_0$ is taken to be zero-mean Gaussian with large variances; the prior PDF of $\theta_{1,0}$, $\theta_{2,0}$, $\theta_{3,0}$, $\theta_{4,0}$ and $h_0$ is taken to be Gaussian with mean equal to their actual values at time zero and large variances; and the $5 \times 5$ matrix G in (24) is taken to be diagonal. The diagonal entries of G are chosen such that in each time step each parameter is allowed to drift with a c.o.v. equal to 2%, i.e. there is no modeling error for the evolutions of $\theta_{1,t}$, $\theta_{2,t}$, $\theta_{3,t}$ and $\theta_{4,t}$, but modeling error exists for the evolution of $h_t$ in (24) (the actual $h_t$ is constant rather than a Brownian motion). The continuous-time model is numerically integrated to obtain the discrete-time version of this model, similar to (22), with the sampling interval equal to 0.5 s.

4.2.3. Results

Figs. 10–15 show the identification results of EKF and PF (as before, N = 200 and L = 10, and the importance-weight c.o.v. threshold is 200%, using Algorithm 3.2). For this example, using more samples in PF than NL = 2000 gives little improvement in the convergence, indicating that the results are close to convergence. The results from PF are treated as a comparison standard. As before, the 95% confidence intervals on the parameters and states are indicated by thin dotted lines in Figs. 10–13 and 15. It is found that EKF performs less effectively than PF: at some time instants, the EKF estimates of the stiffness parameter $\theta_{1,t}$ oscillate around the actual evolution (Fig. 10), while this is not seen for PF (Fig. 11). Also, the EKF estimates of $\theta_{2,t}$ (Fig. 10) deviate significantly from those of PF (Fig. 11). For the estimation of displacement, velocity and restoring force, the performance of the two methods is similar (Figs. 12 and 13).

Fig. 10. The EKF estimates of the system parameters (example 2). (Four panels: $\theta_{1,t}$–$\theta_{4,t}$ versus time (sec); actual value and EKF mean estimate.)


Fig. 11. The PF estimates of the system parameters (example 2). (Four panels: $\theta_{1,t}$–$\theta_{4,t}$ versus time (sec); actual value and PF mean estimate.)

Fig. 12. The EKF estimates of the system states (example 2). (Panels: actual states and EKF mean estimates versus time (sec).)


Fig. 13. The PF estimates of the system states (example 2). (Panels: displacement, velocity and restoring force versus time (sec); actual value and PF mean estimate.)

However, PF estimates the uncertainty parameter $h_t$ much better than EKF (Figs. 14 and 15).

Fig. 14. The EKF estimates of the uncertainty parameter (example 2). (Actual uncertainty magnitude and EKF estimate versus time (sec).)

Fig. 15. The PF estimates of the uncertainty parameter (example 2). (Actual uncertainty magnitude and PF mean estimate versus time (sec).)


4.3. Example 3: Lorenz chaotic system

The Lorenz system exhibits chaotic behavior, as discovered by Lorenz [28] when he solved a simplified Rayleigh-Benard problem modeling two-dimensional fluid motion driven by buoyancy due to a temperature difference across its height. The resulting simplified set of differential equations that considers the first few modes of the system is

$$\dot{x}_{1,t} = -s\,(x_{1,t} - x_{2,t})$$
$$\dot{x}_{2,t} = r \cdot x_{1,t} - x_{2,t} - x_{1,t}\, x_{3,t} \qquad (25)$$
$$\dot{x}_{3,t} = x_{1,t}\, x_{2,t} - b\, x_{3,t}$$

where $x_{1,t}$ specifies the time evolution of the stream-function of the first mode, whose contours are the streamlines; $x_{2,t}$ and $x_{3,t}$ specify the time evolutions of the temperature of the first two modes of the system; the parameter s depends on the properties of the fluid (for water the value is typically between 1 and 4); the number b depends on the scales of the modes; and r is the temperature difference. For small r, the system is asymptotically stable, i.e. $x_{1,t}, x_{2,t}, x_{3,t} \to 0$ as $t \to \infty$. For large r, chaos occurs with the so-called butterfly attractor, where the ultimate fate of a trajectory of the system is to wander around two unstable equilibrium points, and the trajectory is extremely sensitive to its initial condition.

4.3.1. Data generation

In this example, the values of s, b and r are set to 3, 1 and 26 (r is large, so the butterfly attractor occurs), and one observes $x_{1,t}$ (contaminated by noise) with a sampling interval of 0.5 s:

$$y_t = x_{1,t} + h \cdot v_t \qquad (26)$$

where h is chosen such that the overall signal/noise ratio is 10. Fig. 16 shows the observed values $\hat{y}_t$, which clearly show that the trajectory of $x_{1,t}$ switches several times between the two unstable equilibrium points at $x_1 = -5$ and $x_1 = 5$ (especially during 0–25 s).

Fig. 16. The plot of $\hat{y}_t$ (example 3).

20 Actual x 1,t

x1,t

10

EKF mean estimate

0 -10 -20 0

5

10

15

20

25

30

35

40

45

50

30 Actual x 2 ,t

x2,t

20

EKF mean estimate

10 0 -10 -20 0

5

10

15

20

25

30

35

40

45

50

60 Actual x 3,t

x3,t

40

EKF mean estimate

20 0 -20 0

5

10

15

20

25

30

35

40

Time (sec) Fig. 17. The EKF estimates of the system states (example 3).

45

50

J. Ching et al. / Probabilistic Engineering Mechanics 21 (2006) 81–96


Fig. 18. The PF estimates of the system states (example 3). (Panels: $x_{1,t}$, $x_{2,t}$, $x_{3,t}$ versus time (sec); actual value and PF mean estimate.)

5. Conclusion

Two Bayesian state-estimation algorithms are examined: the extended Kalman filter (EKF) and the newer particle filter (PF), a stochastic-simulation approach. Their performance is compared using three numerical examples, which show that PF is the better one to use, since EKF can sometimes produce misleading results. The three examples represent three different classes of dynamical systems: a linear model with time-varying system parameters (Section 4.1), a nonlinear hysteretic model (Section 4.2) that can be considered to give moderately nonlinear behavior, and the Lorenz chaotic model (Section 4.3), which gives highly nonlinear behavior. It is believed that PF performed satisfactorily for all examples, judging from the fact that it always provides estimates of the system state and unknown parameters, with associated confidence intervals, that are consistent with their actual values. In theory, PF provides estimates that asymptotically converge to the expected values. It turns out that EKF can track the system state and unknown parameters only for the first example; its performance for the second example is only fair, and it performs poorly for the Lorenz chaotic example. This is consistent with the expectation that EKF is not suitable for highly nonlinear models.

References

[1] Beck JL. Determining models of structures from earthquake records. EERL report no. 78-01, Earthquake Engineering Research Laboratory, California Institute of Technology, Pasadena, California; 1978. <http://caltecheerl.library.caltech.edu/183/>.

[2] Yun CB, Shinozuka M. Identification of nonlinear dynamic systems. J Struct Eng 1980;8(2):187–203.
[3] Hoshiya M, Saito E. Structural identification by extended Kalman filter. J Eng Mech 1984;110(12):1757.
[4] Lin CC, Soong TT, Natke HG. Real-time system identification of degrading structures. J Eng Mech 1990;116(10):2258–74.
[5] Koh CG, See LM. Identification and uncertainty estimation of structural parameters. J Eng Mech 1994;120(6):1219.
[6] Ghanem R, Shinozuka M. Structural-system identification, I: Theory. J Eng Mech 1995;121(2):255–64.
[7] Shinozuka M, Ghanem R. Structural-system identification, II: Experimental verification. J Eng Mech 1995;121(2):265–73.
[8] Glaser SD. Insight into liquefaction by system identification. Geotechnique 1996;46(4):641–55.
[9] Kalman RE. A new approach to linear filtering and prediction problems. J Basic Eng 1960;82D:35–45.
[10] Kalman RE, Bucy RS. New results in linear filtering and prediction problems. J Basic Eng 1961;83D.
[11] Jazwinski AH. Stochastic processes and filtering theory. New York: Academic Press; 1970.
[12] Julier SJ, Uhlmann JK, Durrant-Whyte HF. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans Autom Control 2000;45(3):477–82.
[13] Wan EA, van der Merwe R. The unscented Kalman filter for nonlinear estimation. In: Proceedings of the symposium on adaptive systems for signal processing, communication and control, Lake Louise, Alberta, Canada: IEEE; 2000.
[14] Alspach DL, Sorenson HW. Nonlinear Bayesian estimation using Gaussian sum approximations. IEEE Trans Autom Control 1972;17(4):439–48.
[15] Gordon NJ, Salmond DJ, Smith AFM. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc F 1993;140(2):107–13.
[16] Kitagawa G. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. J Comput Graph Stat 1996;5:1–25.


[17] Doucet A, de Freitas JFG, Gordon N. Introduction to sequential Monte Carlo methods. In: Doucet A, de Freitas JFG, Gordon NJ, editors. Sequential Monte Carlo methods in practice. Berlin: Springer; 2000.
[18] van der Merwe R, de Freitas N, Doucet A, Wan EA. The unscented particle filter. Report CUED/F-INFENG/TR380, Department of Engineering, Cambridge University; 2000.
[19] van der Merwe R, Wan EA. Gaussian mixture sigma-point particle filters for sequential probabilistic inference in dynamic state-space models. In: Proceedings of the IEEE international conference on acoustics, speech and signal processing (ICASSP), Hong Kong; 2003.
[20] Yoshida I, Sato T. Health monitoring algorithm by the Monte Carlo filter based on non-Gaussian noise. J Nat Disaster Sci 2002;24(2):101–7.
[21] Maruyama O, Hoshiya M. Nonlinear filters using Monte Carlo integration for conditional random fields. In: Proceedings of the ninth international conference on applications of statistics and probability in civil engineering, San Francisco; July 2003.

[22] Doucet A, Andrieu C. Particle filtering for partially observed Gaussian state space models. Report CUED/F-INFENG/TR393, Department of Engineering, Cambridge University; 2000.
[23] Doucet A, Godsill S. On sequential simulation-based methods for Bayesian filtering. Report CUED/F-INFENG/TR310, Department of Engineering, Cambridge University; 1998.
[24] Ching J, Beck JL, Porter KA, Shaikhutdinov R. Real-time Bayesian state estimation of uncertain dynamical systems. EERL report no. 2004-01, Earthquake Engineering Research Laboratory, California Institute of Technology, Pasadena, California; 2004. <http://caltecheerl.library.caltech.edu/370/>.
[25] Liu JS, Chen R. Sequential Monte Carlo methods for dynamical systems. J Am Stat Assoc 1998;93:1032–44.
[26] Ljung L, Gunnarsson S. Adaptation and tracking in system identification—a survey. Automatica 1990;26(1):7–21.
[27] Wen YK. Equivalent linearization for hysteretic systems under random excitations. J Appl Mech 1980;47(1):150–4.
[28] Lorenz EN. Deterministic nonperiodic flow. J Atmos Sci 1963;20:130–41.