PARTICLE FILTER ADAPTATION BASED ON EFFICIENT SAMPLE SIZE


14th IFAC Symposium on System Identification, Newcastle, Australia, 2006

Ondřej Straka and Miroslav Šimandl

Department of Cybernetics and Research Centre: Data - Algorithms - Decision, University of West Bohemia in Pilsen, Univerzitní 8, 306 14 Plzeň, Czech Republic, [email protected], [email protected]

Abstract: The paper deals with the particle filter in state estimation of a discrete-time nonlinear non-Gaussian system. The aim of the paper is to design a sample size adaptation technique that guarantees a prescribed estimate quality. The proposed technique considers an unadapted particle filter with a fixed number of samples drawn directly from the filtering probability density function and modifies the sample size of the adapted particle filter so that the estimate quality of both filters remains identical. The adaptation technique is based on the effective sample size and utilizes the sampling probability density function and an implicit form of the filtering probability density function. Application of the particle filter with the sample size adaptation technique is illustrated in a numerical example. Copyright © 2006 IFAC

Keywords: state estimation, nonlinear systems, Monte Carlo method, particle filter, sample size, effective sample size

1. INTRODUCTION

Recursive state estimation of discrete-time nonlinear stochastic dynamic systems from noisy measurement data has been a subject of considerable research interest for the last three decades. The general solution of the state estimation problem is described by the Bayesian recursive relations (BRR). A closed-form solution of the BRR is available for a few special cases only, so an approximate solution usually has to be applied.

Since the nineties, the particle filter (PF) has dominated recursive nonlinear state estimation due to its easy implementation in very general settings and to cheap, readily available computational power. The PF solves the BRR using Monte Carlo (MC) methods, particularly the importance sampling method, and approximates the continuous state space by a cloud of samples (particles) with associated relative weights.

The fundamental paper dealing with the MC solution of the BRR was published by Gordon et al. (1993), who proposed the first effective PF, the bootstrap filter. Many improvements of the bootstrap filter have been proposed since, see for example Doucet et al. (2001). Among these improvements, mainly the design of the sampling probability density function (pdf), one of the key parameters of the PF, should be mentioned. The design is based either on elaborating the bootstrap filter prior sampling pdf (Liu and Chen, 1998; Pitt and Shephard, 1999; Šimandl and Straka, 2003) or on utilizing a filtering pdf produced by another nonlinear filter as the sampling pdf for the PF (van der Merwe and Wan, 2003).

Another key parameter of the PF affecting estimate quality is the sample size (i.e. the number of particles); nonetheless, sample size setting has been disregarded for a long time. The sample size is usually set empirically. Some advances in suitable sample size setting were made in Šimandl and Straka (2002), where the sample size was considered time invariant and the Cramér-Rao (CR) bound (Šimandl et al., 2001) was used as a gauge for quality evaluation of the PF. Sample size adaptation techniques (SSAT) were treated for example in Fox (2003), Koller and Fratkina (1998), and Straka and Šimandl (2004). The problem of sample size setting is of great importance; nevertheless, it has not been solved satisfactorily yet.

This paper builds on the paper Straka and Šimandl (2004), which proposed the localization-based (LB) SSAT based on assessing the position of the samples and keeping the quality of the sample set constant regardless of the sampling pdf. Thus, the PF with a suboptimal sampling pdf can achieve, by increasing the sample size by means of the LB-SSAT, the same quality as if the optimal sampling pdf had been available. As the LB-SSAT analyzes the location of the samples, it may be a time-demanding step for high-dimensional systems with multimodal pdf's of the noises.

Efficiency of the importance sampling method can be measured through the effective sample size (ESS) (Liu, 2001). The goal of the paper is to use the ESS to adapt the sample size while keeping a fixed estimate quality.

The paper is organized as follows: State estimation by the PF and a short survey of the SSAT's are given in Section 2. Then the SSAT based on the ESS is presented in Section 3. Further, application of the proposed SSAT is illustrated in a numerical example in Section 4 and finally Section 5 concludes the paper.

2. STATE ESTIMATION BY THE PARTICLE FILTER

This section deals with state estimation using the PF, gives a brief overview of the sampling pdf's of the PF, and presents a short survey of some SSAT's proposed in the literature.

Consider the discrete-time nonlinear stochastic system given by the state equation (1) and the measurement equation (2):

x_{k+1} = f_k(x_k, e_k),   k = 0, 1, 2, ...,   (1)

z_k = h_k(x_k, v_k),   k = 0, 1, 2, ...,   (2)

where the vectors x_k ∈ R^n and z_k ∈ R^m represent the state of the system and the measurement at time k, respectively, e_k ∈ R^n and v_k ∈ R^m are state and measurement white noises, mutually independent and independent of x_0, with known pdf's p(e_k) and p(v_k), respectively, f_k: R^n × R^n → R^n and h_k: R^n × R^m → R^m are known vector functions, and the pdf p(x_0) of the initial state x_0 is known. The system given by (1) and (2) can alternatively be described by the transition pdf p(x_k|x_{k-1}) and the measurement pdf p(z_k|x_k).

The general solution of the state estimation problem in the form of the filtering pdf p(x_k|z^k), with z^k \triangleq [z_0^T, ..., z_k^T]^T, is provided by the BRR. The idea of the PF in nonlinear state estimation is to approximate the filtering pdf p(x_k|z^k) by the empirical filtering pdf r_{N_k}(x_k|z^k), which is given by N_k random samples of the state {x_k^{(i)}}_{i=1}^{N_k} and associated weights {w_k^{(i)}}_{i=1}^{N_k}. The general algorithm of the PF (Liu et al., 2001) can be summarized as Alg. 1. Note that in this case the sample size remains fixed, i.e. N_0 = N_1 = ... = N.
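For reference, the BRR providing the filtering pdf have the standard prediction and filtering form (written out here explicitly for the reader's convenience; this form is standard and consistent with the implicit filtering pdf (22) used in Section 3):

p(x_k|z^{k-1}) = \int p(x_k|x_{k-1}) \, p(x_{k-1}|z^{k-1}) \, dx_{k-1},

p(x_k|z^k) = \frac{p(z_k|x_k) \, p(x_k|z^{k-1})}{\int p(z_k|x_k) \, p(x_k|z^{k-1}) \, dx_k},

initialized with p(x_0|z^{-1}) = p(x_0).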

Alg. 1 Particle filter

Initialization: Let k = 0. Generate N_0 samples {x_0^{(i)}}_{i=1}^{N_0} from the prior pdf p(x_0|z^{-1}), compute the weights

\tilde{w}_0^{(i)} = p(z_0|x_0^{(i)}),   i = 1, 2, ..., N_0,

and normalize them, i.e. w_0^{(i)} = \tilde{w}_0^{(i)} / \sum_{j=1}^{N_0} \tilde{w}_0^{(j)}. The empirical pdf r_{N_0}(x_0|z^0), given as

r_{N_0}(x_0|z^0) = \sum_{i=1}^{N_0} w_0^{(i)} \delta(x_0 - x_0^{(i)}),   (3)

approximates the filtering pdf p(x_0|z^0). The function \delta(\cdot) is the Dirac function defined as \delta(x) = 0 for x \neq 0 and \int \delta(x) dx = 1.

Resampling: Generate a new set {x_k^{*(i)}}_{i=1}^{N_k} by resampling with replacement N_k times from {x_k^{(i)}}_{i=1}^{N_k} with probability P(x_k^{*(i)} = x_k^{(i)}) = w_k^{(i)} and set w_k^{*(i)} = 1/N_k.

Filtering: Generate a new set of samples {x_{k+1}^{(i)}}_{i=1}^{N_{k+1}} from the global sampling pdf \pi(x_{k+1}|x_k^{*(1:N_k)}, z_{k+1}), where

\pi(x_{k+1}|x_k^{*(1:N_k)}, z_{k+1}) = \sum_{i=1}^{N_k} v_k^{(i)} \pi(x_{k+1}|x_k^{*(i)}, z_{k+1}).   (4)

To generate the samples {x_{k+1}^{(i)}}_{i=1}^{N_{k+1}}, firstly the indices {j_i}_{i=1}^{N_{k+1}} have to be drawn from the multinomial distribution with parameters given by the primary weights {v_k^{(i)}}_{i=1}^{N_k}. Then each sample x_{k+1}^{(i)} is generated from the local sampling pdf \pi(x_{k+1}|x_k^{*(j_i)}, z_{k+1}). The weights {w_{k+1}^{(i)}}_{i=1}^{N_{k+1}} associated with the samples {x_{k+1}^{(i)}}_{i=1}^{N_{k+1}} are calculated using the following relation

\tilde{w}_{k+1}^{(i)} = \frac{p(z_{k+1}|x_{k+1}^{(i)}) \, p(x_{k+1}^{(i)}|x_k^{*(j_i)})}{v_k^{(j_i)} \, \pi(x_{k+1}^{(i)}|x_k^{*(j_i)}, z_{k+1})} \, w_k^{*(j_i)}   (5)

and normalized, i.e. w_{k+1}^{(i)} = \tilde{w}_{k+1}^{(i)} / \sum_{j=1}^{N_{k+1}} \tilde{w}_{k+1}^{(j)}. The empirical pdf r_{N_{k+1}}(x_{k+1}|z^{k+1}) is given by the samples {x_{k+1}^{(i)}} and the weights {w_{k+1}^{(i)}} as

r_{N_{k+1}}(x_{k+1}|z^{k+1}) = \sum_{i=1}^{N_{k+1}} w_{k+1}^{(i)} \delta(x_{k+1} - x_{k+1}^{(i)}).

Increase k and iterate to step Resampling.

Note that the algorithm uses a general sampling pdf \pi(x_{k+1}|x_k^{*(1:N_k)}, z_{k+1}) based on utilization of the current measurement z_{k+1}, so it is not possible to distinguish between a time update step and a measurement update step as it is in the bootstrap filter with the prior sampling pdf. This general sampling pdf covers the prior sampling pdf as well as other sampling pdf's, e.g. the optimal sampling pdf or the auxiliary sampling pdf of the auxiliary PF (Pitt and Shephard, 1999). All these sampling pdf's will be described in more detail further in this section.

Also note that the resampling step need not be executed at each time instant because it introduces further simulation error into the estimates. Liu and Chen (1998) suggest setting up a resampling schedule based on the ESS so that the resampling step is executed at some time instants only. If resampling does not take place, it holds that x_k^{*(i)} = x_k^{(i)} and w_k^{*(i)} = w_k^{(i)} for each i.
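To make Alg. 1 concrete, the following minimal sketch implements one Resampling and Filtering cycle for the special case in which the transition pdf is used as the local sampling pdf (the prior sampling pdf introduced below) and all primary weights are uniform. The additive Gaussian noise model, the function names and the fixed sample size are illustrative assumptions made here, not part of the algorithm above; the usage values roughly follow the example of Section 4.

```python
import numpy as np

def pf_step_prior(x, w, z, f, q_std, r_std, rng):
    """One Resampling + Filtering cycle of Alg. 1 with the transition pdf as local sampling pdf.

    x, w         -- current samples (N,) and normalized weights (N,)
    z            -- current measurement z_{k+1} (scalar, z = x + v assumed)
    f            -- state function, x_{k+1} = f(x_k) + e_k (additive Gaussian noise assumed)
    q_std, r_std -- assumed standard deviations of the state and measurement noises
    """
    N = x.size
    # Resampling: draw N indices with probabilities given by the weights
    idx = rng.choice(N, size=N, p=w)
    x_star = x[idx]                                   # x_k^{*(i)}, weights become 1/N
    # Filtering: sample each new particle from the transition pdf
    x_new = f(x_star) + q_std * rng.standard_normal(N)
    # Weight update (5): with this sampling pdf the ratio reduces to the likelihood
    w_new = np.exp(-0.5 * ((z - x_new) / r_std) ** 2)
    return x_new, w_new / w_new.sum()

# usage sketch for a scalar model
rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(0.001), size=500)
w = np.full(500, 1.0 / 500)
x, w = pf_step_prior(x, w, z=0.3, f=lambda s: s - 0.2 * s**2,
                     q_std=np.sqrt(0.1), r_std=0.01, rng=rng)
```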

2.1 A brief overview of the sampling pdf's

To demonstrate the SSAT in a numerical example, this paper will consider the prior, auxiliary, and optimal sampling pdf's, which are described in this subsection.

The paper Gordon et al. (1993) proposed the bootstrap filter, which considered the sampling pdf in the form

\pi(x_k|x_{k-1}^{(1:N_{k-1})}, z_k) = \sum_{i=1}^{N_{k-1}} \frac{1}{N_{k-1}} p(x_k|x_{k-1}^{(i)}).   (6)

This is probably the simplest sampling pdf because each local sampling pdf is given by the transition pdf. In this paper the sampling pdf (6) will be called the prior sampling pdf because it does not utilize the current measurement z_k, which is available.

The paper Pitt and Shephard (1999) proposed the auxiliary PF with the global sampling pdf in the form

\pi(x_k|x_{k-1}^{(1:N_{k-1})}, z_k) = \sum_{i=1}^{N_{k-1}} v_k^{(i)} p(x_k|x_{k-1}^{(i)}).   (7)

The primary weight v_k^{(i)} respects the quality of the sample x_{k-1}^{(i)} with respect to the measurement z_k through the following relation

v_k^{(i)} \propto p(z_k|\mu_k^{(i)}),   (8)

where \mu_k^{(i)} is the mean, the mode or another likely value associated with the pdf p(x_k|x_{k-1}^{(i)}). As the PF with the sampling pdf (7) is called the auxiliary PF, this sampling pdf will be called the auxiliary sampling pdf in this paper. The auxiliary sampling pdf offers in some cases higher estimate quality than the prior sampling pdf. Some improvements of the auxiliary sampling pdf have been proposed in Andrieu et al. (2001) and Šimandl and Straka (2003).

In some special cases it is possible to find the local sampling pdf in the form \pi(x_k|x_{k-1}^{(i)}, z_k) = p(x_k|x_{k-1}^{(i)}, z_k), and the corresponding optimal global sampling pdf is then given by

\pi(x_k|x_{k-1}^{(1:N_{k-1})}, z_k) = \sum_{i=1}^{N_{k-1}} \frac{1}{N_{k-1}} p(x_k|x_{k-1}^{(i)}, z_k).   (9)

The sampling pdf (9) minimizes the variance of the weights, which is closely related to the estimate quality, and thus it will be called the optimal sampling pdf.
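For the common special case of additive Gaussian noises, x_k = f(x_{k-1}) + e_k with e_k ~ N(0, q), and a linear measurement z_k = x_k + v_k with v_k ~ N(0, r), both the primary weights (8) and the optimal local sampling pdf in (9) can be evaluated in closed form. The scalar sketch below is an illustration under exactly these assumptions; the model and the function names are not taken from the text above.

```python
import numpy as np

def auxiliary_primary_weights(x_prev, z, f, r):
    """Primary weights (8) with mu_k^(i) chosen as the predicted mean f(x_{k-1}^(i))."""
    mu = f(x_prev)
    v = np.exp(-0.5 * (z - mu) ** 2 / r)      # proportional to p(z_k | mu_k^(i))
    return v / v.sum()

def sample_optimal_local(x_prev_i, z, f, q, r, rng):
    """Draw from p(x_k | x_{k-1}^(i), z_k) for the additive Gaussian scalar model.

    p(x_k|x_{k-1}) = N(x_k; f(x_{k-1}), q) and p(z_k|x_k) = N(z_k; x_k, r);
    the product of the two Gaussians in x_k is again Gaussian.
    """
    var = 1.0 / (1.0 / q + 1.0 / r)
    mean = var * (f(x_prev_i) / q + z / r)
    return rng.normal(mean, np.sqrt(var))
```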

2.2 Suitable sample size specification

The sample size represents a key parameter of the PF significantly affecting estimate quality. It can be set at each time instant before the filtering step of Alg. 1. There are several papers dealing with a suitable sample size specification, and some of the proposed techniques will be briefly described now. The paper Fox (2003) proposed an algorithm for adaptive sample size setting. The probability that the distance between the true filtering pdf and the approximate filtering pdf is lower than some \epsilon is studied there, and the number of samples is adapted in time to keep this probability constant. The true filtering pdf is approximated by a discrete, piecewise constant distribution and the comparison is accomplished by the Kullback-Leibler distance (KLD). A drawback of the proposed algorithm can be seen in the assumption of a known true filtering pdf and in the approximations used for the sample size calculation.

Another algorithm for sample size adaptation was published in Koller and Fratkina (1998). The algorithm is based on the idea that it would be suitable to keep the sum of the likelihoods of the whole sample set constant instead of keeping the sample size constant.
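Returning to the KLD-based adaptation of Fox (2003) described above, the sample size bound used there is usually quoted in the following form (reproduced here from the general literature on KLD-sampling rather than from the description above, so the exact constants should be checked against Fox (2003)):

N_k = \frac{b-1}{2\epsilon} \left( 1 - \frac{2}{9(b-1)} + \sqrt{\frac{2}{9(b-1)}} \, z_{1-\delta} \right)^3,

where b is the number of histogram bins with non-empty support, \epsilon is the allowed KLD between the sample-based approximation and the true pdf, and z_{1-\delta} is the standard normal quantile.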

The LB-SSAT proposed in Straka and Šimandl (2004) is based on monitoring the quality of the samples generated from the sampling pdf. The quality of the sample set is assessed according to the position of the samples. Roughly speaking, a criterion is first set up with respect to the measurement pdf; the samples are then drawn from the sampling density until the criterion respecting their position is met. The LB-SSAT allows the estimate quality to be independent of the PF sampling pdf. This means that the PF with a high-quality sampling pdf uses fewer samples than the PF with a low-quality sampling pdf while both PF's provide estimates of the same quality. Conversely, if only a low-quality sampling pdf is available, the LB-SSAT can increase the sample size of the PF to offer the same estimate quality as if a high-quality sampling pdf had been available.

The paper Šimandl and Straka (2002) considered a constant sample size; the algorithm for suitable sample size specification assessed the quality of the PF with a given sample size according to the distance between the mean square error (MSE) matrix of the conditional state estimate mean value and the Cramér-Rao bound.

Although the sample size specification is one of the key issues of the PF, it has not been satisfactorily solved yet.

3. SAMPLE SIZE ADAPTATION TECHNIQUE BASED ON IMPLICIT FILTERING PDF

As was already mentioned, the paper builds on the LB-SSAT. Although that technique is theoretically simple, it requires permanent checking of the sample positions during the process of their generation, which may be a time-demanding task for high-dimensional systems with a multimodal pdf of the measurement noise. The aim of the paper is to propose a SSAT that respects the sampling pdf of the PF and adapts the sample size to provide a prespecified estimate quality without checking the sample positions.

This section deals with the design of the implicit-filtering-density-based SSAT (IFD-SSAT) based on the ESS, the derivation of the filtering pdf used in the technique, and the application of the technique for the prior sampling pdf.

Firstly, the IFD-SSAT based on the ESS will be designed. The paper by Kong et al. (1994) proposed the notion of the ESS, which describes the number of samples drawn from the filtering pdf necessary to attain the same estimate quality as N_k samples drawn from the sampling pdf. It is given as

ESS_k(N_k) = \frac{N_k}{1 + d(\pi, p)},   (10)

where d(\pi, p) is the \chi^2 distance defined as

d(\pi, p) = \mathrm{var}_\pi\{w(x_k)\} = \mathrm{var}_\pi\left\{ \frac{p(x_k|z^k)}{\pi(x_k|x_{k-1}^{(1:N_{k-1})}, z_k)} \right\}.   (11)

Note that for the purpose of resampling the distance d(\pi, p) is usually empirically estimated by the coefficient of variation cv^2 of the weights w_k^{(i)},

cv^2 = \frac{N_k \sum_{j=1}^{N_k} (w_k^{(j)})^2}{\left( \sum_{j=1}^{N_k} w_k^{(j)} \right)^2} - 1.   (12)

Now, suppose one is interested in evaluating

\mu_k = E_p\{g(x_k)\} = \int g(x_k) \, p(x_k|z^k) \, dx_k;   (13)

then the ratio

\frac{\mathrm{var}_p\{g(x_k)\}}{\mathrm{var}_\pi\{g(x_k) w(x_k)\}} \approx \frac{1}{1 + \mathrm{var}_\pi\{w(x_k)\}}   (14)

represents the efficiency of estimating \mu_k using the samples from the filtering pdf relative to the efficiency using the samples from the sampling pdf. The SSAT will use the relation (10). Firstly, a fixed ESS denoted as N_k^* is specified and consequently the sample size N_k is evaluated as

N_k = \lceil N_k^* (1 + \mathrm{var}_\pi\{w(x_k)\}) \rceil.   (15)

Note that the ceiling function \lceil \cdot \rceil was used because N_k must be an integer. Now, the term \mathrm{var}_\pi\{w(x_k)\} will be calculated. For simplicity, the filtering pdf will be denoted as p(x_k) and the sampling pdf as \pi(x_k). It holds that

\mathrm{var}_\pi\{w(x_k)\} = E_\pi\{(w(x_k))^2\} - (E_\pi\{w(x_k)\})^2,   (16)

where

E_\pi\{(w(x_k))^2\} = \int \frac{[p(x_k)]^2}{[\pi(x_k)]^2} \, \pi(x_k) \, dx_k = \int \frac{[p(x_k)]^2}{\pi(x_k)} \, dx_k   (17)

and

E_\pi\{w(x_k)\} = \int \frac{p(x_k)}{\pi(x_k)} \, \pi(x_k) \, dx_k = \int p(x_k) \, dx_k = 1.   (18)

Now, substituting (17) and (18) into (16), the relation (15) can be rewritten as

N_k = \left\lceil N_k^* \int \frac{[p(x_k)]^2}{\pi(x_k)} \, dx_k \right\rceil.   (19)

The relation (19) represents the core of the SSAT. The trouble is that an explicit form of the filtering pdf p(x_k) is unknown, as the filtering pdf is estimated by the PF. Fortunately, it is possible to find an implicit form of the filtering pdf and to utilize it in (19). For that reason the proposed SSAT will be called the implicit-filtering-density-based SSAT. The implicit form of the filtering pdf p(x_k|z^k) for (19) will be derived using the Bayesian relations. Consider the empirical filtering pdf

r_{N_{k-1}}(x_{k-1}|z^{k-1}) = \sum_{i=1}^{N_{k-1}} w_{k-1}^{(i)} \delta(x_{k-1} - x_{k-1}^{(i)}).   (20)
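The empirical counterpart of (15), with var_\pi\{w(x_k)\} replaced by the estimate cv^2 from (12), is straightforward to compute from the weights. The short sketch below is only an illustration of that computation; the function name and the target ESS value are arbitrary.

```python
import math
import numpy as np

def adapted_sample_size(weights, ess_target):
    """Adapted sample size N_k = ceil(N_k^* (1 + cv^2)), cf. (12) and (15).

    weights    -- importance weights of the current particle set (need not be normalized)
    ess_target -- the prespecified effective sample size N_k^*
    """
    w = np.asarray(weights, dtype=float)
    cv2 = w.size * np.sum(w ** 2) / np.sum(w) ** 2 - 1.0   # coefficient of variation (12)
    return math.ceil(ess_target * (1.0 + cv2))

# usage: strongly uneven weights inflate cv^2 and hence the adapted sample size
print(adapted_sample_size(np.array([0.7, 0.1, 0.1, 0.05, 0.05]), ess_target=10))
```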

To simplify the following relations, the weights will be taken equal, w_{k-1}^{(i)} = 1/N_{k-1}. Then the predictive pdf p(x_k|z^{k-1}) is given as

p(x_k|z^{k-1}) = \sum_{i=1}^{N_{k-1}} \frac{1}{N_{k-1}} p(x_k|x_{k-1}^{(i)}).   (21)

According to the BRR the filtering pdf p(x_k|z^k) is given in an implicit form as

p(x_k|z^k) = C^{-1} p(z_k|x_k) \sum_{i=1}^{N_{k-1}} \frac{1}{N_{k-1}} p(x_k|x_{k-1}^{(i)}),   (22)

where C = \int p(z_k|x_k) \sum_{i=1}^{N_{k-1}} \frac{1}{N_{k-1}} p(x_k|x_{k-1}^{(i)}) \, dx_k is a normalization constant.

Now, the relation (19) for sample size calculation in the IFD-SSAT will be evaluated for the prior sampling pdf using the implicit form of the filtering pdf. For simplicity, the integral \int \frac{[p(x_k|z^k)]^2}{\pi_p(x_k|x_{k-1}^{(1:N_{k-1})}, z_k)} \, dx_k will be denoted as I_p. Substituting (6) and (22) into the integral in (19) gives

I_p = \int \frac{\left[ p(z_k|x_k) \sum_{i=1}^{N_{k-1}} \frac{1}{N_{k-1}} p(x_k|x_{k-1}^{(i)}) \right]^2}{C^2 \sum_{i=1}^{N_{k-1}} \frac{1}{N_{k-1}} p(x_k|x_{k-1}^{(i)})} \, dx_k,   (23)

which can be simplified to

I_p = \frac{\int [p(z_k|x_k)]^2 \sum_{i=1}^{N_{k-1}} \frac{1}{N_{k-1}} p(x_k|x_{k-1}^{(i)}) \, dx_k}{\left[ \int p(z_k|x_k) \sum_{i=1}^{N_{k-1}} \frac{1}{N_{k-1}} p(x_k|x_{k-1}^{(i)}) \, dx_k \right]^2}.   (24)

Closed-form solutions of the integrals in (24) cannot usually be found, so an approximate solution has to be sought. A numerical solution of the integrals is feasible only for a low-dimensional state. As the integrals are of the form \int p(x) f(x) \, dx, where p(x) is a pdf and f(x) is an arbitrary function, an MC approximate solution of the integrals can be employed (Tanner, 1996).

As the adapted sample size N_k fulfills N_k \geq N_k^*, the first N_k^* samples drawn from the sampling pdf in the filtering step of the PF algorithm can be used to find an approximate solution of the integrals. The relation for calculating N_k using the MC approximation then consists solely of the likelihoods of the samples. The samples and the corresponding likelihoods are thus utilized twice, i.e. for the approximation of the filtering pdf and for the calculation of N_k, and therefore the calculation of the sample size N_k almost does not increase the computational complexity of the PF. Naturally, the calculated sample size N_k impacts the computational demands of the PF.
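Under the prior sampling pdf, the MC approximation described above reduces to ratios of sample likelihoods: with N_k^* samples x^{(1)}, ..., x^{(N_k^*)} drawn from (6), one obtains I_p ≈ N_k^* \sum_i p(z_k|x^{(i)})^2 / (\sum_i p(z_k|x^{(i)}))^2. This explicit formula is not written out above and follows from estimating both integrals in (24) with the same samples; the sketch below illustrates it for a scalar Gaussian likelihood, with illustrative function and variable names.

```python
import math
import numpy as np

def ifd_sample_size_prior(x_pred, z, ess_target, r_var):
    """N_k for the prior sampling pdf via an MC estimate of I_p in (24).

    x_pred     -- N_k^* samples drawn from the prior sampling pdf (6)
    z          -- current measurement z_k
    ess_target -- prespecified effective sample size N_k^*
    r_var      -- assumed measurement noise variance (Gaussian likelihood p(z_k|x_k))
    """
    lik = np.exp(-0.5 * (z - x_pred) ** 2 / r_var)   # p(z_k|x^{(i)}) up to a constant,
    # the constant cancels in the ratio below
    i_p = x_pred.size * np.sum(lik ** 2) / np.sum(lik) ** 2
    return math.ceil(ess_target * i_p)

# usage sketch: propagate a few particles through the state equation and adapt N_k
rng = np.random.default_rng(1)
x_prev = rng.normal(0.0, 0.03, size=10)
x_pred = (x_prev - 0.2 * x_prev**2) + np.sqrt(0.1) * rng.standard_normal(10)
print(ifd_sample_size_prior(x_pred, z=0.2, ess_target=10, r_var=1e-4))
```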

4. NUMERICAL EXAMPLE

To illustrate the proposed IFD-SSAT, a nonlinear Gaussian system is considered with a one-dimensional state and a one-dimensional measurement. The measurement equation of the system is linear to allow calculation of the optimal sampling pdf. The system is given by the relations

x_{k+1} = x_k - 0.2 x_k^2 + e_k,   (25)

z_k = x_k + v_k,   (26)

where x_0 is given by the Gaussian pdf with zero mean and variance 0.001, i.e. p(x_0) = N{x_0; 0, 0.001}. The state noise e_k and the measurement noise v_k are described by p(e_k) = N{e_k; 0, 0.1} and p(v_k) = N{v_k; 0, 0.0001}, respectively. The state is estimated by the PF for k = 0, 1, ..., 14. The PF considers p(x_0|z^{-1}) = p(x_0). The parameter N_k^* of the IFD-SSAT is N_k^* = 10. Fig. 1 contains the time evolution of the MSE of the adapted PF. The MSE is given as \Pi_k = E[(x_k - \hat{x}_k)^2], where \hat{x}_k is the mean of x_k described by r_{N_k}(x_k|z^k). The figure also contains the corresponding CR bound (stars), which serves as a lower bound for the MSE. The MSE and the CR bound have been estimated using the MC method with M = 1000 simulations. Fig. 2 contains the time evolution of the average sample size \bar{N}_k, \bar{N}_k = \frac{1}{1000} \sum_{s=1}^{1000} N_k(s), where N_k(s) is the adapted sample size in the s-th MC simulation. The PF utilizes the optimal (9) (solid), prior (6) (dashed), and auxiliary (7) (dot-dashed) sampling pdf's, and also the filtering pdf (22) as a sampling pdf (dotted).
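For this particular model the optimal local sampling pdf in (9) is available in closed form, which is why the linear measurement (26) was chosen. As a worked check (not stated explicitly above), combining p(x_k|x_{k-1}) = N{x_k; x_{k-1} - 0.2 x_{k-1}^2, 0.1} with p(z_k|x_k) = N{z_k; x_k, 0.0001} gives

p(x_k|x_{k-1}, z_k) = N{x_k; m_k, \sigma^2},   \sigma^2 = \left( \frac{1}{0.1} + \frac{1}{0.0001} \right)^{-1} \approx 9.99 \cdot 10^{-5},

m_k = \sigma^2 \left( \frac{x_{k-1} - 0.2 x_{k-1}^2}{0.1} + \frac{z_k}{0.0001} \right),

so the optimal local sampling pdf is dominated by the very accurate measurement, which is consistent with the only slightly increased sample size reported below for the optimal sampling pdf.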

Fig. 1. MSE of the adapted PF and the CR bound.

Fig. 2. Average adapted sample size.

It can be seen that the IFD-SSAT in the PF with low-quality sampling pdf's (prior and auxiliary) increases the sample size more than 100 times to achieve the same estimate quality, measured by the MSE, as the PF with the sampling pdf given by the filtering pdf (22) and with only N_k^* = 10 samples. On the other hand, in the case of the optimal sampling pdf, which is a high-quality sampling pdf, the sample size of the PF is increased only slightly.

5. CONCLUSION

The paper dealt with the particle filter for the nonlinear state estimation problem. The implicit-filtering-density-based sample size adaptation technique has been proposed to modify the sample size, which is a key parameter of the PF. The technique is based on the ESS and utilizes the sampling pdf and the implicit form of the filtering pdf. It keeps the ESS, which indicates the quality of the samples, at a prespecified level. The PF with an arbitrary sampling pdf and the adapted sample size provides the same estimate quality as the PF with the sampling pdf given by the filtering pdf and with a given sample size. To allow application of the implicit-filtering-density-based sample size adaptation technique for high-dimensional state estimation, the MC approximation of the ESS has been suggested. Application of the technique for different sampling pdf's of the PF has been illustrated in a numerical example.

6. ACKNOWLEDGMENT The work was supported by the Ministry of Education, Youth and Sports of the Czech Republic, project No. 1M6798555601.

REFERENCES

Andrieu, Ch., M. Davy and A. Doucet (2001). Improved auxiliary particle filtering: Applications to time-varying spectral analysis. In: Proceedings of the IEEE Workshop on Statistical Signal Processing.
Doucet, A., N. de Freitas and N. Gordon, Eds. (2001). Sequential Monte Carlo Methods in Practice. Springer. New York.
Fox, D. (2003). Adapting the sample size in particle filters through KLD-sampling. International Journal of Robotics Research 22, 985-1003.
Gordon, N., D. Salmond and A.F.M. Smith (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings-F 140, 107-113.
Koller, D. and R. Fratkina (1998). Using learning for approximation in stochastic processes. In: Proc. 15th International Conf. on Machine Learning. Morgan Kaufmann, San Francisco, CA. pp. 287-295.
Kong, A., J.S. Liu and W.H. Wong (1994). Sequential imputations and Bayesian missing data problems. J. Amer. Statist. Assoc. 89(425), 278-288.
Liu, J.S. (2001). Monte Carlo Strategies in Scientific Computing. Springer Verlag.
Liu, J.S. and R. Chen (1998). Sequential Monte Carlo methods for dynamic systems. J. Amer. Statist. Assoc. 93(443), 1032-1044.
Liu, J.S., R. Chen and T. Logvinenko (2001). A theoretical framework for sequential importance sampling with resampling. In: Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science. Springer.
Pitt, M.K. and N. Shephard (1999). Filtering via simulation: auxiliary particle filters. J. Amer. Statist. Assoc. 94, 590-599.
Šimandl, M. and O. Straka (2002). Nonlinear estimation by particle filters and Cramér-Rao bound. In: Proceedings of the 15th Triennial World Congress of the IFAC. Barcelona. pp. 79-84.
Šimandl, M. and O. Straka (2003). Sampling density design for particle filters. In: Proceedings of the 13th IFAC Symposium on System Identification. Rotterdam.
Šimandl, M., J. Královec and P. Tichavský (2001). Filtering, predictive and smoothing Cramér-Rao bounds for discrete-time nonlinear dynamic filters. Automatica 37(11), 1703-1716.
Straka, O. and M. Šimandl (2004). Sample size adaptation for particle filters. In: Preprints of the 16th Symposium on Automatic Control in Aerospace (A. Nebylov, Ed.). Vol. 1. Saint Petersburg, Russia. pp. 444-449.
Tanner, M.A. (1996). Tools for Statistical Inference. Springer Series in Statistics. 3rd ed. Springer Verlag. New York.
van der Merwe, R. and E.A. Wan (2003). Sigma-point particle filters for sequential probabilistic inference in dynamic state-space models. In: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP). IEEE. Hong Kong.
