Signal Processing 83 (2003) 151–167
www.elsevier.com/locate/sigpro

Phase tracking: what do we gain from optimality? Particle filtering versus phase-locked loops

P.O. Amblard^{a,*}, J.M. Brossier^b, E. Moisan^a

^a Laboratoire des Images et des Signaux, Groupe Nonlinéaire, UMR CNRS 5083, ENSIEG, BP 46, 38402 Saint-Martin d'Hères cedex, France
^b ST-Microelectronics, AST Grenoble Lab, 12 rue Jules Horowitz, 38019 Grenoble cedex, France

* Corresponding author. Tel.: +33-76-82-71-06; fax: +33-76-82-63-84. E-mail address: [email protected] (P.O. Amblard).

Received 11 March 2001; received in revised form 12 July 2002

Abstract

This paper studies the problem of tracking a Brownian phase with linear drift observed to within one digital modulation and one additive white Gaussian noise. This problem is of great importance as it models the problem of carrier synchronization in digital communications. The ultimate performances achievable for this problem are evaluated and are compared to the performances of three solutions of the problem. The optimal filter cannot be explicitly calculated, and one goal of the paper is to implement it using recent sequential Monte-Carlo techniques known as particle filtering. This approach is compared to more traditional loops such as the Costas loop and the decision feedback loop. Moreover, since the phase has a linear drift, the loops considered are second-order loops. To make fair comparisons, we exploit all the known information to put the loops in their best configurations (optimal step sizes of the loops). We show that asymptotically, the loops and the particle filter are equivalent in terms of mean square error. However, using Monte-Carlo simulations we show that the particle filter outperforms the loops when considering the mean acquisition time (convergence rate), and we argue that the particle filter is also better than the loops when dealing with the important problem of mean time between cycle slips. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Digital phase-locked loops; Costas loop; Decision feedback loop; Particle filtering

1. Introduction and aim

Of great importance in communications systems is the problem of phase estimation. This problem occurs in the early stages of a receiver, especially in the demodulation stage. Due to the problem of synchronization between the transmitter and the receiver and to Doppler shifts, the baseband received signal, after analog-to-digital conversion, is in general disturbed by

a phase term that must be eliminated [10]. In general the residual phase is time varying, and the problem of estimating the phase can be viewed as a tracking problem. The problem of phase estimation was solved a long time ago by engineers, who designed the celebrated phase-locked loop [7]. This analog device has been shown to be an extended Kalman filter and hence is an approximate solution of the optimal nonlinear filter. We concentrate here on the discrete-time problem, for which similar devices are called digital phase-locked loops (DPLL) [10]; hereinafter, we will omit the word "digital". The problem we study here is to



recover the residual phase after demodulation and matched filtering. We assume that matched filtering is perfect (perfect symbol synchronization and no residual inter-symbol interference). The output of the matched filter is therefore the residual phase multiplicatively corrupted by a digital modulation (the symbols of interest) and corrupted by an additive noise. Classical algorithms to track the residual phase are the decision feedback loop (DFL) and the Costas loop (CL). In practical situations, there is a constant frequency shift between the transmitter's clock and the receiver's clock, and the oscillators suffer from jitter. The residual phase can therefore be modeled efficiently by a Brownian motion (in discrete time) with linear drift. This a priori information can be taken into account and leads to the design of second-order phase-locked loops: a second filter is added in the main loop to filter out the linear drift [2]. The model of the phase can also be used to design the optimal filter [9]. However, the optimal filter is obtained as a recursion equation on the a posteriori probability density function, and for the problem at hand this equation cannot lead to a practical implementation. We must then use an approximate solution. In the past 10 years, Monte-Carlo solutions of very complicated problems have been developed enormously. For example, the so-called Markov Chain Monte-Carlo (MCMC) methods have allowed the implementation of the Bayesian paradigm in very complicated cases [15]. The optimal nonlinear filtering problem has also inherited Monte-Carlo techniques with the development of the so-called particle filter [3-5,8,11]. The end of the century has seen an increased interest in the application of particle filtering to different problems such as radar/sonar signal processing [11,13], deconvolution, tracking, etc. (see [4] for many applications and references). Here we apply the technique of particle filtering to implement the optimal estimator of a phase. We assume that the phase follows a Brownian motion with linear drift and that it is observed to within one digital modulation and one additive white Gaussian noise. The main goals of the study are to obtain: (1) the ultimate performances for the problem of tracking a Brownian phase with linear drift; (2) the performances of the so-called second-order DFL and CL;

(3) a comparison between the loops and the optimal filter implemented via Monte-Carlo techniques.

The structure of the paper is as follows. The next section defines the problem and provides three solutions to it: particle filtering, the Costas loop and the DFL. Section 3 is devoted to the study of the asymptotic performances of the loops, to the comparison of these performances with the posterior Cramér-Rao bound, and to the experimental performances of the particle filter. In Section 4, we turn to the convergence of the loops and of the particle filter, and we also provide a discussion of the behavior of the three algorithms with respect to the so-called cycle slip problem. A discussion ends the paper.

2. The problem of phase tracking and three solutions

The aim of this section is to present the problem of phase tracking and to give three algorithms: an implementation of the optimal solution using Monte-Carlo techniques, and two suboptimal algorithms. We begin with a description of the problem.

2.1. Model

In communications systems over passband channels, the high-frequency received signal is demodulated using an analog device. The demodulated signal then passes through a matched filter, and it can be shown that correctly sampling the output of the matched filter provides sufficient statistics to recover the emitted symbols [10]. In practice, however, the demodulation is never perfect, and the sampling is never synchronized with the emitter. Therefore, the residual phase distortion and the best sampling instants must be estimated: this is the problem of synchronization. In this paper, we concentrate on the problem of phase synchronization, and we assume that the sampling is perfect. Therefore, assuming that there is no intersymbol interference, the sampled output of the matched filter is written

  y_k = a_k e^{iθ_k} + n_k,   (1)

where the sequence a_k is an independent and identically distributed (i.i.d.) sequence of complex valued discrete random variables (the symbols to be recovered); each variable takes its value in


{exp(2iπl/N)}, l = 0,…,N−1, with probability 1/N. θ_k is the residual phase distortion that must be eliminated, and n_k is a complex, circular, white Gaussian noise with power E[|n_k|²] = σ_n². As usual, circularity means that n_k ≐ n_k e^{iα} in distribution, for all α. This implies for a Gaussian noise that the real and imaginary parts of n_k are independent and have the same power σ_n²/2.

In practice, there is a constant frequency shift between the transmitter's clock and the receiver's clock. Thus, the phase distortion is linear. Furthermore, clocks are never perfect, and the oscillators suffer from jitter. Therefore, we must add the jitters to the model of the phase: this results in a Brownian phase with a linear drift. We then use this information as a priori information, and we can present our problem as

  θ_k = θ_{k-1} + δ + w_k,
  y_k = a_k e^{iθ_k} + n_k,   (2)

where the unknown state θ_k, which is observed through a nonlinear function, must be estimated. In this model, δ is the unknown constant drift, w_k is a white Gaussian noise with zero mean and variance σ_w², and the initial condition is unknown.
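For concreteness, model (2) is easy to simulate. The sketch below is ours, not part of the original paper; the parameter values for δ, σ_w and σ_n are illustrative (Python with NumPy).

```python
import numpy as np

def simulate_phase_model(T, delta=0.5, sigma_w=0.1, sigma_n=0.5, seed=0):
    """One trajectory of model (2): Brownian phase with linear drift,
    observed through a BPSK modulation and circular complex AWGN."""
    rng = np.random.default_rng(seed)
    # theta_k = theta_{k-1} + delta + w_k (theta_0 absorbed into the cumulative sum)
    theta = np.cumsum(delta + sigma_w * rng.standard_normal(T))
    a = rng.choice([-1.0, 1.0], size=T)          # i.i.d. BPSK symbols
    # circular complex Gaussian noise with E|n_k|^2 = sigma_n^2
    n = (sigma_n / np.sqrt(2)) * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
    y = a * np.exp(1j * theta) + n
    return theta, a, y
```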
2.2. Optimal nonlinear filter and its particle implementation

Formulated as it is, the problem of estimating the phase is a problem of nonlinear filtering [9]. The state to be estimated is Markovian, hence the name hidden Markov model (HMM) [6]. Solving the nonlinear filtering problem in a Bayesian approach [14] amounts to finding the a posteriori probability of the state conditioned on the past values of the observation. Specifically, the equations

  θ_k = f(θ_{k-1}, w_k),
  y_k = g(θ_k, n_k)

represent a physical quantity θ_k which is measured by a nonlinear device g; w_k is an i.i.d. sequence, whereas n_k is a white noise, independent of the sequence w_k. Note that all the quantities can be vectors of appropriate dimensions. Then, all the statistical information needed to estimate functions of the state θ_{0:k} = (θ_0,…,θ_k) from the observations y_{1:k} = (y_1,…,y_k) is contained in the a posteriori probability density¹ p(θ_{0:k} | y_{1:k}). If

¹ We assume that probability measures admit a density with respect to the Lebesgue measure.


we only want to estimate the state at time k, we only need the marginal density p(θ_k | y_{1:k}). Additionally, in the problem of phase tracking, we need an evolutive algorithm. Using the fact that the state is Markovian, it is easy to show the following recursion equation [9]:

  p(θ_k | y_{1:k}) = p(y_k | θ_k) ∫ p(θ_k | θ_{k-1}) p(θ_{k-1} | y_{1:k-1}) dθ_{k-1} / p(y_k | y_{1:k-1}),   (3)

which links the marginal a posteriori density at time k to its value at time k − 1. This equation is the solution of the filtering problem. However, to be of practical interest, this equation must be used in the design of estimators, such as the minimum mean square error estimator, which is none other than the a posteriori mean [17]. Unfortunately, calculating explicit values of estimators using integrals that involve Eq. (3) is usually impossible. Therefore, we must use approximations (such as the extended Kalman filter) or numerical methods to perform the integrations. Among these methods, Monte-Carlo methods are very powerful, and have been the subject of increased interest in the last decade. The application of the Monte-Carlo technique to nonlinear filtering dates back to the late 1960s, but its use was developed in the 1990s with the dramatic increase of the power of computers (see [3,5] and references therein). The accepted name for the technique is now "particle filtering", and we now recall the basics of this filtering method, following the unifying view of [3,5].

2.2.1. Importance sampling

As stated above, once the a posteriori density p(θ_{0:k} | y_{1:k}) is calculated, we usually use it to evaluate estimators of the form

  E_p[f(θ_{0:k})] = ∫ f(θ_{0:k}) p(θ_{0:k} | y_{1:k}) dθ_{0:k}.

Since in general the integral cannot be evaluated, one possibility is to employ a Monte-Carlo approach. The aim is then to draw random variables according to the density p(θ_{0:k} | y_{1:k}), and then to approximate the expected value by a sample mean. This approach can be shown to be efficient under mild conditions [15]. Of course, the approach works if it


is easy to draw samples from the density. Unfortunately, this is rarely the case, and other methods must be employed. One of them is called importance sampling [15]: it uses a proposal distribution q(θ_{0:k} | y_{1:k}) whose support contains the support of p(θ_{0:k} | y_{1:k}), and which is easy to draw samples from. N draws x_{0:k}(i), i = 1,…,N, are thus generated according to q, and the expected value is approximated by

  Ê_p[f(θ_{0:k})] = (1/N) Σ_{i=1}^N f(x_{0:k}(i)) w*(i),

  w*(i) = p(x_{0:k}(i) | y_{1:k}) / q(x_{0:k}(i) | y_{1:k}).

The quantities w*(i) are called the importance weights. According to the law of large numbers, this approximation converges almost surely to the expected value as N approaches infinity. This seems simple, but there is a problem with the weights. Writing

  w*(i) = p(y_{1:k} | x_{0:k}(i)) p(x_{0:k}(i)) / (p(y_{1:k}) q(x_{0:k}(i) | y_{1:k}))

shows that the evaluation of the weights requires the evaluation of the integral p(y_{1:k}) = ∫ p(y_{1:k} | θ_{0:k}) p(θ_{0:k}) dθ_{0:k}, which is usually impossible to calculate. But we also have

  p(y_{1:k}) = ∫ (p(y_{1:k} | θ_{0:k}) p(θ_{0:k}) / q(θ_{0:k} | y_{1:k})) q(θ_{0:k} | y_{1:k}) dθ_{0:k}
             = E_q[p(y_{1:k} | θ_{0:k}) p(θ_{0:k}) / q(θ_{0:k} | y_{1:k})]

and therefore

  E_p[f(θ_{0:k})] = E_q[f(θ_{0:k}) p(y_{1:k} | θ_{0:k}) p(θ_{0:k}) / q(θ_{0:k} | y_{1:k})] / E_q[p(y_{1:k} | θ_{0:k}) p(θ_{0:k}) / q(θ_{0:k} | y_{1:k})].

This form of the expected value is more suitable for the approximation, which becomes

  Ê_p[f(θ_{0:k})] = Σ_{i=1}^N f(x_{0:k}(i)) w(i) / Σ_{i=1}^N w(i),

  w(i) = p(y_{1:k} | x_{0:k}(i)) p(x_{0:k}(i)) / q(x_{0:k}(i) | y_{1:k}),

where the x_{0:k}(i) are again N independent draws from the proposal density q. This approximation is biased for a finite number of particles, but still converges almost surely as the number of particles approaches infinity [3,5].

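The self-normalized estimator above translates directly into code. The sketch below is a generic illustration of ours: log_p_unnorm plays the role of log p(y_{1:k} | θ_{0:k}) p(θ_{0:k}), and the weights are handled in the log domain for numerical stability.

```python
import numpy as np

def snis_estimate(f, log_p_unnorm, log_q, sample_q, N, rng):
    """Self-normalized importance sampling:
    E_p[f] ~= sum_i f(x_i) w(i) / sum_i w(i), with w(i) = p_unnorm(x_i) / q(x_i)."""
    x = sample_q(N, rng)                  # N independent draws from the proposal q
    log_w = log_p_unnorm(x) - log_q(x)    # unnormalized log-weights
    w = np.exp(log_w - log_w.max())       # rescaling avoids overflow; it cancels in the ratio
    return np.sum(f(x) * w) / np.sum(w)
```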
2.2.2. Sequential importance sampling

The next step in developing the particle filter is to obtain a recursive algorithm. It can be shown that a sufficient condition to obtain a recursive importance sampling is that the proposal density q(θ_{0:k} | y_{1:k}) factorizes as q(θ_{0:k-1} | y_{1:k-1}) q(θ_k | θ_{0:k-1}, y_{1:k}) [3,5]. It is then very easy to show that the weights w_k(i) may be recursively updated via

  w_k(i) = w̃_{k-1}(i) p(y_k | x_k(i)) p(x_k(i) | x_{k-1}(i)) / q(x_k(i) | x_{0:k-1}(i), y_{1:k}),

  w̃_k(i) = w_k(i) / Σ_{j=1}^N w_k(j),

where x_k(i), i = 1,…,N, are N draws from q(θ_k | θ_{0:k-1}, y_{1:k}), and x_{0:k}(i) = (x_{0:k-1}(i), x_k(i)), i = 1,…,N. It has been shown [3] that the proposal q(θ_k | θ_{0:k-1}, y_{1:k}) = p(θ_k | θ_{k-1}, y_k) minimizes the variance of the weights conditional on θ_{0:k-1} and y_{1:k}. Unfortunately, this density is usually impossible to calculate, but it can be approximated. Other choices are, however, possible. The simplest choice is to use the state equation and to sample from p(θ_k | θ_{k-1}). The problem with this proposal is that the evolution of the particles does not depend on the observation.

But whatever the choice of proposal density, the biggest problem of the recursive importance sampling algorithm is the degeneracy of the weights. Indeed, it can be observed (and shown, see for example [3,5,11]) that the weights of all but one particle tend to zero as time increases. This is easily understood since the particles evolve according to a diffusion process, and the likelihood of all the particles decreases as time goes on. This is especially noticeable when the proposal is p(θ_k | θ_{k-1}) (pure diffusion), but remains true when the proposal is controlled by the new observation (e.g. p(θ_k | θ_{k-1}, y_k)). Therefore, particles must interact in some way to move toward the true state trajectory. The interaction is usually done via a resampling of the particles according to the empirical law provided by the weights. Hence, when the resampling occurs, unlikely (small weight) particles will probably die, whereas likely particles will probably proliferate.

The difficult point here is to decide whether to resample or not. We adopt a heuristic argument: when the weights all equal 1/N, the set of particles is very efficient; when most of the particles have a small weight, the set is not very efficient. Hence, we measure the efficiency of the set of particles by the entropy of the weights

  H_w̃(k) = − Σ_{i=1}^N w̃_k(i) log₂ w̃_k(i).

The entropy is maximum (log₂ N bits) when the set is the most efficient, and is minimum (0 bit) when the set is the least efficient. Therefore, we decide to resample when the entropy H_w̃(k) is lower than a given threshold. For all practical purposes, the exact value of the threshold is not critical, but:

• if the threshold is too low, the algorithm will rarely resample, and the quality of the estimation will be poor;
• if the threshold is too high, the algorithm will resample often, leading to a time-consuming algorithm.

Therefore, the compromise is in between! Note that in some works this problem is eluded, and resampling is done at each iteration. A sketch of the entropy criterion is given below.

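A minimal transcription of the entropy test (ours); the threshold value used in the usage example is an illustrative choice, as the paper does not prescribe one.

```python
import numpy as np

def needs_resampling(w_tilde, threshold_bits):
    """Entropy of the normalized weights w~_k(i); resample when it falls below the
    threshold. The entropy ranges from 0 bit (one particle carries all the weight)
    up to log2(N) bits (uniform weights)."""
    w = np.asarray(w_tilde)
    safe = np.where(w > 0.0, w, 1.0)                 # log2(1) = 0 for zero weights
    entropy = -np.sum(np.where(w > 0.0, w * np.log2(safe), 0.0))
    return entropy < threshold_bits

# example: resample when the entropy drops below half of its maximum
# needs_resampling(w, threshold_bits=0.5 * np.log2(len(w)))
```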
2.2.3. Particle filter for phase synchronization

In this section, we detail the nonlinear optimal filter implemented using the Monte-Carlo approach for the problem of phase tracking. We recall that the phase follows a random walk with an unknown linear drift. This parameter must therefore be incorporated in the model. We then have the

  state equations: θ_k = θ_{k-1} + δ_{k-1} + w_k,  δ_k = δ_{k-1},
  observation equation: y_k = a_k e^{iθ_k} + n_k,


where n_k and w_k are i.i.d. and mutually independent; n_k is a complex circular Gaussian noise with power σ_n²; w_k is Gaussian, zero mean, with variance σ_w². Note that the variances are assumed to be known and that the digital modulation a_k is assumed to be a PSK modulation. Below, however, all simulations and theoretical calculations are performed with a binary phase shift keying (BPSK) modulation. There are two parameters to be estimated (the drift and the phase). These parameters are the coordinates of the particles, and the notation for the particles is

  x_k(i) = (x_k¹(i), x_k²(i))ᵀ = (θ̂_k, δ̂_k)ᵀ.

The particle filter then consists in the following algorithm:

(1) Initialization: draw N particles x_0(i) according to p(θ_0, δ); initialize their weights to w̃_0(i) = 1/N.
(2) For k ≥ 1 do
  (a) prediction step: move the particles according to the instrumental law x_k(i) ∼ q(θ_k | x_{0:k-1}(i), y_{1:k}), ∀i = 1,…,N, and set x_{0:k}(i) = (x_{0:k-1}(i), x_k(i));
  (b) correction step: update the weights according to

    w_k(i) = w̃_{k-1}(i) p(y_k | x_k(i)) p(x_k(i) | x_{k-1}(i)) / q(x_k(i) | x_{0:k-1}(i), y_{1:k}), ∀i = 1,…,N,
    w̃_k(i) = w_k(i) / Σ_{j=1}^N w_k(j), ∀i = 1,…,N;

  (c) resampling step: if the entropy H_w̃(k) is lower than a fixed threshold, resample according to the discrete probability law defined by the weights w̃_k(i), and set w̃_k(i) = 1/N, ∀i = 1,…,N;
  (d) estimation:

    θ̂_k = Ê[θ_k | y_{1:k}] = Σ_{i=1}^N w̃_k(i) x_k¹(i),
    δ̂_k = Ê[δ_k | y_{1:k}] = Σ_{i=1}^N w̃_k(i) x_k²(i).


Below, the instrumental density is p(x_k(i) | x_{k-1}(i)). The weights can then be updated very easily, since the update only needs the likelihood. In the case of binary equiprobable symbols, a_k = ±1, the likelihood takes the form

  p(y_k | θ_k, δ_k) = (1/(πσ_n²)) exp(−(|y_k|² + 1)/σ_n²) cosh((2/σ_n²) Re(y_k e^{−iθ_k})).

Remark 1. Resampling is done using the cumulative distribution function of the weights w̃_k(i). Several improvements can be proposed, such as kernel estimators, the use of interpolators, etc. In the following sections, the resampling is performed using a linear interpolation of the cumulative distribution function of the weights.

Remark 2. The constant parameter δ gives rise to an estimation problem. The chosen model for its estimation is of course δ_k = δ_{k-1}, implying that the second coordinate x_k²(i) of the particles remains constant in time. The coordinate is determined by the initial condition, and the particles do not explore their space well enough to provide a good estimation of δ. The estimation of δ can therefore be very bad, even with a great number of particles. To overcome the problem, better resampling strategies can be adopted (see Remark 1). A simple strategy consists in "shaking" the particles along their second coordinate after the resampling, by adding a random variable (Gaussian or uniform) with a small variance (typically 1/N). This is equivalent to regularizing the discrete probability law defined by the weights, the regularization being a convolution by the density used to shake the particles. Note that this approach is detailed in [12], where kernel estimators are used for regularization. An important point is also discussed there: the variance of the variable added after resampling should depend on the coordinate of the particles. For example, in the problem considered here, the shaking may not be the same for the phase and for the constant parameter δ (see [12] for details).

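Assembling the steps above gives the following sketch (ours, not the authors' code) of the filter of this section for BPSK: the prior p(x_k | x_{k-1}) serves as instrumental density, the weights are updated with the likelihood above, resampling is triggered by the entropy test, and the drift coordinate is "shaken" after resampling as in Remark 2. The entropy threshold (half of log₂ N), the multinomial resampling (instead of the interpolated c.d.f. of Remark 1), and the initial supports are illustrative assumptions.

```python
import numpy as np

def particle_filter_phase(y, sigma_w, sigma_n, N=500, seed=0):
    """Particle filter for theta_k = theta_{k-1} + delta_{k-1} + w_k, delta_k = delta_{k-1},
    y_k = a_k exp(i theta_k) + n_k, with equiprobable BPSK symbols a_k = +/-1."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, N)     # x^1: phase coordinate
    delta = rng.uniform(-1.0, 1.0, N)         # x^2: drift coordinate (constant in time)
    w = np.full(N, 1.0 / N)
    H_min = 0.5 * np.log2(N)                  # entropy threshold (illustrative choice)
    theta_hat = np.empty(len(y))
    delta_hat = np.empty(len(y))
    for k, yk in enumerate(y):
        # (a) prediction: move the particles with the prior p(x_k | x_{k-1})
        theta = theta + delta + sigma_w * rng.standard_normal(N)
        # (b) correction: log cosh(u) = logaddexp(u, -u) - log 2; the factor
        # exp(-(|y|^2+1)/sigma_n^2)/(pi sigma_n^2) is common to all particles
        u = 2.0 * np.real(yk * np.exp(-1j * theta)) / sigma_n**2
        log_like = np.logaddexp(u, -u)
        w = w * np.exp(log_like - log_like.max())
        w = w / w.sum()
        # (c) resampling when the weight entropy is too low
        safe = np.where(w > 0.0, w, 1.0)
        H = -np.sum(np.where(w > 0.0, w * np.log2(safe), 0.0))
        if H < H_min:
            idx = rng.choice(N, size=N, p=w)  # multinomial resampling
            theta, delta = theta[idx], delta[idx]
            # shake the drift coordinate with variance 1/N (Remark 2)
            delta = delta + np.sqrt(1.0 / N) * rng.standard_normal(N)
            w = np.full(N, 1.0 / N)
        # (d) estimation: a posteriori means
        theta_hat[k] = np.sum(w * theta)
        delta_hat[k] = np.sum(w * delta)
    return theta_hat, delta_hat
```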
2.3. Sub-optimal algorithms: decision feedback loop, Costas loop

Several algorithms exist that estimate the phase of a modulated signal (see [10] for a recent and exhaustive review in the context of digital communications). Among these algorithms, the DFL and the CL can be viewed as two extreme cases of the maximum likelihood estimator. To show this, we consider the simpler problem of estimating a constant phase. If y_k = a_k e^{iθ} + n_k has been measured from k = 1 to N, and if the sequence a_k is a BPSK sequence (a_k = ±1), then the likelihood function is written

  p(y_{1:N} | θ) = (1/(πσ_n²)^N) Π_{k=1}^N exp(−(|y_k|² + 1)/σ_n²) cosh((2/σ_n²) Re(y_k e^{−iθ})),

and we obtain for the derivative of the log-likelihood

  ∂ log p(y_{1:N} | θ)/∂θ = Σ_{k=1}^N (2/σ_n²) Im(y_k e^{−iθ}) tanh((2/σ_n²) Re(y_k e^{−iθ})).

Of course, finding the maximum likelihood estimator analytically is impossible, but the last expression can be used to obtain an approximate solution using a gradient algorithm of the form

  φ_k = φ_{k-1} + μ Im(y_k e^{−iφ_{k-1}}) tanh((2/σ_n²) Re(y_k e^{−iφ_{k-1}})).

Two limiting cases are of great interest: the case of a good signal-to-noise ratio, and the case of a bad signal-to-noise ratio.

• Good signal-to-noise ratio: the argument of the hyperbolic tangent is large, and the hyperbolic tangent acts like the sign function. Furthermore, â_k = Sign(Re(y_k e^{−iφ_{k-1}})) is an estimate of the symbol a_k, and that leads to the DFL

  φ_k = φ_{k-1} + μ Im(â_k y_k e^{−iφ_{k-1}}).

• Bad signal-to-noise ratio: the argument of the hyperbolic tangent is small, and the hyperbolic tangent is linear. Furthermore, since 2 Im(z) Re(z) = Im(z²), we obtain the CL

  φ_k = φ_{k-1} + μ Im(y_k² e^{−2iφ_{k-1}}).


Since these algorithms are specifically designed for the estimation of a constant phase, they cannot work well for the problem at hand: the estimation of a phase with linear drift. The algorithms will always have a bias because the drifting phase is difficult to track. To overcome this problem, one possibility is to filter out the bias by adding a second algorithm that estimates the drift. This leads to the design of second-order loops. The DFL and CL are then written

  φ_k = φ_{k-1} + δ_{k-1} + μ_1 ε_k,
  δ_k = δ_{k-1} + μ_2 ε_k,

where

  ε_k^{costas} = Im(y_k² e^{−2i(φ_{k-1}+δ_{k-1})}),
  ε_k^{dfl} = Im(y_k e^{−i(φ_{k-1}+δ_{k-1})}) Sign(Re(y_k e^{−i(φ_{k-1}+δ_{k-1})})).

This kind of algorithm is also known as a multistep algorithm [2]. The following section is devoted to the asymptotic performances of the three algorithms and to their comparison with the ultimate performances achievable; a direct implementation of the loops is sketched below.

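In code, the two second-order loops take only a few lines. The sketch below is ours and follows the loop equations above, with the error signal selected by a flag.

```python
import numpy as np

def second_order_loop(y, mu1, mu2, kind="dfl", phi0=0.0, delta0=0.0):
    """Second-order DFL or Costas loop:
    phi_k = phi_{k-1} + delta_{k-1} + mu1 * eps_k, delta_k = delta_{k-1} + mu2 * eps_k."""
    phi, delta = phi0, delta0
    phi_out = np.empty(len(y))
    for k, yk in enumerate(y):
        z = yk * np.exp(-1j * (phi + delta))       # rotate by the predicted phase
        if kind == "dfl":
            eps = np.imag(z) * np.sign(np.real(z))  # decision feedback error signal
        else:
            eps = np.imag(z * z)                    # Costas error signal
        phi = phi + delta + mu1 * eps
        delta = delta + mu2 * eps
        phi_out[k] = phi
    return phi_out
```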
3. Phase tracking: posterior Cramér-Rao bound and performances of the loops

When solving the problem of tracking the phase with the optimal nonlinear filter, knowledge of the statistics of the noises is explicitly taken into account. Indeed, knowledge of the probability density function of w_k (and hence of its power) allows the particles to explore the state space; knowledge of the probability density function of the observation noise n_k allows us to quantify the importance of the particles using the likelihood. However, the suboptimal algorithms do not take this knowledge into account. Comparing the particle filter and the loops is then a difficult task. To make a fair comparison, we must use the knowledge of the variances of the noises in the suboptimal algorithms. One way to take this information into account is to evaluate the asymptotic mean square error of the loops as a function of σ_w and σ_n and of the step sizes μ_1 and μ_2; once the asymptotic mean square error is evaluated, we can find the step sizes that minimize it. Furthermore, to evaluate the quality of these estimators, we must compare them to the ultimate performances any estimator of the phase can achieve. It is well known that these performances are bounded by the posterior Cramér-Rao bound [17]. In the next sections, we present the bound and the performances of the loops. The last paragraph is then devoted to the comparison of the loops with the particle filter and the bound.

3.1. Cramér-Rao bound

To obtain the posterior Cramér-Rao bound, we must evaluate the Fisher information contained in p(θ_k, y_{1:k}). The variance of any estimator is greater than the inverse of the Fisher information [17]. Here, we use the recent results of Tichavsky et al. [16] to obtain a recursive equation for the bound. All the calculations are performed in Section A.1. The result is that the posterior Cramér-Rao bound for the phase reaches the limit

  C_∞ = (−aσ_w² + √(a²σ_w⁴ + 4aσ_w²)) / (2a),

where
  a ≈ E[(2/σ_n²) Re(y_k e^{−iθ_k}) Sign(Re(y_k e^{−iθ_k}))]
    = (2/(σ_n√π)) e^{−1/σ_n²} + (2/σ_n²) Φ(1/σ_n).

The approximation holds for small errors, and is valid for a good signal-to-noise ratio (see Section A.1). In this expression, Φ(x) = (2/√π) ∫_0^x e^{−t²} dt is the so-called erf function or probability integral.

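Numerically, the bound is immediate to evaluate: Φ is the erf function, available in the Python standard library. The sketch below (ours) returns the good-SNR approximation of a and the corresponding asymptotic bound.

```python
from math import erf, exp, pi, sqrt

def asymptotic_crb(sigma_n, sigma_w):
    """Asymptotic posterior Cramer-Rao bound for the phase,
    using the good-SNR approximation of the constant a."""
    a = (2.0 / (sigma_n * sqrt(pi))) * exp(-1.0 / sigma_n**2) \
        + (2.0 / sigma_n**2) * erf(1.0 / sigma_n)
    return (-a * sigma_w**2 + sqrt(a**2 * sigma_w**4 + 4.0 * a * sigma_w**2)) / (2.0 * a)
```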
Finally, note that the Cramér-Rao bound evaluated here should be called sequential, since it considers the estimation of the phase at time k based only on the past. However, if we tolerate a delay in the estimation, say estimating θ_k based on y_{1:k+l}, the bound would be lower.

3.2. Asymptotic performances of the loops

The performances of the CL and of the DFL are analyzed in Sections A.2.2 and A.2.1. The performances of the first-order DFL and CL were analyzed in [1], but to the best of our knowledge, the performances of


second-order loops have not been studied. The evaluation of these performances is difficult since the calculations involved are long and tedious. Furthermore, the nonlinearities involved in the loops forbid an exact calculation. We thus use some approximations, especially for small errors. This means that we cannot study the transient part of the algorithms; this will be done experimentally in Section 4. We show by linearizing (small error approximation) that both loops are asymptotically unbiased. We give in Sections A.2.2 and A.2.1 the set to which the step sizes μ must belong in order to obtain convergence in the mean. We also evaluate the asymptotic variance of the phase estimate. In both cases, this variance is a function of the step sizes μ_1 and μ_2. Since the parameter δ is constant, the variance is minimal in μ_2 when this step size approaches zero. But for a given μ_2, the variance has a minimum for a nonzero μ_1. This is due to the fact that the loops track the phase, and there is a compromise between the tracking ability and the estimation variance. For both loops, we are able to exhibit the optimal step size μ_1 when μ_2 approaches zero. This allows us to obtain the minimum mean square error (MMSE) on the phase estimation, and this MMSE can then be compared to the bound evaluated previously. To do so, we let φ̃_k = φ_k − θ_k and δ̃_k = δ_k − δ be the errors on the phase and on the drift, respectively. Considering small errors, we can linearize the problem and show that the loops converge in the mean sense. Furthermore, we evaluate the second-order statistical properties of the loops for small errors. If e²_k = (E[φ̃²_k], E[δ̃²_k], E[φ̃_k δ̃_k])ᵀ, then we show that for both loops

  e²_k = A(μ_1, μ_2, β) e²_{k-1} + B(μ_1, μ_2, β, σ_n, σ_w),

where β = Φ(1/σ_n). The matrices A and the vectors B are detailed in Sections A.2.2 and A.2.1. Assuming that the sequence e²_k converges to a limit e²_*, this limit is given by

  e²_* = (I − A(μ_1, μ_2, β))⁻¹ B(μ_1, μ_2, β, σ_n, σ_w),

from which we can study the mean square error on the phase E[φ̃²_k]. As explained above, this error is minimal for a fixed μ_1 when μ_2 tends to zero. Evaluating this

limit, we obtain

  lim_{μ_2→0} E[φ̃²_k]_{dfl} = ((1 − μ_1)² σ_w² + μ_1² σ_n²/2) / (μ_1 (2β − μ_1)),

  lim_{μ_2→0} E[φ̃²_k]_{costas} = ((1 − 2μ_1)² σ_w² + μ_1² (2σ_n² + σ_n⁴)) / (4μ_1 (1 − μ_1)).

Now, this limit presents a minimum as a function of μ_1. The step size which provides the minimum is thus termed the optimal step size and reads

  μ_{1,dfl} = (−σ_w² + σ_w √(σ_w²(1 − 2β)² + 2β²σ_n²)) / (2σ_w²(β − 1) + βσ_n²),

  μ_{1,costas} = (−σ_w² + σ_w √(σ_w² + 2σ_n² + σ_n⁴)) / (2σ_n² + σ_n⁴).

In all these equations, β = Φ(1/σ_n). Note that the calculation of e²_*, of the mean square errors in the limit of small μ_2 and of the optimal μ_1 has been performed using symbolic calculator software.
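A transcription of ours of these closed forms, returning the optimal step sizes together with the corresponding limiting mean square errors:

```python
from math import erf, sqrt

def optimal_loops(sigma_n, sigma_w):
    """Optimal mu_1 (in the limit mu_2 -> 0) and resulting phase MSE
    for the DFL and for the Costas loop."""
    beta = erf(1.0 / sigma_n)                     # beta = Phi(1/sigma_n)
    sw2, sn2 = sigma_w**2, sigma_n**2
    mu_dfl = (-sw2 + sigma_w * sqrt(sw2 * (1.0 - 2.0 * beta)**2 + 2.0 * beta**2 * sn2)) \
             / (2.0 * sw2 * (beta - 1.0) + beta * sn2)
    mu_cl = (-sw2 + sigma_w * sqrt(sw2 + 2.0 * sn2 + sn2**2)) / (2.0 * sn2 + sn2**2)
    mse_dfl = ((1.0 - mu_dfl)**2 * sw2 + mu_dfl**2 * sn2 / 2.0) \
              / (mu_dfl * (2.0 * beta - mu_dfl))
    mse_cl = ((1.0 - 2.0 * mu_cl)**2 * sw2 + mu_cl**2 * (2.0 * sn2 + sn2**2)) \
             / (4.0 * mu_cl * (1.0 - mu_cl))
    return (mu_dfl, mse_dfl), (mu_cl, mse_cl)
```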

3.3. Are suboptimal algorithms far from optimality?

Although the mean square errors reached by the loops with their optimal step size have expressions too complicated to be presented here, we can plot these errors and compare them to the ultimate performance. The comparison is shown in Fig. 1. We plot the posterior Cramér-Rao bound and the MMSE of the loops as functions of σ_n for σ_w = 0.1. The sequence of symbols is a BPSK sequence, and the constant drift is set at 0.5. The phase needs about 6 samples to perform a rotation of π: this is a severe condition rarely seen in practice. As expected, the DFL performs a little better than the CL. Even if this effect is not spectacular here, we mention it because it should be more pronounced for higher-order PSK, since the nonlinearity involved in the CL is more violent in that case (|·|^p for p symbols). Note that we have performed Monte-Carlo simulations to evaluate the MMSE of the loops. We used 20 snapshots of 5000 samples each. We measured the mean square of the sine of the error by averaging from sample 3000 to sample 5000, and then by averaging over realizations.

Fig. 1. Asymptotic performance of the Costas loop (dash-dotted), the decision feedback loop (dashed), and the posterior Cramér-Rao bound (solid) for the problem of phase tracking considered in this paper. The curves show the MMSE of the estimators versus the standard deviation of the observation noise σ_n, for the optimal step size μ_1 when the secondary step size μ_2 approaches zero (σ_w = 0.1).

The results are superposed in Fig. 1: circles for the DFL and squares for the Costas loop. This study was performed to validate the theoretical results obtained by linearizing the equations. The experimental results are in good agreement with the theoretical analysis, and this over a quite large range of σ_n. Note, however, that the linearization is better for the DFL than for the Costas loop. Furthermore, the agreement degrades as σ_w grows. Indeed, the assumption of small errors also requires that σ_w be small. The main conclusion of this study is that the loops are very efficient when they are placed in their optimal asymptotic behavior. For reasonable noise power, the loops almost reach the Cramér-Rao bound. Therefore, the superiority of an optimal approach (particle filtering) will not appear in the variance of the error. This assertion is confirmed by inspecting Fig. 2, where we plot the mean square error for the DFL and the particle filter for some values of σ_n. The simulation

was carried out as described above. We plot the mean square error for N = 100 and 500 particles. As can be seen in Fig. 2, the particle filter performs a little better with 500 particles, but the main conclusion here is that, asymptotically, the particle filter and the decision feedback loop have the same performances in terms of power of the error. Hence, the superiority (if any) of the particle filter lies somewhere else.

4. Acquisition time and cycle slips

As seen above, the superiority of particle filtering is not in the asymptotic mean square error. Therefore, if the particle filter is superior to the loops, it must be in the convergence speed and in the probability of occurrence of so-called cycle slips. To study these concepts, we use extensive Monte-Carlo simulations because theoretical results are almost impossible to obtain.


Fig. 2. Asymptotic mean square error of the DFL compared to that of the particle implementation of the optimal nonlinear filter (σ_w = 0.1).

4.1. Acquisition time

To compare the convergence of the algorithms, we study the acquisition time of the loops and of the particle filter. The acquisition time is defined as the first time at which the algorithm enters a small interval around the true solution and, of course, stays in the interval. We have chosen in our simulations a 5% wide interval. To be sure that the algorithm has converged, we require that 20 successive samples of the algorithm remain in the interval. Note that, in this study, the step sizes of the loops are set at their optimal values: this ensures that the loops and the particle filter are asymptotically equivalent. We run 2000 snapshots for σ_n = 0.5 and σ_n = 1.0. We can then evaluate the histogram of acquisition times. These are shown in Fig. 3 for the particle filter (100 particles for σ_n = 0.5 and 500 particles for σ_n = 1), the Costas loop and the DFL. We also present in Table 1 the mean values and standard deviations of the acquisition time for several experimental situations.

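Operationally, the acquisition time defined above can be measured as follows (our sketch; interpreting the "5% wide interval" as an interval of total width 0.05π around zero phase error is our assumption).

```python
import numpy as np

def acquisition_time(phase_error, width=0.05 * np.pi, hold=20):
    """First index at which `hold` successive phase errors lie inside the
    interval of half-width width/2 around the true solution."""
    inside = np.abs(np.asarray(phase_error)) < 0.5 * width
    run = 0
    for k, ok in enumerate(inside):
        run = run + 1 if ok else 0
        if run == hold:
            return k - hold + 1        # first sample of the successful run
    return None                        # never acquired on this snapshot
```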
The main conclusion of this study is that the particle filter has a higher convergence speed than the loops. Looking at the mean acquisition time E[t_a], we can indeed say that the particle filter outperforms the loops. However, we have to notice that, in general, the standard deviation of the acquisition time for the particle filter is of the order of the mean acquisition time. This is not the case for the loops. This means that, contrary to the loops, if the particle filter does not acquire the phase rapidly, it will on average converge very slowly.
Fig. 3. Histograms of acquisition time for the loops and the particle filter, for σ_n = 0.5 (top) and σ_n = 1 (bottom). The histograms are evaluated on 2000 snapshots. Each histogram is calculated over 50 classes equally spaced between the minimum and the maximum value over the snapshots. For σ_n = 0.5, the particle filter is run with 100 particles, whereas we use 500 particles for σ_n = 1.

Table 1
Statistics of acquisition times of the CL, the DFL and the particle filter for several experimental setups

            CL                    DFL                   100 particles         500 particles
            E[t_a]  Var[t_a]^1/2  E[t_a]  Var[t_a]^1/2  E[t_a]  Var[t_a]^1/2  E[t_a]  Var[t_a]^1/2
σ_n = 0.1   168     19            267     22            82      80            101     201
σ_n = 0.5   274     84            233     45            84      65            75      66
σ_n = 0.7   422     241           391     141           139     202           88      52
σ_n = 1     766     642           921     674           489     668           218     316

4.2. Cycle slips

Cycle slips are due to the periodic structure of the problem to be solved. Indeed, for binary PSK modulations, the phase has an ambiguity of π. Due to the noise and nonlinearities, the estimated phase may change at any time from one multiple of π to another. This phenomenon, known as a cycle slip, is illustrated in Fig. 4, where we plot a trajectory of the ratio of the phase error over π, using the DFL.

In this figure, we omit the transient part (here, acquisition was obtained before t = 1000). As can be seen in the figure, cycle slips occur randomly, and a slip is quite unpredictable. Of importance in the study of cycle slips are the mean time between slips and the probability of a slip. The problem of slips is crucial in synchronization, and a good synchronizer should have a probability of cycle slip several orders of magnitude smaller than the bit error rate. Indeed, if the phase determination is the wrong one, good bits become bad bits. In actual communications systems, tricks such as a known sequence are used to periodically put the phase back in the correct determination [10]. For a good signal-to-noise ratio, obtaining precise statistics on cycle slips is very time consuming since these events may appear rarely. For a very bad signal-to-noise ratio, these statistics can be evaluated using Monte-Carlo simulations.

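On a simulated trajectory, slips can be counted with a simple rule (our sketch; a real measurement would also debounce the jitter around the decision boundaries):

```python
import numpy as np

def count_cycle_slips(phase_error, period=np.pi):
    """Count changes of phase determination: the error, divided by the ambiguity
    period (pi for BPSK), is rounded to the nearest integer, and each change of
    that integer is counted as one slip."""
    det = np.round(np.asarray(phase_error) / period)
    return int(np.sum(np.abs(np.diff(det)) >= 1.0))
```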
162

P.O. Amblard et al. / Signal Processing 83 (2003) 151 – 167 -75 -76 -77

Phase error / π

-78 -79 -80 -81 -82 -83 -84 -85 1000

2000

3000

4000

5000

6000

7000

8000

9000

10000

Fig. 4. Phase error of the DFL divided by . The transient part is omitted. For this example,  = 0:5, w = 0:1 and n = 1.

Fig. 5. Phase error of the particle filter divided by π (500 particles are used). The transient part is omitted. For this example, δ = 0.5, σ_w = 0.1 and σ_n = 1.


We show in Fig. 5 the result of applying the particle filter to the same realization as used to obtain the result in Fig. 4. We use 500 particles, and we again omit the transient. As appears in this example, the number of cycle slips is small (one) compared to that of the DFL (about ten). In this example, we only see an accident around time 9000, which corresponds to the set of particles being cut into two subsets, each corresponding to a different phase determination. Note that the algorithm needs a long time (about 1000 time steps) to return to a correct determination. Hence, based on this example, and of course on many others, we conclude that cycle slips are rare in the case of the particle filter, and that trying to get precise statistics would be too time consuming. Therefore, regarding cycle slips, we will simply say that the particle filter is much better than the loops studied in this paper.

5. Discussion

Addressing the goals of the paper as stated in the introduction, we can make the following comments. The evaluation of the ultimate performances and of the performances of the loops shows that for a reasonable signal-to-noise ratio, the behavior of the loops is very satisfactory. Indeed, the gap between the Cramér-Rao bound and the mean square error achieved by the loops is very small, so small that it is difficult for the particle implementation of the optimal filter to fit in between.

The second main comment concerns the acquisition. We have shown that in this respect, the particle implementation of the optimal filter outperforms the DFL and the CL. This is not surprising. Indeed, it is known that adaptive algorithms cannot be optimized jointly in terms of convergence and tracking. Here, we have chosen to optimize the loops in terms of tracking (minimizing the asymptotic mean square error); their convergence rate is therefore poor. On the other side, the optimal filter does not make any difference between slow or rapid nonstationarities. The optimal filter is globally optimal: each phase of the convergence is optimized. Note at this point that we have shown that the mean acquisition time of the particle filter is much better than that of the loops, but also that if the particle filter does not acquire the phase rapidly, it converges very slowly. This result is quite surprising, and is certainly due to our implementation of the particle filter. In our work, the implementation is very simple, and it can be improved greatly. Therefore, our results should be understood as a first step towards the use of particle filtering for the kind of problem studied here.

The third point illustrated in the paper is the behavior of the algorithms with respect to cycle slips. Even if we do not show statistics, experiments clearly show the superiority of particle filtering over the loops.

Let us finally say that implementing this technique for more complicated modulations (higher-order PSK, QAM) is possible and under study.

Appendix A

For all the following calculations, we recall that the model considered here is

  δ_k = δ_{k-1},
  θ_k = θ_{k-1} + δ_{k-1} + w_k,   (A.1)
  y_k = a_k e^{iθ_k} + n_k.   (A.2)

The sequence {ak } is an i.i.d. sequence of ±1 with equal probabilities. The instantaneous likelihood function is given by

  p(y_{n+1} | θ_{n+1}) = (1/(πσ_n²)) exp(−(|y_{n+1}|² + 1)/σ_n²) cosh((2/σ_n²) Re(y_{n+1} e^{−iθ_{n+1}}))   (A.3)

and the a priori density of the phase is

  p(θ_{n+1} | θ_n, δ_n) = (1/√(2πσ_w²)) exp(−(θ_{n+1} − θ_n − δ_n)² / (2σ_w²)).   (A.4)

A.1. Sequential posterior Cramér-Rao bound

Using the results proved in [16], we can show that the posterior Cramér-Rao bound is obtained as follows. Let p_k = p(θ_{k+1} | θ_k, δ_k) p(y_{k+1} | θ_{k+1}). Let H_k¹¹ = E[−∂² log p_k/∂θ_k²], H_k¹² = E[−∂² log p_k/∂θ_k ∂δ_k], H_k¹³ = E[−∂² log p_k/∂θ_k ∂θ_{k+1}], H_k²² = E[−∂² log p_k/∂δ_k²], H_k²³ = E[−∂² log p_k/∂δ_k ∂θ_{k+1}] and finally H_k³³ = E[−∂² log p_k/∂θ_{k+1}²]. Let the Fisher information


matrix for (θ_k, δ_k)ᵀ be partitioned as

  J_k = [ J_k¹¹  J_k¹² ; J_k²¹  J_k²² ].

Then, it is shown in [16] that

  J_{k+1}¹¹ = H_k³³ − (H_k¹³)² / (J_k¹¹ + H_k¹¹),
  J_{k+1}¹² = H_k²³ − H_k¹³ (J_k¹² + H_k¹²) / (J_k¹¹ + H_k¹¹),
  J_{k+1}²² = J_k²² + H_k²² − (J_k¹² + H_k¹²)² / (J_k¹¹ + H_k¹¹).

Using Eqs. (A.3) and (A.4) we obtain

  H_k¹¹ = 1/σ_w²,  H_k¹² = 1/σ_w²,  H_k¹³ = −1/σ_w²,
  H_k²² = 1/σ_w²,  H_k²³ = −1/σ_w²,

  H_k³³ = 1/σ_w² + E[(2/σ_n²) Re(y_k e^{−iθ_k}) tanh((2/σ_n²) Re(y_k e^{−iθ_k}))]
          − E[(4/σ_n⁴) Im²(y_k e^{−iθ_k}) (1 − tanh²((2/σ_n²) Re(y_k e^{−iθ_k})))].

Let a be such that H_k³³ = 1/σ_w² + a. Then, assuming that the sequence J_{k+1}¹¹ converges, we obtain for its limit J_∞¹¹ the following fixed point equation:

  J_∞¹¹ = 1/σ_w² + a − (1/σ_w⁴) / (J_∞¹¹ + 1/σ_w²),

or equivalently (J_∞¹¹)² − a J_∞¹¹ − a/σ_w² = 0, whose solutions are

  J_∞¹¹ = (a ± √(a(a + 4/σ_w²))) / 2.

Additionally, J_∞¹² = −1/σ_w², and J_k²² diverges.

Thus, it is sufficient to assume that the parameter δ is known. Indeed, under this assumption, the Fisher information J_k obtained is the same as J_k¹¹. The Fisher information follows the recursion

  J_{k+1} = ((a + 1/σ_w²) J_k + a/σ_w²) / (J_k + 1/σ_w²),

hence the Cramér-Rao bound C_k follows:

  C_{k+1} = (σ_w² + C_k) / ((aσ_w² + 1) + aC_k).

Assuming convergence of the sequence, the asymptotic bound is obtained using a fixed point condition and reads

  C_∞^{seq} = (−aσ_w² + √(a²σ_w⁴ + 4aσ_w²)) / (2a),

where

  a = E[(2/σ_n²) Re(y_k e^{−iθ_k}) tanh((2/σ_n²) Re(y_k e^{−iθ_k}))]
      − E[(4/σ_n⁴) Im²(y_k e^{−iθ_k}) (1 − tanh²((2/σ_n²) Re(y_k e^{−iθ_k})))].

If we now assume that the signal-to-noise ratio is good, the hyperbolic tangent can be approximated by a sign function. In this case, we have approximately

  a ≈ E[(2/σ_n²) Re(y_k e^{−iθ_k}) Sign(Re(y_k e^{−iθ_k}))]
    = (2/σ_n²) E[(a_k + X_k) Sign(a_k + X_k)],   (A.5)

where X_k = Re(n_k exp(−iθ_k)) is a Gaussian random variable with zero mean and variance σ_n²/2 (this is due to the assumption of circularity made on the observation noise). Therefore, defining Φ(x) = (2/√π) ∫_0^x e^{−t²} dt, the so-called erf function or probability integral, we finally obtain

  a = (2/(σ_n√π)) e^{−1/σ_n²} + (2/σ_n²) Φ(1/σ_n).

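The exact a (the tanh form above) has no closed form, but it is easy to estimate by Monte-Carlo, exploiting the fact that, by circularity, y_k e^{−iθ_k} has the law of a_k + n_k. The sketch below (ours) can be compared with the good-SNR closed form.

```python
import numpy as np

def a_exact(sigma_n, M=200_000, seed=0):
    """Monte-Carlo estimate of a = E[u tanh u] - E[v^2 (1 - tanh^2 u)],
    with u = (2/sigma_n^2) Re(z), v = (2/sigma_n^2) Im(z), z = a_k + n_k."""
    rng = np.random.default_rng(seed)
    a_sym = rng.choice([-1.0, 1.0], size=M)
    n = (sigma_n / np.sqrt(2)) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    z = a_sym + n
    u = 2.0 * np.real(z) / sigma_n**2
    v = 2.0 * np.imag(z) / sigma_n**2
    return float(np.mean(u * np.tanh(u) - v**2 * (1.0 - np.tanh(u)**2)))
```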
A.2. Performances of the loops

For the model

  θ_k = θ_{k-1} + δ + w_k,   (A.6)
  y_k = a_k e^{iθ_k} + n_k,   (A.7)

the second-order loops read

  φ_k = φ_{k-1} + δ_{k-1} + μ_1 ε_k,   (A.8)
  δ_k = δ_{k-1} + μ_2 ε_k,   (A.9)

where

  ε_k = Im(y_k² e^{−2i(φ_{k-1}+δ_{k-1})}) for the CL,
  ε_k = Im(y_k e^{−i(φ_{k-1}+δ_{k-1})}) Sign(Re(y_k e^{−i(φ_{k-1}+δ_{k-1})})) for the DFL.

Let φ̃_k = φ_k − θ_k and δ̃_k = δ_k − δ be the errors involved. Then we have

  φ̃_k = φ̃_{k-1} + δ̃_{k-1} − w_k + μ_1 ε_k,   (A.10)
  δ̃_k = δ̃_{k-1} + μ_2 ε_k.   (A.11)

A.2.1. DFL

Let ψ_k = φ̃_{k-1} + δ̃_{k-1} − w_k. Then, replacing y_k by a_k e^{iθ_k} + n_k in ε_k, and θ_k by θ_{k-1} + δ + w_k, we have for the decision feedback loop

  ε_k = (−a_k sin(ψ_k) + Im(n_k e^{−i(φ_{k-1}+δ_{k-1})})) Sign(a_k cos(ψ_k) + Re(n_k e^{−i(φ_{k-1}+δ_{k-1})})).

In the following, we will assume that ψ_k is small. This amounts to assuming that the loop is near the solution and that the noise w_k is not too powerful. This last assumption is usually verified in practice (small jitters). We extensively use the assumption that the noise n_k is a circular Gaussian noise. This means that n_k ≐ n_k e^{iα} in distribution for any α. This implies in particular that the noise is zero mean and that moments of the type E[n_k^m n_k^{*l}] equal zero if m ≠ l (* stands for the complex conjugate).

Mean analysis. We obtain

  E[ε_k] = −E[sin(ψ_k) Φ(cos(ψ_k)/σ_n)] ≈ −Φ(1/σ_n) (E[φ̃_{k-1}] + E[δ̃_{k-1}]).

For simplicity in the notations, let β = Φ(1/σ_n). Then we immediately obtain

  [ E[φ̃_k] ]   [ 1 − βμ_1   1 − βμ_1 ] [ E[φ̃_{k-1}] ]
  [ E[δ̃_k] ] = [ −βμ_2      1 − βμ_2 ] [ E[δ̃_{k-1}] ].

The characteristic polynomial reads

  P(λ) = λ² − (2 − βμ_1 − βμ_2)λ + 1 − βμ_1.

The eigenvalues read

  λ_± = (2 − β(μ_1 + μ_2) ± √(β²(μ_1 + μ_2)² − 4βμ_2)) / 2.

For μ's positive and lower than 1, it can be verified that |λ_+| ≤ 1; furthermore, λ_− is always lower than 1, but λ_− > −1 if μ_1 ≤ 2/β − μ_2/2. Therefore, to ensure the convergence in the mean of the algorithm, the μ's must be chosen in the set defined by

  {0 < μ_1, μ_2 ≤ 1} ∩ {μ_1 ≤ 2/β − μ_2/2}.

Mean square analysis. We need E[φ̃²_k], E[δ̃²_k], E[φ̃_k δ̃_k]. The calculations involve the term

  E[ε_k²] = E[sin²(ψ_k)] + σ_n²/2 ≈ E[(φ̃_{k-1} + δ̃_{k-1} − w_k)²] + σ_n²/2,

the term

  E[ψ_k ε_k] = −E[ψ_k sin(ψ_k) Φ(cos(ψ_k)/σ_n)] ≈ −β E[(φ̃_{k-1} + δ̃_{k-1} − w_k)²].

Similarly we have

  E[δ̃_{k-1} ε_k] = −E[δ̃_{k-1} sin(ψ_k) Φ(cos(ψ_k)/σ_n)] ≈ −β E[φ̃_{k-1} δ̃_{k-1} + δ̃²_{k-1}].

Now, we have

  E[φ̃²_k] = E[ψ_k²] + 2μ_1 E[ψ_k ε_k] + μ_1² E[ε_k²],
  E[δ̃²_k] = E[δ̃²_{k-1}] + 2μ_2 E[δ̃_{k-1} ε_k] + μ_2² E[ε_k²],
  E[φ̃_k δ̃_k] = E[φ̃_{k-1} δ̃_{k-1}] + E[δ̃²_{k-1}] + μ_1 E[δ̃_{k-1} ε_k] + μ_2 E[ψ_k ε_k] + μ_1μ_2 E[ε_k²].


Note that

in the mean of the algorithm,  2  : {0 ¡ 1 ; 2 6 1} ∩ 1 6 1 − 2

E[(’˜ k−1 + ˜k−1 − wk )2 ] =E[(’˜ k−1 )2 ] + E[(˜k−1 )2 ]

Mean square analysis. We need E[’˜ 2k ]; E[˜2k ]; E[’˜ k ˜k ]. The calculations involve the term

+ 2E[’˜ k−1 ˜k−1 ] + w2 :

E[k2 ] = E[sin(2,k )2 ]

Finally we obtain 

2

E[’˜ k ]

+ 4E[,k2 ]E[Im2 (nk ei(k−1 ++wk −2’k−1 −2k−1 ) )]



     E[˜2k ]  =   E[’˜ k ˜k ]     

+ E[Im2 (n2k e−2i(’k−1 +k−1 ) )] 2

1 − 2$1 + 1

2

2(1 − 2$1 + 1 )

2

2

22 − 22 $

1 − 2$1 + 1

2

2

   

2

1 − 2$2 + 2

−2 $ + 1 2 1 − (1 + 2 )$ + 1 2 1 − (1 + 22 )$ + 21 2 

2

E[’˜ k−1 ]

  × 

2

E[˜k−1 ] E[’˜ k−1 ˜k−1 ]





2 2

2 2

(1 − 1 ) w + 1 n =2

    +  

2

2

2

2 ( w + n =2) 2 1 2 n =2

+ (1 2 −

≈ −2E[,k2 ] = −2E[(’˜ k−1 )2 ] − 2E[(˜k−1 )2 ]

2 2 $) w

− 4E[’˜ k−1 ˜k−1 ] − 2 w2 :

k = −sin(2,k ) + 2ak Im(nk ei(k−1 ++wk −2’k−1 −2k−1 ) ) + Im(n2k e−2i(’k−1 +k−1 ) ): Mean analysis: Since nk is a circular Gaussian noise, we have E[n2k ] = 0, and therefore

=

Similarly we have E[˜k−1 k ] = −E[˜k−1 sin(2,k )] ≈ −2E[’˜ k−1 ˜k−1 + ˜2k−1 ]: Now, we have E[’˜ 2k ] = E[,k2 ] + 21 E[,k k ] + 21 E[k2 ]; E[˜2k ] = E[˜2k−1 ] + 22 E[˜k−1 k ] + 22 E[k2 ]; E[’˜ k ˜k ] = E[’˜ k−1 ˜k−1 ] + E[˜2k−1 ] + 1 E[˜k−1 k ] + 2 E[,k k ] + 1 2 E[k2 ]: Finally, we obtain

E[k ] = −E[sin(2,k )]:

E[˜k ]

E[,k k ] = −E[,k sin(2,k )]

  : 

A.2.2. CL Let ,k = ’˜ k−1 + ˜k−1 −wk . Then, we have for the CL



the term



To obtain the limit, we assume that the sequence of vectors converges, and we then apply a &xed point condition. The inversion involved in the &xed point equation is solved using symbolic calculator software.

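For reference, the fixed point can also be solved numerically; the sketch below (ours) simply mirrors the DFL matrices above.

```python
import numpy as np
from math import erf

def dfl_asymptotic_moments(mu1, mu2, sigma_n, sigma_w):
    """Solve e2* = (I - A)^{-1} B for the DFL, with
    e2 = (E[phi~^2], E[delta~^2], E[phi~ delta~])^T."""
    b = erf(1.0 / sigma_n)                 # beta = Phi(1/sigma_n)
    sw2, sn2 = sigma_w**2, sigma_n**2
    r = 1.0 - 2.0 * b * mu1 + mu1**2
    A = np.array([
        [r, r, 2.0 * r],
        [mu2**2, 1.0 - 2.0 * b * mu2 + mu2**2, 2.0 * mu2**2 - 2.0 * b * mu2],
        [mu1 * mu2 - b * mu2,
         1.0 - b * (mu1 + mu2) + mu1 * mu2,
         1.0 - b * (mu1 + 2.0 * mu2) + 2.0 * mu1 * mu2],
    ])
    B = np.array([
        (1.0 - mu1)**2 * sw2 + mu1**2 * sn2 / 2.0,
        mu2**2 * (sw2 + sn2 / 2.0),
        mu1 * mu2 * sn2 / 2.0 + (mu1 * mu2 - b * mu2) * sw2,
    ])
    return np.linalg.solve(np.eye(3) - A, B)
```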
A.2.2. CL

Let ψ_k = φ̃_{k-1} + δ̃_{k-1} − w_k. Then, we have for the CL

  ε_k = −sin(2ψ_k) + 2a_k Im(n_k e^{i(θ_{k-1}+δ+w_k−2φ_{k-1}−2δ_{k-1})}) + Im(n_k² e^{−2i(φ_{k-1}+δ_{k-1})}).

Mean analysis. Since n_k is a circular Gaussian noise, we have E[n_k²] = 0, and therefore

  E[ε_k] = −E[sin(2ψ_k)] ≈ −2(E[φ̃_{k-1}] + E[δ̃_{k-1}]).

Thus

  [ E[φ̃_k] ]   [ 1 − 2μ_1   1 − 2μ_1 ] [ E[φ̃_{k-1}] ]
  [ E[δ̃_k] ] = [ −2μ_2      1 − 2μ_2 ] [ E[δ̃_{k-1}] ].

This result is the same as that of the DFL with β replaced by 2. The eigenvalues hence read

  λ_± = 1 − (μ_1 + μ_2) ± √((μ_1 + μ_2)² − 2μ_2).

It can be verified that |λ_+| ≤ 1 and that |λ_−| ≤ 1 if μ_1 ≤ 1 − μ_2/2. Therefore, to ensure the convergence in the mean of the algorithm, the μ's must be chosen in the set

  {0 < μ_1, μ_2 ≤ 1} ∩ {μ_1 ≤ 1 − μ_2/2}.

Mean square analysis. We need E[φ̃²_k], E[δ̃²_k], E[φ̃_k δ̃_k]. The calculations involve the term

  E[ε_k²] = E[sin²(2ψ_k)] + 4E[Im²(n_k e^{i(θ_{k-1}+δ+w_k−2φ_{k-1}−2δ_{k-1})})] + E[Im²(n_k² e^{−2i(φ_{k-1}+δ_{k-1})})]
          ≈ 4E[ψ_k²] + 2σ_n² + σ_n⁴,

the term

  E[ψ_k ε_k] = −E[ψ_k sin(2ψ_k)] ≈ −2E[ψ_k²] = −2E[φ̃²_{k-1}] − 2E[δ̃²_{k-1}] − 4E[φ̃_{k-1} δ̃_{k-1}] − 2σ_w².

Similarly we have

  E[δ̃_{k-1} ε_k] = −E[δ̃_{k-1} sin(2ψ_k)] ≈ −2E[φ̃_{k-1} δ̃_{k-1} + δ̃²_{k-1}].

Now, we have

  E[φ̃²_k] = E[ψ_k²] + 2μ_1 E[ψ_k ε_k] + μ_1² E[ε_k²],
  E[δ̃²_k] = E[δ̃²_{k-1}] + 2μ_2 E[δ̃_{k-1} ε_k] + μ_2² E[ε_k²],
  E[φ̃_k δ̃_k] = E[φ̃_{k-1} δ̃_{k-1}] + E[δ̃²_{k-1}] + μ_1 E[δ̃_{k-1} ε_k] + μ_2 E[ψ_k ε_k] + μ_1μ_2 E[ε_k²].

Finally, we obtain

  [ E[φ̃²_k]     ]   [ (1−2μ_1)²         (1−2μ_1)²          2(1−2μ_1)²        ] [ E[φ̃²_{k-1}]          ]
  [ E[δ̃²_k]     ] = [ 4μ_2²             (1−2μ_2)²          8μ_2² − 4μ_2      ] [ E[δ̃²_{k-1}]          ]
  [ E[φ̃_k δ̃_k] ]   [ −2μ_2 + 4μ_1μ_2   (1−2μ_1)(1−2μ_2)   (1−2μ_1)(1−4μ_2)  ] [ E[φ̃_{k-1} δ̃_{k-1}]  ]

                     [ (1−2μ_1)²σ_w² + μ_1²(2σ_n² + σ_n⁴)           ]
                   + [ μ_2²(4σ_w² + 2σ_n² + σ_n⁴)                   ]
                     [ μ_1μ_2(2σ_n² + σ_n⁴) + (4μ_1μ_2 − 2μ_2)σ_w²  ].

References

[1] A. Benveniste, M. Joindot, P. Vandamme, Analyse théorique des boucles de phase numériques en présence de canaux dispersifs, Tech. Rep. NT/MER/TSF/1, CNET Lannion, 1980.
[2] A. Benveniste, M. Métivier, P. Priouret, Adaptive Algorithms and Stochastic Approximations, Springer, Berlin, 1990.
[3] A. Doucet, Algorithmes Monte-Carlo pour l'estimation Bayésienne de modèles markoviens cachés. Application au traitement de signaux de rayonnements, Ph.D. Thesis, Université d'Orsay, 1997.
[4] A. Doucet, N. de Freitas, N. Gordon (Eds.), Sequential Monte-Carlo Methods in Practice, Springer, Berlin, 2001.
[5] A. Doucet, S. Godsill, C. Andrieu, On sequential Monte-Carlo sampling methods for Bayesian filtering, Statist. Comput. 10 (2000) 197-208.
[6] R. Elliott, L. Aggoun, J. Moore, Hidden Markov Models: Estimation and Control, Springer, Berlin, 1995.
[7] F. Gardner, Phase Lock Techniques, Wiley, New York, 1968.
[8] N. Gordon, D. Salmond, A. Smith, Novel approach to nonlinear/non-Gaussian Bayesian state estimation, IEE Proc. F 140 (2) (1993) 107-113.
[9] A. Jazwinski, Stochastic Processes and Filtering Theory, Academic Press, New York, 1970.
[10] H. Meyr, M. Moeneclaey, S. Fechtel, Digital Communications Receivers: Synchronisation, Channel Estimation and Signal Processing, Wiley, New York, 1998.
[11] P. Del Moral, J. Noyer, G. Rigal, G. Salut, Résolution particulaire et traitement non-linéaire du signal: applications radar/sonar, Traitement du Signal 12 (4) (1995) 287-301.
[12] C. Musso, N. Oudjane, F. Le Gland, Improving regularised particle filters, in: A. Doucet, N. de Freitas, N. Gordon (Eds.), Sequential Monte-Carlo Methods in Practice, Springer, Berlin, 2001 (Chapter 12).
[13] J. Noyer, G. Salut, Filtrage particulaire du signal radar brut sur cible ponctuelle, Traitement du Signal 14 (1) (1997) 43-61.
[14] C. Robert, The Bayesian Choice: A Decision-Theoretic Motivation, Springer, Berlin, 1994.
[15] C. Robert, G. Casella, Monte-Carlo Statistical Methods, Springer, Berlin, 1999.
[16] P. Tichavsky, C. Muravchik, A. Nehorai, Posterior Cramér-Rao bounds for discrete-time nonlinear filtering, IEEE Trans. Signal Process. 46 (5) (1998) 1386-1396.
[17] H.L. Van Trees, Detection, Estimation and Modulation Theory, Vol. 1, Wiley, New York, 1968.