Digital Signal Processing 33 (2014) 71–82
Self-tuning adaptive frequency tracker

Michał Meller¹
Department of Automatic Control, Gdańsk University of Technology, Faculty of Electronics, Telecommunications and Computer Science, Narutowicza 11/12, 80-233 Gdańsk, Poland
ARTICLE INFO

Article history: Available online 19 June 2014
Keywords: Automatic tuning; Adaptive notch filtering; Adaptive signal processing; Frequency tracking
ABSTRACT

An automatic gain tuning algorithm is proposed for a recently introduced adaptive notch filter. Theoretical analysis and simulations show that, under Gaussian random-walk type assumptions, the proposed extension is capable of adjusting the adaptation gains of the filter so as to minimize the mean-squared frequency tracking error without prior knowledge of the true frequency trajectory. A simplified one degree of freedom version of the filter, recommended for practical applications, is proposed as well. © 2014 Elsevier Inc. All rights reserved.
¹ E-mail address: [email protected]. Fax: +48 58 347 15 55.
http://dx.doi.org/10.1016/j.dsp.2014.06.004
1051-2004/© 2014 Elsevier Inc. All rights reserved.

1. Introduction

Adaptive notch filters (ANFs) are devices capable of suppressing or enhancing nonstationary narrowband signals buried in noise. The vector of state variables of an ANF often includes, or allows one to determine, the instantaneous frequency of the tracked narrowband signal. Such filters, e.g. [1–4], can therefore be used to estimate the time-varying frequency of the signal of interest.

A typical ANF employs one or more user-defined adaptation gains which must be judiciously tuned to optimize the filter's performance. Since tuning of the filter is a time-consuming process, it is desirable to provide some means of automatic adjustment of these gains, e.g. in the form of a supervisory self-optimization layer. Such a combination could equip an ANF with the capability to deliver nearly optimal performance despite possible variations of tracking conditions, e.g. the signal-to-noise ratio or the rate at which the frequency changes.

Adaptation gains are most often adjusted with the aim of optimizing signal tracking performance. The popularity of this approach stems primarily from two facts. Firstly, the underlying application of an ANF may require tight signal tracking. Such is the case of, among others, filtering power line interference from electrocardiogram (ECG) recordings [5], tracking harmonic currents in power applications [6–8], or active control of narrowband acoustic noise [9]. Secondly, in the case of signal tracking it is rather simple to distinguish a poorly performing ANF from one which performs well. This can be done, without any prior knowledge of the true values of the narrowband signal, by evaluating the prediction errors yielded by the filter [10]. Due to this relative simplicity of measuring the filter's performance, several self-tuning ANF solutions have been proposed in the signal processing literature, see e.g. [10,11].

The problem of optimizing the frequency tracking performance of an ANF has received substantially less attention – existing results are generally limited to evaluations of theoretically achievable mean-square errors and experimental comparisons of various notch filtering approaches [12–16]. This stems from the fact that it is actually quite difficult to tell how well the filter tracks the signal's frequency without prior knowledge of its true trajectory – it turns out that settings which minimize signal tracking errors are usually different from those which minimize frequency tracking errors [17]. For this reason, existing solutions often underperform in terms of frequency tracking when they are run in a real-world scenario.

The problems outlined above are encountered in other approaches to frequency estimation as well. For instance, estimators such as the short-time Fourier transform (STFT) or ESPRIT [18] require one to choose a local analysis window length, which should be adjusted to the signal's characteristics. Similarly, trackers based on the extended Kalman filter (EKF) are known to be highly accurate [19–21,11], but difficult to tune – even though some very good rules were pointed out in [19].

Recently, new results on the application of ANFs to frequency tracking were established in [22]. It was shown that one can quantify the frequency tracking accuracy of an ANF with a special auxiliary predictor. A parallel adaptive frequency tracker was also proposed in [22]. It employs a bank of notch filters and uses the proposed predictor to select the filter which offers the most accurate tracking. The new scheme outperformed several existing algorithms, including a recent self-tuning one.

However, the parallel scheme introduced in [22] is not free of drawbacks. First, it has rather high computational complexity, because it involves a bank of notch filters. Second, its accuracy
grossly depends on how representative the settings of the filters composing the bank are. If there is no filter 'matched' to the tracking conditions, the parallel scheme will yield degraded performance. This means that the filterbank must be quite large, which further increases computational complexity.

The contribution of the paper is threefold. First, a novel automatic gain adjustment mechanism is proposed for a recently introduced adaptive notch filter. Second, theoretical analysis of the algorithm's behavior is performed. It is shown that, under Gaussian assumptions, the resulting estimator is locally convergent in mean to the optimal tracker, even in the case of unknown and time-varying signal characteristics. This is, most likely, the first solution with such a capability – as will be argued later, the parallel tracker from [22] cannot provide performance guarantees similar to those of the sequential approach. Third, the analysis reveals that the preliminary version of the self-adjustment mechanism suffers from slow convergence. Therefore, a normalized variant of the algorithm is proposed, which is free of this drawback. A simplified, one degree of freedom scheme is proposed as well. Finally, rich simulation material confirms the good properties of the improved solutions and validates the results of the convergence analysis.

The remaining part of the paper is organized as follows. Section 2 formulates the problem and reviews fundamental findings from [22]. Section 3 develops the proposed approach. Convergence of the proposed solution is analyzed in Section 4. Section 5 discusses extensions and compares the sequential approach with the parallel one. Section 6 presents simulation results. Section 7 concludes.
2. Problem formulation and related results

Consider the problem of estimating an unknown, slowly time-varying frequency ω(t) of a narrowband complex sinusoid (cisoid) using noisy measurements

y(t) = s(t) + v(t),   (1)

where t = 0, 1, ... denotes discrete time, v(t) is a wideband measurement noise, and

s(t) = a(t) e^{j Σ_{τ=1}^{t} ω(τ)}   (2)

is a nonstationary complex sinusoid with instantaneous frequency ω(t), where

a(t) = m(t) e^{jφ0},   (3)

m(t) is a real-valued, slowly time-varying amplitude and φ0 denotes the initial phase.

Tracking of ω(t) may be accomplished, e.g., using the following ANF algorithm, introduced in [17]²:

f̂(t) = e^{j[ω̂(t−1) + α̂(t−1)]} f̂(t−1)
ε(t) = y(t) − â(t−1) f̂(t)
â(t) = â(t−1) + θ3 f̂*(t) ε(t)
δ(t) = Im[ε(t) / (â(t−1) f̂(t))]
α̂(t) = α̂(t−1) + θ1 δ(t)
ω̂(t) = ω̂(t−1) + α̂(t−1) + θ2 δ(t)
ŝ(t) = â(t) f̂(t),   (4)

where f̂(t) is a phase term, ε(t) is the prediction error, ∗ denotes complex conjugation, and the quantities â(t), ŝ(t), ω̂(t) and α̂(t) are the estimates of the signal's complex 'amplitude', instantaneous value, instantaneous frequency and instantaneous frequency rate [α(t−1) = ω(t) − ω(t−1)], respectively.

² The original formulation used the symbols γα, γω, μ, rather than θ1, θ2, and θ3.

The adaptation laws of (4) take a form typical of adaptive signal processing [23] and frequency estimation [2,19,20]: new values of the variables are obtained by adding an update term proportional to quantities which may be interpreted as errors (ε(t) and δ(t)). The parameters θ1 > 0, θ2 > 0, θ3 > 0, θ1 ≪ θ2 ≪ θ3, are small adaptation gains, determining the rates of frequency rate adaptation, frequency adaptation and amplitude adaptation, respectively.

Tracking of frequency rate is quite uncommon among adaptive notch filters and deserves more justification. Many real-world narrowband signals exhibit approximately piecewise-linear frequency modulation. Such is the case, among others, of radar signals, acoustic engine noise encountered in active noise control systems, or vibration generated by rotating machinery during transient states. In these cases estimation of both frequency and frequency rate can improve tracking accuracy considerably – even by an order of magnitude, see e.g. [17]. Finally, observe that setting α̂(0) = 0 and θ1 = 0 effectively disables the frequency rate tracking feature.

As shown in [17], the algorithm (4) has very good statistical properties. Under the following assumptions:

(A1) The instantaneous frequency drifts according to the 2nd-order random walk (also called integrated random walk)

ω(t) = ω(t−1) + α(t−1)
α(t) = α(t−1) + w(t),   (5)

where {w(t)} forms a stationary zero-mean Gaussian white noise sequence, w ∼ N(0, σw²),

(A2) The sequence {v(t)}, independent of {w(t)}, is a circular complex Gaussian white noise, v ∼ CN(0, σv²),

(A3) The magnitude of the narrowband signal is constant, |s(t)| ≡ a0,

a proper choice of the gains θ1, θ2, θ3 can turn the algorithm (4) into a statistically efficient frequency tracker. In such a case the steady-state mean-squared frequency tracking errors, Δω(t) = ω(t) − ω̂(t), will reach the fundamental lower bound (called the posterior or Bayesian Cramér–Rao bound [24]) which limits the mean-squared tracking performance of any estimation scheme. Although closed-form expressions for the optimal values of the gains θ1, θ2, θ3 do not exist, it can be shown that they depend only on the following 'normalized' measure of signal nonstationarity [17]

κ = a0² σw² / σv².   (6)

Note that, from a practical point of view, the statistical efficiency of (4) is of somewhat questionable value – the values of the parameters σv², σw² and a0² are unlikely to be known, which makes tuning (4) a challenging task.

In [22] a novel way of evaluating the frequency tracking performance of ANFs was proposed. It was shown that the settings which minimize the mean-squared frequency tracking error also minimize the mean-squared value of the following auxiliary sequence

ξ(t) = [(1 − 2q⁻¹ + q⁻²) / (θ2 + (θ1 − θ2)q⁻¹)] ω̂(t),   (7)

where q⁻¹ denotes the backward shift operator, q⁻¹u(t) = u(t−1). Note that ξ(t) may be obtained without any prior knowledge of the true values of ω(t). This makes (7) particularly useful for on-line optimization.
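For illustration, the recursions (4) can be sketched in code as follows. The gain values, the signal parameters and the initialization are illustrative assumptions (the filter is started near the operating point, since the analysis presumes good tracking); only the update equations follow (4).

```python
# A minimal sketch of the ANF recursions (4) on a synthetic noisy cisoid
# with piecewise-linear frequency modulation.  Gains and signal
# parameters below are illustrative, not values prescribed by the paper.
import cmath
import random

def anf_track(y, th1, th2, th3, omega0=0.0):
    f, a = 1 + 0j, y[0]            # phase term f̂ and complex amplitude â
    omega, alpha = omega0, 0.0     # frequency and frequency-rate estimates
    est = []
    for t in range(1, len(y)):
        f = cmath.exp(1j * (omega + alpha)) * f   # f̂(t)
        eps = y[t] - a * f                        # prediction error ε(t)
        delta = (eps / (a * f)).imag              # δ(t), uses â(t−1)
        a = a + th3 * f.conjugate() * eps         # amplitude update
        omega = omega + alpha + th2 * delta       # frequency update, uses α̂(t−1)
        alpha = alpha + th1 * delta               # frequency-rate update
        est.append(omega)
    return est

# Linearly modulated frequency (piecewise-linear FM, as discussed above).
random.seed(0)
w_true = [0.3 + 1e-5 * t for t in range(5000)]
phase, y = 0.0, []
for w in w_true:
    phase += w
    y.append(cmath.exp(1j * phase)
             + complex(random.gauss(0, 0.05), random.gauss(0, 0.05)))

est = anf_track(y, th1=1e-3, th2=2e-2, th3=2e-1, omega0=0.3)
err = abs(est[-1] - w_true[-1])
print(round(err, 3))
```

Note that the chosen gains satisfy the sufficient stability conditions discussed in Section 3, and that the frequency-rate state allows the filter to follow the linear chirp without a systematic lag.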
In the sequel we will develop a sequential self-tuning extension for the ANF filter (4). The resulting algorithm has low computational complexity and is locally convergent in mean to the optimal values of the adaptation gains.

3. Sequential optimization

We will adjust the gains θ1, θ2 and θ3 so as to minimize the following measure of fit

J(t) = J(t; θ) = (1/2) ξ²(t),   (8)

where θ is the parameter vector consisting of the adaptation gains

θ = [θ1 θ2 θ3]^T.   (9)

Our first design will be based on the stochastic gradient approach

θ̂(t+1) = θ̂(t) − g ∂J(t)/∂θ,   (10)

where g > 0 is a small gain and

∂J(t)/∂θ = [∂J(t)/∂θ1  ∂J(t)/∂θ2  ∂J(t)/∂θ3]^T   (11)

denotes the gradient of J(t) with respect to θ(t). Let

x(t) = [x1(t) x2(t) x3(t)]^T = [∂ξ(t)/∂θ1  ∂ξ(t)/∂θ2  ∂ξ(t)/∂θ3]^T   (12)

denote the vector of derivatives of ξ(t) with respect to the elements of θ. Then one obtains

∂J(t)/∂θ = ξ(t) x(t).   (13)

In order to find expressions for x1(t), x2(t), x3(t) the following result from [22] can be applied. Let e(t) = −Im[v(t)s*(t)/a0²]. Using the approximating linear filter (ALF) method – a linearization approach proposed in [3] for the purpose of analyzing adaptive notch filters – one can show that

ω̂(t) = [(θ2 + (θ1 − θ2)q⁻¹) / D(q⁻¹; θ)] u(t),   (14)

where

D(q⁻¹; θ) = 1 + d1 q⁻¹ + d2 q⁻² + d3 q⁻³
d1 = θ1 + θ2 + θ3 − 3
d2 = 3 − 2θ3 − θ2
d3 = θ3 − 1   (15)

and

u(t) = ω(t) + (1 − q⁻¹) e(t).   (16)

Note that rigorous conditions under which the ALF can be applied were not established in [3]. However, numerous successful applications of the method show that the approximation works more than well under good tracking, i.e. when ω̂(t) ≈ ω(t).

Combining (7), (14) and (15) leads to the following formula

ξ(t) = [(1 − 2q⁻¹ + q⁻²) / (1 + d1 q⁻¹ + d2 q⁻² + d3 q⁻³)] u(t),   (17)

which, after replacing all occurrences of q⁻¹ with appropriately delayed signals, can be rewritten into the following recursive equation

ξ(t) = −d1 ξ(t−1) − d2 ξ(t−2) − d3 ξ(t−3) + u(t) − 2u(t−1) + u(t−2).   (18)

Since u(t) is an exogenous quantity [cf. (16)] it follows that all derivatives of u(t) vanish, ∂u(t)/∂θ1 = 0, ∂u(t)/∂θ2 = 0, ∂u(t)/∂θ3 = 0. Therefore, differentiating (18) and employing (15) leads to the following expressions

x1(t) = −d1 x1(t−1) − d2 x1(t−2) − d3 x1(t−3) − ξ(t−1)
x2(t) = −d1 x2(t−1) − d2 x2(t−2) − d3 x2(t−3) − [ξ(t−1) − ξ(t−2)]
x3(t) = −d1 x3(t−1) − d2 x3(t−2) − d3 x3(t−3) − [ξ(t−1) − 2ξ(t−2) + ξ(t−3)].   (19)

Reverting back to the transfer function approach yields

x1(t) = −[1 / D(q⁻¹)] ξ(t−1)
x2(t) = −[(1 − q⁻¹) / D(q⁻¹)] ξ(t−1)
x3(t) = −[(1 − 2q⁻¹ + q⁻²) / D(q⁻¹)] ξ(t−1).   (20)

It follows that the last two expressions in (19) can be replaced with the following ones

x2(t) = x1(t) − x1(t−1)
x3(t) = x1(t) − 2x1(t−1) + x1(t−2) = x2(t) − x2(t−1).   (21)

All the above recursions are asymptotically stable if the following (sufficient) stability conditions hold

0 < θ1 < 1,  0 < θ2 < 1,  0 < θ3 < 1
θ3(θ2 + θ1) > θ1.   (22)

Combining all the partial results derived so far, i.e. (4), (10), (19) and (21), one can obtain the sequential self-tuning adaptive frequency tracker summarized in Table 1. The algorithm consists of two parts. Its inner loop is the ANF (4), where the constant gains θ1, θ2, θ3 were replaced with their time-varying counterparts θ̂1(t), θ̂2(t), θ̂3(t). The outer, self-tuning part performs automatic sequential adjustment of θ̂1(t), θ̂2(t) and θ̂3(t) so as to optimize the frequency tracking performance of the ANF.

Table 1. Preliminary adaptive frequency tracker with sequential self-tuning.

Adaptive frequency tracking:
f̂(t) = e^{j[ω̂(t−1) + α̂(t−1)]} f̂(t−1)
ε(t) = y(t) − â(t−1) f̂(t)
â(t) = â(t−1) + θ̂3(t) f̂*(t) ε(t)
δ(t) = Im[ε(t) / (â(t−1) f̂(t))]
α̂(t) = α̂(t−1) + θ̂1(t) δ(t)
ω̂(t) = ω̂(t−1) + α̂(t−1) + θ̂2(t) δ(t)

Self-tuning:
ξ(t) = −(1 − θ̂1(t)/θ̂2(t)) ξ(t−1) + [1/θ̂2(t)] [ω̂(t) − 2ω̂(t−1) + ω̂(t−2)]
x1(t) = −d1 x1(t−1) − d2 x1(t−2) − d3 x1(t−3) − ξ(t−1)
x2(t) = x1(t) − x1(t−1)
x3(t) = x2(t) − x2(t−1)
x(t) = [x1(t) x2(t) x3(t)]^T
θ̂(t+1) = θ̂(t) − g ξ(t) x(t)
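The tracker of Table 1 can be sketched as follows. The step size g, the initial gains, the clamping bounds (a crude safeguard in the spirit of Section 5.4) and the test signal are illustrative assumptions; with so small a g the gains move only slightly, which merely illustrates the mechanics of the scheme.

```python
# A runnable sketch of the Table 1 tracker: the inner ANF loop (4) with
# time-varying gains, and the outer stochastic gradient update (10)
# built from the recursions (15), (19) and (21).
import cmath
import random

def table1_tracker(y, gains, g=1e-7, omega0=0.0):
    th1, th2, th3 = gains
    f, a = 1 + 0j, y[0]
    omega = omega1 = omega2 = omega0     # ω̂(t), ω̂(t−1), ω̂(t−2)
    alpha = 0.0
    xi1 = 0.0                            # ξ(t−1)
    x1_1 = x1_2 = x1_3 = 0.0             # x1(t−1), x1(t−2), x1(t−3)
    x2_1 = 0.0                           # x2(t−1)
    for t in range(1, len(y)):
        # inner loop: ANF (4) with the current gain estimates
        f = cmath.exp(1j * (omega + alpha)) * f
        eps = y[t] - a * f
        delta = (eps / (a * f)).imag     # uses â(t−1), as in Table 1
        a = a + th3 * f.conjugate() * eps
        omega2, omega1 = omega1, omega
        omega = omega1 + alpha + th2 * delta   # ω̂(t) uses α̂(t−1)
        alpha = alpha + th1 * delta
        # outer loop: gradient self-tuning, cf. (15), (19), (21)
        d1, d2, d3 = th1 + th2 + th3 - 3, 3 - 2 * th3 - th2, th3 - 1
        xi = -(1 - th1 / th2) * xi1 + (omega - 2 * omega1 + omega2) / th2
        x1 = -d1 * x1_1 - d2 * x1_2 - d3 * x1_3 - xi1
        x2 = x1 - x1_1
        x3 = x2 - x2_1
        th1 -= g * xi * x1
        th2 -= g * xi * x2
        th3 -= g * xi * x3
        # crude clamps on the gains; Section 5.4 gives exact bounds for (22)
        th1 = min(max(th1, 1e-6), 0.5)
        th2 = min(max(th2, 1e-4), 0.9)
        th3 = min(max(th3, 1e-2), 0.9)
        xi1 = xi
        x1_1, x1_2, x1_3 = x1, x1_1, x1_2
        x2_1 = x2
    return (th1, th2, th3), omega

# Constant-frequency cisoid in noise; the gains should stay in (0, 1)
# and the frequency estimate should remain locked near the true value.
random.seed(3)
y = [cmath.exp(1j * 0.4 * t)
     + complex(random.gauss(0, 0.1), random.gauss(0, 0.1))
     for t in range(20000)]
gains, omega_end = table1_tracker(y, (1e-4, 5e-3, 1e-1), omega0=0.4)
print([round(v, 5) for v in gains], round(abs(omega_end - 0.4), 4))
```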
Note that the formulas (19) are dynamic ones – they involve a feedback loop. Due to this property, it is of particular importance to gain more insight into the convergence properties of the preliminary self-tuning scheme. In the next section we will show that, under Gaussian assumptions, the adaptation gains converge in mean to the optimal ones. Additionally, it will be shown that the preliminary scheme exhibits very different sensitivity to errors in different variables. To cope with this issue several extensions will be proposed.
4. Convergence analysis

4.1. Preliminary considerations

Since the proposed gain adjustment mechanism is based on the stochastic descent approach, it is clear that, for small values of the gain g, the adopted cost criterion (8) will decrease. However, it should be clarified how the convergence occurs and whether the algorithm has any weak points in its design. This is the purpose of this section.

4.2. Associated ODE

Under small values of g, the behavior of the constant-gain stochastic gradient adaptive algorithm summarized in Table 1 can be studied by analyzing the properties of its associated ordinary differential equation (ODE) [25]. Let Ωs denote the stability region of the proposed scheme, and let {ξ(t;θ)}, {x(t;θ)} denote the stationary processes ξ(t), x(t) which settle down for a constant value of the parameter vector θ ∈ Ωs. Denote by θ* the stationary point of the proposed self-optimization loop, i.e. the point in Ωs which satisfies the condition

F(θ*) = 0,   (23)

where

F(θ) = E[ξ(t;θ) x(t;θ)]   (24)

and E[·] denotes the expected value. The linearized ODE associated with the proposed algorithm takes the form

Ẋ_s = −g f(θ*) X_s,   (25)

where s denotes continuous time, X_s = θ(s) − θ*, and

f(θ) = ∂F(θ)/∂θ.   (26)

Our analysis will be done under the assumptions (A1)–(A3). Although (A1)–(A3) can be criticized as being somewhat unrealistic, they enable one to perform a meaningful examination of the properties of the proposed scheme. Using simulations it will later be demonstrated that the quantitative results of the analysis below remain valid under relaxed assumptions.

4.3. Stationary point

Denote by θo the vector of optimal gains, i.e. such a value of θ which minimizes the mean-squared frequency tracking errors,

E[|Δω(t)|²] → min.   (27)

We will first show that θo satisfies (23), i.e. that it is a stationary point of the proposed self-tuning loop. Employing basic results from the theory of stochastic processes one can arrive at [cf. (20)]

F(θ) = [F1(θ) F2(θ) F3(θ)]^T

F1(θ) = −∫₀^{2π} [e^{jω} / D(e^{jω};θ)] S_ξξ(e^{−jω};θ) dω

F2(θ) = −∫₀^{2π} [e^{jω}(1 − e^{jω}) / D(e^{jω};θ)] S_ξξ(e^{−jω};θ) dω

F3(θ) = −∫₀^{2π} [e^{jω}(1 − e^{jω})² / D(e^{jω};θ)] S_ξξ(e^{−jω};θ) dω,   (28)

where D(e^{jω};θ) = D(q;θ)|_{q=e^{jω}} and S_ξξ(e^{−jω};θ) is the power spectral density of {ξ(t;θ)}. To proceed smoothly, we will employ the following result [22]: when θ = θo the sequence ξ(t;θo) has a special interpretation. It holds that

ξ(t;θo) = u(t) − û(t|t−1),   (29)

where

û(t|t−1) = E[u(t) | U(t−1)]   (30)

is the optimal (in the mean-squared sense) one-step-ahead prediction of u(t) based on the set of past observations of u(t), U(t−1) = {u(t−1), u(t−2), ...}. Due to this property it can be readily concluded that ξ(t;θo) is white noise (see [26]), i.e. that

S_ξξ(e^{−jω};θo) = σ²/2π,   (31)

where σ² is a constant which depends on σe² and σw² in a nontrivial way. Combining (31) with the first integral of (28) one obtains

F1(θo) = −(σ²/2π) ∫₀^{2π} [e^{jω} / D(e^{jω};θo)] dω.   (32)

Using the substitution z = e^{jω} converts (32) into the following form

F1(θo) = −(σ²/2πj) ∮_C [1/D(z;θo)] dz,   (33)

where C denotes the unit circle. Since θo ∈ Ωs, it holds that

D(z⁻¹;θo) = D(q⁻¹;θo)|_{q=z} = 1 + d1 z⁻¹ + d2 z⁻² + d3 z⁻³   (34)

has all roots inside the unit circle. Consequently,

D(z;θo) = D(q⁻¹;θo)|_{q⁻¹=z} = 1 + d1 z + d2 z² + d3 z³   (35)

has all its roots outside the unit circle. Therefore, by Cauchy's residue theorem, the integral (33) reduces to zero,

F1(θo) = 0.   (36)

Using the same argument one can show that F2(θo) = 0 and F3(θo) = 0. This confirms that θo is a stationary point of the proposed self-tuning mechanism. A sketch of the proof that θo is the unique stationary point in Ωs is outlined in Appendix B.
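The residue argument leading to (36) can be checked numerically: for gains in Ωs the zeros of D(z;θ) lie outside the unit circle, so the discretized version of the integral in (32) vanishes, while an integral of 1/|D|² of the kind used in Section 4.4 remains strictly positive. The gains below are taken from the κ = 10⁻⁵ row of Table 5; the grid size is an arbitrary choice.

```python
# Numerical check of the residue argument behind (32)-(36).
import cmath
import math

th1, th2, th3 = 3.79e-3, 4.43e-2, 2.81e-1   # ≈ θo for κ = 1e-5 (Table 5)
d1, d2, d3 = th1 + th2 + th3 - 3, 3 - 2 * th3 - th2, th3 - 1

def D(z):
    # D evaluated on the unit circle, cf. (15) and (35)
    return 1 + d1 * z + d2 * z ** 2 + d3 * z ** 3

N = 1 << 15
F1 = 0.0 + 0.0j   # average of e^{jω}/D(e^{jω}) – proportional to (32)
J2 = 0.0          # average of 1/|D(e^{jω})|² – strictly positive
for k in range(N):
    z = cmath.exp(2j * math.pi * k / N)
    F1 += z / D(z)
    J2 += 1.0 / abs(D(z)) ** 2
F1 /= N
J2 /= N

print(abs(F1) < 1e-8, J2 > 0)
```

Because 1/D(z) is analytic inside the unit disc, the uniform average of e^{jω}/D(e^{jω}) picks up no constant Fourier term, which is exactly the statement F1(θo) = 0.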
4.4. Local convergence

The transient behavior of the proposed algorithm around θo can be analyzed by studying the matrix f(θo). This will be done below. Using (28), the individual elements of f may be computed by differentiating the integrals (28) with respect to the appropriate variable

f_m,n(θ) = −∂/∂θ_n ∫₀^{2π} [e^{jω}(1 − e^{jω})^{m−1} / D(e^{jω};θ)] S_ξξ(e^{−jω};θ) dω
         = −∫₀^{2π} e^{jω}(1 − e^{jω})^{m−1} · ∂/∂θ_n [S_ξξ(e^{−jω};θ) / D(e^{jω};θ)] dω,   (37)

where m, n = 1, 2, 3. For θ = θo, the derivative appearing above equals (see Appendix A for the derivation)

∂/∂θ_n [S_ξξ(e^{−jω};θ)/D(e^{jω};θ)] = −(2σ²/2π) · e^{jω}(1 − e^{jω})^{n−1} / [D(e^{jω};θo)]² − (σ²/2π) · e^{−jω}(1 − e^{−jω})^{n−1} / [D(e^{−jω};θo) D(e^{jω};θo)].   (38)

Combining (37) and (38) one obtains that f_m,n(θo) is the sum of two integrals,

f_m,n(θo) = J1(m,n) + J2(m,n),   (39)

where

J1(m,n) = (2σ²/2π) ∫₀^{2π} e^{2jω}(1 − e^{jω})^{m+n−2} / [D(e^{jω};θo)]² dω
J2(m,n) = (σ²/2π) ∫₀^{2π} (1 − e^{jω})^{m−1}(1 − e^{−jω})^{n−1} / [D(e^{−jω};θo) D(e^{jω};θo)] dω.   (40)

Using the substitution z = e^{jω} and the method of residues one can immediately conclude that J1(m,n) = 0 for all m, n, so that

f_m,n(θo) = J2(m,n).   (41)

Symbolic formulas for the individual elements of f(θo) can be obtained using the method discussed in the book [27] or the paper [28]. They take the form

f_1,1(θo) = [(4θ3,o − 2θ3,o² − θ2,oθ3,o + θ1,o − θ1,oθ3,o)/θ1,o] β ≅ 4β θ3,o/θ1,o
f_2,1(θo) = f_1,2(θo) = f_2,2(θo)/2
f_3,1(θo) = f_1,3(θo) = −f_2,2(θo) + f_3,3(θo)/2 ≅ −f_2,2(θo)
f_2,2(θo) = (4 − 2θ3,o)β ≅ 4β
f_3,2(θo) = f_2,3(θo) ≅ 2 f_3,3(θo)
f_3,3(θo) = (4θ2,o − 2θ1,o)β ≅ 4βθ2,o,   (42)

where β = σ² / [(θ1,oθ3,o + θ2,oθ3,o − θ1,o)(8 − θ1,o − 2θ2,o − 4θ3,o)].

To establish an important result regarding the stability of the ODE one must show that all eigenvalues of the matrix f(θo) have positive real parts. It is possible to show this by, e.g., analyzing the characteristic polynomial of f(θo). However, this approach is rather tedious. Instead, observe that f(θo) can be expressed as [cf. (40)]

f(θo) = (σ²/2π) ∫₀^{2π} i(e^{−jω}) i^H(e^{−jω}) dω,  i_n(e^{−jω}) = (1 − e^{−jω})^{n−1} / D(e^{−jω};θo),  n = 1, 2, 3.   (43)

Since the matrix under the integral (43) is positive semidefinite for every ω and varies nonlinearly with ω, one can conclude that f(θo) is positive definite. This means that the associated ODE (25) is stable. Under some additional conditions, stated in [25], one obtains the following result.

Proposition. For any ε > 0 and g sufficiently small there exists a constant G(g, ε) such that

lim sup_{t→∞} P(‖θ̂(t) − θo‖ > ε) ≤ G(g, ε),   (44)

where G(g, ε) tends to zero as g tends to zero.

4.5. Discussion

Table 2. Normalized elements of the matrix f(θo) for different values of the nonstationarity measure κ.

κ         f_1,1(θo)/σ²   f_1,2(θo)/σ²   f_1,3(θo)/σ²   f_2,2(θo)/σ²   f_2,3(θo)/σ²   f_3,3(θo)/σ²
1·10⁻¹⁰   4.2·10⁷        6.2·10³        −1.2·10⁴       1.2·10⁴        1.4·10¹        7.2·10⁰
1·10⁻⁹    6.3·10⁶        2.0·10³        −4.0·10³       4.0·10³        1.0·10¹        5.0·10⁰
1·10⁻⁸    9.6·10⁵        6.5·10²        −1.3·10³       1.3·10³        7.1·10⁰        3.5·10⁰
1·10⁻⁷    1.5·10⁵        2.2·10²        −4.3·10²       4.3·10²        5.1·10⁰        2.5·10⁰
1·10⁻⁶    2.3·10⁴        7.4·10¹        −1.5·10²       1.5·10²        3.7·10⁰        1.9·10⁰
1·10⁻⁵    3.8·10³        9.7·10¹        −5.1·10¹       5.2·10¹        2.8·10⁰        1.4·10⁰
Table 2 shows the normalized values f_m,n(θo)/σ² obtained for various values of the nonstationarity measure κ (6). In agreement with (43), all eigenvalues of the resulting matrix f(θo) are positive. Observe the huge differences in the magnitudes of the diagonal components of f(θo), which reach six orders of magnitude. It follows that the individual components of the vector θ̂(t) converge at considerably different rates, which makes the preliminary self-tuning loop of little practical value – while the convergence of one parameter may be satisfactory, the other two will converge substantially more slowly or appear stalled. In an attempt to equalize convergence speed, one could employ three different gains, one for each component of the vector θ̂(t). However, such an approach has several drawbacks. Observe that the off-diagonal elements of the matrix f(θo) are nonzero. The presence of cross couplings in f(θo) would make fine-tuning of
Table 3 Adaptive frequency tracker with normalized sequential self-tuning.
Table 4 One degree of freedom normalized sequential self-tuning adaptive frequency tracker.
Adaptive frequency tracking
Adaptive frequency tracking
fˆ (t ) = e j [ωˆ (t −1)+αˆ (t −1)] ˆf (t − 1)
fˆ (t ) = e j [ωˆ (t −1)+αˆ (t −1)] ˆf (t − 1)
ε (t ) = y (t ) − aˆ (t − 1) fˆ (t )
ε (t ) = y (t ) − aˆ (t − 1) fˆ (t )
aˆ (t ) = aˆ (t − 1) + θˆ3 (t ) fˆ ∗ (t )ε (t )
ˆ t ) fˆ ∗ (t )ε (t ) aˆ (t ) = aˆ (t − 1) + θ(
αˆ (t ) = αˆ (t − 1) + θˆ1 (t )δ(t )
αˆ (t ) = αˆ (t − 1) + θˆ 3 (t )δ(t )/8
ωˆ (t ) = ωˆ (t − 1) + αˆ (t − 1) + θˆ2 (t )δ(t ) δ(t ) = Im[
ε (t )
aˆ (t −1) fˆ (t )
ωˆ (t ) = ωˆ (t − 1) + αˆ (t − 1) + θˆ 2 (t )δ(t )/2
]
δ(t ) = Im[
Self-tuning
ˆ t )/4)ξ(t − 1) + 2/θˆ 2 (t )[ωˆ (t ) − 2ωˆ (t − 1) + ωˆ (t − 2)] ξ(t ) = −(1 − θ(
x1 (t ) = −d1 x1 (t − 1) − d2 x1 (t − 2) − d3 x1 (t − 3) − ξ(t − 1)
ˆ t ) + 3/8θˆ 2 (t )]ξ(t − 1) x(t ) = −d1 x(t − 1) − d2 x(t − 2) − d3 x(t − 3) − [1 + θ(
x2 (t ) = x1 (t ) − x1 (t − 1)
+ [2 + θˆ (t )]ξ(t − 2) − ξ(t − 3)
x3 (t ) = x2 (t ) − x2 (t − 1)
r (t ) = ρ r (t − 1) + x2 (t )
x(t ) = [x1 (t ) x2 (t ) x3 (t )]
T
θˆ (t + 1) = θˆ (t ) − ξ(t )x(t )/r (t )
R (t ) = ρ R (t − 1) + x(t )x (t ) T
R −1 (t )x(t )ξ(t )
the three-gain mechanism rather difficult – a change of one gain, say the first one, would also affect convergence rates of the other two parameters. Furthermore, since it can be observed that f (θ o ) depends on κ , such an approach cannot change the fact that convergence speed would vary with the value of κ . Having this in mind, in the next section we will propose two extended algorithms with improved convergence properties and easier tuning. 5. Extensions 5.1. Normalized algorithm To avoid difficulties of choosing three weights one can employ a normalized version of the algorithm, which can be obtained using the recursive prediction error (RPE) method [29]
ˆ t ) − R −1 (t ) ∂ J (t ) , θˆ (t + 1) = θ( ∂θ
(45)
where the time-varying gain matrix R (t ) is computed as
R (t ) = ρ R (t − 1) + x(t )xT (t ),
(46)
and 0 < ρ < 1, chosen close to 1, is the exponential weighting factor which governs the effective memory length of the self-tuning loop. The resulting algorithm is summarized in Table 3. To show that the normalized scheme has improved properties, consider the following argument. For ρ sufficiently close to 1 and t → ∞, the matrix R (t ) may be replaced with its mean
R (t ) ∼ = E R (t ) =
]
Self-tuning
ξ(t ) = −(1 − θˆ1 (t )/θˆ2 (t ))ξ(t − 1) + 1/θˆ2 (t )[ωˆ (t ) − 2ωˆ (t − 1) + ωˆ (t − 2)]
θˆ (t + 1) = θˆ (t ) −
ε (t )
aˆ (t −1) fˆ (t )
E[x(t )xT (t )] 1−ρ
.
A simplified version of the algorithm can be obtained by reducing the number of degrees of freedom from 3 to 1, i.e. by replacing the three gains θˆ1 (t ), θˆ2 (t ), θˆ3 (t ) with a single parameter. Theoretical results, simulations, and practical experience with (4) suggest that the gains θ1 and θ2 can be chosen as functions of θ3 (see [17] for details)
θ1 =
θ33 8
θ2 =
θ32 2
.
(50)
This rule allows one to simplify the proposed self-tuning mechanism considerably. The resulting one degree of freedom ‘preoptimized’ algorithm, which additionally employs normalization, is summarized in Table 4. Since the proposed simplification is partially heuristic, some of the results obtained previously no longer hold. In particular, due to constraints (50) placed on adaptation gains, the simplified scheme can no longer converge to the unconstrained optimal point. Moreover, the constrained minimum of J (t ) cannot be guaranteed to ˆ (t )|2 ]. coincide exactly with the constrained minimum of E[| ω Fortunately, simulations confirm that the discrepancy is negligible in practice – for further details see Section 6.3. Last, but not least, note that convergence speed of the simplified scheme depends on the value of ρ , in the same manner as in (49). 5.3. Computational complexity
(48)
According to [29], for X s close to zero, the term E[x(t )xT (t )] averages to f (θ ∗ ). Using this equality in (48) leads to
˙ s = −(1 − ρ ) X s , X
5.2. One degree of freedom algorithm
(47)
It follows that the linearized ODE associated with the normalized algorithm takes the form
˙ s = −(1 − ρ ) E x(t )xT (t ) −1 f (θ ∗ ) X s . X
self-tuning loop, as long as the poles of the ODE are a good deal ‘slower’ (in the sense of their time constants being larger) than the dominant pole of (18). The recommended values of ρ are those from the interval [0.999, 0.9999].
(49)
which demonstrates that normalization decouples all the parameters and makes time constants of the self-tuning loop [equal to inverse of the ODE’s poles, i.e. 1/(1 − ρ )] depend only on ρ . This means that it is very easy to adjust convergence behavior of the
Computational complexity of all the proposed algorithms is moderate. The underlying ANF filter (4) requires 16 real-valued multiplications, 13 real-valued additions/subtractions, 2 divisions and 2 trigonometric operations (sin, cos) per every processed sample of data. The preliminary self-tuning loop requires 13 real-valued multiplications, 20 additions/subtractions and 2 divisions. The normalized self-tuning loop (46) is somewhat more costly – it requires 73 real-valued multiplications, 60 additions/subtractions and 4 divisions – although these are still acceptable values. The simplified ANF filter requires 18 real-valued multiplications, 13 real-valued additions/subtractions, 4 divisions and 2 trigono-
M. Meller / Digital Signal Processing 33 (2014) 71–82
77
Table 5 Comparison of the steady-state means of the parameters θˆ1 (t ), θˆ2 (t ), θˆ3 (t ) with their optimal values for different values of nonstationarity measure
κ.
κ
E[θˆ1 (t )]
E[θˆ2 (t )]
E[θˆ3 (t )]
θ1,o
θ2,o
θ3,o
1 · 10−10 1 · 10−9 1 · 10−8 1 · 10−7 1 · 10−6 1 · 10−5 1 · 10−4
1.47 · 10−5 4.43 · 10−5 1.34 · 10−4 4.16 · 10−4 1.25 · 10−3 3.76 · 10−3 1.09 · 10−2
1.17 · 10−3 2.45 · 10−3 5.09 · 10−3 1.07 · 10−2 2.20 · 10−2 4.42 · 10−2 8.64 · 10−2
4.64 · 10−2 6.82 · 10−2 9.88 · 10−2 1.41 · 10−1 2.00 · 10−1 2.80 · 10−1 3.83 · 10−1
1.38 · 10−5 4.32 · 10−5 1.34 · 10−4 4.15 · 10−4 1.26 · 10−3 3.79 · 10−3 1.11 · 10−2
1.13 · 10−3 2.41 · 10−3 5.09 · 10−3 1.06 · 10−2 2.19 · 10−2 4.43 · 10−2 8.70 · 10−2
4.72 · 10−2 6.85 · 10−2 9.90 · 10−2 1.42 · 10−1 2.01 · 10−1 2.81 · 10−1 3.84 · 10−1
metric operations (sin, cos) per every processed sample of data. Its self-tuning loop can be implemented with 15 real-valued multiplications, 21 real-valued additions/subtractions and 3 divisions (it ˆ t )/4, are was assumed that several quantities, such as θˆ (t )/2 or θ( computed while executing the ANF part and reused in the optimization loop). Obviously, there exist ANFs with considerably lower computational complexity, e.g. [4,14,30]. However, these algorithms do not include frequency rate tracking of any form nor they include any means of automatic adjustment of their adaptation gains. Both of these features justify the increased computational complexity of the proposed scheme, as they can significantly improve frequency tracking accuracy – see Section 6.5 for some examples. 5.4. Safety measures The proposed algorithms, in their present form, are not equipped with any means of assuring that the adjusted gains remain in the stability region Ωs , specified by (22). To reduce potential risk of the algorithm’s divergence it is reasonable to include additional safety mechanism which guarantees that the adjusted parameters remain in Ωs . In case of the three-gain normalized variant, it is recommended to restrain θˆ3 (t ) and θˆ2 (t ) to the interval (0, 1) and θˆ1 (t ) to (0, θ1,max (t )), θ1,max (t ) = θˆ2 (t )θˆ3 (t )/[1 − θˆ3 (t )], which guarantees that (22) holds. In case of the simplified scheme it is sufficient to keep θˆ (t ) in (0, 1). 5.5. Comparison with parallel approach It could perhaps seem that the proposed sequential solution does not exhibit significant advantages over the previously proposed parallel approach [22], except reduced computational complexity. It will be argued that this is not the case and that the sequential approach has some unique properties. The principle behind parallel tracker is quite simple. It runs a bank of notch filters (4) and selects the supposedly best performing one using mean-squared values of (7). 
However, the fact that (7) attains its smallest mean-squared value at the optimal values of the ANF gains does not mean that, of two filters, the one with the smaller value of (7) performs better. Fig. 1 shows a comparison of the mean-squared frequency tracking errors E[|Δω(t)|²] and the mean-squared auxiliary sequence values E[|ξ(t)|²] obtained for the integrated random-walk model with κ = 10⁻⁸ and a dense filterbank. Selecting the filter which yields the smallest value of E[|ξ(t)|²] clearly leads to very good performance. Now suppose that several nearly-optimal filters have been removed from the bank, e.g. those with μ ∈ [0.08, 0.15]. On the basis of E[|ξ(t)|²], the filter with μ = 0.155 would have been selected, even though the one with μ = 0.075 actually provides better performance. This means that the parallel approach can point to the best performing filter only when the globally optimal filter is present in the bank. In practice the bank must therefore consist of many filters, so that the performance loss caused by a wrong choice of the filter remains small.
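The selection pitfall described above can be illustrated with a small numerical sketch. The two quadratic curves below are hypothetical stand-ins for E[|ξ(t)|²] and E[|Δω(t)|²] (they are not the paper's actual filter equations); their minima sit at slightly different gains, and the grid deliberately omits the near-optimal region, mirroring the example in the text:

```python
import numpy as np

# Hypothetical sketch: the parallel tracker of [22] picks, from a finite grid
# of adaptation gains, the filter with the smallest running mean-squared
# criterion.  If the grid misses the global optimum, the criterion's ranking
# and the true tracking-error ranking need not agree.

mu_grid = np.array([0.02, 0.05, 0.075, 0.155, 0.20, 0.25])  # gains in the bank
crit = (mu_grid - 0.12) ** 2 + 1.0   # stand-in for E[|xi(t)|^2], minimum at 0.12
mse  = (mu_grid - 0.10) ** 2 + 0.5   # stand-in for frequency MSE, minimum at 0.10

chosen = mu_grid[np.argmin(crit)]    # gain the parallel tracker would select
best   = mu_grid[np.argmin(mse)]     # gain that actually tracks best
print(chosen, best)                  # -> 0.155 0.075
```

With the near-optimal gains absent from the grid, the criterion picks μ = 0.155 even though μ = 0.075 yields the smaller tracking error — exactly the situation discussed above.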
Fig. 1. Comparison of mean-squared frequency tracking errors E[|Δω(t)|²] and mean-squared auxiliary sequence values E[|ξ(t)|²].
Obviously, the proposed sequential approach, which relies on the slope of E[|ξ(t)|²] rather than on its value, is free of this limitation.

6. Simulation results

6.1. Stationary point

In the first experiment we verify that, under (A.1)–(A.3), the proposed self-tuning mechanism converges in mean to the optimal adaptation gains. Table 5 compares the steady-state means of the parameters θ̂1(t), θ̂2(t), θ̂3(t) with the optimal settings for several values of the nonstationarity measure κ.³ The results were obtained using joint ensemble (50 realizations of {w(t)}, {v(t)}, t ∈ [0, 100 000]) and time (t ∈ [30 000, 100 000]) averaging. In the experiment the constant ρ was set to 0.9999. The parameters of the signal were set as follows: a0² = 10, random initial phase, σv² = 1 and σw² = κ/a0². The results confirm that the proposed self-tuning scheme converges in mean to the optimal adaptation gains. The minor discrepancies observed in Table 5 can be attributed to an insufficient amount of averaging.

6.2. Transient behavior

To check whether the results of the convergence analysis in Section 4.4 hold in practice, a simple simulation experiment was performed. A narrowband signal with a random initial phase (φ0 uniform in [0, 2π)) and constant amplitude, a0² = 10, buried in Gaussian white noise with variance σv² = 1, was used for this purpose. The variance σw² was equal to 5·10⁻⁹ during the first 15 000 samples and was changed to 5·10⁻⁸ for the remaining part of the experiment.
³ The optimal settings were computed in [17].
Fig. 2. Response to a step change of κ for the preliminary scheme – averaged results of 50 simulations. To introduce the step of κ, the variance σw² was changed from 5·10⁻⁹ to 5·10⁻⁸ at t = 10 000. The remaining parameters of the signal were constant: a0² = 10, σv² = 1. Solid lines – simulation results. Dashed lines – optimal values of adaptation gains.
Fig. 4. Response to a step change of κ for the normalized scheme – averaged results of 50 simulations. To introduce the step of κ, the variance σw² was changed from 5·10⁻⁹ to 5·10⁻⁸ at t = 10 000. The remaining parameters of the signal were constant: a0² = 10, σv² = 1. Solid lines – simulation results. Dashed lines – optimal values of adaptation gains.

Table 6
Comparison of the steady-state means of the one degree of freedom algorithm's parameter θ̂(t) with its optimal values for different values of the nonstationarity measure κ.

κ          E[θ̂(t)]     θo
1·10⁻¹⁰    4.90·10⁻²    4.80·10⁻²
1·10⁻⁹     7.05·10⁻²    7.02·10⁻²
1·10⁻⁸     1.02·10⁻¹    1.02·10⁻¹
1·10⁻⁷     1.47·10⁻¹    1.49·10⁻¹
1·10⁻⁶     2.12·10⁻¹    2.16·10⁻¹
1·10⁻⁵     3.01·10⁻¹    3.11·10⁻¹
1·10⁻⁴     4.22·10⁻¹    4.42·10⁻¹
This corresponds to changing the rate of nonstationarity from κ1 = 5·10⁻⁸ to κ2 = 5·10⁻⁷. The behaviors of the preliminary and the normalized schemes were compared. The gain g of the preliminary algorithm was set to g = 1·10⁻⁷. The forgetting constant ρ of the normalized variant was set to 0.99925. To avoid an erratic start, the following startup procedure was employed: the quantities ξ(t), x1(t), x2(t), x3(t) were evaluated starting from t = 1000, R(t) from t = 1500, and the self-tuning mechanism was enabled after 2000 samples.

Fig. 2 shows the averaged trajectories of θ̂1(t), θ̂2(t) and θ̂3(t) for the preliminary scheme (50 realizations of {w(t)}, {v(t)} were used). It can be clearly seen that the rate of adaptation of θ̂1(t) is reasonable, but the trajectories of the remaining two parameters are almost stalled. Figs. 3 and 4 show the typical and the averaged behavior of the normalized scheme. Introducing the normalization equalized the convergence rates of all the variables and, more importantly, improved the convergence speed of the algorithm. This is consistent with the results obtained using the ODE method, which predicts that the error will be reduced to less than 5% in 4000 samples. A small discrepancy may be attributed to the fact that the ODE analysis in Section 5.1 is local, while the change of κ during the experiment is quite substantial.

Fig. 3. Typical transient behavior of the normalized scheme in response to a step change of κ. To introduce the step of κ, the variance σw² was changed from 5·10⁻⁹ to 5·10⁻⁸ at t = 10 000. The remaining parameters of the signal were constant: a0² = 10, σv² = 1. Solid lines – simulation results. Dashed lines – optimal values of adaptation gains.
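The quoted 5% settling times are consistent with a first-order lag whose time constant is 1/(1 − ρ): since e⁻³ ≈ 0.05, the error falls below the 5% level after roughly 3/(1 − ρ) samples. This reading is our inference from the reported numbers, not a formula stated in the text:

```python
# Settling times implied by a first-order lag with time constant 1/(1 - rho):
# exp(-3) ~= 0.05, so the 5% level is reached after about 3/(1 - rho) samples.
for rho in (0.9996, 0.99925, 0.9985):
    print(rho, round(3.0 / (1.0 - rho)))  # -> 7500, 4000, 2000 samples
```

For ρ = 0.99925 this gives the 4000 samples quoted above; the other two values of ρ are examined in Section 6.4.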
6.3. One degree of freedom scheme

The tests of the steady-state and the transient behavior of the simplified one degree of freedom scheme were performed using the same setup as in the previous two subsections.

Table 6 compares the steady-state means of the adjusted parameter θ̂(t) with its optimal values, found using exhaustive numerical search, for different values of κ. Observe that the steady-state means are close to the optimal values, although a small error can be observed, especially for larger values of κ. This can be attributed to the fact that, under the constraints (50), the values of θ̂(t) minimizing E[|ξ(t)|²] and E[|Δω(t)|²] are not equal. However, further simulations revealed that this error does not affect performance significantly – the loss of mean-squared frequency tracking accuracy was well below 1% in all cases.

Fig. 5 shows the typical and the averaged behavior of the simplified scheme. As expected, its convergence speed is comparable to that of the full normalized scheme. Again, the performance loss caused by the proposed modification proved negligible: in the case of the normalized scheme, the mean-squared frequency tracking error evaluated after the step change occurred, i.e. during the interval t ∈ [10 000, 15 000], was equal to 1.38·10⁻⁴, while for the simplified scheme the analogous result reads 1.40·10⁻⁴. This shows the practical value of the simplified solution.
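For reference, the benchmark signal used in the experiments above can be sketched as follows. This is our reading of the setup — a constant-amplitude complex cisoid buried in white noise, with a frequency trajectory following a Gaussian random walk and σw² = κ/a0²; the initial frequency value is arbitrary:

```python
import numpy as np

# Hypothetical sketch of the benchmark signal of Sections 6.1-6.3: a
# constant-amplitude complex cisoid with random initial phase, buried in
# complex white noise, whose frequency follows a Gaussian random walk.
# Parameter names mirror the text: a0^2 = 10, sigma_v^2 = 1,
# sigma_w^2 = kappa / a0^2.  The initial frequency 0.3 is our choice.

rng = np.random.default_rng(0)

def make_signal(n, a0_sq=10.0, sigma_v_sq=1.0, kappa=1e-8):
    a0 = np.sqrt(a0_sq)
    sigma_w = np.sqrt(kappa / a0_sq)
    w = sigma_w * rng.standard_normal(n)            # frequency increments
    omega = 0.3 + np.cumsum(w)                      # random-walk frequency
    phi = 2 * np.pi * rng.random() + np.cumsum(omega)  # random initial phase
    v = np.sqrt(sigma_v_sq / 2) * (rng.standard_normal(n)
                                   + 1j * rng.standard_normal(n))
    y = a0 * np.exp(1j * phi) + v                   # noisy measurements
    return y, omega

y, omega = make_signal(100_000)
```

The mean power of y is then a0² + σv² = 11, matching the signal-to-noise conditions used throughout this section.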
Fig. 5. Typical (top plot) and averaged (bottom plot) response to a step change of κ for the one degree of freedom scheme. To introduce the step of κ, the variance σw² was changed from 1·10⁻⁸ to 5·10⁻⁸ at t = 10 000. The remaining parameters of the signal were constant: a0² = 10, σv² = 1. Solid lines – simulation results. Dashed lines – optimal values of adaptation gains.

6.4. Choice of ρ

Fig. 6 shows a comparison of the averaged trajectories of the parameter θ̂(t) for a step change of κ from 1·10⁻⁷ to 5·10⁻⁷ for ρ ∈ {0.9996, 0.99925, 0.9985}. To introduce the step of κ, the variance σw² was changed from 1·10⁻⁸ to 5·10⁻⁸ at t = 10 000. The remaining parameters of the signal were the same as in the previous simulations, i.e. a0² = 10, σv² = 1. The following observations can be made:

1. In all cases the time constant agrees well with the theoretical analysis, which predicts that the mean tracking error will be reduced to less than 5% in 7500, 4000 and 2000 samples, respectively.
2. A bias, which depends on ρ, can be observed. This behavior may be attributed to the fact that, as the gain adjustment loop gets faster, the ODE approach gradually loses accuracy, which causes the stationary point of the adaptation mechanism to shift. Nevertheless, the bias did not degrade tracking accuracy considerably – in all cases the mean-squared frequency tracking errors differed by no more than 1% from each other.

Fig. 6. Comparison of averaged trajectories of the parameter θ̂(t) for a step change of κ from 1·10⁻⁷ to 5·10⁻⁷ for ρ ∈ {0.9996, 0.99925, 0.9985}. Solid lines – simulation results. Dashed lines – optimal values of adaptation gains.

6.5. Comparison with existing approaches

The simplified scheme was compared with several existing approaches using the following two-mode signal

ω(t) = 0.3                                for t < 3000
ω(t) = 0.2 + 0.1 cos(2π(t − 3000)/T_ω)    for t ≥ 3000
a(t) = 3 + sin(2π t/T_a),   (51)

where T_ω = 2000 and T_a = 1000 denote the periods of the frequency and amplitude modulations, respectively. The variance of the wideband measurement noise in the experiment was σv² = 0.01. The following algorithms were compared:

• Well tuned Regalia's complex filter [4]. The optimal values of the filter's parameters, i.e. the values which minimized the mean-squared frequency tracking error, were found using exhaustive search.
• Well tuned modified complex plain gradient filter from [14].
• Well tuned arctangent-based complex filter from [30].
• Well tuned ANF (4) with constant gains set according to the rule (50).
• Adaptive parallel tracker from [22], composed of 9 filters. The gains of the ANFs in the bank were spaced equidistantly on the interval θ3 ∈ [0.05, 0.25]. The size of the local evaluation window was set to 50 samples.
• Simplified one degree of freedom scheme proposed in the paper. The algorithm employed ρ = 0.999.
• Self-tuning scheme from [10], which adjusts the adaptation gains of an ANF so as to minimize the mean-squared signal prediction errors yielded by the ANF.
• Well tuned EKF tracker, based on the following signal model

  [ s(t+1) ]   [ e^{jω(t)}  0 ] [ s(t) ]
  [ ω(t+1) ] = [ 0          1 ] [ ω(t) ] + w(t),

  y(t) = [ 1  0 ] [ s(t)  ω(t) ]ᵀ + v(t),   (52)

  where σv² = 0.01 and E[w(t)wᴴ(t)] = diag(σ1², σ2²). The optimal values of σ1², σ2² were found using exhaustive search.
• The solution from [11] was also implemented but was found to be sensitive to initial conditions and to diverge most of the time. For this reason it is not included in the comparison.

Note that the comparison was fair, if not actually slightly in favor of the constant-parameter schemes, which were tuned to yield their best performance. This could be done only because the true values of the frequency were known in the experiment. Since such a situation is unlikely to occur in practice, the relative performance of the constant-parameter schemes is somewhat optimistic.

Table 7
Comparison of mean-squared frequency tracking errors yielded by the proposed one degree of freedom sequential tracker and seven other approaches.

Algorithm                                     E[|Δω(t)|²]
Regalia's filter                              7.77·10⁻⁷
Modified complex plain gradient filter        7.41·10⁻⁷
Arctangent-based complex filter               5.01·10⁻⁷
ANF (4) with constant gains                   2.69·10⁻⁷
Parallel estimator                            2.25·10⁻⁷
Proposed sequential self-tuning estimator     2.17·10⁻⁷
Sequential self-tuning estimator from [10]    3.00·10⁻⁶
EKF tracker                                   2.41·10⁻⁶

The mean-squared frequency tracking errors yielded by the algorithms are summarized in Table 7. Both the sequential and the parallel approach outperformed the other solutions. The algorithm from [10] performs particularly poorly, but this is not surprising given that it is designed to optimize signal, rather than frequency, tracking performance. The proposed sequential solution performed slightly better than the previously introduced parallel estimator.

Fig. 7 depicts the behavior of different frequency estimators during the period t ∈ [2800, 3200]. Several conclusions can be drawn
from the comparison. Firstly, the estimates yielded by the algorithm from [10] are rather noisy. Secondly, the behavior of the well tuned arctangent-based ANF is similar to that of the proposed estimator; however, it is less accurate both in the constant-frequency and in the varying-frequency phase. Finally, in the case of the parallel estimator, occasional larger deviations can be observed, which explains its poorer performance.

Fig. 7. Behavior of different frequency estimators during the period t ∈ [2800, 3200]. (a) Parallel estimator from [22], (b) proposed sequential self-tuning estimator, (c) sequential self-tuning estimator from [10], (d) arctangent-based tracker from [30].

6.6. Channel demodulation

A frequency-modulated cisoid, carrying a 22 kHz speech recording ("Remember, the Force will be with you... always"), was demodulated using the proposed simplified scheme. The simulated signal to noise ratio was equal to 6 dB and the constant ρ was set to 0.999. Fig. 8 shows how the gain θ̂(t) changed during the experiment. Observe that θ̂(t) is large during the fragments containing speech and small when only noise is present.

Fig. 8. Estimated values of the gain θ̂(t) during the channel demodulation experiment.

Fig. 9 compares the true and the estimated frequency during the first 40 000 samples of the recording. Despite the low signal to noise ratio, the proposed frequency tracker performs well – the mean-squared frequency tracking error evaluated for the recording was equal to 2.86·10⁻⁴. The experiment was repeated using the parallel tracker (consisting of 11 filters with θ3 spaced between 0.05 and 0.5 and a local evaluation window length equal to 1000 samples), the well tuned EKF tracker, the well tuned arctangent-based ANF from [30] and the well tuned ANF (4) with constant gains set according to (50). The parallel tracker yielded an MSE of 3.61·10⁻⁴, the MSE of the EKF reached 3.67·10⁻⁴, while the arctangent ANF scored 3.72·10⁻⁴. Interestingly, this trial proved quite difficult for the ANF (4) with constant gains – its MSE was "only" 5.20·10⁻⁴. This probably stems from the fact that the speech signal does not contain many quasi-linear frequency trajectories, which works against (4) equipped with constant gains. In light of this fact, the performance achieved using the proposed self-optimizing scheme deserves even more respect.

Fig. 9. Comparison of the true and the estimated frequency during the channel demodulation experiment. Top plot – true values. Bottom plot – estimates.

7. Conclusions

The problem of automatic tuning of an adaptive notch filter was considered. The proposed mechanism adjusts the adaptation gains of the ANF so as to minimize frequency, rather than signal, tracking errors. It was shown that the proposed scheme considerably improves the tracking accuracy of the ANF algorithm under unknown and possibly time-varying conditions.

Acknowledgment
This work was supported in part by the National Science Centre (Grant no. UMO-2011/03/B/ST7/01882).

Appendix A. Proof of (38)

The derivative in (37) equals

∂/∂θ_n [ S_ξξ(e^{−jω}; θ) / D(e^{jω}; θ) ] = A_n(θ) + B_n(θ),   (A.1)

where

A_n(θ) = − D_n(e^{jω}; θ) / [D(e^{jω}; θ) D(e^{jω}; θ)] · S_ξξ(e^{−jω}; θ),
B_n(θ) = 1/D(e^{jω}; θ) · ∂S_ξξ(e^{−jω}; θ)/∂θ_n,
D_n(e^{jω}; θ) = ∂D(e^{jω}; θ)/∂θ_n.   (A.2)

Using (15) it is straightforward to show that

D_n(e^{jω}; θ) = e^{jω}(1 − e^{jω})^{n−1}.   (A.3)

To obtain the formula for S_ξξ(e^{−jω}; θ), valid for any θ ∈ Ωs, observe that [cf. (17)]

S_ξξ(e^{−jω}; θ) = [1 − 2e^{−jω} + e^{−2jω}][1 − 2e^{jω} + e^{2jω}] / [D(e^{−jω}; θ) D(e^{jω}; θ)] · S_uu(e^{−jω}),   (A.4)

which, combined with (31), yields

S_ξξ(e^{−jω}; θ) = D(e^{−jω}; θo) D(e^{jω}; θo) / [D(e^{−jω}; θ) D(e^{jω}; θ)] · σ²/(2π).   (A.5)

Setting θ = θo and combining (A.5) with the first equation of (A.2) and (A.3), one obtains

A_n(θo) = − σ² e^{jω}(1 − e^{jω})^{n−1} / [2π D(e^{jω}; θo) D(e^{jω}; θo)].   (A.6)

On the other hand, differentiating (A.5) yields

∂S_ξξ(e^{−jω}; θ)/∂θ_n = σ²/(2π) · D(e^{−jω}; θo) D(e^{jω}; θo)
  × [ − D_n(e^{−jω}; θ) / (D(e^{−jω}; θ) D(e^{−jω}; θ) D(e^{jω}; θ))
      − D_n(e^{jω}; θ) / (D(e^{−jω}; θ) D(e^{jω}; θ) D(e^{jω}; θ)) ],   (A.7)

which, after setting θ = θo and using (A.3), simplifies to

∂S_ξξ(e^{−jω}; θ)/∂θ_n = − σ² e^{−jω}(1 − e^{−jω})^{n−1} / [2π D(e^{−jω}; θo)]
                         − σ² e^{jω}(1 − e^{jω})^{n−1} / [2π D(e^{jω}; θo)].   (A.8)

Substituting (A.8) into (A.2) one obtains

B_n(θo) = − σ² e^{−jω}(1 − e^{−jω})^{n−1} / [2π D(e^{−jω}; θo) D(e^{jω}; θo)]
          − σ² e^{jω}(1 − e^{jω})^{n−1} / [2π D(e^{jω}; θo) D(e^{jω}; θo)].   (A.9)

Combining the partial results (A.1), (A.6) and (A.9) yields (38).

Appendix B. Uniqueness of (36)

To show that the stationary point (36) is unique, one must demonstrate that

F_i(θ) = 0,   i = 1, 2, 3   (B.1)

does not hold for θ ≠ θo. To keep the size of the discussion reasonable, we limit ourselves to the case when D(θ, q⁻¹) has distinct roots. Using (A.5), the integrals F_i(θ), i = 1, 2, 3, can be expressed as

F_i(θ) = − σ²/(2πj) ∮ G_i(z) dz = − σ² (r_{i,1} + r_{i,2} + r_{i,3}),   (B.2)

where

G_i(z) = (1 − z)^{i−1} D(z⁻¹; θo) D(z; θo) / [D(z⁻¹; θ) D²(z; θ)]   (B.3)

and r_{i,j} denotes the residue of G_i(z) at its j-th pole inside the unit circle, p_j, j = 1, 2, 3. Since

r_{1,j} = lim_{z→p_j} (z − p_j) D(z⁻¹; θo) D(z; θo) / [D(z⁻¹; θ) D²(z; θ)],   j = 1, 2, 3,   (B.4)

it follows that if D(z⁻¹; θo) ≠ D(z⁻¹; θ), i.e. when θ ≠ θo, then at least one of the residues will be nonzero. Moreover, since r_{i,j} = (1 − p_j)^{i−1} r_{1,j}, the condition

[ F_1(θ)  F_2(θ)  F_3(θ) ]ᵀ = [ 0  0  0 ]ᵀ   (B.5)

is equivalent to the following one

⎡ 1            1            1           ⎤ ⎡ r_{1,1} ⎤   ⎡ 0 ⎤
⎢ 1 − p_1      1 − p_2      1 − p_3     ⎥ ⎢ r_{1,2} ⎥ = ⎢ 0 ⎥.   (B.6)
⎣ (1 − p_1)²   (1 − p_2)²   (1 − p_3)²  ⎦ ⎣ r_{1,3} ⎦   ⎣ 0 ⎦

Observe that the above system of equations employs a Vandermonde matrix, which is nonsingular when the poles are distinct. Since it has already been established that at least one of the residues is nonzero, it follows that (B.6) has no solutions. In a similar way one can show that (B.1) does not hold for the remaining cases of pole multiplicities. It follows that the stationary point θ = θo is unique in Ωs.

References

[1] P. Tichavský, P. Händel, Recursive estimation of frequencies and frequency rates of multiple cisoids in noise, Signal Process. 58 (1997) 117–129.
[2] A. Nehorai, A minimal parameter adaptive notch filter with constrained poles and zeros, IEEE Trans. Acoust. Speech Signal Process. 33 (1985) 983–996.
[3] P. Tichavský, P. Händel, Two algorithms for adaptive retrieval of slowly time-varying multiple cisoids in noise, IEEE Trans. Signal Process. 43 (1995) 1116–1127.
[4] P. Regalia, A complex adaptive notch filter, IEEE Signal Process. Lett. 17 (2010) 937–940.
[5] M. Ferdjallah, Adaptive digital notch filter design on the unit circle for the removal of powerline noise from biomedical signals, IEEE Trans. Biomed. Eng. 41 (1994) 529–536.
[6] M. Tarek, S. Mekhilef, N. Rahim, Application of adaptive notch filter for harmonics currents estimation, in: Proc. International Power Engineering Conference (IPEC 2007), 2007.
[7] A. Yazdekhasti, M. Mojiri, A method for harmonic extraction from power systems signals based on adaptive notch filter, Adv. Comput. Math. Appl. 1 (2012) 40–46.
[8] L. Wang, D. Zhang, C. Zhang, Z. Lu, A high performance PLL for polluted utility grid based on cascade adaptive notch filters, in: Proc. 7th International Power Electronics and Motion Control Conference (IPEMC), 2012, pp. 615–619.
[9] M. Bodson, S. Douglas, Adaptive algorithms for the rejection of sinusoidal disturbances with unknown frequency, Automatica 33 (1997) 2213–2221.
[10] M. Niedźwiecki, P. Kaczmarek, Generalized adaptive notch filter with a self-optimization capability, IEEE Trans. Signal Process. 54 (2006) 4185–4193.
[11] H. Hajimolahoseini, M.R. Taban, H. Soltanian-Zadeh, Extended Kalman filter frequency tracker for nonstationary harmonic signals, Measurement 45 (2012) 126–132.
[12] M. Niedźwiecki, Statistically efficient smoothing algorithm for time-varying frequency estimation, IEEE Trans. Signal Process. 56 (2008) 3846–3854.
[13] M. Niedźwiecki, P. Kaczmarek, Tracking analysis of a generalized adaptive notch filter, IEEE Trans. Signal Process. 54 (2006) 304–314.
[14] A. Nosan, R. Punchalard, A complex adaptive notch filter using modified gradient algorithm, Signal Process. 92 (2012) 1508–1514.
[15] W. Loetwassana, R. Punchalard, J. Koseeyaporn, P. Wardkein, Unbiased plain gradient algorithm for a second-order adaptive IIR notch filter with constrained poles and zeros, Signal Process. 90 (2010) 2513–2520.
[16] R. Punchalard, Mean square error analysis of unbiased modified plain gradient algorithm for second-order adaptive IIR notch filter, Signal Process. 92 (2012) 2815–2820.
[17] M. Niedźwiecki, M. Meller, New algorithms for adaptive notch smoothing, IEEE Trans. Signal Process. 59 (2011) 2024–2037.
[18] R. Roy, T. Kailath, ESPRIT – estimation of signal parameters via rotational invariance techniques, IEEE Trans. Acoust. Speech Signal Process. 37 (1989) 984–995.
[19] B. La Scala, R. Bitmead, Design of an extended Kalman filter frequency tracker, IEEE Trans. Signal Process. 44 (1996).
[20] G. Einicke, L. White, R. Bitmead, The use of fake algebraic Riccati equations for co-channel demodulation, IEEE Trans. Signal Process. 51 (2003).
[21] S. Bittanti, S. Savaresi, On the parametrization and design of an extended Kalman filter frequency tracker, IEEE Trans. Autom. Control 45 (2000) 1718–1724.
[22] M. Meller, M. Niedźwiecki, Parallel frequency tracking with built-in performance evaluation, Digit. Signal Process. 23 (2013) 845–851.
[23] S. Haykin, Adaptive Filter Theory, Prentice Hall, Englewood Cliffs, NJ, 1996.
[24] H. van Trees, K. Bell, Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking, Wiley, New York, 2007.
[25] A. Benveniste, M. Métivier, P. Priouret, Adaptive Algorithms and Stochastic Approximations, Springer-Verlag, New York, 1990.
[26] B. Anderson, J. Moore, Optimal Filtering, Prentice Hall, Englewood Cliffs, NJ, 1979.
[27] E. Jury, Theory and Application of the Z-Transform Method, Wiley, New York, 1964.
[28] K. Åström, E. Jury, R. Agniel, A numerical method for the evaluation of complex integrals, IEEE Trans. Autom. Control 15 (1970) 468–471.
[29] T. Söderström, P. Stoica, System Identification, Prentice Hall, Englewood Cliffs, NJ, 1988.
[30] R. Punchalard, Arctangent based adaptive algorithm for a complex IIR notch filter for frequency estimation and tracking, Signal Process. 92 (2014) 535–544.
Michał Meller was born in 1983 in Suwałki, Poland. He received his M.Sc. and Ph.D. degrees in Automatic Control from the Gdańsk University of Technology, Gdańsk, Poland, in 2008 and 2010, respectively. In 2008 he joined the Telecommunications Research Institute and, in 2010, the Department of Automatic Control at the Faculty of Electronics, Telecommunications and Computer Science, Gdańsk University of Technology. In 2009 he received the START scholarship from the Foundation for Polish Science and the Innodoktorant scholarship from the Marshal of the Pomorskie Voivodeship. His research interests include adaptive signal processing and control, estimation, disturbance rejection and applications.