Signal Processing 88 (2008) 315–325
www.elsevier.com/locate/sigpro

Direct frequency estimation based adaptive algorithm for a second-order adaptive FIR notch filter

R. Punchalard a,*, A. Lorsawatsiri a, W. Loetwassana a, J. Koseeyaporn b, P. Wardkein a,b, A. Roeksabutr a

a Department of Telecommunication Engineering, Mahanakorn University of Technology, Bangkok 10530, Thailand
b Department of Telecommunication Engineering, Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520, Thailand

* Corresponding author. Tel.: +66 29883655 x220; fax: +66 29884040. E-mail address: [email protected] (R. Punchalard).

Received 12 March 2007; received in revised form 20 July 2007; accepted 2 August 2007
Available online 15 August 2007
Abstract

This work deals with the problem of estimating the frequency of a sinusoidal signal corrupted by broad-band noise. A direct frequency estimation based adaptive algorithm for a second-order adaptive finite impulse response (FIR) notch filter (AFNF) is proposed. The proposed algorithm employs a bias removal technique to eliminate the bias in the estimated parameter. The performance, including the rate of convergence and the mean square error (MSE), can be controlled by a single parameter, the step size. Moreover, the proposed filter is simple to implement and suitable for real-time applications. In addition, the difference equations for the convergence in the mean and in the mean square, as well as closed-form expressions for the steady-state estimation bias and MSE, are derived. Finally, simulation results are provided to confirm the theoretical analysis.
© 2007 Elsevier B.V. All rights reserved.

Keywords: FIR notch filter; Gradient algorithm; Unbiased
1. Introduction

Finding the best filter structure and adaptive algorithm for estimating the frequency of single or multiple sinusoidal signals in broad-band noise remains an actively studied problem. Adaptive infinite impulse response (IIR) notch filters (ANFs) are extensively employed because of their low complexity. Generally, only one section of a second-order ANF is sufficient for many applications, such as radar, sonar, telecommunication systems, biomedical engineering, and so on. Many ANFs have been proposed and analyzed [1–16]. Normally, they can be classified into three types: the second-order IIR notch filter with constrained poles and zeros [1], the bilinear second-order IIR notch filter [2], and the IIR lattice notch filter [3]. In addition, So [14] has proposed a computationally efficient adaptive finite impulse response (FIR) notch filter (AFIR). The method adopted in [14] is computationally attractive because only seven multiplications, four additions, and two look-up operations are required for each iteration.
It is noted that the first type of ANF and the AFIR are of interest and are the focus of this paper. In parallel, numerous adaptive algorithms have been developed for these IIR and FIR notch filters [1–16], such as the indirect lattice algorithm (ILA) [6], the indirect sign algorithm (ISA) [2], the indirect plain gradient (IPG) algorithm [13], the indirect p-power (IPP) algorithm [7], the indirect memoryless nonlinear gradient (IMNG) algorithm [11], the direct plain gradient (DPG) algorithm [15], and the direct frequency estimation (DFE) algorithm for the AFIR [14]. All of the mentioned algorithms, except for the ILA, are based primarily on steepest-descent optimization, i.e., they are gradient-based adaptive algorithms. The ILA provides good convergence properties but has high computational complexity, whereas the gradient-based adaptive algorithms suffer degraded convergence rates when their optimum solutions are far from the initial values of adaptation. For the DPG and the DFE, the notch filter is parameterized with the frequency parameter $\hat{\omega}_0$ rather than the filter parameter $a$ ($a = -2\cos\hat{\omega}_0$). This parameterization is well suited to frequency estimation because no frequency conversion is required. The gradient-based adaptive algorithms employed by the ANFs not only have slow convergence speeds but also produce biased estimates of the desired parameters. The bias can be reduced by setting the pole radius close to the unit circle, but this setting may cause the ANFs to become unstable. It seems that the problem of bias cannot be avoided if gradient algorithms are employed. For the DFE, although it provides an unbiased frequency estimate, the input noise variance must be available. This is the main drawback of the DFE, since the input noise variance is unknown in real-time applications. As a result, the DFE is also not a good candidate.

In this paper, a constrained adaptive FIR notch filter (AFNF) and an unbiased DFE gradient algorithm are proposed and theoretically analyzed. The proposed adaptive algorithm is a modification of the DFE; the difference is that knowledge of the input noise variance is not required. Thus the proposed algorithm is more efficient and suitable for real-time applications. The FIR section of the constrained ANF [1] is adopted, where the filter parameter $a$ is replaced with $-2\cos\hat{\omega}_0$ [15] and $\hat{\omega}_0$ is adjusted by the proposed algorithm. Normally, using only the FIR section of the ANF [1] to estimate the frequency of a sinusoid leads to a biased estimate of the frequency variable because of the input noise variance [16]. To circumvent this problem, a bias removal technique is introduced. Since the MSE of the proposed filter is a quadratic function with a single global minimum and no local minima, the derivation of the gradient-based algorithm is greatly simplified. Consequently, the algorithm converges more rapidly to the optimum solution than the conventional direct ANF (DANF) [15]. In addition, the performance of the filter depends only on the step size parameter and can therefore be easily controlled and predicted. Furthermore, this work derives the difference equations for the convergence in the mean and in the mean square, from which closed-form expressions for the estimation bias and MSE are determined. Finally, results obtained from computer simulation are provided to support the theoretical analysis.

The work is outlined as follows. In Section 2, the proposed technique is introduced. The steady-state analysis of the proposed filter, i.e., the difference equations for the convergence in the mean and in the mean square and the closed-form expressions for the estimation bias and MSE, is given in Section 3. Next, the results are demonstrated and discussed in Section 4. Finally, Section 5 contains some concluding remarks.

2. Proposed technique

2.1. Direct adaptive FIR notch filter

As has been mentioned, the proposed direct adaptive FIR notch filter is modified from [15]; its transfer function is given by

$H(z, \hat{\omega}_0) = 1 - 2\cos\hat{\omega}_0\, z^{-1} + z^{-2}$,  (1)

where $\hat{\omega}_0$ is the frequency variable to be adapted. It is assumed that the proposed filter is excited by the input $x(k)$, which is of the form

$x(k) = A\cos(\omega_0 k + \theta) + v(k)$,  (2)

where $A$, $\omega_0$, $\theta$, and $v(k)$ are the signal amplitude, the signal frequency, the phase, and additive white Gaussian noise with zero mean and variance $\sigma_v^2$, respectively.
The corresponding output signal $e(k)$ produced by Eq. (1) can be expressed as

$e(k) = x(k) - 2\cos\hat{\omega}_0\, x(k-1) + x(k-2)$.  (3)

For the conventional gradient-based adaptive algorithm, the frequency variable $\hat{\omega}_0$ can be adjusted by the following relationship:

$\hat{\omega}_0(k+1) = \hat{\omega}_0(k) - \dfrac{\mu}{2}\dfrac{\partial e^2(k)}{\partial \hat{\omega}_0(k)} = \hat{\omega}_0(k) - \mu e(k) s(k)$,  (4)

where $\mu$ is the step size parameter, which is generally a positive real number, and $s(k)$ is the gradient of $e(k)$ with respect to $\hat{\omega}_0$ at $\hat{\omega}_0 = \hat{\omega}_0(k)$, that is,

$s(k) = \dfrac{\partial e(k)}{\partial \hat{\omega}_0(k)} = 2\sin\hat{\omega}_0(k)\, x(k-1)$.  (5)

It is noted that the gradient filter that produces the signal $s(k)$ is defined by

$G(z, \hat{\omega}_0) = 2\sin\hat{\omega}_0\, z^{-1}$.  (6)

Unfortunately, it is well known that the algorithm of Eq. (4) provides a biased estimate of the frequency variable $\hat{\omega}_0$ because of the noise variance $\sigma_v^2$. To overcome this drawback, the adaptive algorithm of Eq. (4) is theoretically analyzed in the next subsection, and an adaptive algorithm that is capable of removing the bias in the estimated parameter is then introduced.
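To make the conventional update concrete, the following minimal sketch implements Eqs. (3)–(5) together with the update of Eq. (4) in Python. The function name plain_gradient_afnf, the test signal, and the parameter values are illustrative choices for this sketch and are not part of the original description.

```python
import numpy as np

def plain_gradient_afnf(x, mu=1e-4, w0_init=np.pi / 2):
    """Conventional gradient update of Eqs. (3)-(5) with Eq. (4); illustrative sketch."""
    w = w0_init                      # current frequency estimate, rad/sample
    x1 = x2 = 0.0                    # delayed inputs x(k-1), x(k-2)
    track = np.empty(len(x))
    for k, xk in enumerate(x):
        e = xk - 2.0 * np.cos(w) * x1 + x2    # notch output e(k), Eq. (3)
        s = 2.0 * np.sin(w) * x1              # gradient signal s(k), Eq. (5)
        w = w - mu * e * s                    # plain gradient update, Eq. (4)
        x2, x1 = x1, xk
        track[k] = w
    return track

# Illustrative run: unit-amplitude sinusoid at 0.2*pi in white noise (sigma_v = 0.1).
rng = np.random.default_rng(0)
k = np.arange(200_000)
x = np.cos(0.2 * np.pi * k) + rng.normal(scale=0.1, size=k.size)
print(plain_gradient_afnf(x)[-1] / np.pi)     # settles close to 0.2, but slightly biased
```

With a moderate noise level the estimate settles near the true frequency, but with the noise-induced bias discussed above (pulled toward $\pi/2$, where the noise contribution to the output power of $H$ is smallest).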
2.2. Convergence in the mean of Eq. (4)

In this subsection, the analysis begins by defining the steady-state expressions for Eqs. (1), (3), (5), and (6). Let us consider Eq. (1), which can be rewritten as

$H(z, \hat{\omega}_0) = (z + z^{-1} - 2\cos\hat{\omega}_0)\, z^{-1}$.  (7)

At steady state, the estimate $\hat{\omega}_0$ is close to the true signal frequency $\omega_0$; it is then found that

$H(e^{j\omega_0}, \hat{\omega}_0) = 2(\cos\omega_0 - \cos\hat{\omega}_0)\, e^{-j\phi} \simeq 2\sin(\omega_0)\,\delta_\omega\, e^{-j\phi}$,  (8)

$e_s(k) = 2A\,\delta_\omega(k)\sin\hat{\omega}_0(k)\cos(\omega_0 k + \theta - \phi) + v_1(k)$,  (9)

$G(e^{j\omega_0}, \hat{\omega}_0) = 2\sin\hat{\omega}_0\, e^{-j\omega_0}$,  (10)

$s_s(k) = 2A\sin\hat{\omega}_0(k)\cos(\omega_0 k + \theta - \omega_0) + v_2(k)$.  (11)

In Eqs. (8)–(11), the subscript $s$ stands for steady state, $\delta_\omega = \hat{\omega}_0 - \omega_0$ is the estimation error, which at time instant $k$ is replaced by $\delta_\omega(k) = \hat{\omega}_0(k) - \omega_0$, and $\phi$ is the phase of $H(z, \hat{\omega}_0)$, which is defined by

$\phi = \begin{cases} \omega_0, & \omega_0 \le \pi/2, \\ -\pi + \omega_0, & \omega_0 > \pi/2, \end{cases}$  (12)

and $v_1(k)$ and $v_2(k)$ are the signals produced by $H(z, \hat{\omega}_0)$ and $G(z, \hat{\omega}_0)$, respectively, when excited with $v(k)$. They are assumed to be white Gaussian noises with zero mean and variances $\sigma_{v_1}^2$ and $\sigma_{v_2}^2$, respectively. Using Parseval's relation [17], these variances are found to be

$\sigma_{v_1}^2 = E[v_1^2(k)] = \dfrac{\sigma_v^2}{2\pi}\displaystyle\int_{-\pi}^{\pi} |H(e^{j\omega}, \hat{\omega}_0)|^2\, d\omega = 2\sigma_v^2(1 + 2\cos^2\omega_0)$  (13)

and

$\sigma_{v_2}^2 = E[v_2^2(k)] = \dfrac{\sigma_v^2}{2\pi}\displaystyle\int_{-\pi}^{\pi} |G(e^{j\omega}, \hat{\omega}_0)|^2\, d\omega = 4\sigma_v^2\sin^2\omega_0$.  (14)

It is also noted from Eq. (8) that the approximation $(\cos\omega_0 - \cos\hat{\omega}_0) \simeq \sin(\omega_0)\,\delta_\omega$ is valid only for $\hat{\omega}_0 \simeq \omega_0$ (i.e., only at steady state). By substituting the relationship $\hat{\omega}_0(k) = \delta_\omega(k) + \omega_0$ into Eq. (4), one obtains

$\delta_\omega(k+1) = \delta_\omega(k) - \mu e_s(k) s_s(k)$.  (15)

Using Eqs. (8)–(14) in Eq. (15) and averaging the terms containing the sine and cosine waves, the following expression is obtained:

$E[\delta_\omega(k+1)] = E[\delta_\omega(k)] - \mu E[e_s(k) s_s(k)] \simeq E[\delta_\omega(k)] - \mu\, 2A^2\sin^2\omega_0\cos(\omega_0 - \phi)\, E[\delta_\omega(k)] - \mu R_{1,2} = (1 - \mu c_{11})\, E[\delta_\omega(k)] - \mu R_{1,2}$,  (16)

where

$c_{11} = 2A^2\sin^2\omega_0\cos(\omega_0 - \phi)$  (17)

and $R_{1,2}$ is the correlation between the signals $v_1(k)$ and $v_2(k)$, which is determined by using Parseval's relation as

$R_{1,2} = E[v_1(k) v_2(k)] = \dfrac{\sigma_v^2}{2\pi j}\displaystyle\oint_C H(z, \hat{\omega}_0)\, G(1/z, \hat{\omega}_0)\, z^{-1}\, dz = -2\sigma_v^2\sin 2\omega_0$.  (18)

It is noted that the derivation of Eq. (18) is easily obtained by using the theory of residues.
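The second-order statistics of Eqs. (13), (14) and (18) can be checked numerically by freezing the filters at $\hat{\omega}_0 = \omega_0$ and passing white Gaussian noise through $H$ and $G$. The short sketch below does this; the chosen values of $\omega_0$ and $\sigma_v^2$ are arbitrary and for illustration only.

```python
import numpy as np

# Numerical check of Eqs. (13), (14) and (18), with the filters frozen at w_hat = w0.
rng = np.random.default_rng(0)
w0, var_v = 0.45 * np.pi, 0.3                    # illustrative frequency and noise variance
v = rng.normal(scale=np.sqrt(var_v), size=500_000)

h = np.array([1.0, -2.0 * np.cos(w0), 1.0])      # impulse response of H(z), Eq. (1)
g = np.array([0.0, 2.0 * np.sin(w0), 0.0])       # impulse response of G(z), Eq. (6)
v1 = np.convolve(v, h, mode="same")              # v1(k): v(k) filtered by H
v2 = np.convolve(v, g, mode="same")              # v2(k): v(k) filtered by G

print(np.var(v1), 2 * var_v * (1 + 2 * np.cos(w0) ** 2))   # Eq. (13)
print(np.var(v2), 4 * var_v * np.sin(w0) ** 2)             # Eq. (14)
print(np.mean(v1 * v2), -2 * var_v * np.sin(2 * w0))       # Eq. (18)
```

Each pair of printed values should agree to within the Monte-Carlo error of the sample estimates.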
The expression shown in Eq. (16) is the difference equation for the convergence in the mean of Eq. (4). It is observed that the term $R_{1,2}$ is present and is a function of the observation noise variance $\sigma_v^2$. This term therefore makes the adaptive algorithm of Eq. (4) produce a biased estimate of the frequency variable $\hat{\omega}_0(k)$. Also, it should be mentioned that the following assumptions are employed to obtain Eq. (16):

(A1) $\delta_\omega(k)$ and $v_1(k)$, and $\delta_\omega(k)$ and $v_2(k)$, are uncorrelated with each other;
(A2) $v_1(k)$ and $v_2(k)$ are jointly Gaussian distributed;
(A3) the sine and cosine waves that appear in the derivation of Eq. (16) have zero mean and variance 0.5.

To overcome the problem of the biased estimation, the bias removal technique is given in the next subsection.

2.3. Bias removal technique

As previously mentioned, the technique for removing the bias produced by Eq. (4) is discussed in this subsection. First, let us consider the term $R_{1,2}$ given in Eq. (18). At time instant $k$, this term can be rewritten as

$R_{1,2}(k) = -2\sigma_v^2(k)\sin 2\hat{\omega}_0(k)$.  (19)

It is observed that Eq. (19) is similar to the last term on the right-hand side of Eq. (5) in [14]. However, in [14] the input noise variance $\sigma_v^2$ must be known a priori, whereas in this paper its estimated value $\sigma_v^2(k)$ is employed instead. At steady state, it is easy to see that

$E[x(k) e_s(k)] = E\big[\big(A\cos(\omega_0 k + \theta) + v(k)\big)\big(A\,|H(e^{j\omega_0}, \hat{\omega}_0)|\cos(\omega_0 k + \theta - \phi) + v_1(k)\big)\big]$.  (20)

By assuming that $|H(e^{j\omega_0}, \hat{\omega}_0)| \simeq 0$ at a stationary point, Eq. (20) thus becomes

$E[x(k) e_s(k)] \simeq E[v(k) v_1(k)] \simeq \sigma_v^2$.  (21)

Note that the approximation made in Eq. (21) is good for a narrow bandwidth of $H(z, \hat{\omega}_0)$, implying that the additive noise at the filter input passes the notch filter with little change in energy. As can be seen, Eq. (21) approximates the input noise variance $\sigma_v^2$; thus, $\sigma_v^2(k) = x(k) e(k)$ is an estimate of the input noise variance. Hence, Eq. (19) becomes

$R_{1,2}(k) = -2 x(k) e(k)\sin 2\hat{\omega}_0(k)$.  (22)

By adding Eq. (22) to Eq. (4), the adaptive algorithm with bias removal capability, and without knowledge of the additive noise variance, is obtained as follows:

$\hat{\omega}_0(k+1) = \hat{\omega}_0(k) - \mu e(k) s(k) + \mu R_{1,2}(k)$.  (23)

It can be said that when the estimate of the noise variance $\sigma_v^2(k)$ is equal or close to the true noise variance $\sigma_v^2$, the adaptive algorithm of Eq. (23) can be shown to be unbiased. In the next section, the statistical properties of the proposed adaptive algorithm of Eq. (23) are studied.

3. Steady-state analysis

In this section, the performance of Eq. (23) is theoretically studied in terms of the difference equations for the convergence in the mean and in the mean square. It is noted that the analytical framework in [16] is adopted in this work.

3.1. Difference equation for the convergence in the mean

By substituting the relationship $\hat{\omega}_0(k) = \delta_\omega(k) + \omega_0$ and Eqs. (8)–(14) into Eq. (23), where the fluctuations in the coefficient error are assumed to be small so that the frequency parameter $\hat{\omega}_0(k)$ contained in Eq. (19) can be replaced by its mean value $\omega_0$, using assumptions A1–A3 of Section 2.2, and finally averaging the terms containing the sine and cosine waves, we obtain

$E[\delta_\omega(k+1)] = E[\delta_\omega(k)] - \mu E[e_s(k) s_s(k)] + \mu E[R_{1,2}(k)] \simeq \big(1 - \mu\, 2A^2\sin^2\omega_0\cos(\omega_0 - \phi) - \mu\, 2A^2\sin 2\omega_0\sin\omega_0\cos\phi\big)\, E[\delta_\omega(k)] = (1 - \mu[c_{11} + c_{12}])\, E[\delta_\omega(k)]$,  (24)

where

$c_{12} = 2A^2\sin 2\omega_0\sin\omega_0\cos\phi$.  (25)

Eq. (24) is the difference equation for the convergence in the mean of the proposed unbiased adaptive algorithm. As can be seen, the term $R_{1,2}$ is removed, and hence Eq. (23) provides an unbiased estimate of the frequency variable $\hat{\omega}_0(k)$. It is noted that the term $\mu R_{1,2}(k)$, which additionally appears in Eq. (23), makes the adaptive algorithm nonlinear and complicates the analysis; for the sake of simplicity, the above assumptions are employed.
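Before turning to the mean-square analysis, the complete proposed update of Eqs. (3), (5), (22) and (23) can be summarized in the following minimal sketch. The function and variable names (afnf_estimate, mu, and so on) are illustrative, and the example setting merely mirrors the SNR = 2 dB, $\omega_0 = 0.45\pi$ case used later in Figs. 2 and 3.

```python
import numpy as np

def afnf_estimate(x, mu=1e-4, w0_init=np.pi / 2):
    """Proposed AFNF update with bias removal, per Eqs. (3), (5), (22), (23); sketch."""
    w = w0_init
    x1 = x2 = 0.0                                # x(k-1), x(k-2)
    track = np.empty(len(x))
    for k, xk in enumerate(x):
        e = xk - 2.0 * np.cos(w) * x1 + x2       # notch output e(k), Eq. (3)
        s = 2.0 * np.sin(w) * x1                 # gradient signal s(k), Eq. (5)
        r12 = -2.0 * xk * e * np.sin(2.0 * w)    # instantaneous R_{1,2}(k), Eq. (22)
        w = w - mu * e * s + mu * r12            # bias-compensated update, Eq. (23)
        x2, x1 = x1, xk
        track[k] = w
    return track

# Illustrative run at SNR = 2 dB, omega_0 = 0.45*pi (the setting of Figs. 2 and 3).
rng = np.random.default_rng(1)
k = np.arange(80_000)
snr_db, A = 2.0, 1.0
sigma_v = A / np.sqrt(2.0 * 10.0 ** (snr_db / 10.0))
x = A * np.cos(0.45 * np.pi * k + 0.3 * np.pi) + rng.normal(scale=sigma_v, size=k.size)
print(afnf_estimate(x)[-1] / np.pi)              # expected to settle near 0.45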
3.2. Difference equation for the convergence in the mean square

By using the relationship $\hat{\omega}_0(k) = \delta_\omega(k) + \omega_0$ in Eq. (23), squaring, and averaging, one obtains

$E[\delta_\omega^2(k+1)] = E\big[\{\delta_\omega(k) - \mu e_s(k) s_s(k) + \mu R_{1,2}(k)\}^2\big] = E[\delta_\omega^2(k)] + M_1(k) + M_2(k) + 2\big(N_1(k) - N_2(k) - N_3(k)\big)$.  (26)

After long and careful calculations, $M_1(k)$, $M_2(k)$, $N_1(k)$, $N_2(k)$ and $N_3(k)$ are, respectively, derived as follows:

$M_1(k) = E\big[\{\mu e_s(k) s_s(k)\}^2\big] \simeq \mu^2 4A^4\sin^4\omega_0\big(1 + \tfrac{1}{2}\cos(2[\omega_0 - \phi])\big) E[\delta_\omega^2(k)] + \mu^2 2A^2\sigma_{v_1 2}^2\sin^2\omega_0\, E[\delta_\omega^2(k)] + \mu^2 8A^2 R_{1,2}\sin^2\omega_0\cos(\omega_0 - \phi)\, E[\delta_\omega(k)] + \mu^2\big(2A^2\sigma_{v_1}^2\sin^2\omega_0 + \sigma_{v_1}^2\sigma_{v_2}^2 + 2R_{1,2}^2\big)$,  (27)

$M_2(k) = E\big[\{2\mu\sin 2\omega_0\, x(k) e_s(k)\}^2\big] \simeq \mu^2 4A^4\sin^2 2\omega_0\sin^2\omega_0\big(1 + \tfrac{1}{2}\cos 2\phi\big) E[\delta_\omega^2(k)] + \mu^2 8A^2\sigma_v^2\sin^2 2\omega_0\sin^2\omega_0\, E[\delta_\omega^2(k)] + \mu^2 16A^2 R_{v,1}\sin^2 2\omega_0\sin\omega_0\cos\phi\, E[\delta_\omega(k)] + \mu^2 4\sin^2 2\omega_0\big(\tfrac{1}{2}A^2\sigma_{v_1}^2 + \sigma_v^2\sigma_{v_1}^2 + 2R_{v,1}^2\big)$,  (28)

$N_1(k) = E\big[2\mu^2\sin 2\omega_0\, x(k) e_s^2(k) s_s(k)\big] \simeq \mu^2 4A^4\sin 2\omega_0\sin^3\omega_0\cos\omega_0\, E[\delta_\omega^2(k)] + \mu^2 4A^2 R_{1,2}\sin 2\omega_0\sin\omega_0\cos\phi\, E[\delta_\omega(k)] + \mu^2 8A^2\sigma_v^2\sin 2\omega_0\sin^2\omega_0\cos(\omega_0 - \phi)\, E[\delta_\omega(k)] + \mu^2 2A^2\sigma_{v_1}^2\sin 2\omega_0\sin\omega_0\cos\phi\, E[\delta_\omega(k)] + \mu^2 4\sigma_v^2 R_{1,2}\sin 2\omega_0$,  (29)

$N_2(k) = E\big[\mu\,\delta_\omega(k)\, e_s(k) s_s(k)\big] \simeq \mu\, 2A^2\sin^2\omega_0\cos(\omega_0 - \phi)\, E[\delta_\omega^2(k)] + \mu R_{1,2}\, E[\delta_\omega(k)]$,  (30)

$N_3(k) = E\big[2\mu\sin 2\omega_0\,\delta_\omega(k)\, x(k) e_s(k)\big] \simeq \mu\, 2A^2\sin\omega_0\sin 2\omega_0\cos\phi\, E[\delta_\omega^2(k)] + \mu\, 2\sigma_v^2\sin 2\omega_0\, E[\delta_\omega(k)]$,  (31)

where $R_{v,1}$ is the correlation between the signals $v(k)$ and $v_1(k)$, which is calculated as

$R_{v,1} = E[v(k) v_1(k)] \simeq \sigma_v^2$  (32)

(cf. Eq. (21)). By substituting Eqs. (27)–(31) back into Eq. (26) and arranging the results, one has

$E[\delta_\omega^2(k+1)] = (1 + \mu^2 c_{21} - \mu c_{22})\, E[\delta_\omega^2(k)] + (\mu^2 c_{23} - \mu c_{24})\, E[\delta_\omega(k)] + \mu^2 c_{25}$,  (33)

where

$c_{21} = 4A^4\sin^4\omega_0\big(1 + \tfrac{1}{2}\cos(2[\omega_0 - \phi])\big) + 8A^2\sigma_v^2\sin^2 2\omega_0\sin^2\omega_0 + 2A^2\sigma_{v_2}^2\sin^2\omega_0 + 4A^4\sin^2 2\omega_0\sin^2\omega_0\big(1 + \tfrac{1}{2}\cos 2\phi\big) + 8A^4\sin 2\omega_0\sin^3\omega_0\cos\omega_0$,  (34)

$c_{22} = 4A^2\sin^2\omega_0\cos(\omega_0 - \phi) + 4A^2\sin\omega_0\sin 2\omega_0\cos\phi$,  (35)

$c_{23} = 8A^2 R_{1,2}\sin^2\omega_0\cos(\omega_0 - \phi) + 16A^2 R_{v,1}\sin^2 2\omega_0\sin\omega_0\cos\phi + 8A^2 R_{1,2}\sin 2\omega_0\sin\omega_0\cos\phi + 16A^2\sigma_v^2\sin 2\omega_0\sin^2\omega_0\cos(\omega_0 - \phi) + 4A^2\sigma_{v_1}^2\sin 2\omega_0\sin\omega_0\cos\phi$,  (36)

$c_{24} = 2R_{1,2} + 4\sigma_v^2\sin 2\omega_0$,  (37)

$c_{25} = 2A^2\sigma_{v_1}^2\sin^2\omega_0 + \sigma_{v_1}^2\sigma_{v_2}^2 + 2R_{1,2}^2 + 4\sin^2 2\omega_0\big(\tfrac{1}{2}A^2\sigma_{v_1}^2 + \sigma_v^2\sigma_{v_1}^2 + 2R_{v,1}^2\big) + 8\sigma_v^2 R_{1,2}\sin 2\omega_0$.  (38)

Note that the difference equation for the convergence in the mean square, Eq. (33), is obtained by assuming that:

(A1) $v(k)$ and $v_1(k)$, and $v_1(k)$ and $v_2(k)$, are jointly Gaussian distributed;
(A2) $\delta_\omega(k)$ and $v(k)$, $\delta_\omega(k)$ and $v_1(k)$, and $\delta_\omega(k)$ and $v_2(k)$ are uncorrelated with each other;
(A3) the terms $\delta_\omega^m(k)$ ($m \ge 3$) are ignored for analytical simplicity;
(A4) the sine and cosine waves appearing in the derivations of Eqs. (27)–(31) have zero mean and variance 0.5.

3.3. Steady-state estimation bias and MSE

At steady state, by using the relationships $E[\delta_\omega(k+1)]|_{k\to\infty} = E[\delta_\omega(k)]|_{k\to\infty} = E[\delta_\omega(\infty)]$ and $E[\delta_\omega^2(k+1)]|_{k\to\infty} = E[\delta_\omega^2(k)]|_{k\to\infty} = E[\delta_\omega^2(\infty)]$, the difference equations for the convergence in the mean, Eq. (24), and in the mean square, Eq. (33), are, respectively, reduced to

$E[\delta_\omega(\infty)] = (1 - \mu[c_{11} + c_{12}])\, E[\delta_\omega(\infty)]$,  (39)

$E[\delta_\omega^2(\infty)] = \dfrac{\mu c_{25}}{c_{22} - \mu c_{21}}$.  (40)

Eq. (39) is a nonlinear equation whose solution is difficult to obtain in general. It is found, however, that $E[\delta_\omega(k)] = 0$ is a stationary point of Eq. (24), indicating that one of the solutions obtained by the proposed algorithm is the correct one. As a result, after convergence, the adaptive algorithm of Eq. (23) provides an unbiased estimate of the adapted parameter. Note that the steady-state estimation MSE given by Eq. (40) is obtained directly from Eq. (33) by neglecting the term containing $E[\delta_\omega(k)]$; this term can be neglected because $E[\delta_\omega(k)] = E[\delta_\omega(\infty)] = 0$ at a stationary point. Based on the two steady-state equations derived above, the following conclusions are drawn:

(C1) The proposed algorithm is unbiased at a stationary point.
(C2) The estimation MSE is a linear function of the step size parameter $\mu$: the larger the step size, the higher the MSE.
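As an illustration of how Eq. (40) can be evaluated, the sketch below collects the second-order statistics of Eqs. (13), (14), (18) and (32) and the constants of Eqs. (34), (35) and (38), and returns the theoretical steady-state MSE. It assumes $\omega_0 \le \pi/2$ so that $\phi = \omega_0$ in Eq. (12); the function name and the example setting are illustrative only.

```python
import numpy as np

def afnf_theoretical_mse(A, w0, var_v, mu):
    """Steady-state MSE of Eq. (40) from c21, c22, c25; sketch assuming phi = w0."""
    phi = w0                                          # Eq. (12) for w0 <= pi/2
    s, s2 = np.sin(w0), np.sin(2 * w0)
    var_v1 = 2 * var_v * (1 + 2 * np.cos(w0) ** 2)    # Eq. (13)
    var_v2 = 4 * var_v * s ** 2                       # Eq. (14)
    R12 = -2 * var_v * s2                             # Eq. (18)
    Rv1 = var_v                                       # Eq. (32)
    c21 = (4 * A**4 * s**4 * (1 + 0.5 * np.cos(2 * (w0 - phi)))
           + 8 * A**2 * var_v * s2**2 * s**2
           + 2 * A**2 * var_v2 * s**2
           + 4 * A**4 * s2**2 * s**2 * (1 + 0.5 * np.cos(2 * phi))
           + 8 * A**4 * s2 * s**3 * np.cos(w0))                       # Eq. (34)
    c22 = (4 * A**2 * s**2 * np.cos(w0 - phi)
           + 4 * A**2 * s * s2 * np.cos(phi))                         # Eq. (35)
    c25 = (2 * A**2 * var_v1 * s**2 + var_v1 * var_v2 + 2 * R12**2
           + 4 * s2**2 * (0.5 * A**2 * var_v1 + var_v * var_v1 + 2 * Rv1**2)
           + 8 * var_v * R12 * s2)                                    # Eq. (38)
    return mu * c25 / (c22 - mu * c21)                                # Eq. (40)

# Setting of Fig. 3: A = 1, omega_0 = 0.45*pi, SNR = 2 dB, mu = 1e-4.
var_v = 1.0 / (2.0 * 10 ** (2.0 / 10.0))
print(10 * np.log10(afnf_theoretical_mse(1.0, 0.45 * np.pi, var_v, 1e-4)))  # MSE in dB
```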
4. Results

In this section, the performance of the proposed AFNF is compared with that of both the DFE [14] and the DANF [15] in the following subsections.

4.1. Convergence to the optimum solutions

To obtain comparable results, the initial value $\hat{\omega}_0(0)$ and the step size $\mu$ are identical for all examined algorithms. In addition, in order to reduce the effect of the bias while keeping a good convergence speed for the DANF, its pole radius is set close to unity ($r = 0.9$). A noisy sinusoid with fixed amplitude $A$ and fixed phase $\theta$, but with frequency varied as $\omega_0 = 0.1\pi$, $0.2\pi$, and $0.3\pi$, is applied to the tested filters. The frequency parameter $\hat{\omega}_0(k)$ of each algorithm is monitored and shown in Fig. 1. As can be seen, the convergence rate of the DANF degrades when the optimum solution is far from the initial value, whereas changing the signal frequency does not affect the rate of convergence of the proposed AFNF. Moreover, the convergence behaviour of the proposed AFNF is superior to that of the DFE. This is because the estimate of the input noise variance, $\sigma_v^2(k) = x(k)e(k)$, is large at the early stage of adaptation owing to the sinusoidal component, which improves the convergence. Note that, for the DANF, the smaller the pole radius $r$, the faster the convergence speed, but the more severe the problem of bias becomes (see [15]).

4.2. Convergence in the mean and mean square

In this subsection, the convergence in the mean and in the mean square of the proposed AFNF (simulation and theory), the DANF (simulation) and the DFE (simulation) are demonstrated. For each algorithm, 100 independent runs are ensemble averaged to determine the simulated bias. The comparison of the obtained results is shown in Figs. 2 and 3 for the estimation bias and MSE, respectively. It is seen in Fig. 2 that the proposed AFNF and the DFE provide unbiased estimates of the frequency parameter.
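The ensemble-averaging procedure described above can be sketched as follows. The code assumes the afnf_estimate routine from the sketch at the end of Section 3.1 is available; the random phase drawn per run and the remaining names are illustrative choices, not part of the paper's specification.

```python
import numpy as np

# Ensemble-averaged bias and MSE over independent runs, mirroring Section 4.2
# (100 runs). Reuses afnf_estimate() from the sketch at the end of Section 3.1.
def ensemble_bias_mse(w0, snr_db, mu=1e-4, n_runs=100, n_samples=80_000, A=1.0):
    rng = np.random.default_rng(2)
    sigma_v = A / np.sqrt(2.0 * 10.0 ** (snr_db / 10.0))
    k = np.arange(n_samples)
    err = np.empty((n_runs, n_samples))
    for r in range(n_runs):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x = A * np.cos(w0 * k + theta) + rng.normal(scale=sigma_v, size=n_samples)
        err[r] = afnf_estimate(x, mu=mu) - w0          # delta_omega(k) for this run
    bias = err.mean(axis=0)                            # estimate of E[delta_omega(k)]
    mse = (err ** 2).mean(axis=0)                      # estimate of E[delta_omega^2(k)]
    return bias, mse

bias, mse = ensemble_bias_mse(w0=0.45 * np.pi, snr_db=2.0)
print(bias[-1], 10 * np.log10(mse[-1]))                # steady-state bias and MSE (dB)
```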
Fig. 1. Evolutions of the filter parameter $\hat{\omega}_0(k)$ obtained from the AFNF, the DFE and the DANF ($A = 1$, $\theta = 0.3\pi$, $\mu = 1\times 10^{-4}$, SNR = 10 dB, $\hat{\omega}_0(0) = \pi/2$, $\omega_0 = 0.1\pi$, $0.2\pi$, $0.3\pi$). Vertical axis: frequency estimate ($\times\pi$ rad/sample); horizontal axis: number of iterations (up to $6\times 10^5$).
Fig. 2. Estimation bias obtained by the AFNF, the DFE and the DANF ($A = 1$, $\hat{\omega}_0(0) = \pi/2$, $\omega_0 = 0.45\pi$, SNR = 2 dB, $\mu = 1\times 10^{-4}$, $r = 0.9$; 100 independent runs). Vertical axis: bias estimate; horizontal axis: number of iterations (up to $8\times 10^4$); an inset zooms in near iteration $8\times 10^4$.
In addition, the simulated bias of the AFNF agrees well with its analytical value, and with that of the DFE. In contrast, the simulated result for the DANF reveals that it produces a biased estimate of the adapted parameter. Note that the dependence of the bias produced by the DANF on the pole radius $r$, the step size $\mu$, the input noise variance $\sigma_v^2$ and the signal frequency $\omega_0$ can be found in [15].
Fig. 3. Estimation MSEs obtained by the AFNF, the DFE and the DANF ($A = 1$, $\hat{\omega}_0(0) = \pi/2$, $\omega_0 = 0.45\pi$, SNR = 2 dB, $\mu = 1\times 10^{-4}$, $r = 0.9$; 100 independent runs). Vertical axis: estimation MSE (dB); horizontal axis: number of iterations (up to $8\times 10^4$).
In Fig. 3, the simulated MSEs obtained from the AFNF and the DFE are almost identical. Both show good agreement with the analytical value for the AFNF, but they are larger than the MSE obtained from the DANF. However, if the signal frequency $\omega_0$ is very far from the initial value $\hat{\omega}_0(0)$, a large step size $\mu$ as well as a pole radius $r$ close to unity are required for the DANF to obtain a fast convergence speed and to reduce the problem of bias, respectively; a large MSE is then obtained for the DANF. Moreover, because the DANF and the AFNF have different filter structures, the remaining comparisons are restricted to the AFNF and the DFE, as follows.

4.3. Steady-state bias and MSE of the AFNF and the DFE

The steady-state estimation bias and MSE obtained by the AFNF (simulation and theory) and the DFE (simulation), as functions of the step size parameter $\mu$, the signal-to-noise ratio (SNR), and the signal frequency $\omega_0$, are depicted in Figs. 4–6, respectively. In Fig. 4, the simulated biases of both algorithms fit the theoretical value of the AFNF well for slow adaptation (small step size), and the simulated and analytical MSEs are in good agreement. It is observed that the MSE is linearly proportional to the step size parameter: the larger the step size, the higher the MSE. The steady-state bias and MSE versus the SNR are shown in Fig. 5. As can be seen, the simulated biases fit the analytical value well when the SNR is large, whereas the simulated and theoretical MSEs are consistent when the SNR is low. Overall, both the analytical bias and the analytical MSE are good estimates of the simulated values. Finally, the comparisons between the simulated and analytical biases and MSEs with respect to the signal frequency are depicted in Fig. 6(a) and (b), respectively. In Fig. 6(a), the simulated biases fluctuate around the theoretical values, and hence it can be said that the proposed AFNF and the DFE are unbiased. As can be seen in Fig. 6(b), the theoretical MSE values of the AFNF outside the signal frequency range $0.4\pi$–$0.6\pi$ differ from their simulated values. In the authors' view, the reason for this behaviour requires further study.
Fig. 4. Comparisons between the analytical steady-state bias and MSE and their simulated values versus the step size, obtained from the AFNF and the DFE ($A = 1$, SNR = 2 dB, $\omega_0 = 0.45\pi$, 100 independent runs): (a) steady-state bias; (b) steady-state MSE (dB); both plotted against $\log_{10}\mu$ from $-4$ to $-2$.
Fig. 5. Comparisons between the analytical steady-state bias and MSE and their simulated values versus the SNR, obtained from the AFNF and the DFE ($A = 1$, $\mu = 1\times 10^{-4}$, $\omega_0 = 0.45\pi$, 100 independent runs): (a) steady-state bias; (b) steady-state MSE (dB); both plotted against SNR from 0 to 10 dB.
Fig. 6. Comparisons between the analytical steady-state bias and MSE and their simulated values versus frequency, obtained from the AFNF and the DFE ($A = 1$, $\mu = 1\times 10^{-4}$, SNR = 2 dB, 100 independent runs): (a) steady-state bias; (b) steady-state MSE (dB); both plotted against normalized frequency ($\times\pi$ rad/sample) from 0 to 1.
It is noticed, however, that the simulated MSE is a nonlinear function of the signal frequency, whereas the analytical value is obtained by assuming that the last term on the right-hand side of Eq. (26) is independent of the estimation error $\delta_\omega(k)$; this assumption makes Eq. (26) linear and analytically tractable. Nevertheless, the theoretical MSE of the AFNF is also a good estimate of the simulated MSE of the DFE, indicating that the analytical framework presented in this work can be used to analyze the performance of the DFE. From Figs. 1–6, the following remarks are made:

(R1) The DANF is inherently biased.
(R2) The AFNF and the DFE provide almost the same performance. The major difference between them is that knowledge of the input noise variance $\sigma_v^2$ is not required for the AFNF. In addition, the simulated MSE values of the AFNF outside the signal frequency range $0.4\pi$–$0.6\pi$ are considerably improved compared with those of the DFE.
(R3) The AFNF is more applicable to real-time applications than the DFE because knowledge of the input noise statistics is not required.
5. Conclusion

A simple second-order adaptive FIR notch filter (AFNF) using a DFE gradient-based adaptive algorithm with bias removal capability, and without knowledge of the additive noise variance, has been proposed in this paper. The bias removal technique uses an estimate of the input noise variance, so the proposed AFNF can remove the bias in the frequency estimate without requiring the true input noise variance. Since only the step size parameter is needed to control its performance, the proposed AFNF is very simple. Compared with the DANF, the AFNF is unbiased and its convergence speed is almost independent of the signal frequency for the same initial value. Difference equations for the convergence in the mean and in the mean square have been established, and the steady-state estimation bias and MSE have then been derived in closed form from these difference equations. The comparisons based on theory as well as simulations have revealed that the AFNF outperforms the DANF in terms of bias and is superior to the DFE in terms of MSE.
Although the analytical results shown in this work are consistent with the simulated results over a wide range of parameters of interest, a theoretical framework that can handle the cases where the simulation does not accord with the analysis is still required and remains a topic for further study.
References

[1] A. Nehorai, A minimal parameter adaptive notch filter with constrained poles and zeros, IEEE Trans. Acoust. Speech Signal Process. ASSP-33 (4) (July 1985) 983–996.
[2] K. Martin, M.T. Sun, Adaptive filters suitable for real-time spectral analysis, IEEE Trans. Circuits Syst. CAS-33 (2) (February 1986) 218–229.
[3] N.I. Cho, C.H. Choi, S.U. Lee, Adaptive line enhancement by using an IIR lattice notch filter, IEEE Trans. Acoust. Speech Signal Process. 37 (4) (April 1989) 585–589.
[4] T. Kwan, K. Martin, Adaptive detection and enhancement of multiple sinusoids using a cascade IIR filter, IEEE Trans. Circuits Syst. 36 (7) (July 1989) 937–947.
[5] J.F. Chicharo, T.S. Ng, Gradient-based adaptive IIR notch filtering for frequency estimation, IEEE Trans. Acoust. Speech Signal Process. 38 (5) (September 1990) 769–777.
[6] N.I. Cho, S.U. Lee, On the adaptive lattice notch filter for the detection of sinusoids, IEEE Trans. Circuits Syst. 40 (7) (July 1993) 405–416.
[7] S.-C. Pei, C.-C. Tseng, Adaptive IIR notch filter based on least mean p-power error criterion, IEEE Trans. Circuits Syst. 40 (8) (August 1993) 525–529.
[8] S.-C. Pei, C.-C. Tseng, A novel structure for cascade form adaptive notch filters, Signal Process. 33 (1993) 95–110.
[9] M.R. Petraglia, S.K. Mitra, J. Szczupak, Adaptive sinusoid detection using IIR notch filters and multirate techniques, IEEE Trans. Circuits Syst. II Analog Digit. Signal Process. 41 (11) (November 1994) 709–717.
[10] M.V. Dragosevic, S.S. Stankovic, An adaptive notch filter with improved tracking properties, IEEE Trans. Signal Process. 43 (9) (September 1995) 2068–2078.
[11] Y. Xiao, Y. Tadokoro, Y. Kobayashi, A new memoryless nonlinear gradient algorithm for a second-order adaptive IIR notch filter and its performance analysis, IEEE Trans. Circuits Syst. II Analog Digit. Signal Process. 45 (4) (April 1998) 462–472.
[12] Y. Xiao, L. Ma, K. Khorasani, A. Ikuta, Statistical performance of the memoryless nonlinear gradient algorithm for the constrained adaptive IIR notch filter, IEEE Trans. Circuits Syst. I 52 (8) (August 2005) 1691–1702.
[13] Y. Xiao, Y. Takeshita, K. Shida, Steady-state analysis of a plain gradient algorithm for a second-order adaptive IIR notch filter with constrained poles and zeros, IEEE Trans. Circuits Syst. II Analog Digit. Signal Process. 48 (7) (July 2001) 733–740.
[14] H.C. So, Adaptive algorithm for direct estimation of sinusoidal frequency, Electron. Lett. 36 (8) (2000) 759–760.
[15] J. Zhou, G. Li, Plain gradient based direct frequency estimation using second-order constrained adaptive IIR notch filter, Electron. Lett. 40 (5) (2004).
[16] R. Punchalard, W. Loetwassana, J. Koseeyaporn, P. Wardkein, Performance analysis of the equation error adaptive IIR notch filter with constrained poles and zeros, in: Proceedings of IEEE ISCIT, 2006.
[17] A.V. Oppenheim, R.W. Schafer, Digital Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1975.