Accepted manuscript. To appear in: Digital Signal Processing. Please cite as: M.A. Martinez, A. Ashrafi, Real-Valued Single-Tone Frequency Estimation Using Half-Length Autocorrelation, Digit. Signal Process. (2018), https://doi.org/10.1016/j.dsp.2018.08.007
Real-Valued Single-Tone Frequency Estimation Using Half-Length Autocorrelation

Michael A. Martinez, Ashkan Ashrafi
Department of Electrical Engineering, San Diego State University, 5500 Campanile Dr., San Diego, CA 92182 USA
Email addresses: [email protected], [email protected] (Michael A. Martinez, Ashkan Ashrafi)

Abstract

In this paper a new autocorrelation method called half-length autocorrelation (HLA) is introduced. Unlike regular (biased or unbiased) autocorrelations, the exact statistical characteristics of the HLA can be calculated, which makes the analytical calculation of the variance of the frequency estimator possible. The HLA is also used in two different frequency estimation methods, namely Modified Covariance (MC) and Modified Pisarenko Harmonic Decomposition (MPHD). It is shown that the application of the HLA in these frequency estimators improves their performance, particularly at lower signal-to-noise ratios (SNR). The analytical calculation of the variance (mean square error) of the estimators is also provided in this paper. Simulation results are in agreement with the theoretical calculations. For the MPHD estimator using the HLA (MPHD-HLA), the estimated frequency is very close to the theoretical calculations for SNRs as low as −2dB.

Keywords: Frequency Estimation; Autocorrelation; Half-length Autocorrelation; Cramer-Rao Lower Bound

Preprint submitted to Elsevier, August 24, 2018
1. Introduction

Frequency estimation is one of the most widely used signal processing techniques, with numerous applications in RADAR [1], SONAR [2], wireless communications [3], and speech processing [4]. In 1973, Pisarenko published his seminal paper on frequency estimation for a single-frequency sinusoid [5]. This method, which is called Pisarenko Harmonic Decomposition (PHD), has inspired many other techniques introduced in the following years. The exact analysis of the PHD method is given in [6]. On the other hand, Rife et al. used the maximum likelihood (ML) method to devise an algorithm that almost attains the Cramer-Rao Lower Bound (CRLB) [7]. The ML estimation shows that the correct frequency can be estimated as the peak of the power spectral density of the complex sinusoid [8]. Kenefic and Nuttall used the ML method for a real-valued sinusoid [9]. The ML-based methods provide the best frequency estimation algorithms, but they suffer from intense computational costs, which may not be suitable for fast processing applications. Some other frequency-domain methods provide more computationally efficient frequency estimation algorithms with the same accuracy as the ML algorithm [10, 11, 12, 13, 14]. It should be noted that estimation of the frequency of a complex-valued signal is generally done in the frequency domain, as the spectrum is one-sided and the frequency at which the peak of the spectrum occurs is the estimate whose variance attains the CRLB. Other methods have also been introduced that estimate multi-line frequencies, where the goal is to estimate the harmonic and non-harmonic frequency lines of a noisy signal. Interested readers are referred to Schmidt's work [15], called Multiple Signal Classification (MUSIC), as well as other methods introduced in the literature, e.g. [16, 17, 18].
To address the computational intensity of the ML method, several suboptimal frequency estimation algorithms have been proposed in the literature. These methods are generally suitable for single-tone real-valued sinusoids and are based on some variation of the autocorrelation of the noisy sinusoid. Most of these methods are derivatives of the PHD algorithm. The Modified Covariance (MC) method uses Linear Prediction (LP) for the discrete equation governing harmonic oscillators [19, 20, 21]. The exact analysis of the MC method is given in [22]. Other derivatives of the MC method have also been proposed. So and Chan introduced the Reformulated PHD algorithm (RPHD) to remove the bias of the MC estimator [23]. Lui and So introduced scaled sample covariance formulas and used their multiple lags to define a new frequency estimator called Modified PHD (MPHD) [24]. They calculated the exact variance of the estimator and showed that their method is close to the CRLB for high signal-to-noise ratios (SNR). Lui and So also proposed a two-stage approach [25]: in the first stage, the sample autocorrelation (AC) of the signal is processed, and in the second stage, the samples of the AC of the first AC are used for frequency estimation. Elasmi-Ksibi et al. introduced the Modified Covariance for Correlation (MCC) method, in which a coarse estimate of the frequency is found first and then the fine frequency estimate is calculated using a least-squares optimization of a cost function [26]. The exact analysis of the MCC method is given in [22]. The MCC estimator is biased for higher SNRs. This
was later noted in [27]. To correct this problem, a new coarse-fine algorithm is proposed in [27]. The proposed method was tested empirically by several simulations, but an analytical proof of the variance (error) of the estimation was not provided.
In all aforementioned methods, the main tool for the frequency estimation algorithms is a variant of the sample autocorrelation (AC). The problem with the AC is that the autocorrelation of the signal at each lag comprises different samples of the signal. In other words, the larger lags contain fewer summation terms. This makes the statistical analysis of the AC very difficult (if not impossible). Moreover, due to the fewer summation terms, the variance of the larger lags is higher than that of the smaller lags, and consequently they are less reliable in the estimation. Therefore, the AC needs to be limited to fewer samples [26]. To remedy this problem, multi-lag AC formulas, which involve significant approximations, have been devised [27, 28]. The method introduced in [28] uses an unbiased autocorrelation formula. Although the theoretical variance (error) of the estimation is found, the agreement between the theoretical error values and the error values obtained by numerical simulations appears to depend on the lag parameters. Moreover, this method requires a careful selection of two lag parameters to obtain optimum performance. This was done heuristically, and the optimum values of the lags depend on the number of samples in the signal. On the other hand, the method introduced in [27] proposed an algorithm to reduce the phase of the mean of the autocorrelation. The authors provided numerical results for a few special cases and, based on that, they claimed that their method attains the CRLB. Nevertheless, they did not furnish any
theoretical proof for their claim. Fig. 1 illustrates the mean square error of the previous methods versus SNR for N = 200 and ω0 = (√3/5)π. As it was explained above, all of the
previous methods have very poor performance for SNRs lower than 5dB, except the MCC, which instead performs very poorly at large SNRs (larger than 20dB). Overall, the MCC is not suitable for frequency estimation because of its low performance at high SNRs. Due to the lack of substantiated mathematical proofs, as well as the existence of heuristic and/or empirical methods of finding some of the parameters, we also did not use the results provided in [27, 28] for comparison with the results of our method. The proposed method in this paper addresses the aforementioned problems and significantly improves the performance at very low SNRs, while providing performance comparable to the existing methods at high SNRs.
In this paper, we introduce a new autocorrelation method called Half-length Autocorrelation (HLA) whose exact statistical characteristics can be derived. The HLA method addresses the large variance of larger lags by considering only half of the length of the signal. We will revisit the MC method where the HLA is used as the signal (MC-HLA) and show that the performance of the frequency estimator is enhanced dramatically; it is comparable to the MPHD method for high SNRs but outperforms all methods for very low SNRs. We will also give the closed-form formula of the variance of the proposed method for high SNRs. Finally, we will revisit the MPHD method where the HLA is employed (MPHD-HLA). Extensive simulation has been done to evaluate the variance of the estimation for different values of the SNR and number of samples. The results show that the
Figure 1: The Mean Square Error of the previous frequency estimators vs. SNR for N = 200 and ω0 = (√3/5)π. (Curves shown: CRLB, PHD, RPHD, MC, MCC, MPHD k = 1, MPHD k = k*.)
optimum variance can be achieved at lag one and the overall performance of the method is better than that of the MC-HLA.
This paper has three major contributions. The first contribution is the introduction of a new autocorrelation method (HLA) whose exact statistical characteristics are derived (unlike the sample AC, whose exact statistical characteristics are very difficult to derive). The second major contribution of this paper is the application of the HLA in known frequency estimation algorithms, which renders low errors at low SNRs compared to the previous methods. Third, the proposed method has a simple algorithm that does not depend on the heuristic selection of different lags to obtain optimum performance.
The rest of the paper is organized as follows. In Section 2, we introduce the HLA and find its exact variance. In Section 3, the MC estimator with the HLA input is studied and its theoretical variance (error) is derived. Section 4 discusses the use of the HLA in the MPHD estimator, where the performance of the proposed estimator is explained and verified by empirical calculations. In Section 5, the results of the proposed estimators are given for various simulation scenarios. Finally, in Section 6, the conclusions are drawn.

2. Half-length Autocorrelation

The goal of frequency estimation is to find the frequency of a single-tone sinusoid with Additive White Gaussian Noise (AWGN) defined as
x_n = s_n + q_n,  n = 1, ..., N,
s_n = α cos(ω0 n + φ),  (1)

where α is the amplitude, ω0 ∈ (0, π) is the angular frequency, φ ∈ [0, 2π) is the phase, x_n is the noisy sinusoid, and q_n is the AWGN. The amplitude, phase, and frequency are unknown deterministic quantities. The noise has a Gaussian distribution, q_n ∼ N(μ, σ²), with mean μ and variance σ². In practice, AWGN is assumed to be a zero-mean process (μ = 0) with unknown variance σ². In this model the SNR is γ = α²/(2σ²). Without loss of generality, we can consider φ = 0.
Assuming that x_n is a real-valued signal, the sample autocorrelation (AC) of the signal x_n can be defined as

r_k = (1/(N − k)) Σ_{n=1}^{N−k} x_n x*_{n+k},  k = 1, 2, ..., N − 1.  (2)

As mentioned in the previous section, the number of terms in the summation becomes smaller when the value of k increases; therefore, the statistical behavior of r_k cannot be easily formulated. To remedy this problem, we introduce the half-length autocorrelation (HLA) as follows:

u_k = (1/M) Σ_{n=1}^{M} x_n x*_{n+k},  k = 1, 2, ..., M,  (3)

where M = N/2 and N is the length of the signal x_n. The lag at k = 0 is
not calculated, so that the effect of the AWGN is severely diminished. The HLA results in a sequence that is half the length of the original signal. The main feature of the HLA is that, for each lag, u_k comprises a summation of M terms, as opposed to the AC, where the number of terms in the summation varies with the value of the lag k. This allows us to calculate the exact statistical characteristics of the HLA at each lag. The mean of the HLA is a sinusoidal signal with the same frequency as the original signal (see Appendix A),
E{u_k} = (α²/2) cos(kω0).  (4)
The covariance matrix C of the HLA can be exactly calculated (see Appendix A for the proof) as

[C]_{km} = (α²σ²/(2M)) [C′]_{km},
[C′]_{km} = (1/M)(2M − |m − k|) cos((m − k)ω0) + (1/M)(2M − k − m) cos((m + k)ω0) + (2σ²/α²) δ_{m−k}.  (5)
where [C]km is the entry at row k and column m of the covariance matrix C. By letting k = m, we can obtain the variance of the HLA:
Var{u_k} = α²σ²/M + σ⁴/M + (α²σ²/M²)(M − k) cos(2kω0).  (6)
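The HLA in (3) and the variance formula (6) are easy to check numerically. The following sketch (Python/NumPy; the function name hla and all parameter values are our own illustrative choices, not from the paper) compares the per-lag Monte Carlo variance against (6):

```python
import numpy as np

def hla(x):
    """Half-length autocorrelation, Eq. (3): u_k = (1/M) sum_{n=1}^{M} x_n x_{n+k}."""
    M = len(x) // 2
    # k = 0 is deliberately skipped, so the noise-power term never enters
    return np.array([np.dot(x[:M], x[k:k + M]) / M for k in range(1, M + 1)])

# Monte Carlo check of Eq. (6) with illustrative parameters
rng = np.random.default_rng(0)
N, alpha, sigma = 200, 1.0, 0.1
M = N // 2
w0 = np.sqrt(3) / 5 * np.pi
n = np.arange(1, N + 1)
u = np.array([hla(alpha * np.cos(w0 * n) + rng.normal(0.0, sigma, N))
              for _ in range(2000)])                    # 2000 trials x M lags
k = np.arange(1, M + 1)
var_theory = (alpha**2 * sigma**2 / M + sigma**4 / M
              + alpha**2 * sigma**2 / M**2 * (M - k) * np.cos(2 * k * w0))
# the empirical per-lag variance tracks Eq. (6) to within Monte Carlo error
print(np.max(np.abs(u.var(axis=0) - var_theory)))
```

The empirical variance also exhibits the predicted oscillation at twice the signal frequency, with its minima near the zero-crossings of E{u_k}.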
The variance of the HLA has two important properties. First, it goes to zero as M goes to infinity. Second, it varies for different values of k, which indicates that the HLA is statistically non-stationary. The variance is a decaying biased sinusoidal function with a frequency twice that of the original signal. In a signal with large SNR, the zero-crossings of the original sinusoid correspond to the minima of the variance function.

3. The Modified Covariance Frequency Estimation Using HLA

The HLA signal (3) can be employed as the new sinusoidal signal with new statistical characteristics described by its mean (4) and variance (6). Here we use the MC frequency estimation method on the HLA and find the variance of the estimator. We call this estimator MC-HLA. The MC-HLA estimator is closely related to MCC. The difference between MCC and MC-HLA is that the HLA is used as the signal instead of a subset of the sample AC, and the MC-HLA does not compute coarse and fine frequency estimates. The frequency estimator for MC-HLA is

ω̂0 = cos⁻¹( Σ_{k=1}^{M−2} u_{k+1}[u_k + u_{k+2}] / (2 Σ_{k=1}^{M−2} u_{k+1}²) ),  (7)
where u_k is the HLA. The variance of this method can be calculated using the same approach as the variance calculation for the coarse MCC estimator [22], [26]. The closed-form variance of the MC-HLA is

Var(ω̂0) = v^T C v / sin²(ω0) = 4 h^T C′ h / (γ sin²(ω0) M (M − 2)²),  (8)

where

h_i = { cos(2ω0) for i = 1; −cos(ω0) for i = 2; −cos(Mω0) for i = M − 1; cos((M − 1)ω0) for i = M; 0 elsewhere }.  (9)
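A direct implementation of the MC-HLA estimate (7) is short. In the sketch below (NumPy; the function names and the clipping of the cosine argument are our own additions, the latter as a numerical safeguard at very low SNRs):

```python
import numpy as np

def hla(x):
    # half-length autocorrelation, Eq. (3); repeated here for self-containment
    M = len(x) // 2
    return np.array([np.dot(x[:M], x[k:k + M]) / M for k in range(1, M + 1)])

def mc_hla(x):
    """MC-HLA frequency estimate, Eq. (7): the MC estimator applied to the HLA."""
    u = hla(x)
    M = len(u)
    num = np.sum(u[1:M - 1] * (u[:M - 2] + u[2:]))   # sum over k = 1 .. M-2
    den = 2.0 * np.sum(u[1:M - 1] ** 2)
    return np.arccos(np.clip(num / den, -1.0, 1.0))
```

For a noiseless tone the HLA is itself a pure sinusoid at ω0 (see (A.3)), so the estimate is essentially exact up to rounding; with noise, the error follows (8) at high SNR.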
The proof of the above equations is given in Appendix B. It should be noted that for high SNRs, the variance of the MC-HLA estimator is of order N⁻³, which is the same as the CRLB [8]. The variance never attains the CRLB, but it gets closer than the other sub-optimal estimators discussed earlier, except for the MCC estimator [26] in a narrow range of SNRs. We will further discuss the performance of the MC-HLA estimator and make comparisons with the performance of other state-of-the-art frequency estimation algorithms in Section 5.

4. MPHD Frequency Estimation Using the HLA

In the MPHD method, the scaled sample covariance c_k is used instead of the sample autocorrelation. The scaled sample covariances c_k and c_{2k} are defined as
c_k = Σ_{n=4k}^{N−k} x_{n−k}[x_n + x_{n−2k}],
c_{2k} = Σ_{n=5k}^{N} x_{n−2k}[x_n + x_{n−4k}],  (10)

where the maximum value of the lag k is k_max = ⌊(N − 1)/5⌋. The scaled sample
covariance c_k is analogous to a decimation in time of c_1, and it does not use all available samples, so it will have the same number of samples as c_{2k} [24]. The estimate of ρ_k = cos(kω0) can be found from the scaled sample covariances as

ρ̂_k = ( c_{2k} + √(c_{2k}² + 8c_k²) ) / (4c_k).  (11)
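The scaled sample covariances (10) and the estimate (11) translate directly into code. A sketch (NumPy; the function names are ours) that maps the paper's 1-based indices to 0-based arrays:

```python
import numpy as np

def scaled_covariances(x, k):
    """c_k and c_2k of Eq. (10); x[0] holds the paper's x_1 (1-based indices)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(4 * k, N - k + 1)              # paper n = 4k, ..., N-k
    ck = np.sum(x[n - k - 1] * (x[n - 1] + x[n - 2 * k - 1]))
    n = np.arange(5 * k, N + 1)                  # paper n = 5k, ..., N
    c2k = np.sum(x[n - 2 * k - 1] * (x[n - 1] + x[n - 4 * k - 1]))
    return ck, c2k

def rho_hat(x, k):
    # Eq. (11): estimate of rho_k = cos(k * w0)
    ck, c2k = scaled_covariances(x, k)
    return (c2k + np.sqrt(c2k**2 + 8.0 * ck**2)) / (4.0 * ck)
```

For a noiseless tone, rho_hat(x, k) reproduces cos(kω0) exactly, because the two sums in (10) run over the same set of squared samples.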
When k = 1, the estimate ρ̂_k (11) has the same form as the PHD estimate for cos(ω0). The only difference is the use of the scaled sample covariance instead of the sample AC. The lag k can be changed from 1 to k_max. Therefore, there are k_max potential estimates ρ̂_k, so each one needs to be calculated. The optimal lag is found by minimizing the variance given by

Var(ω̂0) ≈ 1 / ( γ k (N − 5k + 1)² (cos(2kω0) + 2)² sin²(kω0) ),  (12)

where SNR ≫ 1 (please see [24] for the details of the derivation). This is equivalent to maximizing the denominator of the variance, i.e.,

k* = arg max_{k∈{1,...,k_max}} ( k (N − 5k + 1)² (2ρ̂_k² + 1)² (1 − ρ̂_k²) ),  (13)
where ρ̂_k ≈ cos(kω0), cos(2kω0) + 2 = 2ρ̂_k² + 1, and sin²(kω0) = 1 − ρ̂_k². However, ρ̂_k also corresponds to k possible estimates of ω0, so a procedure analogous to phase unwrapping needs to be performed. Start by calculating

ω̂_{k,i} = (1/k) [ (−1)^{i−1} cos⁻¹(ρ̂_k) + 2π⌊i/2⌋ ],  i = 1, 2, ..., k,  (14)

then determine the i that minimizes the difference between ω̂_{k,i} and ω̂_{1,1},

i* = arg min_{i∈{1,...,k}} | ω̂_{k,i} − ω̂_{1,1} |,  (15)
where ω̂_{1,1} is the initial estimate that can be directly found from (10) and (11). The best frequency estimate is given by ω̂0 = ω̂_{k*,i*}.
There are actually two estimators proposed by the authors of this method [24]: one for k = 1 and another for k = k*. The former estimator has a smaller computational workload since it does not have to go through the optimization procedure; however, it does not perform as well at lower SNRs and the MSE is not constant throughout the frequency range.
MPHD frequency estimation utilizing the HLA (MPHD-HLA) replaces all instances of the original signal with the HLA signal in the MPHD estimator. When the HLA is utilized, the scaled sample covariance functions become
c_k = Σ_{n=4k}^{M−k} u_{n−k}[u_n + u_{n−2k}],  (16)
c_{2k} = Σ_{n=5k}^{M} u_{n−2k}[u_n + u_{n−4k}],  (17)

where M = N/2 is the length of the HLA signal. The maximum value of k is also changed to

k_max = ⌊(M − 1)/5⌋ = ⌊(N − 2)/10⌋.  (18)
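The candidate enumeration (14) and the selection rule (15) carry over to the MPHD-HLA unchanged. A sketch of these two steps as reconstructed above (NumPy; unwrap_candidates is our name for this step):

```python
import numpy as np

def unwrap_candidates(rho_k, k, w_11):
    """Eqs. (14)-(15): enumerate the k candidate frequencies implied by
    rho_k ~ cos(k * w0) and keep the one closest to the lag-1 estimate w_11."""
    i = np.arange(1, k + 1)
    cands = ((-1.0) ** (i - 1) * np.arccos(rho_k) + 2.0 * np.pi * (i // 2)) / k
    return cands[np.argmin(np.abs(cands - w_11))]
```

For example, with ω0 = 2.0 rad and k = 3, cos(k·ω0) alone is ambiguous, but unwrap_candidates(np.cos(6.0), 3, 2.01) recovers 2.0.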
For the purposes of this paper, the rest of the calculation for the MPHD-HLA estimators corresponding to k = 1 and k = k* proceeds in the same manner as the standard MPHD estimation algorithm. Since the MPHD-HLA estimator utilizes the HLA, the variance of the estimator is no longer equivalent to (12). The variance of the HLA, given by (6), has smaller local minima for smaller values of the lag (k). Therefore, lower values of k mean the involved samples of the HLA in (16) and (17) have smaller variances, which will result in a smaller overall variance. This predicts that the optimum value of k for the MPHD-HLA method should be k* = 1.
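Putting the pieces together, a minimal MPHD-HLA estimator for the fixed lag k = 1 can be sketched as follows (no unwrapping is needed at lag one; the names and the clipping safeguard are our own, not the paper's):

```python
import numpy as np

def mphd_hla_k1(x):
    """MPHD-HLA with k = 1: HLA (3), scaled covariances (16)-(17), estimate (11)."""
    M = len(x) // 2
    # half-length autocorrelation; u[0] holds the paper's u_1
    u = np.array([np.dot(x[:M], x[j:j + M]) / M for j in range(1, M + 1)])
    n = np.arange(4, M)                      # paper n = 4, ..., M-1  (Eq. 16, k = 1)
    c1 = np.sum(u[n - 2] * (u[n - 1] + u[n - 3]))
    n = np.arange(5, M + 1)                  # paper n = 5, ..., M    (Eq. 17, k = 1)
    c2 = np.sum(u[n - 3] * (u[n - 1] + u[n - 5]))
    rho = (c2 + np.sqrt(c2**2 + 8.0 * c1**2)) / (4.0 * c1)
    return np.arccos(np.clip(rho, -1.0, 1.0))
```

Since the noiseless HLA is a pure sinusoid at ω0, this estimate is essentially exact in the absence of noise, and it degrades gracefully as the SNR drops.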
Figure 2: The optimal lag at each frequency and SNR for a signal that has amplitude α = 1, phase φ = 0, and number of samples N = 200.
To empirically show the validity of this deduction, the variance was determined for a wide range of frequencies and SNRs using a simulation. The constant signal parameters used in the simulation were the amplitude α = 1, phase φ = 0, and number of samples N = 200. The frequency varied from 0.02π to 0.98π in steps of 0.02π and the SNR varied from -20 dB to 40 dB in steps of 1 dB. To produce statistically significant results, the variance was averaged over 10,000 iterations. This simulation calculated the variance for every possible lag, frequency, and SNR and then determined which lag produced the minimum variance at each frequency and SNR. Fig. 2 shows the results of the simulation as a 3D plot.
The simulation shows that the optimal lag is k* = 1 when ω0 ∈ [0.08π, 0.92π] and γ ∈ [7, ∞) dB. At some frequency values the optimal lag will be k* = 1 when the SNR is as small as 4dB. There is considerable variation in k* when the SNR is less than 0dB. Similar results were obtained when the simulation was run for a signal with a different number of samples. If the parameters of the signal are always in the ranges where the optimal lag for MPHD-HLA is k* = 1, the computational workload can be significantly reduced with no decrease in performance. This is because calculating the frequency estimate no longer has to go through the optimization procedure that increases the computational workload.
A separate theoretical calculation of the variance of the MPHD-HLA is not necessary. Since (16) and (17) are similar to the numerator of (7), the variance of the estimator for k* = 1 will not be very different from the variance of the MC-HLA shown in (8) for high SNRs. The effect of the MPHD-HLA is more pronounced at lower SNRs due to the double averaging nature of the method (calculating c_1 and c_2). So we can use (8) as the theoretical variance of the MPHD-HLA methods for k* = 1 and large SNRs. The simulation results given in the next section confirm this observation.

5. Results and Discussion

Here we discuss the results of simulations for the proposed frequency estimation methods. First, we discuss the computational workload of the methods and compare it with that of the existing methods. Then we show the performances of the proposed methods and compare them with those of the existing methods.
5.1. Computational Workload

Table 1 shows the computational workload of the proposed estimators along with the relevant existing ones. A fair comparison should be made between the MPHD (k = 1), MPHD (k = k*) and the proposed methods MC-HLA, MPHD-HLA (k = 1) and MPHD-HLA (k = k*). To facilitate this comparison, the plots of the number of additions and multiplications are given in Fig. 3. These figures show that all of the proposed methods have a smaller number of additions than MPHD (k = k*), which has the best SNR performance among existing methods. However, the number of multiplications of the proposed methods is higher than that of MPHD (k = k*). Overall, the workload of the proposed methods is slightly higher compared to MPHD (k = k*), which is a sacrifice we need to make to achieve better error performance for low SNRs (discussed in the next section).

5.2. Error Performance Comparisons

The simulations in this section were based on an average of 3000 independent trials, and the amplitude and phase of the signals were set to α = 1 and φ = 0, respectively. The graphs generated show the effect of the SNR, frequency, and number of samples on the Mean Square Error (MSE, or variance) of the estimators. In these simulations the proposed estimators are compared to the MPHD (k = k*) estimators. Fig. 4 illustrates the MSE of the estimators versus the SNR of the signal with ω0 = (√3/5)π and N = 200. It can be seen that all estimators that use
HLA outperform the MPHD optimum (k = k*) estimator at low SNRs. At high SNRs, the MPHD (k = k*) estimator is slightly better than the MPHD-HLA (k = k*) estimator. It can also be observed that the MPHD-HLA with
Figure 3: The number of additions and multiplications for different methods.
k = k* performs much better than the MPHD-HLA with k = 1 at low SNRs, but the MPHD-HLA with k = 1 performs slightly better at high SNRs. Fig. 4 also shows that the theoretical MSE (8) of the MC-HLA is in very good agreement with the MC-HLA and MPHD-HLA (k = 1) simulation results. At low SNRs, the proposed methods significantly outperform the MPHD (k = k*). Table 2 shows the error performance of the methods at low SNRs. It can be seen that there is a 26.5 dB and a 24.4 dB improvement in the error between the MPHD (k = k*) and MPHD-HLA (k = k*) at SNR=0dB and SNR=-2dB, respectively (when N = 200 and ω0 = (√3/5)π). This is a
Table 1: Computational Workload of Frequency Estimators

Method                 | Additions                                            | Multiplications
PHD                    | 2N − 1                                               | 2N + 3
RPHD                   | 3N − 3                                               | 2N + 5
MC                     | 3N − 8                                               | 2N − 3
MCC (k1 = 1, k2 = N/3) | 5N²/18 + 3N/2 − 4                                    | 5N²/18 + 5
MPHD (k = 1)           | 4N − 16                                              | 2N − 4
MPHD (k = k*)          | 4kmax(N + 1) − 10kmax(kmax + 1) + k* + 4             | kmax(2N + 7) − 5kmax(kmax + 1) + 12
MC-HLA                 | N²/4 + 3N/2 − 8                                      | N²/4 + N − 3
MPHD-HLA (k = 1)       | N²/4 + 2N − 16                                       | N²/4 + N − 4
MPHD-HLA (k = k*)      | N²/4 + 2kmax(N/2 + 1) − 5kmax(kmax/2 + 1) + k* + 4   | N²/4 + (kmax/2)(N + 7) − (5kmax/2)(kmax/2 + 1) + 12
significant reduction in error, which is very useful for applications where the signal is very noisy. The other proposed methods, MPHD-HLA (k = 1) and MC-HLA, provide lower error than the MPHD (k = 1) at a lower computational cost.
Fig. 5 shows the MSE of the estimators versus the frequency of the signal with SNR=20 dB and N = 200. It can be observed from the figure that the MPHD-HLA with k = 1 has the best overall performance, but the MC-HLA and MPHD cover more frequencies with small MSE. The theoretical MSE values (8) also have good agreement with the simulation results of the MC-HLA and MPHD-HLA (k = 1).

Figure 4: The Mean Square Error of the estimators vs. SNR for N = 200 and ω0 = (√3/5)π.

Fig. 6 illustrates the MSE of the estimators versus the number of samples N with ω0 = (√3/5)π and SNR=20 dB. The above performance comparisons can
also be observed in this figure. Moreover, the figure shows that the proposed estimators are asymptotically unbiased, as their MSEs go to zero with increasing N.

6. Conclusions

We have introduced the Half-length Autocorrelation (HLA) for frequency estimation purposes. It has been shown that the statistical properties of the HLA can be exactly calculated, which has allowed us to exactly calculate the
Figure 5: The Mean Square Error of the estimators vs. normalized frequency for N = 200 and SNR=20dB.
mean square error (MSE) of the proposed frequency estimation algorithms. The Modified Covariance and Modified Pisarenko Harmonic Decomposition (MPHD) estimators are considered when the HLA is used as their input signals. The theoretical calculations of the variance (MSE) and the simulation results show that the proposed HLA can make the algorithms more accurate. Particularly, the HLA can improve the frequency estimation accuracy at low SNRs (as low as -2dB) when it is used with the MPHD algorithms. The computational cost of the proposed methods has been determined, showing that with a moderate addition to the computational intensity, we can
Figure 6: The Mean Square Error of the estimators vs. N for SNR=20dB and ω0 = (√3/5)π.
obtain much better estimation performance at low SNRs. The exact statistical characteristics of the HLA have given us an opportunity to gain more insight into the process, which could lead to other applications in frequency estimation and harmonic processing. One of these insights is the coincidence of the zero crossings of the HLA with the minima of its variance. This implies that the samples of the HLA at its zero crossings have very low variance. This effect is more pronounced at smaller lags. The use of this important feature of the HLA is the subject of the authors' future research.
Table 2: Error Performance at Low SNRs for N = 200 and ω0 = (√3/5)π.

SNR (dB) | MPHD (k = k*) | MC-HLA   | MPHD-HLA (k = 1) | MPHD-HLA (k = k*)
   0     | -28.5 dB      | -37.8 dB | -51 dB           | -55 dB
  -2     | -23.4 dB      | -29.8 dB | -42.7 dB         | -47.8 dB
7. Acknowledgment

The authors wish to thank San Diego State University's Office of the President for funding this research.

References

[1] R. Bamler, Doppler frequency estimation and the Cramer-Rao bound, IEEE Trans. Geosci. Remote Sens. 29 (3) (1991) 385–390.
[2] W. C. Knight, R. G. Pridham, S. M. Kay, Digital signal processing for sonar, Proc. IEEE 69 (11) (1981) 1451–1506.
[3] F. Rice, B. Cowley, B. Moran, M. Rice, Cramer-Rao lower bounds for QAM phase and frequency estimation, IEEE Trans. Commun. 49 (9) (2001) 1582–1591.
[4] T. Abatzoglou, A fast maximum likelihood algorithm for frequency estimation of a sinusoid based on Newton's method, IEEE Trans. Acoust., Speech, Signal Process. 33 (1) (1985) 77–89.
[5] V. F. Pisarenko, The retrieval of harmonics from a covariance function, Geophys. J. Int. 33 (3) (1973) 347–366.
[6] K. Chan, H.-C. So, An exact analysis of Pisarenko's single-tone frequency estimation algorithm, Signal Process. 83 (3) (2003) 685–690.
[7] D. C. Rife, R. R. Boorstyn, Single tone parameter estimation from discrete-time observations, IEEE Trans. Inf. Theory 20 (5) (1974) 591–598.
[8] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory, Prentice Hall, Englewood Cliffs, NJ, 1993.
[9] R. J. Kenefic, A. H. Nuttall, Maximum likelihood estimation of the parameters of a tone using real discrete data, IEEE J. Ocean. Eng. 12 (1) (1987) 279–280.
[10] M. D. Macleod, Fast nearly ML estimation of the parameters of real or complex single tones or resolved multiple tones, IEEE Trans. Signal Process. 46 (1) (1998) 141–148.
[11] T.-H. Li, K.-S. Song, Asymptotic analysis of a fast algorithm for efficient multiple frequency estimation, IEEE Trans. Inf. Theory 48 (10) (2002) 2709–2720.
[12] K. Mahata, Subspace fitting approaches for frequency estimation using real-valued data, IEEE Trans. Signal Process. 53 (8) (2005) 3099–3110.
[13] B. G. Quinn, E. J. Hannan, The Estimation and Tracking of Frequency, Vol. 9, Cambridge University Press, 2001.
[14] S. Ye, J. Sun, E. Aboutanios, On the estimation of the parameters of a real sinusoid in noise, IEEE Signal Process. Lett. 24 (5) (2017) 638–642.
[15] R. O. Schmidt, Multiple emitter location and signal parameter estimation, IEEE Trans. Antennas Propag. 34 (3) (1986) 276–280.
[16] J. X. Zhang, M. G. Christensen, S. H. Jensen, M. Moonen, A robust and computationally efficient subspace-based fundamental frequency estimator, IEEE Trans. Audio, Speech, Language Process. 18 (3) (2010) 487–497.
[17] L. Huang, Y. Wu, H.-C. So, Y. Zhang, L. Huang, Multidimensional sinusoidal frequency estimation using subspace and projection separation approaches, IEEE Trans. Signal Process. 60 (10) (2012) 5536–5543.
[18] K. Chan, H.-C. So, Accurate frequency estimation for real harmonic sinusoids, IEEE Signal Process. Lett. 11 (7) (2004) 609–612.
[19] D. W. Tufts, P. D. Fiore, Simple, effective estimation of frequency based on Prony's method, in: Proc. 1996 IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Vol. 5, 1996, pp. 2801–2804.
[20] L. Fertig, J. McClellan, Instantaneous frequency estimation using linear prediction with comparisons to the DESAs, IEEE Signal Process. Lett. 3 (2) (1996) 54–56.
[21] R. M. Adelson, Frequency estimation from few measurements, Digital Signal Process. 7 (1) (1997) 47–54.
[22] Y. Cao, G. Wei, F.-J. Chen, An exact analysis of modified covariance frequency estimation algorithm based on correlation of single-tone, Signal Process. 92 (11) (2012) 2785–2790.
[23] H.-C. So, K. W. Chan, Reformulation of Pisarenko harmonic decomposition method for single-tone frequency estimation, IEEE Trans. Signal Process. 52 (4) (2004) 1128–1135.
[24] K. W. K. Lui, H. C. So, Modified Pisarenko harmonic decomposition for single-tone frequency estimation, IEEE Trans. Signal Process. 56 (7) (2008) 3351–3356.
[25] K. W. Lui, H.-C. So, Two-stage autocorrelation approach for accurate single sinusoidal frequency estimation, Signal Process. 88 (7) (2008) 1852–1857.
[26] Y. Cao, G. Wei, F.-J. Chen, A closed-form expanded autocorrelation method for frequency estimation of a sinusoid, Signal Process. 92 (4) (2012) 885–892.
[27] Y.-Q. Tu, Y.-L. Shen, Phase correction autocorrelation-based frequency estimation method for sinusoidal signal, Signal Process. 130 (2017) 183–189.
[28] R. Elasmi-Ksibi, H. Besbes, R. López-Valcarce, S. Cherif, Frequency estimation of real-valued single-tone in colored noise using multiple autocorrelation lags, Signal Process. 90 (7) (2010) 2303–2307.
[29] K. Triantafyllopoulos, Moments and cumulants of the multivariate real and complex Gaussian distributions, unpublished (2002).
[30] G. W. Bohrnstedt, A. S. Goldberger, On the exact covariance of products of random variables, J. Amer. Statist. Assoc. 64 (328) (1969) 1439–1442.
Appendix A. Statistical Characteristics of the HLA

To find the statistical characteristics of the HLA, we need the mean and the covariance matrix of the HLA vector. The mean of the HLA can be found by substituting (1) into (3):

\[
u_k = \frac{1}{M}\sum_{n=1}^{M} s_n s_{n+k} + \frac{1}{M}\sum_{n=1}^{M}\left[ s_n q_{n+k} + s_{n+k} q_n + q_n q_{n+k} \right]. \tag{A.1}
\]
The first summation is deterministic and the second is stochastic because of the noise terms. The second and third terms of (A.1) appear to become negligible when M is large; however, this is not the case, because these terms are correlated and must therefore be included in the calculation. Rewriting the HLA more compactly,

\[
u_k = \bar{s}_k + \bar{q}_k, \tag{A.2}
\]
where \(\bar{s}_k\) is the deterministic component of the HLA (the first summation in (A.1)) and \(\bar{q}_k\) is the stochastic component (the second summation in (A.1)). The deterministic component is

\[
\begin{aligned}
\bar{s}_k &= \frac{\alpha^2}{M}\sum_{n=1}^{M}\cos(n\omega_0)\cos((n+k)\omega_0)\\
&= \frac{\alpha^2}{2M}\sum_{n=1}^{M}\cos(k\omega_0) + \frac{\alpha^2}{2M}\sum_{n=1}^{M}\cos((2n+k)\omega_0)\\
&= \frac{\alpha^2}{2}\cos(k\omega_0) + \frac{\alpha^2}{2M}\,\frac{\cos((M+k+1)\omega_0)\sin(M\omega_0)}{\sin(\omega_0)}.
\end{aligned} \tag{A.3}
\]
For large values of M, the second term in (A.3) is negligible. Even if M is not large, the second term does not change the nature of \(\bar{s}_k\), which is a sinusoid at the frequency \(\omega_0\). The expected value of the HLA is

\[
E\{u_k\} = E\{\bar{s}_k\} + E\{\bar{q}_k\} = \bar{s}_k \approx \frac{\alpha^2}{2}\cos(k\omega_0), \tag{A.4}
\]
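The closed form of the deterministic component can be verified numerically. The short Python sketch below (our own illustration, not part of the paper; the parameter values are arbitrary) compares the direct sum defining \(\bar{s}_k\) with the closed-form expression for a few lags:

```python
import numpy as np

# Deterministic check of the closed form in (A.3): the noise-free HLA
# component (alpha^2/M) * sum_{n=1}^{M} cos(n w0) cos((n+k) w0) equals
# (alpha^2/2) cos(k w0) plus an O(1/M) edge term.
alpha, w0, M = 1.3, 0.9, 50
n = np.arange(1, M + 1)
errors = []
for k in (1, 5, 17):
    direct = alpha**2 / M * np.sum(np.cos(n * w0) * np.cos((n + k) * w0))
    closed = (alpha**2 / 2 * np.cos(k * w0)
              + alpha**2 / (2 * M)
              * np.cos((M + k + 1) * w0) * np.sin(M * w0) / np.sin(w0))
    errors.append(abs(direct - closed))
max_err = max(errors)
print(max_err)   # agreement to floating-point precision
```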
because each noise sample has zero mean and therefore \(\bar{q}_k\) also has zero mean. Moreover, it can be seen from (A.1) that the first and second terms in the second summation are Gaussian variables, while the third term is a sum of products of two Gaussian variables. The product of two Gaussian random variables is not itself Gaussian, but the sum of \(M \gg 1\) such variables tends to a Gaussian distribution by the Central Limit Theorem. Therefore, the random variable \(\bar{q}_k\) is approximately Gaussian. From the expected value it can be concluded that the HLA is an asymptotically unbiased estimator. The covariance matrix of the vector \(\mathbf{u}\), whose entries are \(u_k\), \(k = 1, \dots, M\), can be derived as

\[
[\mathbf{C}]_{km} = E\{u_k u_m\} - E\{u_k\}E\{u_m\}, \tag{A.5}
\]
where k and m are the HLA lags. The second-order moment is

\[
E\{u_k u_m\} = E\left\{ \frac{1}{M}\sum_{n=1}^{M} x_n x_{n+k} \cdot \frac{1}{M}\sum_{p=1}^{M} x_p x_{p+m} \right\}
= \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} E\{x_n x_{n+k} x_p x_{p+m}\}. \tag{A.6}
\]
Substituting for \(x_n\) and simplifying (A.6) gives

\[
\begin{aligned}
E\{u_k u_m\} ={}& \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_n s_{n+k} s_p s_{p+m}
+ \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_n s_{n+k} E\{q_p q_{p+m}\}\\
&+ \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_n s_p E\{q_{n+k} q_{p+m}\}
+ \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_n s_{p+m} E\{q_{n+k} q_p\}\\
&+ \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_{n+k} s_p E\{q_n q_{p+m}\}
+ \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_{n+k} s_{p+m} E\{q_n q_p\}\\
&+ \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_p s_{p+m} E\{q_n q_{n+k}\}
+ \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} E\{q_n q_{n+k} q_p q_{p+m}\}.
\end{aligned} \tag{A.7}
\]
In this calculation, the summations over an odd number of noise terms are zero and were not included in (A.7). For simplicity, each of these double summations is calculated separately, with the results shown below. The first term is deterministic and can be simplified as

\[
\begin{aligned}
A_1 &= \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_n s_{n+k} s_p s_{p+m}\\
&= \frac{\alpha^4}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M}\cos(n\omega_0)\cos((n+k)\omega_0)\cos(p\omega_0)\cos((p+m)\omega_0)\\
&= \frac{\alpha^4}{4M^2}\sum_{n=1}^{M}\sum_{p=1}^{M}\left[\cos(k\omega_0)+\cos((2n+k)\omega_0)\right]\left[\cos(m\omega_0)+\cos((2p+m)\omega_0)\right]\\
&\approx \frac{\alpha^4}{4}\cos(k\omega_0)\cos(m\omega_0) \approx E\{u_k\}E\{u_m\}.
\end{aligned}
\]
The rest of the terms are stochastic. The second term is calculated as

\[
\begin{aligned}
A_2 &= \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_n s_{n+k} E\{q_p q_{p+m}\}
= \frac{\alpha^2\sigma^2}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M}\cos(n\omega_0)\cos((n+k)\omega_0)\,\delta_m\\
&= \frac{\alpha^2\sigma^2}{2M^2}\sum_{n=1}^{M}\sum_{p=1}^{M}\left[\cos(k\omega_0)+\cos((2n+k)\omega_0)\right]\delta_m
\approx \frac{\alpha^2\sigma^2}{2}\cos(k\omega_0)\,\delta_m.
\end{aligned}
\]
Since many of the calculations use the same trigonometric identities, the steps involving these identities are skipped from here on. The third term is

\[
\begin{aligned}
A_3 &= \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_n s_p E\{q_{n+k} q_{p+m}\}
= \frac{\alpha^2\sigma^2}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M}\cos(n\omega_0)\cos(p\omega_0)\,\delta_{n-p+k-m}\\
&= \frac{\alpha^2\sigma^2}{2M^2}\left(M-|m-k|\right)\cos((m-k)\omega_0).
\end{aligned} \tag{A.8}
\]
The fourth term is

\[
\begin{aligned}
A_4 &= \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_n s_{p+m} E\{q_{n+k} q_p\}
= \frac{\alpha^2\sigma^2}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M}\cos(n\omega_0)\cos((p+m)\omega_0)\,\delta_{n+k-p}\\
&= \frac{\alpha^2\sigma^2}{2M^2}\left(M-k\right)\cos((m+k)\omega_0).
\end{aligned} \tag{A.9}
\]
Similarly, the fifth term is

\[
\begin{aligned}
A_5 &= \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_{n+k} s_p E\{q_n q_{p+m}\}
= \frac{\alpha^2\sigma^2}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M}\cos((n+k)\omega_0)\cos(p\omega_0)\,\delta_{n-p-m}\\
&= \frac{\alpha^2\sigma^2}{2M^2}\left(M-m\right)\cos((m+k)\omega_0).
\end{aligned} \tag{A.10}
\]
The sixth term is similar to the third, but its amplitude no longer depends on the lags k and m:

\[
\begin{aligned}
A_6 &= \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_{n+k} s_{p+m} E\{q_n q_p\}
= \frac{\alpha^2\sigma^2}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M}\cos((n+k)\omega_0)\cos((p+m)\omega_0)\,\delta_{n-p}\\
&\approx \frac{\alpha^2\sigma^2}{2M}\cos((m-k)\omega_0).
\end{aligned} \tag{A.11}
\]

The seventh term is calculated the same way as the second:

\[
A_7 = \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} s_p s_{p+m} E\{q_n q_{n+k}\}
= \frac{\alpha^2\sigma^2}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M}\cos(p\omega_0)\cos((p+m)\omega_0)\,\delta_k
\approx \frac{\alpha^2\sigma^2}{2}\cos(m\omega_0)\,\delta_k. \tag{A.12}
\]
To calculate the final term, it is helpful to know the expectation of a product of four zero-mean Gaussian random variables: if \(X_1, X_2, X_3, X_4\) are jointly Gaussian with \(c_{ij} = E\{X_i X_j\}\), then \(E\{X_1 X_2 X_3 X_4\} = c_{12}c_{34} + c_{13}c_{24} + c_{14}c_{23}\) [29], [30]. Therefore,

\[
\begin{aligned}
A_8 &= \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} E\{q_n q_{n+k} q_p q_{p+m}\}\\
&= \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} E\{q_n q_{n+k}\}E\{q_p q_{p+m}\}
+ \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} E\{q_n q_p\}E\{q_{n+k} q_{p+m}\}\\
&\quad + \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M} E\{q_n q_{p+m}\}E\{q_{n+k} q_p\}\\
&= \frac{1}{M^2}\sum_{n=1}^{M}\sum_{p=1}^{M}\left[\sigma^4\delta_k\delta_m + \sigma^4\delta_{n-p}\,\delta_{n-p+k-m} + \sigma^4\delta_{n-p-m}\,\delta_{n+k-p}\right]\\
&= \frac{1}{M^2}\sum_{n=1}^{M}\sigma^4\delta_{m-k} = \frac{\sigma^4}{M}\delta_{m-k},
\end{aligned}
\]

where the first and third terms vanish for \(k, m > 0\).
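The fourth-moment identity used above can be sanity-checked by simulation. The following sketch (our own; the covariance matrix is an arbitrary valid example) compares a Monte Carlo estimate of \(E\{X_1 X_2 X_3 X_4\}\) with the closed form:

```python
import numpy as np

# Monte Carlo check of the Gaussian fourth-moment identity used for A8:
# for zero-mean jointly Gaussian X1..X4 with c_ij = E{Xi Xj},
# E{X1 X2 X3 X4} = c12 c34 + c13 c24 + c14 c23.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
cov = A @ A.T / 4                         # any (scaled) A A^T is a valid covariance
X = rng.multivariate_normal(np.zeros(4), cov, size=1_000_000)
empirical = np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3])
isserlis = (cov[0, 1] * cov[2, 3] + cov[0, 2] * cov[1, 3]
            + cov[0, 3] * cov[1, 2])
print(empirical, isserlis)                # the two agree to Monte Carlo error
```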
Considering \(m, k > 0\) and adding all of the terms together, the covariance matrix can be found as

\[
[\mathbf{C}]_{km} = \frac{\alpha^2\sigma^2}{2M}\,[\mathbf{C}']_{km},
\qquad
[\mathbf{C}']_{km} = \frac{1}{M}\left(2M-|m-k|\right)\cos((m-k)\omega_0)
+ \frac{1}{M}\left(2M-k-m\right)\cos((m+k)\omega_0)
+ \frac{2\sigma^2}{\alpha^2}\,\delta_{m-k}. \tag{A.13}
\]
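Expression (A.13) can be checked against a Monte Carlo estimate of the HLA covariance. The sketch below (illustrative code with our own variable names, not the authors' software) generates many noisy records, computes the HLA of each, and compares the sample covariance of the coefficients with \((\alpha^2\sigma^2/2M)\,\mathbf{C}'\):

```python
import numpy as np

# Monte Carlo check of (A.13): the sample covariance of the HLA coefficients
# should match (alpha^2 sigma^2 / 2M) * C' up to O(1/M) approximation error.
rng = np.random.default_rng(3)
alpha, w0, sigma, M, trials = 1.0, 0.9, 0.5, 20, 5000

def hla(x, M):
    """u_k = (1/M) sum_{n=1}^{M} x_n x_{n+k}, k = 1..M (x is 0-based)."""
    return np.array([np.dot(x[:M], x[k:k + M]) / M for k in range(1, M + 1)])

n = np.arange(1, 2 * M + 1)
U = np.empty((trials, M))
for t in range(trials):
    x = alpha * np.cos(n * w0) + sigma * rng.standard_normal(2 * M)
    U[t] = hla(x, M)
C_emp = np.cov(U, rowvar=False)           # empirical covariance of u_1..u_M

k = np.arange(1, M + 1)
K, Mm = np.meshgrid(k, k, indexing="ij")  # K -> lag k, Mm -> lag m
Cp = ((2 * M - np.abs(Mm - K)) / M * np.cos((Mm - K) * w0)
      + (2 * M - K - Mm) / M * np.cos((Mm + K) * w0)
      + (2 * sigma**2 / alpha**2) * np.eye(M))
C_theory = alpha**2 * sigma**2 / (2 * M) * Cp
print(np.max(np.abs(C_emp - C_theory)))   # small compared with the entries of C
```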
Appendix B. Proof of the Variance of the MC-HLA Estimator

To find the variance of the MC-HLA estimator, we should consider that the signal applied to the MC estimator is the HLA of the original signal, i.e., \(u_r\). The variance can be calculated as

\[
\operatorname{Var}(\hat{\omega}_0) = \frac{\operatorname{Var}(\hat{\rho})}{\sin^2(\omega_0)}, \tag{B.1}
\]
where \(\hat{\rho} = \cos(\hat{\omega}_0)\). \(\hat{\rho}\) can be expressed as a function of the HLA coefficient estimates \(\hat{\mathbf{u}}\), i.e., \(\hat{\rho} = f(\hat{\mathbf{u}})\). Let

\[
\mathbf{u} = [u_1, u_2, \cdots, u_{M-1}, u_M]^T, \tag{B.2}
\]
\[
f(\hat{\mathbf{u}}) = \frac{\displaystyle\sum_{k=1}^{M-2}\hat{u}_{k+1}\left[\hat{u}_k + \hat{u}_{k+2}\right]}{\displaystyle 2\sum_{k=1}^{M-2}\hat{u}_{k+1}^2}. \tag{B.3}
\]
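As a concrete illustration of (B.3), the following sketch (our own hypothetical implementation, not the authors' code) forms the HLA of a noisy record, evaluates \(f(\hat{\mathbf{u}})\), and recovers \(\hat{\omega}_0 = \arccos\hat{\rho}\):

```python
import numpy as np

# Minimal sketch of the MC-HLA estimate: compute the HLA of the record,
# evaluate rho = f(u) per (B.3), and take w0_hat = arccos(rho).
def hla(x, M):
    """u_k = (1/M) sum_{n=1}^{M} x_n x_{n+k}, k = 1..M (x is 0-based)."""
    return np.array([np.dot(x[:M], x[k:k + M]) / M for k in range(1, M + 1)])

def mc_hla_frequency(x):
    M = len(x) // 2
    u = hla(x, M)                                    # u[0] = u_1, ..., u[M-1] = u_M
    num = np.sum(u[1:M - 1] * (u[:M - 2] + u[2:M]))  # sum u_{k+1}(u_k + u_{k+2})
    den = 2.0 * np.sum(u[1:M - 1] ** 2)              # 2 sum u_{k+1}^2
    rho = np.clip(num / den, -1.0, 1.0)              # guard arccos at high noise
    return np.arccos(rho)

rng = np.random.default_rng(1)
w0, alpha, sigma = 0.7, 1.0, 0.2
n = np.arange(1, 401)
x = alpha * np.cos(n * w0) + sigma * rng.standard_normal(n.size)
w_hat = mc_hla_frequency(x)
print(w_hat)   # close to the true w0 = 0.7
```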
A first-order Taylor expansion of \(f(\hat{\mathbf{u}})\) around the true correlation coefficients \(\mathbf{u}\) gives an approximation of \(\hat{\rho}\):

\[
\hat{\rho} \approx f(\mathbf{u}) + \mathbf{v}^T(\hat{\mathbf{u}} - \mathbf{u}), \tag{B.4}
\]

where \(\mathbf{v} = \nabla f(\hat{\mathbf{u}})\big|_{\hat{\mathbf{u}}=\mathbf{u}}\). The variance of \(\hat{\rho}\) can then be written as

\[
\operatorname{Var}(\hat{\rho}) \approx \mathbf{v}^T \mathbf{C} \mathbf{v}, \tag{B.5}
\]
where \(\mathbf{C} \in \mathbb{R}^{M\times M}\) is the covariance matrix of \(\hat{\mathbf{u}}\) given in (5). The next step is to derive the vector \(\mathbf{v}\), which is found by taking the partial derivative of \(f(\hat{\mathbf{u}})\) with respect to each correlation coefficient \(\hat{u}_i\). First, the numerator of (B.3) can be expanded so that each derivative is easier to compute:

\[
\sum_{k=1}^{M-2}\hat{u}_{k+1}\left[\hat{u}_k + \hat{u}_{k+2}\right]
= \hat{u}_1\hat{u}_2 + 2\hat{u}_2\hat{u}_3 + \cdots + 2\hat{u}_{M-2}\hat{u}_{M-1} + \hat{u}_{M-1}\hat{u}_M. \tag{B.6}
\]
Each derivative is evaluated at \(\hat{\mathbf{u}} = \mathbf{u}\). The derivative of \(f(\hat{\mathbf{u}})\) with respect to \(\hat{u}_1\) is

\[
\frac{\partial f}{\partial \hat{u}_1}
= \frac{\hat{u}_2}{2\sum_{k=1}^{M-2}\hat{u}_{k+1}^2}
= \frac{\cos(2\omega_0)}{\alpha^2\sum_{k=2}^{M-1}\cos^2(k\omega_0)}. \tag{B.7}
\]

Similarly,

\[
\frac{\partial f}{\partial \hat{u}_M}
= \frac{\hat{u}_{M-1}}{2\sum_{k=1}^{M-2}\hat{u}_{k+1}^2}
= \frac{\cos((M-1)\omega_0)}{\alpha^2\sum_{k=2}^{M-1}\cos^2(k\omega_0)}. \tag{B.8}
\]
The derivatives with respect to \(\hat{u}_2\) and \(\hat{u}_{M-1}\) are slightly harder to calculate; they make use of the linear prediction principle. By the quotient rule,

\[
\frac{\partial f}{\partial \hat{u}_2}
= \frac{2\sum_{k=1}^{M-2}\hat{u}_{k+1}^2\,(\hat{u}_1 + 2\hat{u}_3)
- 4\hat{u}_2\sum_{k=1}^{M-2}\hat{u}_{k+1}\left[\hat{u}_k + \hat{u}_{k+2}\right]}
{\left(2\sum_{k=1}^{M-2}\hat{u}_{k+1}^2\right)^2}
= \frac{(u_1 + 2u_3) - 4u_2\cos(\omega_0)}{2\sum_{k=1}^{M-2}u_{k+1}^2}. \tag{B.9}
\]

The numerator is almost in the form of the linear prediction principle \(u_{k-1} + u_{k+1} = 2\cos(\omega_0)u_k\); applying it (with \(u_3 = 2\cos(\omega_0)u_2 - u_1\)) reduces the numerator to \(-u_1\). Therefore,

\[
\frac{\partial f}{\partial \hat{u}_2}
= \frac{-\cos(\omega_0)}{\alpha^2\sum_{k=2}^{M-1}\cos^2(k\omega_0)}. \tag{B.10}
\]
Similarly,

\[
\frac{\partial f}{\partial \hat{u}_{M-1}}
= \frac{-\cos(M\omega_0)}{\alpha^2\sum_{k=2}^{M-1}\cos^2(k\omega_0)}. \tag{B.11}
\]
The rest of the entries of \(\mathbf{v}\) can be shown to be zero. For example,

\[
\frac{\partial f}{\partial \hat{u}_3}
= \frac{2\sum_{k=1}^{M-2}\hat{u}_{k+1}^2\cdot 2(\hat{u}_2 + \hat{u}_4)
- 4\hat{u}_3\sum_{k=1}^{M-2}\hat{u}_{k+1}\left[\hat{u}_k + \hat{u}_{k+2}\right]}
{\left(2\sum_{k=1}^{M-2}\hat{u}_{k+1}^2\right)^2}
= \frac{2(u_2 + u_4) - 4\cos(\omega_0)u_3}{2\sum_{k=1}^{M-2}u_{k+1}^2}. \tag{B.12}
\]

This numerator is exactly in the form of the linear prediction principle, so

\[
\frac{\partial f}{\partial \hat{u}_3} = 0. \tag{B.13}
\]

The same argument applies to all of the middle coefficients:

\[
\frac{\partial f}{\partial \hat{u}_3} = \frac{\partial f}{\partial \hat{u}_4} = \cdots = \frac{\partial f}{\partial \hat{u}_{M-2}} = 0. \tag{B.14}
\]
It can easily be shown that

\[
\beta(\omega_0) = \sum_{k=2}^{M-1}\cos^2(k\omega_0) \approx \frac{M-2}{2}. \tag{B.15}
\]
Therefore, we can find the vector \(\mathbf{v}\) as

\[
\mathbf{v} = \frac{2\mathbf{h}}{\alpha^2(M-2)}, \tag{B.16}
\]

where \(\mathbf{h} \in \mathbb{R}^{M\times 1}\) is defined as

\[
h_i = \begin{cases}
\cos(2\omega_0), & i = 1\\
-\cos(\omega_0), & i = 2\\
-\cos(M\omega_0), & i = M-1\\
\cos((M-1)\omega_0), & i = M\\
0, & \text{elsewhere.}
\end{cases} \tag{B.17}
\]
The variance can now be found by substituting (B.16) into (B.1):

\[
\operatorname{Var}(\hat{\omega}_0)
= \frac{\mathbf{v}^T\mathbf{C}\mathbf{v}}{\sin^2(\omega_0)}
= \frac{4\,\mathbf{h}^T\mathbf{C}'\mathbf{h}}{\gamma\sin^2(\omega_0)\,M(M-2)^2}, \tag{B.18}
\]

where \(\gamma\) is the SNR. Equation (8) is obtained by substituting (B.17) into (B.18) and expanding.
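For numerical work, (B.18) can be evaluated directly by constructing \(\mathbf{h}\) from (B.17) and \(\mathbf{C}'\) from (A.13). The sketch below is our own helper, assuming the usual SNR definition \(\gamma = \alpha^2/2\sigma^2\); its name and interface are illustrative, not from the paper:

```python
import numpy as np

# Evaluate the printed variance expression (B.18): build h per (B.17) and the
# normalized covariance C' per (A.13), then combine with gamma = alpha^2/(2 sigma^2).
def mc_hla_variance(w0, M, snr):
    k = np.arange(1, M + 1)
    K, Mm = np.meshgrid(k, k, indexing="ij")
    Cp = ((2 * M - np.abs(Mm - K)) / M * np.cos((Mm - K) * w0)
          + (2 * M - K - Mm) / M * np.cos((Mm + K) * w0)
          + (1.0 / snr) * np.eye(M))      # 2 sigma^2 / alpha^2 = 1/gamma
    h = np.zeros(M)
    h[0] = np.cos(2 * w0)                 # i = 1
    h[1] = -np.cos(w0)                    # i = 2
    h[M - 2] = -np.cos(M * w0)            # i = M - 1
    h[M - 1] = np.cos((M - 1) * w0)       # i = M
    return 4 * h @ Cp @ h / (snr * np.sin(w0) ** 2 * M * (M - 2) ** 2)

var_pred = mc_hla_variance(w0=0.7, M=100, snr=10.0)  # 10 dB SNR
print(var_pred)
```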
Ashkan Ashrafi received his B.Sc. and M.Sc. degrees in Electronics Engineering from K. N. Toosi University of Technology, Tehran, Iran, and his M.S.E. and Ph.D. degrees in Electrical Engineering from the University of Alabama in Huntsville, Huntsville, AL, USA, in 1991, 1995, 2003 and 2006, respectively (all with the highest honors). He is currently an Associate Professor of Electrical and Computer Engineering at San Diego State University, San Diego, CA, where he is also the director of the Signal Processing Research Laboratory. He received the Outstanding Faculty Award of the Department of Electrical and Computer Engineering, San Diego State University, in both 2012 and 2013. He served as an associate editor for IEEE Transactions on Circuits and Systems Part I: Regular Papers between 2011 and 2014 and received its Best Associate Editor Award in 2013. During his graduate studies, he also received the 2005 Iliana Martin Chittur Outstanding Graduate Student Award and the 2005 National Engineer's Week Outstanding Graduate Student Award from the College of Engineering, University of Alabama in Huntsville. He is a member of the Phi Kappa Phi, Sigma Xi and Eta Kappa Nu honor societies. His research interests are digital and statistical signal processing, graph theory and graph signal processing, estimation theory, biomedical signal processing, brain connectivity analysis and audio processing.
Michael Martinez received his B.S. degrees in Mathematics and Biomedical Engineering in 2014 from the University of Arizona and M.S. degree in Electrical Engineering from San Diego State University in 2016. He is currently a Ph.D. student at Duke University working on radar signal processing. His interests include physics and statistics-based array signal processing for estimation, detection, tracking, and classification purposes.