SIGNAL PROCESSING
Signal Processing 44 (1995) 211-222

Fast high-resolution approximation of the Hartley transform at arbitrary frequencies

Jie-Cherng Liu*, Hwang-Cheng Chiang
Department of Electrical Engineering, Tatung Institute of Technology, 40 Chung-Shan North Road 3rd Sec., Taipei 10451, Taiwan, ROC

Received 23 June 1994; revised 18 January 1995
Abstract

We propose a computationally efficient approximation for numerically evaluating the Hartley transform of a sequence at arbitrary frequencies. With this algorithm, the Hartley transform of a sequence of length N can be calculated efficiently at M arbitrary frequency points when M is moderately large. The algorithm is based on the fact that the Hartley transform of a uniformly sampled signal at an arbitrary frequency can be expressed as a weighted sum of the discrete Hartley transform (DHT) coefficients of the signal. Since some summation terms have little effect on the result, only a few dominant terms are retained, chosen such that the approximation error does not exceed a specified limit. We show that for moderately large M the proposed algorithm is more efficient than direct computation. The computational complexity of the algorithm and its error behavior for white noise and sinusoids are also described.
* Corresponding author. Fax: 886-2-5941371. E-mail: [email protected].
Keywords: Fast approximation algorithm; Hartley transform
1. Introduction

Since Bracewell introduced the discrete Hartley transform (DHT) [2,3] as a new member of the family of sinusoidal transforms, a number of applications in digital signal processing have been developed, such as interpolation [1,4] and digital filter design [5,6]. All these applications rest on the fact that the Hartley transform has two main advantages over the Fourier transform. First, it involves only real operations. Second, the transform kernel is the same for the forward and inverse transforms, so any hardware built to compute the forward transform can compute the inverse transform without modification. The DHT provides samples of the Hartley transform at equally spaced frequencies, and the fast Hartley transform (FHT) algorithm [7,8] computes these coefficients efficiently. For some applications, however, the frequency samples need to be unequally spaced, and the analysis needs to be carried out over a small portion of the band at more closely spaced samples [9,10]. In this paper we propose an algorithm for computing the Hartley transform at arbitrary frequencies. The method is based on the fact that the Hartley transform of a finite-length sequence at an arbitrary frequency can be expressed as a weighted sum of its DHT coefficients, thus retaining the computational efficiency of the FHT. The weights are approximated by a few dominant terms, so that the Hartley transform is obtained within a specified error limit but at a dramatically improved speed. Finally, the computational complexity of the algorithm is examined, and the error behavior for typical signals such as white noise and sinusoids is analyzed.

2. Hartley transform at arbitrary frequencies

Let x(n) be a finite sequence of length N and let X(k) be its corresponding N-point DHT. Then we have the following transform pair:
X(k) = \sum_{n=0}^{N-1} x(n) cas(2πnk/N),   k = 0, 1, ..., N-1,   (1)

x(n) = (1/N) \sum_{k=0}^{N-1} X(k) cas(2πnk/N),   n = 0, 1, ..., N-1,   (2)

where cas(·) = cos(·) + sin(·). The discrete-time Hartley transform of x(n) is defined as

H(ω) = \sum_{n=0}^{N-1} x(n) cas(nω),   0 ≤ ω < 2π.   (3)
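As a concrete reference point (not part of the original paper), the following short NumPy sketch implements Eqs. (1) and (3) directly; the function and variable names are our own. At the uniform frequencies ω = 2πk/N the two definitions coincide, which is a convenient sanity check.

```python
import numpy as np

def dht(x):
    # N-point DHT, Eq. (1): X(k) = sum_n x(n) * cas(2*pi*n*k/N), with cas(t) = cos(t) + sin(t)
    N = len(x)
    t = 2 * np.pi * np.outer(np.arange(N), np.arange(N)) / N
    return (np.cos(t) + np.sin(t)) @ x

def hartley_direct(x, w):
    # Discrete-time Hartley transform at an arbitrary frequency w, Eq. (3)
    n = np.arange(len(x))
    return np.sum(x * (np.cos(n * w) + np.sin(n * w)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(64)
    X = dht(x)
    k = 5
    print(X[k], hartley_direct(x, 2 * np.pi * k / len(x)))  # identical up to rounding
```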
Substituting Eq. (2) into Eq. (3), we have

H(ω) = (1/N) \sum_{k=0}^{N-1} X(k) \sum_{n=0}^{N-1} cas(2πnk/N) cas(nω)
     = (1/N) \sum_{k=0}^{N-1} X(k) \sum_{n=0}^{N-1} { cos[n(2πk/N - ω)] + sin[n(2πk/N + ω)] },

which can be rewritten as

H(ω) = (1/N) \sum_{k=0}^{N-1} X(k) { -[sin(Nω/2)/sin(πk/N - ω/2)] cos(Nω/2 + πk/N - ω/2)
         + [sin(Nω/2)/sin(πk/N + ω/2)] sin(Nω/2 - πk/N - ω/2) }
     = \sum_{k=0}^{N-1} X(k) [ φ_1(2πk/N - ω) + φ_2(2πk/N + ω) ],   (4)

where

φ_1(ω) = sin(Nω/2) cos[(N-1)ω/2] / [N sin(ω/2)]   (5)

and

φ_2(ω) = sin(Nω/2) sin[(N-1)ω/2] / [N sin(ω/2)].   (6)

Notice that these functions have the properties

φ_1(2πk/N) = 1 for k = 0 and φ_1(2πk/N) = 0 for k = 1, 2, ..., N-1,
φ_2(2πk/N) = 0 for k = 0, 1, ..., N-1,

so that at the uniform frequencies ω = 2πm/N Eq. (4) reduces to the DHT coefficients X(m). Eq. (4) specifies that the Hartley transform of a uniformly sampled signal at any frequency can be obtained by interpolating the DHT coefficients of the signal. Since, so far, there is no fast algorithm for computing this interpolation efficiently, we develop in the following a fast method for approximating the Hartley transform at arbitrary frequencies.

Let x(n) be a finite sequence of length N whose Hartley transform H(ω_k) is required at M arbitrary frequencies ω_k, k = 1, 2, ..., M, which can be expressed as

ω_k = 2π(m_k + ε_k)/N,   (7)

where m_k is the integer part such that |ε_k| < 0.5. Given an arbitrary frequency ω_k, we can find a pair (m_k, ε_k) to represent it. For brevity, the subscript k will be dropped hereafter. Substituting Eq. (7) into Eq. (4), we obtain

H(m, ε) = (1/N) \sum_{k=0}^{N-1} X(k) { [sin(πε)/sin(π(m-k+ε)/N)] cos[πε - π(m-k+ε)/N]
            + [sin(πε)/sin(π(m+k+ε)/N)] sin[πε - π(m+k+ε)/N] }
        = (1/N) \sum_{k=0}^{N-1} X(k) sin(πε) { sin(πε) + cos(πε) cot[π(m-k+ε)/N] }
            + (1/N) \sum_{k=0}^{N-1} X(k) sin(πε) { sin(πε) cot[π(m+k+ε)/N] - cos(πε) }
        = [sin^2(πε)/N] \sum_{k=0}^{N-1} X(k) { cot[π(m+k+ε)/N] + 1 }
            + [sin(πε)cos(πε)/N] \sum_{k=0}^{N-1} X(k) { cot[π(m-k+ε)/N] - 1 }.   (8)

Let

H_1(m, ε) = h_1(ε) \sum_{k=0}^{N-1} X(k) [C(m+k, ε) + 1]   (9)

and

H_2(m, ε) = h_2(ε) \sum_{k=0}^{N-1} X(k) [C(m-k, ε) - 1],   (10)
where

h_1(ε) = sin^2(πε)/N,   (11)

h_2(ε) = sin(πε)cos(πε)/N,   (12)

C(k, ε) = cot[π(k+ε)/N],   (13)

and C(0, 0) is defined to be zero, so that H(m, ε) = H_1(m, ε) + H_2(m, ε). Eq. (8) thus consists of two summation terms. In the following we analyze each summation term and obtain an expression suitable for approximation. First we rewrite Eq. (9) as

H_1(m, ε) = h_1(ε) [N x(0) + X(N-m) C(0, ε) + A(m) + R_1(m, ε)],   m = 0, 1, ..., N-1,   (14)

where

A(m) = \sum_{k=0}^{N-1} X(k) C(m+k, 0)   (15)

and

R_1(m, ε) = \sum_{k=0, (m+k) mod N ≠ 0}^{N-1} X(k) [C(m+k, ε) - C(m+k, 0)].   (16)

Since X(k) and C(k, ε) are periodic in k with period N, Eq. (16) can be written as

R_1(m, ε) = \sum_{k=1}^{N-1} X(N-k-m) [C(-k, ε) - C(-k, 0)].   (17)

Moreover, A(m) defined in Eq. (15) is the mth term of the circular convolution of X(k) and C(-k, 0). Let the DHT of C(k, 0) be c(n); it can be shown (see Appendix A) that

c(n) = 0 for n = 0, and c(n) = N - 2n for n = 1, 2, ..., N-1.   (18)

Thus A(m) is the mth term of the inverse DHT (IDHT) of N x(N-n) c(N-n), which is the same as the mth term of the DHT of x(n) c(n). That is,

A(m) = \sum_{n=0}^{N-1} x(n) c(n) cas(2πnm/N),   m = 0, 1, ..., N-1.   (19)

Combining A(m) with the term N x(0) in Eq. (14), we obtain a new sequence Y(m),

Y(m) = \sum_{n=0}^{N-1} x(n) (N - 2n) cas(2πnm/N),   m = 0, 1, ..., N-1.   (20)

So H_1(m, ε) can be written as

H_1(m, ε) = h_1(ε) [Y(m) + X(N-m) C(0, ε) + R_1(m, ε)].   (21)

Similarly, H_2(m, ε) can be written as

H_2(m, ε) = h_2(ε) [-N x(0) + X(m) C(0, ε) + B(m) + R_2(m, ε)],

where

R_2(m, ε) = \sum_{k=1}^{N-1} X(k+m) [C(-k, ε) - C(-k, 0)]   (22)

and

B(m) = \sum_{k=0}^{N-1} X(k) C(m-k, 0).   (23)

It is obvious that B(m) is the mth term of the circular convolution of X(k) and C(k, 0); that is, B(m) is the mth term of the IDHT of N x(N-n) c(n), which is the same as the mth term of the DHT of x(N-n) c(n). Hence,

B(m) = \sum_{n=1}^{N-1} x(N-n) c(n) cas(2πnm/N),   m = 0, 1, ..., N-1.   (24)
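Before the remainder terms are approximated, it may help to see the exact interpolation of Eqs. (8)-(13) spelled out in code. The sketch below is ours, not the paper's; it evaluates H(m, ε) from the DHT coefficients and can be checked against direct evaluation of Eq. (3).

```python
import numpy as np

def dht(x):
    F = np.fft.fft(x)
    return F.real - F.imag        # DHT of a real sequence via the FFT

def hartley_exact_interp(X, m, eps):
    # Exact H at w = 2*pi*(m + eps)/N from the DHT coefficients X(k), Eqs. (8)-(13)
    N = len(X)
    if eps == 0.0:
        return X[m % N]                                        # Eq. (4) reduces to the DHT sample
    h1 = np.sin(np.pi * eps) ** 2 / N                          # Eq. (11)
    h2 = np.sin(np.pi * eps) * np.cos(np.pi * eps) / N         # Eq. (12)
    k = np.arange(N)
    C_plus = 1.0 / np.tan(np.pi * (m + k + eps) / N)           # C(m+k, eps), Eq. (13)
    C_minus = 1.0 / np.tan(np.pi * (m - k + eps) / N)          # C(m-k, eps)
    return h1 * np.sum(X * (C_plus + 1.0)) + h2 * np.sum(X * (C_minus - 1.0))

if __name__ == "__main__":
    x = np.random.default_rng(1).standard_normal(64)
    X = dht(x)
    m, eps = 10, 0.37
    w = 2 * np.pi * (m + eps) / len(x)
    direct = np.sum(x * (np.cos(np.arange(len(x)) * w) + np.sin(np.arange(len(x)) * w)))
    print(hartley_exact_interp(X, m, eps), direct)             # agree up to rounding
```

This exact interpolation still costs O(N) operations per frequency on top of one DHT; the truncation of R_1 and R_2 developed next is what removes that cost.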
Combining B(m) with -N x(0), we obtain a new sequence P(m),

P(m) = B(m) - N x(0)
     = \sum_{n=1}^{N-1} x(n) c(N-n) cas(2πn(N-m)/N) - N x(0)
     = -\sum_{n=0}^{N-1} x(n) (N - 2n) cas(2πn(N-m)/N),   m = 0, 1, ..., N-1,   (25)

in which we have used the fact that c(n) is periodic and odd in the sense that -c(n) = c(N-n). Thus P(m) corresponds to -Y(N-m), and the expression for H_2(m, ε) becomes

H_2(m, ε) = h_2(ε) [-Y(N-m) + X(m) C(0, ε) + R_2(m, ε)].   (26)

Therefore, the Hartley transform of x(n) at an arbitrary frequency is equal to

H(m, ε) = h_1(ε) [Y(m) + X(N-m) C(0, ε) + R_1(m, ε)] + h_2(ε) [-Y(N-m) + X(m) C(0, ε) + R_2(m, ε)].   (27)

We now turn our attention to R_1(m, ε) and R_2(m, ε) in order to derive a fast computation. Instead of all N - 1 summation terms, only some dominant terms are computed directly, such that the error remains within the allowable limit. The idea is based on the fact that [C(-k, ε) - C(-k, 0)] tends to zero when both k and N - k are much greater than |ε|. Thus, for an acceptable error specification, the summations (17) and (22) need to be evaluated only for those k for which |k| or |N - k| is of the order of |ε|. Assume that [C(-k, ε) - C(-k, 0)] is negligible for all values of k from L_1 to L_2, and denote L_2 - N + 1 by a and L_1 - 1 by b for simplicity. Eqs. (17) and (22) can then be rewritten as

R_1(m, ε) ≈ \sum_{k=a, k≠0}^{b} X(N-k-m) { cot(πk/N) - cot[π(k-ε)/N] }   (28)

and

R_2(m, ε) ≈ \sum_{k=a, k≠0}^{b} X(k+m) { cot(πk/N) - cot[π(k-ε)/N] }.   (29)

Furthermore, we apply the approximation cot θ ≅ 1/θ, which is valid for |θ| ≪ π/2, to C(0, ε) = cot(πε/N) ≈ N/(πε). The Hartley transform at an arbitrary frequency can then be approximated as

H(m, ε) ≈ h_1(ε) [ Y(m) + (N/πε) X(N-m) + \sum_{k=a, k≠0}^{b} X(N-k-m) w(k, ε) ]
          + h_2(ε) [ -Y(N-m) + (N/πε) X(m) + \sum_{k=a, k≠0}^{b} X(k+m) w(k, ε) ],   (30)

with w(k, ε) = cot(πk/N) - cot[π(k-ε)/N]. In Eq. (30), Y(m) and Y(N-m) can be obtained simultaneously by one FHT operation; they are common to all frequencies and thus need to be computed only once. The summation terms, however, have to be evaluated for each frequency. We denote the direct-computation set {a, a+1, ..., b} by {D} or D_{a,b}. In the next section we derive the relations between the computational error and the direct-computation set {D} for various typical signals.
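A compact sketch of the resulting algorithm (our own illustration, with hypothetical function names) is given below. The DHT is computed through NumPy's FFT as a stand-in for an FHT, and C(0, ε) is replaced by N/(πε) as in Eq. (30).

```python
import numpy as np

def dht(x):
    F = np.fft.fft(x)
    return F.real - F.imag        # DHT of a real sequence via the FFT (stand-in for an FHT)

def precompute(x):
    # One-time work shared by all frequencies: X(k) and Y(m) = DHT of x(n)*(N - 2n), Eq. (20)
    N = len(x)
    return dht(x), dht(x * (N - 2.0 * np.arange(N)))

def hartley_approx(X, Y, m, eps, a, b):
    # Approximate H at w = 2*pi*(m + eps)/N, Eq. (30), keeping only k in D = {a, ..., b}\{0}
    N = len(X)
    if eps == 0.0:
        return X[m % N]
    h1 = np.sin(np.pi * eps) ** 2 / N
    h2 = np.sin(np.pi * eps) * np.cos(np.pi * eps) / N
    c0 = N / (np.pi * eps)                                      # cot(pi*eps/N) ~ N/(pi*eps)
    k = np.array([j for j in range(a, b + 1) if j != 0])
    w = 1.0 / np.tan(np.pi * k / N) - 1.0 / np.tan(np.pi * (k - eps) / N)   # weights of Eqs. (28)-(29)
    r1 = np.sum(X[(N - k - m) % N] * w)                         # Eq. (28)
    r2 = np.sum(X[(k + m) % N] * w)                             # Eq. (29)
    return (h1 * (Y[m % N] + c0 * X[(N - m) % N] + r1)
            + h2 * (-Y[(N - m) % N] + c0 * X[m % N] + r2))
```

For a length-64 test sequence and a set such as D_{-5,5}, this reproduces the direct evaluation of Eq. (3) closely, with the error shrinking rapidly as D is widened, while touching only about a dozen DHT coefficients per frequency.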
3. Error analysis and computational complexity

When Eq. (30) is used to obtain the Hartley transform of x(n) at an arbitrary frequency, error is introduced by the cotangent approximation and by neglecting the summation terms of Eqs. (17) and (22) from k = L_1 to L_2. For any small angle θ ≪ π/2, however, the cotangent approximation results in negligible error, so the main error source is the reduction of computation terms. Therefore, the overall error can be expressed as

e(m, ε) = h_1(ε) \sum_{k=L_1}^{L_2} X(N-k-m) w(k, ε) + h_2(ε) \sum_{k=L_1}^{L_2} X(k+m) w(k, ε),   (31)

where

w(k, ε) = cot(πk/N) - cot[π(k-ε)/N].   (32)

Because w(k, ε) is a sharply decreasing function of |k| in (0, N/2), the upper bound of the error e(m, ε) is dominated by the neglected terms nearest k = L_1 and k = L_2. Therefore, for the maximum error reduction with respect to the number of directly computed points, L_1 and L_2 are chosen to be nearly symmetrical with respect to N/2, that is, |L_1 - (N - L_2)| ≤ 1, or |a + b| ≤ 1. In the following, the condition |a + b| = 0 is chosen for the experimental demonstrations.
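The effect of the set D_{a,b} is easy to probe numerically. The snippet below (ours, reusing the dht, precompute, hartley_exact_interp and hartley_approx helpers from the earlier sketches) measures the worst-case deviation of the approximation from the exact interpolation over all m for a random test sequence; the normalization by the r.m.s. spectrum level is only illustrative.

```python
import numpy as np
# assumes dht, precompute, hartley_exact_interp and hartley_approx from the earlier sketches

N = 64
x = np.random.default_rng(2).standard_normal(N)
X, Y = precompute(x)
eps = 0.5                                  # worst case
for a, b in [(-2, 2), (-5, 5), (-10, 10)]:
    err = max(abs(hartley_approx(X, Y, m, eps, a, b) - hartley_exact_interp(X, m, eps))
              for m in range(N))
    print((a, b), err / np.sqrt(N))        # error drops rapidly as D is widened
```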
3.1. Error with sinusoids

From Appendix B we obtain the normalized error magnitudes for typical signals: the complex sinusoid, the real sinusoid, and white noise. Consider a complex sinusoid x(n) = A exp(j[ωn + θ]), where θ is an arbitrary phase constant. From Eq. (B.5) in Appendix B, the error magnitude relative to the spectral magnitude at that frequency is a function of m, and the upper bound of the normalized error is

e_N(m, ε) ≤ [ |sin(πε)| (sin^2(πε) + |sin(πε)cos(πε)|) / N^2 ]
            × { | \sum_{k=L_1}^{L_2} (1 + j cot[π(ε-k)/N]) w(k, ε) |
              + | \sum_{k=L_1}^{L_2} (1 + j cot[π(2m+ε+k)/N]) w(k, ε) | }.   (33)

Fig. 1(a) shows the upper bound of e_N(m, ε) versus ε, averaged over all m (m = 0, 1, ..., N-1), for different sets {D} with N = 64. Fig. 1(b) illustrates the relation between e_N(m, ε) and N for different sets {D} assuming the worst case ε = 0.5. From Fig. 1(b) we conclude that e_N(m, ε) is relatively unaffected by N.

Fig. 1. (a) Normalized error versus ε for the complex sinusoid with different D_{a,b} and N = 64. (b) Variation of the average normalized error with N for the complex sinusoid with different D_{a,b} and ε = 0.5.

For real sinusoids, Eq. (B.8) gives the upper bound of the normalized error,

e_N(m, ε) ≤ [ |sin(πε)| (sin^2(πε) + |sin(πε)cos(πε)|) / (√2 N^2) ]
            × { | \sum_{k=L_1}^{L_2} (1 + j cot[π(ε-k)/N]) w(k, ε) |
              + | \sum_{k=L_1}^{L_2} (1 + j cot[π(2m+ε+k)/N]) w(k, ε) |
              + | \sum_{k=L_1}^{L_2} (1 - j cot[π(2m+ε+k)/N]) w(k, ε) |
              + | \sum_{k=L_1}^{L_2} (1 - j cot[π(ε-k)/N]) w(k, ε) | }.   (34)

Fig. 2(a) shows the variation of the normalized error versus ε at the spectral frequency for different sets {D} and N = 64. Fig. 2(b) shows the variation of the normalized error versus N, averaged over all m (m = 0, 1, ..., N-1), for different sets {D} with the worst case ε = 0.5. From Fig. 2 it is obvious that the averaged normalized error is still relatively unaffected by N and depends mainly on ε and {D}.

Fig. 2. (a) Average normalized error versus ε for the real sinusoid with different D_{a,b} and N = 64. (b) Variation of the normalized error with N for the real sinusoid with different D_{a,b} and ε = 0.5.

3.2. Error with white noise

From (B.14) in Appendix B, we obtain the normalized error for white noise,

e_N(m, ε) = (|sin(πε)|/N) { \sum_{k=L_1}^{L_2} [w(k, ε)]^2
            + sin(2πε) \sum_{k=L_1}^{L_2} \sum_{p=L_1}^{L_2} δ(\tilde{k} - \tilde{p}) w(k, ε) w(p, ε) }^{1/2},   (35)

where \tilde{k} = ((N-k-m))_N and \tilde{p} = ((p+m))_N. For a fixed N, the normalized r.m.s. error depends on m, ε and the summation terms from L_1 to L_2. For N = 64, the variation of e_N(m, ε), averaged over all m (m = 0, 1, ..., N-1), with ε for different {D} is plotted in Fig. 3(a). Fig. 3(b) shows the relation between the averaged e_N(m, ε) and N for different {D} with the worst case ε = 0.5. From Fig. 3 we conclude that the normalized error for white noise is also relatively unaffected by N and depends primarily on ε and {D}. Table 1 lists the maximum allowable ε for various sets {D} such that the upper bound of the error is less than 0.5%.

Fig. 3. (a) Average normalized error versus ε for white noise with different D_{a,b} and N = 64. (b) Variation of the normalized error with N for white noise with different D_{a,b} and ε = 0.5.

Table 1
Allowable ε for different sets of {D} with 0.5% error

  ε        a     b
           -9    10
  0.45     -8     8
  0.40     -6     7
  0.35     -5     6
  0.30     -4     5
  0.25     -3     4
  0.20     -3     3
  0.15     -2     2
  0.10     -1     1
  0.05      0     0
  0.05      0     0
  0.10     -1     1
  0.15     -2     2
  0.20     -4     3
  0.25     -5     5
  0.30     -6     6
  0.35     -8     7
  0.40     -9     8
  0.50    -10     9

3.3. Computational complexity

The algorithm needs N real multiplications for forming y(n) = x(n)(N - 2n) from x(n), two N-point FHT computations which are common to all frequencies, and the multiplications corresponding to the summation terms for each frequency. Assume that the FHT computation requires N log_2 N real multiplications and that the values of h_1(ε), h_2(ε) and Nε/[π(k - ε)] are precomputed and stored. For computing the M arbitrary frequencies, assume that each frequency requires N_d direct-computation terms on the average, including the (N/πε)X(N-m) and (N/πε)X(m) terms. For example, with the sets chosen in Table 1,

N_d = 2 · [0.05 · (20 + 17 + ⋯ + 20 + 20)] = 21.2.

Therefore, the number of real multiplications per output point is

N_o = 2 + N_d + (N + 2N log_2 N)/M.

To obtain the spectrum at an arbitrary frequency by using Eq. (3), direct computation requires N multiplications. For the proposed algorithm with M = N and N_d = 21.2, N_o = 2 log_2 N + 24.2, which is approximately equivalent to direct computation using Eq. (3) with N = 34. For M larger than N, the algorithm is more efficient than direct evaluation. When M decreases to about 20, the performance of the proposed algorithm and of the direct method is approximately the same. This is shown in Fig. 4, and we conclude that the algorithm behaves well when a large number of frequency points is needed for analysis.

Fig. 4. Required multiplications for various sequence lengths N and numbers of frequency points M.
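The operation count is simple enough to tabulate directly; the helper below (ours) evaluates N_o for a few combinations of N and M, using the paper's example value N_d = 21.2.

```python
import math

def mults_per_point(N, M, Nd=21.2):
    # N_o = 2 + N_d + (N + 2*N*log2(N)) / M   (Section 3.3)
    return 2 + Nd + (N + 2 * N * math.log2(N)) / M

for N in (64, 256, 1024):
    for M in (20, N, 4 * N):
        print(f"N={N:5d}  M={M:5d}  N_o={mults_per_point(N, M):7.1f}  direct={N}")
```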
4. Conclusions

In this paper we have established the relation between the Hartley transform at arbitrary frequencies and the DHT coefficients of a signal. The analysis shows that only some dominant terms are needed to evaluate the Hartley transform at arbitrary frequencies within an allowed error limit. An efficient algorithm was therefore presented to compute the Hartley transform of a uniformly sampled sequence at arbitrary frequencies. The computational complexity of the algorithm depends on the length of the input sequence and on the number of frequencies to be computed. The algorithm is very flexible in that the output spacing is arbitrary and N need not be the same as M. For large M and N it is more efficient than direct computation, and it is well suited to high-resolution narrow-band spectral analysis and to time interpolation of data from one sampling rate to any other sampling rate.
Appendix A

Consider a sequence c(n) which is periodic with period N and is defined as

c(n) = 0 for n = 0, and c(n) = N - 2n for n = 1, 2, ..., N-1.   (A.1)

Its IDHT is

C(k) = (1/N) \sum_{n=0}^{N-1} c(n) cas(2πnk/N)
     = (1/N) [ N \sum_{n=1}^{N-1} cas(2πnk/N) - 2 \sum_{n=1}^{N-1} n cas(2πnk/N) ],   k = 0, 1, ..., N-1.   (A.2)

The first term of the right-hand side of (A.2) contains

\sum_{n=1}^{N-1} cas(2πnk/N) = N - 1 for k = 0, and -1 for k = 1, 2, ..., N-1,   (A.3)

and the second sum is

\sum_{n=1}^{N-1} n cas(2πnk/N) = N(N-1)/2 for k = 0, and -(N/2)[1 + cot(πk/N)] for k = 1, 2, ..., N-1,   (A.4)

since

\sum_{n=1}^{N-1} n cos(2πnk/N) = N(N-1)/2 for k = 0, and -N/2 for k = 1, 2, ..., N-1,   (A.5)

and

\sum_{n=1}^{N-1} n sin(2πnk/N) = 0 for k = 0, and -(N/2) cot(πk/N) for k = 1, 2, ..., N-1.   (A.6)

Therefore, Eq. (A.2) can be simplified to

C(k) = 0 for k = 0, and C(k) = cot(πk/N) for k = 1, 2, ..., N-1,   (A.7)

which is exactly C(k, 0). Thus the DHT of C(k, 0) is c(n).
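The identity established in Appendix A is easy to confirm numerically; the following check (ours) computes the DHT of C(k, 0) with the FFT and compares it against c(n).

```python
import numpy as np
# Check of Appendix A: the DHT of C(k,0) = cot(pi*k/N), with C(0,0) := 0, equals
# c(n) = 0 for n = 0 and N - 2n for n = 1, ..., N-1.

N = 16
k = np.arange(N)
C = np.zeros(N)
C[1:] = 1.0 / np.tan(np.pi * k[1:] / N)
F = np.fft.fft(C)
dht_of_C = F.real - F.imag                       # DHT of a real sequence via the FFT
c = np.where(k == 0, 0.0, N - 2.0 * k)
print(np.allclose(dht_of_C, c))                  # True
```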
Appendix B

The main error resulting from the approximation procedure was given in Section 3 and is rewritten below for convenience:

e(m, ε) = h_1(ε) \sum_{k=L_1}^{L_2} X(N-k-m) w(k, ε) + h_2(ε) \sum_{k=L_1}^{L_2} X(k+m) w(k, ε),   (B.1)

where

w(k, ε) = cot(πk/N) - cot[π(k-ε)/N].   (B.2)
B.1. Error with sinusoids

If the analyzed signal is a complex sinusoid x(n) = A exp(j[ωn + θ]), n = 0, 1, ..., N-1, with ω = 2π(m+ε)/N and θ an arbitrary phase term, then the DHT of x(n) is

X(k) = (A/4) e^{jθ} (1 - e^{j2πε}) { (1-j)(1 + j cot[π(m+ε+k)/N]) + (1+j)(1 + j cot[π(m+ε-k)/N]) },
       k = 0, 1, ..., N-1.   (B.3)

Substituting (B.3) into Eq. (31), we obtain the error at the sinusoid frequency as

e(m, ε) = (A/4) e^{jθ} (1 - e^{j2πε}) { [h_1(ε)(1-j) + h_2(ε)(1+j)] \sum_{k=L_1}^{L_2} (1 + j cot[π(ε-k)/N]) w(k, ε)
          + [h_1(ε)(1+j) + h_2(ε)(1-j)] \sum_{k=L_1}^{L_2} (1 + j cot[π(2m+ε+k)/N]) w(k, ε) }.   (B.4)

The upper bound of the normalized error magnitude with respect to the exact spectral magnitude at the desired frequency is then

e_N(m, ε) ≤ [ |sin(πε)| (sin^2(πε) + |sin(πε)cos(πε)|) / N^2 ]
            × { | \sum_{k=L_1}^{L_2} (1 + j cot[π(ε-k)/N]) w(k, ε) | + | \sum_{k=L_1}^{L_2} (1 + j cot[π(2m+ε+k)/N]) w(k, ε) | }.   (B.5)

If x(n) is a real sinusoid instead, x(n) = A cos(ωn + θ), its corresponding DHT is

X(k) = (A/8) e^{jθ} (1 - e^{j2πε}) { (1-j)(1 + j cot[π(m+ε+k)/N]) + (1+j)(1 + j cot[π(m+ε-k)/N]) }
     + (A/8) e^{-jθ} (1 - e^{-j2πε}) { (1+j)(1 - j cot[π(m+ε+k)/N]) + (1-j)(1 - j cot[π(m+ε-k)/N]) }.   (B.6)

Substituting (B.6) into Eq. (31), we obtain the error at the sinusoid frequency as

e(m, ε) = (A/8) e^{jθ} (1 - e^{j2πε}) [h_1(ε)(1-j) + h_2(ε)(1+j)] \sum_{k=L_1}^{L_2} (1 + j cot[π(ε-k)/N]) w(k, ε)
        + (A/8) e^{jθ} (1 - e^{j2πε}) [h_1(ε)(1+j) + h_2(ε)(1-j)] \sum_{k=L_1}^{L_2} (1 + j cot[π(2m+ε+k)/N]) w(k, ε)
        + (A/8) e^{-jθ} (1 - e^{-j2πε}) [h_1(ε)(1+j) + h_2(ε)(1-j)] \sum_{k=L_1}^{L_2} (1 - j cot[π(ε-k)/N]) w(k, ε)
        + (A/8) e^{-jθ} (1 - e^{-j2πε}) [h_1(ε)(1-j) + h_2(ε)(1+j)] \sum_{k=L_1}^{L_2} (1 - j cot[π(2m+ε+k)/N]) w(k, ε).   (B.7)

The upper bound of the normalized error with respect to the exact spectral magnitude at the desired frequency is

e_N(m, ε) = |e(m, ε)|/(AN/2)
          ≤ [ |sin(πε)| (sin^2(πε) + |sin(πε)cos(πε)|) / (√2 N^2) ]
            × { | \sum_{k=L_1}^{L_2} (1 + j cot[π(ε-k)/N]) w(k, ε) | + | \sum_{k=L_1}^{L_2} (1 + j cot[π(2m+ε+k)/N]) w(k, ε) |
              + | \sum_{k=L_1}^{L_2} (1 - j cot[π(2m+ε+k)/N]) w(k, ε) | + | \sum_{k=L_1}^{L_2} (1 - j cot[π(ε-k)/N]) w(k, ε) | }.   (B.8)
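Expression (B.3) can be spot-checked numerically; the snippet below (ours) applies the cas kernel to a complex sinusoid and compares the result with the closed form.

```python
import numpy as np
# Spot check of (B.3): DHT of x(n) = A*exp(j*(w*n + theta)) with w = 2*pi*(m + eps)/N.

N, m, eps, A, theta = 32, 7, 0.3, 1.5, 0.4
n = np.arange(N)
x = A * np.exp(1j * (2 * np.pi * (m + eps) * n / N + theta))
t = 2 * np.pi * np.outer(n, n) / N
X_num = (np.cos(t) + np.sin(t)) @ x              # DHT applied term by term

k = np.arange(N)
cot = lambda z: 1.0 / np.tan(z)
X_formula = (A / 4) * np.exp(1j * theta) * (1 - np.exp(2j * np.pi * eps)) * (
    (1 - 1j) * (1 + 1j * cot(np.pi * (m + eps + k) / N))
    + (1 + 1j) * (1 + 1j * cot(np.pi * (m + eps - k) / N)))
print(np.allclose(X_num, X_formula))             # True
```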
B.2. Error with white noise

Let x(n) be a zero-mean, uncorrelated white sequence of variance σ^2. Then the DHT coefficients X(k) are also uncorrelated. Hence,

E{X(p)X(q)} = Nσ^2 δ(p - q),   (B.9)

where E is the expectation operator and δ(p) is the Kronecker delta function of period N. The mean-square value of e(m, ε) is

E{|e(m, ε)|^2} = E{ [ h_1(ε) \sum_{k=L_1}^{L_2} X(N-k-m) w(k, ε) + h_2(ε) \sum_{k=L_1}^{L_2} X(k+m) w(k, ε) ]
                  × [ h_1(ε) \sum_{p=L_1}^{L_2} X(N-p-m) w(p, ε) + h_2(ε) \sum_{p=L_1}^{L_2} X(p+m) w(p, ε) ] }
               = Nσ^2 [h_1^2(ε) + h_2^2(ε)] \sum_{k=L_1}^{L_2} [w(k, ε)]^2
                 + 2Nσ^2 h_1(ε) h_2(ε) \sum_{k=L_1}^{L_2} \sum_{p=L_1}^{L_2} δ(\tilde{k} - \tilde{p}) w(k, ε) w(p, ε),   (B.10)

where

\tilde{k} = ((N-k-m))_N,   L_1 ≤ k ≤ L_2,   (B.11)

\tilde{p} = ((p+m))_N,   L_1 ≤ p ≤ L_2,   (B.12)

and ((k))_N denotes k modulo N. The normalized root-mean-square (r.m.s.) error is defined as the ratio between the r.m.s. value of the error and that of the signal spectrum. That is,

e_N(m, ε) = [E{|e(m, ε)|^2}]^{1/2} / [E{|X(m)|^2}]^{1/2} = [E{|e(m, ε)|^2}]^{1/2} / (Nσ^2)^{1/2}.   (B.13)

After some manipulation, the normalized r.m.s. error simplifies to

e_N(m, ε) = (|sin(πε)|/N) { \sum_{k=L_1}^{L_2} [w(k, ε)]^2
             + sin(2πε) \sum_{k=L_1}^{L_2} \sum_{p=L_1}^{L_2} δ(\tilde{k} - \tilde{p}) w(k, ε) w(p, ε) }^{1/2}.   (B.14)
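A quick Monte-Carlo experiment (our own check, with a small inline DHT helper) reproduces (B.14): the simulated r.m.s. of the truncation error (B.1) for white noise matches the closed-form prediction.

```python
import numpy as np

def dht(x):
    F = np.fft.fft(x)
    return F.real - F.imag                  # DHT of a real sequence via the FFT

N, m, eps, sigma = 64, 10, 0.4, 1.0
L1, L2 = 11, N - 10                         # neglected range, i.e. D = {-9, ..., 10}
k = np.arange(L1, L2 + 1)
w = 1.0 / np.tan(np.pi * k / N) - 1.0 / np.tan(np.pi * (k - eps) / N)
h1 = np.sin(np.pi * eps) ** 2 / N
h2 = np.sin(np.pi * eps) * np.cos(np.pi * eps) / N

rng = np.random.default_rng(3)
errs = []
for _ in range(2000):                       # Monte-Carlo estimate of the r.m.s. of (B.1)
    X = dht(sigma * rng.standard_normal(N))
    errs.append(h1 * np.sum(X[(N - k - m) % N] * w) + h2 * np.sum(X[(k + m) % N] * w))
simulated = np.sqrt(np.mean(np.square(errs))) / (np.sqrt(N) * sigma)

kt, pt = (N - k - m) % N, (k + m) % N       # prediction from (B.14)
delta = (kt[:, None] == pt[None, :]).astype(float)
predicted = abs(np.sin(np.pi * eps)) / N * np.sqrt(
    np.sum(w ** 2) + np.sin(2 * np.pi * eps) * np.sum(delta * np.outer(w, w)))
print(simulated, predicted)                 # close for a few thousand trials
```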
Acknowledgements

The authors would like to thank the reviewers, whose helpful suggestions contributed significantly to the quality of the final manuscript.

References

[1] J.I. Agbinya, "Fast interpolation algorithm using Hartley transform", Proc. IEEE, Vol. 75, No. 4, April 1987, pp. 523-524.
[2] R.N. Bracewell, "Discrete Hartley transform", J. Opt. Soc. Amer., Vol. 73, No. 12, December 1983, pp. 1832-1835.
[3] R.N. Bracewell, "The fast Hartley transform", Proc. IEEE, Vol. 72, No. 8, August 1984, pp. 1010-1018.
[4] C.-Y. Hsu and T.-P. Lin, "A novel approach to discrete interpolation using subsequence FHT", Electronics Lett., Vol. 24, February 1988, pp. 208-209.
[5] J.-C. Liu and T.-P. Lin, "Running DHT and real-time DHT analyzer", Electronics Lett., Vol. 24, June 1988, pp. 762-763.
[6] J.-C. Liu and T.-P. Lin, "Recursive Hartley filter - A new efficient digital prefilter structure", IEE Proc.-G, Vol. 139, No. 4, August 1992, pp. 438-444.
[7] H. Malvar, "Fast computation of the discrete cosine transform and the discrete Hartley transform", IEEE Trans. Acoust. Speech Signal Process., Vol. 35, No. 10, October 1987, pp. 1484-1485.
[8] L.R. Rabiner, R.W. Schafer and C.M. Rader, "The chirp-z transform and its applications", Bell Syst. Tech. J., Vol. 48, May-June 1969, pp. 1249-1292.
[9] H.V. Sorensen, D.L. Jones, C.S. Burrus and M.T. Heideman, "On computing the discrete Hartley transform", IEEE Trans. Acoust. Speech Signal Process., Vol. 33, No. 4, October 1985, pp. 1231-1238.
[10] R. Sudhakar, R.C. Agarwal and S.C. Dutta Roy, "Fast computation of Fourier transform at arbitrary frequencies", IEEE Trans. Circuits Systems, Vol. 28, No. 10, October 1981, pp. 972-980.