Signal Processing 49 (1996) 217-221

An optimum block adaptive shifting algorithm using the Toeplitz preconditioner

J.S. Lim, K.K. Lee, C.K. Un*

Communications Research Laboratory, Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, 373-1 Kusung-Dong, Yusung-Ku, Taejon, South Korea

Received 23 February 1995; revised 2 November 1995

Abstract

We present a new block adaptive algorithm as a variant of the Toeplitz-preconditioned optimum block adaptive (TOBA) algorithm. The proposed algorithm is formulated by combining the TOBA algorithm with a data-reusing scheme that is realized by processing blocks of data in an overlapping manner, as in the optimum block adaptive shifting (OBAS) algorithm. Simulation results show that the proposed algorithm is superior to the OBAS and TOBA algorithms in both convergence rate and tracking property, regardless of input signal conditioning.


Keywords: Block adaptive filtering; Preconditioning; Data reusing

*Corresponding author. Tel.: 82-42-869-3415; fax: 82-42-869-8520; e-mail: [email protected].

0165-1684/96/$15.00 © 1996 Elsevier Science B.V. All rights reserved
PII S0165-1684(96)00019-9


1. Introduction

Among adaptive filtering algorithms [1, 3, 5-9], the block least-mean-square (BLMS) algorithm [1] is based on the block mean-square error (BMSE) and updates the filter tap weights once for each block of data. Like the LMS algorithm, the BLMS algorithm employs a fixed step size that controls the convergence speed, adaptation accuracy and stability of the adaptive process. Thus, the choice of the step size is critical for satisfactory performance of the BLMS algorithm. The optimum block adaptive (OBA) algorithm [7] employs a time-varying step size that is optimized in a least-squares (LS) sense. The OBA algorithm updates the filter tap weights only once in every data block, so the updates must be in increments small enough to ensure the stability of the adaptive process [2]. As a result, compared with the LMS algorithm, the convergence rate may be slower for a colored input, and the tracking property also becomes worse in a nonstationary environment. Several algorithms employ a data-reusing technique to improve the convergence property; the technique is realized by processing blocks of data in an overlapping manner [6, 7] or by computing weight updates repeatedly from the same block of data [3].

On the other hand, the OBA algorithm is inherently a gradient algorithm based on the steepest-descent method. Consequently, its convergence rate slows down greatly when the eigenvalue spread of the input autocorrelation matrix becomes large. To reduce the dependence of the convergence rate on the eigenvalue spread, many block adaptive algorithms have been proposed [5, 8, 9]. Lim and Un [5] proposed the Toeplitz-preconditioned OBA (TOBA) algorithm, which employs a Toeplitz preconditioner assumed to be a symmetric Toeplitz matrix. It was shown that its convergence rate improves significantly compared with the self-orthogonalizing block adaptive filter (SOBAF) [9], and that TOBA has none of the instability problems of the SOBAF.

The paper is organized as follows. In Section 2, the TOBA algorithm is first reviewed and then the Toeplitz-preconditioned OBAS (TOBAS) algorithm is formulated based on the shifting technique. Convergence properties are evaluated through computer simulations in Section 3. Finally, conclusions are drawn in Section 4.
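To make the fixed-step BLMS update discussed above concrete, a minimal sketch follows. This is our own illustration, not the authors' code; the function and variable names (`blms_update`, `x_hist`) are ours, and the data-matrix layout is the standard transversal-filter convention.

```python
import numpy as np

def blms_update(w, x_hist, d_block, mu):
    """One fixed-step BLMS iteration: filter a whole block, then update once.

    w: (N,) tap weights.  x_hist: (L+N-1,) input samples, oldest first,
    covering the block plus N-1 past samples.  d_block: (L,) desired
    samples.  mu: the fixed step size whose choice governs convergence
    speed, adaptation accuracy and stability.
    """
    N, L = len(w), len(d_block)
    X = np.empty((L, N))
    for i in range(L):
        X[i] = x_hist[i:i + N][::-1]   # row i: [x[n], x[n-1], ..., x[n-N+1]]
    e = d_block - X @ w                # block error vector
    grad = (2.0 / L) * X.T @ e         # negative gradient of the BMSE estimate
    return w + mu * grad, e
```

In a system-identification setting, driving this with successive disjoint blocks of a white input converges toward the unknown filter, but only if `mu` is small enough; that sensitivity is what motivates the optimized step size of the OBA family.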

2. Toeplitz-preconditioned OBAS algorithm

In this paper, boldface upper- and lower-case symbols denote matrices and vectors, respectively. We use j for the block index, k for the iteration index, and superscript T for the transpose of a vector or matrix. We assume that all inputs are stationary and processed in blocks of L data samples, where the block length L is greater than or equal to the filter order N.

It was shown in [5] that the TOBA algorithm (see Appendix A) converges much faster than the OBA and SOBAF algorithms and is robust to changes of the eigenvalue spread of the input autocorrelation matrix. However, its convergence and tracking performance is limited by the fact that the tap weights are updated only once for each block. Thus, to improve the convergence and tracking property of the TOBA algorithm, the "shifting" technique used in the OBAS algorithm [7] can be applied to it. In the OBAS algorithm, the data blocks of processed signals overlap rather than being disjoint as in the OBA algorithm. The oldest K signals are dropped and K new ones are taken in, leaving (L - K) overlapping signals between the previous and present blocks; thus, (L - K) signals from the previous block are reused in the present block iteration. The TOBAS algorithm updates once per K input samples, whereas the TOBA algorithm updates once per L input samples. For a given block of L input samples, the filter tap weights can therefore be updated from once (K = L) to L times (K = 1).

To formulate the TOBAS algorithm by combining the TOBA algorithm with the shifting technique, let us define the number of iterations per data block as M = L/K. The value of M is an integer when K is chosen to be a divisor of the block size L. With the parameter M, we propose an updating procedure in which the filter tap weights are updated along the negative gradient of the BMSE estimate for M - 1


iterations and along the direction vector for the last iteration. For the jth data block, the TOBAS algorithm is formulated as follows. For k = 0, 1, ..., M - 1:

e_k = d_k - X_k w_k,   (1)

g_k = (2/L) X_k^T e_k,   (2)

if k ≠ M - 1:   p_k = g_k,   (3)

if k = M - 1:   t_j = γ t_{j-1} + (1 - γ)(2/L) X_j^T x_j,   T_j p_k = g_k,   (4)

w_{k+1} = w_k + α_k p_k,   with α_k = (L p_k^T g_k) / (2 p_k^T X_k^T X_k p_k);   (5)

if k ≠ M - 1, repeat. At the last iteration of each block, the Toeplitz preconditioner T_j is updated recursively, and the deconvolution involving T_j is solved efficiently by the split-Levinson algorithm [4]. Since the parameter M is defined as an integer that can take values in the interval between one and L, the tap weights are updated M times and, at the same time, the data blocks are shifted M times during the period of a data block. The desired signal vector and the data matrix are shifted by K samples at every iteration; thus, for example, the data matrix denoted by X_k starts with X_{j-1} and becomes X_j after M iterations. In the kth iteration, the desired signal vector and the data matrix are, respectively, given by

d_k = [d_l  d_{l+1}  ...  d_{l+L-1}]^T   (6)

and

X_k = [ x_l        x_{l-1}    ...  x_{l-N+1}
        x_{l+1}    x_l        ...  x_{l-N+2}
        ...        ...        ...  ...
        x_{l+L-1}  x_{l+L-2}  ...  x_{l+L-N} ],   (7)

where l = (j - 1)L + kK + 1.

The TOBAS algorithm is identical to the OBAS algorithm if T_j is fixed to be an identity matrix, and it reduces to the TOBA algorithm when M is limited to 1. The parameter M can be fixed to any integer in the interval 1 ≤ M ≤ L. Thus, the filter tap weights are updated by the OBA algorithm for M - 1 iterations and by the TOBA algorithm for the last iteration. As the value of M increases, the convergence speed of the algorithm is expected to improve, but its computational complexity increases.
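A compact sketch of one TOBAS data block, Eqs. (1)-(5), is given below. This is our own reading of the recursion, with our names: the dense `np.linalg.solve` on the explicitly built Toeplitz matrix stands in for the split-Levinson solver [4], and the recursive update of the preconditioner's first column t is our interpretation of Eq. (4).

```python
import numpy as np

def tobas_block(w, t, x, d, j, L, K, gamma=0.9):
    """One data block (index j >= 1) of the TOBAS recursion, Eqs. (1)-(5).

    w: (N,) tap weights; t: (N,) first column of the symmetric Toeplitz
    preconditioner; x, d: full input and desired sequences; K divides L,
    so M = L/K iterations are performed per block.
    """
    N = len(w)
    M = L // K
    xpad = np.concatenate([np.zeros(N - 1), x])   # zero history before n = 0
    for k in range(M):
        l0 = (j - 1) * L + k * K                  # 0-based version of l in the text
        X = np.empty((L, N))
        for i in range(L):
            X[i] = xpad[l0 + i:l0 + i + N][::-1]  # row: x[n], x[n-1], ..., x[n-N+1]
        e = d[l0:l0 + L] - X @ w                  # Eq. (1)
        g = (2.0 / L) * X.T @ e                   # Eq. (2): negative BMSE gradient
        if np.linalg.norm(g) < 1e-12:             # already converged on this block
            break
        if k != M - 1:
            p = g                                 # Eq. (3): plain gradient direction
        else:
            # Eq. (4): refresh first column t, then solve T p = g
            # (dense solve here; the paper uses split-Levinson [4])
            t = gamma * t + (1 - gamma) * (2.0 / L) * (X.T @ X[:, 0])
            T = np.array([[t[abs(r - c)] for c in range(N)] for r in range(N)])
            p = np.linalg.solve(T, g)
        alpha = (L * (p @ g)) / (2.0 * (p @ (X.T @ (X @ p))))   # Eq. (5)
        w = w + alpha * p
    return w, t
```

Note how the data matrix advances by K samples per inner iteration, so that the last iteration of block j works on exactly the samples of X_j; with K = L (M = 1) the loop degenerates to a single preconditioned TOBA step.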

3. Computer simulations

Several comparisons were made among the OBA, OBAS, TOBA and TOBAS algorithms for a system identification model. Two types of unknown transversal filters, denoted by the coefficient vector h, are used. One is a time-invariant system with the coefficient vector

h_1 = [0.157, 0.134, 0.114, 0.098, 0.038, 0.716, 0.061, 0.052, 0.044]^T.   (8)

The other is a time-varying system whose coefficient vector is given by

h_2 = h_1 [1.0 + 0.8(1.0 - |1.0 - 10^{-3} n|)],   0 ≤ n < 2000.   (9)

The time-varying system is a ramp function, so that all the filter tap weights increase and decrease periodically. The additive noise is a zero-mean white Gaussian sequence with variance 10^{-5} (so that SNR = 50 dB); it is used only for the time-invariant unknown system. The colored input signal is generated by passing a zero-mean, unit-variance Gaussian sequence through a band-limiting filter, so that the eigenvalue spread becomes about 145. For all simulations, the ensemble averaging was done over 20 independent trials. In all cases N = L = 16, and M = 2 for the OBAS and TOBAS algorithms.

Figs. 1 and 2 compare the convergence characteristics of the OBA, OBAS, TOBA and TOBAS algorithms in the stationary and the nonstationary environment, respectively. It is observed that the TOBAS algorithm outperforms the OBAS algorithm in both the convergence rate and the tracking property. This is due to the fact that the OBA algorithm adjusts the filter tap weights only once for a block of L input samples, while the OBAS algorithm updates twice in that interval. The comparisons also show that the TOBAS algorithm is superior to the OBA, OBAS and TOBA algorithms in convergence rate regardless of stationarity. From Fig. 1, the TOBAS algorithm yields better adaptation accuracy than the TOBA algorithm, and from Fig. 2 it provides a much better tracking property than the other three algorithms.
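The periodic ramp of Eq. (9) is easy to visualize numerically. The snippet below (our own illustration; the names `gain` and `h2_at` are ours) evaluates the scalar gain that multiplies every tap of h_1: it rises linearly from 1.0 at n = 0 to a peak of 1.8 at n = 1000, then falls back toward 1.0 at n = 2000.

```python
import numpy as np

# Gain envelope of Eq. (9): g(n) = 1.0 + 0.8 * (1.0 - |1.0 - 1e-3 * n|)
n = np.arange(2000)
gain = 1.0 + 0.8 * (1.0 - np.abs(1.0 - 1e-3 * n))

# Time-invariant coefficient vector of Eq. (8)
h1 = np.array([0.157, 0.134, 0.114, 0.098, 0.038,
               0.716, 0.061, 0.052, 0.044])

# h2 at time n scales every tap of h1 by the same factor gain[n]
def h2_at(n_):
    return h1 * gain[n_]
```

Because all taps share one envelope, the adaptive filter must track a single time-varying gain, which is what makes Fig. 2's tracking comparison meaningful.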

Fig. 1. Comparison of convergence characteristics of the OBA, OBAS, TOBA and TOBAS algorithms in a stationary environment with ρ = 145.

4. Conclusions

The TOBAS algorithm has been formulated using a new updating procedure in which the filter tap weights are updated M times during the period of a data block, with the data block shifted M times as well. It was shown that the TOBAS algorithm is very fast in convergence rate regardless of stationarity, and that it provides a better tracking property than the TOBA algorithm.

Appendix A

According to [5], the TOBA algorithm at the jth iteration is summarized as follows:

e_j = d_j - X_j w_j,   (A.1)

g_j = (2/L) X_j^T e_j,   (A.2)

t_j = γ t_{j-1} + (1 - γ)(2/L) X_j^T x_j,   (A.3)

T_j p_j = g_j,   (A.4)


Fig. 2. Comparison of convergence characteristics of the OBA, OBAS, TOBA and TOBAS algorithms in a nonstationary environment with ρ = 145.

w_{j+1} = w_j + α_j p_j,   with α_j = (L p_j^T g_j) / (2 p_j^T X_j^T X_j p_j).   (A.5)

In the algorithm, g_j is the N × 1 negative gradient vector of the BMSE estimate at block j, and p_j is the N × 1 direction vector obtained from g_j through T_j so that it points toward the minimum of the BMSE estimate. The N × N matrix T_j is the Toeplitz preconditioner, assumed to be a symmetric Toeplitz matrix; the N × 1 vector t_j is the first column of the Toeplitz matrix T_j; and γ is a smoothing constant that controls the estimation accuracy and the capacity to track time variations. Also, α_j is the time-varying step size that is determined along p_j between w_j and w_{j+1}.
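The system T_j p_j = g_j in (A.4) is fully determined by the first column t_j, and the Levinson family of recursions solves it in O(N^2) operations without ever forming T_j densely; the split-Levinson variant [4] used in the paper roughly halves the multiplication count. As an illustration only (our own implementation of the classical Levinson recursion, not the paper's split-Levinson code), the following solves a symmetric positive-definite Toeplitz system from its first column:

```python
import numpy as np

def levinson_solve(t, b):
    """Solve T x = b where T is symmetric positive-definite Toeplitz
    with first column t, via the classical O(N^2) Levinson recursion."""
    n = len(b)
    r = np.asarray(t[1:], dtype=float) / t[0]   # normalized off-diagonals
    bb = np.asarray(b, dtype=float) / t[0]      # normalized right-hand side
    x = np.array([bb[0]])
    if n == 1:
        return x
    y = np.array([-r[0]])                        # Durbin (Yule-Walker) vector
    beta, alpha = 1.0, -r[0]
    for k in range(1, n):
        beta *= (1.0 - alpha * alpha)            # prediction-error update
        mu = (bb[k] - r[:k] @ x[::-1]) / beta    # new last component of x
        x = np.concatenate([x + mu * y[::-1], [mu]])
        if k < n - 1:                            # extend the Durbin vector
            alpha = -(r[k] + r[:k] @ y[::-1]) / beta
            y = np.concatenate([y + alpha * y[::-1], [alpha]])
    return x
```

Given the recursively smoothed column t_j of (A.3), a call like `p = levinson_solve(t_j, g_j)` realizes the deconvolution step (A.4) of each block.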

References

[1] G.A. Clark, S.K. Mitra and S.R. Parker, "Block implementation of adaptive digital filters", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-29, No. 3, June 1981, pp. 744-752.
[2] C.F.N. Cowan and P.M. Grant, Adaptive Filters, Prentice-Hall, Englewood Cliffs, NJ, 1985.
[3] M.E. Deisher and A.S. Spanias, "Real-time implementation of a frequency-domain adaptive filter on a fixed-point signal processor", Proc. IEEE Internat. Conf. Acoust. Speech Signal Process., Toronto, May 1991, pp. 2013-2016.
[4] P. Delsarte and Y.V. Genin, "The split Levinson algorithm", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-34, June 1986, pp. 470-478.
[5] J.S. Lim and C.K. Un, "Optimum block adaptive filtering algorithms using the preconditioning technique", IEEE Trans. Signal Process., submitted.
[6] W.B. Mikhael and A.S. Spanias, "A fast frequency-domain adaptive algorithm", Proc. IEEE, Vol. 76, January 1988, pp. 80-82.
[7] W.B. Mikhael and F.H. Wu, "Fast algorithms for block FIR adaptive digital filtering", IEEE Trans. Circuits Systems, Vol. CAS-34, No. 10, October 1987, pp. 1152-1160.
[8] G. Panda, B. Mulgrew, C.F.N. Cowan and P.M. Grant, "A self-orthogonalizing efficient block adaptive filter", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-34, No. 6, December 1986, pp. 1573-1582.
[9] C.H. Yon and C.K. Un, "Fast multidelay block transform-domain adaptive filters based on a two-dimensional optimum block algorithm", IEEE Trans. Circuits Systems, Vol. CAS-41, No. 5, May 1994, pp. 337-345.