Signal Processing 27 (1992) 317-331
Elsevier

Numerical characteristics of fast recursive least squares transversal adaptation algorithms - A comparative study

H. Schütze
Research Institute, Deutsche Bundespost Telekom, P.O. Box 420200, W-1000 Berlin 42, Germany

Z. Ren
Institute of Control Engineering and System Dynamics, Technische Universität Berlin, Berlin, Germany

Received 20 December 1991
Abstract. In this paper, eight known fast recursive least squares (FRLS) algorithms are compared with respect to computing effort and numerical stability. A uniform mathematical notation makes the close affinity between the different algorithms transparent. For the exact FRLS algorithms (without additional stabilization measures), transformation equations are given which allow an unambiguous mutual conversion. Under certain conditions, FRLS algorithms with additional stabilization are equivalent to the exact ones. Consequently, all FRLS algorithms examined should theoretically yield in each iteration step the same estimate values as the well-known recursive least squares (RLS) algorithm. As a result of unpredictable round-off errors, however, deviations or even instability may occur. Simulations demonstrate the different numerical sensitivity of the algorithms and the effects of stabilizing measures.
Keywords. Fast recursive least squares algorithm, adaptive transversal filter, adaptation, identification, stability, acoustic echo canceller.

0165-1684/92/$05.00 © 1992 Elsevier Science Publishers B.V. All rights reserved
1. Introduction

After Ljung et al. presented the fast Kalman (FK) algorithm in 1978 [10], numerous modifications of it were published by other authors. Since all these algorithms solve the long-known least squares problem with fewer computational steps than the recursive least squares (RLS) algorithm, they are usually grouped under the collective term fast recursive least squares (FRLS) algorithms. The studies carried out on the FRLS algorithms during recent years had two major objectives: firstly, to further reduce the number of multiplications and divisions per recursion step, and secondly, to improve the numerical stability. Whereas the first aim has lately lost some of its significance due to the rapid advances in signal processor technology, the stability problem has become of decisive importance for the practical application of an identification algorithm. Many proposals for stability improvement have been presented so far which try to transfer the calculations to more favourable numerical ranges by means of appropriate normalizations. Their aim was to obtain numerically robust FRLS algorithms. And, indeed, it became apparent that numerical stability can be influenced by normalization. But despite the different formulations, one always arrives at the same exact least squares estimates. This group of algorithms can therefore be termed exact FRLS algorithms. A fundamentally different approach to solving the stability problem is a corrective intervention in the algorithm as soon as it threatens to become unstable. Strictly speaking, the corrective measures also cause deviations from the exact least squares estimate. If these remain small enough, however, they are in any case much better than instability. Starting from the RLS algorithm, we will describe in Section 2 eight FRLS algorithms known from the literature and point out their differences and relationships. Two further FRLS algorithms will be discussed briefly.
Section 3 comprises computer simulations which illustrate the typical convergence and stability behaviour of all algorithms presented. In conclusion, Section 4 once more summarizes the most important findings and statements.
2. Least squares algorithms

RLS algorithm

The recursive least squares (RLS) algorithm has been known for its extremely rapid convergence properties. Owing to the high computing effort needed, however, it cannot be employed for adaptive filters of high order in real-time applications. In these cases fast recursive least squares (FRLS) algorithms are continually gaining in importance. They involve a much smaller computing expense and nevertheless supply in each iteration step the same parameter and signal estimate values as the original RLS algorithm (Table 1). In the RLS algorithm

R(k) = Σ_{v=1}^{k} x(v)x^T(v) + δI,  δ = small positive constant, I = identity matrix,  (1)

is the autocorrelation matrix of the exciting signal with the signal vector

x^T(k) = [x(k), x(k-1), ..., x(k-N+1)].  (2)

The angle parameter

γ(k) = 1 - x^T(k)c*(k) = [1 + x^T(k)c(k)]^{-1} = [α(k)]^{-1}  (3)

is of particular significance for the following fast algorithms. The last relationship in (3) can be read directly from the equations in Table 1. In Tables 1 to 9, all variables with the same notation also have the same meaning, i.e., under the same initial conditions they take the same numerical values in every iteration step. This facilitates a comparison
Table 1
Recursive least squares algorithm (MADPR = multiplications and divisions per recursion step)

RLS algorithm (2N² + 4N MADPR)

Initialization: ĥ(0) = 0, x(0) = 0, R⁻¹(0) = δ⁻¹I, δ = small positive constant, I = identity matrix

For k = 1 to k_final do:                                        MADPR
c(k) = R⁻¹(k-1)x(k)                                             N²       (T1.1)
α(k) = 1 + c^T(k)x(k)  (or γ(k) = [1 + c^T(k)x(k)]⁻¹)           N        (T1.2)
c*(k) = c(k)/α(k)  (or c*(k) = c(k)γ(k))                        N        (T1.3)
R⁻¹(k) = R⁻¹(k-1) - c*(k)c^T(k)                                 N²       (T1.4)

Joint process extension, in a priori error formulation:
e(k|k-1) = y(k) - ĥ^T(k-1)x(k)                                  N        (T1.5a)
ĥ(k) = ĥ(k-1) + c*(k)e(k|k-1)                                   N        (T1.6a)
or alternatively in a posteriori error formulation:
{e(k|k-1) = y(k) - ĥ^T(k-1)x(k)}                                {N}      (T1.5b)
{e(k|k) = γ(k)e(k|k-1)}                                         {1}      (T1.5c)
{ĥ(k) = ĥ(k-1) + c(k)e(k|k)}                                    {N}      (T1.6b)
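For readers who want to experiment, the RLS recursion of Table 1 can be sketched in a few lines of NumPy (an illustrative sketch, not code from the paper; the function name, test signal, filter length and δ are our own choices):

```python
import numpy as np

def rls_identify(x, y, N, delta=1e-3):
    """RLS algorithm of Table 1, a priori error formulation, forgetting factor 1.
    Equation tags in the comments refer to Table 1."""
    Rinv = np.eye(N) / delta          # R^{-1}(0) = delta^{-1} I
    h = np.zeros(N)                   # h(0) = 0
    xv = np.zeros(N)                  # signal vector x(k), prewindowed
    for k in range(len(x)):
        xv = np.concatenate(([x[k]], xv[:-1]))
        c = Rinv @ xv                         # (T1.1)
        alpha = 1.0 + c @ xv                  # (T1.2)
        c_star = c / alpha                    # (T1.3)
        Rinv = Rinv - np.outer(c_star, c)     # (T1.4), matrix inversion lemma
        e_pri = y[k] - h @ xv                 # (T1.5a)
        h = h + c_star * e_pri                # (T1.6a)
    return h

rng = np.random.default_rng(0)
h_true = np.array([1.0, -0.5, 0.25, 0.1])
x = rng.standard_normal(400)
y = np.convolve(x, h_true)[:400]      # noiseless unknown-system output
h_est = rls_identify(x, y, 4)
```

With noiseless data the bias introduced by the small start value δ vanishes quickly, so the estimate converges to the true coefficients.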
between the FRLS algorithms and the original RLS algorithm as well as a comparison among each other. All least squares algorithms in this comparison work with infinite memory. The forgetting factor, which is often found in other papers dealing with least squares algorithms, here always has the value 1 and will therefore not be specially mentioned any more. Further relationships between the formula quantities are illustrated by the general system identification structure in Fig. 1. In the diagram x(k) denotes the exciting input signal with x(k) = 0 for k ≤ 0 (prewindowed case), y(k) is the undisturbed output signal of the unknown system and e(k|i) = y(k) - x^T(k)ĥ(i) is the adaptation error in the kth iteration step with a parameter estimate from the ith iteration step.

Fig. 1. System identification structure.

The unknown system with the time-discrete, time-invariant weighting function h can be described by the coefficient vector

h^T = [h₁, h₂, ..., h_N].  (4)

The adjustable model is an FIR filter with N coefficients and the time-discrete, time-variant weighting function ĥ(i). The model coefficient vector

ĥ^T(i) = [ĥ₁(i), ĥ₂(i), ..., ĥ_N(i)],  with i ∈ {k-1, k},  (5)

is adapted by an (F)RLS algorithm intended to minimize the cost function

J(k) = Σ_{v=1}^{k} e²(v|k) = Σ_{v=1}^{k} [y(v) - x^T(v)ĥ(k)]².  (6)
FK algorithm

The first FRLS algorithm was derived by Ljung et al. from the RLS algorithm following a basic idea by Morf [11] and presented in 1978 [10]. It is generally known as the fast Kalman (FK) algorithm (Table 2) and requires 10N+3 multiplications and divisions per recursion step (MADPR). Detailed derivations of the FK algorithm are included e.g. in [10, 4]. The basic principle in the derivation was to utilize the so-called shift invariance property of the signal vector

x₊(k) = [x(k), x^T(k-1)]^T = [x^T(k), x(k-N)]^T,  x₊ ∈ ℝ^{N+1}, x ∈ ℝ^N.  (7)
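The shift invariance property in (7) says that the extended vector x₊(k) can be partitioned either with the newest sample x(k) on top of x(k-1), or with the oldest sample x(k-N) below x(k). A small numeric check (the toy signal values are arbitrary):

```python
import numpy as np

N = 4
s = np.arange(10.0)           # s[k] plays the role of the scalar signal x(k)
k = 7
xk   = s[k:k-N:-1]            # x(k)   = [x(k), ..., x(k-N+1)]
xkm1 = s[k-1:k-1-N:-1]        # x(k-1) = [x(k-1), ..., x(k-N)]
x_plus_a = np.concatenate(([s[k]], xkm1))    # [x(k); x(k-1)]
x_plus_b = np.concatenate((xk, [s[k-N]]))    # [x(k); x(k-N)]
assert np.array_equal(x_plus_a, x_plus_b)    # both partitions coincide
```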
Vol. 27, No. 3, June 1992
Table 2
Fast Kalman algorithm ([u; v] denotes the vector obtained by stacking u above v)

FK algorithm (10N + 3 MADPR)

Initialization: f(0) = b(0) = c*(0) = ĥ(0) = x(0) = 0; ε^f(0) = δ, small positive constant

For k = 1 to k_final do:                                        MADPR
e^f(k|k-1) = x(k) - f^T(k-1)x(k-1)                              N        (T2.1)
f(k) = f(k-1) + c*(k-1)e^f(k|k-1)                               N        (T2.2)
e^f(k|k) = x(k) - f^T(k)x(k-1)                                  N        (T2.3)
ε^f(k) = ε^f(k-1) + e^f(k|k-1)e^f(k|k)                          1        (T2.4)
[m*(k); μ*(k)] = [0; c*(k-1)] - (e^f(k|k)/ε^f(k))·[-1; f(k)]    N+1      (T2.5)
e^b(k|k-1) = x(k-N) - b^T(k-1)x(k)                              N        (T2.6)
c*(k) = [m*(k) + μ*(k)b(k-1)] / [1 - μ*(k)e^b(k|k-1)]           2N+1     (T2.7)
b(k) = b(k-1) + c*(k)e^b(k|k-1)                                 N        (T2.8)
{e^b(k|k) = x(k-N) - b^T(k)x(k)}                                {N}      (T2.9)
{ε^b(k) = ε^b(k-1) + e^b(k|k-1)e^b(k|k)}                        {1}      (T2.10)

FTF algorithm
γ(k-1) = 1 - x^T(k-1)c*(k-1),  (8a)

γ(k) = 1 - x^T(k)c*(k),  (8b)

γ₊(k) = 1 - x₊^T(k)c₊*(k).  (8c)
Table 2 (continued), joint process extension:
e(k|k-1) = y(k) - ĥ^T(k-1)x(k)                                  N        (T2.11)
ĥ(k) = ĥ(k-1) + c*(k)e(k|k-1)                                   N        (T2.12)
By expedient mathematical conversion, it was possible to replace the iterative calculation of the inverse of the covariance matrix R(k) in Table 1 by a few vector operations in each recursion step. This means a considerable reduction of the computing effort, especially for large N. Equations (T2.9) and (T2.10) arise in the process of deriving the FK algorithm (e.g. [4]). They are superfluous for the recursive calculation of the FK algorithm, but are needed for the derivation of the subsequent FRLS algorithms. For computation of the FIR filter with the coefficient vector ĥ(k) (joint process extension), the FK algorithm uses the a priori error formulation of Table 1.
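To make the recursion concrete, here is a minimal NumPy sketch of the FK algorithm of Table 2 (an illustrative reconstruction, not code from [10]; function name, test signal and δ are our own choices):

```python
import numpy as np

def fk_identify(x, y, N, delta=1e-3):
    """Fast Kalman (FK) algorithm of Table 2 (prewindowed, forgetting factor 1).
    Equation tags in the comments refer to Table 2."""
    f = np.zeros(N)          # forward predictor
    b = np.zeros(N)          # backward predictor
    c = np.zeros(N)          # gain vector c*(k)
    h = np.zeros(N)          # joint-process (FIR) estimate
    xv = np.zeros(N)         # signal vector x(k) = [x(k), ..., x(k-N+1)]
    eps_f = delta            # forward prediction error energy
    for k in range(len(x)):
        xv_prev = xv.copy()                              # x(k-1)
        xv = np.concatenate(([x[k]], xv[:-1]))           # x(k)
        ef_pri = x[k] - f @ xv_prev                      # (T2.1)
        f = f + c * ef_pri                               # (T2.2)
        ef_post = x[k] - f @ xv_prev                     # (T2.3)
        eps_f = eps_f + ef_pri * ef_post                 # (T2.4)
        ext = np.concatenate(([0.0], c)) \
            + (ef_post / eps_f) * np.concatenate(([1.0], -f))   # (T2.5)
        m, mu = ext[:N], ext[N]
        eb_pri = xv_prev[-1] - b @ xv                    # (T2.6), x(k-N) - b'x(k)
        c = (m + mu * b) / (1.0 - mu * eb_pri)           # (T2.7)
        b = b + c * eb_pri                               # (T2.8)
        e_pri = y[k] - h @ xv                            # (T2.11)
        h = h + c * e_pri                                # (T2.12)
    return h
```

Over a short noiseless run in double precision the FK estimate coincides with the RLS estimate to within round-off, as the theory predicts.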
After the FK algorithm, several modified FRLS algorithms were developed with the aim of further reducing the computing expense. In 1984 the fast transversal filter (FTF) algorithm was presented by Cioffi and Kailath [5]. Later on, Alexander [1] described the FTF algorithm very systematically with projections in vector space. This allowed a very descriptive geometric interpretation of the FTF algorithm to be obtained, which is the FTF algorithm's main advantage over other FRLS algorithms. The derivations by Cioffi, Kailath and Alexander will not be dealt with in this paper. Instead, it is very easy to demonstrate that the FK algorithm can be converted with few computational steps into the FTF algorithm, which also proves the equivalence between the two algorithms. The basic difference to the FK algorithm is that the angle parameter γ(k) known from (3) is introduced in the FTF algorithm. There holds
If we insert (T2.1), (T2.2) and (T2.3) into (8a), we obtain

γ(k-1) = e^f(k|k)/e^f(k|k-1).  (T3.3)

In the same manner (T2.6), (T2.8) and (T2.9) can be inserted into (8b), which results in

γ(k) = e^b(k|k)/e^b(k|k-1).  (T3.9)

If we use in (8c) the relation x₊^T(k) = [x^T(k), x(k-N)] of (7) and for c₊*(k) the left-hand side of (T2.5), which means c₊*^T(k) = [m*^T(k), μ*(k)], there results with (T2.6), (T2.7) and (8b)

γ₊(k) = [1 - μ*(k)e^b(k|k-1)]γ(k).  (T3.8)
If we use, on the other hand, in (8c) the other relation x₊^T(k) = [x(k), x^T(k-1)] of (7) and for c₊*(k) the right-hand side of (T2.5), we obtain with (T2.1), (T2.2), (T2.4) and (8a)

γ₊(k) = [ε^f(k-1)/ε^f(k)] γ(k-1).  (T3.5)

The last four formulas, framed in Table 3, are those in which the FTF algorithm differs from the FK algorithm (Table 2). The remaining equations in Tables 2 and 3 are identical. The algorithm in Table 3, without the equations in { }, corresponds to the FTF algorithm with 9N+8 MADPR indicated in [1]. It was also used for the simulation in Section 3. A further reduction to 8N+10 MADPR corresponding to [5] can be achieved if e^b(k|k-1) is calculated not according to (T3.7a) but to (T3.7b). Equation (T3.7b) is obtained if (T2.10) is rearranged for e^b(k|k-1) and (T3.9) as well as (T3.8) are inserted. Equations (T3.9) and (T3.10) are again superfluous for the iterative calculation, but useful for the subsequent derivations. Moreover, they elucidate the relationships between forward and backward prediction in the FTF algorithm.

Table 3
Fast transversal filter algorithm

FTF algorithm (with (T3.7a): 9N+8 MADPR; with (T3.7b): 8N+10 MADPR)

Initialization: f(0) = b(0) = c*(0) = ĥ(0) = x(0) = 0; γ(0) = 1; ε^f(0) = ε^b(0) = δ, small positive constant

For k = 1 to k_final do:
                                                                MADPR
e^f(k|k-1) = x(k) - f^T(k-1)x(k-1)                              N        (T3.1)
f(k) = f(k-1) + c*(k-1)e^f(k|k-1)                               N        (T3.2)
e^f(k|k) = γ(k-1)e^f(k|k-1)                                     1        (T3.3)
ε^f(k) = ε^f(k-1) + e^f(k|k-1)e^f(k|k)                          1        (T3.4)
γ₊(k) = [ε^f(k-1)/ε^f(k)] γ(k-1)                                2        (T3.5)
[m*(k); μ*(k)] = [0; c*(k-1)] - (e^f(k|k)/ε^f(k))·[-1; f(k)]    N+1      (T3.6)
e^b(k|k-1) = x(k-N) - b^T(k-1)x(k)                              N        (T3.7a)
{e^b(k|k-1) = μ*(k)ε^b(k-1)/γ₊(k)}                              {2}      (T3.7b)
γ(k) = γ₊(k) / [1 - μ*(k)e^b(k|k-1)]                            2        (T3.8)
{e^b(k|k) = γ(k)e^b(k|k-1)}                                     {1}      (T3.9)
{ε^b(k) = ε^b(k-1) + e^b(k|k-1)e^b(k|k)}                        {1}      (T3.10)
c*(k) = [m*(k) + μ*(k)b(k-1)] γ(k)/γ₊(k)                        2N+1     (T3.11)
b(k) = b(k-1) + c*(k)e^b(k|k-1)                                 N        (T3.12)

Joint process extension:
e(k|k-1) = y(k) - ĥ^T(k-1)x(k)                                  N        (T3.13)
ĥ(k) = ĥ(k-1) + c*(k)e(k|k-1)                                   N        (T3.14)

FAEST algorithm

The fast a posteriori error sequential technique (FAEST) algorithm (Table 4) devised by Carayannis et al. [4] requires only 7N+10 MADPR. The main difference between the FK and FTF algorithms on the one hand and the FAEST algorithm on the other is that the first two use the a priori error e(k|k-1), whereas the latter employs the a posteriori error e(k|k). This is only a different formulation of the original iteration rule. In any case, we obtain in each iteration step the same estimate values as with the RLS algorithm (Table 1). That is also why an unambiguous relationship exists between the FK, FTF and FAEST algorithms.
Essential for the conversion is the factor known from (T1.2) and (3):

α(k) = [γ(k)]⁻¹.
The formula quantities marked * in the FK and FTF algorithms and the corresponding quantities
Table 4
Fast a posteriori error sequential technique algorithm

FAEST algorithm (7N + 10 MADPR)

Initialization: f(0) = b(0) = c(0) = ĥ(0) = x(0) = 0; α(0) = 1; ε^f(0) = ε^b(0) = δ, small positive constant

GFTF algorithm
A further variant of the FRLS algorithms requiring a low number of computational steps is the gain-normalized fast transversal filter (GFTF) algorithm, presented in 1984 by Cioffi and Kailath [5] and well described in [1]. The GFTF algorithm needs only 7N+12 MADPR. Based on the FTF algorithm, its fundamental idea is to normalize the gain vectors by the corresponding angle parameters. Equations (10) completely describe the relationship between the FTF and GFTF algorithms:
m(k) = m*(k)[γ₊(k)]⁻¹,  (10a)

μ(k) = μ*(k)[γ₊(k)]⁻¹,  (10b)

c(k) = c*(k)[γ(k)]⁻¹.  (10c)

Table 4 (continued). For k = 1 to k_final do:                   MADPR
e^f(k|k-1) = x(k) - f^T(k-1)x(k-1)                              N        (T4.1)
e^f(k|k) = e^f(k|k-1)/α(k-1)                                    1        (T4.2)
f(k) = f(k-1) + c(k-1)e^f(k|k)                                  N        (T4.3)
ε^f(k) = ε^f(k-1) + e^f(k|k-1)e^f(k|k)                          1        (T4.4)
[m(k); μ(k)] = [0; c(k-1)]
             - (e^f(k|k-1)/ε^f(k-1))·[-1; f(k-1)]               N+1      (T4.5)
e^b(k|k-1) = μ(k)ε^b(k-1)                                       1        (T4.6)
c(k) = m(k) + μ(k)b(k-1)                                        N        (T4.7)
α(k) = α(k-1) + [e^f(k|k-1)]²/ε^f(k-1) - μ(k)e^b(k|k-1)         3        (T4.8)
e^b(k|k) = e^b(k|k-1)/α(k)                                      1        (T4.9)
ε^b(k) = ε^b(k-1) + e^b(k|k-1)e^b(k|k)                          1        (T4.10)
b(k) = b(k-1) + c(k)e^b(k|k)                                    N        (T4.11)

Joint process extension:
e(k|k-1) = y(k) - ĥ^T(k-1)x(k)                                  N        (T4.12)
e(k|k) = e(k|k-1)/α(k)                                          1        (T4.13)
ĥ(k) = ĥ(k-1) + c(k)e(k|k)                                      N        (T4.14)
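The FAEST recursion of Table 4 can likewise be sketched in NumPy (an illustrative reconstruction, not code from [4]; function name, test signal and δ are our own choices):

```python
import numpy as np

def faest_identify(x, y, N, delta=1e-3):
    """FAEST recursion of Table 4 (prewindowed, forgetting factor 1).
    Equation tags in the comments refer to Table 4."""
    f = np.zeros(N); b = np.zeros(N); c = np.zeros(N); h = np.zeros(N)
    xv = np.zeros(N)                 # signal vector x(k)
    eps_f = delta; eps_b = delta; alpha = 1.0
    for k in range(len(x)):
        xv_prev = xv.copy()                                   # x(k-1)
        xv = np.concatenate(([x[k]], xv[:-1]))                # x(k)
        ef_pri = x[k] - f @ xv_prev                           # (T4.1)
        ef_post = ef_pri / alpha                              # (T4.2)
        ext = np.concatenate(([0.0], c)) \
            + (ef_pri / eps_f) * np.concatenate(([1.0], -f)) # (T4.5), old f, eps_f
        m, mu = ext[:N], ext[N]
        alpha = alpha + ef_pri**2 / eps_f                     # (T4.8), first part
        f = f + c * ef_post                                   # (T4.3)
        eps_f = eps_f + ef_pri * ef_post                      # (T4.4)
        eb_pri = mu * eps_b                                   # (T4.6)
        c = m + mu * b                                        # (T4.7)
        alpha = alpha - mu * eb_pri                           # (T4.8), second part
        eb_post = eb_pri / alpha                              # (T4.9)
        eps_b = eps_b + eb_pri * eb_post                      # (T4.10)
        b = b + c * eb_post                                   # (T4.11)
        e_pri = y[k] - h @ xv                                 # (T4.12)
        h = h + c * (e_pri / alpha)                           # (T4.13), (T4.14)
    return h
```

In a short double-precision run the FAEST estimate agrees with the RLS estimate to within round-off; the divergence typical of exact FRLS algorithms only builds up over much longer runs.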
without * in the FAEST algorithm are linked with each other by α(k):

m(k) = m*(k)α(k-1) ε^f(k)/ε^f(k-1),  (9a)

μ(k) = μ*(k)α(k-1) ε^f(k)/ε^f(k-1),  (9b)

c(k) = c*(k)α(k).  (9c)

If we replace all quantities m*(k), μ*(k) and c*(k) in the FTF algorithm (Table 3) according to (9) by m(k), μ(k) and c(k), and additionally replace γ(k) by α(k) according to (3), we obtain directly the FAEST algorithm (Table 4).
α(k) = [γ(k)]⁻¹.

NFTF algorithm
re(k) = m * ( k ) a ( k - 1)
It is even simpler to convert the FAEST algorithm into the GFTF type. To this end, we only have to replace α(k) by γ(k) according to the relation α(k) = [γ(k)]⁻¹ given above.
Due to accumulating rounding errors, the FRLS algorithms are unfortunately prone to numerical instability. Since the sources of such instability are to be seen in the occasionally unfavourable numerical values of some variables of an FRLS algorithm, it was consequently attempted to create more favourable numerical ranges. The basic principle in this case is to use, instead of the energies of the prediction errors ε^f(k), ε^b(k) and of the angle parameter γ(k), their roots

ε̃^f(k) = √(ε^f(k)),  ε̃^b(k) = √(ε^b(k)),  γ̃(k) = √(γ(k)) = √([α(k)]⁻¹).

If we use these roots to normalize the prediction errors and the forward, backward and gain filter parameters of the FAEST algorithm in Table 4 in the following way:

ẽ^f(k|k-1) = γ̃(k-1)e^f(k|k-1)/ε̃^f(k-1),  (11a)
ẽ^b(k|k-1) = γ̃(k)e^b(k|k-1)/ε̃^b(k-1),  (11b)

f̃(k) = f(k)/ε̃^f(k),  (11c)

b̃(k) = b(k)/ε̃^b(k),  (11d)

c̃(k) = c(k)γ̃(k),  (11e)

m̃(k) = γ̃(k-1)ρ^f(k)m(k),  (11f)

μ̃(k) = γ̃(k-1)ρ^f(k)μ(k),  (11g)

with

ρ^f(k) = ε̃^f(k-1)/ε̃^f(k)  and  ρ^b(k) = ε̃^b(k-1)/ε̃^b(k),  (11h)

we arrive exactly at the normalized fast transversal filter (NFTF) algorithm in Table 6.

Table 5
Gain-normalized fast transversal filter algorithm

GFTF algorithm (7N + 12 MADPR)

Initialization: f(0) = b(0) = c(0) = ĥ(0) = x(0) = 0; γ(0) = 1; ε^f(0) = ε^b(0) = δ, small positive constant

For k = 1 to k_final do:                                        MADPR
e^f(k|k-1) = x(k) - f^T(k-1)x(k-1)                              N        (T5.1)
e^f(k|k) = γ(k-1)e^f(k|k-1)                                     1        (T5.2)
f(k) = f(k-1) + c(k-1)e^f(k|k)                                  N        (T5.3)
ε^f(k) = ε^f(k-1) + e^f(k|k-1)e^f(k|k)                          1        (T5.4)
γ₊(k) = γ(k-1)ε^f(k-1)/ε^f(k)                                   2        (T5.5)
[m(k); μ(k)] = [0; c(k-1)]
             - (e^f(k|k-1)/ε^f(k-1))·[-1; f(k-1)]               N+1      (T5.6)
e^b(k|k-1) = μ(k)ε^b(k-1)                                       1        (T5.7)
γ(k) = [1 - γ₊(k)μ(k)e^b(k|k-1)]⁻¹γ₊(k)                         3        (T5.8)
c(k) = m(k) + μ(k)b(k-1)                                        N        (T5.9)
e^b(k|k) = γ(k)e^b(k|k-1)                                       1        (T5.10)
ε^b(k) = ε^b(k-1) + e^b(k|k-1)e^b(k|k)                          1        (T5.11)
b(k) = b(k-1) + c(k)e^b(k|k)                                    N        (T5.12)

Joint process extension:
e(k|k-1) = y(k) - ĥ^T(k-1)x(k)                                  N        (T5.13)
e(k|k) = γ(k)e(k|k-1)                                           1        (T5.14)
ĥ(k) = ĥ(k-1) + c(k)e(k|k)                                      N        (T5.15)

Table 6
Normalized fast transversal filter algorithm

NFTF algorithm (11N + 18 MADPR, +3 roots)

Initialization: f̃(0) = b̃(0) = c̃(0) = ĥ(0) = x(0) = 0; γ̃(0) = 1; ε̃^f(0) = ε̃^b(0) = √δ, δ small positive constant

For k = 1 to k_final do:                                        MADPR    Roots
ẽ^f(k|k-1) = γ̃(k-1)[x(k)/ε̃^f(k-1) - f̃^T(k-1)x(k-1)]            N+2      (T6.1)
ρ^f(k) = [1 + (ẽ^f(k|k-1))²]^(-1/2)                             2, 1     (T6.2)
ζ^f(k) = ρ^f(k)ẽ^f(k|k-1)                                       1        (T6.3)
f̃(k) = ρ^f(k)f̃(k-1) + ζ^f(k)c̃(k-1)                             2N       (T6.4)
ε̃^f(k) = ε̃^f(k-1)/ρ^f(k)                                       1        (T6.5)
[m̃(k); μ̃(k)] = ρ^f(k){[0; c̃(k-1)]
             - ẽ^f(k|k-1)·[-1/ε̃^f(k-1); f̃(k-1)]}               2N+1     (T6.6)
ẽ'^b(k|k-1) = μ̃(k)ε̃^b(k-1)                                     1        (T6.7)
γ̃(k) = γ̃(k-1)ρ^f(k)[1 - (ẽ'^b(k|k-1))²]^(-1/2)                 4, 1     (T6.8)
ẽ^b(k|k-1) = [1 - (ẽ'^b(k|k-1))²]^(-1/2) ẽ'^b(k|k-1)            1        (T6.9)
ρ^b(k) = [1 + (ẽ^b(k|k-1))²]^(-1/2)                             2, 1     (T6.10)
c̃(k) = [1 - (ẽ'^b(k|k-1))²]^(-1/2) m̃(k)
      + ẽ^b(k|k-1)b̃(k-1)                                       2N       (T6.11)
ε̃^b(k) = ε̃^b(k-1)/ρ^b(k)                                       1        (T6.12)
b̃(k) = ρ^b(k)[b̃(k-1) + c̃(k)ẽ^b(k|k-1)]                         2N       (T6.13)

Joint process extension:
e(k|k-1) = y(k) - ĥ^T(k-1)x(k)                                  N        (T6.14)
[e(k|k)/γ̃(k)] = γ̃(k)e(k|k-1)                                    1        (T6.15)
ĥ(k) = ĥ(k-1) + c̃(k)[e(k|k)/γ̃(k)]                               N        (T6.16)
A normalized FRLS algorithm was proposed by Cioffi and Kailath [5] in 1984, but without γ̃(k) in the above substitution formulas. Later on, Fabre and Gueguen [6] supplemented this normalization by γ̃(k) in the previously described manner. In 1986 they presented the NFTF algorithm described in Table 6. The normalization is intended to reduce the dynamic ranges of the critical variables ε^f(k), ε^b(k) and γ(k), and thus to improve the numerical performance of this FRLS algorithm.
CFK algorithm

As a measure against the numerical instability of FRLS algorithms, Lin [7] and also Cioffi and Kailath [5] proposed in 1984 to restart the FRLS algorithm in the event of instability. Computer simulations show that, shortly before the onset of instability, the expression γ(k) = 1 - μ*(k)e^b(k|k-1) becomes negative, although theoretically 0 < γ(k) ≤ 1 would always have to hold. Therefore Lin and also Cioffi and Kailath suggested to check this 'rescue variable' and to start the algorithm anew as soon as γ(k) ≤ 0. With a restart at the instant k = k₁ the last parameter estimate ĥ(k₁-1) can be used as a new starting value, i.e. we can set ĥ(k₁) = ĥ(k₁-1). The old vectors f(k₁-1) and b(k₁-1), which might possibly have become unusable due to rounding errors, can be ignored. The old gain vector c*(k₁-1), too, can be falsified by rounding errors and will therefore be reinitialized. Cioffi and Kailath proposed in [5] to set c*(k₁) = 0 at the restart, which, without any doubt, involves the smallest computing effort. Unfortunately the last N measured values x(k₁-N), ..., x(k₁-1) are then left out of account. This implicitly assumes the prewindowed case, which is generally not satisfied in the event of a restart. Lin instead assumed the unwindowed case and derived in [7] the FK algorithm analogously to [10], but taking into account x(k₁-1) ≠ 0. The result is an extended FK algorithm (Table 7) in which a vector d(k) has to be calculated additionally. Using the start values

c*(k₁) = x(k₁) / [δ(k₁) + x^T(k₁)x(k₁)],

we obtain from k ≥ k₁ onward the RLS parameter estimate

ĥ(k) = ĥ(k-1) + c*(k)e(k|k-1),

with the new gain vector c*(k) = R₁⁻¹(k)x(k). Here

R₁(k) = Σ_{v=k₁}^{k} x(v)x^T(v) + δ(k₁)I

is the autocorrelation matrix of the exciting signal from k₁ onward for the unwindowed case, and ĥ(k) minimizes the cost function

J₁(k) = Σ_{v=k₁}^{k} [y(v) - x^T(v)ĥ(k)]² + δ(k₁)[ĥ(k) - ĥ(k₁-1)]^T[ĥ(k) - ĥ(k₁-1)].

The positive constant δ(k₁) expresses at the restart the confidence in the estimate values ĥ(k₁-1) used so far. Following a tradition in speech processing theory, by which unwindowed identification algorithms are called covariance-type algorithms, the modified FK algorithm was named by Lin the covariance fast Kalman (CFK) algorithm. Although the computing effort per recursion step is even higher for the CFK algorithm than for the FK type, the former ensures a smooth transition at the restart. If the CFK algorithm is employed and the prewindowed case really applies, there always holds c*(k) = c̄*(k) and d(k) = 0 because x(k₁-1) = 0, and the CFK algorithm (Table 7) is then in each iteration step identical with the FK algorithm (Table 2); it would in this case also manage with 10N+3 MADPR up to the first restart. A final remark: this rescue operation can prevent instability but cannot remove its cause.
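The restart supervision described above can be separated cleanly from the FRLS recursion itself. The following schematic sketch shows only the rescue logic; `make_state` and `frls_step` are hypothetical placeholders for any concrete FRLS recursion, not functions from [5] or [7]:

```python
# Schematic restart ("rescue variable") supervision: monitor gamma(k) and
# re-initialize when it leaves its theoretical range, carrying over the last
# parameter estimate h(k1 - 1) as the new starting value.
def run_with_restart(x, y, make_state, frls_step):
    state = make_state(h_init=None)
    restarts = 0
    for k in range(len(x)):
        state = frls_step(state, x[k], y[k])
        if state["gamma"] <= 0.0:                  # theoretically 0 < gamma(k) <= 1
            state = make_state(h_init=state["h"])  # set h(k1) = h(k1 - 1)
            restarts += 1
    return state["h"], restarts
```

The wrapper prevents the blow-up but, as remarked above, it does not remove the cause of the instability.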
CFTF algorithm

Since it is impossible to prevent rounding errors in exact FRLS algorithms, it was thought to be
Table 7
Covariance fast Kalman algorithm
CFK algorithm (16N + 3 MADPR)

Initialization: For k = 1 set:
c*(1) = d(1) = x(1)/[δ + x^T(1)x(1)];  f(1) = b(1) = 0;
ĥ(1) = c*(1)e(1|0) at a first start, ĥ(1) = ĥ(0) at a restart;
ε^f(1) = δ + x²(1);
δ = small positive constant at a first start, suitable positive constant at a restart

For k = 2 to k_final do:                                        MADPR
e^f(k|k-1) = x(k) - f^T(k-1)x(k-1)                              N        (T7.1)
f(k) = f(k-1) + c*(k-1)e^f(k|k-1)                               N        (T7.2)
e^f(k|k) = x(k) - f^T(k)x(k-1)                                  N        (T7.3)
ε^f(k) = ε^f(k-1) + e^f(k|k-1)e^f(k|k)                          1        (T7.4)
[m*(k); μ*(k)] = [0; c*(k-1)] - (e^f(k|k)/ε^f(k))·[-1; f(k)]    N+1      (T7.5)
e^b(k|k-1) = x(k-N) - b^T(k-1)x(k)                              N        (T7.6)
γ(k) = 1 - μ*(k)e^b(k|k-1)                                      1        (T7.7)
If γ(k) ≤ 0, set k = 1 and go to Initialization (restart)
b(k) = [b(k-1) + m*(k)e^b(k|k-1)]/γ(k)                          2N       (T7.8)
c*(k) = m*(k) + μ*(k)b(k)                                       N        (T7.9)
d(k) = [d(k-1) - c*(k)x^T(k)d(k-1)]
     / [1 - x^T(1)(d(k-1) - c*(k)x^T(k)d(k-1))]                 4N       (T7.10)
c̄*(k) = c*(k) - d(k)x^T(1)c*(k)                                 2N       (T7.11)

Joint process extension:
e(k|k-1) = y(k) - ĥ^T(k-1)x(k)                                  N        (T7.12)
ĥ(k) = ĥ(k-1) + c̄*(k)e(k|k-1)                                   N        (T7.13)

expedient to introduce a new quantity ξ(k), which is proportional to the present rounding errors in the FRLS algorithm, and to intervene in the computing process in dependence on this 'rounding error measurement'. Along these lines, Moustakides [12] presented in 1989 a corrected fast transversal filter (CFTF) algorithm (Table 8) which he had developed together with Botto [3] on the basis of his earlier studies. The CFTF algorithm results from the FAEST or GFTF algorithm with the following fundamental extensions. The backward error e^b(k|k-1) is calculated twice in each iteration step, i.e., once as done for the GFTF algorithm,

e^b⁽¹⁾(k|k-1) = μ(k)ε^b(k-1),

and a second time as in the case of the FTF algorithm,

e^b⁽²⁾(k|k-1) = x(k-N) - b^T(k-1)x(k),

((T3.7a), (T8.2)), and then the difference of the two equations

ξ(k) = e^b⁽²⁾(k|k-1) - μ(k)ε^b(k-1)

is formed. Without rounding errors both backward error calculations would agree completely and ξ(k) = 0. But if ξ(k) ≠ 0, ξ(k) is a measure of the magnitude of the rounding error. The next idea is to change f(k) and b(k) by means of corrections f(k) + Δf(k) and b(k) + Δb(k) in such a way that the rounding errors in the algorithm are minimized. Suitable correction values Δf(k) and Δb(k) can be found by differentiating a squared-error quality criterion in ξ(k) with respect to Δf(k) and Δb(k). This quality criterion includes an appropriate weighting factor ρ allowing the influence of the correction to be determined. For ρ = 0 no correction exists and the CFTF algorithm is identical with the GFTF type. The correction vectors Δf(k) and Δb(k) are used not only for correcting f(k) and b(k) but also all quantities in which f(k) and b(k) are included. As a result, ξ(k) changes to ξ_c(k), ε^f(k) to ε_c^f(k) and ε^b(k) to ε_c^b(k). The index c in each case designates the corrected quantities. From the derivation executed so far we can see that the expression ([γ(k)]⁻¹ - 1) appears in the corrected quantities. To avoid numerical instability for the case γ(k) → 0, we introduce the approximation
Table 8
Corrected fast transversal filter algorithm

CFTF algorithm (8N + 35 MADPR)

Initialization: f(0) = b(0) = c(0) = ĥ(0) = x(0) = 0; γ(0) = 1; ε^f(0) = ε^b(0) = δ, small positive constant; ρ suitable positive constant

For k = 1 to k_final do:                                        MADPR
e^f(k|k-1) = x(k) - f^T(k-1)x(k-1)                              N        (T8.1)
e^b(k|k-1) = x(k-N) - b^T(k-1)x(k)                              N        (T8.2)
γ₊(k) = γ(k-1)ε^f(k-1)
      / [ε^f(k-1) + γ(k-1)(e^f(k|k-1))²]                        4        (T8.3)
ϑ(k) = 1 - γ₊(k)e^b(k|k-1)
     × [c_N(k-1) - e^f(k|k-1)f_N(k-1)/ε^f(k-1)]                 4        (T8.4)
γ(k) = γ₊(k)/ϑ(k)                                               1        (T8.5)
ζ(k-1) = γ(k-1)f_N(k-1)                                         1        (T8.6)
ξ(k) = e^b(k|k-1) - ε^b(k-1)c_N(k-1) + ζ(k-1)e^f(k|k-1)         2        (T8.7)
ξ_c(k) = ξ(k)/[1 + ρ(1 - γ(k)) + ρζ²(k-1)(1 - γ(k-1))]          5        (T8.8)
e_c^f(k|k-1) = e^f(k|k-1) - ρξ_c(k)ζ(k-1)[1 - γ(k-1)]           3        (T8.9)
ε_c^f(k) = ε^f(k-1) + γ(k-1)[e_c^f(k|k-1)]²                     3        (T8.10)
e_c^b(k|k-1) = e^b(k|k-1) - ρξ_c(k)[1 - γ(k)]                   2        (T8.11)
ε_c^b(k) = ε^b(k-1) + γ(k)[e_c^b(k|k-1)]²                       3        (T8.12)
[m(k); μ(k)] = [0; c(k-1)]
             - (e^f(k|k-1)/ε^f(k-1))·[-1; f(k-1)]               N+1      (T8.13)
f(k) = f(k-1) + γ(k-1)[e^f(k|k-1) + ρξ_c(k)ζ(k-1)]c(k-1)        N+3      (T8.14)
c(k) = m(k) + μ(k)b(k-1)                                        N        (T8.15)
b(k) = b(k-1) + γ(k)[e^b(k|k-1) + ρξ_c(k)]c(k)                  N+2      (T8.16)

Joint process extension:
e(k|k-1) = y(k) - ĥ^T(k-1)x(k)                                  N        (T8.17)
e(k|k) = γ(k)e(k|k-1)                                           1        (T8.18)
ĥ(k) = ĥ(k-1) + c(k)e(k|k)                                      N        (T8.19)
[γ(k)]⁻¹ - 1 ≈ 1 - γ(k), although it is valid only for γ(k) ≈ 1, whereas in general only 0 < γ(k) ≤ 1 holds. As long as ξ(k) = 0, there holds Δf(k) = 0, Δb(k) = 0, ε_c^f(k) = ε^f(k) and ε_c^b(k) = ε^b(k). In this case the CFTF algorithm is identical with the GFTF algorithm and supplies exactly the same estimate values as the original RLS algorithm. However, as soon as ξ(k) ≠ 0, the additional corrective interventions will cause deviations from the optimum RLS estimate. For practical application this difference should be only of minor significance.

SFTF algorithm

In 1985, Ljung and Ljung proposed in [9] to analyze the propagation of rounding errors in the state space. In this context, an FRLS algorithm can be described as a time-discrete, nonlinear, dynamic system which can be examined for its stability. Unfortunately, a general stability analysis is extremely difficult to perform because of the high complexity of such a nonlinear system. The stability analysis is less problematic for the appropriately linearized system. For derivation of the stabilized fast transversal filter (SFTF) algorithm, Benallal and Gilloire [2] first linearized the GFTF algorithm in 1988 and found that this linearized dynamic system is unstable. The basic conception of the SFTF algorithm is to intervene correctively, like a control loop, in the original GFTF algorithm such that the linearized dynamic system becomes stable. For this purpose, we first devise a measuring quantity ξ(k) which is proportional to the rounding error. To obtain ξ(k), we calculate the backward error twice, once as done for the FTF algorithm (T3.7a), a second time as in the case of the GFTF type (T5.7), and then form the difference

ξ(k) = e^b(k|k-1) - μ(k)ε^b(k-1).  (T9.8)
This is exactly the same rounding error measuring quantity as that for the CFTF algorithm, but it is used differently for the SFTF type. The product of the error quantity ξ(k) and a control variable η supplies an additive correction value for the
backward error e^b(k|k-1). In the SFTF algorithm, we introduce three constant control variables η_γ, η_ε and η_b which have an additive effect on γ(k), ε^b(k) and b(k) (see (T9.12), (T9.14), (T9.15)) via the backward error (see (T9.9), (T9.10), (T9.11)) if ξ(k) ≠ 0. Calculation of the correct values for η_γ, η_ε and η_b is quite difficult. In [2] suitable values were determined empirically, and two sets of control variables, η_γ = η_ε = 0, η_b = 1 and η_γ = η_ε = η_b = 1, were finally stated. The following relationship exists between the SFTF and GFTF algorithms: (T9.1)-(T9.7), (T9.13), (T9.16) and (T9.17) of the SFTF algorithm correspond exactly to the GFTF algorithm (Table 5). Equations (T9.8)-(T9.11) describe additional extensions for generating corrected quantities which are included in (T9.12), (T9.14) and (T9.15). For the case η_γ = η_ε = η_b = -1 the SFTF algorithm is identical with the GFTF type. An extension of this stabilizing concept for a forgetting factor λ ≤ 1 was published by Slock and Kailath [13] in 1991. Their new algorithm starts from the same stabilizing principle as the SFTF algorithm, but uses six control variables K₁ to K₆ to correct the quantities b(k), e^b(k), γ(k) and μ(k). Unfortunately, it has not been clarified yet how to choose K₁ to K₆ to guarantee stability for arbitrary input signals. If (T9.12) is replaced by γ(k) = [1 + c^T(k)x(k)]⁻¹, the variables K₁ to K₆ can be chosen for λ = 1 such that the algorithm by Slock and Kailath is identical with the SFTF algorithm in Table 9.
Table 9
Stabilized fast transversal filter algorithm

SFTF algorithm (8N+17 MADPR)

Initialization:
f(0) = b(0) = c(0) = ĥ(0) = x(0) = 0, γ(0) = 1
ε^f(0) = ε^b(0) = δ, small positive constant
η_γ, η_ε, η_b suitably selected

For k = 1 to k_final do:                                            MADPR

e^f(k|k-1) = x(k) - f^T(k-1) x(k-1)                                 N      (T9.1)
e^f(k|k) = γ(k-1) e^f(k|k-1)                                        1      (T9.2)
f(k) = f(k-1) + c(k-1) e^f(k|k)                                     N      (T9.3)
ε^f(k) = ε^f(k-1) + e^f(k|k-1) e^f(k|k)                             1      (T9.4)
γ⁺(k) = γ(k-1) ε^f(k-1) / ε^f(k)                                    2      (T9.5)
[c̃(k); μ(k)] = [0; c(k-1)] + (e^f(k|k-1)/ε^f(k-1)) [1; -f(k-1)]    N+1    (T9.6)
e^b(k|k-1) = x(k-N) - b^T(k-1) x(k)                                 N      (T9.7)
ξ(k) = e^b(k|k-1) - μ(k) ε^b(k-1)                                   1      (T9.8)
e^γ(k|k-1) = e^b(k|k-1) + η_γ ξ(k)                                  1      (T9.9)
e^ε(k|k-1) = e^b(k|k-1) + η_ε ξ(k)                                  1      (T9.10)
e^β(k|k-1) = e^b(k|k-1) + η_b ξ(k)                                  1      (T9.11)
γ(k) = [1 - γ⁺(k) μ(k) e^γ(k|k-1)]^{-1} γ⁺(k)                       3      (T9.12)
c(k) = c̃(k) + μ(k) b(k-1)                                          N      (T9.13)
ε^b(k) = ε^b(k-1) + γ(k) [e^ε(k|k-1)]²                              3      (T9.14)
b(k) = b(k-1) + c(k) γ(k) e^β(k|k-1)                                N+1    (T9.15)

Joint process extension:
e(k|k-1) = y(k) - ĥ^T(k-1) x(k)                                     N      (T9.16)
ĥ(k) = ĥ(k-1) + c(k) γ(k) e(k|k-1)                                  N+1    (T9.17)
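The recursion of Table 9 can be turned into a compact per-sample routine. The following Python/NumPy sketch is our own reading of the algorithm for λ = 1, not the authors' FORTRAN 77 implementation; the function names, the dictionary state layout and the test parameters are assumptions. The extended-gain partition follows the FAEST-style dual gain c(k) = R^{-1}(k-1) x(k), so that the Kalman gain is c(k) γ(k):

```python
import numpy as np

def sftf_init(N, delta):
    """State for one SFTF recursion (Table 9 structure, lambda = 1)."""
    return dict(f=np.zeros(N), b=np.zeros(N), c=np.zeros(N),
                h=np.zeros(N), x=np.zeros(N),   # x = [x(k-1), ..., x(k-N)]
                eps_f=delta, eps_b=delta, gamma=1.0)

def sftf_step(s, x_k, y_k, eta=(1.0, 1.0, 1.0)):
    """One iteration; eta = (eta_gamma, eta_eps, eta_b)."""
    eta_g, eta_e, eta_b = eta
    x_prev = s["x"]                                   # x(k-1) ... x(k-N)
    f_old, eps_f_old = s["f"].copy(), s["eps_f"]

    ef_pri = x_k - f_old @ x_prev                     # (T9.1)
    ef_post = s["gamma"] * ef_pri                     # (T9.2)
    s["f"] = f_old + s["c"] * ef_post                 # (T9.3)
    s["eps_f"] = eps_f_old + ef_pri * ef_post         # (T9.4)
    gamma_p = s["gamma"] * eps_f_old / s["eps_f"]     # (T9.5)

    ext = np.concatenate(([0.0], s["c"])) \
        + (ef_pri / eps_f_old) * np.concatenate(([1.0], -f_old))  # (T9.6)
    c_tilde, mu = ext[:-1], ext[-1]

    x_now = np.concatenate(([x_k], x_prev[:-1]))      # x(k) ... x(k-N+1)
    eb_pri = x_prev[-1] - s["b"] @ x_now              # (T9.7)
    xi = eb_pri - mu * s["eps_b"]                     # (T9.8) error measure
    eb_g = eb_pri + eta_g * xi                        # (T9.9)
    eb_e = eb_pri + eta_e * xi                        # (T9.10)
    eb_b = eb_pri + eta_b * xi                        # (T9.11)

    s["gamma"] = gamma_p / (1.0 - gamma_p * mu * eb_g)  # (T9.12)
    s["c"] = c_tilde + mu * s["b"]                    # (T9.13)
    s["eps_b"] += s["gamma"] * eb_e**2                # (T9.14)
    s["b"] = s["b"] + s["c"] * s["gamma"] * eb_b      # (T9.15)

    e_pri = y_k - s["h"] @ x_now                      # (T9.16)
    s["h"] = s["h"] + s["c"] * s["gamma"] * e_pri     # (T9.17)

    s["x"] = x_now
    return e_pri
```

With eta = (-1, -1, -1) the corrected errors collapse to μ(k) ε^b(k-1), i.e. the GFTF computation; in double precision and over short runs all η settings produce essentially identical estimates, in line with the theoretical equivalence stated above.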
3. Computer simulations

The previously described (F)RLS algorithms (Tables 1 to 9) were simulated on a VAX computer with single-precision REAL arithmetic in FORTRAN 77 in order to investigate the typical convergence and stability behaviour. The simulations are based on the structure depicted in Fig. 1. The unknown system according to (4) contains a room impulse response with N = 1024 coefficients. The adjustable system according to (5) also has 1024 coefficients.
Such high and even higher system orders are typical of applications in acoustic echo cancellers. For elucidation of the convergence behaviour and of the numerical stability, the relative norm of parameter errors

P(k) = 10 log { [h - ĥ(k)]^T [h - ĥ(k)] / (h^T h) }  in dB    (12)
has been represented versus the time axis. To allow a better distinction between the individual curves P(k) in the superposed representation chosen here, they have been smoothed to some extent. The abscissa label k denotes the number of iteration steps. For the time notation in seconds there holds the relation t = k/f_s with the sampling frequency f_s = 8 kHz. In all cases 10^6 iterations were simulated, but for the sake of clarity only 32 000 steps (4 sec) are represented in Figs. 2-4.
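The error measure (12) is straightforward to compute. The helper below is our own formulation for illustration, not code from the paper; the function name and test vectors are hypothetical:

```python
import numpy as np

def parameter_error_db(h_true, h_est):
    """Relative norm of parameter errors, eq. (12):
    P(k) = 10 log10( (h - h_hat)^T (h - h_hat) / (h^T h) ) in dB."""
    h = np.asarray(h_true, dtype=float)
    d = h - np.asarray(h_est, dtype=float)
    return 10.0 * np.log10((d @ d) / (h @ h))

# Before adaptation (h_hat = 0) the relative error norm is 1, i.e. 0 dB;
# shrinking every coefficient error by a factor 10 lowers P(k) by 20 dB.
print(parameter_error_db([1.0, 0.0], [0.0, 0.0]))  # -> 0.0
print(parameter_error_db([1.0, 0.0], [0.9, 0.0]))  # close to -20 dB
```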
First example
The exciting signal x(k) is a speech signal. This is the normal case in practical operation for an identification algorithm in an acoustic echo canceller. Since in the case of speech excitation the curve of P(k) and possibly also the stability depend strongly on the instantaneous speech level, the signal x(k) is additionally represented in this first example (Fig. 2(b)). It might be supposed that, in the presence of speech excitation, a numerical instability is caused by an ill-conditioned signal x(k). Therefore, the RLS algorithm (Table 1) was also simulated for comparison. In the event of insufficient speech excitation, the RLS algorithm and with it all other FRLS algorithms would have to become unstable. But as the RLS algorithm does not become unstable, the signal x(k) is obviously not
[Figure 2: curves of P(k)/dB versus k (0 to 32 000) and t/s (0 to 4); labelled traces include SFTF(001) and SFTF(111).]
Fig. 2(a). Relative norm of parameter errors with speech excitation, δ = 10^-4. (b) Exciting speech signal in the first example.
[Figure 3: curves of P(k)/dB versus k (0 to 32 000) and t/s (0 to 4); labelled traces include SFTF(001) and SFTF(111).]
Fig. 3. Relative norm of parameter errors with white noise excitation, δ = 10^-4.
[Figure 4: curves of P(k)/dB versus k (0 to 32 000) and t/s (0 to 4); labelled traces include SFTF(111).]
Fig. 4. Relative norm of parameter errors with white noise excitation, δ = 2 × 10^-6.
responsible for the instability of the other algorithms.
All FRLS algorithms with correction measures still require suitable positive constants, which have the following values in all three examples:
- at the restart with CFK, δ = 1,
- the weighting factor for CFTF is ρ = 0.05,
- the control variables for the SFTF(001) are η_γ = 0, η_ε = 0, η_b = 1 and
- the control variables for the SFTF(111) are η_γ = 1, η_ε = 1, η_b = 1.
The initialization factor at the start in the first example is always δ = 10^-4.

Second example
The exciting signal x(k) in the second example is a stationary white random noise signal with a normal distribution. Its purpose is to demonstrate the convergence and stability behaviour in the case of optimum excitation under the same start conditions as in the first example, because there is no doubt that this signal x(k) is persistently exciting. Nevertheless, the RLS algorithm was also included here in the simulation in order to show that the RLS and FRLS estimates agree at each sampling instant as long as no instability occurs. The appropriate constants of the FRLS algorithms with correction measures have the same values as in the first example. Likewise the initialization is always δ = 10^-4.

Third example
The exciting signal x(k) in the third example is exactly the same white random noise signal as in the second one. Also, all constants of the FRLS algorithms with correction measures are the same as in the second example. The third example differs from the second only in the choice of a still more critical initialization factor δ = 2 × 10^-6. On the one hand, this is to demonstrate the strong influence of δ; on the other, it is intended to show that some FRLS algorithms which exhibited stable behaviour in the second example are not stable under all circumstances.

Results of simulations

a. FRLS algorithms in general
The previously described examples and other simulations not represented here basically confirm the known characteristics of all FRLS algorithms:
- The convergence speed of P(k) is already quite high and the achievable value low for (F)RLS algorithms with speech signal excitation, but even better values are reached with white noise (cf. Examples 1 and 2).
- The convergence speed increases with decreasing initialization factor δ (cf. Examples 2 and 3).
- As long as an FRLS algorithm is stable, it supplies exactly the same estimate values as the original RLS algorithm.

b. FRLS algorithms without stabilization measures
All investigated exact FRLS algorithms without stabilization measures can become unstable sooner or later. The examples indicate the following characteristics:
- Using white noise as the source of excitation, FRLS algorithms remain stable for a longer time than with speech signals (cf. Examples 1 and 2), because in the event of speech excitation the numerical conditions within an algorithm may temporarily become unpredictably unfavourable.
- The numerical stability depends decisively on the initialization factor δ. The smaller we choose δ, the earlier numerical instability can occur (cf. Examples 2 and 3).
- A regular order in which the different FRLS algorithms become unstable cannot be recognized. However, the simulation results show the tendency that an FRLS algorithm is the more prone to instability, the fewer MADPR are needed (cf. Example 1: FAEST, GFTF, FTF, FK; Example 2: FAEST, GFTF, FTF, FK; Example 3: GFTF, FAEST, FTF). Apparently, root calculations are particularly critical from the numerical point of view. The NFTF algorithm, the only one in this comparison with root calculations, fails immediately if a radicand becomes negative as a result of rounding errors.

c. FRLS algorithms with stabilization measures
The additional stabilization measures proved to be very effective for the CFTF and CFK algorithms and, with some limitations, also for the SFTF type.
- In the simulations the CFTF algorithm was always found to be stable. A deviation from the P(k) curve of the RLS was not noticed (cf. Examples 1, 2 and 3).
- The CFK algorithm succeeded in preventing imminent instabilities by restarts. Within the simulation period of 10^6 iteration steps, only one restart occurred in Example 1, at k = 11 071 (t ≈ 1.38 sec); in Example 2 no restart was necessary, and in Example 3 one restart happened at k = 1515 (t ≈ 0.19 sec). Due to the unwindowed restart, nothing special was noticed in the error signal e(k|k-1). If e(k|k-1) were an acoustic signal, as, for instance, in the acoustic echo canceller, the restart would have been inaudible.
- Although the SFTF(1,1,1) algorithm with the correction parameters η_γ = η_ε = η_b = 1 is stable, in the examined examples it reaches after N sampling steps a kind of 'locking state' in which the adaptation process stops.
- An exception in this group of stabilizing algorithms is the SFTF(0,0,1) type. The correction parameters η_γ = 0, η_ε = 0, η_b = 1 were empirically determined in [2] by means of a system of 32nd order. They are obviously totally unsuitable for a system of over 1000th order with the chosen small start parameters δ, as they caused, in all examples, an abrupt instability after about N iteration steps.
4. Conclusions

The exact FRLS algorithms without stabilization measures investigated in this paper (FK, FTF, FAEST, GFTF and NFTF) can be mathematically exactly transformed into each other, and all algorithms supply, except for rounding errors, totally identical estimate values for e(k|k-1) and ĥ(k) in each iteration step. All estimate values are, in addition, identical with the RLS estimates and thus optimal in terms of the least error square sum (6). But unfortunately Ljung's work [8] indicated as early as 1983 that, because of the limited computing accuracy in the underlying difference equations, no exact FRLS algorithm in normalized or unnormalized form can be stable for ever and under all circumstances. The simulations in Section 3 confirm this supposition at least for the algorithms investigated here. More promising, on the other hand, are additional measures which intervene in the algorithm with the aim of achieving a stabilizing effect. The stabilizing methods for the CFK, CFTF and SFTF algorithms discussed in this paper proved to be quite effective in principle. But the few tests with a maximum of 10^6 iteration steps do not allow a generally applicable conclusion about long-term stability under all circumstances. The additional interventions may cause more or less pronounced deviations from the exact least squares estimate. In particular, the SFTF algorithm has a tendency to the locking effect. On the other hand, the deviations occurring with the CFK and CFTF algorithms can be regarded as negligible. Unfortunately, the improved stability is obtained at the cost of a higher computing effort.
References
[1] S.T. Alexander, Adaptive Signal Processing: Theory and Applications, Springer, Berlin, 1986.
[2] A. Benallal and A. Gilloire, "A new method to stabilize fast RLS algorithms based on a first order model of the propagation of numerical errors", Proc. Internat. Conf. Acoust. Speech Signal Process., 1988, pp. 1373-1376.
[3] J.L. Botto and G.V. Moustakides, Stabilization of fast recursive least-squares transversal filters, INRIA-IRISA Rapports de Recherche, No. 570, September 1986.
[4] G. Carayannis, D.G. Manolakis and N. Kalouptsidis, "A fast sequential algorithm for least-squares filtering and prediction", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-31, December 1983, pp. 1394-1402.