Order-Recursive Least-Squares Adaptive Algorithms — A Unified Framework


Fuyun Ling
Codex Corp., Mansfield, MA 02048, USA

ABSTRACT. This paper provides a unified framework for existing least-squares adaptive filtering and estimation algorithms that are both time- and order-recursive (TORLS). It is shown that these algorithms can be derived from the decomposition principle of LS estimation. They can all be realized using only two types of basic processing cells, each of which implements a simple set of scalar operations. Thus the TORLS algorithms are suitable for systolic array implementation. We also show that the various existing TORLS algorithms can be investigated by exploring the variations of these basic cells and three TORLS algorithm structures. By doing so, we reveal the relationship between the LS estimation algorithm based on the Givens rotation and other TORLS algorithms, including the LS lattice algorithm. A new LS lattice algorithm based on Givens rotations is derived in this paper.

KEY WORDS. Adaptive systems; order recursive; time recursive; least squares estimation; systolic arrays.

I. INTRODUCTION

Adaptive filtering based on the least-squares (LS) criterion has attracted much attention in the past two decades because of its broad spectrum of applications. Much of the research has been devoted to developing computationally efficient and numerically robust LS algorithms. The time recursiveness of various LS algorithms has been another focus of research because of its practical usefulness in many applications. Time-recursive LS algorithms can be divided into two categories: fixed-order LS algorithms and order-recursive LS algorithms. The LS lattice algorithm [1], the multichannel LS lattice algorithm with sequential processing stages [2], the time-recursive LS Gram-Schmidt algorithm [3] and the algorithms based on the Givens rotation [4] are examples of time- and order-recursive least-squares (TORLS) adaptive algorithms. The fixed-order time-recursive LS algorithms include the time-recursive LS (RLS or Kalman) algorithm [5], the square-root RLS algorithm [6] and the fast RLS-type algorithms [7,8,9]. Unlike the fixed-order algorithms, which have a predetermined order, the order-recursive LS algorithms compute all LS estimates of order 1 through N, where N is the predetermined maximum order of the estimator. In addition to the flexibility of changing the order of estimation in real time, the order-recursive algorithms have, in general, good numerical properties. This has been demonstrated experimentally in [10]. The numerical properties of the TORLS algorithms can be further improved for fixed-point computer implementation by using their normalized, or square-root, forms.

TORLS adaptive algorithms have been investigated quite intensively but individually. Our research shows that all the TORLS algorithms are closely related. By exploring the relationships among these algorithms, it is possible to derive them under a unified framework. In this paper we provide a novel unified derivation of these algorithms based on the decomposition of LS estimation. Similar to the geometric approach [1,9] for the derivation of LS estimation algorithms, the decomposition method provides insight into these algorithms, but does not require a sophisticated mathematical background to understand. Furthermore, we show that it is possible to obtain new order-recursive algorithms by exploring the relationships between existing algorithms. As an example, we derive a new LS lattice algorithm based on the Givens rotation. The simplicity and advantage of the unified approach given in this paper can be appreciated from this example.

II. DECOMPOSITION OF LS ESTIMATION
A. Notation. A vector and a scalar are represented by an underlined and a non-underlined character, respectively. The complex conjugate of a quantity is denoted by a star (*). The transpose of a matrix or a vector is denoted by an apostrophe (') and the complex conjugate transpose (Hermitian) by a superscript H.

B. Model. Assume that we have an N-dimensional vector data sequence, $\{\underline{x}(k)\}$, and a desired scalar signal sequence, $\{z(k)\}$, where k = 1, 2, ..., n. We need to determine a coefficient vector, $\underline{c}$, which minimizes the sum of exponentially weighted squared errors defined by:

$$\mathcal{E}(n) = \sum_{k=1}^{n} w^{n-k} \left| e_x^z(k,n) \right|^2 \qquad (1)$$

$$e_x^z(k,n) = z(k) - \underline{c}'(n)\,\underline{x}(k) \qquad (2)$$

where $e_x^z(k,n)$ is the error sequence of z(k) based on $\underline{x}(k)$ at time n, and $0 < w \le 1$ is the exponential weighting factor.
The optimal coefficient vector is given by

$$\underline{c}(n) = \mathbf{R}_x^{-1}(n)\,\underline{r}_{xz}(n) \qquad (3a)$$

where

$$\mathbf{R}_x(n) = \sum_{k=1}^{n} w^{n-k}\,\underline{x}(k)\,\underline{x}^H(k), \qquad \underline{r}_{xz}(n) = \sum_{k=1}^{n} w^{n-k}\,\underline{x}(k)\,z^*(k) \qquad (3b)$$

$\mathbf{R}_x(n)$ and $\underline{r}_{xz}(n)$ in (3) are the covariance matrix of $\underline{x}(k)$ and the cross-correlation vector between $\underline{x}(k)$ and $z(k)$, respectively.
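As a concrete illustration of (1)-(3), the sketch below forms $\mathbf{R}_x(n)$ and $\underline{r}_{xz}(n)$ explicitly and solves the normal equations. This is our own minimal example for real-valued data (the function name and test signals are hypothetical, not from the paper); the TORLS algorithms derived below compute the same error without this O(N^3) direct solve.

```python
import numpy as np

def ew_least_squares(X, z, w=0.99):
    """Direct exponentially weighted LS, eqs. (1)-(3), real-valued data.

    X : (n, N) array whose k-th row is the data vector x(k)'.
    z : (n,)  array of desired-signal samples z(k).
    w : exponential weighting factor, 0 < w <= 1.
    """
    n = len(z)
    g = w ** np.arange(n - 1, -1, -1)   # weights w^{n-k} for k = 1..n
    R = (X * g[:, None]).T @ X          # R_x(n)  = sum_k w^{n-k} x(k) x'(k)
    r = (X * g[:, None]).T @ z          # r_xz(n) = sum_k w^{n-k} x(k) z(k)
    c = np.linalg.solve(R, r)           # c(n) = R_x^{-1}(n) r_xz(n), eq. (3a)
    return c, z - X @ c                 # coefficients and error sequence

# usage: recover c = [0.5, -0.2] from noisy observations
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
z = X @ np.array([0.5, -0.2]) + 0.01 * rng.standard_normal(200)
c, e = ew_least_squares(X, z)
```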


In some practical applications, it is desirable to obtain the coefficient vector $\underline{c}(n)$ at each time n. In other applications the desired quantity is $e_x^z(n,n)$, which is the last error in the error sequence $\{e_x^z(k,n)\}$, and $\underline{c}(n)$ does not have to be computed explicitly. We call the LS algorithms that compute $\underline{c}(n)$ coefficient-estimation oriented. Most of the fixed-order LS algorithms belong to this type. On the other hand, we call the algorithms that only compute the LS error error-estimation oriented. All of the TORLS adaptive algorithms are error-estimation oriented.

C. Methods for Derivation of LS Adaptive Algorithms. The TORLS algorithms, or more generally any LS algorithm, can be derived using either an algebraic approach [11,12], which is based on direct matrix manipulations, or a method using the geometric concepts of linear vector spaces [1,9]. The latter approach is a natural consequence of the equivalence between the basic geometric principle of orthogonalization and the LS criterion. Suppose we consider the errors $e_x^z(k,n)$ in (1) as forming an "error vector". It is easy to show that the error vector is of minimal norm if and only if it is orthogonal to the data subspace, which is spanned by the vectors formed by the elements in $\underline{x}(k)$. To minimize the Euclidean norm of the error vector is equivalent to finding the LS solution of the linear estimation problem. In turn, this is equivalent to finding the vector that is orthogonal to the data subspace. The geometric approach is especially suitable for deriving error-estimation oriented LS algorithms, as is the case for the TORLS algorithms. Every quantity in these algorithms has a very clear geometric meaning. However, we have noticed that many in the engineering community encounter some difficulty in understanding this advantageous approach. First, in order to apply the geometric approach, a good understanding of the concept of Hilbert space is necessary. This is not such a difficult task when dealing with order updates in the LS algorithms, in which we are only concerned with a fixed-dimensional vector space. However, the difficulty appears in the time updates, where we have to deal with a vector space of growing dimension. Secondly, because an LS estimation algorithm is essentially a set of algebraic equations, a reverse translation process is required to obtain the algorithm from its geometric interpretation.

In this paper, we provide an alternative approach, which was first established in [11]. We demonstrate that a TORLS algorithm can always be derived from the point of view of partitioning an N-dimensional vector time-recursive LS estimation problem into a series of scalar time-recursive LS estimation problems. As a result, time recursiveness can be easily incorporated into any given LS problem by using just a few simple algebraic theorems. Actually, this method can be considered as an algebraic interpretation of the geometric method. When combined with the algebraic interpretation, the geometric approach becomes a powerful analysis tool that is easier to understand. The theorems used in the derivation are given below. The proofs of these theorems are not presented here; interested readers are referred to [11] for further details.

D. Useful Theorems.

Theorem 1: Suppose we decompose the vector $\underline{x}(k)$ into two vectors $\underline{y}(k)$ and $\underline{\bar{x}}(k)$ of smaller dimensions, i.e.,

$$\underline{x}(k) = [\underline{y}'(k), \underline{\bar{x}}'(k)]' \qquad (4)$$

Then the LS error of z(k) based on $\underline{x}(k)$ can be computed as

$$e_x^z(k,n) = e_y^z(k,n) - \underline{k}^{zH}(n)\,\underline{e}_y^{\bar{x}}(k,n) \qquad (5)$$

where $e_y^z(k,n)$ and $\underline{e}_y^{\bar{x}}(k,n)$ are the LS errors of $z(k)$ and $\underline{\bar{x}}(k)$, respectively, based on $\underline{y}(k)$, and

$$\underline{k}^z(n) = [\mathbf{R}_y^{\bar{x}}(n)]^{-1}\,\underline{r}_y^{\bar{x}z}(n) \qquad (6)$$

with

$$\underline{r}_y^{\bar{x}z}(n) = \sum_{k=0}^{n} w^{n-k}\,\underline{e}_y^{\bar{x}}(k,n)\,e_y^{zH}(k,n) \qquad (7)$$

$$\mathbf{R}_y^{\bar{x}}(n) = \sum_{k=0}^{n} w^{n-k}\,\underline{e}_y^{\bar{x}}(k,n)\,\underline{e}_y^{\bar{x}H}(k,n) \qquad (8)$$
Remarks: 1. This is the central theorem for the decomposition of an LS estimation problem. It states that LS estimation based on a high-dimensional vector can be decomposed into LS estimations based on lower-dimensional vectors. 2. The essence of this theorem is the same as what was stated previously in geometric language, such as in [1] and [9], as an application of the Pythagorean theorem to LS estimation. The statement of the theorem is new to the author's knowledge. 3. Although the correlation between the LS error sequences in Theorem 1 is expressed as a sum of all the products of the two LS residual errors from k=0 to k=n, we can compute these correlations time-recursively using only the most recent errors, $\underline{e}_y^{\bar{x}}(n,n)$ and $e_y^z(n,n)$, in the error sequences. The time recursion is stated by the following theorem.

Theorem 2: The time-update equation of $\underline{r}_y^{\bar{x}z}(n)$ is as follows:

$$\underline{r}_y^{\bar{x}z}(n) = w\,\underline{r}_y^{\bar{x}z}(n-1) + \underline{e}_y^{\bar{x}}(n,n)\,e_y^{z*}(n,n)\,/\,\alpha_y(n) \qquad (9)$$

where $\alpha_y(n)$ is a scalar defined by:

$$\alpha_y(n) = 1 - \underline{y}^H(n)\,\mathbf{R}_y^{-1}(n)\,\underline{y}(n) \qquad (10)$$

Remarks: 1. The time recursion of $\mathbf{R}_y^{\bar{x}}(n)$ can be obtained in the same way. 2. $\alpha_y(n)$ plays an important role in the time recursion of the correlations. If we set $\alpha_y(n)$ equal to 1, the algorithm will only perform an approximate LS estimation, which is an LS estimation only in an asymptotic sense (n → ∞). $\alpha_y(n)$ is also called the likelihood factor [1].

Theorem 3: $\alpha_x(n)$ can be computed order-recursively, such that if

$$\alpha_x(n) = 1 - \underline{x}^H(n)\,\mathbf{R}_x^{-1}(n)\,\underline{x}(n)$$

then

$$\alpha_x(n) = \alpha_y(n) - \underline{e}_y^{\bar{x}H}(n,n)\,[\mathbf{R}_y^{\bar{x}}(n)]^{-1}\,\underline{e}_y^{\bar{x}}(n,n) \qquad (11)$$

Theorems 1 through 3 provide the basic equations for the realization of a TORLS algorithm. We can implement Eqs. (5), (6) and (9) using one type of processing cell, and implement Eq. (11) and the time updates of the auto- and cross-correlations using another type of cell, as is shown in Figure 1. By using these two types of processing cells as building blocks, all the TORLS algorithms can be implemented in a structured way suitable for systolic array realization.
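In time-recursive form the two cells of Figure 1 reduce to a few scalar operations each. The following sketch is our own rendering for real-valued data; the class names and the small-eps initialization (a common regularization for the startup period) are ours, not the paper's.

```python
class EstimationCell:
    """Type-I cell: order/time update of a scalar LS stage, eqs. (5), (6), (9)."""
    def __init__(self, w=0.99, eps=1e-8):
        self.w, self.r_bz, self.r_bb = w, 0.0, eps

    def update(self, e_b, e_z, alpha):
        # time updates of the correlations, eq. (9)
        self.r_bb = self.w * self.r_bb + e_b * e_b / alpha
        self.r_bz = self.w * self.r_bz + e_b * e_z / alpha
        k = self.r_bz / self.r_bb       # estimation coefficient, eq. (6)
        return e_z - k * e_b            # order-updated error, eq. (5)


class LikelihoodCell:
    """Type-II cell: order update of the likelihood factor, eq. (11), scalar form."""
    @staticmethod
    def update(alpha, e_b, r_bb):
        return alpha - e_b * e_b / r_bb
```

Cascading these two cells in the patterns of Sections III.A-III.C below yields the RMGS, single-channel lattice, and multichannel lattice algorithms.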

In the next section we show how to use these theorems to derive TORLS algorithms.

III. DERIVATION OF TORLS ADAPTIVE ALGORITHMS

A. The LS Estimation Algorithm without Constraints - The Time-Recursive Modified Gram-Schmidt (RMGS) Algorithm

The data vector $\underline{x}(k)$ can always be represented in the following form:

$$\underline{x}(k) = [x_1(k), x_2(k), \ldots, x_N(k)]' \qquad (12)$$

We then define a set of data vectors of dimension from 1 through N, such that

$$\underline{X}_m(k) = [x_1(k), x_2(k), \ldots, x_m(k)]', \qquad (1 \le m \le N) \qquad (13)$$

As is shown below, the LS errors $e_m^z(n,n)$ of z(n) based on $\underline{X}_m(n)$ can be computed order-recursively; $e_N^z(n,n)$ is the desired LS estimation error. For m = 1, $e_1^z(n,n)$ can be computed directly, such that

$$e_1^z(n,n) = z(n) - k_1^z(n)\,x_1(n) \qquad (14)$$

where

$$k_1^z(n) = r_1^z(n)/r_{11}(n) \qquad (15)$$

and $r_1^z(n)$ and $r_{11}(n)$ are the exponentially weighted cross-correlation between $x_1(k)$ and $z(k)$ and the autocorrelation of $x_1(k)$, respectively (16). The N-1 other errors $e_{i1}(n,n)$, i = 2, ..., N, the LS errors of $x_i(n)$ based on $x_1(n)$, are computed similarly for later use. By using Theorem 1 given above we can compute the LS error of z(n) based on $\underline{X}_2(n)$, $e_2^z(n,n)$, from $e_1^z(n,n)$ and $e_{21}(n,n)$. The LS errors of $x_i(n)$, i = 3, ..., N, based on $\underline{X}_2(n)$, denoted by $e_{i2}(n,n)$, are also computed similarly. In general, at stage m, $e_m^z(n,n)$, the LS error of z(n) based on $\underline{X}_m(n)$, can be computed using Theorem 1, such that

$$e_m^z(n,n) = e_{m-1}^z(n,n) - k_m^z(n)\,e_{m,m-1}(n,n) \qquad (17)$$

where

$$k_m^z(n) = r_m^z(n)/r_{mm}(n) \qquad (18)$$

$r_m^z(n)$ and $r_{mm}(n)$ in (18) can be computed using Theorem 2.

They are computed time-recursively as

$$r_m^z(n) = w\,r_m^z(n-1) + e_{m,m-1}(n,n)\,e_{m-1}^{z*}(n,n)/u_m(n) \qquad (19)$$

$$r_{mm}(n) = w\,r_{mm}(n-1) + |e_{m,m-1}(n,n)|^2/u_m(n) \qquad (20)$$

where $u_m(n)$ is the likelihood factor associated with the stage. According to Theorem 3, $u_m(n)$ can be computed order-recursively as

$$u_m(n) = u_{m-1}(n) - |e_{m,m-1}(n,n)|^2 / r_{mm}(n) \qquad (21)$$

The LS errors of $x_i(n)$ based on $\underline{X}_m(n)$, $e_{im}(n,n)$, i = m+1, ..., N, are computed as

$$e_{im}(n,n) = e_{i,m-1}(n,n) - k_{im}(n)\,e_{m,m-1}(n,n) \qquad (22)$$

where

$$r_{im}(n) = w\,r_{im}(n-1) + e_{m,m-1}(n,n)\,e_{im}^{*}(n,n)/u_m(n) \qquad (23)$$

and $k_{im}(n) = r_{im}(n)/r_{mm}(n)$, analogous to (18).

The errors $e_{im}(n,n)$ are used in stage m+1. An implementation of the RMGS algorithm using the basic processing cells is depicted in Figure 2. The RMGS algorithm is an O(N^2) algorithm. It is not as efficient as the algorithms discussed below. However, since it does not impose any special requirements on the data vector, it is the most versatile and is useful for applications such as spatial signal processing.
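For reference, here is a compact sketch of the complete RMGS recursion (17)-(23) for real-valued data, folding the Type-I and Type-II cell operations into one loop. The class name, eps regularization and zero-based indexing are our own choices, not the paper's listing.

```python
class RMGS:
    """Sketch of the time-recursive modified Gram-Schmidt algorithm, eqs. (17)-(23)."""
    def __init__(self, N, w=0.99, eps=1e-8):
        self.N, self.w = N, w
        # r[m][i]: correlation of the stage-m reference error with the error of
        # x_{i+1} (or of z when i == N); rd[m] is the autocorrelation r_mm(n).
        self.r = [[0.0] * (N + 1) for _ in range(N)]
        self.rd = [eps] * N

    def step(self, x, z):
        e = list(x) + [z]            # zeroth-order errors: the raw signals
        u = 1.0                      # likelihood factor u_0(n) = 1
        for m in range(self.N):
            eb = e[m]                # reference error e_{m+1,m}(n,n)
            self.rd[m] = self.w * self.rd[m] + eb * eb / u             # eq. (20)
            for i in range(m + 1, self.N + 1):
                self.r[m][i] = self.w * self.r[m][i] + eb * e[i] / u   # (19)/(23)
                e[i] -= (self.r[m][i] / self.rd[m]) * eb               # (17)/(22)
            u -= eb * eb / self.rd[m]                                  # eq. (21)
        return e[self.N]             # e_N^z(n,n), the desired LS error

# usage: one object per filter, one step() call per time instant
# rmgs = RMGS(N=3)
# err = rmgs.step([x1, x2, x3], z)
```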

B. LS Estimation for a Data Vector with the Shifting Property - The Single-Channel LS Lattice Algorithm

In many applications, the data vector at time k has a special structure such that

$$\underline{x}(k) = [x(k), x(k-1), \ldots, x(k-N+1)]' \qquad (24)$$

In words, $\underline{x}(k)$ can be obtained by shifting $\underline{x}(k-1)$, dropping the oldest element and adding a new element. Below we show how to use this shifting property to obtain a computationally more efficient version of the TORLS algorithms. The result is the well-known LS lattice algorithm. We first define a set of data vectors as

$$\underline{X}_m(k) = [x(k), x(k-1), \ldots, x(k-m+1)]', \qquad (1 \le m \le N) \qquad (25)$$

Assume that we have obtained the N LS errors of x(n-m) based on the data vectors $\underline{X}_m(n)$, denoted by $b_m(n,n)$. By using Theorem 1, it is easy to show that one can obtain the LS error of z(n) based on $\underline{X}_N(n)$ order-recursively, as

$$e_{m+1}^z(n,n) = e_m^z(n,n) - k_m^z(n)\,b_m(n,n), \qquad (0 \le m \le N-1) \qquad (26)$$

where

$$k_m^z(n) = r_m^z(n)/r_m^b(n) \qquad (27)$$

According to Theorem 2, $r_m^z(n)$ and $r_m^b(n)$ can be computed time-recursively, in the form of (9), as (28) and (29). Similar to (21), $u_m(n)$ is computed as follows:

$$u_{m+1}(n) = u_m(n) - |b_m(n,n)|^2 / r_m^b(n) \qquad (30)$$

The errors $b_m(n,n)$ are called the backward prediction errors. They can be computed by using the RMGS algorithm given above. However, by exploiting the shifting property of the data vector, the backward errors can be computed more efficiently. First we define another set of LS estimation errors of x(n) based on $\underline{X}_m(n-1)$, called forward prediction errors, $f_m(n,n)$. By applying Theorem 1 to $f_m(n,n)$ and $b_m(n-1,n-1)$, which is the backward error $b_m(n,n)$ delayed by one sampling interval, we can obtain $f_{m+1}(n,n)$ and $b_{m+1}(n,n)$ as

$$f_{m+1}(n,n) = f_m(n,n) - k_m^f(n)\,b_m(n-1,n-1), \qquad (0 \le m \le N-1) \qquad (31)$$

$$b_{m+1}(n,n) = b_m(n-1,n-1) - k_m^b(n)\,f_m(n,n), \qquad (0 \le m \le N-1) \qquad (32)$$

where $k_m^f(n)$ and $k_m^b(n)$ are quotients of the forward/backward error cross-correlations and the corresponding autocorrelations (33); $r_m^{bf}(n)$, $r_m^{fb}(n)$ and $r_m^f(n)$ in (33) are computed time-recursively, in the form of (9), as (34) and (35).

By using the theorems given above, the derivation of these equations is straightforward. However, the limits on the summations in Theorem 1 will change from [0,n] to [-1,n-1] because of the time delay in $b_m(n-1,n-1)$. As a consequence, the theorems will hold only if x(k) is equal to zero for k < 0. This case is called the prewindowed data case. The structure of the prewindowed LS lattice algorithm using the basic processing cells discussed above is given in Figure 3; a code sketch of a complete lattice stage is given at the end of this section.

C. LS Algorithm for a Data Vector with the Block Shifting Property - The Multichannel LS Lattice Algorithm

If the data vector at time n can be obtained by dropping p elements, p > 1, from the data vector at time n-1 and adding p new elements, we say that such data vectors have a block shifting property. The LS estimation algorithm for this type of data vector can be efficiently implemented using a multichannel (p-channel) lattice structure. The algorithm has a form similar to the single-channel lattice. The difference is that the forward and backward errors become p-dimensional vectors instead of scalars, and the correlations are p×1 vectors or p×p matrices. We can further decompose the multichannel lattice stage using an RMGS type of structure, as is shown in [2,11]. A block diagram of a systolic array that implements a multichannel lattice stage is given in Figure 4. Further details on the multichannel LS lattice algorithm can be found in these references.
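To make the single-channel lattice of Section III.B concrete, the sketch below implements one a posteriori LS lattice stage and a cascade, in the spirit of (26)-(35), for real-valued prewindowed data. The state names (Delta, F, B, rho) and the eps initialization are ours, and the time-index bookkeeping follows the standard prewindowed LS lattice; it is a sketch under those assumptions, not the paper's exact listing.

```python
class LSLatticeStage:
    """One a posteriori LS lattice stage with ladder (joint-process) part."""
    def __init__(self, w=0.99, eps=1e-8):
        self.w = w
        self.Delta = 0.0    # forward/backward cross-correlation, cf. (33)-(35)
        self.F = eps        # forward error energy r_m^f(n)
        self.B = eps        # backward error energy r_m^b(n)
        self.B1 = eps       # delayed backward energy r_m^b(n-1)
        self.b1 = 0.0       # delayed backward error b_m(n-1,n-1) (prewindowed: 0)
        self.rho = 0.0      # ladder correlation r_m^z(n)
        self.alpha1 = 1.0   # delayed likelihood factor u_m(n-1)

    def step(self, f, b, e, alpha):
        w = self.w
        # prediction part: time updates divide by the likelihood factor (Theorem 2)
        self.Delta = w * self.Delta + f * self.b1 / self.alpha1
        self.F = w * self.F + f * f / self.alpha1
        f_out = f - (self.Delta / self.B1) * self.b1          # eq. (31)
        b_out = self.b1 - (self.Delta / self.F) * f           # eq. (32)
        # ladder part, eqs. (26)-(30)
        self.B = w * self.B + b * b / alpha                   # eq. (29)
        self.rho = w * self.rho + b * e / alpha               # eq. (28)
        e_out = e - (self.rho / self.B) * b                   # eqs. (26), (27)
        alpha_out = alpha - b * b / self.B                    # eq. (30)
        # delay the backward quantities for the next time instant
        self.b1, self.B1, self.alpha1 = b, self.B, alpha
        return f_out, b_out, e_out, alpha_out


class LSLattice:
    """Cascade of N lattice stages; step() returns e_N^z(n,n)."""
    def __init__(self, N, w=0.99):
        self.stages = [LSLatticeStage(w) for _ in range(N)]

    def step(self, x, z):
        f = b = x               # f_0(n,n) = b_0(n,n) = x(n)
        e, alpha = z, 1.0       # e_0^z(n,n) = z(n), u_0(n) = 1
        for stage in self.stages:
            f, b, e, alpha = stage.step(f, b, e, alpha)
        return e
```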
IV. VARIATIONS OF THE TORLS ALGORITHMS

As we have shown above, all TORLS algorithms can be realized by cascading two types of basic processing cells in which only scalar operations are involved. As a consequence, variations of any of the TORLS algorithms discussed above can be obtained by exploiting the variations of these basic cells. Below we discuss some of them.

A. A Priori Error TORLS Algorithms

The LS estimation errors at time n discussed in the last section are computed using the optimal coefficients at the same time n. Such LS errors are called a posteriori LS errors. In some applications, such as adaptive equalization for data communications, the desired signal will not be available until it is estimated. In such cases one has to use the optimal coefficients at time n-1 to estimate the desired signal at time n, and then compute the LS errors using the optimal coefficients at n-1. Such errors are called a priori errors. The decomposition in terms of the a priori errors can be expressed as

$$e_x^z(n,n-1) = e_y^z(n,n-1) - \underline{k}^{zH}(n-1)\,\underline{e}_y^{\bar{x}}(n,n-1) \qquad (36)$$

where $e_x^z(n,n-1)$, $e_y^z(n,n-1)$ and $\underline{e}_y^{\bar{x}}(n,n-1)$ are the a priori errors. There is a simple relation between the a priori and a posteriori errors [2,12]. It is given by

$$e_y^z(n,n) = \alpha_y(n)\,e_y^z(n,n-1) \qquad (37)$$

i.e., an a posteriori error equals the corresponding likelihood factor times the a priori error. By using the relation (37) it is easy to obtain the time recursions for the auto- and cross-correlations and the order recursion of the likelihood factor, $\alpha_y(n)$.

B. Error-Feedback/Direct-Update Formulas

In Theorem 1, the estimation coefficient $\underline{k}^z(n)$ is expressed in the form of the quotient of the cross-correlation between $e_y^z(k,n)$ and $\underline{e}_y^{\bar{x}}(k,n)$ and the autocorrelation of $\underline{e}_y^{\bar{x}}(k,n)$. It was shown in [10,11] that a direct time recursion of $\underline{k}^z(n)$ can be obtained, in scalar form, as

$$k^z(n) = k^z(n-1) + [\alpha_y(n)/r^{\bar{x}}(n)]\,e_y^{\bar{x}}(n,n-1)\,e_x^{z*}(n,n-1) \qquad (38)$$

An alternative implementation of the TORLS algorithms can be obtained by using (38) instead of (6) and (7). Since the estimation error $e_x^z(n,n-1)$ is fed back to update the coefficient estimating it, this type of implementation is called an error-feedback form. The LS lattice and RMGS algorithms employing this error-feedback form are shown to be more robust to round-off error [3,10]. For the RMGS and multichannel LS lattice algorithms we note that the quantity $[\alpha_y(n)/r^{\bar{x}}(n)]\,e_y^{\bar{x}}(n,n-1)$ in (38) is common to many processing cells. It can be computed once and then sent to these cells. As a result, the computational complexity of the RMGS and multichannel LS lattice algorithms can be reduced to N^2+7N and 6p^2N+10N operations, respectively. It is interesting to note that the computational complexity of the p-channel LS lattice algorithm is only moderately higher than that of the fast RLS algorithms [9]. The latter require (5-6)p^2N+2pN operations and are known to be numerically unstable.
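A minimal sketch of such an error-feedback cell for real-valued data is given below (the names are ours; the a priori form of the correlation update follows from combining (9) with the relation (37)).

```python
class ErrorFeedbackCell:
    """Scalar estimation cell in error-feedback (direct-update) form, eq. (38)."""
    def __init__(self, w=0.99, eps=1e-8):
        self.w, self.k, self.r = w, 0.0, eps

    def step(self, e_b, e_z, alpha):
        # a priori output error, eq. (36)
        e_out = e_z - self.k * e_b
        # time update of the reference-error energy, a priori form of eq. (9)
        self.r = self.w * self.r + alpha * e_b * e_b
        # feed the output error back to update the coefficient, eq. (38)
        self.k += (alpha / self.r) * e_b * e_out
        return e_out
```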

C. LS Estimation Algorithms Based on the Givens Rotation

Algorithms using the Givens rotation to solve LS problems are well known to have good numerical properties. The structure of the Givens algorithm implemented using a systolic array is given in Figure 5. The Givens algorithms can be realized with or without square-root operations [4,13]. However, their connection with the other TORLS algorithms was not examined until recently [14]. It is shown in [14] that the algorithm based on the Givens rotation for LS estimation is algebraically equivalent to the RMGS algorithm given above. Below we derive the algorithm based on the Givens rotation given in [4] from the basic relations given above. By substituting (36) into (38), we have

$$k^z(n) = k^z(n-1) + [\alpha_y(n)/r^{\bar{x}}(n)]\,e_y^{\bar{x}}(n,n-1)\,[e_y^{z*}(n,n-1) - k^z(n-1)\,e_y^{\bar{x}*}(n,n-1)] \qquad (39)$$

We define

$$\bar{c} = w\,r^{\bar{x}}(n-1)/r^{\bar{x}}(n) \qquad (40)$$

and

$$\bar{s} = \alpha_y(n)\,e_y^{\bar{x}}(n,n-1)/r^{\bar{x}}(n) \qquad (41)$$

Using Eqs. (40) and (41) we can rewrite (38) and (11) as

$$k^z(n) = \bar{c}\,k^z(n-1) + \bar{s}\,e_y^{z*}(n,n-1), \qquad \alpha_x(n) = \bar{c}\,\alpha_y(n) \qquad (42)$$

Eq. (42) is the basic Givens rotation. The algorithm given in [4] can be obtained by using Eqs. (39) through (42) and identifying the corresponding quantities to rewrite the RMGS algorithm.
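The sketch below (ours, real-valued data) implements this square-root-free rotation cell. It is algebraically equivalent to the error-feedback cell above, with the update factored into the rotation parameters of (40)-(42); it also propagates the likelihood factor, which is convenient in a systolic array.

```python
class GivensCell:
    """Square-root-free Givens rotation cell, eqs. (40)-(42)."""
    def __init__(self, w=0.99, eps=1e-8):
        self.w, self.k, self.r = w, 0.0, eps

    def step(self, e_b, e_z, alpha):
        e_out = e_z - self.k * e_b              # a priori error, eq. (36)
        r_new = self.w * self.r + alpha * e_b * e_b
        c = self.w * self.r / r_new             # eq. (40)
        s = alpha * e_b / r_new                 # eq. (41)
        self.r = r_new
        self.k = c * self.k + s * e_z           # eq. (42): rotation of k
        return e_out, c * alpha                 # error and likelihood order-update
```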
V. AN LS LATTICE ALGORITHM BASED ON GIVENS ROTATION

As we have shown above, the basic processing equations in the algorithm based on the Givens rotation can be viewed as a modified form of the basic partitioned scalar LS estimation equations. Hence, it is not difficult to see that the LS lattice algorithm can be implemented using the basic Givens rotation. The resulting algorithm shares the good numerical properties of the Givens rotation. A block diagram of the LS lattice algorithm based on the Givens rotation is presented in Figure 6. A full development of the complete algorithm is not given here due to the length limitation of the paper; it will be discussed in a future publication [16]. It is interesting to note that the LS lattice algorithm based on the Givens rotation needs only two angle computations and three rotations for each lattice stage. This is significantly less than the newly published fixed-order fast QR algorithm [15]. In addition, it provides the order recursions, which are not available in the latter.

VI. CONCLUSION

In this paper, we have shown that any order-recursive LS adaptive algorithm can be derived from the point of view of partitioning of LS estimations. To realize any of the TORLS algorithms, only two sets of simple scalar time-recursive equations are needed. The order-recursiveness is achieved by cascading these basic processing units. Due to the regular, modular structure of the TORLS algorithms, they are most suitable for VLSI systolic or wave-front array implementation.

A new method for deriving TORLS algorithms is given in this paper. We have tried to combine the best aspects of two existing approaches for deriving LS algorithms, the geometric and the algebraic. Of course, the same results can be obtained by existing methods. The intention of this paper is to provide a comprehensible yet powerful alternative method.

Because the TORLS algorithms are based on the same basic processing units, it is possible to obtain alternative forms of the TORLS algorithms by simply exploiting the variations of these basic equations and combining them with the known TORLS algorithm structures. By doing so, we have revealed the relationship between the Givens rotation and other TORLS algorithms, including the well-known LS lattice algorithm.

We note that the scalar lattice stage, which is built using the basic processing cells as discussed above, can be used as a second-level building block to implement other, more sophisticated algorithms. Since a lattice stage generates two LS errors at the same time, it is desirable that both errors be used in the algorithm [17]. On the other hand, if only one error is actually used in the algorithm, directly using the more basic cells discussed above can be more efficient. For example, the triangular lattice algorithm [18] essentially performs the same function as the RMGS algorithm, while it requires more computation than the latter.

Furthermore, we have shown that the LS lattice algorithm can be implemented using the basic Givens rotation. The resulting algorithm shares the good numerical properties of the Givens algorithm. It needs less computation and has a more regular structure than the fixed-order fast QR algorithm given in [15].

The discussion on variations of TORLS algorithms in this paper is by no means exhaustive. Actually, we only discussed unnormalized and exponentially weighted TORLS algorithms. Normalized, covariance-form and finite-memory types of TORLS algorithms can also be investigated in a similar way. They may provide other desirable properties.

REFERENCES

[1] D. Lee et al., "Recursive Least Squares Ladder Estimation Algorithms", IEEE Trans. ASSP-29, pp. 627-641, June 1981.
[2] F. Ling and J.G. Proakis, "A Generalized Multichannel Least-Squares Lattice Algorithm with Sequential Processing Stages", IEEE Trans. ASSP-32, pp. 381-389, Apr. 1984.
[3] F. Ling, D. Manolakis and J.G. Proakis, "A Recursive Modified Gram-Schmidt Algorithm for Least-Squares Estimation", IEEE Trans. ASSP-34, pp. 829-836, Aug. 1986.
[4] J. McWhirter, "Recursive Least-Squares Minimization Using a Systolic Array", Proc. SPIE, Paper 431-15, 1983.
[5] D. Godard, "Channel Equalization Using Kalman Filter for Fast Data Transmission", IBM J. Res. Develop., pp. 263-273, May 1974.
[6] F.M. Hsu, "Square-Root Kalman Filtering for High-Speed Data Received over Fading Dispersive HF Channels", IEEE Trans. Inform. Theory, Vol. IT-28, pp. 753-763, Sept. 1982.
[7] D.D. Falconer and L. Ljung, "Application of Fast Kalman Estimation to Adaptive Equalization", IEEE Trans. on Communications, Vol. COM-26, pp. 1439-1446, Oct. 1978.
[8] G. Carayannis, D. Manolakis and N. Kalouptsidis, "Fast Kalman Type Algorithms for Sequential Signal Processing", Proc. of IEEE ICASSP'83, Boston, Mass., pp. 186-189, Apr. 1983.
[9] J.M. Cioffi and T. Kailath, "Fast, Recursive Least-Squares Transversal Filters for Adaptive Processing", IEEE Trans. ASSP-32, pp. 304-337, Apr. 1984.
[10] F. Ling, D. Manolakis and J.G. Proakis, "Least-Squares Lattice Algorithms with Direct Updating of the Reflection Coefficients", Annales des Télécommunications, Apr. 1987.
[11] F. Ling, "Rapidly Convergent Adaptive Filtering Algorithms with Applications to Adaptive Filtering and Channel Estimation", Ph.D. Thesis, Northeastern University, Boston, Massachusetts, Sept. 1984.
[12] E.H. Satorius and J.D. Pack, "Application of Least-Squares Lattice Algorithms to Adaptive Equalization", IEEE Trans. Commun., Vol. COM-29, pp. 136-142, Feb. 1981.
[13] W.M. Gentleman, "Least Squares Computation by Givens Transformations Without Square-Roots", J. Inst. Maths. Applications, Vol. 12, pp. 329-336, 1973.
[14] F. Ling, D. Manolakis and J.G. Proakis, "A Flexible, Numerically Robust Array Processing Algorithm and Its Relationship to the Givens Transformation", Proc. of IEEE ICASSP'86, Tokyo, Japan, Apr. 1986.
[15] J.M. Cioffi, "The Fast QR Adaptive Filter", Proc. of IEEE ICASSP'87, Dallas, Texas, Apr. 1987.
[16] F. Ling, "Systolic Arrays for Implementation of Order-Recursive Least-Squares Adaptive Filtering Algorithms", Proc. of International Conference on Systolic Arrays, San Diego, 1988.
[17] H. Lev-Ari, "Modular Architectures for Adaptive Multichannel Lattice Algorithms", IEEE Trans. ASSP, Vol. ASSP-35, pp. 543-552, Apr. 1987.
[18] K.C. Sharman and T.S. Durrani, "A Triangular Adaptive Lattice Filter for Spatial Signal Processing", Proc. of IEEE ICASSP'83, Boston, Mass., pp. 348-351, Apr. 1983.

Figure 1. Basic Processing Cells for TORLS Algorithms.

Figure 2. Triangular Systolic Array for RMGS Algorithms.

Figure 3. LS Lattice Stage Using Elementary Processing Cells.


Figure 4. Array Implementation of a Multichannel Lattice Stage.

Figure 5. Systolic Array Implementation of LS Estimation Based on Givens Rotation.

Figure 6. An LS Lattice Stage Based on Givens Rotation.