Signal Processing 36 (1994) 91-98
Subspace rotation using modified Householder transforms and projection matrices - Robustness of DOA algorithms

V.Ch. Venkaiah*, A. Paulraj†

Central Research Laboratory, Bharat Electronics, 25, M.G. Road, Bangalore 560001, India
Received 26 November 1992; revised 23 April 1993
Abstract

A new Householder transformation for vectors in complex space is suggested. The transformation does not require the inner product of mirror-image vectors to be real. A method, based on this transform, is designed to solve the subspace rotation problem. The method produces a unitary rotation operator. Also, another method that uses projection matrices and yields a nonunitary rotation operator is developed. Robustness of directions-of-arrival (DOA) algorithms that use focussing matrices is discussed.

Key words: Focussing matrices; Modified Householder transform; Projection matrices; Robustness of DOA algorithms; Subspace rotation
* Corresponding author. Present address: R&D Group, TATA ELXSI (India) Ltd., 123 Richmond Road, Bangalore 560025, India.
† Dr. Paulraj is presently with the Information Systems Laboratory, Stanford University, Stanford, CA 94305.
1. Introduction
Subspace or eigenvector methods such as MUSIC [15] and ESPRIT [12, 14] are known to have high resolution capabilities and yield accurate estimates if the observed data have adequate information content, as determined both by the element-level SNR and the observation time [16]; also, the underlying signal subspace should remain invariant during this time. Since the SNR can be quite low, a large observation interval is often required. Over this entire observation interval, the platform direction may not be constant (i.e. the platform maneuvers) and, hence, the signal subspace can be inconsistent. A solution that requires alignment of signal subspaces and extends subspace techniques to maneuvering platforms is proposed in [11]. Subspace techniques have also been extended to the wide-band case [1-7, 10, 17, 19-22]. These extensions may be divided into two different classes: the incoherent processing class and the coherent processing class. The coherent processing class consists of several techniques that have been suggested recently [2-4, 6, 7, 10, 17, 21] and require the alignment of the signal subspaces of the narrow-band components within the bandwidth of the signals.

Let 𝒜 and ℬ denote two subspaces of dimension d in an m-dimensional complex space. Then the subspace rotation problem is to find a linear operator T that rotates 𝒜 onto ℬ. Equivalently, find an m × m matrix T that rotates the column space of an m × d matrix A onto the column space of an m × d matrix B, where the underlying field is complex. While it is easy to see that there are infinitely many solutions to this problem, it is preferable to select an appropriate solution depending on the application. For example, in wide-band directions-of-arrival (DOA) estimation, a unitary T is reportedly desirable [10]. Further, given A and B, the rotation operator T may be selected such that (1) only the two subspaces are aligned, i.e., the operator is such that T𝒜 = ℬ, where 𝒜 = span of A and ℬ = span of B, or (2) there is a one-to-one alignment of column vectors, i.e. TA = B. A class of methods to compute the focussing matrix T, also known as the signal subspace transformation matrix, has been proposed in [4, 6, 7, 10, 17, 21].
In this paper, we propose two more methods to compute T. The first uses Householder transforms defined for C^m, while the second employs projection matrices. Even though the underlying spirit is essentially the same in the proposed techniques, the Householder method ensures a unitary T. Also, this approach aligns only the two subspaces, while the projection matrix method aligns the column vectors of A and B. In Section 2, the new Householder transform is introduced, and its application to the subspace rotation problem is addressed. In Section 3, the projection matrix method is described. This is followed in Section 4 by an analysis of some issues on the robustness of DOA algorithms, and Section 5 contains the concluding remarks.
2. Householder transform in C^m
The Householder transformation [8, 9] for rotating a vector x onto y is given as

H = I - 2 zz^H / ||z||^2,

where the equinorm vectors x, y ∈ R^m and z = x - y, so that Hx = y. The Householder transformation can be interpreted geometrically as the mirror-image reflection of the vector x about the hyperplane specified by its normal z [18]. This matrix transform is frequently employed for computing the eigenvalues of a matrix using the QR decomposition and also in least squares problems [13]. However, for vectors in the complex space C^m, its application is limited to situations wherein the following condition is satisfied:

x^H y = y^H x   (inner product real),   (1)

where the superscript H denotes Hermitian transposition. For example, this condition is satisfied in the case of 'forcing zeros', where the vector y has only one nonzero element. The modified Householder transform presented here can be applied to arbitrary vectors in C^m, where condition (1) need not be satisfied. Also, the proposed transformation reduces to the usual case whenever Eq. (1) holds good.

Let x and y be two vectors over the complex field C such that x^H x = y^H y and let z = x - y. Then we define the Householder transform in C^m as
H = I - (1 + (x^H z)/(z^H x)) zz^H / ||z||^2.   (2)

LEMMA 1. Hx = y.

PROOF. Consider

Hx = [I - (1 + (x^H z)/(z^H x)) zz^H / ||z||^2] x
   = x - ((z^H x + x^H z)/||z||^2) z
   = [(z^H z)x - (z^H x)z - (x^H z)z] / ||z||^2
   = [2(x^H x)y - (y^H x)y - (x^H y)y] / (x^H x - x^H y - y^H x + y^H y)   (substituting z = x - y and using x^H x = y^H y)
   = [2(x^H x) - (x^H y) - (y^H x)] y / [2(x^H x) - (x^H y) - (y^H x)]
   = y.   □

PROPOSITION 1. H is unitary.

PROOF. Define

c = 1 + (x^H z)/(z^H x).

Then

H^H H = [I - c^H zz^H/||z||^2][I - c zz^H/||z||^2]
      = I - c zz^H/||z||^2 - c^H zz^H/||z||^2 + c^H c zz^H/||z||^2
      = I,

since c^H c = c^H + c.   □

Note 1. Since H is unitary it is 'norm preserving'. That is, x^H x = (Hx)^H (Hx). Further, for any two vectors x, y, x^H y = (Hx)^H (Hy). This may be interpreted as an 'angle preserving' property.

Note 2. In contrast to the usual Householder transform, H in C^m is not Hermitian. Therefore, while Hx = y, it is not necessarily true that Hy = x.

Note 3. The Householder transformation suggested in (2) can be understood in terms of the symmetrized scalar product (x·y) = ½(x^H y + y^H x). This scalar product is real for any two complex vectors. If z is a unit vector, the operator P for projection onto the hyperplane perpendicular to z, and the operator H for reflection in the hyperplane, can be defined by Px = x - z(z·x) and Hx = x - 2z(z·x). With the above definition of the scalar product we find

Hx = x - z(z^H x + x^H z) = [I - (1 + (x^H z)/(z^H x)) zz^H] x.

This is precisely the new reflection operator of (2).

Note 4. Let x and y be two m-dimensional complex vectors. Also, let

X = [x_R^T  x_I^T]^T,   Y = [y_R^T  y_I^T]^T,

where x_R, y_R are the real parts and x_I, y_I are the imaginary parts of x and y, respectively. Then

X^T Y = x_R^T y_R + x_I^T y_I = ½(x^H y + y^H x).

Therefore, the symmetrized scalar product of two m-dimensional complex vectors x and y is the same as the standard scalar product of the two 2m-dimensional real vectors X and Y. Hence, the modified Householder transform can also be derived using X and Y.
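For illustration, a small numerical sketch of the transform (2) is given below (this is not part of the original paper; NumPy is assumed, and the function name and random test vectors are arbitrary). It constructs H for two equinorm complex vectors and checks Lemma 1 and Proposition 1.

```python
import numpy as np

def modified_householder(x, y):
    """Modified Householder transform H of Eq. (2) with Hx = y,
    for equinorm complex x, y (assumes z^H x != 0)."""
    z = x - y
    nz2 = np.vdot(z, z).real                    # ||z||^2
    c = 1.0 + np.vdot(x, z) / np.vdot(z, x)     # 1 + (x^H z)/(z^H x)
    return np.eye(len(x)) - c * np.outer(z, z.conj()) / nz2

rng = np.random.default_rng(0)
m = 5
x = rng.standard_normal(m) + 1j * rng.standard_normal(m)
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)
y *= np.linalg.norm(x) / np.linalg.norm(y)      # enforce x^H x = y^H y

H = modified_householder(x, y)
print(np.allclose(H @ x, y))                    # Lemma 1: Hx = y
print(np.allclose(H.conj().T @ H, np.eye(m)))   # Proposition 1: H^H H = I
```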
2.1. Application of the Householder transform to subspace rotation

A method, based on modified Householder transforms, to rotate the column span of A onto the column span of B is presented. The method uses an orthonormalization procedure to obtain orthonormal columns A' = [a'_1 a'_2 ... a'_d] of A and B' = [b'_1 b'_2 ... b'_d] of B. Also, it employs modified Householder transforms H_i to reflect H_{i-1} ... H_1 a'_i onto b'_i for all i, and forms the rotation operator T = H_d H_{d-1} ... H_2 H_1. The operator T satisfies the operator equation TA' = B'. Hence, T{Span of A} = Span of TA = Span of B, but TA need not be equal to B. Given A and B, the following algorithm computes a rotation operator T.

ALGORITHM 1.
P = I, Q = I, T = I,
for i := 1 to d do begin
    a_1 = P a_i,  a_2 = Q b_i,  l_1 = ||a_1||,  l_2 = ||a_2||,
    a' = a_1/l_1,  b' = a_2/l_2,
    P = P - a' a'^H,  Q = Q - b' b'^H,
    a_3 = T a',  z = a_3 - b',  l_3 = a_3^H b',  l_5 = 2 - l_3 - l_3^H,
    if l_5 > 0 then
        l_4 = (1 - l_3)/(1 - l_3^H)
    else begin
        z = a_3 + b',  l_5 = 2 + l_3 + l_3^H,  l_4 = (1 + l_3)/(1 + l_3^H)
    end;
    H_i = I - (1 + l_4) zz^H / l_5,
    T = H_i T,
end;

Note that the algorithm can be easily modified to the case in which A and B are not full rank.

LEMMA 2. TA' = B'.

PROOF. It is enough if we show that H_{i+1} H_i ... H_1 a'_i = b'_i for i < d, because (i) H_j satisfies the relation H_j(H_{j-1} ... H_1 a'_j) = b'_j, and (ii) the same argument can be extended to prove the general relation H_{i+k} ... H_{i+1} H_i ... H_1 a'_i = b'_i. Let h_{i+1} denote H_i H_{i-1} ... H_1 a'_{i+1}. Also, let the reflection of h_{i+1} be b'_{i+1}. Then the normal to the hyperplane, about which we reflect h_{i+1} onto b'_{i+1}, is z_{i+1} = h_{i+1} - b'_{i+1}. Since H_i H_{i-1} ... H_1 a'_i = b'_i and a'_i is orthogonal to a'_{i+1}, we have from Note 1 that b'_i^H h_{i+1} = 0. Also b'_i^H b'_{i+1} = 0, since b'_i and b'_{i+1} are orthogonal. Therefore, z_{i+1}^H b'_i = 0, implying that b'_i lies on the hyperplane orthogonal to z_{i+1}. Hence, the image of H_i H_{i-1} ... H_1 a'_i = b'_i with respect to the hyperplane specified by z_{i+1} is itself. That is, H_{i+1} H_i ... H_1 a'_i = b'_i. The case in which the reflection of h_{i+1} is -b'_{i+1} can be similarly proved.   □
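A minimal sketch of Algorithm 1 follows (not from the original paper; NumPy is assumed, and the function name, the tolerance used to detect l_5 ≈ 0, and full-rank inputs are illustrative choices). It returns a unitary T whose column span of TA coincides with that of B.

```python
import numpy as np

def subspace_rotation_householder(A, B):
    """Algorithm 1 (sketch): unitary T with span(T A) = span(B), built from
    modified Householder reflections of orthonormalized columns (A, B full rank)."""
    m, d = A.shape
    P = np.eye(m, dtype=complex)
    Q = np.eye(m, dtype=complex)
    T = np.eye(m, dtype=complex)
    for i in range(d):
        a1, a2 = P @ A[:, i], Q @ B[:, i]
        ap, bp = a1 / np.linalg.norm(a1), a2 / np.linalg.norm(a2)  # a'_i, b'_i
        P -= np.outer(ap, ap.conj())
        Q -= np.outer(bp, bp.conj())
        a3 = T @ ap
        l3 = np.vdot(a3, bp)                       # a_3^H b'
        z, l5 = a3 - bp, 2 - l3 - l3.conjugate()
        if l5.real > 1e-12:
            l4 = (1 - l3) / (1 - l3.conjugate())
        else:                                      # a_3 ~ b': reflect onto -b' instead
            z, l5 = a3 + bp, 2 + l3 + l3.conjugate()
            l4 = (1 + l3) / (1 + l3.conjugate())
        Hi = np.eye(m) - (1 + l4) * np.outer(z, z.conj()) / l5
        T = Hi @ T
    return T
```

As noted above, the resulting T is unitary and aligns only the two subspaces; TA need not equal B.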
3. Projection matrix method for subspace rotation
We now present another method to solve the subspace rotation problem. The method is based on projection matrices and results in a nonunitary focussing matrix. Also, the method aligns the column vectors of A and B. That is, the rotation matrix T satisfies the matrix equation TA = B. Observe that such a T exists only if the rank of A_{i+1} is greater than the rank of A_i whenever the rank of B_{i+1} is greater than the rank of B_i, where X_i = [x_1 x_2 ... x_i]. Also, b_i must be equal to Σ_{j=1}^{i-1} c_j b_j if a_i = Σ_{j=1}^{i-1} c_j a_j, for T to exist.

The following is the description of the method. Suppose the computed matrix T satisfies Ta_i = b_i for all i = 1, 2, ..., j. If j = d, then T is a rotation operator. Otherwise, compute a projection matrix P_j whose columns span the null space of A_j. Note that P_0 = I. Next, project a_{j+1} onto the null space of A_j by forming ã_{j+1} = P_j a_{j+1}. Then rotate the vector ã_{j+1} by computing a diagonal matrix E_{j+1} so that Ta_{j+1} + E_{j+1} ã_{j+1} = b_{j+1}, and set T = T + E_{j+1} P_j. Iteration of this procedure produces a rotation operator. This technique is implemented in the following algorithm.
ALGORITHM 2.
P = I, ã = a_1, T = 0, E = diag(e_jj) = 0,
for i := 1 to d do begin
    v = b_i - T a_i,  l_1 = ||ã||,  l_2 = ||v||,
    if l_1 ≠ 0 then begin
        for j := 1 to m do begin
            if ã_j ≠ 0 then
                e_jj = v_j / ã_j,
            else begin
                if v_j = 0 then e_jj = 1,
                else say 'projection matrix method failed' and terminate,
            end;
        end;
        T = T + E P,
        if (i < d) then begin
            P = P - ã ã^H / ||ã||^2,  ã = P a_{i+1},
        end;
    end
    else begin
        if l_2 ≠ 0 then say 'there is no T such that TA = B' and terminate,
    end;
end;

Note that the algorithm is applicable even if the matrices A and B are not full rank. If A is nonsingular and B is the identity matrix, then T is the inverse of A. In this algorithm, the computation of E may not be possible or may become unstable, because ã may have a zero or a numerically zero element. Hence, the algorithm may not be stable unless some kind of pivoting is introduced. But, in the application to the DOA problem, the algorithm is found to be stable.

LEMMA 3. For j ≥ i, P_j a_i = 0, where P_j is the projection matrix P at the end of the jth iteration of the algorithm.

PROOF. Consider

P_i a_i = P_{i-1} a_i - (ã_i ã_i^H / ||ã_i||^2) a_i
        = ã_i (1 - ã_i^H a_i / ||ã_i||^2)                    since P_{i-1} a_i = ã_i
        = ã_i (1 - a_i^H P_{i-1}^H P_{i-1} a_i / ||ã_i||^2)   since P_{i-1}^H P_{i-1} = P_{i-1}
        = ã_i (1 - ||ã_i||^2 / ||ã_i||^2)
        = 0.

Then consider

P_{i+1} a_i = (P_i - ã_{i+1} ã_{i+1}^H / ||ã_{i+1}||^2) a_i
            = (P_i - ã_{i+1} a_{i+1}^H P_i^H / ||ã_{i+1}||^2) a_i   since ã_{i+1} = P_i a_{i+1}
            = (I - ã_{i+1} a_{i+1}^H / ||ã_{i+1}||^2) P_i a_i       since P_i is Hermitian
            = 0.

By induction the lemma follows.   □

THEOREM 1. TA = B.

PROOF. Consider

TA = (Σ_{j=1}^{d} E_j P_{j-1}) [a_1 a_2 ... a_d]
   = [Σ_j E_j P_{j-1} a_1   Σ_j E_j P_{j-1} a_2   ...   Σ_j E_j P_{j-1} a_d]
   = [b_1 b_2 ... b_d],

which follows from Lemma 3 and the algorithm.   □
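A minimal sketch of Algorithm 2 is given below (not from the original paper; NumPy is assumed, and the tolerance, error messages and function name are illustrative). As discussed above, no pivoting is used, so the sketch can fail or lose accuracy when the projected column has (numerically) zero entries.

```python
import numpy as np

def subspace_rotation_projection(A, B, tol=1e-12):
    """Algorithm 2 (sketch): nonunitary T with T A = B, built column by column from
    diagonal corrections applied on the null space of the already processed columns."""
    m, d = A.shape
    P = np.eye(m, dtype=complex)        # projector onto null space of processed columns
    a_t = A[:, 0].astype(complex)       # a-tilde: current projected column
    T = np.zeros((m, m), dtype=complex)
    for i in range(d):
        v = B[:, i] - T @ A[:, i]
        if np.linalg.norm(a_t) > tol:
            e = np.empty(m, dtype=complex)
            for j in range(m):
                if abs(a_t[j]) > tol:
                    e[j] = v[j] / a_t[j]
                elif abs(v[j]) <= tol:
                    e[j] = 1.0
                else:
                    raise ValueError("projection matrix method failed")
            T = T + np.diag(e) @ P
            if i < d - 1:
                P = P - np.outer(a_t, a_t.conj()) / np.vdot(a_t, a_t)
                a_t = P @ A[:, i + 1]
        elif np.linalg.norm(v) > tol:
            raise ValueError("there is no T such that TA = B")
    return T
```

With a nonsingular A and B = I, the returned T is, as the text notes, the inverse of A.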
4. Robustness of DOA algorithms

The analysis of the robustness of DOA algorithms is carried out only for the maneuvering array problem discussed in [11], but it can be easily modified to the wide-band source problem studied in [6, 10, 21]. In the maneuvering array problem, it is assumed that the sources are stationary and the array orientation α is known exactly. So, a focussing matrix T should rotate the DOAs by α. That is, T should satisfy the relation Ta(θ - α) = a(θ) for every DOA θ.

Let the direction of arrival θ and the array angle α be fixed. Suppose the focussing matrix T is computed to satisfy Ta(θ_i - α) = a(θ_i) for some guess angles θ_i such that {a(θ_i - α): i = 1, 2, ..., m} is linearly independent (l.i.) and {a(θ_i): i = 1, 2, ..., m} is l.i. Also suppose that a(θ - α) = Σ_i c_i a(θ_i - α). Then ||Ta(θ - α) - a(θ)|| = ||Σ_i c_i a(θ_i) - a(θ)|| is a function of the guess angles, which we denote by f(θ_1, θ_2, ..., θ_m). Note that f is zero if a(θ) = Σ_i c_i a(θ_i).
Different DOA algorithms are analyzed, assuming that the guess angles are zeros of f. The general case in which the guess angles satisfy f ≤ τ, where τ is a positive real number, may be treated in a similar fashion. Consider a focussing matrix R that satisfies Ra(θ_i - α) = a(θ_i) only for i = 1, 2, ..., d, d < m. This matrix does not rotate the DOA θ by the required angle α because

Ra(θ - α) = Σ_{i=1}^{d} c_i a(θ_i) + Σ_{i=d+1}^{m} c_i Ra(θ_i - α) ≠ a(θ),

unless Ra(θ_i - α) = a(θ_i) for all i = d + 1, d + 2, ..., m, or c_i = 0 for all i = d + 1, d + 2, ..., m.

The DOA algorithms that we analyze here are based on the ADVA method [11], the projection matrix method, and unitary matrix methods. The DOA algorithm based on the ADVA method computes the focussing matrix T = BA^{-1}, where A = [a(θ_1 - α) ... a(θ_m - α)] and B = [a(θ_1) ... a(θ_m)]. Since Ta(θ_i - α) = a(θ_i) for all i, it follows that Ta(θ - α) = a(θ). That is, θ is rotated by the array angle α. Therefore, this algorithm is robust.

The algorithm based on the projection matrix method computes a focussing matrix T such that TA = B, where A = [a(θ_1 - α) ... a(θ_d - α)] and B = [a(θ_1) ... a(θ_d)], d ≤ m. Clearly, this algorithm can be made robust by taking d to be equal to m.

The algorithm based on unitary focussing matrix methods computes T such that T{Span of A} = Span of TA = Span of B, where A = [a(θ_1 - α) ... a(θ_d - α)] and B = [a(θ_1) ... a(θ_d)], d ≤ m. Since T only satisfies Ta(θ - α) ∈ Span of B but not Ta(θ - α) = a(θ), θ can get rotated by an angle β ≠ α. As an example, let θ = θ_1 and Ta(θ - α) = a(θ_2); then θ is rotated by an angle β = θ_2 - θ + α. Hence, this algorithm is not robust. So, for the DOA algorithm to be robust it is necessary that d = m and Ta(θ_i - α) = a(θ_i) for all i. But such a unitary focussing matrix may not exist. To see this, assume that such a T exists. Then for every i and j we have

a(θ_i - α)^H a(θ_j - α) = (Ta(θ_i - α))^H (Ta(θ_j - α)) = a(θ_i)^H a(θ_j).

That is, the necessary condition for the existence of T is that a(θ_i - α)^H a(θ_j - α) = a(θ_i)^H a(θ_j) for all i, j. However, the direction vector does not satisfy these relations. For example, in the linear array case these relations are not satisfied. Hence, there is no unitary focussing matrix T such that Ta(θ_i - α) = a(θ_i) for all i.
Since there is no (required) unitary T, the question that arises is 'how to compute a unitary T that results in a robust DOA algorithm within the unitary class?' Note that a class of unitary focussing matrices can be computed by applying modified Householder transforms to different sets of orthonormal vectors. Also, different choices of Z result in different unitary focussing matrices in the derivation of the rotational signal-subspace (RSS) focussing matrix [10]. Similarly, [6] provides a class of unitary signal subspace transformation matrices.

Of the class of unitary focussing matrices that can be generated by the method given in Section 2.1, we intuitively feel that the rotation operator that maps a'_i onto b'_i, where A' = [a'_1 ... a'_d] and B' = [b'_1 ... b'_d] are solutions to the following problems,

min ||A' - A||^2   (3)
subject to A'^H A' = I, Span(A') = Span(A),

and

min ||B' - B||^2
subject to B'^H B' = I, Span(B') = Span(B),

results in a robust DOA algorithm. If the norm is the Frobenius norm, following [8, 10] with modifications, we have the following derivation for A':

||A' - A||^2 = Tr[(A' - A)^H (A' - A)]
             = Tr[I - A'^H A - A^H A' + A^H A],

since A' should satisfy A'^H A' = I. Hence,

||A' - A||^2 = Tr[I] - Tr[A'^H A + A^H A'] + Tr[A^H A].

Therefore, minimization of (3) is equivalent to maximization of Tr[A'^H A + A^H A']. Consider

Tr[A'^H A + A^H A'] = Tr[A'^H V Σ^H U^H + U Σ V^H A'],

where U Σ V^H is the singular value decomposition of A^H. Therefore,

Tr[A'^H A + A^H A'] = Tr[A'^H V Σ^H U^H] + Tr[U Σ V^H A']
                    = Tr[U^H A'^H V Σ^H] + Tr[Σ V^H A' U]
                    = Tr[Z^H Σ^H] + Tr[Σ Z],

where Z = V^H A' U. Hence,

Tr[A'^H A + A^H A'] = Σ_i σ_ii z̄_ii + Σ_i σ_ii z_ii = Σ_i σ_ii (z_ii + z̄_ii),

where the σ_ii's are the diagonal elements of Σ. It can be verified that Z is unitary and, hence, |z_ii| ≤ 1. Therefore, the maximum of Tr[A'^H A + A^H A'] occurs at Z = I. That is, when Z = V^H A' U = I and, hence, when A' = V U^H. Observe that Span(A) = Span(V Σ^H) = Span(V) = Span(A') and A'^H A' = I. B' can similarly be computed.
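A short sketch of this computation is given below (not from the paper; NumPy is assumed, and the function name and random test matrix are arbitrary). A' = V U^H is obtained from the SVD of A^H, i.e. it is the familiar polar/Procrustes-type orthonormal factor nearest to A within its column span.

```python
import numpy as np

def nearest_orthonormal(A):
    """A' = V U^H minimizing ||A' - A||_F subject to A'^H A' = I and Span(A') = Span(A),
    from the SVD A^H = U Sigma V^H (A assumed full column rank)."""
    U, s, Vh = np.linalg.svd(A.conj().T, full_matrices=False)   # A^H = U diag(s) Vh
    V = Vh.conj().T
    return V @ U.conj().T

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
Ap = nearest_orthonormal(A)
print(np.allclose(Ap.conj().T @ Ap, np.eye(3)))   # orthonormal columns
print(np.allclose(Ap @ (Ap.conj().T @ A), A))     # Span(A') contains Span(A)
```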
5. Conclusions

A new transform, called the Householder transform in C^m, is proposed. This transform, unlike the usual transform, is applicable even if the inner product of the pair of vectors is not real. The new transform is then used to obtain a method that solves the subspace rotation problem. The method produces a unitary focussing matrix which aligns the subspaces by aligning the orthonormal vectors. Further, another method, called the projection matrix method, for subspace rotation is designed. The method aligns the column vectors of the corresponding matrices. The computational complexity of Algorithms 1 and 2 can easily be calculated to be (m^3 + 6m^2 + 6m + O(1))d multiplications plus (6m^2 + m + O(1))d additions and (4m^2 + 5m)d multiplications plus (4m^2 + m + O(1))d additions, respectively.

In addition, the robustness of DOA algorithms for the maneuvering array problem is discussed. The algorithms considered here are based on the ADVA method [11], the projection matrix method, and unitary matrix methods. The analysis is carried out on the assumption that the guess angles are zeros of f(θ_1, ..., θ_m) = ||Σ_i c_i a(θ_i) - a(θ)||, where θ is the DOA, a(θ - α) = Σ_i c_i a(θ_i - α), and α is the array angle. It is concluded that the ADVA method and the projection matrix method are more robust compared to unitary matrix methods. Moreover, a method to compute orthonormal sets of vectors is presented which, we intuitively
feel, will produce an optimal unitary focussing matrix in the sense that it yields a robust DOA algorithm. The case in which the guess angles are not zeros of f, and the problem of computing a unitary focussing matrix that yields a robust DOA algorithm within the unitary class, may be investigated separately.
6. Acknowledgments The authors wish to acknowledge Dr. Eric A. Lord, Dr. V. V. Krishna and Mr. S. Mukhopadhyay for discussions. Also, they would like to thank the referees for their careful reading of the manuscript and useful suggestions which contributed to Note 4.
7. References

[1] G. Bienvenu, "Eigen system properties of the sampled space correlation matrix", Proc. IEEE Internat. Conf. Acoust. Speech Signal Process. '83, Boston, 14-16 April 1983, pp. 332-335.
[2] G. Bienvenu, P. Fuerxer, G. Vezzosi, L. Kopp and F. Florin, "Coherent wide-band high resolution processing for linear array", Proc. IEEE Internat. Conf. Acoust. Speech Signal Process. '89, Glasgow, 23-26 May 1989, pp. 2799-2802.
[3] K.M. Buckley and L.J. Griffiths, "Broad-band signal-subspace spatial-spectrum (BASS-ALE) estimation", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-36, July 1988, pp. 953-964.
[4] K.M. Buckley and X.L. Xu, "Broad-band beam-space signal-subspace source localization", Trans. AP Symp., June 1988.
[5] M. Coker and E. Ferrara, "A new method for multiple source location", Proc. IEEE Internat. Conf. Acoust. Speech Signal Process. '82, Paris, April 1982.
[6] M.A. Doron and A.J. Weiss, "On focussing matrices for wide-band array processing", IEEE Trans. Signal Process., Vol. 40, No. 6, June 1992, pp. 1295-1302.
[7] W.F. Gabriel, Large aperture sparse array antenna systems of moderate bandwidth for multiple emitter location, NRL Memo. Rep. 6109, November 1987.
[8] G.H. Golub and C.F. Van Loan, Matrix Computations, Johns Hopkins Univ. Press, Baltimore, MD, 1983.
[9] A.S. Householder, The Theory of Matrices in Numerical Analysis, Blaisdell, New York, 1964.
[10] H. Hung and M. Kaveh, "Focussing matrices for coherent signal-subspace processing", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-36, No. 8, August 1988, pp. 1272-1281.
[11] V.V. Krishna and A. Paulraj, "Direction of arrival estimation using eigenstructure methods for maneuvering arrays", Proc. IEEE Internat. Conf. Acoust. Speech Signal Process. '90, Albuquerque, 3-6 April 1990, pp. 2835-2838.
[12] A. Paulraj, R. Roy and T. Kailath, "Estimation of signal parameters via rotational invariance techniques - ESPRIT", Proc. 19th Asilomar Conf. on Circuits, Systems and Computers, Pacific Grove, CA, 6-8 November 1985, pp. 83-89.
[13] C.M. Rader and A.O. Steinhardt, "Hyperbolic Householder transformations", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-34, No. 6, December 1986, pp. 1589-1602.
[14] R. Roy, A. Paulraj and T. Kailath, "ESPRIT - A subspace rotation approach to estimation of parameters of cisoids in noise", IEEE Trans. Acoust. Speech Signal Process., Vol. 34, No. 4, October 1986, pp. 1340-1342.
[15] R.O. Schmidt, "Multiple-emitter location and signal parameter estimation", IEEE Trans. Antennas and Propagation, Vol. AP-34, March 1986, pp. 276-280.
[16] K.C. Sharman and T.S. Durrani, "Resolving power of signal subspace methods for finite data lengths", Proc. IEEE Internat. Conf. Acoust. Speech Signal Process. '85, Tampa, FL, 26-29 March 1985, pp. 1501-1504.
[17] A.K. Shaw and R. Kumaresan, "Estimation of angles of arrivals of broad-band signals", Proc. IEEE Internat. Conf. Acoust. Speech Signal Process. '87, Dallas, 6-9 April 1987, pp. 2296-2299.
[18] A.O. Steinhardt, "Householder transforms in signal processing", IEEE ASSP Mag., Vol. 5, July 1988, pp. 4-12.
[19] G. Su and M. Morf, "Signal subspace approach for multiple wide-band emitter location", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-31, December 1983, pp. 1502-1522.
[20] H.L. Van Trees, Detection, Estimation, and Modulation Theory, Part I, Wiley, New York, 1968.
[21] H. Wang and M. Kaveh, "Coherent signal-subspace processing for the detection and estimation of angles of arrival of multiple wide-band sources", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-33, August 1985, pp. 823-831.
[22] M. Wax, T.J. Shan and T. Kailath, "Spatiotemporal spectral analysis by eigenstructure methods", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-32, August 1984, pp. 817-827.