SIGNAL PROCESSING - ELSEVIER
Signal Processing 46 (1995) 85-104

Incorporating a priori information into MUSIC - algorithms and analysis*

Darel A. Linebarger (a,*), Ronald D. DeGroat (a), Eric M. Dowling (a), Petre Stoica (b), Gerald L. Fudge (a)
(a) Program in Electrical Engineering, EC33, The University of Texas at Dallas, Richardson, TX 75083-0688, USA
(b) Systems and Control Group, Uppsala University, Uppsala, Sweden
Received 28 November 1994; revised 12 May 1995
Abstract

Constrained MUSIC and beamspace MUSIC are similar algorithms in that they both require a priori information about signal directions and they both involve linear transformations on the data. Constrained MUSIC uses precise information regarding a subset of the signal directions to improve the direction estimates for the remaining signals. Beamspace MUSIC uses approximate knowledge regarding all the signal directions to reduce computational complexity and improve breakdown properties. These two methods can be combined, resulting in constrained beamspace MUSIC. We also perform asymptotic analysis of constrained and unconstrained MUSIC, demonstrating that (asymptotically) improved subspace estimates always result from the use of constraints, and that (asymptotically) the variance of constrained MUSIC is less than that of unconstrained MUSIC under high coherence, large numbers of sensors, or high SNR conditions. As a part of this analysis, we study the effects of coherence on MUSIC and derive best/worst case coherences in terms of the variance of MUSIC. We also demonstrate that those conditions where the variance of MUSIC is predicted to be less than that of constrained MUSIC generally correspond to conditions where MUSIC is in breakdown (and constrained MUSIC is not). So, unconstrained MUSIC actually does not achieve its theoretically predicted advantage in those cases. While constrained MUSIC requires precise information about the known signal to improve performance when the unknown signal is very near, it can also offer performance advantages with only approximate knowledge if the unknown and known signals are not too close to each other.
* This work was supported in part by Texas Advanced Research Grant 009741-022 and National Science Foundation Grant MIP-9203296.
* Corresponding author.

0165-1684/95/$9.50 © 1995 Elsevier Science B.V. All rights reserved
SSDI 0165-1684(95)00074-7
Keywords: Direction finding; Constrained MUSIC; Beamspace MUSIC
1. Introduction

Constrained MUSIC was presented in [5] as a method for incorporating information regarding known source directions. Knowledge of a source direction is equivalent to knowledge of one of the dimensions of the signal subspace. By constraining the signal subspace to include this dimension, the variances of other source direction estimates can be reduced. In [5], it was shown that this approach is highly beneficial in problems with closely spaced coherent sources - problems where MUSIC is more likely to break down. In this paper, we present a thorough asymptotic analysis of constrained MUSIC, contrasting it with unconstrained MUSIC. We demonstrate that in most circumstances, constrained MUSIC outperforms unconstrained MUSIC. Beamspace methods have received attention primarily for data reduction [8, 22, 4, 2, 24], but certain performance advantages have also been noted [4, 11, 12]. In general, the variances of direction estimates using beamspace methods are greater than those resulting from element space methods [19]. This result was based on the assumption that the beamspace transformation
"passed" all signals. In this paper, we will demonstrate that beamspace transformations that do not pass all of the signals (i.e., they incorporate constraints within the beamspace transformation - constrained beamspace MUSIC) may have lower variance than a pure unconstrained element space method. However, it is our experience that in comparing only constrained element space to constrained beamspace, the inclusion of the beamspace transformation increases the variance, as predicted in [19] for unconstrained methods. This increase is slight if the beamspace transformation is well designed, in both unconstrained and constrained cases. Another advantage of beamspace approaches is that they can increase the probability of resolution when using spectral methods [12]. A common feature of constrained MUSIC and beamspace MUSIC is that they operate on the array data after the application of a linear transformation. This observation was made in [21] and, as indicated therein, this allows some of the analysis related to beamspace methods to be carried over to the analysis of constrained MUSIC. In particular, the asymptotic variance of constrained MUSIC has exactly the same form as that of beamspace MUSIC, with the beamspace transformation replaced by the constraining transformation. However, there are some differences worth noting; e.g., as indicated in [19], the variance of beamspace MUSIC is greater than or equal to that of unconstrained element space MUSIC, whereas the variance of constrained MUSIC is usually less than that of unconstrained element space MUSIC (see [5]). This difference arises because the constraining transformation does not satisfy criterion (a) of [21]. This criterion essentially states that the beamspace transformation must not be orthogonal to any of the signal direction vectors (or any linear combination thereof). However, by design, the constraining transformation is orthogonal to some of the signal direction vectors. It is this difference which allows the performance of constrained MUSIC to exceed that of unconstrained MUSIC while (under the conditions of [19]) beamspace MUSIC can never exceed the performance of unconstrained MUSIC.

This paper is organized as follows. The next section presents our signal model and sets up notation. Sections 3-5 discuss constrained MUSIC, beamspace MUSIC, and constrained beamspace MUSIC, respectively. Section 6 contains a proof that the signal subspace estimates obtained using constraints are closer to the true signal subspace than are those obtained without the use of constraints. Section 7 contains expressions for the asymptotic variance of this family of MUSIC-based methods - constrained and unconstrained, element space and beamspace. Section 8 contains analysis comparing the asymptotic variance of constrained MUSIC to that of unconstrained MUSIC. This section also contains analysis of the effects of coherence on the asymptotic variance of MUSIC that allows us to derive best and worst case coherence values. Section 9 contains a discussion of the effects of imprecise knowledge regarding the known signals. In this section we present robustness approaches that allow constrained MUSIC to outperform MUSIC in some cases, even if only approximate information related to the known signal is available. Section 10 concludes the paper.
2. Signal model

Assuming that q far-field narrowband signals impinge on an array of m sensors located in the same plane as the signal sources, the array output snapshot vector at time k can be modeled by

x_k = A s_k + n_k,   (1)

where

A = [a(θ_1) ... a(θ_q)]^{(m×q)}   (2)

is a matrix of source direction vectors (the signal sources are assumed to be stationary - fixed in space),

s_k = [α_{1,k}, ..., α_{q,k}]^T   (3)

is a vector of q monochromatic signals, where the α_{i,k}, i = 1, ..., q, are random complex amplitudes, and the elements of the noise vector n_k = [η_{1,k}, ..., η_{m,k}]^T consist of zero-mean, white complex Gaussian noise. The superscript (m×q) indicates the dimensions of A. This notation will be used throughout the paper for emphasis and clarification, especially when a matrix is being partitioned into submatrices. It is assumed that the
signal and noise are uncorrelated with each other and that the columns of A in (2) are linearly independent. In the case of a linear equispaced array, the direction vectors are defined by the array manifold

a(θ) = [1, e^{jθ}, e^{j2θ}, ..., e^{j(m-1)θ}]^T,   (4)

where θ = πd cos ν and ν is the direction angle measured with respect to the main axis of the array. Also, d is the sensor spacing in units of half wavelengths. For the data model in (1), the correlation matrix is of the form

R = E[x_k x_k^H] = A P A^H + σ²I,   (5)
where P = E[s_k s_k^H] is the q×q signal correlation matrix, σ² is the white noise power (or variance) and E[·] is the expectation operator. The ijth element of the signal correlation matrix P is given by the correlation between the complex amplitudes for the ith and jth signals: P(i,j) = E[α_i α_j*], where wide-sense temporal stationarity is assumed and the time subscript is dropped from the expectation. The matrix P can be factored in the following manner:

P = P_a P_c P_a.   (6)

In this equation, the matrix P_a is a diagonal matrix with the root-mean-square amplitudes of the signals as its elements: P_a(i,i) = sqrt(E[α_i α_i*]). Then, P_c is referred to as the coherence matrix. P_c is a Hermitian, positive semi-definite matrix (as is P). Its elements are bounded by one in magnitude (they can be complex). The diagonal elements of P_c are unity. The coherence between signals i and j is contained in the ijth element of P_c and is defined by

c(i,j) = E[α_i α_j*] / sqrt(E[α_i α_i*] E[α_j α_j*]).   (7)

The magnitude of c(i,j) is referred to as coherence magnitude and its phase is coherence phase. The eigenvectors of R above can be partitioned as follows:

R = [V_s^{(m×q)}  V_n^{(m×m-q)}] [Λ_s  0; 0  Λ_n] [V_s  V_n]^H,   (8)

where V_s spans the signal subspace (defined by A), V_n spans the noise subspace, and Λ_n = σ²I^{(m-q×m-q)} for white additive noise. The dimensions are left off of the second occurrence of V above for space reasons (and since they would be repetitive). In general, the true correlation matrix R is unknown and hence must be estimated. Usually, an estimate of the correlation matrix is obtained via

R̂ = (1/N) X X^H = (1/N) Σ_{i=1}^{N} x_i x_i^H,   (9)

where each x_i represents one snapshot from the array and the m×N data matrix X is constructed by letting x_i correspond to the ith column of X. The eigenstructure of R̂ is given by

R̂ = V̂ Λ̂ V̂^H = [V̂_s  V̂_n] [Λ̂_s  0; 0  Λ̂_n] [V̂_s  V̂_n]^H,   (10)

where V̂_s comprises the eigenvectors corresponding to the q largest eigenvalues. In subsequent sections, the eigenvalues and eigenvectors will be indexed separately in the signal and noise subspaces:

Λ̂_s = diag(λ̂_{s,1}, ..., λ̂_{s,q}) and Λ̂_n = diag(λ̂_{n,1}, ..., λ̂_{n,m-q}),   (11)

V̂_s = [v̂_{s,1}, ..., v̂_{s,q}] and V̂_n = [v̂_{n,1}, ..., v̂_{n,m-q}].   (12)
3. Constrained MUSIC

Assume that q1 (q1 < q) signal bearings are known. For example, this may be the case in a radar application where the emitted signal is backscattered by a number of stationary objects with known positions (such as buildings) situated in the radar's viewing field. Define a constant matrix A_c as the matrix whose columns are the direction vectors of the known signals. A QR decomposition of A_c is given by

A_c^{(m×q1)} = [Q_{c1}^{(m×q1)}  Q_{c2}^{(m×m-q1)}] [T_{c1}^{(q1×q1)}; 0^{(m-q1×q1)}],   (13)

where T_{c1} is an upper right triangular factor. The factor T_{c1} is usually denoted R, but in this paper we have reserved the symbol R to represent correlation matrices. Notice also that Q_{c1} contains an orthonormal basis for the column span of the constraint matrix, A_c. The spectral MUSIC estimator [16] involves minimizing a quadratic form:
θ̂_i = arg min_θ f(θ),   (14)

where f(θ) is defined by

f(θ) = a^H(θ) V̂_n V̂_n^H a(θ).

The matrix V̂_n corresponds to the eigenvectors spanning the noise subspace of R̂. If certain signal directions are known, this is equivalent to knowledge of some of the dimensions of the signal subspace, which is the orthogonal complement of V_n, the true noise subspace. Thus, the constraint information contained in Q_{c1} should be used to insure that the columns of V̂_n are orthogonal to the known signal direction vectors. This can be accomplished by considering the matrix

R̂_c = Q_{c2} Q_{c2}^H R̂ Q_{c2} Q_{c2}^H,   (15)

where Q_{c2} is defined in (13). The range of the projection operator Q_{c2} Q_{c2}^H is orthogonal to the known signals but includes all of the noise subspace; thus R̂_c includes information regarding only the unknown signals and the noise. From the eigendecomposition of R̂_c, we can estimate the remaining (unknown) components of the signal subspace of R̂. The eigenstructure of R̂_c is described by

R̂_c = [V̂_{cs1}^{(m×q1)}  V̂_{cs2}^{(m×q-q1)}  V̂_{cn}^{(m×m-q)}]
      [0^{(q1×q1)}  0  0; 0  Λ̂_{cs}^{(q-q1×q-q1)}  0; 0  0  Λ̂_{cn}^{(m-q×m-q)}]
      [V̂_{cs1}  V̂_{cs2}  V̂_{cn}]^H.   (16)

The projection operator induces the zero eigenvalues corresponding to the known portion of the signal subspace. It should be noted that V̂_{cs1} V̂_{cs1}^H = Q_{c1} Q_{c1}^H. From this construction, constrained MUSIC can be defined by

min_θ f(θ) = a^H(θ) V̂_{cn} V̂_{cn}^H a(θ),   (17)

where now V̂_{cn} is orthogonal to the known signal directions and hence will lead to actual zeros in (17) for the known signal directions.

An alternative, usually more efficient, implementation of constrained MUSIC can be derived from the previous discussion. Reconsidering (15), we have

R̂_c = Q_{c2} Q_{c2}^H R̂ Q_{c2} Q_{c2}^H = Q_{c2} S̃_c Q_{c2}^H,   (18)

where S̃_c is defined by

S̃_c = Q_{c2}^H R̂ Q_{c2} = (1/N) Σ_{i=1}^{N} y_i y_i^H,   (19)

and the y_i correspond to the individually transformed snapshot vectors: y_i = Q_{c2}^H x_i. The matrix S̃_c is (m-q1)×(m-q1), whereas the original matrix R̂ is m×m. S̃_c is smaller because the transformation Q_{c2}^H deflates the signal subspace of R̂. It is straightforward to show that the eigendecomposition of R̂_c is easily obtained from that of S̃_c. It is more efficient to compute the eigenvectors of S̃_c than to compute those of R̂_c, due to the reduced size. Let the eigendecomposition of S̃_c be defined according to

S̃_c = Ũ_c Λ̃_c Ũ_c^H = [Ũ_{cs}^{(m-q1×q-q1)}  Ũ_{cn}^{(m-q1×m-q)}]
      [Λ̃_{cs}^{(q-q1×q-q1)}  0; 0  Λ̃_{cn}^{(m-q×m-q)}] [Ũ_{cs}  Ũ_{cn}]^H.   (20)

Then, multiplying S̃_c Ũ_c = Ũ_c Λ̃_c from the left by Q_{c2} and inserting Q_{c2}^H Q_{c2} = I,

Q_{c2} S̃_c Q_{c2}^H Q_{c2} Ũ_c = Q_{c2} Ũ_c Λ̃_c,   (21)

but

Q_{c2} S̃_c Q_{c2}^H = R̂_c,   (22)

so that

R̂_c Q_{c2} Ũ_c = Q_{c2} Ũ_c Λ̃_c,   (23)

and hence Q_{c2} Ũ_c represents the eigenvectors of R̂_c which correspond to nonzero eigenvalues. Thus, constrained MUSIC can be more efficiently implemented using the eigenstructure of S̃_c via

min_θ f(θ) = a^H(θ) Q_{c2} Ũ_{cn} Ũ_{cn}^H Q_{c2}^H a(θ)   (24)
and hence the eigenstructure of the smaller matrix S̃_c (or the SVD of Q_{c2}^H X) should be estimated (or tracked) in place of that of R̂_c.
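The deflation-based implementation above can be sketched directly in NumPy. This is an illustrative reading of the algorithm under hypothetical scenario values (one known source at broadside, one unknown source 2° away), not the authors' code: Q_c2 is taken from a complete QR decomposition of the known direction vector, and the reduced matrix S_c = Q_c2^H R̂ Q_c2 is eigendecomposed in place of the full projected matrix.

```python
import numpy as np

def steering(m, nu_deg, d=1.0):
    # ULA manifold: theta = pi * d * cos(nu), d in half-wavelengths
    return np.exp(1j * np.pi * d * np.cos(np.deg2rad(nu_deg)) * np.arange(m))

rng = np.random.default_rng(1)
m, N = 10, 1000
nu_known, nu_unknown = 90.0, 92.0           # hypothetical: one direction known a priori
A = np.column_stack([steering(m, nu_known), steering(m, nu_unknown)])
amps = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ amps + 0.1 * (rng.standard_normal((m, N))
                      + 1j * rng.standard_normal((m, N)))
R_hat = X @ X.conj().T / N

# QR of the known direction vector(s); Qc2 spans the orthogonal complement
A_c = steering(m, nu_known)[:, None]
q1 = A_c.shape[1]
Qc2 = np.linalg.qr(A_c, mode='complete')[0][:, q1:]

# Reduced (m - q1) x (m - q1) matrix S_c = Qc2^H R_hat Qc2; EVD in the small space
S_c = Qc2.conj().T @ R_hat @ Qc2
U = np.linalg.eigh(S_c)[1]                  # ascending eigenvalue order
U_cn = U[:, :-1]                            # all but the q - q1 = 1 largest: noise subspace

# Constrained MUSIC spectrum; the known direction is an exact null by
# construction, so the scan deliberately stays away from it
nus = np.linspace(90.5, 95.0, 1801)
f = np.array([np.linalg.norm(U_cn.conj().T @ (Qc2.conj().T @ steering(m, nu))) ** 2
              for nu in nus])
est = nus[np.argmin(f)]
print(f"estimated unknown direction: {est:.2f} deg")
```

Note that the exact null at the known direction is a feature of the construction: the spectrum of (24) is searched only for the unknown signals.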
4. Beamspace MUSIC

Beamspace MUSIC was primarily formulated as a data reduction method. However, it also requires a priori information regarding signal locations so that the data reduction does not appreciably affect performance. In the case of beamspace MUSIC, the signal locations need only be known approximately; this might be considered a "soft" constraint, whereas for constrained MUSIC (a subset of) the signal directions need to be known exactly, since it incorporates "hard" constraints. Beamspace [12] and sector-focussed [4] approaches are based on approximate information of the signal locations. For example, if it is known that the signals lie within a particular sector, the data can be reduced in a manner that retains the information from that sector but eliminates information from other sectors. For detailed information on obtaining optimal beamspace transformations, see [12, 2]. A simplified discussion of beamspace transformations follows. First, form a dense grid of beams within the sector of interest. One possibility would be to form beams that are separated by a half-beamwidth for the given array. Then the information regarding the signals would be contained within these beams. If the number of beams in the sector is less than the number of sensors, the size of the problem and therefore the required computations are reduced. Let the m × m_b (m_b < m and m_b > q) matrix A_b represent a beamspace matrix whose columns consist of a set of steering vectors representing the sector of interest. Taking a QR decomposition of A_b yields

A_b = [Q_{b1}^{(m×m_b)}  Q_{b2}^{(m×m-m_b)}] [T_{b1}^{(m_b×m_b)}; 0^{(m-m_b×m_b)}].   (25)

As with constrained MUSIC, a new reduced-dimension and transformed correlation matrix is obtained for further processing:

S̃_b = Q_{b1}^H R̂ Q_{b1} = [Ũ_{bs}  Ũ_{bn}] [Λ̃_{bs}^{(q×q)}  0; 0  Λ̃_{bn}^{(m_b-q×m_b-q)}] [Ũ_{bs}  Ũ_{bn}]^H.   (26)

Beamspace MUSIC consists of first obtaining an eigendecomposition of S̃_b and then minimizing the following quadratic form [12]:

min_θ f(θ) = a^H(θ) Q_{b1} Ũ_{bn} Ũ_{bn}^H Q_{b1}^H a(θ).   (27)

The matrix Ũ_{bn} corresponds to the m_b - q smallest eigenvalues of S̃_b. By contrasting Eqs. (24) and (27), it is noted that constrained MUSIC and beamspace MUSIC have the same form, as discussed in [21]. The beamspace transformation is designed to pass energy within a designated sector containing the signals of interest. Therefore, by construction, the beamspace transformation does not deflate the signal subspace, but deflates the noise subspace (assuming no out-of-sector signals). It is important to design the beamspace transformation in an intelligent manner because, as presented in [19], the asymptotic variance of beamspace MUSIC is lower bounded by that of element space MUSIC (again, assuming no out-of-sector signals). If beamspace MUSIC is to attain element space performance, the transformation must be designed so that it has minimal effect on the signals in the sector of interest. So, constrained MUSIC deflates the signal subspace and beamspace MUSIC deflates the noise subspace. To use constrained MUSIC, one must know exactly where a subset of the signals are; these are then removed before locating the remaining signals. To use beamspace MUSIC, approximate information regarding all signal locations is required, so that the transformation includes all of them. For information on selecting efficient beamspace transformations see [22, 4, 1, 2, 12]. For information regarding performance of beamspace MUSIC, see [8, 4, 12, 19, 23]. For an introduction to root versions of beamspace MUSIC, see [24, 25] and for some interesting applications of beamspace MUSIC, see [13, 26].

5. Constrained beamspace MUSIC

For the case of constrained beamspace MUSIC, it is assumed that some of the signal directions are
known precisely and some of them are known only to be in a particular sector. Thus, constraints are used to deflate the signal subspace for the known signals and beamspace techniques are used to deflate the noise subspace for the other signals. We define the m × (q1 + m_b) matrix A_cb as follows:

A_cb = [A_c^{(m×q1)}  A_b^{(m×m_b)}].   (28)

The matrix A_c (see (13)) is obtained as with constrained MUSIC and A_b (see (25)) is obtained as with beamspace MUSIC. As before, we take a QR decomposition of A_cb:

A_cb = [Q_{cb1}^{(m×q1)}  Q_{cb2}^{(m×m_b)}] [T_{cb1}^{(q1×q1)}  T_{cb12}^{(q1×m_b)}; 0^{(m_b×q1)}  T_{cb2}^{(m_b×m_b)}].   (29)

In this case, the transformation is defined by Q_{cb2}, which represents an orthonormal basis for the portion of A_b that is orthogonal to A_c [14, 17]. Note that the ordering of the portions of A_cb is critical since it is necessary for Q_{cb2} to be orthogonal to the known signals. Constrained beamspace MUSIC is then described by

min_θ f(θ) = a^H(θ) Q_{cb2} Ũ_{cbn} Ũ_{cbn}^H Q_{cb2}^H a(θ),   (30)

where Ũ_{cbn} is determined from the eigendecomposition of S̃_cb = Q_{cb2}^H R̂ Q_{cb2}:

S̃_cb = Q_{cb2}^H R̂ Q_{cb2}   (31)
     = Ũ_cb Λ̃_cb Ũ_cb^H   (32)
     = [Ũ_{cbs}^{(m_b×q-q1)}  Ũ_{cbn}^{(m_b×m_b-q+q1)}] [Λ̃_{cbs}^{(q-q1×q-q1)}  0; 0  Λ̃_{cbn}^{(m_b-q+q1×m_b-q+q1)}] [Ũ_{cbs}  Ũ_{cbn}]^H.   (33)

6. Using constraints provides improved subspace estimates

From the previous sections, we have seen that incorporation of constraints involves transforming the correlation matrix and then performing an EVD on the transformed problem. In this section, we analyze the subspaces resulting from the EVD of the original correlation matrix and compare them to subspaces obtained from an EVD of the constrained (transformed) correlation matrix. Using the constraint enables us to nail down some of the dimensions of the signal subspace; thus only the remaining dimensions need to be estimated. We will show that this results in subspace estimates that are improved on the average, assuming large numbers of snapshots. From [10], we have

E[(û_{s,i} - u_{s,i})(û_{s,i} - u_{s,i})^H] = E[ε_{s,i} ε_{s,i}^H]
≈ (λ_{s,i}/N) [ Σ_{k=1}^{m-q} (σ²/(λ_{s,i} - σ²)²) u_{n,k} u_{n,k}^H + Σ_{k=1,k≠i}^{q} (λ_{s,k}/(λ_{s,i} - λ_{s,k})²) u_{s,k} u_{s,k}^H ],   (34)

where ε_{s,i} is the error in the ith eigenvector of the signal subspace. If we consider the subspace distance (SD) [7] (using the Frobenius norm) between the estimated and actual signal subspaces, we have

SD = ||V_s V_s^H - V̂_s V̂_s^H||_F²
   = ||V_s V_s^H - (V_s + E)(V_s + E)^H||_F²
   = ||V_s E^H + E V_s^H + E E^H||_F².   (35)

To continue, note that the Frobenius norm of a Hermitian matrix W can be written

||W||_F² = Trace(W W).   (36)

Thus,

||V_s E^H + E V_s^H + E E^H||_F²
= Trace[(V_s E^H + E V_s^H + E E^H)(V_s E^H + E V_s^H + E E^H)]
= Trace[V_s E^H V_s E^H + V_s E^H E V_s^H + V_s E^H E E^H + E V_s^H V_s E^H + E V_s^H E V_s^H + E V_s^H E E^H + E E^H V_s E^H + E E^H E V_s^H + E E^H E E^H].

Based on (V_s + E)^H (V_s + E) = I and V_s^H V_s = I, we obtain

V_s^H E + E^H V_s + E^H E = 0.   (37)

Eq. (37) can be left multiplied by E and right multiplied by E^H to obtain

E V_s^H E E^H + E E^H V_s E^H + E E^H E E^H = 0.   (38)

Also, based on (37), we have

V_s E^H V_s E^H + V_s E^H E E^H = -V_s V_s^H E E^H   (39)

and

E V_s^H E V_s^H + E E^H E V_s^H = -E E^H V_s V_s^H   (40)

and

V_s E^H E V_s^H = -V_s E^H V_s V_s^H - V_s V_s^H E V_s^H,   (41)

so that (35) becomes

SD = ||V_s V_s^H - V̂_s V̂_s^H||_F²
   = Trace[E E^H - V_s V_s^H E E^H - E E^H V_s V_s^H - V_s E^H V_s V_s^H - V_s V_s^H E V_s^H].   (42)

Taking an expectation, using (34) and (36), and noting that Trace is a linear operator,

E[Trace[E E^H]] = Σ_{i=1}^{q} E[Trace[ε_{s,i} ε_{s,i}^H]]
≈ (1/N) Σ_{i=1}^{q} [ (m-q) λ_{s,i} σ²/(λ_{s,i} - σ²)² + Σ_{k=1,k≠i}^{q} λ_{s,i} λ_{s,k}/(λ_{s,i} - λ_{s,k})² ],   (43)

from which (combined with (34)) we have

E[Trace[E E^H V_s V_s^H]] = E[Trace[V_s V_s^H E E^H]]
≈ (1/N) Σ_{i=1}^{q} Σ_{k=1,k≠i}^{q} λ_{s,i} λ_{s,k}/(λ_{s,i} - λ_{s,k})².   (44)

In the above, we have made use of the following: Trace(u_{s,k} u_{s,k}^H) = 1, for all k. Also, from [10], we have

E[ε_{s,i}] ≈ -(u_{s,i}/(2N)) [ (m-q) λ_{s,i} σ²/(λ_{s,i} - σ²)² + Σ_{k=1,k≠i}^{q} λ_{s,i} λ_{s,k}/(λ_{s,i} - λ_{s,k})² ]   (45)

and hence

E[Trace[V_s E^H V_s V_s^H]] = E[Trace[V_s V_s^H E V_s^H]]
≈ -(1/(2N)) Σ_{i=1}^{q} [ (m-q) λ_{s,i} σ²/(λ_{s,i} - σ²)² + Σ_{k=1,k≠i}^{q} λ_{s,i} λ_{s,k}/(λ_{s,i} - λ_{s,k})² ],   (46)

so that after substituting (43), (44) and (46) into (42), we have

SD ≈ (2σ²(m-q)/N) Σ_{i=1}^{q} λ_{s,i}/(λ_{s,i} - σ²)².   (47)

The next issue we must determine is whether the subspace distance (as specified above) is certain to be reduced using constraints. Thus, we derive an expression analogous to (47) for constrained MUSIC and compare. Using the same approach, we have

SD_c ≈ (2σ²(m-q)/N) Σ_{i=1}^{q-q1} λ_{cs,i}/(λ_{cs,i} - σ²)²,   (48)

which follows from applying (47), with m-q1 and q-q1 replacing m and q there, to S̃_c in (20). To prove SD_c < SD, we first substitute a change of variables on the eigenvalues in (47) and (48). Writing μ_i = λ_i - σ² (the μ_i are the eigenvalues of the no-noise correlation matrix), we have

SD ≈ (2σ²(m-q)/N) Σ_{i=1}^{q} (μ_{s,i} + σ²)/μ_{s,i}²   (49)

and

SD_c ≈ (2σ²(m-q)/N) Σ_{i=1}^{q-q1} (μ_{cs,i} + σ²)/μ_{cs,i}².   (50)

The next step is to show that the (nonzero) eigenvalues of R_c interlace those of R. Using the Poincaré Separation Theorem (Corollary 4.3.16 of [9]), we have

μ_{s,i} ≥ μ_{cs,i} ≥ μ_{s,(i+q1)}.   (51)

Thus for every term (μ_{cs,i} + σ²)/μ_{cs,i}² in (50), there is a corresponding, greater than or equal to, term (μ_{s,(i+q1)} + σ²)/μ_{s,(i+q1)}² in (49). Also, there are q1 remaining nonzero terms in (49), which gives strict inequality, i.e., this proves SD_c < SD.

To verify these results, in Fig. 1, we compare the distance between the true and estimated subspaces (defined as the Frobenius norm of the difference between the projection operators representing the respective subspaces in (35)), with and without the use of constraints. In this example, the array has 10 equally spaced (half-wavelength) sensors and the signals are from 90° to 92°. (Broadside corresponds to 90°.) The signal-to-noise ratio is 20 dB for each signal. The intersignal coherence phase is fixed at φ = ∠(a_1^H a_2). This choice of coherence phase will be explained in a later section. (See the figure caption for more details.) The coherence magnitude is allowed to vary between ±0.9; negative coherence magnitude actually represents a coherence phase shift of 180°. The key observations from this figure are (1) the estimated and predicted subspace
Fig. 1. Subspace distance between the actual and estimated signal subspaces is plotted in dB as a function of coherence magnitude. The array is linear, equally spaced (half-wavelength), with 10 sensors. There are two signals, one from broadside and one from 2° past broadside. Their coherence phase is fixed at φ = ∠(a_1^H a_2), where a_i is the direction vector representing the ith signal. The coherence magnitude is varied from -0.9 to 0.9 in increments of 0.1. The negative magnitudes actually correspond to a coherence phase shift of 180°. The SNR is 20 dB for each signal and 1000 snapshots were used for each of 1000 trials at each point on the estimated variance curves.
distances are decreased by the use of constraints and (2) the estimated subspace distances closely track their predicted values, with and without the use of constraints. For the negative coherence magnitudes, the subspace distances are actually decreased compared to the case with no intersignal coherence. On the surface, this would seem to imply lower variance for the bearing estimates, as well as improved breakdown characteristics. Clearly, a perfect estimate of the signal subspace will yield perfect signal bearing estimates, but just moving closer to the true subspace (in the Frobenius norm sense) turns out to be insufficient to guarantee improved (or reduced variance) signal bearing estimates. These issues are studied in subsequent sections.
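The inequality SD_c < SD can be spot-checked numerically. The sketch below, under hypothetical scenario parameters (two uncorrelated sources, one direction treated as known), averages the Frobenius-norm distance between the true and estimated signal-subspace projectors over Monte Carlo trials, with and without the constraint.

```python
import numpy as np

def steering(m, nu_deg):
    # Half-wavelength ULA manifold
    return np.exp(1j * np.pi * np.cos(np.deg2rad(nu_deg)) * np.arange(m))

rng = np.random.default_rng(2)
m, N, trials = 10, 200, 50
A = np.column_stack([steering(m, 90.0), steering(m, 92.0)])  # signal 1 is "known"
V_s = np.linalg.qr(A)[0]                    # orthonormal basis of the true signal subspace
Pi_true = V_s @ V_s.conj().T                # true projector V_s V_s^H

A_c = A[:, :1]
Qc2 = np.linalg.qr(A_c, mode='complete')[0][:, 1:]  # complement of the known direction
q_c1 = A_c / np.linalg.norm(A_c)            # known signal-subspace dimension, exact

sd_u = sd_c = 0.0
for _ in range(trials):
    amps = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
    nse = np.sqrt(0.5) * (rng.standard_normal((m, N))
                          + 1j * rng.standard_normal((m, N)))
    X = A @ amps + nse
    R_hat = X @ X.conj().T / N

    # Unconstrained estimate: eigenvectors of the q = 2 largest eigenvalues
    Vu = np.linalg.eigh(R_hat)[1][:, -2:]
    sd_u += np.linalg.norm(Pi_true - Vu @ Vu.conj().T, 'fro') ** 2

    # Constrained estimate: known dimension plus top eigenvector of the deflated matrix
    S_c = Qc2.conj().T @ R_hat @ Qc2
    u = np.linalg.eigh(S_c)[1][:, -1:]
    Vc = np.column_stack([q_c1, Qc2 @ u])
    sd_c += np.linalg.norm(Pi_true - Vc @ Vc.conj().T, 'fro') ** 2

print(f"average SD (unconstrained) = {sd_u / trials:.4f}")
print(f"average SD (constrained)   = {sd_c / trials:.4f}")
```

On average the constrained distance comes out smaller, as the interlacing argument predicts, since one dimension of the signal subspace is nailed down exactly.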
7. Asymptotic variance of constrained and beamspace MUSIC variations In [18], asymptotic variance expressions were obtained for unconstrained element space MUSIC. In [19], similar expressions were obtained for unconstrained beamspace MUSIC. In this section, we present asymptotic variance expressions for constrained element space and constrained beamspace MUSIC. For this discussion, assume that R is described by
matrices R and R,,, respectively. In the following, we present the variance of beamspace MUSIC in terms of Sb instead of Rb, but as discussed relative to constrained MUSIC in Section 3, the eigendecomposition of the projected correlation matrix R, (or Rb) and that of S, (or S,) are trivially related. See (20), (23), and surrounding discussion. From [18], we have the variance of unconstrained element space MUSIC (for large N): (2,.
,ab2,2
laH(ei)us,k12
IdH(di)Un,k12
c:r,”
(53) where &k’s are the signal subspace eigenvalues (in descending order) of R as defined in (11). The vector V s,k is the kth column of V, and the vector r,,k is the kth column of I’, (see (12)). The noise power o2 is as defined in (5). Also, the vector d(ei) is defined according to d(0i) = da(B)/d010=8i. From [19], the variance of unconstrained beamspace MUSIC is given by (for large N and assuming that QflA has full column rank):
i.b,x,2)2 14wQbl%?.klZ
bs.k
varu,(&) = c;:,”
tdH(h)QblUbn,k12
a21
+
’
(54)
R=a21+APAH =
’
cAyq,)
Apwqd,
p’91 xq-q,) ,!,;:q,xq-qtj
1
[AI
A21H>
(5-a
where $A_1$ corresponds to the known signals and $A_2$ corresponds to the unknown signals. In each of the four cases (element space or beamspace, unconstrained or constrained), the variance can be written in terms of the eigenvalues and eigenvectors of the relevant correlation matrix, or in terms of the component matrices of $R$ in (52). As presented in [18, 19], the variance of unconstrained element space (UE) MUSIC and unconstrained beamspace (UB) MUSIC can be written in terms of the eigendecompositions of the correlation matrices:

$\mathrm{var}_{UE}(\hat\theta_i) = \frac{1}{2N} \frac{\sum_{k=1}^{q} \sigma^2 \lambda_k (\lambda_k - \sigma^2)^{-2} |a^H(\theta_i) u_k|^2}{\sum_{k=q+1}^{m} |d^H(\theta_i) u_k|^2}$,  (53)

$\mathrm{var}_{UB}(\hat\theta_i) = \frac{1}{2N} \frac{\sum_{k=1}^{q} \sigma_b^2 \lambda_{bs,k} (\lambda_{bs,k} - \sigma_b^2)^{-2} |a^H(\theta_i) Q_b u_{bs,k}|^2}{\sum_{k=q+1}^{m_b} |d^H(\theta_i) Q_b u_{bn,k}|^2}$,  (54)

where $\lambda_{bs,k}$, $u_{bs,k}$ and $u_{bn,k}$ are obtained from the eigendecomposition of $R_b$, and $\sigma_b^2$ is obtained from $R_b$ in the same way $\sigma^2$ is obtained from $R$ in Eq. (26). Alternative expressions for the variance of unconstrained element and beamspace MUSIC were given in [18, 19] in terms of the component matrices that make up $R$ (see (52)). These are as follows:

$\mathrm{var}_{UE}(\hat\theta_i) = \frac{\sigma^2 [G_{UE}]_{ii}}{2N [H_{UE}]_{ii}}$,  (55)

where $G_{UE}$ and $H_{UE}$ are matrices defined as

$G_{UE} = P^{-1} + \sigma^2 P^{-1} (A^H A)^{-1} P^{-1}$  (56)

and

$H_{UE} = D^H [I - A (A^H A)^{-1} A^H] D$,  (57)

and the matrix $D$ is defined by $D = [d(\theta_1) \cdots d(\theta_q)]$. A similar expression for unconstrained beamspace MUSIC is as follows:

$\mathrm{var}_{UB}(\hat\theta_i) = \frac{\sigma^2 [G_{UB}]_{ii}}{2N [H_{UB}]_{ii}}$,  (58)

where

$G_{UB} = P^{-1} + \sigma^2 P^{-1} (A^H Q_b Q_b^H A)^{-1} P^{-1}$  (59)

and

$H_{UB} = D^H Q_b [I - Q_b^H A (A^H Q_b Q_b^H A)^{-1} A^H Q_b] Q_b^H D$.  (60)

As previously discussed, the constraining transformation is different from the beamspace transformation in that it impacts the signal subspace, reducing the signal subspace rank. However, we have verified that the same approach used in [18, 19] is straightforwardly modified to obtain the asymptotic variance for the constrained methods. Since our proof essentially mimics that in [18, 19], we have not included it here. As above, the asymptotic variance of constrained element space MUSIC or constrained beamspace MUSIC can be written in two ways. (Note that since the constraining transformation does not impact the noise subspace, the denominator terms for constrained element space MUSIC are identical to those for unconstrained element space MUSIC.) First, in terms of the relevant eigendecompositions:

$\mathrm{var}_{CE}(\hat\theta_i) = \frac{1}{2N} \frac{\sum_{k=1}^{q-q_1} \sigma^2 \lambda_{cs,k} (\lambda_{cs,k} - \sigma^2)^{-2} |a^H(\theta_i) Q_{c2} u_{cs,k}|^2}{\sum_{k=q-q_1+1}^{m-q_1} |d^H(\theta_i) Q_{c2} u_{cn,k}|^2}$  (61)

for constrained element space MUSIC (CE), and

$\mathrm{var}_{CB}(\hat\theta_i) = \frac{1}{2N} \frac{\sum_{k=1}^{q-q_1} \sigma^2 \lambda_{cbs,k} (\lambda_{cbs,k} - \sigma^2)^{-2} |a^H(\theta_i) Q_{cb2} u_{cbs,k}|^2}{\sum_{k=q-q_1+1}^{m_b} |d^H(\theta_i) Q_{cb2} u_{cbn,k}|^2}$  (62)

for constrained beamspace (CB) MUSIC (assuming that $Q_{cb2}^H A_2$ has full column rank, and also that $q - q_1 < m_b < m - q_1$). Alternatively, each of these asymptotic variance expressions can be written in terms of the component matrices that make up $R$, see (52):

$\mathrm{var}_{CE}(\hat\theta_i) = \frac{\sigma^2 [G_{CE}]_{ii}}{2N [H_{UE}]_{ii}}$,  (63)

where

$G_{CE} = P_{22}^{-1} + \sigma^2 P_{22}^{-1} (A_2^H Q_{c2} Q_{c2}^H A_2)^{-1} P_{22}^{-1}$  (64)

and the denominator of (55) is the same as that of (63), as the projection operator $Q_{c2} Q_{c2}^H$ does not impact the noise subspace of $R$. Continuing this progression, we obtain the variance of constrained beamspace MUSIC (for large $N$):

$\mathrm{var}_{CB}(\hat\theta_i) = \frac{\sigma^2 [G_{CB}]_{ii}}{2N [H_{CB}]_{ii}}$,  (65)

where

$G_{CB} = P_{22}^{-1} + \sigma^2 P_{22}^{-1} (A_2^H Q_{cb2} Q_{cb2}^H A_2)^{-1} P_{22}^{-1}$  (66)

and

$H_{CB} = D^H Q_{cb2} [I - Q_{cb2}^H A_2 (A_2^H Q_{cb2} Q_{cb2}^H A_2)^{-1} A_2^H Q_{cb2}] Q_{cb2}^H D$.  (67)
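The component-matrix form of the unconstrained element space variance, eqs. (55)-(57), is straightforward to evaluate numerically. The sketch below is a minimal numpy implementation; the 10-sensor half-wavelength ULA and the numeric values are assumptions chosen to mirror the Fig. 2 scenario, not code from the paper:

```python
import numpy as np

def steer(theta, m):
    """Steering vector of an m-sensor half-wavelength ULA (theta in radians)."""
    k = np.arange(m)
    return np.exp(1j * np.pi * k * np.cos(theta))

def dsteer(theta, m):
    """Derivative of the steering vector with respect to theta."""
    k = np.arange(m)
    return -1j * np.pi * k * np.sin(theta) * steer(theta, m)

def music_var_ue(thetas, P, sigma2, N, m):
    """Asymptotic variance of unconstrained element space MUSIC, eqs. (55)-(57)."""
    A = np.column_stack([steer(t, m) for t in thetas])
    D = np.column_stack([dsteer(t, m) for t in thetas])
    Pinv = np.linalg.inv(P)
    AhA_inv = np.linalg.inv(A.conj().T @ A)
    G = Pinv + sigma2 * Pinv @ AhA_inv @ Pinv                      # eq. (56)
    H = D.conj().T @ (np.eye(m) - A @ AhA_inv @ A.conj().T) @ D    # eq. (57)
    # eq. (55): sigma^2 [G]_ii / (2 N [H]_ii)
    return sigma2 * np.real(np.diag(G)) / (2 * N * np.real(np.diag(H)))

# Example mirroring Fig. 2: 10 sensors, signals at 90 and 92 degrees, SNR 20 dB.
m, N, sigma2 = 10, 1000, 0.01
thetas = np.deg2rad([90.0, 92.0])
var_uncorr = music_var_ue(thetas, np.eye(2), sigma2, N, m)
```

Because the expression is asymptotic in the number of snapshots, the result scales exactly as $1/N$, and raising the inter-signal correlation inflates $P^{-1}$ and hence the predicted variance.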
In Fig. 2, we have plotted the estimated and predicted variances for unconstrained and constrained element space and beamspace MUSIC. The situation is identical to that in Fig. 1, except that the variance of the direction estimates is plotted instead of subspace distance. The beamspace transformation was obtained as a rank five truncation of an m x 500 matrix containing direction vectors corresponding to equally spaced directions over the sector from 80° to 100°. The direction estimates were obtained using root MUSIC [3]. (In [20], it was proven that the asymptotic properties of root and spectral MUSIC are the same.) From this plot, we note that the estimated and predicted values are again close, but the variance plot is roughly symmetric around the zero coherence point, whereas the subspace distance was decidedly nonsymmetric. Also note that there is almost no loss of accuracy associated with the beamspace methods, as compared to that of the element space methods.
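The beamspace construction just described (a rank-five truncation of a dense sector steering matrix) can be sketched as follows. The ULA model and the SVD-based truncation are assumptions consistent with the description above, not the authors' exact code:

```python
import numpy as np

def steer(theta, m):
    """Steering vector of an m-sensor half-wavelength ULA (theta in radians)."""
    k = np.arange(m)
    return np.exp(1j * np.pi * k * np.cos(theta))

def sector_beamspace(m, lo_deg, hi_deg, rank, ngrid=500):
    """Rank-`rank` truncation of an m x ngrid matrix of sector steering vectors.

    The columns of Qb are the dominant left singular vectors, so Qb^H Qb = I
    and Qb approximately spans the in-sector steering vectors.
    """
    grid = np.deg2rad(np.linspace(lo_deg, hi_deg, ngrid))
    S = np.column_stack([steer(t, m) for t in grid])
    U, _, _ = np.linalg.svd(S)
    return U[:, :rank]

# Rank-5 beamspace over the 80-100 degree sector, as used for Fig. 2.
Qb = sector_beamspace(10, 80, 100, 5)
```

In-sector steering vectors pass through $Q_b$ nearly unattenuated, while far out-of-sector directions are strongly attenuated, which is what makes the dimension reduction nearly lossless here.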
Fig. 2. Variance is plotted in dB as a function of coherence magnitude. There are two signals, one from broadside, and one from 2° past broadside. Their coherence phase is fixed at $\phi = \angle(a_1^H a_2)$, where $a_i$ is the direction vector representing the $i$th signal. The coherence magnitude is varied from -0.9 to 0.9 in increments of 0.1. The negative magnitudes actually correspond to a coherence phase shift of 180°. The SNR is 20 dB for each signal and 1000 snapshots were used for each of 1000 trials at each point on the estimated variance curves. The array has 10 sensors, spaced by a half-wavelength. The constraint effectively eliminates one of the signals, so that the performance of constrained MUSIC is independent of the coherence. The upper panel contrasts constrained and unconstrained element space; the bottom panel contrasts constrained and unconstrained beamspace. The beamspace transformation was designed to be rank 5 and includes the sector from 80° to 100°.
8. Comparing the variance of constrained methods to that of unconstrained methods

Although it would seem that using information related to the known signal to constrain estimated noise and signal subspaces could only improve MUSIC estimates, this turns out not to be the case. Using constraints improves the direction estimates in most cases, sometimes dramatically, but there are (theoretical) instances where the constraint can make things slightly worse. However, as demonstrated in this section (see Figs. 3 and 4 and the accompanying discussion regarding a two signal scenario later in this section), cases where unconstrained MUSIC is predicted to have lower variance than constrained MUSIC generally correspond to situations where unconstrained MUSIC is in breakdown, but constrained MUSIC is not. So unconstrained MUSIC does not achieve its theoretically predicted advantage in those cases. Also, as proven in Section 6, the subspace distance is always improved by constraints. In the following we will analyze the expressions for the variance of MUSIC and constrained MUSIC to determine how they compare.

Case 1. If the known signals are highly correlated with the unknown signals, then constrained MUSIC always performs better than MUSIC. If we consider a pair of signals, one known and one unknown, and then allow the correlation between them to vary, as they become completely correlated, $P$ becomes singular and the smallest eigenvalue in the signal subspace approaches $\sigma^2$. Thus, the variance of MUSIC will approach infinity in these cases, while the variance of constrained MUSIC does not depend on the coherence, because the constraint will effectively remove the coherence when it removes the known signal.
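The limiting behavior described in Case 1 is easy to observe numerically. The sketch below (a minimal numpy experiment with assumed example values: a 10-sensor half-wavelength ULA, signals at 90° and 92°, unit signal powers, $\sigma^2 = 0.01$) tracks the smallest signal-subspace eigenvalue of $R$ as the coherence magnitude approaches one:

```python
import numpy as np

def steer(theta_deg, m):
    """Steering vector of an m-sensor half-wavelength ULA (theta in degrees)."""
    k = np.arange(m)
    return np.exp(1j * np.pi * k * np.cos(np.deg2rad(theta_deg)))

m, sigma2 = 10, 0.01
A = np.column_stack([steer(90.0, m), steer(92.0, m)])

def smallest_signal_eig(c):
    """Second largest eigenvalue of R = A P A^H + sigma2*I.

    This is the smaller of the two signal-subspace eigenvalues; as the
    coherence c approaches 1, P becomes singular and this eigenvalue
    collapses toward the noise level sigma2.
    """
    P = np.array([[1.0, c], [np.conj(c), 1.0]])
    R = A @ P @ A.conj().T + sigma2 * np.eye(m)
    return np.sort(np.linalg.eigvalsh(R))[-2]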
[Fig. 3 comprises four panels, one per SNR (including SNR = -10 dB and SNR = 0 dB); each plots predicted variance (dB) versus coherence magnitude.]
Fig. 3. Variance is plotted in dB as a function of coherence magnitude and SNR. The signal conditions other than SNR and coherence phase (which are varied in this figure) are identical to those in Fig. 2 (signals at 90° and 92°, 1000 snapshots, 10 element array, etc.). However, in this figure, we have only the predicted variances. For four different SNRs, we have plotted predicted variance versus coherence magnitude for a range of coherence phases. The most interesting coherence phase corresponds to $\angle\gamma$; this phase corresponds to the lowest variance (unconstrained MUSIC) curve for negative coherence magnitudes and the highest variance for the positive coherence magnitudes. At each SNR, the constant line towards the bottom of the plot represents the predicted variance of constrained MUSIC, which is independent of the coherence magnitude.
Fig. 4. Variance is plotted in dB as a function of SNR. At each point, the best case (negative) coherence magnitude was chosen for unconstrained MUSIC (from Fig. 3). Again, the signals are at 90° and 92°, 1000 snapshots were used, and the array had 10 equally spaced elements. The coherence phase was $\angle\gamma$. For the cases where MUSIC was supposed to outperform constrained MUSIC, it has already broken down. At high SNR, the performance of the two algorithms is virtually identical.
Case 2. If the SNR is high ($P_{ii} \gg \sigma^2$ for all $i$) or if $m \gg 1$, then the first term in (56) and (64) dominates the second term in (56) and (64). In this case, to compare constrained MUSIC to unconstrained MUSIC, we need only compare $P_{22}^{-1}$ to the lower right corner of $P^{-1}$ (see (52)). Using the block matrix inversion relation in [15, p. 23], we have

$(P_{22} - P_{12}^H P_{11}^{-1} P_{12})^{-1}$  (68)

for the lower right corner of $P^{-1}$, and we wish to show

$(P_{22} - P_{12}^H P_{11}^{-1} P_{12})^{-1} \geq P_{22}^{-1}$,  (69)

by which we mean that $(P_{22} - P_{12}^H P_{11}^{-1} P_{12})^{-1} - P_{22}^{-1}$ is positive semi-definite. Since both $P_{22}$ and $P_{22} - P_{12}^H P_{11}^{-1} P_{12}$ are positive definite, and furthermore

$P_{22} \geq P_{22} - P_{12}^H P_{11}^{-1} P_{12}$,  (70)

we have that (69) is true by Corollary 7.7.4 of [9], and hence the diagonal elements of the lower right corner of $P^{-1}$ are greater than those of $P_{22}^{-1}$. This proves that for large $N$, the variance of constrained MUSIC is always less than that of unconstrained MUSIC for high SNR or large $m$ situations.

Case 3. If the known signals are uncorrelated with the unknown signals ($P_{12} = 0$), then the performances of MUSIC and constrained MUSIC for the DOAs unknown to constrained MUSIC are identical (for large $N$), i.e.,

$P_{12} = 0 \;\Rightarrow\; \mathrm{var}_{UE}(\hat\theta_i) = \mathrm{var}_{CE}(\hat\theta_i)$.  (71)

To see why, note that as the denominators of (55) and (63) are identical, we need only compare $G_{UE}$ and $G_{CE}$. However, $G_{UE}$ is $q \times q$ whereas $G_{CE}$ is $(q-q_1) \times (q-q_1)$, thus we must extract the lower right portion of $G_{UE}$ (corresponding to the unknown signals) in order to compare. If $P_{12} = 0$,

$P^{-1} = \begin{bmatrix} P_{11}^{-1} & 0 \\ 0 & P_{22}^{-1} \end{bmatrix}$,  (72)

so that the lower right block of $G_{UE}$ becomes

$P_{22}^{-1} + \sigma^2 P_{22}^{-1} [(A^H A)^{-1}]_{22} P_{22}^{-1}$,  (73)

where $[(A^H A)^{-1}]_{22}$ denotes the lower right $(q-q_1) \times (q-q_1)$ block of $(A^H A)^{-1}$. Thus, to compare the variances of the estimates for the unknown signals obtained using MUSIC and constrained MUSIC, we need only compare the lower right $(q-q_1) \times (q-q_1)$ block of $(A^H A)^{-1}$ to $(A_2^H Q_{c2} Q_{c2}^H A_2)^{-1}$ (see (56) and (64)). Writing

$(A^H A)^{-1} = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}$, with $B_{22} \in \mathbb{C}^{(q-q_1) \times (q-q_1)}$,  (74)

we have from [15, p. 23] that $B_{22}$ (which is the block to compute in order to compare $\mathrm{var}_{UE}(\hat\theta_i)$ to $\mathrm{var}_{CE}(\hat\theta_i)$ for the unknown signals) is given by

$B_{22} = (A_2^H [I - A_1 (A_1^H A_1)^{-1} A_1^H] A_2)^{-1}$.  (75)

The term in the brackets, $I - A_1 (A_1^H A_1)^{-1} A_1^H$, is a projection operator whose span is the orthogonal complement of $A_1$, which is exactly how $Q_{c2} Q_{c2}^H$ is defined (see (13)). So, $B_{22} = (A_2^H Q_{c2} Q_{c2}^H A_2)^{-1}$ (see (64)). This proves that, for large $N$, the variance of constrained MUSIC is the same as that of MUSIC if the unknown signals are uncorrelated with the known signals. For arbitrary SNR and coherence between the known and unknown signals, it is not possible to obtain a general result proving the variance of constrained MUSIC to be lower than that of unconstrained MUSIC. However, it is our experience that the use of constraints lowers the variance in most cases, often dramatically. Moreover, in the cases where unconstrained MUSIC is theoretically predicted to outperform constrained MUSIC, unconstrained MUSIC is generally in breakdown, and hence the theoretical predictions are not realized in finite data situations. We will illustrate these points in the following subsection, which deals with a two signal scenario - one known and one unknown signal with arbitrary coherence between the two signals.

8.1. A comparison of constrained and unconstrained MUSIC for a two signal scenario

We now consider two arbitrary signals in the cases where they are both unknown, or where one is known and one is unknown. In the former case, we will obtain the variances of both signals, using unconstrained element space MUSIC (see (55)). In the latter, we obtain the variance of the unknown signal using constrained element space MUSIC (see (63)). We will then compare and make observations. In the two signal case, we have

$A = [a_1 \; a_2]$,  (76)

where $a_1$ will be a known signal and $a_2$ is an unknown signal. Also,

$P = \begin{bmatrix} p_{11} & p_{12} \\ p_{12}^* & p_{22} \end{bmatrix}$.  (77)

We assume that $a_i^H a_i = m$, the number of sensors. Again, the denominator terms in $\mathrm{var}_{UE}$ and $\mathrm{var}_{CE}$ are identical, thus we will focus only on the diagonal elements of $G_{UE}$ and $G_{CE}$. If we estimate both signal directions using unconstrained element space MUSIC (considering the known signal to be unknown) and define $\gamma = a_1^H a_2$, we have
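The step from the lower right block of $(A^H A)^{-1}$ to $(A_2^H Q_{c2} Q_{c2}^H A_2)^{-1}$ is a standard block-inverse (Schur complement) identity. The following sketch verifies it numerically, with random complex matrices standing in for actual steering vectors (an assumption purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, q1, q2 = 10, 2, 3
# Random full-rank stand-ins for the "known" and "unknown" steering matrices.
A1 = rng.standard_normal((m, q1)) + 1j * rng.standard_normal((m, q1))
A2 = rng.standard_normal((m, q2)) + 1j * rng.standard_normal((m, q2))
A = np.hstack([A1, A2])

# Lower right (q-q1) x (q-q1) block of (A^H A)^{-1} ...
B22 = np.linalg.inv(A.conj().T @ A)[q1:, q1:]

# ... equals (A2^H [I - A1 (A1^H A1)^{-1} A1^H] A2)^{-1}, eq. (75);
# the bracketed term is the projector onto the orthogonal complement of A1.
Pperp = np.eye(m) - A1 @ np.linalg.inv(A1.conj().T @ A1) @ A1.conj().T
B22_alt = np.linalg.inv(A2.conj().T @ Pperp @ A2)
```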
$G_{UE}(1,1) = \frac{p_{22}}{p_{11}p_{22} - |p_{12}|^2} + \frac{\sigma^2 \left( p_{22}^2 m + p_{22}(p_{12}\gamma^* + p_{12}^*\gamma) + |p_{12}|^2 m \right)}{(p_{11}p_{22} - |p_{12}|^2)^2 (m^2 - |\gamma|^2)}$,  (78)

$G_{UE}(2,2) = \frac{p_{11}}{p_{11}p_{22} - |p_{12}|^2} + \frac{\sigma^2 \left( p_{11}^2 m + p_{11}(p_{12}\gamma^* + p_{12}^*\gamma) + |p_{12}|^2 m \right)}{(p_{11}p_{22} - |p_{12}|^2)^2 (m^2 - |\gamma|^2)}$,  (79)

and, using constrained MUSIC with the unknown signal,

$G_{CE} = \frac{1}{p_{22}} + \frac{\sigma^2 m}{p_{22}^2 (m^2 - |\gamma|^2)}$.  (80)
From these expressions we make two observations. First, we see that $\mathrm{var}_{CE}(\hat\theta_2)$ is independent of $p_{12}$, the correlation between the signal amplitudes. Also, we verify that if the signals are uncorrelated
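Both observations can be checked directly from (79) and (80). The sketch below encodes the two expressions; the numeric values ($m = 10$, $\sigma^2 = 0.01$, a hypothetical $\gamma$) are illustrative assumptions:

```python
import numpy as np

def g_ue_22(p11, p22, p12, gamma, m, sigma2):
    """Eq. (79): the (2,2) element of G_UE for the two signal case."""
    det = p11 * p22 - abs(p12) ** 2
    cross = 2 * (p12 * np.conj(gamma)).real      # p12*gamma^* + p12^* gamma
    num = p11 ** 2 * m + p11 * cross + abs(p12) ** 2 * m
    return p11 / det + sigma2 * num / (det ** 2 * (m ** 2 - abs(gamma) ** 2))

def g_ce(p22, gamma, m, sigma2):
    """Eq. (80): constrained MUSIC; note there is no p12 dependence."""
    return 1.0 / p22 + sigma2 * m / (p22 ** 2 * (m ** 2 - abs(gamma) ** 2))
```

With $p_{12} = 0$ the two expressions agree exactly, and the phase of a nonzero $p_{12}$ shifts $G_{UE}(2,2)$ up or down through the $2\,\mathrm{Re}(p_{12}\gamma^*)$ term.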
($p_{12} = 0$), $\mathrm{var}_{UE}(\hat\theta_2) = \mathrm{var}_{CE}(\hat\theta_2)$. Thus, as previously shown in the general case, the asymptotic variance of MUSIC is not reduced by the use of constraints unless the signals are correlated. However, the subspace distance is always reduced, and thus the use of constraints might improve breakdown characteristics. To see the effect of the inter-signal coherence on $\mathrm{var}_{UE}(\hat\theta_2)$, consider $p_{12}$. It is related to the inter-signal coherence or correlation coefficient, $c$:

$|c| = \frac{|p_{12}|}{\sqrt{p_{11}p_{22}}}$ and $\phi = \angle c = \angle p_{12}$,  (81)

where $\phi$ is the coherence phase. Now consider the following term from the numerator of (79):

$(p_{12}\gamma^* + p_{12}^*\gamma) = 2\,\mathrm{Re}(p_{12}\gamma^*) = 2|p_{12}|(\gamma_R \cos\phi + \gamma_I \sin\phi)$,  (82)

where $\gamma_R$, $\gamma_I$ represent the real and imaginary parts of $\gamma$. The denominator does not depend on the coherence phase, thus we can look for minima and maxima of the variance with respect to the coherence phase by looking at the derivative of the numerator only:

$\frac{\partial\,\mathrm{var}_{UE}}{\partial\phi} = 0 \;\Leftrightarrow\; -\gamma_R \sin\phi + \gamma_I \cos\phi = 0$,  (83)

or

$\tan\phi = \frac{\gamma_I}{\gamma_R}$,  (84)

so that $\phi = \angle\gamma$ or $\phi = \angle\gamma + \pi$. It is straightforward to verify via a second derivative that $\phi = \angle\gamma$ corresponds to a maximum and $\phi = \angle\gamma + \pi$ corresponds to a minimum. For a fixed magnitude, this coherence phase yields the minimum/maximum variance for unconstrained MUSIC. We can use the above result to search for cases where the predicted variance of unconstrained MUSIC is actually lower than that of constrained MUSIC. Note that this minimum might be achieved by including inter-signal correlation in the model. In many of the cases we have considered, the minimum variance for unconstrained MUSIC was achieved using a nonzero coherence magnitude. See Fig. 3. It is not widely recognized that inter-signal correlation might reduce the variance of MUSIC below that of the uncorrelated case.
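A quick numerical scan confirms (83)-(84): with an assumed $\gamma$, the phase-dependent term of (82) is maximized at $\phi = \angle\gamma$ and minimized at $\phi = \angle\gamma + \pi$ (modulo $2\pi$):

```python
import numpy as np

gamma = 9.51 * np.exp(0.17j)   # hypothetical gamma = a1^H a2
phis = np.linspace(-np.pi, np.pi, 100001)
# Phase-dependent factor of the numerator term in (82):
term = gamma.real * np.cos(phis) + gamma.imag * np.sin(phis)
phi_max = phis[np.argmax(term)]  # expected: angle(gamma)
phi_min = phis[np.argmin(term)]  # expected: angle(gamma) - pi
```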
In Fig. 3, we plot the predicted variance for both unconstrained and constrained MUSIC over different coherence phases, coherence magnitudes, and different SNRs. The conditions are identical to those in Fig. 2 (1000 snapshots, signals at 90° and 92°, etc.), except that the SNR and coherence phase are allowed to vary. At each SNR, a family of curves is drawn where each curve corresponds to a different coherence phase. The coherence magnitude is allowed to vary between 0.9 and -0.9, so that the coherence phase varies only over a range of 180°. For each plot, a minimum in the variance of unconstrained MUSIC is achieved for a negative coherence magnitude at the coherence phase $\angle\gamma$. (The array used is linear equally spaced, half-wavelength spacing, with 10 sensors.) On the surface, Fig. 3 seems to indicate that there are some coherence values at which the variance of MUSIC is less than that of constrained MUSIC. To study this issue, we ran simulations comparing UE-MUSIC to CE-MUSIC over a range of SNRs. At each SNR, we used the coherence phase and magnitude which minimized the theoretical variance of UE-MUSIC (obtained from the plots in Fig. 3) and compared the variance of UE-MUSIC with this best case coherence to that of CE-MUSIC. The potential advantage of MUSIC over constrained MUSIC occurs at low SNR, and by the time SNR is low enough for the potential advantage to be visible, MUSIC has broken down and the asymptotic variance expressions are irrelevant. See Fig. 4.

8.2. A comparison of constrained and unconstrained MUSIC for a three signal scenario

In this subsection, we present an example with three signals - one known and two unknown. We will first consider the three signal scenario to determine valid inter-signal coherence relationships, and then show a specific example comparing variances from simulations and theoretical predictions. For the three signal case, the coherence matrix is $3 \times 3$, Hermitian and positive definite:

$P_C = \begin{bmatrix} 1 & c_{12} & c_{13} \\ c_{12}^* & 1 & c_{23} \\ c_{13}^* & c_{23}^* & 1 \end{bmatrix}$.  (85)
This means that the individual coherences cannot vary in a totally independent manner. To understand the relationship between them, we note that unless there are completely coherent signals ($|c_{ij}| = 1.0$), $P_C$ is Hermitian, positive definite (full rank), and hence it has a Cholesky decomposition: $P_C = LL^H$, where $L$ is a lower triangular matrix. Analytically deriving $L$ from $P_C$ for the three signal
Fig. 5. There are three signals in this example. Signal 1 is known (90°), signals 2 and 3 are unknown (92° and 82°, respectively). Signals 1 and 2 are coherent, 0.4 magnitude and 0 phase. Signals 1 and 3 are incoherent. The coherence between signals 2 and 3 varies from -0.9 to 0.9, zero phase at each point. The array has 10 sensors with half-wavelength spacing. There are 1000 snapshots used with each trial, and 1000 trials for each point on the plots. The SNR is 20 dB for each of the three signals. The top set of plots is for signal 2, the bottom for signal 3. Once the coherence magnitude between signals 2 and 3 exceeds 0.5, constrained MUSIC begins to outperform unconstrained MUSIC for signal 3, even though signal 3 is not correlated with the known signal.
case yields the following condition that defines the possible coherence values, insuring that $P_C$ is positive definite:

$|c_{12}|^2 + |c_{13}|^2 + |c_{23}|^2 - 2\,\mathrm{Re}[c_{12} c_{23} c_{13}^*] < 1$.  (86)
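Condition (86), as reconstructed here, can be cross-checked against a direct Cholesky factorization of $P_C$. The sketch below compares the two tests on sample coherence values (the specific values are assumed examples):

```python
import numpy as np

def coherence_ok(c12, c13, c23):
    """Condition (86): |c12|^2 + |c13|^2 + |c23|^2 - 2 Re[c12 c23 c13^*] < 1."""
    return (abs(c12) ** 2 + abs(c13) ** 2 + abs(c23) ** 2
            - 2 * np.real(c12 * c23 * np.conj(c13))) < 1

def is_pos_def(c12, c13, c23):
    """Direct test: Cholesky succeeds iff the coherence matrix (85) is positive definite."""
    Pc = np.array([[1, c12, c13],
                   [np.conj(c12), 1, c23],
                   [np.conj(c13), np.conj(c23), 1]])
    try:
        np.linalg.cholesky(Pc)
        return True
    except np.linalg.LinAlgError:
        return False
```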
In Fig. 5, unconstrained and constrained MUSIC are compared for a three signal case where signal 1 is known, and signals 2 and 3 are unknown. In this example, the known signal is correlated with signal 2, but not with signal 3. The coherence between signal 1 (known) and signal 2 (unknown) is fixed at 0.4 with a coherence phase of 0.0 ($c_{12} = 0.4$, $c_{13} = 0.0$). The coherence magnitude between signals 2 and 3 varies from -0.9 to 0.9 (coherence phase of 0° at all points). It is interesting to note that when the coherence between signals 2 and 3 is significant, the constraint improves the performance for the estimate of signal 3, even though it is incoherent with the known signal. This is of course due to the fact that signal 2 is coherent with the known signal.

9. Constrained MUSIC with imprecise constraints
A reasonable question about constrained MUSIC concerns its effectiveness if the knowledge regarding the known signal is less than perfect. The real issue is whether a transformation can be constructed that blocks the known signal, but does not otherwise perturb the signal subspace. This can be accomplished if the known signal can be accurately located with respect to the unknown signals. Thus, if the known signal is very close to an unknown signal, then its position must be known with a high degree of accuracy. On the other hand, if the known signal is not very close to an unknown signal, its position need not be known very precisely. We will demonstrate the effectiveness of constrained MUSIC in the presence of imprecise knowledge for the case of widely spaced, but highly coherent signals. In the case of widely spaced signals, our approach is to construct a transformation that blocks out a region; such approaches include the use of derivative constraints to flatten out the null
[Fig. 6 curves: CE-MUSIC (precise), CB-MUSIC (precise), CE-MUSIC (imprecise), CB-MUSIC (imprecise), and UB-MUSIC, plotted versus SNR (dB).]

Fig. 6. There are two signals in this example. The unknown signal is located at 92° and the known one at 70°. There are point constraints at 68° and 72°. The signals are correlated with 0.999 magnitude and 0 phase. There are 10 equally spaced sensors (half-wavelength spacing). Each point on the curve was obtained from 1000 Monte Carlo trials. The number of snapshots is 100 for each trial. The beamspace transformation used was the same as that in Fig. 2: rank 5, sector of interest from 80° to 100°.
induced by the constraining transformation, as well as the use of multiple point constraints across the region to be blocked. See [6] for an example of constrained MUSIC using the derivative constraint approach. In Fig. 6 we illustrate the multiple point constraint approach and contrast the effect of the imprecise constraints to an implementation with precise constraints. The imprecise constraints can induce a bias in the estimate of the unknown signal's DOA, thus we have switched to plotting mean squared error (MSE) instead of variance in this plot. In this example, we have an unknown signal located at 92° and a known one at 70°. They are highly correlated. Two point constraints are placed ±2° around the known signal location; there is no point constraint right on the signal. See Fig. 6. Unconstrained MUSIC is broken down except at very high SNR because of the high coherence magnitude. However, even with these imprecise constraints, constrained MUSIC works well, and is not adversely affected by the high coherence magnitude since it effectively eliminates the coherence using the constraint. For very high SNR, we see that the bias due to the imprecise constraints limits how low the MSE can go, in contrast to the same situation with precise constraints or even in contrast to unconstrained MUSIC.
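The multiple point constraint idea can be sketched as follows: build an orthonormal blocking transformation whose columns are orthogonal to steering vectors at the constraint directions (here 68° and 72°, bracketing the nominally known signal at 70°, as in Fig. 6). This is a minimal illustration under an assumed ULA model, not the authors' exact implementation:

```python
import numpy as np

def steer(theta_deg, m):
    """Steering vector of an m-sensor half-wavelength ULA (theta in degrees)."""
    k = np.arange(m)
    return np.exp(1j * np.pi * k * np.cos(np.deg2rad(theta_deg)))

def blocking_transform(null_degs, m):
    """Orthonormal Qc with Qc^H a(theta) = 0 for each constrained direction.

    The trailing left singular vectors of the constraint matrix span the
    orthogonal complement of the constraint directions.
    """
    C = np.column_stack([steer(t, m) for t in null_degs])
    U, _, _ = np.linalg.svd(C, full_matrices=True)
    return U[:, C.shape[1]:]

m = 10
Qc = blocking_transform([68.0, 72.0], m)
```

Because steering vectors a couple of degrees apart are highly correlated, the transformation also strongly attenuates a signal at 70°, even though no point constraint sits exactly on it, while a distant unknown signal (e.g. at 92°) passes nearly unattenuated.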
10. Conclusions

We have presented a unified framework detailing a family of MUSIC algorithms. Constrained and beamspace variations of MUSIC involve application of a linear transformation to the data prior to computing an eigendecomposition. Constraining transformations operate on the signal subspace while beamspace transformations operate on the noise subspace (assuming no out of sector signals). They can be combined, resulting in constrained beamspace MUSIC. We have derived asymptotic variance expressions for the new members of this MUSIC family. To compare constrained MUSIC methods to unconstrained methods, we first analyzed the effect of constraints on subspace distance, demonstrating (with analysis and simulations) that constraints always reduce the distance between the estimated and actual subspaces. The effects of constraints on the variance of the MUSIC estimators are more complicated, but we were able to prove that constraints always reduce variance under high coherence magnitude, large m, or high SNR conditions. We also showed that cases where unconstrained MUSIC is predicted to outperform constrained MUSIC generally correspond to breakdown conditions for MUSIC. Consequently, we can say that for most practical purposes, constrained MUSIC outperforms unconstrained MUSIC, and sometimes the outperformance is dramatic. As part of this analysis, we also obtained results related to the effect of coherence on the performance of MUSIC. We derived best and worst case coherence phases, in terms of minimizing/maximizing the variance of unconstrained MUSIC. We have also shown that there are occasions where the variance of unconstrained MUSIC decreases as the signals get more correlated - a somewhat unexpected fact. Lastly, we presented simulation results demonstrating the effectiveness of constrained MUSIC if only approximate knowledge regarding the undesired signal's location is available.

References

[1] S. Anderson, "Optimal dimension reduction for sensor array processing", Proc. 25th Asilomar Conf. on Signals, Systems and Computers, November 1991, pp. 918-922.
[2] S. Anderson, "On optimal dimension reduction for sensor array signal processing", Signal Processing, Vol. 30, No. 2, January 1993, pp. 245-256.
[3] A. Barabell, "Improving the resolution performance of eigenstructure-based direction-finding algorithms", Proc. IEEE Internat. Conf. Acoust. Speech Signal Process., 1983, pp. 336-339.
[4] K. Buckley and X.-L. Xu, "Spatial-spectrum estimation in a location sector", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-38, No. 11, November 1990, pp. 1842-1852.
[5] R. DeGroat, E. Dowling and D. Linebarger, "The constrained MUSIC problem", IEEE Trans. Signal Process., Vol. 41, No. 1, March 1993, pp. 1445-1449.
[6] G. Fudge and D. Linebarger, "Spatial blocking filter derivative constraints for the generalized sidelobe canceller and MUSIC", IEEE Trans. Signal Process., submitted.
[7] G. Golub and C. Van Loan, Matrix Computations, 2nd Edition, Johns Hopkins Univ. Press, Baltimore, MD, 1989.
[8] D. Gray, "Formulation of the maximum signal-to-noise array processor in beam space", J. Acoust. Soc. Amer., Vol. 72, No. 4, October 1982, pp. 1195-1201.
[9] R. Horn and C. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[10] M. Kaveh and A. Barabell, "The statistical performance of the MUSIC and the Minimum-Norm algorithms in resolving plane waves in noise", IEEE Trans. Acoust. Speech Signal Process., Vol. 34, No. 2, 1986, pp. 331-341.
[11] H. Lee and M. Wengrovitz, "Improved high-resolution direction finding through use of homogeneous constraints", Proc. IEEE ASSP Workshop on Spectrum Estimation and Modelling, 1988, pp. 152-157.
[12] H. Lee and M. Wengrovitz, "Resolution threshold of beamspace MUSIC for two closely spaced emitters", IEEE Trans. Acoust. Speech Signal Process., Vol. 38, No. 9, September 1990, pp. 1545-1559.
[13] T.-S. Lee and M. Zoltowski, "Beamspace domain bearing estimation for fast target localization using an array of antennas", Proc. 25th Asilomar Conf. on Signals, Systems and Computers, November 1991, pp. 913-917.
[14] D. Linebarger, R. DeGroat, E. Dowling and P. Stoica, "Constrained beamspace MUSIC", Proc. IEEE Internat. Conf. Acoust. Speech Signal Process., 1993, pp. IV548-IV551.
[15] K. Miller, Some Eclectic Matrix Theory, Krieger, New York, 1987.
[16] R.O. Schmidt, "Multiple emitter location and signal parameter estimation", IEEE Trans. Antennas Propagat., Vol. AP-34, No. 3, 1986, pp. 276-280.
[17] P. Stoica and D. Linebarger, "An optimization result for constrained beamformer design", IEEE Signal Process. Lett., Vol. 2, No. 4, April 1995, pp. 66-67.
[18] P. Stoica and A. Nehorai, "MUSIC, maximum likelihood, and Cramer-Rao bound", IEEE Trans. Acoust. Speech Signal Process., Vol. 37, No. 5, 1989, pp. 720-741.
[19] P. Stoica and A. Nehorai, "Comparative performance study of element-space and beam-space MUSIC estimators", Circuits Systems Signal Process., Vol. 10, No. 3, 1991, pp. 285-292.
[20] P. Stoica and T. Söderström, "On spectral and root forms of sinusoidal frequency estimators", Signal Processing, Vol. 24, No. 1, July 1991, pp. 93-103.
[21] P. Stoica and T. Söderström, "On the constrained MUSIC technique", IEEE Trans. Signal Process., Vol. 41, No. 11, November 1993, pp. 3190-3193.
[22] B. Van Veen and B. Williams, "Structured covariance matrices and dimensionality reduction in array processing", Proc. IEEE ASSP Workshop on Spectrum Estimation and Modelling, 1988, pp. 168-171.
[23] X. Xu and K. Buckley, "Statistical performance comparison of MUSIC in element-space and beamspace", Proc. IEEE Internat. Conf. Acoust. Speech Signal Process., 1989, pp. 2124-2127.
[24] M. Zoltowski, G. Kautz and S. Silverstein, "Beamspace root-MUSIC", IEEE Trans. Signal Process., Vol. 41, No. 1, January 1993, pp. 344-364.
[25] M. Zoltowski and C. Mathews, "Beamspace root-MUSIC for rectangular arrays, circular arrays, and nonredundant linear arrays", Proc. 25th Asilomar Conf. on Signals, Systems and Computers, November 1991, pp. 556-560.
[26] M. Zoltowski and C. Mathews, "Direction finding with uniform circular arrays via phase mode excitation and beamspace root-MUSIC", Proc. IEEE Internat. Conf. Acoust. Speech Signal Process., 1992, pp. V245-V248.