Signal Processing 115 (2015) 20–26
Fast communication
On ordered normally distributed vector parameter estimates
Eric Chaumette, François Vincent, Olivier Besson
University of Toulouse-ISAE, Department of Electronics, Optronics and Signal, 10 Avenue Edouard Belin, 31055 Toulouse, France
Abstract
Article history: Received 14 October 2014; received in revised form 25 January 2015; accepted 27 February 2015; available online 18 March 2015.
The ordered values of a sample of observations are called the order statistics of the sample and are among the most important functions of a set of random variables in probability and statistics. However, the study of ordered estimates seems to have been overlooked in maximum-likelihood estimation. Therefore, the aim of this communication is to give an insight into the relevance of order statistics in maximum-likelihood estimation by providing a second-order statistical prediction of ordered normally distributed estimates. This second-order statistical prediction makes it possible to refine the asymptotic performance analysis of the mean square error (MSE) of maximum likelihood estimators (MLEs) of a subset of the parameters. A closer look at the bivariate case highlights the possible impact of estimates ordering on the MSE, an impact which is not negligible in (very) high resolution scenarios.
Keywords: Multivariate normal distribution; Order statistics; Mean square error; Maximum likelihood estimator
E-mail addresses: [email protected] (E. Chaumette), [email protected] (F. Vincent), [email protected] (O. Besson).

¹ The notational convention adopted is as follows: italic indicates a scalar quantity, as in $a$; lower case boldface indicates a column vector quantity, as in $\mathbf{a}$; upper case boldface indicates a matrix quantity, as in $\mathbf{A}$. The $n$-th row and $m$-th column element of the matrix $\mathbf{A}$ is denoted by $A_{n,m}$ or $(\mathbf{A})_{n,m}$. The $n$-th coordinate of the column vector $\mathbf{a}$ is denoted by $a_n$ or $(\mathbf{a})_n$. The matrix/vector transpose is indicated by a superscript $^T$, as in $\mathbf{A}^T$. For two vectors $\mathbf{a}$ and $\mathbf{b}$, $\mathbf{a} \ge \mathbf{b}$ means that $\mathbf{a} - \mathbf{b}$ is positive componentwise. $\mathcal{M}_{\mathbb{R}}(N,P)$ denotes the vector space of real matrices with $N$ rows and $P$ columns. $\mathbf{1}_M^{m:m+l}$ denotes the $M$-dimensional vector with all components set to 0 except components $m$ to $m+l$, which are set to 1. $\mathbf{1}_M$ denotes the $M$-dimensional vector with all components set to 1. $\mathbf{I}_M \in \mathbb{R}^{M \times M}$ denotes the identity matrix. $\mathbb{1}_{\{A\}}$ denotes the indicator function of the event $A$. $E_{\boldsymbol{\Theta}}[\mathbf{g}(\mathbf{x})] = \int \mathbf{g}(\mathbf{x})\, p(\mathbf{x};\boldsymbol{\Theta})\, d\mathbf{x}$ denotes the statistical expectation of the vector of functions $\mathbf{g}(\cdot)$ with respect to $\mathbf{x}$, parameterized by $\boldsymbol{\Theta}$.

1. Introduction

The ordered values of a sample of observations are called the order statistics of the sample: if $\boldsymbol{\theta} = \left(\theta_1, \theta_2, \ldots, \theta_M\right)^T$ is a vector of $M$ real-valued random variables, then $\boldsymbol{\theta}_{(M)} = \left(\theta_{(1)}, \theta_{(2)}, \ldots, \theta_{(M)}\right)^T$ denotes the vector of order statistics induced by $\boldsymbol{\theta}$, where $\theta_{(1)} \le \theta_{(2)} \le \cdots \le \theta_{(M)}$ [1,2].¹ Order statistics and extremes (smallest and largest values) are among the most important functions of a set of random variables in probability and statistics. There is natural interest in studying the highs and lows of a sequence, and the other order statistics help in understanding the concentration of probability in a distribution or, equivalently, the diversity in the population represented by the distribution. Order statistics are also useful in statistical inference, where estimates of parameters are often based on suitable functions of the vector of order statistics (robust location estimates, detection of outliers, censored sampling, characterizations, goodness of fit, etc.). However, the study of ordered estimates seems to have been overlooked in maximum-likelihood estimation [4], which is at first sight somewhat surprising. Indeed, if $\mathbf{x}$ denotes the random observation vector and $p(\mathbf{x};\boldsymbol{\Theta})$ denotes the probability density function (p.d.f.) of $\mathbf{x}$, depending on a vector of $P$ real unknown parameters $\boldsymbol{\Theta} = \left(\Theta_1, \ldots, \Theta_P\right)^T$ to be estimated, then many estimation problems in this setting lead to estimation algorithms yielding ordered estimates $\hat{\boldsymbol{\theta}}_{(M)}$ induced by a vector $\hat{\boldsymbol{\theta}}$ of $M$ estimates formed from a subset of the whole set of $P$ estimates $\hat{\boldsymbol{\Theta}}$. Among the various possible instances of this setting, the most studied in signal processing is that of separating the components of data formed from a linear superposition of individual signals and noise (nuisance). For the sake of illustration, let us consider the following simplified
example:
$$\mathbf{x}_t(\boldsymbol{\Theta}) = \mathbf{A}(\boldsymbol{\theta})\mathbf{s}_t + \mathbf{n}_t, \qquad \boldsymbol{\Theta}^T = \left(\boldsymbol{\theta}^T, \mathbf{s}_1^T, \ldots, \mathbf{s}_T^T\right), \tag{1}$$
where $1 \le t \le T$, $T$ is the number of independent observations, $\mathbf{x}_t$ is the vector of samples of size $N$, $M$ is the number of signal sources, $\mathbf{s}_t$ is the vector of complex amplitudes of the $M$ sources for the $t$-th observation, $\mathbf{A}(\boldsymbol{\theta}) = \left[\mathbf{a}(\theta_1), \ldots, \mathbf{a}(\theta_M)\right]$, $\mathbf{a}(\cdot)$ is a vector of $N$ parametric functions depending on a single parameter $\theta$, and the $\mathbf{n}_t$ are Gaussian complex circular noises independent of the $M$ sources. Since (1) is invariant under permutation of the signal sources, i.e. for any permutation matrix $\mathbf{P}_i \in \mathbb{R}^{M \times M}$, $\mathbf{x}_t(\boldsymbol{\Theta}) = \left(\mathbf{A}(\boldsymbol{\theta})\mathbf{P}_i\right)\left(\mathbf{P}_i^T\mathbf{s}_t\right) + \mathbf{n}_t$, it is well known that (1) is an ill-posed, unidentifiable estimation problem which can be regularized, i.e. transformed into a well-posed and identifiable estimation problem, by imposing an ordering of the unknown parameters $\theta_m$, $\boldsymbol{\theta} \triangleq \left(\theta_1, \ldots, \theta_M\right)^T$, $\theta_1 < \cdots < \theta_M$, and of their estimates as well: $\hat{\boldsymbol{\theta}} \triangleq \hat{\boldsymbol{\theta}}_{(M)}$. Therefore, in the MSE sense, the correct statistical prediction is given by the computation of $E_{\boldsymbol{\Theta}}\left[\left(\hat{\theta}_{(m)} - \theta_m\right)^2\right]$, $1 \le m \le M$. Unfortunately, the correct statistical prediction cannot be obtained from scratch, since most of the available results in the open literature on order statistics [1-3] have been derived the other way round, i.e. they require knowledge of the distribution of $\hat{\boldsymbol{\theta}}$. The distribution of $\hat{\boldsymbol{\theta}}$ can be obtained from a priori information on the problem at hand, or may have been derived in some regions of operation of the observation model. For instance, it has been known for a while that, under reasonably general conditions on the observation model [4,7], the ML estimates are asymptotically Gaussian distributed when the number of independent observations tends to infinity. Additionally, if the observation model is linear Gaussian as in (1), some additional asymptotic regions of operation yielding Gaussian MLEs have also been identified: at a finite number of independent observations [5-9], or when the number of samples and the number of independent observations increase without bound at the same rate, i.e. $N, T \to \infty$, $N/T \to c$, $0 < c < \infty$ [10]. Nevertheless, a close look at the derivations of these results reveals an implicit hypothesis: the asymptotic conditions of operation considered yield resolvable estimates [11,12], which prevents estimates re-ordering. Therefore, under this implicit hypothesis, $\hat{\boldsymbol{\theta}}_{(M)} = \hat{\boldsymbol{\theta}}$. However, when the conditions of operation degrade, the distribution spread and/or location bias of each $\hat{\theta}_m$ increase, the hypothesis of resolvable estimates no longer holds, and one obtains observation samples for which $\hat{\boldsymbol{\theta}}_{(M)} \ne \hat{\boldsymbol{\theta}}$ [11,12]. Therefore it is the aim of this communication to give an insight into the relevance of order statistics in maximum-likelihood estimation by providing a second-order statistical prediction of ordered normally distributed estimates. This second-order statistical prediction makes it possible to refine the asymptotic performance analysis of the MSE of MLEs of a subset $\boldsymbol{\theta}$ of the parameter set $\boldsymbol{\Theta}$.²

² Note that these results are also applicable to other estimators, such as M-estimators and Bayesian estimators (MAP, MMSE), as long as their distribution is normal.
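The permutation ambiguity in (1) and the role of the ordering convention can be illustrated with a short numerical sketch (ours, not part of the original paper; the values and the cisoid signature chosen for $\mathbf{a}(\cdot)$ are arbitrary):

```python
import numpy as np

N, M = 8, 2                                   # observation size and number of sources (illustrative)
theta = np.array([0.10, 0.13])                # ordered parameters, theta_1 < theta_2
rng = np.random.default_rng(0)
s = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # complex amplitudes

def a(th):
    # parametric signature vector a(theta); here a cisoid of frequency theta
    return np.exp(2j * np.pi * th * np.arange(N))

A = np.column_stack([a(th) for th in theta])  # A(theta) = [a(theta_1), a(theta_2)]
P = np.array([[0, 1], [1, 0]])                # permutation matrix swapping the two sources

x = A @ s                                     # noise-free part of x_t(Theta)
x_swapped = (A @ P) @ (P.T @ s)               # same observation after permuting the sources

print(np.allclose(x, x_swapped))              # True: (1) is invariant under permutation
print(np.sort(theta))                         # ordering theta_(1) <= theta_(2) removes the ambiguity
```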
Indeed, in the setting of a multivariate normal distribution with mean vector $\boldsymbol{\mu}_{\hat{\theta}}$ and covariance matrix $\mathbf{C}_{\hat{\theta}}$, $\hat{\boldsymbol{\theta}} \sim \mathcal{N}_M\left(\boldsymbol{\mu}_{\hat{\theta}}, \mathbf{C}_{\hat{\theta}}\right)$, with p.d.f. denoted $p_{\mathcal{N}_M}(\hat{\boldsymbol{\theta}}; \boldsymbol{\mu}_{\hat{\theta}}, \mathbf{C}_{\hat{\theta}})$, the most general statistical characterization, i.e. including distribution and moments, has been derived for an exchangeable multivariate normal random vector [1,13,14], that is, a normal distribution with a common mean $\mu$, a common variance $\sigma^2$ and a common correlation coefficient $\rho$: $\hat{\boldsymbol{\theta}} \sim \mathcal{N}_M\left(\mu\mathbf{1}_M, \sigma^2\left((1-\rho)\mathbf{I}_M + \rho\mathbf{1}_M\mathbf{1}_M^T\right)\right)$, with $\rho \in [0,1[$. If the focus is on the distribution, then the most general result has been released recently in [3], where the exact distribution of linear combinations of order statistics (L-statistics) [15] of arbitrary dependent random variables has been derived (see also [16] for the joint distribution of order statistics in a set of univariate or bivariate observations). In particular, [3] examines the case where the random variables have a joint elliptically contoured distribution and the case where the random variables are exchangeable. Arellano-Valle and Genton [3] also investigate the particular L-statistics that simply yield a set of order statistics, and study their joint distribution. Unfortunately, general derivations of closed-form expressions for moments and cumulants of L-statistics were beyond the scope of [3] and were left for future research. However, in the particular case of a multivariate normal distribution, it is possible to obtain closed forms for the first- and second-order moments of its order statistics directly, i.e. without explicitly computing the distribution of the order statistics (see Section 2). These closed forms not only generalize the earlier work from the exchangeable case to the general case, providing a second-order statistical prediction of L-statistics from a multivariate normal distribution, but are also required to characterize the MSE of normally distributed vector parameter estimates. Indeed, since it is always assumed that $\boldsymbol{\theta}$ has distinct components, any sensible estimation technique of $\boldsymbol{\theta}$ must preserve this resolvability requirement and yield distinct mean values, leading to an asymptotically non-exchangeable multivariate normal random vector.

2. Second-order statistical prediction of ordered normally distributed estimates

First, note that $\hat{\boldsymbol{\theta}}_{(M)} \in \mathrm{Per}(\hat{\boldsymbol{\theta}})$, where $\mathrm{Per}(\hat{\boldsymbol{\theta}}) = \{\hat{\boldsymbol{\theta}}_i = \mathbf{P}_i\hat{\boldsymbol{\theta}},\ i = 1, \ldots, M!\}$ is the collection of random vectors $\hat{\boldsymbol{\theta}}_i$ corresponding to the $M!$ different permutations of the components of $\hat{\boldsymbol{\theta}}$. Here the $\mathbf{P}_i \in \mathbb{R}^{M \times M}$ are permutation matrices with $\mathbf{P}_i \ne \mathbf{P}_j$ for all $i \ne j$. Let $\boldsymbol{\Delta} \in \mathbb{R}^{(M-1) \times M}$ be the difference matrix such that $\boldsymbol{\Delta}\boldsymbol{\theta} = \left(\theta_2 - \theta_1, \theta_3 - \theta_2, \ldots, \theta_M - \theta_{M-1}\right)^T$, i.e. the $m$-th row of $\boldsymbol{\Delta}$ is $\left(\mathbf{d}_{m+1} - \mathbf{d}_m\right)^T$, $m = 1, \ldots, M-1$, where $\mathbf{d}_1, \ldots, \mathbf{d}_M$ are the $M$-dimensional unit basis vectors. Let $\mathcal{S}_i = \{\hat{\boldsymbol{\theta}}_i : \boldsymbol{\Delta}\hat{\boldsymbol{\theta}}_i \ge \mathbf{0}\}$, where $\hat{\boldsymbol{\theta}}_i \sim \mathcal{N}_M(\boldsymbol{\mu}_i, \mathbf{C}_i)$, $\boldsymbol{\mu}_i = \mathbf{P}_i\boldsymbol{\mu}_{\hat{\theta}}$, $\mathbf{C}_i = \mathbf{P}_i\mathbf{C}_{\hat{\theta}}\mathbf{P}_i^T$. Let $P(\mathcal{D})$ be the probability of an event $\mathcal{D}$. As the set of $M!$ events $\{\mathcal{S}_i\}_{i=1}^{M!}$ is a partition of $\mathbb{R}^M$, whatever the real-valued function $f(\cdot,\cdot)$, by the theorem of total probability we have
$$E\left[f\left(\hat{\theta}_{(m)}, \hat{\theta}_{(l)}\right)\right] = \sum_{i=1}^{M!} E\left[f\left(\hat{\theta}_{(m)}, \hat{\theta}_{(l)}\right) \,\Big|\, \mathcal{S}_i\right] P(\mathcal{S}_i),$$
that is
$$E\left[f\left(\hat{\theta}_{(m)}, \hat{\theta}_{(l)}\right)\right] = \sum_{i=1}^{M!} E\left[f\left((\hat{\boldsymbol{\theta}}_i)_m, (\hat{\boldsymbol{\theta}}_i)_l\right) \,\Big|\, \mathcal{S}_i\right] P(\mathcal{S}_i). \tag{2}$$
However, from a computational point of view, it is wiser to express (2) as
$$E\left[f\left(\hat{\theta}_{(m)}, \hat{\theta}_{(l)}\right)\right] = \sum_{i=1}^{M!} E\left[f\left((\hat{\boldsymbol{\theta}}_i)_m, (\hat{\boldsymbol{\theta}}_i)_l\right) \,\Big|\, \mathcal{U}_i\right] P(\mathcal{U}_i), \tag{3}$$
where $\mathcal{U}_i = \{\hat{\mathbf{u}}_i : \hat{\mathbf{u}}_i \ge -\boldsymbol{\Delta}\boldsymbol{\mu}_i\}$ and $\hat{\mathbf{u}}_i = \boldsymbol{\Delta}(\hat{\boldsymbol{\theta}}_i - \boldsymbol{\mu}_i) \sim \mathcal{N}_{M-1}\left(\mathbf{0}, \boldsymbol{\Delta}\mathbf{C}_i\boldsymbol{\Delta}^T\right)$. Then a smart exploitation of (3) (see Appendix for details) yields
$$E\left[\hat{\theta}_{(m)}\right] = \sum_{i=1}^{M!} \left(\alpha_{m|i}\,P_i + \boldsymbol{\beta}_{m|i}^T\mathbf{e}_i\right) \tag{4}$$
$$E\left[\hat{\theta}_{(m)}^2\right] = \sum_{i=1}^{M!} \left(\left(\sigma_{m|i}^2 + \alpha_{m|i}^2\right)P_i + 2\alpha_{m|i}\boldsymbol{\beta}_{m|i}^T\mathbf{e}_i + \boldsymbol{\beta}_{m|i}^T\mathbf{R}_i\boldsymbol{\beta}_{m|i}\right) \tag{5}$$
$$E\left[\hat{\theta}_{(m+l)}\hat{\theta}_{(m)}\right] = \frac{1}{2}\left(E\left[\hat{\theta}_{(m+l)}^2\right] + E\left[\hat{\theta}_{(m)}^2\right] - E\left[\left(\hat{\theta}_{(m+l)} - \hat{\theta}_{(m)}\right)^2\right]\right) \tag{6}$$
$$E\left[\left(\hat{\theta}_{(m+l)} - \hat{\theta}_{(m)}\right)^2\right] = \sum_{i=1}^{M!} \left(\mathbf{1}_{M-1}^{m:m+l-1}\right)^T\left(\boldsymbol{\Delta}\boldsymbol{\mu}_i\,\boldsymbol{\Delta}\boldsymbol{\mu}_i^T\,P_i + 2\,\mathbf{e}_i\,\boldsymbol{\Delta}\boldsymbol{\mu}_i^T + \mathbf{R}_i\right)\mathbf{1}_{M-1}^{m:m+l-1} \tag{7}$$
$$P_i = P(\mathcal{U}_i), \qquad \mathbf{e}_i = E\left[\hat{\mathbf{u}}_i\,\mathbb{1}_{\mathcal{U}_i}\right], \qquad \mathbf{R}_i = E\left[\hat{\mathbf{u}}_i\hat{\mathbf{u}}_i^T\,\mathbb{1}_{\mathcal{U}_i}\right] \tag{8}$$
where $l \ge 1$, $\alpha_{m|i} = \mathbf{d}_m^T\boldsymbol{\mu}_i$, $\boldsymbol{\mu}_i = \mathbf{P}_i\boldsymbol{\mu}_{\hat{\theta}}$, $\mathbf{C}_i = \mathbf{P}_i\mathbf{C}_{\hat{\theta}}\mathbf{P}_i^T$, $\boldsymbol{\beta}_{m|i} = \left(\boldsymbol{\Delta}\mathbf{C}_i\boldsymbol{\Delta}^T\right)^{-1}\boldsymbol{\Delta}\mathbf{C}_i\mathbf{d}_m$ and $\sigma_{m|i}^2 = \mathbf{d}_m^T\left(\mathbf{C}_i - \mathbf{C}_i\boldsymbol{\Delta}^T\left(\boldsymbol{\Delta}\mathbf{C}_i\boldsymbol{\Delta}^T\right)^{-1}\boldsymbol{\Delta}\mathbf{C}_i\right)\mathbf{d}_m$.
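For illustration, a minimal sketch of how (4), (5) and (8) can be evaluated is given below. This is our illustrative code, not the authors' implementation: the $M!$ permutations are enumerated explicitly and the truncated moments $P_i$, $\mathbf{e}_i$, $\mathbf{R}_i$ of (8) are approximated here by plain Monte Carlo sampling rather than by the dedicated algorithms discussed in Section 2.2; the helper name, sample sizes and test values are ours.

```python
import itertools
import numpy as np

def ordered_moments(mu, C, n_mc=200_000, seed=0):
    """Estimate E[theta_(m)] and E[theta_(m)^2] via (4)-(5), with the truncated
    moments P_i, e_i, R_i of (8) approximated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    M = mu.size
    D = np.eye(M)[1:] - np.eye(M)[:-1]               # difference matrix Delta, (M-1) x M
    mean1, mean2 = np.zeros(M), np.zeros(M)
    for perm in itertools.permutations(range(M)):    # the M! permutations P_i
        P = np.eye(M)[list(perm)]
        mu_i, C_i = P @ mu, P @ C @ P.T
        S = D @ C_i @ D.T                            # Delta C_i Delta^T
        u = rng.multivariate_normal(np.zeros(M - 1), S, size=n_mc)   # samples of u_i
        in_Ui = np.all(u >= -(D @ mu_i), axis=1)     # event U_i
        P_i = in_Ui.mean()
        e_i = (u * in_Ui[:, None]).mean(axis=0)      # E[u 1_{U_i}]
        R_i = (u[in_Ui].T @ u[in_Ui]) / n_mc         # E[u u^T 1_{U_i}]
        Sinv = np.linalg.inv(S)
        for m in range(M):
            d_m = np.eye(M)[m]
            alpha = d_m @ mu_i
            beta = Sinv @ (D @ C_i @ d_m)
            sigma2 = d_m @ (C_i - C_i @ D.T @ Sinv @ D @ C_i) @ d_m
            mean1[m] += alpha * P_i + beta @ e_i                          # (4)
            mean2[m] += ((sigma2 + alpha**2) * P_i
                         + 2 * alpha * (beta @ e_i) + beta @ R_i @ beta)  # (5)
    return mean1, mean2

# cross-check against brute-force sorting of Gaussian draws (illustrative values)
mu = np.array([0.0, 0.4, 1.0])
C = 0.3 * np.eye(3) + 0.05
m1, m2 = ordered_moments(mu, C)
samples = np.sort(np.random.default_rng(1).multivariate_normal(mu, C, 500_000), axis=1)
print(m1, samples.mean(axis=0))   # the two should agree up to Monte Carlo error
```

The last two lines compare the permutation-sum evaluation with a brute-force estimate obtained by sorting Gaussian draws; the two should agree up to Monte Carlo error.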
Finally, the second-order statistical prediction of any L-statistic, that is, of any linear combination $\hat{\mathbf{z}} = \mathbf{L}\hat{\boldsymbol{\theta}}_{(M)}$, $\mathbf{L} \in \mathcal{M}_{\mathbb{R}}(N, M)$, of the vector of order statistics, is given by
$$\boldsymbol{\mu}_{\hat{z}} = \mathbf{L}\,\boldsymbol{\mu}_{\hat{\theta}_{(M)}}, \qquad \mathbf{C}_{\hat{z}} = \mathbf{L}\,\mathbf{C}_{\hat{\theta}_{(M)}}\,\mathbf{L}^T,$$
where
$$\boldsymbol{\mu}_{\hat{\theta}_{(M)}} = E\left[\hat{\boldsymbol{\theta}}_{(M)}\right], \qquad \mathbf{C}_{\hat{\theta}_{(M)}} = E\left[\hat{\boldsymbol{\theta}}_{(M)}\hat{\boldsymbol{\theta}}_{(M)}^T\right] - E\left[\hat{\boldsymbol{\theta}}_{(M)}\right]E\left[\hat{\boldsymbol{\theta}}_{(M)}\right]^T$$
can be computed from expressions (4)-(7) of $E[\hat{\theta}_{(m)}]$, $E[\hat{\theta}_{(m)}^2]$, $E[\hat{\theta}_{(m+l)}\hat{\theta}_{(m)}]$ when the L-statistics derive from a multivariate normal distribution.

2.1. Cramér-Rao bound for ordered normally distributed estimates

Let $\mathrm{CRB}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$ denote the Cramér-Rao bound (CRB) for unbiased estimates of $\boldsymbol{\theta}$ [4,7]. Then (4) allows for the computation of the ordered-estimates bias $\left(\hat{\mathbf{b}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})\right)_m = E_{\boldsymbol{\Theta}}\left[\hat{\theta}_{(m)}\right] - \theta_m$, which may be different from the (a priori) theoretical bias $\left(\mathbf{b}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})\right)_m = E_{\boldsymbol{\Theta}}\left[\hat{\theta}_m\right] - \theta_m$, leading to the CRB for ordered estimates given by [4,7]
$$\widehat{\mathrm{CRB}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta}) = \hat{\mathbf{b}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})\hat{\mathbf{b}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})^T + \left(\mathbf{I}_M + \frac{\partial\hat{\mathbf{b}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}\right)\mathrm{CRB}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})\left(\mathbf{I}_M + \frac{\partial\hat{\mathbf{b}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}\right)^T, \tag{9}$$
which may be different from the (a priori) theoretical $\mathrm{CRB}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$.
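As a sketch (under the assumption that a routine, here hypothetically called `ordered_mean`, returns $E_{\boldsymbol{\Theta}}[\hat{\theta}_{(m)}]$ as a function of $\boldsymbol{\theta}$, e.g. built on (4)), the ordered-estimates CRB (9) can be evaluated with a finite-difference approximation of the bias Jacobian:

```python
import numpy as np

def ordered_crb(theta, crb, ordered_mean, eps=1e-6):
    """Evaluate (9): the CRB for ordered estimates from the ordered-estimates bias.

    theta        : true (ordered) parameter vector, shape (M,)
    crb          : a priori CRB matrix for unbiased estimates, shape (M, M)
    ordered_mean : callable returning E[theta_hat_(m)] for a given theta
                   (hypothetical helper, e.g. built on (4))
    """
    M = theta.size
    b = ordered_mean(theta) - theta                  # ordered-estimates bias
    J = np.zeros((M, M))                             # Jacobian of b w.r.t. theta^T
    for p in range(M):
        dt = np.zeros(M)
        dt[p] = eps
        J[:, p] = ((ordered_mean(theta + dt) - (theta + dt)) - b) / eps
    F = np.eye(M) + J
    return np.outer(b, b) + F @ crb @ F.T            # equation (9)
```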
2.2. Implementation

Let $\mathbf{D}_i$ be the diagonal matrix such that $(\mathbf{D}_i)_{m,m} = \sqrt{\left(\boldsymbol{\Delta}\mathbf{C}_i\boldsymbol{\Delta}^T\right)_{m,m}}$, $m = 1, \ldots, M-1$. The computational cost of (4)-(7) can be reduced by reformulating $P_i$, $\mathbf{e}_i$, $\mathbf{R}_i$ (8) as
$$P_i = P(\mathcal{V}_i), \qquad \mathbf{e}_i = \mathbf{D}_i\,E\left[\hat{\mathbf{v}}_i\,\mathbb{1}_{\mathcal{V}_i}\right], \qquad \mathbf{R}_i = \mathbf{D}_i\,E\left[\hat{\mathbf{v}}_i\hat{\mathbf{v}}_i^T\,\mathbb{1}_{\mathcal{V}_i}\right]\mathbf{D}_i, \tag{10}$$
where $\mathcal{V}_i = \{\hat{\mathbf{v}}_i : \hat{\mathbf{v}}_i \ge -\mathbf{D}_i^{-1}\boldsymbol{\Delta}\boldsymbol{\mu}_i\}$ and $\hat{\mathbf{v}}_i \sim \mathcal{N}_{M-1}\left(\mathbf{0}, \mathbf{D}_i^{-1}\left(\boldsymbol{\Delta}\mathbf{C}_i\boldsymbol{\Delta}^T\right)\mathbf{D}_i^{-1}\right)$ is a vector of correlated standard normal random variables satisfying $P\left(\left|(\hat{\mathbf{v}}_i)_m\right| \ge 10\right) < 10^{-23}$. Thus, from a numerical point of view, the distribution support of each $(\hat{\mathbf{v}}_i)_m$ can be restricted to the interval $[-10, 10]$, leading to $\left|E\left[(\hat{\mathbf{v}}_i)_m\,\mathbb{1}_{\mathcal{V}_i}\right]\right| \le 10\,P(\mathcal{V}_i)$ and $\left|E\left[(\hat{\mathbf{v}}_i)_m(\hat{\mathbf{v}}_i)_l\,\mathbb{1}_{\mathcal{V}_i}\right]\right| \le 10^2\,P(\mathcal{V}_i)$. Therefore, since
$$P(\mathcal{V}_i) \le \min_{1 \le m \le M-1} P\left((\hat{\mathbf{v}}_i)_m \ge -\left(\mathbf{D}_i^{-1}\boldsymbol{\Delta}\boldsymbol{\mu}_i\right)_m\right),$$
from a numerical point of view:
- if $\exists m \in [1, M-1]$ such that $\left(\mathbf{D}_i^{-1}\boldsymbol{\Delta}\boldsymbol{\mu}_i\right)_m \le -10$, then $P(\mathcal{V}_i) = 0$, $E\left[\hat{\mathbf{v}}_i\mathbb{1}_{\mathcal{V}_i}\right] = \mathbf{0}$ and $E\left[\hat{\mathbf{v}}_i\hat{\mathbf{v}}_i^T\mathbb{1}_{\mathcal{V}_i}\right] = \mathbf{0}$;
- if $\forall m \in [1, M-1]$, $\left(\mathbf{D}_i^{-1}\boldsymbol{\Delta}\boldsymbol{\mu}_i\right)_m \ge 10$, then $P(\mathcal{V}_i) = 1$, $E\left[\hat{\mathbf{v}}_i\mathbb{1}_{\mathcal{V}_i}\right] = E\left[\hat{\mathbf{v}}_i\right] = \mathbf{0}$ and $E\left[\hat{\mathbf{v}}_i\hat{\mathbf{v}}_i^T\mathbb{1}_{\mathcal{V}_i}\right] = E\left[\hat{\mathbf{v}}_i\hat{\mathbf{v}}_i^T\right] = \mathbf{D}_i^{-1}\left(\boldsymbol{\Delta}\mathbf{C}_i\boldsymbol{\Delta}^T\right)\mathbf{D}_i^{-1}$.
In any other case, $P(\mathcal{V}_i)$, $E\left[\hat{\mathbf{v}}_i\mathbb{1}_{\mathcal{V}_i}\right]$ and $E\left[\hat{\mathbf{v}}_i\hat{\mathbf{v}}_i^T\mathbb{1}_{\mathcal{V}_i}\right]$ can be computed by resorting to the algorithms proposed by Genz [17] for the numerical evaluation of multivariate normal distributions and moments over domains included in $[-10, 10]^M$.
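In practice, Genz-type integration is exposed by standard scientific libraries. The following sketch (ours; function names, sample sizes and the Monte Carlo treatment of the truncated moments are illustrative choices, not the authors' implementation) computes $P(\mathcal{V}_i)$ through scipy's multivariate normal CDF, using the fact that for a zero-mean normal vector $P(\hat{\mathbf{v}} \ge \mathbf{a}) = P(-\hat{\mathbf{v}} \le -\mathbf{a})$:

```python
import numpy as np
from scipy.stats import multivariate_normal

def truncated_stats(Sigma, lower, n_mc=200_000, seed=0):
    """P(v >= lower), E[v 1_{v>=lower}], E[v v^T 1_{v>=lower}] for v ~ N(0, Sigma).

    The probability uses scipy's multivariate normal CDF (Genz-type integration):
    for a zero-mean normal, P(v >= lower) = P(-v <= -lower) = CDF(-lower).
    The truncated moments are approximated here by plain Monte Carlo.
    """
    k = Sigma.shape[0]
    prob = multivariate_normal(mean=np.zeros(k), cov=Sigma).cdf(-lower)
    rng = np.random.default_rng(seed)
    v = rng.multivariate_normal(np.zeros(k), Sigma, size=n_mc)
    mask = np.all(v >= lower, axis=1)
    e = (v * mask[:, None]).mean(axis=0)          # E[v 1_{V}]
    R = (v[mask].T @ v[mask]) / n_mc              # E[v v^T 1_{V}]
    return prob, e, R

# for permutation i: Sigma = D_i^{-1} (Delta C_i Delta^T) D_i^{-1}, lower = -D_i^{-1} Delta mu_i
```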
3. The bivariate (two sources) case

Let $\mathbf{C} \triangleq \mathbf{C}_{\hat{\theta}}$, $\boldsymbol{\mu} \triangleq \boldsymbol{\mu}_{\hat{\theta}}$, $d\mu = \mu_2 - \mu_1$ and $\widehat{d\theta} = \hat{\theta}_2 - \hat{\theta}_1 = \hat{u}_1$. As $M = 2$, $\hat{u}_2 = -\hat{u}_1$, $\Delta\mu_1 = d\mu = -\Delta\mu_2$, $P_1 + P_2 = 1$, $\mathbf{1}_1^{1:1} = 1$, $e_1 - e_2 = E[\hat{u}_1] = 0$, $R_1 + R_2 = E[\hat{u}_1^2] = \sigma^2_{\widehat{d\theta}}$ and $\boldsymbol{\Delta}\mathbf{C}_1\boldsymbol{\Delta}^T = \boldsymbol{\Delta}\mathbf{C}_2\boldsymbol{\Delta}^T = \sigma^2_{\widehat{d\theta}} = C_{1,1} + C_{2,2} - 2C_{1,2}$. Moreover, if $\mathbf{C} = \sigma^2\left((1-\rho)\mathbf{I}_2 + \rho\mathbf{1}_2\mathbf{1}_2^T\right)$, then $\sigma_{\widehat{d\theta}} = \sqrt{2\sigma^2(1-\rho)}$ and a few additional calculations yield
$$E\left[\hat{\theta}_{(m)}\right] = E\left[\hat{\theta}_m\right] + (-1)^m\, h(\tau)\,\sigma_{\widehat{d\theta}},$$
$$E\left[\hat{\theta}_{(m)}^2\right] = E\left[\hat{\theta}_m^2\right] + (-1)^m\, h(\tau)\left(\mu_2 + \mu_1\right)\sigma_{\widehat{d\theta}},$$
$$\mathrm{Var}\left[\hat{\theta}_{(m)}\right] = \mathrm{Var}\left[\hat{\theta}_m\right] - \sigma^2_{\widehat{d\theta}}\, h(\tau)\left(\tau + h(\tau)\right),$$
$$\mathrm{MSE}\left[\hat{\theta}_{(m)}\right] = \mathrm{Var}\left[\hat{\theta}_m\right] - \sigma^2_{\widehat{d\theta}}\,\tau\, h(\tau), \tag{11}$$
where $h(y) = E\left[v\,\mathbb{1}_{\{v \ge y\}}\right] - y\,P(v \ge y)$, $v$ being a standard normal random variable, $\mathrm{MSE}[\hat{\theta}_{(m)}] \triangleq E\left[\left(\hat{\theta}_{(m)} - \mu_m\right)^2\right]$ and $\tau = d\mu/\sigma_{\widehat{d\theta}}$. An interesting feature is the MSE shrinkage factor
$$\xi(\tau, \rho) = \frac{\mathrm{MSE}\left[\hat{\theta}_{(m)}\right]}{\mathrm{Var}\left[\hat{\theta}_m\right]} = 1 - 2(1-\rho)\,\tau\, h(\tau). \tag{12}$$

Fig. 1. MSE shrinkage factor $\xi(\tau,\rho)$ versus $\tau = d\mu/\sigma_{\widehat{d\theta}}$ for $\rho = -1, -0.5, 0, 0.5, 1$.
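A small sketch (ours, with arbitrary illustrative values) evaluating $h$ and the shrinkage factor (12), and cross-checking the MSE expression (11) against brute-force sorting of bivariate normal draws:

```python
import numpy as np
from scipy.stats import norm

def h(y):
    # h(y) = E[v 1_{v>=y}] - y P(v >= y) = phi(y) - y Q(y) for standard normal v
    return norm.pdf(y) - y * norm.sf(y)

def mse_shrinkage(tau, rho):
    # equation (12): MSE[theta_(m)] / Var[theta_m]
    return 1.0 - 2.0 * (1.0 - rho) * tau * h(tau)

# Monte Carlo check of (11) for an exchangeable bivariate normal (illustrative values)
rng = np.random.default_rng(0)
sigma2, rho, mu = 1.0, 0.2, np.array([0.0, 0.5])
C = sigma2 * ((1 - rho) * np.eye(2) + rho * np.ones((2, 2)))
sig_d = np.sqrt(2 * sigma2 * (1 - rho))          # std of theta_hat_2 - theta_hat_1
tau = (mu[1] - mu[0]) / sig_d
th = np.sort(rng.multivariate_normal(mu, C, 1_000_000), axis=1)
mse_emp = ((th - mu) ** 2).mean(axis=0)          # empirical MSE of the ordered estimates
mse_th = sigma2 - sig_d**2 * tau * h(tau)        # (11), identical for m = 1, 2
print(mse_emp, mse_th, mse_shrinkage(tau, rho) * sigma2)
```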
The computation of the derivative vector $\partial\xi(\tau,\rho)/\partial(\tau,\rho)$ shows that, for a given $\rho$, $\xi(\tau,\rho)$ admits a global minimum which is the solution of $E\left[v\,\mathbb{1}_{\{v \ge y\}}\right] - 2y\,P(v \ge y) = 0$, $y = \tau/\sqrt{2(1-\rho)}$; after numerical resolution, $y \simeq 0.61$, leading to $\tau \simeq 0.863\sqrt{1-\rho}$ and $\xi(\tau,\rho) \ge 1 - 0.2(1-\rho) \ge 0.6$ (see Fig. 1). Therefore the minimum of $\xi(\tau,\rho)$ is 0.6 ($-2.2$ dB) and is reached for couples $(\tau,\rho)$ belonging to the set $\left\{(\tau,\rho) : \rho \to -1^+,\ \tau \simeq 1.22\right\}$. This result is applicable to any instantiation of (1) for which $\hat{\boldsymbol{\theta}}$ is normal with $\mathbf{C}_{\hat{\theta}} = \sigma^2\left((1-\rho)\mathbf{I}_2 + \rho\mathbf{1}_2\mathbf{1}_2^T\right)$, as, for example, the asymptotic
behaviour of the MLEs of the direction of arrival (DoA) of two sources of equal power impinging on a uniform linear array (ULA) with DoAs symmetric relative to boresight, or of two cisoids (tones) of equal power with opposite frequencies, whether the observation model is conditional or unconditional [5]. Indeed, in these two cases $\mathrm{CRB}_{\boldsymbol{\Theta}}(\boldsymbol{\theta}) = \sigma^2\left((1-\rho)\mathbf{I}_2 + \rho\mathbf{1}_2\mathbf{1}_2^T\right)$ [4,5,7] and, a priori, a violation of the CRB not attributable to the variance of the empirical MSE is expected in some scenarios, unless the ad hoc CRB (9) is computed. However, a closer look at Fig. 1 reveals that $\tau = d\mu/\sigma_{\widehat{d\theta}}$ must be smaller than $10^{15/20} \simeq 6$ in order to exhibit a measurable MSE shrinkage effect. As the normality of the estimates and their convergence to the CRB are obtained only in asymptotic conditions where $\sigma_{\widehat{d\theta}} \ll 1$, this means that $d\theta = d\mu \ll 1$ as well, that is, the scenario considered is a (very) high resolution scenario. To highlight the possible impact of estimates ordering on the MSE in (very) high resolution scenarios, we consider the estimation of two tones ($M = 2$) of equal power with opposite frequencies, where the observation model (1) is deterministic: $\mathbf{a}(\theta) = \left(1, e^{j2\pi\theta}, \ldots, e^{j2\pi(N-1)\theta}\right)^T$, $N = 8$, $T = 2$, $d\theta = 1/(12N)$, $\mathbf{C}_{\mathbf{n}_t} = \mathbf{I}_N$, $\mathbf{C}_{\mathbf{s}_t} = \frac{\mathrm{SNR}}{N}\left(1 + \frac{1}{8}\right)^{-1}\left(\mathbf{I}_2 + \frac{1}{8}\mathbf{1}_2\mathbf{1}_2^T\right)$, where SNR is the signal-to-noise ratio measured at the output of the frequency matched filter.
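For reference, here is a minimal sketch of the kind of estimator used in this experiment: the deterministic (conditional) ML criterion concentrated with respect to the amplitudes, $\max_{\theta_1 < \theta_2} \mathrm{tr}\left(\mathbf{P}_{\mathbf{A}(\boldsymbol{\theta})}\,\mathbf{X}\mathbf{X}^H\right)$, maximized by a 2-D grid search. This is our illustrative code, not the authors' implementation; the amplitude scaling, the local search grid and the random draws are arbitrary choices.

```python
import numpy as np
from itertools import combinations

N, T = 8, 2
dtheta = 1 / (12 * N)
theta_true = np.array([-dtheta / 2, dtheta / 2])    # two tones with opposite frequencies

def a(th):
    return np.exp(2j * np.pi * th * np.arange(N))   # a(theta) as in the text

def cml_two_tones(X, grid):
    """Concentrated deterministic-ML estimate of two ordered frequencies:
    maximize tr(P_A X X^H) over theta_1 < theta_2 on a grid."""
    XXh = X @ X.conj().T
    best, best_pair = -np.inf, None
    for t1, t2 in combinations(grid, 2):            # combinations enforce t1 < t2 (ordering)
        A = np.column_stack([a(t1), a(t2)])
        P = A @ np.linalg.pinv(A)                   # orthogonal projector onto span(A)
        crit = np.real(np.trace(P @ XXh))
        if crit > best:
            best, best_pair = crit, (t1, t2)
    return np.array(best_pair)

# one noisy trial at a given SNR (illustrative amplitude scaling and noise draws)
rng = np.random.default_rng(0)
snr_lin = 10 ** (50 / 10)
s = np.sqrt(snr_lin / (2 * N)) * (rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T)))
A0 = np.column_stack([a(th) for th in theta_true])
X = A0 @ s + (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)
grid = np.arange(-3 * dtheta, 3 * dtheta, dtheta / 64)   # coarse local grid to keep the example fast
print(cml_two_tones(X, grid), theta_true)
```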
Fig. 2. Average empirical and theoretical MSE (11) of the ordered MLEs of two closely spaced frequencies versus SNR.

Fig. 3. Average empirical and theoretical MSE shrinkage factor (12) of the ordered MLEs of two closely spaced frequencies versus SNR.
Fig. 2 displays the empirical and theoretical MSE of the ordered estimates (11), averaged over the two frequencies (equal by symmetry), together with the associated unbiased CRB. Fig. 3 displays the empirical and theoretical MSE shrinkage factor (12), averaged over the two frequencies as well. The empirical MSEs are assessed with $10^4$ Monte Carlo trials and a frequency search step $\delta\theta = 1/(12N \cdot 64)$. As expected, there is a perfect match between the theoretical and the empirical results in the asymptotic region (SNR $\ge$ 55 dB), where Gaussianity of the MLEs is reached, highlighting the non-negligible impact (about 0.6 dB) of estimates ordering in high resolution scenarios. As the SNR enters the threshold region, the MLEs start to explore the sidelobe peaks of the matched filter, producing outliers and gradually losing their Gaussian distribution [4]. Because of this phenomenon, in the threshold region the results derived in this section must be regarded as an approximation whose accuracy is somewhat difficult to quantify in general. Indeed, it would require recomputing the second-order statistical prediction of the ordered estimates taking into account the probability of outliers [4], which is far beyond the scope of this communication. We simply note, without claiming generality, that in our example the approximation is quite good for SNR $\ge$ 30 dB.
4. Conclusion
In this communication we have provided a second-order statistical prediction of ordered normally distributed estimates, which makes it possible to refine the asymptotic performance analysis of the MSE of MLEs of a subset $\boldsymbol{\theta}$ of the parameter set $\boldsymbol{\Theta}$. Analytical expressions of the MSEs of the whole parameter set $\boldsymbol{\Theta}$ after re-ordering of the subset $\boldsymbol{\theta}$ should be the next topic to be addressed.

Appendix

As the vector $\hat{\boldsymbol{\xi}}_{m,i} = \left((\hat{\boldsymbol{\theta}}_i)_m, \hat{\mathbf{u}}_i^T\right)^T$, $\hat{\mathbf{u}}_i = \boldsymbol{\Delta}(\hat{\boldsymbol{\theta}}_i - \boldsymbol{\mu}_i) \sim \mathcal{N}_{M-1}\left(\mathbf{0}, \boldsymbol{\Delta}\mathbf{C}_i\boldsymbol{\Delta}^T\right)$, is an $M$-dimensional normal vector resulting from a bijective affine transformation of $\hat{\boldsymbol{\theta}}_i$, we have [1]
$$p_{\mathcal{N}_M}\left(\hat{\boldsymbol{\xi}}_{m,i}; \boldsymbol{\mu}_{m,i}, \mathbf{C}_{m,i}\right) = p_{\mathcal{N}_1}\left((\hat{\boldsymbol{\theta}}_i)_m \,\big|\, \hat{\mathbf{u}}_i; \mu_{m|i}, \sigma^2_{m|i}\right)\, p_{\mathcal{N}_{M-1}}\left(\hat{\mathbf{u}}_i; \mathbf{0}, \boldsymbol{\Delta}\mathbf{C}_i\boldsymbol{\Delta}^T\right),$$
where $\mu_{m|i} = \mathbf{d}_m^T\boldsymbol{\mu}_i + \mathbf{d}_m^T\mathbf{C}_i\boldsymbol{\Delta}^T\left(\boldsymbol{\Delta}\mathbf{C}_i\boldsymbol{\Delta}^T\right)^{-1}\hat{\mathbf{u}}_i$ and $\sigma^2_{m|i} = \mathbf{d}_m^T\left(\mathbf{C}_i - \mathbf{C}_i\boldsymbol{\Delta}^T\left(\boldsymbol{\Delta}\mathbf{C}_i\boldsymbol{\Delta}^T\right)^{-1}\boldsymbol{\Delta}\mathbf{C}_i\right)\mathbf{d}_m$. Let $\mathcal{U}_i = \left\{\hat{\mathbf{u}}_i : \hat{\mathbf{u}}_i \ge -\boldsymbol{\Delta}\boldsymbol{\mu}_i\right\}$; then, whatever the real-valued function $f(\cdot)$,
$$E\left[f\left((\hat{\boldsymbol{\theta}}_i)_m\right) \,\big|\, \mathcal{S}_i\right] = E\left[f\left((\hat{\boldsymbol{\theta}}_i)_m\right) \,\big|\, \mathcal{U}_i\right] = E\left[E\left[f\left((\hat{\boldsymbol{\theta}}_i)_m\right) \,\big|\, \hat{\mathbf{u}}_i\right] \,\Big|\, \mathcal{U}_i\right].$$
In the particular cases where $f(\hat{\theta}_{(m)}) = \hat{\theta}_{(m)}^k$, $k \in \{1, 2\}$:
$$E\left[(\hat{\boldsymbol{\theta}}_i)_m \,\big|\, \hat{\mathbf{u}}_i\right] = \mu_{m|i} = \alpha_{m|i} + \boldsymbol{\beta}_{m|i}^T\hat{\mathbf{u}}_i,$$
$$E\left[(\hat{\boldsymbol{\theta}}_i)_m^2 \,\big|\, \hat{\mathbf{u}}_i\right] = \sigma^2_{m|i} + \mu^2_{m|i} = \sigma^2_{m|i} + \alpha^2_{m|i} + 2\alpha_{m|i}\boldsymbol{\beta}_{m|i}^T\hat{\mathbf{u}}_i + \boldsymbol{\beta}_{m|i}^T\hat{\mathbf{u}}_i\hat{\mathbf{u}}_i^T\boldsymbol{\beta}_{m|i},$$
where $\alpha_{m|i} = \mathbf{d}_m^T\boldsymbol{\mu}_i$ and $\boldsymbol{\beta}_{m|i} = \left(\boldsymbol{\Delta}\mathbf{C}_i\boldsymbol{\Delta}^T\right)^{-1}\boldsymbol{\Delta}\mathbf{C}_i\mathbf{d}_m$, and (2) can finally be expressed as
$$E\left[\hat{\theta}_{(m)}\right] = \sum_{i=1}^{M!}\left(\alpha_{m|i}P(\mathcal{U}_i) + \boldsymbol{\beta}_{m|i}^T E\left[\hat{\mathbf{u}}_i\,\mathbb{1}_{\mathcal{U}_i}\right]\right)$$
$$E\left[\hat{\theta}_{(m)}^2\right] = \sum_{i=1}^{M!}\left(\left(\sigma^2_{m|i} + \alpha^2_{m|i}\right)P(\mathcal{U}_i) + 2\alpha_{m|i}\boldsymbol{\beta}_{m|i}^T E\left[\hat{\mathbf{u}}_i\,\mathbb{1}_{\mathcal{U}_i}\right] + \boldsymbol{\beta}_{m|i}^T E\left[\hat{\mathbf{u}}_i\hat{\mathbf{u}}_i^T\,\mathbb{1}_{\mathcal{U}_i}\right]\boldsymbol{\beta}_{m|i}\right) \tag{13}$$
Finally, $E\left[\hat{\theta}_{(m+l)}\hat{\theta}_{(m)}\right]$ is given by (6), where $E\left[\left(\hat{\theta}_{(m+l)} - \hat{\theta}_{(m)}\right)^2\right]$ is derived as follows.
From (2), $\forall m \in [1, M]$, $\forall l \,|\, m+l \in [1, M]$:
$$E\left[\left(\hat{\theta}_{(m+l)} - \hat{\theta}_{(m)}\right)^2\right] = \sum_{i=1}^{M!} E\left[\left((\hat{\boldsymbol{\theta}}_i)_{m+l} - (\hat{\boldsymbol{\theta}}_i)_m\right)^2 \,\Big|\, \mathcal{S}_i\right] P(\mathcal{S}_i).$$
Additionally, $\forall l \ge 1 \,|\, m+l \in [1, M]$:
$$(\hat{\boldsymbol{\theta}}_i)_{m+l} - (\hat{\boldsymbol{\theta}}_i)_m = \left(\mathbf{1}_{M-1}^{m:m+l-1}\right)^T\boldsymbol{\Delta}\hat{\boldsymbol{\theta}}_i,$$
therefore
$$\left((\hat{\boldsymbol{\theta}}_i)_{m+l} - (\hat{\boldsymbol{\theta}}_i)_m\right)^2 = \left(\mathbf{1}_{M-1}^{m:m+l-1}\right)^T\boldsymbol{\Delta}\hat{\boldsymbol{\theta}}_i\left(\boldsymbol{\Delta}\hat{\boldsymbol{\theta}}_i\right)^T\mathbf{1}_{M-1}^{m:m+l-1}$$
and
$$E\left[\left(\hat{\theta}_{(m+l)} - \hat{\theta}_{(m)}\right)^2\right] = \sum_{i=1}^{M!}\left(\mathbf{1}_{M-1}^{m:m+l-1}\right)^T E\left[\boldsymbol{\Delta}\hat{\boldsymbol{\theta}}_i\left(\boldsymbol{\Delta}\hat{\boldsymbol{\theta}}_i\right)^T \,\Big|\, \mathcal{S}_i\right]\mathbf{1}_{M-1}^{m:m+l-1}\, P(\mathcal{S}_i). \tag{14}$$
Last, as
$$E\left[\boldsymbol{\Delta}\hat{\boldsymbol{\theta}}_i\left(\boldsymbol{\Delta}\hat{\boldsymbol{\theta}}_i\right)^T \,\Big|\, \mathcal{S}_i\right] = E\left[\hat{\mathbf{u}}_i\hat{\mathbf{u}}_i^T \,\big|\, \mathcal{U}_i\right] + E\left[\hat{\mathbf{u}}_i \,\big|\, \mathcal{U}_i\right]\boldsymbol{\Delta}\boldsymbol{\mu}_i^T + \boldsymbol{\Delta}\boldsymbol{\mu}_i E\left[\hat{\mathbf{u}}_i \,\big|\, \mathcal{U}_i\right]^T + \boldsymbol{\Delta}\boldsymbol{\mu}_i\boldsymbol{\Delta}\boldsymbol{\mu}_i^T,$$
an equivalent expression of (14) is (7).

References

[1] Y.L. Tong, The Multivariate Normal Distribution, Springer, New York, 1990.
[2] H.A. David, H.N. Nagaraja, Order Statistics, 3rd ed., Wiley, New York, 2003.
[3] R.B. Arellano-Valle, M.G. Genton, On the exact distribution of linear combinations of order statistics from dependent random variables, J. Multivar. Anal. 98 (2007) 1876-1894.
[4] H.L. van Trees, Optimum Array Processing, Wiley-Interscience, New York, 2002.
[5] P. Stoica, A. Nehorai, Performance study of conditional and unconditional direction-of-arrival estimation, IEEE Trans. Signal Process. 38 (10) (1990) 1783-1795.
[6] M. Viberg, B. Ottersten, Sensor array processing based on subspace fitting, IEEE Trans. Signal Process. 39 (5) (1991) 1110-1121.
[7] B. Ottersten, M. Viberg, P. Stoica, A. Nehorai, Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing, in: S. Haykin, J. Litva, T.J. Shepherd (Eds.), Radar Array Processing, Springer-Verlag, Berlin, Germany, 1993, pp. 99-151 (Chapter 4).
[8] J. Li, R.T. Compton, Maximum likelihood angle estimation for signals with known waveforms, IEEE Trans. Signal Process. 41 (9) (1993) 2850-2862.
[9] A. Renaux, P. Forster, E. Chaumette, P. Larzabal, On the high-SNR CML estimator full statistical characterization, IEEE Trans. Signal Process. 54 (12) (2006) 4840-4843.
[10] X. Mestre, P. Vallet, P. Loubaton, On the resolution probability of conditional and unconditional maximum likelihood DoA estimation, in: Proceedings of EUSIPCO, 2013.
[11] M.P. Clark, On the resolvability of normally distributed vector parameter estimates, IEEE Trans. Signal Process. 43 (12) (1995) 2975-2981.
[12] C. Ren, M.N. El Korso, J. Galy, E. Chaumette, P. Larzabal, A. Renaux, On the accuracy and resolvability of vector parameter estimates, IEEE Trans. Signal Process. 62 (14) (2014) 3682-3694.
[13] I. Olkin, M.A.G. Viana, Correlation analysis of extreme observations from a multivariate normal distribution, J. Am. Stat. Assoc. 90 (1995) 1373-1379.
[14] N. Loperfido, A note on skew-elliptical distributions and linear functions of order statistics, Stat. Probab. Lett. 78 (2009) 3184-3186.
[15] J.R.M. Hosking, L-moments: analysis and estimation of distributions using linear combinations of order statistics, J. R. Stat. Soc. 52 (1) (1990) 105-124.
[16] H.M. Barakat, Multivariate order statistics based on dependent and nonidentically distributed random variables, J. Multivar. Anal. 100 (2009) 81-90.
[17] A. Genz, Numerical computation of multivariate normal probabilities, J. Comput. Graph. Stat. 1 (1992) 141-149.