On the equivalence between likelihood ratio tests and counting rules in distributed detection with correlated sensors


Signal Processing 87 (2007) 1808–1815 www.elsevier.com/locate/sigpro

Fast communication

Luis Vergara

Departamento de Comunicaciones, Universidad Politécnica de Valencia, 46022 Valencia, Spain

Received 5 June 2006; received in revised form 13 December 2006; accepted 28 January 2007. Available online 11 February 2007. doi:10.1016/j.sigpro.2007.01.023

Abstract

We consider in this paper the conditions for the equivalence of the counting rule and the likelihood ratio test when implementing the fusion of decisions from identically operated, correlated sensors. A simple correlation model, defined by two correlation indices, one for each hypothesis, is considered. A main conclusion is that, for properly operating sensors, the counting rule is almost a UMP test in the correlation indices.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Distributed detection; Correlation; Counting rules; UMP test

1. Introduction

Distributed detection [1,2] is a class of automatic detection that arises in systems having multiple spatially distributed sensors. Every sensor must give a local decision about the observed phenomenon, which is transmitted to a fusion centre where a global decision is made. Among other advantages, distributed detection saves communication resources, since only the local decisions need to be transmitted. The case of independent local decisions has been extensively considered in the literature (see [3] as a representative example among many others). Many efforts have been made to include statistical dependence among the local decisions in the fusion model [4–11]. In general, the proposed solutions lead to complicated iterative algorithms where detailed knowledge of the dependence model is required. Of special interest is the work [5], where a simple model of correlated decisions is considered. The correlation model is described by two parameters ρ0 and ρ1, which are properly defined correlation indices between every two local decisions under hypotheses H0 and H1, respectively. In [5], the authors obtain the expression for the likelihood ratio to be used at the fusion centre, assuming knowledge of the probability of detection (PD) and probability of false alarm (PFA) at every local sensor (identical for all of them). Prior knowledge of the values ρ0 and ρ1 is also required. On the other hand, it is shown in [5] that the likelihood ratio at the fusion centre may be expressed as a function of the number m of local detectors that decide in favour of H0, i.e., m is a sufficient statistic for this problem.


So, one can think about the possibility of using a counting rule, i.e., a rule which simply counts m and decides H1 when m is smaller than a given (threshold) number. Needless to say, the counting rule is much simpler to implement than the likelihood ratio test, especially because it requires no prior knowledge of the correlation model. In this paper we analyse under what conditions the two alternatives are equivalent. This essentially amounts to analysing when the likelihood ratio is a decreasing function of m. Optimality of counting rules for the uncorrelated sensor case has been considered in [12], including the effect of the local distributions. The latter is not a matter of this work, since we assume, as in [5], that the local tests are already designed and all of them operate with identical PFA and identical PD. Our problem is the fusion of the (correlated) local decisions, without concern about the way they were obtained at every sensor.

2. Analysis

Let us define $\mathbf{u} = [u_1 \;\ldots\; u_L]^T$, the vector formed by the set of local decisions ($u_i = 0$ when sensor $i$ is in favour of H0, and $u_i = 1$ when sensor $i$ is in favour of H1), corresponding to L local detectors. We consider that m out of the L detectors are in favour of H0 (i.e., there are m 0's in vector u). It is shown in [5] that the likelihood ratio is given by

\[
L(\mathbf{u}) = \frac{P(\mathbf{u}/H_1)}{P(\mathbf{u}/H_0)} = L(m)
= \frac{\displaystyle\sum_{i=0}^{m} (-1)^i \binom{m}{i}\, \mathrm{PD} \prod_{k=0}^{L-m+i-2} \frac{\rho_1 (k + 1 - \mathrm{PD}) + \mathrm{PD}}{1 + k\rho_1}}
       {\displaystyle\sum_{i=0}^{m} (-1)^i \binom{m}{i}\, \mathrm{PFA} \prod_{k=0}^{L-m+i-2} \frac{\rho_0 (k + 1 - \mathrm{PFA}) + \mathrm{PFA}}{1 + k\rho_0}},
\qquad 0 \le m \le L - 2, \tag{1}
\]

where the correlation indices $0 \le \rho_0 \le 1$ and $0 \le \rho_1 \le 1$ are defined by

\[
\rho_n = \frac{E[u_i u_j / H_n] - E[u_i / H_n]\, E[u_j / H_n]}
             {\sqrt{E[(u_i - E[u_i])^2 / H_n]\; E[(u_j - E[u_j])^2 / H_n]}},
\qquad \forall i, j,\; i \ne j,\quad n = 0, 1. \tag{2}
\]

The cases m = L − 1 and m = L are not explicitly considered in [5], but can be readily obtained:

\[
L(\mathbf{u}) = \frac{P(\mathbf{u}/H_1)}{P(\mathbf{u}/H_0)} = L(m) =
\begin{cases}
\dfrac{\mathrm{PD} + \displaystyle\sum_{i=1}^{L-1} (-1)^i \binom{L-1}{i}\, \mathrm{PD} \prod_{k=0}^{i-1} \frac{\rho_1 (k + 1 - \mathrm{PD}) + \mathrm{PD}}{1 + k\rho_1}}
      {\mathrm{PFA} + \displaystyle\sum_{i=1}^{L-1} (-1)^i \binom{L-1}{i}\, \mathrm{PFA} \prod_{k=0}^{i-1} \frac{\rho_0 (k + 1 - \mathrm{PFA}) + \mathrm{PFA}}{1 + k\rho_0}}, & m = L - 1,\\[3ex]
\dfrac{1 - L\,\mathrm{PD} + \displaystyle\sum_{i=2}^{L} (-1)^i \binom{L}{i}\, \mathrm{PD} \prod_{k=0}^{i-2} \frac{\rho_1 (k + 1 - \mathrm{PD}) + \mathrm{PD}}{1 + k\rho_1}}
      {1 - L\,\mathrm{PFA} + \displaystyle\sum_{i=2}^{L} (-1)^i \binom{L}{i}\, \mathrm{PFA} \prod_{k=0}^{i-2} \frac{\rho_0 (k + 1 - \mathrm{PFA}) + \mathrm{PFA}}{1 + k\rho_0}}, & m = L.
\end{cases} \tag{3}
\]
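As an illustration, the following Python sketch evaluates L(m) by transcribing Eqs. (1) and (3) as reconstructed above; the parameter values in the example call are assumptions chosen only for illustration (they correspond to the Fig. 2a setting discussed later in Section 3).

```python
from math import comb, prod

def likelihood_ratio(m, L, PD, PFA, rho1, rho0):
    """L(m) of Eqs. (1) and (3): m is the number of local detectors deciding H0."""
    def term(p, rho, k_max):
        # p * prod_{k=0}^{k_max} (rho*(k+1-p)+p)/(1+k*rho); an empty product (k_max < 0) is 1
        return p * prod((rho * (k + 1 - p) + p) / (1 + k * rho) for k in range(k_max + 1))

    if m <= L - 2:  # Eq. (1)
        num = sum((-1) ** i * comb(m, i) * term(PD, rho1, L - m + i - 2) for i in range(m + 1))
        den = sum((-1) ** i * comb(m, i) * term(PFA, rho0, L - m + i - 2) for i in range(m + 1))
    elif m == L - 1:  # Eq. (3), first case
        num = PD + sum((-1) ** i * comb(L - 1, i) * term(PD, rho1, i - 1) for i in range(1, L))
        den = PFA + sum((-1) ** i * comb(L - 1, i) * term(PFA, rho0, i - 1) for i in range(1, L))
    else:  # Eq. (3), second case: m == L
        num = 1 - L * PD + sum((-1) ** i * comb(L, i) * term(PD, rho1, i - 2) for i in range(2, L + 1))
        den = 1 - L * PFA + sum((-1) ** i * comb(L, i) * term(PFA, rho0, i - 2) for i in range(2, L + 1))
    return num / den

# Assumed example: L = 20, PD = 0.9, PFA = 0.1, rho1 = 0.8, rho0 = 0.6 (the Fig. 2a setting)
print([round(likelihood_ratio(m, 20, 0.9, 0.1, 0.8, 0.6), 4) for m in range(21)])
```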

The optimum rule (the one maximizing the global PD, GPD, for a given global probability of false alarm, GPFA) is obtained by means of the likelihood ratio test

\[
\mathrm{rule}_{\mathrm{opt}}(\mathbf{u}) = \mathrm{rule}_{\mathrm{opt}}(m) =
\begin{cases}
H_1 & \text{if } L(m) > \lambda,\\
H_1 \text{ with probability } \gamma & \text{if } L(m) = \lambda,\\
H_0 & \text{if } L(m) < \lambda,
\end{cases} \tag{4}
\]

where $\lambda > 0$ and $\gamma > 0$ fit the GPFA. On the other hand, the counting rule is defined by

\[
\mathrm{rule}_{\mathrm{count}}(\mathbf{u}) = \mathrm{rule}_{\mathrm{count}}(m) =
\begin{cases}
H_1 & \text{if } m < m_0,\\
H_1 \text{ with probability } \gamma & \text{if } m = m_0,\\
H_0 & \text{if } m > m_0.
\end{cases} \tag{5}
\]
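For concreteness, a minimal sketch of the two fusion rules (4) and (5) follows; the function names, the toy likelihood ratio L_toy and the threshold values used in the example are placeholders introduced here for illustration, not quantities from the paper.

```python
import random

def rule_opt(m, L_of_m, lam, gamma):
    """Likelihood ratio test of Eq. (4), applied to the sufficient statistic m."""
    Lm = L_of_m(m)
    if Lm > lam:
        return "H1"
    if Lm < lam:
        return "H0"
    return "H1" if random.random() < gamma else "H0"  # randomisation on the boundary

def rule_count(m, m0, gamma):
    """Counting rule of Eq. (5): uses only the number m of local H0 decisions."""
    if m < m0:
        return "H1"
    if m > m0:
        return "H0"
    return "H1" if random.random() < gamma else "H0"

# Toy usage: with a decreasing L(m) the two rules agree for a matched pair (lam, m0)
L_toy = lambda m: 100.0 * 0.5 ** m      # placeholder, not the L(m) of Eq. (1)
print(rule_opt(3, L_toy, lam=1.0, gamma=0.5), rule_count(3, m0=7, gamma=0.5))
```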

We are interested in deducing when (4) and (5) are equivalent. This happens if and only if L(m) is a decreasing function of m, so that $L(m) \gtrless \lambda \Leftrightarrow m \lessgtr m_0(\lambda)$. Let us express (1) in the form

\[
L(m) = \frac{N(m)}{D(m)}
= \frac{\displaystyle\sum_{i=0}^{m} (-1)^i \binom{m}{i}\, \mathrm{PD}\, f_1(i, m)}
       {\displaystyle\sum_{i=0}^{m} (-1)^i \binom{m}{i}\, \mathrm{PFA}\, f_0(i, m)},
\qquad 0 \le m \le L - 2, \tag{6}
\]


where we have defined

\[
f_1(i, m) = \prod_{k=0}^{L-m+i-2} \frac{\rho_1 (k + 1 - \mathrm{PD}) + \mathrm{PD}}{1 + k\rho_1},
\qquad
f_0(i, m) = \prod_{k=0}^{L-m+i-2} \frac{\rho_0 (k + 1 - \mathrm{PFA}) + \mathrm{PFA}}{1 + k\rho_0}. \tag{7}
\]

Let us try to find some relation between the numerators N(m+1) and N(m). We start by separating the first and last terms of the summation corresponding to N(m+1):

\[
N(m+1) = \sum_{i=0}^{m+1} (-1)^i \binom{m+1}{i} \mathrm{PD}\, f_1(i, m+1)
= (-1)^0 \binom{m+1}{0} \mathrm{PD}\, f_1(0, m+1)
+ \sum_{i=1}^{m} (-1)^i \binom{m+1}{i} \mathrm{PD}\, f_1(i, m+1)
+ (-1)^{m+1} \binom{m+1}{m+1} \mathrm{PD}\, f_1(m+1, m+1). \tag{8}
\]

But

\[
\sum_{i=1}^{m} (-1)^i \binom{m+1}{i} \mathrm{PD}\, f_1(i, m+1)
= \sum_{i=1}^{m} (-1)^i \left[ \binom{m}{i} + \binom{m}{i-1} \right] \mathrm{PD}\, f_1(i, m+1). \tag{9}
\]

Introducing (9) into (8) and using the identity $\binom{m+1}{0} = \binom{m}{0}$, we may write

\[
N(m+1) = \sum_{i=0}^{m} (-1)^i \binom{m}{i} \mathrm{PD}\, f_1(i, m+1)
+ \sum_{i=1}^{m} (-1)^i \binom{m}{i-1} \mathrm{PD}\, f_1(i, m+1)
+ (-1)^{m+1} \binom{m+1}{m+1} \mathrm{PD}\, f_1(m+1, m+1). \tag{10}
\]

Now we define a new index $i' = i - 1$ to perform the second summation in (10):

\[
\sum_{i=1}^{m} (-1)^i \binom{m}{i-1} \mathrm{PD}\, f_1(i, m+1)
= \sum_{i'=0}^{m-1} (-1)^{i'+1} \binom{m}{i'} \mathrm{PD}\, f_1(i'+1, m+1). \tag{11}
\]

Introducing (11) into (10) and using the identity $\binom{m+1}{m+1} = \binom{m}{m}$, we may write

\[
N(m+1) = \sum_{i=0}^{m} (-1)^i \binom{m}{i} \mathrm{PD}\, f_1(i, m+1)
- \sum_{i=0}^{m} (-1)^i \binom{m}{i} \mathrm{PD}\, f_1(i+1, m+1). \tag{12}
\]

It is obvious from (7) that $f_1(i+1, m+1) = f_1(i, m)$, since this function depends only on the difference $m - i$. On the other hand, we have from (7) that

\[
f_1(i, m+1) = f_1(i, m)\, g_1(i, m), \qquad
g_1(i, m) = \frac{1 + (L - m + i - 2)\rho_1}{\rho_1(L - m + i - 1 - \mathrm{PD}) + \mathrm{PD}}. \tag{13}
\]

Then

\[
N(m+1) = \sum_{i=0}^{m} (-1)^i \binom{m}{i} \mathrm{PD}\, f_1(i, m)\left[ g_1(i, m) - 1 \right]
= \sum_{i=0}^{m} (-1)^i \binom{m}{i} \mathrm{PD}\, f_1(i, m)\, h_1(i, m), \tag{14}
\]

where

\[
h_1(i, m) = g_1(i, m) - 1 = \frac{(1 - \rho_1)(1 - \mathrm{PD})}{\rho_1(L - m + i - 1 - \mathrm{PD}) + \mathrm{PD}}. \tag{15}
\]

A similar analysis can be made with the denominator D(m) by simply replacing PD by PFA and ρ1 by ρ0 in all the foregoing equations, and defining

\[
h_0(i, m) = \frac{(1 - \rho_0)(1 - \mathrm{PFA})}{\rho_0(L - m + i - 1 - \mathrm{PFA}) + \mathrm{PFA}}. \tag{16}
\]
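The key identities used in this derivation, namely $f_1(i+1, m+1) = f_1(i, m)$, Eq. (13) and Eq. (15), can be checked numerically. The following sketch does so for one assumed parameter set (L, PD and ρ1 are chosen arbitrarily, only for illustration).

```python
from math import prod, isclose

L, PD, rho1 = 12, 0.9, 0.35  # assumed values, for illustration only

def f1(i, m):
    # Eq. (7): product over k = 0 .. L-m+i-2 (an empty product is 1)
    return prod((rho1 * (k + 1 - PD) + PD) / (1 + k * rho1) for k in range(L - m + i - 1))

def g1(i, m):
    # Eq. (13)
    return (1 + (L - m + i - 2) * rho1) / (rho1 * (L - m + i - 1 - PD) + PD)

def h1(i, m):
    # Eq. (15): h1 = g1 - 1
    return (1 - rho1) * (1 - PD) / (rho1 * (L - m + i - 1 - PD) + PD)

for m in range(L - 2):          # m = 0 .. L-3, so that m+1 stays within Eq. (7)'s range
    for i in range(m + 1):
        assert isclose(f1(i + 1, m + 1), f1(i, m))         # shift property used after Eq. (12)
        assert isclose(f1(i, m + 1), f1(i, m) * g1(i, m))  # Eq. (13)
        assert isclose(h1(i, m), g1(i, m) - 1)             # Eq. (15)
print("identities behind Eqs. (12)-(15) hold numerically")
```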


Hence, finally we may write

\[
L(m+1) = \frac{N(m+1)}{D(m+1)}
= \frac{\displaystyle\sum_{i=0}^{m} (-1)^i \binom{m}{i}\, \mathrm{PD}\, f_1(i, m)\, h_1(i, m)}
       {\displaystyle\sum_{i=0}^{m} (-1)^i \binom{m}{i}\, \mathrm{PFA}\, f_0(i, m)\, h_0(i, m)},
\qquad 0 \le m \le L - 2. \tag{17}
\]

The relation between L(m+1) and L(m) would be straightforward in those cases where $h_n(i, m)$, n = 0, 1, could be independent of i. Let us consider separately the uncorrelated and the correlated sensors cases.
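A quick internal-consistency check of (17) is possible by comparing it with a direct evaluation of (1)/(6). The sketch below does this for an assumed parameter set, restricting m so that both sides stay within the stated range of Eq. (1).

```python
from math import comb, prod, isclose

L, PD, PFA, rho1, rho0 = 15, 0.9, 0.1, 0.4, 0.2  # assumed values, for illustration only

def f(i, m, p, rho):
    # Eq. (7)
    return prod((rho * (k + 1 - p) + p) / (1 + k * rho) for k in range(L - m + i - 1))

def h(i, m, p, rho):
    # Eqs. (15) and (16)
    return (1 - rho) * (1 - p) / (rho * (L - m + i - 1 - p) + p)

def lr_direct(m):
    # Eq. (1), in the N(m)/D(m) form of Eq. (6)
    num = sum((-1) ** i * comb(m, i) * PD * f(i, m, PD, rho1) for i in range(m + 1))
    den = sum((-1) ** i * comb(m, i) * PFA * f(i, m, PFA, rho0) for i in range(m + 1))
    return num / den

def lr_next(m):
    # Eq. (17): L(m+1) written with sums of order m
    num = sum((-1) ** i * comb(m, i) * PD * f(i, m, PD, rho1) * h(i, m, PD, rho1) for i in range(m + 1))
    den = sum((-1) ** i * comb(m, i) * PFA * f(i, m, PFA, rho0) * h(i, m, PFA, rho0) for i in range(m + 1))
    return num / den

for m in range(L - 2):  # m = 0 .. L-3, so that both m and m+1 lie in Eq. (1)'s range
    assert isclose(lr_next(m), lr_direct(m + 1), rel_tol=1e-6)
print("Eq. (17) agrees with the direct evaluation of Eq. (1)")
```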

2.1. Uncorrelated sensors

We assume ρ0 = 0 and ρ1 = 0; then $h_0(i, m) = (1 - \mathrm{PFA})/\mathrm{PFA}$ and $h_1(i, m) = (1 - \mathrm{PD})/\mathrm{PD}$, so

\[
L(m+1) = \frac{\mathrm{PFA}}{\mathrm{PD}} \cdot \frac{1 - \mathrm{PD}}{1 - \mathrm{PFA}}\, L(m), \qquad 0 \le m \le L - 2. \tag{18}
\]

Or equivalently,

\[
L(m) = \left( \frac{\mathrm{PFA}}{\mathrm{PD}} \right)^m \left( \frac{1 - \mathrm{PD}}{1 - \mathrm{PFA}} \right)^m L(0), \qquad 1 \le m \le L - 1. \tag{19}
\]

We may extend the recurrence (19) to m = L by using both (1) and (3) for the uncorrelated sensors case. From (3) we obtain $L(L) = ((1 - \mathrm{PD})/(1 - \mathrm{PFA}))^L$, and from (1) $L(0) = (\mathrm{PD}/\mathrm{PFA})^L$, so we may write

\[
L(L) = \left( \frac{\mathrm{PFA}}{\mathrm{PD}} \right)^L \left( \frac{1 - \mathrm{PD}}{1 - \mathrm{PFA}} \right)^L L(0), \tag{20}
\]

and (19) holds for m = L too. We conclude from (19) that the likelihood ratio is a decreasing function of m if the local detectors are uncorrelated and satisfy PD > PFA: the counting rule is equivalent to the likelihood ratio test under these conditions. The result of Eq. (19) for the uncorrelated sensors could be derived directly, without the above mathematical analysis, but we consider it of interest to put this result into the general framework of Eq. (17) for a better comparison with the correlated cases.

2.2. Correlated sensors

We assume ρ0 ≠ 0 and/or ρ1 ≠ 0. Noting that 0 ≤ i ≤ m, the only way to make $h_n(i, m)$, n = 0, 1, independent of i is to assume m ≪ L, so that we can approximate

\[
h_0(i, m) \simeq \frac{(1 - \rho_0)(1 - \mathrm{PFA})}{\rho_0(L - 1 - \mathrm{PFA}) + \mathrm{PFA}}, \qquad
h_1(i, m) \simeq \frac{(1 - \rho_1)(1 - \mathrm{PD})}{\rho_1(L - 1 - \mathrm{PD}) + \mathrm{PD}}.
\]

Then, after some simple manipulations, we obtain

\[
L(m+1) = \frac{\mathrm{PFA} + \bigl(\rho_0/(1 - \rho_0)\bigr)(L - 1)}{\mathrm{PD} + \bigl(\rho_1/(1 - \rho_1)\bigr)(L - 1)} \cdot \frac{1 - \mathrm{PD}}{1 - \mathrm{PFA}}\, L(m), \qquad 0 \le m \ll L. \tag{21}
\]

Or equivalently,

\[
L(m) = \left[ \frac{\mathrm{PFA} + \bigl(\rho_0/(1 - \rho_0)\bigr)(L - 1)}{\mathrm{PD} + \bigl(\rho_1/(1 - \rho_1)\bigr)(L - 1)} \right]^m \left( \frac{1 - \mathrm{PD}}{1 - \mathrm{PFA}} \right)^m L(0), \qquad 0 \le m \ll L. \tag{22}
\]

Comparing (22) with (19), we can clearly see the effect produced in the first factor by the presence of correlation: L(m) will be a decreasing function of m only if ρ0 and ρ1 satisfy the condition

\[
\frac{\mathrm{PFA} + \bigl(\rho_0/(1 - \rho_0)\bigr)(L - 1)}{\mathrm{PD} + \bigl(\rho_1/(1 - \rho_1)\bigr)(L - 1)} \cdot \frac{1 - \mathrm{PD}}{1 - \mathrm{PFA}} < 1, \qquad 0 \le m \ll L, \tag{23}
\]

which may be written in the form of an upper bound for ρ0:

\[
\rho_0 < \frac{1}{(1/A) + 1}, \qquad
A = \frac{1}{L - 1}\left( \frac{1 - \mathrm{PFA}}{1 - \mathrm{PD}}\,\mathrm{PD} - \mathrm{PFA} \right) + \frac{1 - \mathrm{PFA}}{1 - \mathrm{PD}} \cdot \frac{\rho_1}{1 - \rho_1}. \tag{24}
\]
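The upper bound (24) is straightforward to evaluate; a small sketch follows. The function name and the parameter values in the example loop are assumptions for illustration, matching the L = 20, PD = 0.9, PFA = 0.1 setting used later in Section 3.

```python
def rho0_upper_bound(L, PD, PFA, rho1):
    """Upper bound on rho0 from Eq. (24); derived for m << L."""
    A = ((1 - PFA) / (1 - PD) * PD - PFA) / (L - 1) + (1 - PFA) / (1 - PD) * rho1 / (1 - rho1)
    return A / (1 + A)  # identical to 1 / ((1/A) + 1)

# Assumed illustration: L = 20, PD = 0.9, PFA = 0.1 (the setting of Section 3)
for rho1 in (0.1, 0.5, 0.8):
    print(rho1, round(rho0_upper_bound(20, 0.9, 0.1, rho1), 3))
```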

Note that if PD > PFA and ρ0 ≤ ρ1, condition (24) is satisfied, but if ρ0 > ρ1 it could happen that ρ0 is greater than the upper bound indicated in (24). It is also possible to find a lower bound for ρ0 that guarantees that the likelihood ratio is a decreasing function of m for 0 ≪ m ≤ L. Let us interchange the roles of H1 and H0 in the decision problem. The new PD would be PD′ = Prob{H0/H0} = 1 − PFA and the new PFA would be PFA′ = Prob{H0/H1} = 1 − PD. We also have to interchange the roles of the 1's and the 0's of the observation vector u.


The likelihood ratio corresponding to this modified detection problem may be obtained from (1) and (3) by simply using the variable l = L − m, the number of sensors in favour of H1 (i.e., the number of 1's in vector u), instead of m, and changing PD to 1 − PFA and PFA to 1 − PD, that is,

\[
L'(\mathbf{u}) = \frac{P(\mathbf{u}/H_0)}{P(\mathbf{u}/H_1)} = L(l)\big|_{\mathrm{PD} \to 1-\mathrm{PFA},\ \mathrm{PFA} \to 1-\mathrm{PD}}. \tag{25}
\]

But

\[
L(\mathbf{u}) = \frac{P(\mathbf{u}/H_1)}{P(\mathbf{u}/H_0)} = \frac{1}{L'(\mathbf{u})}
= \Bigl[\, L(l)\big|_{\mathrm{PD} \to 1-\mathrm{PFA},\ \mathrm{PFA} \to 1-\mathrm{PD}} \Bigr]^{-1}. \tag{26}
\]

In conclusion, all the previous analysis could have been made using the variable l instead of m and the detection and false-alarm probabilities of the modified problem (note that interchanging the hypotheses also interchanges the roles played by the correlation indices ρ0 and ρ1). In particular, we would have arrived at an upper bound (dual to (24)), now on ρ1, which guarantees that the likelihood ratio of the modified problem is a decreasing function of l for 0 ≤ l ≪ L, namely

\[
\frac{\rho_1}{1 - \rho_1} < \frac{1}{L - 1}\left( \frac{\mathrm{PD}}{\mathrm{PFA}}(1 - \mathrm{PFA}) - (1 - \mathrm{PD}) \right) + \frac{\mathrm{PD}}{\mathrm{PFA}} \cdot \frac{\rho_0}{1 - \rho_0}, \qquad 0 \le l \ll L. \tag{27}
\]

Since a decreasing function of l is an increasing function of m, solving (27) for ρ0 yields a lower bound for ρ0 that makes L(m) a decreasing function of m in the vicinity of m = L:

\[
\rho_0 > \frac{1}{(1/A') + 1}, \qquad
A' = \frac{1}{L - 1}\left( \frac{\mathrm{PFA}}{\mathrm{PD}}(1 - \mathrm{PD}) - (1 - \mathrm{PFA}) \right) + \frac{\mathrm{PFA}}{\mathrm{PD}} \cdot \frac{\rho_1}{1 - \rho_1}, \qquad 0 \ll m \le L. \tag{28}
\]

Joining (24) and (28), and considering L large (this is somewhat implicit in the restrictions 0 ≤ m ≪ L and 0 ≪ m ≤ L), we may write

\[
\frac{\mathrm{PFA}\,\rho_1}{\mathrm{PD}(1 - \rho_1) + \mathrm{PFA}\,\rho_1}
\;<\; \rho_0 \;<\;
\frac{(1 - \mathrm{PFA})\,\rho_1}{(1 - \mathrm{PD})(1 - \rho_1) + (1 - \mathrm{PFA})\,\rho_1}. \tag{29}
\]
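The asymptotic bounds (29) can be coded directly. The sketch below tests whether a given pair (ρ0, ρ1) falls inside them; the example points are the (ρ0, ρ1) pairs discussed later in Section 3 (Fig. 2), and the function names are introduced here only for illustration.

```python
def rho0_bounds_asymptotic(PD, PFA, rho1):
    """Lower and upper bounds on rho0 from Eq. (29) (large-L limit of Eqs. (24) and (28))."""
    lower = PFA * rho1 / (PD * (1 - rho1) + PFA * rho1)
    upper = (1 - PFA) * rho1 / ((1 - PD) * (1 - rho1) + (1 - PFA) * rho1)
    return lower, upper

def counting_rule_equivalent(PD, PFA, rho0, rho1):
    """True if (rho0, rho1) satisfies Eq. (29), i.e. L(m) is predicted to be decreasing in m."""
    lo, up = rho0_bounds_asymptotic(PD, PFA, rho1)
    return lo < rho0 < up

# Example points (PD = 0.9, PFA = 0.1), as in the Fig. 2 cases discussed in Section 3
print(counting_rule_equivalent(0.9, 0.1, rho0=0.6, rho1=0.8))   # inside the bounds (Fig. 2a)
print(counting_rule_equivalent(0.9, 0.1, rho0=0.0, rho1=0.8))   # below the lower bound (Fig. 2b)
print(counting_rule_equivalent(0.9, 0.1, rho0=0.8, rho1=0.05))  # above the upper bound (Fig. 2d)
```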

Let us consider that PD > PFA, so that the upper bound is always greater than the lower bound. Eq. (29) indicates that there are three possible behaviours of L(m) at the beginning/end of the function: decreasing/decreasing (ρ0 is inside the interval limited by the bounds), decreasing/increasing (ρ0 is under the lower bound), and increasing/decreasing (ρ0 is above the upper bound). What happens in the middle? Let us make some heuristic reasoning. In principle, it seems that L(m) (the relative probability of H1) should decrease with the number of sensors in favour of H0. However, we have seen that, in the presence of correlation, this is not necessarily true. There are two "competing" effects defining the evolution of L(m). On the one hand, we have the effect of the "number of sensors in favour of a given hypothesis" and, on the other hand, the effect of the "number of sensors having the same decision". If, for example, there is a large correlation index under H1 and a small correlation index under H0, it could happen that L(m) starts decreasing (as we have seen, for PD > PFA this will certainly be true) but, after a large enough value of m, the effect of the "number of sensors having the same decision", which in this case favours H1, could become more important than the effect of the "number of sensors in favour of H0", and then L(m) would start to increase. Once L(m) has started to increase, it should keep increasing until the end (m = L), since the presence of more 0's in u will enhance even further the effect of "more sensors having the same decision". A change of tendency will appear only when the predominant effect changes, and would be due to m reaching a critical value, so we claim that L(m) may have at most one change of tendency. Thus, if L(m) starts decreasing and ends decreasing, there can be no change of tendency (at least two would be necessary) and L(m) must be decreasing for all m. In conclusion, (29) gives the conditions for the counting rule to be equivalent to the likelihood ratio test. Unfortunately, a theoretical demonstration of the foregoing heuristic conclusion is not possible, as we cannot find a recursion from (17). However, exhaustive computations were made varying the different parameters involved, namely L, PD, PFA, ρ1 and ρ0, and we never found more than one change of tendency in the likelihood ratio.

Fig. 1. Upper and lower bounds of ρ0 that guarantee a decreasing likelihood ratio. PFA = 0.1, PD = 0.9 (solid lines) and PFA = 0.01, PD = 0.99 (dotted lines).

3. Some examples

Let us gain some insight into (29) by means of some examples. We have represented in Fig. 1 the lower and upper bounds of (29) as a function of ρ1, for L = 20, PFA = 0.1, PD = 0.9 (solid lines) and PFA = 0.01, PD = 0.99 (dotted lines). Any point inside the area enclosed by the curves of the upper and lower bounds corresponds to a pair of values ρ0, ρ1 making the counting rule equivalent to the likelihood ratio test for every possible threshold λ. Note that as the individual sensors improve their performance (larger PD and smaller PFA), the counting rule becomes optimum for an increasing number of pairs ρ1, ρ0 (the area enclosed by the dotted lines is larger than the area enclosed by the solid lines). In principle, the most involved cases are those where one of the correlation indices is close to zero and the other one is close to 1 (upper-left and lower-right corners of Fig. 1).

Using Eqs. (1) and (3) we have computed L(m). In Fig. 2a we have represented the case ρ1 = 0.8, ρ0 = 0.6, which satisfies both bounds. We also indicate in Fig. 2a a possible threshold λ, and the corresponding threshold m0 of the counting rule. If ρ0 is changed to 0, the lower bound is not satisfied and L(m) increases in the vicinity of m = L (Fig. 2b). However, it is obvious that there is a large range of thresholds m0 for which the corresponding counting rule is equivalent to a likelihood ratio test for some λ > L(L). In Fig. 2c we consider the case ρ1 = 0.05, ρ0 = 0.2, which also satisfies both bounds. When ρ0 is changed to 0.8, the upper bound is exceeded and L(m) increases in the vicinity of m = 0 (Fig. 2d). Again, it is obvious that there is a large range of values m0 for which the corresponding counting rule finds an equivalent likelihood ratio test for some λ < L(0).

Fig. 2. Log-likelihood ratio for four different cases. L = 20, PFA = 0.1, PD = 0.9.

We can appreciate in the last examples that the bounds do not impose severe limitations on the equivalence between the counting rule and the likelihood ratio test. This is a consequence of the large PD and the small PFA selected. Even in those cases where the upper or lower bound is violated, there is a large range of thresholds m0 for which counting rules and likelihood ratio tests are equivalent. In conclusion, we can say that for properly operating sensors (PD ≫ PFA) the counting rule is very close to being a UMP test in the parameters ρ1, ρ0.

Additional insight may be gained by looking at Fig. 3. There we show the value of m where a possible change of slope is observed in the likelihood ratio L(m), as a function of ρ1, ρ0, for L = 20 and two different cases: PFA = 0.1, PD = 0.9 (Fig. 3a) and PFA = 0.2, PD = 0.8 (Fig. 3b). Both ρ1 and ρ0 have been varied from 0 to 0.9 in increments of 0.1 (i.e., a matrix of 100 elements). A zero value in Fig. 3a or b indicates that the likelihood ratio is a decreasing function of m (no change of slope at any m from m = 1 to m = 19) for the corresponding pair ρ1, ρ0, which makes the counting rule equivalent to the likelihood ratio test. That is, all zeroes should be inside the area enclosed by the curves of the upper and lower bounds, except for the possible effect of the asymptotic approximation of L in the derivation of Eq. (29). A nonzero value indicates that one (and only one, as predicted by the heuristic part of the analysis) change of slope has been verified in L(m) at m equal to the magnitude of the indicated number. To differentiate the case of the slope changing from positive to negative (as in Fig. 2d, at the beginning of L(m)) from the case of the slope changing from negative to positive (as in Fig. 2b, at the end of L(m)), a minus sign has been added to the corresponding m value in the first case. Superimposed are the corresponding upper and lower bounds. Note that all zeroes are inside the area enclosed by the upper and lower bounds, indicating that the asymptotic approximation is good for L = 20 (a relatively small value). As expected, violation of the upper bound generally implies that the likelihood ratio starts increasing, but it soon decreases (small numbers in Figs. 3a and b, upper-left corners) until m = L. Similarly, violation of the lower bound implies that the likelihood ratio decreases but, close to L (large numbers in Figs. 3a and b, lower-right corners), starts to increase until m = L. Also note that as we move from a point where the upper (lower) bound is not satisfied towards the upper (lower) bound curve, the magnitude of the point of change of tendency decreases (increases), i.e., it becomes closer to m = 0 (m = L). Finally, by comparing Fig. 3b with Fig. 3a, we see that the area where the counting rule is optimum is reduced, and that the magnitudes of the points of change of slope increase (upper corner) or decrease (lower corner); i.e., counting rules in the presence of correlation get progressively more involved as the individual sensors operate at a worse PD–PFA pair.

Fig. 3. Value of m where the change of slope is observed in the likelihood ratio L(m) as a function of ρ1, ρ0 for L = 20. (a) PFA = 0.1, PD = 0.9. (b) PFA = 0.2, PD = 0.8. See the main body of the text, Section 3, for further explanations.
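The exhaustive sweep behind Fig. 3 can be reproduced in outline as follows. This is a sketch, not the author's original code; it uses exact rational arithmetic to avoid cancellation problems in the alternating sums of (1) and (3), and the grid, L, PD and PFA follow the Fig. 3a setting.

```python
from fractions import Fraction as F
from math import comb

def lr(m, L, PD, PFA, rho1, rho0):
    """L(m) via Eq. (1) for 0 <= m <= L-2 and Eq. (3) for m = L-1, L (exact rational arithmetic)."""
    def pr(p, rho, kmax):
        out = F(1)
        for k in range(kmax + 1):
            out *= (rho * (k + 1 - p) + p) / (1 + k * rho)
        return out
    def s(p, rho, n, i_range, kmax):
        return sum((-1) ** i * comb(n, i) * p * pr(p, rho, kmax(i)) for i in i_range)
    if m <= L - 2:
        num = s(PD,  rho1, m, range(m + 1), lambda i: L - m + i - 2)
        den = s(PFA, rho0, m, range(m + 1), lambda i: L - m + i - 2)
    elif m == L - 1:
        num = PD  + s(PD,  rho1, L - 1, range(1, L), lambda i: i - 1)
        den = PFA + s(PFA, rho0, L - 1, range(1, L), lambda i: i - 1)
    else:
        num = 1 - L * PD  + s(PD,  rho1, L, range(2, L + 1), lambda i: i - 2)
        den = 1 - L * PFA + s(PFA, rho0, L, range(2, L + 1), lambda i: i - 2)
    return num / den

def n_slope_changes(L, PD, PFA, rho1, rho0):
    """Number of m at which L(m) switches between decreasing and increasing (cf. Fig. 3)."""
    v = [lr(m, L, PD, PFA, rho1, rho0) for m in range(L + 1)]
    signs = [1 if b > a else -1 for a, b in zip(v, v[1:])]
    return sum(1 for j in range(1, len(signs)) if signs[j] != signs[j - 1])

# Sweep rho1, rho0 over 0 .. 0.9 for L = 20, PD = 0.9, PFA = 0.1 (the Fig. 3a setting)
for r1 in range(10):
    print([n_slope_changes(20, F(9, 10), F(1, 10), F(r1, 10), F(r0, 10)) for r0 in range(10)])
```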


4. Conclusions

We have deduced the conditions for the equivalence of the counting rule and the likelihood ratio test when implementing the fusion rule of decisions corresponding to correlated sensors. We have assumed that the sensors are identically operated, and that the correlation model is described by two correlation indices. We have found lower and upper bounds for the correlation index under H0 which guarantee that the likelihood ratio is decreasing with the number of sensors m deciding in favour of H0. This has been demonstrated for large L and in the vicinities of both m = 0 and m = L. Heuristic reasoning extends the conditions to every value of m. A main conclusion is that for properly operating sensors (PD ≫ PFA) the counting rule is almost a UMP test in the correlation indices.

Acknowledgments

This work has been supported by the Spanish Administration, under Grant TEC2005-01820, by the European Community, FEDER programme, and by Generalitat Valenciana, under Grant GVEMP06/001. We especially acknowledge some corrections in the equations given by the anonymous reviewers.

References

[1] R. Viswanathan, P.K. Varshney, Distributed detection with multiple sensors: part I—fundamentals, Proc. IEEE 85 (1) (Jan. 1997) 54–63.
[2] R. Viswanathan, P.K. Varshney, Distributed detection with multiple sensors: part II—advanced topics, Proc. IEEE 85 (1) (Jan. 1997) 64–79.
[3] Z. Chair, P.K. Varshney, Optimal data fusion in multiple sensor detection systems, IEEE Trans. Aerospace Electron. Systems 22 (1) (Jan. 1986) 98–101.
[4] V. Aalo, R. Viswanathan, On distributed detection with correlated sensors: two examples, IEEE Trans. Aerospace Electron. Systems 25 (3) (May 1989) 414–421.
[5] E. Drakopoulos, C.C. Lee, Optimum multisensor fusion of correlated local decisions, IEEE Trans. Aerospace Electron. Systems 27 (4) (July 1991) 593–606.
[6] M. Kam, Q. Zhu, W.S. Gray, Optimal data fusion of correlated decisions in multiple sensor detection systems, IEEE Trans. Aerospace Electron. Systems 28 (3) (July 1992) 916–920.
[7] V. Aalo, R. Viswanathan, Asymptotic performance of a distributed detection system in correlated Gaussian noise, IEEE Trans. Signal Process. 40 (1) (Jan. 1992) 211–213.
[8] B. Chen, P.K. Varshney, A Bayesian sampling approach to decision fusion using hierarchical models, IEEE Trans. Signal Process. 50 (8) (Aug. 2002) 1809–1818.
[9] Q. Yan, R.S. Blum, On some unresolved issues in finding optimum distributed detection schemes, IEEE Trans. Signal Process. 48 (12) (Dec. 2000) 3280–3288.
[10] Q. Yan, R.S. Blum, Distributed signal detection under the Neyman–Pearson criterion, IEEE Trans. Inform. Theory 47 (4) (May 2000) 1368–1377.
[11] P. Willett, P.F. Swaszek, R.S. Blum, The good, bad, and ugly: distributed detection of a known signal in dependent Gaussian noise, IEEE Trans. Signal Process. 48 (12) (Dec. 2000) 3266–3279.
[12] R. Viswanathan, V. Aalo, On counting rules in distributed detection, IEEE Trans. Acoustics Speech Signal Process. 37 (5) (May 1989) 772–775.