M-ary sequential detection under conditions of a priori uncertainty


Signal Processing 4 (1982) 277-285 North-Holland Publishing Company


Leonid G. KAZOVSKY, member EURASIP
Dept. of Electrical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel

Received 22 July 1981; revised 29 November 1981

Abstract. Sequential detection under conditions of a priori uncertainty is investigated. A MAP sequential detector is developed and its performance is evaluated using the mean path approximation. The results obtained are verified via comparison with previously published computer simulation research. The comparison shows good agreement between theory and experiment. The sequential approach is shown to provide a greatly reduced error rate as compared with the nonsequential approach under the same signal/noise conditions.

Keywords. Sequential analysis, M-ary detection, radar.

1. Introduction

This paper was inspired by the following problem (Fig. 1). A certain object (denoted as the 'plant') is guarded by a radar (or sonar) system which includes an array of M antennas wired to a common central processor (CP). A target is known to approach the plant. However, the spatial position of the target, as defined by the distance D and the azimuth angle a, is unknown. The problem is to estimate the azimuth angle of the target(1) during as short a time as possible. In order to solve this problem, the CP sends N pulses through each of the M antennas(2) and processes the received echo signals. If nonsequential signal processing is employed, then the number of pulses, N, is an a priori chosen constant. If sequential signal processing is employed, then N is determined by the processor during processing of the received echo; generally, in this case, N is a random number. After processing of N echo signals, the processor decides which one of the M antennas actually receives the echo signal. The azimuth angle of this antenna is then accepted as the estimated target azimuth, and the process is terminated.

(1) For example, in order to launch a missile in this direction; see the description of the Hawk tracking illuminator [11, p. 80].
(2) Each pulse is emitted simultaneously through all the antennas.


Fig. 1. A model of a radar (or sonar) guarding system.

This paper investigates the sequential multiple-decision procedure which arises from the foregoing problem. However, the chosen model is more general (see Section 2) and includes more complicated situations as well. The results obtained are also believed to be applicable to related fields, for example, to M-ary feedback communication [1, 2] when the signals used are partly unknown.

The signal-processing applications of sequential analysis [3] have been discussed for a long time (see, e.g., [4-8]). In particular, the M-ary case is discussed in [1, 2] and [4, 5]. This paper differs from those previously published at least by the following features: (1) the signals to be detected are not assumed to be completely known; rather, an uncertainty in signal parameters is allowed to exist; (2) the performance of the developed sequential processor is analyzed analytically rather than empirically, as was done, e.g., in [2].

The rest of this paper consists of five sections. The model description and the problem statement are included in Section 2. The maximum a posteriori probability (MAP) sequential processor for detection of signals under the conditions stated in Section 2 is developed in Section 3. The performance of this processor is analyzed in Section 4. The results obtained are verified in Section 5 via comparison with previously published computer simulation results. Finally, Section 6 summarizes the results obtained.

2. Model description and problem statement

Consider the vector detection problem shown in Fig. 2. The vector signal S(t) is known to be equal to one of M possible signals {S_m(t)}, m = 1, ..., M, all of them being known functions of time t and of a random parameter vector V. Denote as H_m the hypothesis that S(t) = S_m(t) and define an observation vector R(t):

    R(t) = S(t) + W(t),    t > 0    (1)

where W(t) is a zero-mean Gaussian random vector process [9] whose covariance matrix K(t, u) is known.

Fig. 2. A vector detection problem.

R(t) is available to the processor, which processes it sequentially and decides, at a certain time moment T, what H_m was. The decision of the processor is denoted as \hat{H}_m. The operation of the processor may be described in terms of a stopping rule (which chooses T) and a decision rule (which chooses \hat{H}_m); both rules are discussed in Section 3. The performance of the processor is usually evaluated in terms of an error probability PE \triangleq P(\hat{H}_m \ne H_m) and an average observation time E(T); these parameters are evaluated in Section 4.

The discussion in the following sections is based on the model shown in Fig. 2 and described by (1). Note that this model includes the problem discussed in Section 1 (and illustrated in Fig. 1) as a special case when

    S_m(t) = s(t - \tau) [0 \; 0 \; \cdots \; 1 \; \cdots \; 0]^T    (2)

with the single 1 in the m-th position, where S_m(t) describes the expected output signal of the antenna array when the azimuth of the m-th antenna coincides with the azimuth angle of the target; s(.) is a known signal; \tau is an unknown delay time (= the random parameter vector V) which reflects the unknown distance between the target and the plant; and [.]^T denotes transposition.
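As an illustration of the model (1)-(2), the following minimal Python sketch (an assumption of this edit, not part of the original paper; the pulse shape, amplitude and noise level are arbitrary choices) generates the M-component observation R(t) on a discrete time grid: a delayed pulse appears in the m-th component only, and independent white Gaussian noise is added to every component.

    import numpy as np

    def simulate_observation(m, M=4, n_samples=400, tau=0.25, T=1.0,
                             pulse_width=0.1, A=1.0, N0=0.5, rng=None):
        """Generate R(t) = S_m(t) + W(t) for the array model of Section 2.

        S_m(t) carries the delayed pulse s(t - tau) in its m-th component only,
        cf. eq. (2); W(t) is white Gaussian noise with spectral density N0/2.
        """
        rng = np.random.default_rng(rng)
        t = np.linspace(0.0, T, n_samples, endpoint=False)
        dt = t[1] - t[0]
        # Hypothetical rectangular pulse s(t - tau) of duration pulse_width
        s = A * ((t >= tau) & (t < tau + pulse_width)).astype(float)
        S = np.zeros((M, n_samples))
        S[m] = s                          # eq. (2): pulse in the m-th row
        # Discrete-time white noise: variance (N0/2)/dt per sample and per component
        W = rng.normal(scale=np.sqrt(0.5 * N0 / dt), size=(M, n_samples))
        return t, S + W

    # Example: the target echo arrives at the antenna with index m = 2
    t, R = simulate_observation(m=2)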

3. The MAP sequential detector

3.1. The MAP sequential procedure: general case

Generally, the MAP sequential procedure implies computation of M a posteriori probabilities P(H_m | R), m = 1, 2, ..., M. The MAP sequential decision and stopping rules are then defined as follows [2, 4, 5]: continue observations until some P(H_m | R), say P(H_{m0} | R), reaches or exceeds a prescribed number 1 - \epsilon; then stop and accept \hat{H}_m = H_{m0}. For the model described in Section 2,

    P(H_m | R) = \int_{-\infty}^{\infty} P(H_m | R, V) P(V | R) \, dV    (3)

where P(H_m | R, V) is the a posteriori probability of H_m conditional on both R and V, and P(V | R) is the a posteriori probability density of the parameter vector V. Note that

    P(H_m | R, V) P(V | R) = [P(R | H_m, V) P(H_m, V) / P(R, V)] P(V | R) = P(R | H_m, V) P(H_m, V) / P(R).    (4)

But

    P(R) = \sum_{m=1}^{M} P(R | H_m) P(H_m)    (5)

where P(H_m) is the a priori probability of H_m and (see [9])

    P(R | H_m) = \int_{-\infty}^{\infty} P(R | V, H_m) P(V | H_m) \, dV.    (6)

Substituting (6) into (5), (5) into (4) and (4) into (3) yields

    P(H_m | R) = \gamma_m / \sum_{m=1}^{M} \gamma_m    (7)

where

    \gamma_m \triangleq \int_{-\infty}^{\infty} P(R | H_m, V) P(H_m, V) \, dV.    (8)

Expressions (7) and (8) represent a general solution for the problem of M-ary sequential detection under conditions of a priori uncertainty. Let us apply it to the Gaussian case.

3.2. The MAP sequential procedure: Gaussian case

For the following development, we need to expand the vector noise process W(t) in a Karhunen-Loeve series. At least two methods are known for such an expansion [9]. The first method implies the use of vector eigenfunctions and yields scalar series coefficients. The second method implies the use of scalar eigenfunctions and yields vector series coefficients. We apply here the first method. Therefore, we consider a Karhunen-Loeve expansion based on vector eigenfunctions [9, Section 3.7]:

    W(t) = l.i.m._{L \to \infty} \sum_{i=1}^{L} w_i \phi_i(t)    (9)

where

    w_i \triangleq \int_0^T \phi_i^T(t) W(t) \, dt = \int_0^T W^T(t) \phi_i(t) \, dt = \sum_{m=1}^{M} \int_0^T W_m(t) \phi_i^m(t) \, dt    (10)

and

    \phi_i(t) \triangleq [\phi_i^1(t) \; \cdots \; \phi_i^m(t) \; \cdots \; \phi_i^M(t)]^T.    (11)

{\phi_i(t)}, i = 1, 2, ..., are the eigenfunctions of the following integral equation [9, eq. 248]:

    \lambda_i \phi_i(t) = \int_0^T K(t, u) \phi_i(u) \, du,    0 \le t \le T.    (12)

Note that (12) is a vector equation since both its sides are vector variables. At the same time, the eigenvalues {\lambda_i} of (12) are scalars.


Recall that W(t) is assumed to be Gaussian. Hence {w_i} are mutually independent Gaussian random variables [9]. Let us expand R(t) using the same coordinate system:

    R(t) = l.i.m._{L \to \infty} \sum_{i=1}^{L} r_i \phi_i(t) = l.i.m._{L \to \infty} \sum_{i=1}^{L} s_i \phi_i(t) + l.i.m._{L \to \infty} \sum_{i=1}^{L} w_i \phi_i(t),    0 \le t \le T    (13)

where

    s_i \triangleq \int_0^T S^T(t) \phi_i(t) \, dt    (14)

and

    r_i \triangleq \int_0^T R^T(t) \phi_i(t) \, dt.    (15)

Let us approximate R(t) using a finite L and compute an approximate value of P(R | H_m, V) based on this approximation:

    P_L(R | H_m, V) = P_L(W = R - S_m | V) = \prod_{i=1}^{L} P(w_i = r_i - s_{im} | V)    (16)

where

    s_{im} \triangleq \int_0^T S_m^T(t) \phi_i(t) \, dt.    (17)

Recall that [9]

    P(w_i) = (2 \pi \lambda_i)^{-1/2} \exp[-w_i^2 / 2 \lambda_i].    (18)

Substituting (18) into (16), (16) into (8) and (8) into (7) yields an approximate expression for the a posteriori probability of H_m:

    P_L(H_m | R) = \beta_m / \sum_{m=1}^{M} \beta_m    (19)

where

    \beta_m \triangleq \int_{-\infty}^{\infty} P(H_m, V) \exp[- \sum_{i=1}^{L} (r_i - s_{im})^2 / 2 \lambda_i] \, dV.    (20)

Cancelling common terms and letting L \to \infty, we obtain from (19) and (20)

    P(H_m | R) = l.i.m._{L \to \infty} P_L(H_m | R) = \alpha_m / \sum_{m=1}^{M} \alpha_m    (21)

where

    \alpha_m \triangleq \int_{-\infty}^{\infty} P(H_m, V) \exp[\sum_{i=1}^{\infty} r_i s_{im} / \lambda_i - (1/2) \sum_{i=1}^{\infty} s_{im}^2 / \lambda_i] \, dV.    (22)

Substituting (15) and (17) into (22), we finally obtain

    \alpha_m = \int_{-\infty}^{\infty} P(H_m, V) \exp[\int_0^T R^T(t) G_m(t) \, dt - (1/2) \int_0^T S_m^T(t) G_m(t) \, dt] \, dV    (23)

where

    G_m(t) \triangleq \int_0^T Q(t, u) S_m(u) \, du    (24)

and

    Q(t, u) \triangleq \sum_{i=1}^{\infty} \phi_i(t) \phi_i^T(u) / \lambda_i.

Q(t, u) is called the inverse kernel. It may also be found from the following equation:

    \int_0^T K(t, u) Q(u, z) \, du = \delta(t - z) I    (25)

where I is the unity matrix. Expressions (21) and (23) represent a general solution of the Gaussian problem. In the following paragraph one special case will be considered in somewhat more detail.
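To make (23)-(25) concrete, here is a minimal numerical sketch (an illustration of this edit, not from the paper; the kernel, signals and time grid are arbitrary assumptions, and the noise components are assumed independent and identically distributed so that K(t, u) = k(t, u) I with a scalar kernel k). On a discrete grid the inverse kernel of (25) becomes a scaled matrix inverse of the sampled kernel, G_m of (24) becomes a matrix-vector product, and the exponent of (23) reduces to two quadrature sums.

    import numpy as np

    def whitening_exponent(R, S_m, k, dt):
        """Exponent of alpha_m in eq. (23) for a fixed parameter value V.

        R, S_m : arrays of shape (M, n) -- sampled observation and candidate signal.
        k      : array (n, n) -- sampled symmetric scalar kernel k(t, u); the vector
                 kernel is assumed to be K(t, u) = k(t, u) * I (sketch assumption).
        dt     : sample spacing.
        """
        # Discretized eq. (25): sum_u k(t, u) q(u, z) dt = delta(t - z) ~ I/dt,
        # hence q = inv(k) / dt**2 on the grid.
        q = np.linalg.inv(k) / dt**2
        # Eq. (24), per component (q is symmetric): G_m(t) = integral q(t, u) S_m(u) du
        G = S_m @ q * dt
        # Eq. (23): integral R^T G_m dt - (1/2) integral S_m^T G_m dt
        return np.sum(R * G) * dt - 0.5 * np.sum(S_m * G) * dt

    # White-noise check: k(t, u) = (N0/2) delta(t - u) discretizes to (N0/2)/dt on the
    # diagonal, and the sketch then reproduces G_m = (2/N0) S_m, i.e. eq. (28) below.
    n, M, dt, N0 = 200, 4, 0.005, 0.5
    k_white = (0.5 * N0 / dt) * np.eye(n)
    S = np.zeros((M, n)); S[1, 50:70] = 1.0
    R = S + np.random.default_rng(0).normal(scale=np.sqrt(0.5 * N0 / dt), size=(M, n))
    print(whitening_exponent(R, S, k_white, dt))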

3.3. A special case: sequential target detection under conditions of white noise

Let us apply the general results of the previous paragraph to the special case described in Section 1, i.e., assume that S_m(t) is defined by (2) and V = \tau. Assume, in addition, that W(t) is a sample from a stationary white Gaussian vector process with identical components, i.e.,

    K(t, u) = (1/2) N_0 \delta(t - u) I    (26)

where (1/2) N_0 is the spectral density of the noise. Then (25) and (24), respectively, yield

    Q(t, u) = (2 / N_0) \delta(t - u) I,    (27)

    G_m(t) = (2 / N_0) S_m(t).    (28)

Substituting (28) into (23) yields

    \alpha_m = \int_0^{T_p} P(H_m, \tau) \exp[(2 / N_0) \int_0^T R^T(t) S_m(t) \, dt - (1 / N_0) \int_0^T ||S_m(t)||^2 \, dt] \, d\tau    (29)

where ||.|| denotes the norm of a vector and T_p is a pulse repetition period. Note that the limits of integration over \tau are now (0, T_p), since for unambiguous targets 0 < \tau < T_p [11]. The expression (29) may be simplified if we note that s(t - \tau) and, consequently, ||S_m(t)|| do not depend on H_m. In addition, T in the problem under consideration may change only in multiples of T_p, i.e.,

    T = N T_p,    N = 1, 2, ...    (30)

where N is the number of repetitions. Hence, for unambiguous targets, the quantity

    \int_0^{N T_p} ||S_m(t)||^2 \, dt

does not depend on the delay time \tau. Consequently, after substituting (29) into (21), the common term

    \exp[-(1 / N_0) \int_0^{N T_p} ||S_m(t)||^2 \, dt]

may be cancelled. The cancellation yields

    P(H_m | R) = \mu_m / \sum_{m=1}^{M} \mu_m    (31)

where

    \mu_m \triangleq \int_0^{T_p} P(H_m, \tau) \exp[(2 / N_0) \int_0^{N T_p} R^T(t) S_m(t) \, dt] \, d\tau.    (32)

(31) and (32) define a MAP sequential procedure for the considered case. The performance of this procedure will be investigated in the following section.

4. Analysis of the performance of a sequential processor

4.1. Performance evaluation using the mean path approximation

Two major parameters determine the performance of a sequential detector: the error probability PE and the average sample number ASN \triangleq E(N); in our case, ASN is the average number of repetitions. PE of the MAP sequential processor was shown to be bounded [2, 4, 5]:

    PE \le \epsilon.    (33)

The meaning of the parameter \epsilon was explained in Subsection 3.1. It was experimentally shown [4] that PE \approx \epsilon. Therefore, in this paper we accept

    PE = \epsilon    (34)

and denote the error probability as \epsilon in the following discussion. ASN is connected with the conditional ASN's:

    ASN = \sum_{j=1}^{M} \int_{-\infty}^{\infty} ASN(H_j, V) P(H_j, V) \, dV    (35)

where ASN(H_j, V) is the conditional average number of observations. We shall use the following method for computation of ASN(H_j, V):
(1) Find \bar{R} \triangleq E(R | H_j, V), the conditional mean of the observation vector R.
(2) Apply the sequential test to \bar{R}. It will terminate at a certain observation step, say, on the \bar{N}-th step.
(3) Accept \bar{N} as ASN(H_j, V).
A similar method (the mean path approximation, MPA) was previously used in [4] and [7] for the computation of ASN. In [7], theoretically (for M = 2), and in [4], experimentally (for M < 1000), MPA was proved to be a satisfactory method for determining the ASN, provided \epsilon << 1. In fact, MPA yielded less than a 5 percent error in [4]. Consequently, we use MPA for our case as well; verification will be performed in Section 5.
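The following Python sketch (an illustration of this edit, with arbitrary pulse, prior and noise parameters; it is not the author's code) implements the white-noise procedure (31)-(32) pulse by pulse, using a discrete grid of candidate delays in place of the integral over \tau, and stops as soon as the largest posterior reaches 1 - \epsilon. Feeding it the noiseless conditional mean E(R | H_j, \tau_d) instead of a noisy record is exactly the mean path approximation of Subsection 4.1.

    import numpy as np

    def map_sequential_detect(echoes, pulse, tau_grid, dt, N0, eps=1e-3, prior=None):
        """MAP sequential detection per eqs. (31)-(32), white Gaussian noise.

        echoes   : array (N_max, M, n) -- echo record for each repetition, each of
                   the M antenna channels sampled on n points per period.
        pulse    : function tau -> array (n,) giving s(t - tau) on that grid.
        tau_grid : candidate delays replacing the integral over tau (sketch
                   assumption: uniform prior over the grid unless 'prior' given).
        Equal hypothesis priors P(H_m) are assumed; they cancel in the ratio (31).
        Returns (decision m0, number of repetitions used, posterior vector).
        """
        N_max, M, n = echoes.shape
        prior = np.full(len(tau_grid), 1.0 / len(tau_grid)) if prior is None else prior
        templates = np.stack([pulse(tau) for tau in tau_grid])      # (n_tau, n)
        log_mu = np.zeros((M, len(tau_grid)))                       # accumulates (2/N0) * correlation
        post = np.full(M, 1.0 / M)
        for k in range(N_max):
            # (2/N0) * integral of R^T(t) S_m(t) dt over this repetition, eq. (32)
            log_mu += (2.0 / N0) * (echoes[k] @ templates.T) * dt
            mu = (prior * np.exp(log_mu - log_mu.max())).sum(axis=1)   # marginalize tau (stable shift)
            post = mu / mu.sum()                                       # eq. (31)
            if post.max() >= 1.0 - eps:                                # stopping rule of Subsection 3.1
                return int(post.argmax()), k + 1, post
        return int(post.argmax()), N_max, post

    # Mean path approximation (Subsection 4.1): run the same test on the noiseless
    # conditional mean E(R | H_j, tau_d); the stopping step is the MPA estimate of ASN.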

4.2. Analysis of the performance of the sequential processor using the mean path approximation

Let us evaluate the ASN of the sequential algorithm described by (31) and (32). In order to apply the MPA, we have to find \bar{R} \triangleq E(R | H_j, V). Since W(t) is assumed to have a zero mean, \bar{R} is equal to S_j(t), i.e.,

    \bar{R} = S_j(t) = s(t - \tau_d) [0 \; 0 \; \cdots \; 1 \; \cdots \; 0]^T    (36)

with the single 1 in the j-th position, where \tau_d is the delay time (= V). Now we apply (32) to R = \bar{R}:

    \bar{\mu}_m = \int_0^{T_p} P(H_m, \tau) \exp[(2 / N_0) \delta_{mj} \int_0^{N T_p} s(t - \tau) s(t - \tau_d) \, dt] \, d\tau    (37)

where \delta_{mj} is the Kronecker delta. Consider a special case (a rectangular pulse train [10]) when

    s(t - \tau) = A \, Ш(t / T_p) * \Pi((t - \tau) / T_d),    t > 0    (38)

where

    Ш(t / T_p) \triangleq \sum_{k=-\infty}^{\infty} \delta(t - k T_p),    (39)

    \Pi(x) \triangleq 1 for |x| < 1/2, and 0 elsewhere.    (40)

Here T_p is a repetition period, \tau is a delay time, T_d is a pulse duration, and * denotes convolution. The special functions Ш(.) and \Pi(.) are defined here after Bracewell [10]. Substituting (38) into (37) we obtain, after transformations,

    \bar{\mu}_m = \int_0^{T_p} P(H_m, \tau) \exp[(2 / N_0) \delta_{mj} N A^2 T_d \Lambda((\tau - \tau_d) / T_d)] \, d\tau    (41)

where \Lambda(x) = \Pi(x) * \Pi(x) is the triangular function [10]. Assume that H_m and \tau are mutually independent. Then, as shown in the Appendix, (41) yields the following approximate expression:

    \bar{\mu}_m = P(H_j) [1 - 2 T_d P(\tau_d) + 2 T_d P(\tau_d) \rho^{-1} (e^{\rho} - 1)],    m = j,
    \bar{\mu}_m = P(H_m),    m \ne j,    (42)

where

    \rho \triangleq N A^2 T_d / ((1/2) N_0)    (43)

is a signal-to-noise ratio, P(H_m) is the a priori probability of H_m, and P(\tau_d) is the a priori probability density of \tau at the point \tau = \tau_d. If P(H_m) is constant for all m, then substituting (42) into (31) yields

    P(H_m | \bar{R}) = [1 - 2 T_d P(\tau_d) + 2 T_d P(\tau_d) \rho^{-1} (e^{\rho} - 1)] / [M - 2 T_d P(\tau_d) + 2 T_d P(\tau_d) \rho^{-1} (e^{\rho} - 1)],    m = j,
    P(H_m | \bar{R}) = 1 / [M - 2 T_d P(\tau_d) + 2 T_d P(\tau_d) \rho^{-1} (e^{\rho} - 1)],    m \ne j.    (44)

Recall that according to the MPA we should find the smallest N (the smallest \rho) which satisfies the condition

    P(H_m | \bar{R}) \ge 1 - \epsilon.    (45)

It is clear from (44) that P(H_j | \bar{R}) is the greatest of all {P(H_m | \bar{R})}. Substituting P(H_j | \bar{R}) into (45) leads to the following equation:

    e^{\rho} = 1 + \rho [(M (1 - \epsilon) - 1) / (2 T_d P(\tau_d) \epsilon) + 1].    (46)

Denote the value of \rho satisfying (46) as \rho_0 and compute

    \bar{N} \triangleq \rho_0 N_0 / (2 A^2 T_d).    (47)

Then, according to the MPA, \bar{N} should be accepted as ASN(H_j, \tau), the conditional ASN. Finally, substitution of ASN(H_j, \tau) = \bar{N} into (35) yields the unconditional ASN. Unfortunately, the development cannot be explicitly performed for an arbitrary P(\tau), since (46) has to be solved numerically. Therefore, we consider a special case:

    P(\tau) = \delta(\tau - \tau_d).    (48)

We have chosen distribution (48) (no a priori uncertainty) because experimental data are available for this case [2]. These data may (and will) be used for the comparison with the theory developed. Note that (48) cannot be directly substituted into (46), since (44) was derived under the assumption that P(\tau) is a slowly varying function (see Appendix). Hence, (48) must be substituted into (41), yielding

    \bar{\mu}_m = P(H_m) \exp[(2 / N_0) \delta_{mj} N A^2 T_d].    (49)

Substituting (49) into (31) and assuming equal a priori probabilities {P(H_m)}, we obtain for m = j

    P(H_j | \bar{R}) = e^{\rho} / (M - 1 + e^{\rho}).    (50)

Substituting (50) into (45) yields

    e^{\rho} / (M - 1 + e^{\rho}) = 1 - \epsilon.    (51)

Eq. (51) has a single root

    \rho_0 = \ln[(\epsilon^{-1} - 1)(M - 1)]    (52)

which, after substitution into (47), yields

    \bar{N} = N_0 \ln[(\epsilon^{-1} - 1)(M - 1)] / (2 A^2 T_d).    (53)

\bar{N} is an approximation for the conditional ASN; however, it depends neither on H_m nor on \tau_d. Consequently, the unconditional ASN is also equal to \bar{N}:

    ASN = \bar{N} = N_0 \ln[(\epsilon^{-1} - 1)(M - 1)] / (2 A^2 T_d).    (54)

5. Verification of the results obtained: comparison with computer simulation results

Here we compare the theoretical results of the previous section with the computer simulation results of Benedetto and Biglieri [2]. Their study of M-ary sequential detection refers to the Gaussian white noise case, and the signals are assumed to be completely known. This is exactly the special case considered at the end of Subsection 4.2. Therefore, our results can be compared for verification. The computer simulation results are available [2] in the form of plots of the error probability PE versus A^2 \bar{T}_a / N_0, where(3) \bar{T}_a is the average analysis time. Let us relate A^2 \bar{T}_a / N_0 to our research. The sequential processor considered in the previous section analyzes the received signal during a time T_a \triangleq N T_d, where N is the number of repetitions and T_d is a pulse duration. Hence, on the average,

    \bar{T}_a = \bar{N} T_d.    (55)

Combining (54) with (55), we obtain

    A^2 \bar{T}_a / N_0 = (1/2) \ln[(\epsilon^{-1} - 1)(M - 1)].    (56)

(56) easily yields

    PE = \epsilon = 1 - \exp(2 A^2 \bar{T}_a / N_0) / (M - 1 + \exp(2 A^2 \bar{T}_a / N_0)).    (57)

(3) We have only changed notation: A^2 here was S in [2].

Figs. 3 to 5 show PE versus A^2 \bar{T}_a / N_0 (dB). Each figure includes three curves: experimental curves corresponding to the sequential and the nonsequential detectors, reproduced from [2], and the theoretical curve for the sequential detector computed using (57). Comparison of the curves clearly shows that the difference between the theoretical results obtained in this paper and the experimental data is rather small: about 1 dB in terms of A^2 \bar{T}_a / N_0. Notice also that the experimental results can be considered as an upper bound on PE because of the positive simulation errors [2]. Since our theoretical curves always run below the experimental curves, the actual accuracy of our theory is probably even better than the indicated 1 dB. An additional conclusion which can be drawn from Figs. 3 to 5 is rather obvious: the sequential approach provides a greatly reduced error rate as compared with the nonsequential approach under the same signal/noise conditions.

Fig. 3. Error probability PE versus A^2 \bar{T}_a / N_0 (dB) for M = 4. The curves denoted 'Experimental sequential' and 'Nonsequential' are reproduced from [2]; the 'Theoretical sequential' curve was computed using (57).

Fig. 4. Same as Fig. 3, except that M = 8.

Fig. 5. Same as Fig. 4, except that M = 16.
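As a quick numerical companion to (46), (52), (54) and (57), the following sketch (an addition of this edit, not from the paper; it uses scipy only for root finding) evaluates the closed-form predictions for given M, \epsilon and signal/noise parameters, and also solves (46) numerically when a slowly varying prior density P(\tau_d) is assumed.

    import numpy as np
    from scipy.optimize import brentq

    def rho0_no_uncertainty(eps, M):
        """Eq. (52): root of (51) when P(tau) = delta(tau - tau_d)."""
        return np.log((1.0 / eps - 1.0) * (M - 1.0))

    def asn(eps, M, A, Td, N0):
        """Eq. (54): average sample number (number of pulse repetitions)."""
        return N0 * rho0_no_uncertainty(eps, M) / (2.0 * A**2 * Td)

    def pe_vs_snr(M, a2_ta_over_n0):
        """Eq. (57): error probability versus A^2 * mean(Ta) / N0 (linear, not dB)."""
        x = np.exp(2.0 * a2_ta_over_n0)
        return (M - 1.0) / (M - 1.0 + x)

    def rho0_with_uncertainty(eps, M, Td, p_tau_d):
        """Numerical root of eq. (46) for a slowly varying prior density P(tau_d)."""
        c = (M * (1.0 - eps) - 1.0) / (2.0 * Td * p_tau_d * eps) + 1.0
        f = lambda rho: np.exp(rho) - 1.0 - rho * c
        return brentq(f, 1e-9, 100.0)   # e^rho outgrows rho*c, so a root exists for c > 1

    # Example: for M = 8 and eps = 1e-3, print rho0 and check that (57) returns eps
    # at the operating point A^2 * mean(Ta) / N0 = rho0 / 2 given by (56).
    r0 = rho0_no_uncertainty(1e-3, 8)
    print(r0, pe_vs_snr(8, 0.5 * r0))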

6. Summary


Sequential detection under conditions of a priori uncertainty has been considered. A MAP sequential detector has been developed and its performance has been evaluated using the mean path approximation. The results obtained were verified via comparison with previously published computer simulation results. The comparison showed good agreement between the theory developed and the experimental data. The sequential approach was shown to provide a greatly reduced error rate as compared with the nonsequential approach under the same signal/noise conditions.


Appendix

Development of the approximate expression for \bar{\mu}_m

Consider \bar{\mu}_m defined by (41) and assume that H_m and \tau are mutually independent. Then

    P(H_m, \tau) = P(H_m) P(\tau)    (A.1)

where P(H_m, \tau) is the joint a priori distribution of H_m and \tau; P(H_m) and P(\tau) are the individual distributions of H_m and \tau, respectively. Substituting (A.1) into (41), introducing \rho \triangleq N A^2 T_d / ((1/2) N_0) and using the definition of \Lambda(.), we obtain

    \bar{\mu}_m = P(H_j) [I_1 + I_2 + I_3 + I_4],    m = j,
    \bar{\mu}_m = P(H_m),    m \ne j,    (A.2)

where

    I_1 \triangleq \int_0^{\tau_d - T_d} P(\tau) \, d\tau,    (A.3)

    I_2 \triangleq \int_{\tau_d - T_d}^{\tau_d} P(\tau) \exp[\rho (\tau + T_d - \tau_d) / T_d] \, d\tau,    (A.4)

    I_3 \triangleq \int_{\tau_d}^{\tau_d + T_d} P(\tau) \exp[\rho (\tau_d + T_d - \tau) / T_d] \, d\tau,    (A.5)

    I_4 \triangleq \int_{\tau_d + T_d}^{T_p} P(\tau) \, d\tau.    (A.6)

Let us assume that P(\tau) is a slowly varying function, i.e., it varies little within the interval (\tau_d - T_d, \tau_d + T_d). Then it may be approximated by P(\tau_d), in which case (A.4) and (A.5) yield

    I_2 = I_3 = P(\tau_d) T_d (e^{\rho} - 1) / \rho.    (A.7)

Note that

    I_1 + I_4 = 1 - \int_{\tau_d - T_d}^{\tau_d + T_d} P(\tau) \, d\tau \approx 1 - 2 T_d P(\tau_d).    (A.8)

Substituting (A.7) and (A.8) into (A.2) yields

    \bar{\mu}_m = P(H_j) [1 - 2 T_d P(\tau_d) + 2 T_d P(\tau_d) \rho^{-1} (e^{\rho} - 1)],    m = j,
    \bar{\mu}_m = P(H_m),    m \ne j.    (A.9)

(A.9) represents the desired approximate expression for \bar{\mu}_m (see (42)).
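A small numerical check of the slowly-varying-prior step (a sketch of this edit with an arbitrary choice of prior and parameters, not part of the paper): evaluate I_2 of (A.4) by quadrature for a smooth P(\tau) and compare it with the closed form P(\tau_d) T_d (e^\rho - 1)/\rho of (A.7).

    import numpy as np

    def I2_quadrature(p_tau, tau_d, Td, rho, n=20000):
        """Integral (A.4) evaluated numerically by the trapezoidal rule."""
        tau = np.linspace(tau_d - Td, tau_d, n)
        dtau = tau[1] - tau[0]
        vals = p_tau(tau) * np.exp(rho * (tau + Td - tau_d) / Td)
        return np.sum(0.5 * (vals[1:] + vals[:-1])) * dtau

    def I2_closed_form(p_tau, tau_d, Td, rho):
        """Approximation (A.7), valid when P(tau) varies little over (tau_d - Td, tau_d + Td)."""
        return p_tau(tau_d) * Td * (np.exp(rho) - 1.0) / rho

    # Assumed example prior: slowly tilted density on (0, Tp), normalized to 1
    Tp, Td, tau_d, rho = 1.0, 0.02, 0.4, 5.0
    p_tau = lambda tau: (1.0 + 0.2 * tau / Tp) / 1.1
    print(I2_quadrature(p_tau, tau_d, Td, rho), I2_closed_form(p_tau, tau_d, Td, rho))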

Acknowledgement

The author wishes to thank the reviewers for their helpful comments.

References

[1] A.J. Viterbi, "The effect of sequential decision feedback on communication over the Gaussian channel", Inform. Contr., Vol. 8, 1965, pp. 80-92.
[2] S. Benedetto and E. Biglieri, "Sequential decision feedback with a MAP strategy", IEEE Trans. Inform. Theory, Vol. IT-19, July 1973, pp. 565-569.
[3] A. Wald, Sequential Analysis, Wiley, New York, 1947.
[4] T.S. Edrington and D.P. Peterson, "On the performance of some sequential multiple-decision procedures", IEEE Trans. Aerospace Electronic Systems, Vol. AES-7, September 1971, pp. 906-913.
[5] R.E. Bechhofer, J. Kiefer and M. Sobel, Sequential Identification and Ranking Procedures, Univ. of Chicago Press, Chicago, IL, 1968.
[6] W.S. Hodgkiss and L.W. Nolte, "A sequential implementation of optimal array processors", IEEE Trans. Aerospace Electronic Systems, Vol. AES-16, May 1980, pp. 349-354.
[7] L.G. Kazovsky, Transmission of Information in the Optical Waveband, Wiley, New York, 1978.
[8] L.G. Kazovsky, "On noise immunity of feedback communication systems", IEEE Trans. Com., Vol. COM-28, Oct. 1980, pp. 1844-1847.
[9] H.L. Van Trees, Detection, Estimation and Modulation Theory, Part I, Wiley, New York, 1968.
[10] R.N. Bracewell, The Fourier Transform and its Applications, McGraw-Hill, New York, 1980.
[11] M.I. Skolnik, Introduction to Radar Systems, McGraw-Hill, New York, 1980.
