Pattern Recognition Letters 4 (1986) 259-267
North-Holland
September 1986

Application of the singular value method and signal sub-space methods to sampled radar images

Andrew WEBB

Royal Signals and Radar Establishment, St Andrews Road, Malvern, Worcestershire WR14 3PS, United Kingdom

Received September 1985

© British Crown Copyright 1985

Abstract: This paper considers the application of the singular value and signal subspace methods to reconstruct scenes consisting of both coherent and non-coherent point sources from sampled radar images using a data adaptive profile function.

Key words: Singular value method, signal subspace method, radar, image.
1. Introduction
Much of the previous work on image restoration techniques applied to imaging radars has assumed that the scene consists of a finite number of discrete point sources which are small in number compared to the number of samples of the image available (Webb, 1985). This may be a valid assumption in many practical situations, and of course is the constraint imposed in a 'monopulse' tracking system (Skolnik, 1980), but situations do occur where the number of reflectors in the scene exceeds the number of image samples and so the sources cannot be resolved individually, even in the absence of noise. Also, many of the methods for resolving and locating sources discussed to date have applied to scenes which are not completely coherent and for which the image covariance matrix forms the basis for the scene reconstruction. These methods fail when the scene contains sources which are coherent, as may arise in the multipath situation. The purpose of this paper is to consider the singular value method (Bertero and Pike, 1982) as a candidate for scene reconstruction of scenes containing many sources and for coherent scenes.

One of the difficulties with the method has been the choice of window function applied to the scene. It is necessary to apply a window function in order to 'optimise' the reconstruction. Choosing one too large results in poor resolution, whilst too small a window function may result in spurious detail or a completely inaccurate restoration. This paper adopts the profile function approach (Bertero et al., 1984; Luttrell, 1985) and a data adaptive profile function is proposed. This technique has previously been applied successfully to coherent images produced by a synthetic aperture radar system (Luttrell and Oliver, 1985). The method is extended here to apply to multiple snapshot data from a focal-plane array radar of scenes containing either coherent or non-coherent sources. Also, the approach here differs in the way the threshold is applied to the eigenvalue spectrum. Section 2 develops the theory for a single snapshot, Section 3 extends the theory to use multiple snapshots of the image and results are presented in Section 4 for a 'signal subspace singular value' method which may be used for scenes containing both coherent and non-coherent sources. Finally,
Section 5 concludes with a summary and discussion.
2. Theory - Single image snapshot

Let $H_1 = \mathcal{L}^2(-\infty, \infty)$, the Hilbert space of square integrable functions on $(-\infty, \infty)$, and $H_2 = \mathbb{C}^N$, the N-dimensional vector space of complex numbers. The inner products on $H_1$ and $H_2$ are defined as

$$(f, f')_1 = \int_{-\infty}^{\infty} f(y)\, f'^*(y)\, dy, \qquad (1)$$

$$(g, g')_2 = g^T g'^*, \qquad (2)$$

where $f, f' \in H_1$ and $g, g' \in H_2$ and $*$ denotes complex conjugate. The mapping $A$ from $H_1$ to $H_2$ is defined, for $f(y) \in H_1$, by

$$(Af)_i = \int_{-\infty}^{\infty} h_i(y) P(y) f(y)\, dy, \qquad (3)$$

where $(Af)_i$ denotes the i-th component of the vector $Af \in H_2$; $h_i(y)$ is the i-th component of the imaging vector $h(y)$, i.e. for an image vector provided by an array of receivers in the focal plane, it is the response of the i-th receiver element to a point source at position $y$ in the far field; and $P(y)$ is the profile or illumination function. Previously, this was taken to be the window function (Bertero and Pike, 1982)

$$P(y) = \begin{cases} 1, & |y| \le a, \\ 0, & |y| > a, \end{cases} \qquad (4)$$

so that (3) becomes

$$(Af)_i = \int_{-a}^{a} h_i(y) f(y)\, dy.$$

The adjoint of $A$, $A^+$, is given by

$$(A^+ g)(y) = \sum_i g_i h_i^*(y) P^*(y) \qquad (5)$$

and satisfies $(g, Af)_2 = (A^+ g, f)_1$. Now, let $v_j$ and $\lambda_j^2$ be the orthonormal eigenvectors and real eigenvalues of the Hermitian matrix $\mathcal{A} = A A^+$, with components given by

$$\mathcal{A}_{ij} = \int_{-\infty}^{\infty} P(y) P^*(y) h_i(y) h_j^*(y)\, dy. \qquad (6)$$

Then, the functions $\phi_j(y)$, given by

$$\phi_j(y) = \frac{1}{\lambda_j} P^*(y) h^{*T}(y) v_j, \qquad (7)$$

are the orthonormal eigenfunctions of the Hermitian kernel $\mathcal{A}^+ = A^+ A$:

$$\mathcal{A}^+(y, z) = \sum_i h_i^*(y) P^*(y) h_i(z) P(z) = P^*(y) P(z)\, h^{*T}(y) h(z), \qquad (8)$$

also with eigenvalues $\lambda_j^2$. In addition, it follows from equations (6) and (7) that

$$v_j = \frac{1}{\lambda_j} \int_{-\infty}^{\infty} h(y) P(y) \phi_j(y)\, dy. \qquad (9)$$

A reconstruction of the scene from an image, $I$, is taken as

$$R(y) = P(y) f(y) = P(y) \sum_j (I^T v_j^*)\, \phi_j(y)/\lambda_j, \qquad (10)$$

since the image of $R$,

$$\int_{-\infty}^{\infty} h(y) R(y)\, dy = \sum_j (I^T v_j^*) \frac{1}{\lambda_j} \int_{-\infty}^{\infty} h(y) \phi_j(y) P(y)\, dy = \sum_j (I^T v_j^*)\, v_j = I,$$

the original image.

The summation in equation (10) need not be over all the components. Previously, modes have been rejected by truncating the eigenvalue spectrum and retaining those modes for which $\lambda_j^2$ is greater than the inverse of the signal-to-noise ratio (Bertero and Pike, 1982). However, this assumes that the image power is spread equally between all modes and does not depend on the structure in the image. Alternatively, modes can be rejected which contribute mainly noise to the reconstruction; for example, reject those modes for which

$$|I^T v_j^*|^2 < 2 N_0,$$
where $N_0$ is the noise power in each mode, assumed equal. The extra factor of $P(y)$ in the definition of the reconstruction (10) is included to remove large sidelobes which arise outside the region of interest. The question now remains as to the choice of the correct profile function. The form proposed here is one which is image dependent, namely

$$P(y) = I^T h^*(y), \qquad (11)$$

and thus will adapt to a slowly changing environment. Following the approach of Luttrell and Oliver (1985), the process of applying a profile function and performing a reconstruction may be repeated. That is, the reconstruction, (10), may be used as a profile function to perform a further reconstruction using the same image:
$$P_{r+1}(y) = R_r(y) = P_r(y) f_r(y).$$

Alternatively, for a slowly-varying image, the reconstruction of one image could be used as the profile function for the next.
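To make the single-snapshot procedure concrete, the following is a minimal numerical sketch of equations (3)-(11) on a discretised grid in $y$, with the integrals replaced by sums and the eigenproblem solved numerically. The five-element sinc receiver model, the source positions and the grid limits are illustrative assumptions for demonstration only, not the paper's own simulation code.

```python
import numpy as np

# A minimal sketch of the single-snapshot method, eqs. (3)-(11), on a discretised
# y grid.  Receiver model, grid and source positions are assumptions.
N = 5
y = np.linspace(-5 * np.pi, 5 * np.pi, 1024)
dy = y[1] - y[0]
# h_i(y): sin(x)/x beams of standard width pi, centred at -2*pi, ..., 2*pi
h = np.array([np.sinc((y - n * np.pi) / np.pi) for n in range(-2, 3)])   # (N, M)

def reconstruct(I, P, n_modes=None):
    """One pass of the singular value reconstruction R(y) = P(y) f(y), eq. (10)."""
    A = (h * P * np.conj(P)) @ np.conj(h).T * dy        # A_ij of eq. (6)
    lam2, V = np.linalg.eigh(A)                         # eigenvalues lambda_j^2, vectors v_j
    lam = np.sqrt(np.clip(lam2, 0.0, None))
    order = np.argsort(lam2)[::-1]
    kept = order if n_modes is None else order[:n_modes]
    f = np.zeros_like(y, dtype=complex)
    for j in kept:
        if lam[j] < 1e-8:                               # skip numerically null modes
            continue
        phi_j = np.conj(P) * (np.conj(h).T @ V[:, j]) / lam[j]   # eq. (7)
        # modes with |I^T v_j*|^2 below about 2*N0 could be rejected here instead
        f += (I @ np.conj(V[:, j])) * phi_j / lam[j]             # eq. (10)
    return P * f

# Noise-free image of two assumed unit point sources, the data-adaptive profile
# of eq. (11), and a few repeated reconstructions, P_{r+1}(y) = R_r(y).
src_idx = [np.argmin(np.abs(y + np.pi / 2)), np.argmin(np.abs(y - np.pi / 2))]
I = h[:, src_idx].sum(axis=1).astype(complex)
P = I @ np.conj(h)                                      # P(y) = I^T h*(y)
for _ in range(4):
    R = reconstruct(I, P)
    P = R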
3. Theory - Multiple snapshot data

When many snapshots of the image are available it is natural to use some form of averaging in order to diminish noise effects and improve source detection and location capability. One way to do this would be to use a smoothed profile function in the reconstruction. Since the image is varying in time (either through fluctuating sources in the scene or noise in the receiver) the profile function is time-dependent:
$$P(y, t) = I^T(t)\, h^*(y). \qquad (12)$$

The expected value may be expressed as

$$E\{|P(y, t)|^2\} = h^T \Psi^* h^*, \qquad (13)$$

where $\Psi = E\{I(t)\, I^{*T}(t)\}$ is the image covariance matrix. Therefore a smoothed profile function would be

$$P(y) = \bigl(h^T(y)\, \Psi^*\, h^*(y)\bigr)^{1/2}, \qquad (14)$$

with the matrix $\mathcal{A}$ given by equation (6) and the basis functions of the reconstruction by equation (7).
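The smoothed profile of equation (14) needs only the sample covariance of the measured image vectors. A small sketch follows, assuming (hypothetically) that the snapshots are held as a (T, N) complex array and that the receiver responses have been sampled on a grid of y values; these argument names are not from the paper.

```python
import numpy as np

def smoothed_profile(snapshots, h):
    """Smoothed profile of eq. (14): P(y) = (h^T(y) Psi* h*(y))^(1/2), with the
    covariance Psi = E{ I(t) I^{*T}(t) } estimated by averaging over snapshots.

    snapshots : (T, N) complex array, one measured image vector I(t) per row
    h         : (N, M) array of receiver responses h_i(y) sampled at M grid points
    """
    Psi = snapshots.T @ np.conj(snapshots) / snapshots.shape[0]
    quad = np.einsum('im,ij,jm->m', h, np.conj(Psi), np.conj(h))   # h^T Psi* h* at each y
    return np.sqrt(np.abs(quad))            # real and non-negative up to rounding
```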
A reconstruction of the scene is given by a weighted sum of the basis functions,

$$P(y) f(y) = P(y) \sum_j \alpha_j \phi_j(y), \qquad (15)$$

for constants $\alpha_j$. It is natural to choose the $\alpha_j$ so that the image of the reconstruction, $I_r$, is 'close' in some sense to the set of measured images $\{I(t)\}$. Now $I_r$ is given by

$$I_r = \int_{-\infty}^{\infty} h(y) P(y) f(y)\, dy = \sum_j \alpha_j \lambda_j v_j \qquad (16)$$

$$= \sum_j \beta_j v_j \qquad (17)$$

for $\beta_j = \alpha_j \lambda_j$, i.e.

$$I_r = V \beta, \qquad (18)$$
where $V = [v_1\; v_2\; \cdots\; v_N]$ is the matrix of eigenvectors of $\mathcal{A}$. Choosing $\beta$ to minimize the component of $I_r$ in the noise subspace gives the 'signal subspace singular value method', i.e. minimize $\theta$ given by

$$\theta = \beta^{*T} V^{*T} U U^{*T} V \beta, \qquad (19)$$

where $U$ is the matrix of eigenvectors in the noise subspace of $\Psi$. The noise subspace of $\Psi$ is the set of eigenvectors of $\Psi - \Psi_n$ with zero eigenvalue (assuming $\Psi_n$, the noise covariance matrix, is known). This is equivalent to the space of eigenvectors of $\Psi$ with eigenvalues $N_0$ only if $\Psi_n = N_0 I$. The solution to this, subject to the constraint that $|\beta| = 1$, is

$$\beta = \text{min. eigenvector of } V^{*T} U U^{*T} V, \qquad (20)$$

where 'min. eigenvector' refers to the eigenvector corresponding to the minimum eigenvalue. However, the matrix $V^{*T} U U^{*T} V$ has $N - N_n$ 'minimum eigenvectors', where $N_n$ is the dimension of the noise subspace of $\Psi$, since the matrix has $N - N_n$ eigenvalues identically equal to zero and $N_n$ unit eigenvalues. Thus there are $N - N_n$ independent vectors (corresponding to the $N - N_n$ independent eigenvectors of $V^{*T} U U^{*T} V$ with zero eigenvalue) which give a minimum of $\theta$ of zero.
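Equations (19) and (20) reduce to a small matrix eigenproblem once $V$ and the noise-subspace eigenvectors $U$ are available. The following is a sketch only, assuming $V$ and $U$ are held as matrices whose columns are the respective eigenvectors; the function and argument names are hypothetical.

```python
import numpy as np

def minimum_eigenvectors(V, U):
    """Solve eq. (20): the beta minimising theta = beta^{*T} V^{*T} U U^{*T} V beta
    subject to |beta| = 1.

    V : (N, N) matrix whose columns are the eigenvectors v_j of the matrix in eq. (6)
    U : (N, Nn) matrix whose columns span the noise subspace of the image covariance
    Returns the N - Nn eigenvectors of K with (near-)zero eigenvalue, one per column,
    each of which yields an independent reconstruction.
    """
    K = np.conj(V).T @ U @ np.conj(U).T @ V     # quadratic-form matrix of eq. (19)
    evals, evecs = np.linalg.eigh(K)            # eigenvalues in ascending order
    n_keep = V.shape[1] - U.shape[1]            # N - Nn zero eigenvalues expected
    return evecs[:, :n_keep]
```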
Hence there are $N - N_n$ independent reconstructions

$$R_i(y) = P(y) f_i(y) = P(y) \sum_j w_j^i\, \phi_j(y)/\lambda_j, \qquad i = 1, \ldots, N - N_n,$$

where the $w^i = (w_1^i, w_2^i, \ldots, w_N^i)$ are the eigenvectors of $V^{*T} U U^{*T} V$ with zero eigenvalue. A 'sum' reconstruction, $R_s(y)$, can then be taken to be

$$R_s(y) = P(y) f_s(y) = P(y) \Bigl( \sum_{i=1}^{N - N_n} |f_i(y)|^2 \Bigr)^{1/2},$$

and this can be used as a profile function in subsequent iterations. The steps in the method are listed below and results are given in Section 4 for simulated data.

(i) Form the image covariance matrix $\Psi$. Perform an eigenvector decomposition for the signal subspace singular value method.
(ii) Form the smoothed profile function, $P(y) = (h^T(y)\, \Psi^*\, h^*(y))^{1/2}$.
(iii) Calculate
$$\mathcal{A} = \int_{-\infty}^{\infty} P(y) P^*(y)\, h(y) h^{*T}(y)\, dy.$$
(iv) Calculate the eigenvectors, $v_j$, and eigenvalues, $\lambda_j^2$, of $\mathcal{A}$.
(v) Form the basis functions $\phi_j(y) = P^*(y) h^{*T}(y) v_j / \lambda_j$.
(vi) Calculate the matrix $K = V^{*T} U U^{*T} V$.
(vii) Calculate the minimum eigenvector(s) $\beta$; apply a threshold (if necessary) to the coefficients $\beta_j$; calculate $\alpha$ ($\alpha_j = \beta_j / \lambda_j$).
(viii) Form the reconstruction $R_i(y) = P(y) f_i(y) = P(y) \sum_j \alpha_j \phi_j(y)$ and the sum reconstruction $R_s(y) = P(y) f_s(y) = P(y) (\sum_{i=1}^{N-N_n} |f_i|^2)^{1/2}$.
(ix) Set $P(y) = R_s(y)$ and repeat from (iii) if necessary.

Figure 1. Profile function and fourth reconstruction for 100 snapshots of a scene containing two coherent sources of unit amplitude separated by a beamwidth, with $N_0 = 10^{-3}$. (a) $P(y)$; (b) $P_4(y)f_4(y)$.

Obviously, the method is rather lengthy, involving an eigenvector decomposition for each iteration. A method of iterating directly on the eigenvectors $v_j$ or basis functions $\phi_j$ would be required in order to avoid this computation.
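For reference, steps (i)-(viii) can be strung together as follows. This is a sketch under the same discretisation assumptions as before (a sampled y grid, receiver responses h of shape (N, M), snapshots of shape (T, N)), with the noise-subspace dimension Nn assumed known; it is not the author's implementation.

```python
import numpy as np

def subspace_svm_pass(snapshots, h, y, Nn, P=None):
    """One pass of steps (i)-(viii) of the signal subspace singular value method.

    snapshots : (T, N) complex array of measured image vectors I(t)
    h         : (N, M) sampled receiver responses h_i(y) on the grid y
    y         : (M,) sample positions
    Nn        : assumed dimension of the noise subspace of Psi
    P         : profile function carried over from the previous pass; eq. (14) if None
    Returns the sum reconstruction R_s(y), to be reused as the next profile.
    """
    dy = y[1] - y[0]
    # (i)  image covariance matrix and its noise-subspace eigenvectors
    Psi = snapshots.T @ np.conj(snapshots) / snapshots.shape[0]
    _, E = np.linalg.eigh(Psi)                          # eigenvalues ascending
    U = E[:, :Nn]
    # (ii) smoothed profile function, eq. (14)
    if P is None:
        P = np.sqrt(np.abs(np.einsum('im,ij,jm->m', h, np.conj(Psi), np.conj(h))))
    # (iii)-(iv) the matrix of eq. (6) and its eigen-decomposition
    A = (h * P * np.conj(P)) @ np.conj(h).T * dy
    lam2, V = np.linalg.eigh(A)
    lam = np.sqrt(np.clip(lam2, 1e-12, None))
    # (v)  basis functions phi_j(y), eq. (7)
    phi = np.conj(P) * (np.conj(h).T @ V).T / lam[:, None]
    # (vi)-(vii) minimum eigenvectors of K = V^{*T} U U^{*T} V, eq. (20)
    K = np.conj(V).T @ U @ np.conj(U).T @ V
    _, W = np.linalg.eigh(K)
    alphas = W[:, :h.shape[0] - Nn] / lam[:, None]      # alpha_j = beta_j / lambda_j
    # (viii) independent reconstructions f_i(y) and the 'sum' reconstruction
    f_i = alphas.T @ phi
    return P * np.sqrt(np.sum(np.abs(f_i) ** 2, axis=0))

# step (ix): iterate, reusing each sum reconstruction as the next profile, e.g.
#   P = None
#   for _ in range(4):
#       P = subspace_svm_pass(snapshots, h, y, Nn, P=P)
```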
4. Application to simulated images (multiple snapshot)
Figure 2. Profile function and fourth reconstruction for 100 snapshots of a scene containing two non-coherent sources of unit amplitude separated by a beamwidth, with $N_0 = 10^{-3}$. (a) $P(y)$; (b) $P_4(y)f_4(y)$.
Figure 3. Profile function and the first three reconstructions for 100 snapshots of a scene containing two coherent sources of unit amplitude separated by half of a standard beamwidth, with $N_0 = 10^{-3}$. (a) $P(y)$; (b) $P_1(y)f_1(y)$.
The method of the previous section is now applied to sample scenes of point sources. The simulated image corresponds to that measured by a linear array of five receivers equally-spaced in the focal plane of an optical system, with each receiver element having a $\sin(x)/x$ beamshape giving a 'standard' beamwidth of $\pi$ units. The beam spacing is also at $\pi$ units, corresponding to Nyquist sampling. Gaussian distributed random noise of variance $N_0$ is added to each receiver. The scene is assumed to be uniformly illuminated.

Figures 1 and 2 show the profile function and reconstructions after four iterations of a scene consisting of point sources of unit amplitude separated by a beamwidth, using 100 snapshots of the image and with $N_0 = 10^{-3}$. In Figure 1 the sources are coherent whilst in Figure 2 the phases of the sources vary randomly from snapshot to snapshot. Note that the profile functions for the scene (calculated according to equation (14)) differ in each case. In both cases, the sources are clearly located by four iterations.
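The image model just described can be simulated in a few lines. The sketch below assumes the five receiver beams are centred at $-2\pi, \ldots, 2\pi$ and uses complex Gaussian noise of total variance N0; the example source positions ($\pm\pi/2$, a beamwidth apart) are chosen to mimic the scenes of Figures 1 and 2 but, like the other conventions here, are assumptions rather than the paper's exact parameters.

```python
import numpy as np

def simulate_snapshots(src_pos, src_amp, T=100, N0=1e-3, coherent=True, seed=0):
    """Generate T simulated focal-plane images for a five-receiver array with
    sin(x)/x beamshapes of standard beamwidth pi, spaced at pi (Nyquist sampling).
    A sketch of the scene model described in the text; conventions are assumed.
    """
    rng = np.random.default_rng(seed)
    centres = np.arange(-2, 3) * np.pi                           # five beam centres
    # receiver response of each element to each point source
    hs = np.sinc((np.asarray(src_pos)[None, :] - centres[:, None]) / np.pi)   # (5, K)
    amps = np.asarray(src_amp, dtype=complex)
    if coherent:
        phases = np.ones((T, len(amps)))                         # fixed source phases
    else:
        phases = np.exp(2j * np.pi * rng.random((T, len(amps)))) # random per snapshot
    noise = np.sqrt(N0 / 2) * (rng.standard_normal((T, 5))
                               + 1j * rng.standard_normal((T, 5)))
    return (phases * amps) @ hs.T + noise                        # (T, 5) image vectors

# e.g. two unit-amplitude sources a beamwidth apart, as in Figures 1 and 2:
I_coh = simulate_snapshots([-np.pi / 2, np.pi / 2], [1.0, 1.0], coherent=True)
I_non = simulate_snapshots([-np.pi / 2, np.pi / 2], [1.0, 1.0], coherent=False)
```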
Figure 3 (continued). (c) $P_2(y)f_2(y)$; (d) $P_3(y)f_3(y)$.
Figure 3 illustrates the performance of the method against a scene containing two coherent sources of unit amplitude separated by half of a standard beamwidth. Conventionally, the sources are unresolved, but they become resolved by iteration.

Finally, the method is illustrated with one example of a three-point scene containing both coherent and non-coherent sources. Figure 4 shows the profile function and reconstruction after four iterations of a scene consisting of three sources of amplitudes 1.0, 0.5 and 1.0 at positions $-2\pi$, $-\pi/4$ and $+5\pi/4$ respectively. The phases of the sources at $-2\pi$ and $-\pi/4$ remain fixed from snapshot to snapshot, whilst the phase of the third source varies (0.1 rad/snapshot). All sources are clearly located.
5. Discussion and summary

In this paper, a method of locating point sources in noise using the singular value method has been described. This uses the singular value method with an image-dependent profile function (as proposed by Luttrell (1985)) and develops the method to apply to time-varying scenes. The method may be applied to a single image snapshot of a scene, or to multiple snapshot data using a signal subspace or maximum likelihood approach. The importance of the method is that it can locate coherent sources as well as non-coherent sources.
Figure 4. Profile function and fourth reconstruction for 100 snapshots of a scene containing three point sources of amplitude 1.0, 0.5 and 1.0 at positions $-2\pi$, $-\pi/4$ and $5\pi/4$ respectively, with $N_0 = 10^{-3}$. (a) $P(y)$; (b) $P_4(y)f_4(y)$.
Thus, it can be applied to the output of an FFT to resolve sources in the same doppler gate, to resolve a source from its multipath return, and to locate stationary sources in a clutter background. Also, although applied here to focal-plane images, it does not require equally-spaced identical elements and can be applied in the aperture plane to a non-uniform array.

The method is an attempt to develop a general algorithm for point-source scene analysis which does not depend on the properties of the sources present. However, there is an increased computational cost compared with the MUSIC algorithm (Schmidt, 1979), for example, though this has not been quantified. Further work must be done in order to provide a complete assessment of the method before it can be applied confidently in a practical situation. For example, errors in position estimation need to be quantified; knowledge of receiver noise incorporated; a procedure developed to determine the number of iterations necessary and the threshold to apply to the reconstruction; and an improved method of iteration devised. The technique may also be applied to images of more than one dimension to improve, for example, both spatial and frequency resolution.
Acknowledgements

I am grateful to Dr. S.P. Luttrell and Mr. A.C. Sleigh for useful comments and suggestions.
References

Bertero, M., C. De Mol, E.R. Pike and J.G. Walker (1984). Resolution in diffraction-limited imaging, a singular value analysis. IV: The case of uncertain localization or non-uniform illumination of the object. Optica Acta 31(8).

Bertero, M. and E.R. Pike (1982). Resolution in diffraction-limited imaging, a singular value analysis. I: The case of coherent illumination. Optica Acta 29(6), 727-746.

Luttrell, S.P. (1985). Prior knowledge and object reconstruction using the best linear estimate technique. Optica Acta 32, 701-714.

Luttrell, S.P. and C.J. Oliver (1985). Prior knowledge in synthetic aperture radar (SAR) processing. RSRE Research Review, 73-77.

Schmidt, R. (1979). Multiple emitter location and signal parameter estimation. Proc. RADC Spectrum Estimation Workshop, Rome, NY.

Skolnik, M.I. (1980). Introduction to Radar Systems. McGraw-Hill, New York.

Webb, A.R. (1985). Multiple source location using radar focal-plane arrays - Non-coherent sources. RSRE SP4 Research Note 014.