Applied Acoustics 68 (2007) 752–765 www.elsevier.com/locate/apacoust
Sound source imaging of low-flying airborne targets with an acoustic camera array

T. Scott Brandes a,*, Robert H. Benson b

a Department of Interdisciplinary Engineering, Texas A&M University, College Station, TX 77845, USA
b Department of Physical and Life Sciences, Texas A&M University, Corpus Christi, TX 78412, USA
Received 16 June 2005; received in revised form 9 January 2006; accepted 7 April 2006 Available online 13 June 2006
Abstract

Two-dimensional images of sound source distribution from near-ground airborne sounds are created using an array of 32 microphones and time-domain beamforming. The signal processing is described and array configurations spanning a square area with a side length of 3.45 m, approximately five wavelengths for a 500 Hz sound, are examined. Simulations of a 32-element under-populated log6 × log6 spaced array are given for sound sources centered over the array at 250 Hz, 500 Hz, and 1000 Hz. Stochastically optimized array geometry with a simulated annealing algorithm is discussed, and a 32-element array optimized for a 500 Hz source is given along with a simulated image for direct comparison with the log6 spaced array. Images from field testing a 32-element under-populated log6 × log6 spaced array are provided for a small aircraft flyover. Results show that this type of acoustic camera generates accurate images of sound source location. Suggested uses include monitoring small aircraft flying too low to be detected by radar as well as monitoring ecological events, such as bird migration.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Sound source imaging; Passive imaging; Localization; Acoustic camera
* Corresponding author. Present address: Conservation International, Tropical Ecology Assessment and Monitoring (TEAM) Initiative, 1919 M Street, NW, Suite 600, Washington, DC 20036, USA. Tel.: +1 202 912 1580; fax: +1 202 912 0773.
E-mail addresses: [email protected] (T.S. Brandes), [email protected] (R.H. Benson).

0003-682X/$ - see front matter © 2006 Elsevier Ltd. All rights reserved.
doi:10.1016/j.apacoust.2006.04.009
1. Introduction

Generating source position images of airborne sounds is useful in a variety of fields, from military and law enforcement applications to bioacoustics and field biology. Sound source imaging provides a way to detect and track aircraft flying too low to be detected by radar in areas of concern, such as along international borders. Passive imaging systems, where no sound is actively emitted by the researcher, not only lend themselves to being unnoticed by the objects imaged but are the clear choice for studying environmental systems, since they provide minimal disturbance. Passive imaging is also a useful bioacoustics tool in field biology and conservation, where there is growing interest in monitoring bird migration and the foraging behavior of bats. The research presented here suggests the use of passive imaging to create two-dimensional images of airborne sound source distribution, for sources located in the far-field above an array of microphones, termed an acoustic camera.

There are several methods for passive localization of sound sources that have been applied to locating aircraft. One approach is to use time-of-arrival differences in a received signal to geometrically calculate the source position, based on the way sound propagates in a particular environment. Methods such as triangulation [1–3] and wavefront curvature estimation [4] use this approach and are effective in locating sound sources. They rely on several pairs of microphones spaced widely enough apart to accurately measure arrival time differences, and they generate an estimate of a sound source location within a volume; however, they do not work as well with multiple similar sound sources, and they do not readily lend themselves to generating an overall image of the solid-angle distribution of sound incident on a microphone array. To better accomplish this, methods involving beamforming are used.
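The time-of-arrival-difference approach above can be illustrated with a cross-correlation delay estimate for a single microphone pair. The following is a minimal sketch (NumPy assumed; the function name and the synthetic signal are ours, not from the paper):

```python
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival (seconds) of sig_b
    relative to sig_a from the peak of their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)  # lag in samples
    return lag / fs

# A broadband burst arriving 2 ms later at the second microphone.
rng = np.random.default_rng(0)
fs = 8000
sig_a = rng.standard_normal(800)
sig_b = np.roll(sig_a, int(0.002 * fs))
dt = tdoa(sig_a, sig_b, fs)  # close to 0.002 s
```

With several such pairs and known pair geometry, this delay set is what the triangulation and wavefront-curvature methods cited above invert for a source position.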
As with any type of imaging system, the resolution of the image is dependent on both the frequency of the received signal and the diameter of the receiver. Microphone elements are necessarily small, so the best way to increase the diameter of the receiver is to use numerous microphones spread over several wavelengths of the source sound. With arrays of microphones, beamforming can be a useful method for generating images of sound source distribution. These techniques have been developed in other fields such as active microwave imaging [5–7] and radio astronomy [8]. In the field of acoustics, beamforming is used in active underwater acoustic imaging [9], as well as active underground imaging [10]. Some beamforming work has been done with airborne sounds as well, with both linear arrays [11] and with an elaborate 64-element volumetric array [12]. We hope to demonstrate through the work presented here that simple two-dimensional arrays of 32 microphones can inexpensively (under $2500 USD) be used for sound source imaging of far-field sources above an array of microphones. Furthermore, this work discusses two types of array geometry and suggests the use of simulated annealing, a stochastic optimization method, for determining array geometry.

2. Theory

An acoustic camera creates an image by using a set of cophased microphones. The idea is that signals from an array of microphones are phase-synchronized and summed to form a single signal from the entire array. This process is termed beamforming, and is analogous to combining the light from several telescopes pointed at the same object, adjusting their
phases, and focusing their summed outputs to get a single image. This process provides the angular resolution equivalent to a single receiver of the same diameter as the array of receivers, but not the same waveform gathering capacity. Once the beam is formed, it is scanned over the entire field of view (FOV) by adjusting phase shifts of the signal from each element in the array. In this way, an image of the intensity of the sound source distribution throughout the FOV of the camera is mapped out. Sound sources distant enough to form plane waves incident on the array are considered in the far-field, where no radial component of the sound source is determined, so they only form two-dimensional images. It is for far-field sound sources that our system is designed. Since the array relies on phase shifts within its elements, and not a coherent wavefront collected by a single microphone, its FOV is quite large, and limited by the beam widths of its individual microphones and their orientation.

The use of beamforming for this sort of imaging provides the fundamental framework for the signal processing behind the acoustic camera. Output signals from linear shift invariant (LSI) systems can be expressed as the convolution product in time of the input signal and the system's impulse response [13]. Acoustic signals recorded with a microphone are of this type and can be expressed in this mathematical relation [14], shown in Eq. (1), where B, the measured signal from the microphone, is the convolution product in time of w, the incident sound, and W, the beam pattern of the microphone

$$B(t, \vec{r}) = W(t, \vec{r}) * w(t, \vec{r}) \tag{1}$$

With N microphones, the overall signal captured by the microphone array becomes

$$B(t, \vec{r}) = \sum_{n=1}^{N} W_n(t, \vec{r}) * w_n(t, \vec{r}) \tag{2}$$
In the far-field case, it is not possible to resolve the distance a sound source is from the array, so a unit vector with two degrees of freedom is used, representing a radial projection of a location in the sky, centered in the microphone array [8]. Here, x and y represent axes in the horizontal plane and θx is the angle from the axis normal to this plane to the unit vector ŝ, in the direction of the x-axis (Fig. 1)

Fig. 1. Coordinate system used to describe the direction of a sound source ŝ.

$$\hat{s} = \begin{bmatrix} \sin\theta_x \\ \sin\theta_y \\ \sqrt{1 - \sin^2\theta_x - \sin^2\theta_y} \end{bmatrix} \tag{3}$$
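Eq. (3) transcribes directly into code; a small helper (function name is ours):

```python
import numpy as np

def look_vector(theta_x, theta_y):
    """Unit direction vector s-hat of Eq. (3) from look angles in radians."""
    sx, sy = np.sin(theta_x), np.sin(theta_y)
    # The z-component follows from requiring |s| = 1 for an upward-looking array.
    return np.array([sx, sy, np.sqrt(1.0 - sx ** 2 - sy ** 2)])

s = look_vector(0.3, -0.2)
```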
In the case of M sources, each sound originates from a location in the sky, ŝm, and the total sound wave incident on microphone n is given by

$$\Psi_n(t) = \sum_{m=1}^{M} w_n(t, \hat{s}_m) \tag{4}$$

Substituting Eq. (4) into Eq. (2), the signal from N microphones of M independent sound sources becomes

$$B(t, \hat{s}) = \sum_{n=1}^{N} W_n(t, \hat{s}) * \sum_{m=1}^{M} w_n(t, \hat{s}_m) = \sum_{n=1}^{N} W_n(t, \hat{s}) * \Psi_n(t) \tag{5}$$
The ideal beam pattern to form is one that triggers a response only when it is pointed directly at a sound source. This can be achieved by using a beam pattern for microphone n of the form [8]

$$W_n(t, \hat{s}) = \frac{w_n}{N}\,\delta\!\left(t - \frac{1}{c}\hat{s} \cdot \vec{x}_n\right) \tag{6}$$

Here, δ is a Dirac delta function, x⃗n is the location of microphone n, c is the sound speed, and wn is the gain for microphone n. Substituting Eq. (6) into Eq. (5) gives

$$B(t, \hat{s}) = \sum_{n=1}^{N} \frac{w_n}{N}\,\Psi_n\!\left(t - \frac{1}{c}\hat{s} \cdot \vec{x}_n\right) \tag{7}$$

This provides a way to calculate the image value for a particular sky location, at time t, if a continuous sound signal at each microphone is available. Since digital sampling hardware provides a discrete signal, a continuous signal must be reconstructed. This is accomplished with an interpolation function [13]

$$x_a(t) = \sum_{n=-\infty}^{\infty} x(n)\,\frac{\sin\left[\frac{\pi}{T}(t - nT)\right]}{\frac{\pi}{T}(t - nT)} \tag{8}$$

Here, xa represents the reconstructed analog signal, and T is the sampling interval. Ideally, the reconstructed signal is generated with an infinite sequence. Fortunately, a windowed sequence provides a close approximation. Using a ±Q sample windowed version of Eq. (8), a continuous approximation of Ψn is calculated

$$\Psi_n\!\left(t - \frac{1}{c}\hat{s} \cdot \vec{x}_n\right) \approx \sum_{q=-Q}^{Q} \Psi_n(q)\,\frac{\sin\left[\frac{\pi}{T}\left(t - \frac{1}{c}\hat{s} \cdot \vec{x}_n - qT\right)\right]}{\frac{\pi}{T}\left(t - \frac{1}{c}\hat{s} \cdot \vec{x}_n - qT\right)} \tag{9}$$

With Eqs. (9) and (7), a way to calculate the image element at sky location ŝ at time t is given by

$$B(t, \hat{s}) = \sum_{n=1}^{N} \frac{w_n}{N} \sum_{q=-Q}^{Q} \Psi_n(q)\,\frac{\sin\left[\frac{\pi}{T}\left(t - \frac{1}{c}\hat{s} \cdot \vec{x}_n - qT\right)\right]}{\frac{\pi}{T}\left(t - \frac{1}{c}\hat{s} \cdot \vec{x}_n - qT\right)} \tag{10}$$
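Eq. (10) amounts to delay-and-sum beamforming with windowed-sinc interpolation of each microphone's sample stream. A minimal NumPy sketch follows; the variable names, the uniform gains wn = 1, and the window half-width Q = 8 are our assumptions, not values from the paper:

```python
import numpy as np

def image_value(samples, mic_pos, s_hat, t, fs, c=343.0, Q=8):
    """Evaluate Eq. (10): the beamformer output B(t, s_hat) for one sky
    direction. samples[n, k] is sample k of microphone n, mic_pos[n] its
    3-D position in meters (z = 0 for a planar array)."""
    N, K = samples.shape
    T = 1.0 / fs
    B = 0.0
    for n in range(N):
        tau = t - np.dot(s_hat, mic_pos[n]) / c   # steering delay of Eq. (7)
        q0 = int(round(tau * fs))                 # nearest sample index
        for q in range(max(q0 - Q, 0), min(q0 + Q + 1, K)):
            # np.sinc(x) = sin(pi x)/(pi x), matching the kernel of Eq. (9).
            B += samples[n, q] * np.sinc((tau - q * T) / T) / N
    return B

# Sanity check: a broadside plane wave (s_hat along z) needs no delays,
# so the image value reduces to the averaged signal at time t.
fs = 8000
rng = np.random.default_rng(1)
mic_pos = np.zeros((32, 3))
mic_pos[:, :2] = rng.uniform(0.0, 3.45, size=(32, 2))  # planar array
tone = np.cos(2 * np.pi * 500 * np.arange(1000) / fs)
samples = np.tile(tone, (32, 1))
B = image_value(samples, mic_pos, np.array([0.0, 0.0, 1.0]), t=0.05, fs=fs)
```

Scanning `s_hat` over a grid of look directions, as in Eq. (3), and taking the magnitude at each point produces the sky image.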
The real part of the incoming sound wave is captured by each microphone. To generate the imaginary part of this complex signal, the real sequence is put through a Hilbert transform FIR filter of length N, and the real sequence is delayed by (N − 1)/2 samples [15]. Once the complex signal is generated, Eq. (10) is used to calculate both the real and imaginary components of the image, and the magnitude is used as the image value. In this way, Eq. (10) is systematically calculated at each location ŝ throughout the FOV to generate a composite image of the sound source distribution at time t.

3. Simulations

Arrays of discrete elements are used for a variety of purposes, including radio telescopes and interferometers, where there has been much research [16]. There is no clear deterministic way to find an optimal solution for microphone placement within an array, and random configurations can perform quite well. However, they can often be outperformed by aperiodic configurations [6]. Arrays with evenly spaced elements have high side or grating lobes [16]. Logarithmically spaced elements provide a systematic way to create arrays with reasonably low side-lobes, and this is our starting point.

3.1. Under-populated log–log grid

In trying to balance what is readily achievable with hardware against what is ideal theoretically, we chose to use a 32-microphone array. To choose the 32 microphone locations within the 36-position log6–log6 grid of size 10λ × 10λ, we ranked 10,000 random selections of 32-microphone configurations among the 36 grid locations. The configuration that provided the lowest side-lobe magnitude is shown in Fig. 2, and was our choice for the log-spaced array to test. Simulated images with a 90-degree FOV of a sound source centered over this array with dimensions 3.45 m by 3.45 m (10λ for a 1 kHz sound) at frequencies of 250 Hz, 500 Hz, and 1000 Hz are shown in Fig. 3.
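The ranking of random 32-of-36 selections can be sketched as below. The side-lobe measure here is the broadside array factor of the planar array sampled over look directions; the grid spacing is a plausible log-style stand-in for the paper's log6–log6 grid, and only 100 trials are run for brevity (the paper ranked 10,000):

```python
import numpy as np

def max_sidelobe(mic_xy, freq, c=343.0, n_grid=41):
    """Peak of the broadside beam pattern away from the main lobe,
    normalized so the main lobe is 1. Crude: only the center grid cell
    is masked, so main-lobe shoulders count toward the 'side-lobe'."""
    k = 2 * np.pi * freq / c
    u = np.linspace(-1.0, 1.0, n_grid)  # u = sin(theta_x), v = sin(theta_y)
    ux, uy = np.meshgrid(u, u)
    # Array factor: coherent sum of per-microphone phase terms.
    phase = np.exp(1j * k * (np.outer(ux.ravel(), mic_xy[:, 0])
                             + np.outer(uy.ravel(), mic_xy[:, 1])))
    af = np.abs(phase.sum(axis=1)).reshape(n_grid, n_grid) / len(mic_xy)
    af[n_grid // 2, n_grid // 2] = 0.0  # mask the main-lobe peak
    return af.max()

# Rank random 32-of-36 picks on a 6 x 6 grid with log-style spacing.
rng = np.random.default_rng(0)
g = np.concatenate(([0.0], np.logspace(0.0, 1.0, 5))) * 0.345  # 0..3.45 m
grid = np.array([(x, y) for x in g for y in g])
picks = [rng.choice(36, size=32, replace=False) for _ in range(100)]
best = min(picks, key=lambda idx: max_sidelobe(grid[idx], 500.0))
best_sl = max_sidelobe(grid[best], 500.0)
```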
By simulating the images of sound sources at a variety of frequencies with the same size array, array performance under a wider range of operating conditions can be assessed. These images are generated by simulating signals received at each microphone
Fig. 2. The best 32-element configuration on a 36-position log6–log6 grid of dimension 10λ × 10λ, from 10,000 random configurations of 32 microphones on this grid pattern. Notice the asymmetry along the diagonal.
Fig. 3. Top-down and side-view of simulated images with a 90-degree FOV of a sound source of frequency (a) 250 Hz, (b) 500 Hz, (c) 1000 Hz, centered over the array shown in Fig. 2 of dimension 3.45 m × 3.45 m. Normalized amplitudes are shown on the right-hand side. (Numerous figures are given in both color and gray scale. The color images have a better depth of field, but the gray scale are more accurate with black and white printing or photocopying. In these images, the color ones are on top, and the gray scale below.)
in the array, by a sound source centered over the array at each of these frequencies, and then using the same imaging algorithm that is performed on actual sounds collected in the field. In these images, a distinct central peak is clear in each frequency range, as well as limiting angular resolution as a function of the ratio of source frequency wavelength to array diameter.

3.2. Simulated annealing

Another approach to choosing an array pattern is to use a stochastic optimization algorithm, such as simulated annealing. With the microphone array, this optimization method is implemented by generating an array with microphones in a random pattern and randomly shifting the microphone positions, each time evaluating the resultant image created of a point source centered over the array. If the image quality improves, the positions are kept. However, if the image quality does not improve, or is degraded, the configuration is kept only if a probability condition is met. The possibility of temporarily keeping a worse configuration helps keep the configuration from settling into a local minimum, increasing the chance that it will settle into a global minimum. With these array configurations, side-lobe magnitude and main-lobe beam-width are minimized. As the iterations progress, the random shifts in microphone placement decrease, and the probability of keeping a worse array configuration decreases, analogous to slow cooling. Eventually, a final array configuration is reached.

Numerous simulated annealing methods have been found useful as a means for solutions to stochastic optimization, particularly when a desired global minimum or maximum is hidden among numerous local minima or maxima [17]. We implemented one that minimizes both the magnitude of the largest side-lobe and the main-lobe beam-width at half power for a 500 Hz source. If the total change in maximum side-lobe magnitude is labeled ΔEsl, and the change in main beam-width at half power is labeled ΔEbwhp, then a way to quantify these combined measures of beamforming quality change, ΔE, is

$$\Delta E = (|\Delta E_{sl}| + |\Delta E_{bwhp}|)/2 \tag{11}$$
Using Eq. (11), after microphone adjustments the new configurations are kept if

$$\epsilon < \frac{1}{1 + e^{\Delta E / T(t)}} \tag{12}$$

where ε is a random number between 0 and 1, and the adjustment-likelihood rate, T, as a function of iteration t, is given by

$$T(t) = \frac{T_0}{1 + \ln(t)} \tag{13}$$
where T0 is an initialization constant. Setting T0 to 1000, after around 1500 iterations the array pattern is very near an optimized configuration (Fig. 4); the final array pattern, with dimensions approximately 3.45 m by 3.45 m, is shown in Fig. 5. The simulated image, with a 90-degree FOV, generated from a 500 Hz sound source centered over this array, is shown in Fig. 6. Once again, we have a very distinct central peak, but we also have a
Fig. 4. Maximum side-lobe magnitude (normalized to the main lobe) vs. iteration number for the simulated annealing algorithm for the placement of the 32 microphones within the array.
Fig. 5. The microphone configuration after 10,000 iterations of the simulated annealing algorithm.
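The annealing loop described above, with the acceptance rule of Eq. (12) and the cooling schedule of Eq. (13), can be sketched generically. The cost function below is a stand-in for illustration; the paper's cost combines side-lobe magnitude and beam-width per Eq. (11), and the step-size schedule is our assumption:

```python
import numpy as np

def anneal(cost, x0, steps=1500, T0=1000.0, step0=0.3, rng=None):
    """Simulated annealing with the sigmoid acceptance rule of Eq. (12)
    and the cooling schedule T(t) = T0 / (1 + ln t) of Eq. (13)."""
    rng = rng or np.random.default_rng(0)
    x, E = x0.copy(), cost(x0)
    for t in range(1, steps + 1):
        T = T0 / (1.0 + np.log(t))
        scale = step0 * T / T0              # shrink random shifts as we cool
        cand = x + rng.normal(0.0, scale, size=x.shape)
        dE = cost(cand) - E
        # Keep improvements outright; keep worse configurations only when
        # the probability condition of Eq. (12) is met.
        if dE < 0 or rng.random() < 1.0 / (1.0 + np.exp(dE / T)):
            x, E = cand, E + dE
    return x, E

# Stand-in cost: distance-squared from the origin.
x_opt, E_opt = anneal(lambda x: float(np.sum(x ** 2)), np.array([3.0, -2.0]))
```

For the array problem, `x` would hold the stacked microphone coordinates and `cost` would evaluate the simulated beam pattern of the candidate layout.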
Fig. 6. Top-down and side-view of simulated image, with a 90-degree FOV, of a sound source of 500 Hz centered over the array formed from a simulated annealing method, shown in Fig. 5, of dimension 3.45 m × 3.45 m.
noticeable improvement in side-lobe reduction compared with the log-spaced array. The main-lobe is a little wider than that for the aperiodic grid (Fig. 3b). That could perhaps be reduced by increasing the weighting of the change in main beam-width at half power relative to the change in maximum side-lobe magnitude in Eq. (11).

3.3. Sensitivity of microphone coordinate measurement

Along with array configuration performance, it is worthwhile examining image degradation due to errors in microphone placement. Fortunately, the acoustic camera is not overly sensitive to these errors, as shown in Fig. 7, where the degradation of main-lobe magnitude is shown as a function of the standard deviation of the error in microphone placement. The maximum side-lobe magnitude stays below 0.6 of the error-free main-lobe magnitude throughout the range of microphone placement errors shown. For the log-spaced array, once the standard deviation of the microphone position errors exceeds 0.2λ, the magnitude of the main-lobe drops off dramatically. For a 1000 Hz
Fig. 7. Normalized magnitude of the main-lobe as a function of microphone position error, measured as the standard deviation of the radial placement error in units of wavelength λ.
sound, this allows for nearly a 7 cm error in radial distance within the array plane from the desired grid location for each microphone before the main-lobe drops below 95% of its optimal magnitude.

3.4. Field results

We built a microphone array in the pattern of Fig. 2, measuring 3.45 m × 3.45 m, and tested its ability to generate flyover images of small airplanes, four-passenger Cessnas, just off the end of a runway at Easterwood Airport in College Station, TX. This configuration was chosen because the field work was done before the simulated annealing simulations. We chose to test our system on small airplanes, as opposed to migrating birds, since they provide continuous sounds and travel in a predictable trajectory. The array is portable, and the sound was captured with a personal computer equipped with a 16-bit, 200 kHz, 32-channel differential data acquisition board (PCI-DAS6402/16, Omega Engineering, Inc.). We designed and built 32 boards, each containing a microphone, amplifier, and low-pass filter (1.2 kHz), at around $6 USD per board; the entire system cost, with computer, is under $2500 USD [18]. The sounds are stored on a hard drive and post-processed for sound source image generation.

We show images generated from a flyover of a Cessna on a low-wind, warm, sunny morning at a height on the order of 100 m above the array, in two separate frequency bands. The images are generated independently, each with a 50 Hz bandpass filter centered around the frequency of interest. The highest frequency recorded from the plane that is audible throughout the flyover is 250 Hz. Four of the images generated from this flyover are shown in sequence (Fig. 8). The entire flyover throughout the 90-degree FOV takes about 5.5 s, and each image in Fig. 8 is separated by about 1 s. A comparison of these images with the 250 Hz simulation in Fig. 3 shows that the camera's actual performance is quite similar to that of the simulation.
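The 50 Hz-wide band-limiting around each frequency of interest can be done with a standard bandpass filter before beamforming; a zero-phase sketch using SciPy (assumed available; the Butterworth family and filter order are our choices, not stated in the paper):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandlimit(x, center, fs, width=50.0, order=2):
    """Zero-phase band-pass of `width` Hz centered on `center` Hz,
    mirroring the 50 Hz analysis bands used for the flyover images."""
    nyq = fs / 2.0
    lo = (center - width / 2.0) / nyq
    hi = (center + width / 2.0) / nyq
    b, a = butter(order, [lo, hi], btype="band")
    return filtfilt(b, a, x)  # forward-backward: no phase distortion

# A 250 Hz + 500 Hz mixture; the 250 Hz band should pass, 500 Hz not.
fs = 8000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 250.0 * t) + np.sin(2 * np.pi * 500.0 * t)
y = bandlimit(x, 250.0, fs)
# Amplitudes of each tone in the filtered output, by projection.
a250 = 2.0 * abs(np.mean(y * np.exp(-2j * np.pi * 250.0 * t)))
a500 = 2.0 * abs(np.mean(y * np.exp(-2j * np.pi * 500.0 * t)))
```

A zero-phase (forward-backward) filter is a reasonable choice here because the beamformer of Eq. (10) depends on inter-channel delays, which a phase-distorting filter would corrupt.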
In particular, the angular resolution of the sound source from the field test matches the simulation closely. The aircraft is clearly visible upon entering the 90-degree FOV, and its straight trajectory throughout the FOV is clearly discernible.

Fig. 8. Top-down and side-view of actual images with a 90-degree FOV generated from an airplane flyover in the 250 Hz spectrum. The images (a)–(d) are taken in sequence.

There is some noticeable distortion, though, as the plane nears the edge of the FOV, which is likely due to air turbulence. The highest frequency we recorded well from this flyover is 500 Hz. Unlike the 250 Hz signal, the 500 Hz signal fades soon after the aircraft passes midway over the microphone array, and the plane can no longer be tracked. Three images generated from this flyover are shown in sequence (Fig. 9). As with the 250 Hz images, these closely match the simulated 500 Hz images. In these images the angular resolution of the plane has increased, as expected, and the side-lobes are small enough not to interfere with accurate sound source imaging.

The fading of the 500 Hz signal just after the plane crosses the array provides a way to ground-truth the images by looking at a spectrogram of the flyover (Fig. 10). At 4.0 s on the time axis in Fig. 10, notice that the change in Doppler shift is at the midway point of its descent, which corresponds to the aircraft reaching its closest approach to the array. Notice also that the sound level of the 500 Hz component drops off at this point, which matches up very well with when the signal-to-noise ratio in that frequency band becomes too low for a beamforming image to be generated effectively; the last image in Fig. 9
Fig. 9. Top-down and side-view of actual images with a 90-degree FOV generated from an airplane flyover in the 500 Hz spectrum. The images (a)–(c) are taken in sequence. This is the same flyover event as shown in Fig. 8.
is of the plane at the midway point of its flyover, when it has reached its closest approach to the array.

4. Discussion

Beamforming can be done in either the time-domain or the frequency-domain. In the time-domain, beamforming is done with a time delay operator (Eq. (7)), whereas in the frequency-domain a phase-shift operator is used [7,8,19]. The use of a phase-shift operator requires a narrow-bandwidth signal, which produces a band-limited image. Creating broadband images with frequency-domain beamforming requires superimposing narrowband images; thus, beamforming in the time-domain lends itself more readily to broadband sources. Our system uses time-domain beamforming, and the flyover image sets in Figs. 8 and 9 are generated with band-limited signals only to show how the actual array performance compares with simulations of sound sources at specific frequencies. Since our system uses time-domain beamforming, it performs well with signals spanning a wide bandwidth. Our system is particularly useful for Doppler-shifted signals since
Fig. 10. Spectrogram of the flyover event shown in Figs. 8 and 9. The plane is within the field of view of the array from about 1 s to 7 s. Notice that the 500 Hz component fades out at 4 s, midway through the flyover.
there is no concern about a signal drifting outside of a frequency bin. Additionally, this sort of acoustic camera is well suited for biological sound sources such as birds and bats, whose calls can sweep through frequencies. Furthermore, by monitoring a broad range of frequencies, an acoustic camera could effectively create flyover images of numerous species of birds or bats, since frequency bandwidths among species can vary considerably.

One application that this sort of acoustic camera is well suited for is monitoring the effectiveness of collision avoidance devices placed on cellular phone towers to prevent migrating birds from colliding with them. Mortality of migrating birds due to collisions with cellular phone towers has become a particular concern for conservationists [20]. Migrating birds fly low enough to the ground to collide with the upper parts of cellular phone towers, and their flight calls are loud enough to be detected by ground microphones.

Our system can also be used to track the trajectories of sound sources. Much has been written on this topic, and an excellent overview is found in Becker [21]. From the vantage point of the array, the motion information available for the sound source is limited to the time history of the line-of-sight angles and the Doppler-shifted frequency detected. Using the coordinate system shown in Fig. 1, the line-of-sight angles of the target, θx and θy, are determined when generating the image, and the radial velocity, s′, can be calculated from the measured Doppler-shifted frequency, ν, where ν0 is the non-shifted frequency and c is the sound speed

$$\nu(t) = \nu_0 \left(1 - \frac{s'(t)}{c}\right) \tag{14}$$
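Inverting Eq. (14) for the radial velocity is immediate (the function name is ours):

```python
def radial_velocity(nu, nu0, c=343.0):
    """Radial velocity from Eq. (14): s'(t) = c * (1 - nu(t) / nu0).
    Positive for a receding source (observed frequency below nu0)."""
    return c * (1.0 - nu / nu0)
```

For example, a 500 Hz tone observed at 480 Hz implies a range rate of roughly 13.7 m/s away from the array.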
Estimating the Doppler-shifted velocity requires signal processing beyond that required for beamforming, and much can be done with the line-of-sight angles alone (bearings-only tracking) [22,23]. In Figs. 8 and 9, the peak indicating sound source location is well defined, regardless of the beam-width size, and an accurate estimate of the line-of-sight angles is available. This holds true for sources that do not fly directly over the array as well; examples of how the system performs for sources off to the side are shown in the images of the sound source near the edge of the field of view in Figs. 8a, b, and 9a.

5. Summary and future uses

The work presented here suggests that an acoustic camera consisting of an array of microphones is a viable way to generate source position images of near-ground airborne sounds, and that this can be done inexpensively with a 32-element planar array. An under-populated log6 × log6 array of 32 microphones provides a scanning beam with reduced side-lobes that generates accurate time-series images of the position of low-flying aircraft. Furthermore, improvements in side-lobe reduction can confidently be obtained by modifying a random array geometry with the stochastic optimization method of simulated annealing. If needed for the system of study, improvements can be made in angular resolution by using arrays spanning a larger area, or by monitoring sources with higher frequency components. Also, the addition of more microphones in the array can increase beamforming performance by reducing side-lobes while allowing the array span to increase to improve angular resolution, and this could be done without increasing the cost significantly. Acoustic cameras of this type are not only useful for imaging low-flying aircraft, but could also be used to monitor ecological systems, particularly since there is the possibility of identifying biological sound sources to the level of species.
This, coupled with improved angular resolution, would make acoustic cameras useful in monitoring migrating birds or in studying the behavioral ecology of bats foraging in an open area, provided that the microphones are sensitive enough and there is little to no wind or rain. Multiple arrays of microphones could be networked to provide more comprehensive sky coverage for flyovers along wide areas. Along with furthering research possibilities in the natural sciences, acoustic cameras have a use in the field of conservation. In particular, acoustic cameras could be placed around radio and cellular phone towers that migrating birds are drawn to and collide with, to monitor the effectiveness of bird avoidance devices placed on the towers. Further research on the implementation of acoustic cameras could bring benefits in increased security along international borders and further understanding in ecology, as well as creating better conservation practices.

Acknowledgments

We thank William Neill at Texas A&M University for assisting with hardware acquisition, and both Benjamin Perry and Michael Baden at the Georgia Tech Research Institute for their insightful comments on our drafts.

References

[1] Ferguson BG. Time-delay estimation techniques applied to the acoustic detection of jet aircraft transits. J Acoust Soc Am 1999;106(1):255–64.
[2] Ferguson BG, Criswick LG, Lo KW. Locating far-field impulsive sound sources in air by triangulation. J Acoust Soc Am 2002;111(1):104–16.
[3] Blumrich R, Altmann J. Medium-range localization of aircraft via triangulation. Appl Acoust 2000;61:65–82.
[4] Ferguson BG. Variability in the passive ranging of acoustic sources in air using a wavefront curvature technique. J Acoust Soc Am 2000;108(4):1535–44.
[5] Steinberg BD. Radar imaging from a distorted array: the radio camera algorithm and experiments. IEEE Trans Antenn Propag 1981;29(5):740–8.
[6] Steinberg BD. Microwave imaging with large antenna arrays: radio camera principles and techniques. New York: John Wiley & Sons; 1983.
[7] Steinberg BD, Subbaram HM. Microwave imaging techniques. New York: John Wiley & Sons; 1991.
[8] Brown ST. Radio camera arrays for radio astronomy. M.S. thesis, Ohio State University, Columbus, 1993.
[9] Papazoglou M, Krolik JL. High resolution adaptive beamforming for three-dimensional acoustic imaging of zooplankton. J Acoust Soc Am 1996;100(6):3621–30.
[10] Frazier CH, Çadallı N, Munson Jr DC, O'Brien Jr WD. Acoustic imaging of objects buried in soil. J Acoust Soc Am 2000;108(1):147–56.
[11] Ferguson BG. Minimum variance distortionless response beamforming of acoustic array data. J Acoust Soc Am 1998;104(2):947–54.
[12] Rigelsford JM, Tennant A. A 64 element acoustic volumetric array. Appl Acoust 2000;61:469–75.
[13] Proakis JG, Manolakis DG. Digital signal processing: principles, algorithms, and applications. 3rd ed. Upper Saddle River (NJ): Prentice-Hall; 1996.
[14] Ziomek LJ. Fundamentals of acoustic field theory and space-time signal processing. Ann Arbor (MI): CRC Press; 1995.
[15] Embree PM, Danieli D. C++ algorithms for digital signal processing. Upper Saddle River (NJ): Prentice-Hall PTR; 1999.
[16] Kraus JD. Radio astronomy. 2nd ed. Powell (OH): Cygnus-Quasar Books; 1986.
[17] Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Numerical recipes in C. 2nd ed. Cambridge: Cambridge University Press; 1992.
[18] Brandes TS. Acoustic camera design and implementation. Ph.D. dissertation, Texas A&M University, 2002.
[19] Stergiopoulos S. Advanced beamformers. In: Stergiopoulos S, editor. Advanced signal processing handbook. NY: CRC Press; 2001 [Chapter 6].
[20] Trapp JT. Bird kills at towers and other human-made structures: an annotated partial bibliography (1960–1998). US Fish and Wildlife Service, Arlington, VA; 1998. Available from: http://www.fws.gov/migratorybirds/issues/tower.html. Site last visited December 30, 2005.
[21] Becker K. Target motion analysis (TMA). In: Stergiopoulos S, editor. Advanced signal processing handbook. NY: CRC Press; 2001 [Chapter 9].
[22] Nardone SC, Lindgren AG, Gong KF. Fundamental properties and performance of conventional bearings-only target motion analysis. IEEE Trans Automat Contr 1984;29(9):775–87.
[23] Pham DT. Some quick and efficient methods for bearing-only target motion analysis. IEEE Trans Signal Process 1993;41(9):2737–51.