Underwater Acoustic Imaging using Autonomous Vehicles

Yue Wang* and Islam Hussein*
* Mechanical Engineering Department, Worcester Polytechnic Institute, Worcester, MA 01609, USA (e-mail: {yuewang, ihussein}@wpi.edu).
Abstract: This paper studies the underwater acoustic imaging problem using a fleet of autonomous vehicles. The integration of a guidance/control scheme with the acoustic imaging process is discussed, and a sensor model based on the acoustic sensor's beam pattern is presented. The goal is to obtain a sufficiently accurate image of an underwater profile using a fleet of autonomous vehicles. The dynamic coverage control law used in this paper guarantees that a desired amount of satisfactory-quality samples is collected at every point. Numerical simulations are presented to illustrate the main results.

Keywords: Acoustic imaging, signal processing, dynamic coverage control, underwater technology.

1. INTRODUCTION

Underwater exploration is important for many scientific and industrial applications, yet less than 1% of the earth's sea floor has been explored. Detailed observation is extremely difficult since the ocean is vast and dark. Therefore, advanced underwater imaging technologies are of great interest for underwater applications. Acoustic imaging is an active research field devoted to techniques for the formation and processing of images generated from raw signals acquired by an acoustic system; see Murino and Trucco (2000) and Istepanian and Stojanovic (2002).

There has also been growing interest in dynamic (also known as "effective") coverage control problems with limited-range sensors. Different from the fixed sensor network problems studied by Okabe and Suzuki (1997) and Du et al. (1999), and from the redeployment approaches for mobile sensor coverage networks developed by Cortés et al. (2004) and Ganguli et al. (2005), the goal in dynamic or effective coverage control is to collect enough high-quality data at each point in a domain of interest, for example, the seabed in underwater applications. In the papers by Hussein and Stipanović (2006a,b, 2007), the problem of dynamically covering a given region D in R² using a fleet of N mobile agents with limited-range sensors is investigated. The authors present a control strategy that aims at surveying each point within a given domain equally, up to some preset desired level C*, with elements of flocking and collision avoidance included in the control laws.

The literature on multi- and single-vehicle underwater applications is relatively new, owing to recent advances in autonomous underwater vehicles (AUVs) and in underwater positioning and communication technologies. Leonard et al. (2006) study the use of autonomous underwater vehicles with homogeneous sensors to estimate scalar fields for ocean sampling. Based upon dynamic coverage control theory, Wang and Hussein (2007) present an effective
method for underwater sampling using a fleet of autonomous underwater vehicles with vision-based camera sensors.

By integrating acoustic imaging and dynamic coverage control, this paper presents an effective coverage control scheme similar to that of Hussein and Stipanović (2006a,b, 2007). We apply the control theory to fleets of autonomous vehicles based on the beam pattern of the emitted acoustic signal for underwater sampling. We then illustrate the performance of the integrated system through numerical simulations; the system is also currently under experimental testing by the authors at WPI.

The contribution of this paper is as follows. In Section 2, we introduce the basic formulations and assumptions used in this paper; a brief survey of the guidance/control scheme and the acoustic imaging process is presented, and their integration is discussed. In Section 3, we summarize the basic mathematical equations used in the beamforming algorithm; the beam pattern function introduced there is used as the acoustic sensor model in Section 4. Assuming (for simplicity) that all vehicles move along a horizontal line within a given underwater environment, numerical simulations are presented in Section 5 to illustrate the problem. The paper concludes with a summary and a discussion of current and future research in Section 6.

2. INTEGRATION OF GUIDANCE/CONTROL AND ACOUSTIC IMAGING

The system is composed of two main tasks: vehicle motion guidance and acoustic image processing. We first investigate each part in detail in the following two subsections, and then present schemes for integrating guidance and imaging.

2.1 Guidance/Control Scheme

The basic goal of the control part is to use a fleet of autonomous vehicles to collect enough imaging data at each
location in an underwater domain. Domains of interest are generally 3-D volumes in the ocean, with vehicles moving in all three directions. A simpler scenario restricts vehicle motion to a horizontal plane (for example, a rectangle), which defines a planar configuration space for the vehicles; the space to be scanned is then a given area below the plane of motion. For the sake of simplicity, we consider the still simpler 1-D case where all vehicles move along a horizontal line (the configuration space) and the imaging profile is a curve beneath the configuration line (see Figure 1).

Fig. 1. One-dimensional configuration space scenario.

The simulation results obtained for this simple 1-D scenario generalize to 2-D horizontal motions in a straightforward manner. For the 3-D configuration space case, however, we need to include gravity and buoyancy in the vertical direction; this introduces nonlinearities in the equations of motion, which is currently under investigation by the authors. In this paper, we assume linear kinematics to describe the motion of the vehicles.

We first introduce some notation. The symbol V denotes a vehicle. The configuration space of a vehicle V is denoted by Q = R². This is the (planar) space in which the vehicles move. Let D be a compact subset of R² denoting the region that the network is required to cover; the vehicles themselves are free to move anywhere on the entire real plane R². Let N be the number of vehicles in the fleet, and let q_i ∈ Q denote the position of a specific vehicle V_i, i ∈ I = {1, 2, 3, ..., N}. Each vehicle V_i, i ∈ I, satisfies the simple first-order equation of motion

$$\dot{q}_i = u_i, \quad i \in I, \qquad (1)$$

where u_i ∈ R is the control velocity of vehicle V_i in the x-direction. There is no control in the vertical z-direction other than the control forces that maintain the buoyancy of the vehicle.

A point in the domain D is denoted by q̃. When D ⊂ R², the point q̃ ∈ D has a position (r_i, β_i) with respect to vehicle V_i's position q_i, where the radial coordinate r_i is the radial distance from the vehicle position q_i to q̃, and β_i is the clockwise angle from the vertical axis passing through vehicle V_i. The control goal is to guide the autonomous vehicles so that the network obtains an accurate estimate of the underwater domain.

2.2 Acoustic Imaging Process

While collecting imaging data during the guidance/control part, we simultaneously need acoustic imaging technology to process the images and estimate the profile of the seabed. In underwater imaging, the scene under investigation, the seabed in our case, is generally first insonified by an acoustic signal s(t); the backscattered echoes acquired by the system are then processed to create the profile. This process can be performed by two different approaches: the use of an acoustic lens followed by a retina
of acoustic sensors, or the acquisition of echoes by a two-dimensional array of sensors and subsequent processing by adequate algorithms, such as those of the beamforming or holography class; see Horne (2005) for more detail. In this paper, we adopt the beamforming algorithm to process the acoustic image.

Each vehicle is mounted with a sensor array. We assume that an acoustic pulse s(t) is emitted and that spherical propagation occurs inside an isotropic, linear, absorbing medium. Beamforming is a spatial filter that linearly combines the temporal signals spatially sampled by the sensor array. The system arranges the echoes in such a way as to amplify the signal coming from a fixed direction (the steering direction) and to attenuate the signals coming from all other directions. We give more detail on the beamforming method in Section 3.

2.3 System Integration

When considering the integration of the guidance/control scheme and the acoustic imaging process, we have two options for the guidance system: a stochastic or a deterministic approach.

Image Quality Feedback Based Error Guidance. We may use the image quality (i.e., the estimation error) to guide the vehicles; for example, use a Kalman filter to estimate the field and rely on the filter's prediction step to solve for each vehicle's best next move. See Hussein (2007) for more details. The algorithm presented therein guarantees that the vehicles move in the direction that maximizes the quality of the estimated field.

Sensor Model Based Feedback Guidance. We may also consider using the sensor model (given by the beam pattern function; see the next section) for vehicle guidance. In this paper, we adopt this deterministic guidance approach together with the beamforming algorithm.

3. BRIEF MATHEMATICAL SUMMARY OF ACOUSTIC IMAGING

3.1 Beamforming Data Acquisition

We assume that the imaged scene is made up of M point scatterers, with the i-th scatterer placed at position r_i = (x_i, z_i), as shown in Figure 2. We define the plane z = 0 as the plane that receives the backscattered field. The acoustic signal s(t) is emitted by an ideal point source placed at the coordinate origin (i.e., at the vehicle location). Consider N_s point-like sensors that constitute a receiving 2-D array, numbered by the index l from 0 to N_s − 1. We indicate the steering direction of a beam signal by the angle θ, measured with respect to the z axis. By applying the Fourier/Fresnel approximation, one obtains the following expression for the beam signal:

$$b(t, \theta) = \sum_{i=1}^{M} q\!\left(t - \frac{2 r_i}{c}\right) C_i \, BP_{BMF}(\omega, \beta_i, \theta), \qquad (2)$$

$$BP_{BMF}(\omega, \beta, \theta) = \frac{\sin[\omega N_s d (\sin\beta - \sin\theta)/2c]}{\sin[\omega d (\sin\beta - \sin\theta)/2c]}, \qquad (3)$$

where BP_BMF(ω, β, θ) is called the beam pattern, which depends on the arrival angle β, the steering angle θ, and the angular frequency ω. We also assume that the array is equispaced and centered at the coordinate origin, with inter-element spacing d.
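As a quick illustration, the following sketch (our own; the function name and defaults are assumptions, with parameter values taken from the Figure 3 setup described below) evaluates the beam pattern (3) over a sweep of arrival angles:

```python
import numpy as np

def beam_pattern(beta, theta, f=500e3, Ns=40, d=1.5e-3, c=1500.0):
    """Evaluate the delay-and-sum beam pattern (3) at arrival angle(s)
    beta for steering angle theta (both in radians). Defaults follow the
    Figure 3 setup: f = 500 kHz, Ns = 40 elements, d = 1.5 mm spacing."""
    omega = 2.0 * np.pi * f
    x = omega * d * (np.sin(beta) - np.sin(theta)) / (2.0 * c)
    num, den = np.sin(Ns * x), np.sin(x)
    safe = np.where(np.isclose(den, 0.0), 1.0, den)
    # At beta = theta the ratio of (3) tends to Ns (the mainlobe peak).
    return np.where(np.isclose(den, 0.0), float(Ns), num / safe)

beta = np.deg2rad(np.linspace(-80.0, 80.0, 1601))
bp_db = 20.0 * np.log10(np.abs(beam_pattern(beta, 0.0)) / 40.0)  # 0 dB peak
```

Plotting `bp_db` against the arrival angle should reproduce the mainlobe/sidelobe structure shown in Figure 3.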
Fig. 2. Geometry of the data model.
Figure 3 shows the beam pattern as a function of the arrival angle (visualized on a logarithmic scale normalized to 0 dB) for a fixed frequency f = 500 kHz and steering angle θ = 0; see Istepanian and Stojanovic (2002) for details.

Fig. 3. Beam pattern for a 40-element array with 1.5 mm spacing and unitary weights, frequency f = 500 kHz, steering θ = 0°.

3.2 Image Processing

The analysis of beam signals allows one to estimate the range to a scene. A common method to detect the distance of a scattering object is to look for the maximum peak of the beam-signal envelope. Denoting by t* the time instant at which the maximum peak (of magnitude s*) occurs, the related distance R* is easily derived (i.e., R* = c·t*/2, if the pulse source is placed at the coordinate origin). Therefore, for each steering direction θ, a triplet (θ, R*, s*) can be extracted. The set of triplets can be projected to obtain a range image in which the point defined in polar coordinates by θ and R* is converted into a Cartesian point (x*, z*). See Murino et al. (1998) for more details.

4. CONTROL LAW

The beam pattern BP given by equation (3) is used as a sensor model that describes how effectively a vehicle surveys a point q̃ ∈ D. The maximum range of θ and β is derived from the extent of the seabed and the vertical distance of the vehicle by simple geometry; in particular, β is given by

$$\beta = \arctan\frac{\tilde{q}_x - q_{ix}}{\tilde{q}_z - q_{iz}}. \qquad (4)$$
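To make these steps concrete, here is a minimal sketch (ours; the names and array layout are assumptions, not the paper's code) of the peak-detection range estimate from Subsection 3.2 and the arrival-angle computation (4):

```python
import numpy as np

def extract_range_point(envelope, t, theta, c=1500.0):
    """Peak-detection range estimate of Subsection 3.2: find the envelope
    maximum s* at time t*, convert to range R* = c t*/2, then map the
    polar pair (theta, R*) to a Cartesian point (x*, z*)."""
    k = int(np.argmax(envelope))
    t_star, s_star = t[k], envelope[k]
    R_star = c * t_star / 2.0            # pulse source at the origin
    return R_star * np.sin(theta), R_star * np.cos(theta), s_star

def arrival_angle(q_x, q_z, p_x, p_z):
    """Arrival angle beta of a domain point (p_x, p_z) as seen from a
    vehicle at (q_x, q_z), per equation (4)."""
    return np.arctan((p_x - q_x) / (p_z - q_z))
```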
The integral over time of the sum over a subset of autonomous vehicles V_i, i ∈ K ⊆ I, gives the effective coverage of the group indexed by K at time t at the point q̃:

$$P_K(\tilde{q}, t) = \int_0^t \sum_{i \in K} BP_i^2(\tau)\, d\tau.$$

Here BP_i is taken to be a function of β_i only; that is, we fix the steering direction θ and the angular frequency ω. Since β_i varies with time as the vehicle position changes, BP_i is implicitly a function of time. The goal is to attain a network coverage of P_I(q̃, t) = C* for all q̃ ∈ D at some finite time t. The quantity C* guarantees that, when P_I(q̃, t) = C*, a point in D has been sampled with the desired level of accuracy.
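A minimal sketch (our discretization, assuming a gridded 1-D domain) of accumulating the effective coverage with a forward-Euler rule in time:

```python
import numpy as np

def accumulate_coverage(coverage, grid_x, seabed_z, q_x, q_z, bp, dt):
    """One Euler step of P_I: coverage += dt * sum_i BP_i(beta_i)^2,
    with beta_i computed per equation (4) for each vehicle position."""
    for qx in np.atleast_1d(q_x):
        beta = np.arctan((grid_x - qx) / (seabed_z - q_z))  # equation (4)
        coverage += dt * bp(beta) ** 2
    return coverage
```

Here `bp` maps an arrival angle to BP_i, e.g. `bp = lambda b: beam_pattern(b, 0.0)` with the steering angle fixed at zero, as assumed in the text.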
Next, consider the following metric:

$$e(t) = \int_D h(C^* - P_I(\tilde{q}, t))\, \phi(\tilde{q})\, d\tilde{q}, \qquad (5)$$

where h(x) is a penalty function that is positive definite, twice differentiable, and strictly convex on (0, C*], and that satisfies h(x) = h'(x) = h''(x) = 0 for all x ≤ 0. Positivity and strict convexity mean that h(x), h'(x), h''(x) > 0 for all x ∈ (0, C*]. The function φ(q̃) is a density (weighting) function.
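For concreteness, a sketch (ours) of one admissible penalty, matching the form given in (6) below with n = 2, together with a trapezoid-rule discretization of the metric (5):

```python
import numpy as np

def h(x, n=2):
    """Penalty of the form h(x) = max(0, x)^n: zero for x <= 0 and
    positive, convex on (0, C*]."""
    return np.maximum(0.0, x) ** n

def coverage_error(coverage, C_star, grid_x, phi=None):
    """Trapezoidal discretization of e(t) = int_D h(C* - P_I) phi dq."""
    phi = np.ones_like(grid_x) if phi is None else phi
    return np.trapz(h(C_star - coverage) * phi, grid_x)
```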
The penalty function h penalizes lack of coverage of points in D. An example of such a function is

$$h(x) = (\max(0, x))^n, \qquad (6)$$

where n > 1, n ∈ R. It incurs a penalty whenever P_I < C*; once P_I ≥ C* at a point in D, the error there is zero no matter how much additional time the vehicles spend sampling that point. The density function φ is used as a weighting function: regions with a large value of φ are regions of higher relative importance, and vice versa.

Next, we adapt the control laws presented by Hussein and Stipanović (2006a,b, 2007) to the beam pattern sensor model. Here we only state the main results for the control strategies; see Wang and Hussein (2007) for a detailed proof. Consider the control law

$$\bar{u}_i(t) = \bar{k}_i \int_D h'(C^* - P_I(\tilde{q}, t))\, BP_i \frac{\partial BP_i}{\partial \beta_i} \frac{\partial \beta_i}{\partial q_i}\, \phi(\tilde{q})\, d\tilde{q}, \qquad (7)$$
where $\bar{k}_i > 0$ are fixed feedback gains. Consider the following condition.

Condition C1. $P_I(\tilde{q}, t) = C^*$ for all $\tilde{q} \in W_i(t)$ and all $i \in I$,
where W_i(t) is the main lobe of the beam pattern sensor model of vehicle V_i, i.e., the effective region of the BP function. The control law (7) guarantees that the system always converges to the state described in Condition C1. This condition describes a coverage situation in which the system dwells at a local minimum of the metric e(t).

Control Strategy. Under the control law (7), all vehicles in the system are in continuous motion as long as the state described in Condition C1 is avoided. Whenever Condition C1 holds with nonzero error e(t) ≠ 0, the system has to be perturbed by switching to some other control law $\bar{\bar{u}}_i$ that ensures violating Condition C1. Once away from Condition C1, the controller is switched back to the nominal control $\bar{u}_i$ in equation (7). Only when both Condition C1 and e(t) = 0 are satisfied is there no need to switch to $\bar{\bar{u}}_i$. We refer to any such control law as a perturbation controller, since it attempts to move the agents to positions where uncovered points remain and where the nominal control $\bar{u}_i$ is no longer zero.
We now consider a simple linear feedback perturbation control $\bar{\bar{u}}_i$ that guarantees driving the system away from Condition C1. The controller presented here is analogous to that found in Hussein and Stipanović (2006a,b), but is designed based on the performance error function e(t) in equation (5). Consider the control law

$$\bar{\bar{u}}_i(t) = -\bar{k}_i \left(q_i(t) - \tilde{q}_i^*(t_s)\right), \qquad (8)$$

where q̃*_i is the nearest uncovered point to vehicle V_i at the switching time t_s. The above discussion gives the following switching control law:

$$u_i^*(t) = \begin{cases} \bar{u}_i & \text{if Condition C1 does not hold}, \\ \bar{\bar{u}}_i & \text{if Condition C1 holds}. \end{cases} \qquad (9)$$

The control law u*_i(t) guarantees driving the error e(t) → 0 as t → ∞.

Remark. Note that infinite switching without achieving e = 0 cannot occur, due to the compactness of D. See Hussein and Stipanović (2006b) for more details.
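A sketch (ours; the Condition C1 test is replaced by a simple numerical proxy, and `dbp_dbeta` may be a finite-difference derivative of the beam pattern) of one evaluation of the switched law (9), with the nominal law (7) discretized over the domain grid:

```python
import numpy as np

def nominal_control(qx, q_z, coverage, C_star, grid_x, seabed_z,
                    bp, dbp_dbeta, k=0.05, phi=None):
    """Nominal law (7) for one vehicle, discretized with the trapezoid
    rule over the gridded domain D."""
    phi = np.ones_like(grid_x) if phi is None else phi
    depth = seabed_z - q_z
    beta = np.arctan((grid_x - qx) / depth)                  # equation (4)
    dbeta_dq = -depth / ((grid_x - qx) ** 2 + depth ** 2)
    h_prime = 2.0 * np.maximum(0.0, C_star - coverage)       # h' for n = 2
    integrand = h_prime * bp(beta) * dbp_dbeta(beta) * dbeta_dq * phi
    return k * np.trapz(integrand, grid_x)

def switched_control(qx, q_z, coverage, C_star, grid_x, seabed_z,
                     bp, dbp_dbeta, k=0.05):
    """Switching law (9): use (7) unless it has vanished while uncovered
    points remain (a proxy for Condition C1); then apply (8)."""
    u = nominal_control(qx, q_z, coverage, C_star, grid_x, seabed_z,
                        bp, dbp_dbeta, k)
    uncovered = grid_x[coverage < C_star]
    if np.isclose(u, 0.0) and uncovered.size > 0:
        target = uncovered[np.argmin(np.abs(uncovered - qx))]  # nearest point
        u = -k * (qx - target)                                 # equation (8)
    return u
```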
Fig. 4. Fleet motion along the line (each vehicle is denoted by a different color).
Fig. 5. Control effort $\|u_i\|$, $i \in I$.
5. SIMULATION
In this section, we provide a set of numerical simulations illustrating the performance of the dynamic coverage control strategy with the perturbation control law that ensures global coverage. As previously mentioned, the configuration space Q is the entire real line (all vehicles move on a line), and the domain D is a curve below the configuration space that we want to estimate. We set the length of D to l = 20 meters in the following simulation. The seabed profile is given by the simple piecewise linear function

$$y = \begin{cases} -g x & \text{if } x \le 0, \\ g x & \text{if } x > 0, \end{cases}$$

where x is the horizontal coordinate along the seabed and g = 2.5 is the slope. Assume there are two vehicles (N = 2) with a randomly selected initial deployment, as shown in Figure 4. Let the desired effective coverage C* be 6000. Here we use the control law in equation (7) with control gains k̄_i = 0.05, i = 1, 2, and we employ the switching controller (8): a vehicle switches to the linear feedback control law whenever Condition C1 applies to it. Since we assume no a priori information about the accuracy of the underwater sampling, we set φ(q̃) = 1 for all q̃ ∈ D. For the beam pattern sensor model, we set f = 500 kHz, θ = 0, d = 1.5 mm, N_s = 40, and c = 1500 m/s for all i = 1, 2. The sensor has Gaussian random noise with zero mean and a standard deviation of 0.5.
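Under the stated assumptions, the simulation setup might be initialized as in the following sketch (ours; variable names are not the paper's):

```python
import numpy as np

# Sketch of the Section 5 setup.
rng = np.random.default_rng(0)
l, g, N = 20.0, 2.5, 2                      # domain length [m], slope, fleet size
grid_x = np.linspace(-l / 2.0, l / 2.0, 401)
seabed = np.where(grid_x <= 0.0, -g * grid_x, g * grid_x)   # piecewise profile
C_star, k, dt = 6000.0, 0.05, 0.1           # desired coverage, gain, time step
q = rng.uniform(-l / 2.0, l / 2.0, size=N)  # random initial deployment
coverage = np.zeros_like(grid_x)
# Main loop (not shown): evaluate the switched law (9) for each vehicle,
# Euler-update the positions per (1), accumulate coverage, and add
# zero-mean Gaussian noise (sigma = 0.5) to the measured beam signals.
```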
Fig. 6. Error e(t).

We used a simple trapezoidal method to compute the integration over D and a simple first-order Euler scheme to integrate with respect to time. The control effort is shown in Figure 5, and the global error e(t) is shown in Figure 6; it can be seen to converge to zero. Figure 7 shows the effective coverage at different time steps. Note that we have normalized the error by dividing by (C*)^n × l (where n = 2 is defined in equation (6)) so that the initial error is 1. The acoustic image measured by the vehicles using the algorithm discussed in Subsection 3.2 is shown in Figure 8, which compares the actual seabed profile with the simulated curve. The result shows that, even with sensor noise, the proposed algorithm efficiently estimates the actual profile.

6. CONCLUSION

In this paper, we integrated the dynamic coverage guidance/control strategy with the acoustic imaging process for underwater applications. The goal is to obtain accurate underwater images of the seabed profile using a fleet
Fig. 7. Effective coverage at t = 367, 734, 2152 with control switching.
Fig. 8. Actual vs. simulated seabed profile.

of autonomous vehicles. We studied a control law and an imaging process that guarantee full coverage of the underwater domain, leading to high-quality images. Numerical simulations were presented to show the performance of the method. Current and future work includes generalizing the results to the case where the vehicle dynamics reflect both gravitational and buoyancy effects (for vertical vehicle motions in 3-D coverage missions), together with 3-D underwater acoustic imaging.
REFERENCES

J. Cortés, S. Martínez, T. Karatas, and F. Bullo. Coverage control for mobile sensing networks. IEEE Transactions on Robotics and Automation, 20(2):243–255, April 2004.

Q. Du, V. Faber, and M. Gunzburger. Centroidal Voronoi tessellations: Applications and algorithms. SIAM Review, 41(4):637–676, 1999.

A. Ganguli, S. Susca, S. Martínez, F. Bullo, and J. Cortés. On collective motion in sensor networks: Sample problems and distributed algorithms. In IEEE Conference on Decision and Control, December 2005.

J. K. Horne. Fisheries and marine mammal opportunities in ocean observations. In Proceedings of Underwater Acoustic Measurements: Technologies & Results, Heraklion, Crete, 2005.

I. I. Hussein. A Kalman filter-based control strategy for dynamic coverage control. In 2007 American Control Conference, pages 3271–3276, July 2007.

I. I. Hussein and D. Stipanović. Effective coverage control using dynamic sensor networks with flocking and guaranteed collision avoidance. In 2007 American Control Conference, 2007.

I. I. Hussein and D. Stipanović. Effective coverage control for mobile sensor networks. In 2006 IEEE Conference on Decision and Control, 2006a.

I. I. Hussein and D. Stipanović. Effective coverage control for mobile sensor networks with guaranteed collision avoidance. IEEE Transactions on Control Systems Technology, Special Issue on Multi-Vehicle Systems Cooperative Control with Applications, 2006b.

R. Istepanian and M. Stojanovic, editors. Underwater Acoustic Digital Signal Processing and Communication Systems. Springer, 1st edition, 2002.

N. E. Leonard, D. Paley, F. Lekien, R. Sepulchre, D. M. Fratantoni, and R. Davis. Collective motion, sensor networks and ocean sampling. Proceedings of the IEEE, Special Issue on Networked Control Systems, pages 1806–1811, 2006.

V. Murino and A. Trucco. Three-dimensional image generation and processing in underwater acoustic vision. Proceedings of the IEEE, 88(12):1903–1946, December 2000.

V. Murino, A. Trucco, and C. S. Regazzoni. A probabilistic approach to the coupled reconstruction and restoration of underwater acoustic images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):9–22, January 1998.

A. Okabe and A. Suzuki. Locational optimization problems solved through Voronoi diagrams. European Journal of Operational Research, 98(3):445–456, 1997.

Y. Wang and I. I. Hussein. Cooperative vision-based multi-vehicle dynamic coverage control for underwater applications. In 16th IEEE International Conference on Control Applications, pages 82–87, October 2007.