
CHAPTER 18

Source Localization and Tracking

Yu Hen Hu

Department of Electronics and Communication Engineering, University of Wisconsin-Madison, Madison, WI, USA

3.18.1 Introduction

In this chapter, the task of source localization and tracking will be discussed. The goal of source localization is to estimate the location of one or more events (e.g., an earthquake) or targets based on signals emitted from these locations and received at one or more sensors. It is often assumed that the cross section of the target or the event is very small compared to the spread of the sensors, and hence the point source assumption is valid. If the source locations move with respect to time, then tracking will be performed to facilitate accurate forecasting of future target positions. Source localization has found diverse applications, such as the Global Positioning System (GPS) [1,2], sonar [3,4], radar [5,6], seismic event localization [7–9], brain imaging [10], teleconferencing [11,12], and wireless sensor networks [13–19], among many others. The basic approach to source localization is geometric triangulation: given the distances or angles between the unknown source location and known reference positions, the coordinates of the source location can be computed algebraically. However, these distance or angle measurements often need to be inferred indirectly from a signal propagation model that describes the received source signal as a function of the distance and incidence angle between the source and the sensor. Such a model facilitates statistical estimation of the spatial locations of the sources based on features such as time of arrival, attenuation of signal intensity, or phase lags. The specific features that may be exploited also depend on the signal modality, such as acoustic signals, radio waves, seismic vibrations, infrared light, or visible light. In this chapter, various source signal propagation models will be reviewed, and statistical inference methods will be surveyed. Very often, source localization is performed repeatedly over time to track a moving source.
Using a dynamic model to describe the movement of the source, one may predict the probability density function (pdf) of the source location using previously received source signals. With this estimated prior distribution of the source location, Bayesian estimation may be applied to update the source location using current sensory measurements. Thus, source localization and tracking are often intimately related. In the remainder of this chapter, the basic idea of triangulation will first be reviewed. Next, the signal propagation and channel models will be introduced. Statistical methods that implicitly or explicitly leverage the triangulation approach to estimate source locations will then be presented. Bayesian tracking methods such as the Kalman filter will also be briefly surveyed.

Academic Press Library in Signal Processing. http://dx.doi.org/10.1016/B978-0-12-411597-2.00018-7 © 2014 Elsevier Ltd. All rights reserved.


3.18.2 Problem formulation

Assume N sensors are deployed over a sensing field with known positions; the position of the nth sensor is denoted by $x_n$. It is also assumed that there are K targets in the sensing field with unknown locations $\{r_k;\ 1 \le k \le K\}$. Each target emits a source signal, denoted by $s_k(t)$ at time t. The nth sensor receives a delayed, and sometimes distorted, version of the kth source signal, $y_{n,k}(t)$. In general, $y_{n,k}(t)$ is a function of the past source signal up to time t, $\{s_k(t-m);\ 1 \le k \le K,\ m = 0, 1, \ldots\}$, the sensor location $x_n$, the target location $r_k$, as well as the propagation medium of the signal. This source signal propagation model will be discussed shortly. It is noted that prior to target localization, a target detection task must be performed and the presence of K targets within the sensing field must have been established. The issues of target detection and target number estimation are not included in this discussion. We further assume that the measurement at the nth sensor, denoted by $y_n(t)$, is a superposition of $\{y_{n,k}(t);\ 1 \le k \le K\}$. That is,

$$y_n(t) = \sum_{k=1}^{K} y_{n,k}(t) + e_n(t), \qquad (18.1)$$

where $e_n(t)$ is the observation noise at the nth sensor. The objective of source localization is to estimate the source locations $\{r_k;\ 1 \le k \le K\}$ based on the sensor readings $\{y_n(t-\ell);\ 1 \le n \le N,\ \ell = 0, 1, \ldots\}$, given the known sensor positions $\{x_n;\ 1 \le n \le N\}$ and the number of targets K.

3.18.3 Triangulation

In a sensor network source localization problem setting, the sensor locations are the reference positions. Based on signals received at the sensors from the sources, two types of measurements may be inferred: (i) the distance between each sensor and each source, and (ii) the incidence angle of a wavefront of the source signal impinging on a sensor relative to an absolute reference direction (e.g., north). In this section, we will derive three sets of formulas that make use of (a) distance only, (b) angle only, and (c) distance and angle to deduce the source location. For convenience, the single source situation will be considered, with discussions on potential generalizations to multiple sources.

3.18.3.1 Distance based triangulation

Denote $d_n$ to be the Euclidean distance between the position of the nth sensor $x_n$ and the (unknown) source location $r$, namely, $d_n = \|x_n - r\|$. $d_n$ may be estimated from the sensor observations $y_n(t)$ for certain types of sensors such as laser or infrared ranging sensors. One may write down N quadratic equations:

$$d_n^2 = \|x_n - r\|^2 = \|x_n\|^2 + \|r\|^2 - 2x_n^T r, \quad 1 \le n \le N. \qquad (18.2)$$

Subtracting each pair of successive equations above to eliminate the unknown term $\|r\|^2$ leads to $N-1$ linear equations:

$$(x_{n+1} - x_n)^T r = \frac{1}{2}\left(\|x_{n+1}\|^2 - \|x_n\|^2 - d_{n+1}^2 + d_n^2\right), \quad 1 \le n \le N-1. \qquad (18.3)$$


These $N-1$ linear equations may be expressed in matrix form:

$$X r = d. \qquad (18.4)$$

In general, the accuracy of the distance estimates may be compromised by estimation errors. Thus, a more realistic model should include an estimation noise term:

$$X r = d + e, \qquad (18.5)$$

where e is a zero-mean, uncorrelated random noise vector such that

$$E\{e\} = 0, \quad \mathrm{cov}\{e\} = \Sigma = \mathrm{diag}\left(\sigma_1^2, \ldots, \sigma_{N-1}^2\right).$$

Then r may be estimated using the weighted least square method, which minimizes the cost function $(Xr - d)^T \Sigma^{-1} (Xr - d)$. This leads to

$$\hat r_{Dis,WLS} = \left(X^T \Sigma^{-1} X\right)^{-1} X^T \Sigma^{-1} d. \qquad (18.6)$$
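As a concrete illustration, the weighted least square estimate of Eq. (18.6) can be computed in a few lines of numerical code. The sensor geometry and distance measurements below are hypothetical, and identity weights are used for simplicity:

```python
import numpy as np

# Hypothetical 2D example: four sensors at known positions, one source at (3, 2)
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
r_true = np.array([3.0, 2.0])
d = np.linalg.norm(sensors - r_true, axis=1)   # noiseless distance measurements

# N - 1 linear equations of Eq. (18.3): (x_{n+1} - x_n)^T r = rhs_n
X = sensors[1:] - sensors[:-1]
rhs = 0.5 * (np.sum(sensors[1:] ** 2, axis=1) - np.sum(sensors[:-1] ** 2, axis=1)
             - d[1:] ** 2 + d[:-1] ** 2)

# Weighted least square estimate of Eq. (18.6); identity weights (equal variances)
Sigma_inv = np.eye(len(rhs))
r_hat = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ rhs)
print(r_hat)
```

With noiseless distances the estimate recovers the true source position exactly; with noisy distances, the weights should be set to the inverses of the measurement variances.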

3.18.3.2 Angle based triangulation

Denote $\theta_n$ to be the incidence angle from the source to the nth sensor, using the north direction as the reference direction. We further assume that the angle is positive along the clockwise direction. Referring to Figure 18.1, it is easily verified that

$$0 = \begin{bmatrix} -\cos\theta_n & \sin\theta_n \end{bmatrix}(x_n - r). \qquad (18.7)$$

FIGURE 18.1 Geometric positions for triangulation.


Rearranging terms and collecting all N equations, one has

$$\underbrace{\begin{bmatrix} -\cos\theta_1 & \sin\theta_1 \\ -\cos\theta_2 & \sin\theta_2 \\ \vdots & \vdots \\ -\cos\theta_N & \sin\theta_N \end{bmatrix}}_{C} \cdot r = \underbrace{\begin{bmatrix} \left[-\cos\theta_1 \;\; \sin\theta_1\right] x_1 \\ \left[-\cos\theta_2 \;\; \sin\theta_2\right] x_2 \\ \vdots \\ \left[-\cos\theta_N \;\; \sin\theta_N\right] x_N \end{bmatrix}}_{p}. \qquad (18.8)$$

Or, in matrix notation:

$$C \cdot r = p. \qquad (18.9)$$

Similar to Eq. (18.5), the observation p may be contaminated with noise. As such, the source location may be obtained via a weighted least square estimate. For notational simplicity, one may use e to denote the noise vector as in Eq. (18.5). Then the weighted least square estimate of r becomes:

$$\hat r_{Ang,WLS} = \left(C^T \Sigma^{-1} C\right)^{-1} C^T \Sigma^{-1} p. \qquad (18.10)$$
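The angle based estimate of Eq. (18.10) admits an equally short numerical sketch. The bearings below are synthesized from a hypothetical geometry, measured clockwise from north in accordance with the sign convention of Eq. (18.7):

```python
import numpy as np

# Hypothetical geometry: three sensors, one source at (4, 3)
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])
r_true = np.array([4.0, 3.0])

# Incidence angles, clockwise from north, satisfying [-cos(t), sin(t)](x_n - r) = 0
diff = r_true - sensors
theta = np.arctan2(diff[:, 0], diff[:, 1])

# Assemble C r = p as in Eqs. (18.8) and (18.9)
C = np.column_stack([-np.cos(theta), np.sin(theta)])
p = np.sum(C * sensors, axis=1)

# Unweighted least square solution of Eq. (18.10)
r_hat, *_ = np.linalg.lstsq(C, p, rcond=None)
print(r_hat)
```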

3.18.3.3 Triangulation: generalizations

In Sections 3.18.3.1 and 3.18.3.2, single-target triangulation localization algorithms for distance measurements and incidence angle measurements have been discussed. Both situations lead to over-determined linear systems of equations, Eqs. (18.5) and (18.9). Therefore, when both distance measurements and incidence angle measurements are available, these two equations may be combined and solved jointly. When there are two or more sources (targets), a number of issues need to be addressed. First, when a sensor receives signals emitted from two or more sources simultaneously, it may not be able to distinguish one signal from another if these signals overlap in the time, space, and frequency domains. This is the traditional signal (source) separation problem [20–23]. Secondly, even if individual sources can be separated at individual sensors, it is not always easy to tell which of the signals received by different sensors correspond to the same source. This is the so-called data association problem [24,25]. A comprehensive survey of these two issues is beyond the scope of this chapter. As discussed earlier, while localization may be accomplished via triangulation, the sensor-to-source distances or source-to-sensor incidence angles must be estimated from the received signals. The mathematical model of the received source signals, known as the signal propagation model, will be surveyed in the next section.

3.18.4 Signal propagation models

Depending on the specific application, on many occasions the source signal may be modeled as a narrow band signal characterized by a single sinusoid:

$$s_k(t) = A_k \exp\left(j(2\pi f_k t + \phi_k)\right), \qquad (18.11)$$

where $A_k$ is the amplitude, $f_k$ is the frequency, and $\phi_k$ is the phase. While in certain special cases all or some of these parameters are known, in most applications they are assumed unknown.


On other occasions, the source signal may be a broadband signal that contains numerous harmonics and is difficult to express analytically. Between each source-sensor pair there is a communication channel whose characteristics depend on the specific medium (air, vacuum, water, etc.). The net effects of the communication channel on the received signal $y_{n,k}(t)$ can be summarized in the following categories:

Attenuation: The amplitude of the source signal often attenuates rapidly as the source-to-sensor distance increases. Hence, examining the relative attenuation of signal strength provides an indirect way to estimate source-to-sensor distance. For a point source, the rate of attenuation is often inversely proportional to $\|x_n - r_k\|$. The communication channel between the sensor and the source may also be frequency selective, so that the attenuation rates differ across frequency bands. For example, high frequency sound often attenuates much faster than low frequency sound. Thus the amplitude waveform, as well as the energy of the source signal at the sensor, may be distorted. If the signal propagation is subject to multi-path distortion, the attenuation rate may also be affected. Yet another factor that affects the measured amplitude of the received signal is the sensor gain. Before the received analog signal is digitized, its magnitude is amplified with adaptive gain control to ensure the dynamic range of the analog-to-digital converter (ADC) is not saturated. Furthermore, the signal strength attenuation may not be uniform over all directions, and the point source assumption may not be valid at short distance.

Time delay: Denoting $\nu$ the signal propagation speed in the corresponding medium, the time for the source signal to travel to the sensor can be evaluated as

$$D_{n,k} = \|x_n - r_k\|/\nu. \qquad (18.12)$$

However, the time delay due to signal propagation may be affected by non-homogeneous media. The consequence may be refraction or deflection of the propagating signal, and the accuracy of time delay estimation may be compromised.

Phase distortion: Nonlinear phase distortion due to frequency selective channel properties may also distort the morphology of the signal waveform, making it difficult to estimate phase differences between received signals at different sensors.

Noise: While the source signal travels to each sensor, it may suffer from (additive) channel noise or interference from other sources. It is often assumed that the background noise observed at each sensor is a zero-mean, uncorrelated, wide-sense stationary random process with a Gaussian distribution. Moreover, the background noise processes at different sensors are assumed to be statistically independent of each other. Such assumptions may need to be modified for situations where the background noise also includes high energy impulsive noise or interference.

A commonly used channel model in wireless communication theory is the convolution model, which models the frequency selective characteristics of the channel as a finite impulse response digital filter $\{h_{n,k}(m);\ 0 \le m \le M-1\}$. Thus, using the notation defined in this chapter,

$$y_{n,k}(t) = \sum_{m=0}^{M-1} h_{n,k}(m)\, s_k(t - m) + \varepsilon_{n,k}(t). \qquad (18.13)$$

The channel parameters are functions of both the sensor and source locations. For source localization applications, the above model is often simplified to emphasize specific features.


3.18.4.0.1 Received signal strength indicator (RSSI)

A popular simplified model concerns only amplitude attenuation:

$$y_{n,k}(t) = \frac{g_n}{\|x_n - r_k\|}\, s_k(t). \qquad (18.14)$$

Here the propagation delay, phase distortion, and noise are all ignored. Instead, $g_n$ is used to denote the sensor gain of the nth sensor. Thus, one may write

$$y_n(t) = g_n \cdot \sum_{k=1}^{K} \frac{s_k(t)}{\|x_n - r_k\|} + e_n(t). \qquad (18.15)$$

Averaging over a short time interval of length T centered at the sampling time t, one may express the energy of the received signal during this short period as

$$Y_n(t) = \frac{1}{T}\int_{t-T/2}^{t+T/2} y_n^2(u)\, du \approx G_n \cdot \sum_{k=1}^{K} \frac{S_k(t)}{\|x_n - r_k\|^2} + \zeta_n(t), \qquad (18.16)$$

where $Y_n(t)$ is the received signal energy at time t at sensor n, $G_n = g_n^2$, and

$$S_k(t) = \frac{1}{T}\int_{t-T/2}^{t+T/2} [s_k(u)]^2\, du.$$

It is assumed that the K source signals are mutually statistically independent, such that

$$\frac{1}{T}\int_{t-T/2}^{t+T/2} s_k(u)\, s_j(u)\, du \approx 0, \quad k \ne j.$$

Moreover, the background noise $e_n(t)$ is also independent of the signal, besides being i.i.d. random variables, such that

$$\frac{1}{T}\int_{t-T/2}^{t+T/2} s_k(u)\, e_j(u)\, du \approx 0, \quad \forall\, k, j$$

and

$$\frac{1}{T}\int_{t-T/2}^{t+T/2} e_k(u)\, e_j(u)\, du = \begin{cases} \zeta_n(t), & k = j, \\ 0, & k \ne j. \end{cases} \qquad (18.17)$$

Here $\zeta_n(t)$ is a random variable with a $\chi^2$ distribution. However, as discussed in [19], for practical purposes $\zeta_n(t)$ can be modeled as a Gaussian random variable with a positive mean value $\mu_n > 0$ and variance $\sigma_n^2$. Equations (18.15) and (18.16) are often used in source localization algorithms based on the Received Signal Strength Indicator (RSSI) [26,27] to infer the target location.
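The inverse-square energy decay of Eq. (18.16) is easily verified numerically. The source energy, sensor gains, and geometry below are hypothetical, and the noise term is omitted:

```python
import numpy as np

# Hypothetical single source (K = 1) with per-sample energy S at r = (5, 5)
r = np.array([5.0, 5.0])
S = 100.0
sensors = np.array([[1.0, 5.0], [3.0, 5.0], [9.0, 5.0]])
G = np.ones(len(sensors))                       # unit sensor gains

# Noise-free received energies according to Eq. (18.16)
d2 = np.sum((sensors - r) ** 2, axis=1)
Y = G * S / d2

# A sensor twice as far away receives one quarter of the energy
ratio = Y[0] / Y[1]                             # distances 4 and 2
print(Y, ratio)
```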


3.18.4.1 Time delay estimation

Time delay estimation [28,29] has a long history in signal processing applications [30–32]. It is assumed that

$$y_{n,k}(t) = \sum_{m=0}^{M-1} h_{n,k}(m)\, s_k(t - m - D_{n,k}) + \varepsilon_{n,k}(t). \qquad (18.18)$$

If $D_{n,k}$ can be estimated accurately, the source-to-sensor distance may be estimated using Eq. (18.12). Thus, the key issue is to estimate $D_{n,k}$. If the original source signal $s_k(t)$ and the received signal $y_{n,k}(t)$ are both available, then the time delay may be estimated using a number of approaches. Assume $s_k(t)$ and $y_{n,k}(t)$ are both zero-mean wide-sense stationary random processes over a time interval $[-T/2, T/2]$; the cross-correlation between them is defined as

$$R_{s,y}(\tau) = \frac{1}{T}\int_{-T/2}^{T/2} s_k(t)\, y_{n,k}(t + \tau)\, dt. \qquad (18.19)$$

Substituting $y_{n,k}(t)$ of Eq. (18.18) into Eq. (18.19), and assuming $h_{n,k}(m) = 1$ if $m = 0$ and $h_{n,k}(m) = 0$ otherwise, one has

$$R_{s,y}(\tau) = \frac{1}{T}\int_{-T/2}^{T/2} s_k(t)\, s_k(t - D_{n,k} + \tau)\, dt + \underbrace{\frac{1}{T}\int_{-T/2}^{T/2} s_k(t)\, \varepsilon_{n,k}(t)\, dt}_{=0} = R_{s,s}(\tau - D_{n,k}) \le R_{s,s}(0) = \frac{1}{T}\int_{-T/2}^{T/2} |s_k(t)|^2\, dt. \qquad (18.20)$$

Therefore, a maximum likelihood estimate of the time delay $D_{n,k}$ is

$$\hat D_{n,k} = \arg\max_\tau R_{s,y}(\tau). \qquad (18.21)$$

There is a fundamental difficulty in applying Eq. (18.21) to estimate the source-to-sensor signal propagation delay: $s_k(t)$ may not be available at the sensor. This is often the case when the source is non-cooperative, such as an intruder. On the other hand, if the source is cooperative, it may transmit a signal that contains a time stamp. If the clocks at the sensor and at the source are synchronized, as in the case of the Global Positioning System (GPS) [1], the sensor can compare the receiving time stamp against the sending time stamp and deduce the transit time without using cross-correlation. In some applications, the direction of the source is known but the distance between the source and the sensor is to be measured. Then a round-trip signal propagation delay may be estimated using the cross-correlation method by emitting a signal $s_k(t)$ from the sensor toward the source and letting it bounce back to the sensor. Both the transmitted and received signals are then available at the sensor, and the above cross-correlation method may be applied to estimate the source-to-sensor distance. In general, when $s_k(t)$ is unavailable at the sensor, the difference of time of arrival from the same source at different sensors allows one to estimate the incidence angle of the source signal. This is the Time Difference of Arrival (TDoA) feature mentioned in the literature [33,34]. With TDoA, a Generalized Cross Correlation (GCC) [28] method may be applied to estimate $\delta_{m,n}(k) = D_{m,k} - D_{n,k}$, $m \ne n$.


Ideally, one would hope $y_{m,k}(t) = h_{m,k}(0)\, s_k(t - D_{m,k})$ and $y_{n,k}(t) = h_{n,k}(0)\, s_k(t - D_{n,k})$. As such, one computes the cross-correlation

$$R_{m,n}(\tau) = \frac{1}{T}\int_{-T/2}^{T/2} y_{m,k}(t)\, y_{n,k}(t + \tau)\, dt = \frac{1}{T}\int_{-T/2}^{T/2} s_k(t - D_{m,k})\, s_k(t - D_{n,k} + \tau)\, dt = R_{ss}(\tau - (D_{n,k} - D_{m,k})) \le R_{ss}(0). \qquad (18.22)$$

Since the peak occurs at $\tau = D_{n,k} - D_{m,k}$,

$$\hat\delta_{m,n}(k) = \hat D_{m,k} - \hat D_{n,k} = -\arg\max_\tau R_{m,n}(\tau). \qquad (18.23)$$

In the presence of the channel noise $\varepsilon_{n,k}(t)$ and the channel model $\{h_{n,k}(t)\}$, Knapp and Carter [28] proposed to pre-filter $y_{m,k}(t)$ and $y_{n,k}(t)$ to improve the accuracy of the TDoA estimate by enhancing the signal-to-noise ratio (SNR) of the $R_{m,n}(\tau)$ estimate. Specifically, denote $Y_{m,k}(\omega)$ and $Y_{n,k}(\omega)$ respectively as the Fourier transforms of the received signals $y_{m,k}(t)$ and $y_{n,k}(t)$, and denote $W_{m,k}(\omega)$ and $W_{n,k}(\omega)$ respectively as the spectra of the pre-filters for $y_{m,k}(t)$ and $y_{n,k}(t)$; the GCC is defined as:

$$R_{m,n}^{GCC}(\tau) = \frac{1}{2\pi T}\int W_{m,k}^*(\omega)\, W_{n,k}(\omega)\, Y_{m,k}^*(\omega)\, Y_{n,k}(\omega)\, \exp(j\omega\tau)\, d\omega. \qquad (18.24)$$

Once $\delta_{m,n}(k)$ is estimated, one has the following relation:

$$\delta_{m,n}(k) \cdot \nu = \|x_m - r\| - \|x_n - r\|. \qquad (18.25)$$

Hence TDoA provides a relative distance measure from the source to two different sensors. We note in passing that Eq. (18.25) defines a hyperbolic trajectory for the potential target location r.
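A minimal numerical sketch of the cross-correlation delay estimate of Eqs. (18.22) and (18.23) follows; the signal is a synthetic broadband (white) sequence, and the integer-sample delay is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical broadband source; sensor n receives the signal 7 samples
# later than sensor m, i.e., D_n - D_m = 7 samples
s = rng.standard_normal(4096)
D = 7
y_m = s
y_n = np.concatenate([np.zeros(D), s[:-D]])     # delayed copy

# Per Eq. (18.22), the cross-correlation peaks at tau = D_n - D_m
R = np.correlate(y_m, y_n, mode="full")
lag = np.argmax(R) - (len(y_n) - 1)             # y_m[t + lag] best matches y_n[t]
delay_n_minus_m = -lag                          # relative delay in samples
print(delay_n_minus_m)
```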

3.18.4.2 Angle of arrival estimation

For a narrow band source signal, if the sensors are fully synchronized, phase differences between the received sensor signals may be used to estimate the incidence angle of the source signal. Assume a point source emitting a narrow band (single harmonic) signal as described in Eq. (18.11). Under a far field assumption, that is,

$$\max_{m,n} \|x_m - x_n\| \ll \min_n \|r - x_n\|, \qquad (18.26)$$

the waveform of the source signal can be modeled as a plane wave, so that the incidence angle of the source at each of the sensor nodes is identical. This makes it easier to represent the time differences of arrival as a steering vector, which is a function of the incidence angle and the sensor array geometry. Again, let north be the reference direction of the incidence angle θ, which increases along the clockwise direction. Then a unit vector along the incidence angle can be expressed as $[-\sin\theta, -\cos\theta]^T$. The time difference of arrival between sensor nodes m and N can be expressed as:

$$t_{m,N}(\theta) = (x_m - x_N)^T \begin{bmatrix} -\sin\theta \\ -\cos\theta \end{bmatrix} \Big/ \nu. \qquad (18.27)$$


Substituting the expression of a narrow band signal as shown in Eq. (18.11) into the expression of the received signal described in Eq. (18.18), one has

$$y_{m,k}(t) = h_{m,k}(t) * s_k(t - D_{m,k}) + \varepsilon_{m,k}(t) = h_{m,k}(t) * s_k\left(t - (D_{N,k} + t_{m,N}(\theta_k))\right) + \varepsilon_{m,k}(t) = H_{m,k}(f_k) \cdot A_k \exp\left(j\left(2\pi f_k \left(t - (D_{N,k} + t_{m,N}(\theta_k))\right) + \phi_k\right)\right) + \varepsilon_{m,k}(t) = H_k \exp\left(-j 2\pi f_k t_{m,N}(\theta_k)\right) \cdot s_k(t - D_{N,k}) + \varepsilon_{m,k}(t), \qquad (18.28)$$

where $H_{n,k}(f_k) = H_k$ is the frequency response of $h_{n,k}(t)$ evaluated at $f = f_k$ and is independent of $r_k$ and $x_n$ under the far field assumption Eq. (18.26). Assume K = 1; then the received signal at all N sensors may be represented by $\mathbf{y}_k(t) = \mathbf{a}_k^H(\theta_k)\, s_k(t - D_{N,k}) \cdot H_k + \boldsymbol{\varepsilon}_k(t)$, where

$$\mathbf{a}_k(\theta_k) = \left[\exp(-j2\pi f_k t_{1,N}(\theta_k)) \;\; \cdots \;\; \exp(-j2\pi f_k t_{N-1,N}(\theta_k)) \;\; 1\right]^H \qquad (18.29)$$

is a steering vector with respect to a narrow band source with incidence angle $\theta_k$, using the signal received at the Nth sensor as a reference. It represents the phase differences of the received sensor signals with respect to a plane wave traveling at incidence angle $\theta_k$. With K narrow band sources, the received signal vector may then be expressed as:

$$\mathbf{y}(t) = \sum_{k=1}^{K} \mathbf{y}_k(t) = \mathbf{A}^H(\theta)\, \mathbf{s}(t) + \boldsymbol{\varepsilon}(t), \qquad (18.30)$$

where

$$\mathbf{A}(\theta) = \left[\mathbf{a}_1(\theta_1) \;\; \mathbf{a}_2(\theta_2) \;\; \cdots \;\; \mathbf{a}_K(\theta_K)\right]^H,$$

$$\mathbf{s}(t) = \left[s_1(t - D_{N,1}) H_1 \;\; s_2(t - D_{N,2}) H_2 \;\; \cdots \;\; s_K(t - D_{N,K}) H_K\right]^H,$$

and

$$\boldsymbol{\varepsilon}(t) = \sum_{k=1}^{K} \boldsymbol{\varepsilon}_k(t).$$

Equation (18.30) is the basis of many array signal processing algorithms [3,35,36] such as MUSIC [37]. Specifically, the covariance matrix of y(t) may be decomposed into two parts:

$$\mathbf{R}_{yy} = E\{\mathbf{y}\mathbf{y}^H\} = \mathbf{A}(\theta)\, \mathbf{S}\, \mathbf{A}^H(\theta) + \sigma^2 \mathbf{I} = \mathbf{R}_s + \mathbf{R}_n, \qquad (18.31)$$

where $\mathbf{R}_s$ and $\mathbf{R}_n$ are the signal and noise covariance matrices, respectively. In particular, $\mathbf{R}_s$ has rank K. In practice, one estimates the covariance matrix from the received signal and performs an eigenvalue decomposition:

$$\hat{\mathbf{R}}_{yy} = \frac{1}{T}\sum_{t=0}^{T-1} \mathbf{y}(t)\, \mathbf{y}^H(t) = \mathbf{U}_s \Lambda_s \mathbf{U}_s^H + \mathbf{U}_n \Lambda_n \mathbf{U}_n^H, \qquad (18.32)$$


where $\Lambda_s = \mathrm{diag}\{\lambda_1, \ldots, \lambda_K\}$ contains the largest K eigenvalues and the columns of $\mathbf{U}_s$ are the corresponding eigenvectors. The K-dimensional subspace spanned by the columns of the $\mathbf{U}_s$ matrix is called the signal subspace. On the other hand, $\Lambda_n = \mathrm{diag}\{\lambda_{K+1}, \ldots, \lambda_N\}$ contains the remaining $N-K$ eigenvalues. The $(N-K)$-dimensional subspace spanned by the columns of the $\mathbf{U}_n$ matrix is called the noise subspace. Defining

$$\mathbf{v}(\theta) = \left[\exp(-j2\pi f_k t_{1,N}(\theta)) \;\; \cdots \;\; \exp(-j2\pi f_k t_{N-1,N}(\theta)) \;\; 1\right]^H \qquad (18.33)$$

as a generic steering vector, the MUSIC method evaluates the function

$$P_{\mathrm{MUSIC}}(\theta) = \frac{\|\mathbf{v}(\theta)\|^2}{\|\mathbf{v}^H(\theta)\, \mathbf{U}_n\|^2}. \qquad (18.34)$$

This MUSIC spectrum will show high peaks at $\theta = \theta_k$, $1 \le k \le K$.
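A compact simulation of the MUSIC procedure of Eqs. (18.32)–(18.34) is sketched below for the common special case of a uniform linear array with half-wavelength spacing, where the steering vector reduces to $a(\theta)_n = \exp(-j\pi n \sin\theta)$; all array and signal parameters here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

N, K, T = 8, 2, 500                        # sensors, sources, snapshots
theta_true = np.deg2rad(np.array([-20.0, 35.0]))

def steer(theta):
    # ULA, half-wavelength spacing: a(theta)_n = exp(-j*pi*n*sin(theta))
    return np.exp(-1j * np.pi * np.arange(N)[:, None] * np.sin(theta))

A = steer(theta_true)                      # N x K steering matrix
S = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
E = 0.1 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
Y = A @ S + E                              # received snapshots

# Sample covariance and eigendecomposition, Eq. (18.32)
R = Y @ Y.conj().T / T
w, U = np.linalg.eigh(R)                   # eigenvalues in ascending order
Un = U[:, : N - K]                         # noise subspace (smallest N - K)

# MUSIC pseudo-spectrum, Eq. (18.34), evaluated over a bearing grid
grid = np.deg2rad(np.arange(-90.0, 90.0, 0.5))
V = steer(grid)
P = np.sum(np.abs(V) ** 2, axis=0) / np.sum(np.abs(Un.conj().T @ V) ** 2, axis=0)

# The K largest local maxima estimate the incidence angles
peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
doa_est = np.sort(np.rad2deg(grid[peaks[np.argsort(P[peaks])[-K:]]]))
print(doa_est)
```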

3.18.5 Source localization algorithms

3.18.5.1 Bayesian source localization based on RSSI

For convenience of discussion, let us rewrite Eq. (18.16) here after a linear transformation of the random variables:

$$Z_n(t) = (Y_n(t) - \mu_n)/\sigma_n \quad \text{and} \quad \tilde\zeta_n(t) = (\zeta_n(t) - \mu_n)/\sigma_n. \qquad (18.35)$$

Then,

$$\begin{bmatrix} Z_1(t) \\ Z_2(t) \\ \vdots \\ Z_N(t) \end{bmatrix} = \begin{bmatrix} G_1/(\sigma_1 \|x_1 - r_1\|^2) & \cdots & G_1/(\sigma_1 \|x_1 - r_K\|^2) \\ G_2/(\sigma_2 \|x_2 - r_1\|^2) & \cdots & G_2/(\sigma_2 \|x_2 - r_K\|^2) \\ \vdots & \ddots & \vdots \\ G_N/(\sigma_N \|x_N - r_1\|^2) & \cdots & G_N/(\sigma_N \|x_N - r_K\|^2) \end{bmatrix} \begin{bmatrix} S_1(t) \\ S_2(t) \\ \vdots \\ S_K(t) \end{bmatrix} + \begin{bmatrix} \tilde\zeta_1(t) \\ \tilde\zeta_2(t) \\ \vdots \\ \tilde\zeta_N(t) \end{bmatrix}, \qquad (18.36)$$

or in matrix notation,

$$\mathbf{z} = \mathbf{H}\mathbf{s} + \tilde{\boldsymbol{\zeta}}. \qquad (18.37)$$

Since $\tilde{\boldsymbol{\zeta}}$ is a normalized Gaussian random vector with zero mean and identity covariance matrix, the likelihood of observing z at the N sensors, given the K source signal energies and source locations $\{r_k, S_k(t);\ 1 \le k \le K\}$, can be expressed as:

$$L(r_k, S_k(t); 1 \le k \le K) = P\{\mathbf{z}\,|\,r_k, S_k(t); 1 \le k \le K\} \propto \exp\left(-\frac{1}{2}(\mathbf{z} - \mathbf{H}\mathbf{s})^T (\mathbf{z} - \mathbf{H}\mathbf{s})\right). \qquad (18.38)$$


A maximum likelihood (ML) estimate of $\{r_k, S_k(t); 1 \le k \le K\}$ may be obtained by maximizing L or, equivalently, minimizing the negative log-likelihood function

$$\ell(r_k, S_k(t); 1 \le k \le K) = -\log L = \|\mathbf{z} - \mathbf{H}\mathbf{s}\|^2. \qquad (18.39)$$

Setting the gradient of $\ell$ with respect to s equal to 0, one may express the estimate of the source signal energy vector as

$$\hat{\mathbf{s}} = \mathbf{H}^\dagger \mathbf{z}, \qquad (18.40)$$

where $\mathbf{H}^\dagger$ is the pseudo-inverse of the H matrix. Substituting Eq. (18.40) into Eq. (18.39), one has

$$\ell(r_k; 1 \le k \le K) = \|(\mathbf{I} - \mathbf{H}\mathbf{H}^\dagger)\,\mathbf{z}\|^2. \qquad (18.41)$$

Equation (18.41) is significant in several ways: (a) The number of unknown parameters is reduced from 3K to 2K, assuming that the dimension of $r_k$ is 2. (b) To minimize $\ell$, the source locations should be chosen such that the (normalized) energy vector z falls as close as possible to the subspace spanned by the columns of the H matrix. In [19], this property is leveraged to derive a multi-resolution projection method for solving for the source locations. In deriving the ML estimates of the source locations, no prior information about the source locations is used. If, as in a tracking scenario, a prior probability of the source locations $p(r_k; 1 \le k \le K)$ is available, then the a posteriori probability may be expressed as:

$$P\{r_k, S_k(t); 1 \le k \le K\,|\,\mathbf{z}\} \propto p(r_k; 1 \le k \le K) \cdot \exp\left(-\frac{1}{2}(\mathbf{z} - \mathbf{H}\mathbf{s})^T (\mathbf{z} - \mathbf{H}\mathbf{s})\right). \qquad (18.42)$$

Maximizing the above expression then leads to the Bayesian estimate of the source locations and the corresponding source emitted energies during $[t - T/2, t + T/2]$.
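A direct, if brute-force, way to exploit Eq. (18.41) is a grid search that evaluates the projection residual $\|(I - HH^\dagger)z\|^2$ at each candidate location. The sketch below treats a hypothetical single-source (K = 1), noiseless case with made-up sensor positions:

```python
import numpy as np

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 0.0]])
G = np.ones(len(sensors))                  # sensor energy gains
sigma = np.ones(len(sensors))              # noise standard deviations
r_true, S_true = np.array([4.0, 6.0]), 50.0

def H_col(r):
    # N x 1 matrix H of Eq. (18.36) for K = 1
    d2 = np.sum((sensors - r) ** 2, axis=1)
    return (G / (sigma * d2))[:, None]

z = (H_col(r_true) * S_true).ravel()       # noiseless normalized energy vector

best, r_hat = np.inf, None
for gx in np.linspace(0.1, 9.9, 99):       # grid avoiding the sensor positions
    for gy in np.linspace(0.1, 9.9, 99):
        H = H_col(np.array([gx, gy]))
        resid = z - H @ (np.linalg.pinv(H) @ z)    # (I - H H^dagger) z
        cost = float(resid @ resid)
        if cost < best:
            best, r_hat = cost, np.array([gx, gy])
print(r_hat)
```

The multi-resolution projection method of [19] refines this idea by searching coarse grids first and zooming into promising regions.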

3.18.5.2 Non-linear least square source localization using RSSI

Assume a single-source scenario (K = 1). If one ignores the noise energy term, Eq. (18.16) can be expressed (using the notation of Eq. (18.35)) as

$$\|x_n - r\|^2 = S \cdot \frac{G_n}{\sigma_n \cdot Z_n(t)}, \quad 1 \le n \le N. \qquad (18.43)$$

Based on this approximated distance, the source location r and the source energy S may be estimated.

3.18.5.2.1 Nonlinear quadratic optimization

Based on Eq. (18.43), one evaluates the ratio

$$\frac{\|x_n - r\|^2}{\|x_m - r\|^2} = \frac{G_n}{G_m} \cdot \frac{\sigma_m \cdot Z_m(t)}{\sigma_n \cdot Z_n(t)} = \kappa_{n,m}^2, \quad n \ne m. \qquad (18.44)$$

After simplification, the above equation reduces to $\|r - c_{n,m}\| = \rho_{n,m}$, where

$$c_{n,m} = \frac{x_n - \kappa_{n,m}^2\, x_m}{1 - \kappa_{n,m}^2} \quad \text{and} \quad \rho_{n,m} = \frac{\kappa_{n,m} \|x_n - x_m\|}{\left|1 - \kappa_{n,m}^2\right|}. \qquad (18.45)$$


For convenience, one may set m = n + 1 and write $c_n$ and $\rho_n$ instead of $c_{n,m}$ and $\rho_{n,m}$. The target location may then be solved for by minimizing a nonlinear cost function:

$$J(r) = \sum_{n=1}^{N-1} \left(\|r - c_n\| - \rho_n\right)^2. \qquad (18.46)$$

In Eq. (18.46), it is assumed that $\kappa_n \ne 1$. The formulation can easily be modified to deal with the situation when $\kappa_n \to 1$.
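The cost J(r) of Eq. (18.46) may be minimized by any standard nonlinear least squares routine; the sketch below uses a plain Gauss-Newton iteration with made-up circle parameters $(c_n, \rho_n)$ that are consistent with a source at (3, 4):

```python
import numpy as np

r_true = np.array([3.0, 4.0])
c = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 9.0], [7.0, 7.0]])   # centers c_n
rho = np.linalg.norm(r_true - c, axis=1)    # noiseless radii rho_n

r = np.array([1.0, 1.0])                    # initial guess
for _ in range(50):
    diff = r - c
    dist = np.linalg.norm(diff, axis=1)
    f = dist - rho                          # residuals of J(r), Eq. (18.46)
    J = diff / dist[:, None]                # Jacobian of the residuals
    step, *_ = np.linalg.lstsq(J, -f, rcond=None)
    r = r + step
print(r)
```

With noisy radii, the iteration converges to the least squares fit rather than the exact source position, and a damped (Levenberg-Marquardt style) step improves robustness.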

3.18.5.2.2 Least square solution

If one ignores the background noise, Eq. (18.45) can be seen as a distance measurement of the unknown source location r. Recall the distance based triangulation method described in Section 3.18.3.1. One may square both sides of Eq. (18.45) using the sensor readings with sensor indices n, n+1 and m, m+1:

$$\|r - c_{n,n+1}\|^2 = \rho_{n,n+1}^2 \;\Rightarrow\; \|r\|^2 = 2 r^T c_{n,n+1} + \rho_{n,n+1}^2 - \|c_{n,n+1}\|^2,$$

$$\|r - c_{m,m+1}\|^2 = \rho_{m,m+1}^2 \;\Rightarrow\; \|r\|^2 = 2 r^T c_{m,m+1} + \rho_{m,m+1}^2 - \|c_{m,m+1}\|^2.$$

After simplification, one has

$$(c_{n,n+1} - c_{m,m+1})^T r = \frac{1}{2}\left(\rho_{m,m+1}^2 - \rho_{n,n+1}^2 + \|c_{n,n+1}\|^2 - \|c_{m,m+1}\|^2\right). \qquad (18.47)$$

Combining the above equations for different index pairs m, n, one may solve for r via an over-determined linear system:

$$C' r = h \;\Rightarrow\; \hat r_{LS} = C'^\dagger h. \qquad (18.48)$$

3.18.5.2.3 Source localization using table look-up

Consider a single-target (K = 1) scenario. The normalized sensor observation vector z(t) can be regarded as a signature vector of a source at location r. Hence, by collecting the corresponding signature vectors for all possible locations r in a sensing field, the source location may be estimated using a table look-up method. To account for variations of source intensity, the signature vector may be normalized to have unity norm.

3.18.5.3 Source localization using time difference of arrival

Denote

$$d_N = \|x_N - r\|. \qquad (18.49)$$

Substituting Eq. (18.49) into Eq. (18.25), with the Nth sensor as the reference and $\delta_n = \delta_{n,N}$, one has

$$\|r - x_n\| = \delta_n \cdot \nu + d_N, \quad n = 1, 2, \ldots, N-1. \qquad (18.50)$$

Squaring both sides of the above equation for indices m and n,

$$\|r\|^2 + \|x_m\|^2 - 2 x_m^T r = (\delta_m \nu)^2 + d_N^2 + 2 d_N \delta_m \nu,$$

$$\|r\|^2 + \|x_n\|^2 - 2 x_n^T r = (\delta_n \nu)^2 + d_N^2 + 2 d_N \delta_n \nu.$$

Subtracting both sides of the above equations yields (for $1 \le m, n \le N-1$)

$$(x_n - x_m)^T r + (\delta_n - \delta_m)\,\nu \cdot d_N = \frac{1}{2}\left(\|x_n\|^2 - \|x_m\|^2 + (\delta_m^2 - \delta_n^2)\,\nu^2\right) = p_{n,m}.$$

Restricting m = n + 1, one may express the above in matrix format:

$$\begin{bmatrix} (x_1 - x_2)^T & (\delta_1 - \delta_2)\,\nu \\ (x_2 - x_3)^T & (\delta_2 - \delta_3)\,\nu \\ \vdots & \vdots \\ (x_{N-1} - x_N)^T & (\delta_{N-1} - \delta_N)\,\nu \end{bmatrix} \begin{bmatrix} r \\ d_N \end{bmatrix} = \begin{bmatrix} p_{1,2} \\ p_{2,3} \\ \vdots \\ p_{N-1,N} \end{bmatrix}. \qquad (18.51)$$

The source location r may be solved from the above equation using a least square estimate, subject to the quadratic equality constraint implied by Eq. (18.49):

$$\begin{bmatrix} r^T & d_N \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix} \begin{bmatrix} r \\ d_N \end{bmatrix} + \begin{bmatrix} -2 x_N^T & 0 \end{bmatrix} \begin{bmatrix} r \\ d_N \end{bmatrix} + \|x_N\|^2 = 0. \qquad (18.52)$$
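Ignoring the quadratic constraint of Eq. (18.52), the linear system of Eq. (18.51) can be solved directly by least squares. The geometry, propagation speed, and delays below are hypothetical:

```python
import numpy as np

v = 340.0                                   # propagation speed (e.g., sound in air)
sensors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0], [10.0, 25.0]])
r_true = np.array([7.0, 9.0])

dist = np.linalg.norm(sensors - r_true, axis=1)
d_N = dist[-1]
delta = (dist - d_N) / v                    # TDoA relative to the Nth sensor

# Rows of Eq. (18.51) for consecutive pairs (n, n+1)
a = sensors[:-1] - sensors[1:]              # (x_n - x_{n+1})^T
b = (delta[:-1] - delta[1:]) * v            # (delta_n - delta_{n+1}) * v
p = 0.5 * (np.sum(sensors[:-1] ** 2, axis=1) - np.sum(sensors[1:] ** 2, axis=1)
           + (delta[1:] ** 2 - delta[:-1] ** 2) * v ** 2)

M = np.column_stack([a, b])                 # unknowns are (r, d_N)
sol, *_ = np.linalg.lstsq(M, p, rcond=None)
print(sol)
```

In the noiseless case the solution reproduces both the source position and the auxiliary distance d_N; with noisy delays, additionally enforcing the constraint of Eq. (18.52) improves the estimate.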

3.18.5.4 Source localization using angle of arrival

When there is only a single source in the sensing field, the angle based triangulation method discussed in Section 3.18.3.2 can be applied to estimate the source location. With more than one source, a data correspondence problem must be resolved. Such a problem has been addressed in terms of N-ocular stereo [38]. Assume that the nth sensor detects K distinct incidence angles from the K sources. Thus, there are N·K incidence angles $\{\theta_{n,k};\ 1 \le n \le N, 1 \le k \le K\}$. If the nth sensor receives a source signal with incidence angle $\theta_{n,k}$, then the source location may be expressed as

$$r_k = x_n + \alpha \cdot u(\theta_{n,k}) = x_n + \alpha \cdot \begin{bmatrix} \sin\theta_{n,k} \\ \cos\theta_{n,k} \end{bmatrix}, \quad \alpha > 0. \qquad (18.53)$$

Similarly, for the mth sensor and the $\ell$th incidence angle, the source location is at $r_\ell = x_m + \beta \cdot u(\theta_{m,\ell})$, $\beta > 0$. Therefore, if these two correspond to the same source (i.e., $r_k = r_\ell$), then one must have a valid solution, namely $\alpha > 0$ and $\beta > 0$, to the following linear system of equations:

$$x_m - x_n = \alpha\, u(\theta_{n,k}) - \beta\, u(\theta_{m,\ell}) = \begin{bmatrix} \sin\theta_{n,k} & \sin\theta_{m,\ell} \\ \cos\theta_{n,k} & \cos\theta_{m,\ell} \end{bmatrix} \begin{bmatrix} \alpha \\ -\beta \end{bmatrix}. \qquad (18.54)$$

The solution can be expressed explicitly as:

$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \frac{1}{\sin(\theta_{m,\ell} - \theta_{n,k})} \begin{bmatrix} -\cos\theta_{m,\ell} & \sin\theta_{m,\ell} \\ -\cos\theta_{n,k} & \sin\theta_{n,k} \end{bmatrix} (x_m - x_n). \qquad (18.55)$$

If both α and β are positive, then $x_n + \alpha\, u(\theta_{n,k})$ is a potential source location. Since it is derived from the observations of two sensors, it is called a bi-ocular solution. Otherwise, the solution is discarded. Substituting the $K^2$ pairs of angles $\{(\theta_{n,k}, \theta_{m,\ell});\ 1 \le k, \ell \le K\}$ into Eq. (18.55), one may obtain up to $K^2$ valid solutions, which are candidates for the K possible source locations, assuming there is no occlusion. The collection of these solutions will be denoted by P(2), indicating that they are consistent with two sensor observations. Now, for a third sensor at $x_q$ with incidence angles $\{\theta_{q,k};\ 1 \le k \le K\}$, one may test whether any potential solution r in P(2) lies on any of the K incidence rays originating from $x_q$. This is easily accomplished by evaluating

$$u^T(\theta_{q,k})(r - x_q) = u^T(\theta_{q,k}) \cdot \{\alpha\, u(\theta_{q,k})\} = \alpha. \qquad (18.56)$$

If the resulting α > 0, then the candidate solution r is promoted into the tri-ocular solution set P(3), being consistent with 3 sensor observations. The above procedure is repeated until P(N) is obtained or the size of the solution set reduces to K; the procedure then terminates and the K source positions are obtained. Two issues need to be addressed when applying the above procedure: (a) A false matching solution may be obtained where many incidence rays intersect but no source exists. Fortunately, each incidence ray passing through a false matching position should have two or more matching points. Thus, a false matching solution may be identified if every incidence ray passing through it has more than one matching point. (b) The azimuth incidence angle estimates may be inaccurate. Hence the intersections may not coincide at the same position. Several remedies for this problem have been discussed in [38]. In practice, one may partition the sensing field into mesh grids whose sizes roughly equal the angle estimation accuracy. The position of a potential N-ocular solution is then registered with the corresponding mesh grid rather than with a specific point. A mesh grid will be included in P(m) if m intersections fall within its range.
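The bi-ocular test of Eqs. (18.54) and (18.55) amounts to a 2 × 2 linear solve followed by a sign check on α and β. A minimal sketch with a made-up two-sensor geometry:

```python
import numpy as np

def u(theta):
    # Unit vector for a bearing measured clockwise from north
    return np.array([np.sin(theta), np.cos(theta)])

x_n, x_m = np.array([0.0, 0.0]), np.array([10.0, 0.0])
r_true = np.array([4.0, 5.0])
theta_n = np.arctan2(*(r_true - x_n))       # bearing observed at sensor n
theta_m = np.arctan2(*(r_true - x_m))       # bearing observed at sensor m

# x_m - x_n = alpha*u(theta_n) - beta*u(theta_m), Eq. (18.54)
M = np.column_stack([u(theta_n), -u(theta_m)])
alpha, beta = np.linalg.solve(M, x_m - x_n)

r_hat = None
if alpha > 0 and beta > 0:                  # accept only a valid intersection
    r_hat = x_n + alpha * u(theta_n)
print(alpha, beta, r_hat)
```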

3.18.6 Target tracking algorithm

So far in this chapter, source localization has been performed based solely on a single snapshot at time t of the readings of the N sensors in the sensing field. If past sensor readings about the same sources can be incorporated, the source location estimates are likely to be much more accurate.

3.18.6.1 Dynamic and observation models

Statistically, tracking is modeled as a sequential Bayesian estimation problem for a dynamic system whose states obey a Markov model. In the context of source localization and tracking, the dynamic system describing a single target moving in a 2D plane has the form

z(t) = [r(t); ṙ(t)] = F z(t − 1) + G w(t),  (18.57)

where

F = [1 0 T0 0; 0 1 0 T0; 0 0 1 0; 0 0 0 1],  G = [T0^2/2 0; 0 T0^2/2; T0 0; 0 T0],

T0 is the time duration between t − 1 and t, and w(t) is a zero-mean Gaussian random vector with covariance σw^2 I which models the acceleration.
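The constant-velocity dynamic model of Eq. (18.57) can be simulated directly. The sketch below is illustrative only; the sampling interval, initial state, and noise level are arbitrary assumptions:

```python
import random

rng = random.Random(0)
T0 = 0.5                       # sampling interval (assumed)
# state z = [x, y, vx, vy]; F and G as in Eq. (18.57)
F = [[1, 0, T0, 0],
     [0, 1, 0, T0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
G = [[T0 ** 2 / 2, 0],
     [0, T0 ** 2 / 2],
     [T0, 0],
     [0, T0]]

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def step(z, sigma_w):
    # z(t) = F z(t-1) + G w(t), with w(t) ~ N(0, sigma_w^2 I)
    w = [rng.gauss(0, sigma_w), rng.gauss(0, sigma_w)]
    return [a + b for a, b in zip(mat_vec(F, z), mat_vec(G, w))]

z = step([0.0, 0.0, 2.0, 1.0], sigma_w=0.0)  # noise-free propagation
print(z)  # [1.0, 0.5, 2.0, 1.0]
```

With zero acceleration noise the position simply advances by T0 times the velocity, as the first two rows of F dictate.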


The signals received at the N sensors, as functions of the source location r and the source signal s(t), give the observation model; examples include Eqs. (18.15) and (18.18). These nonlinear models may be expressed as

y(t) = h(z(t)) + v(t),  (18.58)

where the observation noise v(t) is a zero-mean Gaussian random vector with covariance σv^2 I. Sometimes intermediate observation models are used instead, such as the sensor-to-source distance estimates of Eqs. (18.12) and (18.25), or the source signal incidence angle estimate of Eq. (18.53). In general, the sensor observations y(t), or these intermediate measurements, are highly nonlinear functions of the source position and speed z(t). Alternatively, one may apply the methods discussed in this chapter to estimate the position from the current observation y(t) alone, and express a derived observation equation as:

y(t) = H · z(t) + v(t) = [I 0] z(t) + v(t).  (18.59)

3.18.6.2 Sequential Bayesian estimation

The state transitions described by Eqs. (18.57) and (18.58) form a Markov chain model such that P{z(t)|z(t − 1), ..., z(0)} = P{z(t)|z(t − 1)} and P{y(t)|z(t), z(t − 1), ..., z(0)} = P{y(t)|z(t)}. As such, it is easily verified that

P{z(t), ..., z(0); y(t), ..., y(1)} = P{z(0)} · ∏_{m=1}^{t} P{y(m)|z(m)} · P{z(m)|z(m − 1)}.  (18.60)

Denote Y(t) = {y(t), y(t − 1), ..., y(0)} the observations up to time t, and P{z(t)|Y(t)} the conditional probability of z(t) given Y(t). Given the state estimate (location and speed of the source) at the previous time step, P{z(t − 1)|Y(t − 1)}, the dynamic model Eq. (18.57) allows prediction of the location and speed of the source at time t:

P{z(t)|Y(t − 1)} = ∫ P{z(t)|z(t − 1)} P{z(t − 1)|Y(t − 1)} dz(t − 1),  (18.61)

where

P{z(t)|z(t − 1)} ∼ N(F · z(t − 1), σw^2 G G^T)  (18.62)

is a normal distribution. Similarly,

P{y(t)|z(t)} ∼ N(h(z(t)), σv^2 I).  (18.63)

Applying the Bayes rule, one has

P{z(t)|Y(t)} = P{y(t)|z(t)} P{z(t)|Y(t − 1)} / P{y(t)|Y(t − 1)} = P{y(t)|z(t)} P{z(t)|Y(t − 1)} / ∫ P{y(t)|z(t)} P{z(t)|Y(t − 1)} dz(t).  (18.64)
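To make the recursion of Eqs. (18.61) and (18.64) concrete, the sketch below evaluates one prediction and one update step on a 1D position grid, with hypothetical scalar Gaussian dynamic and observation models (all numerical values are illustrative assumptions, not from the chapter):

```python
import math

def gauss(x, mu, sigma):
    # Gaussian density, used for both the dynamic kernel and the likelihood
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

grid = [i * 0.1 for i in range(101)]          # candidate positions 0.0 .. 10.0
prior = [1.0 / len(grid)] * len(grid)          # uniform P{z(0)}
sigma_w, sigma_v = 0.5, 0.3                    # assumed noise levels

def predict(belief):
    # Eq. (18.61): integrate belief against the dynamic kernel P{z(t)|z(t-1)}
    pred = [sum(gauss(z, zp, sigma_w) * b for zp, b in zip(grid, belief)) for z in grid]
    s = sum(pred)
    return [p / s for p in pred]

def update(belief, y):
    # Eq. (18.64): multiply by the likelihood P{y(t)|z(t)} and normalize
    post = [gauss(y, z, sigma_v) * b for z, b in zip(grid, belief)]
    s = sum(post)
    return [p / s for p in post]

belief = update(predict(prior), y=4.2)         # one full recursion step
z_map = grid[belief.index(max(belief))]        # MAP position estimate
print(round(z_map, 1))                          # 4.2
```

A Kalman filter performs the same recursion in closed form when both models are linear Gaussian; the grid evaluation above works for arbitrary models at the cost of discretization.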


3.18.6.3 Kalman filter

Based on the sequential Bayesian formulation, one may deduce the well-known Kalman filter for the linear observation model (Eq. (18.59)). Specifically, the Kalman filter computes the mean and covariance matrix of the probability distribution P{z(t)|Y(t)} ∼ N(ẑ(t), P(t)), where

ẑ(t) = E{z(t)|Y(t)} and P(t) = E{(z(t) − ẑ(t))(z(t) − ẑ(t))^T}.

3.18.6.3.1 Prediction phase

Given ẑ(t − 1), the dynamic Eq. (18.57) allows one to predict the a priori estimate of the current state:

z(t|t − 1) = F · ẑ(t − 1).  (18.65)

Hence z(t) − z(t|t − 1) = F · z(t − 1) + G · w(t) − F · ẑ(t − 1) = F(z(t − 1) − ẑ(t − 1)) + G · w(t). The corresponding prediction error covariance matrix is:

P(t|t − 1) = E{(z(t) − z(t|t − 1))(z(t) − z(t|t − 1))^T} = F P(t − 1) F^T + σw^2 G G^T.  (18.66)

Equations (18.65) and (18.66) constitute the prediction phase of a Kalman filter.

3.18.6.3.2 Update phase

Given the predicted source position and speed z(t|t − 1), each sensor may compute an expected received signal y(t|t − 1) = H · z(t|t − 1) and the corresponding innovation (prediction error) relative to the actual received signal y(t):

ỹ(t) = y(t) − y(t|t − 1) = y(t) − H · z(t|t − 1).  (18.67)

The corresponding covariance matrix is then

Q(t) = E{ỹ(t)ỹ^T(t)} = H P(t|t − 1) H^T + σv^2 I.  (18.68)

Applying the least squares principle, the optimal Kalman gain matrix may be expressed as:

K(t) = P(t|t − 1) H^T Q^{−1}(t).  (18.69)

Finally, the update equation of the state estimate is:

ẑ(t) = z(t|t − 1) + K(t)ỹ(t) = (I − K(t)H) z(t|t − 1) + K(t) y(t).  (18.70)

In other words, the optimal estimate of the source location and speed is a linear combination of the predicted location and speed and a correction term based on the sensor observations. Moreover, the covariance matrix of the estimation error is also updated:

P(t) = (I − K(t)H) P(t|t − 1).  (18.71)
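One complete prediction-update cycle of Eqs. (18.65) through (18.71), with the derived observation model H = [I 0] of Eq. (18.59), can be sketched in pure Python as follows; the sampling interval, noise levels, initial state, and measurement are illustrative assumptions:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def scale(A, s):
    return [[s * a for a in row] for row in A]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def inv2(A):
    # closed-form inverse of a 2x2 matrix, enough for the 2D innovation Q(t)
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def kalman_step(F, G, H, z_hat, P, y, sig_w, sig_v):
    # Prediction phase, Eqs. (18.65)-(18.66)
    z_pred = matmul(F, z_hat)
    P_pred = matadd(matmul(matmul(F, P), transpose(F)),
                    scale(matmul(G, transpose(G)), sig_w ** 2))
    # Update phase, Eqs. (18.67)-(18.71)
    y_tilde = matadd(y, scale(matmul(H, z_pred), -1.0))        # innovation
    Q = matadd(matmul(matmul(H, P_pred), transpose(H)),
               scale(eye(2), sig_v ** 2))                      # Eq. (18.68)
    K = matmul(matmul(P_pred, transpose(H)), inv2(Q))          # Kalman gain
    z_new = matadd(z_pred, matmul(K, y_tilde))                 # Eq. (18.70)
    P_new = matmul(matadd(eye(4), scale(matmul(K, H), -1.0)), P_pred)  # Eq. (18.71)
    return z_new, P_new

T0 = 1.0
F = [[1, 0, T0, 0], [0, 1, 0, T0], [0, 0, 1, 0], [0, 0, 0, 1]]
G = [[T0 ** 2 / 2, 0], [0, T0 ** 2 / 2], [T0, 0], [0, T0]]
H = [[1, 0, 0, 0], [0, 1, 0, 0]]                 # H = [I 0] of Eq. (18.59)
z_hat, P = [[0.0], [0.0], [1.0], [1.0]], eye(4)
z_hat, P = kalman_step(F, G, H, z_hat, P, y=[[1.1], [0.9]], sig_w=0.1, sig_v=0.5)
print([round(v[0], 2) for v in z_hat])           # [1.09, 0.91, 1.04, 0.96]
```

Note how the updated state is pulled from the prediction [1, 1, 1, 1] toward the measurement [1.1, 0.9], with the velocity components adjusted through the cross terms of P(t|t − 1).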


3.18.6.3.3 Nonlinear observation model

The Kalman filter tracking equations developed so far are based on the linear observation model Eq. (18.59), which facilitates the closed-form expression of the prediction error covariance matrix Eq. (18.68). However, as discussed earlier, many practical observation models are nonlinear in nature. A number of techniques have been developed to deal with this challenge. In an extended Kalman filter (EKF), the nonlinear observation model is replaced by an approximate linear model such that

Q(t) = H̃ P(t|t − 1) H̃^T + σv^2 I,
K(t) = P(t|t − 1) H̃^T Q^{−1}(t),
P(t) = (I − K(t)H̃) P(t|t − 1),

where H̃ = ∇_z h(z)|_{z=z(t|t−1)}. There is also the unscented Kalman filter (UKF) [39], which further enhances the accuracy of the EKF. For extremely nonlinear models, a particle filter [18,40,41] may be applied to facilitate more accurate, albeit more computationally intensive, tracking. Briefly speaking, in a particle filter the probability distribution is approximated by a probability mass function (pmf) evaluated at a set of randomly sampled points (particles); the sequential Bayesian estimation is then carried out on the individual particles.
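A single propagate-weight-resample cycle of a bootstrap particle filter can be sketched for a nonlinear range-only observation h(z) = ||r − x_s|| of the form of Eq. (18.58). This is a toy illustration only: the random-walk dynamics, noise levels, particle count, and sensor geometry are all assumptions, not the cited algorithms of [18,40,41].

```python
import math, random

rng = random.Random(1)
x_s = (0.0, 0.0)                       # sensor position (assumed)
sig_w, sig_v, n_p = 0.2, 0.1, 500      # assumed noise levels and particle count

def h(r):
    # nonlinear observation model: range from the sensor to position r
    return math.hypot(r[0] - x_s[0], r[1] - x_s[1])

# initialize particles around a rough prior guess of the target position
particles = [(3.0 + rng.gauss(0, 1), 4.0 + rng.gauss(0, 1)) for _ in range(n_p)]

def pf_step(particles, y):
    # 1. propagate each particle through the (random-walk) dynamic model
    prop = [(px + rng.gauss(0, sig_w), py + rng.gauss(0, sig_w))
            for px, py in particles]
    # 2. weight by the Gaussian likelihood P{y(t)|z(t)}, cf. Eq. (18.63)
    w = [math.exp(-0.5 * ((y - h(p)) / sig_v) ** 2) for p in prop]
    # 3. resample particles in proportion to their weights
    return rng.choices(prop, weights=w, k=len(prop))

particles = pf_step(particles, y=5.0)   # true range is 5 (target at (3, 4))
est_range = sum(h(p) for p in particles) / len(particles)
print(round(est_range, 1))
```

After one step the surviving particles concentrate near the circle of radius 5 around the sensor; fusing several sensors would then localize the target to the intersection of such circles.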

3.18.7 Conclusion

In this chapter, source localization algorithms in the context of wireless sensor networks are discussed. A distinctive approach of this chapter is to separate the received source signal model from the basic triangulation algorithms. The development also reveals basic relations among several well-studied families of localization algorithms. The significance of tracking algorithms in the localization task is also discussed, and some basic tracking algorithms are reviewed.

Relevant Theory: Signal Processing Theory, Machine Learning, Statistical Signal Processing, and Array Signal Processing
See Vol. 1, Chapter 2, Continuous-Time Signals and Systems
See Vol. 1, Chapter 3, Discrete-Time Signals and Systems
See Vol. 1, Chapter 4, Random Signals and Stochastic Processes
See Vol. 1, Chapter 11, Parametric Estimation
See Vol. 1, Chapter 19, A Tutorial Introduction to Monte Carlo Methods
See this volume, Chapter 5, Distributed Signal Detection
See this volume, Chapter 7, Geolocation—Maps, Measurements, Models, and Methods
See this volume, Chapter 19, Array Processing in the Face of Nonidealities

References

[1] P. Daly, Electron. Commun. Eng. J. 5 (1993) 349–357.
[2] B.W. Parkinson, J.J. Spilker (Eds.), Global Positioning System: Theory and Applications, vol. 1, American Institute of Astronautics and Aeronautics, 1996.


[3] N.L. Owsley, in: S. Haykin (Ed.), Array Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1991.
[4] S. Zhou, P. Willett, IEEE Trans. Signal Process. 55 (2007) 3104–3115.
[5] P. Valin, A. Jouan, E. Bosse, in: Proc. SPIE-1999 Sensor Fusion: Architectures, Algorithms, and Applications III, vol. 3719, Society of Photo-Optical Instrumentation Engineers, Bellingham, WA, USA, Orlando, FL, USA, 1999, pp. 126–138.
[6] G.L. Duckworth, M.L. Frey, C.E. Remer, S. Ritter, G. Vidaver, in: Proc. SPIE, vol. 2344, The International Society for Optical Engineering, 1995, pp. 16–29.
[7] C. Friedrich, U. Wegler, Geophys. Res. Lett. 32 (2005) L14312.
[8] D. Gajewski, K. Sommer, C. Vanelle, R. Patzig, Geophysics 74 (2009) WB55–WB61.
[9] J. Zhao, Seismic signal processing for near-field source localization, Ph.D. Dissertation, University of California, Los Angeles, 2007.
[10] R.R. Ramirez, Scholarpedia 3 (11) (2008) 1073.
[11] B.C. Basu, S.A. Pentland, in: Proceedings of ICASSP’01, vol. 5, pp. 3361–3364.
[12] P. Aarabi, A. Mahdavi, in: Proceedings of ICASSP’02, IEEE, 2002, pp. 273–276.
[13] J.C. Chen, K. Yao, R.E. Hudson, IEEE Signal Process. Mag. 19 (2002) 30–39.
[14] J. Chen, R.E. Hudson, K. Yao, IEEE Trans. Signal Process. 50 (2002) 1843–1854.
[15] Y.H. Hu, X. Sheng, D. Li, in: IEEE Workshop on Multimedia Signal Processing, IEEE, St. Thomas, Virgin Islands, 2002.
[16] D. Li, Y.H. Hu, EURASIP J. Appl. Signal Process. (2003) 321–337.
[17] X. Sheng, Y.H. Hu, in: Proceedings of the International Symposium on Information Processing in Sensor Networks (IPSN’03), Springer-Verlag, Palo Alto, CA, 2003, pp. 285–300.
[18] X. Sheng, Y.H. Hu, in: Proceedings of ICASSP’04, vol. 3, IEEE, Montreal, Canada, 2004, pp. 972–975.
[19] X. Sheng, Y.H. Hu, IEEE Trans. Signal Process. 53 (2005) 44–53.
[20] S. Amari, A. Cichocki, H.H. Yang, in: Proceedings of Advances in Neural Information Processing Systems (NIPS’96), MIT Press, 1996, pp. 757–763.
[21] J.F. Cardoso, Proc. IEEE 86 (1998) 2009–2025.
[22] S.C. Douglas, in: Y.H. Hu, J.N. Hwang (Eds.), Handbook of Neural Network Signal Processing, CRC Press, Boca Raton, FL, 2001 (Chapter 7).
[23] G. Gelle, M. Colas, G. Delaunay, Mech. Syst. Signal Process. 14 (2000) 427–442.
[24] K.-C. Chang, C.-Y. Chong, Y. Bar-Shalom, IEEE Trans. Automatic Control 31 (1986) 889–897.
[25] M. Ito, S. Tsujimichi, Y. Kosuge, in: Proceedings of the International Conference on Industrial Electronics, Control and Instrumentation, New Orleans, LA, vol. 3, pp. 1260–1264.
[26] C. Savarese, J.M. Rabaey, J. Beutel, in: Proceedings of ICASSP’2001, IEEE, Salt Lake City, UT, 2001, pp. 2676–2679.
[27] V. Seshadri, G. Zaruba, M.A. Huber, in: Proceedings of the International Conference on Pervasive Computing and Communications (PerCom’05), IEEE, 2005, pp. 75–84.
[28] C.H. Knapp, G.C. Carter, IEEE Trans. Acoust. Speech Signal Process. 24 (1976) 320–327.
[29] G.C. Carter (Ed.), Coherence and Time Delay Estimation, IEEE Press, 1993.
[30] Y.T. Chan, R.V. Hattin, J.B. Plant, IEEE Trans. Acoust. Speech Signal Process. 26 (1978) 217–222.
[31] T.L. Tung, K. Yao, C.W. Reed, R.E. Hudson, D. Chen, J.C. Chen, in: Proceedings of SPIE, vol. 3807, The International Society for Optical Engineering, 1999, pp. 220–233.
[32] J. Benesty, J. Chen, Y. Huang, IEEE Trans. Speech Audio Process. 1542 (2004) 509–519.
[33] F. Gustafsson, F. Gunnarsson, in: Proceedings of ICASSP’03, vol. 6, IEEE, 2003, pp. 553–556.
[34] L. Yang, K.C. Ho, IEEE Trans. Signal Process. 57 (2009) 4598–4615.
[35] H. Krim, M. Viberg, IEEE Signal Process. Mag. 1583 (1996) 67–94.
[36] E. Tuncer, B. Friedlander (Eds.), Classical and Modern Direction-of-Arrival Estimation, Academic Press, 2010.


[37] P. Stoica, A. Nehorai, IEEE Trans. Acoust. Speech Signal Process. 37 (1989) 720–741.
[38] T. Sogo, H. Ishiguro, M.M. Trivedi, in: Proceedings of IEEE Workshop Omnidirectional Vision, IEEE, 2000, pp. 153–160.
[39] S.J. Julier, J.K. Uhlmann, Proc. IEEE 92 (2004) 401–422.
[40] M. Arulampalam, S. Maskell, N. Gordon, T. Clapp, IEEE Trans. Signal Process. 50 (2002) 174–188.
[41] X. Sheng, Y.H. Hu, in: Proceedings of 4th International Symposium on Information Processing in Sensor Networks (IPSN ’05), IEEE, Los Angeles, CA, 2005.