
Pervasive and Mobile Computing 5 (2009) 165–181

Target localization in camera wireless networks

Ryan Farrell a, Roberto Garcia c, Dennis Lucarelli b, Andreas Terzis c,∗, I-Jeng Wang b

a Computer Science Department, University of Maryland, College Park, United States
b Applied Physics Lab, Johns Hopkins University, United States
c Computer Science Department, Johns Hopkins University, United States

∗ Corresponding author. Tel.: +1 4105165847. E-mail address: [email protected] (A. Terzis).

Article history:
Received 16 October 2007
Received in revised form 1 March 2008
Accepted 7 April 2008
Available online 22 April 2008

Keywords: Multi-modal sensor networks; Camera sensor networks; Target detection and localization

Abstract

Target localization is an application of wireless sensor networks with wide applicability in homeland security scenarios including surveillance and asset protection. In this paper we present a novel sensor network that localizes targets with the help of two modalities: cameras and non-imaging sensors. A set of two cameras is initially used to localize the motes of a wireless sensor network. The motes subsequently collaborate to estimate the location of a target using their non-imaging sensors. Our results show that the combination of imaging and non-imaging modalities successfully achieves the dual goal of self- and target-localization. Specifically, we found through simulation and experimental validation that cameras can localize motes deployed in a 100 m × 100 m area with a few cm of error. Moreover, a network of motes equipped with magnetometers can, once localized, estimate the location of magnetic targets to within a few cm.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

The prevailing notion of a sensor network is a homogeneous collection of communicating devices capable of sensing their environment. This architecture offers several benefits including simplicity and robustness to individual node failures. While a number of heterogeneous designs have also been proposed that incorporate multiple classes of devices, they are, by and large, intended to address data storage and dissemination objectives. An emerging concept is a sensor network that exploits heterogeneity in sensing, especially by using cameras (e.g. [1]), in which the weaknesses of one sensing modality are overcome by the strengths of another.

The use of networked cameras is gaining momentum for key applications such as surveillance, scientific monitoring, location-aware computing and responsive environments. This is primarily because it is far easier for users to glean context and detail from imagery than from other sensing modalities. Moreover, many applications, from homeland security to bird watching, require an image to perform the sensing task. At the same time, the drawbacks of ad-hoc vision networks are well known: cameras consume considerable power and bandwidth and, despite some recent hardware innovations (e.g. [2]), these limitations are still largely unresolved. Thus, cameras should be engaged only when absolutely necessary.

This paper presents a complete system that integrates vision networks with a non-imaging mote network for the purpose of target localization and tracking. Specifically, the system initially uses two Pan-Tilt-Zoom (PTZ) cameras to localize the network's motes. Once localized, motes employ non-imaging sensors (e.g. magnetometers) to detect the existence and estimate the location of targets traversing the deployment area. The motes subsequently task the cameras to capture images of those targets. We argue that the combined system offers performance and efficiencies that cannot be achieved by a single modality working in isolation. Our solution is also self-contained in the sense that all operational aspects of the system are





handled by the two sensing modalities and no additional elements are needed to perform target detection, acquisition and verification.

The successful integration of the camera and mote networks requires that the sensor field be located within the field of view of an ad-hoc deployment of cameras. One solution would be to localize the nodes through other means and solve for the transformation between the two coordinate systems. Alternatively, we solve for all location variables (mote locations and camera orientations/locations) simultaneously in one efficient algorithm. This approach establishes a common frame of reference, such that the location of objects detected by a camera can be described in the spatial terms of the network and vice-versa. The use of PTZ cameras allows the entire sensor network space to be mapped into the camera's pan-tilt parameterization.

Arguably the greatest advantage of the system we propose is the ability to rapidly deploy a wireless sensor network in an ad-hoc fashion without regard to camera field-of-view. Moreover, the mote network serves to limit the use of the camera system, which is used only during the initial phase of the network's deployment. In addition to simple cuing, target classification by the mote network is the primary means of preserving power. Only high-probability targets are engaged by the camera network.

The rest of the paper is structured as follows. Section 2 reviews previous work in the areas of self- and target-localization. In Section 3 we present an outline of the proposed framework. Section 4 describes how the network can self-localize with the help of two PTZ cameras, while Section 5 presents the target localization algorithms. Both aspects of the localization problem are evaluated in Section 6, which presents results from simulations and experiments with mote hardware. Finally, we close in Section 7 with a summary.

2. Related work

The work related to our system falls under two general categories: work in the area of target detection and localization, and work in the area of self-localization, especially using imaging sensors.

Target detection is one of the compelling applications that drove much of the initial research in sensor networks. The pioneering work by Stankovic et al. [3,4] and Arora et al. [5], among others, proved that sensor networks can effectively detect targets over large geographic areas. While those networks used magnetometers, to the best of our knowledge, they were used only for target detection and not localization. On the other hand, we use information collected from the magnetometers to estimate the distance and the heading of ferrous targets as they move over the sensor field.

Oh et al. have recently used magnetometers to localize targets in a multiple-target tracking algorithm [6]. That work assumed that mote locations are known in advance. Then, the target's location is estimated as the centroid of the locations of all the motes that detected the target, normalized by the signal strength measured by each of the motes. This approach has low accuracy because it assumes that the strength of the magnetic field is only a function of the distance between the target and the magnetometer, an assumption that is not valid as we show in Section 5.1.

More recent work by Taylor et al. also proposed a method to localize the nodes in a sensor network while concurrently tracking the location of multiple targets [7]. There are two main differences between that work and our approach. First, Taylor et al.
localize the network using one or more co-operating mobile targets that periodically emit radio and ultrasound signals. This level of mobility enables the system to collect a large number of ranging estimates that can be used to localize the nodes. On the other hand, our proposal is range-free since it requires only angular information from a small number of static cameras. Second, we detect the location of uncooperative targets through the changes they generate in the ambient magnetic field.

Several innovative approaches have been suggested for the problem of self-localization. Various technologies, such as ultrasound ranging, acoustic time of flight, radio signal strength and interferometry, are used to acquire good pairwise distance estimates, often incorporating the acquired range measurements into a planar graph (see [8] and surveys [9,10]). Poduri et al. however showed that when nodes are placed in three dimensions, the number of neighbors of an average node grows dramatically [11]. This observation suggests the need to investigate methods which do not rely on pairwise distances.

To address the limitations of peer-to-peer localization techniques, a number of proposals have investigated the use of cameras to localize a sensor network. Funiak et al. recently proposed a distributed camera localization algorithm to retrieve the locations and poses of ad-hoc placed cameras in a sensor network by tracking a moving object [12]. Unlike this previous work, we use cameras to estimate the locations of static nodes in the sensor field in addition to their own locations. Along the same lines, Rahimi et al. used a network of cameras to track the locations of moving targets while at the same time calibrating the camera network [13]. In our case cameras are used only during the initial phase to estimate mote locations. Furthermore, instead of tracking the trajectory of the target using the cameras, we use non-imaging sensors to estimate the location of magnetic targets. The proposed localization algorithm is versatile enough to run at a central location or in a distributed fashion on the actual motes.

In the Computer Vision community, the problem of simultaneously localizing both camera and objects or scenery is frequently referred to as Structure from Motion. Representative work includes Hartley's eight-point algorithm [14,15], and Dellaert et al.'s approach [16] which avoids explicit correspondence. These approaches are generally applied to fixed field-of-view cameras, either a stereo pair of static cameras or a camera in motion. As our approach utilizes pan-tilt-zoom cameras to provide a full field of view, we can handle arbitrary, ad-hoc deployments, even when the sensors are scattered on all sides of the camera.


Lymberopoulos et al. recently considered the use of epipolar geometry to perform localization, using LEDs to assist the visual detection of sensors [17]. We also use LEDs connected to the motes in our network to indicate their presence to the associated cameras. However, unlike the solution proposed by Lymberopoulos et al., we do not require any distance information from the cameras to estimate the mote locations. Even though the only information available in our case is the angular locations of the motes relative to each camera, our solution has comparable localization error.

Probably the work most closely related to ours is the Spotlight system [18]. This system employs an asymmetric architecture in which motes do not require any additional hardware for localization, while a single Spotlight device uses a steerable laser light source to illuminate the deployed motes. The primary difference between our approach and that work lies in our focus on truly ad-hoc deployment. We assume that the cameras will have unknown location and orientation, whereas the Spotlight system relies on accurate knowledge of both camera position and orientation. Furthermore, individual motes in Spotlight must know when the laser light source is supposed to illuminate them, a design that requires time synchronization.

3. Outline

The proposed system consists of a network of motes connected to non-imaging sensors, specifically magnetometers, that are used to detect the presence and estimate the location of ferrous targets. However, target localization requires that mote locations are known in advance. In our system the task of mote localization is undertaken by a number of Pan-Tilt-Zoom (PTZ) cameras that scan the deployment area to detect the locations of the network's motes. In this section we provide an overview of the proposed system and outline its components.

• Pre-deployment phase: The intrinsic parameters of the cameras and the magnetometers are calibrated prior to deployment. We present a procedure for calibrating the magnetometers in Section 5.4.

• Initialization phase: The network is localized using two cameras and a base station performing non-linear least squares optimization. At the end of this phase the cameras learn their location and orientation and the motes know their locations. Section 4 describes the proposed method.

• Target localization: At the end of the initialization phase the system is operational and able to detect the presence and estimate the location of targets traveling through the deployment area. Section 5 describes the target localization algorithm we developed for the sensing modality we use.

We made a conscious decision to use the cameras only during the initialization phase and after the network's motes have localized a target, to reduce the energy required to power the cameras. This decision was motivated by the high energy consumption of PTZ cameras and the need to rapidly deploy this system using only battery power. The same approach of waking up the cameras on demand was suggested by Kulkarni et al. in [1]. As part of our previous work, we investigated the benefits of combining modalities for target localization [19]. Specifically, we showed that feedback from the cameras can be used to calibrate the motes' magnetometers and thereby improve localization accuracy.

4. Optical self-localization

We describe the underlying theoretical basis of our self-localization approach and discuss potential numerical techniques for solving the system of multivariate nonlinear equations resulting from our analysis.

4.1. Deriving the nonlinear formulation

Let us consider a planar network of wireless sensor nodes, N = {p1, p2, . . . , pn}, where pi = (pxi, pyi, 0) denotes the location of node i. For the cameras, we must describe both position and orientation. With each camera j, we therefore associate a position cj = (cxj, cyj, czj) and a 3 degree-of-freedom (DOF) rotation Θj = (θxj, θyj, θzj). As we focus on ad-hoc deployment, the values for pi, cj, and Θj are all unknown. We remove the restriction of having all the nodes on the same plane in Section 4.2 and explore the benefits of having multiple cameras in Section 4.3, but for ease of exposition we first present the solution with two cameras and nodes on the same plane.

Without any notion of geodetic location or scale, the best that we can hope to recover is a local and relative coordinate frame. Rather than pick some arbitrary scale, we have chosen to normalize the coordinate frame based on the distance between the two cameras. The camera locations are thus c1 = (0, 0, cz1) and c2 = (1, 0, cz2). The camera elevations remain unknown, as do the orientation parameters. Fig. 1 depicts the normalized coordinate frame (note that the nodes are not constrained to lie within the unit square).

The only values we are able to measure are the angular location of a given node relative to each camera. We parameterize the camera's pan-tilt space with coordinates (α, β), in radians. Camera j observes node i at location (αij, βij), indicating that camera j would need to pan by αij and tilt by βij in order to aim directly at node i. We now derive a relationship between these measurements and the unknown values we wish to determine.

In considering the relationship between camera j and node i, for convenience we define the offset vector vij = pi − cj. As


Fig. 1. Normalized Coordinate Frame. Illustrates the sensor network N = {p1 , p2 , . . . , pn }, pi = (pxi , pyi , 0), with two active cameras, respectively located at c1 and c2 . Geodetic locations are normalized such that the cameras are separated by unit distance in the plane, thus c1 = (0, 0, cz1 ) and c2 = (1, 0, cz2 ). The illustration also depicts node pi localized at pan-tilt coordinates (αi1 , βi1 ) relative to the first camera.

a vector can also be decomposed into the product of its magnitude with a unit vector in the same direction, we can write vij = ‖vij‖ · v̂ij. The unit vector v̂ij can be expressed in two ways. The first expression is obtained by simply dividing vij by its magnitude, ‖vij‖, yielding

    v̂ij = vij / ‖vij‖ = (pi − cj) / ‖vij‖ = (1/‖vij‖) (pxi − cxj, pyi − 0, 0 − czj)ᵀ.    (1)

The second expression is found by rotating the optical axis of camera j such that it passes through pi. The standard camera model used in Computer Graphics (e.g. [20], p. 434) defines the local coordinate frame with the camera looking downward along the z-axis at an image plane parallel to the x–y plane. The optical axis thus points in the −ẑ direction in the local coordinate frame. To transform from the camera's local frame into world coordinates, we must apply the three-axis rotation RΘj defined by the angles Θj. To now point the optical axis at pi, we pan and tilt the camera with the rotations Rαij and Rβij, respectively corresponding to the angles αij and βij. At length, we find the second expression

    v̂ij = Rαij Rβij · RΘj · (−ẑ)    (2)

where the first two factors form the pan-tilt rotation, RΘj maps the camera's local frame into world coordinates, and −ẑ is the optical axis in the local frame.
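To make Eq. (2) concrete, the following sketch (ours, not from the paper) composes the pan, tilt and orientation rotations numerically. The axis conventions (pan about the z-axis, tilt about the x-axis, and Θj interpreted as X-Y-Z Euler angles) are illustrative assumptions; the paper does not fix them explicitly.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def predicted_direction(alpha, beta, theta):
    """Eq. (2): v_hat_ij = R_alpha R_beta . R_Theta . (-z_hat).

    alpha, beta -- pan/tilt angles (radians) at which camera j sees node i.
    theta       -- (theta_x, theta_y, theta_z), the camera's 3-DOF orientation.
    Axis conventions here are illustrative assumptions, not the paper's.
    """
    R_theta = rot_z(theta[2]) @ rot_y(theta[1]) @ rot_x(theta[0])  # local -> world
    return rot_z(alpha) @ rot_x(beta) @ R_theta @ np.array([0.0, 0.0, -1.0])
```

Equating this predicted direction with the geometric direction (pi − cj)/‖vij‖ of Eq. (1) yields the constraints of Eq. (3) below.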

Now, we equate these two expressions for v̂ij. Merging Eq. (1), in terms of pi and cj, with Eq. (2), in terms of αij, βij and Θj, we derive a relationship between the observations and the unknowns we wish to solve for:

    (1/‖vij‖) (pxi − cxj, pyi − 0, 0 − czj)ᵀ = Rαij Rβij · RΘj · (0, 0, −1)ᵀ    (3)

where ‖vij‖² = (pxi − cxj)² + pyi² + czj². This relationship embodies three constraints per node-camera pair, one each in x, y and z. As only two measurements are used, however, the three constraints cannot be independent. So, each node-camera pair contributes only two to the rank of the system, for a total rank of 4n. We have 2n + 8 variables or unknowns to solve for: elevation and 3-DOF orientation (4 parameters) for each camera, and the 2-D position of each of the n nodes. For the system to be sufficiently constrained, we must have 4n ≥ 2n + 8, which yields n ≥ 4. Thus, measurements from at least 4 nodes must be used or the resulting system of nonlinear equations (constraints derived as the components of Eq. (3)) will be under-determined. However, as we will see in Section 6.1, using larger numbers of nodes is preferable, as it helps mitigate the presence of noise in the measurements, greatly improving localization accuracy.

So far, we have derived a system of geometric constraints that govern the relationship between the relative configuration of nodes and cameras and the angular locations of the nodes measured by each camera. The localization problem then becomes a problem in nonlinear optimization. An analytical expression of the system's Jacobian was obtained and used to instantiate a Newton-Raphson solver. As with other techniques, Newton's method is prone to convergence at local extrema depending on the initial search location. Our implementation is therefore based on a globally convergent Newton's approach from Numerical Recipes [21] which uses multiple restarts and attempts to backtrack if it reaches a non-global optimum.

4.2. Extending to three dimensions

Thus far, we have maintained the assumption that all nodes in the network lie in some common plane, in our normalized coordinate frame, that is ∀i, pzi = 0. While this may suffice or even be a good approximation for most scenarios, in general,


we would like to remove this restriction. Doing so requires minimal change to our model, simply making pzi an unknown, which modifies Eq. (3) to produce

    (1/‖vij‖) (pxi − cxj, pyi − 0, pzi − czj)ᵀ = Rαij Rβij · RΘj · (0, 0, −1)ᵀ    (4)

where ‖vij‖² = (pxi − cxj)² + pyi² + (pzi − czj)². As before, we would still have 4n measurements, only now we wish to solve for 3n + 8 variables. This implies that we must have measurements for n ≥ 8 nodes before the system will be determined.

4.3. Extending to multiple cameras

Thus far, we have been working under the assumption of just two cameras, c1 and c2. While our approach is based on solving the nonlinear system of equations derived from the angular locations observed in a pair of pan-tilt cameras, it is not inherently limited to two cameras. While a system with many cameras would require consideration of practical issues such as node and camera densities, range of visibility, etc., it has the potential for scalable localization of much larger networks. The primary challenge, of course, is in ''stitching'' together all the pairwise coordinate systems into a single global coordinate frame. While we have considered techniques such as least squares or absolute orientation [22] for constructing a global frame, we have presently deferred this to future work. We feel that a more accurate and robust technique should be developed which incorporates not just the estimated local coordinate frames, but the local measurements as well. While additional cameras will likely increase localization accuracy, we have not yet demonstrated this experimentally.

5. Target localization

After the network has self-localized with the help of the cameras, it can estimate the location of targets moving through the sensor field using its non-imaging sensors. A number of different sensing modalities can be used for this purpose, such as acoustic or seismic sensors. We chose to use magnetometers, which are sensors that detect the presence of ferrous targets through the changes they create to the earth's magnetic field. Our choice was based on a number of factors. First, it has been proven experimentally that magnetometers can detect the presence of metal objects at distances up to tens of feet [5,4]. Second, magnetometers can implicitly focus only on 'interesting' objects moving through the sensor field (e.g. persons carrying large metallic objects). Finally, magnetometers can provide a rich set of information about the target that can be used for classification. For example, the authors of [23] outline an application which uses magnetometers to classify vehicles based on their magnetic signatures.

5.1. Magnetic signal strength model

Because we use magnetometers to detect and localize targets, we model these targets as moving magnets. Note that the ferrous targets we are interested in do not have to carry actual magnets to be detected, because the changes they generate to the earth's magnetic field can be detected in the same way. At a high level, the magnetometers measure the strength and the direction of the magnetic field generated by the target and translate those measurements to a heading and distance. This translation requires a model that describes the magnetic field at an arbitrary location in space around the magnet. For this purpose, we adopted the model described in [24] (p. 246), which describes the strength of the magnetic field created by a magnetic dipole m at distance r and angles θ, φ from the location of the dipole, as:

    B(r⃗) = (µ0 m / 4π r³) (2 cos θ r̂ + sin θ θ̂) = Br + Bθ    (5)

where r̂, θ̂ are unit vectors in the direction of the radius r and the angle θ respectively. The parameter µ0 is the permeability of free space, while m describes the dipole's magnetic moment, which indicates the dipole's relative strength. As Fig. 2 illustrates, B(r⃗) can be described as the sum of two vectors: Br, which is parallel to the radius connecting the magnet to point (x, y, z), and Bθ, which is perpendicular to Br. Note that Eq. (5) does not depend on φ since the magnetic field is uniform at all points at distance r and angle θ.

5.2. Model validation

We validated the applicability of Eq. (5) by comparing its predictions with measurements collected by a MicaZ mote [25] equipped with the MTS310 sensor board from Crossbow [26]. The MTS310 sensor board includes an HMC1002 2-axis magnetometer manufactured by Honeywell [27].


Fig. 2. Magnetic field of an ideal magnetic dipole m pointing in the z-direction.

Fig. 3. Voltage differential generated by the HMC1002 magnetometer as a function of the applied magnetic field strength.

Each of the two axes of this magneto-resistive sensor is a Wheatstone bridge whose resistance changes according to the magnetic field applied to it. This change in resistance causes a change in the voltage differential of the bridge, which can be measured by the mote's Analog to Digital Converter (ADC) circuit. According to the manufacturer's specification, the recorded voltage differential ΔV generated by the magnetometer is equal to:

    ΔV = s · B · Vcc + o    (6)

where s is the magnetometer's sensitivity (in mV/V/G), B is the strength of the applied magnetic field (in G), Vcc is the voltage supplied to the magnetometer, and o is the bridge offset (in mV). While we can measure ΔV directly, we are interested in the magnetic field strength B. However, to derive B we need to estimate the values of s and o, which are device-specific. To do so, we placed a MicaZ with its MTS310 sensor board inside a Helmholtz coil, which is essentially a copper coil attached to an electrical source. As current passes through the coil, it generates a uniform magnetic field that is perpendicular to the coil's plane. The mote was placed inside the coil such that the magnetic field was aligned with each of the sensor's axes. We then measured the generated magnetic field B using a calibrated industrial magnetometer and recorded the ΔV values from the magnetometer's X- and Y-axis for different values of B while keeping Vcc constant. The relationship between B and ΔV for the Y-axis is plotted in Fig. 3. It is clear from this figure that ΔV and B have a linear relationship as described in Eq. (6). Moreover, the parameters of this linear relationship can be derived by linear interpolation of the collected data points.¹ Next, we kept the strength B of the magnetic field constant while supplying a variable Vcc. The results from Fig. 4 verify that ΔV is indeed linearly related to Vcc.
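As an illustration of this calibration step, the sketch below (ours; the data values are placeholders, not the measurements from our Helmholtz-coil experiment) recovers s and o of Eq. (6) with an ordinary least-squares line fit.

```python
import numpy as np

# Placeholder Helmholtz-coil data: applied field B (G) and measured bridge
# output dV (mV), recorded at a fixed supply voltage Vcc (V).
B = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
dV = np.array([-6.1, -2.9, 0.2, 3.4, 6.6])
Vcc = 3.0

# Eq. (6), dV = s * B * Vcc + o, is a straight line in (B * Vcc, dV):
s, o = np.polyfit(B * Vcc, dV, 1)   # slope = sensitivity s (mV/V/G), intercept = offset o (mV)
print(f"s = {s:.3f} mV/V/G, o = {o:.3f} mV")
```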

¹ The HMC1002 magnetometer actually provides circuitry for self-calibration, in which magnetic fields of known magnitude are generated that can be used to derive s and o using linear interpolation. Unfortunately, the MTS310 board that we used did not expose this functionality and thus we had to resort to calibrating the sensor using the Helmholtz coil.


Fig. 4. ΔV as a function of supplied voltage Vcc.

Fig. 5. Recorded magnetic field strength as a function of distance between magnetometer and magnetic target.

Given the derived values of s and o we can estimate B from ΔV for any magnetic field applied to the magnetometer. Specifically, we derive the Bx and By values for the magnetometer's two axes and then calculate the magnitude of the magnetic field B as B = √(Bx² + By²), since the two axes are perpendicular to each other.
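A minimal sketch of this conversion, assuming per-axis calibration constants (sx, ox) and (sy, oy) obtained as above (all values hypothetical):

```python
import numpy as np

def field_magnitude(dV_x, dV_y, Vcc, sx, ox, sy, oy):
    """Invert Eq. (6) per axis and combine the components: B = sqrt(Bx^2 + By^2)."""
    Bx = (dV_x - ox) / (sx * Vcc)
    By = (dV_y - oy) / (sy * Vcc)
    return np.hypot(Bx, By)
```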

We performed two experiments to test the validity of Eq. (5). First, we varied the distance r between the MicaZ and a Neodymium magnet with a nominal strength of 5000 G, while keeping θ constant. Fig. 5 presents the results of this experiment. It is evident that field strength decays with the cube of the distance r, in accordance with Eq. (5).

For the second experiment, we kept r constant while rotating the magnet around the magnetometer in 22.5° increments. Fig. 6 plots the raw magnetometer readings (i.e. √(ΔVY² + ΔVX²)) collected at the different angles, coupled with the values predicted by Eq. (5). It is clear that there is a good match between the model and the experimental data across all values of θ.
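For reference, the dipole model of Eq. (5) is straightforward to evaluate; the sketch below (ours, not part of the paper's code) is the kind of forward model used to generate synthetic magnetometer readings in the simulations of Section 6.2.1. The dipole moment value supplied by a caller is a placeholder.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7   # permeability of free space (T m/A)

def dipole_field(m, r, theta):
    """Eq. (5): radial and tangential components of an ideal dipole's field
    at distance r and angle theta from the dipole axis (independent of phi)."""
    k = MU0 * m / (4 * np.pi * r**3)
    return 2 * k * np.cos(theta), k * np.sin(theta)      # (B_r, B_theta)

def field_strength(m, r, theta):
    B_r, B_theta = dipole_field(m, r, theta)
    return np.hypot(B_r, B_theta)   # = k * sqrt(3*cos(theta)**2 + 1), decays as 1/r^3
```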

5.3. Simplified localization algorithm

We use Eq. (5) to estimate the location of a magnetic target located inside a lattice of motes Mi, d distance units apart from each other. Such a lattice with four magnetometers and one magnetic target is illustrated in Fig. 7. For now, we assume that all motes are arranged in such a way that their Y-axes are all aligned and point North. The exact direction is not important, but rather all magnetometers' directions must be aligned. We will remove this restriction in Section 5.4.

The strength of the magnetic field recorded at mote Mi has two components, recorded by the magnetometer's X- and Y-axes, respectively. From Fig. 7 one can see that Xi is the sum of Xri and Xθi, which are the projections of Bri and Bθi on the X-axis:

    Xi = Xri + Xθi    (7)
    Xi = Bri sin(φ + θi) + Bθi cos(φ + θi)    (8)


Fig. 6. Measured magnetic field strength as a function of the angle between the device and a magnetic target. Also shown is the predicted field strength according to Eq. (5).

Fig. 7. Magnetic target and magnetic field strength measured at magnetometers M1 and M2 .

    Xi = Bri (sin φ cos θi + cos φ sin θi) + Bθi (cos φ cos θi − sin φ sin θi)    (9)
    Xi = 2Ci cos θi (sin φ cos θi + cos φ sin θi) + Ci sin θi (cos φ cos θi − sin φ sin θi)    (10)
    ⋮
    Xi = (Ci/2) (3 sin(φ + 2θi) + sin φ)    (11)

where Ci = µ0 m / (4π ri³). Likewise, we can express Yi as follows:

    Yi = Yri − Yθi    (12)
    Yi = Bri cos(φ + θi) − Bθi sin(φ + θi)    (13)
    ⋮
    Yi = (Ci/2) (3 cos(φ + 2θi) + cos φ).    (14)


Fig. 8. Generalized target detection problem.

Since each mote measures its own Xi, Yi, Eqs. (11) and (14) offer four equations for each mote pair. We have, however, five unknowns: r1, r2, φ, θ1, θ2. Fortunately, there is another relationship that we can utilize:

    d² = r1² + r2² − 2 r1 r2 cos(θ2 − θ1)    (15)

which is none other than the generalized Pythagorean theorem for triangle MM1M2. Analogous sets of equations can be written for all other mote pairs that detect the magnetic target M. We found it helpful in practice to add explicit constraints to ensure that the location corresponding to M1 + r⃗1 is the same as that of M2 + r⃗2. To derive these constraints, we simply equate the X- and Y-components as follows:

    r1 sin(φ + θ1) = r2 sin(φ + θ2)    (16)
    d − r1 cos(φ + θ1) = −r2 cos(φ + θ2).    (17)

For k magnetometers detecting the target, we thus have 2k + 3·(k choose 2) nonlinear equations: two equations for the X and Y magnetometer readings of each node and three equations (the generalized Pythagorean equation and the two ''consistency'' constraints) for each magnetometer pair.

5.4. Generalized localization problem

The algorithm presented in the previous section assumes that all the motes are aligned. As it turns out, we can remove this limitation and support placing motes at arbitrary headings φi as long as these headings are known. In turn, it is possible to estimate a mote's orientation by taking advantage of the earth's magnetic field and treating the magnetometer as an electronic compass (as a matter of fact, one of the intended uses of the HMC1002 is in compassing applications). Assuming that the earth's magnetic field points north (with some offset depending on the location in which the system is deployed), one could derive during the initialization phase the heading of the mote, φi, in degrees, using the formulas outlined below:

    φi = 90 − arctan(Bx/By) · (180/π),    if By > 0
    φi = 270 − arctan(Bx/By) · (180/π),   if By < 0
    φi = 180,                             if By = 0, Bx < 0
    φi = 0,                               if By = 0, Bx > 0

where Bx and By are the magnetic field strengths recorded by the magnetometer's X- and Y-axes respectively. Given mote headings φi estimated by the mechanism described above, the generalized version of the localization problem can be solved. As Fig. 8 illustrates, the target magnet is located at (xm, ym) and has orientation φm. Each magnetometer Mi is located at (xi, yi) and has orientation φi. The angle from the magnet to each magnetometer relative to the magnet's axis is denoted by θi. Instead of adding the magnetometer orientation as a third unknown on top of ri and θi, we now express the quantities ri and θi in terms of the known position (xi, yi) and the unknown magnet parameters (xm, ym) and φm:

    ri = √((xi − xm)² + (yi − ym)²)    (18)
    θi = arcsin((xi − xm)/ri) − φm    (19)

and as before

    Ci = µ0 m / (4π ri³).    (20)


Fig. 9. Calibration target used to calculate the intrinsic parameters of each camera.

Thus, instead of using three unknowns for each magnetometer, we have only one unknown, φi. The equations relating the magnetometer readings are relatively unchanged, differing only by the incorporation of φi:

    Bxi = (Ci/2) (3 sin(2θi + φm − φi) + sin(φm − φi))    (21)
    Byi = (Ci/2) (3 cos(2θi + φm − φi) + cos(φm − φi)).    (22)

Interestingly, the generalized Pythagorean equation and the two ''consistency'' constraints are no longer useful, as they contribute no rank to the Jacobian matrix used to solve the system of non-linear equations.
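The following sketch (ours, not the authors' implementation) assembles Eqs. (18)–(22) into residuals and hands them to an off-the-shelf Levenberg-Marquardt style solver; the mote positions, headings, readings and the target strength C0 = µ0 m / 4π are placeholder values for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, motes, readings, C0):
    """Residuals of Eqs. (21)-(22) for every detecting mote.

    params   : (x_m, y_m, phi_m), the unknown target position and orientation.
    motes    : list of (x_i, y_i, phi_i) with known positions and headings.
    readings : list of (Bx_i, By_i) measured field components.
    C0       : mu0 * m / (4 * pi), the target's (assumed known) strength.
    """
    xm, ym, phim = params
    res = []
    for (xi, yi, phii), (bx, by) in zip(motes, readings):
        ri = np.hypot(xi - xm, yi - ym)                       # Eq. (18)
        thetai = np.arcsin((xi - xm) / ri) - phim             # Eq. (19)
        Ci = C0 / ri**3                                       # Eq. (20)
        res.append(Ci / 2 * (3 * np.sin(2 * thetai + phim - phii)
                             + np.sin(phim - phii)) - bx)     # Eq. (21)
        res.append(Ci / 2 * (3 * np.cos(2 * thetai + phim - phii)
                             + np.cos(phim - phii)) - by)     # Eq. (22)
    return res

# Hypothetical usage: four motes on a 1 m grid and a coarse initial guess.
motes    = [(0, 0, 0.1), (1, 0, -0.2), (0, 1, 0.0), (1, 1, 0.3)]
readings = [(1e-4, 2e-4), (5e-5, 1e-4), (8e-5, -6e-5), (3e-5, 4e-5)]   # placeholders
fit = least_squares(residuals, x0=[0.5, 0.5, 0.0],
                    args=(motes, readings, 1e-5), method='lm')
x_m, y_m, phi_m = fit.x
```

In the paper, the analogous solve runs either at a base station or, as described in Section 6.3, on the motes themselves using levmar.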

6. Evaluation

6.1. Self-localization

Our primary objectives in evaluating the proposed self-localization technique are to quantify its accuracy and confirm its scalability. We first employed a network of 12 MicaZ motes to examine the localization accuracy of our approach under realistic conditions. To overcome the limitations of this small network, we also experimented with a simulated network of 100 motes and found that larger networks offer improved accuracy despite their modest increase in computation overhead.

6.1.1. Camera calibration

In order to accurately determine the pan-tilt coordinates of an image point, it is necessary to know the intrinsic or internal characteristics of the camera, which include focal length, principal point (the true center of the image) and distortion parameters. Both cameras we used were individually calibrated at their widest field-of-view. Many frames of a checkerboard calibration target (see Fig. 9) at different positions and orientations were used as input to the Camera Calibration Toolbox [28] for MATLAB. The points of intersection between neighboring squares on the calibration target are located in each frame and these points are collectively used to estimate the camera's internal parameters. These parameters are stored in a configuration file and read into the system at run-time.

As the PTZ cameras can also zoom, we first calibrated one camera at various focal lengths. It became obvious that some parameters such as the principal point were not constant across focal lengths, so re-calibration was indeed critical. The most interesting discovery in the calibration process, however, was that even with per-zoom-level calibration, the computation of pan-tilt coordinates was error-prone for all but the widest field-of-view zoom. We eventually concluded that the center of rotation coincides with the focal point only for the widest field-of-view. Other zoom levels therefore introduce a translation as the camera pans or tilts about the center of rotation.

6.1.2. Experimental deployment

Our experimental platform consists of a network of 12 MicaZ motes equipped with omni-directional LEDs and two PTZ cameras: a Sony EVI-D100 and a Sony SNC-RZ25N (see Fig. 10). Our application software, written primarily in Java, runs on a standard laptop, using JNI to integrate native code for image processing and for controlling the EVI-D100. Communication with the sensor network is facilitated by a mote wired directly to the laptop. A core component of our system, the nonlinear solver, consists of a Java implementation of the globally convergent Newton's method found in Numerical Recipes [21]. Our solver is substantially (20–25×) faster than MATLAB's Optimization toolbox, which we used in initial simulations. To self-localize a network of n nodes, the system follows the procedure outlined below:

• A subset of k nodes, namely Sk = {pi1, . . . , pik}, is selected.
• The nodes in Sk are sequentially interrogated, each activating an omni-directional LED which the cameras use to pinpoint the node's location in their individual pan-tilt parameterization.


Fig. 10. Experimental platform: MicaZ sensor network, laptop, 2 PTZ cameras.

• After finding the pan-tilt location of each node p, the corresponding nonlinear system of equations is solved numerically, yielding location estimates for all p ∈ Sk.
• Many such subsets, Sk(1), Sk(2), . . . , Sk(T), are randomly chosen and location estimates for each subset Sk(t) are determined as above.

• When this process completes, we have several location estimates for each node (Tk/n expected) and average them to derive the final location estimate for each given node (this procedure is sketched below).
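The sketch below (ours) mirrors this procedure for a single subset, solving the Eq. (3) constraints with a generic nonlinear least-squares routine in place of the globally convergent Newton's method used in our implementation. The pan/tilt and orientation axis conventions are the same illustrative assumptions made earlier.

```python
import numpy as np
from scipy.optimize import least_squares

def rot(axis, a):
    c, s = np.cos(a), np.sin(a)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def residuals(u, obs, k):
    """Eq. (3) residuals for a subset of k planar nodes seen by two cameras.

    u   -- [cz1, cz2, theta1(3), theta2(3), x1, y1, ..., xk, yk]
    obs -- obs[j][i] = (alpha_ij, beta_ij) for camera j in {0, 1}, node i.
    Camera positions are pinned to (0, 0, cz1) and (1, 0, cz2) by the
    normalized coordinate frame.
    """
    cz, thetas, pts = u[0:2], [u[2:5], u[5:8]], u[8:].reshape(k, 2)
    cams = [np.array([0.0, 0.0, cz[0]]), np.array([1.0, 0.0, cz[1]])]
    res = []
    for j in range(2):
        R_theta = rot('z', thetas[j][2]) @ rot('y', thetas[j][1]) @ rot('x', thetas[j][0])
        for i in range(k):
            alpha, beta = obs[j][i]
            pred = rot('z', alpha) @ rot('x', beta) @ R_theta @ np.array([0, 0, -1.0])
            v = np.array([pts[i][0], pts[i][1], 0.0]) - cams[j]    # p_i - c_j
            res.extend(pred - v / np.linalg.norm(v))               # three components
    return res

def localize_subset(obs, k):
    """Solve for camera heights, orientations and the k node positions."""
    u0 = np.concatenate(([1.0, 1.0, 0, 0, 0, 0, 0, 0], np.full(2 * k, 0.5)))
    return least_squares(residuals, u0, args=(obs, k)).x

# The per-node estimates from many random subsets are then averaged, as
# described in the list above, to obtain the final locations.
```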

As will be demonstrated below, increasing the size of the subsets reduces the localization error. While a single computation using all of the motes' measurements yields the minimal localization error, we have adopted the approach using smaller subsets (of size k) because the smaller optimization problems (i) can be solved much faster and with less memory, (ii) can be distributed amongst the nodes in the network and run in parallel, and (iii) quickly approach the minimal localization error (which would be achieved using all motes). In the event that a node is not seen by a camera, perhaps due to visual occlusion, that node is not factored into the computation. The computation could then either replace the occluded nodes with mutually-visible ones or proceed with a subset of fewer than k nodes. Even if a node is occluded in one camera, its location may still be estimated once the final camera parameters have been determined, simply by projecting the pan-tilt location onto the plane.

Fig. 11 presents sample results for our network of 12 MicaZ motes. Each dot in these graphs shows an estimated mote location over the unit square [0, 1]², as calculated by the system. In Fig. 11(a), the localization results for subsets of size k = 6 are shown, on the left colored by subset ID and on the right colored by mote ID. Fig. 11(b) shows analogous results for subsets of size k = 9, hinting at the improvement in accuracy by using larger subsets. While the absence of ground truth localization prevents us from determining precise localization errors, we can see how the solutions for random subsets collectively suggest an accurate location estimate.

6.1.3. Empirically verified noise model

While we could acquire planar ground truth locations for the nodes with relative ease, it is difficult to obtain accurate ground truth values for the camera's orientation and the location of the focal point inside the camera. We nonetheless hoped that we could characterize the observed noise in such a manner that we might produce similar measurements synthetically. Instead of generating the exact pan-tilt measurements that would result from the ground truth locations, we apply an angular perturbation ∼N(0, ζ) to each measurement, akin to the model proposed by Ercan et al. [29,30], using ζ = 0.1°. We generate synthetic measurements using the mean locations derived from the experimental network using subsets of size k = 6. Fig. 12 presents a comparison between the real and synthetic data. The synthetic noise model closely reflects the error characteristics of the observed data.

6.1.4. Simulation

To demonstrate the improved localization accuracy obtained with larger subsets, we turned from our small physical deployment to a larger simulated network. Simulated networks provide us with ground truth for mote locations, essential to properly addressing the question of localization error. As we noted before, localization accuracy is associated not with the size of the network, but with the subset size k. So, whether the network has tens, hundreds or thousands of nodes, the localization accuracy will be the same for a given subset size k, assuming the number of estimates per node is fixed. For a fixed k, the required computation time grows linearly with the size of the network: if the number of estimates per node L is fixed, then the number of random subsets required to produce L estimates for each of the n nodes is Ln/k, indicating a linear relationship.

Our simulated network distributed 100 nodes uniformly at random within a unit square [0, 1]². The cameras were placed at the same normalized heights as in our experimental setup, the first above the origin (0, 0), the second above (0, 1). The same process of selecting subsets described in Section 6.1.2, angularly locating the motes and solving the nonlinear system is used to find location estimates within the plane. The noise model used is the same as described in Section 6.1.3. In Fig. 14 we present results for a 100-node network choosing subsets of size 5, 10, 20, and 50. To articulate these results in real-world terms, if we imagine the deployment area is 100 m × 100 m, then these errors would be as depicted in Fig. 13 and Table 1. Remarkably, with two cameras positioned 100 meters apart, the system can localize every node to within 15.82 cm of its true location. Moreover, it localizes a majority of them with errors less than 5 cm.
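For completeness, here is a sketch (ours) of the synthetic measurement generation used by the noise model of Section 6.1.3 and by the simulations above; the pan/tilt parameterization is again an assumed convention rather than the paper's.

```python
import numpy as np

def synthetic_measurement(p, c, zeta_deg=0.1, rng=None):
    """Noisy pan-tilt observation of node p = (x, y, z) from a camera at c.

    Exact angles are computed from the geometry and then perturbed with
    zero-mean Gaussian angular noise of standard deviation zeta_deg
    (0.1 degrees, the value used in Section 6.1.3)."""
    rng = rng or np.random.default_rng()
    v = np.asarray(p, float) - np.asarray(c, float)
    alpha = np.arctan2(v[1], v[0])                     # pan within the x-y plane
    beta = np.arctan2(v[2], np.hypot(v[0], v[1]))      # tilt above/below the plane
    noise = np.deg2rad(zeta_deg) * rng.standard_normal(2)
    return alpha + noise[0], beta + noise[1]
```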


Fig. 11. Self-localization results for our experimental 12-mote network. (a) Shows simultaneous plots of localization from 100 randomly selected 6-mote subsets, on the left colored by subset ID, on the right by mote ID. (b) Shows analogous results for 9-mote subsets.

Table 1
Errors for 100 m × 100 m 100-node network

Subset size    50% confidence    95% confidence
5 nodes        6.70 m            96.25 m
10 nodes       43.57 cm          2.14 m
20 nodes       5.26 cm           17.19 cm
50 nodes       4.85 cm           11.73 cm

6.2. Target localization

6.2.1. Simulation

Prior to conducting any experiments with actual magnets, we were able to use a virtual sensor network to simulate noisy magnetometer readings, generated by adding white noise to the magnetic field strength provided by Eq. (5), and compare the localization accuracy as well as the orientation estimates. Certain intrinsic properties of the system emerged not through theoretical analysis, but rather through simulation. One such property is the inherent ambiguity in orientation: the magnetometer readings will be identical if both the target and the network nodes are rotated by 180° (see Fig. 15(a)). Fig. 15(b) shows the simulated target-localization accuracy for various configurations (different numbers of nodes detecting the target, constrained/unconstrained orientation, etc.).


Fig. 12. Empirically validated noise model. (a) Plots the solutions for 100 subsets using actual measurements obtained from the experimental system (same data displayed in Fig. 11(a)). (b) Plots the solutions for 100 subsets using simulated measurements obtained using the synthetic noise model. (c) Shows a comparison between the observed and the synthetic data. The mean location for each mote along with the error ellipse indicating 95%-confidence are depicted. As in the first two plots, blue is used for the observed data, red for the synthetic data.

Fig. 13. Localization error distributions. For a simulated 100 × 100 m deployment containing 100 nodes, the curve showing the percentage of nodes localized within any given error is shown for 5-, 10-, 20- and 50-node subsets.



Fig. 14. Simulated large-scale network. (a)–(b) Present localization results for 10- and 50-node subsets, respectively. (c)–(d) Present the mean of the solutions for each node together with the ellipse of 95% confidence. Each node is plotted in a unique color and a black error line is drawn between each node's ground truth location and the mean of its solutions.

Table 2
Experimental target localization results

Min error    Avg. error    Median error    Std. deviation
1.9 cm       13.8 cm       9.5 cm          14 cm

6.2.2. Experimental results

We designed an experiment to test target localization in the following way. First, we selected random locations on a 1 m × 1 m level surface to place a set of six pre-calibrated motes. We then selected a set of fifty target locations, placing a magnet of known strength at each of these locations. Each of the network's motes initially collects measurements of the ambient magnetic field and subtracts these values from subsequent measurements. Doing so cancels the contribution of any environmental effects to the target localization process. The collected measurements are finally relayed to a base station which executes the generalized localization algorithm described in Section 5.4.

Table 2 presents our initial target localization results from the network of six MicaZ motes. The average localization error was ∼14 cm while the minimum localization error was much smaller. Fig. 16 presents the distribution of localization errors across all experiments. A number of interesting observations can be made from this figure. First, there is good correlation between the experimental results and the simulation results shown in


Fig. 15. Magnetometer simulations. (a) Shows the 180° orientational ambiguity inherent in many of the solutions. (b) Shows the cumulative error distributions for 4, 6 and 8 motes with arbitrary orientations detecting the target and compares them to 3 motes with aligned headings.

Fig. 16. Cumulative density function of the target localization error. The median localization error is less than 10 cm.

Fig. 15(b). Second, the majority of cases result in low localization error. There are, however, a few cases with high localization error, indicating that some positions did not localize well. The most likely cause behind these disparities is the calibration process described in Section 5.4. The resolution of the industrial magnetometer is 0.1 G, while the HMC1002 magnetometer on the MTS310 board has a resolution of 200 µG. Since we calibrate each mote independently to within 0.1 G, it is possible that any two motes will be off from each other by 0.2 G. Given that measurements at the fringes of the magnetometers' sensing range are on the order of a few mG, this lack of accuracy accounts for the cases with large localization error, especially considering that magnetic field strength decreases with the cube of distance.

To validate this hypothesis, we investigate whether some motes contribute inaccurate measurements, thus skewing the results. To do so, after running the localization experiment, we generate the (n choose k) measurement combinations, where n is the number of motes participating in the experiment and k is the size of a subset of these motes. Fig. 17 presents the localization error CDF for the best (n choose k) combination, when k = 2, 3, 4. It is evident that for positions that localize well, different k values do not produce sizable performance differences. On the other hand, small k values produce better results for positions that localize poorly. The underlying cause for this seemingly counter-intuitive result is that using small k values provides the opportunity to remove faulty measurements that skew the final result. This result indicates that robust estimation techniques, such as the trimmed mean and M-estimates, can be used to remove measurement outliers, which are likely to be small-magnitude magnetometer readings.

6.3. In-network target localization

Since the motes have been pre-calibrated and each knows its own heading, we should in theory be able to detect and even track a target with a known magnetic ''signature'' (i.e. magnetic dipole moment).


Fig. 17. Effect of selecting different measurement subsets on localization error.

The target has three unknowns: its x and y location on the plane and the orientation of its magnetic axis, φm. It would seem that the four measurements provided by a pair of nearby motes with two-axis magnetometers should provide enough information to recover these target parameters (xm, ym, φm).

Thus far, the numeric solution of the nonlinear equations describing the localization task occurs at a central location (e.g., at a laptop connected to the sensor network). However, it would be preferable if, once the motes know their orientation, they could perform target detection and tracking entirely within the network. We have done just this by implementing a numerical solver in TinyOS that allows mote pairs to collaborate in localizing a target. Our TinyOS implementation is based on levmar, an implementation of the Levenberg-Marquardt algorithm [31]. By hand-coding the required matrix inversion, we were able to pack this powerful solver into a single TinyOS task. Due to hardware limitations such as fixed-point arithmetic, we expected very little in the way of performance, but were quite surprised to see the MicaZ perform approximately 30 iterations of the solver per second (i.e., 33 milliseconds per iteration). If the initial estimate used to initialize the solver is close to the true location, the solver typically converges in 8–15 iterations, i.e., just more than half a second with communication overhead. However, our results here are preliminary as we are still working to carefully measure the localization error with this method.

7. Summary

We present a comprehensive solution to the problem of localization in multi-modal sensor networks. First, two Pan-Tilt-Zoom (PTZ) cameras are used to estimate the locations of the network's motes. Unlike previous solutions, only angular information from the cameras is used for self-localization. Results from an experimental implementation, coupled with results from larger simulated networks, indicate that the accuracy of the proposed localization algorithm increases with the number k of nodes collectively localized in the same batch. This result, coupled with the fact that localization accuracy does not degrade as the size of the network increases, makes the proposed approach very attractive for large-scale WSN deployments.

Once the network has self-localized, it can detect the presence and estimate the location of ferrous targets moving over the sensor field, with the help of magnetometers connected to the network's motes. We present two variations of the target localization algorithm: one which assumes that motes are deployed on a regular lattice, and a generalized version that works with arbitrary (but known) mote headings φi. We have implemented this second version of the localization algorithm on the MicaZ motes of our testbed and evaluated its accuracy through experiments with actual magnets as well as large-scale simulations. We find that the generalized version performs almost as well as the version requiring regular mote formations as the number of motes detecting the target increases. Furthermore, early results from our experimental evaluation indicate that it is possible to locate magnetic targets with an average localization error in the order of a few cm in a 1 m × 1 m field.

Appendix. Derivation of simplified magnetometer equations

A handful of simple trigonometric identities allow us to derive Eqs. (11) and (14). We here show just the derivation for Xi (Eq. (11)), as Yi (Eq. (14)) follows the same approach.
    Xi = 2Ci cos θi (sin φ cos θi + cos φ sin θi) + Ci sin θi (cos φ cos θi − sin φ sin θi)    (≡ 10)
       = Ci sin φ (2 cos²θi − sin²θi) + 3Ci cos φ sin θi cos θi
       = Ci sin φ [(3/2 + 1/2) cos²θi − (3/2 − 1/2) sin²θi] + (3Ci/2) cos φ (2 sin θi cos θi)
       = (Ci/2) sin φ [3 (cos²θi − sin²θi) + (cos²θi + sin²θi)] + (3Ci/2) cos φ (2 sin θi cos θi)
       = (3Ci/2) sin φ cos 2θi + (Ci/2) sin φ + (3Ci/2) cos φ sin 2θi
    Xi = (Ci/2) (3 sin(φ + 2θi) + sin φ).    (≡ 11)

References

[1] P. Kulkarni, D. Ganesan, P. Shenoy, Q. Lu, SensEye: A multi-tier camera sensor network, in: Proceedings of the 13th Annual ACM International Conference on Multimedia, 2005, pp. 229–238.
[2] M. Rahimi, R. Baer, O. Iroezi, J. Garcia, J. Warrior, D. Estrin, M. Srivastava, Cyclops: In situ image sensing and interpretation in wireless sensor networks, in: Proceedings of the 3rd ACM Conference on Embedded Networked Sensor Systems, SenSys, 2005.
[3] L. Gu, D. Jia, P. Vicaire, T. Yan, L. Luo, A. Tirumala, Q. Cao, T. He, J.A. Stankovic, T. Abdelzaher, B. Krogh, Lightweight detection and classification for wireless sensor networks in realistic environments, in: Proceedings of the 3rd ACM Conference on Embedded Networked Sensor Systems, SenSys, 2005.
[4] T. He, S. Krishnamurthy, J.A. Stankovic, T.F. Abdelzaher, L. Luo, R. Stoleru, T. Yan, L. Gu, J. Hui, B. Krogh, An energy-efficient surveillance system using wireless sensor networks, in: MobiSys, 2004.
[5] A. Arora, P. Dutta, S. Bapat, V. Kulathumani, H. Zhang, V. Naik, V. Mittal, H. Cao, M. Demirbas, M. Gouda, Y.-R. Choi, T. Herman, S.S. Kulkarni, U. Arumugam, M. Nesterenko, A. Vora, M. Miyashita, A line in the sand: A wireless sensor network for target detection, classification, and tracking, Computer Networks (2004) 605–634.
[6] S. Oh, L. Schenato, P. Chen, S. Sastry, A scalable real-time multiple-target tracking algorithm for sensor networks, Tech. Rep. UCB//ERL M05/9, University of California, Berkeley, Feb. 2005.
[7] C. Taylor, A. Rahimi, J. Bachrach, H. Shrobe, A. Grue, Simultaneous localization, calibration, and tracking in an ad hoc sensor network, in: Proceedings of the Fifth International Conference on Information Processing in Sensor Networks, IPSN, 2006, pp. 27–33.
[8] D. Moore, J. Leonard, D. Rus, S.J. Teller, Robust distributed network localization with noisy range measurements, in: Proceedings of the 2nd ACM Conference on Embedded Networked Sensor Systems, SenSys, 2004, pp. 50–61.
[9] J. Hightower, G. Borriello, Location systems for ubiquitous computing, IEEE Computer 34 (8) (2001) 57–66. URL: http://citeseer.ist.psu.edu/article/rey01location.html.
[10] N. Patwari, J.N. Ash, S. Kyperountas, A.O.H. III, R.L. Moses, N.S. Correal, Locating the nodes: Cooperative localization in wireless sensor networks, IEEE Signal Processing Magazine 22 (4) (2005) 54–69.
[11] S. Poduri, S. Pattem, B. Krishnamachari, G.S. Sukhatme, Sensor network configuration and the curse of dimensionality, in: Workshop on Embedded Networked Sensors, EmNets, 2006. URL: http://cres.usc.edu/cgi-bin/print_pub_details.pl?pubid=495.
[12] S. Funiak, C. Guestrin, M. Paskin, R. Sukthankar, Distributed localization of networked cameras, in: Proceedings of the Fifth International Conference on Information Processing in Sensor Networks, IPSN, 2006, pp. 34–42.
[13] A. Rahimi, B. Dunagan, T. Darrell, Simultaneous calibration and tracking with a network of non-overlapping sensors, in: Proceedings of CVPR, 2004.
[14] R. Hartley, In defense of the eight-point algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (6) (1997) 580–593.
[15] R.I. Hartley, A. Zisserman, Multiple View Geometry in Computer Vision, 2nd edition, Cambridge University Press, ISBN: 0521540518, 2004.
[16] F. Dellaert, S. Seitz, C. Thorpe, S. Thrun, Structure from motion without correspondence, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2000, pp. II: 557–564.
[17] A. Barton-Sweeney, D. Lymberopoulos, A. Savvides, Sensor localization and camera calibration in distributed camera sensor networks, in: Proceedings of IEEE BaseNets, 2006.
[18] R. Stoleru, T. He, J.A. Stankovic, D. Luebke, A high-accuracy, low-cost localization system for wireless sensor networks, in: Proceedings of the 3rd ACM Conference on Embedded Networked Sensor Systems, SenSys, 2005, pp. 13–26.
[19] M. Ding, A. Terzis, I.-J. Wang, D. Lucarelli, Multi-modal calibration of surveillance sensor networks, in: Proceedings of MILCOM 2006, 2006.
[20] D. Hearn, M.P. Baker, Computer Graphics, 2nd ed., Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1994.
[21] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, New York, NY, USA, 1992. URL: http://www.library.cornell.edu/nr/bookcpdf/c9-7.pdf.
[22] B.K.P. Horn, Closed-form solution of absolute orientation using unit quaternions, Journal of the Optical Society of America (1987).
[23] Honeywell Corporation, Application note AN218: Vehicle detection using AMR sensors. Available at: http://www.ssec.honeywell.com/magnetic/datasheets/an218.pdf.
[24] D.J. Griffiths, Introduction to Electrodynamics, Prentice Hall, 1999.
[25] Crossbow Corporation, MICAz Specifications. Available at: http://www.xbow.com/Support/Support_pdf_files/MPR-MIB_Series_Users_Manual.pdf.
[26] Crossbow Corporation, MICA2 Multi-Sensor Module (MTS300/310). Available at: http://www.xbow.com/Products/Product_pdf_files/Wireless_pdf/MTS_MDA_Datasheet.pdf.
[27] Honeywell Corporation, HMC1002 Linear Sensor. Available at: http://www.ssec.honeywell.com/magnetic/datasheets/hmc1001-2&1021-2.pdf.
[28] J.-Y. Bouguet, Camera calibration toolbox for MATLAB. Available at: http://www.vision.caltech.edu/bouguetj/calib_doc/.
[29] A. Ercan, A.E. Gamal, L. Guibas, Camera network node selection for target localization in the presence of occlusions, in: SenSys Workshop on Distributed Cameras, 2006.
[30] A. Ercan, D. Yang, A.E. Gamal, L. Guibas, Optimal placement and selection of camera network nodes for target localization, in: DCOSS, 2006, pp. 389–404.
[31] M. Lourakis, levmar: Levenberg-Marquardt nonlinear least squares algorithms in C/C++. Available at: http://www.ics.forth.gr/~lourakis/levmar/.

Andreas Terzis is an Assistant Professor in the Department of Computer Science at Johns Hopkins University. He joined the faculty in January 2003. Before coming to JHU, Andreas received his Ph.D. in computer science from UCLA in 2000. Andreas heads the Hopkins InterNetworking Research (HiNRG) Group where he leads research in wireless sensor network protocols and applications as well as network security.