Pervasive and Mobile Computing 8 (2012) 358–375
Review
A survey of sensory data boundary estimation, covering and tracking techniques using collaborating sensors

Sumana Srinivasan ∗, Subhasri Dattagupta, Purushottam Kulkarni, Krithi Ramamritham

Department of Computer Science and Engineering, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India
Article history: Received 9 March 2011; received in revised form 10 October 2011; accepted 7 March 2012; available online 17 March 2012.

Keywords: Boundary estimation; boundary tracking; boundary covering; wireless sensor networks; mobile sensors; contour covering; contour estimation.
Abstract

Boundary estimation and tracking have important applications in environmental monitoring and disaster management. A boundary separates two regions of interest in a phenomenon. It can be visualized as an edge if there is a sharp change in the field value between the two regions or, alternatively, as a contour with field value f = τ separating two regions with field values f > τ and f < τ. Examples include contours/boundaries of hazardous concentration in a pollutant spill, the frontal boundary of a forest fire, isotherms, isohalines, etc. Recent advances in embedded sensor devices and robotics have led to deployments of networks of sensors capable of sensing, computing, communication and mobility. They are used to estimate the boundaries of interest in physical phenomena, monitor or track them over time and, in some cases, mitigate the spatial spread of the phenomena. Since these sensors work autonomously in the environment, minimizing the energy consumed while maximizing the accuracy of estimation or tracking is the main challenge for boundary estimation and tracking algorithms. Several algorithms with these objectives have been proposed in the literature. In this work, we focus on algorithms that estimate and cover boundaries found in the sensory data in a field, and not the topological boundary of the sensor network per se, which is beyond the scope of this paper. Our objective is to provide a comprehensive survey of the algorithms for boundary estimation and tracking through a taxonomy based on two broad categories: (i) Boundary estimation and tracking, where the sensors estimate the boundary without physically covering it, and (ii) Boundary covering, where the sensors not only predict the location and estimate the entire boundary but also physically cover it by surrounding and bounding it.

We further classify the techniques based on (a) sensing capabilities: in situ, range or remote sensing; (b) movement capabilities: static or mobile sensors; (c) boundary type: static or dynamic; (d) type of estimation: field estimation, where the entire field is sampled to search for contours, and localized estimation, where sampling is done near the boundary; and (e) different types of mobility models in the case of mobile sensors. We believe that such a survey has not been performed before. By capturing and classifying the current state-of-the-art and identifying open research problems, we hope to ignite interest and stimulate efforts towards promising solutions for real-world boundary estimation and tracking problems.

© 2012 Elsevier B.V. All rights reserved.
Contents

1. Introduction
2. Taxonomy of existing techniques

∗ Corresponding author. E-mail addresses: [email protected], [email protected] (S. Srinivasan).
1574-1192/$ – see front matter © 2012 Elsevier B.V. All rights reserved. doi:10.1016/j.pmcj.2012.03.003
2.1. Boundary estimation and tracking
2.1.1. Field estimation
2.1.2. Localized estimation
2.1.3. Active learning based approach to reduce number of samples
2.2. Boundary covering
2.2.1. Uncoordinated mobility
2.2.2. Partially coordinated mobility
2.2.3. Coordinated mobility — approach the contour
2.2.4. Coordinated mobility with approach and surround
2.2.5. Coordinated mobility with adaptive approach and surround
2.2.6. Coordinated mobility — energy aware, asynchronous update, adaptive approach and surround
3. Conclusions and open research issues
References
1. Introduction

A boundary is a separator of two regions in a phenomenon. It can be characterized either by an edge, which separates regions by a sharp change in the field value, or by a contour, which separates regions with field values greater and lesser than a specific threshold. We use the terms boundary and contour interchangeably in the rest of this paper. The term boundary is also used in the context of discovering the topological boundary of a sensor network deployed in a region of interest. Many algorithms have been proposed in the literature to discover the topological boundaries of sensor networks [1]. The primary goal of these algorithms is to discover the network topology and identify its geometric boundaries (including inside and outside boundaries when holes are present in the topology) in order to facilitate better point-to-point routing and data gathering mechanisms, as opposed to identifying boundaries in the physical sensing field manifested due to variation of the sensed value (also known as sensory data) in a given region. Such algorithms are beyond the scope of the current survey. Henceforth, we use the term boundary to refer to the boundaries found in the sensory data within the sensing region.

Boundary estimation is important for several reasons. It helps to determine the spatial extent of a phenomenon such as a pollutant spill and also to locate its source by computing contours. Advances in embedded systems technology have led to devices that are capable of sensing, computing, communication and mobility. Networks of such sensors are envisioned and practically deployed to perform fine-grained monitoring and study of physical phenomena. Since these sensors operate autonomously in the environment, maximizing the accuracy of estimation while minimizing the energy consumed by them is of prime importance.
In recent years, several algorithms have been proposed in the literature for performing boundary estimation and tracking using sensor networks. In this survey, we aim to provide a taxonomy that helps in understanding the current state-of-the-art in boundary tracking and estimation using networks of sensors.

2. Taxonomy of existing techniques

Before we discuss various approaches, we classify the metrics for each of the techniques broadly into two categories: (i) quality based metrics, which measure accuracy, and (ii) performance based metrics, which measure the efficiency of a given technique used to estimate or track the boundary. The efficiency is typically characterized by the amount of energy consumed to perform the task. Existing literature in the area of estimating boundaries in the sensing region can be classified into two broad categories as shown in Fig. 1: (i) Boundary estimation, where the goal is to predict as many points on the boundary as possible and estimate the boundary using this information without physically surrounding it, and (ii) Boundary covering, where the goal is not only to locate points on the boundary but also to physically surround it. While the former can be performed by both static and mobile sensors, the latter can be performed by mobile sensors alone, since static sensors lack the movement ability to surround the boundary. Physically surrounding the boundary is useful in disaster management scenarios such as pollutant spills, where the mobile sensors can be used to mitigate the phenomenon, e.g., by spraying anti-pollutant chemicals along the boundary. The leaves in the taxonomy, denoted by ovals, refer to the actual strategies and the section numbers where they are discussed. Every level in the classification is depicted in the same color for improved readability.

2.1. Boundary estimation and tracking

Based on the manner in which data is sampled and information is processed, existing approaches to boundary prediction using sensor networks can be classified into the following categories (as shown in Fig. 2): (i) Field estimation: sample the field entirely or sparsely and query for contours; (ii) Localized estimation: determine which sensors lie on or near the boundary and use only their measurements to determine the contour; and (iii) Active learning based estimation: sparsely sample the field and use the samples to learn where to sample next to determine the contour. In this section, we discuss various approaches along with their respective advantages and disadvantages.
Fig. 1. A taxonomy of boundary estimation, tracking and covering algorithms.
Fig. 2. Classification of boundary estimation and tracking algorithms without physically covering the contour.
2.1.1. Field estimation

In this approach, measurements made by the sensors are used to build a model of the entire sensing field and the boundary is computed from the model. Current techniques can be classified based on the type of sensing as shown in Fig. 2: (i) Remote sensing, where the entire field is sampled using remote satellite imagery, and (ii) In situ sensing, where sensors are deployed in the sensing field and the measurements made by these individual sensors at their locations are collected.

2.1.1.1. Remote sensing. Remote sensing techniques work within the electromagnetic spectrum. In [2], radar images of an oil spill are analyzed to obtain contour boundaries of different concentration levels. Pixel properties of the images such as intensity and color are correlated to oil concentration, density, etc. Due to lack of proximity to the phenomenon, these techniques are subject to a large number of false alarms, i.e., phenomena that may appear like an oil slick but are not [3]. Also, physical
parameters like glint from sunlight, shimmer due to wind, shadows of clouds and organic material like surface weeds affect the accuracy of measurement [4]. The disadvantage of being far from the phenomenon is addressed to some extent by aircraft-borne Light Detection and Ranging (LIDAR) based sensors [5]. The advent of laser technology has enabled LIDARs, which are used extensively in topographic mapping applications in coastal engineering, forestry, etc. LIDARs are mounted on an aircraft and are used to survey the sensing region. The measurements made during the survey are used to generate contour maps [5]. The main drawback is that LIDARs, in the current state-of-the-art, do not process data in real time and require additional post-processing effort after data collection to completely generate the contour points. In disaster management applications, the delay due to scanning the whole region and post-processing can take several hours to several days depending on the distance of the spill from land, which is undesirable.

2.1.1.2. In situ sensing. Research advances in Micro Electro Mechanical Systems (MEMS) technology have led to dense deployments of large numbers of cheap sensors to map the sensing region. The sensors can measure the field at their location as well as compute aggregates within the network by collaborating with other sensors to answer complex queries. The main objective is to reduce message complexity, since communication is the highest energy consumer in static sensor networks. In [6], the authors present simple topographic extensions to the declarative query interface of their Tiny Aggregation Service (TAG) that allow it to efficiently build maps of sensor-value distributions in space using static wireless sensor networks. A hierarchical aggregation tree is built on-line in response to a query.
Here, each sensor builds a map of its local area and sends it to its parent, which combines the aggregates from other children and neighbors; eventually, the root of the tree has the map of the entire space from which the contour locations can be deduced. The authors compare three algorithms on a grid comprising 400 sensors, namely: (i) Naive, where an aggregate-free query is run and the location and attribute value of all sensors are collected and combined outside the network to produce a map; (ii) In-network aggregation, where an aggregate called contour-map is computed whose partial state record contains isobars, an isobar being a container polygon; and (iii) Lossy, which is similar to the in-network algorithm except that the number of vertices in the container polygon is limited by a parameter of the aggregate. The results of their simulations indicate that the Naive algorithm, using the most communication, results in the highest accuracy, but the other two strategies, which use less communication, do not show a significant loss in accuracy. Another technique [7] generates contour maps at the sink by collecting information from the entire network. The technique uses spatial and temporal data suppression, reconstructs the contour at the sink using interpolation and smoothing, and employs a routing strategy using multi-hop communication. The main idea is to reduce the number of transmissions required to convey relevant information to the sink. In this technique, a sensor does not transmit if its value is within a fixed bound of a neighboring sensor which has already transmitted to the sink (spatial suppression). A sensor also does not transmit if its value has not changed significantly since its previous transmission (temporal suppression). This is in contrast to [6], where the data is aggregated and there is no temporal suppression.
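The spatial/temporal suppression rules of [7] can be sketched as a per-sensor transmit decision. The function, the threshold `eps` and the neighbour bookkeeping below are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch of spatial/temporal suppression: a sensor reports to the
# sink only if its reading differs enough from (i) any neighbour that has
# already reported (spatial) and (ii) its own last reported value (temporal).

def should_transmit(value, last_reported, neighbor_reports, eps=0.5):
    """Return True if the sensor should send its current reading to the sink."""
    # Temporal suppression: the value barely changed since our last report.
    if last_reported is not None and abs(value - last_reported) <= eps:
        return False
    # Spatial suppression: a neighbour already reported a similar value.
    for v in neighbor_reports:
        if abs(value - v) <= eps:
            return False
    return True

# Example: a neighbour already sent 10.2, so a reading of 10.4 is suppressed,
# while a jump to 12.0 is transmitted.
print(should_transmit(10.4, None, [10.2]))   # suppressed (spatial)
print(should_transmit(12.0, 10.0, [10.2]))   # transmitted (significant change)
```

Under these rules the sink receives only readings that carry new spatial or temporal information, which is exactly the transmission-reduction goal described above.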
Both of these techniques rely on high node density for accuracy of estimation, and contour maps are processed at the sink based on the knowledge of the entire terrain gathered from individual sensor nodes. The differences lie in the way communication overhead is reduced, using intelligent in-network aggregation and suppression of data transmission. In the next section, we look at techniques that reduce communication overhead by intelligently determining which nodes lie along the boundary and using data from only those nodes to reconstruct the boundary.

2.1.2. Localized estimation

The basis of boundary estimation algorithms is the detection of the edge that separates two different regions. There are two main categories of algorithms in boundary estimation as shown in Fig. 2: (i) those that identify sensors that lie on the boundary using collaboration and estimate the boundary using information from the sensors close to the boundary, and (ii) those that compute an approximation of the boundary, assuming that the sensors can already sense the boundary, and then track it. We discuss both static and mobile sensor approaches in the latter category. We give a brief description of these approaches along with the cost they optimize.

2.1.2.1. Identifying sensors that lie on the boundary. There are three categories of techniques in this approach: (i) threshold based, (ii) correlation based and (iii) fault tolerant boundary detection approaches. In the first, the sensors know the threshold value that determines whether the sensed value is on the boundary or not; in the second, the sensors use correlation to determine whether a detected change in value is a boundary location or a faulty sensor reading.

2.1.2.1.1. Threshold and correlation based strategies. Localized edge detection techniques [8] determine whether or not a sensor lies on a boundary or an edge by using neighborhood information. An edge is considered as a step function separating two regions.
The ability to perceive an edge accurately is determined by the amount of information collected by the sensor, which depends on a parameter called the probing radius. The energy/accuracy trade-off is governed by the probing radius: the larger the probing radius, the more information is collected, the higher the accuracy, and the larger the energy required due to communication. The first technique is a statistical approach, where measurements are collected from the probing neighborhood and a statistic is defined; based on the statistic, the sensors determine the presence or absence of an edge. The statistic is defined as follows: an edge is considered to be present if measurements from sensors in the neighborhood form a bimodal distribution (spikes at 0 and 1). This technique does not use the geographical location
(a) Sensor on edge.
(b) Sensor not on edge.
Fig. 3. Classifier scheme partitions all measurements in the probing radius into 1s and 0s. If the line passes through the area of tolerance of a sensor then it is deemed an edge sensor [8].
information of the sensors. The second scheme is inspired by image processing: it uses a high pass filter that retains high frequencies (corresponding to abrupt changes), which is a standard way of detecting edges in images, and takes into account the geographic location. Both these techniques depend on thresholds to detect the presence of an edge. The third scheme is a classifier based scheme, where the sensors partition the measurements in their probing neighborhood, using a classifier, into measurements lying exterior or interior to the edge. A successful partition determines the presence of an edge. This scheme performs better than the other two and does not use thresholds. The uncertainty of classification depends on the probing radius and sensing errors. Fig. 3 depicts the classifier scheme that labels measurements in the probing radius as 1s and 0s using a linear classifier. In Fig. 3(a), the linear classifier separates regions where the majority of sensors read '1' and '0' respectively. If the line passes through the tolerance radius of the sensor then the sensor is an edge sensor. In Fig. 3(b), there are no edge sensors since all sensors inside the tolerance radius are classified as '0'. The accuracy of the predicted boundary depends upon the number of edge sensors from whose measurements the boundary is interpolated. The quality of edge detection [8] is given by the missed detection error and the mean thickness ratio, while the energy consumed is measured by the ratio of the probing radius R of a sensor to the tolerance radius r (an edge sensor lies within a pre-specified distance r from the ideal edge). The percentage missed detection error e_m is determined as follows. If S_true is the actual set of edge sensors and S_det denotes the sensors determined as edge sensors, then

e_m = |S_true − S_det| / |S_true|.   (1)
If t(S, E) represents the distance of all sensors in set S from edge E, then the mean thickness ratio e_t is determined by

e_t = (t(S_det, E) − t(S_true, E)) / t(S_true, E).   (2)
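A minimal sketch of these two quality metrics, Eqs. (1) and (2). The sensor sets and the per-sensor distances to the ideal edge are hypothetical illustrations, not data from [8]:

```python
# Sketch of the edge-detection quality metrics: percentage missed detection
# error (Eq. (1)) and mean thickness ratio (Eq. (2)).

def missed_detection_error(s_true, s_det):
    """Fraction of true edge sensors that the algorithm failed to detect."""
    return len(s_true - s_det) / len(s_true)

def mean_thickness_ratio(s_true, s_det, dist_to_edge):
    """Relative excess of the detected set's mean distance to the ideal edge."""
    t_true = sum(dist_to_edge[s] for s in s_true) / len(s_true)
    t_det = sum(dist_to_edge[s] for s in s_det) / len(s_det)
    return (t_det - t_true) / t_true

# Hypothetical scenario: dist_to_edge maps sensor id -> distance to the edge.
dist_to_edge = {1: 0.2, 2: 0.4, 3: 0.3, 4: 1.5}
s_true = {1, 2, 3}   # sensors actually within tolerance of the edge
s_det = {1, 2, 4}    # the detector missed sensor 3 and flagged far-away 4
print(missed_detection_error(s_true, s_det))  # one of three edge sensors missed
```

Including the far-away sensor 4 inflates the mean thickness ratio, illustrating why a thick detected "edge band" counts against quality even when few true edge sensors are missed.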
2.1.2.1.2. Consensus based boundary detection. In [9], the authors propose two localized algorithms, one for detecting faulty sensors and the other a consensus based fault tolerant event boundary detection. Their consensus based algorithm disambiguates faulty sensor readings from actual event detection, in this case an abrupt change in the measured values depicting a boundary. The disambiguation is based on whether a single sensor registers an abrupt change in the values or a collection of neighboring sensors detect a change. In the former case, it is inferred that the single sensor is faulty; in the latter case, it is inferred that the sensor and its neighbors have detected a boundary. The advantage of this approach is that it does not use a pre-defined threshold to detect the boundary. The quality of estimation is determined by the metric degree of fit. Let C3 be the set of sensors used to compute the contour boundary (the quality of this set depends on how many far-away sensors are included in it) and let S represent the set of all sensors. For a given positive number r, let BA(E; r) denote the set of all points in ℜ² whose distance to the boundary B(E) is at most r. The degree of fit is given by (Eq. (5) in [9])

a(C3, r) = |BA(E; r) ∩ C3| / |BA(E; r) ∩ S|.   (3)
Here, BA(E; r) represents a strip of width 2r centered around the event boundary. This is similar to the notion of confidence interval defined in [10], which will be discussed later in Section 2.1.2.2.1. The degree of fit is the ratio of the number of sensors in the computed contour that lie in the band to the actual number of sensors that lie in the band, and is thus a measure of the quality of estimation.

2.1.2.1.3. Hierarchical clustering based approach. Another solution proposed in the literature for boundary estimation is a hierarchical cluster based approach [11]. The authors identify two basic limitations of boundary estimation problems: (i) the accuracy of the estimate is dictated by the density of the sensors and the error in measurement, and (ii) energy constraints dictate the quality of the boundary estimate transmitted to the final destination. The authors derive a fundamental trade
(a) Initial partitioning of region.
(b) Pruned partitions approximating the boundary.
(c) Final boundary transmitted to sink.
Fig. 4. Sensors approximating the boundary with pruned partitioning [11].
off between energy spent and accuracy of boundary estimation (measured as mean square error (MSE)) and explore the trade-off between MSE and energy in terms of the node density. In the scheme described in [11], the sensing region is divided into nonuniform-sized rectangular dyadic partitions determined by collaboration. Given n sensors distributed uniformly in a region, the region is divided into √n × √n partitions as shown in Fig. 4(a). The sensors in the region collaboratively determine the pruned partitions that match the boundary as shown in Fig. 4(b). The authors assume a hierarchical structure of cluster heads where the nodes in each square communicate their measurements to the cluster head. The cluster head computes the average of the measurements. The goal is to collaboratively arrive at a nonuniform partition of the sensing region (more partitions near the boundary). The partitions near the boundary have a higher resolution while those far away from the boundary have a lower resolution (see Fig. 4(b)). By doing so, the algorithm is amenable to analysis, and the authors build on their previous work to derive theoretical bounds on MSE and energy consumption. The authors describe in detail (a) the implementation of the pruning process in the network and (b) how to determine the best pruned tree. The boundary estimate is a staircase approximation of the actual boundary, as shown in Fig. 4(c). The accuracy of the boundary generated is defined as follows. Let n sensor nodes be arranged in a √n × √n lattice and let this region be divided into recursive dyadic partitions of resolution 1/√n. If θ represents the average of all field value readings in a given square of the partition and x = x_ij represents the actual reading, then the performance is measured by the sum of squares error

R(θ, x) = Σ_{i,j=1}^{√n} (θ(i, j) − x_ij)².   (4)
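The pruning idea behind the staircase approximation can be sketched as a recursive quadrant subdivision: a cell is refined only if it straddles the threshold, so resolution is high near the boundary and low elsewhere. The field, threshold and function below are illustrative assumptions, not the distributed in-network procedure of [11]:

```python
# Sketch of recursive dyadic partitioning with pruning: subdivide the region
# into quadrants, keeping homogeneous cells as single coarse leaves and
# refining only the cells that straddle the threshold tau.

def pruned_partition(field, x, y, size, tau=0.5, min_size=1):
    """Return leaf cells (x, y, size) of the pruned recursive dyadic partition."""
    values = [field[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    homogeneous = all(v > tau for v in values) or all(v <= tau for v in values)
    if homogeneous or size <= min_size:
        return [(x, y, size)]            # prune: keep this cell as one leaf
    h = size // 2                        # mixed cell: refine into four quadrants
    return (pruned_partition(field, x, y, h, tau, min_size)
            + pruned_partition(field, x + h, y, h, tau, min_size)
            + pruned_partition(field, x, y + h, h, tau, min_size)
            + pruned_partition(field, x + h, y + h, h, tau, min_size))

# Toy 4x4 field with a diagonal boundary: value 1 where i + j >= 4, else 0.
field = [[1 if i + j >= 4 else 0 for i in range(4)] for j in range(4)]
cells = pruned_partition(field, 0, 0, 4)
small = [c for c in cells if c[2] == 1]
print(len(cells), len(small))  # coarse cells away from the diagonal, fine cells on it
```

The leaves of this tree form exactly the nonuniform partition of Fig. 4(b): large squares where the field is homogeneous and unit squares hugging the boundary.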
In their approach, the energy consumption is defined by (i) the cost of communication due to the construction of the tree (in-network cost) and (ii) the cost of communicating the final boundary estimate (out-of-network cost). The out-of-network cost is proportional to the final estimate of the boundary and is of order O(√n). The in-network cost is computed in terms of the expected size of the final tree and is given by

E = k Σ_{j=0}^{(1/2)log₂ n − 1} n_j 2^j/√n,   (5)

where k is the number of bits per measurement and k·n_j bits are transmitted over 2^j/√n meters.

2.1.2.1.4. Identifying isoline nodes. In [12], the static sensors identify the isoline nodes and transmit the field values and the location information to the sink. Upon receiving a query, a sensor node p determines if it is an isoline node based on whether its field value is within the specified tolerance band of the contour field value, ν_i ± ε, and one of its neighboring sensors q has a field value ν_q such that ν_p < ν_i < ν_q. Each isoline node transmits a 3-tuple report comprising the measurement at the node, its location and the gradient information computed using local neighbor information, in order to improve the accuracy of the boundary reconstruction at the sink. This helps to disambiguate contour geometries when the isoline nodes are situated
(a) Isoline nodes send information to sink.
(b) Sink draws perpendiculars to gradients.
(c) Sink performs smoothing for final contour. Fig. 5. Deducing contours by identifying isoline nodes and gradients [12].
sparsely along the isoline. Fig. 5 depicts how the contours are reconstructed using isoline nodes and the gradient direction. In Fig. 5(a), the isoline nodes of the same isolevel report to the sink; Fig. 5(b) depicts the positions and the gradient directions, and the sink builds the Voronoi diagram over these positions. Further, for each cell, a line perpendicular to the gradient is drawn passing through the isoposition. The sink then performs smoothing of the boundary as shown in Fig. 5(c). The quality of the contour maps generated [12] is measured by mapping accuracy, which is defined as the ratio of accurately mapped area in the contour map to the total area. Energy efficiency is measured by the number of kilobytes of message transfers. The main disadvantage of static sensor network approaches is that the accuracy depends on the density of deployment. Static sensors do not adapt to the spatial dynamics of the phenomena, and large, dense deployments may not be feasible in applications like oil spills. Next we take a look at boundary estimation techniques that assume that the sensors have already arrived close to the boundary or can sense the boundary using range sensing from their location.

2.1.2.2. Tracking of boundary by sensors. Tracking of a boundary comprises energy-efficient, accurate boundary estimation from noisy observations followed by continuous tracking of the boundary. Target tracking [13], where useful information about the target's state is extracted from sensor observations, differs from boundary tracking in the following respects. In target tracking the main focus is on tracking the current location(s) of one or more targets and not on obtaining the entire track of a target over time, whereas in boundary tracking the focus is on estimating the entire boundary.
Moreover, the goal in target tracking is to keep the probability of missing a target low, not to estimate the target's location with high accuracy, whereas in boundary tracking the goal is to estimate the boundary with a desired accuracy. In this survey, we focus on boundary tracking approaches. Boundary tracking approaches in the literature can be classified (see Fig. 2) into two categories based on the type of sensors used to perform the tracking: (i) static sensors and (ii) mobile sensors. Amongst the techniques that use static sensors to perform boundary tracking, the algorithms can be further classified into those that provide (i) a confidence band around the boundary being tracked and (ii) an approximation of the boundary. While tracking the boundary, the focus could be on (a) energy efficiency or (b) tracking the dynamics of the boundary through a modeling approach. In this section we discuss some of the approaches in these categories.

2.1.2.2.1. Confidence interval around actual boundary. This technique [10] uses static range sensors capable of sensing a boundary from a distance. Here, the authors estimate a confidence band around the actual boundary using a regression relationship between the distance to the boundary and the sensor locations. In their approach, they assume that the sensors can "sense" the boundary remotely and focus on reducing the message complexity to determine the boundary within a confidence interval, abbreviated as CI. Fig. 6 depicts the scenario, with range sensors determining the boundary and estimating a confidence band around it. In Fig. 6(a), a sensor at location (x_s, y_s) positions its beam at an angle θ and detects a point (x_i, y_i) on the boundary with error α_i in the sensing direction. Given n such observations, the objective is to determine a confidence band of specific width δ, as shown in Fig. 6(b). Another objective is to reduce the number of sensors required for estimation (the number of ON sensors).
In one dimension, if a sensor is located at x, and its measurement of the boundary is
(a) Range sensing scenario.
(b) Estimated confidence band.
Fig. 6. Determining a confidence band around the boundary using range sensors [10].
depicted as y, then a regression relationship is defined between x and y. The authors use nonparametric regression with kernel smoothing to determine the boundary. If an estimated boundary point falls just outside the confidence band, they assume that coverage is zero; however, there may be situations where the estimated points lie just outside the confidence band and the boundary can still be covered. In this approach, the sensors are static and the accuracy of approximation depends upon the density of deployment, the sensing range as well as the sensing errors. The accuracy of coverage is defined in terms of the loss of coverage (LOC), which is the probability of the band not covering the actual boundary. If (x_i, d(x_i)) is a point on the actual boundary, LOC over a set of n sensors is defined as

LOC(δ) = (1/n) Σ_{i=1}^{n} I(|d̂(x_i) − d(x_i)| > δ),   (6)
where I(a) is the indicator function, i.e., I(a) = 1 if a is true and I(a) = 0 otherwise, d(x_i) is the actual distance at estimation point x_i and $\hat{d}(x_i)$ is its estimate. Minimizing the LOC implies maximizing the accuracy of coverage. In the absence of knowledge of the actual LOC, the heuristic uses another metric, the prediction error, as the termination criterion for adding estimation points. The authors propose a technique for estimating the boundary with a small set of estimation points. The iterative algorithm for selecting the number of estimation points starts with initial points in a predefined neighborhood, and additional points are included until the LOC reaches a certain upper bound. The prediction error at a specific location is the absolute difference between the observation and the estimated boundary at that location. When estimated over a set of n sensors, the probability of the prediction error exceeding δ can be used as a surrogate for the LOC. This probability is evaluated as

$\text{Prediction error}(\delta) = \frac{1}{n} \sum_{i=1}^{n} I\big(|\hat{d}(x_i) - y_i| > \delta\big)$.  (7)
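As an illustration, the two metrics of Eqs. (6) and (7) can be computed directly from the estimated and actual distances. This is our own sketch, not the authors' code; the function names and sample values are hypothetical.

```python
# Empirical loss of coverage (Eq. (6)) and prediction error (Eq. (7))
# over n estimation points; d_hat holds the estimates, d the actual
# distances, y the observations, delta the tolerance.

def loss_of_coverage(d_hat, d, delta):
    """Fraction of points whose distance estimate errs by more than delta (Eq. (6))."""
    return sum(abs(dh - di) > delta for dh, di in zip(d_hat, d)) / len(d)

def prediction_error(d_hat, y, delta):
    """Fraction of points whose estimate differs from the observation by more than delta (Eq. (7))."""
    return sum(abs(dh - yi) > delta for dh, yi in zip(d_hat, y)) / len(y)

d_true = [1.0, 2.0, 3.0, 4.0]   # actual distances d(x_i)
d_est  = [1.1, 2.8, 3.1, 4.0]   # estimates d_hat(x_i)
print(loss_of_coverage(d_est, d_true, 0.5))   # one of four points errs by > 0.5 -> 0.25
```

Because the actual distances d(x_i) are unknown in practice, only the second function is computable online, which is exactly why the heuristic uses the prediction error as a surrogate.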
The proposed algorithm is a combination of spatial and temporal estimation. The overall communication overhead is the cumulative number of transmissions required for the spatial and temporal estimation techniques. 2.1.2.2.2. Approximation of actual boundary. Several other techniques for event boundary detection using in situ sensors are proposed in [14,9]. These papers focus on estimating the boundary in an energy-efficient manner in the presence of faulty sensors. A more general version of boundary tracking includes tracking all homogeneous regions within a sensor field. In the region tracking approach proposed in [15], the authors use a kernel density estimator to approximate the data distribution at each sensor and use a classifier-based method to approximate region boundaries consisting of piecewise line segments. An alternative to the Kalman-Filter-based approach for tracking the dynamics of the boundary is the use of simple state-space models [16]. If the boundary follows nonlinear dynamics or the errors are non-Gaussian, the simple Kalman Filtering approach becomes inapplicable. In that case, more advanced techniques like extended Kalman Filtering [17] or Particle Filtering [18] are used. The basic idea of extended Kalman Filtering is to approximate the nonlinear functions in the state and observation equations by a Taylor-series expansion and then apply the Kalman filter equations for the state update. Particle Filters are sequential Monte Carlo methods based on ''particles'', or point-mass representations of probability densities, and can be applied to any state-space model. However, due to high computational overheads, particle filtering may not be applicable for energy-constrained sensors. To monitor boundaries with non-stationary dynamics, switching Kalman Filters are used [19].
While Kalman Filters and their extensions are used for prediction and tracking of objects with known or fixed dynamics, the authors of [19] focus on tracking storm cells, where a storm can grow in size over time. Here the root mean square error (RMSE) is used as the tracking metric. Tracking techniques can also be classified based on the dynamics of individual points of the boundary, such as the constant-velocity model or the white-noise acceleration model. More advanced dynamic models like the constant-acceleration model [20] assume that the acceleration is a Wiener process (a process with independent increments). This requires only the state-update equation of the Kalman Filter to be changed.
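To make the constant-velocity model concrete, a minimal one-dimensional Kalman filter tracking a single boundary point can be sketched as follows. This is our own illustration, not the trackers of [16-20]; dt, q (process noise) and r (measurement noise) are hypothetical tuning values.

```python
# Constant-velocity Kalman filter: state [position, velocity], noisy
# position measurements zs. Matrices are unrolled by hand for clarity.

def kalman_cv(zs, dt=1.0, q=1e-3, r=0.25):
    """Return final (position, velocity) estimate after filtering zs."""
    x, v = zs[0], 0.0                      # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]           # state covariance
    for z in zs[1:]:
        # Predict: x' = x + v*dt, P' = F P F^T + Q with F = [[1, dt], [0, 1]]
        x, v = x + v * dt, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with position measurement z (H = [1, 0])
        S = P[0][0] + r
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = z - x
        x, v = x + K0 * y, v + K1 * y
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
    return x, v

# A boundary point drifting at roughly +0.5 per step, observed with noise:
zs = [0.0, 0.52, 0.98, 1.55, 2.01, 2.49, 3.02]
pos, vel = kalman_cv(zs)
```

The extended Kalman Filter mentioned above replaces the constant matrices F and H with Jacobians of the nonlinear state and observation functions, but the predict-update cycle is identical.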
(a) Single UAV latency profile.
(b) Pair UAVs latency profile.
Fig. 7. Latency profile of unmanned vehicles performing aerial surveillance [21]. The thickness of the path is a measure of the latency of the update at that point when information is transmitted to the base station. Latency is lower when a pair of UAVs is used.
2.1.2.2.3. Minimizing latency of updates. Another technique, using mobile UAVs (Unmanned Air Vehicles), uses range sensing to detect the perimeter or boundary of a phenomenon such as a forest fire [21]; on-board cameras are used to obtain information about the area of the fire and image algorithms are used at the sink to obtain the boundary segmentation. The UAVs transmit the images obtained by the cameras to the sink. The authors assume that the UAVs have limited and noisy communication links and cannot upload data to the sink unless they are within a certain range. The time taken between two updates is defined as the latency. Let δ(x, t) denote the latency associated with a point x at time t. As time passes, the data at the station becomes staler until a UAV transmits more recent information. The main goal is to design a cooperative monitoring scheme that minimizes latency, thus reducing the time between updates. The minimum latency δmin corresponds to the time taken by the UAV to reach the point of interest from the base station, and the maximum latency depends on the total time taken by the UAV to make an observation and deliver the data to the base station. Fig. 7(a) depicts a single UAV; we observe that the path is thicker, indicating higher latency. Since the state of the fire is transmitted only after the UAV travels the entire perimeter, the greatest latency is at the beginning of the flight. In Fig. 7(b), a pair of UAVs monitors a circular fire; the thickness of the path is proportional to the latency (smaller in this case than for a single UAV), which is the delay between the time the images are collected and the time the sink is updated. The main goal is to spread the sensors evenly along the boundary in a minimum-latency configuration. The UAVs can only communicate when they are close to the sink.
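The latency benefit of a pair can be illustrated with a deliberately idealized model (our sketch, not the model of [21]): a UAV traverses a closed perimeter of length P at unit speed and can upload only at the base station at arc-position 0.

```python
# For a single UAV, a point observed at arc-position p is delivered only
# after the remaining (P - p) of the lap. For a counter-rotating pair
# launched together, each point is reported by whichever UAV reaches the
# station sooner after observing it, roughly halving the worst-case delay.

def delivery_delay_single(p, P):
    """Delay between observing point p and uploading it at the station."""
    return P - p                      # finish the lap before uploading

def delivery_delay_pair(p, P):
    """The nearer of the two counter-rotating UAVs delivers point p."""
    return min(P - p, p)              # clockwise UAV vs. anticlockwise UAV

P = 10.0
grid = [i / 10 for i in range(101)]
worst_single = max(delivery_delay_single(p, P) for p in grid)
worst_pair = max(delivery_delay_pair(p, P) for p in grid)
print(worst_single, worst_pair)
```

The meet-and-reverse motion described next improves on this further by letting the pair relay fresh information towards the sink between meetings.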
The motion of the UAVs is such that they are distributed evenly along the perimeter and, for every pair, one is headed in the clockwise direction and the other in the anticlockwise direction. Pairs of UAVs meet and exchange information, then reverse direction to meet the other neighbor and exchange information. This increases the frequency of updates, reducing latency. The sensors are launched from the base station and traverse the boundary to attain the minimum-latency configuration. The authors do not consider minimizing the energy required by the UAVs to perform tracking. 2.1.2.2.4. Maximizing accuracy by spatial interpolation. In this technique [22], the goal is to approximate the boundary by a polygon. The mobile sensors use the sensed information to place interpolation points that define the polygon estimate. These points are distributed uniformly along the boundary, and the boundary is approximated by the polygon formed by connecting the interpolated points. The approach assumes knowledge of the local tangent and curvature of the boundary, and every sensor exchanges this information with other sensors in a ring topology. The sensors update and interpolate points on the polygon representing the boundary and move along the interpolated points. 2.1.2.2.5. Minimizing sensing error by boundary crossing. In this approach [23], a single agent detects boundary crossings using noisy sensor data and steers the sensor along the boundary, moving in and out of the boundary. The sensors do not assume knowledge of the tangent or curvature, unlike the previous approach. Here the sensor is required to move in and out of the boundary to track it. The steering controller is used to minimize the number of boundary crossings and flatten the trajectory into a straight path. In the multiple-vehicle case, the authors assume that the boundary can be approximated by an ellipse and formulate the boundary estimation problem as an optimization problem.
The drawback is that, in the absence of errors, the distance traveled by the sensors using this method increases, due to boundary crossings, compared to just moving along one side of the boundary. In summary, the techniques for determining the boundary using static sensors require a high density of sensors for high accuracy. Range-sensor-based approaches use spatio-temporal correlations of multiple sensor readings to determine the boundary efficiently. The mobile sensor techniques used for tracking the boundary assume functional forms for the boundary, or knowledge of the tangent and curvature, to determine the boundary.
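As an illustration of the interpolation-point idea of 2.1.2.2.4, uniformly spaced points connected into a polygon approximate a smooth boundary increasingly well as their number grows. The sketch below is our own, using a circular boundary as a hypothetical stand-in for the sensed contour.

```python
import math

# Place m interpolation points uniformly (by arc length) on a circular
# boundary and measure how well the connecting polygon approximates it.

def polygon_perimeter(pts):
    """Perimeter of the closed polygon through pts."""
    return sum(math.dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

def interpolate_circle(m, radius=1.0):
    """m uniformly spaced interpolation points on a circle."""
    return [(radius * math.cos(2 * math.pi * i / m),
             radius * math.sin(2 * math.pi * i / m)) for i in range(m)]

# The polygon perimeter approaches the true circumference 2*pi as m grows:
coarse = polygon_perimeter(interpolate_circle(6))    # regular hexagon, 6.0
fine = polygon_perimeter(interpolate_circle(60))     # close to 2*pi
```

In [22] the sensors maintain exactly such a point set collaboratively, exchanging tangent and curvature information in a ring topology to keep the points uniformly spaced.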
Fig. 8. Classification of boundary covering algorithms with sensors physically covering the contour.
2.1.3. Active learning based approach to reduce number of samples Recently, active learning techniques have been proposed to identify contours or function threshold boundaries [24]. This technique has been developed outside the sensor network framework. The main objective is to make an informed choice regarding future experiments, based on the current set of observations, such that the overall number of samples required to approximate the boundary of a threshold function is reduced. The technique does not consider distance between samples as a cost metric. The authors present a learning-based strategy that focuses on reducing the number of points to sample in order to learn the boundary of a contour. They model the underlying function as a Gaussian Process (GP). The technique uses Kriging, a form of GP regression, to compute the prediction at a point; the prediction is Gaussian since the joint distribution of any finite set of sampled points of a GP is Gaussian. The prediction is based on the measurements from already sampled points, whose distribution is normal with mean equal to the true functional value and variance equal to the sampling noise. While the previous approaches performed queries (sampling) with the sensors distributed throughout the region, in this approach the initial locations to sample are chosen at random from the sensing region. The algorithm scores each location and chooses the one with the highest score as the location of the next sample. The authors propose a variety of scores such as random, probability of misclassification, entropy, variance, product of metrics and straddle. The straddle metric allots the highest score to points that are both previously unknown and near the boundary. Using this metric, the number of experiments required to obtain the boundary with 99% accuracy is the least among the scores compared. This technique does not investigate the energy overhead of obtaining the boundary.
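In our simplified rendering, the straddle score combines the GP posterior mean mu(x), standard deviation sigma(x) and the threshold tau as straddle(x) = 1.96 sigma(x) − |mu(x) − tau|, so it is largest for points that are both uncertain and near the boundary. The candidate values below are hypothetical.

```python
# Score candidate locations and pick the next point to sample.

def straddle(mu, sigma, tau):
    """Straddle score for each candidate (high = uncertain and near tau)."""
    return [1.96 * s - abs(m - tau) for m, s in zip(mu, sigma)]

def next_sample(mu, sigma, tau):
    """Index of the candidate location with the highest straddle score."""
    scores = straddle(mu, sigma, tau)
    return scores.index(max(scores))

# Candidates: far from tau but uncertain; near tau and uncertain;
# near tau but already well known.
mu    = [5.0, 2.1, 2.0]
sigma = [1.0, 1.0, 0.01]
tau   = 2.0
print(next_sample(mu, sigma, tau))   # picks the near-and-uncertain candidate
```

The posterior quantities mu and sigma would come from the Kriging predictor; here they are supplied directly to keep the sketch self-contained.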
While the number of points to be sampled is minimized, no cost is attributed to sampling. If the number of candidate points is small, the algorithm takes longer to converge. Next we discuss algorithms that are designed for locating as well as physically surrounding the contour using mobile sensors. 2.2. Boundary covering There are three categories of solutions in this area based on the nature of the coordination of mobile sensors, as depicted in Fig. 8: (i) Uncoordinated mobility — where the sensors do not collaborate to determine the direction of movement and make measurements along an arbitrary or pre-specified path, (ii) Partially coordinated mobility — where the sensors initially move in a random manner in the field until a location on the contour is determined by any sensor and then the
sensors coordinate to move towards that location, and (iii) Coordinated mobility — where the sensors determine the direction of movement based on measurements made in the sensing region and collaboration. We discuss various techniques for contour covering in the literature under each of these categories. 2.2.1. Uncoordinated mobility In uncoordinated mobility, the sensors do not coordinate with each other to determine the direction of movement. Based on the mobility model, we can classify the approaches into the following categories, as depicted in Fig. 8: (i) Random mobility — where the sensors move in random directions throughout the sensing region, and (ii) Controlled mobility — where mobile sensors move in previously specified patterns, e.g., a raster scan. We discuss two strategies for contour covering that use these mobility models. 2.2.1.1. Random mobility model. Uncoordinated mobility is used to estimate level sets [25]. The sensors are assumed to be mounted on agents that make measurements along an arbitrary path. The sensing region is partitioned into a number of cells and each cell is assigned to a leader node, which maintains a list of cells for which it is the leader. A node receives information from other nodes only for those cells for which it is a leader. When two nodes are in communication range, they exchange information about their cells. If a node is near the base station, it uses a linear Support Vector Machine to compute the level set within its list of cells and transmits the information to the base station. This technique also assumes the availability of a training set of samples comprising points on the contour, to help the classifier learn what lies on the boundary and what does not. In the oil spill scenario, such a training set may be difficult and time consuming to obtain. In this approach, coverage and latency are used as metrics.
Coverage is the percentage of the boundary estimated within the error threshold at any given time by the algorithm, and average latency is defined as the average time spent before achieving the required error threshold for the estimate of the boundary. The objective is to minimize the error in coverage; the trade-offs with latency and communication cost are discussed. The main drawback of this algorithm is that the coverage is dependent on whether a node visits a particular region of the boundary. Under energy constraints, the sensors may not visit all parts of the boundary, leading to very low coverage values. Uncoordinated mobility is unsuitable for disaster management scenarios, since the sensors move arbitrarily and do not move with the intent of maximizing the coverage of the contour. The amount of time required for accurate coverage can be arbitrarily large, even though convergence is guaranteed asymptotically. 2.2.1.2. Controlled mobility model and adaptive sampling. The technique proposed in [26] uses active learning to reduce the number of samples for monitoring a spatial field by using a two-pass strategy. The mobile sensors coarsely scan the entire field once to determine a coarse approximation of the boundary and, using these samples, perform path planning to reduce the number of samples required to determine the boundary within a given error tolerance. Their strategy comprises two steps. First, the mobile sensors begin at uniformly spaced locations in the field and survey the field in a raster-scan fashion. A rough estimate of the contour is computed using these measurements. In the second pass, each sensor starts close to the contour estimated in the previous phase and performs a more refined scan. If l is the length of the field and k is the number of mobile sensors, then the rows of the raster scan are l/k apart. The measurements are sent to a sink, where a coarse estimate of the contour is computed.
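The coarse first pass can be sketched as follows. This is our simplified rendering, not the authors' planner: k rows spaced l/k apart are sampled at a coarse spacing, and intervals where the field crosses the threshold tau are flagged for the refined second pass. The radial field f below is a hypothetical stand-in.

```python
# Flag (row, x-interval) cells whose endpoint samples straddle tau.

def coarse_pass(f, l, k, tau, samples_per_row=10):
    """Return (row_y, (x_lo, x_hi)) cells where f - tau changes sign."""
    flagged = []
    dx = l / samples_per_row
    for r in range(k):
        y = (r + 0.5) * l / k                 # rows spaced l/k apart
        vals = [f(i * dx, y) for i in range(samples_per_row + 1)]
        for i in range(samples_per_row):
            if (vals[i] - tau) * (vals[i + 1] - tau) < 0:
                flagged.append((y, (i * dx, (i + 1) * dx)))
    return flagged

# Field decreasing with distance from (5, 5); the contour f = 7.5 is a
# circle of radius 2.5, so only the middle rows intersect it.
f = lambda x, y: 10.0 - ((x - 5) ** 2 + (y - 5) ** 2) ** 0.5
cells = coarse_pass(f, l=10.0, k=5, tau=7.5)
```

Only the flagged cells need to be revisited in the second pass, which is where the sample savings come from.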
Next, the mobile sensors are guided along the regions identified as containing the boundary in the coarse survey and collect an additional n/k samples each, where n is the number of samples in the entire field. The minimum path length for each sensor scanning the field is at least l^2/k, since the sensors perform a raster scan to visit points on the grid. While the adaptive sampling method guarantees convergence, it does not exploit information available to a sensor locally, or from other sensors, to determine the direction of motion. When the length of the field l is large, spanning several kilometers, sampling the whole field may be prohibitively expensive and cause undue delays. Also, under energy constraints, the sensors do not maximize coverage in this approach, since the sensors do not move relative to the contour. The main objective is to minimize the number of samples in the second pass based on the results of the first pass. It is our belief that the sensors should exploit whatever information is available to plan paths, and resort to scanning only when such information is unavailable, in order to minimize energy consumption and delay. 2.2.2. Partially coordinated mobility In this category, the sensors perform uncoordinated mobility until one of them finds the contour and then collaborate to move in the direction of the converged sensor. We describe this approach below. 2.2.2.1. Using random search for detection and coordinated tracking. In this strategy [27], the sensors use a random coverage controller (RCC) to scan the area and then use a spread-the-word technique to locate the sensors on the perimeter. The RCC uses a logarithmic spiral to search for the boundary. Once a sensor detects the boundary, it sends the coordinates of the perimeter to other sensors within its transmission range. These sensors use distance minimization via a potential function controller (PFC) to approach the perimeter.
Once they are in range of the perimeter, they switch to a tracking controller (TC), which avoids collisions with sensors that are already tracking. The main objective is to continuously track the perimeter. The sensors do not share the task of tracking, since they follow a cyclic pursuit trajectory. The main drawback is that the sensors do not use any information to determine the direction of the boundary until one of the sensors locates the boundary in the course of the random search. This can lead to a long path length. The algorithm does not minimize path lengths for boundary tracking and does not consider energy constraints to maximize coverage and minimize coverage error. Since the sensors move directly towards
(a) Sensors in a swarm inside contour.
(b) Sensors in a swarm outside contour.
Fig. 9. Initial deployments of sensors and the evolution of the swarm at different times when the sensors are inside and outside the contour [28].
a sensor that has located the boundary, they may end up collocated on the perimeter, in which case the task of tracking different parts of the contour is not shared. 2.2.3. Coordinated mobility — approach the contour In coordinated mobility, the sensors communicate with all other sensors at every step to determine the direction of motion. Based on the objectives of the algorithms proposed in this category, we classify them into approaches that minimize (i) communication cost and (ii) mobility cost, as shown in Fig. 8. 2.2.3.1. Approaches that minimize communication cost. In the first category, we discuss multiple strategies in which the sensors use point measurements from other sensors to determine the direction of motion. We first describe swarm-based approaches, where the direction of motion of the swarm is determined by point measurements from all the sensors in the swarm. Another approach deploys a static sensor network alongside mobile sensors; at every step, the mobile sensors use point measurements from the static sensors to determine the direction of motion. 2.2.3.1.1. Swarm based collective motion algorithms. Since the sensors are collocated in the swarm, they arrive at the contour in a clustered manner and track the boundary as a single unit. The virtual snake algorithm [29] from image segmentation is directly adapted for collective motion to determine environmental boundaries in [28]. In image segmentation, the goal is to find the boundary of an object. This is accomplished by defining a snake, a virtual contour that is an energy-minimizing curve. The snake is represented as a controlled-continuity spline and moves in response to internal and external forces, simulating an elastic band. The objective is to dynamically conform to object shapes. The internal forces impose a smoothness constraint on the curve and the external forces draw the snake towards the object it is trying to locate.
The image gradient acts as the external force. This approach assumes that the sensors can be placed at regular intervals along the virtual contour, and the readings from each sensor are used to compute the direction of the gradient of the field at every sensor. The algorithm imposes constraints on the choice of inter-sensor distance: the distance should be small for accurate computation of the gradient. In the snake algorithm, the virtual contour is a continuous spline; here, the authors discretize the virtual contour into a configuration of points associated with the sensor locations. The algorithm comprises a phase in which the sensors move towards the boundary, followed by a sparsing phase, in which the sensors move away from each other along the boundary. The major drawbacks are as follows. First, the algorithm works only when the virtual contour overlaps the actual boundary, as shown in Fig. 9(a) (virtual contour inside the actual boundary) and Fig. 9(b) (actual boundary inside the virtual contour). If there is no overlap between the actual boundary and the virtual contour, then once the sensors arrive at the boundary, the group will track the boundary like a single mobile sensor. The sensors cannot use sparsing, since this would lead to sensors moving far away from each other, as well as from the boundary, causing instability. Second, in order to simulate the virtual contour curve, a large number of mobile sensors is needed, as shown in Fig. 9. Third, the objective is not to maximize coverage or minimize latency, but simply to locate the boundary. Fourth, the coverage and the distance traveled by the sensors are highly dependent on the initial location of the virtual contour with respect to the boundary. In images, minimizing distance traveled or maximizing coverage under energy constraints is not an issue.
However, in applications of mobile sensor networks, these parameters are critical, and hence directly adapting the virtual snake algorithm is not practical. The approach in [30] uses fewer mobile sensors (only four) to locate and track the boundary compared to the previous approach. The sensors compute the gradient collectively from point measurements from each of the sensors. In Fig. 10(a), the direction of the swarm is determined by the center of the swarm rc, where r1, r2, r3 and r4 are the vehicle positions in the swarm. Fig. 10(b) depicts the tracing of the level curve using the swarm. Here rc denotes the center of the formation, rE the center of the formation of vehicles r1, r3 and r4, and rF the center of the formation of r2, r3 and r4; rJ and rK are points on the level curve. Given ri, the measurement taken by the ith platform is

$z_i = z(r_i) + n_i$,  (8)
(a) Estimating collective gradient of swarm.
(b) Estimating level curve using swarm.
Fig. 10. Approaching and covering contour using swarm motion [30].
where $n_i \sim \mathcal{N}(0, \sigma^2)$ is i.i.d. Gaussian noise. The goal is to find zc, the estimate of the measurement z(rc) at the center of the formation, and Dc, the estimate of the gradient ∇z(rc). In Fig. 10(a), a and b are designed to obtain minimum mean square error in the estimates zc and Dc. The formation is maintained such that its center tracks the level curve using the above measurements. The derived control laws guarantee that the center of the formation moves along the level curve at unit speed and stabilize the shape of the formation. The main differences between this approach and [28] are the number of sensors used and the computation of the gradient. Here, the authors derive convergence laws and the sensors move in a symmetric formation. Once they arrive at the contour, the group of sensors moves along the boundary, tracking it as a single unit. Again, this approach has several drawbacks. First, the goal is to locate and track the boundary, not to minimize distance traveled or maximize coverage under energy constraints. Second, the task of tracking the boundary is not shared by the mobile sensors, and finally, the sensors need to communicate at every step and synchronize their movement to maintain the formation. 2.2.3.1.2. Static sensors guiding mobile sensors. Another interesting approach, proposed for contour covering using a single mobile sensor, is described in [31]. Here the authors describe a setup comprising static sensors deployed uniformly at random in the region and a single mobile node. They assume that the node degree of the static sensor network is 6–12, which indicates a high node density. The mobile node computes the direction of motion based on the locations of the two neighboring nodes with the best gradients and moves along a path equidistant from both nodes.
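Returning to the swarm formation of [30]: a collective gradient estimate from the noisy point measurements of Eq. (8) can be sketched as a least-squares plane fit. This is our own illustration, not the authors' control law.

```python
# Fit z ~ z_c + g . (r - r_c) to the platforms' measurements; g is the
# gradient estimate and z_c the estimate at the formation center.

def swarm_gradient(positions, readings):
    """Least-squares estimate of (z_c, gradient) from point measurements."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    zc = sum(readings) / n
    # Normal equations of the centered least-squares plane fit (2x2 system).
    sxx = sum((p[0] - cx) ** 2 for p in positions)
    syy = sum((p[1] - cy) ** 2 for p in positions)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in positions)
    sxz = sum((p[0] - cx) * (z - zc) for p, z in zip(positions, readings))
    syz = sum((p[1] - cy) * (z - zc) for p, z in zip(positions, readings))
    det = sxx * syy - sxy * sxy
    gx = (sxz * syy - syz * sxy) / det
    gy = (syz * sxx - sxz * sxy) / det
    return zc, (gx, gy)

# Four platforms in a cross formation sampling the linear field z = 2x + 3y:
pos = [(1, 0), (-1, 0), (0, 1), (0, -1)]
zc, (gx, gy) = swarm_gradient(pos, [2 * x + 3 * y for x, y in pos])
```

With noisy readings the same fit yields the minimum-mean-square-error flavor of estimate that the control laws of [30] are built on; symmetric formations keep the 2x2 system well conditioned.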
The metrics for evaluating the performance of this approach are (i) percentage of success, the percentage of trials in which the mobile node found the contour successfully, and (ii) the ratio of the distance traveled by the mobile node to the shortest distance to the contour from its initial location. The main drawbacks of this approach are as follows. First, the authors only address situations where a random deployment of static and mobile sensors is possible. In scenarios such as an oil spill in an ocean, the time taken to deploy a large number of static sensors to make this algorithm work reliably may be large. Second, the authors do not discuss the multiple-mobile-sensor scenario. They compare the length of the path to the contour with the shortest path to the contour and do not consider the path traversed along the contour. They do not focus on minimizing the overall latency of arriving at the contour and tracing the contour by multiple mobile nodes. For contours with a large perimeter, reducing the combined path length of the mobile sensors from their initial positions to a point on the contour, as well as their path along the contour, is important. Third, in the case where the mobile sensors are deployed close to each other, the sensors do not distribute the task of covering the contour by surrounding it. The authors show that the percentage of successful trials is a function of node degree and that the sensors need a node degree of at least 6 for 100% success, which results in a dense deployment of sensors. Finally, they do not discuss the impact of node degree on the success percentage of their algorithm in nonuniform fields. 2.2.4. Coordinated mobility with approach and surround In the coordinated mobility strategies we have looked at so far, the sensors collaborate to approach the contour but do not surround it.
In [32], the authors propose three strategies by deriving a cost model to determine the direction of greatest benefit, given that the objective is to approach and surround the contour. All the strategies assume knowledge of an interior point of the contour or of the centroid of the contour. The sensors' movement comprises a converge phase, in which the sensors collectively move towards the contour, and a coverage phase, in which the sensors that have landed on the contour cover it by moving along it. The derived cost model makes a decision between approaching and surrounding. The sensors surround the contour with respect to the centroid. At every step, the direction is computed using a cost model that combines approach and surround using a bias factor. The benefit of moving to a particular neighboring location is determined by the cost at that location, which is defined per grid position per sensor. In the first strategy, Greedy Algorithm (GA), the sensor moves to the neighboring
Fig. 11. Sensors executing MCD algorithm in a 2D plane sensing region [32].
position with the least cost. In the second strategy, Simulated Annealing (SA), the sensor chooses a neighboring position at random. If the cost at the neighboring position is less than the cost at the current position, it moves to the neighboring position; otherwise, it moves to the new position (a bad move) with a certain probability. In the third strategy, Minimizing Centroid Distance (MCD), the sensors move to the lowest-cost position, as in GA. However, if a sensor gets trapped in a local minimum, it moves to the neighboring position that minimizes the distance between the actual contour centroid and the centroid of the convex hull of the locations of all other sensors together with the neighboring point. This provides the course correction required to move towards the contour. This strategy is based on increasing the overlap between the actual contour and the convex hull of the sensor locations as they approach and surround the contour, as shown in Fig. 11. The quality metric is the Relative Contour Error (RCE), defined as the relative difference between the areas of the polygons formed by the points on the actual and estimated contours:

$\mathrm{RCE} = \frac{|A_{est} - A_{act}|}{A_{act}}$.  (9)
The performance metric is latency, defined as the maximum number of steps taken by the sensors to estimate the contour. If t_i denotes the number of steps taken by the ith sensor, then the latency L is

$L = \max_{i = 1, \ldots, N} t_i$.  (10)
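The two metrics of Eqs. (9) and (10) can be computed directly from the contour polygons and per-sensor step counts; this sketch is our own illustration, with hypothetical sample polygons.

```python
# Relative contour error from polygon areas (Eq. (9)) and latency as the
# maximum number of steps over all sensors (Eq. (10)).

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def relative_contour_error(est_pts, act_pts):
    """RCE = |A_est - A_act| / A_act."""
    a_est, a_act = polygon_area(est_pts), polygon_area(act_pts)
    return abs(a_est - a_act) / a_act

def latency(steps):
    """Maximum steps taken by any sensor."""
    return max(steps)

actual = [(0, 0), (4, 0), (4, 4), (0, 4)]       # area 16
estimate = [(0, 0), (4, 0), (4, 3), (0, 3)]     # area 12
print(relative_contour_error(estimate, actual))  # 0.25
print(latency([7, 12, 9]))                       # 12
```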
The main drawback is the assumption of knowledge of the centroid of the contour. In many realistic situations such information may not be available, so this is a strong assumption. Second, in MCD, the latency of estimation depends upon the number of converged sensors: the larger the number of converged sensors required to start the coverage phase, the higher the latency. This is because the converged sensors wait for far-off non-converged sensors to land on the contour before commencing the coverage phase. The latency is also sensitive to the bias factor, a pre-determined constant that chooses between approach and surround at every step. Third, the approach cannot precisely estimate contours with nonconvex, arbitrary shapes. 2.2.5. Coordinated mobility with adaptive approach and surround A centralized solution using periodic updates of estimated contour points, called the Adaptive Contour Estimation (ACE) algorithm, is presented in [33]; it addresses several of the drawbacks of the authors' previous work [32]. ACE dynamically estimates the contour and its centroid and adaptively chooses between approach and surround using this information. It assumes that the sensors can make measurements within a small sensing radius, communicate with a centralized sink and have sufficient energy for moving towards and surrounding the contour. Given an input model of the field and the locations traced by already converged sensors (if any), the centralized sink computes the contour, defined as a bounding convex hull of the points already traced by converged sensors and the estimated convergence points of non-converged sensors. The sink estimates the target angles for the sensors in the network based on the computed contour. The sensors in turn use the gradient information and the target angle information to approach and surround the contour. In this approach, a sensor begins to trace the contour as soon as it converges onto the contour, using the wall-following algorithm.
The bias factor is estimated as a function of the sensor’s distance from the contour, area of the computed contour and the extent of spread of sensors in the field.
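The adaptive choice can be pictured as blending two candidate step directions with a bias factor beta in [0, 1]. The exponential-decay formula for beta below is purely our own stand-in; the actual ACE bias factor also depends on the computed contour area and the spread of sensors, which we fold into a single hypothetical scale parameter.

```python
import math

# Blend a unit "approach" direction (towards the contour) and a unit
# "surround" direction (tangential, around the centroid) with bias beta.

def unit(v):
    """Normalize a 2D vector."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def step_direction(approach_dir, surround_dir, dist_to_contour, scale=5.0):
    # Hypothetical rule: surround more when near the contour.
    beta = math.exp(-dist_to_contour / scale)
    blend = (beta * surround_dir[0] + (1 - beta) * approach_dir[0],
             beta * surround_dir[1] + (1 - beta) * approach_dir[1])
    return unit(blend)

# Far from the contour the blended step is dominated by approach; near it,
# by surround.
far = step_direction((1.0, 0.0), (0.0, 1.0), dist_to_contour=50.0)
near = step_direction((1.0, 0.0), (0.0, 1.0), dist_to_contour=0.5)
```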
(a) Directly approaching vs. surrounding the contour with sensors in a cluster far away from the contour.
(b) Directly approaching vs. surrounding the contour with sensors evenly distributed and near the contour.
Fig. 12. Directly approaching the contour results in lower latency compared to surrounding it when the sensors are clustered and far away from the contour or when the sensors are distributed evenly and close to the contour.
Fig. 13. Directly approaching the contour results in a higher latency compared to surrounding the contour when the contour is large and the sensors are close to the contour and clustered together [34].
The quality metric for the ACE algorithm is Precision, defined as follows. If Cest represents the set of points on the contour estimated by a given algorithm and C the set of points on the actual contour, then, in the absence of sensing errors,

$\mathrm{Precision} = \frac{|C_{est} \cap C|}{|C|}$.  (11)
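With the contours discretized to grid points, Eq. (11) reduces to a set intersection; the sketch below is our own illustration with hypothetical point sets.

```python
# Precision = |C_est ∩ C| / |C| over discretized contour points.

def precision(est_points, actual_points):
    """Fraction of actual contour points recovered by the estimate."""
    est, act = set(est_points), set(actual_points)
    return len(est & act) / len(act)

actual = [(0, 0), (1, 0), (2, 0), (2, 1)]
estimated = [(0, 0), (1, 0), (2, 0), (3, 1)]   # recovers 3 of the 4 points
print(precision(estimated, actual))             # 0.75
```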
The performance metric is latency, the maximum number of steps taken by the sensors to estimate the contour, as defined in Eq. (10). There are several drawbacks to the periodic-update centralized scheme. First, the fusion center is a single point of failure; it is better to replicate the shared state among the sensors than to rely on a single fusion center. Second, the latency depends on the update period. Third, the ACE algorithm does not make use of remaining-energy information to surround the contour. 2.2.6. Coordinated mobility — energy aware, asynchronous update, adaptive approach and surround In follow-up work to ACE [34], the authors present an asynchronous-update, energy-aware algorithm E that determines the choice between approach and surround. They propose an energy-aware bias factor that takes into account the energy remaining in the sensor and an estimate of the amount of work remaining to be done by the sensor in determining the direction of movement, in addition to the distance to the contour, the perimeter of the contour and the extent of the distribution of sensors in the field. They compare the efficacy of E with two baseline algorithms: (i) G, where the sensors just approach the contour but do not surround it, and (ii) S, where the sensors approach and surround the contour assuming knowledge of the centroid. If the sensors are already distributed evenly in the field with respect to the contour, then it is beneficial to directly approach; if the sensors are clustered in the field, then it is better to surround as well. Figs. 12 and 13 depict example scenarios where G or S alone is unsuitable for efficient contour covering, motivating the need for an adaptive algorithm.
Table 1
Summary of comparison of strategies for boundary estimation, covering and tracking.

Category | Strategies | Trade-off factors | Energy aware without physically covering | Energy aware with physically covering
Field estimation | Remote sensing [4] | Accuracy vs. image resolution | No | No
 | LIDAR [5] | Accuracy vs. sampling resolution | No | No
 | In-network aggregation TAG [6] | Accuracy vs. communication | Yes | No
 | Data suppression [7] | Accuracy vs. communication | Yes | No
Boundary estimation | Localized edge detection [8] | Accuracy vs. probing radius | Yes | No
 | Hierarchical cluster [11] | Accuracy vs. node density | Yes | No
 | Identify isoline nodes [12] | Accuracy vs. communication | Yes | No
Boundary tracking | Using regression [35] | Accuracy vs. communication | Yes | No
 | Prediction based energy savings [36] | No. of samples vs. sampling frequency | Yes | No
 | Pairs of vehicles moving in opposite directions [21] | Communication latency | No | No
 | Interpolation points approximating polygon [22] | Accuracy vs. sensing errors | No | No
 | Using known functional form [23] | Accuracy vs. no. of crossings | No | No
Active learning estimation | Scoring of samples for next sampling [24] | Accuracy vs. no. of samples | No | No
Uncoordinated mobility | Sensors move in a random manner [25] | Accuracy vs. communication | Yes | No
 | Sensors move in a predefined manner [26] | Accuracy vs. no. of samples | No | No
Partially coordinated mobility | Sensors move in a random manner until boundary is detected [27] | No trade-off (locate only) | No | No
Coordinated mobility | Sensors move as a single group [28] | Accuracy vs. dist. between nodes | No | No
 | Sensors move as a single group [30] | Accuracy vs. sensing errors | No | No
Hybrid networks | Static sensors guide mobile node [31] | Accuracy vs. node density of static deployment | No | No
MCD | Nonadaptive approach and surround [32] | Accuracy vs. latency | No | No
Adaptive, energy aware | E | Accuracy vs. latency, coverage | No | Yes

The proposed algorithm E dynamically decides between approach and surround based on parameters such as the distance to the contour, the perimeter of the contour, the distribution of sensors in the field and the energy remaining in the sensors to perform the task. The authors demonstrate that E reduces latency and improves precision and coverage under energy constraints when compared to G and S. They also show that a trade-off exists between precision and coverage when far-lying sensors directly approach the contour instead of surrounding it. The authors provide sensitivity analyses with respect to noise, reduction in communication range and scaling of the number of sensors. The efficiency of the algorithm depends on the quality of the model of the field that is input to the algorithm. The quality of E is measured by coverage and precision. Coverage C is defined as the ratio of the area of intersection of the convex hulls of the estimated and actual contours to the area of the actual contour. It indicates how much of the actual contour is covered by the estimated contour.

C = Area(C_est ∩ C_act) / Area(C_act).  (12)
Precision Φ is defined as the ratio of the area of intersection of the convex hulls of the estimated and actual contours to the area of the estimated contour. It measures how much of the estimated contour lies on the actual contour.

Φ = Area(C_est ∩ C_act) / Area(C_est).  (13)
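Eqs. (12) and (13) can be approximated numerically: build the two convex hulls, then estimate the areas by sampling a regular grid over the bounding box. A self-contained sketch, in which the contour point sets and the grid resolution are illustrative assumptions:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside(hull, p):
    """Point-in-convex-polygon test for a hull given in CCW order."""
    for i in range(len(hull)):
        a, b = hull[i], hull[(i + 1) % len(hull)]
        if (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) < 0:
            return False
    return True

def coverage_precision(est_pts, act_pts, step=0.05):
    """Approximate Eqs. (12)-(13) by grid sampling over the bounding box."""
    h_est, h_act = convex_hull(est_pts), convex_hull(act_pts)
    xs = [p[0] for p in est_pts] + [p[0] for p in act_pts]
    ys = [p[1] for p in est_pts] + [p[1] for p in act_pts]
    n_est = n_act = n_both = 0
    x = min(xs)
    while x <= max(xs):
        y = min(ys)
        while y <= max(ys):
            in_e, in_a = inside(h_est, (x, y)), inside(h_act, (x, y))
            n_est += in_e
            n_act += in_a
            n_both += in_e and in_a
            y += step
        x += step
    return n_both / n_act, n_both / n_est  # coverage C, precision Phi

# Two unit squares offset by half a unit: both ratios are roughly 0.5.
act = [(0, 0), (1, 0), (1, 1), (0, 1)]
est = [(0.5, 0), (1.5, 0), (1.5, 1), (0.5, 1)]
c, phi = coverage_precision(est, act)
```

Grid sampling trades accuracy for simplicity; an exact alternative would clip the two convex polygons against each other and apply the shoelace formula to the intersection.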
The energy consumed is measured by latency L, defined as the maximum distance traveled by the sensors to cover the contour. If T_i is the total distance traveled by sensor S_i from beginning to termination, then

L = max(T_1, ..., T_n).  (14)

In Eq. (14) the latency is measured in terms of the actual distance traveled, as opposed to the number of steps in Eq. (10); if the step length is non-uniform, Eq. (14) is the better measure. The main drawbacks of E are as follows: (i) it assumes that the contour is a connected component, whereas in reality contours can be disjoint; deploying multiple robots in different regions of the sensing field can cover multiple level sets, but there is no guarantee that the algorithm will always find all of them; (ii) it assumes that each sensor can make several measurements at any given location within its sensing range, with the final measurement at a location taken as the average of all the measurements, and the algorithm is sensitive to the variance in the sensed measurements.
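Eq. (14) is straightforward to compute from the sensors' motion logs. A sketch, assuming each path is recorded as an ordered list of (x, y) waypoints:

```python
import math

def latency(paths):
    """Eq. (14): L = max over sensors of the total distance traveled,
    where each path is the ordered list of waypoints a sensor visited."""
    def travel(path):
        return sum(math.dist(p, q) for p, q in zip(path, path[1:]))
    return max(travel(path) for path in paths)

# Sensor 1 travels 2 units; sensor 2 travels 5 units; so L = 5.
paths = [[(0, 0), (0, 2)], [(0, 0), (3, 4)]]
print(latency(paths))  # 5.0
```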
Table 2
Summary of metrics for boundary estimation and tracking.

Type | Metrics | Reference
Quality | Mean thickness ratio and Missed Detection Error | [8]
 | Degree of fit | [9]
 | Mean Square Error | [11]
 | Mapping accuracy | [12]
 | Loss of Coverage and Prediction Error | [35]
 | Root Mean Square Error | [19]
 | Time between updates | [21]
 | Percentage of boundary estimated within error threshold | [25]
 | Percentage of success | [31]
 | Relative Contour Error | [32]
 | Precision | [33]
 | Coverage, Precision and Coverage Error | [34]
Performance | In-network + out-network communication cost | [11]
 | Number of bytes transferred | [12,6]
 | Number of transmissions | [35]
 | Average time spent before achieving error threshold | [25]
 | Square of the distance between nodes | [31]
 | Ratio of distance traveled to shortest distance to contour | [32]
 | Maximum number of steps taken by the sensors in the network | [33]
 | Maximum distance traveled by the sensors in the network | [34]
3. Conclusions and open research issues

In summary, this paper provides a taxonomy of boundary estimation and tracking techniques in the literature. It classifies the techniques along several axes, such as the task goals, the type of sensors, the type of sampling and the type of sensing used for estimating, tracking and covering boundaries. Apart from the taxonomy, it also highlights the objectives, metrics, trade-off factors, and pros and cons of the boundary estimation approaches. Table 1 summarizes the different categories of techniques discussed in this survey, along with the cost they optimize and the trade-off factors, such as accuracy vs. sampling resolution, communication range, node density, distance traveled by the sensors, etc. It also summarizes whether or not the techniques perform energy-aware contour covering, since energy optimization is an important requirement for real-world boundary estimation applications. Table 2 encapsulates the different quality and performance metrics used in the techniques discussed in the survey. The taxonomy identifies the complete spectrum of boundary estimation and tracking techniques in the literature, covering the main tasks from boundary detection to estimation of its extent, covering, and tracking, spatially as well as temporally, using different types of sensors. While the techniques are diverse, their common objectives are to maximize accuracy and minimize energy consumption. As the survey indicates, a single technique may not be sufficient to tackle all aspects of boundary estimation. Therefore, an integrated approach, such as using remote sensing for a coarse estimate and then deploying static and mobile sensors for finer estimation, covering and tracking, is warranted. Further, most of the techniques deal with two-dimensional sensing regions.
Natural phenomena occur in three dimensions, and it would be interesting to see whether these techniques scale up and, if not, to investigate novel techniques for addressing boundary estimation in higher dimensions.

References

[1] Y. Wang, J. Gao, Boundary recognition in sensor networks by topological methods, in: Proceedings of the ACM/IEEE International Conference on Mobile Computing and Networking, MobiCom, 2006, pp. 122–133.
[2] F. Galland, P. Refregier, O. Germain, Synthetic aperture radar oil spill segmentation by stochastic complexity minimization, IEEE Geoscience and Remote Sensing Letters (2004) 265–299.
[3] R.H. Goodman, Application of Technology in Remote Sensing of Oil Slicks, John Wiley and Sons, 1989.
[4] M.F. Fingas, C.E. Brown, Review of oil spill remote sensing, Spill Science and Technology Bulletin 4 (1997) 199–208.
[5] D. Satale, M. Kulkarni, LIDAR in mapping, in: Proceedings of Map India Conference, GISdevelopment.net, 2003.
[6] J.M. Hellerstein, W. Hong, S. Madden, K. Stanek, Beyond average: towards sophisticated sensing with queries, in: Proceedings of the 2nd International Conference on Information Processing in Sensor Networks, 2003, pp. 63–79.
[7] X. Meng, T. Nandagopal, L. Li, S. Lu, Contour maps: monitoring and diagnosis in sensor networks, Computer Networks 50 (15) (2006) 2820–2838.
[8] K. Chintalapudi, R. Govindan, Localized edge detection in a sensor field, in: Proceedings of the First IEEE International Workshop on Sensor Network Protocols and Applications, May 2003, pp. 59–70.
[9] M. Ding, D. Chen, K. Xing, X. Cheng, Localized fault-tolerant event boundary detection in sensor networks, in: Proceedings of the 24th Annual IEEE INFOCOM, Vol. 2, March 2005, pp. 902–913.
[10] S. Dattagupta, K. Ramamritham, P. Kulkarni, K. Moudgalya, Tracking dynamic boundary fronts using range sensors, in: Wireless Sensor Networks, Lecture Notes in Computer Science, vol. 4913, Springer, Berlin, Heidelberg, 2008, pp. 125–140.
[11] R. Nowak, U. Mitra, Boundary estimation in sensor networks: theory and methods, in: Proceedings of Information Processing in Sensor Networks, IPSN, 2003, pp. 80–95.
[12] Y. Liu, M. Li, Iso-map: energy-efficient contour mapping in wireless sensor networks, in: Proceedings of the International Conference on Distributed Computing Systems, Toronto, Canada, June 2007, pp. 36–44.
[13] R. Brooks, P. Ramanathan, A.M. Sayeed, Distributed target classification and tracking in sensor networks, Proceedings of the IEEE 91 (2003) 1163–1171.
[14] K. Ren, K. Zeng, W. Lou, Fault-tolerant event boundary detection in wireless sensor networks, in: Proceedings of IEEE GLOBECOM, 2006.
[15] S. Subramaniam, V. Kalogeraki, T. Palpanas, Distributed real-time detection and tracking of homogeneous regions in sensor networks, in: Proceedings of the Real-Time Systems Symposium, RTSS, 2006.
[16] J. Liu, M. Chu, J. Liu, Distributed state representation for tracking problems in sensor networks, in: Proceedings of Information Processing in Sensor Networks, IPSN, April 2004.
[17] B.D. Anderson, J.B. Moore, Optimal Filtering, Prentice-Hall, New Jersey, 1979.
[18] M. Coates, Distributed particle filters for sensor networks, in: Proceedings of Information Processing in Sensor Networks, IPSN, April 2004.
[19] V. Manfredi, S. Mahadevan, J. Kurose, Switching Kalman filters for prediction and tracking in an adaptive meteorological sensing network, in: IEEE Conference on Sensor and Ad Hoc Communications and Networks, SECON, 2005.
[20] M. Athans, R.P. Wishner, A. Bertolini, Suboptimal state estimation for continuous-time nonlinear systems from discrete noisy measurements, IEEE Transactions on Automatic Control AC-13 (1968) 504–514.
[21] D.W. Casbeer, D.B. Kingston, R.W. Beard, T.W. McLain, Cooperative forest fire surveillance using a team of small unmanned air vehicles, International Journal of Systems Science 37 (6) (2006) 351–360.
[22] S. Susca, S. Martinez, F. Bullo, Monitoring environmental boundaries with a robotic sensor network, IEEE Transactions on Control Systems Technology 16 (2) (2008) 288–296.
[23] Z. Jin, A. Bertozzi, Environmental boundary tracking and estimation using multiple autonomous vehicles, in: IEEE Conference on Decision and Control, New Orleans, LA, December 2007, pp. 4918–4923.
[24] B. Bryan, J. Schneider, R.C. Nichol, C.J. Miller, C.R. Genovese, L. Wasserman, Active learning for identifying threshold boundaries, in: Proceedings of the 19th Conference on Neural Information Processing Systems, NIPS, 2005.
[25] G. Gupta, P. Ramanathan, A distributed algorithm for level set estimation using uncoordinated mobile sensors, in: IEEE GLOBECOM 2007, November 2007, pp. 1180–1184.
[26] A. Singh, R. Nowak, P. Ramanathan, Active learning for adaptive mobile sensor networks, in: Proceedings of Information Processing in Sensor Networks, IPSN, Nashville, TN, April 2006, pp. 60–68.
[27] J. Clark, R. Fierro, Mobile robotic sensors for perimeter detection and tracking, ISA Transactions 46 (1) (2007) 3–13.
[28] D. Marthaler, A.L. Bertozzi, Recent Developments in Cooperative Control and Optimization, Vol. 3, Kluwer Academic Publishers, 2004, pp. 317–330 (Chapter 17).
[29] L. Cohen, On active contour models and balloons, Computer Vision, Graphics and Image Processing: Image Understanding 53 (2) (1991) 211–218.
[30] F. Zhang, N. Leonard, Generating contour plots using multiple sensor platforms, in: Proceedings of the 2005 IEEE Symposium on Swarm Intelligence, Pasadena, California, 2005, pp. 309–314.
[31] K. Dantu, G. Sukhatme, Detecting and tracking level sets of scalar fields using a robotic sensor network, in: Proceedings of the International Conference on Robotics and Automation, Roma, Italy, April 2007, pp. 3665–3672.
[32] S. Srinivasan, K. Ramamritham, Contour estimation using collaborating mobile sensors, in: DIWANS '06: Proceedings of the 2006 Workshop on Dependability Issues in Wireless Ad Hoc Networks and Sensor Networks, ACM, New York, NY, USA, ISBN: 1-59593-471-5, 2006, pp. 73–82.
[33] S. Srinivasan, K. Ramamritham, P. Kulkarni, ACE in the hole: adaptive contour estimation using collaborating mobile sensors, in: Proceedings of Information Processing in Sensor Networks, IPSN, 2008, pp. 147–158.
[34] S. Srinivasan, Challenges and techniques for efficient contour covering using collaborating mobile sensors, Ph.D. Thesis, Indian Institute of Technology Bombay, 2010.
[35] S. Dattagupta, K. Ramamritham, P. Kulkarni, K. Moudgalya, Tracking dynamic boundary fronts using range sensors, in: The 3rd IEEE International Conference on Mobile Ad Hoc and Sensor Systems, Vancouver, Canada, October 2006.
[36] Y. Xu, J. Winter, W. Lee, Prediction-based strategies for energy saving in object tracking sensor networks, in: Proceedings of the IEEE International Conference on Mobile Data Management, MDM '04, 2004.