Self-orienting the cameras for maximizing the view-coverage ratio in camera sensor networks

Chao Yang a, Weiping Zhu b, Jia Liu a, Lijun Chen a,∗, Daoxu Chen a, Jiannong Cao b

a State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
b Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China
Article info

Article history:
Received 20 February 2013
Received in revised form 27 March 2014
Accepted 1 April 2014
Available online xxxx

Keywords:
Camera sensor network
View-coverage ratio
Overlapping view-coverage degree
Boundary effect
Dynamic rotating angle
Abstract

In recent years, camera sensor networks have been widely studied owing to the strength of camera sensors in retrieving richer information in the form of videos or images. Different from traditional scalar sensor networks, camera sensors at distinct positions obtain distinct images of the same object. The object is more likely to be recognized if its image is captured around the frontal view of the camera. To this end, a new coverage model, full-view coverage (Wang and Cao, 2011), was proposed for camera sensors to judge whether an object is recognized no matter which direction it faces. However, the full-view coverage model fails to evaluate the coverage quality when the object is not full-view covered. In this paper, we introduce a novel view-coverage model which measures the coverage quality with a finer granularity for the purpose of face recognition. Based on this model, we propose a distributed multi-round view-coverage enhancing (VCE) algorithm based on the self-orientation of the camera sensors. In this algorithm, sensors are continuously rotated to reduce the overlapping view-coverage with their neighbors until reaching a stable state. Furthermore, we address two important issues in the VCE algorithm and propose the corresponding refinement procedures. The first one concerns the sensors near the boundary of the target region, whose view-coverage may include the outside of the target region, which is meaningless for our problem. The second one concerns the rotating angle, which should be set appropriately to achieve a globally optimal solution. Simulation results show that our algorithm brings a significant improvement in the view-coverage ratio compared with random deployment. Also, the refinement procedures make a remarkable improvement over the basic VCE algorithm. Moreover, we evaluate the performance of our algorithm with real deployed camera sensors.
1. Introduction

Recent years have witnessed a booming development of camera sensor networks (CSNs), fostered by advances in the technologies of camera sensors and embedded processors. Compared with traditional scalar sensors, camera sensor networks enrich the information acquired from the physical environment in the form of videos or images. Such networks enable a wide range of applications such as traffic monitoring, health care, battlefield surveillance and so on [1].

Coverage is a fundamental problem in wireless sensor networks. In traditional scalar sensor networks, a target (or an area) is considered to be covered if it is within the sensing area of some sensors, which can be roughly a disk (in the omni-directional coverage model) or a sector (in the directional coverage model).
∗ Corresponding author. Tel.: +86 13951987786.
E-mail addresses: [email protected] (C. Yang), [email protected] (L. Chen).
In CSNs, however, an effective surveillance system should ensure not only the detection of an object but also its recognition. As studied in [2], an object is more likely to be recognized if its image is captured around the frontal view of the camera, i.e., the object is facing straight or nearly straight to the camera. Motivated by this requirement, Wang et al. [3] propose a novel coverage model in CSNs, called full-view coverage. An object is considered to be full-view covered if, no matter which direction the object faces, it is recognized by at least one camera sensor.

The full-view coverage model gives a yes-or-no answer to whether an object (or a region) is full-view covered. However, it fails to evaluate the coverage quality when the object (or region) is not full-view covered. In fact, full-view coverage can hardly be achieved in a large-scale random deployment scenario unless the density is very high [3]. Therefore, in this paper we propose a view-coverage model to characterize the coverage quality for the purpose of face recognition with finer granularity. A facing direction of an object is said to be view-covered if there is at least one sensor whose sensing range includes the object and the angle between the sensor's viewing direction and the object's facing direction is below a given threshold. Accordingly, we define the metric of the view-coverage ratio of an object, which is the percentage of its facing directions that are view-covered. In fact, full-view coverage is a special case of view-coverage: an object is full-view covered if its view-coverage ratio is 100%. As an extension, we define the metric of the view-coverage ratio of a target region, and derive the view-coverage ratio of a target region under random uniform deployment.

Random deployment is easy and inexpensive for large sensor networks, and sometimes inevitable, e.g., in hostile environments. However, it does not ensure optimal coverage, and some improvement is needed by moving or rotating the sensors. Although there is considerable research on enhancing coverage in WSNs, most of it focuses on the disk (or sector) sensing model, and no result can be directly applied to the view-coverage model, which is intrinsically much more complex and challenging. We propose a distributed multi-round view-coverage enhancing algorithm (VCE algorithm) based on the self-orientation of the camera sensors after they are randomly deployed in a sensor field. At each round of the algorithm, each sensor collects the current orientation information of its sensing neighbors and decides to rotate clockwise or anti-clockwise by a predefined rotating angle, choosing the direction that results in less overlapping view-coverage with its sensing neighbors. In this way, the view-coverage ratio of the target region is gradually increased until reaching a stable state in which each sensor oscillates around some direction.

Furthermore, we propose some refinement procedures on the basis of this algorithm. First, for the sensors near the boundary of the target region, the view-coverage of the outside should also be avoided, since it contributes nothing to the view-coverage of the target region; we deal with this problem by our Boundary Effect Mitigation procedure.
Second, if sensors rotate by a fixed small angle at each round of the algorithm, the algorithm is more likely to be trapped in local minima and cannot achieve a globally optimal solution. To this end, we propose the Dynamic Rotating Angle scheme, where sensors first rotate by a large angle to make a coarse adjustment and then rotate by a small angle to make a finer adjustment. We then combine the two refinement procedures into the VCE algorithm, and propose the improved view-coverage enhancing algorithm (IVCE algorithm).

The main contribution of this paper lies in the following aspects:
1. we introduce the view-coverage model, which measures the coverage quality for the purpose of face recognition. Compared with the full-view coverage model, it enables a finer-granularity measurement of the coverage quality in capturing the objects' faces;
2. we propose a distributed algorithm to improve the view-coverage ratio under random deployment by self-orientation of the camera sensors. We also address some refinement issues to improve the performance of the basic algorithm: one is to alleviate the boundary effect, and the other is to adopt a dynamic rotating angle to approach a globally optimal solution. Both issues are rarely studied in similar works;
3. we evaluate the performance of our algorithm with real deployed camera sensors. By using our algorithm, the recognition ratio (i.e., the fraction of photos that are correctly recognized) is greatly improved. To the best of our knowledge, this work is the first to evaluate the coverage performance with real deployed camera sensors in conjunction with a face recognition system.

The rest of the paper is organized as follows. In Section 2, we present the related work. Section 3 proposes the view-coverage model and formulates the maximal view-coverage problem. In Section 4, we propose a basic distributed view-coverage enhancing algorithm. Section 5 improves the basic view-coverage enhancing algorithm by mitigating the boundary effect and adopting a dynamic rotating angle. The evaluations by simulation and experiment are discussed in Sections 6 and 7, respectively. Finally, Section 8 concludes the paper.

2. Related works

Camera sensors are a kind of directional sensor that can adjust their working directions according to application requirements. The optimal coverage problems in directional sensor networks have been widely studied in recent years; a survey can be found in [4]. Based on the subject to be covered, the coverage problem can be categorized into three types: point (target) coverage, area coverage and barrier coverage, and we survey the related work along these three types.

The issues of point coverage in directional sensor networks are discussed in [5–8]. In [5], the authors propose the MCMS problem, which achieves the maximum point coverage with the minimum number of sensors, and present a centralized as well as a distributed
algorithm to solve this problem. The work in [6] addresses a problem similar to that in [5]; however, they propose a greedy algorithm with a bounded performance guarantee. In [7], the authors organize the directions of sensors into a group of non-disjoint cover sets that are activated successively, so as to maximize the network lifetime while guaranteeing point coverage. As an extension, [8] takes the coverage quality, which is affected by the distance between the sensor and the target, as a requirement, and assigns differentiated coverage quality requirements to each target point.

In area coverage problems, the grid-based approach [9] is a common method, which approximates the area by the grid points within it. Some other works are devoted to maximizing the area coverage while minimizing overlapping and occlusion. Tao et al. [10] propose a centralized area coverage-enhancing algorithm by minimizing the overlapping sensing area of directional sensors. Tezcan and Wang [11] take the occlusion effect into consideration and study the problem of the self-orientation of multimedia sensors to maximize the occlusion-free area.

Some efforts are devoted to improving barrier coverage in DSNs, which is concerned with constructing barriers for intrusion detection. In [12], the authors define and solve the Maximum Directional Sensor Barrier Problem (MDSBP) to find the maximum number of disjoint directional sensor barriers, and thus maximize the network lifetime by sleep–wake scheduling among the barriers.

Some works exploit the potential field technique to improve the coverage under random deployment [13,14]. In these works, each sensor is subjected to a ''virtual force'' from all other sensors. This force directs the virtual movement of the sensors to achieve a globally uniform deployment and thus improve the coverage. In [15], the authors propose a distributed coverage enhancing algorithm by self-orientation of the sensors. In this approach, the centroid of the sensing area is treated as a virtual charged particle, and the sensor's rotation is governed by the virtual repulsive forces from other sensors, which are inversely proportional to the Euclidean distance between the centroids of their sensing areas. The optimal camera placement and selection problem under deterministic deployment is studied in [16–19].

Some works study the issues of multi-perspective coverage [20–23] for wireless multimedia sensor networks. The problem of finding the minimum cost of nodes to cover 360° of the target is discussed in [20,21]. In [22], the authors introduce the metric of ω-pC and find the minimum cost cover which preserves all the angles of view. Liu et al. [23] propose the concepts of effective sensing and effective k-covering, which are similar to our view-coverage model. However, this work is limited to the mathematical analysis of the relationship between the density of the network and the rate of directional k-coverage.

The full-view coverage model was first proposed in [3] for determining whether an object is guaranteed to be recognized. In [24], the authors derive the probability that a point is full-view covered under Poisson and uniform deployment. Wang and Cao [25] investigate the problem of camera barriers, i.e., a connected zone across the monitoring field such that each point within this zone is full-view covered. The authors in [26] study the problem of selecting the minimum number of sensors from a random deployment to form a camera barrier.
All the above works are based on the full-view coverage model, and none of them considers how to measure the coverage quality for the purpose of face recognition when the area is not full-view covered, although this is the common case under random deployment.

3. Problem description

In this section, we first propose the view-coverage model. Then we analyze the view-coverage ratio under random deployment. Finally, we give the definition of the maximum view-coverage problem.

3.1. View-coverage model

The view-coverage model differs from the traditional sensing model mainly in that it considers not only whether the object is located in the sensing area of the cameras, but also whether the object's facing direction is nearly straight to the cameras. In other words, the view-coverage model includes two aspects: the first concerns the location of the object, and the second concerns the facing direction of the object.

For the first aspect, we adopt the directional sensing model [4,15,27], in which the sensing area of a camera sensor is a sector. This is because: (1) most cameras have a finite angle of view and thus cannot see the whole circular region; (2) when the object-to-camera distance is too long, the resolution of the face image becomes very low, which degrades the performance of both face detection and recognition [28]. In this paper, we employ the 2D directional sensing model, where the sensing sector of a sensor $S_i$ is denoted by a 4-tuple $(S_i, r, \vec{d}_i, \psi)$. Here, without ambiguity, $S_i$ also denotes the sensor's position, $r$ is the sensing range, $\psi$ is the angle of view, and $\vec{d}_i$ is the working direction of $S_i$ (shown in Fig. 1(a)). For convenience, we denote the sensing sector of $S_i$ as $sc_i$.

For the second aspect, two ''directions'' need to be taken into consideration. First, the direction that a point $P$ faces is called the facing direction. Second, the vector $\vec{PS_i}$ is called the viewing direction of the sensor $S_i$ on $P$. The angle between the facing direction and the viewing direction is called the pose angle. Then $P$ is recognized by $S_i$ only when the facing direction of $P$ is sufficiently close to $S_i$'s viewing direction on $P$, in other words, when the pose angle is smaller than a given threshold. Thus, the view-coverage model is formally described as follows:

Definition 1. Let the tuple $\langle P, \vec{f}\rangle$ denote the facing direction $\vec{f}$ of a point $P$, and let $sc_i$ be the sensing sector of $S_i$. Then $\langle P, \vec{f}\rangle$ is view-covered by $S_i$ if and only if the following conditions are satisfied:
(1) $P \in sc_i$;
(2) $\alpha(\vec{PS_i}, \vec{f}) \le \theta$, where $\theta \in [0, \pi/2)$ is a predefined threshold called the maximum pose angle (shown in Fig. 1(b)).

Fig. 1. The view-coverage model. An object is view-covered by a camera sensor if and only if (1) it is located in the sensing area of this camera sensor, and (2) the pose angle is smaller than a predefined threshold.

More formally, let $I_{vc}(\langle P, \vec{f}\rangle, S_i)$ be an indicator function where $I_{vc}(\langle P, \vec{f}\rangle, S_i) = 1$ if $\langle P, \vec{f}\rangle$ is view-covered by $S_i$ and 0 otherwise. Then we have:

$$ I_{vc}(\langle P, \vec{f}\rangle, S_i) = I(P \in sc_i)\, I\big(\alpha(\vec{PS_i}, \vec{f}) \le \theta\big) \qquad (1) $$
where $I(F)$ is an indicator function with $I(F) = 1$ if $F$ is true and 0 otherwise.

Based on the definition of the view-coverage model, we propose the metric of view-coverage ratio, which measures the view-coverage quality.

Definition 2. Given the node set $S = \{S_1, \ldots, S_n\}$, the view-coverage degree of a point $P$, denoted as $v_{pt}(P, S)$, is the amount of all its facing directions $\vec{f} \in [0, 2\pi)$ such that $\langle P, \vec{f}\rangle$ is view-covered by at least one of the sensors in $S$. More formally, by referring to the indicator function $I_{vc}(\cdot)$ defined above, it is given by:

$$ v_{pt}(P, S) = \int_0^{2\pi} \left[ 1 - \prod_{i=1}^{n} \big(1 - I_{vc}(\langle P, \vec{f}\rangle, S_i)\big) \right] d\vec{f}. \qquad (2) $$

Definition 3. Given the node set $S = \{S_1, \ldots, S_n\}$, the view-coverage ratio of a point $P$, denoted as $V_{pt}(P, S)$, is the view-coverage degree of $P$ over $2\pi$:

$$ V_{pt}(P, S) = \frac{v_{pt}(P, S)}{2\pi}. \qquad (3) $$
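To make the definitions concrete, the following sketch estimates $V_{pt}(P, S)$ by sampling the facing directions. It is only an illustrative transcription of Eqs. (1)–(3): the sector test `in_sector`, the data layout of `sensors`, and all parameter values are our own assumptions, not part of the original formulation.

import math

def ang_diff(a, b):
    """Absolute angular difference between two directions, in [0, pi]."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def in_sector(p, s, r, d, psi):
    """True if point p lies in the sensing sector (s, r, d, psi) of a sensor at s."""
    dx, dy = p[0] - s[0], p[1] - s[1]
    dist = math.hypot(dx, dy)
    if dist > r or dist == 0:
        return False
    return ang_diff(math.atan2(dy, dx), d) <= psi / 2.0

def is_view_covered(p, f, s, r, d, psi, theta):
    """Indicator I_vc of Eq. (1): P in sc_i and pose angle alpha(PS_i, f) <= theta."""
    if not in_sector(p, s, r, d, psi):
        return False
    # the viewing direction of the sensor on P is the vector from P to the sensor
    view_dir = math.atan2(s[1] - p[1], s[0] - p[0])
    return ang_diff(view_dir, f) <= theta

def view_coverage_ratio_point(p, sensors, r, psi, theta, n_dirs=360):
    """Approximate V_pt(P, S) of Eqs. (2)-(3) by discretizing the facing directions."""
    covered = 0
    for k in range(n_dirs):
        f = 2 * math.pi * k / n_dirs
        if any(is_view_covered(p, f, s, r, d, psi, theta) for (s, d) in sensors):
            covered += 1
    return covered / n_dirs

if __name__ == "__main__":
    # two sensors given as (position, working direction); all values are illustrative
    sensors = [((0.0, 0.0), 0.0), ((30.0, 10.0), math.pi)]
    print(view_coverage_ratio_point((15.0, 2.0), sensors, r=40.0,
                                    psi=math.radians(90), theta=math.radians(30)))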
Accordingly, the view-coverage ratio of a target region is defined as follows:

Definition 4. Given the node set $S = \{S_1, \ldots, S_n\}$, the view-coverage ratio of a target region $R$, denoted as $V_{reg}(R, S)$, is the integral of the view-coverage ratio of the points within $R$ over the area of $R$:

$$ V_{reg}(R, S) = \frac{\int_R V_{pt}(P, S)\, dP}{|R|} \qquad (4) $$

where $|R|$ is the area of $R$.

3.2. View-coverage ratio of random uniform networks

We analyze the expected view-coverage ratio of the network under random deployment. Let the target region be denoted as $R_t$. If cameras are deployed randomly within $R_t$, the view-coverage ratio of the target points near the boundary of $R_t$ is lower than that in the interior. To ease this effect, some sensors should be deployed outside $R_t$. Thus, we expand the sensor deployment region by $r$ on the basis of $R_t$, where $r$ is the sensing range of the cameras. In this way, the target points near the boundary of $R_t$ are expected to be view-covered with the same probability as those in the interior.
Given the target region $R_t$ and $N$ sensors randomly deployed within $R_s$, which is expanded by $r$ on the basis of $R_t$, for any object $P$ within $R_t$ and its facing direction $\vec{f}$, $\langle P, \vec{f}\rangle$ is view-covered if there exists at least one sensor $S_i \in R_s$ such that (1) $S_i$ is in the sector denoted by $(P, r, \vec{f}, 2\theta)$, and (2) $P \in sc_i$. Therefore, the probability that $\langle P, \vec{f}\rangle$ is view-covered is given by:

$$ \Pr = 1 - \left[ 1 - \frac{\theta r^2}{|R_s|} + \frac{\theta r^2}{|R_s|}\left(1 - \frac{\psi}{2\pi}\right) \right]^N = 1 - \left( 1 - \frac{\theta r^2 \psi}{2\pi |R_s|} \right)^N. \qquad (5) $$
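As a quick numerical illustration of Eq. (5), the short sketch below simply evaluates the closed-form expression; the parameter values are borrowed from the default simulation settings of Section 6 and are assumptions made only for this example.

import math

def expected_view_coverage_ratio(n, r, psi, theta, area_rs):
    """Evaluate Eq. (5): Pr = 1 - (1 - theta * r^2 * psi / (2*pi*|Rs|))^N."""
    p_single = theta * r**2 * psi / (2 * math.pi * area_rs)
    return 1 - (1 - p_single) ** n

# assumed parameters: N = 1000, r = 40 m, psi = 90 deg, theta = 30 deg,
# sensor region Rs of 440 m x 440 m
print(expected_view_coverage_ratio(1000, 40.0, math.radians(90),
                                   math.radians(30), 440.0 * 440.0))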
This probability is approximately equal to the view-coverage ratio under random deployment. From this equation, we know that the view-coverage could be enhanced by increasing the density of the network, i.e., increasing the sensor number (N), the angle of view (ψ) or the sensing range (r), or by leveraging more efficient face recognition algorithms and relaxing the requirement of the maximum pose angle (θ). In addition to adjusting these parameters, another efficient approach for improving the view-coverage is to adjust the initial locations or orientations of the sensors to achieve a more uniform distribution. In this paper, we take advantage of the rotating behavior of the camera sensors to enhance the view-coverage under random deployment.

3.3. Problem definition

To make our problem tractable, we make the following assumptions:
1. the camera sensors are deployed in a bounded target region without any obstacles; the task is to recognize an object of interest which may appear at any time at any position of the region, and each position of the region is equally important;
2. the speed of the object of interest is low enough that the cameras can capture and analyze its image;
3. the sensors are homogeneous, i.e., the sensing range r and the angle of view ψ are the same for all sensors;
4. all the camera sensors can self-orient horizontally to any direction. The prevalent PTZ cameras make this assumption feasible. For example, the EverFocus EPTZ900 camera [29] and the Axis 215 PTZ Network Camera [30] provide a 360° continuous horizontal rotation range;
5. each camera sensor is aware of its exact location and orientation, which can be realized by equipping it with a GPS receiver and a compass;
6. all the sensors can communicate with each other directly or via multi-hop data transmission, and the wireless channel is ideal, i.e., we do not consider transmission loss or errors.

Then we formulate the view-coverage optimization problem as follows:

Definition 5 (Maximum View-Coverage (MVC) Problem). Given the target region $R_t$ and a group of nodes $S = \{S_1, \ldots, S_n\}$, how to decide their orientations $\vec{d} = \{\vec{d}_1, \ldots, \vec{d}_n\}$ such that $V_{reg}(R_t, S)$ is maximized.

4. A distributed solution to the MVC problem

In this section, we present a distributed multi-round view-coverage enhancing algorithm (VCE algorithm) for the MVC problem. Although a distributed algorithm is not expected to achieve as good performance as a centralized one, it does not incur the high communication overhead required by a centralized solution and is more scalable. The main idea of this algorithm is to rotate sensors to reduce the entire overlapping view-coverage with their sensing neighbors. More specifically, at each round of the algorithm, each sensor chooses to rotate by a predefined angle clockwise or anti-clockwise, whichever results in less entire overlapping view-coverage with its sensing neighbors. After several rounds of adjustment, the sensors tend to reach a stable state in which they oscillate around some orientation, which is taken as the final orientation.¹ We first give the definition and calculation of the overlapping view-coverage degree between sensors, then we introduce the VCE algorithm in detail. Finally, we derive the upper-bound distance between sensing neighbors in the view-coverage sensing model.

4.1. Overlapping view-coverage degree

In the common sensing model, the coverage area increases as the overlapping area is reduced. Correspondingly, in order to improve the view-coverage, sensors should rotate to eliminate the overlapping view-coverage among them. Before we introduce the overlapping view-coverage among sensors, we first give the definition of the overlapping view-coverage degree of a point by sensors and the approach of calculating it.
¹ All rotations during the execution of the algorithm are virtual. The real rotation is conducted after the algorithm stops, following the final orientation determined by the algorithm.
Definition 6. Given a point $P$ and two sensors $S_i$ and $S_j$ with sensing sectors $sc_i$ and $sc_j$, the overlapping view-coverage degree of the point $P$ by $S_i$ and $S_j$, denoted as $o_{pt}(P, sc_i, sc_j)$, is the amount of the facing directions $\vec{f} \in [0, 2\pi)$ of $P$ such that $\langle P, \vec{f}\rangle$ is view-covered by both $S_i$ and $S_j$. More formally, it is given by:

$$ o_{pt}(P, sc_i, sc_j) = \int_0^{2\pi} I_{vc}(\langle P, \vec{f}\rangle, S_i)\, I_{vc}(\langle P, \vec{f}\rangle, S_j)\, d\vec{f} = I(P \in sc_i)\, I(P \in sc_j) \int_0^{2\pi} I\big(\alpha(\vec{PS_i}, \vec{f}) \le \theta\big)\, I\big(\alpha(\vec{PS_j}, \vec{f}) \le \theta\big)\, d\vec{f}. \qquad (6) $$
To calculate $o_{pt}(P, sc_i, sc_j)$, we first determine whether $o_{pt}(P, sc_i, sc_j)$ is zero, which corresponds to the case that there is no facing direction $\vec{f}$ of $P$ such that $\langle P, \vec{f}\rangle$ is view-covered by both $S_i$ and $S_j$. To this end, we have the following lemma.

Lemma 1. For a point $P$, there exists a facing direction $\vec{f}$ such that $\langle P, \vec{f}\rangle$ is view-covered by both $S_i$ and $S_j$, if and only if $P \in sc_i$, $P \in sc_j$, and $\alpha(\vec{PS_i}, \vec{PS_j}) \le 2\theta$.

Proof. For the ''if'' part, we can choose a facing direction $\vec{f}'$ of $P$ that bisects the angle between $\vec{PS_i}$ and $\vec{PS_j}$, so that $\alpha(\vec{PS_i}, \vec{f}') = \alpha(\vec{PS_j}, \vec{f}') = \alpha(\vec{PS_i}, \vec{PS_j})/2 \le \theta$. Because $P \in sc_i$ and $P \in sc_j$, $\langle P, \vec{f}'\rangle$ is view-covered by both $S_i$ and $S_j$. For the ''only if'' part, we prove it by contradiction according to Eq. (6): if $P \notin sc_i$ or $P \notin sc_j$, then $I(P \in sc_i) = 0$ or $I(P \in sc_j) = 0$; and if $\alpha(\vec{PS_i}, \vec{PS_j}) > 2\theta$, there does not exist any facing direction $\vec{f}$ such that $I(\alpha(\vec{PS_i}, \vec{f}) \le \theta) = 1$ and $I(\alpha(\vec{PS_j}, \vec{f}) \le \theta) = 1$ are satisfied simultaneously. Therefore, the ''only if'' part is proved.
Then we give the calculation of the OVCD of a point by two sensors.

Theorem 1. Given a point $P$ and two sensors $S_i$ and $S_j$ with sensing sectors $sc_i$ and $sc_j$, the overlapping view-coverage degree of $P$ by $S_i$ and $S_j$ is given by:

$$ o_{pt}(P, sc_i, sc_j) = \begin{cases} 2\theta - \alpha(\vec{PS_i}, \vec{PS_j}), & \text{if } P \in sc_i \cap sc_j \text{ and } \alpha(\vec{PS_i}, \vec{PS_j}) \le 2\theta; \\ 0, & \text{otherwise.} \end{cases} \qquad (7) $$

Proof. The ''otherwise'' part is proved by Lemma 1. We prove the ''if'' part in two cases: (1) $\theta < \alpha(\vec{PS_i}, \vec{PS_j}) \le 2\theta$ and (2) $0 < \alpha(\vec{PS_i}, \vec{PS_j}) \le \theta$, which correspond to the cases shown in Fig. 2(a) and (b), respectively. In both cases, $[\vec{PP_{i1}}, \vec{PP_{i2}}]$ and $[\vec{PP_{j1}}, \vec{PP_{j2}}]$ are respectively the angle ranges of the facing direction $\vec{f}$ such that $\langle P, \vec{f}\rangle$ is view-covered by $S_i$ and by $S_j$, i.e., $\alpha(\vec{PP_{i1}}, \vec{PS_i}) = \alpha(\vec{PS_i}, \vec{PP_{i2}}) = \alpha(\vec{PP_{j1}}, \vec{PS_j}) = \alpha(\vec{PS_j}, \vec{PP_{j2}}) = \theta$. In Case 1, $o_{pt}(P, sc_i, sc_j) = \alpha(\vec{PP_{j1}}, \vec{PP_{i2}}) = \alpha(\vec{PS_i}, \vec{PS_j}) - 2(\alpha(\vec{PS_i}, \vec{PS_j}) - \theta) = 2\theta - \alpha(\vec{PS_i}, \vec{PS_j})$, and in Case 2, $o_{pt}(P, sc_i, sc_j) = \alpha(\vec{PP_{j1}}, \vec{PP_{i2}}) = 2(\theta - \alpha(\vec{PS_i}, \vec{PS_j})) + \alpha(\vec{PS_i}, \vec{PS_j}) = 2\theta - \alpha(\vec{PS_i}, \vec{PS_j})$. Thus the ''if'' part is proved.
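Theorem 1 makes the point-wise OVCD directly computable. A sketch of this computation is given below; it reuses the hypothetical `in_sector` and `ang_diff` helpers from the sketch in Section 3.1, and all names are ours rather than the authors'.

import math

def ovcd_point(p, si, di, sj, dj, r, psi, theta):
    """Overlapping view-coverage degree of point p by two sensors (Eq. (7))."""
    if not (in_sector(p, si, r, di, psi) and in_sector(p, sj, r, dj, psi)):
        return 0.0
    # angle between the two viewing directions PS_i and PS_j
    a_i = math.atan2(si[1] - p[1], si[0] - p[0])
    a_j = math.atan2(sj[1] - p[1], sj[0] - p[0])
    a = ang_diff(a_i, a_j)
    return 2 * theta - a if a <= 2 * theta else 0.0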
Based on the OVCD of a point by two sensors, we give the definition of the OVCD of a region by two sensors, which is the integral of the OVCD of the points within the region.

Definition 7. Given a region $R$ and two sensors $S_i$ and $S_j$ with sensing sectors $sc_i$ and $sc_j$, the overlapping view-coverage degree of the region $R$ by $S_i$ and $S_j$, denoted as $o_{reg}(R, sc_i, sc_j)$, is given by:

$$ o_{reg}(R, sc_i, sc_j) = \int_R o_{pt}(P, sc_i, sc_j)\, dP. \qquad (8) $$

Then we give the definition of the overlapping view-coverage degree (shortened as OVCD) between two sensors:

Definition 8. Given two sensors $S_i$ and $S_j$ with sensing sectors $sc_i$ and $sc_j$, the overlapping view-coverage degree between $S_i$ and $S_j$, denoted by $o(sc_i, sc_j)$, is the overlapping view-coverage degree of the intersection region of $sc_i$ and $sc_j$ by $S_i$ and $S_j$. More formally, it is given by:

$$ o(sc_i, sc_j) = o_{reg}(sc_i \cap sc_j, sc_i, sc_j). \qquad (9) $$
Fig. 2. Calculating the OVCD of a point by two sensors in two cases: (a) $\theta < \alpha(\vec{PS_i}, \vec{PS_j}) \le 2\theta$ and (b) $0 < \alpha(\vec{PS_i}, \vec{PS_j}) \le \theta$.

Since the intersection region of two sensing sectors often has an irregular shape that depends on the positions of the two sensors, it is difficult to derive a closed-form solution of Eq. (9), and we exploit the Monte Carlo method to approximate it. More specifically, we divide the target region into a grid, where each cell is a µ × µ square, and the view-coverage of a region is approximated by a summation over the grid points within the region. Let P = {P[0], ..., P[m]} be the array of all the grid points within the target region. In order to calculate the OVCD between $S_i$ and $S_j$, we first find all the grid points within $sc_i \cap sc_j$, and then sum the OVCD of these grid points by $S_i$ and $S_j$. Thus, Eq. (9) is calculated by:

$$ o(sc_i, sc_j) = o_{reg}(sc_i \cap sc_j, sc_i, sc_j) = \sum_{P[i] \in sc_i \cap sc_j} o_{pt}(P[i], sc_i, sc_j). \qquad (10) $$
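Continuing with the same hypothetical helpers (`in_sector` and `ovcd_point` from the earlier sketches), the grid-based approximation of Eq. (10) can be sketched as follows; the grid step `mu` and the bounding box are assumed parameters.

def ovcd_sensors(si, di, sj, dj, r, psi, theta, bbox, mu=1.0):
    """Approximate o(sc_i, sc_j) of Eq. (10) by summing the point OVCD over
    grid points inside sc_i ∩ sc_j. bbox = (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = bbox
    total = 0.0
    y = ymin
    while y <= ymax:
        x = xmin
        while x <= xmax:
            p = (x, y)
            if in_sector(p, si, r, di, psi) and in_sector(p, sj, r, dj, psi):
                total += ovcd_point(p, si, di, sj, dj, r, psi, theta)
            x += mu
        y += mu
    return total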
4.2. View-coverage enhancing algorithm

We now give the details of the view-coverage enhancing algorithm. After receiving a message notifying the beginning of the algorithm from the base station or a sink node, each node starts to perform the VCE algorithm. At each round of the algorithm, sensors exchange their current orientations with their sensing neighbors, and decide to rotate clockwise or anti-clockwise by a fixed rotating angle ∆α, choosing the direction that results in less entire OVCD with all their sensing neighbors. More formally, for any sensor $S_i$ with its current sensing sector $sc_i$, let $sc_i^+$ and $sc_i^-$ be the sectors obtained by rotating $sc_i$ by an angle ∆α clockwise and anti-clockwise, respectively, and let $\Omega_i$ be the set of sensing neighbors of $S_i$. Then the rotating rule is given as follows:

$$ \text{Motion} = \begin{cases} +, & \text{if } \sum_{j \in \Omega_i} o(sc_i^+, sc_j) < \sum_{j \in \Omega_i} o(sc_i^-, sc_j); \\ -, & \text{otherwise} \end{cases} \qquad (11) $$
where ''+'' stands for rotating clockwise and ''−'' stands for rotating anti-clockwise. Actually, another motion choice, keeping static, could be considered, since the OVCD of the current orientation may be less than that of rotating clockwise or anti-clockwise. However, this scheme is more sensitive to local minima, and we will discuss it in Section 5.3.

After the adjustment, nodes broadcast their updated orientations to their view-coverage neighbors as the reference for making the decision of the next round. After several rounds of rotation, a sensor tends to oscillate around a certain orientation, and we consider that it has entered the stable state. To decide the stable state, we set a threshold ϵ with a small value (∆α, for example), and a history window of size m which records the orientations of the last m rounds. If the difference between any two elements in this window is no larger than ϵ, we consider that the sensor has entered the stable state, and its orientation will no longer be changed. To guarantee the termination of the algorithm, we set an expiration time threshold tthresh. The process of the view-coverage enhancing algorithm is given in Algorithm 1.

4.3. View-coverage sensing neighbor

At each round of the VCE algorithm, sensors collect the orientation information from all their sensing neighbors to decide their motion. In the view-coverage sensing model, two camera sensors are view-coverage sensing neighbors if any sensing sector of one sensor has overlapping view-coverage with any sensing sector of the other. In this subsection, we give the formal definition of view-coverage sensing neighbors and derive the upper-bound distance between them.

Definition 9. Two sensors $S_i$ and $S_j$ are view-coverage sensing neighbors if there exist sensing sectors $sc_i$ and $sc_j$ of them, respectively, and a point $P$ with a facing direction $\vec{f}$, such that $\langle P, \vec{f}\rangle$ is view-covered by both $S_i$ and $S_j$.

According to Definition 9 and Lemma 1, we have the following lemma.

Lemma 2. Two sensors $S_i$ and $S_j$ are view-coverage sensing neighbors if there exist sensing sectors $sc_i$ and $sc_j$ of them, respectively, and a point $P \in sc_i \cap sc_j$, such that $\alpha(\vec{PS_i}, \vec{PS_j}) \le 2\theta$.

Based on Lemma 2, we derive the upper bound of the distance between view-coverage sensing neighbors.
Algorithm 1 The Basic View-Coverage Enhancing Algorithm
Input: $\vec{d}_i(0)$ – initial working direction of $S_i$; ∆α – the rotating angle; $\Omega_i$ – the set of sensing neighbors of $S_i$; tthresh – the expiration time; ϵ – the angle threshold; m – the size of the history window.
Output: $\vec{D}_i$ – final working direction.
1: $t \leftarrow 0$;
2: while $t <$ tthresh do
3:   $t \leftarrow t + 1$;
4:   if $\sum_{j\in\Omega_i} o(sc_i^+, sc_j) - \sum_{j\in\Omega_i} o(sc_i^-, sc_j) < 0$ then
5:     Determine $\vec{d}_i(t)$ by rotating the angle ∆α clockwise;
6:   else
7:     Determine $\vec{d}_i(t)$ by rotating the angle ∆α anti-clockwise;
8:   end if
9:   // If the difference of the angles within the history window is less than ϵ, the node enters the stable state and does not rotate any more.
10:  if $t > m$ and $\max_{p,q\in[t-m,t]} |\vec{d}_i(p) - \vec{d}_i(q)| \le \epsilon$ then
11:    $\vec{D}_i = \vec{d}_i(t)$; break;
12:  end if
13: end while
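The per-node loop of Algorithm 1 can also be expressed compactly in Python. The sketch below is a single-node simplification under our own assumptions: `ovcd_of_pair(direction, j)` is a user-supplied function (e.g. wrapping the `ovcd_sensors` sketch above) that returns the OVCD with neighbor j when the node takes the given working direction, clockwise rotation is taken to decrease the angle, and the neighbors' own orientations are treated as fixed within the call.

def total_ovcd(d_i, neighbors, ovcd_of_pair):
    """Summation term of Eq. (11) for working direction d_i."""
    return sum(ovcd_of_pair(d_i, j) for j in neighbors)

def basic_vce(d0, delta, neighbors, ovcd_of_pair, t_thresh, eps, m):
    """Sketch of Algorithm 1 for one node: rotate by +/-delta each round until
    the orientations of the last m rounds differ by at most eps."""
    history = [d0]
    d = d0
    for _ in range(t_thresh):
        plus, minus = d - delta, d + delta   # '+' = clockwise (assumed convention)
        if total_ovcd(plus, neighbors, ovcd_of_pair) < total_ovcd(minus, neighbors, ovcd_of_pair):
            d = plus
        else:
            d = minus
        history.append(d)
        window = history[-(m + 1):]
        if len(window) > m and max(window) - min(window) <= eps:
            break   # stable state reached (lines 10-12 of Algorithm 1)
    return d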
Fig. 3. Proof of Theorem 2.
Theorem 2. Two sensors $S_i$ and $S_j$ are view-coverage sensing neighbors if and only if $d(S_i, S_j) \le 2r\sin\theta$, where $r$ is the sensing range and $\theta$ is the maximum pose angle.

Proof. We construct the circles $C_i$ and $C_j$ with radius $r$ centered at $S_i$ and $S_j$, respectively (shown in Fig. 3). According to Lemma 2, if $S_i$ and $S_j$ are view-coverage sensing neighbors, $C_i$ and $C_j$ should overlap; we denote the intersection points of $C_i$ and $C_j$ as $P$ and $Q$, and let $\alpha(\vec{QS_i}, \vec{QS_j})$ be $\delta$. Then we have $d(S_i, S_j) = 2r\sin\frac{\delta}{2}$.

For the ''if'' part, $d(S_i, S_j) \le 2r\sin\theta$ implies $\frac{\delta}{2} \le \theta$, i.e., $\delta \le 2\theta$. Therefore the ''if'' part is proved, because the point $Q \in sc_i \cap sc_j$ (for suitably oriented sensing sectors) and $\alpha(\vec{QS_i}, \vec{QS_j}) \le 2\theta$.

Next, we prove the ''only if'' part by contradiction. Assume that $d(S_i, S_j) > 2r\sin\theta$; then $\delta > 2\theta$. For any point $T$ on the boundary of the intersection area of $C_i$ and $C_j$, denote the angle $\alpha(\vec{TS_i}, \vec{TS_j})$ as $\beta$. It can be proved that $\beta \ge \delta$ by the law of sines [31]. Further, consider any point $U$ on the segment $TS_i$ or $TS_j$, and let $\alpha(\vec{US_i}, \vec{US_j})$ be denoted as $\gamma$; it is obvious that $\gamma \ge \beta$. Therefore, it can be concluded that for any point $W$ within the intersection area of $C_i$ and $C_j$, $\alpha(\vec{WS_i}, \vec{WS_j}) > 2\theta$. In other words, there does not exist any point $P$ such that $P \in sc_i \cap sc_j$ and $\alpha(\vec{PS_i}, \vec{PS_j}) \le 2\theta$. According to Lemma 2, $S_i$ and $S_j$ are not view-coverage sensing neighbors, which is a contradiction. Thus, the ''only if'' part is proved.
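Theorem 2 yields a simple distance test for pruning candidate neighbors; a minimal sketch (function and variable names are our own) is:

import math

def are_vc_sensing_neighbors(si, sj, r, theta):
    """Theorem 2: S_i and S_j can be view-coverage sensing neighbors
    iff d(S_i, S_j) <= 2 * r * sin(theta)."""
    return math.hypot(si[0] - sj[0], si[1] - sj[1]) <= 2 * r * math.sin(theta)

# with r = 40 m and theta = 30 deg the bound is 2*40*0.5 = 40 m,
# half of the 2r = 80 m bound of the common sensing model
print(are_vc_sensing_neighbors((0, 0), (35, 0), 40.0, math.radians(30)))  # True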
As we know, the distance between sensing neighbors in the common sensing model is at most 2r, where r is the sensing range. Therefore, the upper-bound distance between sensing neighbors in the view-coverage sensing model is sinθ times that in the common sensing model. Suppose that θ = π/6; then the upper-bound distance between view-coverage sensing neighbors is only half of the distance between sensing neighbors in the common sensing model, and the expected number of sensing neighbors is only a quarter. As a consequence, deriving the upper-bound distance between view-coverage sensing neighbors reduces excessive communication and computation to a large extent, especially for a small θ.

5. Refinement and discussion

In this section, we discuss refinement issues of the VCE algorithm. First, the VCE algorithm does not consider the view-coverage on the outside of the target region, which is meaningless for our problem. Second, it does not tell how
to set the rotating angle properly. We propose the corresponding refinement procedures and present the improved VCE algorithm in Algorithm 2 (to differentiate the two, we call Algorithm 1 the basic VCE algorithm).

5.1. Boundary effect mitigation

The basic VCE algorithm makes an effort to reduce the OVCD among sensors. However, it does not take the boundary effect into account. For the sensors on the boundary of the target region, the view-coverage of the outside of the target region should also be avoided, since it contributes nothing to the view-coverage of the target region. The view-coverage degree on the outside of the target region (shortened as VCDO) is defined as follows:

Definition 10. Let the outside of the target region be $R_t^c$. For a sensor $S_i$ and its sensing sector $sc_i$, the view-coverage degree of $S_i$ on the outside of the target region, denoted as $v_{out}(sc_i, R_t^c)$, is given by:

$$ v_{out}(sc_i, R_t^c) = \int_{sc_i \cap R_t^c} v_{pt}(P, S_i)\, dP. \qquad (12) $$

Hence, the rotation direction at each round of the algorithm is determined by the sum of VCDO and OVCD, i.e., the rotating rule of Eq. (11) is modified as follows:

$$ \text{Motion} = \begin{cases} +, & \text{if } v_{out}(sc_i^+, R_t^c) + \sum_{j\in\Omega_i} o(sc_i^+, sc_j) < v_{out}(sc_i^-, R_t^c) + \sum_{j\in\Omega_i} o(sc_i^-, sc_j); \\ -, & \text{otherwise.} \end{cases} \qquad (13) $$
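In code, the boundary-aware rule of Eq. (13) only changes the quantity being compared at each round. The sketch below assumes two user-supplied evaluation functions: `vcdo(direction)` for the VCDO term (e.g. obtained by grid-sampling the part of the sector outside the target region) and `ovcd_of_pair(direction, j)` as in the Algorithm 1 sketch; the clockwise convention is again an assumption.

def choose_rotation(d, delta, neighbors, ovcd_of_pair, vcdo):
    """Eq. (13): pick the rotation (+delta clockwise / -delta anti-clockwise)
    with the smaller VCDO + OVCD."""
    plus, minus = d - delta, d + delta   # '+' = clockwise (assumed convention)
    cost_plus = vcdo(plus) + sum(ovcd_of_pair(plus, j) for j in neighbors)
    cost_minus = vcdo(minus) + sum(ovcd_of_pair(minus, j) for j in neighbors)
    return plus if cost_plus < cost_minus else minus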
However, for the sensors outside the target region, their sensing sectors may be totally outside the target region, and the VCDO may reach its maximum value whether they rotate clockwise or anti-clockwise. In this case, we let these sensors rotate such that the VCDO is minimized, to obtain a good initial deployment at the beginning of the algorithm. We show an example in Fig. 4.

Fig. 4. Mitigation of the boundary effect. Si is inside the target region and performs the VCE algorithm according to Eq. (13); Sj is outside the target region, and it rotates such that its VCDO is minimized (depicted by the sector enclosed with dashed lines) at the beginning of the VCE algorithm.

5.2. Dynamic rotating angle

We implement the basic VCE algorithm with different rotating angles ∆α to study the impact of the rotating angle on its performance. The results are shown in Fig. 5, from which we find that an extremely large or small value of ∆α results in poor performance. For a searching problem, intuitively, a finer step size tends to generate a better result, since the search is expected to be performed over a larger space. However, the VCE algorithm is a kind of local search: each sensor only compares the OVCD of the neighboring directions obtained by rotating +∆α or −∆α from its current direction, and it stops searching once it reaches a direction whose OVCD with its neighbors is the minimum among the nearby directions. This strategy is prone to the so-called local minima, in which the search is confined within a certain range and cannot be performed globally. A smaller step size is more easily trapped in local minima.

To alleviate the effect of local minima, we divide the process of the algorithm into several phases with different rotating angles. More specifically, we construct an array of ∆α in decreasing order. At the i-th phase, all the nodes rotate by the angle ∆α[i] at each round of rotation. We set an array STOP[] to indicate the stable state of each phase. At the beginning of each phase, all the nodes are set to the unstable state. For each node, if the difference of its historical orientations over the last m rounds is less than the angle threshold of the current phase, the node is considered to have entered the stable state and does not need to rotate any more within the current phase. To synchronize the rotating angle across the nodes, we set the array of expiration times tthresh[].
Fig. 5. The view-coverage ratio with different rotating angle for the basic VCE algorithm. (N = 1000, r = 40, ψ = 90°, θ = 30°).
Algorithm 2 The Improved View-Coverage Enhancing Algorithm
Input: $\vec{d}_i(0)$ – initial working direction of $S_i$; $R_t^c$ – the outside of the target region; $\Omega_i$ – the view-coverage sensing neighbor set of $S_i$; $N_p$ – the number of phases; ∆α[] – rotating angle for each phase; tthresh[] – expiration time for each phase; ϵ[] – angle threshold for each phase; m – the size of the history window.
Output: $\vec{D}_i$ – final working direction.
1: Initialization: phase ← 1; t ← 0; for each phase, STOP[phase] ← false;
2: while phase ≤ $N_p$ do
3:   t ← t + 1;
4:   if STOP[phase] == false then
5:     if $v_{out}(sc_i^+, R_t^c) + \sum_{j\in\Omega_i} o(sc_i^+, sc_j) < v_{out}(sc_i^-, R_t^c) + \sum_{j\in\Omega_i} o(sc_i^-, sc_j)$ then
6:       Adjust $\vec{d}_i(t)$ by rotating clockwise with ∆α[phase];
7:     else
8:       Adjust $\vec{d}_i(t)$ by rotating anti-clockwise with ∆α[phase];
9:     end if
10:    if $t > m$ and $\max_{p,q\in[t-m,t]} |\vec{d}_i(p) - \vec{d}_i(q)| \le \epsilon[phase]$ then
11:      STOP[phase] ← true;
12:    end if
13:    // If the difference of the angles within the history window is less than ϵ[phase], the node enters the stable state and does not rotate any more within the current phase.
14:  end if
15:  if t == tthresh[phase] then
16:    phase ← phase + 1;
17:  end if
18: end while
19: $\vec{D}_i \leftarrow \vec{d}_i(t)$;

When the nodes come into the stable state of the current phase, they wait until the time of the current phase expires and then enter the next phase. The array of expiration times can be set through experience: a larger ∆α corresponds to a smaller tthresh, since it is expected to take less time to enter the stable state.

It is worth mentioning that if ∆α < ψ/2, the sensing sectors obtained by rotating clockwise and anti-clockwise from the current orientation overlap with each other. To reduce the computation cost, we only need to consider the regions that reflect the difference between the two sectors. More specifically, for any sensor $S_i$ with its current sensing sector $sc_i$, the sensing sectors obtained by rotating by ∆α clockwise and anti-clockwise are denoted as $sc_i^+$ and $sc_i^-$, respectively. We first find the regions that reflect the difference between the sectors $sc_i$ and $sc_i^+$ as well as $sc_i$ and $sc_i^-$, i.e., $\Delta^+ sc_i^+ = sc_i^+ - sc_i$, $\Delta^- sc_i^+ = sc_i - sc_i^+$; and $\Delta^+ sc_i^- = sc_i^- - sc_i$, $\Delta^- sc_i^- = sc_i - sc_i^-$. As shown in Fig. 6(a)(b), the sector enclosed with solid lines is the current sensing sector $sc_i$, and the sectors enclosed with dashed lines are $sc_i^+$ and $sc_i^-$, respectively. Then the difference between the sectors $sc_i^+$ and $sc_i^-$ is given by:

$$ sc_i^+ - sc_i^- = (\Delta^+ sc_i^+ \cup \Delta^- sc_i^-) - (\Delta^+ sc_i^- \cup \Delta^- sc_i^+). \qquad (14) $$
Fig. 6. When ∆α is small, replace the calculation of the amount of VCDO and OVCD on large sectors with small sectors.
Let $\Delta^+ sc_i = \Delta^+ sc_i^+ \cup \Delta^- sc_i^-$ and $\Delta^- sc_i = \Delta^+ sc_i^- \cup \Delta^- sc_i^+$. Then Eq. (13) is converted to:

$$ \text{Motion} = \begin{cases} +, & \text{if } v_{out}(\Delta^+ sc_i, R_t^c) + \sum_{j\in\Omega_i} o(\Delta^+ sc_i, sc_j) < v_{out}(\Delta^- sc_i, R_t^c) + \sum_{j\in\Omega_i} o(\Delta^- sc_i, sc_j); \\ -, & \text{otherwise,} \end{cases} \qquad (15) $$

where $sc_i^+$ and $sc_i^-$ are replaced with the small sectors $\Delta^+ sc_i$ and $\Delta^- sc_i$ (enclosed with dashed lines in Fig. 6(c)), and the computation cost is reduced to a large extent, especially for a very small ∆α.

We propose the improved view-coverage enhancing algorithm (IVCE algorithm) in Algorithm 2, which combines the mitigation of the boundary effect and the dynamic rotating angle into the basic VCE algorithm.

5.3. Discussion

In our proposed algorithm, sensors are forced to rotate clockwise or anti-clockwise, whichever results in a smaller amount of VCDO and OVCD, unless they have entered the stable state. Intuitively, however, there is another case to be considered: keeping the orientation unchanged, whose amount of VCDO and OVCD may be less than that of rotating clockwise or anti-clockwise. For convenience, we call this scheme the 3-state IVCE, and our scheme the 2-state IVCE. Compared with the 2-state IVCE, the 3-state IVCE enters the stable state more easily, since it adds the choice of keeping the orientation unchanged at each round of the algorithm. However, this limits the range of the search, and it is more likely to be trapped in local minima. In the simulation part, we compare the performance of the 2-state IVCE and the 3-state IVCE to verify this.

The IVCE runs in several phases with different rotating angles, and we expect that at each phase the sensors reach the stable state, in which they oscillate around some directions. In the simulation part, we demonstrate that all the nodes converge to the stable state after a certain number of rounds of rotation. The theoretical analysis of convergence is left for future study.

It is unlikely that our algorithm would reduce the view-coverage ratio under random deployment. However, we have to admit that our algorithm is a heuristic that cannot guarantee to increase the view-coverage ratio, especially under some deterministic initial deployments. To this end, we constructed a triangle-lattice-based deployment which achieves full-view coverage, as introduced in [3], and rotated the sensors by IVCE. According to the results, the loss of the view-coverage ratio is less than 1.5%. Since the authors in [3] state that this is the optimal deployment pattern in terms of sensor density, we believe that the loss of the view-coverage ratio by IVCE could be even smaller under other deployments that achieve full-view coverage.

In this paper, we mainly focus on improving the view-coverage of the camera sensor network. In fact, area coverage is still an important concern in monitoring systems. For example, in the application of intrusion detection, the camera sensor network can find an intruding object as long as it is detected by at least one sensor and send an alarm to the base station, even if it fails to recognize the face of the intruding object. To this end, we also demonstrate the area coverage ratio of our algorithm in the simulation part.

6. Simulation results

In this section, we conduct several groups of simulations to verify the effectiveness of our algorithm. We consider three metrics in the performance evaluation: view-coverage ratio, area coverage ratio and convergence time. We implement a simulation environment in Matlab to evaluate the performance of our algorithm. The target field is a 400 m × 400 m square region, and the sensors are deployed within a (400 + 2∆d) m × (400 + 2∆d) m region with the same center as the target region. We call ∆d the boundary range.
To calculate the view-coverage ratio of the target region, we partition the target region into equal small grids of size 1 m × 1 m, and average the view-coverage ratios of the grid points to obtain the view-coverage ratio of the target region. Each group of simulations is run 100 times, and the results are averaged. All the parameters used and their default values are shown in Table 1.
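Concretely, this grid-based estimate of Eq. (4) can be sketched as follows, reusing the hypothetical `view_coverage_ratio_point` helper from the sketch in Section 3.1; the region size and grid step are assumed parameters, and a coarser step can be used to trade accuracy for speed.

def view_coverage_ratio_region(sensors, r, psi, theta, size=400.0, step=1.0):
    """Average the point view-coverage ratio over a size x size grid."""
    total, count = 0.0, 0
    y = 0.0
    while y <= size:
        x = 0.0
        while x <= size:
            total += view_coverage_ratio_point((x, y), sensors, r, psi, theta)
            count += 1
            x += step
        y += step
    return total / count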
Table 1. Summary of parameters in simulation.

Symbol       Definition                           Default value
Rt           Target region                        400 m × 400 m
Rs           Sensor region                        440 m × 440 m
∆d           Boundary range                       20 m
N            Number of sensors                    1000
r            Sensing range                        40 m
ψ            Angle of view                        90°
θ            Maximum pose angle                   30°
∆α           Rotating angle of VCE                0.01
ϵ            Angle threshold of VCE               0.02
Np           Number of phases in IVCE             3
∆α[]         Array of rotating angles of IVCE     [1, 0.1, 0.01]
ϵ[]          Array of angle thresholds of IVCE    [2, 0.2, 0.02]
tthresh[]    Array of expiration times of IVCE    [100, 200, 400]
m            Size of the history window           15
Fig. 7. The benefit of the boundary effect mitigation: (a) the effect of the angle of view; (b) the effect of the number of sensors.
6.1. View-coverage ratio

For the performance of the view-coverage ratio, we first verify the benefit of our effort in mitigating the boundary effect. We have conducted two groups of simulations to study the impact of the number of sensors (N) and the angle of view (ψ) on the performance. In both simulations, we compare three groups of data. The first one is the view-coverage ratio under random deployment, denoted as Random; the second one is the result of the basic VCE algorithm, denoted as VCE; and the third one is the result obtained by the algorithm that mitigates the boundary effect on the basis of the VCE algorithm, denoted as VCE-B. The results are shown in Fig. 7(a) and (b). From both figures, we find that as the density of the network grows, VCE-B always outperforms VCE and Random. We also observe that our approach of mitigating the boundary effect is not so effective at extremely high or low density. This is because when the density is very low, the view-coverage degree on the outside is also very small; on the contrary, when the density is very high, the view-coverage ratios at both the boundary and the interior of the target region approach 100%, and the difference is very small. In both cases, VCE-B produces a less significant effect than at moderate density.

Second, we conduct a simulation to verify the benefit of dynamic rotating angles. We have conducted three groups of simulations. The first one adopts the fixed rotating angle of 0.01. The second one runs in two phases with the array of rotating angles ∆α = [1, 0.1]. The third one adds another finer-adjustment phase with ∆α = 0.01 on top of the second one, i.e., the array of rotating angles is ∆α = [1, 0.1, 0.01]. We adjust the sensor number from 800 to 1200. The results are depicted in Fig. 8(a). From this figure, one can see that the dynamic ∆α has better performance than the fixed ∆α, and for the dynamic ∆α, the performance is better for the algorithm which runs in more phases with a finer granularity of adjustment.

Then, we conduct a simulation to compare the results of the 2-state IVCE and the 3-state IVCE. For each scheme, we conduct two simulations: one with the fixed rotating angle ∆α = 0.01, and the other with the dynamic rotating angle ∆α = [1, 0.1, 0.01]. The result is depicted in Fig. 9. From this figure, one can see that the 2-state IVCE performs better than the 3-state IVCE. Specifically, the performance of the 3-state IVCE is very poor if we take a fixed small ∆α, only a little better than the result of random deployment.

Finally, we evaluate the performance of the proposed algorithm by adjusting the density of the network, in terms of the following parameters: the number of sensors (N), the sensing range (r), the angle of view (ψ), and the maximum pose angle (θ).
Fig. 8. The benefit of dynamic rotating angle.
Fig. 9. Comparison of the 2-state IVCE and the 3-state IVCE.
As a comparison, we implement the ACE (Area Coverage Enhancing) algorithm proposed in [15]. Though it does not aim at improving view-coverage, this work is perhaps the closest one to ours, in that it is also a distributed algorithm that fine-tunes the angles of directional sensors through virtual forces between neighboring sensors. However, this algorithm does not consider the boundary effect, which yields poor performance around the boundary of the target region. To alleviate this effect, we make minor revisions to the algorithm by adding a virtual force from the outside of the target region.

For each simulation, we compare four groups of data. The first one is the view-coverage ratio at the initial random deployment. The second one is the view-coverage ratio by the ACE algorithm. The third one is the result of the basic VCE algorithm, and the last one is the result obtained by the IVCE algorithm. The results are shown in Fig. 10. As expected, the performance of IVCE is the best among all the algorithms, which shows the superiority of our algorithm. We also find that ACE performs comparably with IVCE when the density is very low. This is because in a sparse network, most target points are covered at most once by ACE, with a view-coverage degree of 2θ for each covered target point. However, when the density increases, the majority of target points are covered more than once, and ACE no longer helps improve the view-coverage ratio. Another phenomenon is that when the network density is extremely low or high, the improvement of IVCE on the view-coverage ratio is very limited compared with random deployment. This can be explained as follows. When the density is very low, the expected OVCD and VCDO are very small, and thus the reduction of OVCD and VCDO by IVCE is also limited. On the contrary, when the density is very high, the view-coverage ratio already approaches 100% under the initial deployment, leaving very little room for improvement.

According to Fig. 10(a), we can see the benefit of our algorithm in reducing the deployment cost. For example, the view-coverage ratio under random deployment is 64.8% when N = 1000. After performing IVCE, the view-coverage ratio reaches 79.9%. To achieve the same ratio, nearly 2000 nodes are needed under random deployment. In other words, IVCE saves about 50% of the nodes while achieving the same view-coverage ratio.

6.2. Area coverage ratio

As discussed in Section 5.3, some monitoring applications are concerned with both area coverage and view-coverage. To this end, we compare the area coverage ratio of IVCE, ACE and random deployment by varying the density of the network.
Fig. 10. The effect of different parameters on the view-coverage ratio of ACE, VCE and IVCE: (a) the number of sensors; (b) the sensing range; (c) the angle of view; (d) the maximum pose angle.
From the results shown in Fig. 11, one can see that IVCE performs better than random deployment. Although the performance of IVCE is worse than ACE, the gap between the two gradually shrinks as the density increases. Moreover, both of them achieve nearly full area coverage when the density of the network is high enough. For example, in Fig. 11(a), when N ≥ 400, ACE shows an advantage in the area coverage ratio over IVCE of less than 2%, and both IVCE and ACE achieve an area coverage ratio larger than 99% when N ≥ 600. Because IVCE usually applies to a network with high density (in those situations, the improvement of the view-coverage ratio is more significant, see Fig. 10), the loss in area coverage is expected to be limited.

6.3. Convergence time

We also validate the convergence of the proposed algorithm. As stated in Lines 10–12 of Algorithm 2, each node converges to the stable state of each phase if the difference of its directions within the last m rounds is less than the angle threshold of the current phase. To show the convergence at different phases, we plot the number of nodes that have not yet reached the stable state (denoted as unstable nodes in Fig. 12) during the process of the IVCE algorithm. The algorithm runs in three phases with ∆α[] = [1, 0.1, 0.01], ϵ[] = [2, 0.2, 0.02] and tthresh[] = [100, 200, 400]. The size of the history window is 15. All the nodes are set as unstable at the beginning of each phase of IVCE. As shown in Fig. 12(a), the number of unstable nodes decreases as the rounds increase in each phase, and almost all the nodes come into the stable state after 50 rounds in Phase 1 and Phase 2, while the convergence in Phase 3 is slower, taking about 180 rounds until all the nodes come into the stable state. As a comparison, we also implement the basic VCE with ∆α = 0.01 and ϵ = 0.02. As shown in Fig. 12(b), the number of unstable nodes gradually decreases with the number of adjustment rounds, and all the nodes reach the stable state after about 310 rounds.

Fig. 12(c) shows the instantaneous view-coverage ratio at each round of the two algorithms. One can see that the view-coverage ratio of IVCE converges to better results than that of VCE. Moreover, the view-coverage ratio improves with more rounds of adjustment, but the improvement is marginal when the number of adjustments reaches a certain value. Hence, we can even tolerate an early stop of the algorithms to some degree at a small cost of view-coverage.
Fig. 11. The area coverage ratio by ACE and IVCE: (a) the effect of the number of sensors; (b) the effect of the sensing range; (c) the effect of the angle of view.
7. Experimental evaluation

In this section, we evaluate the performance of our algorithm with real deployed camera sensors. We first introduce the setup of the experiment, and then propose the approach of obtaining the sensing parameters of the camera. Finally, we present the experimental results and analyze them.

7.1. Experiment setup

Our experiment is implemented in an indoor environment with ten Logitech C525 cameras [32]. This type of camera has a resolution of 1024 ∗ 720 pixels. Each camera is connected by a USB cable to a laptop which is used for transmitting the control command and storing photos. The cameras are deployed within a square region of 4.8 m ∗ 4.8 m, and the target field is inside the camera deployment region, with a size of 3.6 m ∗ 3.6 m. The floor is composed of square bricks, each with a size of 0.6 m ∗ 0.6 m. Therefore, the camera deployment region spans 8 ∗ 8 bricks and the target region spans 6 ∗ 6 bricks. Each grid point of the bricks in the target region (a total of (6 + 1) ∗ (6 + 1) = 49 grid points) is taken as a sampling point, where a person stands and rotates from 0° to 337.5° with a gap of 22.5° (a total of 360/22.5 = 16 directions). For convenience, we call each facing direction at a sampling point a state. A photo is taken by the cameras when the person is at each state. To achieve this, a cell phone is programmed to send a control command to each laptop through the wireless local area network (WLAN) protocol. The topology of the experiment is shown in Fig. 13.
Fig. 12. The convergence of VCE and IVCE: (a) the number of unstable nodes during the process of IVCE; (b) the number of unstable nodes during the process of VCE; (c) the view-coverage ratio during the process of VCE and IVCE.
Fig. 13. Experiment setup.
Fig. 14. A snapshot of the experiment.
with a tripod, and adjust the height a little lower than the person’s face such that the person’s face is taken from a nearly horizontal viewpoint, and the cameras would not obstruct each other to capture the image of the person’s face. Fig. 14 shows a scene of our experiment. Each photo taken by the cameras is processed by a face-recognition system to test whether the person’s face is correctly recognized. The face-recognition system involves two stages [33]: (1) Face Detection, where the face is captured from a photo; (2) Face Recognition, where the detected face is processed and compared to a database of known faces, to determine who the person is. We use the Viola–Jones detection algorithm [34] for the face detection, and the parallel deformation technique [35] for recognizing the face. The face database is retrieved from the ORL Database of Faces [36], and contains 50 faces with 5 different poses, including the face of the person to be photographed in our experiment. The view-coverage ratio is approximated by the metric of recognition ratio, which is defined as the ratio of the number of recognizable states to the total number of the states, which is 49 ∗ 16 = 784. Each state is considered to be recognized if at least one of the cameras
Each state is considered to be recognized if at least one of the cameras detects the person's face and correctly recognizes this person. Let $I_k(P_i, \vec{f}_j)$ be an indicator function, where $I_k(P_i, \vec{f}_j) = 1$ if the person standing at the $i$th sampling point and facing the $j$th direction is recognized by the $k$th camera, and $I_k(P_i, \vec{f}_j) = 0$ otherwise. Then the recognition ratio is given by
$$\mathrm{Rate} = \frac{1}{784}\sum_{i=1}^{49}\sum_{j=1}^{16}\left[1 - \prod_{k=1}^{n}\left(1 - I_k(P_i, \vec{f}_j)\right)\right], \tag{16}$$
where $n$ is the number of cameras.
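As a quick illustration of Eq. (16), the sketch below computes the recognition ratio from a boolean indicator array; the array shape and the random example data are our assumptions, used only to show the computation.

```python
# A minimal sketch of Eq. (16). The boolean array I has shape (49, 16, n):
# I[i, j, k] is True if camera k recognizes the person standing at sampling
# point i and facing direction j. The example data is purely illustrative.
import numpy as np

def recognition_ratio(I):
    # A state (i, j) counts as recognized if at least one camera recognizes it,
    # i.e. 1 - prod_k (1 - I[i, j, k]) equals 1.
    recognized = 1 - np.prod(1 - I.astype(int), axis=2)
    return recognized.sum() / (49 * 16)

rng = np.random.default_rng(0)
I = rng.random((49, 16, 6)) < 0.2              # hypothetical outcomes for n = 6 cameras
print(recognition_ratio(I))
```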
7.2. Determining the parameters

The efficiency of our algorithm depends on a number of sensing parameters, including the angle of view ψ, the sensing range r, and the maximum pose angle θ. To estimate the angle of view of the camera, a person stands in front of the camera, which is rotated horizontally until the image of the person's face can no longer be fully captured. We estimate the critical value to be around ±25°; thus, we set ψ = 50°. To determine the values of r and θ, pictures of the person are taken at different distances from the camera and with different facing directions. We then run the face-recognition algorithm to test whether the person's face can be correctly recognized in each picture. The distance ranges from 0.6 m to 7.2 m in steps of 0.6 m, and the facing direction ranges from −60° to +60° in steps of 15° (the frontal view being 0°). The test is repeated 10 times, placing the camera at different locations and orientations, and the corresponding recognition rates are obtained. The results are shown in Table 2, where each entry is the recognition rate for the corresponding object–camera distance and pose angle. We observe that when the distance is within 6.0 m and the pose angle is within 30°, the recognition rate is relatively high, and as either grows beyond these values, the recognition rate drops dramatically. Therefore, we set the sensing range r = 6.0 m and the maximum pose angle θ = 30°.

7.3. Experimental results

We have conducted four groups of experiments, deploying 4, 6, 8, and 10 cameras respectively. Initially, the cameras are randomly oriented, and the corresponding topologies are shown in Fig. 15(a), (b), (c), and (g).
Fig. 15. The topology before and after performing the IVCE algorithm: (a) 4 cameras before rotation; (b) 6 cameras before rotation; (c) 8 cameras before rotation; (d) 4 cameras after rotation; (e) 6 cameras after rotation; (f) 8 cameras after rotation; (g) 10 cameras before rotation; (h) 10 cameras after rotation.
Then we rotate the cameras using the IVCE algorithm with the sensing parameters obtained in Section 7.2; the final topologies are shown in Fig. 15(d), (e), (f), and (h), respectively. For each deployment, we measure the recognition ratio using the approach described in Section 7.1. For comparison, we also compute the theoretical recognition ratio for each topology. The results are shown in Fig. 16. From this figure, we find that the experimental recognition ratio after rotating the cameras with the IVCE algorithm is significantly higher than that obtained when the cameras are randomly oriented. However, the experimental values are smaller than the theoretical ones. This is because, in the real experiment, the person is not always recognized by a camera even when he is located in the camera's sensing sector and his pose angle is below the predefined threshold. In our experiment, we set the thresholds to r = 6.0 m and θ = 30°, and theoretically consider the person to be definitely recognized whenever the corresponding sensing parameters are below these values. However, according to Table 2, the average recognition rate when the distance is at most 6.0 m and the pose angle is at most 30° is only around 0.715. Moreover, when the person gets too close to the camera, typically closer than 0.5 m, the camera can hardly capture the whole face, even if the person stands right in front of the camera and faces it directly. In this case, detection and recognition often fail, although the state is counted as recognized in theory.
Table 2
Recognition rate with different distances and pose angles (columns: facing direction).

Distance (m)   −60°   −45°   −30°   −15°    0°    15°    30°    45°    60°
0.6             0.0    0.3    0.4    0.8    0.9    0.9    0.7    0.1    0.0
1.2             0.1    0.3    0.6    0.9    0.9    0.6    0.4    0.0    0.0
1.8             0.0    0.4    0.7    0.8    1.0    0.7    0.7    0.1    0.0
2.4             0.1    0.3    0.6    1.0    0.9    0.5    0.5    0.1    0.2
3.0             0.0    0.1    0.7    0.6    0.8    0.8    0.6    0.2    0.0
3.6             0.0    0.1    0.4    0.5    1.0    0.8    0.6    0.1    0.0
4.2             0.0    0.0    0.7    0.8    0.9    0.8    0.5    0.3    0.0
4.8             0.1    0.1    0.5    0.8    0.9    0.9    0.5    0.3    0.0
5.4             0.0    0.0    0.4    0.5    0.9    1.0    0.7    0.1    0.1
6.0             0.1    0.0    0.4    0.7    0.8    0.7    0.5    0.3    0.0
6.6             0.0    0.1    0.2    0.3    0.7    0.6    0.5    0.1    0.0
7.2             0.0    0.0    0.1    0.3    0.3    0.2    0.1    0.0    0.1
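To illustrate how r and θ can be read off from Table 2, the sketch below averages the rates per distance and per pose angle and keeps the largest distance and |angle| whose average stays above a cutoff; the cutoff values (0.35 and 0.40) are illustrative assumptions of ours, whereas the paper chooses r = 6.0 m and θ = 30° by inspection of the table.

```python
# A minimal sketch of reading r and theta off Table 2: average the rates per
# distance (row) and per pose angle (column), then keep the largest distance
# and |angle| whose average stays above a cutoff. Cutoffs are assumptions.
import numpy as np

distances = np.arange(0.6, 7.3, 0.6)           # 0.6 m .. 7.2 m
angles = np.arange(-60, 61, 15)                # -60 deg .. +60 deg
rates = np.array([                             # Table 2: rows = distance, cols = angle
    [0.0, 0.3, 0.4, 0.8, 0.9, 0.9, 0.7, 0.1, 0.0],
    [0.1, 0.3, 0.6, 0.9, 0.9, 0.6, 0.4, 0.0, 0.0],
    [0.0, 0.4, 0.7, 0.8, 1.0, 0.7, 0.7, 0.1, 0.0],
    [0.1, 0.3, 0.6, 1.0, 0.9, 0.5, 0.5, 0.1, 0.2],
    [0.0, 0.1, 0.7, 0.6, 0.8, 0.8, 0.6, 0.2, 0.0],
    [0.0, 0.1, 0.4, 0.5, 1.0, 0.8, 0.6, 0.1, 0.0],
    [0.0, 0.0, 0.7, 0.8, 0.9, 0.8, 0.5, 0.3, 0.0],
    [0.1, 0.1, 0.5, 0.8, 0.9, 0.9, 0.5, 0.3, 0.0],
    [0.0, 0.0, 0.4, 0.5, 0.9, 1.0, 0.7, 0.1, 0.1],
    [0.1, 0.0, 0.4, 0.7, 0.8, 0.7, 0.5, 0.3, 0.0],
    [0.0, 0.1, 0.2, 0.3, 0.7, 0.6, 0.5, 0.1, 0.0],
    [0.0, 0.0, 0.1, 0.3, 0.3, 0.2, 0.1, 0.0, 0.1],
])

row_means = rates.mean(axis=1)                 # average over pose angles
col_means = rates.mean(axis=0)                 # average over distances
r = distances[row_means >= 0.35].max()
theta = int(np.abs(angles[col_means >= 0.40]).max())
print(f"r = {r:.1f} m, theta = {theta} deg")   # -> r = 6.0 m, theta = 30 deg
```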
Fig. 16. The recognition ratio before and after rotating the cameras by the IVCE algorithm.
This indicates that the sensing area of a camera in the network is not an ideal sector, and a probabilistic sensing model may characterize the recognition behavior more realistically than the proposed Boolean sensing model. In the future, we will study a probabilistic view-coverage model that better captures the recognition behavior by jointly considering the object–camera distance, the pose angle, and the face-recognition algorithm.

8. Conclusion and future work

The full-view coverage model provides a novel perspective on camera sensor networks for the purpose of recognizing an object's face. In this paper, we propose the view-coverage model to measure the coverage quality of a camera sensor network at a finer granularity, especially when the object is not full-view covered. Based on this model, we propose a distributed view-coverage enhancing algorithm that works by rotating the camera sensors, and we address several refinement issues in this algorithm. Through extensive simulations and testbed experiments, we show that our algorithm significantly improves the view-coverage ratio compared with random deployment. However, this work only proposes a prototype of the view-coverage model and does not consider many practical issues. In the future, as stated in Section 7.3, we will study the probabilistic view-coverage model, together with the corresponding problems and solutions. We will also extend the 2D view-coverage model to a 3D view-coverage model. The notion of ‘‘3D’’ includes not only deploying the cameras in 3D space instead of a 2D plane, but also considering the facing directions of the objects in 3D space, which is more complicated and challenging.

Acknowledgment

This research is financially supported by the National Natural Science Foundation of China (No. 60873026 and 61272418), the National Science and Technology Support Program of China (No. 2012BAK26B02), and the Industrialization of Science Program for University of Jiangsu Province (No. JH10-3).

References

[1] I.F. Akyildiz, T. Melodia, K.R. Chowdhury, A survey on wireless multimedia sensor networks, Comput. Netw. 51 (4) (2007) 921–960.
[2] V. Blanz, P. Grother, P.J. Phillips, T. Vetter, Face recognition based on frontal views generated from non-frontal images.
[3] Y. Wang, G. Cao, On full-view coverage in camera sensor networks, in: INFOCOM 2011 Proceedings IEEE, 2011, pp. 1781–1789.
[4] M. Guvensan, A. Yavuz, On coverage issues in directional sensor networks: a survey, Ad Hoc Netw. 9 (7) (2011) 1238–1255.
[5] J. Ai, A.A. Abouzeid, Coverage by directional sensors in randomly deployed wireless sensor networks, J. Comb. Optim. 11 (2006) 21–41.
[6] G. Fusco, H. Gupta, Selection and orientation of directional sensors for coverage maximization, in: SECON '09, IEEE Press, Piscataway, NJ, USA, 2009, pp. 556–564.
[7] Y. Cai, W. Lou, M. Li, X.-Y. Li, Target-oriented scheduling in directional sensor networks, in: 26th IEEE International Conference on Computer Communications, INFOCOM 2007, IEEE, 2007, pp. 1550–1558.
[8] H. Yang, D. Li, H. Chen, Coverage quality based target-oriented scheduling in directional sensor networks, in: 2010 IEEE International Conference on Communications, ICC, 2010, pp. 1–5.
[9] H. Chen, H. Wu, N. Tzeng, Grid-based approach for working node selection in wireless sensor networks, in: 2004 IEEE International Conference on Communications, Vol. 6, IEEE, 2004, pp. 3673–3678.
[10] D. Tao, H. Ma, L. Liu, Coverage-enhancing algorithm for directional sensor networks, in: Mobile Ad-hoc and Sensor Networks, Lecture Notes in Computer Science, vol. 4325, Springer, Berlin, Heidelberg, 2006, pp. 256–267.
[11] N. Tezcan, W. Wang, Self-orienting wireless multimedia sensor networks for maximizing multimedia coverage, in: ICC '08, 2008, pp. 2206–2210.
[12] L. Zhang, J. Tang, W. Zhang, Strong barrier coverage with directional sensors, in: Proceedings of the 28th IEEE Conference on Global Telecommunications, GLOBECOM '09, 2009, pp. 1816–1821.
[13] S. Poduri, G.S. Sukhatme, Constrained coverage for mobile sensor networks, in: The 2004 IEEE International Conference on Robotics and Automation, 2004, pp. 165–171.
[14] A. Howard, M.J. Mataric, G.S. Sukhatme, Mobile sensor network deployment using potential fields: a distributed scalable solution to the area coverage problem, in: The 6th International Symposium on Distributed Autonomous Robotics Systems, DARS02, 2002, pp. 299–308.
[15] H. Ma, X. Zhang, A. Ming, A coverage-enhancing method for 3D directional sensor networks, in: INFOCOM 2009, IEEE, 2009, pp. 2791–2795.
[16] E. Horster, R. Lienhart, On the optimal placement of multiple visual sensors, in: Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, VSSN '06, ACM, New York, NY, USA, 2006, pp. 111–120.
[17] E. Horster, R. Lienhart, Approximating optimal visual sensor placement, in: Proceedings of IEEE International Conference on Multimedia and Expo, 2006, pp. 1257–1260.
[18] A. Mittal, L. Davis, Visibility analysis and sensor planning in dynamic environments, in: Proceedings of the 8th European Conference on Computer Vision, Prague, Czech Republic, 2004, pp. 175–189.
[19] J. Zhao, S. Cheung, T. Nguyen, Optimal camera network configurations for visual tagging, IEEE J. Sel. Top. Signal Process. (2) (2008) 464–479.
[20] K.-S. Hung, K.-S. Lui, On perimeter coverage in wireless sensor networks, IEEE Trans. Wirel. Commun. 9 (7) (2010) 2156–2164.
[21] K.-Y. Chow, K.-S. Lui, E. Lam, Maximizing angle coverage in visual sensor networks, in: IEEE International Conference on Communications, ICC '07, 2007, pp. 3516–3521.
[22] A. Newell, K. Akkaya, E. Yildiz, Providing multi-perspective event coverage in wireless multimedia sensor networks, in: 2010 IEEE 35th Conference on Local Computer Networks, LCN, 2010, pp. 464–471.
[23] L. Liu, H. Ma, X. Zhang, On directional k-coverage analysis of randomly deployed camera sensor networks, in: ICC '08, 2008, pp. 2707–2711.
[24] Y. Wu, X. Wang, Achieving full view coverage with randomly-deployed heterogeneous camera sensors, in: 2012 IEEE 32nd International Conference on Distributed Computing Systems, ICDCS, 2012, pp. 556–565.
[25] Y. Wang, G. Cao, Barrier coverage in camera sensor networks, in: Proceedings of the Twelfth ACM International Symposium on Mobile Ad Hoc Networking and Computing, MobiHoc '11, ACM, New York, NY, USA, 2011, pp. 12:1–12:10.
[26] H. Ma, M. Yang, D. Li, Y. Hong, W. Chen, Minimum camera barrier coverage in wireless camera sensor networks, in: INFOCOM, 2012 Proceedings IEEE, 2012, pp. 217–225.
[27] Y. Cai, W. Lou, M. Li, Cover set problem in directional sensor networks, in: Proc. of IEEE Intl. Conf. on Future Generation Communication and Networking, FGCN 07, 2007, pp. 274–278.
[28] M. Ao, D. Yi, Z. Lei, S.Z. Li, Face recognition at a distance: system issues, in: Handbook of Remote Biometrics, Springer, 2009, pp. 155–167.
[29] EverFocus, PTZ cameras EPTZ 900. http://www.manualslib.com/.
[30] Axis Communications, Axis 215 PTZ network camera. http://www.axis.com/.
[31] H.S.M. Coxeter, S.L. Greitzer, Geometry Revisited, Math. Assoc. Amer., Washington, DC, 1967.
[32] Logitech, Logitech HD Webcam C525. http://www.logitech.com/en-us/product/hd-webcam-c525.
[33] W. Zhao, R. Chellappa, P.J. Phillips, A. Rosenfeld, Face recognition: a literature survey, ACM Comput. Surv. 35 (4) (2003) 399–458.
[34] P. Viola, M. Jones, Robust real-time object detection, Int. J. Comput. Vis. (2001).
[35] D. Beymer, T. Poggio, Face Recognition From One Example View, Tech. Rep., Cambridge, MA, USA, 1995.
[36] AT&T Laboratories Cambridge, The ORL Database of Faces, http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.