Maximizing full-view target coverage in camera sensor networks


Ad Hoc Networks 94 (2019) 101973


Jinglan Jia a, Cailin Dong b, Yi Hong c, Ling Guo d, Ying Yu e,∗

a School of Information and Mathematics, Yangtze University, Jingzhou, China
b School of Mathematics and Statistics, Central China Normal University, Wuhan, China
c School of Information Science and Technology, Beijing Forestry University, Beijing, China
d School of Information Science & Technology, Northwest University, Xi'an, China
e School of Computer, Central China Normal University, Wuhan, China

∗ Corresponding author. E-mail address: [email protected] (Y. Yu).

Article history: Received 23 June 2018; Revised 26 May 2019; Accepted 26 July 2019; Available online 2 August 2019

Keywords: Approximation algorithm; Camera sensor networks; Full-view target coverage

https://doi.org/10.1016/j.adhoc.2019.101973
© 2019 Elsevier B.V. All rights reserved.

Abstract

Traditional target coverage only ensures the monitoring of targets. However, as people's security awareness increases, the requirement on target coverage also rises from monitoring to identification. The full-view coverage model is therefore proposed to guarantee that every facing direction of a target can be covered. Based on this coverage model, we study the maximum full-view target coverage problem in camera sensor networks, where each camera sensor has P working directions, aiming at maximizing the number of full-view covered targets by scheduling the working directions of camera sensors. To solve this problem, we design a (1 − 1/e)-approximation algorithm based on pipage rounding and an efficient heuristic algorithm. Finally, simulation results are presented to demonstrate the performance of our algorithms.

1. Introduction

The coverage problem is one of the most fundamental problems in wireless sensor networks, as it reflects how well a region is monitored. The emergence of camera sensors adds new vitality to this topic, because they can provide much richer information about the monitored environment through videos or images. Such sensor networks have broad application prospects in many fields, such as military reconnaissance, environment monitoring, intelligent transportation, medical care, industrial control and disaster management [1]. In most previous studies, sensors are based on the omnidirectional sensing model, in which the sensing region of a sensor is abstracted as a disk, and whether a target is covered depends only on its location. Directional sensing models were proposed later [2–4]. Compared with omnidirectional sensors, directional sensors have a limited angle of sensing range, and the sensing region of a directional sensor is modeled as a sector, which depends not only on its location but also on its orientation. The camera sensor is a special kind of directional sensor, which may generate very different views of the same target [5]. As people's safety consciousness gradually increases, new demands on capturing a clear profile of targets have risen beyond traditional coverage, which merely detects them [6].




For example, when a criminal suspect appears in a crowded public place, the camera surveillance system needs to obtain enough facial information of the suspect to confirm his or her identity. Against this background, full-view coverage is introduced in camera sensor networks to satisfy this security requirement [5]. Previous studies on full-view coverage, such as [5,7–10], mainly focus on the sensing model with one fixed working direction, which leads to a waste of sensing resources. One intuitive remedy is to utilize sensors with rotation ability, improving the utilization of sensors instead of leaving them idle. Besides, in many practical applications, such as wildlife monitoring and battlefield surveillance, camera sensors are randomly deployed by airplanes, and the initially deployed camera sensors may not be sufficient to provide full-view coverage for all targets; it is then necessary to schedule the camera sensors so as to full-view cover as many targets as possible. However, most existing efforts on full-view coverage focus on how to judge whether a region or a set of targets is full-view covered, and there is still a lack of efficient algorithms with theoretical bounds for scheduling camera sensors to realize full-view coverage. Taking this into account, based on the full-view coverage model, we study the maximum full-view target coverage problem (MFTC) in camera sensor networks, where each camera sensor has P working directions whose mutually disjoint sensing sectors combine to form a circle, and can rotate to cover different sensing sectors. Our goal is to maximize the number of full-view covered targets by scheduling the working directions of camera sensors. To solve this problem, we design a (1 − 1/e)-approximation algorithm and an efficient heuristic algorithm.


In particular, no study, to our knowledge, has considered the MFTC problem in uniformly randomly distributed camera sensor networks, where every camera sensor has P disjoint working directions.

Several difficulties make our problem challenging. First, every camera sensor has P working directions and can rotate its orientation around its center, so the working direction of each camera sensor can vary from one sensing sector to another. Since at any time at most one working direction of each sensor can be activated, this induces conflicts among targets covered by different working directions of the same camera sensor. Second, unlike traditional target coverage, full-view target coverage requires that, for any facing direction of a target, there is always at least one camera sensor obtaining a positive image of that target (the formal definition is given in Section 3.1), which makes the coverage relationship between camera sensors and targets more complicated. Further, in order to achieve full-view coverage of targets, camera sensors need to cooperatively turn to specific working directions at the same time.

Due to the aforementioned challenges, we solve the MFTC problem incrementally. For ease of handling, two new concepts are introduced: the basic full-view cover set (BFCS) and the normal full-view cover set (NFCS) (formal definitions are given in Section 3.1). We first design the FindBFCSs algorithm to find all BFCSs for the targets. Considering that different targets may be full-view covered by some of the same working directions, we construct NFCSs from BFCSs with the ConstructNFCSs algorithm. Then we formulate the MFTC problem as an optimization problem that selects some NFCSs, which are not in conflict with each other, such that the total number of full-view covered targets is maximized. To make the problem tractable, we formalize the optimization problem as an integer program and further relax it into a linear program, for which an optimal solution can be obtained in polynomial time. After that, we develop an approximation algorithm applying the pipage rounding method to convert the optimal solution into an integer solution, which yields a feasible solution for the MFTC problem with a (1 − 1/e) performance ratio. Besides, we also design an efficient heuristic algorithm to address the MFTC problem.

The rest of this paper is organized as follows. Section 2 briefly reviews related research. Section 3 defines the camera sensor network model, the full-view coverage model and the MFTC problem. Section 4 presents an approximation algorithm and an efficient heuristic algorithm to solve the MFTC problem. Section 5 presents simulation results of our algorithms. Section 6 concludes the paper.

2. Related work

The coverage problem in wireless sensor networks has attracted widespread attention in recent years. Traditional target coverage in [3,11,12] only solves the target monitoring problem and provides no way to capture the facial information of targets. With the introduction of the requirement to recognize targets, Wang and Cao first propose the full-view coverage model in [5] and study the problem of constructing a camera barrier in [7]. On this basis, Ma et al. focus on the minimum camera barrier coverage problem (MCBCP) in camera sensor networks in [8], aiming at reducing the number of required camera sensors. In order to improve the utilization of camera sensors, Gui et al. make the first attempt to explore a deployment strategy achieving full-view barrier coverage with rotatable camera sensors in [13]. In [14], mobile camera sensors are deployed to form full-view barrier coverage in a random environment. All the algorithms mentioned above are centralized; Yang et al. propose a distributed algorithm to solve the full-view barrier coverage problem with rotatable camera sensors in [15]. In [16], Yu et al. propose local face-view coverage, a novel concept to achieve statistical barrier coverage in camera sensor networks, which dramatically reduces the number of required camera sensors.

The efforts mentioned above are dedicated to full-view barrier coverage, which is efficient for some applications, for example, border monitoring. However, it loses sight of what is happening inside the monitored field. Wu and Wang study the sufficient and necessary conditions for ensuring full-view coverage under uniform and Poisson deployment in [17]. Hu et al. mainly focus on the full-view coverage problem in mobile heterogeneous camera sensor networks and derive the critical condition to achieve full-view coverage in [18]. He et al. study the minimum number full-view area coverage problem in camera sensor networks in [10]; they prove that full-view area coverage can be ensured as long as a selected full-view ensuring set of points is full-view covered, and design two approximation algorithms to solve the minimum number full-view point coverage problem. Zhang et al. investigate the fairness-based full-view coverage maximization problem in camera sensor networks in [6], scheduling camera sensors to maximize the minimum accumulated full-view coverage time of targets. In [19], Liu et al. study the problem of maximizing the number of full-view covered targets in camera sensor networks based on the Poisson distribution model and propose a greedy algorithm to solve it. However, the problem of maximum full-view target coverage in uniformly randomly distributed camera sensor networks, where every camera sensor has P disjoint working directions, has rarely been studied directly.

3. Model and problem description

In this section, we present the camera sensor network model and the full-view coverage model, and formulate the MFTC problem.

3.1. Camera sensor network model

We consider a camera sensor network with N camera sensors S = {s_1, s_2, ..., s_N} and M targets T = {t_1, t_2, ..., t_M}. Targets with known locations are deployed in a finite two-dimensional plane. Camera sensors are randomly scattered to monitor the targets. xy(node) denotes the coordinate function providing the location of a node (camera sensor or target).

For each camera sensor s_i, its sensing region can be modeled as a sector, which is characterized by three parameters: the sensing range R, the sensing angle α, and the orientation d_i. The sensing range R and the sensing angle α are identical for all camera sensors, and the orientation d_i (the internal bisector of s_i's sensing sector) of each sensor can rotate around the center. We view every sector as a working direction. Assume every camera sensor has P independent working directions; that is, the sensing sectors of any two different working directions are disjoint, and all the mutually disjoint sensing sectors combine to generate a circle (hence P · α = 2π). Note that at any time only one working direction of each sensor can be activated. Denote by d_{i,p} the pth working direction of camera sensor s_i; the set of working directions of camera sensor s_i is D_i = {d_{i,1}, d_{i,2}, ..., d_{i,P}}. Denote by D = D_1 ∪ D_2 ∪ ... ∪ D_N the set of working directions of all camera sensors.

A target t_j is covered by camera sensor s_i if and only if it falls within the sensing sector of the activated working direction, i.e., (i) ‖s_i t_j‖ < R and (ii) ∠(d_i, s_i t_j) < α/2, where ‖s_i t_j‖ denotes the Euclidean distance between s_i and t_j, and ∠(d_i, s_i t_j) ∈ [0, π] denotes the angle between d_i and the vector from s_i to t_j. As shown in Fig. 1(a), camera sensor s_i has 4 disjoint working directions and the sector marked with a solid line is the activated working direction. Target t_1 is currently covered by camera sensor s_i, while target t_2 is not.
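For concreteness, the two covering conditions can be checked directly from coordinates. The following minimal Python sketch is illustrative only (it is not part of the original model description) and assumes a working direction is represented by the angle of its bisector d_i, in radians:

```python
import math

def covers(sensor, orientation, target, R, alpha):
    """Covering conditions of the model: (i) ||s_i t_j|| < R and
    (ii) the angle between d_i and the vector s_i -> t_j is < alpha/2.
    `sensor` and `target` are (x, y) pairs; `orientation` is the
    bisector angle of the active sector, in radians."""
    dx, dy = target[0] - sensor[0], target[1] - sensor[1]
    if math.hypot(dx, dy) >= R:                  # condition (i) violated
        return False
    bearing = math.atan2(dy, dx)                 # direction of s_i -> t_j
    # smallest angle between the two directions, always in [0, pi]
    diff = abs((bearing - orientation + math.pi) % (2.0 * math.pi) - math.pi)
    return diff < alpha / 2.0                    # condition (ii)
```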


Fig. 1. (a) The sensing model of camera sensor s_i; (b) the full-view coverage model.

For target identification, it is important to capture positive images of targets from all directions, which is characterized as full-view coverage [5]. Assume the facing direction of target t_j is f_j, f_j ∈ [0, 2π]. We now give explicit definitions based on the camera sensor network model.

Definition 3.1 (Full-view Coverage). A target t_j is full-view covered if for any facing direction f_j ∈ [0, 2π], there is always at least one camera sensor s_i such that target t_j is covered by s_i and ∠(f_j, t_j s_i) ≤ θ, where θ ∈ (0, π/2] is a predefined threshold called the effective angle, which mainly depends on the specific application.

An illustration is shown in Fig. 1(b). Since ∠(f_1, t_1 s_i) = ∠(f_2, t_1 s_i) = θ and ∠(f_3, t_1 s_i) > θ, target t_1 is not full-view covered by s_i.

Definition 3.2 (Basic Full-view Cover Set (BFCS)). A set {D_{t_j}, {t_j}}, where D_{t_j} ⊆ D and t_j ∈ T, is a basic full-view cover set if (i) every working direction in D_{t_j} covers target t_j, (ii) all working directions in D_{t_j} together exactly full-view cover target t_j, and (iii) for any D'_{t_j} ⊊ D_{t_j}, the working directions in D'_{t_j} together no longer full-view cover target t_j.
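Definition 3.1 admits a simple geometric test over the sensors covering t_j: every facing direction lies within θ of some direction t_j → s_i exactly when the largest circular gap between consecutive bearings t_j → s_i is at most 2θ (this is also the intuition behind Theorem 4.1 below). A sketch of such a checker, under the assumption that the covering sensors are given as coordinate pairs:

```python
import math

def full_view_covered(target, covering_sensors, theta):
    """Definition 3.1 check: every facing direction f_j must be within
    theta of the direction t_j -> s_i of some covering sensor s_i.
    Equivalently, the largest circular gap between the sorted bearings
    t_j -> s_i must be at most 2*theta."""
    if not covering_sensors:
        return False
    bearings = sorted(
        math.atan2(s[1] - target[1], s[0] - target[0]) % (2.0 * math.pi)
        for s in covering_sensors
    )
    # gaps between consecutive bearings, plus the wrap-around gap
    gaps = [b2 - b1 for b1, b2 in zip(bearings, bearings[1:])]
    gaps.append(2.0 * math.pi - bearings[-1] + bearings[0])
    return max(gaps) <= 2.0 * theta
```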



Definition 3.3 (Normal Full-view Cover Set (NFCS)). A set {D_{T'}, T'}, where D_{T'} ⊆ D and T' ⊆ T, is a normal full-view cover set if (i) all working directions in D_{T'} together exactly full-view cover all targets in T', (ii) for any D'_{T'} ⊊ D_{T'}, the working directions in D'_{T'} together no longer full-view cover all targets in T', and (iii) for any T'' ⊊ T', D_{T'} contains working directions that are redundant for exactly full-view covering all targets in T''.

3.2. Problem definition

Under the uniform random deployment strategy, the camera sensors initially deployed may not be sufficient to full-view cover all targets. Our goal is to schedule the working directions of the camera sensors to maximize the number of full-view covered targets, which we call the MFTC problem. According to the camera sensor network model, the MFTC problem can be stated as follows.

Problem 3.1 (Maximum Full-view Target Coverage (MFTC)). Given a camera sensor network with camera sensor set S = {s_1, s_2, ..., s_N} and target set T = {t_1, t_2, ..., t_M}, the maximum full-view target coverage problem is to schedule the working directions of the N camera sensors such that the total number of full-view covered targets is maximized.

Theorem 3.1. The MFTC problem is NP-hard.

Proof. When P = 1 and θ = 2π, the MFTC problem is equivalent to the classic MAX_COVER problem in [20], which is known to be NP-hard. Hence the MFTC problem is NP-hard. □

4. Algorithms for MFTC problem

In this section, we focus on solving the MFTC problem. Every BFCS consists of a critical set of working directions of camera sensors and a single-target set, where the target is exactly full-view covered by those working directions. Every NFCS consists of a critical set of working directions of camera sensors and a critical set of targets exactly full-view covered by those working directions. We first find all BFCSs for every target. Considering that different targets may be full-view covered by some of the same working directions, we then merge the BFCSs of different targets to construct NFCSs. Since every camera sensor can activate at most one working direction at any time, the MFTC problem can be transformed into selecting some NFCSs, which are not in conflict with each other, such that the total number of full-view covered targets is maximized. We then design two algorithms to solve the problem: an approximation algorithm with a (1 − 1/e) performance ratio and an efficient heuristic algorithm.

4.1. Algorithm to find BFCSs

Denote by S(t_j) the set of camera sensors covering target t_j ∈ T. For each target t_j, draw a circle C(t_j, R) centered at t_j with radius R. Without loss of generality, select a radius randomly as the start line and rotate it by angle θ anti-clockwise repeatedly; this yields the sectors γ_1(t_j), γ_2(t_j), ..., γ_K(t_j), where K = ⌈2π/θ⌉. If K = 2π/θ, the K sectors all have sector angle θ; otherwise, the first K − 1 sectors have sector angle θ and the last sector has sector angle 2π − (K − 1)θ. Each sector can be represented by the set of camera sensors located in it. Fig. 2 shows an example of dividing C(t_j, R) into K sectors with the red radius as the start line.

Fig. 2. Dividing C(t_j, R) into K sectors.

Theorem 4.1. Target t_j is full-view covered if there is at least one camera sensor s_i ∈ S(t_j) in each sector γ_m(t_j), m = 1, 2, ..., K.

Proof. For any s_i ∈ γ_m(t_j) and s_i′ ∈ γ_{mod(m+1,K)}(t_j), ∠ s_i t_j s_i′ ≤ 2θ. So for any facing direction f_j between t_j s_i and t_j s_i′, either ∠(f_j, t_j s_i) ≤ θ or ∠(f_j, t_j s_i′) ≤ θ. Thus target t_j is full-view covered. □

According to Theorem 4.1, we propose Algorithm 1 to find all BFCSs for the targets.

Algorithm 1 FindBFCSs
Input: Camera sensor set S = {s_1, s_2, ..., s_N}, target set T = {t_1, t_2, ..., t_M}, sensing radius R, sensing angle α, coordinate function xy(node), and effective angle θ.
Output: A set B of all basic full-view cover sets.
1: B = ∅
2: for j = 1 to M do
3:   compute S(t_j), and sort the camera sensors in S(t_j)
4:   for i = 1 to |S(t_j)| do
5:     denote t_j s_i as the start line and rotate it by angle θ anti-clockwise continuously to get the K-sector set Γ = {γ_1(t_j), γ_2(t_j), ..., γ_K(t_j)}
6:     if Γ satisfies Theorem 4.1 then
7:       implement the camera sensor selection traversal in Γ (select one camera sensor from each γ_m(t_j) respectively), and delete the redundant combinations
8:       for each of these camera sensor combinations do
9:         merge the set of working directions covering t_j of these camera sensors and {t_j} as bfcs
10:        B = B ∪ {bfcs}
11:      end for
12:    end if
13:  end for
14: end for
15: return B

Now we show an example of finding BFCSs, assuming θ = π/2. In Fig. 3, there are 4 camera sensors s_1, s_2, s_3, s_4 covering target t_1. Denote t_1 s_1 as the start line and rotate it by π/2 anti-clockwise repeatedly to get the 4-sector set Γ = {γ_1(t_1), γ_2(t_1), γ_3(t_1), γ_4(t_1)}, where γ_1(t_1) = {s_1, s_2}, γ_2(t_1) = {s_3}, γ_3(t_1) = {s_4}, γ_4(t_1) = {s_1}. Γ satisfies Theorem 4.1. Then implement the camera sensor selection traversal; there are two cases: {s_1, s_3, s_4} and {s_2, s_3, s_4, s_1}. Since {s_1, s_3, s_4} ⊆ {s_2, s_3, s_4, s_1}, {s_2, s_3, s_4, s_1} is a redundant combination. For the camera sensor combination {s_1, s_3, s_4}, sensors s_1, s_3, s_4 can exactly full-view cover target t_1; therefore bfcs = {{d_{1,4}, d_{3,1}, d_{4,2}}, {t_1}} and B = B ∪ {bfcs} = {{{d_{1,4}, d_{3,1}, d_{4,2}}, {t_1}}}.

Fig. 3. An example of finding BFCSs.
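To make the core of Algorithm 1 concrete, the sketch below (our simplification, not the authors' code) bins the covering sensors of one target into the K sectors for a fixed start line, tests the condition of Theorem 4.1, and enumerates one sensor per sector while pruning redundant supersets, as in the example above:

```python
import itertools
import math

def find_bfcs_combinations(target, covering_sensors, theta):
    """Core of Algorithm 1 for one start line: bin the covering sensors
    of t_j (given as (x, y) tuples) into K = ceil(2*pi/theta) sectors,
    check Theorem 4.1 (no empty sector), then take one sensor per
    sector and delete redundant (superset) combinations."""
    K = math.ceil(2.0 * math.pi / theta)
    sectors = [[] for _ in range(K)]
    for s in covering_sensors:
        b = math.atan2(s[1] - target[1], s[0] - target[0]) % (2.0 * math.pi)
        sectors[min(int(b // theta), K - 1)].append(s)
    if any(not sec for sec in sectors):          # Theorem 4.1 fails
        return []
    combos = {frozenset(c) for c in itertools.product(*sectors)}
    # keep a combination only if no other combination is a proper subset
    return [c for c in combos if not any(o < c for o in combos)]
```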

4.2. Algorithm to construct NFCSs

Since different targets may be full-view covered by some of the same working directions, we can merge BFCSs to construct NFCSs. Index each element in B obtained via Algorithm 1 by the order of its appearance in the collection. The details of constructing NFCSs are shown in Algorithm 2.

Algorithm 2 ConstructNFCSs
Input: The set B of all basic full-view cover sets obtained via Algorithm 1.
Output: A set N of all normal full-view cover sets.
1: N = {bfcs_1}
2: i = 2
3: while 1 < i ≤ |B| do
4:   bfcs_i = {D_{t_s}, {t_s}}
5:   j = 1
6:   while j ≤ |N| do
7:     nfcs_j = {D_{T'}, T'}
8:     if t_s ∉ T' then
9:       if D_{t_s} ∩ D_{T'} ≠ ∅ and the working directions in D_{t_s} ∪ D_{T'} are not in conflict with each other then
10:        if D_{t_s} ∪ D_{T'} has redundant working directions for exactly full-view covering the targets in {t_s} ∪ T' then
11:          delete the redundant working directions in D_{t_s} ∪ D_{T'} and save the remaining working directions in D
12:        else
13:          D = D_{t_s} ∪ D_{T'}
14:        end if
15:        nfcs = {D, {t_s} ∪ T'}
16:      else
17:        nfcs = bfcs_i
18:      end if
19:    end if
20:    N = N ∪ {nfcs}
21:  end while
22: end while
23: delete the redundant combinations in N
24: return N

Fig. 4 shows an example of constructing NFCSs. According to Algorithm 1, B = {bfcs_1, bfcs_2, bfcs_3}, where bfcs_1 = {{d_{1,4}, d_{2,1}, d_{4,2}}, {t_1}} and bfcs_2 = {{d_{1,4}, d_{3,1}, d_{4,2}}, {t_1}} are two BFCSs for target t_1, and bfcs_3 = {{d_{1,4}, d_{2,1}, d_{3,1}}, {t_2}} is a BFCS for target t_2. At the beginning of Algorithm 2, N = {bfcs_1}. The targets in bfcs_1 and bfcs_2 are the same, so bfcs_1 and bfcs_2 cannot be merged. For bfcs_1 and bfcs_3, since t_1 ≠ t_2 and {d_{1,4}, d_{2,1}, d_{4,2}} ∩ {d_{1,4}, d_{2,1}, d_{3,1}} = {d_{1,4}, d_{2,1}}, bfcs_1 and bfcs_3 can be merged to construct nfcs = {{d_{1,4}, d_{2,1}, d_{3,1}, d_{4,2}}, {t_1, t_2}}, giving N = N ∪ {nfcs} = {{{d_{1,4}, d_{2,1}, d_{4,2}}, {t_1}}, {{d_{1,4}, d_{2,1}, d_{3,1}, d_{4,2}}, {t_1, t_2}}}. Merging bfcs_2 and bfcs_3 constructs nfcs = {{d_{1,4}, d_{2,1}, d_{3,1}, d_{4,2}}, {t_1, t_2}}, which is already contained in N. No further BFCSs and NFCSs meet the merging condition. Since {d_{1,4}, d_{2,1}, d_{4,2}} ⊆ {d_{1,4}, d_{2,1}, d_{3,1}, d_{4,2}} and {t_1} ⊆ {t_1, t_2}, {{d_{1,4}, d_{2,1}, d_{4,2}}, {t_1}} is a redundant combination. Deleting the redundant combinations in N yields N = {{{d_{1,4}, d_{2,1}, d_{3,1}, d_{4,2}}, {t_1, t_2}}}.

Fig. 4. An example of constructing NFCSs.
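The merging condition used by Algorithm 2 (a new target, a shared working direction, and no two directions of the same camera sensor) can be sketched as follows. The representation of BFCSs/NFCSs as (directions, targets) pairs is our own, and the redundancy pruning of lines 10–11 is omitted for brevity:

```python
def sensor_of(direction):
    # a working direction d_{i,p} is encoded as the pair (i, p)
    return direction[0]

def try_merge(bfcs, nfcs):
    """Merge a BFCS {D_ts, {ts}} into an NFCS {D_T', T'} as in Algorithm 2:
    requires a new target, a shared working direction, and no conflict
    (two distinct directions of the same camera sensor)."""
    d_ts, ts = bfcs
    d_t, t = nfcs
    if ts <= t:                          # target already in T'
        return None
    if not (d_ts & d_t):                 # no shared working direction
        return None
    union = d_ts | d_t
    sensors = [sensor_of(d) for d in union]
    if len(sensors) != len(set(sensors)):    # same sensor used twice
        return None
    return (union, ts | t)

# the example of Fig. 4: bfcs1 and bfcs3 merge into one NFCS
bfcs1 = (frozenset({(1, 4), (2, 1), (4, 2)}), frozenset({"t1"}))
bfcs3 = (frozenset({(1, 4), (2, 1), (3, 1)}), frozenset({"t2"}))
print(try_merge(bfcs1, bfcs3))   # merged NFCS covering both t1 and t2
```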

4.3. Pipage rounding algorithm for MFTC problem

The MFTC problem can be transformed into selecting some NFCSs, which are not in conflict with each other, such that the total number of full-view covered targets is maximized. First, under the condition that each camera sensor can activate at most one working direction at any time, we mathematically formulate the problem as an integer program, then relax and convert it into a linear program (LP). Building on this foundation, we develop a pipage rounding algorithm (PRA) to convert the LP solution into an integer solution, which yields a feasible solution for the MFTC problem. The algorithm has a (1 − 1/e) performance ratio compared with the optimal solution.

Assume the total number of NFCSs is W, and sort the NFCSs in ascending order according to the ratio of the number of targets full-view covered by each NFCS to the number of camera sensors it uses. We formulate the problem as follows.

Given: N = {nfcs_1, nfcs_2, ..., nfcs_W} and T = {t_1, t_2, ..., t_M}.

Since all NFCSs are fixed, a_{i,k} and b_{j,k} are constants. Define the boolean variables:

a_{i,k} = 1 if nfcs_k contains one working direction of s_i, and 0 otherwise.
b_{j,k} = 1 if nfcs_k contains t_j, and 0 otherwise.
x_k = 1 if nfcs_k is selected, and 0 otherwise.
y_j = 1 if t_j is full-view covered, and 0 otherwise.

The optimization problem can be written as:

Maximize  ∑_{j=1}^{M} y_j    (1)

subject to

y_j = min{ ∑_{k=1}^{W} b_{j,k} · x_k , 1 },  1 ≤ j ≤ M.    (2)

∑_{k=1}^{W} a_{i,k} · x_k ≤ 1,  1 ≤ i ≤ N.    (3)

x_k ∈ {0, 1},  1 ≤ k ≤ W.    (4)

The objective function (1) maximizes the number of full-view covered targets. In constraint (2), if ∑_{k=1}^{W} b_{j,k} · x_k ≥ 1, then y_j = 1, which indicates that target t_j can be full-view covered by some selected NFCSs; if ∑_{k=1}^{W} b_{j,k} · x_k = 0, then y_j = 0, which means that target t_j cannot be full-view covered. So y_j ∈ {0, 1}. Constraint (3) ensures that each camera sensor can choose at most one working direction to work at any time, depending on whether the sensor is activated. Constraint (4) states the restriction on the variables. The above integer program can be equivalently transformed into the following integer linear program:

Maximize  ∑_{j=1}^{M} y_j    (5)

subject to

y_j ≤ ∑_{k=1}^{W} b_{j,k} · x_k,  1 ≤ j ≤ M.    (6)

∑_{k=1}^{W} a_{i,k} · x_k ≤ 1,  1 ≤ i ≤ N.    (7)

x_k, y_j ∈ {0, 1},  1 ≤ k ≤ W, 1 ≤ j ≤ M.    (8)

To make the problem tractable, we further relax the integer linear program (5)–(8) into a linear programming (LP) problem:

Maximize  ∑_{j=1}^{M} y_j    (9)

subject to

y_j ≤ ∑_{k=1}^{W} b_{j,k} · x_k,  1 ≤ j ≤ M.    (10)

∑_{k=1}^{W} a_{i,k} · x_k ≤ 1,  1 ≤ i ≤ N.    (11)

0 ≤ x_k ≤ 1,  1 ≤ k ≤ W.    (12)

0 ≤ y_j ≤ 1,  1 ≤ j ≤ M.    (13)
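The relaxation (9)–(13) is a standard LP. For illustration only, here is one way it could be solved with SciPy; the code and the matrix encoding are ours, with a and b the 0/1 matrices a_{i,k} and b_{j,k} defined above:

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp_relaxation(a, b):
    """Solve the LP (9)-(13). a is N x W (a[i,k]), b is M x W (b[j,k]).
    Variables are stacked as [x_1..x_W, y_1..y_M]; linprog minimizes,
    so we minimize -sum(y_j) to maximize sum(y_j)."""
    N, W = a.shape
    M = b.shape[0]
    c = np.concatenate([np.zeros(W), -np.ones(M)])   # objective (9)
    # (10): y_j - sum_k b[j,k] * x_k <= 0
    A10 = np.hstack([-b, np.eye(M)])
    # (11): sum_k a[i,k] * x_k <= 1
    A11 = np.hstack([a, np.zeros((N, M))])
    A_ub = np.vstack([A10, A11])
    b_ub = np.concatenate([np.zeros(M), np.ones(N)])
    # (12)-(13): all variables in [0, 1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * (W + M))
    return res.x[:W], res.x[W:]                      # (x*, y*)
```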

We can obtain an optimal solution to the LP problem in polynomial time. After that, we apply the pipage rounding method proposed in [21,22] to convert the solution into an integer solution, for which we need the following nonlinear program (NLP):

Maximize  ∑_{j=1}^{M} y_j    (14)

subject to

y_j = 1 − ∏_{k=1}^{W} (1 − b_{j,k} · x_k),  1 ≤ j ≤ M.    (15)

∑_{k=1}^{W} a_{i,k} · x_k ≤ 1,  1 ≤ i ≤ N.    (16)

0 ≤ x_k ≤ 1,  1 ≤ k ≤ W.    (17)

Assume (x∗, y∗) is the optimal solution of the LP problem. Algorithm 3 describes the details of the Pipage Rounding Algorithm.

Algorithm 3 Pipage Rounding Algorithm (PRA)
Input: An optimal solution (x∗, y∗) of the LP problem.
Output: An integer solution (x, y).
1: x ← x∗
2: while x has non-integral components do
3:   randomly choose two non-integral components 0 < x_u < 1 and 0 < x_v < 1
4:   define x(ε) by x_k(ε) = x_k if k ≠ u, v; x_u + ε if k = u; x_v − ε if k = v
5:   let ε_1 ← min{x_u, 1 − x_v} and ε_2 ← min{1 − x_u, x_v}
6:   if NLP(x(−ε_1)) > NLP(x(ε_2)) then
7:     x ← x(−ε_1)
8:   else
9:     x ← x(ε_2)
10:  end if
11: end while
12: return (x, y)
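A compact Python rendering of Algorithm 3 follows (a sketch mirroring the pseudocode above, not the authors' implementation). Here nlp_value evaluates objective (14) with constraint (15) substituted in; the handling of a single leftover fractional component is our own choice, not specified in the paper:

```python
import numpy as np

def nlp_value(x, b):
    """Objective (14) with (15) substituted:
    sum_j [ 1 - prod_k (1 - b[j,k] * x_k) ]."""
    return np.sum(1.0 - np.prod(1.0 - b * x, axis=1))

def pipage_round(x_star, b, tol=1e-9):
    """Sketch of Algorithm 3: repeatedly pick two fractional components
    and shift mass between them in the direction that gives the larger
    NLP objective, until x is integral."""
    x = x_star.copy()
    while True:
        frac = np.where((x > tol) & (x < 1.0 - tol))[0]
        if len(frac) == 0:
            break
        if len(frac) == 1:                   # lone fractional component:
            x[frac[0]] = round(x[frac[0]])   # round it directly (our choice)
            continue
        u, v = np.random.choice(frac, size=2, replace=False)
        e1 = min(x[u], 1.0 - x[v])           # step for x(-e1)
        e2 = min(1.0 - x[u], x[v])           # step for x(+e2)
        x1, x2 = x.copy(), x.copy()
        x1[u], x1[v] = x[u] - e1, x[v] + e1
        x2[u], x2[v] = x[u] + e2, x[v] - e2
        x = x1 if nlp_value(x1, b) > nlp_value(x2, b) else x2
    return np.round(x)
```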

Lemma 4.1. NLP(x, y) ≥ (1 − 1/e) · LP(x, y).

Proof. Fix a target t_j and assume A = {k : b_{j,k} = 1}, q = |A|; then

1 − ∏_{k=1}^{W} (1 − b_{j,k} · x_k) = 1 − ∏_{k∈A} (1 − x_k).

By the inequality of arithmetic and geometric means, we have

1 − ∏_{k=1}^{W} (1 − b_{j,k} · x_k) ≥ 1 − ( (∑_{k∈A} (1 − x_k)) / q )^q = 1 − ( 1 − (∑_{k∈A} x_k) / q )^q.

Let f(z) = 1 − (1 − z/q)^q, where z = ∑_{k∈A} x_k. Then, for 0 ≤ z ≤ q, we have

f′(z) = (1 − z/q)^{q−1} ≥ 0,    f″(z) = −((q − 1)/q) · (1 − z/q)^{q−2} ≤ 0.

Therefore, f(z) is monotone increasing and concave on the interval [0, q]. Moreover, f(0) = 0. It follows that f(z) ≥ z · f(1) for z ∈ [0, 1], and f(z) ≥ f(1) · min{1, z} for z ∈ [0, q]. Note that f(1) = 1 − (1 − 1/q)^q ≥ 1 − 1/e (for instance, q = 2 gives f(1) = 3/4 > 1 − 1/e ≈ 0.632); thus

1 − ∏_{k=1}^{W} (1 − b_{j,k} · x_k) ≥ (1 − 1/e) · min{ 1, ∑_{k=1}^{W} b_{j,k} · x_k }.

Summing this per-target bound over all targets gives NLP(x, y) ≥ (1 − 1/e) · LP(x, y). □

Theorem 4.2. Algorithm 3 is a polynomial-time (1 − 1/e)-factor approximation algorithm.

Proof. Obviously, Algorithm 3 runs in polynomial time. By Lemma 4.1, NLP(x, y) ≥ (1 − 1/e) · LP(x, y). Since LP(x, y) ≥ Opt and NLP(x, y) is non-decreasing during the pipage rounding process, we have NLP(x, y) ≥ (1 − 1/e) · Opt. Therefore Algorithm 3 is a polynomial-time (1 − 1/e)-factor approximation algorithm. □

4.4. Heuristic algorithm for MFTC problem

The basic idea of the heuristic algorithm (Algorithm 4) is as follows. Assume the total number of NFCSs is W. Denote by nfcs_k the kth NFCS, and define w(nfcs_k) as the weight of nfcs_k, which equals the ratio of the number of targets that nfcs_k can full-view cover to the number of camera sensors that nfcs_k uses. When selecting NFCSs, keep choosing the NFCS with the maximum weight; if some NFCSs have the same weight, give priority to the NFCS with the smallest ID. Then delete the NFCSs using the same camera sensors as the selected NFCS, due to the single-working-direction constraint. We keep selecting NFCSs until the set of NFCSs is empty.

Algorithm 4 Heuristic Algorithm (HA)
Input: The set N = {nfcs_1, nfcs_2, ..., nfcs_W} of all normal full-view cover sets obtained via Algorithm 2.
Output: A set N∗ of normal full-view cover sets.
1: N∗ = ∅
2: while N ≠ ∅ do
3:   sort the NFCSs in N in descending order of weight; if some NFCSs have the same weight, give priority to the NFCS with the smallest ID
4:   choose the first NFCS nfcs_k from N
5:   N∗ = N∗ ∪ {nfcs_k}
6:   delete nfcs_k and the NFCSs using the same camera sensors as nfcs_k from N
7: end while
8: return N∗
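Since Algorithm 4 is a plain greedy procedure, it translates almost line by line into code. A sketch (ours), encoding each NFCS as an (id, directions, targets) triple where each working direction d_{i,p} is a (sensor, sector) pair:

```python
def heuristic_select(nfcss):
    """Sketch of Algorithm 4 (HA). nfcss: list of (id, directions, targets).
    Repeatedly pick the max-weight NFCS (smallest id breaks ties) and
    delete every NFCS that uses any of the same camera sensors."""
    def weight(n):
        _, dirs, targets = n
        return len(targets) / len({d[0] for d in dirs})

    selected = []
    remaining = list(nfcss)
    while remaining:
        # largest weight first; ties broken by smallest id
        best = max(remaining, key=lambda n: (weight(n), -n[0]))
        selected.append(best)
        used = {d[0] for d in best[1]}
        remaining = [n for n in remaining
                     if not ({d[0] for d in n[1]} & used)]
    return selected
```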

5. Simulation results

In this section, we evaluate the performance of our algorithms through simulations. All simulations are implemented in Matlab 2013a on Windows 7. Each simulation is run 100 times with random placements of camera sensors and targets, and we report the average value. In our simulations, we deploy N camera sensors with sensing radius R and sensing angle α, and M targets, randomly in a 100 m × 100 m region. The effective angle is θ. There are two cases: (1) α = π/2, with θ = π/3, 5π/12, π/2, respectively; (2) θ = π/2, with α = π/3, π/2, 2π/3, respectively. By varying the number of camera sensors, the number of targets and the sensing radius in the two cases, we investigate the effect of the parameters (the number of camera sensors, the number of targets, the sensing radius, the sensing angle, and the effective angle) on our algorithms. We also compare the performance of the PRA algorithm and the HA algorithm.

5.1. The number of camera sensors

In this subsection, we set M = 40, R = 30, and the number of camera sensors N varies from 30 to 70 with an increment of 10. Fig. 5 presents the number of targets full-view covered by the PRA algorithm and the HA algorithm as the number of camera sensors N changes, under different effective angles θ and different sensing angles α. For both the PRA algorithm and the HA algorithm, the number of full-view covered targets increases with the number of camera sensors. The reason is that more camera sensors increase the possibility of full-view covering targets.

5.2. The number of targets

In this subsection, we set N = 50, R = 30, and the number of targets M increases from 20 to 60 with an increment of 10. Fig. 6 depicts the number of targets full-view covered by the PRA algorithm and the HA algorithm as the number of targets M changes, under different effective angles θ and different sensing angles α. For both algorithms, the number of full-view covered targets increases as the number of targets increases, because when there are more targets, a camera sensor can serve several targets at the same time.

5.3. Sensing radius

In this subsection, we set N = 50, M = 40, and the sensing radius R changes from 15 to 35 with an increment of 5.


Fig. 5. The number of full-view covered targets changing with the number of camera sensors N under different effective angle θ and different sensing angle α .

Fig. 6. The number of full-view covered targets changing with the number of targets M under different effective angle θ and different sensing angle α .


Fig. 7. The number of full-view covered targets changing with sensing radius R under different effective angle θ and different sensing angle α .

Fig. 8. PRA versus HA: the number of full-view covered targets changing with the number of camera sensors N under different effective angle θ and different sensing angle α .

Fig. 7 shows the number of targets full-view covered by the PRA algorithm and the HA algorithm as the sensing radius R changes, under different effective angles θ and different sensing angles α. As the sensing radius increases, the number of targets full-view covered by the two algorithms also increases. The reason is that with a larger sensing radius camera sensors can cover more targets, and therefore targets have more chances to be full-view covered.

5.4. Sensing angle and effective angle

Figs. 5–7 show the number of targets full-view covered by the PRA algorithm and the HA algorithm under different effective angles θ and different sensing angles α. From these figures, we can see that the effective angle θ has a greater impact on the performance of the algorithms than the sensing angle α.


Fig. 9. PRA versus HA: the number of full-view covered targets changing with the number of targets M under different effective angle θ and different sensing angle α .

Fig. 10. PRA versus HA: the number of full-view covered targets changing with sensing radius R under different effective angle θ and different sensing angle α .

The number of full-view covered targets increases with the effective angle θ, while it is not sensitive to the sensing angle α. This is because the sensing angle only relates to coverage (whether a target is within the sensing region of some camera sensor), while the effective angle is closely related to full-view coverage. The larger the effective angle is, the lower the coverage quality requirement of the targets, and thus the more easily targets are full-view covered.

5.5. Comparison between PRA and HA

In this subsection, we compare the efficiency of the PRA algorithm and the HA algorithm. Figs. 8–10 present the performance of the PRA algorithm compared with the HA algorithm as the number of camera sensors N, the number of targets M, and the sensing radius R change, under different effective angles θ and different sensing angles α, respectively. The simulation results show that the number of targets full-view covered by the HA algorithm is always greater than that covered by the PRA algorithm; that is, although the HA algorithm has no approximation ratio, it is very efficient.

6. Conclusion

In this paper, we investigated the MFTC problem in camera sensor networks with the objective of maximizing the number of full-view covered targets, where every camera sensor has P working directions. We studied the intrinsic relationship between camera sensors and targets, and constructed NFCSs based on BFCSs. Then we formally formulated the MFTC problem as an optimization problem that selects some NFCSs, which are not in conflict with each other, such that the total number of full-view covered targets is maximized. We proposed the PRA algorithm with a (1 − 1/e) performance ratio for the problem. Besides, we also designed an efficient HA algorithm to solve the problem. Finally, the simulation results showed the efficiency of our algorithms.


Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

This work is partly supported by the Doctoral Scientific Research Foundation of Yangtze University under grant No. 801070010135.

References

[1] I.F. Akyildiz, T. Melodia, K.R. Chowdhury, A survey on wireless multimedia sensor networks, Computer Networks 51 (4) (2007) 921–960.
[2] H. Ma, Y. Liu, On coverage problems of directional sensor networks, in: Mobile Ad-hoc and Sensor Networks (MSN 2005), Lecture Notes in Computer Science, vol. 3794, 2005, pp. 721–731.
[3] J. Ai, A.A. Abouzeid, Coverage by directional sensors in randomly deployed wireless sensor networks, J. Comb. Optim. 11 (1) (2006) 21–41.
[4] J. Jia, C. Dong, X. He, et al., Sensor scheduling for target coverage in directional sensor networks, Int. J. Distrib. Sensor Netw. 13 (6) (2017) 1–12.
[5] Y. Wang, G. Cao, On full-view coverage in camera sensor networks, in: Proceedings of IEEE Conference on Computer Communications (INFOCOM 2011), Shanghai, China, 10-15 April, 2011, pp. 1781–1789.
[6] Q. Zhang, S. He, J. Chen, Toward optimal orientation scheduling for full-view coverage in camera sensor networks, in: Proceedings of IEEE Global Communications Conference (GLOBECOM 2016), Washington, USA, 4-8 December, 2016.
[7] Y. Wang, G. Cao, Barrier coverage in camera sensor networks, in: Proceedings of the 12th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2011), Paris, France, 17-19 May, 2011.
[8] H. Ma, M. Yang, D. Li, et al., Minimum camera barrier coverage in wireless camera sensor networks, in: Proceedings of IEEE Conference on Computer Communications (INFOCOM 2012), Orlando, USA, 25-30 March, 2012, pp. 217–225.
[9] Y. Wang, G. Cao, Achieving full-view coverage in camera sensor networks, ACM Trans. Sensor Netw. 10 (1) (2013) 1–31.
[10] S. He, D.H. Shin, J. Zhang, Full-view area coverage in camera sensor networks: dimension reduction and near-optimal solutions, IEEE Trans. Veh. Technol. 65 (9) (2016) 7448–7461.
[11] S. Meguerdichian, F. Koushanfar, M. Potkonjak, et al., Coverage problems in wireless ad-hoc sensor networks, in: Proceedings of IEEE Conference on Computer Communications (INFOCOM 2001), Alaska, USA, 22-26 April, 2001.
[12] M. Cardei, M.T. Thai, Y. Li, et al., Energy-efficient target coverage in wireless sensor networks, in: Proceedings of the 24th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM 2005), Miami, USA, 13-17 March, 2005, pp. 1976–1984.
[13] Y. Gui, F. Wu, X. Gao, et al., Full-view barrier coverage with rotatable camera sensors, in: Proceedings of IEEE/CIC International Conference on Communications in China (ICCC 2014), Shanghai, China, 13-15 October, 2014, pp. 818–822.
[14] X. Liu, B. Yang, S. Zhao, et al., Achieving full-view barrier coverage with mobile camera sensors, in: Proceedings of the International Conference on Networking and Network Applications (NaNA 2016), Hakodate, Japan, 23-25 July, 2016, pp. 73–76.
[15] R. Yang, X. Gao, F. Wu, et al., Distributed algorithm for full-view barrier coverage with rotatable camera sensors, in: Proceedings of IEEE Global Communications Conference (GLOBECOM 2015), San Diego, USA, 6-10 December, 2015.
[16] Z. Yu, F. Yang, J. Teng, et al., Local face-view barrier coverage in camera sensor networks, in: Proceedings of IEEE Conference on Computer Communications (INFOCOM 2015), Hong Kong, China, 26 April-1 May, 2015, pp. 684–692.
[17] Y. Wu, X. Wang, Achieving full view coverage with randomly-deployed heterogeneous camera sensors, in: Proceedings of the IEEE 32nd International Conference on Distributed Computing Systems (ICDCS 2012), Macau, China, 18-21 June, 2012, pp. 556–565.
[18] Y. Hu, X. Wang, X. Gan, Critical sensing range for mobile heterogeneous camera sensor networks, in: Proceedings of IEEE Conference on Computer Communications (INFOCOM 2014), Toronto, Canada, 27 April-2 May, 2014, pp. 970–978.
[19] W. Liu, J. Liu, L. Wang, et al., Full-view coverage algorithm in camera sensor networks with random deployment, Adv. Wireless Sensor Netw. 334 (2013) 280–290.
[20] D.S. Hochbaum, Approximating covering and packing problems: set cover, vertex cover, independent set, and related problems, in: Approximation Algorithms for NP-hard Problems, PWS Publishing, 1997.

[21] A. Ageev, M. Sviridenko, Pipage rounding: a new method of constructing algorithms with proven performance guarantee, J. Comb. Optim. 8 (3) (2004) 307–328.
[22] Z. Lu, T. Pitchford, W. Li, et al., On the maximum directional target coverage problem in wireless sensor networks, in: Proceedings of the 10th International Conference on Mobile Ad-hoc and Sensor Networks (MSN 2014), Maui, USA, 19-21 December, 2014, pp. 74–79.

Jinglan Jia received the B.S. degree from the School of Mathematics and Information Science, Hebei Normal University, in 2012 and the Ph.D. degree from the School of Mathematics and Statistics, Central China Normal University, in 2018. She is now a lecturer in the School of Information and Mathematics, Yangtze University. Her research interests include sensor networks and algorithm design and analysis.

Cailin Dong is a professor at Central China Normal University. He received the B.S. degree in Mathematics from Central China Normal University in 1985 and the Ph.D. degree in Management Science and Engineering from Huazhong University of Science and Technology in 2005. His research interests include wireless networks, social networks and algorithm design.

Yi Hong received the Ph.D. degree from the Department of Computer Science, Renmin University of China, in 2015. She is now a lecturer in the School of Information Science and Technology, Beijing Forestry University. Her research interests include wireless networks, ad hoc & sensor networks, and algorithm design and analysis.

Ling Guo received the B.S. degree in Computer Science from Shaanxi Normal University in 2012 and his Ph.D. degree from Renmin University of China in 2017. He is now a lecturer in the School of Information Science and Technology, Northwest University. His research interests include wireless networks, social networks and approximation algorithms.

Ying Yu is an associate professor in the School of Computer, Central China Normal University. She received the B.S. and M.S. degrees in Computer from Central China Normal University, China, in 1995 and 1998, respectively. She obtained the Ph.D. degree in Computer Application from the University of Science and Technology Beijing in 2009. Her research focus is machine learning.