Ad Hoc Networks 11 (2013) 1006–1021
A genetic algorithm based exact approach for lifetime maximization of directional sensor networks

Alok Singh (a,*), André Rossi (b)
(a) Department of Computer and Information Sciences, University of Hyderabad, Hyderabad 500 046, India
(b) Lab-STICC, Université de Bretagne-Sud, F-56321 Lorient, France
Article history: Received 3 February 2012; received in revised form 13 November 2012; accepted 14 November 2012; available online 29 November 2012.

Keywords: Column generation; Constrained optimization; Directional sensor networks; Genetic algorithms; Matheuristic; Wireless sensor networks
Abstract

This paper addresses the problem of maximizing the lifetime of directional wireless sensor networks, i.e., networks whose sensors can monitor targets in an angular sector only and not all the targets around them. These sectors usually do not overlap, and each sensor can monitor at most one sector at a time. An exact method is proposed using a column generation scheme where a two-level strategy, consisting of a genetic algorithm and an integer linear programming approach, is used to solve the auxiliary problem. The role of the integer linear programming (ILP) approach is limited to either escaping from local optima or proving the optimality of the current solution. Computational results clearly show the advantage of the proposed approach over a column generation approach based on solving the auxiliary problem through the ILP approach alone, as the proposed approach is several times faster.

© 2012 Elsevier B.V. All rights reserved.
1. Introduction

Rapid advancements in embedded systems and wireless communication technologies have facilitated the widespread use of Wireless Sensor Networks (WSNs) for data gathering in remote or inhospitable environments, such as battlefield surveillance, fire monitoring in forests, and ecological and tsunami monitoring in the deep sea. In such environments, sensors are usually deployed in an ad hoc or random manner, as their accurate placement is ruled out owing to risks and/or cost considerations. To cope with this random deployment, more sensors are deployed than actually required. This over-deployment also makes WSNs more resilient to faults, as some targets are redundantly covered by multiple sensors. Each sensor operates on a battery that has a limited lifetime. Moreover, replacement of batteries is not possible in remote or inhospitable environments. Therefore, prolonging the network lifetime by
making the best use of available resources is of prime concern in the design of WSNs for such environments.

Most of the existing techniques for prolonging the network lifetime make use of redundancy in sensor deployment. These techniques divide the set of sensors into a number of subsets or covers (not necessarily disjoint), such that the sensors in each subset cover all the targets. The amount of time during which each cover is used is also determined. Then, these covers are activated one by one for their determined time duration, i.e., at any instant of time only the sensors belonging to a single cover are active, whereas all other sensors are inactive. This leads to a significant increase in lifetime for the following two reasons. First, the energy consumption of sensors in the inactive state is negligible in comparison to the active state [1,2], and, therefore, keeping only a required minimum subset of sensors active at any particular instant of time saves a lot of energy. Second, a sensor's battery lasts longer if it oscillates frequently between active and inactive states. In fact, if a battery is discharged in many short bursts with considerable off time, then its lifetime may double in comparison to a situation where it is discharged continuously [3,4].
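To make the battery accounting behind this cover-based scheduling concrete, the toy sketch below (not taken from the paper; all data and identifiers are illustrative) computes the network lifetime of a given schedule and checks that no sensor exceeds its battery, assuming each listed cover already monitors all targets.

```c
/* Toy sketch (not from the paper) of the cover-based scheduling idea:
 * the network lifetime is the sum of the cover durations, and a schedule
 * is feasible as long as the total active time of every sensor stays
 * within its battery.  Each listed cover is assumed to monitor all
 * targets; all data and identifiers here are illustrative. */
#include <stdio.h>

#define N 4   /* number of sensors */
#define P 3   /* number of covers  */

int main(void)
{
    double battery[N]  = { 1.0, 1.0, 1.0, 1.0 };
    int in_cover[P][N] = {
        { 1, 1, 0, 0 },          /* cover 0 uses sensors 0 and 1 */
        { 0, 1, 1, 0 },          /* cover 1 uses sensors 1 and 2 */
        { 1, 0, 0, 1 },          /* cover 2 uses sensors 0 and 3 */
    };
    double duration[P] = { 0.5, 0.5, 0.5 };

    double lifetime = 0.0, used[N] = { 0.0 };
    for (int j = 0; j < P; j++) {
        lifetime += duration[j];
        for (int i = 0; i < N; i++)
            if (in_cover[j][i])
                used[i] += duration[j];   /* sensor i is active while cover j runs */
    }
    for (int i = 0; i < N; i++)
        if (used[i] > battery[i])
            printf("sensor %d exceeds its battery\n", i);
    printf("network lifetime = %.2f\n", lifetime);   /* prints 1.50 here */
    return 0;
}
```

In this toy schedule each sensor has a battery of one time unit; the three covers of half a unit each yield a lifetime of 1.5 units while sensors 0 and 1 exactly exhaust their batteries.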
This paper is concerned with the lifetime maximization problem mentioned above in Directional Sensor Networks (DSNs), i.e., wireless sensor networks consisting of directional sensors only. Directional sensors can monitor targets in an angular sector only and not all targets around them. Video sensors [5,6], ultrasonic sensors [7] and infrared sensors [8] are common examples of directional sensors. In this paper, the directional characteristics of sensors are assumed to be limited to sensing only and not to their communication ability. The sensing ability of directional sensors can be extended in several ways [9,10]. We can place several directional sensors of the same type on one sensor node in such a way that each sensor covers a different angular sector. A practical example of this kind of arrangement can be found in [8], where four ultrasonic sensors are placed on a single node to detect ultrasonic signals from any direction. Alternatively, each sensor node can be fitted with a mobile device so that nodes can move around. Another possibility is to equip each sensor node with a device that can switch the direction of the sensor over a range of directions in such a way that it can sense in all directions, though not at the same time. The direction of an active sensor at any instant of time is called its work direction at that instant. Like several previous approaches [9–13], in this paper we consider the last model and assume that the possible directions in which a sensor can work do not overlap and that a base station is located within the communication range of each sensor, i.e., we do not discuss the connectivity of the active sensors.

The lifetime maximization problem in directional sensor networks, which we denote by LM-DS (Lifetime Maximization with Directional Sensors), can be formally stated as follows. Given m targets with known locations and n directional sensors randomly deployed in the vicinity of the targets, where each sensor, when active, can monitor targets in one of r non-overlapping directions at any instant of time and sensor i has battery life b_i, the LM-DS problem consists in scheduling the sensors' activity in such a way that the network lifetime is maximized under the restriction that, during the entire lifetime, each target is covered by the work direction of at least one active sensor. It has been shown in [14] that the problem is NP-Hard in the strong sense even when there is no restriction due to the work direction (i.e., all sensors have 360° sensing ability); hence so is LM-DS, which is a generalization of this problem.

Figs. 1 and 2 illustrate LM-DS with the help of a simple example. Fig. 1 shows a simple directional sensor network with n = 9, m = 4 and r = 4. For the sake of simplicity, all sensors are assumed to have the same orientation and the same battery lifetime of one time unit. Fig. 2 shows an optimal solution to the LM-DS problem in the directional sensor network of Fig. 1 with nine covers and a total lifetime of 2.25 units. Fig. 2 shows, for each cover, the work direction of each active sensor in that cover as a shaded sector centered at that active sensor. The time duration of each cover is also given in this figure.

All the approaches available in the literature for LM-DS are heuristic in nature, i.e., there is no guarantee of the optimality of the obtained solution. In contrast to the existing approaches, this paper describes an exact approach
based on column generation for the LM-DS problem. Column generation based approaches have already been used successfully for addressing lifetime maximization problems in wireless sensor networks [15–18]. However, so far no one has addressed lifetime maximization problems in directional sensor networks using column generation based approaches. This has motivated us to develop the approach described in this paper.

When a linear program consists of a very large number of variables and a reasonably low number of constraints, it can be decomposed through the Dantzig–Wolfe decomposition into two subproblems, which are referred to as the master problem and the auxiliary problem. A column generation approach [19–21] alternately solves the master problem and the auxiliary problem until an optimal solution is found. The master problem is a version of the original problem with a reduced number of variables, and the auxiliary problem generates one or more variables (a variable is represented by a column in the constraint matrix of the master problem) to be added to the master problem, provided that these variables can aid in further improving the objective function value. The variables that can aid in further improving the objective function value have a strictly positive reduced cost [22] and are termed attractive. Indeed, by linear programming theory, any non-basic variable with a positive reduced cost has a potential for increasing the objective function value in a maximization problem, whereas variables with a non-positive reduced cost cannot help in improving the current basic solution. If it can be proved that no attractive variable exists, then the current solution to the master problem is also proved optimal, and the column generation approach stops. Otherwise, the auxiliary problem returns one or more attractive variables, which are added to the master problem, and the whole process is repeated.

Clearly, in the context of the LM-DS problem, covers are the variables. Therefore, to use a column generation approach, the LM-DS problem is decomposed as follows:

1. A master problem that schedules existing covers with the objective of maximizing the network lifetime.
2. An auxiliary problem that builds attractive covers for the master problem, so that it can maximize the network lifetime even further.

To solve the auxiliary problem, a two-level procedure consisting of a genetic algorithm and an integer linear programming approach is used, as sketched below. First, the genetic algorithm (GA) is used to solve the auxiliary problem. If it fails to find even one attractive cover, then another run of the GA is made. If this also fails, then the integer linear programming (ILP) approach is used as a last resort. If the ILP also fails to find an attractive cover, then the current solution to the master problem is optimal and the column generation process stops. Therefore, the ILP is used only for escaping from local optima or proving the optimality of the current solution. As the ILP, in general, consumes much more time than the GA and produces a single cover, whereas the GA can generate multiple covers, restricting the use of the ILP can lead to significant savings in execution time.
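The following schematic C sketch illustrates one way this two-level strategy can be organized around the column generation loop. It is not the authors' implementation: solve_master, run_ga and solve_aux_ilp are assumed placeholder callbacks standing for the master LP solve, one GA run on the auxiliary problem, and the exact ILP solve of the auxiliary problem, respectively.

```c
/* Schematic sketch (not the authors' code) of how the column generation
 * loop with the two-level auxiliary strategy can be organised: try the GA,
 * retry it once on failure, and fall back to the ILP only when both runs
 * fail.  solve_master, run_ga and solve_aux_ilp are assumed placeholders
 * for the master LP solve, one GA run on the auxiliary problem and the
 * exact ILP solve; covers found by the GA or the ILP are assumed to be
 * appended to the master problem inside these callbacks. */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    void   (*solve_master)(void *ctx, double *y);        /* fills dual values y        */
    size_t (*run_ga)(void *ctx, const double *y);        /* # attractive covers found  */
    bool   (*solve_aux_ilp)(void *ctx, const double *y); /* true iff one exists        */
    void   *ctx;                                         /* problem data, cover pool   */
    double *y;                                           /* duals of the master LP     */
} colgen_t;

/* Returns when the ILP proves that no attractive cover exists, i.e. when
 * the current master solution is optimal. */
void column_generation(colgen_t *cg)
{
    for (;;) {
        cg->solve_master(cg->ctx, cg->y);            /* LP over the current covers */

        size_t found = cg->run_ga(cg->ctx, cg->y);   /* first GA attempt  */
        if (found == 0)
            found = cg->run_ga(cg->ctx, cg->y);      /* second GA attempt */
        if (found > 0)
            continue;                                /* new covers added: re-optimize */

        /* Both GA runs failed: the ILP either escapes the local optimum by
         * producing one attractive cover, or proves optimality. */
        if (!cg->solve_aux_ilp(cg->ctx, cg->y))
            return;
    }
}
```

The key point is that the comparatively expensive ILP is reached only when two consecutive GA runs fail, so it either supplies a single attractive cover (escaping the local optimum) or certifies that none exists, which proves optimality of the current master solution.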
Fig. 1. A simple directional sensor network with n = 9, m = 4, r = 4 and b_i = 1 ∀i ∈ {0, ..., 8}. Targets are labelled t0, t1, t2 and t3 and sensors are labelled s0, ..., s8.
Our computational results prove this point. Our proposed column generation approach will be referred to as GA + ILP hereafter.

Our proposed approach belongs to a new and emerging application domain for metaheuristics, where they are used within a column generation scheme to speed up the column generation process. Filho and Lorena [23] produced the seminal work in this domain when they used a genetic algorithm to solve the auxiliary problem within a column generation approach for graph coloring. Optimality of the solution is not guaranteed in this scheme, as the auxiliary problem is solved using the genetic algorithm alone. To guarantee the optimality of the solution obtained through a column generation approach, Puchinger and Raidl [24,25] were the first to use a multilevel approach for the auxiliary problem. In their column generation approach for the three-stage two-dimensional bin packing problem, a four-level approach is used to solve the auxiliary problem. First, a greedy heuristic is used to solve the auxiliary problem, failing which a genetic algorithm is used. If the genetic algorithm also fails, then an ILP approach is used to solve a restricted version of the auxiliary problem. If this also fails, then the full version of the auxiliary problem is solved through ILP. Such approaches, where a metaheuristic is hybridized with a mathematical programming technique, are termed matheuristic approaches [26].

The remainder of this paper is organized as follows. Section 2 provides a brief survey of related works in the literature. Section 3 introduces some definitions and states
useful properties for restricting the search space. Section 4 describes our GA + ILP approach for the LM-DS problem. Detailed computational results are presented in Section 5. Finally, Section 6 contains some concluding remarks and directions for future research.

2. Related work

Ma and Liu [27] produced the seminal work in the area of directional sensor networks, where they analyzed sensor deployment strategies for satisfying given area coverage requirements. Since then, a lot of work has been done on various aspects of directional sensor networks. In this section, we survey only those works which are related to the topic of this paper, i.e., prolonging the lifetime of directional sensor networks. For a detailed literature survey on various aspects of DSNs, the reader is referred to [28].

Ai and Abuzeid [29] described the maximum coverage with minimum number of sensors (MCMS) problem, which consists of covering as many targets as possible with as few sensors as possible, and proposed an integer linear programming (ILP) formulation, a centralized greedy algorithm (CGA) and a distributed greedy algorithm (DGA) for this problem. They defined the network lifetime as the time until half of the sensors deplete their energy and proposed the sensing neighborhood cooperative sleeping (SNCS) protocol for prolonging the network lifetime. The SNCS protocol performs dynamic scheduling among sensors based on the amount of their residual energy.
Fig. 2. An optimal solution with nine covers and a total lifetime of 2.25 units for the LM-DS problem in the directional sensor network of Fig. 1.
This protocol consists of two phases: scheduling and sensing. In the scheduling phase, all sensor nodes become active. Thereafter, the DGA algorithm is executed to determine the status of each sensor node in the subsequent sensing phase by taking into consideration the residual energies of the sensors. In the sensing phase, depending on the outcome of the DGA algorithm, inactive sensor nodes turn off their sensing and communication units and the remaining active sensors perform their tasks. These two phases are repeated periodically. The SNCS protocol involves a trade-off between coverage enhancement and prolonging the network lifetime. Actually, the SNCS protocol was designed with the intention of achieving energy balancing across the network while providing a solution to the MCMS problem.

Cai et al. [9,10] proposed a MILP formulation for LM-DS and relaxed the integrality constraints to obtain heuristic algorithms. The first heuristic, called Progressive [9,10], is obtained by extending the LP-MSC heuristic proposed in [30] for lifetime maximization in omnidirectional sensor networks. The Progressive algorithm follows an iterative
process. During each iteration, it solves the LP relaxation of the LM-DS problem and then, using greedy rules, eliminates the conflicting directions followed by redundant sensors from covers and determines the current network lifetime. The algorithm stops if an iteration fails to increase the lifetime by a significant amount. The second algorithm, called Feedback [10], is an improvement over the Progressive algorithm in the sense that only one cover is added to the set of covers in each iteration and, in all, no more than K covers are generated. Fewer than K covers may be generated if an iteration fails to find a cover. At the end of the Feedback algorithm, all generated covers are rescheduled by following a process similar to a single iteration of the Progressive algorithm. Cai et al. [10] also proposed two more heuristics that do not use the LP relaxation of the LM-DS problem. The first heuristic, called MDCS-Greedy, iteratively generates covers and schedules each of them for a fixed duration dt. This process is repeated until it is no longer possible to get a cover which can be scheduled for dt. They also proposed a distributed heuristic called MDCS-Dist, which consists of
rounds of scheduling and sensing phases like the SNCS protocol. Like MDCS-Greedy, each sensing phase lasts for dt. The sensors that will be active in the sensing phase are determined by a scheme based on the priority of targets. The priority of a target is inversely proportional to the number of sensors covering it. As per the results reported in [10], among these heuristics, the Feedback heuristic performed the best in terms of solution quality, but required a longer execution time.

Wang et al. [11] proposed a randomized heuristic for the LM-DS based on an iterative process. During each iteration, a fixed number of feasible covers are generated randomly and then all covers generated since the beginning, including these newly generated covers, are scheduled using linear programming to maximize the lifetime. The algorithm terminates if four consecutive iterations fail to improve the lifetime by a significant amount. Yang et al. [31] describe two greedy heuristics for a variation of the LM-DS problem where different targets require different coverage qualities; a MILP formulation is also provided. Makhoul et al. [32] addressed the problem of adaptive scheduling of wireless video sensor nodes for critical surveillance applications. First, a distributed algorithm is presented for ensuring both coverage of the deployment area and network connectivity by providing multiple cover sets to manage field-of-view redundancies. Then, using behavior functions modeled by modified Bezier curves to define application classes, a multiple-level activity model is proposed that permits adaptive scheduling.

Gil et al. [12] and Gil and Han [13] proposed a greedy heuristic for LM-DS by modifying the MSC-Greedy heuristic of [30]. This heuristic generates as many covers as possible and schedules each of them for a fixed time duration t. Therefore, the higher the number of generated covers, the longer the lifetime. Each cover is built in the following way (a sketch is given at the end of this section): initially, we start with an empty cover and mark all targets as uncovered, and then we iteratively add sensors with positive lifetime to the cover till all targets are covered. During each iteration, first an uncovered target Tc that is covered by the least number of sensors is determined (ties are broken arbitrarily). A sensor which is not yet included in the cover and has the greatest contribution among all candidate sensors that can cover Tc is added to the cover (ties are broken arbitrarily). All those uncovered targets which can now be covered by this newly added sensor are marked as covered, and then another iteration begins. The contribution of a sensor is determined according to a function that is the weighted sum of the residual lifetime of that sensor and the number of uncovered targets that it can cover in the direction required to cover Tc. Once a cover has been generated, the residual lifetimes of all sensors belonging to this cover are decremented by t. Those sensors whose lifetime becomes zero are removed from the list of available sensors for covers to be generated subsequently. This process is repeated till it is not possible to generate any feasible cover.

A generational genetic algorithm is also proposed by Gil and Han [13] for LM-DS. This genetic algorithm represents a chromosome by an N × K matrix, where N is the total number of sensors and K is the maximum number of covers in a
solution, i.e., each row represents a potential cover. A value of k > 0 at the jth column of the ith row indicates that sensor j is present in the ith cover and its kth direction is active, whereas k = 0 at the same location indicates that sensor j is not present in the ith cover. Each cover is scheduled for a fixed time duration t. K is set to n_s/t, where n_s is the number of sensors that can cover the most sparsely covered target through one of their directions. Actually, n_s/t is an upper bound on the number of covers for any scheme which schedules each cover for a fixed duration t. To compute the lifetime of the solution represented by a chromosome, the number of feasible covers K' is determined and multiplied by t. This genetic algorithm always copies the two best chromosomes from the current generation to the next generation, and the remaining chromosomes are selected using the roulette-wheel selection method. Based on the standard one-point crossover, two crossovers are defined: one for exchanging covers between two chromosomes, and another for exchanging sensors along with their respective directions between two covers in the same chromosome. The mutation operator simply changes the value of k at every location (i, j) of the chromosome with a small probability. This change can either be purely random or it can simply be a switch to the immediate left or immediate right direction. Simulation results showed that the genetic algorithm performed slightly better than the greedy heuristic.

In contrast to the heuristic approaches described above, our approach for LM-DS is an exact approach, as already mentioned. Despite this fact, our approach is tested on larger instances with up to 300 sensors (each with 4 directions) and 180 targets. On the other hand, the heuristic approaches proposed by Cai et al. [9,10] and the randomized approach of Wang et al. [11] are tested on networks with up to 80 sensors (each with 3 directions) and 20 targets. The heuristic of Gil et al. [12] is tested on networks with up to 110 sensors (each with 3 directions) and 15 targets. The genetic algorithm of Gil and Han [13] is tested on networks with up to 50 sensors (each with 3 directions) and 10 targets. If we look at Fig. 14 of Cai et al. [10], which reports the run time (in minutes) versus the number of sensors for 10 targets for the various heuristic approaches proposed, the two best performing heuristics, viz. Feedback and Progressive, require a very large amount of time even on instances with 80 sensors (each with 3 directions) and 10 targets. On a 3 GHz CPU with 1 GB memory, Feedback requires on average around 250 min (15,000 s) on these instances to provide a heuristic solution, whereas Progressive requires around 30 min (1800 s). Similarly, on a 1.5 GHz CPU with 512 MB memory, the randomized heuristic of Wang et al. [11] requires on average around 7 s on instances with 80 sensors (each with 3 directions) and 10 targets to provide a heuristic solution which is inferior to that of Feedback. In contrast to these, our proposed approach takes around 0.1 s on average on instances with 100 sensors (each with 3 directions) and 10 targets to return a proven optimal solution on a Core 2 Quad CPU running at 2.83 GHz with 4 GB memory. Presumably, all these existing approaches could not be tested on larger instances either due to prohibitive execution times or due to significant deterioration in solution quality.
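For illustration, the following rough sketch outlines the greedy cover construction of Gil et al. [12,13] as summarized above. It is not their code: the data layout, the tie-breaking and the weights W_LIFE and W_COV of the contribution function are assumptions made only for this sketch.

```c
/* Rough sketch of the greedy cover construction of Gil et al. [12,13] as
 * summarized above (not their code).  can_cover[i][g][k] is 1 if sensor i
 * covers target k when its (g+1)-th direction is used, residual[i] is the
 * remaining lifetime of sensor i, and cover[i] is 0 if sensor i is not
 * selected or the chosen direction in 1..r otherwise.  The weights W_LIFE
 * and W_COV of the contribution function are illustrative assumptions. */

#define W_LIFE 1.0
#define W_COV  1.0

/* Builds one cover; returns 1 on success, 0 if some target cannot be covered. */
int greedy_cover(int n, int r, int m, int can_cover[n][r][m],
                 const double *residual, int *cover)
{
    int covered[m];
    for (int k = 0; k < m; k++) covered[k] = 0;
    for (int i = 0; i < n; i++) cover[i] = 0;

    for (;;) {
        /* Pick the uncovered target Tc covered by the fewest available
         * (sensor, direction) pairs; ties are broken by the first found. */
        int tc = -1, best_deg = n * r + 1;
        for (int k = 0; k < m; k++) {
            if (covered[k]) continue;
            int deg = 0;
            for (int i = 0; i < n; i++)
                for (int g = 0; g < r; g++)
                    if (!cover[i] && residual[i] > 0.0 && can_cover[i][g][k])
                        deg++;
            if (deg < best_deg) { best_deg = deg; tc = k; }
        }
        if (tc < 0) return 1;           /* every target is covered        */
        if (best_deg == 0) return 0;    /* Tc cannot be covered any more  */

        /* Among candidates covering Tc, pick the greatest contribution:
         * weighted sum of residual lifetime and newly covered targets. */
        int bi = -1, bg = -1;
        double best = -1.0;
        for (int i = 0; i < n; i++) {
            if (cover[i] || residual[i] <= 0.0) continue;
            for (int g = 0; g < r; g++) {
                if (!can_cover[i][g][tc]) continue;
                int gain = 0;
                for (int k = 0; k < m; k++)
                    if (!covered[k] && can_cover[i][g][k]) gain++;
                double contrib = W_LIFE * residual[i] + W_COV * gain;
                if (contrib > best) { best = contrib; bi = i; bg = g; }
            }
        }
        cover[bi] = bg + 1;
        for (int k = 0; k < m; k++)
            if (can_cover[bi][bg][k]) covered[k] = 1;
    }
}
```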
3. Definitions and properties

3.1. Definitions

Let n be the number of sensors and m be the number of targets. For all i ∈ {1, ..., n}, b_i is the total battery life of sensor i. If b_i is equal to one hour, sensor i can be used for one hour continuously, or this one hour can be split into several periods of time during which sensor i is active (it is in the inactive state in between). Each sensor is associated with r work directions, which are referred to as sectors. A work direction is an angular sector centered at the sensor. Sectors are numbered in {1, ..., r}. D_{n+g+r(i-1)} is the set of targets covered by sensor i when sector g is active, for all i ∈ {1, ..., n} and for all g ∈ {1, ..., r} (note that D_1, ..., D_n are not used). For all targets k ∈ {1, ..., m}, C_k is defined as the set of all pairs (i, g) ∈ {1, ..., n} × {1, ..., r} such that sensor i covers target k when its sector g is selected. In practice, each pair (i, g) is represented by the integer n + g + r(i-1). A cover S_j is an n(r+1)-component binary column vector that specifies which sensors are part of the cover, and which sector is used:

$$S_j = \big[\, a_{1,j}, \ldots, a_{n,j},\ a_{n+1,j}, \ldots, a_{n+r,j},\ a_{(n+r)+1,j}, \ldots, a_{(n+r)+r,j},\ \ldots,\ a_{n+(n-1)r+1,j}, \ldots, a_{n+(n-1)r+r,j} \,\big]^{\mathrm{T}}$$

The n first elements represent sensor activity: a_{i,j} is set to one if and only if sensor i is part of the cover, for all i ∈ {1, ..., n}. The nr last elements represent sector activity: a_{n+g+r(i-1),j} = 1 if and only if sensor i is part of S_j and its sector g is used, for all (i, g) ∈ {1, ..., n} × {1, ..., r}. For S_j to be a consistent cover, the following three conditions must hold: if sensor i is not part of S_j, then none of its sectors is active; if sensor i is part of S_j, exactly one of its sectors is active; and all the targets must be under the coverage of at least one active sector. The following constraints ensure that the first two conditions are satisfied (with a_{i,j} ∈ {0, 1} for all i ∈ {1, ..., n(r+1)}):

$$\sum_{g=1}^{r} a_{n+g+r(i-1),j} = a_{i,j} \qquad \forall i \in \{1, \ldots, n\}$$

The last condition is enforced by

$$\sum_{\substack{i \in \{1,\ldots,n\},\ g \in \{1,\ldots,r\} \\ n+g+r(i-1) \in C_k}} a_{n+g+r(i-1),j} \ge 1 \qquad \forall k \in \{1, \ldots, m\}$$

These two constraints (plus integrality requirements on a_{i,j}) are part of the auxiliary problem.

3.2. Cover properties

Two lemmas are proposed in this section for restricting the search for attractive covers to non-dominated covers. A valid cover is a cover in which all targets are covered. A dominated cover is a valid cover S_j such that there exists another valid cover S_{j'} with a_{n+g+r(i-1),j'} ≤ a_{n+g+r(i-1),j} for all pairs (i, g) ∈ {1, ..., n} × {1, ..., r}, and there exists at least one pair (i_0, g_0) in {1, ..., n} × {1, ..., r} such that a_{n+g_0+r(i_0-1),j'} < a_{n+g_0+r(i_0-1),j}. Therefore, in any non-dominated cover, removing any sensor compromises coverage. Consequently, a non-dominated cover can be built from any valid cover by removing sensors as long as it is possible to do so without compromising target coverage. Algorithm 1 is a refining procedure which can be used for building a non-dominated cover from any valid cover.

Algorithm 1. Refining S_j (under binary encoding) into a non-dominated cover.
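The listing of Algorithm 1 is not reproduced here; the following minimal sketch shows the refining idea described above under the binary encoding of Section 3.1. The identifiers (all_targets_covered, refine_cover, Ck, Ck_size) are illustrative assumptions, with Ck[k] assumed to hold the 0-based indices n + g + r(i-1) - 1 of the sector variables able to cover target k.

```c
/* Minimal sketch of the refining idea behind Algorithm 1 (whose listing is
 * not reproduced here): repeatedly drop sensors whose removal leaves every
 * target covered.  a[] is the n(r+1) binary cover vector of Section 3.1 in
 * 0-based form, and Ck[k] is assumed to hold the Ck_size[k] 0-based indices
 * n + g + r(i-1) - 1 of the sector variables able to cover target k.  The
 * identifiers are illustrative, not taken from the authors' code. */

/* Returns 1 if every target in 0..m-1 is covered by an active sector. */
static int all_targets_covered(const int *a, int m, int **Ck, const int *Ck_size)
{
    for (int k = 0; k < m; k++) {
        int covered = 0;
        for (int u = 0; u < Ck_size[k] && !covered; u++)
            covered = a[Ck[k][u]];
        if (!covered)
            return 0;
    }
    return 1;
}

/* Refines a valid cover a[] into a non-dominated one: tentatively switch
 * off each active sensor and keep the removal only if coverage survives. */
void refine_cover(int *a, int n, int r, int m, int **Ck, const int *Ck_size)
{
    for (int i = 1; i <= n; i++) {
        if (!a[i - 1])
            continue;                           /* sensor i is not in the cover */
        for (int g = 1; g <= r; g++) {
            int idx = n + g + r * (i - 1) - 1;  /* 0-based sector variable      */
            if (!a[idx])
                continue;
            a[i - 1] = 0;                       /* tentatively remove sensor i  */
            a[idx] = 0;
            if (!all_targets_covered(a, m, Ck, Ck_size)) {
                a[i - 1] = 1;                   /* removal broke coverage: undo */
                a[idx] = 1;
            }
        }
    }
}
```

Since a sensor is only removed when every target remains covered without it, the resulting cover is valid, and every sensor that remains is essential, i.e., the cover is non-dominated in the sense defined above.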
Lemma 1. There exists an optimal solution to LM-DS in which only non-dominated covers are used for a non-zero amount of time.

Proof. Lemma 1 is proved by contradiction. Assume that every optimal solution to LM-DS uses at least one dominated cover for a non-zero amount of time. Then, any dominated cover can be refined into a non-dominated cover
which can be used instead of the dominated cover in the optimal solution for the same amount of time. Doing so does not violate any constraint, as the non-dominated cover is a valid cover whose set of sensors is a subset of that of the original dominated cover. By repeating this process for all dominated covers used for a non-zero amount of time in the optimal solution, an alternative optimal solution using non-dominated covers only is obtained, which contradicts our assumption. Hence an optimal solution to LM-DS always exists in which only non-dominated covers are used for a non-zero amount of time. □

A corollary of Lemma 1 is that we can restrict the search for an optimal solution to LM-DS to non-dominated covers only.

Lemma 2. A non-dominated cover has at most m sensors.

Proof. Each target requires at least one sensor to be active, so at most m different sensors are required in any valid cover. Moreover, if a valid cover has strictly more than m sensors, then it is dominated, as at least one sensor can be removed without compromising target coverage. □

Lemma 2 can be seen as a necessary condition for a cover to be non-dominated (it is not sufficient).
4. Problem formulation and solution approach

This section describes our proposed approach (GA + ILP) for the LM-DS problem. First of all, a mixed integer linear programming formulation is presented in Section 4.1 and it is shown to be unsuitable even for solving small-sized instances. Subsequent subsections describe our proposed column generation based approach.

4.1. A mixed integer linear programming model for LM-DS

LM-DS can be formulated as follows:

Maximize
$$\sum_{j=0}^{c-1} t_j \qquad (1)$$
Subject to
$$\sum_{j=0}^{c-1} \left( \sum_{g=0}^{r-1} x_{ir+g,j} \right) t_j \le b_i \qquad \forall i \in \{0, \ldots, n-1\} \qquad (2)$$
$$\sum_{g=0}^{r-1} x_{ir+g,j} \le 1 \qquad \forall i \in \{0, \ldots, n-1\},\ \forall j \in \{0, \ldots, c-1\} \qquad (3)$$
$$\sum_{(i,g) \in C_k} x_{ir+g,j} \ge 1 \qquad \forall k \in \{0, \ldots, m-1\},\ \forall j \in \{0, \ldots, c-1\} \qquad (4)$$
$$\sum_{i=0}^{n-1} \sum_{g=0}^{r-1} x_{ir+g,j} \le m \qquad \forall j \in \{0, \ldots, c-1\} \qquad (5)$$
$$x_{ir+g,j} = 0 \qquad \forall (i,g) \in \{0, \ldots, n-1\} \times \{0, \ldots, r-1\} : D_{ir+g} = \emptyset,\ \forall j \in \{0, \ldots, c-1\} \qquad (6)$$
$$x_{ir+g,j} \in \{0, 1\} \qquad \forall i \in \{0, \ldots, n-1\},\ \forall g \in \{0, \ldots, r-1\},\ \forall j \in \{0, \ldots, c-1\} \qquad (7)$$
$$t_j \ge 0 \qquad \forall j \in \{0, \ldots, c-1\} \qquad (8)$$

Eq. (1) is the objective function, which aims at maximizing the network lifetime. Eq. (2) ensures that battery limitations are satisfied for all sensors. Eq. (3) enforces that at most one work direction is used in each sensor in every cover. Eq. (4) makes sure that each target is covered by at least one sensor in each cover. Eq. (5) states that the maximum number of sensors in a cover is m. Eq. (6) states that if a work direction does not cover any target, then it should not be used in a cover. Finally, Eqs. (7) and (8) enforce integrality and non-negativity requirements, respectively.

This mathematical model is non-linear because of Eq. (2), where x_{ir+g,j} and t_j are multiplied. In order to linearize this model, we introduce a new set of decision variables denoted by z_{ir+g,j}, replacing x_{ir+g,j} t_j for all i ∈ {0, ..., n-1}, g ∈ {0, ..., r-1} and j ∈ {0, ..., c-1}. We obtain a linearized model by removing Eq. (2), and by introducing Eqs. (9)–(13):

$$\sum_{j=0}^{c-1} \sum_{g=0}^{r-1} z_{ir+g,j} \le b_i \qquad \forall i \in \{0, \ldots, n-1\} \qquad (9)$$
$$z_{ir+g,j} \le t_j \qquad \forall i \in \{0, \ldots, n-1\},\ \forall g \in \{0, \ldots, r-1\},\ \forall j \in \{0, \ldots, c-1\} \qquad (10)$$
$$z_{ir+g,j} \le x_{ir+g,j} \qquad \forall i \in \{0, \ldots, n-1\},\ \forall g \in \{0, \ldots, r-1\},\ \forall j \in \{0, \ldots, c-1\} \qquad (11)$$
$$z_{ir+g,j} \ge t_j + x_{ir+g,j} - 1 \qquad \forall i \in \{0, \ldots, n-1\},\ \forall g \in \{0, \ldots, r-1\},\ \forall j \in \{0, \ldots, c-1\} \qquad (12)$$
$$z_{ir+g,j} \ge 0 \qquad \forall i \in \{0, \ldots, n-1\},\ \forall g \in \{0, \ldots, r-1\},\ \forall j \in \{0, \ldots, c-1\} \qquad (13)$$

Eq. (9) is the linearized version of Eq. (2), and Eqs. (10)–(13) are the classical constraints required for z_{ir+g,j} to be equal to x_{ir+g,j} t_j. The linearized model for LM-DS has been solved with the commercial solver Xpress-MP [33] on the very small instance shown in Fig. 1. No optimal solution was found even after one hour of computational time, whereas the proposed column generation based approach (described in the subsequent subsections) can optimally solve it in 0.007 s. This shows that column generation is particularly efficient for this type of problem, as can also be seen in [18].

In order to apply column generation, LM-DS is split into a master problem and an auxiliary problem that are introduced in Section 4.2 and Section 4.3, respectively. Section 4.4 presents the genetic algorithm that is used as the primary way to generate covers for the master problem.

4.2. Master problem

The master problem aims at maximizing the lifetime using a given set of p covers. It can be stated as the following linear program:
Maximize
$$\sum_{j=1}^{p} t_j \qquad (14)$$
Subject to
$$\sum_{j=1}^{p} a_{i,j}\, t_j \le b_i \qquad \forall i \in \{1, \ldots, n\} \qquad (15)$$
$$\sum_{j=1}^{p} a_{n+g+r(i-1),j}\, t_j \le b_i \qquad \forall i \in \{1, \ldots, n\},\ \forall g \in \{1, \ldots, r\} \qquad (16)$$
$$t_j \ge 0 \qquad \forall j \in \{1, \ldots, p\} \qquad (17)$$

In the master problem defined by Eqs. (14)–(17), the a_{i,j} are not decision variables; they are input to the master problem. More precisely, for any given cover j, the a_{i,j} specify which sensors are part of the cover, and which work direction for each sensor is actually used in the cover. Eqs. (15) and (16) enforce that the lifetime of sensor i cannot exceed b_i, whatever the work directions and the covers used in the solution. The second set of constraints is obviously redundant because of the first set of constraints. However, it is kept in the LP so as to take advantage of the n(r+1) associated dual variables in the auxiliary problem. If these constraints were absent, it would not be possible to distinguish between the sectors of a sensor, and the auxiliary problem would not be able to build a valid cover.

The master problem is initialized with a single cover (p = 1) which is full of zeros. Its contribution to the objective function is also zero. Thus, the master problem has an initial solution with a zero lifetime. The motivation for such a choice is twofold: first, finding a 'good' initial cover is not easy, and second, the genetic algorithm returns several profitable covers very fast, so the impact of the first cover is not significant.

The auxiliary problem is to generate an attractive cover S_j for the master problem, i.e., it has to compute a_{i,j} for all i ∈ {1, ..., n} and a_{n+g+r(i-1),j} for all i ∈ {1, ..., n} and for all g ∈ {1, ..., r}. The auxiliary problem is first stated as an ILP, then a genetic algorithm (GA) is designed for generating attractive covers.

4.3. Auxiliary problem ILP formulation

Since the auxiliary problem is to compute an attractive cover S_j, it should have n(r+1) decision variables. But it can be observed that it is sufficient to know that sector n + g + r(i-1) is used for deducing that sensor i is active. Consequently, the actual decision variables of the ILP formulation of the auxiliary problem are a_{i,j} for all i ∈ {n+1, ..., n(r+1)}. Then, knowing that a sensor is part of the cover if and only if one of its sectors is used, the numerical value of a_{i,j} for all i ∈ {1, ..., n} is deduced from the decision variables with the following post-processing formula:

$$a_{i,j} = \sum_{g=1}^{r} a_{n+g+r(i-1),j} \qquad \forall i \in \{1, \ldots, n\}$$

The objective function is the reduced cost value associated with the new cover, which is 1 − y^T S_j, where y is the n(r+1)-vector of dual variables associated with constraints (15) and (16) of the master problem. Note that for all i ∈ {1, ..., n}, a_{i,j} y_i is replaced with Σ_{g=1}^{r} a_{n+g+r(i-1),j} y_i. The auxiliary problem can then be stated as follows:

Maximize
$$1 - \sum_{i=1}^{n} \sum_{g=1}^{r} a_{n+g+r(i-1),j} \left( y_i + y_{n+g+r(i-1)} \right) \qquad (18)$$
Subject to
$$\sum_{g=1}^{r} a_{n+g+r(i-1),j} \le 1 \qquad \forall i \in \{1, \ldots, n\} \qquad (19)$$
$$\sum_{\substack{i \in \{1,\ldots,n\},\ g \in \{1,\ldots,r\} \\ n+g+r(i-1) \in C_k}} a_{n+g+r(i-1),j} \ge 1 \qquad \forall k \in \{1, \ldots, m\} \qquad (20)$$
$$\sum_{i=1}^{n} \sum_{g=1}^{r} a_{n+g+r(i-1),j} \le m \qquad (21)$$
$$a_{i,j} = 0 \qquad \forall i \in \{n+1, \ldots, n(r+1)\} : D_i = \emptyset \qquad (22)$$
$$a_{n+g+r(i-1),j} \in \{0, 1\} \qquad \forall i \in \{1, \ldots, n\},\ \forall g \in \{1, \ldots, r\}$$

Constraint (19) states that at most one sector is used in any sensor. Constraint (20) enforces that each target is covered by at least one sensor in the cover. Eq. (21) is the necessary condition for a cover to be non-dominated (see Lemma 2). Since this condition is not sufficient, Algorithm 1 is run as a post-processing procedure for making the cover non-dominated. Eq. (22) states that if there is no target in the work direction of a sensor (i.e., D_i = ∅), then no cover should use that work direction.

4.4. A genetic algorithm for the auxiliary problem

Solving the auxiliary problem to optimality through the ILP approach can sometimes be very time consuming and yields a single cover. To overcome these drawbacks, we have designed a genetic algorithm, which is referred to as GA. First, the auxiliary problem is attempted through GA. Upon failure, GA is applied afresh once more, and ILP is used only as a last resort (i.e., when both runs of GA have failed to find any attractive cover). Therefore, the role of the ILP approach is restricted to escaping from local optima or proving the optimality of the current solution to the master problem.

Chromosome representation. Each chromosome represents a cover and is encoded using an integer vector of length n, where n is the total number of sensors. A value of g ∈ {1, ..., r} at the ith position indicates that sensor i is in the cover with sector g as its work direction, whereas a 0 at the ith position indicates that sensor i is not in the cover. For example, the first cover in Fig. 2 is encoded as 033440000, if we assume that for all sensors in this example, the upper right direction is the first direction, the upper left direction is the second direction, the lower left direction is the third direction and the lower right direction is the fourth direction. Similarly, cover 9 in this figure is encoded as 440400002. It should be stressed that the covers are represented in a different way in GA than in the master problem. Therefore, whenever a cover is passed to the master problem from GA, it is converted into the format required by the master problem. There are two reasons for using a different representation in GA. The first reason is ease of use: our genetic operators (crossover and mutation)
can be applied more easily with the representation used here than with the representation used in the master problem. The second reason is efficiency: the chromosome length is n here, whereas the representation used in the master problem would have resulted in length n(r+1). This leads to considerable savings in execution time.

Fitness. To determine the fitness of a cover, two fitness functions are used. The primary fitness function is the same as the objective function of the auxiliary problem, which needs to be maximized. The secondary fitness function is the number of sensors in the cover, which needs to be minimized. The secondary fitness function is required to differentiate between two covers having the same primary fitness function value. A cover is considered to be better than another if either it has a higher value according to the primary fitness function, or the primary fitness values are equal but it has fewer sensors than the other. The reason for using two fitness functions lies in the fact that many covers have the same value for the primary fitness function.

Selection. We have used the binary tournament selection method for selecting the two parents for crossover. In this method, to select a parent, two chromosomes are picked uniformly at random from the current population and their fitnesses are compared. The chromosome with better fitness is selected to be a parent with probability pbt; otherwise the chromosome with worse fitness is selected. The second parent is selected likewise.

Crossover and mutation operators. We have used the uniform crossover operator to construct a child chromosome from the two selected parents. This crossover operator constructs a child chromosome position by position. The value at each position of the child is copied from the corresponding position of one of the two parents. For each position, one of the two parents is selected uniformly at random to contribute the value. The child chromosome obtained through crossover is subjected to mutation. The mutation operator considers each sensor one by one and checks whether it is present in the child or not. If a sensor is present in the child, the mutation operator deletes it with probability pm. If a sensor is absent, then, with probability pm, the mutation operator inserts it with a randomly chosen direction as its work direction. (A sketch of both operators is given after the description of the repair operator.)

Repair operator. The repair operator is needed because there is no guarantee of the feasibility of the child chromosome obtained through the crossover and mutation operators. Our repair operator transforms the child not only into a feasible cover, but into a good non-dominated cover. It consists of two stages. The first stage transforms the child into a feasible cover. The second stage tries to improve the value of the primary fitness function by removing some redundant sensors.

The first stage considers each target one by one and, if the target under consideration is not covered by any sensor, it first tries to add a new sensor to the cover in such a way that the target in question is covered by one of the directions of the new sensor and that leads to the minimum reduction in the value of the primary fitness function. If such a
sensor is found, then it is added to the cover with the work direction that can cover the target in question, and the coverage information of all affected targets is updated. If no such sensor exists, then covering the target in question requires a change in the work direction of one of the sensors already belonging to the cover. Such targets are added to a list of uncovered targets and processed separately after all targets have been considered once.

Targets belonging to the list of uncovered targets are covered one by one in an iterative manner. During each iteration a target, say k, is selected randomly and a sensor s that can cover this target is also selected randomly from C_k. Obviously, s is already present in the cover, but with a work direction that cannot cover k. Now, the work direction of s is changed so that it covers target k, and the coverage information of all affected targets is updated. Target k, along with all other uncovered targets which are now covered by the new work direction of s, is deleted from the list of uncovered targets. As a result of the change in the work direction of s, some new targets may become uncovered. These newly uncovered targets are added to the list of uncovered targets. They are processed in an analogous manner, except for one difference: these targets may be coverable by a sensor that is not present in the cover. Therefore, whenever such a target is selected in an iteration, a check is made to determine whether a sensor s that is not already present in the cover can cover this target (ties are broken arbitrarily). If so, then s is added to the cover and the coverage information of the affected targets is updated; otherwise a change in the work direction of a sensor already belonging to the cover is required, and hence the target is processed as described already. The whole process is repeated till the list of uncovered targets becomes empty.

The second stage deletes from the cover those sensors all of whose covered targets are also covered by other sensors. It follows an iterative process. During each iteration, it begins by finding the set Xr of those sensors in the cover all of whose targets are redundantly covered. Then, from Xr, a sensor s whose removal from the cover will lead to the maximum increase in the value of the primary fitness function is selected. The sensor s is removed from the cover and the coverage information of the affected targets is updated. This process is repeated as long as Xr is nonempty.
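A minimal sketch of the crossover and mutation operators on the integer-vector encoding is given below; rand01, PM and the function names are illustrative assumptions rather than the authors' code, and the repair operator described above is assumed to be applied to the resulting child afterwards.

```c
/* Minimal sketch of the chromosome encoding and of the uniform crossover
 * and mutation operators described above.  A chromosome is an integer
 * vector of length n: chrom[i] = g in 1..r means sensor i+1 is in the
 * cover with sector g as work direction, chrom[i] = 0 means it is absent.
 * rand01() and the parameter PM (the mutation probability p_m) are
 * illustrative names, not taken from the authors' implementation. */
#include <stdlib.h>

#define PM 0.05                          /* mutation probability p_m */

static double rand01(void) { return rand() / (RAND_MAX + 1.0); }

/* Uniform crossover: each position of the child is copied from one of
 * the two parents, chosen uniformly at random. */
void uniform_crossover(const int *p1, const int *p2, int *child, int n)
{
    for (int i = 0; i < n; i++)
        child[i] = (rand01() < 0.5) ? p1[i] : p2[i];
}

/* Mutation: with probability PM, a sensor present in the child is
 * deleted; with probability PM, an absent sensor is inserted with a
 * randomly chosen work direction in 1..r. */
void mutate(int *child, int n, int r)
{
    for (int i = 0; i < n; i++) {
        if (rand01() >= PM)
            continue;
        if (child[i] > 0)
            child[i] = 0;                /* delete the sensor            */
        else
            child[i] = 1 + rand() % r;   /* insert with a random sector  */
    }
}
```

Since the child produced this way may leave some targets uncovered, the repair operator is always applied to it before fitness evaluation.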
same individual may exist in the population. Normally, these individuals are among the best individuals, but they can quickly dominate the whole population. Under such a situation, crossover becomes totally ineffective and the mutation becomes the only way to improve solution quality. Therefore, improvement, if any, in solution quality is quite slow under such a situation. Such a situation is known as the premature convergence. In the steady-state method, we can easily avoid this situation by comparing each newly generated child with the current population members, and discarding the child, if it is identical to one of the current members. Here, we have followed the policy that if the generated child is unique with respect to current population members then it will always replace the worst member of the population irrespective of its own fitness, otherwise it is discarded. Initial population generation. Each member of the initial population is generated randomly by following a two stage procedure. Before beginning the first stage a decision is made regarding selecting some sensors using a greedy policy in this stage or selecting all sensors randomly. The parameter pi controls this decision. With probability pi, we set the probability of greedy selection pg to a non-zero value, otherwise it is set to zero. The first stage follows an iterative process. Initially all targets are uncovered. During each iteration it selects an uncovered target randomly. Then a check is made to determine whether this target can be covered by a sensor not yet included in the cover. If there are more than one such sensors then from these sensors, with probability pg, we add a sensor to the cover whose inclusion leads to the least reduction in the value of the primary fitness function, otherwise a sensor is chosen randomly and added to the cover. If no such sensor exists then that means a change is required in the work direction of a sensor already present in the cover. In this case, a sensor which is already present in the cover and can cover the target in question is selected randomly and its work direction is changed so that it can now cover the target in question. Once we have covered the target in question, the coverage information of all affected targets are updated. The targets in question along with all previously uncovered targets which are now covered are removed from the list of uncovered targets and any target which becomes uncovered now is added to the list of uncovered targets. After that another iteration begins. This process is repeated till all target are covered. The second stage is exactly same as the second stage of repair operator. Each newly generated chromosome is compared with the population members generated so far, and, if it is identical to one of them then it is discarded, otherwise it is included in the initial population. Other features. If the genetic algorithm is successful in finding an attractive cover then up to UB_COVER best attractive covers are returned to the master problem in each iteration of the column generation process thereby accelerating convergence. If the genetic algorithm fails to find even one attractive cover, it is applied afresh one more time before using the ILP based approach. The stopping criterion that is used for our genetic algorithm is the maximum number of consecutive iterations
MAX_IT without improvement in fitness of the best solution. However, the value of MAX_IT varies over the set CHOICES = {250, 500, 750, 1000, 1500} from one run to the other. If in the previous run genetic algorithm returned less than UB_COVER attractive covers then MAX_IT is set to next higher value, if possible, in CHOICES, otherwise MAX_IT is set to next lower value, if possible, in CHOICES. MAX_IT is set to 250 in the very first run of the genetic algorithm for all instances with more than 50 sensors. For instances with not more than 50 sensors, MAX_IT is set to 50 in the very first run of the genetic algorithm and CHOICES contains 50 also. The genetic algorithm can also terminate prematurely if it fails to find a solution different from current population members in 10 consecutive trials.
5. Computational results

Our proposed approach (GA + ILP) has been implemented in C and executed on an Intel Core 2 Quad system with 4 GB RAM running under Ubuntu 9.04 at 2.83 GHz. The GLPK [35] library is used to solve the LPs and ILPs. To reduce the execution time, those covers which are used for a zero amount of time are discarded when the number of covers exceeds 20n in the master problem. Whenever the ILP is used to solve the auxiliary problem, Algorithm 1 is applied to transform the cover found by the ILP approach into a non-dominated cover. As the genetic algorithm generates non-dominated covers only, Algorithm 1 is not needed for covers generated by the genetic algorithm. For the sake of comparison, we have also implemented the column generation approach without the genetic algorithm, where the ILP approach alone is used to solve the auxiliary problem. This approach will be referred to as ILP Alone subsequently.

Our genetic algorithm uses a population of min(n, 100) individuals. The parameter UB_COVER, which determines the maximum number of best attractive covers that the genetic algorithm can return to the master problem, is set to 10. The mutation probability pm is set to 0.05. The probability pi with which a partially greedy policy is used for sensor selection when generating a member of the initial population is set to 0.9. If the partially greedy policy is used in generating a member of the initial population, then sensors are selected using the greedy policy with probability pg = 0.4. Two different values of pbt are used for selecting the two parents through binary tournament selection. The first parent is selected with pbt = 0.9, whereas the second parent is selected with pbt = 0.8. These parameter values were chosen empirically after a large number of trials. They may not be optimal for all instances, but provide good results on all instances.

The instances that we have used in our computational experiments have been generated in the following manner. n directional sensors, each with r non-overlapping directions, and m targets are placed at random in a 500 m × 500 m area, where n ∈ {50, 100, 150, 200, 300}, m ∈ {0.3n, 0.6n} and r ∈ {3, 4}. The sensing range of each sensor is set to 150 m and the initial battery life of each sensor is set to 1 time unit. For each combination of n, m and r, five instances have been generated, leading to a total of 100 instances. Each instance name is of the form dirnXXXmYYYrZZZsVViWW, where XXX is the number of
1016
A. Singh, A. Rossi / Ad Hoc Networks 11 (2013) 1006–1021
Table 1 Results on instances with m = 0.3n & r = 3. Instance
dirn050m015r150s03i00 dirn050m015r150s03i01 dirn050m015r150s03i02 dirn050m015r150s03i03 dirn050m015r150s03i04 dirn100m030r150s03i00 dirn100m030r150s03i01 dirn100m030r150s03i02 dirn100m030r150s03i03 dirn100m030r150s03i04 dirn150m045r150s03i00 dirn150m045r150s03i01 dirn150m045r150s03i02 dirn150m045r150s03i03 dirn150m045r150s03i04 dirn200m060r150s03i00 dirn200m060r150s03i01 dirn200m060r150s03i02 dirn200m060r150s03i03 dirn200m060r150s03i04 dirn300m090r150s03i00 dirn300m090r150s03i01 dirn300m090r150s03i02 dirn300m090r150s03i03 dirn300m090r150s03i04 a
ILP Alone
GA + ILP
Opt./Best
Time (Seconds)
#Cov.
Opt./Best
Time (Seconds)
#Cov.
5.50000 4.00000 4.00000 6.00000 5.00000 9.50000 9.33333 8.50000 9.12500 5.00000 5.00000 11.73360 9.00000 11.75000 11.66670 10.00000 17.35410a 13.20000 12.00000 16.66670 23.38640a 22.38710a 22.72590a 19.00000 15.50000
0.068 0.068 0.056 0.131 0.067 2.558 0.980 5.055 2.026 0.408 0.723 40.318 3.492 6.104 8.643 7.485 3600.000 27.900 11.296 73.187 3600.000 3600.000 3600.000 153.020 75.180
49 39 34 83 50 293 183 431 257 47 46 935 139 312 361 147 2033 502 237 862 2637 2282 2197 733 378
5.50000 4.00000 4.00000 6.00000 5.00000 9.50000 9.33333 8.50000 9.12500 5.00000 5.00000 11.73360 9.00000 11.75000 11.66670 10.00000 17.37820a 13.20000 12.00000 16.66670 23.60710a 22.44440 23.17390a 19.00000 15.50000
0.044 0.024 0.019 0.066 0.026 1.453 0.625 3.043 1.071 0.106 0.146 38.266 0.601 2.111 1.695 0.569 3600.000 7.903 1.323 13.966 3600.000 705.917 3600.000 11.832 2.137
90 70 60 160 80 680 420 1000 550 100 80 4608 300 690 530 210 22758 1280 410 1660 31106 19080 33040 910 420
Opt./Best
Time (Seconds)
#Cov.
Opt./Best
Time (Seconds)
#Cov.
5.42857 4.00000 4.00000 5.71429 5.00000 8.77778 8.50000 7.92690 8.70000 5.50000 5.00000 10.86280 9.00000 11.76300 11.25000 10.00000 15.23490a 13.00000 13.00000 15.06500a 19.98100a 18.48870a 18.37610a 18.00000 15.50000
0.162 0.088 0.067 0.172 0.105 8.902 2.472 6.584 5.603 0.604 1.494 157.857 6.020 399.862 24.334 13.634 3600.000 229.942 131.216 3600.000 3600.000 3600.000 3600.000 3041.060 172.726
85 47 44 89 49 548 334 621 574 74 77 1433 232 1383 674 201 1885 1119 1067 2160 2194 1623 1543 1907 697
5.42857 4.00000 4.00000 5.71429 5.00000 8.77778 8.50000 7.92690 8.70000 5.50000 5.00000 10.86280 9.00000 11.76300 11.25000 10.00000 15.24540a 13.00000 13.00000 15.07800a 20.32780a 19.33910a 19.27940a 18.00000 15.50000
0.116 0.035 0.039 0.102 0.032 7.422 2.856 11.046 4.501 0.150 0.171 178.591 0.962 201.962 16.153 0.913 3600.000 54.654 72.904 3600.000 3600.000 3600.000 3600.000 654.460 9.063
230 90 100 220 90 1808 1010 2143 1288 150 110 13205 350 16467 2150 300 25571 3514 4912 26636 44049 34416 32624 15502 890
Non-proven optimal best value.
Table 2 Results on instances with m = 0.3n & r = 4. Instance
dirn050m015r150s04i00 dirn050m015r150s04i01 dirn050m015r150s04i02 dirn050m015r150s04i03 dirn050m015r150s04i04 dirn100m030r150s04i00 dirn100m030r150s04i01 dirn100m030r150s04i02 dirn100m030r150s04i03 dirn100m030r150s04i04 dirn150m045r150s04i00 dirn150m045r150s04i01 dirn150m045r150s04i02 dirn150m045r150s04i03 dirn150m045r150s04i04 dirn200m060r150s04i00 dirn200m060r150s04i01 dirn200m060r150s04i02 dirn200m060r150s04i03 dirn200m060r150s04i04 dirn300m090r150s04i00 dirn300m090r150s04i01 dirn300m090r150s04i02 dirn300m090r150s04i03 dirn300m090r150s04i04 a
ILP Alone
GA + ILP
Non-proven optimal best value.
sensors, YYY is the number of targets, ZZZ is the sensing range, VV is the number of directions and WW is the instance number. To our knowledge, no one has considered so far this much big instances for LM-DS. This is specially signif-
icant as our approach is an exact approach, whereas all previous approaches reported in the literature are heuristic in nature. However, due to significant slowing down of the exact solver, we have not considered instances of even big-
1017
A. Singh, A. Rossi / Ad Hoc Networks 11 (2013) 1006–1021 Table 3 Results on instances with m = 0.6n & r = 3. Instance
dirn050m030r150s03i00 dirn050m030r150s03i01 dirn050m030r150s03i02 dirn050m030r150s03i03 dirn050m030r150s03i04 dirn100m060r150s03i00 dirn100m060r150s03i01 dirn100m060r150s03i02 dirn100m060r150s03i03 dirn100m060r150s03i04 dirn150m090r150s03i00 dirn150m090r150s03i01 dirn150m090r150s03i02 dirn150m090r150s03i03 dirn150m090r150s03i04 dirn200m120r150s03i00 dirn200m120r150s03i01 dirn200m120r150s03i02 dirn200m120r150s03i03 dirn200m120r150s03i04 dirn300m180r150s03i00 dirn300m180r150s03i01 dirn300m180r150s03i02 dirn300m180r150s03i03 dirn300m180r150s03i04 a
ILP Alone
GA + ILP
Opt./best
Time (s)
#Cov.
Opt./best
Time (s)
#Cov.
3.00000 1.00000 4.31629 4.36842 2.00000 3.00000 4.50000 8.41105a 7.41667 6.50000 3.50000 10.00000 10.35290 12.01600a 11.25080a 10.00000 13.00000 13.50000 14.70510a 13.00000 18.43040a 17.78760a 18.57330a 18.78600a 17.99980a
0.144 0.027 1.073 0.421 0.070 0.768 1.474 3600.000 15.114 2.262 2.867 175.163 58.955 3600.000 3600.000 45.221 478.206 2427.283 3600.000 1593.835 3600.000 3600.000 3600.000 3600.000 3600.000
25 4 214 139 19 26 56 1105 412 108 34 663 790 1318 1231 248 776 1317 1278 1040 1309 1296 1332 1454 1278
3.00000 1.00000 4.31629 4.36842 2.00000 3.00000 4.50000 8.41133 7.41667 6.50000 3.50000 10.00000 10.35290 12.04770a 11.29570a 10.00000 13.00000 13.50000 15.04810a 13.00000 20.01410a 19.06220a 19.71900a 19.97180a 19.42100a
0.038 0.011 1.186 0.265 0.013 0.141 0.197 2016.583 3.666 0.325 0.284 12.464 17.344 3600.000 3600.000 1.402 14.924 281.784 3600.000 58.298 3600.000 3600.000 3600.000 3600.000 3600.000
80 20 837 350 40 70 120 6055 1010 210 80 1650 2110 16547 15390 340 1420 10726 17202 3593 25776 22795 22051 23003 22268
Table 4
Results on instances with m = 0.6n & r = 4.

                         ILP Alone                           GA + ILP
Instance                 Opt./best    Time (s)    #Cov.      Opt./best    Time (s)    #Cov.
dirn050m030r150s04i00    3.00000      0.233       31         3.00000      0.039       70
dirn050m030r150s04i01    1.00000      0.021       5          1.00000      0.016       20
dirn050m030r150s04i02    3.81250      0.688       162        3.81250      0.575       525
dirn050m030r150s04i03    3.73333      0.359       99         3.73333      0.176       280
dirn050m030r150s04i04    3.00000      0.154       37         3.00000      0.031       70
dirn100m060r150s04i00    3.00000      1.694       32         3.00000      0.191       80
dirn100m060r150s04i01    3.50000      2.798       59         3.50000      0.250       120
dirn100m060r150s04i02    7.31442      602.312     1099       7.31442      281.630     9546
dirn100m060r150s04i03    6.65625      1061.845    1061       6.65625      617.875     11406
dirn100m060r150s04i04    7.00000      51.863      622        7.00000      20.730      3324
dirn150m090r150s04i00    3.00000      3.055       39         3.00000      0.281       80
dirn150m090r150s04i01    8.89045a     3600.000    1287       8.95387a     3600.000    26395
dirn150m090r150s04i02    9.00000      350.405     909        9.00000      41.572      3864
dirn150m090r150s04i03    9.68412a     3600.000    1074       9.78956a     3600.000    22066
dirn150m090r150s04i04    9.53306a     3600.000    1147       9.59367a     3600.000    19904
dirn200m120r150s04i00    10.00000     1139.381    686        10.00000     9.164       870
dirn200m120r150s04i01    11.22130a    3600.000    1068       11.56400a    3600.000    26691
dirn200m120r150s04i02    10.67990a    3600.000    911        11.39680a    3600.000    26364
dirn200m120r150s04i03    11.40900a    3600.000    991        12.05490a    3600.000    25327
dirn200m120r150s04i04    10.58480a    3600.000    976        11.12970a    3600.000    25728
dirn300m180r150s04i00    13.28020a    3600.000    871        15.83720a    3600.000    26247
dirn300m180r150s04i01    13.17900a    3600.000    849        15.21020a    3600.000    17033
dirn300m180r150s04i02    13.60500a    3600.000    864        15.96800a    3600.000    26185
dirn300m180r150s04i03    13.76800a    3600.000    920        16.01350a    3600.000    28468
dirn300m180r150s04i04    13.23670a    3600.000    828        15.47450a    3600.000    22587
a Non-proven optimal best value.
Both the approaches (GA + ILP and ILP Alone) have been executed once on each instance and have been allocated a maximum of one hour per instance. Tables 1–4 respectively report the results on instances with m = 0.3n & r = 3, m = 0.3n & r = 4, m = 0.6n & r = 3 and m = 0.6n & r = 4. For each instance, these tables report the optimal value or the best value (in case the approach is not able to finish within one hour), the execution time in seconds, and the number of covers generated by the two approaches (ILP Alone and GA + ILP). Non-proven optimal best values are indicated by a superscript 'a' after them.
Table 5
The overall picture.

                    ILP Alone                 GA + ILP                  GA + ILP advantage
Instance type       %Opt      Avg. time (s)   %Opt      Avg. time (s)   Speedup    %Gain
m = 0.3n & r = 3    84.000    19.941          88.000    4.144           4.812      0.874
m = 0.3n & r = 4    80.000    210.145         80.000    60.807          3.456      2.437
m = 0.6n & r = 3    64.000    300.180         68.000    24.521          12.242     5.096
m = 0.6n & r = 4    52.000    247.293         52.000    74.810          3.306      10.005
All                 70.000    180.562         72.000    38.115          4.737      5.293
Finally, Table 5 presents the overall picture by highlighting the differences between addressing the auxiliary problem with the ILP approach alone and with the proposed GA + ILP approach. This table compares the two approaches, on each class of instances and overall, in terms of the percentage of optimal solutions found and the average execution time in seconds on those instances where both approaches found an optimal solution within the maximum allocated time of one hour. To clearly show the advantages of the GA + ILP approach, the last two columns of this table present the speedup of GA + ILP over ILP Alone on those instances where both approaches found an optimal solution within the allotted time of one hour, and the percentage gain in average solution quality of GA + ILP over ILP Alone on those instances where neither approach could find an optimal solution within the allotted time. More precisely, let IAvg and GAvg respectively be the average solution quality returned by ILP Alone and GA + ILP on those instances where neither ILP Alone nor GA + ILP could find an optimal solution within one hour; then %Gain = 100 × (GAvg - IAvg)/IAvg.
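The two comparison metrics of Table 5 can be computed as in the following Python sketch; the function names are ours, and the 180.562 s and 38.115 s figures come from the "All" row of Table 5.

def speedup(avg_time_ilp, avg_time_ga_ilp):
    # Ratio of the average execution times on instances solved optimally by both approaches.
    return avg_time_ilp / avg_time_ga_ilp

def percent_gain(i_avg, g_avg):
    # %Gain = 100 * (GAvg - IAvg) / IAvg, computed on instances that neither approach
    # solved optimally within one hour.
    return 100.0 * (g_avg - i_avg) / i_avg

print(round(speedup(180.562, 38.115), 3))  # -> 4.737, matching the "All" row of Table 5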
Fig. 3 graphically compares both approaches through a bar chart, on each class of instances and overall, in terms of the average execution time in seconds on those instances where both approaches found an optimal solution within the maximum allocated time of one hour. These tables and Fig. 3 clearly show the superiority of the proposed GA + ILP approach over ILP Alone. The proposed approach is, in general, several times faster than ILP Alone, and the difference in speed grows with instance size. Considering all instances solved optimally by both approaches, the GA + ILP approach provides an average speedup of 4.737 over ILP Alone. This value reaches 12.242 for instances solved optimally in the class m = 0.6n & r = 3. However, there are four instances, viz. dirn050m030r150s03i02, dirn100m030r150s04i01, dirn100m030r150s04i02 and dirn150m045r150s04i01, where GA + ILP requires more time than ILP Alone. The proposed approach generates more covers to reach the optimal solution. This is expected, as the proposed approach returns up to UB_COVER best non-dominated covers from the auxiliary problem to the master problem, whereas ILP Alone returns a single non-dominated cover.

Overall, out of 100 instances, GA + ILP is able to solve 72 instances optimally within the allotted time of one hour, whereas ILP Alone solves 70, so there is not much difference between the two approaches in terms of the number of instances solved optimally. This is due to the tail-off effect of column generation [20], i.e., at the end of the search, a very large number of iterations is required for a very modest lifetime increase. However, GA + ILP is much faster than ILP Alone when the optimal solution is found, and returns a better solution otherwise: the gain in average solution quality over ILP Alone is 5.293% on those instances that could not be solved optimally by either approach. The class of instances with m = 0.6n & r = 4 seems to be the most difficult, as both approaches are able to find an optimal solution for only 13 instances out of 25. For the remaining 12 instances of this class, GA + ILP yields a gain of 10.005% in average solution quality over ILP Alone.
Fig. 3. Comparison of average execution times of GA + ILP and ILP Alone on different classes of instances.
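The effect of returning several covers per iteration can be visualized with the following schematic Python sketch of the column generation loop; it is not the authors' code. Only UB_COVER is named in the paper, while solve_master, ga_auxiliary and ilp_auxiliary are hypothetical callables introduced here for illustration.

def column_generation(solve_master, ga_auxiliary, ilp_auxiliary, initial_covers, ub_cover):
    """Schematic sketch of the GA + ILP scheme.

    solve_master(covers)   -> (lifetime, duals): solves the restricted master LP.
    ga_auxiliary(duals, k) -> list of up to k attractive non-dominated covers (possibly empty).
    ilp_auxiliary(duals)   -> an improving cover, or None if none exists (optimality proof).
    """
    covers = list(initial_covers)
    while True:
        lifetime, duals = solve_master(covers)
        new_covers = ga_auxiliary(duals, ub_cover)  # up to UB_COVER covers added per iteration
        if new_covers:
            covers.extend(new_covers)
            continue
        cover = ilp_auxiliary(duals)  # ILP call: escape the GA's local optimum or prove optimality
        if cover is None:
            return lifetime, covers   # the current master solution is optimal
        covers.append(cover)

With ILP Alone, a single cover would be appended per iteration instead, which is consistent with the much smaller #Cov. values it reports in the tables.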
We have performed an additional set of experiments to investigate the effect of varying the number of targets, the number of directions, the sensing range and the number of sensors on network lifetime and on the execution times of our approaches. All these experiments, except for the variation in the number of sensors, have been performed on the 5 instances dirn100m060r150s03iWW, WW = 00 . . . 04, where we varied r, m and the sensing range one-by-one while keeping all other parameters at their standard values, whereas the effect of varying the number of sensors is studied on the 5 instances dirn200m060r150s03iWW, WW = 00 . . . 04, where we varied the number of sensors while keeping all other parameters at their standard values. Reported results are averages over the respective 5 instances. Figs. 4–7 show the variation in network lifetime with respect to the number of directions, the number of targets, the sensing range and the number of sensors respectively. Only GA + ILP results are reported, as ILP Alone results are either the same or very nearly the same (for cases where ILP Alone could not finish in the allocated time of one hour). As expected, network lifetime decreases as the number of directions and the number of targets increase, whereas it increases with the sensing range and the number of sensors. Actually, with an increase in the number of directions, each sensor covers fewer targets in each direction, and therefore more sensors are required in a cover for complete target coverage. This ultimately leads to reduced network lifetime. Similarly, with an increase in the number of targets, more sensors are required to cover all the targets, thereby reducing lifetime.
On the contrary, with an increase in the sensing range, a sensor can cover more targets, and as a result fewer sensors are required in a cover for complete target coverage. This leads to an increase in network lifetime as the sensing range increases. With an increase in the number of sensors, more sensors are available to cover the targets, and as a result, lifetime increases.
Fig. 4. Network lifetime versus number of directions.
Fig. 5. Network lifetime versus number of targets.
Fig. 6. Network lifetime versus sensing range.
Fig. 7. Network lifetime versus number of sensors.

Figs. 8–11 show the variation in execution time of GA + ILP and ILP Alone with respect to the number of directions, the number of targets, the sensing range and the number of sensors respectively. Reported execution times are the average execution times on those instances where both approaches have been able to find the optimal solution within the allocated time of one hour. These figures show that neither the number of directions, nor the number of targets, nor the sensing range, nor the number of sensors is the sole criterion for the difficulty of the problem. The difficulty of a problem also depends on the relative locations of sensors and targets. Note also that the execution time of GA + ILP is always less than that of ILP Alone in these figures.

Fig. 8. Execution time versus number of directions.
Fig. 9. Execution time versus number of targets.
Fig. 10. Execution time versus sensing range.
Fig. 11. Execution time versus number of sensors.
6. Conclusions
This paper discussed the lifetime maximization problem of directional sensor networks. Owing to the directional characteristic of sensors, this problem is more complex than its counterpart in conventional omnidirectional WSNs. In contrast to previously proposed approaches for the problem, which are all heuristic in nature, a column generation based exact approach referred to as GA + ILP is proposed, where a two-level strategy comprising a genetic algorithm and an integer linear programming (ILP) approach is used to address the auxiliary problem. Computational results demonstrated the superiority of the proposed approach over a column generation approach relying on the ILP approach alone to solve the auxiliary problem, as the proposed approach is found to be several times faster. Our proposed approach is about 4.74 times faster on average over all instances than the approach which uses the ILP alone. The advantage of the proposed approach is most pronounced on instances with m = 0.6n & r = 3, where it is about 12.24 times faster on average. Our approach can be extended to cater to directional sensor networks with one or more of the following additional requirements:
- Where different targets have different quality-of-service requirements.
- Where there is a bandwidth constraint on the number of sensors that can be active simultaneously (a sketch is given at the end of this section).
- Where sensors can communicate with a base station only through a multi-hop path of active sensors.
- Where sensors have adjustable sensing ranges.

Our future work will focus on the aforementioned directions. The ideas presented in this paper can also be used to develop exact methods for other problems.
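As an illustration of the bandwidth-constrained extension, one plausible adaptation (a sketch in our own notation, not necessarily the formulation of [14,18]) is to add a cardinality constraint to the auxiliary problem so that every generated cover activates at most W sensors. Here y_i denotes the master dual associated with sensor i, x_{ij} = 1 if sensor i is activated in direction j, C_{ij} is the set of targets covered by that choice, and T is the set of targets; a cover is attractive when its total dual value is below 1.

\begin{align*}
\min\;        & \sum_{i}\sum_{j} y_i\, x_{ij} \\
\text{s.t.}\; & \sum_{i}\sum_{j :\, t \in C_{ij}} x_{ij} \ge 1 \quad \forall\, t \in T
                && \text{(every target is covered)}\\
              & \sum_{j} x_{ij} \le 1 \quad \forall\, i
                && \text{(each sensor monitors at most one direction)}\\
              & \sum_{i}\sum_{j} x_{ij} \le W
                && \text{(at most } W \text{ simultaneously active sensors)}\\
              & x_{ij} \in \{0,1\}.
\end{align*}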
Acknowledgments

We are grateful to four anonymous reviewers for their valuable comments and suggestions which helped in improving the quality of this paper.

References

[1] I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine 40 (2002) 102–114.
[2] V. Raghunathan, C. Schurgers, S. Park, M. Srivastava, Energy aware wireless microsensor networks, IEEE Signal Processing Magazine 19 (2002) 40–50.
[3] L. Benini, G. Castelli, A. Macii, E. Macii, M. Poncino, R. Scarsi, A discrete-time battery model for high-level power estimation, in: Proceedings of the IEEE Design Automation and Test in Europe (DATE-00), 2000, pp. 35–39.
[4] L. Benini, D. Bruni, A. Macii, E. Macii, M. Poncino, Discharge current steering for battery lifetime optimization, IEEE Transactions on Computers 52 (2003) 985–995.
[5] M. Rahimi, R. Baer, O. Iroezi, J. Garcia, J. Warrior, D. Estrin, M. Srivastava, Cyclops: in situ image sensing and interpretation in wireless sensor networks, in: Proceedings of the ACM Conference on Embedded Networked Sensor Systems (SenSys), 2005.
[6] W. Feng, E. Kaiser, W. Feng, M. Baillif, Panoptes: scalable low-power video sensor networking technologies, ACM Transactions on Multimedia Computing, Communication and Applications 1 (2005) 151–167.
[7] J. Djugash, S. Singh, G. Kantor, W. Zhang, Range-only SLAM for robots operating cooperatively with sensor networks, in: Proceedings of the IEEE International Conference on Robotics and Automation, 2006.
[8] R. Szewczyk, A. Mainwaring, J. Polastre, J. Anderson, D. Culler, An analysis of a large scale habitat monitoring application, in: Proceedings of the ACM Conference on Embedded Networked Sensor Systems (SenSys), 2004.
[9] Y. Cai, W. Lou, M. Li, X.-Y. Li, Target oriented scheduling in directional sensor networks, in: Proceedings of the IEEE INFOCOM 2007, 2007, pp. 1550–1558.
[10] Y. Cai, W. Lou, M. Li, X.-Y. Li, Energy efficient target-oriented scheduling in directional sensor networks, IEEE Transactions on Computers 58 (2009) 1259–1274.
[11] J. Wang, C. Niu, R. Shen, Randomized approaches for target coverage scheduling in directional sensor network, Lecture Notes in Computer Science 4523 (2007) 379–390.
[12] J.-M. Gil, C. Kim, Y.-H. Han, A greedy algorithm to extend the lifetime of randomly deployed directional sensor networks, in: Proceedings of the 5th International Conference on Ubiquitous Information Technologies and Applications (CUTE), 2010, pp. 1–5.
[13] J.-M. Gil, Y.-H. Han, A target coverage scheduling scheme based on genetic algorithm in directional sensor networks, Sensors 11 (2011) 1888–1906.
[14] C. Wang, M. Thai, Y. Li, F. Wang, W. Wu, Optimization scheme for sensor coverage scheduling with bandwidth constraints, Optimization Letters 3 (2009) 63–75.
[15] A. Alfieri, A. Bianco, P. Brandimarte, C. Chiasserini, Maximizing system lifetime in wireless sensor networks, European Journal of Operational Research 181 (2007) 390–402.
[16] Y. Gu, H. Liu, B. Zhao, Target coverage with QoS requirements in wireless sensor networks, in: Proceedings of the 2007 International Conference on Intelligent Pervasive Computing, IEEE Computer Society, 2007, pp. 35–38.
[17] Y. Gu, Y. Ji, J. Li, B. Zhao, QoS-aware target coverage in wireless sensor networks, Wireless Communications and Mobile Computing 9 (2009) 1645–1659.
[18] A. Rossi, A. Singh, M. Sevaux, A column generation algorithm for sensor coverage scheduling under bandwidth constraints, Networks 60 (2012) 141–154.
[19] P. Gilmore, R. Gomory, A linear programming approach to the cutting-stock problem, Operations Research 9 (1961) 849–859.
[20] M. Lübbecke, J. Desrosiers, Selected topics in column generation, Operations Research 53 (2005) 1007–1023.
[21] F. Vanderbeck, M. Savelsbergh, A generic view of Dantzig–Wolfe decomposition in mixed integer programming, Operations Research Letters 34 (2006) 296–306.
[22] V. Chvátal, Linear Programming, W.H. Freeman and Company, New York, 1980.
[23] G. Filho, L. Lorena, Constructive genetic algorithm and column generation: an application to graph coloring, in: Proceedings of the Fifth Conference of the Association of Asia-Pacific Operations Research Societies within IFORS (APORS 2000), 2000.
[24] J. Puchinger, G.R. Raidl, An evolutionary algorithm for column generation in integer programming: an effective approach for 2D bin packing, Lecture Notes in Computer Science 3242 (2004) 642–651.
[25] J. Puchinger, G. Raidl, Models and algorithms for three-stage two-dimensional bin packing, European Journal of Operational Research 183 (2007) 1304–1327.
[26] M. Boschetti, V. Maniezzo, M. Roffilli, A. Röhler, Matheuristics: optimization, simulation and control, Lecture Notes in Computer Science 5818 (2009) 171–177.
[27] H. Ma, Y. Liu, On coverage problems of directional sensor networks, Lecture Notes in Computer Science 3794 (2005) 721–731.
[28] M. Guvensan, A. Yavuz, On coverage issues in directional sensor networks: a survey, Ad Hoc Networks 9 (2011) 1238–1255.
[29] J. Ai, A. Abouzeid, Coverage by directional sensors in randomly deployed wireless sensor networks, Journal of Combinatorial Optimization 11 (2006) 21–41.
[30] M. Cardei, M. Thai, Y. Li, W. Wu, Energy-efficient target coverage in wireless sensor networks, in: Proceedings of the IEEE INFOCOM 2005, 2005, pp. 1976–1984.
[31] H. Yang, D. Li, H. Chen, Coverage quality based target-oriented scheduling in directional sensor networks, in: Proceedings of the IEEE ICC 2010, 2010, pp. 1–5.
[32] A. Makhoul, R. Saadi, C. Pham, Adaptive scheduling of wireless video sensor nodes for surveillance applications, in: Proceedings of the ACM Workshop on Performance Monitoring and Measurement of Heterogeneous Wireless and Wired Networks (PM2HW2N-09), 2009, pp. 54–60.
[33] FICO, Xpress-MP, 2009.
[34] L. Davis, Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991.
[35] GLPK (GNU Linear Programming Kit), 2011.
Alok Singh received the Bachelor's and Master's degrees in Computer Science from Banaras Hindu University, Varanasi, India, in 1996 and 1998, respectively. He received the Ph.D. degree in Computer Science from the University of Allahabad, Allahabad, India, in 2006. He is currently an Associate Professor in the Department of Computer and Information Sciences at the University of Hyderabad, Hyderabad, India. His primary research interests lie in the area of combinatorial optimization using heuristic and metaheuristic techniques.
André Rossi received his M.S. degree in electrical engineering from ENSIEG, France in 2000, his Ph.D. degree in robust scheduling from Institut National Polytechnique de Grenoble, France in 2003 and his habilitation in 2012 from Université de Bretagne-Sud, France, where he currently holds an Associate Professor position. His main scientific interests are operations research and network optimization.