Swarm and Evolutionary Computation 13 (2013) 74–84
Regular Paper
Detection and diagnosis of node failure in wireless sensor networks: A multiobjective optimization approach

Arunanshu Mahapatro (National Institute of Science and Technology, Berhampur, India), Pabitra Mohan Khilar (National Institute of Technology Rourkela, Rourkela, India)
Article history: Received 25 December 2011; Received in revised form 22 February 2013; Accepted 10 May 2013; Available online 22 May 2013.

Abstract
Detection of intermittent faults in sensor nodes is an important issue in sensor networks. It requires repeated application of tests, since an intermittent fault will not occur consistently; optimizing the inter-test interval and the maximum number of tests is therefore crucial. In this paper, intermittent fault detection in wireless sensor networks is formulated as an optimization problem with two objectives: detection latency and energy overhead. Tuning of the detection parameters based on the two-lbests based multiobjective particle swarm optimization (2LB-MOPSO) algorithm is proposed and compared with the non-dominated sorting genetic algorithm (NSGA-II) and the multiobjective evolutionary algorithm based on decomposition (MOEA/D). A comparative study of the three algorithms shows that 2LB-MOPSO is a better candidate for solving the multiobjective problem of intermittent fault detection. A fuzzy logic based strategy is also used to select the best compromise solution on the Pareto front. © 2013 Elsevier B.V. All rights reserved.
Keywords: Fault detection Intermittent fault Multiobjective optimization WSNs
1. Introduction

A wireless sensor network (WSN) is a special kind of network composed of hundreds or even thousands of autonomous sensor nodes. The nodes can perform sensing, processing, and wireless communication tasks [1,2]. Experimental studies have shown that more than 80% of the faults that occur in real systems like WSNs are intermittent faults [3,4]. An intermittent fault originates from inside the system when software or hardware is faulty. By its nature, an intermittent fault will not occur consistently, which makes its diagnosis a probabilistic event over time [5]. Since the effect of the fault is not always present, detecting an intermittent fault requires repetitive testing at discrete times kT (k = 1, 2, …), in contrast to the single test needed to detect a permanent fault. Intuitively, this implies that the number of tests required and the inter-test interval (T) are crucial. If T is too large, the probability that the error appears after the kth test and disappears before the (k+1)th test increases, and thus detection accuracy decreases. Diagnostic latency is also expected to grow with T, which might not be acceptable for applications with short mission times. Improvements in both detection accuracy and latency can be achieved with a smaller value of T. However, if T is too small, then frequent exchange of sensor measurements is
Corresponding author: A. Mahapatro. Tel.: +91 9437438294. E-mail addresses: [email protected] (A. Mahapatro), [email protected] (P. Mohan Khilar). doi: 10.1016/j.swevo.2013.05.004
required as message exchange is the only means to detect faults. This in turn increases the energy overhead. Thus, the following questions might be of interest:
What should be the value of T? How many tests are required to detect an intermittent fault? These issues motivate finding a trade-off between detection accuracy, detection latency and energy overhead. As can be perceived, a good trade-off can be formulated in several possible ways, with emphasis on various aspects of the expected final output. Thus, there may not exist a single optimal solution but rather a whole set of possible solutions of equivalent quality. This motivates the use of a multiobjective optimization algorithm, which deals with the simultaneous optimization of multiple, possibly conflicting, objective functions. This work introduces the two-lbests based multiobjective particle swarm optimization (2LB-MOPSO) [6] algorithm as a tool for finding trade-offs accounting for the relative importance of detection accuracy, latency of isolation of unhealthy nodes, and energy overhead. As suggested in [7], a fuzzy based mechanism is employed to extract the best trade-off solution from the Pareto optimal solutions provided by 2LB-MOPSO. The specific contributions of this paper are listed below:
- Proposes a generic parameterized diagnosis scheme that identifies permanent and intermittent faults with high accuracy while maintaining low time, message and energy overhead.
- Formulates intermittent fault detection as a multiobjective optimization problem. Tuning of detection parameters like T and kmax based on the 2LB-MOPSO algorithm is proposed and compared with NSGA-II [8] and MOEA/D [9].
The remainder of the paper is organized as follows: Section 2 presents background and related work. The system model is described in Section 3. Section 4 presents the fault detection algorithms. Section 5 presents the formulation of the fault detection problem. The multiobjective optimization problem is discussed in Section 6. Section 7 presents performance metrics and the best trade-off solution. Simulation experiments are described in Section 8. Finally, Section 9 concludes the paper.
2. Related research

The classical model for considering system-level faults is that introduced by Preparata, Metze, and Chien in [10]. This so-called PMC model is intended to diagnose permanent faults in a wired interconnected system. The problem of permanent fault detection and diagnosis in wireless sensor networks has been extensively studied in the literature [11–15]. Luo et al. [16] proposed a fault-tolerant detection scheme that explicitly introduces the sensor fault probability into the optimal event detection process, where the optimal detection error decreases exponentially with the increase of the neighborhood size. In [13] the authors present a distributed fault detection model for wireless sensor networks in which each sensor node identifies its own state based on local comparisons of sensed data against some thresholds and dissemination of the test results. Krishnamachari et al. have presented a Bayesian fault recognition model to solve the fault-event disambiguation problem in sensor networks [14]. In [12], the authors proposed time redundancy to diagnose intermittent faults in sensing and communication in a sensor network. They assume that each sensor has at least three neighboring nodes, which may not always hold for sparse networks. The evolutionary approach for fault detection was introduced in [17], and a comparison of evolutionary algorithms for system-level diagnosis can be found in [18]. Genetic approaches have been used previously for fault identification in [19–21]. Genetic Algorithms (GAs) offer several advantages over traditional optimization techniques, which require gradient information or other internal knowledge of the problem. In contrast, GAs require only fitness information, which makes them very suitable for fault identification. In addition, GAs are designed to search highly nonlinear spaces for global optima.

In [20], a parallel evolutionary approach for identifying faults in diagnosable systems is proposed. The parallel version considerably improves the efficiency of the serial genetic approach, making a significant contribution to the state of the art in fault diagnosis algorithms. An ant-colony based fault diagnosis algorithm is proposed in [21]. Experimental results are presented for both the traditional GA and specialized versions of the GA in [22]. In summary, most of the existing evolutionary approaches for fault detection focus on wired interconnected systems. Further, most cited works on distributed diagnosis for WSNs assume that sensors are either permanently faulty or fault-free, which may not be true in real applications. This paper introduces and examines a generic detection scheme which can detect both permanent and intermittent faults in WSNs and establishes a good trade-off between detection latency and energy overhead.

3. System model

3.1. Network model

The proposed algorithm considers a network with n sensor nodes non-uniformly distributed in a square area of side L, which is much larger than the communication range (rtx) of the sensor nodes. Every node maintains a neighbor table N(·). Each sensor periodically produces information as it monitors its vicinity. Similar to [13], nodes with malfunctioning sensors are allowed to act as communication nodes for routing; however, these nodes are asked to switch off their sensors. Only sensor nodes with a permanent fault in the transceiver or power supply are removed from the network.

3.2. Fault model
The proposed algorithm considers both hard and soft faults [5]. A hard faulty node is unable to communicate with other nodes in the network, whereas a soft-faulty node continues to operate and communicate with altered behavior. These malfunctioning (soft faulty) sensors can still participate in network activities, since they remain capable of routing information. The proposed algorithm assumes that the sensor fault probability p is uncorrelated and symmetric, i.e.,

$$P(S = \bar{x} \mid A = x) = P(S = x \mid A = \bar{x}) = p \qquad (1)$$
where S is the sensor measurement (say temperature) and A is the actual ambient temperature.

3.3. Energy consumption model

Similar to [23], this work assumes a simple model for the radio hardware energy dissipation. The transmitter dissipates energy to run the radio electronics and the power amplifier; the receiver dissipates energy to run the radio electronics. Both the free space ($D^2$ power loss) and the multipath fading ($D^4$ power loss) channel models are used, depending on the distance between the transmitter and receiver. The energy spent to transmit an r-bit packet over distance D is

$$E_{Tx}(r, D) = rE_{elec} + r\epsilon D^{\alpha} = \begin{cases} rE_{elec} + r\epsilon_{fs} D^2, & D < D_0 \\ rE_{elec} + r\epsilon_{amp} D^4, & D \geq D_0 \end{cases} \qquad (2)$$

The electronics energy, $E_{elec}$, depends on factors such as the digital coding and modulation. The amplifier energy, $\epsilon_{fs} D^2$ or $\epsilon_{amp} D^4$, depends on the transmission distance and the acceptable bit-error rate. To receive this message, the radio expends

$$E_{Rx}(r) = rE_{elec} \qquad (3)$$
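The two-regime radio model above can be sketched in code. The constants below are illustrative values commonly used with this first-order model, not parameters taken from this paper.

```python
import math

# Illustrative constants for the first-order radio model
# (assumed values, not taken from this paper).
E_ELEC = 50e-9        # J/bit, electronics energy
EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier energy
EPS_AMP = 0.0013e-12  # J/bit/m^4, multipath amplifier energy
D0 = math.sqrt(EPS_FS / EPS_AMP)  # crossover distance between the two regimes

def e_tx(r, d):
    """Energy to transmit an r-bit packet over distance d, Eq. (2)."""
    if d < D0:
        return r * E_ELEC + r * EPS_FS * d ** 2
    return r * E_ELEC + r * EPS_AMP * d ** 4

def e_rx(r):
    """Energy to receive an r-bit packet, Eq. (3)."""
    return r * E_ELEC
```

The crossover distance $D_0 = \sqrt{\epsilon_{fs}/\epsilon_{amp}}$ is the point where the two amplifier terms coincide, so the piecewise model is continuous.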
4. Fault detection

4.1. Permanent fault detection

This section introduces a detection algorithm which follows the general principle that working nodes perform their own independent diagnosis of the system. The detection algorithm uses a timeout mechanism to detect hard faulty nodes. At each detection round, each node broadcasts its own sensor reading. Node vi detects node vj ∈ N(vi) as hard faulty if vi does not receive the sensor reading from vj before Tout. Tout should be chosen carefully so that all fault-free nodes vj ∈ N(vi) can report to node vi before Tout. Soft faults can be detected as follows. This approach exploits the fact that sensor faults are likely to be stochastically unrelated, while sensor measurements are likely to be spatially
Fig. 2. Appearance and disappearance of fault.
Fig. 1. Flow diagram to detect intermittent fault.
correlated. In WSNs, sensors from the same region should record similar readings [24]. Let vi be a neighbor of vj, and let xi and xj be the sensor readings of vi and vj respectively. In this work xi is similar to xj when |xi − xj| < δ, where δ is application dependent. An arbitrary node vi receives the sensor readings from its neighboring nodes and generates a set {E} ⊂ {N(vi)} of nodes with similar readings S. Node vi is detected fault-free if its reading Si agrees with S and the cardinality of the set {E} is greater than the threshold θ; otherwise vi is marked as possibly soft faulty. The optimal value for θ is 0.5(N − 1) (see Appendix), where N is the number of neighbors. This decision is then broadcast. A final decision on a node marked as possibly soft faulty is taken as follows. A node vi identified as possibly soft faulty first checks for a node vq ∈ {N(vi)} such that the qth entry in its fault table is fault-free. If such a vq exists and vq ∈ {E}, then vi is detected as fault-free, or else faulty. This final decision is then broadcast.

4.2. Intermittent fault detection

To test for permanent faults, any particular test need only be applied once, as these faults are software or hardware faults that always produce errors when fully exercised. In contrast, the only approach to test for intermittent faults is repeated application of tests. Thus, to detect intermittent faults, each node executes the algorithm discussed in Section 4.1 at discrete times kT (k = 1, 2, 3, …). The operation of the algorithm is described by the flow diagram in Fig. 1. The conditional block labeled "Faulty?" represents the snapshot view of the current diagnostic round. The algorithm loops as long as no errors from a node are detected. The node is isolated when a fault is observed.
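The local comparison step of Section 4.1 can be sketched as follows; the function names and the string return values are ours, chosen for illustration, not the paper's.

```python
def similar(x_i, x_j, delta):
    """Two readings agree when they differ by less than the
    application-dependent threshold delta."""
    return abs(x_i - x_j) < delta

def initial_decision(own_reading, neighbor_readings, delta):
    """Initial self-diagnosis sketch: a node is fault-free when more
    than theta = 0.5*(N - 1) of its N neighbors report readings similar
    to its own; otherwise it is marked possibly soft faulty."""
    n = len(neighbor_readings)   # N, number of 1-hop neighbors
    theta = 0.5 * (n - 1)        # optimal threshold (see Appendix)
    agreeing = sum(similar(own_reading, x, delta)
                   for x in neighbor_readings)
    return "fault-free" if agreeing > theta else "possibly soft faulty"
```

For example, a node reading 25.0 with neighbors at {25.1, 24.9, 25.2, 60.0} and δ = 0.5 has three agreeing neighbors against θ = 1.5, so it is declared fault-free; the outlier at 60.0 would be marked possibly soft faulty by its own comparison.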
5. Problem formulation

5.1. Stochastic model for intermittent fault

Once an intermittent fault is activated in a sensor node, faults are observable for a duration FAD (fault appearance duration) before they disappear. Eventually, errors will reappear after FDD (fault disappearance duration), either because of permanent faults or correlated intermittent faults. This is depicted in Fig. 2. The behavior of the intermittent fault can be characterized by measuring or estimating the probabilities of error disappearance and reappearance in discrete time kT. The state of a sensor node is modeled as a four-state Markov model. Fig. 3 depicts this model, showing the transition probabilities between the different states of the sensor node. According to the proposed model, the node can be in one of four states: fault-free (FF), permanent faulty (PF), intermittent faulty with the fault active (FA), and intermittent faulty with the fault inactive (FD). A sensor node in the FF state can make a transition to either the PF state or the FA state with rate γ. From the FD state, it can go to the PF state, go to the FA state, or stay in the FD state. In order to analyze intermittent faults in more detail we focus on the FA and FD states, which can be visualized as a two-state Markov model. State FA (1) corresponds to a fault that exists and appears at the scheduled time of
Fig. 3. Analytical model for the occurrence of intermittent fault.
test, and state FD (0) corresponds to a fault that exists but does not appear at the scheduled time of test. The probabilities of going from one state at time kT to state FA or FD at time (k + 1)T depend on FDD and FAD respectively. The FDD for intermittent faults in a sensor node is system and deployment specific, and thus unpredictable in most practical scenarios. Intermittent faults usually exhibit a relatively high occurrence rate after their first appearance and eventually tend to become permanent. Therefore, as suggested in [25], a Weibull distribution is considered for FDD with shape parameter β > 1 and failure rate λk. An exponential distribution is considered for FAD with a constant failure rate μ = 1/(mean time in FA state) [26,25]. A similar distribution is considered for the time to failure of a fault-free node, with constant failure rate γ = 1/(mean time in the fault-free state). In practice μ ≫ λk ≫ γ. In order to devise such a model, let {Fj} be the state space, where F0 denotes that the node is fault-free and F1 denotes that the node is intermittent faulty. Let {tk} be the test pattern, where tk is the kth test performed by the sensor node using the algorithm discussed in Section 4.1 at time kT (k = 1, 2, …). The outcome of the kth test is 0 if the node is either fault-free or intermittent faulty but the fault does not appear during the test. Since the effect of the fault is not always present, deriving an optimal test pattern which can certainly detect the intermittent fault is hard to realize. In order to get a near-optimal test pattern, we consider an inequality where the probability that an intermittent fault exists and is not detected must be smaller than the error threshold θ1. Using the fact that the network is sampled with sampling period T, the following inequality is obtained:

$$P(F_1 \mid t_k = 0) \leq \theta_1 \qquad (4)$$
For k = 1, using Bayes' rule we can write

$$\frac{P(t_1 = 0 \mid F_1)\, P(F_1)}{P(t_1 = 0 \mid F_0)\, P(F_0) + P(t_1 = 0 \mid F_1)\, P(F_1)} \leq \theta_1 \qquad (5)$$
For kmax tests, the above inequality can be rewritten as

$$\frac{\left(\prod_{k=1}^{k_{max}} P(t_k = 0 \mid F_1)\right) p}{(1-p) + \left(\prod_{k=1}^{k_{max}} P(t_k = 0 \mid F_1)\right) p} \leq \theta_1 \qquad (6)$$
The term $\prod_{k=1}^{k_{max}} P(t_k = 0 \mid F_1)$ in (6) is the probability that the fault remains inactive at the time instants kT, k = 1, 2, …, kmax. Thus, the inequality can be rewritten as

$$\frac{\left(\prod_{k=1}^{k_{max}} P_{00}(kT)\right) p}{(1-p) + \left(\prod_{k=1}^{k_{max}} P_{00}(kT)\right) p} \leq \theta_1 \qquad (7)$$
where P00(kT) is the state transition probability: the conditional probability that the sensor node will be in state FD (0) at time kT immediately after the next transition, given that it was in state FD (0) at time (k − 1)T. This probability is [27,26]

$$P_{00}(kT) = \frac{\mu}{\mu + \lambda_k} + \frac{\lambda_k}{\mu + \lambda_k}\, e^{-(\lambda_k + \mu)T} \qquad (8)$$

The FDD is assumed to follow a Weibull distribution with increasing failure rate (β = 1.5) and an expected value of 1 h. We run the experiment until the fault is detected, and the results are shown in Fig. 4(c) and (d). As discussed earlier and shown in Fig. 4(a) and (c), the number of tests required, and thus the number of messages exchanged, to detect the intermittent faults decreases as T increases. Fig. 4(b) and (d) shows the latency in detecting the intermittent fault; the latency tends to increase with T. As comprehended from Fig. 4(a) and (b), better detection accuracy, i.e., an extremely small value of θ1, can be achieved at the cost of the number of messages to be exchanged and the detection latency.
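Eq. (8) can be evaluated directly; the sketch below assumes a fixed λk passed in by the caller (the paper lets λk vary with k under the Weibull model).

```python
import math

def p00(T, lam_k, mu):
    """State transition probability P00(kT) of Eq. (8): probability the
    intermittent fault stays inactive over one inter-test interval T,
    for disappearance-state rate lam_k and reappearance rate mu."""
    s = lam_k + mu
    return mu / s + (lam_k / s) * math.exp(-s * T)
```

As T → 0 the fault is almost surely still inactive (P00 → 1); as T grows, P00 decays toward its steady-state value μ/(μ + λk). This is why, per inequality (7), larger inter-test intervals make the product over k shrink faster and so need fewer tests, at the price of latency.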
Eq. (7) is derived under a perfect test condition, i.e., a fault is always detected by a test when it occurs. Since we adopt neighbor coordination as the test to detect faults, a fault is detected by a test with probability 1 − Pe and is not detected with probability Pe. The probability Pe is (A.9)

$$P_e = 1 - p - \sum_{l = 0.5(N-1)}^{N} \left[ (1-p) f_l + p f_{N-l} \right]$$

where fl is the probability that l out of the N 1-hop neighbors of a node are fault-free. For the imperfect test condition, Eq. (7) can be rewritten as

$$\frac{\left(\prod_{k=1}^{k_{max}} P_{00}(kT)\right) p (1 - P_e)}{(1-p) + \left(\prod_{k=1}^{k_{max}} P_{00}(kT)\right) p (1 - P_e)} \leq \theta_1 \qquad (9)$$

As comprehended from (9), a better trade-off between detection accuracy, detection latency and energy overhead can be achieved by properly tuning the detection parameters kmax and T.

5.2. Impact of design parameters on fault detection

The modeling framework discussed in the earlier sections allows us to highlight the detection accuracy, detection latency and energy overhead trade-offs in detecting an intermittent fault. To evaluate the impact of the design parameters on these trade-offs, we first used (9) to find the number of tests required to detect faults and the detection latency at varying values of T and θ1. These theoretical results are shown in Fig. 4(a) and (b) respectively. Second, we conducted a simulation on a simple network to find the impact of these design parameters. The simple network we considered has one intermittent faulty node surrounded by four fault-free one-hop neighbors. For the simulation, the mean value of FAD is taken as 50 ms, with FAD exponentially distributed.

5.3. Calculation of objectives

From the above discussions, it can be concluded that the objectives are conflicting. The two conflicting objectives are: (1) to minimize the detection latency and (2) at the same time, to minimize the energy overhead (which is proportional to the number of tests), while satisfying the detection error constraint. This problem is formulated mathematically in this section.

5.3.1. Energy overhead

In an n-node WSN, each node has a unique identifier which can be encoded with log2 n bits. As discussed in Section 4.1, a single test requires the exchange of three diagnostic messages. The first diagnostic message is the sensor reading, which is represented by z bits. The energy dissipated in exchanging the first diagnostic message is n(z + log2 n)(ETx + ERx). The second diagnostic message is the initial decision taken at each node. The corrected decisions about the nodes detected as possibly soft faulty are exchanged as the third diagnostic message. Since the state of each node is identified with a single bit (0: fault-free, 1: faulty), the energy dissipated in exchanging the initial and corrected decisions is 2n(log2 n + 1)(ETx + ERx). Thus, the total energy dissipated per test
Fig. 4. Impact of design parameters. [Panels (a) and (c) plot the number of tests against T (sec), and panels (b) and (d) plot the detection latency (hours) against T (sec), for θ1 = 10^−2, 10^−4, 10^−6, 10^−8, 10^−10 and 10^−12.]
is n(3 log2 n + z + 2)(ETx + ERx), and the energy dissipated in detecting intermittent faults is n kmax(3 log2 n + z + 2)(ETx + ERx). Thus, the first objective function is

$$F_1 = n\, k_{max} (3 \log_2 n + z + 2)(E_{Tx} + E_{Rx}) \qquad (10)$$
5.3.2. Detection latency

Detection latency is the time elapsed between the first occurrence of the fault and its detection. Thus, the detection latency is a function of kmax and T. As discussed earlier and shown in Fig. 4(b) and (d), the detection latency increases with T and might be undesirable for critical applications with short mission times. The detection latency can be expressed as

$$F_2 = k_{max} T \qquad (11)$$
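The interplay between the constraint (9) and the objectives (10)–(11) can be sketched as follows. As a simplification, the helper assumes a constant λk (rather than the paper's Weibull-varying rate), so the product over k reduces to a power of one P00 value; all parameter values in the usage are illustrative.

```python
import math

def p00(T, lam, mu):
    # Eq. (8) with a constant disappearance rate lam (a simplification
    # of the paper's Weibull-varying lam_k).
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * T)

def min_tests(T, p, lam, mu, theta1, pe=0.0, k_cap=10**6):
    """Smallest kmax satisfying constraint (9)/(12) for a given T,
    found by accumulating the product of P00(kT) term by term."""
    prod = 1.0
    for k in range(1, k_cap + 1):
        prod *= p00(T, lam, mu)          # accumulate prod_k P00(kT)
        num = prod * p * (1.0 - pe)
        if num / ((1.0 - p) + num) <= theta1:
            return k
    return None                          # constraint not met within k_cap

def objectives(n, k_max, T, z, e_tx, e_rx):
    """Energy overhead F1 (Eq. (10)) and detection latency F2 (Eq. (11))."""
    f1 = n * k_max * (3 * math.log2(n) + z + 2) * (e_tx + e_rx)
    f2 = k_max * T
    return f1, f2
```

Sweeping T for a fixed error threshold θ1 and evaluating (F1, F2) at the resulting minimal kmax traces out the trade-off curve that the multiobjective algorithms of Section 6 search over: a larger T needs fewer tests (lower F1) but yields higher latency (higher F2).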
5.4. Constraint function (CF)

There is mainly one constraint, corresponding to the detection error, that should be satisfied, given as in (9):

$$\frac{\left(\prod_{k=1}^{k_{max}} P_{00}(kT)\right) p (1 - P_e)}{(1-p) + \left(\prod_{k=1}^{k_{max}} P_{00}(kT)\right) p (1 - P_e)} \leq \theta_1 \qquad (12)$$
6. Multiobjective optimization problem

Multiobjective optimization is the process of simultaneously optimizing two or more conflicting objectives subject to certain constraints. Since many conflicting objectives are to be optimized simultaneously, there is a set of possible solutions of equivalent quality. Most real-world problems involve the optimization of several objectives, which are often conflicting in nature. A multiobjective optimization problem with M conflicting objectives can be defined as in [28]:

Maximize/minimize
$$y = f(x) = (f_1(x), f_2(x), \ldots, f_M(x)), \quad x \in [X_{min}, X_{max}]$$
subject to
$$g_j(x) \leq 0, \quad j = 1, \ldots, J$$
$$h_k(x) = 0, \quad k = 1, \ldots, K$$
where x and y are the decision vector and the objective vector respectively. Unlike single objective optimization, there are two spaces to be considered: the decision space, denoted x, and the objective space, denoted y.

Definition 1 (Zhao and Suganthan [6]). Let wi and wj be two solutions to a multiobjective problem. wi dominates wj if wi performs at least as well as wj with respect to all the objectives and performs strictly better than wj in at least one objective.

Definition 2 (Zhao and Suganthan [6]). Among a set of solutions W, the non-dominated set of solutions W′ are those that are not dominated by any member of the set W.

Definition 3 (Zhao and Suganthan [6]). When the set W is the entire feasible search space, the resulting non-dominated set W′ is called the Pareto-optimal solution set.

6.1. Finding Pareto optimal solution

Numerical techniques can be adopted to find the set of solutions of a multiobjective optimization problem. In this work we describe how a predictor–corrector method, the algorithm CONT-Recover [29], is used for the numerical treatment of our multiobjective optimization problem. Starting from a given Karush–Kuhn–Tucker point (KKT point) x̃ of a multiobjective optimization problem, CONT-Recover is applied to detect further KKT points in the neighborhood of x̃. In subsequent steps, further points are computed starting from these newly found KKT points. To maintain a good spread of these solutions, CONT-Recover uses boxes for the representation of the computed parts of the solution set. Though predictor–corrector methods are quite effective, they are based on some assumptions. First, an initial solution has to be computed before the process can start. Due to their local nature, predictor–corrector methods are restricted to the connected component that contains the given initial solution [30], and the Pareto set may fall into several connected components. Evolutionary algorithms are well suited to multiobjective optimization problems as they are essentially based on biological processes, which are inherently multiobjective. An extensive survey of multiobjective evolutionary algorithms is presented in [31]. Considering their superior performance on multiobjective problems, the 2LB-MOPSO [6], MOEA/D [9] and NSGA-II [8] algorithms have been used in this study. In NSGA-II, a random population of size H, sorted based on non-domination, is initially created. This population subsequently undergoes selection, crossover and mutation to produce an offspring population of size H. A combined population of size 2H is formed from the parent and offspring populations. Next, the population is sorted according to the non-domination relation, which classifies the complete population into several non-dominated fronts based on the values of the objective functions. Fronts are determined until each member of the population falls into one front.
The new parent population is generated by adding the solutions from the first front. Several non-dominated fronts are discarded, as the population size is predefined. The required number of members for the new population is selected using a parameter called crowding distance, which describes how close an individual is to its neighbors. Similar to GA, the PSO algorithm has been successfully extended to multiobjective optimization problems. Unlike other MOPSO variants, 2LB-MOPSO uses two local bests (lbests) instead of one personal best and one global best to lead each particle. The two lbests are selected to be close to each other in order to enhance the local search ability of the algorithm. Compared to other MOPSO variants, 2LB-MOPSO shows clear advantages in maintaining a good diversity of solutions, convergence speed and fine-searching ability. In 2LB-MOPSO, the initial archive includes all initialized solutions at iteration 1. In every iteration, all new positions Q(t) generated in iteration t are combined with the members of the archive A(t) to obtain a mixed temporary external archive. The sorted archive R(t) is obtained by applying non-domination sorting to this mixed temporary archive. During this process, every sorted solution retains two indicators, namely the front rank and the crowding distance value. The sorted solutions with the lowest front rank are first included in the archive A(t + 1). When the archive reaches its permitted maximum size (|A(t + 1)| = |A(t)|), the crowding distance is applied to select the required number of members to be included in A(t + 1) from the lowest front that still remains unselected in the archive R(t). The pseudo-code of the 2LB-MOPSO algorithm is presented in Fig. 5. In 2LB-MOPSO, each objective function's range in the external archive is divided into a number of bins.
The two lbests are chosen from the external archive members located in two neighboring bins, so that they are near each other in the parameter space.
front into a number of scalar optimization problems. Let ς1, …, ςN be a set of evenly spread weight vectors and z* be the reference point. The problem of approximating the Pareto front can be decomposed into N scalar optimization subproblems by using the Tchebycheff approach. The objective function of the jth subproblem is given by

$$\min\ g^{tc}(x \mid \varsigma^j, z^*) = \max_{1 \leq i \leq M} \left\{ \varsigma^j_i \left| f_i(x) - z^*_i \right| \right\} \qquad (13)$$
where ςj = (ς1j, …, ςMj)T. MOEA/D minimizes all these objective functions simultaneously in a single run. In MOEA/D, a neighborhood of weight vector ςi is defined as the set of its several closest weight vectors among ς1, …, ςN. The neighborhood of the ith subproblem consists of all the subproblems with weight vectors from the neighborhood of ςi. The population is composed of the best solution found so far for each subproblem. Only the current solutions of its neighboring subproblems are exploited when optimizing a subproblem.
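The Tchebycheff scalarization of Eq. (13) is a one-liner; the sketch below shows how each weight vector turns the objective vector into a single scalar to minimize.

```python
def tchebycheff(f_vals, weights, z_star):
    """Tchebycheff scalarization of Eq. (13): the j-th subproblem
    minimizes the largest weighted deviation of the objective vector
    f_vals from the reference point z_star."""
    return max(w * abs(f - z) for f, w, z in zip(f_vals, weights, z_star))
```

For two objectives, an even spread of weights such as (0.0, 1.0), (0.25, 0.75), …, (1.0, 0.0) yields subproblems whose optima sample different regions of the Pareto front.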
7. Performance metrics and best trade-off solution
Fig. 5. The pseudo-code of the 2LB-MOPSO.
In order to select the first lbest for a particle, an objective is first randomly selected, followed by a random selection of a non-empty bin of the chosen objective. Within this bin, the archived member with the lowest front number, and among these the one with the highest crowding distance, is selected as the first lbest. The second lbest is selected from a neighboring non-empty bin, with the lowest front number and the smallest Euclidean distance in the parameter space to the first lbest. As the velocity of each particle is adjusted by the two lbests from two neighboring bins, the flight of each particle will be in the direction between the positions of the two lbests and oriented to improve upon the current solutions. Upon assigning a pair of lbests to a particle, the number of iterations for which the particle fails to contribute a solution to the archive A(t) is counted. The particle is reassigned another pair of lbests when this count exceeds a pre-specified threshold. During the initialization stage, and when the count is larger than the pre-specified threshold during the iterative optimization stage, the first lbest for a particle is chosen randomly by selecting an objective and one bin of that objective. The second lbest is chosen from the neighborhood of the first lbest in the parameter space. When the count is less than or equal to the pre-specified threshold during the iterative optimization stage, the two lbests are chosen from the same assignment of objective and bin as used in the last iteration. The particle will accelerate potentially in a direction between the two lbests and hence may explore their region. Unlike the NSGA-II and 2LB-MOPSO algorithms, the MOEA/D algorithm uses weight vectors to decompose a multiobjective optimization problem into a number of single objective optimization subproblems and optimizes them simultaneously. Each subproblem is optimized by sharing information between its neighboring subproblems with similar weight values.
The Tchebycheff approach is employed to convert the problem of approximating the Pareto
All the existing multiobjective optimization algorithms aim to find solutions as close as possible to the Pareto optimal front and as diverse as possible along the non-dominated front. Different performance metrics to measure these two goals have been suggested in the literature. Since the true Pareto-optimal front for the proposed application is unknown, for performance analysis we consider the coverage of the Pareto front [32] and the spacing of the Pareto front [33]. The first metric measures the convergence of the Pareto front, while the second measures the distribution of solutions along it.

7.1. Coverage of the Pareto front

Let A and B be two Pareto-optimal sets. This metric measures the relative spread of solutions between two non-dominated sets. The function C maps the ordered pair (A, B) to the interval [0, 1] and is given by

$$C(A, B) = \frac{\left| \{ b \in B \mid \exists a \in A : a \succeq b \} \right|}{|B|} \qquad (14)$$
where |B| denotes the number of solutions in the set B, and a ⪰ b means that solution a weakly dominates solution b. The value C(A, B) = 1 implies that all decision vectors in B are weakly dominated by A. On the contrary, C(A, B) = 0 implies that none of the points in B is weakly dominated by A. If C(A, B) > C(B, A), then the set A has better solutions than the set B.

7.1.1. Spacing

Schott [33] introduced a metric named Spacing that measures the distribution of the solutions over the non-dominated front. Spacing between solutions is computed as

Spacing = sqrt( (1/(Q − 1)) Σ_{i=1}^{Q} (d_i − d̄)² )    (15)

where

d_i = min_j Σ_{m=1}^{M} |F_m^i − F_m^j|  for j = 1, …, Q and i ≠ j    (16)
Q is the number of solutions in the non-dominated set, M is the total number of objectives to be optimized, and d̄ is the mean of all the d_i. The nearer the value of Spacing is to zero, the more uniformly the solutions are distributed over the Pareto-optimal front.
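The two metrics of Eqs. (14)–(16) are straightforward to compute for minimization problems; a minimal sketch (names are illustrative):

```python
def weakly_dominates(a, b):
    """a weakly dominates b (minimization): a is no worse in every objective."""
    return all(ai <= bi for ai, bi in zip(a, b))

def coverage(A, B):
    """C(A, B) of Eq. (14): fraction of B weakly dominated by some member of A."""
    return sum(any(weakly_dominates(a, b) for a in A) for b in B) / len(B)

def spacing(front):
    """Schott's Spacing of Eqs. (15)-(16) over a non-dominated set of
    objective vectors, using the Manhattan distance d_i to the nearest
    other solution."""
    Q = len(front)
    d = [min(sum(abs(fi - gi) for fi, gi in zip(f, g))
             for j, g in enumerate(front) if j != i)
         for i, f in enumerate(front)]
    d_bar = sum(d) / Q
    return (sum((di - d_bar) ** 2 for di in d) / (Q - 1)) ** 0.5
```

For a perfectly evenly spaced front, `spacing` returns zero, and `coverage(A, B) = 1` whenever every point of B is weakly dominated by A, matching the interpretation given above.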
A. Mahapatro, P. Mohan Khilar / Swarm and Evolutionary Computation 13 (2013) 74–84
7.2. Fuzzy decision making

Upon obtaining a set of Pareto-optimal solutions using 2LB-MOPSO, we need to find the best trade-off. As suggested in [7], fuzzy membership functions representing the goal of each objective function are used. The fuzzy sets are defined by these membership functions, which express the degree of membership in a fuzzy set with values from 0 to 1. The membership function for each objective is defined as

μ_i = { 1,                                       F_i ≤ F_i^min
      { (F_i^max − F_i) / (F_i^max − F_i^min),   F_i^min < F_i < F_i^max    (17)
      { 0,                                       F_i ≥ F_i^max
where F_i^min and F_i^max are the minimum and maximum values of each objective function over the non-dominated solutions, respectively. For each non-dominated solution r, the normalized membership function can be calculated as

μ^r = Σ_{i=1}^{2} μ_i^r / (Σ_{r=1}^{R} Σ_{i=1}^{2} μ_i^r)    (18)
where R is the number of non-dominated solutions. The solution that attains the maximum membership μ^r in the fuzzy set can be chosen as the best solution:

Best solution = max{μ^r : r = 1, …, R}
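The selection rule of Eqs. (17) and (18) amounts to a few lines of code; a minimal sketch for a minimization problem (names are illustrative):

```python
def best_compromise(front):
    """Fuzzy best-compromise selection of Eqs. (17)-(18): returns the index
    of the non-dominated objective vector with the largest normalized
    membership (minimization assumed)."""
    M = len(front[0])
    f_min = [min(f[i] for f in front) for i in range(M)]
    f_max = [max(f[i] for f in front) for i in range(M)]

    def mu(f, i):
        # Eq. (17): 1 at the per-objective minimum, 0 at the maximum,
        # linear in between.
        if f[i] <= f_min[i]:
            return 1.0
        if f[i] >= f_max[i]:
            return 0.0
        return (f_max[i] - f[i]) / (f_max[i] - f_min[i])

    scores = [sum(mu(f, i) for i in range(M)) for f in front]
    total = sum(scores)
    mu_r = [s / total for s in scores]       # Eq. (18)
    return max(range(len(front)), key=mu_r.__getitem__)
```

Applied to the latency/energy front of this paper, the returned index picks the solution marked as "best trade-off" in the figures.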
8. Simulation results and analysis

8.1. Tuning of detection parameters

This section is primarily meant to study how the design parameters, namely kmax and T, affect the detection of intermittent faults in terms of two important figures of merit, the detection latency and the energy overhead, while maintaining a low detection error. In this section, we compare the design results obtained with 2LB-MOPSO, algorithm CONT-Recover [29], MOEA/D [9], NSGA-II [8], and three single-objective optimization algorithms, namely GA, DE, and PSO. We use MATLAB as the simulation tool for tuning the detection parameters. The mean value of FAD is taken as 50 ms, where FAD is exponentially distributed. The FDD is assumed to follow a Weibull distribution with increasing failure rate (β = 1.5) and an expected value of 1 h. For 2LB-MOPSO, the parameters are set as in [6]: the count threshold and the number of bins are taken as 5 and 10, respectively, population size NP = 50, inertia weight ω = 0.729, C1 = C2 = 2.05, and Vmax = 0.25(Xmax − Xmin). For NSGA-II (real-coded) and MOEA/D, we use a population size of 50, a crossover probability of 0.9, and a mutation probability of 0.5. As suggested in [8], the distribution indexes for the crossover and mutation operators are set as ηc = 20 and ηm = 20. For MOEA/D, the number of weight vectors in the neighborhood of each weight vector is set to 20. The decision variables are initialized with uniformly distributed pseudo-random numbers over their ranges, i.e., T = rand[Tmin, Tmax] and k = rand[kmin, kmax]. We consider Tmin = 1000 ms, Tmax = 60 000 ms, kmin = 1, kmax = 15 000, and θ1 = 10^−20. The maximum number of function evaluations is set to 15 000. To optimize F1 and F2 simultaneously using GA, PSO, and DE, the fitness function can be defined as

F_f = ω1 F1 + ω2 F2    (19)
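The weight sensitivity of this scalarization is easy to demonstrate; the sketch below uses two hypothetical trade-off points (all values are illustrative, not results from the paper):

```python
def weighted_sum(f1, f2, w1):
    """Scalarized fitness F_f = w1*F1 + w2*F2 with w1 + w2 = 1, as in Eq. (19)."""
    return w1 * f1 + (1.0 - w1) * f2

# Two hypothetical trade-off points (latency, energy). Which one "wins"
# depends entirely on the chosen weights, illustrating why the weighted-sum
# approach is subjective.
a = (1.0, 9.0)   # low latency, high energy
b = (8.0, 2.0)   # high latency, low energy
assert weighted_sum(*a, w1=0.9) < weighted_sum(*b, w1=0.9)  # latency-heavy weights favour a
assert weighted_sum(*a, w1=0.1) > weighted_sum(*b, w1=0.1)  # energy-heavy weights favour b
```

This is exactly the subjectivity noted in [34]: each weight pair selects a different point, so GA, PSO, and DE must be rerun per weight choice, whereas the multiobjective algorithms return the whole front at once.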
The fitness function F_f is minimized using different values of ω1 and ω2 such that ω1 + ω2 = 1. The weighted-sum method is, however, subjective, and the solution obtained depends on the values (more precisely, the relative values) of the weights specified [34]. It is hard, if not impossible, to choose a proper combination of ω1 and ω2 to get an optimized solution. As suggested in [35], for DE the parametric setup is CR = 0.7 and F = 0.5. For PSO, we used a swarm size of 50 and acceleration coefficients C1 = C2 = 2.05. For GA, we use a population size of 50, a crossover probability of 0.9, and a mutation probability of 0.5.

8.1.1. Performance analysis

In order to evaluate the performance, we first compare the Pareto fronts obtained using algorithm CONT-Recover [29] with one of the 20 runs of 2LB-MOPSO (Fig. 6(a)). The best trade-off solution over these two fronts is obtained using the aforementioned fuzzy logic based mechanism and is shown in Fig. 6(a). Further, 20 independent runs were conducted for 2LB-MOPSO, MOEA/D, NSGA-II, GA, DE, and PSO. To illustrate the difference between the Pareto fronts obtained with 2LB-MOPSO, MOEA/D, and NSGA-II, the fronts from one of the 20 runs of each are plotted in Fig. 6(b). The best trade-off solution over these three fronts is obtained using the same fuzzy logic based mechanism and is shown in Fig. 6(b). Here, we consider the normalized total energy, which is the ratio of the total energy (10) to the number of nodes that participated in the detection. Table 1 shows the tuned detection parameters obtained using the mentioned optimization algorithms. For the GA, PSO, and DE based implementations, we provide the best solutions found in 50 independent trials of each algorithm. The quality of the Pareto-optimal solutions obtained with algorithm CONT-Recover, 2LB-MOPSO, MOEA/D, and NSGA-II is measured by the two aforementioned performance metrics.
Fig. 6. Trade-off curves: (a) Pareto fronts obtained with 2LB-MOPSO and CONT-Recover, with the best trade-off marked; (b) Pareto fronts obtained with 2LB-MOPSO, MOEA/D, and NSGA-II, with the best trade-off marked. Both panels plot detection latency (hours) against normalized total energy consumption (J).
Table 1
Tuned detection parameters and their corresponding results.

Algorithm       T (ms)    kmax    EN (J)    Latency (min)
2LB-MOPSO       8600      1052    0.0044    150.786
MOEA/D          9362      1124    0.0046    174.998
NSGA-II         12 978    1256    0.0057    271.6728
CONT-Recover    14 457    1465    0.0071    353.2258
GA              10 763    1867    0.0076    334.908
PSO             11 602    1597    0.0065    309.498
DE              13 254    1508    0.0061    333.117

Table 2
Coverage (2LB-MOPSO vs MOEA/D).

            2LB-MOPSO    MOEA/D
Best        0.9461       0.3631
Worst       0.7129       0.0248
Average     0.9011       0.2247
Median      0.8172       0.0182
Variance    0.0056       0.0098
Std. dev.   0.0746       0.0993

Table 3
Coverage (2LB-MOPSO vs NSGA-II).

            2LB-MOPSO    NSGA-II
Best        0.9886       0.3126
Worst       0.7835       0.0192
Average     0.9133       0.2013
Median      0.8361       0.0133
Variance    0.0067       0.0182
Std. dev.   0.0821       0.1352

Table 4
Coverage (2LB-MOPSO vs CONT-Recover).

            2LB-MOPSO    CONT-Recover
Best        1            0.137
Worst       0.92         0
Average     0.9533       0.0625
Median      0.962        0.0986
Variance    0.0039       0.0669
Std. dev.   0.0624       0.2586

Table 5
Spacing.

            2LB-MOPSO    MOEA/D    NSGA-II    CONT-Recover
Best        0.2096       0.2439    0.3862     0.5162
Worst       0.3932       0.4218    0.6696     0.8103
Average     0.3206       0.3881    0.5182     0.7312
Median      0.3182       0.3437    0.5021     0.7663
Variance    0.0056       0.0080    0.0144     0.0492
Std. dev.   0.0749       0.0897    0.1204     0.2218

Table 6
Simulation parameters.

Parameter            Value
Number of sensors    1000
Network grid         From (0, 0) to (1000, 1000) m
Sink                 At (75, 150) m
Initial energy       1 J
Eelec                50 nJ/bit
εfs                  10 pJ/bit/m²
εamp                 0.0013 pJ/bit/m⁴

The best, worst, mean, median, variance, and standard deviation of the two performance metrics are presented in Tables 2–5. In Table 2, the value Coverage = 0.9461 implies that 94.61% of the Pareto-optimal solutions obtained with MOEA/D are weakly dominated by the solutions obtained with 2LB-MOPSO. Likewise, the value Coverage = 0.3631 means that only 36.31% of the solutions obtained with 2LB-MOPSO are weakly dominated by those of MOEA/D. In addition, the standard deviation of 2LB-MOPSO with respect to Coverage implies that the performance of 2LB-MOPSO is more stable. The distributions of the Pareto-optimal solutions over the non-dominated front obtained with algorithm CONT-Recover, 2LB-MOPSO, MOEA/D, and NSGA-II are evaluated with the Spacing metric. Since a lower value of Spacing implies a more uniform spread of solutions, Table 5 shows that for our application 2LB-MOPSO outperforms algorithm CONT-Recover, MOEA/D, and NSGA-II.

8.2. Simulation experiments
In order to validate the obtained detection parameters T and kmax and to measure their effectiveness, we conducted an extensive set of simulations using Castalia-2.3b, a state-of-the-art WSN simulator based on the OMNeT++ platform. The simulation parameters are given in Table 6, where the values of the radio model parameters are the same as those in [36]. As discussed earlier, both FAD and FDD are system specific and depend on multiple factors. Thus, to simulate a realistic fault scenario, FDD follows a Weibull distribution with expected value ranging from 1 min to 10 h, and FAD follows an exponential distribution with expected value ranging from 5 ms to 50 ms. All the intermittent faults are activated randomly before the first test, i.e., within 8600 ms of the start of the simulation. Each sensor node in the network is scheduled to take a sensor measurement at the discrete times kT with T = 8600 ms. The data-gathering stage is scheduled at GT, where G is an integer and is application specific. For instance, applications with short mission times need the data to be gathered more frequently, in contrast to applications where the frequency of data gathering is lower. For applications with long mission times, GT is large. Thus, to detect intermittent faults, G sensor measurements need to be broadcast by each node. This in turn makes the packet grow with G. Since the energy consumed by a sensor node is directly proportional to the number of bits it transmits or receives, the energy overhead will be high for large values of G and may not be practically implementable. To address this issue, we suggest sampling the interval GT, where each sample consists of I consecutive sensor measurements. The standard deviation of the I sensor measurements corresponding to each sample interval is calculated and broadcast along with the routine data. This in turn reduces the packet size and makes the algorithm energy efficient.
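The packet-reduction step above can be sketched in a few lines; the helper below is a minimal illustration (its name and interface are assumptions, not from the paper's implementation):

```python
import statistics

def summarize_measurements(measurements, I):
    """Reduce G consecutive sensor measurements to per-sample standard
    deviations: each sample covers I consecutive measurements, so the
    broadcast packet carries G/I values instead of G."""
    return [statistics.pstdev(measurements[k:k + I])
            for k in range((0), len(measurements), I)]
```

A sample interval containing at least one unusually high or low reading yields a visibly larger standard deviation than the corresponding interval of a fault-free neighbor, which is what the per-node comparison below relies on.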
Each node takes its decision by comparing the corresponding standard deviations of its one-hop neighbors. Using the standard deviation instead of individual measurements does not affect the detection performance, since the rate of change of sensor measurements over time is very small. In addition, a sensor often reports an unusually high or low measurement during FAD. Thus, the standard deviation of the sensor measurements of a sample interval containing at least one incorrect measurement will be distinguishable from the corresponding standard deviations of
one-hop neighbors with all true measurements. In this experiment, we assume temperature sensors.

8.3. Experiment 1: efficiency with regard to da and p

In this experiment, the performance of the diagnosis algorithm with regard to DA and FAR is evaluated, first considering only intermittent faults and then considering both intermittent and permanent faults. In the latter case, the numbers of intermittent and permanent faults are chosen randomly while the total number of faults is maintained. For the performance evaluation, we assume that the numbers of intermittent and permanent faults do not change during the simulation period. Note that this assumption does not mean that the detection algorithm is not adaptive to changes in fault type and fault rate. In this simulation, sensor nodes are assumed to be faulty with probabilities of 0.05, 0.10, 0.15, 0.20, 0.25, and 0.30, respectively. The transmission range is chosen so that the sensor network has the desired average node degree da. Since a faulty node will often report unusually high or low sensor measurements, all the nodes with malfunctioning sensors are momentarily assumed to show a match in comparison with a probability of 0.5 regardless of their locations. To validate the obtained detection parameters, the experiment was conducted for 21 epochs (kmax/G = 1052/50 ≈ 21). The results shown are averages over 100 experiments. Fig. 7(a) and (b) shows the average detection accuracy and the average false alarm rate of the detection algorithm considering only intermittent faults. Interestingly, an improvement in both DA and FAR is observed. The reason is that a faulty node will be detected as fault-free only when it has more than θ faulty neighbors and shows a match in comparison; a fault-free node is detected as faulty only when it has more than θ faulty neighbors. In the scenario where all faults are intermittent, the probability of having such neighbors at the time of a test is lower than in the scenario where all faults are permanent, because the probability that the fault manifests in all the faulty neighbors at the time of the test is small.
Fig. 7. DA and FAR with da ≈ 4 and da ≈ 12 for a network considering (a) only intermittent faults and (b) both intermittent and permanent faults. Each panel plots DA and FAR (×10⁻³) against the sensor fault probability (0.05–0.3).

Fig. 8. Average detection latency and normalized total energy overhead. The panels plot detection latency (min) and normalized total energy consumption (×10⁻³ J) against the sensor fault probability (0.05–0.3).

8.4. Experiment 2: time and energy efficiency

In this experiment, we attempted to illustrate the detection latency and normalized total energy overhead of the detection
algorithm. We use the tuned detection parameters obtained through algorithm CONT-Recover, 2LB-MOPSO, MOEA/D, NSGA-II, GA, DE, and PSO (Table 1). All results are averages over 100 random topologies. For a clearer analysis, we consider only intermittent faults. The average detection latency and the average normalized total energy overhead are shown in Fig. 8 for varying fault rate and da. As shown, both the detection latency and the normalized energy overhead are little affected by the number of faults. The reason is that the detection of intermittent faults depends only on T, and the detection latency depends on the number of test repetitions executed to detect the fault. It is observed that the 2LB-MOPSO based implementation outperforms the CONT-Recover, MOEA/D, NSGA-II, GA, DE, and PSO based implementations in detection latency. Similarly, the normalized total energy overhead is little affected by the fault rate, because it depends purely on the number of messages exchanged to detect the fault. As discussed earlier, more messages need to be exchanged if nodes fail the threshold test. Since only intermittent faults are considered, as discussed in Experiment 1, the number of nodes failing the threshold test is small. However, a minor improvement is observed for a greater average node degree and a lower fault rate. It is also observed that the 2LB-MOPSO based implementation outperforms the CONT-Recover, MOEA/D, NSGA-II, GA, DE, and PSO based implementations from the normalized total energy overhead perspective.
9. Conclusions

In this paper, an efficient fault detection technique with low detection latency, low energy overhead, and high detection accuracy was considered. The application of two-lbests based multiobjective particle swarm optimization to minimize detection latency and energy overhead simultaneously was discussed. While applying the algorithm, the detection error was treated as a constraint. A fuzzy logic based mechanism was also used to find the best compromise solution on the Pareto-optimal front. The tuned detection parameters were used by the detection algorithm. The performance difference between 2LB-MOPSO, CONT-Recover, MOEA/D, and NSGA-II based parameter tuning was observed, and the 2LB-MOPSO based approach was found more suitable for the proposed application.

Appendix A

In this section, we formulate the threshold θ.

Theorem 4. The optimum value of θ which minimizes the detection error is 0.5(N − 1).

Proof. The proof of this theorem closely follows a similar proof in [14]. The real situation at a sensor node is modeled by two variables S and A, where S represents the sensor reading and A represents the actual reading. Let E(x, l) be the event that l out of the N 1-hop neighbors of a node v_i report the similar sensor reading x. The objective here is to determine the fault detection estimate (DE) after obtaining information about the sensor readings of neighboring nodes. The possible values of DE are fault-free (FF) and faulty (F). The probability that the detection estimate is fault-free, given that l out of the N neighboring sensors of node v_i report the same reading x, is defined as

P_l = P(DE = FF | S_i = x, E_i(x, l))    (A.1)

For a faulty communication channel Ch_{i,j}, v_i believes that v_j ∈ N_i is faulty. In the presence of channel faults, let f_l be the probability that l out of the N 1-hop neighbors of node v_i are fault-free. This probability is determined as

f_l = C(N, l) P(S_i = x | A_i = x, Ch = G)^l P(S_i = x | A_i = x̄, Ch = G)^{N−l}
    = C(N, l) P(S_i = x | A_i = x, Ch = G)^l P(S_i = x | A_i = x, Ch = B)^{N−l}
    = C(N, l) (1 − p)^l p^{N−l}    (A.2)

The possible values of the variables S and A are x and x̄, where x̄ denotes a value that is not similar to x. Thus eight possible combinations exist for DE, S, and A. The correctness of the proposed algorithm can be analyzed through the conditional probabilities corresponding to these combinations. From these combinations we can calculate the probability that the algorithm estimates the node as faulty even though the sensed and actual readings are similar. Using marginal probability, this can be derived as

P(DE = F | S = x, A = x) = 1 − P(DE = FF | S = x, A = x)
    = 1 − Σ_{l=0}^{N} P(DE = FF, E(x, l) | S = x, A = x)
    = 1 − Σ_{l=0}^{N} P(DE = FF | S = x, A = x, E(x, l)) P(E(x, l) | S = x, A = x)
    = 1 − Σ_{l=0}^{N} P_l f_l    (A.3)

In a similar manner, we can calculate the probability that the algorithm estimates the node as fault-free even though the sensor reading does not agree with the actual reading:

P(DE = FF | S = x̄, A = x)
    = Σ_{l=0}^{N} P(DE = FF, E(x, N − l) | S = x̄, A = x)
    = Σ_{l=0}^{N} P(DE = FF | S = x̄, A = x, E(x, N − l)) P(E(x, N − l) | S = x̄, A = x)
    = Σ_{l=0}^{N} P(DE = FF | S = x̄, A = x, E(x̄, l)) P(E(x, N − l) | S = x̄, A = x)
    = Σ_{l=0}^{N} P_l f_{N−l}    (A.4)

As discussed earlier, fault-free nodes which fail the threshold test are later diagnosed as fault-free through a fault-free neighbor. The probability that at least one out of the N 1-hop neighbors is fault-free follows from Eq. (A.2) as

f_1 = N (1 − p) p^{N−1}    (A.5)

Eqs. (A.3) and (A.5) suffice to calculate the probability that the detection algorithm declares a fault-free node as faulty. This probability is given by

P_gf = P(DE = F, S = x | A = x) f_1
     = P(DE = F | S = x, A = x) P(S = x | A = x) f_1
     = (1 − Σ_{l=0}^{N} P_l f_l) (1 − p) f_1    (A.6)

In a similar manner, the probability that the detection algorithm declares a faulty node as fault-free can be derived as

P_fg = P(DE = FF, S = x̄ | A = x) f_1
     = P(DE = FF | S = x̄, A = x) P(S = x̄ | A = x) f_1
     = (Σ_{l=0}^{N} P_l f_{N−l}) p f_1    (A.7)

In the proposed algorithm, the detection estimate is fault-free only when l > θ. Thus Eq. (A.1) can be rewritten as

P_l = 1 if l > θ, and 0 otherwise    (A.8)

Thus, the error probability of the proposed algorithm in detecting the status of a node is given by

P_e = P_gf + P_fg = f_1 (1 − p − Σ_{l=θ+1}^{N} [(1 − p) f_l − p f_{N−l}])    (A.9)
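The error expression can also be checked numerically. The sketch below (illustrative names, sum taken over l > θ as prescribed by the threshold rule) evaluates P_e up to the positive factor f_1 and confirms that it is minimized at θ = 0.5(N − 1) for odd N:

```python
from math import comb

def p_e(theta, N, p):
    """Detection-error probability of Eq. (A.9) up to the positive factor
    f_1, with f_l = C(N, l) (1 - p)^l p^(N - l) from Eq. (A.2); the sum runs
    over l = theta + 1, ..., N, i.e. the values where P_l = 1 by Eq. (A.8)."""
    def f(l):
        return comb(N, l) * (1 - p) ** l * p ** (N - l)
    return (1 - p) - sum((1 - p) * f(l) - p * f(N - l)
                         for l in range(theta + 1, N + 1))
```

For example, sweeping theta over 0..N with N = 9 and p = 0.2 shows the minimum error at theta = 4 = 0.5(N − 1), in agreement with Theorem 4 (ties with an adjacent value of theta are possible because the boundary summand vanishes).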
Substituting f_l into Eq. (A.9), the summand of Eq. (A.9) can be written as

C(N, l) ((1 − p)^{l+1} p^{N−l} − p^{l+1} (1 − p)^{N−l})
    = C(N, l) (1 − p)^{l+1} p^{l+1} (p^{N−2l−1} − (1 − p)^{N−2l−1})    (A.10)

For p < 0.5, Eq. (A.10) is negative for N > 2l + 1, zero for N = 2l + 1, and positive for N < 2l + 1. Decreasing θ one at a time from N produces additional terms that contribute negatively to P_e while θ > 0.5(N − 1), and positively once θ < 0.5(N − 1). It follows that P_e achieves a minimum when θ = 0.5(N − 1). □

References

[1] S. Sengupta, S. Das, M. Nasir, B. Panigrahi, Multi-objective node deployment in WSNs: in search of an optimal trade-off among coverage, lifetime, energy consumption, and connectivity, Engineering Applications of Artificial Intelligence 26 (2013) 405–416.
[2] S. Sengupta, S. Das, M. Nasir, A.V. Vasilakos, W. Pedrycz, An evolutionary multiobjective sleep-scheduling scheme for differentiated coverage in wireless sensor networks, IEEE Transactions on Systems, Man, and Cybernetics, Part C 42 (2012) 1093–1102.
[3] R. Horst, D. Jewett, D. Lenoski, The risk of data corruption in microprocessor-based systems, in: The Twenty-Third International Symposium on Fault-Tolerant Computing, pp. 576–585.
[4] D.P. Siewiorek, R.S. Swarz, The Theory and Practice of Reliable System Design, Digital Equipment Corporation, 1982.
[5] M. Barborak, A. Dahbura, M. Malek, The consensus problem in fault-tolerant computing, ACM Computing Surveys 25 (1993) 171–220.
[6] S.Z. Zhao, P.N. Suganthan, Two-lbests based multi-objective particle swarm optimizer, Engineering Optimization 43 (2011) 1–17.
[7] J.S. Dhillon, S.C. Parti, D.P. Kothari, Stochastic economic emission load dispatch, Electric Power Systems Research 26 (1993) 179–186.
[8] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation 6 (2002) 182–197.
[9] Q. Zhang, H.
Li, MOEA/D: a multiobjective evolutionary algorithm based on decomposition, IEEE Transactions on Evolutionary Computation 11 (2007) 712–731.
[10] F.P. Preparata, G. Metze, R.T. Chien, On the connection assignment problem of diagnosable systems, IEEE Transactions on Electronic Computers EC-16 (1967) 848–854.
[11] X. Luo, M. Dong, Y. Huang, On distributed fault-tolerant detection in wireless sensor networks, IEEE Transactions on Computers 55 (2006) 58–70.
[12] X. Xu, W. Chen, J. Wan, R. Yu, Distributed fault diagnosis of wireless sensor networks, in: 11th IEEE International Conference on Communication Technology, pp. 148–151.
[13] M.-H. Lee, Y.-H. Choi, Fault detection of wireless sensor networks, Computer Communications 31 (2008) 3469–3475.
[14] B. Krishnamachari, S. Iyengar, Distributed Bayesian algorithms for fault-tolerant event region detection in wireless sensor networks, IEEE Transactions on Computers 53 (2004) 241–250.
[15] P. Jiang, A new method for node fault detection in wireless sensor networks, Sensors 9 (2009) 1282–1294.
[16] X. Luo, M. Dong, Y. Huang, On distributed fault-tolerant detection in wireless sensor networks, IEEE Transactions on Computers 55 (2006) 58–70.
[17] M. Elhadef, B. Ayeb, An evolutionary algorithm for identifying faults in t-diagnosable systems, in: The 19th IEEE Symposium on Reliable Distributed Systems, pp. 74–83.
[18] B.T. Nassu, J.E.P. Duarte, A.T. Ramirez Pozo, A comparison of evolutionary algorithms for system-level diagnosis, in: Proceedings on Genetic and Evolutionary Computation, ACM, 2005, pp. 2053–2060.
[19] M. Borairi, H. Wang, Actuator and sensor fault diagnosis of nonlinear dynamic systems via genetic neural networks and adaptive parameter estimation technique, in: IEEE International Conference on Control Applications, vol. 1, pp. 278–282.
[20] M. Elhadef, S. Das, A. Nayak, A parallel genetic algorithm for identifying faults in large diagnosable systems, International Journal of Parallel, Emergent and Distributed Systems 20 (2005) 113–125.
[21] M. Elhadef, A. Nayak, N. Zeng, Ants vs.
faults: a swarm intelligence approach for diagnosing distributed computing networks, in: International Conference on Parallel and Distributed Systems, vol. 2, pp. 1–8.
[22] J.A.T.R.P. Elias, P. Duarte, B.T. Nassu, Fault diagnosis of multiprocessor systems based on genetic and estimation of distribution algorithms: a performance evaluation, International Journal on Artificial Intelligence Tools 19 (2010) 1–18.
[23] W. Heinzelman, A. Chandrakasan, H. Balakrishnan, An application-specific protocol architecture for wireless microsensor networks, IEEE Transactions on Wireless Communications 1 (2002) 660–670.
[24] M.C. Vuran, Ö.B. Akan, I.F. Akyildiz, Spatio-temporal correlation: theory and applications for wireless sensor networks, Computer Networks 45 (2004) 245–259.
[25] D.P. Siewiorek, R.S. Swarz, Reliable Computer System Design and Evaluation, Digital Press, 1992.
[26] M. Breuer, Testing for intermittent faults in digital circuits, IEEE Transactions on Computers C-22 (1973) 241–246.
[27] R.E. Barlow, F. Proschan, Mathematical Theory of Reliability, John Wiley & Sons, 1965.
[28] K. Deb, Multi-objective Optimization using Evolutionary Algorithms, Wiley, 2001.
[29] O. Schütze, M. Dellnitz, On continuation methods for the numerical treatment of multi-objective optimization problems, in: Practical Approaches to Multi-objective Optimization, Dagstuhl Seminar Proceedings, vol. 04461, IBFI, Schloss Dagstuhl.
[30] M. Ringkamp, S. Ober-Blöbaum, M. Dellnitz, O. Schütze, Handling high-dimensional problems with multi-objective continuation methods via successive approximation of the tangent space, Engineering Optimization 44 (2012) 1117–1146.
[31] A. Zhou, B.-Y. Qu, H. Li, S.-Z. Zhao, P.N. Suganthan, Q. Zhang, Multiobjective evolutionary algorithms: a survey of the state of the art, Swarm and Evolutionary Computation 1 (2011) 32–49.
[32] E. Zitzler, Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications, Ph.D.
Thesis, Swiss Federal Institute of Technology, 1999.
[33] J. Schott, Fault Tolerant Design using Single and Multi-criteria Genetic Algorithms, Master's Thesis, Massachusetts Institute of Technology, 1995.
[34] S. Pal, S. Das, A. Basak, Design of time-modulated linear arrays with a multi-objective optimization approach, Progress in Electromagnetics Research B 23 (2010) 83–107.
[35] R. Storn, K. Price, Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization 11 (1997) 341–359.
[36] G. Chen, C. Li, M. Ye, J. Wu, An unequal cluster-based routing protocol in wireless sensor networks, Wireless Networks 15 (2009) 193–207.