Evidential framework for robust localization using raw GNSS data




Engineering Applications of Artificial Intelligence 61 (2017) 126–135

Contents lists available at ScienceDirect

Engineering Applications of Artificial Intelligence journal homepage: www.elsevier.com/locate/engappai

Evidential framework for robust localization using raw GNSS data Salim Zair, Sylvie Le Hégarat-Mascle




SATIE Laboratory, Université Paris-Sud, ENS Cachan, CNRS, Université Paris-Saclay, 91405 Orsay, France

ARTICLE INFO

ABSTRACT

Keywords: Global navigation satellite systems; Outlier detection; Belief function theory; Consistency measure

Global Navigation Satellite Systems (GNSS) positioning in constrained environments suffers from Non-Line-Of-Sight and multipath receptions so that, in addition to containing imprecise measurements (e.g. due to atmospheric effects), the set of GNSS pseudo-range observations includes some outliers. In this study, we evaluate the interest of the belief function framework as an alternative to the Interval Analysis approach classically used in localization to deal with imprecise data and possible outliers. Following the basic idea of the RANSAC (RANdom SAmple Consensus) algorithm, we propose a new detection of outliers based on an evidential measure of the consistency of the solution. Each pseudo-range (PR) observation generates a 2D basic belief assignment (bba) that quantifies, for any 2D set, the possibility (according to the observed PR) that it includes the GNSS receiver. Outlier detection is then performed by directly evaluating the consistency of subsets of bbas, and the inlier PR information is aggregated through the combination of the corresponding bbas. In the case of a dynamic receiver, filtering is performed by combining the bbas derived from the new observations with a bba predicted from the estimation at the previous time step. The proposed approach was evaluated on two actual datasets acquired in urban environments. Results are evaluated both in terms of precision of the localization and in terms of guarantee of the solution. They are compared with former approaches, either in the belief function framework or using interval analysis, demonstrating the interest of the proposed algorithm.

1. Introduction

Positioning is required in many applications involving autonomous navigation, either on land, e.g. intelligent vehicles or exploration robots, or in the air, e.g. drones. Global Navigation Satellite Systems (GNSS) refer to satellite constellations providing signals such that, if acquired by a receiver, the latter can derive its position in the Earth reference system by trilateration. For instance, the Global Positioning System (GPS) is the most popular GNSS used for outdoor localization (Skog and Handel, 2009). However, if localization in open areas reaches sufficient precision for most applications, it is still an issue in constrained environments such as urban canyons because of the presence of multipath signals and Non-Line-Of-Sight (NLOS) receptions caused by buildings and trees. These phenomena induce an overestimation of the distance between satellite and receiver, called pseudo-range (PR), and thus degrade the precision of the positioning. In the literature, different strategies have been proposed to detect erroneous measurements or observations (called outliers) or, specifically for the localization problem, to improve the positioning accuracy. One of them is to combine GNSS data with embedded sensors (e.g., camera (Won et al., 2014), Inertial Measurement Unit (IMU) (Georgy et al., 2011)) or geographical information (e.g., digital maps (Lu et al., 2014)). Focusing on localization using only GPS data, we distinguish: (i) methods that aim at



enhancing GNSS accuracy by filtering (Rabaoui et al., 2012), estimating (Giremus et al., 2007) or correcting (Miura and Kamijo, 2014; Cheng et al., 2016) the PR errors or multipath biases, and (ii) methods that consist in detecting and discarding the erroneous PRs from the localization process. Statistical tests (Brown, 1992) and Interval Analysis (Drevelle and Bonnifait, 2011) belong to this second category, just as the method proposed in this study. For instance, the Receiver Autonomous Integrity Monitoring (RAIM) (Brown, 1992) is implemented in all receivers and allows the detection of at most one outlier per epoch (or time sample). However, in urban areas, the probability of having more than one outlier per epoch increases as the environment includes more obstacles and masking phenomena. Thus, some probabilistic approaches have been proposed to deal with several simultaneous outliers. For instance, Le Marchand et al. (2008) couple failure detection and exclusion with RAIM, Schroth et al. (2008) adapt the classic RANSAC (RANdom SAmple Consensus) algorithm (Fischler and Bolles, 1981), whereas Zair et al. (2016a) propose a very robust approach based on a-contrario modeling and a Number of False Alarms (NFA) criterion. Using probabilistic approaches, the uncertainty is modeled in a rather fine and sophisticated way, but imprecision is not distinguished from uncertainty. Among approaches designed to deal with imprecision, Interval Analysis (IA) (Jaulin et al., 2001) aims at providing a

Corresponding author. E-mail addresses: [email protected] (S. Zair), [email protected] (S. Le Hégarat-Mascle).

http://dx.doi.org/10.1016/j.engappai.2017.02.003 Received 19 September 2016; Received in revised form 11 January 2017; Accepted 6 February 2017 0952-1976/ © 2017 Elsevier Ltd. All rights reserved.


and on the discretization step, the size of the discernment frame may be huge. For instance, assuming the receiver is located in a given area of 100 m × 100 m, for a resolution of 1 m², the cardinality of Ω is |Ω| = 10 000. Then, in order to keep our application tractable, we adopt a sparse representation of Ω: instead of enumerating all its possible elements, we only focus on the focal elements of the considered bba. As in some previous works (André et al., 2015; Rekik et al., 2015), we represent (non-uniquely) any focal element by a subset of rectangles or boxes, and we redefine accordingly the basic set operators (intersection, union) from geometric operators (instead of their classic definition from bitwise operators associated with the binary representation of all Ω elements). For instance, the union of two focal elements is computed by adding the subset of boxes of the second focal element to the first one. The intersection of two focal elements is the set of non-empty intersections between any pair of boxes extracted from each subset of boxes, respectively. Such a representation is effective provided that we control its size. Indeed, the previous focal element operations increase the number of boxes involved in the focal element representation. Therefore, to avoid excessive memory load, we regularly simplify the representation of focal elements by computing a sub-paving of their geometric extension without overlapping boxes. Such a process, similar to the regularization used in IA, is called simplification to avoid ambiguity with the regularization (of the trajectory) provided by the filtering process. Finally, we underline that the proposed 2D representation of the focal elements is more precise than processing the 1D intervals independently as in Nassreddine et al. (2010).
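The box-based set operators described above can be sketched as follows; the representation (tuples of bounds, list-of-boxes focal elements) and all names are illustrative assumptions, not the authors' implementation, and the simplification step is omitted.

```python
# Sketch of the sparse focal-element representation: a focal element is a
# list of axis-aligned 2D boxes; union and intersection are geometric.
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (e_min, e_max, n_min, n_max)

def box_intersection(a: Box, b: Box) -> Optional[Box]:
    """Intersection of two 2D boxes, or None if it is empty."""
    e0, e1 = max(a[0], b[0]), min(a[1], b[1])
    n0, n1 = max(a[2], b[2]), min(a[3], b[3])
    if e0 >= e1 or n0 >= n1:
        return None
    return (e0, e1, n0, n1)

def union(fa: List[Box], fb: List[Box]) -> List[Box]:
    """Union of two focal elements: concatenate the two box subsets."""
    return fa + fb

def intersection(fa: List[Box], fb: List[Box]) -> List[Box]:
    """Non-empty pairwise intersections between the two box subsets."""
    out = []
    for a in fa:
        for b in fb:
            inter = box_intersection(a, b)
            if inter is not None:
                out.append(inter)
    return out
```

Note that, exactly as stated in the text, repeated unions and intersections grow the box lists, which is why a periodic simplification into a non-overlapping sub-paving is needed.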

guaranteed solution. It was applied to indoor localization (e.g., Seignez et al., 2009) and to outdoor GNSS localization (e.g., Drevelle and Bonnifait, 2011). It provides a technique, called q-relaxation, to deal with outliers by removing up to q data (observations) out of the N initial ones, to get a non-empty solution. Recently, Pichon et al. (2015) presented the idea of considering only a subset of sources in Belief Function Theory (BFT). It differs from classic q-relaxation (in IA) on the following points. First, the formalism is that of belief function theory, so that the sources are handled via their basic belief assignments (bbas). Second, relaxing q bbas out of N, the result is computed without specifying the relaxed bbas (it is the disjunction of any combination of N − q bbas). Thus, Pichon et al. (2015) generalize the IA q-relaxation in the BFT framework, which manages both imprecision and uncertainty. However, the complexity increases noticeably, so that it may be difficult to apply such a technique to a large number of sources. This paper proposes two contributions to address the problem of only-GNSS localization in constrained environments using the BFT framework. The first one is a new algorithm that allows us to partition the dataset into two subsets respectively containing the inliers and the outliers, based on an evidential consistency measure. Since it is inspired by the RANSAC algorithm, we call it evidential RANSAC. The second one is a localization method based on evidential modeling of the GNSS data and evidential combination of them. It involves a step of outlier detection by the evidential RANSAC algorithm. The paper is organized as follows: we describe the belief function tools and notations used in this work in Section 2. Section 3 explains the proposed evidential RANSAC and shows its interest with a toy example. Section 4 presents the proposed evidential localization method.
The results obtained from two experiments using raw GNSS data acquired in constrained environments are presented in Section 5.

2.2. Bba combination

Let m1 and m2 be two bbas defined on 2Ω. If m1 and m2 represent the information from two independent sources, they can be combined conjunctively according to the conjunctive rule:

∀ A ∈ 2Ω, (m1 ∩ m2)(A) = ∑_{B∩C=A} m1(B) m2(C).   (1)
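A minimal sketch of this conjunctive combination, with focal elements represented as frozensets of hypotheses (the toy frame and masses are illustrative assumptions):

```python
# Unnormalized conjunctive rule: the mass landing on the empty frozenset
# is the conflict between the two sources.
from collections import defaultdict

def conjunctive(m1: dict, m2: dict) -> dict:
    """Combine two bbas {focal_element: mass} defined on the same frame."""
    out = defaultdict(float)
    for B, mB in m1.items():
        for C, mC in m2.items():
            out[B & C] += mB * mC   # B & C may be the empty frozenset
    return dict(out)

# Toy frame {w1, w2, w3}
m1 = {frozenset({"w1"}): 0.7, frozenset({"w1", "w2"}): 0.3}
m2 = {frozenset({"w2"}): 0.8, frozenset({"w1", "w2", "w3"}): 0.2}
m12 = conjunctive(m1, m2)
conflict = m12.get(frozenset(), 0.0)  # mass on the empty set
```

Here the only incompatible pair is ({w1}, {w2}), so the conflict is 0.7 × 0.8 = 0.56.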

Even if source independence should be handled carefully, this rule is probably the most popular among the very numerous combination rules because of three main advantages: simplicity, ability to specify the information, and convenient mathematical properties (in particular commutativity and associativity). It assumes an open world (Ω may be non-exhaustive), conversely to the orthogonal sum proposed by Dempster under the closed-world assumption. During conjunctive combination, any pair of incompatible focal elements (B, C) such that B ∩ C = ∅ generates some non-null mass on the empty set, so that m(∅) is often interpreted as a measure of conflict. Recently, Pichon et al. (2015) presented the idea of relaxing sources in belief function combination. Let us assume, under the hypothesis denoted H_r^N, that only r sources have to be considered out of the N available sources (i.e. q = N − r sources should be relaxed). According to H_r^N, for any N-tuple of focal elements A = (A1, A2, …, AN), Ai ⊆ Ω ∀ i ∈ [1, N], only r of them should be kept without knowing which ones. Let Γr(A) denote the element of 2Ω representing this meta-knowledge:

Section 6 gathers the conclusions and perspectives of this work.

2. Basics from belief function theory

We only present the basics of BFT and the notations used in this study. For a reader not familiar with this theory, we refer to the founding book (Shafer, 1976).

2.1. Belief functions

Let Ω denote the discernment frame, i.e. the set of mutually exclusive hypotheses representing the solutions, and let 2Ω denote the set of the disjunctions of Ω hypotheses. The cardinality of 2Ω is denoted |2Ω| and it equals 2^|Ω|. The mass function m is defined from 2Ω to [0, 1] such that ∑_{A∈2Ω} m(A) = 1. If m(A) > 0, A is said to be a focal element and m(A) represents the belief that the solution is in A ∈ 2Ω, without being able to specify any subset of A. A bba (basic belief assignment) can be defined by the function m or by other belief functions, in particular plausibility, credibility or commonality, that are in one-to-one relationship with m. The plausibility function, denoted Pl, maps 2Ω to [0, 1] such that, ∀ A ∈ 2Ω, Pl(A) = ∑_{B∈2Ω, A∩B≠∅} m(B). Then, the contour function, denoted pl, maps Ω to [0, 1] such that, ∀ ωi ∈ Ω, pl(ωi) = Pl({ωi}) = ∑_{B∈2Ω, B∋ωi} m(B). In our application, the discernment frame Ω is a discrete set of rectangles, each one corresponding to a possible 2D (East, North) location of the receiver. Depending on the hypotheses about the possible locations

Γr(A) = ⋃_{𝒜 ⊆ {A1, …, AN}, |𝒜| = r} ( ⋂_{Ai ∈ 𝒜} Ai ).   (2)

Then, for any element B ∈ 2Ω, its mass is the sum, over all Γr(A) equal to B, of the products of the masses of the elements of A (Pichon et al., 2015):

∀ B ∈ 2Ω, m[H_r^N](B) = ∑_{A ∈ (2Ω)^N, Γr(A)=B} ∏_{i=1}^{N} m_i^Ω(Ai).   (3)

Such a rule generalizes the classic BFT combination rules, in particular


the conjunctive combination corresponds to r = N and the disjunctive combination to r = 1. Besides, when the knowledge is categorical (i.e. each bba has only one focal element), it is similar to the q-relaxation in IA with q = N − r. In the following, we call it evidential q-relaxation.
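Eqs. (2)-(3) can be sketched by the following brute-force enumeration; it is exponential in N (as noted in the text) and intended only for small examples, with all names being illustrative assumptions.

```python
# Relaxed combination of N bbas under hypothesis H_r^N: only r sources
# are reliable, without knowing which ones.
from collections import defaultdict
from itertools import combinations, product

def relaxed_combination(bbas, r):
    """bbas: list of {frozenset: mass} dicts; returns m[H_r^N] (Eq. (3))."""
    N = len(bbas)
    out = defaultdict(float)
    # Enumerate all N-tuples of focal elements A = (A_1, ..., A_N)
    for tup in product(*[list(m.items()) for m in bbas]):
        focals = [A for A, _ in tup]
        weight = 1.0
        for _, mass in tup:
            weight *= mass
        # Gamma_r(A): union over all r-subsets of their intersections (Eq. (2))
        gamma = frozenset()
        for idx in combinations(range(N), r):
            inter = focals[idx[0]]
            for i in idx[1:]:
                inter = inter & focals[i]
            gamma = gamma | inter
        out[gamma] += weight
    return dict(out)
```

With categorical bbas, r = N reduces to the conjunctive combination and r = 1 to the disjunctive one, as stated above.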

∀ H ∈ Ω, BetP(H) = ∑_{A ∈ 2Ω, H ∈ A} m(A) / (|A| (1 − m(∅))).   (6)
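A small sketch of this pignistic decision, with bbas stored as {frozenset: mass} dictionaries (helper names and the toy bba are illustrative assumptions):

```python
# Pignistic probability of Eq. (6) and the associated decision rule.
def betp(m: dict, h) -> float:
    """BetP(h) for a singleton hypothesis h, normalizing out m(empty)."""
    conflict = m.get(frozenset(), 0.0)
    return sum(v / (len(A) * (1.0 - conflict))
               for A, v in m.items() if h in A)

def decide(m: dict, frame):
    """Singleton hypothesis maximizing the pignistic probability."""
    return max(frame, key=lambda h: betp(m, h))
```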

3. Evidential RANSAC

The RANSAC algorithm (Fischler and Bolles, 1981) is based on two ideas. The first one is to avoid the exploration of the whole solution space. For this, only a reduced set of candidate solutions is computed and tested. For instance, in Fischler and Bolles (1981) they correspond to exact solutions considering only a randomly drawn subset of measurements. In this work, we follow this idea of a guided exploration of the solution space. However, the way we explore it differs from that used in classic RANSAC: it is based on the addition/removal of a source to/from the current set of sources considered as inliers (i.e. truthful). The second idea is to keep the solution that is the most consensual, i.e. that induces the highest number of inliers, defined as measurements presenting a noise level lower than a given threshold (that is a parameter of the algorithm). This idea is also that of the q-relaxation, since maximizing the number of inliers boils down to minimizing the number of outliers. Then, after having explored (at least partially) the solution space, we keep the solution corresponding to the highest number of consistent sources according to the used measure of consistency or inconsistency and an a priori threshold parameter. In the following, the proposed algorithm is called evidential RANSAC (EV-RANSAC) since, instead of handling data or measurements, it handles bbas. Let 𝒮 denote a set of bbas defined on a common discernment frame Ω; EV-RANSAC aims at estimating the subset of consistent bbas (i.e. the inliers) among 𝒮. Note that, when the 𝒮 bbas correspond to different information sources (e.g. satellites visible at a given epoch), EV-RANSAC equivalently detects the subset of inlier sources.

2.3. Bba approximation

In our application, a large number of conjunctive combinations will be performed (typically, per epoch, as many as visible satellites). These combinations induce an exponential increase of the number of focal elements, a crumbling of the mass function and an important computational load. The purpose of the approximation is then to reduce the number of focal elements in a bba while preserving as much information as possible. The simplest method, called summarization (Lowrance et al., 1986), gathers the focal elements having the lowest mass values (e.g., lower than τ) such that the mass of the disjunctive focal element is equal to the sum of the masses of the gathered elements. The summarized bba m′ is such that B = ∪_{A ∈ 2Ω, m(A) < τ} A, m′(B) = ∑_{A ⊆ B} m(A) and, ∀ A ⊊ B, m′(A) = 0. Besides its known drawback of increasing the plausibility of the suppressed focal elements, this approach presents the disadvantage of not taking into account the cardinality of the gathered focal elements. The second most popular way to perform bba approximation is iterative aggregation, which gathers only two focal elements at a time (per iteration). Since this approach is not associative, several criteria have been proposed to decide which elements to gather at a given iteration. For instance, Harmanec (1999) proposes to minimize the difference between the credibility functions of the bba before and after its approximation. In the same vein, Denoeux (2001) proposes to minimize the difference between the contour functions. André et al. (2015) propose to use as similarity criterion the similarity measure of Jousselme (Jousselme et al., 2001). It allows us to consider both mass values and cardinalities of focal elements so that, according to the criterion of Eq. (4), the gathering of focal elements having large cardinalities is favored:

DJ(A, B) = m(A)² (1 − |A| / |A ∪ B|) + m(B)² (1 − |B| / |A ∪ B|).   (4)
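One iteration of this criterion-driven aggregation can be sketched as follows; the merge rule (disjunction of the two sets, sum of their masses) and all names are illustrative assumptions.

```python
# Iterative bba aggregation: merge the pair of focal elements (frozensets)
# minimizing the criterion D_J of Eq. (4).
def d_j(mA, A, mB, B):
    """Criterion of Eq. (4); A, B are frozensets, mA, mB their masses."""
    u = len(A | B)
    return mA**2 * (1 - len(A) / u) + mB**2 * (1 - len(B) / u)

def aggregate_once(m: dict) -> dict:
    """One iteration: merge the pair of focal elements with smallest D_J."""
    focals = list(m)
    A, B = min(
        ((X, Y) for i, X in enumerate(focals) for Y in focals[i + 1:]),
        key=lambda p: d_j(m[p[0]], p[0], m[p[1]], p[1]),
    )
    out = {F: v for F, v in m.items() if F not in (A, B)}
    out[A | B] = out.get(A | B, 0.0) + m[A] + m[B]
    return out
```

As stated above, the criterion favors gathering focal elements of large cardinality, whose disjunction loses little specificity.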

3.1. Exploration of the solution space

As said, we move in the solution space according to some moves, defined through a graph 𝒢. Considering N bbas (N = |𝒮|), the 𝒢 nodes are the 2^N subsets of bbas. Two different nodes, representing two different subsets of 𝒮, are connected by an edge (or arc) if the two represented subsets differ by only one element. Practically, we denote the nodes by binary words representing the possible subsets of bbas, and we call n-nodes the nodes representing the subsets of cardinality n: e.g., for N = 4, the 1-nodes are in {0001, 0010, 0100, 1000}, the 2-nodes in {0011, 0101, 0110, 1001, 1010, 1100}, the 3-nodes in {0111, 1011, 1101, 1110}, and 1111 denotes the only 4-node. Then, for any node coded by a binary word b, |b| denotes the number of bits equal to 1 in b. |b| is also the cardinality of the subset of bbas represented by node b. With such a coding, 𝒢 nodes are connected if and only if their Hamming distance is equal to 1 (e.g., 0001 has exactly four neighbors: 0000, 0011, 0101 and 1001). Fig. 1 shows an example of graph with N = 5. In order to store the exploration path, 𝒢 nodes also contain two values: the (in)consistency measure (e.g. the conflict value of the bba subset, computed using Eq. (5)) and the previous node on the path. The latter is called the father, whereas the connected nodes having cardinality increased by one (i.e. having one supplementary bba in the represented subset of bbas) are its children. The (in)consistency measure and father values are only filled during the exploration so that, for a node not reached, they are not (yet) computed (their values are set to an invalid value, e.g. a negative value). Since the exploration will be only partial, some of these values will never be computed. The exploration starts from the 2-node presenting the lowest conflict value (the used inconsistency measure). This choice is a compromise between complexity and node separability according to the conflict value.
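The binary node coding above maps directly onto bitwise operations, as the following sketch shows (function names are illustrative):

```python
# Nodes are binary words; cardinality is the popcount and two nodes are
# adjacent iff their Hamming distance is 1.
def cardinality(b: int) -> int:
    """Number of bbas in the subset coded by b (popcount)."""
    return bin(b).count("1")

def neighbors(b: int, N: int):
    """All nodes at Hamming distance 1 (toggle each of the N bits)."""
    return [b ^ (1 << n) for n in range(N)]

def children(b: int, N: int):
    """Connected nodes with one supplementary bba (cardinality + 1)."""
    return [b | (1 << n) for n in range(N) if not b & (1 << n)]

# For N = 4, node 0001 has exactly four neighbors: 0000, 0011, 0101, 1001
```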
On the one hand, for 1-nodes, all conflict values are equal (to 0) and, even for 2-nodes, a null conflict value can be reached by several nodes. On the other hand, considering k-source nodes, the number of source combinations to test to choose the initial node is the binomial


The previous criterion makes sense in our application, where imprecise localizations are of little interest. Note that more sophisticated methods have been proposed (Dubois and Prade, 1990; Denoeux, 2001), but at the cost of a high computational burden that makes them inadequate for our application.

2.4. Inconsistency measure

The measure of conflict or inconsistency has long been a subject of research (e.g., Roquel et al., 2014), all the more as it may have different origins (Lefevre et al., 2002; Destercke and Burger, 2013): an erroneous bba (which itself may be due to an incomplete discernment frame, an erroneous observation, etc.), the conjunctive combination of an important number of sources, a too high confidence in a source, etc. In the proposed application, the basic conflict κ derived from Eq. (1) is used as a measure of disagreement or conflict between sources:

κ = m∩(∅),   (5)

where m∩ denotes the conjunctive combination of the considered sources.

However, depending on the application, one can consider other (more sophisticated) measures of inconsistency between sources.

2.5. Decision

Having combined the different sources, the decision is generally taken using either the maximum of plausibility among the singleton hypotheses or the maximum of pignistic probability, defined as follows:


Fig. 1. Graph associated to the toy example with 5 bbas given in Table 1; node conflict values (cf. Eq. (5)) are given in italics just below the nodes; starting from node 00011, only the conflict values in black are actually computed (and the associated nodes examined) even if, for comparison, the other conflict values are plotted in grey.

exploring 8 nodes out of 26 if we start the exploration from the first null-conflict 2-node found (00011).

coefficient C(N, k), which increases exponentially for k lower than N/2. During the exploration, from a current node b, we examine its children (cardinality |b| + 1). For each child, we compute its (in)consistency measure (e.g., the conflict value of the bba subset), which enables us to order the children from the most consistent bba subset to the least consistent one. Then, the exploration continues by selecting the first child according to this ordering, provided that its value satisfies (is lower than, in the case of the conflict measure) an a priori threshold κth. Otherwise, the exploration goes back to the father of the current node (cardinality |b| − 1) and selects the next child, still according to the considered ordering measure, whose values have already been computed during the processing of the father node. If no brother of the current node satisfies the a priori threshold, we go back to the grandfather, and so on. Let us illustrate the exploration in the case of Fig. 1, which presents a toy example with one inconsistent/conflicting bba among five bbas. The discernment frame has three hypotheses, Ω = {ω1, ω2, ω3}, and the bbas are given in Table 1. At the level of the 2-nodes (i.e. nodes representing subsets of two bbas among five), there are six null values of conflict (shown in italics below the node ellipsoid). We randomly chose the first one: the exploration then starts from node 00011. This node has three children. We select the one presenting the lowest conflict value (namely 00111, having 0.0 conflict). From node 00111, the two children (01111 and 10111) present conflict values higher than κth = 0.1. Then, after coming back to the father node 00011, we select the second best child, namely 10011. Among its children, 11011 presents a conflict value (0.04) lower than κth. This child is the most consistent subset of bbas according to the conflict value criterion since, adding the fifth bba, the conflict exceeds κth.
In summary, denoting by brackets the explored nodes, the exploration path is as follows: 00011 → {00111, 01011, 10011} → 00111 → {01111, 10111} → 00111 → 00011 → 10011. At

3.2. EV-RANSAC algorithm

Algorithm 1 presents the evidential RANSAC in the case where the conflict m(∅) is used as the inconsistency measure. However, it is straightforward to change the considered measure of consistency or inconsistency. The input data are the N bbas (gathered in 𝒮 = {mi, i ∈ {1, …, N}}) defined on subsets of the discernment frame Ω, the a priori threshold on conflict κth and the maximum number of iterations lmax. The output is the set of the inlier bbas. Having initialized the graph 𝒢, the algorithm estimates b0, the starting node of the exploration path, as the first 2-node presenting a null conflict or, failing that, the 2-node presenting the lowest conflict value. From b0, the graph exploration is performed using the recursive function described by Algorithm 2. Notations are as follows. Each bba is identified by its index j ∈ {1, …, N}. A node b of 𝒢 represents the subset of bbas whose indices correspond to the positions of the 1-bits in b. For shortness of notation, the statement 'the jth bba belongs to the b subset of bbas' is denoted j ∈ b, the statement 'the jth bba does not belong to the b subset of bbas' is denoted j ∉ b, and the statement 'the jth bba has been added to the b subset of bbas' is denoted j ∪ b. Thanks to the node notation based on binary words, tests and operations on bba subsets are very simply achieved through bitwise operators. For instance, the set of the positions of the 1-bits in b (also used in Algorithm 1 to get the indices of the bbas to combine) is {n such that b & (1 ≪ n) ≠ 0}. According to Algorithm 2, from a node b given as an argument of the function, all children nodes are examined, i.e. their conflict values are computed. The next node is then the child node that minimizes the conflict, provided that this value is below the conflict threshold κth.
Otherwise, if no child presents a conflict value below κth, the next node is chosen among the brothers of b, provided that at least one of them has a conflict value in (κb, κth) (in the search of the next minimum after κb, the condition κb′ > κb ensures that only already computed values are considered, since they have been initialized at value −1). If no brother is suitable, the search of the next node continues among the brothers of the father, and so on. The recursion is stopped if the maximum number of iterations is reached (lmax ≪ 2^N − C(N, 1) − C(N, 2)) or if the node that includes all bbas is reached and appeared as a possible solution. Then, we select, as best solution, the one having the highest cardinality, provided that its computed conflict value is lower than the conflict threshold. In case of several solutions (having the same cardinality), we keep the one with the lowest conflict value. As in classic RANSAC, the choice of the threshold parameter and the number of iterations depends on the considered application.
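The overall procedure of Algorithms 1-2 can be sketched as follows. This is a simplified, iterative (rather than recursive) version under illustrative assumptions: `combine`, `node_conflict` and the depth-first bookkeeping via a `seen` set stand in for the graph structure of the authors' algorithm.

```python
# Simplified EV-RANSAC: start from the lowest-conflict 2-node, greedily
# add the bba minimizing the conflict of the combined subset, backtrack
# when no child stays below the threshold.
from collections import defaultdict
from itertools import combinations

def combine(bbas):
    """Unnormalized conjunctive combination of {frozenset: mass} bbas."""
    m = bbas[0]
    for other in bbas[1:]:
        out = defaultdict(float)
        for B, mB in m.items():
            for C, mC in other.items():
                out[B & C] += mB * mC
        m = dict(out)
    return m

def node_conflict(b, bbas):
    """Conflict (mass of the empty set) of the subset coded by word b."""
    subset = [m for i, m in enumerate(bbas) if b & (1 << i)]
    return combine(subset).get(frozenset(), 0.0)

def ev_ransac(bbas, k_th, l_max=1000):
    """Return the indices of the bbas labeled inliers."""
    N = len(bbas)
    two_nodes = [(1 << i) | (1 << j) for i, j in combinations(range(N), 2)]
    b0 = min(two_nodes, key=lambda n: node_conflict(n, bbas))
    best, stack, seen, it = b0, [b0], {b0}, 0
    while stack and it < l_max:
        it += 1
        b = stack[-1]
        # children of b (one supplementary bba) not explored yet
        cand = [(node_conflict(c, bbas), c)
                for c in (b | (1 << n) for n in range(N))
                if c != b and c not in seen]
        ok = sorted(c for c in cand if c[0] < k_th)
        if ok:
            nxt = ok[0][1]
            seen.add(nxt)
            stack.append(nxt)
            if bin(nxt).count("1") > bin(best).count("1"):
                best = nxt  # highest-cardinality consistent subset so far
        else:
            stack.pop()  # backtrack to the father
    return sorted(i for i in range(N) if best & (1 << i))
```

On the five bbas of Table 1 with κth = 0.1, this sketch labels {m1, m2, m4, m5} as inliers, in agreement with the toy example of Fig. 1.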

→ {10111, 11011} → 11011 → {11111} → 11011

this stage, either the maximum number of iterations (of the exploration path) has been reached (which is assumed here) or the exploration goes on. For the example, the conflict values not computed (nodes not explored) are plotted in grey. The most consistent subset of bbas is then {m1, m2, m4, m5}. Using the proposed approach, it was obtained after

Table 1. Bbas of the toy example presented in Fig. 1; Ω = {ω1, ω2, ω3}.

A            m1      m2      m3      m4      m5
{ω1}         0       0       0       0.7     0.3
{ω2}         0.05    0       0.8     0       0
{ω1, ω2}     0.75    0.8     0       0.3     0
{ω2, ω3}     0       0       0.2     0       0
Ω            0.2     0.2     0       0       0.65



Algorithm 1. EV-RANSAC; inputs: set of bbas 𝒮, discernment frame Ω, conflict threshold κth, maximum number of iterations lmax; outputs: set of the bbas labeled as inliers.

3.3. Example of application with 2D bbas: estimation of a straight line

To illustrate the interest of the evidential RANSAC, we consider the classical example of the estimation of a straight line from several points including inliers and outliers. Specifically, Fig. 2 shows 10 randomly drawn points: 8 of them are affected by noise around the ground-truth straight line (slope 1 and offset 0), whereas two of them are outliers. Several efficient approaches have been proposed for this problem, either from statistical estimation (e.g., M-estimators (Huber, 1964), classic RANSAC (Fischler and Bolles, 1981)) or from pattern recognition (e.g., the Hough transform (Duda and Hart, 1972)). Here, our purpose is to illustrate the sort of outcomes provided by evidential RANSAC and its ability to perform robust estimation. For a straight line parametrized as y = αx + β, two parameters, the slope α and the offset β, should be estimated. Then, the discernment frame Ω is a discretized 2D compact. From a priori knowledge (assumed for the example), we bounded Ω so that α ∈ [−1, 3] and β ∈ [−2, 2]. Then, for a discretization step equal to 0.1, |Ω| = 40 × 40. The information pieces are the 2D points (inliers and outliers). For any of these 2D points, (x0, y0), there is an infinity of straight lines passing through (x0, y0) such that β = y0 − αx0 (Duda and Hart, 1972). Then, we propose to construct the bbas as follows: the 2D focal elements generated by point (x0, y0) are consonant and have the shape of strips around the line segment β = y0 − αx0, (α, β) ∈ [−1, 3] × [−2, 2]. Besides, in order to mitigate the complexity despite the high number of combined bbas, we only consider two focal elements. Specifically, these two focal elements are as follows: the A1 width is 0.1, the A2 width is 0.3, m(A1) = 0.49 and m(A2) = 0.51. Fig. 2 presents an example of the results of the straight line estimation.
Fig. 2a shows the points in the 2D data space (x, y), and the ground truth (called true line) as well as the straight lines estimated either by the proposed EV-RANSAC or by alternative approaches: the evidential q-relaxation (Pichon et al., 2015), classic RANSAC, the M-estimator and the Hough transform. It clearly appears that our estimation is more accurate, even if the evidential q-relaxation estimation in particular, and robust approaches in general, are much better than non-robust approaches (not shown here). Fig. 2b shows an example of a bba with its two consonant focal elements in the parameter space (α, β) (discernment frame): the most committed, A1, is in blue, whereas A2 gathers the red and blue areas (A1 ⊆ A2). Fig. 2c and d show the geometric shape (discretized 2D sets) of the focal elements of the final bbas. In the case of Fig. 2c, the final bba is the result of the conjunctive combination of the bbas selected as inliers by the proposed evidential RANSAC. Even if, due to the overlapping of the focal elements, it is difficult to specify them, one can see that they are rather small, attesting that the bba is rather committed. In the case of Fig. 2d, the final bba is the result provided by the evidential q-relaxation (Pichon et al., 2015). We note that the bba obtained by evidential q-relaxation is much less specific than the one provided by EV-RANSAC (which is also more accurate according to Fig. 2a).
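The strip-shaped consonant bba built from one 2D point can be sketched as follows. The widths and masses are the values quoted above; the distance measure (vertical offset in β to the line β = y0 − αx0) and the interpretation of "width" as full strip width are illustrative assumptions.

```python
# Consonant two-focal-element bba of a point (x0, y0) on the discretized
# (alpha, beta) parameter space: each focal element is a boolean mask.
import numpy as np

def point_bba(x0, y0, a_rng=(-1.0, 3.0), b_rng=(-2.0, 2.0), step=0.1):
    alphas = np.arange(a_rng[0], a_rng[1], step)
    betas = np.arange(b_rng[0], b_rng[1], step)
    A, B = np.meshgrid(alphas, betas, indexing="ij")
    dist = np.abs(B - (y0 - A * x0))   # offset to the strip centre line
    A1 = dist <= 0.1 / 2               # narrow strip, m(A1) = 0.49
    A2 = dist <= 0.3 / 2               # wide strip,   m(A2) = 0.51
    return {"A1": (A1, 0.49), "A2": (A2, 0.51)}
```

By construction A1 ⊆ A2, so the bba is consonant, and conjunctively combining the bbas of the inlier points concentrates the mass around the true (α, β).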

Algorithm 2. Function RecursiveEvRANSAC; parameters: graph 𝒢, bba set 𝒮, discernment frame Ω, conflict threshold κth, current node b, current iteration l, maximum number of iterations lmax.

4. Evidential localization

4.1. Evidential formulation of the GNSS localization problem

GNSS localization consists in estimating the receiver position based on the measured travel time of the GNSS signals between satellite Si and the receiver. For a receiver located at position Xr = (er, nr, ur) and a satellite Si located at position XSi = (eSi, nSi, uSi) in the ENU (East North Up) frame, the pseudo-range (PR) ρi writes:

ρi = ‖Xr − XSi‖ + cδt + ϵae + ϵmp
   = √((er − eSi)² + (nr − nSi)² + (ur − uSi)²) + cδt + ϵae + ϵmp.   (7)

In Eq. (7), δt is the time bias (difference) between the satellite clock


Fig. 2. Comparison between the solutions provided respectively by EV-RANSAC, evidential q-relaxation, classic RANSAC, M-estimator and Hough transform, in the case of the straight line example (y = αx + β) with 10 points including 2 outliers: (a) dataset and straight line estimations, (b) focal elements of the bba associated to the 4th point, in the parameter space (α, β), (c) solution provided by EV-RANSAC, (d) solution provided by evidential q-relaxation.

and the receiver clock, c is the speed of light, and ϵae and ϵmp are random realizations of independent centered Gaussian noises that represent the travel time errors due either to atmospheric and electronic noise or to multipaths. In the absence of noise (ϵae = ϵmp = 0), four observations are required to estimate the 3D position Xr and δt. However, due to non-null noise, more than four satellites are usually considered. According to the least-mean-square criterion, the optimal value X̂r minimizes the quadratic error: X̂r = argmin_{Xr} ∑i [ρ̃i(Xr) − ρi]², where ρi denotes the measurement acquired from satellite i, and ρ̃i is the estimated PR assuming a position Xr and a clock bias δt: ρ̃i(Xr) = ‖Xr − XSi‖ + cδt. The solution can be obtained using the iterative Gauss-Newton algorithm (Kaplan, 2006). In the following, we focus on the localization of land receivers. Then, using a Digital Elevation Model (DEM), the altitude (Up coordinate) is a function of the two land coordinates East and North (er, nr) that varies slowly, so that the altitude at a given epoch can be well approximated by the altitude at the previous epoch(s). Besides, like Chang et al. (2009), we assume the clock bias follows a linear model between regular jumps. Then, it is estimated by a simple Kalman filter using a linear model with constant drift. Therefore, the proposed evidential modeling only deals with the East and North coordinates, represented in the vector xr = (er, nr), and the estimated PR versus xr writes:

∼ ρi (xr ) =

(er − eSi )2 + (nr − n Si )2 + γi + ζt ,

The proposed evidential modeling of the PR observation is then as follows. The discernment frame is a discretized compact of xr space. Assuming a size equal to (w, h ) and the same discretization step R along w×h the East and North coordinates, Ω = 2 . In the following, an R elementary cell of Ω (i.e. a singleton hypothesis of 2Ω) is denoted Hj, j ∈ {0, …, Ω }. Any PR observation ρi induces a belief about the receiver position formalized via a bba having two focal elements A1i and A2i i ⎧ ∼ ρi (Hj ) − ρi ≤ σ , ⎪ A1 if ∀ j{0, …, Ω }, Hj ∈ ⎨ i ∼ ⎪ ⎩ A2 if ρi (Hj ) − ρi ≤ σ + η.

Index i in Eq. (10) refers to the PR observation so that, denoting by NPR the number of PR measurements at the considered epoch, i ∈ {1, …, NPR}. Like in the straight line example, the initial bbas are consonant with only two focal elements representing different levels of imprecision (André et al., 2015). Having two focal elements allows us to refine the IA modeling that would consider only one imprecision level per observation. The imprecision represented by these focal elements may be associated to different physical phenomena corrupting the PR observations: e.g., σ represents the noise (e.g. atmospheric, electronic) and η represents the effect of the multipath reception. However, being rather an interpretation of the origin of the imprecision on the data, we cannot relate specifically σ and η to ϵae and ϵmp (Eq. (7)) and we recommend these parameters be fitted.

(8)

where γi = (u∼r − u Si ) and ζt = cδt . From Eq. (4.1), we can define a 2D area such that the included 2D points xr are consistent with the observation ρi taking into account the imprecision ϵ of the measurement(s): 2

xr ∈ ( if ∼ ρi (xr ) − ρi ≤ ϵ.

(10)
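A minimal sketch of the construction of Eqs. (8)-(10), with a hypothetical satellite geometry, frame size and noise widths (σ, η) chosen only for illustration:

```python
import numpy as np

# Discretized discernment frame: a (w x h) compact with step R along
# East/North, so |Omega| = (w*h)/R**2 cells (toy values, not the paper's).
w = h = 80.0
R = 2.0
east = np.arange(0.0, w, R) + R / 2.0   # cell centroids
north = np.arange(0.0, h, R) + R / 2.0
E, N = np.meshgrid(east, north, indexing="ij")

def observation_bba(rho_i, sat_en, gamma_i, zeta_t, sigma, eta):
    """Two focal elements (Eq. (10)) induced by pseudo-range rho_i:
    A1 gathers the cells consistent with rho_i up to sigma, A2 up to
    sigma + eta, so the bba is consonant (A1 included in A2)."""
    e_s, n_s = sat_en
    rho_est = np.sqrt((E - e_s) ** 2 + (N - n_s) ** 2 + gamma_i) + zeta_t  # Eq. (8)
    err = np.abs(rho_est - rho_i)
    return err <= sigma, err <= sigma + eta   # boolean masks over Omega

# Hypothetical geometry: receiver near (40, 40), satellite ground-projected
# at (500, 400) with a 300 m altitude difference, clock term zeta_t = 10 m.
rho_true = np.sqrt((40.0 - 500.0) ** 2 + (40.0 - 400.0) ** 2 + 300.0 ** 2) + 10.0
A1, A2 = observation_bba(rho_i=rho_true + 2.0,          # 2 m measurement noise
                         sat_en=(500.0, 400.0), gamma_i=300.0 ** 2,
                         zeta_t=10.0, sigma=6.0, eta=9.0)
# A1 is non-empty (the true cell is consistent) and A1 is a subset of A2.
```

In the actual algorithm, the boolean masks would be summarized as unions of boxes before combination; the sketch only illustrates how one PR observation carves its two nested focal elements out of the frame.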

4.2. Evidential filtering

Temporal filtering involves two main steps: the prediction step that projects the system state to the next epoch and the estimation step that


uses the observations to correct the predicted state. These two steps are also present in the proposed evidential filtering, except that they handle bbas instead of state vectors. They are processed at each epoch.


4.2.1. Prediction step

In the dynamic case, the receiver moves over time. The prediction step thus aims at predicting its position at the next epoch, t + 1, knowing its current position (at t) and assuming a displacement model (possibly requiring other features of the receiver, such as its velocity). In addition to the position, the prediction step also aims at updating the imprecision of the location, which increases due to the uncertainty introduced by the approximations of the displacement model, as classically modeled in statistical filtering, e.g., the Kalman filter. In our case, the prediction step displaces the focal elements of the bba. The displacement, which is the same for all the focal elements, only depends on the displacement model; in this study, the latter is simply a linear function of the velocity vector. As an alternative to estimation by finite differences, in the case of GNSS localization, the velocity vector may be estimated using Doppler measurements (Mao et al., 2002; Zair et al., 2016b). After having been displaced, each focal element undergoes an isotropic dilation by the dilation operator of mathematical morphology (Serra, 1982; Najman and Talbot, 2013). This dilation involves one parameter, the radius of the disk used as structuring element, which corresponds to the maximum displacement of the receiver between two epochs. Finally, note that the prediction step does not increase the complexity of the bba since the number of focal elements does not change. In the following, we call mP the bba resulting from the prediction step.
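The prediction step can be sketched as follows for focal elements simplified to single axis-aligned boxes (the paper handles unions of boxes; the velocity, time step and dilation radius below are illustrative, and dilation by a disk is approximated by enlarging each side of the box by the radius, ignoring the rounded corners):

```python
def predict_focal_element(box, velocity, dt, r_max):
    """Prediction of one focal element kept as a box (e_min, e_max, n_min, n_max):
    1) translate it according to the linear displacement model (velocity * dt),
       the velocity being estimable, e.g., from Doppler measurements;
    2) dilate it isotropically by r_max (morphological dilation by a disk,
       here approximated for an axis-aligned box)."""
    e_min, e_max, n_min, n_max = box
    de, dn = velocity[0] * dt, velocity[1] * dt
    return (e_min + de - r_max, e_max + de + r_max,
            n_min + dn - r_max, n_max + dn + r_max)

# The predicted bba m_P keeps the same masses: focal elements only move and grow,
# so the complexity of the bba is unchanged (toy boxes and masses below).
m_t = {(10.0, 16.0, 20.0, 26.0): 0.7, (8.0, 18.0, 18.0, 28.0): 0.3}
m_P = {predict_focal_element(box, velocity=(2.0, -1.0), dt=0.5, r_max=1.5): m
       for box, m in m_t.items()}
```

Note that the same translation is applied to every focal element, as in the text; only the dilation inflates the imprecision.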

4.2.2. Estimation step

This step aims at taking into account the observations at the considered epoch. In our case, the observations have generated as many bbas (each one having two focal elements, cf. Section 4.1). The evidential RANSAC (cf. Section 3.2) applied to the set of these elementary bbas allows us to determine the subset of inlier bbas. In the absence of supplementary information, we have no reason to privilege one or another among these inlier bbas, and they are cognitively independent (derived from PR observations provided by different satellites at a given epoch). Thus, the inlier bbas are combined according to the conjunctive combination rule (Eq. (1)) to derive the global observation bba, mO, at the considered epoch. Concerning the combination of mO and mP, two rules have been considered. Apart from the conjunctive combination rule (Eq. (1)), which is commutative, the revision rule proposed by Ma et al. (2011) allows for an asymmetric fusion process. This non-commutative rule allows for the updating of an a priori bba according to a new bba, on the model of conditional probability in probability theory. In our case, where the a priori bba is mP and the new one is mO, it writes as follows:

(mP ○r mO)(C) = Σ_{A∩B=C} σr(A, B) mO(B),   (11)

where

σr(A, B) = mP(A)/PlP(B)  if PlP(B) > 0,
σr(A, B) = 0             if PlP(B) = 0 and A ≠ B,
σr(A, B) = 1             if PlP(B) = 0 and A = B.   (12)

At the end of the estimation step, we obtain a new bba m̃t. Before the next iteration, two processes are applied to m̃t to simplify it. The first one is the simplification mentioned in Section 2 (which aims at removing overlapping boxes in the geometric representation of a focal element). The second one is the bba approximation (e.g., by iterative aggregation according to the Eq. (4) criterion). Fig. 3 provides a global scheme of the evidential localization method.

5. Experiments and results

5.1. Experimental data

To evaluate our approach, we consider two experiments with their datasets. The first experiment was performed in an urban canyon (La Défense, Paris, France) in static conditions. The receiver was surrounded by tall buildings, so that the probability of multipath and NLOS receptions is high. The GPS receiver remained static during 40 min, acquiring signals from four to seven satellites. The acquired dataset consists of raw data (PR measurements) acquired with a UBLOX EVK-5T GPS receiver. The precision of the receiver is 5 m according to the factory specification and its acquisition frequency is 1 Hz. The second experiment consists of a trajectory of 5 km in a constrained environment presenting NLOS signals and multipath phenomena (in an urban area) and some blockages of the satellite signal (in a forest area) that reduce the data redundancy. Two datasets were acquired simultaneously by two different GPS receivers: the UBLOX EVK-5T and a high-cost GPS, namely an RTK (Real Time Kinematic) ALTUS APS3 that has a 1 cm precision. This high-precision GPS allows us to establish the trajectory referred to as the ground truth. For this experiment, the acquisition frequency was set to 2 Hz for both GPS receivers.

5.2. Performance metrics

The evaluation of the performance of the evidential localization is not trivial since the obtained result comes in the form of set(s) (bba focal elements) while the ground truth location is punctual. The first solution is to convert the bba result into a distribution of punctual locations. From the final bba estimated at epoch t, the BetP function (Eq. (6)) allows us to estimate a probability distribution function (pdf) of the receiver 2D locations. These latter take discrete values corresponding to the centroids of the elementary boxes (singleton hypotheses) of Ω. Then, let ϵ(Hi, x̂r) denote the difference between the centroid of Hi and the ground truth x̂r. From the BetP pdf and the ϵ(Hi, x̂r) function, two kinds of error measurements are computed:

Ep = Σ_{i=1}^{|Ω|} [(ϵ(Hi, x̂r))^p]^{1/p} BetP(Hi),   (13)

The first one, E1, allows us to evaluate whether the location estimation is biased. It is written as the statistical expectation of the ϵ(Hi, x̂r) values and corresponds to p = 1 in Eq. (13). The second error measurement, E2, allows us to evaluate the imprecision of the solution (since errors do not compensate each other). It is written as the statistical expectation of the L1 norm of the ϵ(Hi, x̂r) values and corresponds to p = 2 in Eq. (13).

In this study, we propose a second performance metric that also

Fig. 3. Synopsis of the GNSS-only evidential localization algorithm.


takes into account both the belief in a set and its geometry. Recall that IA evaluates its results in terms of guarantee of the solution (meaning that the solution surely includes the ground truth), which prevents providing any punctual estimation of the receiver location. The following criterion allows us to quantify the location accuracy versus the guarantee of the solution:

C(λ) = Σ_{A∈2^Ω} (|A| + λ × 1_{x̂r∈A}) m(A)/(1 − m(∅)),   (14)

where 1_{x̂r∈A} is the indicator function of the belonging of x̂r to focal element A:

1_{x̂r∈A} = 0 if x̂r ∈ A,   1_{x̂r∈A} = 1 if x̂r ∉ A,   (15)

and λ is a penalty factor that applies if the ground truth location is not included in focal element A. If λ = 0, only the bba precision is considered in the performance estimation, regardless of the actual receiver position (ground truth). If λ → ∞, it becomes mandatory (hard constraint) that every focal element includes the ground truth. The proposed criterion (Eq. (14)) also applies to the IA solution (a categorical bba having only one focal element) with λ ≫ 0 to reflect the requirement of a guaranteed solution.

5.3. Results of static experiment

The static experiment analysis has three purposes. Firstly, it allows us to fit the algorithm parameters. Secondly, we derive a first evaluation of the proposed approach (evidential localization using the EV-RANSAC algorithm). Thirdly, we compare it to alternative approaches managing outliers and imprecision, namely the approach proposed by Pichon et al. (2015) and IA also using q-relaxation. First of all, let us specify that, since the receiver is static, there is no trajectory to filter, so that, in this dataset, the estimation of the receiver location is performed independently at each epoch; each time sample is thus an independent test sample to compute performance statistics.

Concerning the parameter fitting, in Algorithm 1, we set κth = 0.2 and lmax = min(3N, N + 2N − 3), where N is the number of visible satellites (generally between 7 and 11) at the considered epoch. Having determined the subset of inlier bbas, these latter are combined conjunctively (Eq. (1)) to obtain the bba representing the receiver location. We do not consider the revision rule (Eq. (11)) since the different epochs are considered independently and the receiver location is estimated only from mO. Then, the main parameter that is not already fixed is the width of the initial focal elements. The presented results will allow for the comparison of three pairs of values for the widths of the two focal elements of each observation bba.

Concerning the alternative approaches, in Pichon et al. (2015), the thresholding is performed on the consistency measure ϕmin; here, ϕmin = 0.8. For the IA q-relaxation, according to the GOMNE (Guaranteed Outlier Minimal Number Estimator) method (Jaulin et al., 1996), the number of relaxed constraints should be the minimum such that there is a non-empty solution. However, because of possible remaining outliers, such an approach does not guarantee that the obtained solution is correct. Then, following Jaulin et al. (2002), the number of constraints to relax has been increased to q = qmin + r, with qmin the GOMNE number of constraints and r a supplementary margin equal to 1. In the presented results, the initial width of the interval is varied among the values considered for the largest focal element used in the evidential approaches (either Pichon et al. (2015) or the proposed one). Finally, we recall that, in every case (IA or evidential approaches), only the East and North coordinates are estimated (2D state vector).

Table 2 shows the achieved errors for the considered localization methods (evidential RANSAC, evidential q-relaxation, IA q-relaxation) and for different widths of the focal elements A1 and A2 (i.e., assuming different levels of noise on the PR measurements). It allows us to evaluate the robustness of the approaches to these parameters. Table 2 presents the E1 and E2 errors along the East and North coordinates corresponding to the 50th (median) and 90th percentiles. The least biased solution corresponds to the minimum absolute value of E1; it varies with the considered coordinate and percentile. However, considering the sum of the absolute values of the bias along each coordinate, we note that the evidential q-relaxation provides interesting results. Concerning the E2 criterion, the proposed approach provides the best performance: the best results are achieved for width values equal to (6, 15), but the approach also outperforms the alternative ones for the other tested widths. This good result can be explained by the fact that, using the conjunctive combination of the inlier bbas, the result is much more committed than with q-relaxation (either evidential or classic), which was already visible in the case of the straight line example. Fig. 4a presents the cumulative distribution function of the 2D E2 errors, equal to the norm of the vector of (East, North) E2 errors. The best performance of the evidential RANSAC approach is confirmed whatever the considered percentile. We also note that the evidential RANSAC performance seems much more robust to the width parameters than the alternative approaches. Finally, we note that the best results correspond to assumed noise levels equal to 6 m and 15 m, which are rather classic values used in GPS localization processes.

Concerning the second performance criterion (Eq. (14)), Fig. 5a shows the median value of C(λ) versus λ for the considered localization methods and widths of focal elements or intervals. We note that the ordering (in terms of performance) between the different methods varies with λ. Classic q-relaxation offers the poorest performance for low values of λ but outperforms the other methods for high values of λ. Conversely, EV-RANSAC provides the best results for low values of λ and the worst for high values of λ. Evidential IA is intermediate. Concerning the widths of the focal elements, Fig. 5a clearly confirms that the values (6, 15) provide the best results for the three methods.

Table 2
Statistical performance versus the method and the width of the focal elements (FE) or interval in IA. Statistical performance is measured through the E1 and E2 errors, whose median (50%) and 90th percentile values are given for the East and North directions in meters.

Method (width param.)        | E1 50%       | E1 90%      | E2 50%       | E2 90%
EV-RANSAC (4, 20)            | (−4.8, −2.0) | (4.7, 5.2)  | (9.4, 9.6)   | (13.3, 17.1)
EV-RANSAC (6, 15)            | (−5.8, −1.2) | (1.1, 4.6)  | (8.1, 7.4)   | (12.2, 11.5)
EV-RANSAC (6, 20)            | (−5.3, −1.4) | (0.3, 5.0)  | (8.8, 9.0)   | (13.0, 12.3)
Evidential q-relax. (4, 20)  | (−5.9, −0.9) | (1.2, 6.2)  | (14.7, 17.8) | (19.0, 37.5)
Evidential q-relax. (6, 15)  | (−5.3, −1.1) | (−0.3, 3.8) | (10.6, 12.1) | (12.7, 16.2)
Evidential q-relax. (6, 20)  | (−6.1, −0.5) | (−0.2, 5.5) | (13.6, 15.7) | (16.4, 22.7)
IA q-relax. (15)             | (−7.5, 0.0)  | (−1.7, 5.0) | (10.0, 8.7)  | (15.0, 10.7)
IA q-relax. (20)             | (−8.3, 0.0)  | (−2.6, 5.5) | (13.0, 11.2) | (18.3, 13.4)

5.4. Results of dynamic experiment

In the case of the dynamic experiment, we aim at deriving an evaluation of the global algorithm proposed for evidential localization (including EV-RANSAC). The parameters of Algorithm 1 have the same values as in the static experiment (in particular, κth = 0.2) and, for the evidential localization, the whole discernment frame has |Ω| = (80 × 80)/(2 × 2) = 1600 singleton hypotheses (elementary cells), with the widths of the two focal elements of the observation bbas equal to 6 m and 15 m, respectively, corresponding to the best performance in the static case. The number of focal elements kept by the approximation step of the evidential filtering was set to four. This dynamic experiment is used to compare different versions of the proposed evidential filtering with each other and to the classic Interval Analysis. The different versions of the evidential filtering correspond to three combination rules between mP and mO (Section 4.2, Fig. 3): conjunctive combination, revision rule applied from mP


Fig. 4. Cumulative distribution function (CDF) of the 2D error: (a) static experiment, allowing comparison between 3 set-based approaches (proposed EV-RANSAC, evidential IA (Pichon et al., 2015) and classic IA); (b) dynamic experiment, allowing comparison between different versions of the proposed evidential localization, classic IA and the a-contrario approach (Zair et al., 2016a).

Fig. 5. Performance according to the proposed criterion (Eq. (14)) versus parameter λ: (a) static experiment, allowing comparison between 3 set-based approaches (proposed EV-RANSAC, evidential IA (Pichon et al., 2015) and classic IA); (b) dynamic experiment, allowing comparison between different versions of the proposed evidential localization and classic IA.

and revision rule applied from mO. In addition, we can also choose not to consider mP (the epochs are then independent, like in the static case), so that mO is the bba representing the receiver location.

First, we analyze the obtained results in terms of precision of localization. Table 3 shows the values achieved by the median and 90th percentile for the E1 and E2 criteria. Surprisingly, we note that the best results are obtained when only considering the observations (no prediction and filtering), whereas the second best result is achieved by the revision rule. An explanation may be that the summarization, by reducing the number of focal elements to four, is too drastic. Nevertheless, it is a compromise to reduce sufficiently the computational load. We also note that the results are rather good (actually better than in the static experiment), which can be explained by the fact that the environment is less constrained: even in the urban area, the buildings are less tall than at La Défense, which is a business district with towers between 50 m and 200 m high. Besides, the results are closer to each other than in the static case.

Considering the precision of localization, we are also able to compare the evidential approach to probabilistic ones. Specifically, we focus on the a-contrario approach proposed in Zair et al. (2016a) as a recent and efficient method representing the state of the art of this kind of approaches. Fig. 4b presents the cumulative distribution function of the 2D E2 errors. It confirms the trends deduced from Table 3. We also note that, in terms of precision, the proposed approach is intermediate between the approach of Zair et al. (2016a) and Interval Analysis. Such a result is actually not surprising: having a set representation of the solution, evidential localization was not designed to provide a precise punctual localization. It also reaffirms the need for different criteria to evaluate the interest of set-based approaches.

Fig. 5b shows the performance in terms of the proposed criterion measuring location accuracy versus guarantee of the solution (Eq. (14)). The curves represent the median value of C(λ) versus λ for each of the compared approaches. We note that the performance of the Interval Analysis varies only very little: in most cases, the final set obtained as localization result contains the ground truth, so that the λ penalty has no effect on the performance. In contrast, the performance of the evidential approaches varies significantly with λ, reflecting the fact that some small focal elements with significant masses do not contain the ground truth (by increasing the λ value, we increasingly penalize the non-inclusion of the ground truth in the focal elements of the considered solution). We note that the good performance of mO in terms of localization precision is undermined when also considering the guarantee of the solution. Then, the best results are achieved considering the revision rule mP ○r mO, which is also consistent with Table 1. We deduce that the evidential filtering has improved the accuracy of the focal elements and has reduced the mass of those that do not include the ground truth.

In summary, it clearly appears that the interest of the proposed evidential filtering lies in the compromise between precision and guarantee of the solution. It offers a more committed and precise solution than the Interval Analysis, while still providing an estimation of the imprecision via sets. The obtained set representation (of the receiver location) is both more flexible than uncertainty ellipsoids and more informative than the boxes of IA.
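The accuracy-versus-guarantee trade-off of Eq. (14) can be sketched on a toy bba with box focal elements (this is our own reading of the criterion: expected focal-element area plus a penalty λ whenever a focal element misses the ground truth, with masses normalized by 1 − m(∅)):

```python
def criterion(bba, ground_truth, lam):
    """C(lambda), Eq. (14): small focal elements are rewarded through the
    area term; focal elements missing the ground truth pay the penalty lam.
    Focal elements are axis-aligned boxes (e_min, e_max, n_min, n_max);
    None stands for the empty set."""
    e, n = ground_truth
    m_empty = bba.get(None, 0.0)
    total = 0.0
    for box, mass in bba.items():
        if box is None:
            continue
        e0, e1, n0, n1 = box
        area = (e1 - e0) * (n1 - n0)
        inside = e0 <= e <= e1 and n0 <= n <= n1
        total += (area + (0.0 if inside else lam)) * mass / (1.0 - m_empty)
    return total

# Toy bba: one box containing the ground truth, one small box missing it,
# and some conflict mass on the empty set (values are illustrative).
bba = {(0.0, 4.0, 0.0, 4.0): 0.6,
       (10.0, 12.0, 10.0, 12.0): 0.3,
       None: 0.1}
print(criterion(bba, ground_truth=(1.0, 1.0), lam=0.0))    # area term only
print(criterion(bba, ground_truth=(1.0, 1.0), lam=100.0))  # penalized
```

With λ = 0 the small off-target box looks attractive; as λ grows, its mass is increasingly penalized, reproducing the behavior discussed for Fig. 5.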


Table 3
Statistical performance versus the version of the proposed method or IA. Statistical performance is measured through the E1 and E2 errors, whose median (50%) and 90th percentile values are given for the East and North directions in meters.

Method       | E1 50%     | E1 90%     | E2 50%     | E2 90%
mP ∩ mO      | (3.6, 0.8) | (6.3, 5.9) | (5.4, 7.0) | (7.3, 10.0)
mP ○r mO     | (3.5, 0.7) | (6.3, 5.8) | (5.4, 6.9) | (7.2, 10.1)
mO ○r mP     | (3.5, 0.7) | (6.4, 5.6) | (5.4, 6.9) | (7.2, 10.0)
mO           | (3.1, 0.0) | (6.1, 4.7) | (5.1, 6.0) | (7.2, 9.4)
IA q-relax.  | (3.3, 1.1) | (5.3, 4.0) | (7.6, 9.0) | (9.6, 11.8)

6. Conclusion

In this paper, we have proposed a localization algorithm that uses only GNSS pseudo-range observations and is robust to the presence of outlier measurements. It exploits the belief function framework, which allows us to model different levels of imprecision in the pseudo-range measurements. A new algorithm has been proposed to detect the outliers; it is inspired by the RANSAC principles but benefits from the evidential modeling, in particular the measure of disagreement between sources. Specifically, it handles belief functions (bbas) and explores the graph of the subsets of bbas to find the most consensual subset. Then, only the inlier bbas are combined to form the new observation bba that, according to the filtering principle, is combined with the bba predicted from previous epochs. Our method is compared to the widely used q-relaxation of Interval Analysis and to the evidential q-relaxation recently proposed by Pichon et al. (2015). We show that our approach provides a more committed bba than both q-relaxation approaches. The results of the two performed experiments show that the proposed approach outperforms the q-relaxation approaches by providing a more specific solution that is nevertheless guaranteed.

Future work will focus on the processing of the outliers. In the proposed approach, they are simply discarded from the observation dataset. However, to save the data, which could be sparse in case of blockage of some satellites, we aim at analyzing further the origin of the conflict associated with each outlier. For this, we could decompose the conflict on the different 2D hypotheses (Roquel et al., 2014). Our second perspective is to apply EV-RANSAC to other applications that are also subject to the presence of outliers. For instance, we aim at evaluating its interest for matching between key points in image processing. Such a problem has numerous applications: object recognition, object tracking and/or visual odometry, etc. It will be interesting since this kind of problem raises new challenges in terms of the number of bbas to handle and in terms of the dimensionality of the graph used in EV-RANSAC.

References

André, C., le Hégarat-Mascle, S., Reynaud, R., 2015. Evidential framework for data fusion in a multi-sensor surveillance system. Eng. Appl. Artif. Intell. 43, 166–180.
Brown, R.G., 1992. A baseline GPS RAIM scheme and a note on the equivalence of three RAIM methods. Navigation 39 (3), 301–316.
Chang, T.-H., Wang, L.-S., Chang, F.-R., 2009. A solution to the ill-conditioned GPS positioning problem in an urban environment. IEEE Trans. Intell. Transp. Syst. 10 (1), 135–145.
Cheng, C., Tourneret, J.-Y., Pan, Q., Calmettes, V., 2016. Detecting, estimating and correcting multipath biases affecting GNSS signals using a marginalized likelihood ratio-based method. Signal Process. 118, 221–234.
Denoeux, T., 2001. Inner and outer approximation of belief structures using a hierarchical clustering approach. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 9 (04), 437–460.
Destercke, S., Burger, T., 2013. Toward an axiomatic definition of conflict between belief functions. IEEE Trans. Cybern. 43 (2), 585–596.
Drevelle, V., Bonnifait, P., 2011. A set-membership approach for high integrity height-aided satellite positioning. GPS Solut. 15 (4), 357–368.
Dubois, D., Prade, H., 1990. Consonant approximations of belief functions. Int. J. Approx. Reason. 4 (5–6), 419–449.
Duda, R.O., Hart, P.E., 1972. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 15 (1), 11–15.
Fischler, M.A., Bolles, R.C., 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24 (6), 381–395.
Georgy, J., Karamat, T., Iqbal, U., Noureldin, A., 2011. Enhanced MEMS-IMU/odometer/GPS integration using mixture particle filter. GPS Solut. 15 (3), 239–252.
Giremus, A., Tourneret, J.-Y., Calmettes, V., 2007. A particle filtering approach for joint detection/estimation of multipath effects on GPS measurements. IEEE Trans. Signal Process. 55 (4), 1275–1285.
Harmanec, D., 1999. Faithful approximations of belief functions. In: Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., pp. 271–278.
Huber, P.J., 1964. Robust estimation of a location parameter. Ann. Math. Stat. 35 (1), 73–101.
Jaulin, L., Kieffer, M., Didrit, O., Walter, E., 2001. Applied Interval Analysis with Examples in Parameter and State Estimation, Robust Control and Robotics. Springer London Ltd, UK.
Jaulin, L., Kieffer, M., Walter, E., Meizel, D., 2002. Guaranteed robust nonlinear estimation with application to robot localization. IEEE Trans. Syst. Man Cybern. Part C: Appl. Rev. 32 (4), 374–381.
Jaulin, L., Walter, E., Didrit, O., 1996. Guaranteed robust nonlinear parameter bounding. In: Proceedings of the CESA'96 IMACS Multiconference (Symposium on Modelling, Analysis and Simulation), vol. 2, pp. 1156–1161.
Jousselme, A.-L., Grenier, D., Bossé, É., 2001. A new distance between two bodies of evidence. Inf. Fusion 2 (2), 91–101.
Kaplan, E.D., Hegarty, C.J., 2006. Understanding GPS: Principles and Applications. Artech House, Boston.
Le Marchand, O., Bonnifait, P., Baez-Guzman, J., Peyret, F., Betaille, D., 2008. Performance evaluation of fault detection algorithms as applied to automotive localisation. In: Proceedings of the European Navigation Conference-GNSS 2008.
Lefevre, E., Colot, O., Vannoorenberghe, P., 2002. Belief function combination and conflict management. Inf. Fusion 3 (2), 149–162.
Lowrance, J.D., Garvey, T.D., Strat, T.M., 1986. A framework for evidential-reasoning systems. In: Proceedings of the 5th National Conference on Artificial Intelligence (AAAI-86), pp. 896–901.
Lu, W., Seignez, E., Rodriguez, F.A., Reynaud, R., 2014. Lane marking based vehicle localization using particle filter and multi-kernel estimation. In: Proceedings of the 13th International Conference ICARCV, pp. 601–606.
Ma, J., Liu, W., Dubois, D., Prade, H., 2011. Bridging Jeffrey's rule, AGM revision and Dempster conditioning in the theory of evidence. Int. J. Artif. Intell. Tools 20 (04), 691–720.
Mao, X., Wada, M., Hashimoto, H., 2002. Nonlinear filtering algorithms for GPS using pseudorange and Doppler shift measurements. In: Proceedings of the 5th International Conference on Intelligent Transportation Systems. IEEE, pp. 914–919.
Miura, S., Kamijo, S., 2014. GPS error correction by multipath adaptation. Int. J. Intell. Transp. Syst. Res., 1–8.
Najman, L., Talbot, H., 2013. Introduction to mathematical morphology. In: Najman, L., Talbot, H. (Eds.), Mathematical Morphology: From Theory to Applications. John Wiley & Sons, Inc., Hoboken, NJ, USA. http://dx.doi.org/10.1002/9781118600788.ch1.
Nassreddine, G., Abdallah, F., Denoeux, T., 2010. State estimation using interval analysis and belief function theory: application to dynamic vehicle localization. IEEE Trans. Syst. Man Cybern. Part B: Cybern. 40 (5), 1205–1218.
Pichon, F., Destercke, S., Burger, T., 2015. A consistency-specificity trade-off to select source behavior in information fusion. IEEE Trans. Cybern. 45 (4), 598–609.
Rabaoui, A., Viandier, N., Duflos, E., Marais, J., Vanheeghe, P., 2012. Dirichlet process mixtures for density estimation in dynamic nonlinear modeling: application to GPS positioning in urban canyons. IEEE Trans. Signal Process. 60 (4), 1638–1655.
Rekik, W., le Hégarat-Mascle, S., Reynaud, R., Kallel, A., ben Hamida, A., 2015. Dynamic estimation of the discernment frame in belief function theory: application to object detection. Inf. Sci. 306, 132–149.
Roquel, A., le Hégarat-Mascle, S., Bloch, I., Vincke, B., 2014. Decomposition of conflict as a distribution on hypotheses in the framework of belief functions. Int. J. Approx. Reason. 55 (5), 1129–1146.
Schroth, G., Ene, A., Blanch, J., Walter, T., Enge, P., 2008. Failure detection and exclusion via range consensus. In: Proceedings of the European Navigation Conference.
Seignez, E., Kieffer, M., Lambert, A., Walter, E., Maurin, T., 2009. Real-time bounded-error state estimation for vehicle tracking. Int. J. Robot. Res. 28 (1), 34–48.
Serra, J., 1982. Image Analysis and Mathematical Morphology, vol. 1. Academic Press.
Shafer, G., 1976. A Mathematical Theory of Evidence. Princeton University Press, Princeton.
Skog, I., Handel, P., 2009. In-car positioning and navigation technologies: a survey. IEEE Trans. Intell. Transp. Syst. 10 (1), 4–21.
Won, D.H., Lee, E., Heo, M., Sung, S., Lee, J., Lee, Y.J., 2014. GNSS integration with vision-based navigation for low GNSS visibility conditions. GPS Solut. 18 (2), 177–187.
Zair, S., le Hégarat-Mascle, S., Seignez, E., 2016a. A-contrario modeling for robust localization using raw GNSS data. IEEE Trans. Intell. Transp. Syst. 17 (5), 1354–1367.
Zair, S., le Hégarat-Mascle, S., Seignez, E., 2016b. Outlier detection in GNSS pseudo-range/Doppler measurements for robust localization. Sensors 16 (4), 580.
