A hidden semi-Markov model for indoor radio source localization using received signal strength


Signal Processing 166 (2020) 107230


Shuai Sun a,*, Xuezhi Wang a, Bill Moran b, Wayne S.T. Rowe a

a School of Engineering, RMIT University, Melbourne 3001, Australia
b Department of Electrical and Electronic Engineering, University of Melbourne, Parkville VIC 3010, Australia
* Corresponding author. E-mail address: [email protected] (S. Sun).

Article history: Received 30 January 2019; Revised 18 June 2019; Accepted 20 July 2019; Available online 23 July 2019.

Keywords: Localization; Hidden Markov model (HMM); Hidden semi-Markov model (HsMM); Missing measurement.

Abstract

Multipath propagation makes the use of received signal strength (RSS) unreliable as a signal propagation model for localization of a radio source based on RSS data. An approach to mitigating this problem is the use of a hidden Markov model (HMM) to represent the relationship between RSS and the radio source location by incorporating an environment prior and RSS source dynamics. The HMM structure forces a geometric form for the distribution of the sojourn time. This, combined with missing data problems, reduces confidence in location estimation. It is found, in this paper, that hidden semi-Markov models (HsMMs), with a more flexible sojourn time distribution, are better able to represent source dynamics while retaining the advantages of HMMs for environmental constraints and improving resilience to missing measurements. The performance of the proposed HsMM algorithm is compared with a standard HMM via experiments and simulations involving indoor radio source localization using RSS data. The simulations use a ray-tracing software-based simulator, and in the experiments the RSS data for transmitter localization are collected by software defined radios.

1. Introduction

Localization techniques using received signal strength (RSS) data remain popular because of the ease of hardware implementation at low cost [1], by comparison with time-based localization techniques, such as time of arrival or time difference of arrival methods. The latter require system synchronization at either both ends or at the (multiple) receiver end [2]. Unfortunately, RSS data is notoriously unpredictable in practice and usually exhibits large fluctuations, particularly in indoor environments, because of multipath interference and shadowing [3], making RSS-based localization sensitive to subtle environmental changes. Finding a signal processing technique to cope with this problem is a difficult challenge.

One possible mitigation method for multipath interference in RSS-based indoor localization problems is the use of available auxiliary prior information to model the uncertainty. In this context, the hidden Markov model (HMM) is appealing [4,5]. Fundamentally, an HMM establishes the relationship between RSS data and the location of a mobile transmitter (MT) via a Markov chain designed to incorporate prior information about




environmental constraints as well as the dynamics of the MT. Localization or tracking of radio sources using HMMs has been shown to be promising and effective in recent literature, such as the grid-based Bayesian method [6], the jump Markov particle filtering technique [7], and the joint localization and channel condition identification approach [8]. Nevertheless, in the problem of indoor MT tracking using RSS data, 1) the transition of the MT between accessible states is dependent on the sojourn time in the current state; and 2) the sojourn time of the MT in different regions/states exhibits different statistical characteristics. These features are not adequately represented in the HMM framework, as it carries an implicit assumption that the state sojourn time follows a geometric distribution. We propose here an enhanced HMM framework to handle these two shortcomings.

Hidden semi-Markov models (HsMMs) provide a framework for flexible state durations and have been used in many signal processing related applications, such as speech recognition, bioinformatics, and target tracking [9,10]. In HsMMs, a variable time duration, represented as a random number of regular observation periods, is permitted for the underlying system sojourn time in each state [11]. The flexible duration modelling capability of HsMMs is better able to address the MT localization problem, allowing prior information about state sojourn times to be learned and included in the system. For instance, in [12] the incorporation of explicit state durations in the HsMM facilitates its application in


mobility data modelling. In [13], by further introducing an auxiliary process, the HsMM is applied to non-stationary data segmentation. These applications in related fields in turn lay the foundation for more practical exploitation of prior information in localization applications.

Homogeneous HMMs implicitly assume that measurements are available at regular sampling intervals. In practice, this assumption may be violated as a sensor node can suffer from missing measurements, for example, because of limitations on transmission range or unreliable wireless links, as reported in [14]. Missing measurements are a feature of cooperative or centralized localization systems, where measurements acquired by individual sensors are required to be transmitted to other nodes for information sharing or the process of data fusion and decision making [3]. In radar tracking, a missed detection occurs when the return signal level is below the noise level [15]. Missing data can detrimentally induce ambiguity in the measurement likelihood and hence the posterior of the corresponding HMM-based state estimator. An HsMM framework in the presence of missing measurements is proposed in [16], where an HsMM tracker with a maximum a posteriori (MAP) state estimator is developed. Compared to the HMM, the HsMM provides an extra dimension to mitigate the uncertainty induced by missing measurements, because of its incorporation of a state sojourn time prior in addition to the environment prior.

In this paper, we propose an HsMM framework for indoor MT tracking using RSS data received by anchor nodes (ANs) at fixed locations. Our work differs from existing work in that 1) we extend the missing measurement case in [16] by permitting correlations among sensor measurements conditioned on a given state. The resulting HsMM allows a variable number of observation vector sequences with random missing entries to be modelled by a joint Gaussian distribution to infer the hidden state that generates it; 2) we compare the performance of different algorithms under the proposed HsMM framework and the corresponding HMM framework, applied in an indoor MT tracking scenario using RSS measurements via both simulated and real data. We consider three classical inference algorithms associated with HMMs and HsMMs for finding the optimal state (location) sequence given the observation (RSS) sequence: the forward only algorithm, the forward-backwards algorithm, and the Viterbi algorithm. As will be discussed in Section 3, the differences among these three algorithms lie in their different optimality criteria, and these can be seen in terms of different loss functions as described in [17]. The forward only algorithm is an online (causal) recursive process making use of observations only up to the current time for hidden state estimation, while the others use batch processing to incorporate later measurements for current state estimation, but with different optimality: adopting a Bayesian perspective, the forward-backwards algorithm maximizes the posterior probability of an individual state for a given observation segment or the whole observation sequence, whereas the Viterbi algorithm aims to maximize the likelihood of a consecutive sequence of states that fulfills the system dynamics.

The remainder of the paper is organized as follows: In Section 2, we formulate a 2D localization problem. In Section 3, we outline the main algorithms under the HsMM framework.
Section 4 discusses practical implementation issues and Section 5 demonstrates the ray-tracing software-based simulation and the real data experiment using software defined radios. Finally, we give conclusions in Section 6.

2. Problem formulation

Consider a real-time indoor localization system consisting of a mobile transmitter and M fixed ANs in a planar area $\mathcal{X} \subset \mathbb{R}^2$, where the locations of the M ANs, $(x_k, y_k)$, $k = 1, \cdots, M$, are known.

The problem of interest is to find the location of the MT at epoch $t$, $(x_t, y_t) \in \mathcal{X}$, based on the RSS measurement vector $o_t = [o_t^1, o_t^2, \cdots, o_t^M]^T$ received by the ANs. For open spaces, a radio propagation attenuation model can be used. In [18] it is observed that, given the location of the MT $(x_t, y_t)$ and the location of the $k$th AN $(x_k, y_k)$ at epoch $t$, the received RSS observation $o_t^k$ is modelled by

$$o_t^k = P_0 - 10\,\varepsilon \log_{10}\frac{\sqrt{(x_k - x_t)^2 + (y_k - y_t)^2}}{r_0} + X_{\sigma,t}, \tag{1}$$

where $P_0$ is the power loss measured at a reference distance $r_0$ (usually 1 m), $\varepsilon$ is the path loss exponent and $X_{\sigma,t}$ accounts for model uncertainty due to shadowing [19]. In practice, the solution based on the use of such an RSS attenuation model is often unrealistic for indoor applications, as significant uncertainties arise from the fluctuation of RSS data due to multipath propagation.

The HMM representation of the relationship between the RSS data and the transmitter location takes the environmental constraints and dynamics of the MT into account by means of a finite set of hidden states (locations) and an assumption that the transition between states follows a first-order Markov chain. Specifically, the location space $\mathcal{X}$ is divided into disjoint regions designated by $\{1, 2, \cdots, N\}$, converting the localization problem in a continuous 2D space $\mathcal{X}$ into a discrete region-based localization problem. The discrete-time HMM consists of a double-layered stochastic process: a hidden state process $\{s_t\}$ that fulfills the Markov property, and a state-dependent output process $\{o_t\}$, where the RSS measurement $o_t$ is a multivariate continuous variable of measurements from multiple ANs. The process $\{s_t\}$ describes the underlying state transition of the MT in the predefined regions in $\mathcal{X}$ and is "hidden" in the observation sequence $\{o_t\}$. In other words, each measurement is conditioned on one of a finite set of models and the transition between models is governed by a Markov chain with transition probability matrix $A = [a_{ij}]_{i,j=1}^{N}$.

While the standard HMM framework has been shown to be effective in RSS localization applications, it has problems with practical situations where 1) the ambiguity induced by missing measurements cannot be sufficiently addressed by utilizing only the environmental constraints for identifying the hidden state; and 2) the statistics of the state sojourn times are non-geometric. In other words, when the measurements from some ANs are missing, confidence in the location of a mobile user decreases. The application of HMMs helps mitigate the effects of missing measurements by incorporating prior knowledge such as environmental constraints (e.g., the accessibility between rooms in a building) in the transition probability matrix of the model. Remarkably, the more flexible sojourn time distribution of HsMMs appears, at least in simulations, to provide increased ability to handle missing data. Therefore, in this work, we propose an HsMM framework. In addition to the standard features of an HMM, it also has mechanisms to address the above two issues, as described below.

2.1. Measurement availability uncertainty

Consider a wireless sensor network (WSN) of M ANs taking RSS measurements from an RF signal transmitted by an MT. In practice, for each AN there are missing measurements for a variety of reasons, such as an unreliable transmission link or a long transmission range. This uncertainty of measurement availability will lead to reduced estimation performance in locating the source, and decreased reliability of localization under both the HMM and HsMM frameworks. To be specific, when an AN receives an identifiable packet transmitted by the MT, the connection status is declared successful (measurement available); otherwise it is a failure (measurement missing).


Fig. 1. Illustration of the observation missing modes: Each column represents a unique mode, with a color-filled rectangle (1) representing a successful RSS measurement and a non-filled rectangle (0) representing an RSS measurement failure. Note that the different colors of the filled rectangles merely indicate different anchor nodes; they all correspond to 1 in the mode vector.

We use a time-dependent 0-1 vector $m_t = (m_t^1, m_t^2, \cdots, m_t^M)$ to indicate whether or not there is a connection from each of the ANs, where $m_t^k = 0$ or $1$ indicates connection failure or success, respectively, at time $t$ for AN $k$. Theoretically there are altogether $2^M$ missing measurement configurations (modes), and the set of them is denoted by $\mathcal{M}$ (the mode space). The corresponding observation (RSS) vector is denoted by $o_t^{m_t}$. Note that the measurement mode $m_t$ at time $t$ is completely known once the measurement acquisition process is finished. Fig. 1 enumerates all possible missing measurement modes. For example, a missing mode of all zeros, i.e., $m_t = [0, \cdots, 0]^T$, means that all ANs fail to take a measurement at time $t$, and a missing mode $m_t = [1, 1, 0, \cdots, 0]^T$ specifies that only AN1 and AN2 succeed in taking a measurement at time $t$; hence the observation vector only contains two effective entries that can be utilized for location inference. With this notation of observation mode, the corresponding observation sequence over $T$ epochs at a regular sampling interval is denoted by $O = \{o_1^{m_1}, \cdots, o_T^{m_T}\}$.
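As a concrete illustration of this bookkeeping, the short sketch below (our own illustrative code, not part of the original system) encodes a mode vector as a boolean mask and extracts the effective entries of an observation vector:

```python
import numpy as np

M = 8  # number of anchor nodes (ANs)

# Example mode vector m_t: AN1 and AN2 report a measurement, the rest are missing.
m_t = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)

# A full-length observation vector; the entries of missing ANs are placeholders only.
o_t = np.array([-50.2, -51.3, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan])

o_eff = o_t[m_t]      # the |m_t| effective entries used for inference
print(int(m_t.sum()))  # |m_t| = 2
print(o_eff)           # [-50.2 -51.3]
```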

Fig. 2. Example of a hidden semi-Markov process for modelling RSS measurements obtained by M = 5 anchor nodes. The system consists of N = 3 hidden states s1 , s2 , s3 . The initial state is state s2 in [t0 , t1 ] and switches to state s1 at t2 . The transmitter stays at state s1 for three epochs (t2 − t4 ) and then switches to state s3 at t5 . Note that at each epoch, the measurement mode varies. For example, at t2 , the measurement mode m2 = [0, 1, 0, 0, 1]T and at t3 , the measurement mode m3 = [1, 1, 1, 1, 1]T .

We model the probability distribution of the measured RSS vector conditioned on a given hidden state as a multivariate Gaussian. To facilitate this, we further define the probability of observing the vector $o_t^{m_t}$ at time $t$ with observation mode $m_t$ under state $i$ as

$$b_i\!\left(o_t^{m_t}\right) \triangleq \Pr\!\left(o_t^{m_t} \mid s_t = i,\ \mu_i^{m_t},\ \Sigma_i^{m_t}\right), \quad i = 1, 2, \cdots, N, \tag{2}$$

where $\mu_i^{m_t}$ and $\Sigma_i^{m_t}$ are the mean and covariance of the Gaussian distribution for the $i$th state with observation mode $m_t$. As $b_i(o_t^{m_t})$ is modelled as a multivariate Gaussian distribution, it is expressed as

$$b_i\!\left(o_t^{m_t}\right) = \frac{1}{(2\pi)^{|m_t|/2}\,\det\!\left(\Sigma_i^{m_t}\right)^{1/2}} \exp\!\left(-\frac{1}{2}\left(o_t^{m_t} - \mu_i^{m_t}\right)^T \left(\Sigma_i^{m_t}\right)^{-1} \left(o_t^{m_t} - \mu_i^{m_t}\right)\right), \tag{3}$$

where the determinant operator is denoted by $\det(\cdot)$, $(\cdot)^T$ is vector transpose and $|m_t|$ is the total number of ANs that take a measurement in mode $m_t$, i.e., $|m_t|$ is the total number of entries in $m_t$ that take the value of 1. Note that, in the event that $m_t$ is $\mathbf{0}$ (none of the ANs obtains a measurement) at time $t$, a uniform distribution is assumed, that is,

$$b_i\!\left(o_t^{m_t = \mathbf{0}}\right) = \frac{1}{N}, \quad i = 1, 2, \cdots, N. \tag{4}$$

We define $b_i(o_{t:t+d-1}^{m_t:m_{t+d-1}})$ to be the emission probability of a consecutive observation sequence of length $d$ for all ANs given the transmitter is in state $i$. Here $o_{t:t+d-1}^{m_t:m_{t+d-1}}$ is a simplified notation for $[o_t^{m_t}, \cdots, o_{t+d-1}^{m_{t+d-1}}]$. Observations across different time epochs are assumed independent, i.e.,

$$b_i\!\left(o_{t:t+d-1}^{m_t:m_{t+d-1}}\right) = \prod_{k=t}^{t+d-1} b_i\!\left(o_k^{m_k}\right), \quad i = 1, 2, \cdots, N. \tag{5}$$
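The emission model of Eqs. (2)-(5) can be sketched in Python as follows. This is a minimal illustration under the assumption that the full-mode mean vector and covariance matrix of each state are available: the observed sub-vector is evaluated under the corresponding marginal Gaussian, the all-missing case falls back to the uniform value 1/N, and a window of epochs is scored as a product of per-epoch terms. The helper names are ours.

```python
import numpy as np

def emission_prob(o_t, mask, mu_i, Sigma_i, N):
    """b_i(o_t^{m_t}) of Eqs. (2)-(4): Gaussian on the observed entries,
    uniform 1/N when no AN has a measurement."""
    if not mask.any():
        return 1.0 / N                               # Eq. (4)
    o = o_t[mask]
    mu = mu_i[mask]
    Sig = Sigma_i[np.ix_(mask, mask)]                # marginal covariance for mode m_t
    k = int(mask.sum())                              # |m_t|
    diff = o - mu
    quad = diff @ np.linalg.solve(Sig, diff)
    norm = (2 * np.pi) ** (k / 2) * np.sqrt(np.linalg.det(Sig))
    return float(np.exp(-0.5 * quad) / norm)         # Eq. (3)

def emission_prob_window(obs, masks, mu_i, Sigma_i, N):
    """b_i over d consecutive epochs, Eq. (5): product of per-epoch terms."""
    p = 1.0
    for o_t, m_t in zip(obs, masks):
        p *= emission_prob(o_t, m_t, mu_i, Sigma_i, N)
    return p
```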



2.2. State transition/duration uncertainty

Conventional HMMs assume a fixed transition probability between accessible states, irrespective of how long the system has been in the current state, effectively implying a geometric state duration distribution. This is not flexible enough to model more complex dependencies between state transitions and state duration time. HsMMs follow the same framework as HMMs, but provide more flexibility by modelling the underlying state process with a semi-Markov chain that explicitly models the state sojourn time. This flexibility in the sojourn time distribution encourages a longer sojourn in a given state when lack of data provides no evidence for a move. Fig. 2 illustrates an example of how to model RSS measurements with varying missing entries using the HsMM. The system consists of five ANs (receivers) and the underlying transmitter may switch to one of three states at a time.

3. HsMM algorithm

Now, we are in a position to describe the HsMM algorithm as follows. Following [16], let the HsMM parameter set be

$$\lambda = \left\{\pi_i,\ a_{ij},\ p_i(d),\ \{\mu_i^m, \Sigma_i^m\}\right\}_{i,j=1}^{N}, \tag{6}$$

where:

1. $\pi_i$ is the initial probability that a radio transmitter is in state $i$. A uniform distribution is usually assumed if no prior location information is available. The initial probability $\pi_i$ is subjected to the following constraint,

$$\sum_{i=1}^{N} \pi_i = 1. \tag{7}$$

2. $a_{ij}$ is the stationary state transition probability from state $i$ to state $j$. The transition probabilities satisfy the following constraints,

$$\sum_{j=1}^{N} a_{ij} = 1, \quad a_{ii} = 0, \quad i = 1, 2, \cdots, N. \tag{8}$$

Note that the diagonal elements of the transition probability matrix have to be zero, in contrast to conventional HMMs.

3. $p_i(d)$ is the duration probability for the MT sojourn in state $i$ for $d$ epochs, and usually a maximum sojourn epoch $D$ for each state is specified for practical application,

$$p_i(d) \triangleq \Pr(d \mid s_t = i), \quad d = 1, 2, \cdots, D, \quad i = 1, \cdots, N, \tag{9}$$

with the following constraints,

$$\sum_{d=1}^{D} p_i(d) = 1, \quad i = 1, 2, \cdots, N. \tag{10}$$

4. $\{\mu_i^m, \Sigma_i^m\}$ are the previously defined Gaussian emission parameters.

It is assumed in the HsMM that once the semi-Markov chain enters a certain state, it holds for at least one epoch; the state duration model is, therefore, zero-truncated. State transitions may occur at any epoch during a specified time interval [20], i.e., during the time interval $t = 1, 2, \cdots, T$.
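For later reference, one possible container for the parameter set λ of Eq. (6) is sketched below (the field names are our own, not an interface defined in the paper):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class HsmmParams:
    pi: np.ndarray       # (N,) initial probabilities, Eq. (7)
    A: np.ndarray        # (N, N) transition matrix with zero diagonal, Eq. (8)
    p_dur: np.ndarray    # (N, D) state duration probabilities, Eqs. (9)-(10)
    mu: np.ndarray       # (N, M) full-mode observation means
    Sigma: np.ndarray    # (N, M, M) full-mode observation covariances
    # mode-specific {mu_i^m, Sigma_i^m} are obtained by sub-indexing mu/Sigma
    # with the mode mask, as in the emission sketch above
```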

We will use the notation of two forward variables from [16]:

$$\alpha_t(i) \triangleq \Pr\!\left(o_{1:t}^{m_1:m_t},\ \text{state } i \text{ ends at } t \mid \lambda\right), \tag{11}$$

$$\alpha_t^*(i) \triangleq \Pr\!\left(o_{1:t}^{m_1:m_t},\ \text{state } i \text{ begins at } t+1 \mid \lambda\right), \tag{12}$$

where $\alpha_t(i)$ is the joint probability of obtaining the partial observation sequence up to epoch $t$ and ending in state $i$ at epoch $t$, given the model $\lambda$, and $\alpha_t^*(i)$ is the joint probability of obtaining the observation sequence up to epoch $t$ with entry to state $i$ beginning at the next epoch $t+1$, given the model $\lambda$. The relationship between $\alpha_t(i)$ and $\alpha_t^*(j)$ is given by Eq. (16). Similarly, we have two backward variables:

$$\beta_t(i) \triangleq \Pr\!\left(o_{t:T}^{m_t:m_T} \mid \text{state } i \text{ begins at } t,\ \lambda\right), \tag{13}$$

$$\beta_t^*(i) \triangleq \Pr\!\left(o_{t:T}^{m_t:m_T} \mid \text{state } i \text{ ends at } t-1,\ \lambda\right), \tag{14}$$

where $\beta_t(i)$ is the probability of the partial observation sequence from epoch $t$ to the end, given that the system starts in state $i$ at epoch $t$ and with model $\lambda$, and $\beta_t^*(i)$ is the probability of the partial observation sequence from epoch $t$ to the end, given that the system leaves state $i$ at the previous epoch $t-1$ and with model $\lambda$. The relationship between $\beta_t(i)$ and $\beta_t^*(j)$ is given by Eq. (21). In a similar fashion to the standard HMM, the HsMM may be implemented in the algorithms described next.

3.1. HsMM forward algorithm

The forward algorithm estimates state $s_t$ by utilizing the observation sequence only up to epoch $t$, via the forward variable $\alpha_t(i)$, which is computed recursively, as follows:

1. Initialization:

$$\alpha_0^*(i) = \pi_i, \quad \alpha_\tau^*(i) = 0, \quad \tau < 0, \quad i = 1, \cdots, N. \tag{15}$$

2. Induction:

$$\alpha_t(i) = \sum_{d=1}^{\min(t,D)} \alpha_{t-d}^*(i)\, p_i(d)\, b_i\!\left(o_{t-d+1:t}^{m_{t-d+1}:m_t}\right), \quad i = 1, \cdots, N,$$

$$\alpha_t^*(i) = \sum_{j=1, j \ne i}^{N} \alpha_t(j)\, a_{ji}, \quad t = 1, \cdots, T, \quad i = 1, \cdots, N. \tag{16}$$

The estimate of state $s_t$ based on the forward algorithm is then defined as

$$\hat{s}_t = \arg\max_{1 \le i \le N} \left[\Pr\!\left(s_t = i \mid o_{1:t}^{m_1:m_t}\right)\right]. \tag{17}$$

According to Bayes' rule,

$$\hat{s}_t = \arg\max_{1 \le i \le N} \left[\frac{\alpha_t(i)}{\Pr\!\left(o_{1:t}^{m_1:m_t} \mid \lambda\right)}\right] \propto \arg\max_{1 \le i \le N} \alpha_t(i). \tag{18}$$

As can be seen from Eqs. (15) and (16), the forward algorithm can be applied for online state estimation, and it requires an insignificant memory resource as the computation of the current forward variable only relies on the previous forward variables because of the Markovian property.
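A direct, unscaled sketch of the forward recursion of Eqs. (15)-(18) is given below, reusing the emission helper and parameter container from the earlier sketches. For long sequences the scaling of Section 4.2 or log-domain arithmetic would be needed; this version is for illustration only.

```python
import numpy as np

def hsmm_forward(obs, masks, params, N, D):
    """Forward recursion of Eqs. (15)-(18): returns alpha, alpha_star and the
    per-epoch forward-only state estimates (alpha[0] is unused)."""
    T = len(obs)
    alpha = np.zeros((T + 1, N))        # alpha[t, i] = alpha_t(i), t = 1..T
    alpha_star = np.zeros((T + 1, N))   # alpha_star[t, i] = alpha*_t(i), t = 0..T
    alpha_star[0] = params.pi           # Eq. (15)
    for t in range(1, T + 1):
        for i in range(N):
            s = 0.0
            for d in range(1, min(t, D) + 1):        # Eq. (16), first line
                b = emission_prob_window(obs[t - d:t], masks[t - d:t],
                                         params.mu[i], params.Sigma[i], N)
                s += alpha_star[t - d, i] * params.p_dur[i, d - 1] * b
            alpha[t, i] = s
        # alpha*_t(i) = sum_{j != i} alpha_t(j) a_{ji}; the zero diagonal of A enforces j != i
        alpha_star[t] = alpha[t] @ params.A
    # NOTE: without scaling these values underflow for long sequences.
    s_hat = alpha[1:].argmax(axis=1)                 # Eqs. (17)-(18)
    return alpha, alpha_star, s_hat
```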

3.2. HsMM forward-backwards algorithm

The forward-backwards algorithm finds the maximum a posteriori estimate of state $s_t$ given the whole observation sequence $O$. To facilitate this, a new variable $\gamma_t(i)$ is defined,

$$\gamma_t(i) \triangleq \Pr(O,\ s_t = i \mid \lambda), \tag{19}$$

which is the joint probability of the whole observation sequence and state $i$ at epoch $t$, conditioned on model $\lambda$. Both a forward and a backward process are required for calculating $\gamma_t(i)$ [16]. As with the forward algorithm described in Section 3.1, the backward variable can be computed recursively:

1. Initialization:

$$\beta_{T+1}^*(i) = 1, \quad \beta_\tau(i) = 0, \quad \tau > T + 1, \quad i = 1, \cdots, N. \tag{20}$$

2. Induction:

$$\beta_t(i) = \sum_{d=1}^{\min(T-t,D)} \beta_{t+d}^*(i)\, p_i(d)\, b_i\!\left(o_{t:t+d-1}^{m_t:m_{t+d-1}}\right), \quad i = 1, \cdots, N,$$

$$\beta_t^*(i) = \sum_{j=1, j \ne i}^{N} a_{ij}\, \beta_t(j), \quad t = T, \cdots, 1, \quad i = 1, \cdots, N. \tag{21}$$

With the computed forward and backward variables, the forward-backwards algorithm proceeds recursively as follows [16]:

1. Initialization:

$$\gamma_T(i) = \alpha_T(i), \quad i = 1, \cdots, N. \tag{22}$$

2. Recursion:

$$\gamma_t(i) = \gamma_{t+1}(i) + \alpha_t(i)\,\beta_{t+1}^*(i) - \alpha_t^*(i)\,\beta_{t+1}(i), \tag{23}$$

$$i = 1, \cdots, N, \quad t = T - 1, \cdots, 1. \tag{24}$$

The estimate of state $s_t$ based on the forward-backwards algorithm is then defined as

$$\hat{s}_t = \arg\max_{1 \le i \le N} \left[\Pr(s_t = i \mid O, \lambda)\right]. \tag{25}$$

According to Bayes' rule,

$$\hat{s}_t = \arg\max_{1 \le i \le N} \left[\frac{\Pr(s_t = i, O \mid \lambda)}{\Pr(O \mid \lambda)}\right] \propto \arg\max_{1 \le i \le N} \gamma_t(i). \tag{26}$$
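Continuing the same sketch, the backward pass of Eqs. (20)-(21) and the γ recursion of Eqs. (22)-(26) can be written as follows (again unscaled; the duration loop is capped so that the observation window stays within the sequence):

```python
import numpy as np

def hsmm_backward_gamma(obs, masks, params, alpha, alpha_star, N, D):
    """Backward pass (Eqs. (20)-(21)) and gamma recursion (Eqs. (22)-(26))."""
    T = len(obs)
    beta = np.zeros((T + 2, N))         # beta[t, i] = beta_t(i)
    beta_star = np.zeros((T + 2, N))    # beta_star[t, i] = beta*_t(i)
    beta_star[T + 1] = 1.0              # Eq. (20)
    for t in range(T, 0, -1):
        for i in range(N):
            s = 0.0
            # cap d so the window o_{t:t+d-1} stays inside the sequence
            for d in range(1, min(T - t + 1, D) + 1):
                b = emission_prob_window(obs[t - 1:t - 1 + d], masks[t - 1:t - 1 + d],
                                         params.mu[i], params.Sigma[i], N)
                s += beta_star[t + d, i] * params.p_dur[i, d - 1] * b
            beta[t, i] = s
        beta_star[t] = params.A @ beta[t]            # Eq. (21): sum_j a_ij beta_t(j)
    gamma = np.zeros((T + 1, N))
    gamma[T] = alpha[T]                              # Eq. (22)
    for t in range(T - 1, 0, -1):                    # Eqs. (23)-(24)
        gamma[t] = gamma[t + 1] + alpha[t] * beta_star[t + 1] - alpha_star[t] * beta[t + 1]
    s_hat = gamma[1:].argmax(axis=1)                 # Eqs. (25)-(26)
    return gamma, s_hat
```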

3.3. HsMM Viterbi algorithm

The Viterbi algorithm aims to find the state sequence (path) that achieves the maximum likelihood of the whole observation sequence. We further define a variable $\delta_t(i)$ as the probability of the most likely path that generates the partial observation sequence $o_{1:t}^{m_1:m_t}$ and ends in state $i$ at epoch $t$, given the model $\lambda$, i.e.,

$$\delta_t(i) \triangleq \max_{s_1, \cdots, s_{t-1}} \Pr\!\left(o_{1:t}^{m_1:m_t},\ s_1, \cdots, s_{t-1},\ \text{state } i \text{ ends at } t \mid \lambda\right). \tag{27}$$

The recursion of $\delta_t(i)$ over time is computed as follows [21]:

(1) Initialization:

$$\delta_1(i) = \pi_i\, p_i(d = 1)\, b_i\!\left(o_1^{m_1}\right), \tag{28}$$

$$\psi_1(i) = \{0, 0\}, \quad i = 1, \cdots, N. \tag{29}$$

(2) Recursion:

$$\delta_t(i) = \max_{d=1}^{\min(t,D)} \left\{ \max_{j=1}^{N} \left[\delta_{t-d}(j)\, a_{ji}\right] p_i(d)\, b_i\!\left(o_{t-d+1:t}^{m_{t-d+1}:m_t}\right) \right\}, \tag{30}$$

$$\psi_t(i) = \arg\max_{j,d} \left[\delta_{t-d}(j)\, a_{ji}\, p_i(d)\, b_i\!\left(o_{t-d+1:t}^{m_{t-d+1}:m_t}\right)\right], \quad i = 1, \cdots, N, \quad t = 2, \cdots, T. \tag{31}$$

(3) Termination:

$$\hat{s}_T = \arg\max_{i} \delta_T(i). \tag{32}$$

(4) Path (state, duration sequence) backtracking:

$$\{\hat{s}_t, \hat{d}_{t+1}\} = \psi_{t+1}(\hat{s}_{t+1}), \quad t = T - 1, \cdots, 1. \tag{33}$$

Fig. 3. Viterbi algorithm illustration for the HsMM: We assume a five-state HsMM with a total time period of eight epochs. Red circles represent the decoded states based on the HsMM Viterbi algorithm. The recorded path tuple {i, d} represents the previous state and the current state duration that lead to the maximum probability. For instance, {2, 4} means the current state s3 sojourns for 4 epochs and its previous state is s2.

Note that the record of the most likely path $\psi_t(i)$ contains a two-variable tuple at each epoch in Eq. (31): the previous state and the sojourn time of the current state that together lead to the maximum probability. Fig. 3 gives an example illustrating how the HsMM Viterbi algorithm works.
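A sketch of the Viterbi recursion of Eqs. (28)-(33) follows, storing the {previous state, duration} pair ψ_t(i) for backtracking and reusing the helpers from the earlier sketches. The boundary case where a state's sojourn reaches back to the first epoch is handled with the initial probabilities; that convention is ours rather than something stated explicitly in the paper.

```python
import numpy as np

def hsmm_viterbi(obs, masks, params, N, D):
    """Viterbi decoding of Eqs. (28)-(33); returns one decoded region per epoch."""
    T = len(obs)
    delta = np.zeros((T + 1, N))
    prev_state = np.full((T + 1, N), -1, dtype=int)   # psi_t(i): previous state ...
    duration = np.ones((T + 1, N), dtype=int)         # ... and sojourn of the current state
    for t in range(1, T + 1):
        for i in range(N):
            for d in range(1, min(t, D) + 1):
                b = emission_prob_window(obs[t - d:t], masks[t - d:t],
                                         params.mu[i], params.Sigma[i], N)
                if t == d:   # the sojourn reaches back to epoch 1: use the initial probability
                    score, j = params.pi[i] * params.p_dur[i, d - 1] * b, -1
                else:        # Eqs. (30)-(31)
                    j = int(np.argmax(delta[t - d] * params.A[:, i]))
                    score = delta[t - d, j] * params.A[j, i] * params.p_dur[i, d - 1] * b
                if score > delta[t, i]:
                    delta[t, i], prev_state[t, i], duration[t, i] = score, j, d
    states = np.zeros(T, dtype=int)
    t, i = T, int(np.argmax(delta[T]))                # termination, Eq. (32)
    while t > 0:                                      # backtracking, Eq. (33)
        d = duration[t, i]
        states[t - d:t] = i                           # state i occupies epochs t-d+1, ..., t
        t, i = t - d, prev_state[t, i]
    return states
```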

Remark 1. The forward algorithm, the forward-backwards algorithm and the Viterbi algorithm use different optimality criteria and so may be suitable for different applications and provide different performance. The forward algorithm only uses the previous and current observations and is suitable for online processing with low memory requirements, while the forward-backwards algorithm and the Viterbi algorithm use batch processing. The formulation of the forward-backwards algorithm is based on estimation of individual states, which, applied over multiple epochs, may result in a violation of the system dynamics. The Viterbi algorithm avoids this issue by maximizing over the most likely state sequence rather than individual states.

4. Practical model training

In practice, the HMM/HsMM model parameter $\lambda$ is learned using a training data set. In this section, we discuss model parameter learning as well as issues related to scaling and training in the presence of missing measurements.

Fig. 4. Full observation mode covariance for state 1.

4.1. Parameter training in the context of missing data

HsMM model parameter training is performed using the Expectation-Maximization (EM) algorithm, iteratively estimating $\lambda$ by approximating the maximum likelihood estimate of the parameter, $\hat{\lambda}_{ML} = \arg\max_{\lambda} p(O \mid \lambda)$ [22]. This process is usually referred to as re-estimation in the literature [23].

First we show how to train the observation model parameters in the emission function $b_i(o^m)$, namely, the mean $\mu_i^m$ and the covariance $\Sigma_i^m$, for all possible missing modes $m$. Because of the sensitivity of RSS data in a multipath environment, measurements from multiple locations within a region need to be recorded in the training phase to capture the large variations. To fix ideas, we introduce the notion of a grid cell. Each region (state of the HsMM) is partitioned into a finer grid, usually in a uniform fashion.


Fig. 5. Illustration of simulation scenario on the floorplan. The red dots signify the MT moving path and they are also the MT trajectory in the test shown in Fig. 11.

Fig. 6. Transition matrix for the HMM model.

Fig. 7. Transition matrix for the HsMM model.

Fig. 5 provides an example of this refinement for region 12 in the area of interest. The number of grid cells in each region may vary roughly according to the region size. Let $N_i$ denote the total number of grid cells in region $i$ and $z_i$ denote the index of a grid cell within region $i$. In practice, we require at least one RSS measurement sample to be taken from each grid cell for each AN. If more than one measurement sample is taken, we use the mean of the measurements instead.

Let $\bar{o}_{z_i}^k$ be the RSS sample mean of the $k$th AN from grid cell $z_i$ within region $i$; the mean vector of all the ANs from grid cell $z_i$ within region $i$ is then $\bar{o}_{z_i} = [\bar{o}_{z_i}^1, \bar{o}_{z_i}^2, \cdots, \bar{o}_{z_i}^M]^T$. With the above definitions, training the RSS mean and covariance of each region (state) for the case where every AN succeeds in taking a measurement, i.e., all the components of the mode vector are 1, is straightforward.


Fig. 8. The trained duration parameter.

Fig. 10. Observation missing mode of the RSS at the first 15 time epochs: At each time epoch, the colored square indicates a measurement success for the corresponding AN.

Fig. 9. KLD of the duration probability versus different training data length.

We simply denote these parameters by $\{\mu_i, \Sigma_i\}$ for region $i$ and omit the mode indicator:

$$\hat{\mu}_i = \frac{\sum_{z_i=1}^{N_i} \bar{o}_{z_i}}{N_i}, \quad \hat{\Sigma}_i = \frac{\sum_{z_i=1}^{N_i} \left(\bar{o}_{z_i} - \hat{\mu}_i\right)\left(\bar{o}_{z_i} - \hat{\mu}_i\right)^T}{N_i - 1}, \quad i = 1, \cdots, N. \tag{34}$$

The mean and covariance of each mode can then be obtained by taking the corresponding entries from $\{\mu_i, \Sigma_i\}$. For example, the mean and covariance for state $s = 1$ in mode $m = [1, 1, 0, \cdots, 0]^T$ (i.e., only AN1 and AN2 have measurements) are extracted from the corresponding dimensions of $\{\mu_1, \Sigma_1\}$, respectively,

$$\mu_1^m = \begin{bmatrix} -50 \\ -51 \end{bmatrix}, \quad \Sigma_1^m = \begin{bmatrix} 31.80 & -4.61 \\ -4.61 & 16.30 \end{bmatrix}, \tag{35}$$

with

$$\mu_1 = [-50, -51, -53, -54, -65, -59, -61, -72]^T, \tag{36}$$

and $\Sigma_1$ being shown in Fig. 4. Note that the total number of ANs in this example is 8.
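The training and mode-marginalization steps of Eqs. (34)-(35) amount to the following sketch (illustrative names; the numerical values in Eq. (35) come from the paper's own data, not from this code):

```python
import numpy as np

def train_full_mode_params(cell_means_per_region):
    """Eq. (34): cell_means_per_region[i] is an (N_i, M) array whose rows are the
    grid-cell RSS mean vectors of region i; returns per-region (mu_i, Sigma_i)."""
    mus, sigmas = [], []
    for obs in cell_means_per_region:
        mus.append(obs.mean(axis=0))
        sigmas.append(np.cov(obs, rowvar=False))   # unbiased (N_i - 1) normalization
    return np.array(mus), np.array(sigmas)

def mode_params(mu_i, Sigma_i, mask):
    """Eq. (35): extract the mean/covariance of a missing mode by sub-indexing."""
    return mu_i[mask], Sigma_i[np.ix_(mask, mask)]
```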

The rest of the model parameters in (6), i.e., the initial probabilities, transition probabilities and state duration probabilities, can be trained using the Baum-Welch method [16,23], with a large amount of RSS data collected along an MT trajectory. See Eqs. (A.1)-(A.3) for more details. This model parameter training will be discussed further in the next section.

4.2. Numerical underflow issue

The forward and backward variable values decrease exponentially with time in practice [24]. To keep them within the dynamic range of the software, the implementation of the re-estimation procedure of HsMMs usually requires proper scaling [23]. We use the scaling method in [25] to scale the forward and backward variables.

5. Experimental study

In this study, both simulations and real experiments were conducted to evaluate the performance of the proposed method. A typical indoor scenario with multipath propagation is chosen to test the effectiveness of the implemented HsMM-based localization algorithms, namely, the forward only (HsMM-FO) algorithm, the forward-backwards (HsMM-FB) algorithm and the Viterbi (HsMM-VB) algorithm, using RSS measurements with possible missing modes. We compare the performance of these algorithms with that of the corresponding HMM-based algorithms, i.e., HMM-FO, HMM-FB and HMM-VB, respectively.

Fig. 11. Performance comparison of HMM-based and HsMM-based algorithms for a typical trajectory shown in Fig. 5.

Fig. 12. Performance comparison with different window sizes.

Fig. 13. Real experiment setup.

Fig. 14. System transition diagram.

5.1. Simulation analysis

The scenario is implemented using 3D ray-tracing software [26], where the numbers of reflections, transmissions and diffractions are set to 3, 2, and 1, respectively. As shown in Fig. 5, the area of interest is divided into 12 regions. In this design, we also include a dummy state for both the HMM and the HsMM to absorb measurement interference/outliers which do not originate from any of the 12 states. This additional dummy state can handle the case where the MT enters or leaves the building via the front or back door in Fig. 5.

5.1.1. HMM model training

Using the ray-tracing software, transmitters are placed in each region with a uniform spacing of 0.2 meters to obtain RSS measurements at all 8 ANs. A trajectory of the MT with 1,000,000 epochs is then generated to train the transition probabilities using the Baum-Welch method, as shown in Eqs. (A.1) and (A.2) [23]. Fig. 6 shows the transition probability matrices with the initially guessed values and the trained values. The initial guess is based on prior knowledge of the accessibility of the defined regions, where the self-transition probability and the transition probabilities to accessible regions for each state are chosen so as to give a relatively large entropy while satisfying the sum-to-one constraint, as we have little information about the state transitions. The topology of the transition matrix is a reflection of the environmental constraints in the area of interest. As shown in Fig. 6, the transition matrix becomes more diagonally dominant after training.

5.1.2. HsMM model training

The HsMM model shares the same emission probability parameters $\{\mu_i, \Sigma_i\}$ with the HMM. As stated in [9], the HMM parameters can often be used to train HsMM parameters. The trained transition probability matrix for the HMM can be used to derive an initial guess of the transition probability matrix for the HsMM, as sketched below. Fig. 7 shows the transition probability matrices for the initial values and the trained values, where the diagonal components are all zeros because self transitions are not allowed in the HsMM. A non-parametric duration model [11] is applied in this paper, and a uniform duration distribution is used as the initial guess of the duration probability for each state. Fig. 8 shows an example of the trained duration probabilities versus the pre-defined ground truth for regions 5 and 7.
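One simple way to carry out the initialization mentioned above is to zero the diagonal of the trained HMM transition matrix and renormalize each row, as in the sketch below (an illustration of the idea, not necessarily the exact procedure used in the paper):

```python
import numpy as np

def hsmm_transition_init(A_hmm):
    """Initial HsMM transition guess from a trained HMM matrix: remove
    self-transitions (a_ii = 0) and renormalize rows to sum to one."""
    A = np.array(A_hmm, dtype=float)
    np.fill_diagonal(A, 0.0)
    return A / A.sum(axis=1, keepdims=True)
```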


Note that the duration statistics vary among states, representing the variation in the duration of time the mobile user sojourns in different regions. To evaluate the closeness between the estimated duration probability and the ground truth, we use the Kullback-Leibler divergence (KLD). Assuming the estimated duration probability is $\hat{p}_i(d)$ and the ground truth duration probability is $p_i(d)$, the KLD for state $i$ is defined to be

$$D_{KL}\!\left(p_i(d)\,\|\,\hat{p}_i(d)\right) = \sum_{d=1}^{D} p_i(d) \ln\frac{p_i(d)}{\hat{p}_i(d)}. \tag{37}$$
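Eq. (37) translates directly into code; the only extra care in the sketch below is skipping bins where the ground-truth probability is zero:

```python
import numpy as np

def duration_kld(p_true, p_est):
    """Kullback-Leibler divergence of Eq. (37) between a ground-truth and an
    estimated duration distribution over d = 1, ..., D."""
    p_true = np.asarray(p_true, dtype=float)
    p_est = np.asarray(p_est, dtype=float)
    support = p_true > 0                      # terms with p_i(d) = 0 contribute 0
    return float(np.sum(p_true[support] * np.log(p_true[support] / p_est[support])))
```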

As pointed out in [27], the HsMM requires more training data than the HMM in practice. We further evaluate the influence of different amounts of training data on the accuracy of the estimated duration probability. Fig. 9 provides the average KLD value over the 12 states versus the training data length. Clearly, a large amount of training data is required to achieve better duration probability training.

5.1.3. Missing measurement

At each sampling time, an AN is subject to missing measurements. In the simulation, we assume the missing measurements for each AN follow a Bernoulli distribution; that is, the number of missing measurements occurring for each AN during the $T$ epochs follows a binomial distribution. Let $\zeta_k$ denote the status of the $k$th AN, with

$$\zeta_k = \begin{cases} 1, & \text{data is available for the } k\text{th AN;} \\ 0, & \text{data is unavailable for the } k\text{th AN.} \end{cases} \tag{38}$$

Consequently, $\zeta = \{\zeta_k\}_{k=1}^{M}$ represents the status of all ANs. Throughout the simulation, we assume

$$\Pr(\zeta = 1) = [0.44, 0.48, 0.52, 0.51, 0.43, 0.42, 0.41, 0.47]. \tag{39}$$

5.1.4. Fixed trajectory analysis

In the simulation, we assume that RSS measurement variation is a result of multipath propagation and that the receiver thermal noise is negligible. A sequence of RSS data with length $T = 138$ was generated using the ray-tracing software from a fixed MT trajectory as shown in Fig. 5. A depiction of the simulated observation missing modes based on Eq. (39) for the first 15 RSS samples is presented in Fig. 10. The state estimation comparison between the HMM algorithm and the HsMM algorithm based on a fixed MT motion trajectory is presented in Fig. 11, where the results using the forward algorithm, the forward-backwards algorithm and the Viterbi algorithm, together with the ground truth, are plotted separately. We observed that

• Under the HMM framework, results exhibit fluctuations when the RSS belief is affected by multipath interference or missing measurements;
• The HsMM approaches are generally more "consistent" with the ground truth because of the incorporation of state duration;
• The forward-backwards algorithm and the Viterbi algorithm usually, as expected, outperform the forward-only algorithm.

5.1.5. Monte Carlo analysis

The localization performance of the HMM-based and HsMM-based solutions is further compared via multiple Monte Carlo runs. 1000 MT trajectories are randomly generated under physical accessibility constraints, each consisting of $T = 300$ epochs. Specifically, at each run, the simulated ground truth state sequence is generated according to a semi-Markov chain process using Algorithm 1. Based on the generated hidden sequence $\{s_t\}$ and its sojourn intervals $\{d_t\}$, a trajectory of the MT is generated in the ray-tracing software.

Algorithm 1: Ground truth state sequence generation.
Input: Model parameters π_i, a_ij, p_i(d)
Output: Hidden state sequence {s_t} and its corresponding duration times {d_t}
Initialization: Generate s_0 ∼ π_i, d_0 ∼ p_{s_0}(d); T_c = d_0;
for t = 1; t < T; t++ do
    if T_c ≥ T then
        Truncate d_t so that Σ_t d_t = T;
        break;
    else
        generate s_t ∼ {a_{s_{t−1}, j}}, j = 1, ..., N;
        generate d_t ∼ p_{s_t}(d);
        T_c = T_c + d_t;
    end
end
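A Python rendering of Algorithm 1, together with the Bernoulli mode sampling of Eq. (39), is sketched below (the truncation of the final sojourn follows the algorithm's description; the random-number interface is our choice):

```python
import numpy as np

def generate_ground_truth(pi, A, p_dur, T, rng=None):
    """Algorithm 1: sample a semi-Markov state sequence of exactly T epochs."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = p_dur.shape
    s = rng.choice(N, p=pi)
    d = rng.choice(np.arange(1, D + 1), p=p_dur[s])
    states, durations, Tc = [s], [d], d
    while Tc < T:
        s = rng.choice(N, p=A[s])                     # a_ii = 0, so the state changes
        d = rng.choice(np.arange(1, D + 1), p=p_dur[s])
        states.append(s)
        durations.append(d)
        Tc += d
    durations[-1] -= Tc - T                           # truncate so the durations sum to T
    return np.repeat(states, durations)               # one region label per epoch

def sample_missing_modes(T, success_prob, rng=None):
    """Eq. (39): each AN independently succeeds with its own Bernoulli probability."""
    rng = np.random.default_rng() if rng is None else rng
    success_prob = np.asarray(success_prob)
    return rng.random((T, success_prob.size)) < success_prob   # (T, M) boolean mode masks
```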

Table 1
Performance comparison between HMM and HsMM.

Algorithm   Error rate   Error rate (lag)   Running time
HMM-FO      18.08%       10.93%             1.14e-5 sec
HMM-FB      9.08%        4.99%              1.99e-5 sec
HMM-VB      8.67%        4.84%              3.82e-5 sec
HsMM-FO     16.66%       6.66%              4.71e-3 sec
HsMM-FB     3.69%        1.16%              8.45e-3 sec
HsMM-VB     4.28%        1.29%              12.51e-3 sec

The observation sequence $o_{1:T}^{m_{1:T}}$ is then computed using a ray-tracing method. At each epoch, the observation missing mode $m_t$ is sampled by drawing a binary variable vector for all ANs according to the predefined Bernoulli success rates $\zeta$ in Eq. (39). Simulation results are summarised in Table 1, where the error rate is defined as the percentage of the total number of mismatched region estimates over the time length $T$. We note that the improved localization performance of the HsMM comes at the cost of a higher computational overhead for the incorporation of duration modelling. We observed that in both the HMM- and HsMM-based approaches, the Viterbi algorithm and the forward-backwards algorithm outperform the forward only algorithm in terms of error rate, though their computational complexities are higher. The forward-backwards algorithm slightly outperforms the Viterbi algorithm because it maximizes the expected number of correct states [23], but with the risk of potentially violating the system dynamics in the estimated state sequence.

In the HsMM inference, in particular with the forward only algorithm, estimation delays are frequently observed in situations where the MT crosses from one region to another. To take this delay effect into account in the performance evaluation, we introduce the error rate with lag [4], which is 1 minus the overall percentage of location estimates that match either the current or the previous region. Table 1 indicates that a large percentage of estimation delays arise using the forward only algorithm, and its error rate drops from 16.66% to 6.66% when the system tolerates lags.
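For clarity, the two metrics can be computed as in the following sketch, where est and truth are per-epoch region labels:

```python
import numpy as np

def error_rates(est, truth):
    """Error rate, and error rate with lag: an estimate also counts as correct
    if it matches the previous epoch's true region."""
    est, truth = np.asarray(est), np.asarray(truth)
    err = np.mean(est != truth)
    prev = np.roll(truth, 1)
    prev[0] = truth[0]                       # no previous region at the first epoch
    err_lag = np.mean((est != truth) & (est != prev))
    return err, err_lag
```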


The forward-backwards and Viterbi results shown in Table 1 correspond to off-line processing given the entire measurement sequence, i.e., the window size is $T$. In practice, to meet the online localization requirement, a smaller window size may be preferred. The idea of applying a window is to estimate the current state by using future measurements as supporting evidence, owing to the inherent non-causality of the algorithms. A larger window size means using more future measurements for inferring the current state, hence in general leading to a more accurate estimate but, on the other hand, resulting in a longer delay in the system response. Fig. 12 shows the algorithm performance comparison versus the window size. We can see that

Table 2
Notation.

Term              Definition
M                 Number of anchor nodes in the HMM/HsMM.
N                 Number of states in the HMM/HsMM.
D                 The maximum duration possible for all individual states.
T                 The total period of observations.
s_t               Unknown hidden state at epoch t, taking values in {1, ..., N}.
o_t^{m_t}         Observation vector at epoch t with missing mode m_t.
o_{t1:t2}^{m_{t1}:m_{t2}}   Observation sequence from epoch t1 to t2, with corresponding mode sequence m_{t1} to m_{t2}.
O                 Entire observation sequence o_{1:T}^{m_{1:T}}.
π_i               π_i = Pr(s_0 = i): probability that the initial state is i.
a_ij              a_ij = Pr(s_{t+1} = j | s_t = i): transition probability from state i to state j.
p_i(d)            p_i(d) = Pr(d | s_t = i): probability that the system sojourns in state i for d epochs.
μ_i^m             Observation mean of state i with missing mode m.
Σ_i^m             Observation covariance of state i with missing mode m.
μ_i               Observation mean of state i where all ANs have measurements, i.e., m = [1, 1, ..., 1]^T.
Σ_i               Observation covariance of state i where all ANs have measurements, i.e., m = [1, 1, ..., 1]^T.
b_i(o_t^{m_t})    b_i(o_t^{m_t}) = Pr(o_t^{m_t} | s_t = i, μ_i^{m_t}, Σ_i^{m_t}): observation model, i.e., emission probability of observing o_t^{m_t} from state i.
λ                 λ = {π_i, a_ij, p_i(d), {μ_i^m, Σ_i^m}}: the complete parameter set of the HsMM.

• When the window size is less than 15, the HMM-based algorithms outperform the HsMM-based algorithms, suggesting that a longer window size is required for the HsMM to incorporate the varying duration model;
• The normalized computational overhead increases with the window size for all algorithms, and the HsMM requires more CPU resource than the HMM for a fixed window size. Under both the HsMM and HMM frameworks, the computational complexity of the Viterbi algorithm is slightly greater than that of the forward-backwards algorithm.

5.2. Experimental analysis

In the real-world experiment, both the transmitter and the receivers are the popular HackRF One software defined radio devices. A continuous wave (CW) within the ISM band at 2.412 GHz was transmitted from a mobile transmitter, where the transmission rate is configured as 10 Hz, as most practical RSS-based systems have a very low measurement update rate.

Fig. 15. Performance comparison using real RSS measurements.


Five HackRFs were utilized as the anchor receiver nodes, as illustrated in Fig. 13. RSS values (in dBm) are obtained by taking the average logarithm of the square of the in-phase and quadrature samples. The RSS samples at each AN are transmitted to the central AN (AN1 in Fig. 13), where an HMM/HsMM-based algorithm is run for MT location tracking. The model training is conducted in the same way as described for the simulation, and a uniform prior for the initial state probability is assumed. The trained transition probability matrix is depicted as a system transition diagram in Fig. 14. We observed that a missing measurement at an AN may be caused by two factors: 1) the wireless channel is unreliable or the MT is too far away from the AN; 2) packet loss occurs during the data transmission from the AN to the central machine.

Fig. 15 displays the performance comparison of the HMM-based approaches and the HsMM-based approaches based on one of the MT walking trajectories, and the result is quite similar to that in the simulated scenario in Fig. 11. Based on our experiments, we summarise that the performance of the HsMM-based approaches is more consistent than that of the HMM-based approaches. The localization result of the latter fluctuates frequently if either the RSS measurement is contaminated by multipath interference or some of the ANs fail to receive a measurement. The HsMM approaches offer better overall performance than the HMM in the localization application, mitigating both multipath interference of the RSS signal and missing measurements among sensors.

6. Conclusion

This paper studies a robust radio source localization technique based on RSS data and knowledge of a set of anchor receivers in the context of multipath and missing measurements. We propose a hidden semi-Markovian framework, which takes prior knowledge of the radio source dynamics and the environmental constraints into the design of the underlying semi-Markov chain. Moreover, the HsMM framework addresses the issues of state duration uncertainty and missing data more naturally than the HMM. Simulation analysis as well as experimental testing demonstrate that the HsMM-based algorithm is more consistent and robust than the standard HMM-based algorithm for real-time location estimation, at the cost of increased computational load. Semi-Markov chain modelling of emitter dynamics is a promising solution for RSS-based indoor localization.

Declaration of Competing Interest

The authors whose names are listed immediately below certify that they have NO affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers' bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.

Acknowledgment

This work is partially supported by Australian Research Council Linkage project grant LP150100065.

Appendix A

$$\hat{\pi}_i = \frac{\pi_i\, \beta_1(i)}{\sum_{i=1}^{N} \pi_i\, \beta_1(i)}, \quad i = 1, 2, \cdots, N. \tag{A.1}$$

$$\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \alpha_t(i)\, a_{ij}\, \beta_{t+1}(j)}{\sum_{t=1}^{T-1} \alpha_t(i)\, \beta_{t+1}^*(i)}, \quad 1 \le i \ne j \le N, \qquad \hat{a}_{ii} = 0, \quad i = 1, 2, \cdots, N. \tag{A.2}$$

$$\hat{p}_i(d) = \frac{\sum_{t=1}^{T-d+1} \alpha_{t-1}^*(i)\, p_i(d)\, b_i\!\left(o_{t:t+d-1}^{m_t:m_{t+d-1}}\right) \beta_{t+d}^*(i)}{\sum_{t=1}^{T} \alpha_{t-1}^*(i)\, \beta_t(i)}, \quad i = 1, 2, \cdots, N, \quad d = 1, 2, \cdots, D. \tag{A.3}$$

References

[1] P. Pivato, L. Palopoli, D. Petri, Accuracy of RSS-based centroid localization algorithms in an indoor environment, IEEE Trans. Instrum. Meas. 60 (10) (2011) 3451-3460.
[2] H. Liu, H. Darabi, P. Banerjee, J. Liu, Survey of wireless indoor positioning techniques and systems, IEEE Trans. Syst. Man Cybern. Part C 37 (6) (2007) 1067-1080.
[3] N. Patwari, J.N. Ash, S. Kyperountas, A.O. Hero, R.L. Moses, N.S. Correal, Locating the nodes: cooperative localization in wireless sensor networks, IEEE Signal Process. Mag. 22 (4) (2005) 54-69.
[4] A. Haeberlen, E. Flannery, A.M. Ladd, A. Rudys, D.S. Wallach, L.E. Kavraki, Practical robust localization over large-scale 802.11 wireless networks, in: Proceedings of the 10th Annual International Conference on Mobile Computing and Networking, ACM, 2004, pp. 70-84.
[5] N. Viol, J.Á.B. Link, H. Wirtz, D. Rothe, K. Wehrle, Hidden Markov model-based 3D path-matching using raytracing-generated Wi-Fi models, in: Indoor Positioning and Indoor Navigation (IPIN), 2012 International Conference on, IEEE, 2012, pp. 1-10.
[6] C. Morelli, M. Nicoli, V. Rampa, U. Spagnolini, C. Alippi, Particle filters for RSS-based localization in wireless sensor networks: an experimental study, in: 2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 4, 2006, doi:10.1109/ICASSP.2006.1661129.
[7] M. Nicoli, C. Morelli, V. Rampa, A jump Markov particle filter for localization of moving terminals in multipath indoor scenarios, IEEE Trans. Signal Process. 56 (8) (2008) 3801-3809.
[8] C. Morelli, M. Nicoli, V. Rampa, U. Spagnolini, Hidden Markov models for radio localization in mixed LOS/NLOS conditions, IEEE Trans. Signal Process. 55 (4) (2007) 1525-1542, doi:10.1109/TSP.2006.889978.
[9] S.-Z. Yu, Hidden semi-Markov models, Artif. Intell. 174 (2) (2010) 215-243.
[10] K.P. Murphy, Hidden semi-Markov models (HSMMs), Unpublished Notes 2 (2002).
[11] J. Ferguson, Variable duration models for speech, in: Proc. Symposium on the Application of Hidden Markov Models to Text and Speech, 1980.
[12] M. Baratchi, N. Meratnia, P.J. Havinga, A.K. Skidmore, B.A. Toxopeus, A hierarchical hidden semi-Markov model for modeling mobility data, in: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ACM, 2014, pp. 401-412.
[13] J. Lapuyade-Lahorgue, W. Pieczynski, Unsupervised segmentation of hidden semi-Markov non-stationary chains, Signal Process. 92 (1) (2012) 29-42.
[14] Y. Li, S. Williams, B. Moran, A. Kealy, G. Retscher, High-dimensional probabilistic fingerprinting in wireless sensor networks based on a multivariate Gaussian mixture model, Sensors 18 (8) (2018) 2602.
[15] Y. Bar-Shalom, X.R. Li, T. Kirubarajan, Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software, John Wiley & Sons, 2004.
[16] S.-Z. Yu, H. Kobayashi, A hidden semi-Markov model with missing data and multiple observation sequences for mobility tracking, Signal Process. 83 (2) (2003) 235-250.
[17] J. Lember, A.A. Koloydenko, Bridging Viterbi and posterior decoding: a generalized risk approach to hidden path inference based on hidden Markov models, J. Mach. Learn. Res. 15 (1) (2014) 1-58.
[18] T.S. Rappaport, et al., Wireless Communications: Principles and Practice, vol. 2, Prentice Hall PTR, New Jersey, 1996.
[19] A.F. Molisch, Wireless Communications, vol. 34, John Wiley & Sons, 2012.
[20] Y. Guédon, Estimating hidden semi-Markov chains from discrete sequences, J. Comput. Graph. Stat. 12 (3) (2003) 604-639.
[21] A.V. Nefian, L. Liang, X. Pi, L. Xiaoxiang, C. Mao, K. Murphy, A coupled HMM for audio-visual speech recognition, in: Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference on, vol. 2, IEEE, 2002, pp. II-2013.
[22] C.J. Wu, et al., On the convergence properties of the EM algorithm, Ann. Stat. 11 (1) (1983) 95-103.
[23] L.R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proc. IEEE 77 (2) (1989) 257-286.
[24] P.A. Devijver, Baum's forward-backward algorithm revisited, Pattern Recognit. Lett. 3 (6) (1985) 369-373.
[25] S.E. Levinson, Continuously variable duration hidden Markov models for automatic speech recognition, Comput. Speech Lang. 1 (1) (1986) 29-45.
[26] Remcom Wireless InSite 3D Wireless Prediction Software, https://www.remcom.com/.
[27] M. Azimi, P. Nasiopoulos, R.K. Ward, Offline and online identification of hidden semi-Markov models, IEEE Trans. Signal Process. 53 (8) (2005) 2658-2663.