Spatio-temporal adaptive indoor positioning using an ensemble approach

Taisei Hayashi, Daisuke Taniuchi, Joseph Korpela, Takuya Maekawa*

Department of Multimedia Engineering, Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka, 565-0871, Japan
Keywords: Indoor positioning, Wi-Fi sensing, Fingerprinting, Ensemble learning
Abstract: We present a fingerprinting-based Wi-Fi indoor positioning method that is robust against temporal fluctuations and spatial instability in Wi-Fi signals. An ensemble is created from randomized weak position estimators, with the estimators specialized to different areas in the target environment and designed so that each area has estimators that rely on different subsets of stable APs. During positioning, we cope with spatial instability by dynamically adjusting the weights of the weak estimators depending on the user's estimated location, and we cope with temporal fluctuations by dynamically adjusting the weights based on a periodic assessment of their performance using a particle filter tracker.
1. Introduction

Wi-Fi technology is now ubiquitous, with public Wi-Fi access a common feature in many public areas such as hospitals, shopping malls, and train stations. Because of its prevalence, many pervasive computing researchers have attempted to construct indoor positioning systems based on Wi-Fi access points (APs) and smartphones that can sense Wi-Fi signals from those APs. Typically, these Wi-Fi based systems use fingerprinting techniques to estimate indoor positions [1]. Fingerprinting begins with a training phase in which Wi-Fi signals (i.e., the unique MAC addresses of the APs along with their received signal strengths) are observed at known coordinates (training points). A fingerprint database is then created in which the fingerprint for each training point is the unique set of Wi-Fi signals observed at that point. In the positioning phase (test phase), the Wi-Fi signals observed at unknown coordinates (test points) are compared with the fingerprints in the database to determine the closest match.

Such fingerprinting techniques often suffer from degraded positioning accuracy due to temporal fluctuations and spatial instability in the Wi-Fi signals. Several previous studies have argued that temporal fluctuations in Wi-Fi signals are largely due to environmental factors, including changes in ambient temperature and humidity levels [2,3]. Additionally, in dynamic environments where the physical layout changes (e.g., due to installing new partitions in an office or repositioning Wi-Fi APs), the Wi-Fi signals observed in the positioning phase may deviate significantly from those stored in the fingerprint database. Because of environmental factors such as these, the positioning accuracy of fingerprinting techniques may gradually degrade with time. To cope with temporal fluctuations, many previous studies have employed fixed sniffing sensor nodes or wireless beacons. For example, our prior work [4] employed fixed Bluetooth beacons and inertial sensors in a user's mobile device to precisely track the user's position, while simultaneously obtaining Wi-Fi scan data from the mobile device. Our method then updated the Wi-Fi positioning model using the Wi-Fi scans and the estimated positions. In this paper, we attempt to update the Wi-Fi positioning model without using such fixed nodes or beacons.
* Corresponding author. E-mail address: [email protected] (T. Maekawa).
Fig. 1. Structure of our ensemble indoor position estimator.
Spatial instability in the Wi-Fi signals also affects positioning accuracy, since indoor positioning systems commonly use all Wi-Fi signals observed in the training phase when creating fingerprints, including the signals from APs that are far from a given training point. Because the Wi-Fi signals from distant APs are often weak and unstable, they degrade the positioning accuracy when used during the positioning phase. Moreover, when the number of APs is large, the performance of the positioning system can also be degraded by the high dimensionality of the data, i.e., the curse of dimensionality. To cope with spatial instability, our prior work [5] employed an ensemble of weak position estimators whose weights could be changed according to a user's current position. For example, when the user is far from a particular AP, the weights for weak estimators using that AP are reduced. Cooper et al. [6] constructed a room identification system combining Wi-Fi and Bluetooth. In their system, the authors also employed an ensemble of weak room estimators. The weight of each weak estimator was determined based on the estimated usefulness of the Wi-Fi/Bluetooth beacons used in the estimator for room classification. However, these prior studies did not cope with temporal fluctuations. To the best of our knowledge, there is no unified framework for coping with both temporal fluctuations and spatial instability.

In our framework, we prepare an ensemble of randomized weak position estimators that each employ Wi-Fi signals from a randomly selected subset of the APs observed in an environment of interest. Using these randomized estimators allows us to construct an ensemble estimator that does not over-fit the unstable training data. Our ensemble estimator combines the estimates from each of the weak estimators using adaptively assigned weights, with the weights adjusted according to temporal and spatial changes in order to increase the robustness of the ensemble estimator to these changes. We cope with spatial changes by splitting the environment of interest into several areas and tailoring several weak estimators to each area so that they can accurately estimate the user's position in that area. We cope with temporal changes in the Wi-Fi signals by using random subsets of APs in our weak estimators. By using randomly selected APs, we ensure that even when signals from an AP become unstable due to a layout change, not all weak estimators will employ Wi-Fi signals from that AP, and therefore not all weak estimators will suffer decreased performance. Furthermore, because each weak estimator employs only a few randomly selected APs, our framework is able to avoid the curse of dimensionality. In this way, we are able to produce an ensemble of unique randomized estimators, with each estimator affected by a different subset of the APs and tailored to produce precise estimates in a particular area. Our framework automatically assesses the performance of each of these weak estimators by continually tracking the user, and it periodically updates the weights of the estimators based on how well their estimates agree with the user's path. By doing so, we can detect estimators whose accuracy has been reduced by temporal fluctuations in the Wi-Fi signals and reduce their weights in the ensemble. To achieve such spatio-temporal adaptive indoor positioning, we propose the ensemble indoor positioning architecture shown in Fig. 1.
In this study, we assume that the environment of interest consists of several areas that are made up of training points whose signal features are similar to each other. For example, all the training points in a given room could constitute an area. We divide the environment of interest into several such areas in a non-parametric manner, i.e., our method does not require the number of areas as input. In Fig. 1, the root node of our ensemble estimator is the environment of interest. Each of the nodes in the second tier then corresponds to one of the areas in the environment, and the child nodes for each area’s node correspond to the weak estimators that are tailored to that area using a technique based on principal component analysis (PCA). The weak estimators tailored to each area are also designed to use only APs whose signals are stably observed in that area.
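To make this hierarchy concrete, the following sketch models the tree in Fig. 1 with plain Python data classes, assuming 2-D floor coordinates; the class and field names (WeakEstimator, Area, EnsemblePositioner) are our own illustrative choices rather than identifiers from the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

import numpy as np


@dataclass
class WeakEstimator:
    """Leaf node: one randomized estimator tailored to a single area."""
    ap_subset: List[str]                              # randomly chosen stable APs (feature bagging)
    transform: Callable[[np.ndarray], np.ndarray]     # PCA-based feature transform Z_n(x)
    weight: float = 1.0                               # w_n, adapted over time


@dataclass
class Area:
    """Second-tier node: training points whose Wi-Fi signals are similar."""
    training_point_ids: List[int]
    stable_aps: List[str]
    estimators: List[WeakEstimator] = field(default_factory=list)


@dataclass
class EnsemblePositioner:
    """Root node: the environment of interest."""
    areas: List[Area] = field(default_factory=list)

    def all_estimators(self) -> List[WeakEstimator]:
        """Flatten the tree into the list of weak estimators used for positioning."""
        return [e for area in self.areas for e in area.estimators]
```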
By using the history of a user's estimated position trajectories, we can evaluate the performance of the weak estimators and periodically update their weights. When we detect that the reliability of a weak estimator has decreased, we reduce the weight for that estimator. To do this, we track the user using a particle filter and reduce the weights for weak estimators that output position estimates that are either inconsistent with the user's past trajectory or inconsistent with the area's floor map. For example, we reduce the weights for weak estimators that output a trajectory representing a collision with a wall or a sudden jump in position. By continually evaluating the performance of the weak estimators using the user's everyday sensor data, we attempt to reduce the weights of poor estimators affected by temporal fluctuations in the Wi-Fi signals, as well as to increase the weights of estimators that output precise estimates of the user's current position. Furthermore, we use a semi-supervised learning approach that periodically updates the weak estimators using Wi-Fi signals that yielded more reliable position estimates, in order to increase the long-term stability of our position estimates.

In summary, our ensemble position estimator has the following features: (1) it can cope with the spatial instability of Wi-Fi signals by tailoring weak estimators to each area; (2) it can cope with temporal fluctuations in the Wi-Fi signals by employing randomized weak estimators that are not all affected by the temporal fluctuations of particular APs; (3) it can adaptively change the weights of the weak estimators based on the estimated reliability of their estimates; (4) its weak estimators avoid the curse of dimensionality since they use only a small number of APs; and (5) it is robust against unstable signals since the ensemble of randomized weak estimators permits us to construct a positioning model that does not over-fit the training data. The contributions of this study are that (1) we propose a new spatio-temporal adaptive indoor positioning method based on an ensemble position estimator and (2) we evaluate the proposed method using real sensor data obtained over 100 days.

2. Related work

2.1. Adaptive indoor positioning

Several previous studies have proposed methods for coping with temporal fluctuations in Wi-Fi signals when using Wi-Fi-based positioning methods. Bolliger [7] asked end users to manually collect fingerprints using a crowdsourcing approach. The author stored fingerprints with their timestamps in a database, and tried to use the timestamps to adapt to temporal signal strength changes. S. Chen et al. [3] exploited environmental properties including temperature, humidity, and ambient noise obtained using sensor networks to improve indoor positioning accuracy. Y. Chen et al. [2] also investigated the effect of environmental factors (people, doors, and humidity) on indoor positioning with Wi-Fi, and developed a sensor-network-assisted adaptive indoor positioning method by sensing those environmental factors. Yin et al. [8] installed a small number of Wi-Fi sensor nodes in an environment, and applied regression analysis to learn/estimate the temporal predictive relationship between the signal strength values received by those sensor nodes and the signal strength received at a given test point. Similarly, Zheng et al.
[9] installed sniffing sensor nodes to collect up-to-date RSSI values and employed regression analysis to derive the transformation from a pre-trained radio map to a current radio map. Taniuchi et al. [4] employed Bluetooth beacons installed in an environment along with a smartphone accelerometer to precisely locate a user over the long term, and then determined the temporal fluctuation of the Wi-Fi signals at each coordinate. However, while each of the above methods attempts to cope with temporal fluctuations in the Wi-Fi signals, they all require the installation of sensor nodes or beacons in the environment of interest.

There also exist several studies that attempt to select appropriate APs for an environment of interest in order to cope with the issue of spatial instability. Gallagher et al. [10] proposed a method that automatically determines whether or not an AP should be used in an indoor positioning system. In their method, the score for an AP increases each time the AP is observed by a user as expected at a given location and decreases each time it is not observed as expected. APs are then added to and removed from the positioning system based on these scores. Lim et al. [11] introduced a more sophisticated scoring function that uses an exponential approach for accelerating the inclusion of APs into the positioning system. In contrast, in this study, we prepare sets of appropriate APs at a finer granularity, based on smaller areas within the overall environment.

2.2. Semi-supervised learning approach

Pulkkinen et al. [12] employed semi-supervised manifold learning to obtain densely labeled fingerprints from partially labeled (training) fingerprints. The authors did this by constructing a non-linear projection that maps high-dimensional signal fingerprints onto a two-dimensional manifold. Chai et al. [13] attempted to achieve accurate positioning using few (sparse) training fingerprints by employing a time series of Wi-Fi scan data continually obtained from a user's mobile device while the user is moving. The authors modeled signal strengths at places between the training fingerprints with a hidden Markov model, in an attempt to interpolate between the sparse training fingerprints.

2.3. Sensor fusion

To cope with unstable Wi-Fi signals, many recent studies have employed additional smartphone sensors, such as light, magnetic, and vision sensors, alongside the Wi-Fi module [14,15]. In contrast, our method does not rely on such additional sensors.
Fig. 2. Overview of process for training ensemble position estimators.
3. Training ensemble position estimators

3.1. Overview

Fig. 2 shows an overview of the process for training our ensemble position estimators. We first partition an environment of interest into several areas according to the Wi-Fi signal similarity between training points using a non-parametric approach. For each area, we then select a set of stable APs and construct randomized weak estimators tailored to that area using only those stable APs.

3.2. Area partitioning

In the first step of our training process, we divide an environment of interest into several areas, with each area consisting of training points whose Wi-Fi signals are similar to each other, e.g., a room. During our study, we found that the set of unstable APs for each area is often common to most training points in that area, with these unstable APs being APs that are located far from that area. Therefore, we determine a single set of unstable APs for each area, and do not use these APs for any weak estimators used for that area. In order to partition the environment in a non-parametric way, we first translate the environment into a graph, and then partition the graph into several sub-graphs using spectral clustering [16], a technique commonly used in graph clustering.

3.2.1. Graph expression

The environment of interest is first translated into a complete weighted graph with each node corresponding to a training point in the environment and each edge weight corresponding to the Wi-Fi signal similarity between two training points. By partitioning the graph based on these edge weights, we can group the nodes (training points) based on the similarity of their observed Wi-Fi signals. Assume that, in the training phase, time series of signal scans s_A and s_B are observed at training points A and B, respectively, with a single scan at time t corresponding to a set of observed signal strength values from the APs, i.e., a signal strength vector. We first model the probability density function (PDF) for each signal-scan time series by generating a Gaussian mixture model (GMM) from that time series, e.g., a GMM M_A(x) for training point A would be trained using s_A. Note that x represents a single Wi-Fi scan and

M_A(x) = \sum_{n=1}^{C} \pi_n \mathcal{N}(x; \mu_n, \Sigma_n),

where \pi_n, \mu_n, and \Sigma_n are the mixture weight, the mean vector, and the covariance matrix of the nth multivariate Gaussian distribution, respectively, and C is the number of mixture components. We estimate the parameters of these GMMs using the expectation maximization (EM) algorithm [17]. We next compute the distance (dissimilarity) between pairs of GMMs. The inverse of the distance is then the similarity between the GMMs, which is assigned as the edge weight between the corresponding nodes. In order to compute these distances, we employ the Cauchy–Schwarz PDF divergence measure, since this measure yields an analytic closed-form expression for GMMs [18]. Let M_A(x) and M_B(x) denote two Gaussian mixtures. The Cauchy–Schwarz PDF divergence measure [19] is then given by:

D_{CS}(M_A(x), M_B(x)) = -\log \frac{\int M_A(x) M_B(x)\,dx}{\sqrt{\int M_A(x)^2\,dx \int M_B(x)^2\,dx}}.    (1)
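Eq. (1) has a closed form for Gaussian mixtures because each cross term \int \mathcal{N}(x; \mu_n, \Sigma_n)\,\mathcal{N}(x; \mu_m, \Sigma_m)\,dx equals \mathcal{N}(\mu_n; \mu_m, \Sigma_n + \Sigma_m) [18]. The sketch below fits a GMM to one training point's scans with scikit-learn and evaluates Eq. (1) using that identity; the number of mixture components and the regularization value are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture


def fit_gmm(scans: np.ndarray, n_components: int = 3) -> GaussianMixture:
    """Fit a GMM M(x) to the signal-strength vectors observed at one training point."""
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           reg_covar=1e-3).fit(scans)


def _cross_term(a: GaussianMixture, b: GaussianMixture) -> float:
    """Closed-form integral of M_A(x) * M_B(x) dx:
    sum_{n,m} pi_n * pi_m * N(mu_n; mu_m, Sigma_n + Sigma_m)."""
    total = 0.0
    for pi_n, mu_n, cov_n in zip(a.weights_, a.means_, a.covariances_):
        for pi_m, mu_m, cov_m in zip(b.weights_, b.means_, b.covariances_):
            total += pi_n * pi_m * multivariate_normal.pdf(mu_n, mean=mu_m,
                                                           cov=cov_n + cov_m)
    return total


def cauchy_schwarz_divergence(a: GaussianMixture, b: GaussianMixture) -> float:
    """Eq. (1): D_CS(M_A, M_B) = -log( int M_A M_B / sqrt(int M_A^2 * int M_B^2) )."""
    return -np.log(_cross_term(a, b) /
                   np.sqrt(_cross_term(a, a) * _cross_term(b, b)))


def edge_weight(a: GaussianMixture, b: GaussianMixture) -> float:
    """Edge weight between two training points = inverse of the dissimilarity."""
    return 1.0 / cauchy_schwarz_divergence(a, b)
```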
Using this divergence, we can compute the dissimilarity between each pair of GMMs, i.e., nodes, and can assign the edge weights in our graph as the inverses of these dissimilarities.

3.2.2. Spectral clustering using NCut

After using the process described above to construct our complete weighted graph, we then need to partition the graph into several sub-graphs, corresponding to the areas (e.g., rooms) in our environment of interest. When partitioning the graph, we want to cut the edges so that the weights of intra-cluster edges are high while the weights of inter-cluster edges are low. Therefore, we partition the graph using an evaluation function based on this objective, the widely-used Normalized Cut (NCut) algorithm [20]. In this algorithm, the k-means++ method [21] is used to partition the graph. However, in this study we wish to partition our graph without giving the number of areas as input, whereas the k-means++ method requires the number of clusters as an input.
Fig. 3. Example observation counts of APs obtained at training points in a given area.
Therefore, in this study, we use the Akaike Information Criterion (AIC) [22] to automatically determine the number of clusters. With AIC, we can address the trade-off between the goodness of fit of a model to the data and the complexity of the model (the number of clusters). We compute the AIC of a clustering result obtained for each candidate K (the number of clusters) as follows:

AIC = M \left( 1 + \ln \frac{\sum_{k=1}^{K} \sum_{x \in C_k} (x - X_k)^2}{M} \right) + 2KM,    (2)

where C_k is the kth cluster and X_k is the centroid of C_k. Using this function, we can automatically determine the value to use for K as the K value that minimizes the resulting AIC value.
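A minimal sketch of this non-parametric partitioning, under two stated assumptions: scikit-learn's SpectralClustering (which applies k-means internally) stands in for the NCut/k-means++ step described above, and the AIC term follows our reconstruction of Eq. (2), with each training point represented by a feature vector (e.g., its mean signal-strength vector) when computing the cluster centroids X_k, since the paper does not restate which representation is used.

```python
import numpy as np
from sklearn.cluster import SpectralClustering


def clustering_aic(features: np.ndarray, labels: np.ndarray, n_clusters: int) -> float:
    """AIC of a clustering result, following our reconstruction of Eq. (2):
    a goodness-of-fit term based on the within-cluster squared error plus a
    complexity penalty that grows with the number of clusters K."""
    m = len(features)
    sse = sum(np.sum((features[labels == k] - features[labels == k].mean(axis=0)) ** 2)
              for k in range(n_clusters) if np.any(labels == k))
    return m * (1.0 + np.log(max(sse, 1e-12) / m)) + 2.0 * n_clusters * m


def partition_environment(affinity: np.ndarray, features: np.ndarray,
                          k_max: int = 10) -> np.ndarray:
    """Partition the complete weighted graph (given its affinity matrix) into areas,
    choosing the number of clusters K that minimizes the AIC."""
    best_labels, best_aic = None, np.inf
    for k in range(2, k_max + 1):
        labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                    random_state=0).fit_predict(affinity)
        aic = clustering_aic(features, labels, k)
        if aic < best_aic:
            best_labels, best_aic = labels, aic
    return best_labels
```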
3.3. Tailoring weak estimators for each area

For each area determined using the graph partitioning method described above, we construct n_rw randomized weak estimators that can accurately locate a user in that area. We first determine a set of APs whose signals are stably observed in the given area, and then randomly select n_ra APs from those stable APs for each weak estimator. Additionally, instead of simply using a feature space whose axes correspond to the signal strengths of the selected stable APs, this study uses a new feature space transformed from that simple space to improve the positioning accuracy in the area while simultaneously reducing the dimensionality of the original space.

3.3.1. Selecting stable APs

For each area (collection of training points), we select a set of APs whose signals were stably observed in that area during the training phase by simply counting the number of signal observations for each AP. We define the stable APs as the top r_st% of APs in regards to the observation counts. Fig. 3 shows example observation counts that were obtained in one area of our experimental environment. The area consists of four training points, with 192 scans collected in the area. As shown in the graph, several of the APs had observation counts that were significantly below those of the more stable APs.

3.3.2. Transforming the feature space

We then construct n_rw randomized weak estimators for each area, with each of the weak estimators employing n_ra randomly selected stable APs, i.e., feature bagging. Note that, when precisely estimating positions for a user in some given area, it is important that the weak classifiers be able to distinguish between the training points within that specific area. In order to accurately distinguish between the training points, we should use an AP whose observed signal strengths are noticeably different at each of the training points, e.g., AP1 and AP2 in Fig. 4. Likewise, we should not use APs whose observed signal strengths are similar for the different training points, e.g., AP3 in Fig. 4. By ignoring the uninformative APs, i.e., AP3, and only using the informative APs, i.e., AP1 and AP2, we can easily reduce the dimensionality of our feature space. Additionally, as is shown in the bottom-right example in Fig. 4, the signal strengths for some APs change proportionally with each other, e.g., AP1 with AP2. In such cases, we can combine the axes of these APs to create a new axis that represents much of the variability in the data, allowing us to accurately distinguish between training points while further reducing the dimensionality of our feature space. To find the new feature space, we employ principal component analysis (PCA), which finds new axes (principal components) that maximize the variance while reducing the dimensionality. With PCA, we find a transformation function z = Z(x) for each weak estimator, where z is the transformed vector. For more detail about how to compute the coefficients, see [23]. By computing the PCA using sensor data obtained in an area, we obtain a new feature space that allows us to precisely distinguish between training points in that area.
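The sketch below assembles the randomized feature space of one weak estimator for a given area: it keeps the top r_st% most frequently observed APs, draws n_ra of them at random (feature bagging), and fits a PCA on the area's training scans restricted to that subset. The parameter names mirror Table 1; the number of principal components kept and the data layout are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA


def select_stable_aps(observation_counts: dict, r_st: float = 90.0) -> list:
    """Keep the top r_st% of APs in the area ranked by observation count.
    `observation_counts` maps an AP's MAC address to how often it was observed."""
    ranked = sorted(observation_counts, key=observation_counts.get, reverse=True)
    n_keep = max(1, int(round(len(ranked) * r_st / 100.0)))
    return ranked[:n_keep]


def build_weak_estimator_space(train_matrix: np.ndarray, ap_index: dict,
                               stable_aps: list, n_ra: int = 10,
                               n_components: int = 3, rng=None):
    """Feature bagging plus PCA for one weak estimator.
    train_matrix: (num_scans, num_aps) RSSI matrix of the area's training scans.
    ap_index: maps each AP's MAC address to its column index in train_matrix."""
    rng = rng or np.random.default_rng()
    chosen = list(rng.choice(stable_aps, size=min(n_ra, len(stable_aps)), replace=False))
    cols = [ap_index[mac] for mac in chosen]
    pca = PCA(n_components=min(n_components, len(cols))).fit(train_matrix[:, cols])

    def transform(x: np.ndarray) -> np.ndarray:
        """The transformation z = Z(x) used by this weak estimator."""
        return pca.transform(np.atleast_2d(x)[:, cols])

    return chosen, transform
```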
3.3.3. Weak estimator

Using the method described above, we obtain a new feature space for each weak estimator that improves its positioning accuracy for its area. Our weak estimators then use these transformed feature spaces to compute position estimates based on a k-nearest neighbor search. When a scan x is obtained from the user's smartphone, we create a transformed observation vector z by projecting x onto the new feature space, and then compute the likelihood of observing z given the Wi-Fi signal GMM of each of the i training points tp_i by using p(z | tp_i) = C / d(z, z_{tp_i}), where d(·, ·) denotes the Euclidean distance between two vectors and z_{tp_i} denotes a projected training fingerprint collected at tp_i. The value for C is determined such that the sum of the top-k values of p(z | tp_i) becomes 1. The estimated position from the nth weak estimator is then computed as the weighted average over the top-k training points:

Pos_n(z) = \frac{\sum_{i=1}^{k} p(z \mid tp_i) \, Pos(tp_i)}{\sum_{i=1}^{k} p(z \mid tp_i)},    (3)
where Pos(tp_i) are the coordinates of the ith training point. In this way, we can prepare unique randomized estimators, with each estimator unaffected by several of the APs and tailored to produce precise estimates in a particular area.

4. Coping with spatial instability and temporal fluctuations

4.1. Overview

Fig. 5 shows an overview of our method for dynamically adjusting the weights w_n assigned to each of the weak estimators in our hierarchical ensemble. Our method addresses the issue of spatial instability by preparing weak estimators tailored to each area. It addresses the issue of temporal fluctuations in the Wi-Fi signals by periodically updating the weights of the weak estimators while tracking the user through the use of a particle filter tracker. The weight w_n assigned to the nth weak estimator is updated based on how well its estimates agree with the output from the tracker. For example, we reduce the weights for weak estimators that output a trajectory representing a collision with a wall or a sudden jump in position. We also cope with long-term fluctuations in the Wi-Fi signals by using the particle filter tracker to select high-quality scan data to be used to update the training data for the weak estimators through a semi-supervised learning technique (self-training).

4.2. Final estimates from the ensemble of weak estimators

The final output of the ensemble of weak estimators is the weighted average of the outputs from each of the weak estimators, which is calculated as follows:
Pos(x) = \frac{\sum_{n} w_n \, Pos_n(Z_n(x))}{\sum_{n} w_n},    (4)

where w_n is the weight of the nth weak estimator and Z_n(·) is the PCA-based transformation function for the nth estimator. Note that estimates that are not in the estimator's area or in an area adjacent to it are ignored in the calculation.
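Putting Eqs. (3) and (4) together, the sketch below first computes one weak estimator's top-k estimate in its transformed space and then combines the weak estimators using their current weights. The area-consistency check is reduced to a simple distance-to-area-center test, which is our simplification of ignoring estimates outside the estimator's area and its adjacent areas; max_dist is an assumed threshold.

```python
import numpy as np


def weak_estimate(z: np.ndarray, z_train: np.ndarray, train_coords: np.ndarray,
                  k: int = 3) -> np.ndarray:
    """Eq. (3): inverse-distance weighted average over the top-k training points.
    z_train holds the projected fingerprints z_{tp_i}; train_coords their coordinates."""
    d = np.linalg.norm(z_train - z, axis=1) + 1e-9
    top = np.argsort(d)[:k]
    p = 1.0 / d[top]
    p /= p.sum()                      # C is chosen so the top-k likelihoods sum to 1
    return p @ train_coords[top]


def ensemble_estimate(estimates, weights, area_centers, max_dist: float = 15.0):
    """Eq. (4): weighted average of the weak estimators' outputs, skipping estimates
    that fall far from the estimator's own area (a stand-in for the area check)."""
    num, den = np.zeros(2), 0.0
    for pos, w, center in zip(estimates, weights, area_centers):
        if np.linalg.norm(np.asarray(pos) - center) > max_dist:   # outside area and surroundings
            continue
        num += w * np.asarray(pos)
        den += w
    return num / den if den > 0 else None
```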
4.3. Using a particle filter tracker to track user positions and update estimator weights

We track the user's movement through the environment using a particle filter [24,25], a technique commonly used for estimating states in non-linear systems. The algorithm used in our particle filter works in a three-step process: sampling, weight calculation, and resampling. In the sampling step, new particles are generated from the particles at the previous time slice (t − 1) based on a motion model. These new particles represent the prior estimates for the user's coordinates at time t. In the weight calculation step, the particle weights are computed based on an importance function that uses the measurements taken at time t. In this study, the measurements correspond to the estimated positions from the weak estimators. In the resampling step, the particles are re-sampled according to their weights.

4.3.1. Sampling

In the sampling process, we estimate the coordinates of the ith particle at time t based on a motion model that assumes linear uniform motion. The probability with which p_i^{t-1} (the ith particle at time t − 1) will move to the coordinates c^t at time t is computed according to the following bivariate Gaussian distribution:

p(c^t \mid p_i^{t-1}) = \mathcal{N}(c^t;\ p_i^{t-1} + v_e(p_i^{t-1}) \Delta t,\ \Sigma_i^{t-1}),    (5)

where c_i^{t-1} are the coordinates of p_i^{t-1}, v_e(p_i^{t-1}) is the velocity of p_i^{t-1}, and \Sigma_i^{t-1} is the covariance matrix of the bivariate Gaussian distribution.
T. Hayashi et al. / Pervasive and Mobile Computing (
)
–
7
Fig. 4. Example signal strengths observed at training points in an area in our collection environment. The bottom graphs show 2-dimensional projections of the signal strengths for the training points. For example, the bottom left graph plots the signal strengths using the RSSIs of AP1 and AP3.
Fig. 5. Overview of our process for coping with spatial instability and temporal fluctuations in Wi-Fi signals through the use of ensemble of weak estimators and a particle filter tracker.
The mean of the distribution corresponds to the coordinates extrapolated at time t from the particle's speed and coordinates at time t − 1. Its standard deviation corresponds to the distance between the mean of the distribution and the particle's coordinates at time t − 1, and its covariances are zero. Using this distribution, we sample p new particles for each particle from time t − 1.
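A sketch of the sampling step in Eq. (5): each particle is propagated by a constant-velocity motion model, with the per-axis standard deviation set to the extrapolated displacement and zero covariance, as described above. The array-based particle representation is our own.

```python
import numpy as np


def sample_particles(positions: np.ndarray, velocities: np.ndarray,
                     dt: float, p: int = 10, rng=None) -> np.ndarray:
    """Eq. (5): draw p new particles from each particle at time t-1 using a
    bivariate Gaussian whose mean is the linearly extrapolated position and whose
    per-axis standard deviation equals the extrapolated displacement (zero covariance)."""
    rng = rng or np.random.default_rng()
    new = []
    for pos, vel in zip(positions, velocities):
        mean = pos + vel * dt
        sigma = max(np.linalg.norm(mean - pos), 1e-3)   # distance from previous position
        cov = np.diag([sigma ** 2, sigma ** 2])
        new.append(rng.multivariate_normal(mean, cov, size=p))
    return np.concatenate(new, axis=0)
```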
4.3.2. Particle weight calculation

We then compute the particle weights using the measurements obtained at time t. These measurements are the estimated coordinates from each of the weak estimators, Pos_n(z^t), where z^t is the scan x^t taken by the smartphone at time t after being transformed into the new feature space of the nth estimator, i.e., z^t = Z_n(x^t). The weight of the ith particle is computed according to the PDF of a mixture of bivariate Gaussian distributions whose mean values are these measurements:
8
T. Hayashi et al. / Pervasive and Mobile Computing (
)
–
w_{p_i} = \sum_{n} w_n^{t-1} \, \mathcal{N}(p_i^t \mid Pos_n(z^t)),    (6)

where p_i^t are the coordinates of the ith particle at time t and w_n^{t-1} is the weight of the nth weak estimator at time t − 1.
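A sketch of Eq. (6): each particle's weight is the value, at that particle's position, of a mixture of bivariate Gaussians centered on the weak estimators' measurements and scaled by the estimators' previous weights. The measurement covariance meas_cov is an illustrative assumption, since its value is not restated here.

```python
import numpy as np
from scipy.stats import multivariate_normal


def particle_weights(particles: np.ndarray, measurements: np.ndarray,
                     estimator_weights: np.ndarray, meas_cov=None) -> np.ndarray:
    """Eq. (6): w_{p_i} = sum_n w_n^{t-1} * N(p_i^t | Pos_n(z^t))."""
    meas_cov = np.diag([4.0, 4.0]) if meas_cov is None else meas_cov   # assumed covariance
    w = np.zeros(len(particles))
    for pos_n, w_n in zip(measurements, estimator_weights):
        w += w_n * multivariate_normal.pdf(particles, mean=pos_n, cov=meas_cov)
    return w
```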
4.3.3. Resampling

In the resampling process, we first eliminate any particles that collide with obstacles (e.g., walls) in a map of the environment [25]. Then, we resample r particles from the remaining particles according to their weights, with the probability with which a particle is resampled set proportional to its weight.

4.4. Updating the weak estimator weights w_n

We then update the weights of the weak estimators based on the resampled particles so that an estimator whose estimate is supported by many particles will have a higher weight (since the particles represent the user's probable positions derived from the user's past trajectory). The weight w_n^t of a weak estimator at time t is computed using the positions of the particles and the weight w_n^{t-1} computed at time t − 1. For example, assume that the nth weak estimator outputs a position estimate Pos_n(z^t) at time t. To compute the weight of this weak estimator, we take the particles located within m meters of Pos_n(z^t) and compute the weight based on the probability density function of a mixture of bivariate Gaussian distributions whose mean values correspond to the positions of those particles:

w_n^t = \lambda w_n^{t-1} + (1 - \lambda) \sum_{i} w_{p_i} \, \mathcal{N}(Pos_n(z^t) \mid p_i^t),    (7)

where \lambda (0 < \lambda \leq 1) is a forgetting factor [26] that controls the effect of past values.
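A sketch of Eq. (7): the nth estimator's weight becomes an exponentially smoothed agreement score between its latest estimate and the nearby resampled particles (those within m meters). The default values of m and the particle covariance are illustrative assumptions; λ = 0.9 follows Table 1.

```python
import numpy as np
from scipy.stats import multivariate_normal


def update_estimator_weight(w_prev: float, pos_n: np.ndarray,
                            particles: np.ndarray, particle_w: np.ndarray,
                            lam: float = 0.9, m: float = 5.0, cov=None) -> float:
    """Eq. (7): w_n^t = lam * w_n^{t-1}
                      + (1 - lam) * sum_i w_{p_i} * N(Pos_n(z^t) | p_i^t),
    summing only over particles within m meters of the estimate Pos_n(z^t)."""
    cov = np.diag([4.0, 4.0]) if cov is None else cov   # assumed particle covariance
    near = np.linalg.norm(particles - pos_n, axis=1) <= m
    agreement = sum(w_i * multivariate_normal.pdf(pos_n, mean=p_i, cov=cov)
                    for p_i, w_i in zip(particles[near], particle_w[near]))
    return lam * w_prev + (1.0 - lam) * agreement
```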
4.5. Self-training

To improve the long-term stability of the estimator, we employ a semi-supervised learning method called self-training [27]. In self-training, an estimator generates labels for new unlabeled data, which are then used along with the new data to re-train the estimator. In this study, we retrain the weak estimators using new data whose corresponding position estimates are consistent with particles that have high weights. That is, we assume that the coordinates of the top r_ss% heaviest particles at time t are correct and use these coordinates and their associated Wi-Fi scans as additional training data. Note that we update the estimators at the end of each day in our current implementation.

5. Evaluation

5.1. Data set

The sensor data used in this study were collected from three different buildings at our university, environments 1, 2, and 3, as shown in Fig. 6. Environment 1 is a laboratory floor. Environments 2 and 3 are the ground floors of two buildings, and are high-traffic areas. On the first day, we collected Wi-Fi fingerprints at the training points shown in Fig. 6 using a smartphone with a Wi-Fi sampling rate of about 1 Hz. This first day's data were then used as the training data for the indoor positioning models in our study. After that, a participant collected test data by randomly walking around the floors with a Google Nexus 7 every two or three days, at various times of day. This was done over a period of 14 days for environment 1 and 50 days for environments 2 and 3. The participant walked for about 30–40 min in each environment on each day. The ground-truth coordinates for the participant were recorded by video.

Fig. 6. Floor plans of experimental environments (29.8 m × 16.3 m, 93.0 m × 65.0 m, and 82.0 m × 37.0 m).
Table 1. Experimental parameters used in this study.

Parameter   Value   Description
n_rw        20      Number of randomized estimators in each area
n_ra        10      Number of APs used in each weak estimator
r_st        90      Percentage of stable APs
p           10      Number of particles generated from each particle
r           1000    Number of particles re-sampled
λ           0.9     Forgetting factor
5.2. Evaluation methodology

To investigate the effectiveness of our method, we tested the following methods. Table 1 summarizes the experimental parameters used in this study.

– kNN: The kNN method (k = 3) constructs vectors of signal data by concatenating the signal strength values received from all the APs. It then uses Euclidean distance to compare vectors obtained at unknown coordinates (test points) with vectors obtained at known coordinates (training points) in order to find the top-k similar training points. The final output is the weighted average of those top-k coordinates, with the weights based on their distances.
– Naive Bayes (NB): This method also constructs vectors of signal data by concatenating the signal strength values received from all the APs. It then uses the vectors of training data to train a naive Bayes classifier that is used to classify vectors of test data, with the classes corresponding to training points.
– Gaussian mixture model (GMM): This method also constructs vectors of signal data by concatenating the signal strength values received from all the APs. It uses the vectors of training data to model a signal strength distribution for each training point using p(x \mid tp_i) = \sum_{n=1}^{C} \pi_n \mathcal{N}(x; \mu_n, \Sigma_n). It then compares vectors of test data with these models to find the top-k training points. The final output is the weighted average of those top-k coordinates.
– Ensemble estimators for coping with spatial instability [5] (Spatial): This method represents a state-of-the-art method. It uses an ensemble of weak estimators with each weak estimator assigned to a specific area in the environment. In order to cope with spatial instability, it prepares an area estimator that roughly estimates the user's current area and then changes the weight of each weak estimator according to the area estimator's output. However, it does not cope with the issue of temporal fluctuations. While the original implementation of this method employs manually divided areas, here we instead use areas divided by our area partitioning method.

In addition to these methods, we prepared the following variants to investigate the effectiveness of each component of our method.

– Proposed (w/o AS): A modified version of the proposed method that does not perform AP selection, i.e., it uses all the APs observed in the environment of interest.
– Proposed (w/o FB): A modified version of the proposed method that does not perform feature bagging (random selection of APs for each weak estimator).
– Proposed (w/o AS&FB): A modified version of the proposed method that performs neither AP selection nor feature bagging.
– Proposed (w/o ss): A modified version of the proposed method that does not use semi-supervised learning.

We tested these methods using the following scenarios.

– Scenario 1: We simply use the raw signal strength data collected during the experiment.
– Scenario 2: We virtually remove 10 randomly-selected APs located outside the floor on approximately the 10th day, by removing those APs' signals from the collected data. This scenario simulates a sudden, drastic change in signal conditions caused by events such as an office move or a renovation of the Wi-Fi infrastructure.

5.3. Results: scenario 1

5.3.1. Positioning performance

First, we compare the performance of the proposed method with that of each of the three conventional methods: kNN, NB, and GMM. Table 2 shows the mean absolute errors (MAEs) for each method in each environment.
In all three environments, our method greatly outperformed all three conventional methods, with the conventional methods each achieving similar positioning performance. Also, we confirmed that there was a significant difference between the performance of our method and that of each of the three conventional methods using a two-tail t-test (p < 0.05). Figs. 7–9 show the transitions in the error distances for each of the methods across the duration of the experiment. As shown in these figures, our method always outperformed the three conventional methods, with the performances of these three methods appearing unstable. In Fig. 8, we can see a gradual increase in the MAEs for all methods in environment 2, which may be because a new building was being built next to this environment during the experimental period. While the MAEs for all methods for environment 3, which is a high-traffic area, were unstable, our proposed method always outperformed the other three methods.
Table 2. MAEs for each method (Scenario 1) [meters].

Method                 Env. 1   Env. 2   Env. 3   Average
kNN                    2.93     4.74     4.75     4.14
NB                     3.18     5.28     4.80     4.42
GMM                    3.19     5.39     4.34     4.31
Spatial                3.39     4.52     4.04     3.98
Proposed               2.10     4.11     3.82     3.34
Proposed (w/o AS)      2.60     4.49     4.27     3.79
Proposed (w/o FB)      2.18     4.14     3.99     3.45
Proposed (w/o AS&FB)   3.08     4.50     4.60     4.06
Proposed (w/o ss)      2.14     4.14     4.02     3.43
Fig. 7. Transitions in the error distances [meters] in environment 1 (Scenario 1).
Fig. 8. Transitions in the error distances [meters] in environment 2 (Scenario 1).
Fig. 9. Transitions in the error distances [meters] in environment 3 (Scenario 1).
5.3.2. Effect of weight updating

Spatial is a state-of-the-art positioning method based on ensemble learning, and as such it also greatly outperformed the three conventional methods. As shown in Table 2, our method also outperformed Spatial, with an average improvement of 0.65 m. We also confirmed that there was a significant difference between the performance of our method and that of Spatial using a two-tail t-test (p < 0.05). In environments 2 and 3, the MAEs for our method and Spatial were almost the same at the beginning of the experimental period, as shown in Figs. 8 and 9. However, the accuracy of Spatial gradually worsened.
Fig. 10. Resulting area partitioning for environment 1. The separate clusters are designated by shape.
This is because Spatial is not designed to cope with temporal fluctuations in the Wi-Fi signals. In contrast, our method periodically updates the weights of the weak estimators based on their performance. Additionally, in environment 1, our method greatly outperformed Spatial from day 1. Environment 1 is a smaller environment, and therefore the weights of the weak estimators in environment 1 are updated more frequently than those in environments 2 and 3, since there are fewer weak estimators (i.e., fewer areas) in environment 1. In our results, the weak estimator with the highest average weight had a weight 3.4 times that of the weak estimator with the lowest weight. The average MAE for the estimator with the lowest weight was 1.37 times larger than that for the estimator with the highest weight.

5.3.3. Effect of AP selection and feature bagging

Our method automatically partitions an environment of interest into several areas and prepares weak estimators tailored to each area. Fig. 10 shows the partitioning result for environment 1. (Due to space limitations, we show only the results for environment 1.) As shown in the result, training points in the same room have been successfully grouped into one cluster in a non-parametric manner. For each area, our method selects stable APs based on their observation counts, which are then used in the weak estimators for that area. Proposed (w/o AS) in Table 2 shows the positioning accuracies of the proposed method when we did not perform AP selection, i.e., when using all the APs observed in the environment of interest. (Note that each weak estimator still employs a randomly selected subset of those APs.) The effectiveness of AP selection can be seen in how the MAE for Proposed (w/o AS) was on average about 0.45 m poorer than that for Proposed. We also confirmed that there was a significant difference between the performance of our method and that of Proposed (w/o AS) using a two-tail t-test (p < 0.05). Proposed (w/o FB) in Table 2 shows the positioning accuracies of the proposed method when we did not perform feature bagging (random selection of APs for each weak estimator). The MAE for Proposed (w/o FB) was on average about 0.09 m poorer than that for Proposed. There was no significant difference between the performance of our method and that of Proposed (w/o FB) using a two-tail t-test (p > 0.05). Proposed (w/o AS&FB) in Table 2 shows the positioning accuracies of the proposed method when we performed neither AP selection nor feature bagging. The MAE for Proposed (w/o AS&FB) was on average about 0.72 m poorer than that for Proposed. Again, we confirmed that there was a significant difference between the performance of our method and that of Proposed (w/o AS&FB) using a two-tail t-test (p < 0.05). Based on these results, we can say that while the effect of feature bagging was smaller than that of AP selection, applying both AP selection and feature bagging greatly reduced positioning errors. We believe that the weak estimators do not work well when many unstable APs are included.

5.3.4. Effect of semi-supervised learning

Proposed (w/o ss) in Table 2 shows the MAEs for our method when we did not use semi-supervised learning. We can see that using semi-supervised learning slightly improved positioning performance. In particular, the MAE for Proposed in environment 3 was about 0.2 m smaller than that for Proposed (w/o ss).
In environment 3, while the MAEs for Proposed (w/o ss) and Proposed were almost the same at the beginning of the experimental period, the MAE for Proposed (w/o ss) gradually worsened, growing to about 0.4 m larger than that for Proposed by the last day. This indicates that by using semi-supervised learning in addition to the weight updates, we can periodically add new fingerprints and adapt to changing environments. However, there was no significant difference between the performance of our method and that of Proposed (w/o ss) using a two-tail t-test (p > 0.05).

5.4. Results: scenario 2 (removing APs)

Figs. 11–13 show the transitions in the error distances for kNN, NB, GMM, and Proposed in scenario 2. In this scenario, we removed 10 APs from each environment, with the APs removed on day 4 for environment 1 and on day 9 for environments 2 and 3. (Originally, 222, 336, and 395 APs were observed in environments 1, 2, and 3, respectively.) As shown in the figures, the MAEs for kNN and NB appear to increase after the removal of the APs. Looking at the changes in MAE in Table 3, the MAEs for kNN and NB increased by about one meter after the removal of the APs. Surprisingly, GMM did not appear to be affected by the removal of APs. This may be because generative models (e.g., GMM) are robust against missing data.
Fig. 11. Transitions in the error distances [meters] in environment 1 (Scenario 2). APs were removed on the 4th day.
Fig. 12. Transitions in the error distances [meters] in environment 2 (Scenario 2). APs were removed on the 9th day.
Fig. 13. Transitions in the error distances [meters] in environment 3 (Scenario 2). APs were removed on the 9th day.

Table 3. Changes in MAE when comparing scenario 2 to scenario 1 (after removal of APs) [meters].

Method     Env. 1   Env. 2   Env. 3   Average
kNN        +1.03    +0.10    +1.26    +0.80
NB         +0.53    +1.17    +1.24    +0.98
GMM        +0.15    +0.04    +0.01    +0.07
Spatial    −0.20    +0.65    +0.55    +0.33
Proposed   +0.45    +0.10    +0.42    +0.32
However, since GMM's original positioning performance was poor, it was still outperformed by Proposed and Spatial. Proposed and Spatial both maintained good positioning performance even after the 10 APs were removed. We believe that these methods were robust against the removal because they construct randomized estimators that avoid over-fitting the training data.

6. Discussion

6.1. Scalability of weak estimators

Our current implementation of the weak estimators employs a kNN search, requiring a distance computation between the observed signals and the fingerprint of each training point. Since each weak estimator computes the distances for all the
training points in an environment of interest, the performance of the kNN search will degrade when the number of training points in the environment is too high. One solution to this problem is to use only the training points close to the area to which the weak estimator is allocated. Another possible solution is to use a kNN algorithm with an index such as an R-Tree [28] or M-Tree [29].

6.2. Real-time model update

Because it takes a long time to learn the GMM parameters, we assume that model updating using semi-supervised learning is undertaken at the end of the day. If we want to achieve real-time updates, we can use a simpler method in which each weak estimator directly employs RSSI vectors in the kNN search, like the kNN method evaluated in the evaluation section. This enables the positioning model to quickly adapt to the dynamic environment using semi-supervised learning. However, this simple architecture can sacrifice basic positioning accuracy. In contrast, our method can update the weights of the weak estimators in real time because this process is included in the particle filter tracking. (The particle filter is computationally inexpensive.) As mentioned in the evaluation section, we could not find a significant difference between Proposed and Proposed (w/o ss). This may be because the weight updating alone is sufficient to adapt to the environment dynamics.

6.3. Feature space transformation

While this study uses PCA to transform the feature space, alternative methods for feature space transformation could possibly be used. One alternative is canonical correlation analysis (CCA), which finds a transformation that maximizes the correlation between the dependent variables and the transformed independent variables. Another alternative is linear discriminant analysis (LDA), which finds a transformation that maximizes the inter-class scatter and minimizes the intra-class scatter of the dependent variables. Both CCA and LDA are supervised transformation methods, since they use ground truth information, i.e., the dependent variables. In contrast, the PCA used in our method is an unsupervised transformation method, since it does not require ground truth information. While the supervised methods can find good transformations for discriminating training points, the resulting transformations can overfit the training data. Since we deal with fluctuating signal data, we decided to use PCA in our weak classifiers. An investigation of other transformation schemes is an important part of our future work.

7. Conclusion

This paper proposed a new ensemble Wi-Fi indoor positioning method that updates its positioning model to counter temporal fluctuations and spatial instability in Wi-Fi signals. Our evaluation using long-term data confirmed that our method could maintain stable positioning performance over long intervals. Our method achieved a 19% error reduction compared to a conventional kNN-based method and a 16% error reduction compared to a state-of-the-art method.

References

[1] A. LaMarca, Y. Chawathe, S. Consolvo, J. Hightower, I. Smith, J. Scott, T. Sohn, J. Howard, J. Hughes, F. Potter, et al., Place lab: Device positioning using radio beacons in the wild, in: Pervasive 2005, 2005, pp. 116–133.
[2] Y.-C. Chen, J.-R. Chiang, H.-h. Chu, P. Huang, A.W. Tsui, Sensor-assisted Wi-Fi indoor location system for adapting to environmental dynamics, in: ACM International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems, 2005, pp. 118–125.
[3] S. Chen, Y. Chen, W. Trappe, Exploiting environmental properties for wireless localization and location aware applications, in: PerCom 2008, 2008, pp. 90–99.
[4] D. Taniuchi, T. Maekawa, Automatic update of indoor location fingerprints with pedestrian dead reckoning, ACM Trans. Embedded Comput. Syst. (TECS) 14 (2) (2015) 27:1–27:23.
[5] D. Taniuchi, T. Maekawa, Robust Wi-Fi based indoor positioning with ensemble learning, in: IEEE 10th International Conference on Wireless and Mobile Computing, Networking and Communications, WiMob 2014, 2014, pp. 592–597.
[6] M. Cooper, J. Biehl, G. Filby, S. Kratz, LoCo: boosting for indoor location classification combining Wi-Fi and BLE, Pers. Ubiquitous Comput. 20 (1) (2016) 83–96.
[7] P. Bolliger, Redpin - adaptive, zero-configuration indoor localization through user collaboration, in: International Workshop on Mobile Entity Localization and Tracking in GPS-less Environments, 2008, pp. 55–60.
[8] J. Yin, Q. Yang, L. Ni, Adaptive temporal radio maps for indoor location estimation, in: PerCom 2005, 2005, pp. 85–94.
[9] V.W. Zheng, E.W. Xiang, Q. Yang, D. Shen, Transferring localization models over time, in: AAAI, 2008, pp. 1421–1426.
[10] T. Gallagher, B. Li, A.G. Dempster, C. Rizos, Database updating through user feedback in fingerprint-based Wi-Fi location systems, in: International Conference on Ubiquitous Positioning Indoor Navigation and Location Based Service, UPINLBS 2010, 2010, pp. 1–8.
[11] J. Lim, W. Jang, G. Yoon, D. Han, Radio map update automation for WiFi positioning systems, IEEE Commun. Lett. 17 (4) (2013) 693–696.
[12] T. Pulkkinen, T. Roos, P. Myllymäki, Semi-supervised learning for WLAN positioning, in: ICANN 2011, 2011, pp. 355–362.
[13] X. Chai, Q. Yang, Reducing the calibration effort for location estimation using unlabeled samples, in: PerCom 2005, 2005, pp. 95–104.
[14] H. Xu, Z. Yang, Z. Zhou, L. Shangguan, K. Yi, Y. Liu, Enhancing wifi-based localization with visual clues, in: UbiComp 2015, 2015, pp. 963–974.
[15] Q. Xu, R. Zheng, S. Hranilovic, IDyLL: Indoor localization using inertial and light sensors on smartphones, in: UbiComp 2015, 2015, pp. 307–318.
[16] U. Von Luxburg, A tutorial on spectral clustering, Stat. Comput. 17 (4) (2007) 395–416.
[17] A.P. Dempster, N.M. Laird, D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, J. R. Stat. Soc. Ser. B Stat. Methodol. 39 (1) (1977) 1–38.
[18] K. Kampa, E. Hasanbelliu, J. Principe, Closed-form Cauchy-Schwarz PDF divergence for mixture of Gaussians, in: International Joint Conference on Neural Networks, 2011, pp. 2578–2585.
[19] R. Jenssen, D. Erdogmus, K. Hild, J. Principe, T. Eltoft, Optimizing the Cauchy-Schwarz PDF distance for information theoretic, non-parametric clustering, in: Energy Minimization Methods in Computer Vision and Pattern Recognition, 2005, pp. 34–45.
[20] J. Shi, J. Malik, Normalized cuts and image segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 22 (8) (2000) 888–905.
[21] D. Arthur, S. Vassilvitskii, k-means++: The advantages of careful seeding, in: The Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2007, pp. 1027–1035.
[22] H. Akaike, Information theory and an extension of the maximum likelihood principle, in: The 2nd International Symposium on Information Theory, 1973, pp. 267–281.
[23] I. Jolliffe, Principal Component Analysis, Wiley Online Library, 2002.
[24] A. Doucet, Sequential Monte Carlo Methods, Wiley Online Library, 2001.
[25] O. Woodman, R. Harle, Pedestrian localisation for indoor environments, in: UbiComp 2008, 2008, pp. 114–123.
[26] S. Markovitch, P.D. Scott, The role of forgetting in learning, in: ICML 1988, 1988, pp. 459–465.
[27] O. Chapelle, B. Schölkopf, A. Zien, Semi-Supervised Learning, MIT Press, Cambridge, MA, 2006.
[28] A. Guttman, R-Trees: A Dynamic Index Structure for Spatial Searching, vol. 14, ACM, 1984.
[29] P. Ciaccia, M. Patella, P. Zezula, M-tree: an efficient access method for similarity search in metric spaces, in: International Conference on Very Large Data Bases, vol. 23, 1997.