The Journal of China Universities of Posts and Telecommunications October 2012, 19(5): 39–44 www.sciencedirect.com/science/journal/10058885
http://jcupt.xsw.bupt.cn
Indoor localization via ℓ1-graph regularized semi-supervised manifold learning

ZHU Yu-jia1, DENG Zhong-liang1,2, JI Hao3

1. School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
2. Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
3. School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China

Received date: 27-02-2012
Corresponding author: ZHU Yu-jia, E-mail: [email protected]
DOI: 10.1016/S1005-8885(11)60298-7
Abstract  In this paper, a new ℓ1-graph regularized semi-supervised manifold learning (LRSML) method is proposed for indoor localization. Due to noise corruption and the non-linearity of received signal strength (RSS), traditional approaches often fail to deliver accurate positioning results. The ℓ1-graph is constructed by the sparse representation of each sample with respect to the remaining samples. A noise factor is considered in the construction of the ℓ1-graph, which makes it more robust than the traditional k-nearest-neighbor graph (KNN-graph). The KNN-graph construction is supervised, while the ℓ1-graph is unsupervised: it does not use any label information and uncovers the underlying sparse relationship of each data point. By combining the KNN-graph and the ℓ1-graph, both labeled and unlabeled information are utilized, so the LRSML method has the potential to convey more discriminative information than conventional methods. To overcome the non-linearity of RSS, a kernel-based manifold learning method (K-LRSML) is employed by mapping the original signal data to a higher-dimensional Hilbert space. The efficiency and superiority of LRSML over current state-of-the-art methods are verified with extensive experiments on real data.

Keywords  ℓ1-graph, indoor positioning, semi-supervised, manifold learning, wireless local area network (WLAN)

1  Introduction
Recently, WLAN-based localization has gained significant interest, since accurate and low-cost sensor localization systems are needed for many personal and commercial applications [1–2]. There are three major measurement metrics in WLAN positioning: time of arrival, angle of arrival, and RSS. Among these, RSS-based methods have been extensively studied as an inexpensive solution for indoor positioning systems [3–4]. Compared to the other metrics, RSS can be easily obtained by a WLAN-integrated mobile device without any additional hardware modification. The RSS information can be utilized in two different ways for indoor localization [5]. One is the physical radio propagation model.
Because of the complexity of radio propagation in indoor scenarios [6], this model often cannot be described precisely. The other is the fingerprint method. A database of RSS signals labeled with location information is first collected. This is done at the offline stage, and the database is treated as the training set for statistical learning models [6–7]. Then, at the online stage, the model constructed offline is used to estimate the location from a newly received RSS signal. This technique, known as fingerprinting, generally overcomes several limitations of the propagation-based approaches mentioned above, especially in complex scenarios. The key technical challenge in fingerprint-based localization is how to map the RSS signal received from access points (APs) to a spatial position in 2D Cartesian coordinates [8], which can be described as the mapping rule H(·): R^m → R^2. In order to model this mapping, many pattern matching methods have been proposed.
One simple solution is the KNN method [9], which estimates the position from the k reference points with the smallest Euclidean distances to the query in the offline-collected RSS database. Statistical methods estimate the probability of an RSS signal at each potential position, such as the maximum likelihood (ML) algorithm [10]. The kernel method [11] is another solution, which maps the original RSS vector into a kernel feature space for better estimation. Besides localization precision, computational complexity and storage capacity also need to be considered jointly, and researchers have studied many ways to reduce the computing cost, such as spatial filtering [4] and offline clustering [10–12].

However, two challenging problems remain in complex indoor scenarios. One is the AP selection problem: due to the wide deployment of APs, the dimension of the RSS vector is generally much higher than the three spatial coordinates needed for positioning. The other is the inevitable data noise generated by several causes, such as severe multipath, shadowing conditions, non-line-of-sight (NLOS) propagation, or the effect of the user's body shadow in real applications. All methods mentioned above model the mapping using the original RSS data without considering data noise, so a model constructed directly from contaminated data is not accurate.

In this paper, an ℓ1-graph regularized semi-supervised manifold learning method is proposed to map the initial high-dimensional data into a low-dimensional space which uncovers the natural structure of RSS data. Meanwhile, by introducing the ℓ1-graph into our approach, the system becomes more robust to data noise. Our work is inspired in part by the prowess of compressive sensing (CS) [13] and kernel methods [14]. The novelties and contributions of the proposed method are:

1) The ℓ1-graph is used as an extra regularization term in the objective function to reduce the influence of data noise; the original KNN-graph is biased by the ℓ1-graph.

2) Both labeled and unlabeled information are utilized in the semi-supervised manifold learning process, which has the potential to convey more discriminative information than traditional manifold learning methods.

3) The kernel trick is used by mapping the signal space R^m to a Hilbert space H through a nonlinear mapping function Φ: R^m → H. The performance and generalization capability of the indoor positioning system are thereby greatly improved.
The rest of this paper is organized as follows. Sect. 2 presents the overall positioning system and describes the interactions between the location server and the mobile device. Sect. 3 details our semi-supervised manifold learning process: the ℓ1-graph regularization makes the system more robust to data noise, and the kernel trick is used to overcome the non-linearity of RSS data. Extensive experiments showing the robustness and superiority of our approach over other state-of-the-art methods are presented in Sect. 4. Finally, Sect. 5 concludes the paper.
2  System overview
The proposed system consists of mobile users and the location server. Fig. 1 illustrates the overall structure of the proposed system.
Fig. 1  System overview
At the offline stage, the mobile users collect RSS fingerprints from APs. The location server constructs a fingerprint database (radio map) containing RSS signals at predetermined reference locations from all selected APs in the vicinity. The location server uses the LRSML algorithm described in Sect. 3 to obtain the kernel embedding of every training sample and the coefficient vector spanned over these training samples. At the online stage, the localization of a mobile user is achieved in three steps: local collection of the RSS fingerprint, computation of its low-dimensional kernel embedding, and estimation of the position by the k-nearest neighbor (KNN) method with respect to the kernel embeddings of the training data. Details are given in Sect. 3.
The mobile users can thus locate themselves with their own phones. That is to say, users' position privacy is well protected, and they do not need to worry about the packet loss or delay caused by the instability of the wireless network, as they would with a server-based localization system.
3  Proposed method

3.1  ℓ1-graph
Recently, sparse representation, as an effective technique for solving underdetermined systems of linear equations, has attracted much attention. Considerable excitement has been generated by its remarkable successes in practical applications across many research fields, such as signal processing [15], pattern recognition [16] and compressive-sensing-based indoor positioning [17]. In this paper, sparse representation is introduced to solve the fingerprint matching problem.

Define the training sample set as a matrix X = [x_1, x_2, ..., x_n], x_i ∈ R^m, where n is the number of samples and m is the feature dimension (in our system, m is also the number of APs). Let x′ be one sample of the set X and X′ the remaining set of X without x′; then x′ can be reconstructed from X′ using sparse representation. The sparsest solution to x′ = X′ω can be sought by solving the following optimization problem:

\min_{\omega} \|\omega\|_0 \quad \text{s.t. } x' = X'\omega \qquad (1)

where \|\cdot\|_0 denotes the ℓ0-norm, which counts the number of nonzero entries in a vector. This is a well-known NP-hard problem that is difficult even to approximate. However, a recent result [18] reveals that the minimum ℓ1-norm solution of an underdetermined system of linear equations is also the sparsest possible solution under quite general conditions:

\min_{\omega} \|\omega\|_1 \quad \text{s.t. } x' = X'\omega \qquad (2)

In practical scenarios, x′ may be partially corrupted by noise ε, so that x' = X'\omega + \varepsilon = [X' \; I] \begin{bmatrix} \omega \\ \varepsilon \end{bmatrix}, and Eq. (2) should be modified as:

\min_{\beta} \|\beta\|_1 \quad \text{s.t. } x' = B\beta \qquad (3)

where B = [X' \; I] and \beta = \begin{bmatrix} \omega \\ \varepsilon \end{bmatrix}.
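As a rough illustration (not the authors' implementation), the noise-aware sparse code of Eq. (3) can be cast as a linear program and handed to an off-the-shelf solver. The function name `sparse_code` and the explicit non-negativity on ω (anticipating the remark below that W_ij should be non-negative) are our own choices in this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_code(x_prime, X_prime):
    """Sketch of Eq. (3): min ||beta||_1 s.t. x' = [X' I] beta.
    x_prime: (m,) query sample; X_prime: (m, k) remaining training samples.
    The reconstruction weights omega are kept non-negative (assumption, following
    Sect. 3.1); the noise term eps is unconstrained in sign.
    """
    m, k = X_prime.shape
    # LP variables: [omega >= 0, eps_plus >= 0, eps_minus >= 0], eps = eps_plus - eps_minus
    c = np.ones(k + 2 * m)                                  # l1 objective over all variables
    A_eq = np.hstack([X_prime, np.eye(m), -np.eye(m)])      # X' omega + eps = x'
    res = linprog(c, A_eq=A_eq, b_eq=x_prime, bounds=(0, None), method="highs")
    omega = res.x[:k]
    eps = res.x[k:k + m] - res.x[k + m:]
    return omega, eps
```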
An ℓ1-graph G = {X, W} summarizes the relationships among all training samples in terms of sparse representation. The sample set X provides the graph vertices, and W is the sparse weight matrix constructed from Eq. (2) or Eq. (3); the entry W_ij reveals the contribution of the jth data point to the sparse representation of the ith data point: x_i = W_{i1} x_1 + W_{i2} x_2 + … + W_{in} x_n. Non-negativity constraints should be imposed on W_ij in the ℓ1-norm optimization. For each sample, its sparse representation code is obtained by solving Eq. (2) or Eq. (3). The weight matrix W can then be assembled row by row, as sketched below.
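A minimal sketch of this assembly step, assuming the hypothetical `sparse_code` helper from the previous sketch:

```python
import numpy as np

def build_l1_graph(X):
    """Sketch of the l1-graph weight matrix W of Sect. 3.1.
    X: (m, n) matrix whose columns are training samples.
    W[i, j] is the weight of sample j in the sparse code of sample i (diagonal is zero).
    """
    n = X.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]        # all samples except x_i
        omega, _ = sparse_code(X[:, i], X[:, idx])   # Eq. (3), helper from the sketch above
        W[i, idx] = omega
    return W
```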
3.2  Semi-supervised manifold learning with ℓ1-graph regularization

In order to discover the manifold structure, traditional methods focus on building a KNN-graph which preserves the local structure of the feature space [19]. However, the neighborhood of an RSS sample is often contaminated with 'fake' neighbors due to noise, so such a KNN-graph cannot reveal the natural structure of the manifold. To take noise into consideration, a semi-supervised manifold learning method is proposed for indoor localization. Let y = (y_1, y_2, ..., y_n)^T, y_i ∈ R, be an embedding; then our semi-supervised manifold learning method can be formulated as the minimization of the following function:

\Phi = \frac{1}{2}\sum_{i,j} (y_i - y_j)^2 \Omega_{ij} + \gamma \sum_i \Big( y_i - \sum_j W_{ij} y_j \Big)^2 \qquad (4)

where Ω_ij is the similarity weight between data points x_i and x_j based on the distance between the two samples, W_ij is the sparse coefficient mentioned above, and γ is a trade-off parameter. The similarity weight Ω_ij is obtained with the heat kernel function:

\Omega_{ij} = \begin{cases} \exp\!\left(-\dfrac{d(i,j)}{2\sigma^2}\right); & \text{if } x_i \text{ and } x_j \text{ are neighbors} \\ 0; & \text{otherwise} \end{cases} \qquad (5)
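The following Python sketch illustrates one way Eq. (5) could be evaluated. It takes d(i, j) to be the squared Euclidean distance and ignores any label-based neighbor selection; k and σ are illustrative defaults (k = 2 matches the example below), not values from the paper.

```python
import numpy as np

def build_knn_graph(X, k=2, sigma=1.0):
    """Sketch of the heat-kernel similarity weights of Eq. (5).
    X: (m, n) training samples as columns. d(i, j) is taken as the squared
    Euclidean distance (an assumption); supervised neighbor selection is omitted.
    """
    n = X.shape[1]
    d2 = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)   # pairwise squared distances
    Omega = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                       # k nearest, skipping x_i itself
        Omega[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    return np.maximum(Omega, Omega.T)                           # symmetrize the graph
```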
Obviously, the cost function is composed of two parts: a KNN-graph term (see Fig. 2) and an ℓ1-graph term. The KNN-graph is constructed in a supervised manner, while the ℓ1-graph is unsupervised and does not use any label information. Such a semi-supervised manifold learning method has more discriminative power than traditional methods, and the ℓ1-graph serves as a regularization that makes our method more robust to noise. According to Eq. (5), the nearest neighbor set of fingerprint x_1 is {x_2, x_9} when k = 2.

Fig. 2  An example of KNN-graph
Following some simple algebraic steps, the cost in Eq. (4) can be reduced to

\Phi = y^{\mathrm{T}} D y - y^{\mathrm{T}} \Omega y + \gamma y^{\mathrm{T}} (I - W)^{\mathrm{T}} (I - W) y = y^{\mathrm{T}} L y + \gamma y^{\mathrm{T}} M y = y^{\mathrm{T}} (L + \gamma M) y \qquad (6)

where D is a diagonal matrix with D_{ii} = \sum_j \Omega_{ij}, L = D − Ω is the Laplacian matrix, and M = (I − W)^T (I − W). In order to remove the scaling factor, we impose the constraint y^T y = 1. Finally, the optimization problem reduces to

\arg\min_{y^{\mathrm{T}} y = 1} \; y^{\mathrm{T}} (L + \gamma M) y \qquad (7)

The optimal embedding is then given by the minimum-eigenvalue solution of the matrix L + γM.

Suppose the transformation to the manifold embedding is linear (L-LRSML), that is, y^T = p^T X. The cost in Eq. (7) can then be reformulated as

\arg\min_{p^{\mathrm{T}} X X^{\mathrm{T}} p = 1} \; p^{\mathrm{T}} X (L + \gamma M) X^{\mathrm{T}} p \qquad (8)

The transformation vector p is given by the minimum-eigenvalue solution of the following generalized eigenvector problem:

X (L + \gamma M) X^{\mathrm{T}} p = \lambda X X^{\mathrm{T}} p \qquad (9)
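A hedged sketch of how the embeddings of Eq. (7) and the L-LRSML projection of Eq. (9) could be computed with a standard eigensolver. The dimensionality default of 7 follows Sect. 4; γ, the small ridge on X X^T, and the function names are our own assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def regularized_laplacian(Omega, W, gamma):
    """C = L + gamma * M with L = D - Omega and M = (I - W)^T (I - W), as in Eq. (6)."""
    D = np.diag(Omega.sum(axis=1))
    I = np.eye(W.shape[0])
    return (D - Omega) + gamma * (I - W).T @ (I - W)

def lrsml_embedding(Omega, W, gamma=1.0, dim=7):
    """Eq. (7): embedding given by the eigenvectors of L + gamma*M with smallest eigenvalues."""
    C = regularized_laplacian(Omega, W, gamma)
    _, Y = eigh(C, subset_by_index=[0, dim - 1])
    return Y                                         # (n, dim), one row per training sample

def l_lrsml_projection(X, Omega, W, gamma=1.0, dim=7):
    """Eq. (9): generalized eigenproblem X(L + gamma*M)X^T p = lambda X X^T p."""
    C = regularized_laplacian(Omega, W, gamma)
    A = X @ C @ X.T
    B = X @ X.T + 1e-6 * np.eye(X.shape[0])          # small ridge keeps B positive definite
    _, P = eigh(A, B, subset_by_index=[0, dim - 1])
    return P                                         # (m, dim), projection vectors p as columns
```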
However, the RSS signal may not be linearly separable in the initial feature space. The kernel trick (K-LRSML) is therefore used by mapping the Euclidean space R^m to a Hilbert space H through a nonlinear mapping function φ: R^m → H. Let φ(X) denote the data matrix in the Hilbert space, φ(X) = [φ(x_1), φ(x_2), ..., φ(x_n)]. The elements of the kernel matrix K are defined by the inner products of the mapped data, K_{ij} = ⟨φ(x_i), φ(x_j)⟩. The embedding y is defined by y^T = p^T φ(X), and the projection vector p is spanned by the transformed data:

p = \sum_{i=1}^{n} a_i \varphi(x_i) = \varphi(X) a

The embedding y is then represented as

y^{\mathrm{T}} = a^{\mathrm{T}} \varphi(X)^{\mathrm{T}} \varphi(X) = a^{\mathrm{T}} K \qquad (10)

The cost in Eq. (7) can then be reformulated as

\arg\min_{a^{\mathrm{T}} K K^{\mathrm{T}} a = 1} \; a^{\mathrm{T}} K (L + \gamma M) K^{\mathrm{T}} a \qquad (11)

The vector a is obtained by the minimum-eigenvalue solution of the following generalized eigenvector problem:

K (L + \gamma M) K^{\mathrm{T}} a = \lambda K K^{\mathrm{T}} a \qquad (12)
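A minimal kernelized counterpart of the previous sketch. The Gaussian kernel mirrors the choice reported in Sect. 4; σ, γ, the ridge term and the function names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh

def gaussian_kernel(X, Z, sigma=1.0):
    """K_ij = exp(-||x_i - z_j||^2 / (2 sigma^2)); columns of X and Z are samples."""
    d2 = np.sum((X[:, :, None] - Z[:, None, :]) ** 2, axis=0)
    return np.exp(-d2 / (2 * sigma ** 2))

def k_lrsml_coefficients(K, Omega, W, gamma=1.0, dim=7):
    """Eq. (12): K(L + gamma*M)K a = lambda K K a; returns the coefficient vectors a."""
    n = K.shape[0]
    D = np.diag(Omega.sum(axis=1))
    C = (D - Omega) + gamma * (np.eye(n) - W).T @ (np.eye(n) - W)   # L + gamma*M
    A = K @ C @ K.T
    B = K @ K.T + 1e-6 * np.eye(n)                  # regularized for positive definiteness
    _, A_coef = eigh(A, B, subset_by_index=[0, dim - 1])
    return A_coef                                   # (n, dim); training embeddings: K @ A_coef, Eq. (10)
```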
At the online stage, when a mobile user collects a new RSS vector x, the kernel embedding of x can be achieved by

y = \sum_{i=1}^{n} a_i \langle \varphi(x), \varphi(x_i) \rangle = \sum_{i=1}^{n} a_i K(x, x_i) \qquad (13)
Algorithm 1 summarizes the offline and online stages of the K-LRSML method for robust RSS-based localization.

Algorithm 1  K-LRSML method for RSS signal localization

Offline stage:
1) Normalize the collected RSS signals to obtain the training set X = (x_1, x_2, ..., x_n), where x_i is a feature vector and n is the number of samples.
2) Identify the neighborhood of each sample in Euclidean space to obtain the similarity weight matrix Ω.
3) Compute the sparse reconstruction of each sample over all other training samples (Eq. (3)) to obtain the ℓ1-graph weight matrix W.
4) Find the kernel embedding y_i of each training sample and the coefficient vector a (Eq. (12)).

Online stage:
1) Normalize the test RSS signal to obtain the feature vector x.
2) Compute its low-dimensional kernel embedding y using Eq. (13).
3) Estimate the location of the test RSS signal by weighted KNN, comparing the embedding y to each y_i in the database.
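A sketch of the online stage, assuming the hypothetical `gaussian_kernel` helper above. The inverse-distance weighting in the weighted-KNN step and the value k = 3 are our own assumptions; Algorithm 1 does not specify them.

```python
import numpy as np

def localize(x_new, X_train, positions, A_coef, sigma=1.0, k=3):
    """Sketch of the online stage of Algorithm 1.
    x_new: (m,) normalized test RSS vector; X_train: (m, n) normalized training RSS;
    positions: (n, 2) reference coordinates; A_coef: (n, dim) coefficients from Eq. (12).
    """
    # Eq. (13): kernel embedding of the new sample, y_d = sum_i a_i K(x_new, x_i)
    k_vec = gaussian_kernel(X_train, x_new[:, None], sigma).ravel()    # (n,)
    y_new = A_coef.T @ k_vec                                           # (dim,)
    Y_train = gaussian_kernel(X_train, X_train, sigma) @ A_coef        # Eq. (10) embeddings
    # weighted KNN in the embedded space; inverse-distance weights are an assumption
    d = np.linalg.norm(Y_train - y_new, axis=1)
    nbrs = np.argsort(d)[:k]
    w = 1.0 / (d[nbrs] + 1e-9)
    return (w[:, None] * positions[nbrs]).sum(axis=0) / w.sum()
```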
4  Experiments

4.1  Experiment setup
RSS data were recorded on the 6th floor of the Main Building, Beijing University of Posts and Telecommunications, as shown in Fig. 3. The test area is about 30 m × 20 m. The dashed line represents the path of data collection along the corridor, and the stars mark the test rooms. A total of 13 IEEE 802.11b/g APs covered the area. A PDA (HTC HD2 with Windows Mobile 6.0) was used to measure the WLAN signal strength. The RSS data were collected on the device using the open-source library OpenNetCF, which provides access to the MAC addresses and RSS values of WLAN APs. 39 reference locations were selected. For ease of operation, we chose 1.8 m as the grid spacing, which is an integer multiple of the length of the floor tiles. We collected 100 RSS samples per reference location for training at a rate of 1 sample/s (3 900 samples in total). Test samples were collected on different days, both on and off the training points, to capture a variety of environmental conditions. The distance error is calculated as the Euclidean distance between the estimated and the true position.
Fig. 3  The area of the test environment

4.2  Analysis of the experimental results
Fig. 4 shows the effect of the embedding dimensionality of our LRSML methods (both the linear version L-LRSML and the kernel version K-LRSML) on the localization performance. The root mean square error (RMSE) decreases dramatically as the dimensionality increases at first; the lowest RMSE is achieved when the dimensionality is 7, and the RMSE no longer decreases after that. The performance of K-LRSML is better than that of L-LRSML because L-LRSML fails to explore the nonlinear relationships among the RSS data. The dimensionality of the embedding is therefore set to 7 in our experiments; the remaining dimensions are discarded and regarded as noise.

Fig. 4  Effect of the dimensionality of the embedding on the RMSE
To verify the superiority of our LRSML methods, we conducted experiments comparing L-LRSML and K-LRSML with several state-of-the-art methods: KNN [9], support vector machine (SVM) [6], and kernel direct discriminant analysis (KDDA) [11]. For fairness, the same Gaussian kernel is used in K-LRSML, SVM and KDDA. Fig. 5 presents the cumulative accuracy of the different methods within specified error distances. Our methods clearly obtain the best positioning accuracy. At an error distance of 1.8 m, the cumulative accuracies of K-LRSML, L-LRSML, SVM, KDDA and KNN are 67%, 66.5%, 63.4%, 60.8% and 58.5%, respectively; at an error distance of 3.6 m, the cumulative accuracies are 87.5%, 81.4%, 81.1%, 79.3% and 78.3%, respectively. K-LRSML and L-LRSML thus perform much better than the other algorithms. Table 1 shows the RMSE of the different approaches. Our LRSML methods (L-LRSML and K-LRSML) achieve a marked performance improvement due to the ℓ1-graph regularization. K-LRSML works best among all methods, SVM has performance comparable to KDDA, and KNN performs the worst.
Fig. 5  Location accuracy comparison of various algorithms
Table 1  Performance comparison

Algorithm    RMSE/m    Accuracy within 1.8 m/%    Accuracy within 3.6 m/%
WKNN         3.38      58.5                       78.3
SVM          3.02      63.4                       81.1
KDDA         3.24      60.8                       79.3
L-LRSML      2.77      66.5                       81.4
K-LRSML      2.40      67.0                       87.5
Fig. 6 and Fig. 7 show the cumulative accuracy of all methods within error distances of 1.8 m and 3.6 m for different numbers of training samples. Clearly, L-LRSML and K-LRSML work much better than the other methods.
Fig. 6  Error distance 1.8 m

Fig. 7  Error distance 3.6 m
As can be seen, K-LRSML and L-LRSML perform the best among all methods. Thus, K-LRSML and L-LRSML have the potential to convey more discriminative information and are more robust to data noise than the other methods.
5  Conclusions
In this paper, we propose the LRSML method for the indoor localization problem. A new ℓ1-graph is constructed to uncover the underlying sparse reconstruction relationships among data samples. By combining the ℓ1-graph with the traditional KNN-graph, both the localization accuracy and the generalization capability are greatly improved. Furthermore, a kernel embedding is used to handle the non-linearity of RSS signals and further improves the performance of the system. K-LRSML and L-LRSML are more robust to data noise and have the potential to convey more discriminative information than conventional methods. Experimental results also show the superiority of our method over several state-of-the-art methods for the indoor localization problem.

Acknowledgements

This work was supported by the Hi-Tech Research and Development Program of China (2009AA12Z324).
References

1. Akyildiz I F, Su W, Sankarasubramaniam Y, et al. A survey on sensor networks. IEEE Communication Magazine, 2002, 40(8): 102−114
2. Patwari N, Ash J N, Kyperountas S. Locating the nodes: cooperative localization in wireless sensor networks. Signal Processing Magazine, 2005, 22(4): 54−69
3. Sun G, Chen J, Guo W, et al. Signal processing techniques in network-aided positioning: a survey of state-of-the-art positioning designs. Signal Processing Magazine, 2005, 22(4): 12−23
4. Kushki A, Plataniotis N, Venetsanopoulos A N. Kernel based positioning in wireless local area networks. IEEE Transactions on Mobile Computing, 2007, 6(6): 689−705
5. Xia Y, Wang L, Liu Z. Hybrid indoor positioning method based on WLAN RSS analysis. Journal of Chongqing University of Posts and Telecommunications: Natural Science, 2012, 24(2): 217−221 (in Chinese)
6. Brunato M, Battiti R. Statistical learning theory for location fingerprinting in wireless LANs. Computer Networks, 2005, 47(6): 825−845
7. Kaemarungsi K, Krishnamurthy P. Modeling of indoor positioning systems based on location fingerprinting. Proceedings of the 23rd Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM'04): Vol 2, Mar 7−11, 2004, Hong Kong, China. Piscataway, NJ, USA: IEEE, 2004: 1012−1022
8. Sayed A H, Tarighat A, Khajehnouri N. Network-based wireless location: challenges faced in developing techniques for accurate wireless location information. Signal Processing Magazine, 2005, 22(4): 24−40
9. Rizos C, Dempster A G, Li B H, et al. Indoor positioning techniques based on wireless LAN. Proceedings of the AusWireless Conference (AusWireless'06), Mar 13−16, 2006, Sydney, Australia. 2006: 13−16
10. Youssef M, Agrawala A, Shankar A. WLAN location determination via clustering and probability distributions. Proceedings of the 1st IEEE Annual Conference on Pervasive Computing and Communication Workshops (PERCOM'03), Mar 23−26, 2003, Fort Worth, TX, USA. Piscataway, NJ, USA: IEEE, 2003: 143−150
11. Xu Y, Deng Z, Meng W. An indoor positioning algorithm with kernel direct discriminant analysis. Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM'10), Dec 6−10, 2010, Miami, FL, USA. Piscataway, NJ, USA: IEEE, 2010: 5p
12. Feng C, Au W S, Valaee S, et al. Received signal strength based indoor positioning using compressive sensing. IEEE Transactions on Mobile Computing, 2011
13. Candès E J, Wakin M B. An introduction to compressive sampling. Signal Processing Magazine, 2008, 25(2): 21−30
14. Shawe-Taylor J, Cristianini N. Kernel methods for pattern analysis. New York, NY, USA: Cambridge University Press, 2004
15. Bruckstein A M, Donoho D L, Elad M. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 2009, 51(1): 34−81
16. Cheng B, Yang J, Yan S, et al. Learning with l1-graph for image analysis. IEEE Transactions on Image Processing, 2010, 19(4): 858−866
17. Feng C, Au W S A, Valaee S, et al. Compressive sensing based positioning using RSS of WLAN access points. Proceedings of the 29th Annual Joint Conference of the IEEE Computer and Communications (INFOCOM'10), Mar 14−19, 2010, San Diego, CA, USA. Piscataway, NJ, USA: IEEE, 9p
18. Donoho D L. For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Communications on Pure and Applied Mathematics, 2006, 59(6): 797−829
19. He X F, Niyogi P. Locality preserving projections. Advances in Neural Information Processing Systems: Proceedings of the 17th Annual Conference on Neural Information Processing Systems (NIPS'03), Dec 8−13, 2003, Vancouver, Canada. Cambridge, MA, USA: The MIT Press, 2004
(Editor: ZHANG Ying)