A learning based joint compressive sensing for wireless sensing networks


Journal Pre-proof

A learning based joint compressive sensing for wireless sensing networks
Ping Zhang, Jianxin Wang, Wenjun Li

PII: S1389-1286(19)31021-7
DOI: https://doi.org/10.1016/j.comnet.2019.107030
Reference: COMPNW 107030

To appear in: Computer Networks

Received date: 11 August 2019
Revised date: 20 October 2019
Accepted date: 25 November 2019

Please cite this article as: Ping Zhang, Jianxin Wang, Wenjun Li, A learning based joint compressive sensing for wireless sensing networks, Computer Networks (2019), doi: https://doi.org/10.1016/j.comnet.2019.107030

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2019 Published by Elsevier B.V.

JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015

1

A learning based joint compressive sensing for wireless sensing networks

Ping Zhang, Jianxin Wang, Wenjun Li

Abstract—In wireless sensor networks (WSNs), due to the complex deployment environment, the sparse expression capability of the same sparse transformation basis may vary greatly across time or across applications. These dynamic characteristics further affect the recovery performance of compressive sensing in WSNs. A traditional predefined sparse transformation basis cannot satisfy the requirements of such dynamic change. The traditional dictionary learning technique, which trains a sparse transformation basis from historical data, also has problems. First, in WSN applications, the acquisition of a large amount of historical data is costly, or even impossible. Second, a sparse transformation basis learned from specific historical data is in fact a static transformation basis, which still suffers from poor dynamic adaptability. In this paper, we present a sparse expression model as well as a training method that can learn the sparse transformation basis from compressive sensing measurement results rather than from the original historical data. These training data can be easily obtained from compressive sensing based schemes in WSNs, and thus the sparse transformation basis can be updated in time, which enhances dynamic adaptability. We also present a joint recovery scheme to explore the spatio-temporal relationship among multiple sources and further improve the compressive sensing recovery performance. Evaluation results based on real data demonstrate that the proposed scheme achieves performance superior to the most closely related work.

Index Terms—Dictionary learning, Distributed compressive sensing, Wireless sensor network.

Corresponding author: Jianxin Wang.
Ping Zhang ([email protected]) is with the College of Computer and Information Engineering, Hunan University of Technology and Business, China. Jianxin Wang ([email protected]) is with the School of Computer Science and Engineering, Central South University, China. Wenjun Li is with the School of Computer and Communication Engineering, Changsha University of Science and Technology, China.


I. INTRODUCTION

Compressive sensing, which integrates signal acquisition with data compression, has been extensively applied in wireless sensor networks (WSNs). In WSNs, sensor nodes are usually powered by batteries, so the energy supply is limited. At the same time, the signal is usually transmitted through a wireless channel, so the communication cost is heavy. Therefore, energy efficiency is a major research topic in WSNs. This requirement can be well satisfied by compressive sensing technology: the communication cost and the computation cost are reduced by the low rank measurement and the linear random measurement of compressive sensing, respectively. By exploiting the sparse structure information contained in the signals, compressive sensing surpasses the boundary of the traditional Nyquist-Shannon theory, and thus greatly reduces the data transmission cost.

The performance of the sparse transformation basis is one of the critical factors affecting the performance of compressive sensing. According to compressive sensing theory, a signal processed by compressive sensing needs to be sparse or compressible in a specific transform domain. The sparse transformation domain is described by the corresponding sparse transformation basis. Similar concepts appear in the literature, such as sparse representation basis and sparse expression basis. There are numerous predefined sparse transformation bases, such as the FFT (Fast Fourier Transform) and the DCT (Discrete Cosine Transform). However, the sparse domain is dynamic in most WSNs. Predefined sparse transformation bases lack dynamic adaptability, and are thus not suitable for most WSN applications. By introducing appropriate machine learning technology, the adaptability of the sparse transformation basis can be improved. Dictionary learning is a representative learning technology for obtaining the sparse transformation basis [1], [2].

Dictionary learning has achieved great success in image related fields [3], [4]. Many researchers have also introduced it into the network field [5], [6]. Existing sparse representation mechanisms usually learn the sparse transformation basis from raw historical data. This requires preparing a large number of raw historical data samples in advance. At the same time, to ensure that the learned basis retains its sparse transformation ability on subsequently generated data, the sparse transformation domain of the application scenario must be stable. For image related applications, training samples are easy to obtain, and the sparse domain is quite stable, so a learned basis can easily be migrated from one application scenario to another.


However, wireless sensor network applications have their own characteristics, which impose some limitations on dictionary learning technology. Firstly, it is difficult to obtain a large amount of raw historical data. Sensor network deployment environments are usually complex, and the cost of obtaining large amounts of raw data is high. In some application scenarios, it may even be impossible to obtain a large amount of raw data in advance. Secondly, a sparse transformation basis trained from historical data is still a static transformation basis, which may not accurately address the needs of complex and changeable space-time relations in real scenes. Wireless sensor network application scenarios usually exhibit time-space variability. For example, affected by terrain, vegetation and other factors, sensor networks with the same structure deployed in different scenarios may exhibit different relationships between nodes. Therefore, the data gathered in one scenario may have little reference value for another scenario. In addition, the most suitable sparse transformation basis for the same region may also change over time. This dynamic variability makes the performance of the same sparse transformation basis inconsistent at different times or in different scenarios.

In order to address these challenges, we learn the sparse transformation basis directly from the compressive sensing measurement results. Compared to raw historical data, these measurement results can be easily obtained in any compressive sensing based scheme. Due to the low rank measurement of compressive sensing, their acquisition cost is also much lower than that of raw historical data. However, the direct combination of dictionary learning and compressive sensing is not a viable model: such a naive learning model is difficult to solve.

In this paper, we design a new sparse expression model where prior knowledge is introduced as a constraint to reduce the difficulty of solving the optimization problem. We also present a specific training method to learn the sparse transformation basis from this model. Besides these, we introduce distributed compressive sensing technology and construct a joint compressive sensing recovery model to explore the spatial-temporal correlation between multiple sources. As mentioned above, a sparse transformation basis obtained from traditional dictionary learning technology is still a static transformation basis, which may still face the low dynamic adaptability problem. In the proposed scheme, the sparse transformation basis is directly trained from the compressive sensing measurement results. It can be updated in time, and thus it has more dynamic adaptability.


The major contributions of this paper are summarized as follows:

• We analyze the challenge of traditional dictionary learning technology in WSNs, and train the sparse transformation basis directly from the compressive sensing measurement results, which avoids the difficulty of acquiring raw historical data.

• We analyze the challenge of the naive sparse expression model that directly combines dictionary learning with compressive sensing, and design a new sparse expression model to solve this challenge.

• We present a specific training method to learn the sparse transformation basis from this model. We also propose a joint compressive sensing recovery method to mine the temporal-spatial correlation among multiple sources.

• As the sparse transformation basis is directly trained from the compressive sensing measurement results, it can be updated in time, and thus it has more dynamic adaptability than the traditional dictionary learning technology.

The remaining parts of this paper are organized as follows: Section II briefly reviews the related work. Section III introduces the background knowledge. Section IV formulates a naive version of the expression model and analyzes its challenge. Section V introduces a revised expression model. Section VI presents the training method for this model. Section VII presents the joint compressive sensing recovery process for multi-source signals. Sections VIII and IX present the theoretical analysis and the experimental evaluation. Section X concludes the paper.

II. RELATED WORK

Compressive sensing, which allows for efficient signal acquisition and reconstruction, has attracted considerable interest from researchers in the field of wireless sensor networks. Luo et al. [7] presented the first compressive sensing based data gathering (CDG) scheme for large scale wireless sensor networks, which can reduce the communication overhead and achieve load balance. Zhang et al. [8] proposed a data collection scheme based on compressive sensing and random walk. Chen et al. [9] combined compressive sensing with network coding. Liu et al. [10] proposed data persistence schemes based on compressive sensing. Compressive sensing was adopted in the fog-based industrial IoT in [11], and was explained as a symmetric and lightweight cryptosystem in [12].

Both distributed compressive sensing and Kronecker compressive sensing are representative frameworks for processing multi-source sparse and compressible signals. The distributed


compressive sensing first appeared in [13], [14]. It consists of a concept named joint sparsity for signal sets, a model for joint sparse signal representation, and a joint signal recovery scheme. Kronecker compressive sensing was proposed in [15]; it introduces the tensor product into compressive sensing for multi-source signals. Caione et al. [16] presented an experimental comparative study of distributed compressive sensing and Kronecker compressive sensing in WSNs, and pointed out that both of them can effectively enhance the recovery accuracy of compressive sensing, while the performance of distributed compressive sensing is much better.

WSNs are always deployed in complex environments, which results in dynamic characteristics. On the one hand, due to the failure of sensor nodes or wireless channels, WSNs may have a dynamic network topology [17]. On the other hand, the sparse domain of the signal may also have dynamic characteristics, which makes the sparse expression ability of a traditional predefined basis change with time and space [18].

Dictionary learning is a classical sparse expression basis learning technology. It trains a dictionary from a set of historical signals, and decomposes the signals using a few atoms. Both MOD [19] and K-SVD [20] are well-known dictionary learning algorithms which can achieve compact representations of signals. Duarte et al. [21] presented a framework for the joint design and optimization, from a set of training images, of an overcomplete non-parametric dictionary and the sensing matrix. Christian et al. [22] proposed a dictionary learning method with effective control over the two conflicting goals of high signal coherence and low self-coherence. Studer et al. [23] presented two dictionary learning algorithms to learn dictionaries from sparsely corrupted or compressed measurements. Sadeghi et al. [24] proposed an atom-by-atom dictionary learning algorithm, which is much faster than K-SVD while its results are better: instead of alternating between the two stages of the traditional dictionary learning scheme, it performs only the second stage and updates the atoms sequentially. Anaraki and Hughes [25] proposed the compressive K-SVD, which has been further extended in [26].

Dictionary learning has also aroused the interest of researchers in the network field. Chen et al. [27] introduced a dictionary learning framework to learn the dictionary in a distributed compressive sensing application. Zhu et al. [28] proposed a nonparametric Bayesian dictionary learning based interpolation method for WSN missing data [29]. Gupta et al. [30] presented a sparse Bayesian learning-based adaptive sensor selection framework. Kumar et al. [31] presented a dictionary learning framework for fingerprinting indoor locations [32]. Li et al. [33] developed a


sparse representation anomaly node detection method for sensor networks based on dictionary learning.

There is a challenge in applying dictionary learning to WSNs: dictionary learning requires a large amount of original historical data, which is not easily available in WSNs. If the sparse representation basis needs to be updated frequently, the cost of acquiring training data becomes even higher. In order to address this challenge, the proposed scheme directly trains the sparse expression basis from the compressive sensing measurement results, which can be easily obtained.

III. BACKGROUND

In this section, we introduce the necessary background knowledge.

A. Compressive sensing

Compressive sensing is a novel technology that allows for efficient signal acquisition and reconstruction. It can surpass the bounds of the classical Shannon-Nyquist theory by finding solutions for under-determined linear systems. It mainly consists of two processes: compressive sensing measurement and compressive sensing recovery.

The compressive sensing measurement process can be expressed as Y = ΦX, where Φ is a low rank random measurement matrix of size m × n, Y is the compressive sensing measurement result of size m × l, and X is the original signal of size n × l. Compressive sensing theory requires signals to be sparse or compressible. The compressive sensing recovery process can be expressed as

    min_S ||Y − ΦΨS||_2, s.t. ||S||_0 < k,

where S is the sparse representation coefficient, Ψ is the sparse transformation basis, and k is the sparsity. Ψ can be either a predefined or a non-predefined basis. Widely used predefined sparse transform bases include the DCT, the wavelet transform, and so on. A non-predefined sparse transform basis can be obtained from historical data by using machine learning.

B. Dictionary learning

In practice, different application scenarios may have different requirements on the sparse representation ability of the transform basis. It is impossible for a traditional predefined sparse transform basis to satisfy the sparse representation requirements of all applications.
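As a concrete illustration of the measurement and recovery processes described above, the following sketch measures a sparse signal with a random Φ and recovers it with orthogonal matching pursuit (OMP). This is a minimal, illustrative example under our own assumptions (a Gaussian Φ, an identity Ψ, and a hand-rolled OMP solver), not the paper's implementation.

```python
import numpy as np

def omp(A, y, k):
    """Greedy solver for min ||y - A s||_2 subject to ||s||_0 <= k."""
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # low rank random measurement matrix (m << n)
Psi = np.eye(n)                                  # toy basis: the signal is sparse as-is

S = np.zeros(n)
S[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
X = Psi @ S
Y = Phi @ X                                      # measurement: only m values are transmitted

S_hat = omp(Phi @ Psi, Y, k)
X_hat = Psi @ S_hat
print(np.linalg.norm(X - X_hat))                 # near zero: the sparse signal is recovered
```

With m = 32 measurements instead of n = 64 samples, the signal is still reconstructed because it is k-sparse in Ψ.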


In order to obtain better sparse representation ability, dictionary learning technology was proposed. Dictionary learning, which explores the sparse representation structure of historical signals by using machine learning, can achieve better sparse representation performance. For T training signals X = [x_1, x_2, ..., x_T], the corresponding dictionary learning problem can be formally expressed as the following optimization problem:

    min_{D,Z} ||DZ − X||_F^2, s.t. ||Z_i||_0 ≤ T_0, ∀i    (1)

The input data of this traditional model is the raw historical data X. The other two variables are target variables: D is the dictionary to be learned, and Z is the sparse representation result.

Classical dictionary learning schemes include MOD [19] and K-SVD [20]. Both of them are iterative algorithms. Each iteration consists of two stages, i.e., a sparse coding stage and a codebook update stage, in which the following two sub-optimization problems are solved respectively:

    min_Z ||DZ − X||_F^2, s.t. ||Z_i||_0 ≤ T_0, ∀i    (2)

    min_D ||DZ − X||_F^2    (3)

In the first sub-optimization problem, D is fixed and Z is solved for; in the second, Z is fixed and D is solved for. The main difference between MOD and K-SVD lies in how the second sub-optimization problem is solved.

C. Distributed compressive sensing

Distributed compressive sensing (DCS) [34] improves the recovery performance of compressive sensing by exploring the correlation structure of intra- and inter-signals. In the DCS application scenario, on the one hand, there is a certain correlation between the signals; on the other hand, the signals of different sources are sparsely expressed on some sparse transform basis independently. The DCS theory is based on the concept of joint sparsity, that is, the sparsity of the whole signal set. Joint sparsity is usually less than the sum of the individual signal sparsities, which reduces the total number of compressive sensing measurements.

In the DCS model, all signals can be decomposed into two components: one is the common sparse component, and the other is the innovation component. That is, the sparse representation


coefficient θ can be decomposed into two parts, i.e., θ_c and θ_i, where θ_c is the common sparsity coefficient and θ_i is the innovation sparsity coefficient. For signal i, the compressive sensing process can be expressed as y_i = Φ_i x_i = Φ_i Ψ(θ_c + θ_i) + e_i, where e_i is the reconstruction error of compressive sensing. The DCS theory proposed three different joint sparse models (JSM) for different specific application scenarios. Taking JSM-1 as an example (assuming T = 2), the matrix structures involved in the joint recovery process are as follows: x̂ = [x_1, x_2]^T, ŷ = [y_1, y_2]^T, θ̂ = [θ_c, θ_1, θ_2]^T, Φ̂ = [Φ, 0; 0, Φ], Ψ̂ = [Ψ, Ψ, 0; Ψ, 0, Ψ].

IV. NAIVE SPARSE EXPRESSION MODEL AND ITS CHALLENGE

As mentioned earlier, there are two challenges for traditional dictionary learning technology in WSNs. Firstly, traditional dictionary learning takes the raw historical data as its input. However, raw historical data collection in WSNs is difficult due to the complex deployment environment of sensor nodes and the high cost of wireless communication. Secondly, the sparse transformation basis learned from specific original historical data is still a static expression basis for specific historical scenes, and thus cannot meet the need for dynamic changes of the sparse transformation basis.

As a result, we decide to learn the sparse transformation basis Ψ directly from the historical compressive sensing measurement results Y. The collection of compressive sensing measurement results is much easier: they are the data collected by the compressive sensing scheme itself. Due to the low rank measurement of compressive sensing, the communication cost for these measurement results is also very low.

A. The naive sparse expression model

In order to achieve this purpose, a naive solution is to integrate the compressive sensing process with the dictionary learning process.
More specifically, the second term X of the above equation (i.e., formula (1)) is replaced by the compressive sensing measurement result Y, where Y = ΦX. In order to keep the formula balanced, the first term is left-multiplied by Φ. By integrating the compressive sensing process with the dictionary learning process, the optimization problem of sparse transformation basis learning can be formally expressed as


    min_{Ψ,Z} ||ΦΨZ − Y||_F^2, s.t. ||Z_i||_0 ≤ k    (4)

Y is the compressive sensing measurement result, an m × l matrix, and Φ is the measurement matrix of size m × n. Both Y and Φ are known quantities: Φ is a predefined matrix, and Y is gathered by the sink and regarded as the training dataset in this scheme. Ψ and Z are the two target variables: Ψ is the dictionary to be learned, and Z is the sparse representation coefficient.

B. Challenge

However, such a naive sparse expression model is not a well-designed model. One of the most important challenges is that the low rank measurement process of compressive sensing greatly increases the difficulty of solving for the sparse transformation basis Ψ.

On the one hand, this optimization problem is different from the corresponding optimization problem of the traditional compressive sensing recovery process, both in form and in substance. There are two target variables in this formula, i.e., Ψ and Z: the former is the dictionary to be learned, and the latter is the sparse representation coefficient on the dictionary. In contrast, there is only one target variable in the traditional compressive sensing recovery process. Therefore, it cannot be solved by traditional compressive sensing recovery technology. On the other hand, this optimization problem is also different from the traditional dictionary learning problem. There is a low rank measurement process of compressive sensing in the first term of the equation: Φ is an under-determined matrix, corresponding to an under-determined equation set. Theoretically, a unique Ψ cannot be solved from ΦΨ. As a result, it is not easy to solve this learning problem.

V. THE PROPOSED SPARSE EXPRESSION MODEL

A. Design principle

In order to solve the challenge mentioned above, we present a new sparse expression model. The design principle is as follows. As the predefined sparse transformation basis has certain sparse representation ability, we adopt this prior knowledge as a constraint term to simplify the dictionary learning process. More specifically, we construct our dictionary model based on the predefined sparse transformation basis.
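The non-uniqueness of Ψ under the naive model can be seen directly: adding to Ψ any matrix whose columns lie in the null space of Φ leaves ΦΨ unchanged, so the measurements alone cannot distinguish the two dictionaries. A small numpy demonstration (our own construction, for intuition only):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 16
Phi = rng.standard_normal((m, n))   # under-determined: rank(Phi) <= m < n

# Orthonormal basis N of the null space of Phi via the SVD: Phi @ N == 0.
_, _, Vt = np.linalg.svd(Phi)
N = Vt[m:].T                        # shape (n, n - m)

Psi1 = rng.standard_normal((n, n))
Psi2 = Psi1 + N @ rng.standard_normal((n - m, n))  # perturb Psi1 inside null(Phi)

print(np.allclose(Phi @ Psi1, Phi @ Psi2))  # True: identical after projection by Phi
print(np.allclose(Psi1, Psi2))              # False: yet the two dictionaries differ
```

Two different dictionaries produce exactly the same projected dictionary ΦΨ, which is why the naive learning problem has no unique solution without additional constraints.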


In the proposed scheme, the dictionary model consists of two parts. One is the predefined sparse transformation basis Ψ itself, while the other is the sparse transformation coefficient Θ of the dictionary on the predefined sparse transformation basis. The former represents the prior knowledge of the predefined sparse transformation basis, and the latter is used to characterize the individual differences of signal sparsity in different application scenarios.

B. The proposed sparse expression model

Although it is not good enough, a predefined sparse transformation basis (e.g., DCT) still has a certain sparse representation ability for most physical signals. As a result, we can use the predefined sparse transformation basis as the basic dictionary of our sparse expression model. The final sparse expression model to be learned consists of two parts: one is the predefined sparse transformation basis itself (i.e., Ψ), and the other is the sparse expression coefficient on Ψ (i.e., the sparse structure information matrix Θ). The structured sparse expression model can be constructed by projecting the dictionary D on the predefined sparse transformation basis. More specifically, the structured sparse representation model can be formally defined as

    min_Θ ||D − ΨΘ||_F^2, s.t. ||Θ_i||_0 ≤ p    (5)

||A||_F is the Frobenius norm, defined as ||A||_F = (Σ_{i=1}^{m} Σ_{j=1}^{n} |a_ij|^2)^{1/2}. ||a||_0 is the L0 norm, which refers to the number of non-zero elements in a vector.

The physical meaning behind the structured sparse representation model is as follows. The main features of the dictionary to be trained are described by the predefined sparse transformation basis Ψ, and the difference between the dictionary to be learned (i.e., D) and Ψ is described by the sparse structure information matrix Θ. Because the predefined sparse transformation basis Ψ itself has a good sparse representation ability for most physical signals, the difference between Ψ and D is small. This small difference can be described by the sparse structure information matrix Θ.

Based on this structured sparse representation model, the traditional compressive sensing recovery process can be re-expressed as

    min_Z ||Y − ΦDZ||_F^2, s.t. ||Z_i||_0 ≤ k    (6)


Y = ΦX is the compressive measurement result of the historical data X. By combining these two optimization problems, we obtain the proposed sparse expression model:

    min_{Θ,Z} ||ΦΨΘZ − Y||_F^2, s.t. ||Z_i||_0 ≤ k, ||Θ_i||_0 ≤ p    (7)

There are two target variables in the above optimization problem, i.e., Θ and Z, both subject to an L0 constraint. Ψ is the predefined sparse transformation basis. Y is the compressive sensing measurement result, which can be easily obtained in compressive sensing based schemes.

VI. THE TRAINING PROCESS

In this section, we introduce how to solve the above optimization problem (i.e., formula (7)). By solving this problem, we achieve the goal of training a sparse transformation basis from compressive sensing measurement results instead of from the original historical data.

This problem can be solved by alternating iterative optimization. More specifically, in the first step, the target variable Θ is regarded as a hidden variable whose solution is postponed to the next stage, and the target variable Z is solved from the compressive measurement result Y by using the sparsity constraint on Z. In the second step, using the results of the first step and the constraint on Θ, we obtain a solution for Θ. These two steps are iterated until convergence, and a warm restart mechanism is adopted to accelerate the convergence. A more detailed description of the training process is as follows.

Step 1: solving for the intermediate result. In this step, the target variable Θ is treated as a hidden variable. ΦΨΘ is treated as a whole and defined as an intermediate variable H (i.e., H = ΦΨΘ). Both the intermediate result H and the target variable Z are solved from the compressive measurement result Y, using the sparsity constraint on Z. This can be formally expressed as the following optimization problem:

    min_{H,Z} ||HZ − Y||_F^2, s.t. ||Z_i||_0 ≤ k    (8)


In this problem, Y is directly transformed from the historical data X through a compressive sensing measurement process. This optimization problem has two target variables, the intermediate variable H and the target variable Z, where Z satisfies an L0-norm constraint. This is a typical optimization problem in machine learning, convex in H and in Z separately. It can be solved by the alternate optimization method: fix one of H and Z to optimize the other, and repeat until the convergence condition is satisfied. More specifically, it can be converted into the following two problems:

    min_Z ||HZ − Y||_F^2, s.t. ||Z_i||_0 ≤ k    (9)

    min_H ||HZ − Y||_F^2    (10)

The first one is a LASSO (Least Absolute Shrinkage and Selection Operator) regression problem, which can be solved by matching pursuit based methods, e.g., MPL (Matching Pursuit LASSO) [35]. The second one is a classical least squares regression problem. The final value of the intermediate variable H is obtained by alternately optimizing these two problems until convergence.

Step 2: solving for Θ. The target variable Θ is solved from the intermediate result H by using the sparsity constraint on it. The main task of this step can be formally expressed as the following optimization problem:

    min_Θ ||ΦΨΘ − H||_F^2, s.t. ||Θ_j||_0 ≤ p    (11)

In this problem, the target variable Θ satisfies the L0 constraint, while both H and ΦΨ are given. This is a typical L0-norm problem, which can be converted into an L1-norm problem and solved by the orthogonal matching pursuit method.

Step 3: warm restart, repeat until convergence. The first and second steps are repeated. The Θ obtained in the second step is used as a warm restart parameter for the first step in the next round, until final convergence.

VII. THE JOINT RECOVERY PROCESS

There are usually multiple nodes in distributed application scenarios, such as wireless sensor networks and crowd sensing. Data generated by nodes at close geographical locations often have a certain correlation. In theory, the recovery performance of compressive sensing can be improved by exploring this temporal and spatial correlation.
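Before turning to the joint recovery details, the three training steps of Section VI can be sketched end to end as follows. This is an illustrative sketch under our own simplifying assumptions: OMP stands in for MPL in the Z-subproblem, Ψ is taken as the identity, H is randomly initialized in the first round, and fixed iteration counts replace convergence tests.

```python
import numpy as np

def omp(A, y, k):
    """Greedy L0-constrained solver, used here for both the Z- and Theta-subproblems."""
    support, residual, coef = [], y.copy(), np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    z = np.zeros(A.shape[1])
    z[support] = coef
    return z

def train_basis(Y, Phi, Psi, k, p, outer=5, inner=5, seed=0):
    """Learn Theta from measurements Y alone (Steps 1-3 of the training process)."""
    rng = np.random.default_rng(seed)
    m, q = Phi.shape[0], Psi.shape[1]
    H = rng.standard_normal((m, q))     # cold start; later rounds warm-restart from Phi Psi Theta
    for _ in range(outer):
        # Step 1: alternate problems (9) and (10) -- sparse-code Z, then least-squares H.
        for _ in range(inner):
            Z = np.column_stack([omp(H, Y[:, i], k) for i in range(Y.shape[1])])
            Ht, *_ = np.linalg.lstsq(Z.T, Y.T, rcond=None)
            H = Ht.T
        # Step 2: problem (11) -- sparse-code each column of H on Phi Psi to obtain Theta.
        Theta = np.column_stack([omp(Phi @ Psi, H[:, j], p) for j in range(H.shape[1])])
        # Step 3: warm restart for the next round.
        H = Phi @ Psi @ Theta
    return Theta

# Toy run: the sink only ever sees Y = Phi X, never the raw historical data X.
rng = np.random.default_rng(1)
n, m, l = 32, 16, 100
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Psi = np.eye(n)
X = np.where(rng.random((n, l)) < 0.1, rng.standard_normal((n, l)), 0.0)
Y = Phi @ X
Theta = train_basis(Y, Phi, Psi, k=4, p=3)
print(Theta.shape)  # (32, 32): each column is a p-sparse atom expressed on Psi
```

Note that the training loop never touches X: only the m × l measurement matrix Y crosses the network, which is the point of the scheme.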


There are two representative compressive sensing frameworks for processing multi-source sparse and compressible signals, i.e., DCS [13], [34] and KCS (Kronecker compressive sensing) [15]. DCS achieves efficient distributed coding by exploiting the intra-signal and inter-signal correlation structure. In DCS theory, the authors introduced a concept named joint sparsity for signal sets, as well as a model for joint sparse signal representation; a joint signal recovery scheme is also proposed. By introducing the tensor product, KCS provides a tool for generating a joint sparse transformation basis, as well as a joint measurement matrix, for multi-source signal compressive sensing applications. The experimental results show that it achieves good performance in applications such as 3D hyperspectral images and video sequences. Caione et al. [16] presented a comparative experimental study of DCS and KCS in the WSN application scenario. The authors pointed out that both DCS and KCS can effectively utilize the spatio-temporal correlation between nodes' data to enhance the recovery accuracy of compressive sensing, and that the performance of DCS is much better.

In order to improve the recovery performance by exploiting the spatio-temporal correlation among different signal sources, we integrate DCS into the proposed scheme and present a multi-source joint compressive sensing recovery process, which can be formally expressed as the following optimization problem:

$$\min_{\hat{Z}} \|\hat{\Phi}\hat{\Psi}\hat{\Theta}\hat{Z} - \hat{Y}\|_F^2, \quad \text{s.t. } \|\alpha_c\|_0 \le k_1,\ \|\alpha_i\|_0 \le k_2 \qquad (12)$$
In this optimization problem, Ŷ is the received compressive sensing measurement result. Φ̂ is known and is shared by the sink and the other nodes. Ψ̂ and Θ̂ are obtained in the training process. Ẑ is the sparse representation coefficient.

Φ̂ is the joint measurement matrix for distributed compressive sensing. Ψ̂ is the joint sparse transformation basis for the joint sparse model. Θ̂ is the joint sparse structure information matrix. These symbols are defined as follows:

$$\hat{\Phi} = \begin{bmatrix} \Phi_1 & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \Phi_L \end{bmatrix}; \quad \hat{\Psi} = \begin{bmatrix} \Psi & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \Psi \end{bmatrix}; \quad \hat{\Theta} = \begin{bmatrix} I & \Theta & \cdots & 0 \\ \vdots & & \ddots & \\ I & 0 & \cdots & \Theta \end{bmatrix} \qquad (13)$$

Φ_i is the compressive sensing measurement matrix adopted for the i-th signal. Ψ is the predefined sparse transformation basis. I is an identity matrix. Θ is the sparse structure information matrix, which has been learned during the training process.
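The joint matrices of (13) can be assembled mechanically with Kronecker products; a minimal numpy sketch follows, where the sizes and the identity/random block contents are placeholders chosen only for illustration.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
L, n, m = 3, 8, 4                        # sources, signal length, measurements/source

Phis  = [rng.standard_normal((m, n)) for _ in range(L)]  # per-source Phi_i
Psi   = np.eye(n)                                        # predefined basis (placeholder)
Theta = rng.standard_normal((n, n))                      # learned structure (placeholder)

Phi_hat = block_diag(*Phis)              # diag(Phi_1, ..., Phi_L)
Psi_hat = np.kron(np.eye(L), Psi)        # diag(Psi, ..., Psi)
# First block-column of identities picks up the common coefficient alpha_c;
# the block-diagonal Theta acts on the per-source innovation coefficients.
Theta_hat = np.hstack([np.tile(np.eye(n), (L, 1)),
                       np.kron(np.eye(L), Theta)])

print(Phi_hat.shape, Psi_hat.shape, Theta_hat.shape)
# (12, 24) (24, 24) (24, 32)
```

Note that Θ̂ is wider than it is tall: it maps the (L+1) blocks of Ẑ (one common, L innovation) onto the L stacked signals.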

Ŷ is the joint compressive sensing measurement result, defined as Ŷ = [y_1^T, ..., y_L^T]^T, where the superscript T denotes matrix transposition. Ẑ is the joint sparse coefficient matrix, defined as Ẑ = [α_c^T, α_1^T, ..., α_L^T]^T. α_c is the common sparse coefficient, and α_i is the innovation sparse coefficient of signal i. The recovery process of the proposed scheme is similar to the traditional compressive sensing recovery process. The difference is that the sparse transformation basis used here is obtained from the above training process, whereas traditional compressive sensing recovery usually adopts a predefined sparse transformation basis. This is a typical compressive sensing recovery problem, which can be solved by a traditional compressive sensing recovery scheme. The final X̂ = [x̂_1^T, ..., x̂_L^T]^T is obtained from X̂ = Ψ̂Θ̂Ẑ.
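A minimal end-to-end sketch of this joint recovery: build the joint operator, measure a jointly sparse Ẑ, recover it greedily, and map back through X̂ = Ψ̂Θ̂Ẑ. All matrices are random or identity placeholders, and OMP merely stands in for whichever traditional recovery solver is actually used.

```python
import numpy as np
from scipy.linalg import block_diag
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(2)
L, n, m, k = 3, 32, 16, 4                # sources, length, measurements, sparsity

Phi_hat   = block_diag(*[rng.standard_normal((m, n)) for _ in range(L)])
Psi_hat   = np.kron(np.eye(L), np.eye(n))                  # placeholder bases
Theta_hat = np.hstack([np.tile(np.eye(n), (L, 1)),
                       np.kron(np.eye(L), np.eye(n))])

# Joint coefficients Z = [alpha_c; alpha_1; ...; alpha_L], k-sparse overall
Z = np.zeros((L + 1) * n)
Z[rng.choice((L + 1) * n, k, replace=False)] = rng.standard_normal(k)

A = Phi_hat @ Psi_hat @ Theta_hat        # joint sensing operator
Y = A @ Z                                # stacked measurements [y_1; ...; y_L]

Z_hat = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, Y).coef_
X_hat = Psi_hat @ Theta_hat @ Z_hat      # final joint reconstruction
print(X_hat.shape)                       # (96,)
```

Because the common block and the innovation blocks overlap (a common atom equals the sum of its per-source copies), the sparse representation is not unique; the joint model trades this redundancy for fewer total nonzeros across sources.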

VIII. THEORETICAL ANALYSIS

In this section, we perform a theoretical comparison between the proposed scheme and the most closely related scheme, i.e., Dict-DCS [27]. We choose Dict-DCS as the comparison scheme mainly for three factors related to the proposed scheme: first, it is a DCS-based scheme; second, it adopts dictionary learning technology; third, it is WSN oriented.

A. General description of Dict-DCS

Dict-DCS [27] introduces dictionary learning technology into DCS. It mainly consists of two processes: dictionary learning and joint recovery. The first process learns the sparse transformation basis from historical data. The second is a joint compressive sensing recovery process based on the learned sparse transformation basis. The dictionary learning process can be formally expressed as the following optimization problem:

$$\min_{\Psi, Z_{c,t}, Z_{i,t}} \sum_{t=1}^{T} \Big( \sum_{i=1}^{L} \|\Psi(Z_{c,t} + Z_{i,t}) - X_{i,t}\|_F^2 + \lambda \big( \sum_{i=1}^{L} \|Z_{i,t}\|_0 + \|Z_{c,t}\|_0 \big) \Big) \qquad (14)$$

In the above formula, Zc,t and Zi,t are common sparse coefficients and innovation sparse coefficients, respectively. Ψ is the dictionary to be learned. Zc,t , Zi,t and Ψ are unknown variables. λ is a penalty factor. L is the number of signal sources. T is the total batch number of historical data.

The joint recovery process can be formally expressed as the following optimization problem:

$$\min_{\tilde{Z}} \|\tilde{Z}\|_0, \quad \text{s.t. } \|\tilde{\Phi}\tilde{\Psi}\tilde{Z} - \tilde{Y}\|_F^2 \le \varepsilon \qquad (15)$$

where Z̃ = [Z_c^T, Z_1^T, ..., Z_L^T]^T is the sparse representation coefficient to be recovered, Ỹ = [y_1^T, ..., y_L^T]^T is the compressive sensing measurement result, and ε is the upper bound of the error tolerance. The JSM-1 model is used in the joint compressive sensing recovery process. Φ̃ = diag(Φ_1, ..., Φ_L), where Φ_i is the compressive sensing measurement matrix. Ψ̃ is defined as

$$\tilde{\Psi} = \begin{bmatrix} \Psi & \Psi & \cdots & 0 \\ \vdots & & \ddots & \\ \Psi & 0 & \cdots & \Psi \end{bmatrix} \qquad (16)$$

B. Model formula analyses

This section analyses the relationship between the model formulas (i.e., the training model and the recovery model) of the proposed scheme and of Dict-DCS. One of the most significant differences between them lies in the training model of the sparse expression basis. According to formulas (7) and (14), the data sources used for training are different. Dict-DCS trains the sparse transformation basis directly from original historical data (X_{i,t} in formula (14)), while in the proposed scheme, the sparse transformation basis is trained from the compressive sensing measurement results (Y in formula (7)). In order to reduce the difficulty of training, prior knowledge is introduced as a constraint in the proposed scheme (ΨΘ and ||Θ||_0 ≤ p in formula (7)).

Both are DCS-based schemes, and thus their recovery models are very similar. For example, the joint matrix definitions of Φ̂ and Ψ̂ in the proposed scheme (formula (12)) closely resemble those in Dict-DCS (formula (15)). The difference in the recovery model is mainly caused by the difference in the training model. In Dict-DCS, the joint matrix of the sparse representation basis contains only one part, i.e., Ψ̃ in formula (15). In the proposed scheme, however, it consists of two parts: the joint matrix of the traditional predefined basis (Ψ̂) and the joint matrix of the sparse representation coefficients (Θ̂) on the predefined basis.

C. Comparison with Dict-DCS

Firstly, the training data can be easily obtained in the proposed scheme. Dict-DCS trains the sparse transformation basis from original historical data, which are not easily obtained in most WSNs. In the proposed scheme, the sparse transformation basis is trained from the compressive sensing measurement results, so there is no additional cost to obtain the training data.

Secondly, the proposed scheme has better dynamic adaptability in the sparse domain. The dynamic property of the sparse domain is an inherent feature of WSNs. As original historical data are not easily obtained in most WSNs, the sparse transformation basis cannot be updated frequently in the traditional dictionary learning scheme.

Thirdly, there is a certain accuracy loss in the proposed scheme. There is a low-rank transformation between the compressive sensing measurement results and the original sensor data, which causes a certain amount of information loss. However, according to the sparsity hypothesis of compressive sensing, the information lost is limited and has little impact on the overall recovery performance.

IX. EXPERIMENTAL EVALUATION

In this section, we evaluate the performance of the proposed scheme. First, we introduce the basic experimental setting of the evaluation. Second, we perform a sparsity analysis on the dataset used in the experiments. Third, we evaluate the recovery accuracy. Finally, the relationship between the number of signal sources and the recovery accuracy is analyzed experimentally.

A. Evaluation setting

Both DCS [16] and Dict-DCS [27] are selected as candidate schemes for the experimental comparison. The former is a traditional distributed compressive sensing scheme based on a predefined sparse transformation basis. The latter is a dictionary learning based joint compressive sensing scheme, which adopts a trained dictionary as its representation basis.
Each experiment is repeated 200 times, and the final evaluation results are the averages over these repetitions. The DCT basis is adopted in both DCS and the proposed scheme. More specifically, the DCT basis is used as the predefined basis in DCS, while it is used as the basic dictionary in our structured sparse representation model.

The dataset used in the experiments is the Intel Berkeley Lab dataset [36]. This dataset was gathered from 54 sensors deployed in the Intel Berkeley Research lab between February 28th and April 5th, 2004. In the following experiments, both the temperature data and the humidity data provided by the project are used to evaluate the proposed scheme.

B. Signal sparsity testing

In this section, we perform a sparsity analysis based on the DCT basis for both the temperature dataset and the humidity dataset. First, two nodes are randomly selected, and their data are extracted from the dataset. The start point of the signal slice used for the sparsity test is also selected randomly. Then, the sparsity analysis is carried out on these selected signals. Figure 1a shows the original signal slices of the two randomly selected nodes. Figure 1b shows their sparse representation performance; its horizontal axis is the number of remaining DCT coefficients, and the vertical axis is the corresponding recovery accuracy.
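This sparsity test amounts to keeping the largest DCT coefficients and measuring the relative recovery error. A short sketch follows; the smooth synthetic trace below merely stands in for a node's Intel Lab temperature slice, which is an assumption for illustration.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(3)
n = 1000
t = np.arange(n)
# Smooth synthetic "temperature-like" trace standing in for a node's data
x = (20 + 3 * np.sin(2 * np.pi * t / n)
     + 2 * np.cos(6 * np.pi * t / n)
     + 0.05 * rng.standard_normal(n))

c = dct(x, norm='ortho')                      # DCT-II coefficients
errors = {}
for keep in (5, 50, 200):
    ck = np.zeros_like(c)
    top = np.argsort(np.abs(c))[-keep:]       # indices of the largest coefficients
    ck[top] = c[top]
    x_rec = idct(ck, norm='ortho')            # recover from the truncated spectrum
    errors[keep] = np.linalg.norm(x - x_rec) / np.linalg.norm(x)

print({k: round(v, 4) for k, v in errors.items()})
```

Because each larger coefficient set contains the smaller ones, the relative error is non-increasing as more coefficients are kept, which is the shape of the curves in Figure 1b.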

[Figure 1 here: (a) original temperature signal slices of nodes 26 and 49 over time; (b) recovery accuracy versus number of remaining DCT coefficients for both nodes.]

Fig. 1: Signal sparsity test

According to these evaluation results, on the one hand, DCT has a certain sparse representation ability for signals from different sources. This is also an important reason why DCT can be widely used in different application scenarios of compressive sensing. As a result, we can use DCT as the basis for the dictionary model construction in the proposed scheme. On the other hand, the sparse representation ability of DCT differs across signal sources. For example, according to Figure 1b, the sparse representation ability of DCT for the signal of node 49 is obviously better than that for node 26. This is an inherent shortcoming of traditional predefined sparse transformation bases. Therefore, we need to introduce dictionary learning techniques to exploit their differential features for specific applications.

C. Dynamic characteristic of the sparse domain

The dynamic characteristic of the sparse domain in WSNs is evaluated based on DCT. Both the temperature and humidity datasets of node 26 and node 49 were used. 300 sample sets were selected from each dataset in chronological order, and the top 5 DCT coefficients were used for data recovery. The experimental results (Fig. 2) show that the recovery performance of DCT is not consistent over time, and the differences are very large. Taking the humidity data of node 26 as an example, the recovery accuracy fluctuates over the range [0.04, 0.18]. The dynamic property of the sparse domain is an inherent feature of WSNs; static sparse representation bases will reduce the recovery performance of compressive sensing schemes in WSNs.

[Figure 2 here: sparse expression ability at different times (top 5 DCT coefficients) for nodes 26 and 49; recovery accuracy versus sample ID (sorted by time); (a) Temperature, (b) Humidity.]

Fig. 2: Dynamic characteristic of the sparse expression ability

D. Recovery performance

1) Comparison with traditional schemes: In this section, we evaluate the recovery performance of the proposed scheme. Figure 3a shows the evaluation result on the temperature dataset, and Figure 3b the result on the humidity dataset. The horizontal axis is the number of compressive sensing measurements M. The vertical axis is the recovery accuracy of compressive sensing, where the accuracy is defined as ||x − x̂||_2 / ||x||_2.

During the evaluation, two signal lengths (i.e., n = 256 and n = 512) were considered. The experiment was repeated 200 times, and the final results are the averages of these 200 runs. The number of signal sources is set to L = 6; the effect of different values of L is evaluated in the next section. Each experiment randomly selected L nodes to participate in the evaluation. The signal starting points of these L nodes are the same within one experiment, but are selected randomly across experiments.

[Figure 3 here: recovery accuracy versus number of measurements M for DCS, Dict-DCS, and the proposed scheme with N = 256 and N = 512; (a) Temperature, (b) Humidity.]

Fig. 3: Recovery accuracy

According to Figures 3a and 3b, the accuracy of all schemes increases noticeably with the number of compressive measurements on both datasets. Among them, the recovery performance of the proposed scheme is clearly superior to that of the other two schemes on both datasets.

2) Comparison with periodically updated Dict-DCS: The above comparison assumes that the dictionary is not updated periodically in the traditional scheme. We further evaluate the case where periodic dictionary updates are allowed, with update periods of 5 and 15. According to Fig. 4, the performance of the traditional scheme is slightly better than that of the proposed scheme when the period is 5. This is because the proposed scheme takes the compressive sensing measurement results as training data, and these results lose some information compared with the original data. However, as raw data are difficult to obtain in WSNs, the proposed scheme still has practical value.

[Figure 4 here: recovery accuracy versus number of measurements M for Dict-DCS, Dict-DCS with update periods of 15 and 5, and the proposed scheme; (a) Temperature, (b) Humidity.]

Fig. 4: Recovery accuracy (comparison with periodically updated Dict-DCS)

E. The influence of the number of data sources

In this section, we evaluate the relationship between the joint recovery performance and the number of signal sources. The evaluation results are given in Figure 5a and Figure 5b, where the first is the result on the temperature dataset and the second on the humidity dataset. The experimental setting is similar to that of the previous evaluation. The experiment was repeated 200 times. Each experiment randomly selected L nodes to participate in the evaluation, with L varied from 2 to 18. The signal starting point of the L nodes within one experiment is the same; across experiments, the starting point is selected randomly.

[Figure 5 here: recovery accuracy versus number of sources for n = 256 and n = 512; (a) Temperature, (b) Humidity.]

Fig. 5: Relationship between recovery accuracy and the number of sources

According to Figures 5a and 5b, as the number of nodes increases, the recovery accuracy also increases on both types of datasets. However, when the number of nodes is large enough, further increasing it has little influence on the recovery accuracy. For example, when L = 6, the proposed scheme already achieves good performance on both datasets; increasing the number of nodes further yields a clearly weakened accuracy improvement. In theory, as the number of nodes increases, the amount of information contained in the data increases, which benefits the recovery accuracy to a certain extent. However, this does not mean that more nodes always yield better recovery accuracy. On the one hand, as the number of nodes increases, the differences within the signal set also increase, which reduces the number of common basis elements and further weakens the ability of joint compressive sensing recovery. On the other hand, as the number of nodes increases, the computing overhead of the sink node also increases.

X. CONCLUSION

This paper addressed the sparse transformation basis learning problem in distributed compressive sensing. The traditional dictionary learning technique trains the sparse transformation basis from original historical data. It may not be applicable to most WSN applications, because the acquisition of a large amount of original historical data is costly, or even impossible. In wireless sensor networks, the sparse expression capability of the same sparse transformation basis may vary greatly over time or across applications. A sparse transformation basis learned from specific original historical data is in fact a static expression basis, which still faces the dynamic-adaptability problem. In this paper, we train the sparse transformation basis from compressive sensing measurement results rather than original historical data, which means the training data can be easily obtained.
As the sparse transformation basis is trained directly from the compressive sensing measurement results, it can be updated in time, and thus it has better dynamic adaptability than the traditional dictionary learning technology. We analyze the challenge of the naive model that directly combines dictionary learning with compressive sensing, and we design a new sparse expression model, as well as an alternating iterative training method, to address this challenge. Besides these, we also present a joint compressive sensing recovery method to exploit the spatio-temporal correlation among multiple sources and improve the recovery performance of compressive sensing.

ACKNOWLEDGMENT This work is supported by the Humanities and Social Science Youth Fund of Ministry of Education of China (19YJCZH254), Natural Science Foundation of China (71790615, 71431006, 61420106009, 61672536, 61872140, 61872048, 61762005), Scientific Research Fund of Hunan Provincial Education Department (15B127), Key Laboratory of Hunan Province for Mobile Business Intelligence, Key Laboratory of Hunan Province for New Retail Virtual Reality Technology. D ECLARATION OF C OMPETING I NTEREST The authors declared that they have no conflicts of interest to this work. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted R EFERENCES [1] J. Sulam, V. Papyan, Y. Romano, and M. Elad, “Multilayer convolutional sparse modeling: Pursuit and dictionary learning,” IEEE Transactions on Signal Processing, vol. 66, no. 15, pp. 4090–4104, Aug 2018. [2] I. Y. Chun and J. A. Fessler, “Convolutional dictionary learning: Acceleration and convergence,” IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 1697–1712, April 2018. [3] X. Shu, J. Tang, Z. Li, H. Lai, L. Zhang, and S. Yan, “Personalized age progression with bi-level aging dictionary learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 905–917, April 2018. [4] X. Lan, S. Zhang, P. C. Yuen, and R. Chellappa, “Learning common and feature-specific patterns: A novel multiple-sparserepresentation-based tracker,” IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 2022–2037, April 2018. [5] L. Wan, G. Han, L. Shu, and N. Feng, “The critical patients localization algorithm using sparse representation for mixed signals in emergency healthcare system,” IEEE Systems Journal, vol. 12, no. 1, pp. 52–63, March 2018. [6] Z. Qin, J. Fan, Y. Liu, Y. Gao, and G. Y. Li, “Sparse representation for wireless communications: A compressive sensing approach,” IEEE Signal Processing Magazine, vol. 
35, no. 3, pp. 40–58, May 2018. [7] C. Luo, F. Wu, J. Sun, and C. W. Chen, “Compressive data gathering for large-scale wireless sensor networks,” in MobiCom’09. ACM, 2009, pp. 145–156. [8] P. Zhang, J. Wang, and K. Guo, “Compressive sensing and random walk based data collection in wireless sensor networks,” Computer Communications, vol. 129, pp. 43 – 53, 2018. [9] S. Chen, C. Zhao, M. Wu, Z. Sun, H. Zhang, and V. C. Leung, “Compressive network coding for wireless sensor networks: Spatio-temporal coding and optimization design,” Computer Networks, vol. 108, pp. 345 – 356, 2016. [10] F. Liu, M. Lin, Y. Hu, C. Luo, and F. Wu, “Design and analysis of compressive data persistence in large-scale wireless sensor networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 10, pp. 2685–2698, 2015. [11] S. Chen, Z. Wang, H. Zhang, G. Yang, and K. Wang, “Fog-based optimized kronecker-supported compression design for industrial iot,” IEEE Transactions on Sustainable Computing, pp. 1–1, 2019. [12] P. Zhang, S. Wang, K. Guo, and J. Wang, “A secure data collection scheme based on compressive sensing in wireless sensor networks,” Ad Hoc Networks, vol. 70, pp. 73–84, 2018.

[13] M. F. Duarte, S. Sarvotham, D. Baron, M. B. Wakin, and R. G. Baraniuk, “Distributed compressed sensing of jointly sparse signals,” in 39th Asilomar Conference on Signals, Systems and Computers, 2006, pp. 1537–1541. [14] D. Baron, M. F. Duarte, S. Sarvotham, M. B. Wakin, and R. G. Baraniuk, “An information-theoretic approach to distributed compressed sensing,” in 43rd Annual Allerton Conference on Communication, Control, and Computing, 2005. [15] M. F. Duarte and R. G. Baraniuk, “Kronecker compressive sensing,” IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 494–504, 2012. [16] C. Caione, D. Brunelli, and L. Benini, “Compressive sensing optimization for signal ensembles in wsns,” IEEE Transactions on Industrial Informatics, vol. 10, no. 1, pp. 382–392, 2014. [17] P. Zhang and J. Wang, “On enhancing network dynamic adaptability for compressive sensing in wsns,” IEEE Transactions on Communications, pp. 1–1, 2019. [18] W. Li, H. Xu, H. Li, Y. Yang, P. K. Sharma, J. Wang, and S. Singh, “Complexity and algorithms for superposed data uploading problem in networks with smart devices,” IEEE Internet of Things Journal, pp. 1–1, 2019. [19] K. Engan, S. O. Aase, and J. H. Husoy, “Method of optimal directions for frame design,” in ICASSP’99, 1999, pp. 2443–2446. [20] M. Aharon, M. Elad, and A. Bruckstein, “K-svd: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006. [21] J. M. Duarte-Carvajalino and G. Sapiro, “Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE Transactions on Image Processing, vol. 18, no. 7, p. 1395, 2009. [22] C. D. Sigg, T. Dikk, and J. M. Buhmann, “Learning dictionaries with bounded self-coherence,” IEEE Signal Processing Letters, vol. 19, no. 12, pp. 861–864, 2012. [23] C. Studer and R. G. 
Baraniuk, “Dictionary learning from sparsely corrupted or compressed signals,” in IEEE International Conference on Acoustics, Speech and Signal Processing, 2012, pp. 3341–3344. [24] M. Sadeghi, M. Babaie-Zadeh, and C. Jutten, “Learning overcomplete dictionaries based on atom-by-atom updating,” IEEE Transactions on Signal Processing, vol. 62, no. 4, pp. 883–891, 2014. [25] F. P. Anaraki and S. M. Hughes, “Compressive k-svd,” in IEEE International Conference on Acoustics, Speech and Signal Processing, 2013, pp. 5469–5473. [26] F. Pourkamali-Anaraki, S. Becker, and S. M. Hughes, “Efficient dictionary learning via very sparse random projections,” in International Conference on Sampling Theory and Applications, 2015, pp. 478–482. [27] W. Chen, I. J. Wassell, and M. R. D. Rodrigues, “Dictionary design for distributed compressive sensing,” IEEE Signal Processing Letters, vol. 22, no. 1, pp. 95–99, 2015. [28] L. Zhu, Z. Huang, Y. Liu, C. Yue, and B. Ci, “The nonparametric bayesian dictionary learning based interpolation method for wsns missing data,” International Journal of Electronics and Communications, vol. 79, pp. 267 – 274, 2017. [29] P. Zhang, J. Wang, K. Guo, F. Wu, and G. Min, “Multi-functional secure data aggregation schemes for wsns,” Ad Hoc Networks, vol. 69, pp. 86–99, 2018. [30] V. Gupta and S. De, “Sbl-based adaptive sensing framework for wsn-assisted iot applications,” IEEE Internet of Things Journal, vol. 5, no. 6, pp. 4598–4612, 2018. [31] C. Kumar and K. Rajawat, “Dictionary-based statistical fingerprinting for indoor localization,” IEEE Transactions on Vehicular Technology, vol. 68, no. 9, pp. 8827–8841, 2019. [32] W. Li, Z. Chen, X. Gao, W. Liu, and J. Wang, “Multi-model framework for indoor localization under mobile edge computing environment,” IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4844 – 4853, 2019. [33] X. Li, G. Xu, X. Zheng, K. Liang, E. Panaousis, T. Li, W. Wang, and C. 
Shen, “Using sparse representation to detect anomalies in complex wsns,” ACM Transactions on Intelligent Systems and Technology, 2019.

[34] D. Baron, M. F. Duarte, M. B. Wakin, S. Sarvotham, and R. G. Baraniuk, “Distributed compressive sensing,” arXiv preprint, p. arXiv:0901.3403, 2009. [35] M. Tan, I. W. Tsang, and L. Wang, “Matching pursuit lasso part ii: Applications and sparse recovery over batch signals,” IEEE Transactions on Signal Processing, vol. 63, no. 3, pp. 742–753, 2015. [36] “Intel berkeley lab dataset,” http://www.select.cs.cmu.edu/data/labapp3/index.html, Oct. 2015.

BIOGRAPHY

Ping Zhang received his Ph.D. degree and is an Associate Professor in the College of Computer and Information Engineering at Hunan University of Technology and Business, Changsha, China. His current research interests include networks, security, and machine learning.

Jianxin Wang is the Dean and a Professor in the School of Computer Science and Engineering at Central South University, Changsha, China. His current research interests include algorithm analysis and optimization, parameterized algorithm, bioinformatics, and computer network.