Inversion algorithm based on the generalized objective functional for compressed sensing

Applied Mathematical Modelling 37 (2013) 4407–4429


Applied Mathematical Modelling journal homepage: www.elsevier.com/locate/apm

Jing Lei *, Shi Liu
School of Energy, Power and Mechanical Engineering, North China Electric Power University, Changping District, Beijing 102206, China

Article info

Article history: Received 23 October 2011; Received in revised form 11 July 2012; Accepted 18 September 2012; Available online 28 September 2012.

Keywords: Compressed sensing; Inverse problems; Tikhonov regularization method; Homotopy method; Artificial physics optimization algorithm

Abstract: Owing to the novel insight it provides for signal and image processing, compressed sensing (CS) has attracted increasing attention. The accuracy of reconstruction algorithms plays an important role in real applications of the CS theory. In this paper, a generalized reconstruction model that simultaneously considers the inaccuracies of the measurement matrix and the measurement data is proposed for CS reconstruction. A generalized objective functional, which integrates the advantages of the least squares (LS) estimation and the combinational M-estimation, is proposed. An iterative scheme that integrates the merits of the homotopy method and the artificial physics optimization (APO) algorithm is developed for solving the proposed objective functional. Numerical simulations are implemented to evaluate the feasibility and effectiveness of the proposed algorithm. For the cases simulated in this paper, the reconstruction accuracy is improved, which indicates that the proposed algorithm is successful in solving CS inverse problems. © 2012 Elsevier Inc. All rights reserved.

1. Introduction

CS is a promising theory based on the fact that relatively few projections of a sparse signal can capture most of its salient information. By providing a novel perspective on signal and image processing, CS theory has attracted increasing attention and has found numerous potential applications in fields such as signal and image processing, machine learning, astronomy, wireless sensor networks, medical imaging, spectral imaging and remote sensing [1–10]. The accuracy of reconstruction algorithms plays a crucial role in real applications. At present, various algorithms have been developed for CS reconstruction, which can be roughly divided into three categories [11]: (1) greedy pursuit algorithms, such as the orthogonal matching pursuit (OMP) method [12], the stagewise OMP (StOMP) algorithm [13], the regularized OMP (ROMP) method [14] and the compressive sampling matching pursuit (CoSaMP) algorithm [11], which build up an approximation one step at a time by making locally optimal choices at each step; (2) convex relaxation algorithms, such as the interior-point method [15], the gradient projection method [16], the iterative thresholding algorithm [17], the Bregman iteration technique [18,19] and the separable approximation algorithm [20]; and (3) combinatorial algorithms [11]. Overall, these algorithms have played a prominent role in promoting the development and application of CS theory. Owing to the characteristics and complexity of CS reconstruction problems, however, an efficient reconstruction algorithm remains highly desirable.

At present, CS reconstruction algorithms usually account only for inaccuracy in the measurement data. In real applications, however, the measurement matrix or model may also be inaccurate, owing to factors such as the physical implementation of the measurement matrix in a sensor and model approximation distortions [21]. A detailed discussion of the inaccuracies of the measurement matrix and the measurement data can be found in [21]. As a result, it may be more reasonable to consider the inaccuracies of the measurement data and the measurement matrix simultaneously in the process of CS reconstruction. Additionally, to obtain a meaningful solution, the CS reconstruction process is often formulated as an optimization problem, so an efficient optimization algorithm is highly desirable for real applications. Optimization algorithms can be roughly divided into two categories: local optimization algorithms and global optimization algorithms. Presently, local optimization algorithms have found numerous applications in CS reconstruction. Unfortunately, local optimization algorithms can hardly guarantee a globally optimal solution. Consequently, developing a global optimization algorithm may be important for solving the CS inverse problem.

This paper presents a generalized reconstruction model that simultaneously considers the inaccuracies of the measurement matrix and the measurement data. A generalized objective functional, which integrates the merits of the LS estimation and the combinational M-estimation, is proposed. An iterative scheme that integrates the advantages of the homotopy method and the APO algorithm is developed for solving the proposed objective functional. Numerical simulations are implemented to validate the feasibility of the proposed algorithm.

The rest of this paper is organized as follows. Section 2 briefly introduces the CS model. In Section 3, a generalized reconstruction model that simultaneously considers the inaccuracies of the measurement matrix and the measurement data is proposed.

* Corresponding author. Tel.: +86 10 61772472; fax: +86 10 61772219. E-mail address: [email protected] (J. Lei).
0307-904X © 2012 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.apm.2012.09.049
In Section 4, a generalized objective functional that integrates the advantages of the LS estimation and the combinational M-estimation is described in detail. Section 5 introduces the homotopy method and the APO algorithm, and an iterative scheme that integrates the merits of both algorithms is developed for solving the proposed objective functional. Numerical results and discussions are presented in Section 6. Finally, Section 7 presents a summary and conclusions.

2. CS model

The CS model is introduced briefly in this section; more theoretical discussion of the CS technique can be found in [22–25]. If a signal x is sparse, the CS method attempts to reconstruct x from just a few linear measurements of x, which can be formulated as:

y = \Phi x, \qquad (1)

where x is an n × 1 vector standing for a sparse signal or image; y is an m × 1 vector containing the linear measurements of x; and Φ is a matrix of dimension m × n, called the measurement matrix. Popular measurement matrices include the Gaussian, binary and Fourier measurement matrices; more details can be found in [23]. If a signal is not sparse, however, it can often be sparsely represented in another basis, such as a wavelet or Fourier basis, and Eq. (1) can then be rewritten as [26]:

y = \Phi \Psi a, \qquad (2)

where x = Ψa and Ψ is a matrix of dimension n × n. In real applications, taking measurement noise into account, Eqs. (1) and (2) can be reformulated as [26]:

y = \Phi x + r, \qquad (3)

y = \Phi \Psi a + r, \qquad (4)

where r is an m × 1 vector standing for the measurement noise. In brief, the primary task of the CS inverse problem is to estimate x from the known Φ and y under the sparsity assumption on x. In practice, the CS model is often solved via the following optimization problem [8]:

\min_x J(x) = \|\Phi x - y\|^2 + \alpha \sum_{i=1}^{n} |x_i|, \qquad (5)

where α > 0 is the regularization parameter, ‖·‖ denotes the 2-norm and |·| is the absolute value operator.

3. Generalized reconstruction model

It can be seen from Eqs. (3) and (4) that the standard CS model considers only the inaccuracy of the measurement data; the inaccuracy of the measurement matrix or model is not considered. In practice, the measurement matrix or model may be inaccurate [21]. As a result, it is essential to consider the inaccuracies of the measurement data and the measurement matrix simultaneously in the process of CS reconstruction, which can be described by [27]:

(\Phi + E)x = y + r, \qquad (6)


where E is a perturbation matrix of dimension m × n. It is crucial to consider this kind of noise since it can account for precision errors when applications call for physically implementing the measurement matrix in a sensor. In other CS scenarios, such as when Φ represents a system model, E can absorb errors in assumptions made about the transmission channel. Furthermore, E can also model distortions arising from discretizing the domain of analog signals and systems [21]. More discussion of the perturbation analysis for Eq. (6) can be found in [21,28,29]. Directly solving Eq. (6) is challenging. For ease of computation, Eq. (6) can be reformulated as:

\Phi x + Ex = y + r. \qquad (7)

Letting Ex = B, Eq. (7) can be expressed as:

\Phi x + B = y + r. \qquad (8)

In fact, Eq. (8) is known as semiparametric estimation in econometrics, and more details can be found in [30]. It is worth mentioning that B in Eq. (8) can also be regarded as a model error. Eq. (8) reduces to Eq. (3) when B = 0; obviously, Eq. (3) is a special case of Eq. (8). Additionally, other approaches, such as the regularized total least squares method, are available for solving Eq. (6); more discussion can be found in [27,31].

4. Design of the objective functional

Eq. (8) provides a generalized model for CS reconstruction, and finding an efficient computational method is important for its real applications. In this section, a generalized objective functional, which integrates the advantages of the LS estimation and the combinational M-estimation, is described in detail.

4.1. Regularized semiparametric method

Directly solving Eq. (8) is challenging because two unknown variables, x and B, need to be determined. A popular approach is to reformulate the solving of Eq. (8) as an optimization problem. In [32], the authors proposed the following optimization problem:

\min_{x,B} J = \|R(\Phi x + B - y)\|^2 + \alpha_1 \|\Sigma x\|^2 + \alpha_2 \|N B\|^2, \qquad (9)

where α₁ > 0 and α₂ > 0 are regularization parameters, and R, Σ and N are weighting matrices. Eq. (9) can be considered a generalized Tikhonov functional; in particular, Eq. (9) reduces to the standard Tikhonov regularization (STR) method when B = 0 and R and Σ are identity matrices, which can be described by:

\min_{x} J = \|\Phi x - y\|^2 + \alpha_1 \|x\|^2. \qquad (10)

4.2. Extension of the objective functional

Designing a Tikhonov regularization functional involves two key issues: the choice of the function measuring the accuracy of a solution and the design of the stabilizing functional. In general, a Tikhonov-regularized solution results from balancing the accuracy and stability of a solution [33,34]. According to the Tikhonov regularization method [33], Eq. (9) can be generalized as:

\min_{x,B} J = \phi(x,B) + \alpha_1 \Omega_1(x) + \alpha_2 \Omega_2(B), \qquad (11)

where φ(x, B) measures the accuracy of a solution, and Ω₁(x) and Ω₂(B) are referred to as stabilizing (or constraint) functionals. It can be seen from the first term on the right-hand side of Eq. (9) that a sum of squares is used to measure the accuracy of a solution. Studies indicate that the LS estimation is strongly influenced by even a small cluster of outliers, and it is therefore not directly suitable for estimation problems with outliers [35]. Consequently, a robust method is needed to improve the robustness of the LS estimation. In [36], the authors proposed a combination of the LS estimation and the least absolute deviation (LAD) estimation for this purpose. Since the noise in the measurement data is complicated in CS applications, a combinational estimation that integrates the advantages of the LS estimation and the combinational M-estimation is proposed here to improve the robustness of the LS estimation, which can be formulated as:

\phi(x,B) = \delta_1 \|\Phi x + B - y\|^2 + \delta_2 \sum_{j=1}^{m} \rho_1(\Phi_j x + B_j - y_j) + \delta_3 \sum_{j=1}^{m} \rho_2(\Phi_j x + B_j - y_j), \qquad (12)

where 0 ≤ δ₁ ≤ 1, 0 ≤ δ₂ ≤ 1, 0 ≤ δ₃ ≤ 1 and δ₁ + δ₂ + δ₃ = 1; ρ₁(·) and ρ₂(·) stand for M-estimation functions; Φⱼ is the jth row of matrix Φ; and Bⱼ and yⱼ are the jth elements of vectors B and y. Popular M-estimation functions include the


absolute value function, the Huber function, the Cauchy function, the G-M function and the Fair function; more discussion of M-estimation can be found in [35,37].

The design of the stabilizing functional, which strongly influences the reconstruction quality, is crucial for Eq. (11). Overall, stabilizing functionals can be designed according to the specific reconstruction task. In particular, for CS reconstruction the stabilizing functional must encode the sparsity assumption on the solution. Presently, different stabilizing functionals are available for CS reconstruction, such as the ℓ₁ norm [16], the ℓ_p norm (0 < p ≤ 1) [38], and M-estimators (such as the Laplace function and the Geman-McClure function) [39]. From the viewpoint of penalty functions, their differences depend mainly on the penalties imposed on the unknown variables [40]. Studies indicate that the quadratic stabilizing functional (i.e., the standard ℓ₂ norm) penalizes large nonzero values in the reconstructed signals or images quadratically, which discourages such values and favors solutions with many small nonzero entries; as a result, it is hard to ensure a sparse solution [41]. In contrast, to ensure a sparse solution, the stabilizing functional should impose a relatively small penalty on large nonzero values and a relatively large penalty on small nonzero values. Such a stabilizing functional allows large values to exist, since it penalizes them more lightly than the quadratic scheme, while discouraging small nonzero values; consequently, a sparse solution may be obtained [41]. More discussion of the design of sparsity-promoting stabilizing functionals can be found in [39,41–43]. In this paper, an alternative stabilizing functional that satisfies the above properties is proposed, which can be described by:

\Omega_1(x) = \sum_{i=1}^{n} \frac{1 - \exp\!\left(-\beta\,\dfrac{\ln(1+\ln(1+|x_i|^{p_i}))}{u+\ln(1+\ln(1+|x_i|^{p_i}))}\right)}{1 + \exp\!\left(-\beta\,\dfrac{\ln(1+\ln(1+|x_i|^{p_i}))}{u+\ln(1+\ln(1+|x_i|^{p_i}))}\right)}, \qquad (13)

where u > 0, p_i > 0 and β > 0. In fact, Eq. (13) can be considered a generalized M-estimation function [44]. For ease of computation, the absolute value function is approximated by [45]:

|x| \approx (x^2 + \xi)^{1/2}, \qquad (14)

where ξ > 0 is a small constant. Therefore, Eq. (13) can be approximated by:

\Omega_1(x) \approx \sum_{i=1}^{n} \frac{1 - \exp\!\left(-\beta\,\dfrac{\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}{u+\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}\right)}{1 + \exp\!\left(-\beta\,\dfrac{\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}{u+\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}\right)}. \qquad (15)
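A short sketch makes the behavior of this smoothed stabilizer concrete (Python/NumPy is used here for illustration, although the paper's experiments run in MATLAB; the default parameter values and the small constant ξ are assumed placeholders):

```python
import numpy as np

def omega1(x, p=1.3, u=1.0, beta=10.0, xi=1e-8):
    """Smoothed sparsity stabilizer Omega_1(x) of Eq. (15)."""
    # g_i = ln(1 + ln(1 + |x_i|^p)), with |x_i| smoothed via Eq. (14)
    g = np.log(1.0 + np.log(1.0 + (x**2 + xi)**(p / 2.0)))
    t = beta * g / (u + g)
    # each summand is (1 - exp(-t)) / (1 + exp(-t)), i.e. tanh(t/2)
    return np.sum((1.0 - np.exp(-t)) / (1.0 + np.exp(-t)))
```

Each summand vanishes at x_i = 0 and saturates near 1 for large |x_i|, so large entries incur only a small additional penalty while small nonzero entries are penalized comparatively heavily, which matches the sparsity-promoting behavior described above.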

In this work, Ω₂(B) is defined as:

\Omega_2(B) = \sum_{j=1}^{m} \ln(1 + \ln(1 + |B_j|^{q_j})), \qquad (16)

where q_j > 0. For ease of calculation, Eq. (16) is approximated by:

\Omega_2(B) \approx \sum_{j=1}^{m} \ln(1 + \ln(1 + (B_j^2 + \xi)^{q_j/2})). \qquad (17)

In this paper, the employed M-estimation functions are the Cauchy function and the G-M function [35,37], which are presented in Eqs. (18) and (19):

\rho_{\mathrm{Cauchy}}(x) = \frac{l_1^2}{2} \ln\!\left(1 + \left(\frac{x}{l_1}\right)^2\right), \qquad (18)

\rho_{\mathrm{GM}}(x) = \frac{x^2}{l_2^2 + x^2}, \qquad (19)

where l₁ > 0 and l₂ > 0 are predetermined parameters. According to the above discussion, a generalized objective functional can be obtained for CS reconstruction:

\min_{x,B} J = \delta_1 \|r\|^2 + \delta_2 \sum_{j=1}^{m} \frac{l_1^2}{2} \ln\!\left(1 + \left(\frac{r_j}{l_1}\right)^2\right) + \delta_3 \sum_{j=1}^{m} \frac{r_j^2}{l_2^2 + r_j^2} + \alpha_1 \sum_{i=1}^{n} \frac{1 - \exp\!\left(-\beta\,\dfrac{\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}{u+\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}\right)}{1 + \exp\!\left(-\beta\,\dfrac{\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}{u+\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}\right)} + \alpha_2 \sum_{j=1}^{m} \ln(1 + \ln(1 + (B_j^2 + \xi)^{q_j/2})), \qquad (20)

where r_j = \Phi_j x + B_j - y_j.
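For illustration, the functional of Eq. (20) can be evaluated directly; the following Python/NumPy sketch uses uniform exponents p_i = p and q_j = q, and a small assumed smoothing constant ξ (the parameter defaults are placeholders, not the paper's tuned values):

```python
import numpy as np

def cauchy_rho(r, l1=1.0):
    # Cauchy M-estimation function, Eq. (18)
    return (l1**2 / 2.0) * np.log(1.0 + (r / l1)**2)

def gm_rho(r, l2=1.0):
    # Geman-McClure (G-M) M-estimation function, Eq. (19)
    return r**2 / (l2**2 + r**2)

def objective(x, B, Phi, y, d=(0.4, 0.3, 0.3), a1=1e-10, a2=0.005,
              p=1.3, q=1.0, u=1.0, beta=10.0, xi=1e-8):
    """Generalized objective functional J(x, B) of Eq. (20)."""
    r = Phi @ x + B - y                      # residuals r_j = Phi_j x + B_j - y_j
    g = np.log(1.0 + np.log(1.0 + (x**2 + xi)**(p / 2.0)))
    t = beta * g / (u + g)
    omega1 = np.sum((1.0 - np.exp(-t)) / (1.0 + np.exp(-t)))                 # Eq. (15)
    omega2 = np.sum(np.log(1.0 + np.log(1.0 + (B**2 + xi)**(q / 2.0))))      # Eq. (17)
    return (d[0] * np.sum(r**2)
            + d[1] * np.sum(cauchy_rho(r))
            + d[2] * np.sum(gm_rho(r))
            + a1 * omega1 + a2 * omega2)
```

With a perfect fit (r = 0 and B = 0) only the vanishingly small stabilizer terms remain, while any residual contributes through all three data-fidelity terms at once.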


5. Solving of the objective functional

Eq. (20) is essentially an unconstrained optimization problem, and finding an efficient computational method is important for its application. In brief, algorithms for solving unconstrained optimization problems can be divided into two groups: local optimization algorithms and global optimization algorithms. Global optimization algorithms can in turn be roughly classified into two categories: deterministic methods, such as the filled function method and the tunneling function method, and stochastic methods, such as the particle swarm optimization algorithm, genetic algorithms, the APO algorithm and the simulated annealing algorithm. Local optimization algorithms have a favorable local search ability, but their global search capability is relatively weak; stochastic global optimization algorithms have satisfactory global search capability, but their local search ability is relatively weak. In real applications, an algorithm that integrates the advantages of local search and stochastic global search is therefore highly desirable. In this section, the homotopy method and the APO algorithm are introduced, and an iterative scheme that integrates the merits of both algorithms is described in detail.

5.1. Homotopy method

Eq. (20) contains two unknown variables, x and B, and solving it directly is challenging. With the alternating direction iteration optimization technique [19], the variables x and B in Eq. (20) can be solved alternately. The main idea is that, in the kth step, if the value of x is given, an estimate of B can be obtained by solving Eq. (20), and then x can be recalculated with B fixed; repeating this process solves for x and B alternately. In particular, in the kth step, when the estimate of B is available, minimizing Eq. (20) is equivalent to solving the following system of nonlinear equations [46]:

u(x) = 0, \qquad (21)

where u(x) is the gradient vector of Eq. (20). The general homotopy paradigm involves embedding the equations to be solved, u(x) = 0, in a system of equations of one higher dimension, H(x, λ) = 0, through the introduction of one more variable λ, called the homotopy (or continuation) parameter. Typically, λ is restricted to the range [0, 1], and the embedding is constructed so that the augmented system H(x, λ) = 0 is easy to solve and reduces to the original problem when λ = 1, i.e., H(x, 1) = u(x). More details on the design of the homotopy equation can be found in [47–49]. To solve the homotopy equation H(x, λ) = 0, the interval of λ can be subdivided as [49]:

0 = \lambda_0 < \lambda_1 < \cdots < \lambda_m = 1. \qquad (22)

At each point λᵢ, solve the following discrete homotopy equation:

H(x, \lambda_i) = 0, \quad i = 1, 2, \ldots, m. \qquad (23)

The solution x(λᵢ) is then set as the initial value for the homotopy equation H(x, λ_{i+1}) = 0, and this process is repeated until λᵢ = 1. For ease of calculation, the fixed-point homotopy is used in this paper, which can be described by [49]:

H(x, \lambda_i) = (1 - \lambda_i)(x - x^0) + \lambda_i u(x) = 0. \qquad (24)

For concise notation, Eq. (24) can be rewritten as:

x = U(x, \lambda_i), \qquad (25)

where U(·, ·) is a function of the variables x and λ. In this paper, the fixed-point iterative algorithm is used to solve Eq. (25), which can be formulated as [48]:

x^{k+1} = U(x^k, \lambda_i), \qquad (26)

where k is the iteration index. When it is known in advance that the solution lies in the range [H₁, H₂], the iterative scheme can be slightly modified according to this prior information. As a result, a projection operator is introduced into the iterative scheme:

x^{k+1} = \mathrm{Project}\{U(x^k, \lambda_i)\}, \qquad (27)

where

\mathrm{Project}\{x\} = \begin{cases} H_1, & \text{if } x < H_1 \\ x, & \text{if } H_1 \le x \le H_2 \\ H_2, & \text{if } x > H_2. \end{cases} \qquad (28)
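A minimal sketch of the projected fixed-point homotopy of Eqs. (24)-(28) follows (Python for illustration; since Eq. (25) does not fix a unique rearrangement, a damped fixed-point step with an assumed step size is used here):

```python
import numpy as np

def homotopy_solve(u, x0, H1=-10.0, H2=10.0, n_lambda=20, n_inner=50, step=0.1):
    """Solve u(x) = 0 by tracking (1 - lam) * (x - x0) + lam * u(x) = 0
    over a grid 0 = lam_0 < ... < lam_m = 1 (Eqs. (22)-(24)),
    with a projected fixed-point inner iteration (Eqs. (26)-(28))."""
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    for lam in np.linspace(0.0, 1.0, n_lambda + 1)[1:]:
        for _ in range(n_inner):
            h = (1.0 - lam) * (x - x0) + lam * u(x)   # homotopy residual, Eq. (24)
            x = np.clip(x - step * h, H1, H2)          # damped step + projection, Eqs. (27)-(28)
    return x
```

For instance, with u(x) = x − 3 (the gradient of (x − 3)²/2) and x⁰ = 0, the scheme tracks the solution path x(λ) = 3λ and ends near the root x = 3.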

5.2. Artificial physics optimization

The APO algorithm is an emerging stochastic global optimization method. It consists of three procedures, namely initialization, force calculation and motion, which are distinct features differing from other stochastic global optimization algorithms. The implementation of the APO algorithm can be summarized as follows [50,51]:

Step 1. Specify the problem to be solved and the algorithmic parameters, and generate an initial population.
Step 2. Evaluate the fitness of each individual using the objective function value.
Step 3. Update the global best position x_best.
Step 4. Compute the mass and component force of each individual according to Eqs. (29) and (30).
Step 5. Compute the total force on each individual using Eq. (31).
Step 6. Update the velocity and position of each individual using Eqs. (32) and (33).
Step 7. Loop to Step 2 until a predetermined stopping criterion is satisfied.

In the initialization phase, a population of individuals is generated. The fitness of each individual is evaluated by the objective function f(x), and the individual with the best fitness is selected and stored as x_best. In the force-calculation procedure, the masses of the individuals are calculated first. The mass function can be described by m_i = g(f(x_i)) under the conditions that m_i ∈ (0, 1] and g(·) is a positive, bounded, monotonically decreasing function. According to this definition, Eq. (29) is one such function [50,51]:

m_i = \exp\!\left(\frac{f(x_{\mathrm{best}}) - f(x_i)}{f(x_{\mathrm{worst}}) - f(x_{\mathrm{best}})}\right), \quad \forall i, \qquad (29)

where f(x_best) and f(x_worst) are the function values of the best and worst individuals, with best = arg min_i {f(x_i)} and worst = arg max_i {f(x_i)}. The component forces exerted on each individual by all other individuals can be calculated by [50,51]:

F_{ij,k} = \begin{cases} G m_i m_j (x_{j,k} - x_{i,k}), & \text{if } f(x_j) < f(x_i) \\ -G m_i m_j (x_{j,k} - x_{i,k}), & \text{if } f(x_j) \ge f(x_i), \end{cases} \qquad (30)

where F_{ij,k} is the component force exerted on individual i by individual j in the kth dimension, and x_{i,k} and x_{j,k} are the positions of individuals i and j in the kth dimension. Finally, the total force F_{i,k} exerted on individual i by all other individuals in the kth dimension can be calculated by:

F_{i,k} = \sum_{j=1,\, j \ne i} F_{ij,k}, \quad \forall i \ne \mathrm{best}. \qquad (31)

It is worth mentioning that the best individual is neither attracted nor repelled by other individuals; that is, the component forces and the total force exerted on the best individual are zero. In the velocity-and-position update step, the velocity and position of individual i at time t + 1 are updated by Eqs. (32) and (33) [50,51]:

v_{i,k}(t+1) = w\, v_{i,k}(t) + h\, \frac{F_{i,k}}{m_i}, \quad \forall i \ne \mathrm{best}, \qquad (32)

x_{i,k}(t+1) = x_{i,k}(t) + v_{i,k}(t+1), \quad \forall i \ne \mathrm{best}, \qquad (33)

where v_{i,k}(t) and x_{i,k}(t) are the velocity and position of individual i in the kth dimension at generation t; h is a random variable drawn from the uniform distribution on (0, 1); and w is an inertia weight in the range (0, 1). The movement of each individual is restricted to the feasible domain, with x_{i,k} ∈ [x_min, x_max] and v_{i,k} ∈ [v_min, v_max]. It is worth mentioning that the current best individual does not move. Finally, if a stopping criterion is satisfied, the computation is terminated and the optimal solution is output; otherwise, the algorithm loops to Step 2 until a predetermined stopping criterion is met.
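The seven steps above can be condensed into a minimal sketch (Python for illustration; the gravitational constant G = 1, the velocity bounds and the population size are assumed placeholder values, not those of [50,51]):

```python
import numpy as np

def apo(f, lo, hi, n_ind=20, n_iter=100, G=1.0, w=0.5, seed=0):
    """Minimal artificial physics optimization sketch, Eqs. (29)-(33)."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_ind, dim))        # Step 1: initial population
    v = np.zeros((n_ind, dim))
    for _ in range(n_iter):
        fit = np.array([f(xi) for xi in x])           # Step 2: fitness
        best, worst = int(np.argmin(fit)), int(np.argmax(fit))
        denom = fit[worst] - fit[best]
        if denom == 0.0:
            denom = 1.0                               # guard against a degenerate population
        m = np.exp((fit[best] - fit) / denom)         # masses, Eq. (29); m_best = 1
        for i in range(n_ind):
            if i == best:
                continue                              # the best individual does not move
            F = np.zeros(dim)
            for j in range(n_ind):
                if j == i:
                    continue
                sgn = 1.0 if fit[j] < fit[i] else -1.0     # attraction/repulsion, Eq. (30)
                F += sgn * G * m[i] * m[j] * (x[j] - x[i])  # total force, Eq. (31)
            h = rng.uniform(0.0, 1.0)
            v[i] = np.clip(w * v[i] + h * F / m[i], lo - hi, hi - lo)  # Eq. (32)
            x[i] = np.clip(x[i] + v[i], lo, hi)       # Eq. (33), feasibility projection
    fit = np.array([f(xi) for xi in x])
    return x[int(np.argmin(fit))]
```

Because the current best individual never moves within an iteration, the best fitness in the population is non-increasing from one generation to the next.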


Step 4. Select the best individual from the population of the APO algorithm, set this best individual as a new initial solution, and loop to Step 2. Repeat the above steps until a predetermined stopping criterion is satisfied.

Several interesting properties can be observed in the above iterative scheme. The homotopy algorithm quickly finds a local optimal solution. Implementing the APO algorithm after the homotopy algorithm accelerates the convergence of the APO algorithm, since a good initial solution is provided, and it increases the possibility of escaping from the basin of attraction of a local optimal solution while expanding the search space. After an entire iteration, the best candidate solution found by the APO algorithm is further exploited by the homotopy algorithm, which not only accelerates the convergence of the APO algorithm but also enhances the local search capability. In particular, each completed iteration yields a new initial solution for the homotopy method; this process resembles a multi-start strategy, which may enhance the global search capability of the homotopy method. A distinct feature of the proposed algorithm is that it integrates the advantages of local search, derived from the local optimization method, and global search, derived from the stochastic global optimization algorithm.

6. Numerical simulations and discussions

According to the above discussion, the proposed reconstruction process involves solving Eq. (20) using the proposed iteration scheme, which is briefly called the generalized reconstruction (GR) algorithm. In this section, numerical simulations are implemented to evaluate the feasibility and efficiency of the GR algorithm, and the reconstruction quality is compared with that of the standard Tikhonov regularization (STR) method [33], the SpaRSA algorithm [20], the l1_ls method [15] and the GPSR algorithm [16].
All algorithms are implemented in MATLAB 7.0 on a PC with a Pentium IV 2.4 GHz CPU and 4 GB of memory. The reconstruction precision is evaluated by the relative error, which is defined by:



\eta = \frac{\|x_{\mathrm{original}} - x_{\mathrm{reconstructed}}\|}{\|x_{\mathrm{original}}\|}, \qquad (34)

where η stands for the relative error, and x_original and x_reconstructed represent the original and reconstructed signals. In this paper, the noise on the measurement data is defined as:

y_{\mathrm{contaminated}} = y_{\mathrm{original}} + r, \qquad (35)

where r = σ₁ · randn; σ₁ is the standard deviation, and randn stands for a normally distributed random number with mean 0 and standard deviation 1, which can be generated by the function 'randn' in MATLAB; y_original and y_contaminated denote the original and noise-contaminated measurement data. Similarly, the inaccuracy of the measurement matrix is defined as:

\Phi_{\mathrm{contaminated}} = \Phi_{\mathrm{original}} + E, \qquad (36)

where E = σ₂ · randn; σ₂ is the standard deviation, and Φ_original and Φ_contaminated stand for the original and noise-contaminated measurement matrices.
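Under these definitions, the contaminated data and the relative error of Eq. (34) can be produced as follows (a Python/NumPy analogue of the MATLAB 'randn' setup; the toy sparse signal and the matrix sizes are arbitrary examples, not the paper's test signals):

```python
import numpy as np

def relative_error(x_original, x_reconstructed):
    # Relative error eta, Eq. (34)
    return np.linalg.norm(x_original - x_reconstructed) / np.linalg.norm(x_original)

def contaminate(Phi, y, sigma1, sigma2, rng):
    """Add Gaussian noise to the data and the measurement matrix, Eqs. (35)-(36)."""
    y_c = y + sigma1 * rng.standard_normal(y.shape)        # Eq. (35)
    Phi_c = Phi + sigma2 * rng.standard_normal(Phi.shape)  # Eq. (36)
    return Phi_c, y_c

rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 2048)) / np.sqrt(100)      # Gaussian measurement matrix
x = np.zeros(2048); x[:10] = 1.0                           # a toy sparse signal
Phi_c, y_c = contaminate(Phi, Phi @ x, sigma1=0.01, sigma2=0.001, rng=rng)
```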

6.1. Case 1

In this section, a complicated signal, presented in Fig. 1, is used to evaluate the feasibility and effectiveness of the GR algorithm, and the reconstruction quality is compared with that of the STR method, the SpaRSA algorithm, the l1_ls method and the GPSR algorithm. In this case, inaccuracies of both the measurement matrix and the measurement data are simulated; the standard deviations σ₁ and σ₂ for the measurement data and the measurement matrix are 0.01 and 0.001. The regularization parameter for the STR method is 1 × 10⁻¹⁰. In the GR algorithm, δ₁ = 0.4, δ₂ = 0.3, δ₃ = 0.3, l₁ = 1, l₂ = 1, k = 1, β = 10, u = 1, p_i = p and q_j = q, where the values of p and q are determined according to the specific reconstruction task; matrix Ψ is defined by the 'Symlets 8' wavelet in MATLAB, and the remaining algorithmic parameters are listed in Table 1. In the APO algorithm, the population size is 50. The regularization parameters for the SpaRSA method, the l1_ls algorithm and the GPSR method are 0.03, 1 × 10⁻⁴ and 0.3, respectively. The Gaussian measurement matrix is employed, and the measurement-matrix sizes for subfigures (a) and (b) in Figs. 2–6 are 100 × 2048 and 300 × 2048. Figs. 2–6 show the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm under different sampling numbers, and the corresponding reconstruction errors are listed in Table 2. Figs. 2–5 show the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method under different sampling numbers when the inaccuracies of the measurement matrix and the measurement data are considered. It can be seen from Table 2 that when the sampling number is 100, the quality of the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method is far from satisfactory, and the distortions are large. When the sampling number is 300, however, the quality of the signals reconstructed by these algorithms improves.


Table 1
Algorithmic parameters for the GR algorithm.

Algorithmic parameter | Fig. 6a     | Fig. 6b
α₁                    | 1 × 10⁻¹⁰   | 1 × 10⁻¹⁰
α₂                    | 0.005       | 0.005
q                     | 1           | 1
p                     | 1.3         | 1.3
Fig. 2. Reconstructed signals by the STR method.

Fig. 6 shows the signals reconstructed by the GR algorithm under different sampling numbers when the inaccuracies of the measurement matrix and the measurement data are considered. As expected, the GR algorithm shows satisfactory numerical performance: the accuracy of the reconstructed signals is improved compared with the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method, and the detailed information of the original signal is well reconstructed. At the same time, it can be found from Figs. 2–6 that the quality of the signals reconstructed by the GR algorithm under different sampling numbers is higher than that of the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method, which indicates that the GR algorithm is successful in solving CS reconstruction problems. Table 2 shows the reconstruction errors of the compared algorithms under different sampling numbers. For the cases simulated in this section, the GR algorithm gives the smallest reconstruction errors, which indicates that it is competent to solve CS reconstruction problems. Additionally, it can be found from Table 2 that the GR algorithm can reconstruct the original signal well from a relatively small number of samples, which will facilitate real applications.

4415

J. Lei, S. Liu / Applied Mathematical Modelling 37 (2013) 4407–4429

Fig. 3. Reconstructed signals by the SpaRSA algorithm.

Fig. 4. Reconstructed signals by the l1_ls algorithm.

Fig. 5. Reconstructed signals by the GPSR algorithm.

6.2. Case 2

A complex signal, presented in Fig. 7, is used to better evaluate the feasibility of the GR algorithm. In this case, the inaccuracies in the measurement matrix and the measurement data are simulated, and the standard deviations for the

Fig. 6. Reconstructed signals by the GR algorithm.

Table 2
Reconstruction errors under different sampling numbers.

Algorithms    100 samples    300 samples
STR           0.9787         0.9312
SpaRSA        0.3882         0.0577
l1_ls         0.2922         0.0599
GPSR          0.7713         0.0867
GR            0.0688         0.0406

measurement data and the measurement matrix, σ1 and σ2, are both 0.001. Matrix Ψ is defined by the 'Daubechies 8' wavelet in the MATLAB software. The Gaussian measurement matrix is used in this case, and the sizes for subfigures a and b in Figs. 8–12 are 300 × 2048 and 400 × 2048. The algorithmic parameters for the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm are the same as in Section 6.1. Figs. 8–12 show the signals reconstructed by the five algorithms under different sampling numbers, and Table 3 presents the corresponding reconstruction errors. Figs. 8–11 show the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method when the inaccuracies in the measurement data and the measurement matrix are considered. The numerical simulation results indicate that the quality of the signals reconstructed by these four algorithms is not satisfactory when the sampling numbers are relatively small. In particular, it can be seen from Figs. 8–11 that when the sampling number is 300, the distortions of the reconstructed signals are relatively serious.
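The perturbed sampling process described above can be simulated in a few lines. The numpy sketch below is our own illustration (the 1/√m column scaling, the fixed seed and the toy sparse signal are assumptions; the paper specifies only a Gaussian measurement matrix and the two noise standard deviations):

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 300, 2048                 # sampling number and signal length (subfigure a)
sigma1, sigma2 = 0.001, 0.001    # std of the data noise and of the matrix noise

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix

x = np.zeros(n)                  # toy sparse stand-in for the test signal
x[rng.choice(n, size=20, replace=False)] = 1.0

A_noisy = A + sigma2 * rng.standard_normal((m, n))   # inaccurate operator
b = A_noisy @ x + sigma1 * rng.standard_normal(m)    # inaccurate data
```

Any of the compared reconstruction algorithms would then be handed only `A` (the nominal matrix) and `b`, so both sources of inaccuracy affect the inversion.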

Fig. 7. Original signal.

Fig. 8. Reconstructed signals by the STR method.

Fig. 9. Reconstructed signals by the SpaRSA algorithm.

Fig. 10. Reconstructed signals by the l1_ls algorithm.

The signals reconstructed by the GR algorithm under different sampling numbers are presented in Fig. 12. It can be seen that when the inaccurate properties on the measurement matrix and the measurement data are considered, the GR

Fig. 11. Reconstructed signals by the GPSR algorithm.

Fig. 12. Reconstructed signals by the GR algorithm under different sampling numbers.

Table 3
Reconstruction errors under different sampling numbers.

Algorithms    300 samples    400 samples
STR           0.9298         0.8764
SpaRSA        0.2342         0.1278
l1_ls         0.2325         0.1239
GPSR          0.3446         0.2536
GR            0.0887         0.0573

algorithm shows satisfactory numerical performance, and the quality of the signals reconstructed by the GR algorithm is higher than that of the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method. In particular, it can be seen from Table 3 that the GR algorithm gives the smallest reconstruction errors in all cases simulated in this section, which indicates that the GR algorithm is successful in solving CS reconstruction problems.

6.3. Case 3

In this section, an extremely complicated signal, presented in Fig. 13, is employed to further evaluate the numerical performance and effectiveness of the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm. Matrix Ψ is defined by the 'Symlets 8' wavelet in the MATLAB software. The algorithmic parameters for the compared algorithms are the same as in Section 6.1. The Gaussian measurement matrix is used in this section, and the sizes for subfigures a and b in Figs. 14–18 are 100 × 1024 and 200 × 1024. In this case, the inaccuracies on the measurement

matrix and the measurement data are simulated, and the standard deviations for the measurement data and the measurement matrix, σ1 and σ2, are 0.01 and 0.001. Figs. 14–18 show the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm under different sampling numbers, and Table 4 lists the corresponding reconstruction errors. It can be seen from Figs. 14–18 that, when the inaccuracies in the measurement matrix and the measurement data are considered, the quality of the signals reconstructed by the GR algorithm is higher than that of the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method, and the detailed information of the original signal is well reconstructed. At the same time, Table 4 shows that in all cases the GR algorithm gives the smallest reconstruction errors and can reconstruct the detailed information of the original signal well from a relatively small number of samples, which indicates that the GR algorithm is successful in solving CS reconstruction problems.

6.4. Case 4

In this section, a complex signal, presented in Fig. 19, is used to further evaluate the feasibility and efficiency of the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm. Matrix Ψ is defined by the 'Daubechies 8' wavelet in the MATLAB software. The algorithmic parameters for the compared algorithms are the same as in Section 6.1. The Gaussian measurement matrix is used, and the sizes for subfigures a and b in Figs. 20–24 are 200 × 1024 and 450 × 1024.
In this case, the inaccuracies in the measurement matrix and the measurement data are simulated, and the standard deviations for the measurement data and the measurement matrix, σ1 and σ2, are 0.06 and 0.002. Figs. 20–24 show the signals reconstructed by the compared algorithms under different sampling numbers, and Table 5 lists the corresponding reconstruction errors. It can be seen from Figs. 20–24 that the quality of the signals reconstructed by the GR algorithm is higher than that of the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method, and that the distortions of the signals reconstructed by the GR algorithm are the smallest among the compared algorithms, which indicates that the GR algorithm is successful in solving CS inverse problems. Additionally, Table 5 shows that in all cases the GR algorithm gives the smallest reconstruction errors and can reconstruct the detailed information of the original signal well from a relatively small number of samples, which is a desirable feature for real applications.

6.5. Case 5

In this section, noise-contaminated data with different standard deviations are used to further evaluate the numerical performance of the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm. The original signal is presented in Fig. 25. For the GR algorithm, p = 1, and the rest of the algorithmic parameters are the same as in Section 6.1. Matrix Ψ is defined by the 'Symlets 2' wavelet in the MATLAB software. The Gaussian measurement matrix is used, and the sizes for subfigures a and b in Figs. 26–30 are both 300 × 1024.
The inaccuracies in the measurement data and the measurement matrix are simulated, and the standard deviations for subfigures a and b in Figs. 26–30 are σ1 = 0.01

Fig. 13. Original signal.

Fig. 14. Reconstructed signals by the STR method.

Fig. 15. Reconstructed signals by the SpaRSA algorithm.

Fig. 16. Reconstructed signals by the l1_ls algorithm.

and σ2 = 0.001, and σ1 = 0.06 and σ2 = 0.002, respectively. Figs. 26–30 present the signals reconstructed by the compared algorithms. The reconstruction errors for the compared algorithms are shown in Table 6.

Fig. 17. Reconstructed signals by the GPSR algorithm.

Fig. 18. Reconstructed signals by the GR algorithm.

Table 4
Reconstruction errors under different sampling numbers.

Algorithms    100 samples    200 samples
STR           0.9485         0.9075
SpaRSA        0.1332         0.0857
l1_ls         0.1331         0.0843
GPSR          0.2751         0.1394
GR            0.0659         0.0549

When the inaccuracies in the measurement matrix and the measurement data are considered, Figs. 26–30 illustrate the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm. As can be expected, the GR algorithm shows satisfactory robustness. It can be seen from Table 6 that under the different noise conditions the GR algorithm gives the smallest reconstruction errors, which indicates that it is successful in handling the inaccuracies in the measurement matrix and the measurement data. Additionally, it is worth mentioning that the reconstruction quality decreases as the noise level increases, which indicates that the noise in the measurement matrix and the measurement data should be treated seriously; this issue should be further investigated in the future.

6.6. Case 6

To further evaluate the feasibility and efficiency of the GR algorithm, a two-dimensional case is also illustrated. The original image is the 'checkerboard' image in the MATLAB software, which is presented in Fig. 31. In this case, matrix Ψ is

Fig. 19. Original signal.

Fig. 20. Reconstructed signals by the STR method.

Fig. 21. Reconstructed signals by the SpaRSA algorithm.

defined by the discrete cosine transform (DCT) matrix, and the Gaussian measurement matrix is employed. The size of the original image is 48 × 48, and for ease of computation the image matrix is reshaped into a vector of dimension 2304 × 1. The sizes of the measurement matrix for subfigures a and b in Figs. 32–36 are 700 × 2304 and 800 × 2304. In this

Fig. 22. Reconstructed signals by the l1_ls algorithm.

Fig. 23. Reconstructed signals by the GPSR algorithm.

Fig. 24. Reconstructed signals by the GR algorithm.

section, the inaccuracies in the measurement matrix and the measurement data are simulated, and the standard deviations for the measurement data and the measurement matrix, σ1 and σ2, are 0.01 and 0.001. The algorithmic parameters for the STR

Table 5
Reconstruction errors under different sampling numbers.

Algorithms    200 samples    450 samples
STR           0.9419         0.8792
SpaRSA        0.1649         0.0679
l1_ls         0.1633         0.0766
GPSR          0.2581         0.0789
GR            0.0994         0.0517

Fig. 25. Original signal.

Fig. 26. Reconstructed signals by the STR method.

algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm are the same as in Section 6.1. Figs. 32–36 show the images reconstructed by the compared algorithms under different sampling numbers, and Table 7 lists the corresponding reconstruction errors. It can be seen from Figs. 32–36 that, as can be expected, when the inaccuracies in the measurement matrix and the measurement data are considered, the quality of the images reconstructed by the GR algorithm is higher than that of the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method. In addition, Table 7 shows that in all cases the GR algorithm gives the smallest reconstruction errors, which indicates that the GR algorithm is successful in solving CS inverse problems.
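The two-dimensional setup above rests on two steps: vectorizing the 48 × 48 image and sparsifying it with a DCT basis. The sketch below illustrates both with numpy only; building the orthonormal DCT-II matrix explicitly (rather than calling MATLAB's dct, which the paper uses) and the toy checkerboard stand-in are our own choices:

```python
import numpy as np

side = 48
n = side * side                  # 48 x 48 image flattened to a 2304-vector

# Orthonormal DCT-II matrix: row j holds the j-th DCT basis vector.
i = np.arange(n)
Psi = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
Psi[0, :] = np.sqrt(1.0 / n)     # DC row gets the smaller normalization

img = np.indices((side, side)).sum(axis=0) % 2    # toy checkerboard stand-in
x = img.reshape(-1).astype(float)                 # vectorize: 2304 x 1
s = Psi @ x                                       # DCT coefficients
x_back = Psi.T @ s               # Psi is orthogonal, so its transpose inverts it
```

A measurement matrix of size 700 × 2304 or 800 × 2304, as in the case study, would then act on `x`, and reconstruction would recover `s` before mapping back to the image.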

Fig. 27. Reconstructed signals by the SpaRSA algorithm.

Fig. 28. Reconstructed signals by the l1_ls algorithm.

Fig. 29. Reconstructed signals by the GPSR algorithm.

Fig. 30. Reconstructed signals by the GR algorithm.

Table 6
Reconstruction errors under different noise conditions.

Algorithms    σ1 = 0.01, σ2 = 0.001    σ1 = 0.06, σ2 = 0.002
STR           0.8595                   0.8606
SpaRSA        0.0917                   0.1532
l1_ls         0.0974                   0.1639
GPSR          0.2330                   0.2460
GR            0.0352                   0.0969

Fig. 31. Original image.

7. Conclusions

The accuracy of the reconstruction algorithms plays a vital role in real applications of CS theory. In this paper, a generalized inversion model, which simultaneously considers the inaccuracies in the measurement matrix and the measurement data, is proposed for CS reconstruction. A generalized objective functional, which integrates the advantages of the LS estimation and the combinational M-estimation, is proposed, and an iterative scheme that integrates the advantages of the homotopy method and the APO algorithm is developed for solving it. Numerical simulations are implemented to evaluate the feasibility and effectiveness of the proposed algorithm. For the cases simulated in this paper, the reconstruction accuracy is improved, which indicates that the proposed algorithm is successful in solving CS inverse problems. As a result, a promising algorithm is introduced for solving CS inverse problems. Applications indicate that each algorithm has its advantages and disadvantages and may show different numerical performance on different reconstruction tasks. In real applications, the selection of an appropriate algorithm depends mainly on

Fig. 32. Reconstructed images by the STR method.

Fig. 33. Reconstructed images by the SpaRSA algorithm.

Fig. 34. Reconstructed images by the l1_ls algorithm.

the characteristics and prior information of a specific reconstruction task. Our work provides an alternative approach for CS reconstruction, which needs to be further validated by more cases in the future.

Fig. 35. Reconstructed images by the GPSR algorithm.

Fig. 36. Reconstructed images by the GR algorithm.

Table 7
Reconstruction errors under different sampling numbers.

Algorithms    700 samples    800 samples
STR           0.7249         0.6667
SpaRSA        0.1618         0.1410
l1_ls         0.1521         0.1270
GPSR          0.2852         0.2557
GR            0.1147         0.0864

Acknowledgements

The authors wish to thank the National Natural Science Foundation of China (Nos. 51206048, 51006106 and 50906086), the Fundamental Research Funds for the Central Universities (No. 10MG20), the China Postdoctoral Science Foundation (Nos. 20090460263 and 201003088), and the Program for Changjiang Scholars and Innovative Research Team in University (No. IRT0952) for supporting this research.