Complex-valued sparse reconstruction via arctangent regularization


Gao Xiang*, Xiaoling Zhang, Jun Shi

School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China

Article history: Received 3 November 2013; received in revised form 15 April 2014; accepted 30 April 2014.

Abstract

Complex-valued sparse reconstruction is conventionally solved by transforming it into a real-valued problem. However, this approach may not work efficiently or correctly, especially when the problem is large or the mutual coherence is high. In this paper, we present a novel algorithm called arctangent regularization (ATANR), which can handle complex-valued problems of large size and high mutual coherence directly. ATANR is implemented within an iteratively reweighted least squares (IRLS) framework and accelerated by dimension reduction and active set selection steps. Further, we summarize and analyze the common properties of a penalty kernel suitable for sparse reconstruction. The analysis shows that the key difference between the arctangent kernel and the ℓ1 norm is that the first order derivative of the arctangent kernel is close to zero for a nonzero variable. This makes ATANR less sensitive to the regularization parameter λ than ℓ1 regularization methods. Finally, extensive numerical experiments validate that ATANR usually performs better than conventional ℓ1 regularization methods, not only for the random signs ensemble but also for sensing matrices with high mutual coherence, such as the resolution enhancement case.


Keywords: Arctangent regularization; Complex-valued problem; Active set projection; Penalty functions; Sparse reconstruction; Resolution enhancement


☆ This work is supported by the Natural Science Fund of China under Grant 61101170 and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 2011018511001).
* Corresponding author. Tel.: +86 28 61831500. E-mail address: [email protected] (G. Xiang).

1. Introduction

Sparse reconstruction has attracted more and more attention in recent decades, especially after the establishment of compressive sensing (CS) by David L. Donoho et al. during 2004–2006 [1–3]. CS employs the ℓ0 quasi-norm to describe the sparsity of a signal and formulates the sparse reconstruction problem (SRP) as an ℓ0 quasi-norm optimization, which is proven to be NP-hard [4]. Encouragingly, Candes et al. proposed the famous restricted isometry property (RIP) [5], which gives an equivalence condition between ℓ1 regularization and ℓ0 quasi-norm optimization. The advantages are inspiring: on the one hand, the ℓ1 norm optimization is convex, so any local minimum is also a global minimum; on the other hand, there are already many excellent algorithms that solve ℓ1 regularization problems efficiently, such as the least absolute shrinkage and selection operator (LASSO) [6–8] and the ℓ1 regularized least squares (ℓ1LS) [9]. Besides ℓ1 regularization, many other penalty methods have been proposed, including the MC+ algorithm [10] and SparseNet [11]. It is worth noting that most of these methods were originally developed for real-valued SRPs, and few of them handle complex-valued problems directly. However, in some applications we do have to handle complex-valued SRPs. For example, in radar imaging, the desired scattering coefficients are always considered to be complex numbers. In order to solve complex-valued problems, it is common to transform them into real-valued ones [12].


However, this transformation dramatically increases the computation load as the dimensions of the measurement matrix A ∈ C^{n×m} grow, and it may fail when the mutual coherence is high. A few methods can deal with complex-valued problems directly, such as orthogonal matching pursuit (OMP) [13], the LASSO [6–8] and the sparsity driven method [14], but their disadvantages are obvious. OMP performs poorly when the mutual coherence of A is high. LASSO is time consuming because a proper regularization parameter λ has to be searched by cross validation [15]. Even LARS-LASSO [8], which does not need a preassigned λ, can still be very slow when n and m are both large, because it must compute the whole solution path first and then select a solution by AIC, BIC or a Cp-type risk [8,7,15]. Moreover, when n ≪ m and the mutual coherence is high, LARS-LASSO may fail to find a proper positive direction during its iterations. The sparsity driven method performs well at enhancing the features of block targets, but it has several parameters that must be carefully designed. Besides, Hilbert transform based methods are usually used to analyze nonlinear and non-stationary complex signals, such as [16,17]; however, they do not emphasize the sparsity of a signal, so they perform worse on SRPs than sparse reconstruction methods.

In this paper, we design an algorithm called arctangent regularization (ATANR), which handles complex-valued sparse reconstruction problems directly and efficiently and is suitable for problems of large size. It is a penalty method with the arctangent function as its penalty kernel. Compared with the ℓ1 norm penalty kernel, the arctangent kernel is closer to the ℓ0 quasi-norm. Under the ℓ1 norm penalty, a larger magnitude entry incurs a larger penalty term, whereas the arctangent kernel suppresses the influence of large magnitudes, so the solution of ATANR may appear less sparse than that of ℓ1 regularization. Through the dimension reduction and active set selection steps, ATANR is extended to solve SRPs of large size. Numerical experiments show that ATANR costs much less execution time than LASSO and ℓ1LS when the size of the problem reaches 2000 × 4000.

The remaining sections are organized as follows. In Section 2, we briefly introduce the complex-valued sparse reconstruction problem and some existing penalty algorithms. In Section 3, ATANR is proposed; it is implemented with the IRLS framework and further improved by the dimension reduction and active set selection steps. In Section 4, the common properties of penalty functions suitable for sparse reconstruction are summarized and analyzed in detail. These properties expose that the key difference between ATANR and ℓ1 regularization is that the first order derivative of the arctangent kernel is close to zero when the variable is nonzero, which also makes ATANR less sensitive to the regularization parameter λ. In Section 5, we focus on the performance of ATANR on the random signs ensemble [18]; numerical experiments show that ATANR has nearly the same performance as OMP and outperforms LASSO and ℓ1LS. In Section 6, plenty of simulations are performed for the resolution enhancement case, and ATANR exhibits good performance both in the discrete scatters case and in the continuous block case. In Section 7, we summarize the main work of this paper and outline future work on ATANR.


2. Complex-valued sparse reconstruction and penalty methods

In this section, we introduce the complex-valued sparse reconstruction problem and briefly review some existing penalty algorithms.

2.1. Complex-valued sparse reconstruction

The complex-valued sparse reconstruction problem usually amounts to solving an underdetermined linear system

$$y = Ax + n$$

where A ∈ C^{n×m} is the measurement matrix with rank(A) = n and n ≪ m, and x ∈ C^m, y ∈ C^n and n ∈ C^n denote the input signal, the measurements and the noise vector, respectively. In CS, the sparsity of x is defined by the ℓ0 quasi-norm, namely ‖x‖₀ = |supp(x)| = |{i : x_i ≠ 0}|, where x_i denotes the ith entry of x. When x has s ∈ Z⁺ nonzero entries, we say x is s-sparse. CS then describes the SRP as an ℓ0 quasi-norm optimization:

$$\hat{x} = \arg\min_{x}\|x\|_0 \quad \text{s.t.} \quad \|Ax - y\|_2 \le \varepsilon \tag{1}$$

where ε is related to the variance of the noise vector n, and ‖·‖₂ denotes the ℓ2 norm. Usually, in order to solve the complex-valued SRP, it is transformed into a real-valued problem, as shown in (2): the input signal x is split into its real and imaginary parts, and the linear system y = Ax is rewritten as

$$\begin{bmatrix} y^{r} \\ y^{i} \end{bmatrix} = \begin{bmatrix} A^{r} & -A^{i} \\ A^{i} & A^{r} \end{bmatrix}\begin{bmatrix} x^{r} \\ x^{i} \end{bmatrix} \tag{2}$$

It is evident that (2) is real-valued, but it doubles the problem's dimensions, so the computation complexity grows sharply and the transformation is not suitable for a measurement matrix A of large size. This is why we develop an algorithm that handles complex-valued problems directly and remains suitable for large problems.
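As a quick illustration of (2), the following NumPy sketch builds the real-valued embedding for a small random system and checks that it reproduces the complex measurements; the sizes and random data are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 50                                  # illustrative sizes only
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
x = np.zeros(m, dtype=complex)
x[rng.choice(m, 3, replace=False)] = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = A @ x

# Real-valued embedding of y = Ax as in (2): stack real and imaginary parts.
A_big = np.block([[A.real, -A.imag],
                  [A.imag,  A.real]])          # (2n) x (2m) real matrix
x_big = np.concatenate([x.real, x.imag])
y_big = np.concatenate([y.real, y.imag])

print(np.allclose(A_big @ x_big, y_big))       # True: the embedding is exact
```

The doubled dimensions of A_big are exactly the computational drawback discussed above.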


2.2. Penalty methods

The complex-valued problem (1) can usually be solved by penalty methods, such as ℓ1 regularization (LASSO and ℓ1LS) and MC+. Generally, the penalized problem has the form

$$\hat{x}(\lambda) = \arg\min_{x} L(Ax, y) + \sum_{i=1}^{m} J(|x_i|; \lambda_i) \tag{3}$$

where L(Ax, y) is the fidelity constraint. In most applications it is taken to be

$$L(Ax, y) = \tfrac{1}{2}\|Ax - y\|_2^{2}$$


The second term of (3) is the penalty term. It is a sum of penalty kernels, and λ_i denotes the penalty level or regularization parameter, with λ_i > 0 [10]. In particular, if the ℓ2 norm is chosen as the penalty term, we obtain the famous Tikhonov regularization, defined by

$$\hat{x}(\lambda) = \arg\min_{x} \tfrac{1}{2}\|Ax - y\|_2^{2} + \lambda\|x\|_2^{2}$$

However, Tikhonov regularization is not appropriate when the desired solution is sparse [19].

2.2.1. ℓ1 penalty

The ℓγ penalty term is given by $\sum_{i=1}^{m} J(|x_i|; \lambda_i) = \lambda\|x\|_\gamma^{\gamma}$ for 0 < γ ≤ 1 [11]. Choosing γ = 1 yields the ℓ1 penalty method, namely ℓ1 regularization:

$$\hat{x}(\lambda) = \arg\min_{x} \tfrac{1}{2}\|Ax - y\|_2^{2} + \lambda\|x\|_1 \tag{4}$$

2.2.2. MC+ penalty

The MC+ penalty kernel function is defined by

$$J_{MC+}(x;\lambda) = \lambda\int_{0}^{|x|}\Big(1 - \frac{t}{\gamma\lambda}\Big)_{+}dt = \lambda\Big(|x| - \frac{x^{2}}{2\lambda\gamma}\Big)I(|x| < \lambda\gamma) + \frac{\lambda^{2}\gamma}{2}I(|x| \ge \lambda\gamma) \tag{5}$$

3. Arctangent regularization

The primary idea of ATANR is to find a smooth function that approximates the ℓ0 quasi-norm. In this section, we first introduce the ideal double threshold (IDTH) penalty, which exhibits the basic idea of the ℓ0 quasi-norm approximation, and then extend it to ATANR. Moreover, a general iterative scheme with adaptive step size is employed to solve ATANR problems.

3.1. IDTH penalty

The penalty kernel function of IDTH is defined by

$$J_{IDTH}(x) = k_{slope}(x - \Theta_L)I(\Theta_L \le x \le \Theta_U) + k_{slope}(\Theta_U - \Theta_L)I(\Theta_U < x), \quad x \in \mathbb{R} \tag{6}$$

and its first order derivative has the form

$$J'_{IDTH}(x) = k_{slope}\,I(\Theta_L \le x \le \Theta_U)$$

where Θ_U and Θ_L denote the upper and lower thresholds, respectively, and k_slope is the slope of the line segment between the two thresholds. These parameters should satisfy k_slope(Θ_U − Θ_L) = 1. When k_slope → +∞, IDTH approaches the ℓ0 quasi-norm. The IDTH kernel is obviously not differentiable at the points (Θ_L, 0) and (Θ_U, 1); these singular points may cause unstable behavior when searching for a sparse solution with IDTH. We usually set Θ_L = 0, and therefore need a penalty kernel that is smooth on (0, +∞) and right continuous at (0, 0). In the next subsection, we introduce the arctangent function, which satisfies these requirements.

3.2. Arctangent penalty

Owing to the instability of IDTH, we introduce the arctangent penalty kernel to approximate the ℓ0 quasi-norm. It has the explicit expression

$$J_a(x) = \frac{2}{\gamma\pi}\arctan(Q(x - \mu)), \quad x \in \mathbb{R} \tag{7}$$

where Q denotes the slope parameter, γ is the scale parameter and μ represents the middle threshold. Its first order derivative is

$$J'_a(x) = \frac{2Q}{\gamma\pi[Q^{2}(x - \mu)^{2} + 1]} \tag{8}$$

where γ is usually selected as either 1 or Q. Generally, ATANR is defined by

$$\hat{x}(\lambda) = \arg\min_{x} \tfrac{1}{2}\|Ax - y\|_2^{2} + \lambda\sum_{i=1}^{m} J_a(|x_i|) \tag{9}$$

When γ = Q and Q is sufficiently small, ATANR degenerates to ℓ1 regularization.
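Before turning to the implementation, a small NumPy sketch of the kernel (7) and its derivative (8), with γ = 1 and μ = 0 (the configuration revisited in Section 4.2), makes the behavior concrete: as Q grows, J_a(t) approaches the ℓ0 indicator of a nonzero entry while J_a'(t) goes to zero for fixed t ≠ 0, in contrast with the constant unit slope of the ℓ1 penalty. The parameter values are illustrative.

```python
import numpy as np

def atan_penalty(t, Q, gamma=1.0, mu=0.0):
    """Arctangent penalty kernel J_a of Eq. (7), evaluated at t = |x_i|."""
    return (2.0 / (gamma * np.pi)) * np.arctan(Q * (t - mu))

def atan_penalty_deriv(t, Q, gamma=1.0, mu=0.0):
    """First order derivative J_a' of Eq. (8)."""
    return 2.0 * Q / (gamma * np.pi * (Q ** 2 * (t - mu) ** 2 + 1.0))

t = 0.5                                   # a fixed nonzero magnitude
for Q in (1.0, 10.0, 100.0, 1e4):
    # J_a(t) -> 1 and J_a'(t) -> 0 as Q grows; the l1 penalty would keep slope 1.
    print(f"Q={Q:g}: J_a={atan_penalty(t, Q):.4f}, J_a'={atan_penalty_deriv(t, Q):.6f}")
```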

3.3. Implementation with IRLS

In this subsection, we introduce IRLS-ATANR, which solves the ATANR problem efficiently. It is based on a modified IRLS framework. For ATANR, the objective function φ(x) can be written as

$$\varphi(x) = \tfrac{1}{2}\|Ax - y\|_2^{2} + \lambda\sum_{i=1}^{m} J_a(|x_i|) \tag{10}$$

The gradient of φ(x) can be formulated as

$$\nabla\varphi(x) = \frac{\partial\varphi}{\partial x^{*}} = A^{H}(Ax - y) + \lambda\Lambda(x)x \tag{11}$$

where x* denotes the conjugate of x, and Λ(x) is the derivative diagonal matrix with diagonal entries

$$\Lambda_{ii}(x) = \frac{J'_a(|x_i|)}{|x_i| + \delta} \tag{12}$$

Here δ is employed to avoid the non-differentiability of |x_i| around the origin. Considering the first order necessary condition ∇φ = 0, we obtain

$$[A^{H}A + \lambda\Lambda(x)]x = A^{H}y \tag{13}$$

Let M(x) = A^H A + λΛ(x). Assume we have just completed the kth iteration, and let x^k denote the output of the kth iteration. Then (13) is rewritten as

$$M_k(x^{k})x^{k+1} = A^{H}y$$

Applying the quasi-Newton method to our problem, we get the iterative formula

$$x^{k+1} = x^{k} - \tau_k[\nabla^{2}\varphi(x^{k})]^{-1}\nabla\varphi(x^{k})$$

where τ_k is the step size and ∇φ(x^k) is defined by (11). Let the Hessian matrix be

$$\nabla^{2}\varphi(x^{k}) = M_k(x^{k}) \tag{14}$$

and solve for the search direction d^k from

$$M_k(x^{k})d^{k} = -\nabla\varphi(x^{k}) \tag{15}$$

Since M_k(x^k) is positive semi-definite, we can solve (15) with the preconditioned conjugate gradient (PCG) algorithm (or the Matlab backslash operator when programming) during the iterations, which is much more stable than inverting M_k(x^k) directly. Different from [14], where τ_k is fixed to 1, ATANR employs the Armijo rule [20] to determine the step size τ_k adaptively.

Lemma 1 (Armijo rule, Sun and Yuan [20]). Given β ∈ (0, 1), ρ ∈ (0, 0.5) and υ > 0, there exists a least nonnegative integer s_k such that

$$\varphi(x^{k}) - \varphi(x^{k} + \beta^{s_k}\upsilon d^{k}) \ge -\rho\beta^{s_k}\upsilon\, g_k^{T} d^{k}$$

where g_k denotes the gradient and d^k is the search direction.

Setting τ_k = β^{s_k}υ and applying the Armijo rule to our problem gives

$$\varphi(x^{k}) - \varphi(x^{k} + \beta^{s_k}\upsilon d^{k}) \ge -\rho\beta^{s_k}\upsilon\,[\nabla\varphi(x^{k})]^{H} d^{k} \tag{16}$$

Finally, the stop rule is defined by

$$\frac{\|x^{k+1} - x^{k}\|_2}{\|x^{k}\|_2} < \xi \tag{17}$$

where ξ is a small positive constant, such as 10^{-5}. Besides, we usually want the program to stop after a maximum number of iterations maxIters if the stop rule (17) is not satisfied. In our simulations, we set maxIters to 100, which works well in most conditions. Algorithm 1 lists the main steps of IRLS-ATANR.

Algorithm 1. The framework of IRLS-ATANR.

Inputs: A ∈ C^{n×m}, y ∈ C^n, ξ = 10^{-5}, δ = 10^{-5}, λ = 10^{-3}, maxIters = 100, k = 1, rstop = 1, x^1 = 0
{k denotes the iteration number; rstop denotes the ratio ‖x^{k+1} − x^k‖₂/‖x^k‖₂, and its initial value is set to 1.}
1: calculate A^H A, A^H y
Main loop:
2: while k ≤ maxIters and rstop > ξ do
3:   M_k(x^k) = A^H A + λΛ(x^k)
4:   ∇φ(x^k) = M_k(x^k)x^k − A^H y
5:   solve (15) for the search direction d^k
6:   search τ_k by the Armijo rule (16)
7:   update x^k: x^{k+1} = x^k + τ_k d^k
8:   calculate rstop by (17)
9:   increase k: k = k + 1
10: end while
Outputs: the desired result is x^k
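A compact Python/NumPy sketch of Algorithm 1 is given below. It follows the listing step by step with γ = 1 and μ = 0 in J_a; the Armijo constants β, ρ and υ are illustrative choices, since the paper only constrains their ranges.

```python
import numpy as np

def irls_atanr(A, y, lam=1e-3, Q=100.0, delta=1e-5, xi=1e-5, max_iters=100,
               beta=0.5, rho=0.1, upsilon=1.0):
    """Minimal sketch of IRLS-ATANR (Algorithm 1) with gamma = 1 and mu = 0."""
    m = A.shape[1]
    x = np.zeros(m, dtype=complex)
    AhA, Ahy = A.conj().T @ A, A.conj().T @ y

    def J_a(t):        # arctangent penalty kernel, Eq. (7)
        return (2.0 / np.pi) * np.arctan(Q * t)

    def J_a_prime(t):  # its first order derivative, Eq. (8)
        return 2.0 * Q / (np.pi * (Q ** 2 * t ** 2 + 1.0))

    def phi(z):        # objective function, Eq. (10)
        return 0.5 * np.linalg.norm(A @ z - y) ** 2 + lam * np.sum(J_a(np.abs(z)))

    for _ in range(max_iters):
        Lam = J_a_prime(np.abs(x)) / (np.abs(x) + delta)   # diagonal of Lambda(x), Eq. (12)
        M = AhA + lam * np.diag(Lam)
        grad = M @ x - Ahy                                 # gradient, Eq. (11)
        d = np.linalg.solve(M, -grad)                      # search direction, Eq. (15)

        # Armijo rule (16): backtrack tau = beta^s * upsilon until sufficient decrease.
        tau = upsilon
        for _ in range(30):
            if phi(x) - phi(x + tau * d) >= -rho * tau * np.real(np.vdot(grad, d)):
                break
            tau *= beta

        x_new = x + tau * d
        rstop = np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-12)
        x = x_new
        if rstop < xi:
            break
    return x
```

For large problems the direct solve of the m × m system is the bottleneck, which is what Sections 3.5.1 and 3.5.2 address.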

3.4. Active set projection

We usually cannot reach the optimal point within a finite number of iterations, so the solution obtained by the IRLS-ATANR of Algorithm 1 is often less precise than that of LARS-LASSO, especially under high signal-to-noise ratio (SNR) conditions. Here, we introduce a threshold parameter α in order to keep only the large magnitudes, select the corresponding active set and apply a least squares estimation. As a result, we can obtain a solution of IRLS-ATANR as precise as that of LARS-LASSO. In detail, this active set projection is implemented as follows.

3.4.1. Compute the active set

Suppose x̂ denotes the final output of IRLS-ATANR in Algorithm 1. Let the active index set S of x̂ be

$$S := \{t : |\hat{x}_t| \ge \alpha \max_i |\hat{x}_i|\}$$

where x̂_t represents the tth entry of x̂, and the threshold parameter α ∈ [0, 1]. Empirically, α can be chosen as 0.01. Then the active set is defined by

$$A_S = \{a_i : a_i \in A,\ i \in S\}$$

where a_i is the ith column of A.

3.4.2. The least squares estimation

Solve the linear system A_S x_S = y for its least squares solution, where x_S stands for the sub-vector of x composed of the entries indexed by S. The improved solution x̂_I is then

$$\hat{x}_{I,S} = A_S^{\dagger}y, \qquad \hat{x}_{I,\bar{S}} = 0$$

where S̄ designates the complement of S.
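A minimal sketch of this projection, assuming a NumPy least squares solver for the refit A_S x_S = y:

```python
import numpy as np

def active_set_projection(A, y, x_hat, alpha=0.01):
    """Active set projection of Section 3.4: threshold, then least squares refit."""
    # 3.4.1: keep entries within a factor alpha of the largest magnitude.
    S = np.abs(x_hat) >= alpha * np.abs(x_hat).max()
    # 3.4.2: least squares estimate on the selected columns, zeros elsewhere.
    x_improved = np.zeros_like(x_hat)
    x_improved[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
    return x_improved
```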

3.5. The fast IRLS-ATANR

The IRLS-ATANR of Section 3.3 can handle problems of small size. However, once the dimensions of the measurement matrix A increase, for example to n = 2000 and m = 4000, it becomes very slow, mainly because of the cost of solving (15). In this section, we aim to accelerate ATANR by reducing the computation load of (15). The operations are divided into two steps.

3.5.1. Dimension reduction

According to (14) and (15), we need to invert a matrix of size m × m, an O(m³) operation. It can be reduced by applying the matrix inversion lemma [21]:

$$M_k^{-1} = (A^{H}A + \lambda\Lambda(x^{k}))^{-1} = -\frac{1}{\lambda}\Lambda(x^{k})^{-1}A^{H}\big(A\Lambda(x^{k})^{-1}A^{H} + \lambda I\big)^{-1}A\Lambda(x^{k})^{-1} + \frac{1}{\lambda}\Lambda(x^{k})^{-1} \tag{18}$$

Λ(x^k) is a diagonal matrix and its diagonal elements are nonzero according to (8) and (12), so its inverse is easily computed. In this way, the cost of inverting M_k is reduced to O(n³).
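The identity (18) can be applied without ever forming the m × m matrix. The sketch below solves M_k d = r for a right-hand side r by factorising only an n × n matrix (written here in the algebraically equivalent form A(λΛ)^{-1}A^H + I); the function name and interface are illustrative.

```python
import numpy as np

def solve_reduced(A, lam_diag, lam, rhs):
    """Solve (A^H A + lam * diag(lam_diag)) d = rhs via the inversion lemma (18),
    factorising only an n x n system instead of an m x m one."""
    Dinv = 1.0 / (lam * lam_diag)                 # diagonal of (lam * Lambda)^{-1}
    ADinv = A * Dinv                              # A (lam*Lambda)^{-1}: scales the columns of A
    K = ADinv @ A.conj().T + np.eye(A.shape[0])   # n x n matrix A (lam*Lambda)^{-1} A^H + I
    t = Dinv * rhs
    return t - ADinv.conj().T @ np.linalg.solve(K, A @ t)
```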

3.5.2. Active set selection

Although the cost of inverting M_k has been reduced to O(n³), the computation load is still heavy when n is large. Note that the matrix inversion of M_k in (18) involves all columns of A; for an s-sparse solution this is not necessary. The idea is to replace A in (13) by the corresponding active set A_S at each iteration. Here A_S is determined by

$$S := \{t : |x_t^{k}| \ge \alpha \max_i |x_i^{k}|\}$$

with the threshold parameter α ∈ [0, 1). The cost of each iteration then gradually decreases towards O(s³). The accelerated ATANR is called the fast IRLS-ATANR (IRLS-FATANR), and its main steps are summarized in Algorithm 2.

Algorithm 2. The framework of IRLS-FATANR.

Inputs: A ∈ C^{n×m}, y ∈ C^n, ξ = 10^{-5}, δ = 10^{-5}, λ = 10^{-3}, maxIters = 100, k = 1, rstop = 1, x^1 = 0, α = 0.01
{k denotes the iteration number; rstop denotes the ratio ‖x^{k+1} − x^k‖₂/‖x^k‖₂, and its initial value is set to 1.}
1:  calculate A^H A, A^H y
Main loop:
2:  while k ≤ maxIters and rstop > ξ do
3:    if x^k == 0 and k == 1 then
4:      A_S = A and x_S^k = x^k
5:      A_S^H A_S, A_S^H y ← A^H A, A^H y
6:    else
7:      S := {t : |x_t^k| ≥ α max_i |x_i^k|}
8:      A_S = {a_i : a_i ∈ A, i ∈ S}
9:      x_S^k = {x_i^k : x_i^k ∈ x^k, i ∈ S}
10:   end if
11:   M_k(x_S^k) = A_S^H A_S + λΛ(x_S^k)
12:   ∇φ(x_S^k) = M_k(x_S^k)x_S^k − A_S^H y
13:   if row(A_S) > col(A_S) then
14:     solve (15) for d_S^k with the PCG method (or the Matlab backslash operator)
15:   else
16:     solve (15) for d_S^k by applying the inverse (18)
17:   end if
18:   d^k ← d_S^k
19:   search τ_k by the Armijo rule (16)
20:   update x^k: x^{k+1} = x^k + τ_k d^k
21:   calculate rstop by (17)
22:   increase k: k = k + 1
23: end while
Outputs: the desired result is x^k
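The per-iteration restriction in Algorithm 2 can be written as a small helper; the function name is illustrative, and a plain solve (or the reduced solve sketched in Section 3.5.1) is then applied to the restricted system.

```python
import numpy as np

def restrict_to_active_set(A, x, alpha=0.01):
    """Per-iteration active set restriction of Algorithm 2: keep only the columns
    of A and entries of x within a factor alpha of the largest magnitude.
    When x is all zeros (first iteration), every index is kept, i.e. A_S = A."""
    S = np.flatnonzero(np.abs(x) >= alpha * np.abs(x).max())
    return A[:, S], x[S], S
```

The reduced system M_k(x_S^k) d_S^k = −∇φ(x_S^k) then has only |S| unknowns, and d_S^k is scattered back into a full-length direction before the Armijo step.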

4. Discussions

In this section, we summarize and analyze the common properties of penalty functions that are suitable for sparse reconstruction. Then we discuss the convergence properties of ℓ1 regularization and of ATANR.

4.1. Properties of the penalty kernel functions

(1) The penalty kernel should be a real-valued and nonnegative function. On the one hand, in our context the penalty kernel handles both real- and complex-valued signals, and evaluating their magnitudes is a convenient way to estimate the number of nonzero entries; the penalty kernel is therefore best defined as a real-valued function on the domain [0, +∞). On the other hand, the designed penalty kernel should also approximate the sparsity of a signal. That is to say, the penalty kernel should be positive where the entry is nonzero; in addition, when the entry is zero, the penalty is best defined as zero, just like the ℓ0 quasi-norm.

(2) The penalty kernel should be right continuous at the origin, monotone nondecreasing and first order differentiable in the range (0, +∞). Intuitively, the response to a large entry (magnitude) should not be smaller than that to a small one, so the penalty kernel should be monotone nondecreasing. According to the iterative scheme, the penalty must have at least a first order derivative. An explicit expression of the penalty function is not necessary, but it is recommended in order to apply the line search step during the iterations.

(3) The first order derivative should be close to zero when the variable is nonzero. Consider the noise free case, and assume x̃ is the true value, such that

$$\|y - A\tilde{x}\|_2 = 0 \tag{19}$$

Generally, we desire x̃ to be a feasible solution of (13), namely

$$\lambda\Lambda(\tilde{x})\tilde{x} = A^{H}(y - A\tilde{x}) \tag{20}$$

It implies

$$\lambda\Lambda(\tilde{x})\tilde{x} = 0$$

and then we obtain

$$\frac{J'(|\tilde{x}_i|)\tilde{x}_i}{|\tilde{x}_i| + \delta} = 0 \tag{21}$$

Property 1. The IRLS-ATANR cannot converge to x̃ exactly if the penalty function satisfies J'(|x̃_i|) ≠ 0 for all x̃_i ≠ 0.

Property 2. If x̃ is the solution of (20), the penalty kernel function should satisfy J'(|x̃_i|) = 0 for all x̃_i ≠ 0.

Proof. If J'(|x̃_i|) ≠ 0 for all x̃_i ≠ 0, then (21) drives x̃ to be a zero vector. However, this is not consistent with (19) for ‖y‖₂ ≠ 0. Therefore, it is required that J'(|x̃_i|) = 0 for all x̃_i ≠ 0. □

To distinguish the zero entries from the nonzero ones, and to make sure that the entries of Λ(x) are not all zeros, it is required that J'(|x̃_i|) ≠ 0 where x̃_i = 0. However, if we desire a sparse solution, a penalty kernel with J'(|x̃_i|) = 0 for all x̃_i ≠ 0 may not give a sparsity driving effect. That is to say, we should not require Property 2 to be satisfied exactly in our iterative scheme; instead, J'(|x̃_i|) → 0 when x̃_i ≠ 0.

4.2. Convergence properties

Here we discuss the convergence properties of ℓ1 regularization and of ATANR within the IRLS framework.

The ℓ1 regularization: its derivative diagonal matrix has the expression

$$\Lambda(x) = \mathrm{diag}\Big(\frac{1}{|x_i| + \delta}\Big)$$

so Λ(x)x essentially computes the signs of x. According to Property 1, ℓ1 regularization cannot converge to the true value x̃ exactly. However, it can approach x̃ closely if we select a properly small λ.

For ℓ1 regularization, it is interesting that ‖Λ(x)x‖₂² approximates the sparsity of x, namely ‖Λ(x)x‖₂² ≈ ‖x‖₀. According to (13), the solution x̂_ℓ1 of ℓ1 regularization should satisfy

$$\lambda\|\hat{x}_{\ell_1}\|_0 \approx \|A^{H}(y - A\hat{x}_{\ell_1})\|_2^{2}$$

On the one hand, we desire ‖x̃ − x̂_ℓ1‖₂² to be sufficiently small, so λ should be small enough; on the other hand, too small a λ results in the least squares solution.

The ATANR: typically, we consider the configuration γ = 1 and μ = 0. The diagonal elements of Λ(x) are then

$$\Lambda_{ii}(x) = \frac{2}{\pi\big(Q|x_i|^{2} + 1/Q\big)\big(|x_i| + \delta\big)}$$

It is obvious that Λ_ii(x) is close to zero for nonzero x_i when Q is large enough. However, too large a Q leads to the least squares solution. On the contrary, Q should also not be too small (Q → 0), which makes J_a(x) approach 0 and again yields the least squares solution. For x_i = 0, Λ_ii(x) = 2Q/(πδ) ≫ 1. This implies that ATANR can also approach the true value x̃ very closely. Compared with ℓ1 regularization, ATANR is much less sensitive to λ, because Λ(x)x ≈ 0 for any sparse x, while ℓ1 regularization makes Λ(x)x ≈ sgn(x). This is the significant difference between ATANR and ℓ1 regularization.
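This difference is easy to verify numerically. The sketch below evaluates Λ(x)x for a randomly generated sparse complex vector under both penalties (γ = 1, μ = 0, and illustrative values of Q and δ): the ℓ1 version behaves like the signs of x, with ‖Λ(x)x‖₂² close to ‖x‖₀, while the ATANR version is nearly zero.

```python
import numpy as np

rng = np.random.default_rng(1)
m, delta, Q = 200, 1e-5, 100.0
x = np.zeros(m, dtype=complex)
idx = rng.choice(m, 5, replace=False)
x[idx] = rng.standard_normal(5) + 1j * rng.standard_normal(5)
t = np.abs(x)

l1_term   = x / (t + delta)                                         # Lambda(x)x for the l1 penalty
atan_term = 2.0 * x / (np.pi * (Q * t**2 + 1.0 / Q) * (t + delta))  # Lambda(x)x for ATANR

print(np.linalg.norm(l1_term) ** 2)    # close to ||x||_0 = 5: sign-like entries on the support
print(np.linalg.norm(atan_term) ** 2)  # close to 0 for this sparse x when Q is large
```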

5. Numerical experiments for random signs ensemble

In this section, we focus on the performance of ATANR applied to the random signs ensemble, a class of matrices satisfying the RIP. The entries of a matrix in this ensemble are drawn from a Bernoulli (±1) distribution, and the columns are normalized to unit Euclidean length. In these simulations, the sensing matrices were generated by the MATLAB function MatrixEnsemble from SparseLab 2.0 [18]. LASSO and ℓ1LS invoked the lassoen function [22] and the l1_ls function [23], respectively. In addition, we assumed that n satisfies n > 2s log(m/n) [24].

5.1. Influence of the parameter Q

The parameter Q adjusts the shape of the ATANR penalty kernel. As shown in Fig. 1, the larger Q is, the more the kernel resembles the ℓ0 quasi-norm. However, a larger Q does not guarantee better performance. Fig. 2 shows how Q influences ATANR under several noise situations with λ = 10^{-3}. In these simulations, we selected a measurement matrix of size 100 × 512 from this ensemble, and the input signal was 5-sparse, complex-valued and randomly generated. Each sub-figure in Fig. 2 was obtained from 100 numerical simulations and plots the average mean square error (MSE) over the 100 experiments. From Fig. 2 we observe the following:

(1) In the noise free case, the MSE decreases monotonically as Q varies from 1 to 10^4. However, when Q becomes too large, about 10^5, the performance degrades again.
(2) Q = 1 always gives poor performance in our experiments.
(3) When the SNR is better than 30 dB and Q varies from 10 to 10^4, ATANR performs well.
(4) When the SNR decreases to about 10 dB or below, all values of Q give nearly the same poor performance.

Therefore, we usually choose Q in the range [10, 10^4], with Q = 100 as the default setting.

Fig. 1. The penalty kernels of ATANR with different values of Q: (a) γ = 1 and (b) γ = Q.

Fig. 2. The influence of the parameter Q under different SNR situations: (a) noise free, (b) SNR = 50 dB, (c) SNR = 30 dB and (d) SNR = 10 dB.

Fig. 3. The influence of the regularization parameter λ for different sparse reconstruction methods when SNR = 50 dB.

5.2. Performance comparison

In this subsection, we compare the performance of sparse reconstruction methods on the complex-valued problem in which the sensing matrix and the input signal are both complex-valued. Seven algorithms were considered: IRLS-ATANR, IRLS-FATANR, IRLS-L1, IRLS-MC+, OMP, LASSO and ℓ1LS. IRLS-L1 and IRLS-MC+ denote the ℓ1 regularization and the MC+ method implemented within the IRLS framework, respectively. The stop condition of OMP was controlled by its maximum number of iterations, which was set to 2‖x‖₀. It is worth noting that the function l1_ls cannot deal with the complex-valued problem directly; in order to use it, we transformed the problem into the form of (2).

5.2.1. Influence of the regularization parameter λ

The relative mean square error (RMSE) defined in (22) was employed to examine the performance of these algorithms for different values of λ:

$$\mathrm{RMSE} = 10\log_{10}\Big(\frac{\|\hat{x} - \tilde{x}\|_2^{2}}{\|\tilde{x}\|_2^{2}}\Big) \tag{22}$$

The input signal x was a 5-sparse complex-valued vector and its nonzero entries were generated by the model

$$x = 1 + e^{j\phi}, \quad \text{where } \phi \sim U(0, 2\pi),\ j = \sqrt{-1} \tag{23}$$

Besides, A ∈ C^{200×500} and SNR = 50 dB. The values of λ were generated logarithmically from 10^{-5} to 10^4. For IRLS-MC+, its initial x^0 was a zero vector, namely 0 ∈ R^m, and its γ was set to 1.7. Fig. 3 shows that when λ ≤ 10^{-2}, all methods except IRLS-MC+ obtain exact results. However, when λ = 10, IRLS-L1, IRLS-MC+ and l1_ls all fail, whereas IRLS-ATANR still performs well even when λ = 100. In consequence, ATANR is less sensitive to λ than the other three methods.
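For readers who want to reproduce this kind of experiment, the following NumPy sketch generates a random signs sensing matrix, a sparse complex signal following model (23), measurements at a prescribed SNR and the RMSE of (22). It is a hand-rolled stand-in for the SparseLab MatrixEnsemble call used in the paper, the complex variant of the ensemble used for Table 1 is not spelled out there, and the SNR convention below (ratio of signal to noise energy) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, s = 200, 500, 5

# Random signs ensemble: Bernoulli ±1 entries with unit-norm columns
# (stand-in for SparseLab's MatrixEnsemble).
A = rng.choice([-1.0, 1.0], size=(n, m))
A /= np.linalg.norm(A, axis=0)

# Nonzero entries following model (23): x_i = 1 + exp(j*phi), phi ~ U(0, 2*pi).
x = np.zeros(m, dtype=complex)
support = rng.choice(m, s, replace=False)
x[support] = 1.0 + np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, s))

def add_noise(y_clean, snr_db):
    """Circular complex Gaussian noise at a prescribed SNR; the convention
    SNR = 10*log10(||y_clean||^2 / ||n||^2) is an assumption, not stated in the paper."""
    w = rng.standard_normal(y_clean.shape) + 1j * rng.standard_normal(y_clean.shape)
    w *= np.linalg.norm(y_clean) / (np.linalg.norm(w) * 10.0 ** (snr_db / 20.0))
    return y_clean + w

def rmse_db(x_hat, x_true):
    """Relative mean square error of Eq. (22), in dB."""
    return 10.0 * np.log10(np.linalg.norm(x_hat - x_true) ** 2
                           / np.linalg.norm(x_true) ** 2)

y = add_noise(A @ x, snr_db=50.0)   # measurements at SNR = 50 dB
```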

5.2.2. Noise performance

Here we compare the seven algorithms under different noise scenarios. A was selected from C^{200×500} and x was generated by (23) with ‖x‖₀ = 16. For IRLS-MC+, γ = 50. For IRLS-FATANR, α = 0.1 when SNR ≥ 20 dB, and α = 0.3 when SNR = 10 dB. Table 1 shows that all methods provide sound recoveries when the SNR is better than 20 dB, while IRLS-ATANR, IRLS-FATANR and OMP usually perform better than the other four algorithms. Even when SNR = 10 dB, they still provide reasonable solutions, with an average RMSE better than −20.7336 dB.

Table 1. The average RMSE (dB) for the complex-valued sensing matrix of the random signs ensemble under different noise scenarios, over 20 Monte Carlo simulations with ‖x‖₀ = 16.

| SNR | IRLS-ATANR | IRLS-FATANR | IRLS-L1 | IRLS-MC+ | OMP | LASSO | ℓ1LS |
|---|---|---|---|---|---|---|---|
| 50 dB | −60.9124 | −60.9124 | −48.9835 | −58.1297 | −60.9124 | −51.3674 | −49.7225 |
| 30 dB | −40.8095 | −40.8095 | −28.7603 | −29.617 | −40.8095 | −30.9786 | −28.1525 |
| 20 dB | −31.0088 | −31.0088 | −18.7641 | −14.9447 | −31.0088 | −18.8705 | −18.0976 |
| 10 dB | −20.8748 | −20.7336 | −8.9382 | 146.5947 | −21.1498 | −9.22651 | −8.16641 |

5.2.3. The large size problem

Here we consider the large size problem, in which the size of A reaches C^{2000×4000}. This problem obviously cannot be solved efficiently by LASSO and ℓ1LS. x was 50-sparse and generated by (23). The simulation was carried out on a Windows 7 64-bit system with Matlab 2012b, an Intel i5-3550 processor and 8 GB RAM. Table 2 lists the CPU time of the seven methods applied to the large size problem. All algorithms obtain precise recoveries, while OMP and IRLS-FATANR cost much less CPU time than the others.

Table 2. The CPU time and RMSE of the seven sparse reconstruction methods for the large size problem (2000 × 4000).

| | IRLS-ATANR | IRLS-FATANR | IRLS-L1 | IRLS-MC+ | OMP | LASSO | ℓ1LS |
|---|---|---|---|---|---|---|---|
| RMSE (dB) | −49.0337 | −67.0662 | −53.1918 | −65.7538 | −57.1232 | −46.3993 | −53.1596 |
| Time (s) | 74.82578 | 8.899855 | 36.22541 | 32.90746 | 4.521926 | 198.0448 | 73.5398 |

Fig. 4. The measurement matrix and its PSF: (a) 1-D lower-resolution PSF matrix and (b) the original and enhanced PSF.

6. Resolution enhancement

In this section, we concentrate on sensing matrices with high mutual coherence. We take the LASAR [25] as an example to apply ATANR to the resolution enhancement problem.

6.1. Introduction to the lower-resolution PSF matrix

The LASAR 3-D image is usually sparse [26], so its resolution enhancement problem can be handled by sparse reconstruction methods. However, it is usually complex-valued and cannot be handled directly and efficiently by the existing algorithms. In order to use ATANR, we should first know the measurement matrix of the LASAR imaging, which is also called the lower-resolution point spread function (PSF) matrix here. For simplicity and clarity, we consider a 1-D PSF matrix, generated by back projection (BP) imaging [26]:

$$A = \{a_1, a_2, \ldots, a_i, \ldots, a_m\}$$

where a_i denotes the BP imaging result when there is only one scatterer, with scattering coefficient 1, at the ith position. In the resolution enhancement problem, n ≪ m does not hold (here n = m) and A is usually singular, but the input signal x is still sparse. Fig. 4 draws the 1-D PSF matrix and the magnitudes of the PSF before and after the enhancement.
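The actual PSF matrix comes from BP imaging of single scatterers and is not reproduced here. Purely as an illustration of its structure and of how mutual coherence is measured, the sketch below builds a toy square matrix whose columns are shifted copies of a broad sinc-shaped mainlobe and computes its mutual coherence; the width and size are arbitrary choices, not values from the paper.

```python
import numpy as np

m = 128
u = np.arange(m)
# Toy stand-in for a low-resolution 1-D PSF: each column is the (broad) response
# to a unit scatterer at one pixel; the real matrix would come from BP imaging [26].
psf = np.sinc((u - m // 2) / 9.0)                                    # wide mainlobe -> low resolution
A = np.stack([np.roll(psf, i - m // 2) for i in range(m)], axis=1)   # n = m, square
A = A / np.linalg.norm(A, axis=0)

# Mutual coherence: the largest normalised inner product between distinct columns.
G = np.abs(A.conj().T @ A)
np.fill_diagonal(G, 0.0)
print(G.max())    # close to 1 for a wide mainlobe, i.e. high mutual coherence
```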

Table 3. The average RMSE (dB) for the discrete scatters case under different noise scenarios, over 20 Monte Carlo simulations with ‖x‖₀ = 16.

| SNR | IRLS-ATANR | IRLS-FATANR | IRLS-L1 | IRLS-MC+ | OMP | LASSO | ℓ1LS |
|---|---|---|---|---|---|---|---|
| 50 dB | −59.848 | −60.0901 | −28.7784 | −20.6925 | −0.34473 | −30.1853 | −29.522 |
| 30 dB | −1.15982 | −26.7955 | −1.41696 | 27.71198 | −0.65694 | −9.61569 | −1.95876 |
| 20 dB | 5.411775 | −17.8015 | 11.41104 | 41.33241 | −0.31786 | 1.889679 | 10.82803 |
| 10 dB | 9.736175 | 7.752013 | 24.63711 | 49.761 | −0.8733 | 10.0931 | 23.8348 |

Fig. 5. The influence of the initial value of x^k in the resolution enhancement case: (a) the original and recovered signal (x^0 = 0); (b) the residual error of IRLS-FATANR (x^0 = 0); (c) the original and recovered signal (x^0 = A^†y); and (d) the residual error of IRLS-FATANR (x^0 = A^†y).

Fig. 6. The performance of different methods for the discrete scatters case when SNR = 30 dB: (a) the original signal and the recovery of IRLS-FATANR (α = 0.3); (b) the residual error of IRLS-ATANR; (c) the residual error of IRLS-FATANR; (d) the residual error of IRLS-L1; (e) the residual error of IRLS-MC+; (f) the residual error of OMP; (g) the residual error of LASSO; and (h) the residual error of ℓ1LS.

Fig. 4(b) shows that the resolution enhancement should generally have two effects: shrinking the mainlobe and suppressing the sidelobes. In Fig. 4(b), the mainlobe of the PSF is 18 pixels wide and the mutual coherence is as high as 0.9747. It is obvious that the mutual coherence of A depends on the imaging spacing and can become very large as the imaging spacing becomes small. As far as we know, high mutual coherence always indicates that it is difficult to distinguish the pixels in the same resolution cell.

Fig. 7. The performance for the continuous block case (TEST I) with SNR = 30 dB: (a) the original continuous block signal and the IRLS-FATANR recovery (local); (b) the residual error of IRLS-ATANR; (c) the residual error of IRLS-FATANR; (d) the residual error of LASSO; and (e) the residual error of ℓ1LS.

Assume (13) holds for the solution x̂, and let r_e = y − Ax̂ denote the residual error of y. Then we have

$$\lambda\Lambda(\hat{x})\hat{x} = A^{H}r_e$$

When λ is properly small, LASSO, OMP and ATANR all aim to minimize the residual r_e. However, OMP only adds atoms, without pruning operations, and LASSO requires that the solution x̂ have the same signs as A^H r_e [7]. ATANR requires that A^H(y − Ax̂) be small enough to match λΛ(x̂)x̂ → 0. Besides, the parameter α in ATANR keeps only the large entries indexed by S at each iteration; this improves the anti-noise performance of ATANR by setting the small entries outside S to zero.

Fig. 8. The performance for the continuous block case (TEST II) with SNR = 30 dB: (a) the original continuous block signal and the IRLS-FATANR recovery (local); (b) the residual error of IRLS-ATANR; (c) the residual error of IRLS-FATANR; (d) the residual error of LASSO; and (e) the residual error of ℓ1LS.

It seems that the maximum magnitude and the signs of A^H(y − Ax^k) can be easily influenced by noise during the iterations in the resolution enhancement problem. This might account for the failure of LASSO, OMP and ℓ1LS in some situations (Table 3).

6.2. Initial value of x^k in resolution enhancement

Comparisons here show that it is better to find a good initial value of x before performing the experiments. Fig. 5 shows that the zero vector (x^0 = 0) results in a correct solution, whereas the ordinary least squares (OLS) solution (x^0 = A^†y) gives a bad result. This motivates us to use the zero vector as the initial value in resolution enhancement. The behavior can be explained from two aspects: on the one hand, ATANR is a non-convex method and might therefore have more than one minimum; on the other hand, it could also be due to the high mutual coherence of A.

6.3. Performance for the discrete scatters case

The discrete scatters case indicates that no two nonzero entries of x are adjacent, as shown in Fig. 4(a). In these simulations, x was a 16-sparse vector, and its nonzero entries were generated by the model

$$x = p\,e^{j\phi} + 1 \tag{24}$$

where p ~ N(6, 2²) and ϕ ~ U(0, 2π). Fig. 6 shows the performance of the seven algorithms for the discrete scatters case when SNR = 30 dB: IRLS-FATANR gives a precise solution, whereas the other six methods all perform badly in this simulation. Their failures are mainly owing to the high mutual coherence of the 1-D PSF matrix A. In addition, plenty of simulations under different SNRs were executed. For IRLS-FATANR, α = 0.1 when SNR = 50 dB, and α = 0.3 for the other situations. Table 3 shows that the seven methods, except OMP, all perform well when SNR = 50 dB. OMP fails because it adds only one basis vector to the active set at each iteration, selected by the maximum current coherence, and this choice is easily influenced by noise when the mutual coherence of A is high. When the SNR decreases to 30 dB, only IRLS-FATANR gives correct solutions, with an average RMSE of about −26.7955 dB. This is attributed to the active set selection that IRLS-FATANR performs at each iteration through the parameter α, which indicates that α improves the anti-noise performance of ATANR. At SNR = 20 dB, IRLS-FATANR still performs well, with an average RMSE of −17.8015 dB over 20 random experiments (Table 3). However, when the SNR is as bad as 10 dB, all the examined methods fail to provide good recoveries.

6.4. Performance for the continuous block case

The continuous block case indicates that the nonzero components are adjacent, as shown in Fig. 7. We designed two different continuous blocks:

TEST I: the nonzero entries were generated by (24).
TEST II: the nonzero entries were generated by x = 6e^{jϕ}, which gives equal magnitudes.

In these simulations, α was set to 0.1. Figs. 7 and 8 show that IRLS-FATANR obtains reasonable solutions in both configurations (SNR = 30 dB). Table 4 shows that when SNR = 50 dB, the seven methods, except OMP, all obtain reasonable solutions. When SNR = 30 dB, only IRLS-FATANR gives a proper solution, with an average RMSE of −18.4897 dB. As the SNR degrades to 20 dB, all methods fail to provide sound solutions.

Table 4. The average RMSE (dB) for the continuous block case (TEST I) under different noise scenarios, over 20 Monte Carlo simulations with ‖x‖₀ = 20.

| SNR | IRLS-ATANR | IRLS-FATANR | IRLS-L1 | IRLS-MC+ | OMP | LASSO | ℓ1LS |
|---|---|---|---|---|---|---|---|
| 50 dB | −38.2736 | −37.0387 | −25.6277 | −17.0493 | −0.50591 | −27.8745 | −26.2619 |
| 30 dB | 6.219955 | −18.4897 | 1.054745 | 31.84331 | −0.04415 | −7.23268 | 0.508711 |
| 20 dB | 12.13062 | 6.229572 | 14.31614 | 42.16249 | 1.098687 | 2.328133 | 13.63833 |

7. Conclusions

In this paper, we first proposed the arctangent regularization algorithm and implemented it within the IRLS framework; it is able to handle complex-valued sparse reconstruction problems of small size directly. In order to solve problems of large size, we accelerated IRLS-ATANR by the dimension reduction and active set selection steps, which results in another version of ATANR, called IRLS-FATANR. Secondly, the common properties of penalty functions suitable for sparse reconstruction were discussed and analyzed in detail. We found that the key difference between the arctangent penalty kernel and the ℓ1 norm is that its first order derivative is close to zero when the variable is nonzero; this property makes ATANR less sensitive to the regularization parameter λ. Finally, plenty of simulations verify that IRLS-ATANR and IRLS-FATANR perform considerably better than the other five examined methods:

- IRLS-ATANR, IRLS-FATANR and OMP usually perform better than the other examined methods for the random signs ensemble.
- IRLS-FATANR can handle the large size problem efficiently and correctly.
- IRLS-FATANR has better anti-noise performance owing to the threshold parameter α.
- In resolution enhancement, IRLS-FATANR usually obtains correct recoveries for the discrete scatters case when the SNR is better than 20 dB; for the continuous block case, reasonable solutions are obtained when SNR ≥ 30 dB.

In future work, we will focus on the selection and influence of the threshold parameter α, in order to improve the performance of ATANR in resolution enhancement.

Acknowledgment

The authors would like to thank the reviewers for the valuable comments that have helped us improve the presentation of this paper. We would also like to thank the 701-4 lab of the University of Electronic Science and Technology of China (UESTC) for their challenging discussions and comments.

References

[1] Y. Tsaig, D.L. Donoho, Extensions of compressed sensing, Signal Process. 86 (3) (2006) 549–571.
[2] D.L. Donoho, Compressed sensing, IEEE Trans. Inf. Theory 52 (4) (2006) 1289–1306.
[3] D.L. Donoho, For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution, Commun. Pure Appl. Math. 59 (6) (2006) 797–829.
[4] B.K. Natarajan, Sparse approximate solutions to linear systems, SIAM J. Comput. 24 (2) (1995) 227–234.
[5] E.J. Candes, J.K. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Commun. Pure Appl. Math. 59 (8) (2006) 1207–1223.
[6] R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Stat. Soc. Ser. B (Methodol.) (1996) 267–288.
[7] B. Efron, T. Hastie, I. Johnstone, R. Tibshirani, Least angle regression, Ann. Stat. 32 (2) (2004) 407–499.
[8] H. Zou, T. Hastie, Regularization and variable selection via the elastic net, J. R. Stat. Soc.: Ser. B (Stat. Methodol.) 67 (2) (2005) 301–320.
[9] S.-J. Kim, K. Koh, M. Lustig, S. Boyd, D. Gorinevsky, An interior-point method for large-scale l1-regularized least squares, IEEE J. Sel. Top. Signal Process. 1 (4) (2007) 606–617.
[10] C.-H. Zhang, Nearly unbiased variable selection under minimax concave penalty, Ann. Stat. 38 (2) (2010) 894–942.
[11] R. Mazumder, J.H. Friedman, T. Hastie, SparseNet: coordinate descent with nonconvex penalties, J. Am. Stat. Assoc. 106 (495) (2011) 1125–1138.
[12] S.-J. Wei, X.-L. Zhang, J. Shi, Linear array SAR imaging via compressed sensing, Prog. Electromagn. Res. 117 (2011) 299–319.
[13] A. Bruckstein, D. Donoho, M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images, SIAM Rev. 51 (1) (2009) 34–81.
[14] M. Çetin, W.C. Karl, Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization, IEEE Trans. Image Process. 10 (4) (2001) 623–631.
[15] T. Hastie, R. Tibshirani, J.H. Friedman, The Elements of Statistical Learning, vol. 1, Springer, New York, 2001.
[16] N.E. Huang, Z. Shen, S.R. Long, M.C. Wu, H.H. Shih, Q. Zheng, N.-C. Yen, C.C. Tung, H.H. Liu, The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis, Proc. R. Soc. Lond. Ser. A: Math. Phys. Eng. Sci. 454 (1971) (1998) 903–995.
[17] F. Gianfelici, G. Biagetti, P. Crippa, C. Turchetti, Multicomponent AM–FM representations: an asymptotically exact approach, IEEE Trans. Audio Speech Lang. Process. 15 (3) (2007) 823–837.
[18] SparseLab, 2013. URL: http://sparselab.stanford.edu/.
[19] D.P. Wipf, B.D. Rao, Sparse Bayesian learning for basis selection, IEEE Trans. Signal Process. 52 (8) (2004) 2153–2164.
[20] W. Sun, Y.-X. Yuan, Optimization Theory and Methods: Nonlinear Programming, vol. 1, Springer Science+Business Media, 2006.
[21] N.J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, 1996.
[22] K. Sjöstrand, Matlab implementation of LASSO, LARS, the elastic net and SPCA, version 2.0, 2005. URL: http://www2.imm.dtu.dk/pubdb/p.php?3897.
[23] K. Koh, l1_ls: Simple Matlab solver for l1-regularized least squares problems, 2008. URL: http://www.stanford.edu/~boyd/l1_ls/.
[24] D. Donoho, J. Tanner, Counting faces of randomly projected polytopes when the projection radically lowers dimension, J. Am. Math. Soc. 22 (1) (2009) 1–53.
[25] G. Xiang, X. Zhang, J. Shi, Airborne 3-D forward looking SAR imaging via chirp scaling algorithm, in: Geoscience and Remote Sensing Symposium (IGARSS), 2011 IEEE International, 2011, pp. 3011–3014.
[26] S. Jun, Z. Xiaoling, X. Gao, J. Jianyu, Signal processing for microwave array imaging: TDC and sparse recovery, IEEE Trans. Geosci. Remote Sens. 50 (11) (2012) 4584–4598.