Infrared Physics & Technology 67 (2014) 306–314
Sparse hyperspectral unmixing based on smoothed ℓ0 regularization

Chengzhi Deng, Shaoquan Zhang, Shengqian Wang, Wei Tian, Zhaoming Wu
Department of Information Engineering, Nanchang Institute of Technology, Nanchang 330099, PR China
Highlights

• This paper presents a new sparse hyperspectral unmixing method based on the smoothed ℓ0 norm.
• The smoothed ℓ0 norm is a continuous function that provides a smooth measure of ℓ0 sparsity, offers better accuracy than the ℓ1 norm, and tolerates noise to some extent.
• The experimental results show the effectiveness and accuracy of the proposed method.
Article info

Article history: Received 8 January 2014; Available online 21 August 2014
Keywords: Sparse unmixing; Smoothed ℓ0 norm; Spectral unmixing; Hyperspectral imaging
Abstract

Sparsity-based approaches have recently received much attention in the hyperspectral unmixing area. Sparse unmixing is based on the assumption that each measured pixel in a hyperspectral image can be expressed as a linear combination of a number of pure spectra from a spectral library known in advance. Despite the success of sparse unmixing based on the ℓ0 or ℓ1 regularizer, the computational complexity of the former and the weaker sparsity of the latter limit either efficiency or accuracy. Because the smoothed ℓ0 regularizer is much easier to optimize than the ℓ0 regularizer and enforces stronger sparsity than the ℓ1 regularizer, in this paper we choose the smoothed ℓ0 norm as an alternative regularizer and model hyperspectral unmixing as a constrained smoothed-ℓ0–ℓ2 optimization problem, yielding the SL0SU algorithm. We then solve it with a variable-splitting augmented Lagrangian algorithm. Experimental results on both simulated and real hyperspectral data demonstrate that the proposed SL0SU method is more effective and accurate for hyperspectral unmixing than the state-of-the-art SUnSAL method.
© 2014 Elsevier B.V. All rights reserved.
1. Introduction

Hyperspectral remote sensing collects and processes electromagnetic spectrum information from the Earth's surface at hundreds of narrow, contiguous wavelength bands. It has been widely applied in fields such as target detection, material mapping, and material identification. However, because of the low spatial resolution of the sensors and the natural diversity of surface features, a single pixel of a hyperspectral image rarely contains only one material; it is usually a mixture. Spectral unmixing was proposed to deal with this problem: it estimates the number of endmembers and the corresponding fractional abundances in each mixed pixel [1]. The models used to analyze mixed pixels can be divided into linear and nonlinear mixture models [2].
[email protected] (C. Deng),
[email protected] (S. Zhang),
[email protected] (S. Wang),
[email protected] (W. Tian), zmwunit@ foxmail.com (Z. Wu). http://dx.doi.org/10.1016/j.infrared.2014.08.004 1350-4495/Ó 2014 Elsevier B.V. All rights reserved.
Compared with the nonlinear mixture model, the linear mixture model is computationally tractable and flexible, and it has been widely used to solve the unmixing problem. In recent years, many linear unmixing methods have been proposed, including the pixel purity index (PPI) [3], N-FINDR [4], iterative error analysis (IEA) [5], and iterative constrained endmembers (ICE) [6]. However, all of these methods, developed under the framework of linear spectral unmixing, are very likely to fail in highly mixed scenarios. Because it does not require estimating the endmembers from the data, sparse regression, first proposed for this purpose by Iordache et al. [7], has become a new direction for hyperspectral unmixing. In recent years sparse unmixing has become a research hotspot [8–14]. It is based on the assumption that a pixel's spectrum can be expressed as a linear combination of a number of pure spectral signatures from a large spectral library that is known in advance [1]. Because the spectral library is often large, the number of endmembers in the library is much greater than the number of spectral bands. The sparse unmixing model is therefore a typical underdetermined linear system, for which it is difficult to find a unique, stable, and optimal solution [14].
Bioucas-Dias and Figueiredo proposed a representative algorithm, sparse unmixing via variable splitting and augmented Lagrangian (SUnSAL) [15], which uses the ℓ1 norm instead of the ℓ0 norm in the unmixing model. Chen and Zhang proposed ℓp regularizers for hyperspectral unmixing [8] and showed that the solutions of the ℓp (0 < p < 1) regularizers are sparser than those of the ℓ1 regularizer, although this approach easily falls into local optima. Sun and Wu proposed the ℓ1/2 regularizer for hyperspectral unmixing [11]; it also yields sparser solutions than the ℓ1 regularizer, but the method is computationally complex. Finding a sparse and simple method for hyperspectral unmixing therefore remains critical. In this paper, a novel smoothed-ℓ0 regularized sparse unmixing method, named SL0SU, is proposed. In SL0SU, a continuous smooth regularizer based on the smoothed ℓ0 norm [16] is incorporated into the sparse unmixing model. The smoothed ℓ0 norm provides a smooth measure of ℓ0 sparsity, offers better accuracy than the ℓ1 norm, and tolerates noise to some extent; it has been widely used in super-resolution reconstruction [17], compressive sensing radar imaging [18], error correction [19], and so on. Using three simulated data sets and a real hyperspectral data set, the SL0SU algorithm was tested against the SUnSAL algorithm. The experimental results demonstrate that the proposed SL0SU algorithm achieves better spectral unmixing accuracy. The rest of this paper is structured as follows. Section 2 introduces sparse unmixing theory. In Section 3, the SL0SU algorithm is described in detail. Section 4 presents a description of the data sets used, analyzes the experimental results, and gives a quantitative comparison between the SL0SU and SUnSAL unmixing algorithms. Section 5 concludes the paper with some remarks.

2. Sparse hyperspectral unmixing
2.1. Linear mixture model

The linear mixture model assumes that the spectral response of a pixel in any given spectral band is a linear combination of all of the pure spectral signatures present in the pixel, which can be formulated as

y = Ma + n,   (1)

where y is an L × 1 column vector of the observed hyperspectral data, L denotes the number of spectral bands, M denotes an L × q endmember matrix containing q endmembers, a is the q × 1 abundance vector of the endmembers in the pixel, and n denotes an L × 1 vector accounting for model error and noise. Two constraints are usually imposed on the linear mixture model: the abundance non-negativity constraint (ANC) and the abundance sum-to-one constraint (ASC), defined as

a_i ≥ 0,  i = 1, 2, ..., q,   (2)

Σ_{i=1}^{q} a_i = 1.   (3)

2.2. Sparse unmixing

The sparse unmixing model differs somewhat from the model above. It replaces the unknown endmember matrix M with a known, large spectral library A, whose number of endmembers is much greater than the number of spectral bands. The sparse unmixing model can be written as

y = Ax + n,   (4)

where A ∈ R^{L×p} is the spectral library containing p endmembers, L denotes the number of spectral bands, and x is the p × 1 abundance vector corresponding to library A. Since q ≪ p, the vector x contains many zero values and can be regarded as sparse. The ℓ0 regularizer is one of the best choices for this kind of problem. Because A is contaminated by noise in practical applications, or the abundance vector is not exactly sparse, a relaxation is needed. The sparse unmixing problem can then be written as

min_x ||x||_0  subject to  ||y − Ax||_2 ≤ δ,  x ≥ 0,  1^T x = 1,   (5)

where ||x||_0 denotes the ℓ0 norm of the vector x, δ ≥ 0 is the tolerated noise and modeling error, and x ≥ 0 and 1^T x = 1 denote the ANC and ASC, respectively. However, minimizing the ℓ0 term is an NP-hard optimization problem that is difficult to solve. Candès and Tao [20,21] proved that the ℓ0 norm can be replaced by the ℓ1 norm under a certain condition, the restricted isometry property (RIP). The previous problem then becomes

min_x ||x||_1  subject to  ||y − Ax||_2 ≤ δ,  x ≥ 0,  1^T x = 1.   (6)
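As a concrete illustration of the model in (1)–(6), the short NumPy sketch below builds a single mixed pixel from a toy spectral library; the library, the number of active endmembers, and the noise level are arbitrary illustrative values rather than those used later in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

L, p, q = 224, 240, 4            # bands, library size, active endmembers (illustrative)
A = rng.random((L, p))           # toy spectral library: one candidate signature per column

x = np.zeros(p)                  # sparse abundance vector over the whole library
active = rng.choice(p, size=q, replace=False)
x[active] = rng.dirichlet(np.ones(q))   # non-negative and summing to one (ANC + ASC)

n = 0.01 * rng.standard_normal(L)       # model error / noise
y = A @ x + n                           # observed pixel spectrum, Eq. (4)

print(np.count_nonzero(x), float(x.sum()))   # -> 4 active abundances that sum to 1.0
```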
Table 1
SRE (dB) obtained by the SUnSAL and SL0SU algorithms on the three simulated data cubes; the optimal values of λ (and σ for SL0SU) are given in parentheses.

Data cube       SNR (dB)   SUnSAL SRE (dB)          SL0SU SRE (dB)
DC1 (k1 = 2)    20         3.52  (λ = 3×10⁻²)       3.73  (λ = 7×10⁻⁴, σ = 0.12)
                30         9.01  (λ = 1×10⁻²)       10.19 (λ = 2×10⁻⁴, σ = 0.05)
                40         18.22 (λ = 5×10⁻³)       21.95 (λ = 1×10⁻⁴, σ = 0.02)
DC2 (k1 = 4)    20         3.22  (λ = 1×10⁻¹)       3.43  (λ = 1×10⁻³, σ = 0.13)
                30         7.24  (λ = 2×10⁻²)       8.15  (λ = 5×10⁻⁴, σ = 0.06)
                40         13.36 (λ = 3×10⁻³)       15.42 (λ = 1×10⁻⁴, σ = 0.03)
DC3 (k1 = 6)    20         1.7   (λ = 9×10⁻¹)       2.19  (λ = 7×10⁻³, σ = 0.15)
                30         4.55  (λ = 9×10⁻²)       4.87  (λ = 4×10⁻⁴, σ = 0.07)
                40         9.44  (λ = 3×10⁻³)       10.6  (λ = 3×10⁻⁵, σ = 0.04)
where ||x||_1 denotes the ℓ1 norm. By minimizing the respective Lagrangian function, the above constrained optimization problem can be converted into an unconstrained version:

min_x (1/2)||Ax − y||_2^2 + λ||x||_1 + ι_{R+}(x) + ι_{1}(1^T x),   (7)
where λ > 0 is a regularization parameter that weights the two terms of the objective function, and ι_{R+}(x) and ι_{1}(1^T x) are indicator functions enforcing the ANC and the ASC, respectively. There are many strategies for optimizing this kind of criterion, such as Orthogonal Matching Pursuit (OMP) [22], Iterative Spectral Mixture Analysis (ISMA) [23], the Two-Step Iterative Shrinkage/Thresholding (TwIST) algorithm [24], and sparse unmixing via variable splitting and augmented Lagrangian (SUnSAL) [15]; among these, SUnSAL performs best. Nevertheless, although ℓ1 regularization is the best convex approximation to ℓ0 regularization and is computationally efficient, it often introduces extra bias into the unmixing and cannot estimate the fractional abundances with the sparsest solutions. For this reason, we use the smoothed ℓ0 norm in place of the ℓ0 and ℓ1 norms: it is much easier to optimize than the ℓ0 regularizer and enforces stronger sparsity than the ℓ1 regularizer.

3. Unmixing model and algorithm based on the smoothed ℓ0 norm
3.1. Unmixing model based on the smoothed ℓ0 norm

In this paper, we choose a continuous function to approximate the ℓ0 norm instead of minimizing the ℓ0 norm directly. The approximation has a parameter σ that determines its quality. The smoothed ℓ0 function can be written as [16]
f_σ(x) = 1 − exp(−x²/(2σ²)),   (8)

where σ is a positive constant (σ ≠ 0). As σ approaches zero, we have

lim_{σ→0} f_σ(x) = 0 if x = 0, and 1 if x ≠ 0,   (9)

or, approximately,

f_σ(x) ≈ 0 for |x| ≪ σ, and f_σ(x) ≈ 1 for |x| ≫ σ.   (10)

Defining

F_σ(x) = Σ_{i=1}^{m} f_σ(x_i),   (11)

where m is the length of the vector x,
it follows from Eqs. (9) and (10) that ||x||_0 ≈ F_σ(x) for small values of σ; as σ → 0, the approximation tends to equality. The value of σ determines how smooth the function F_σ is: the larger σ, the smoother F_σ; the smaller σ, the closer the behavior of F_σ to the ℓ0 norm [16]. In this article, the smoothed ℓ0 regularizer is used to replace the ℓ1 regularizer in (6), which can be rewritten as
min_x g(x)  subject to  ||y − Ax||_2 ≤ δ,  x ≥ 0,  1^T x = 1,   (12)

g(x) = F_σ(x).   (13)
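To make the behavior of this approximation concrete, the small numeric sketch below (with made-up values) compares F_σ(x) with the exact number of nonzero entries for several values of σ:

```python
import numpy as np

def smoothed_l0(x, sigma):
    """F_sigma(x) of Eq. (11): a smooth approximation of ||x||_0."""
    return np.sum(1.0 - np.exp(-x ** 2 / (2.0 * sigma ** 2)))

x = np.array([0.0, 0.0, 0.7, 0.0, 0.2, 0.1, 0.0])   # illustrative vector, ||x||_0 = 3
for sigma in (0.5, 0.1, 0.02):
    print(sigma, round(smoothed_l0(x, sigma), 3))
# Output approaches the exact count of 3 as sigma shrinks (about 0.72, 2.26, 3.0),
# while larger sigma gives a smoother but looser measure of sparsity.
```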
Eq. (12) can be turned into the following form:
min_x (1/2)||Ax − y||_2^2 + λ g(x) + ι_{R+}(x) + ι_{1}(1^T x),   (14)
where y is an L × 1 column vector of observed hyperspectral data, L denotes the number of spectral bands, A ∈ R^{L×p} is the spectral library containing p endmembers, x is the p × 1 abundance vector of the endmembers, and g(x) denotes the smoothed ℓ0 norm. The following subsection describes how to choose an effective algorithm to solve Eq. (14).

3.2. Unmixing algorithm based on the smoothed ℓ0 norm

Fig. 1. (a) Ground-truth abundances for 50 pixels and k = 4 endmembers randomly extracted from library A (SNR = 30 dB). (b) Abundances estimated by the SUnSAL algorithm. (c) Abundances estimated by the SL0SU algorithm.
Bioucas-Dias and Figueiredo introduced a class of alternating direction algorithms to solve several constrained sparse regression
(CSR) problems [25]. The proposed algorithms are based on the alternating direction method of multipliers (ADMM) [26], which decomposes a difficult problem into a sequence of simpler ones. This approach inspires our method. To solve problem (14), we first introduce an intermediate variable u and transform (14) into the equivalent problem

min_{x,u} (1/2)||Ax − y||_2^2 + λ g(u) + ι_{R+}(u) + ι_{1}(1^T x)  subject to  u = x.   (15)

The augmented Lagrangian of problem (15) is

L(x, u, d) = (1/2)||Ax − y||_2^2 + λ g(u) + ι_{R+}(u) + ι_{1}(1^T x) + (μ/2)||x − u − d||_2^2,   (16)

where μ > 0 is a positive constant and d/μ denotes the Lagrange multipliers associated with the constraint u = x. The basic idea of the augmented Lagrangian method is to seek a saddle point of L(x, u, d), which is also the solution of problem (14). Using the augmented Lagrangian algorithm, we solve problem (14) by iterating between problems (17) and (18):

(x_{k+1}, u_{k+1}) = arg min_{x,u} L(x, u, d_k),   (17)

d_{k+1} = d_k − (x_{k+1} − u_{k+1}).   (18)

The minimization problem (17) is still hard to solve directly, since it involves a nonseparable quadratic term and non-differentiable terms. To address this, the ADMM algorithm [26] is employed, which minimizes over one variable while keeping the other variables fixed. Using ADMM, problem (17) is solved through the following two subproblems with respect to x and u.

(1) x-subproblem: with u_k and d_k fixed, we solve

x_{k+1} = arg min_x (1/2)||Ax − y||_2^2 + ι_{1}(1^T x) + (μ/2)||x − u_k − d_k||_2^2.   (19)

Problem (19) is a linearly constrained quadratic problem whose solution is

x_{k+1} = B^{-1}w − C(1^T B^{-1}w − 1),   (20)

where

B = A^T A + μI,   (21)

C = B^{-1}1(1^T B^{-1}1)^{-1},   (22)

w = A^T y + μ(u_k + d_k).   (23)

If the ASC is not required [15], it can easily be discarded by setting C = 0.
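A minimal NumPy sketch of this closed-form x-update is given below (illustrative only; in an actual implementation B^{-1}w and B^{-1}1 would be precomputed or factorized once, since A and μ do not change across iterations):

```python
import numpy as np

def x_update(A, y, u, d, mu, enforce_asc=True):
    """Closed-form x-subproblem of Eqs. (20)-(23)."""
    p = A.shape[1]
    B = A.T @ A + mu * np.eye(p)             # Eq. (21)
    w = A.T @ y + mu * (u + d)               # Eq. (23)
    Bw = np.linalg.solve(B, w)               # B^{-1} w
    if not enforce_asc:                      # dropping the ASC amounts to C = 0
        return Bw
    B1 = np.linalg.solve(B, np.ones(p))      # B^{-1} 1
    C = B1 / (np.ones(p) @ B1)               # Eq. (22)
    return Bw - C * (np.ones(p) @ Bw - 1.0)  # Eq. (20)
```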
(2) u-subproblem: with x_{k+1} and d_k fixed, we solve

u_{k+1} = arg min_u λ g(u) + (μ/2)||x_{k+1} − u − d_k||_2^2 + ι_{R+}(u).   (24)
Formulation (24) is not a quadratic problem, because g(u) is non-quadratic, so it has no closed-form solution; the steepest descent method is therefore used to solve (24):
u_{k+1} = u_k − γ D,   (25)

where γ is the step size and

D = λ u_k (1 − f_σ(u_k))/σ² + μ(u_k + d_k − x_{k+1})

is the gradient of the objective in (24), with f_σ applied elementwise and all operations taken elementwise. To keep the abundance vector u_{k+1} non-negative, the step size is chosen elementwise as

γ = u_k / [λ u_k (1 − f_σ(u_k))/σ² + μ(u_k + d_k)],   (26)

which yields the multiplicative update rule

u_{k+1} = μ u_k x_{k+1} / [λ u_k (1 − f_σ(u_k))/σ² + μ(u_k + d_k)].   (27)
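A corresponding sketch of this multiplicative u-update is given below, under the reconstruction of Eq. (27) above; the small positive floor on the denominator is an added numerical safeguard, not part of the paper's derivation.

```python
import numpy as np

def u_update(x_new, u, d, lam, mu, sigma):
    """One multiplicative u-update in the spirit of Eq. (27)."""
    # Elementwise gradient of g(u) with f_sigma(t) = 1 - exp(-t^2 / (2 sigma^2)):
    grad_g = u * np.exp(-u ** 2 / (2.0 * sigma ** 2)) / sigma ** 2
    denom = lam * grad_g + mu * (u + d)
    # Floor added here only to guard against division by zero.
    return mu * u * x_new / np.maximum(denom, 1e-12)
```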
Fig. 2. Sparsity of x with the smoothed ℓ0 regularizer and the ℓ1 regularizer using library A (SNR = 40 dB): (a) ℓ1 regularizer, k = 4 endmembers; (b) smoothed ℓ0 regularizer, k = 4 endmembers; (c) ℓ1 regularizer, k = 6 endmembers; (d) smoothed ℓ0 regularizer, k = 6 endmembers.
Fig. 3. The relationship between σ and SRE. (a) With k = 4 endmembers, the optimal σ for different SNRs (20, 30, 40 dB); (b) with SNR = 40 dB, the optimal σ for different numbers of endmembers (k = 2, 4, 6).
So far, all of the subproblems have been addressed. Based on the above description, the main implementation procedure of the proposed method is summarized in Algorithm 1.

Algorithm 1. Unmixing algorithm based on the smoothed ℓ0 norm (SL0SU)
Inputs: the measured hyperspectral data y, the spectral library A, the regularization parameters λ and μ, the parameter σ, the maximum iteration number iter_max, and the stopping threshold ε_stop.
Output: the estimated abundance vector u.
Initialization: initialize u_0, x_0, and d_0.
Main iteration:
Step (1) Compute x_{k+1} by solving the x-subproblem using (20).
Step (2) Compute u_{k+1} by solving the u-subproblem using (27).
Step (3) Update the Lagrange multipliers d_{k+1} according to (18).
Step (4) Set k = k + 1 and go to Step (1).
Termination: stop when ||u_k − x_k||_2^2 ≤ ε_stop or k > iter_max.
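For completeness, the sketch below strings the two subproblem updates together into the main loop of Algorithm 1, reusing the hypothetical x_update and u_update helpers sketched above; the initialization shown is only one simple choice.

```python
import numpy as np

def sl0su(A, y, lam, mu, sigma, iter_max=200, eps_stop=1e-3):
    """Sketch of the main loop of Algorithm 1 (SL0SU)."""
    p = A.shape[1]
    u = np.full(p, 1.0 / p)      # one simple feasible initialization (illustrative)
    d = np.zeros(p)
    for _ in range(iter_max):
        x = x_update(A, y, u, d, mu)            # Step (1): Eq. (20)
        u = u_update(x, u, d, lam, mu, sigma)   # Step (2): Eq. (27)
        d = d - (x - u)                         # Step (3): Eq. (18)
        if np.sum((u - x) ** 2) <= eps_stop:    # termination criterion
            break
    return u
```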
4. Experiments

In this section, we evaluate the unmixing performance of the proposed SL0SU algorithm using three simulated data sets and one real hyperspectral data set. We compare the unmixing results obtained by SL0SU with those obtained by the SUnSAL method [15]. All considered algorithms enforce the abundance non-negativity constraint. The maximum iteration number iter_max and the stopping threshold ε_stop are set to 200 and 0.001, respectively. The signal-to-reconstruction error (SRE) [15] is used to evaluate the accuracy of the unmixing methods; it is defined as
SRE (dB) = 10 log_10 ( E[||x||_2^2] / E[||x − x̂||_2^2] ).   (28)
Here, x denotes the true abundances, x̂ denotes the estimated abundances, and E(·) denotes the expectation operator. Generally speaking, the larger the SRE, the more accurate the unmixing algorithm.
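As a reference, a small function computing the SRE of Eq. (28) might look as follows, assuming abundance matrices arranged with one pixel per column:

```python
import numpy as np

def sre_db(x_true, x_est):
    """SRE of Eq. (28) in dB, for abundance matrices with one pixel per column."""
    num = np.mean(np.sum(x_true ** 2, axis=0))
    den = np.mean(np.sum((x_true - x_est) ** 2, axis=0))
    return 10.0 * np.log10(num / den)
```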
4.1. Simulated data cubes

In our experiments, the spectral library A is a dictionary of mineral signatures extracted from the United States Geological Survey (USGS) library splib06 (http://speclab.cr.usgs.gov/spectral.lib06), which contains 240 members with L = 224 spectral bands. The reflectance values are measured in 224 spectral bands distributed uniformly over the interval 0.4–2.5 μm. The mutual coherence [27,28] of the library is very close to one. We use library A to generate data cubes of 500 pixels each and test three simulated data cubes containing different numbers of endmembers: data cube 1 (DC1) with k1 = 2, data cube 2 (DC2) with k1 = 4, and data cube 3 (DC3) with k1 = 6. The endmembers are signatures randomly chosen from the library, and their fractional abundances follow a Dirichlet distribution [29]. The data cubes were then contaminated with i.i.d. Gaussian noise at different signal-to-noise ratios (SNR = 20, 30, 40 dB).

Table 1 shows the SRE (dB) results achieved by the tested algorithms on the three simulated data cubes at all considered SNR levels. Some parameter values, such as λ and σ, are sensitive to the data and influence the unmixing accuracy and the optimization of the objective criterion; Table 1 reports the best values obtained over the considered parameter ranges, with the optimal parameters given in parentheses. As shown in Table 1, the SL0SU algorithm outperforms the SUnSAL algorithm under the same conditions. The accuracy of both algorithms decreases at a similar rate with decreasing SNR and with an increasing number of endmembers, and the results of both methods are not acceptable when the SNR is lower than 20 dB. To analyze the computational complexity of the algorithms, we also measured the running time of each algorithm, which is closely related to the number of iterations; the experiments were implemented in MATLAB R2012a, and the running times of both algorithms were on the order of one second per data cube. The SL0SU algorithm is slightly slower than the SUnSAL algorithm, which at least shows that our algorithm is not overly complicated.
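The following sketch reproduces, in simplified form, the way such a simulated cube can be generated from a library A: a few randomly selected endmembers, Dirichlet-distributed abundances, and i.i.d. Gaussian noise scaled to a target SNR. The function and variable names are illustrative, not those of the authors' code.

```python
import numpy as np

def simulate_cube(A, n_pixels, n_end, snr_db, seed=0):
    """Toy data-cube generator: random endmembers, Dirichlet abundances, Gaussian noise."""
    rng = np.random.default_rng(seed)
    L, p = A.shape
    X = np.zeros((p, n_pixels))
    support = rng.choice(p, size=n_end, replace=False)        # endmembers shared by the cube
    X[support, :] = rng.dirichlet(np.ones(n_end), size=n_pixels).T
    Y = A @ X
    noise_var = np.mean(Y ** 2) / (10.0 ** (snr_db / 10.0))   # noise power for the target SNR
    Y = Y + np.sqrt(noise_var) * rng.standard_normal(Y.shape)
    return Y, X

# e.g. a DC2-like cube: 500 pixels, k1 = 4 endmembers, SNR = 30 dB
# Y, X_true = simulate_cube(A, 500, 4, 30.0)
```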
Fig. 4. (a) SRE (dB) values as a function of log λ; (b) SRE (dB) values as a function of σ, compared with the SRE obtained by the SUnSAL algorithm.
Fig. 5. Results obtained by SUnSAL, SL0SU (without ASC), SL0SU (with ASC).
Fig. 1 shows a graphical comparison of the performance of our unmixing algorithm and the SUnSAL algorithm on a simulated data cube containing 50 pixels, generated with k = 4 endmembers randomly extracted from library A and contaminated with i.i.d. Gaussian noise at an SNR of 30 dB. From the abundance maps in Fig. 1, we can see that the SL0SU algorithm produces abundance maps that are more similar to the ground truth and contain fewer nonzero entries than those of the SUnSAL algorithm. This is because the smoothed ℓ0 regularizer enforces stronger sparsity than the ℓ1 regularizer. To further compare the sparsity of the abundances x obtained with the smoothed ℓ0 regularizer and the ℓ1 regularizer, Fig. 2 shows a graphical comparison between them for k = 4 and k = 6 endmembers at SNR = 40 dB. For k = 4 endmembers, the ideal number of nonzero entries of x in each column of the reconstruction is 4 (the closer to this number, the sparser the result), and likewise for k = 6. As can be seen from Fig. 2, the number of nonzero entries of x obtained with the smoothed ℓ0 regularizer is smaller than that obtained with the ℓ1 regularizer (SUnSAL).
Fig. 6. USGS map showing the location of different minerals in the Cuprite mining district in NV.
This demonstrates that the result of the smoothed ℓ0 regularizer is sparser than that of the ℓ1 regularizer. As mentioned above, the parameter σ plays an important role in the SL0SU algorithm. Since ||x||_0 ≈ F_σ(x), the approximation tends to equality as σ → 0; the larger σ, the smoother F_σ, and the smaller σ, the closer F_σ behaves to the ℓ0 norm. Fig. 3 shows the relationship between σ and SRE (dB). In Fig. 3(a), the number of endmembers is fixed at k = 4 and the optimal σ is shown for different SNRs (20, 30, 40 dB): when SNR = 40, 30, and 20 dB, the optimal values are σ = 0.03, 0.06, and 0.13, respectively, so the larger the SNR, the smaller the σ that reaches the optimal value. In Fig. 3(b), the SNR is fixed at 40 dB and the optimal σ is shown for different numbers of endmembers: when k = 2, 4, and 6, the optimal values are σ = 0.02, 0.03, and 0.04, respectively, so the fewer the endmembers, the smaller the σ that reaches the optimal value.

4.2. Sensitivity analysis of parameters

Many parameters in the objective criteria of SUnSAL and SL0SU may influence the performance of the algorithms, which shows the importance of choosing proper parameter values to guarantee accuracy.
Fig. 7. Fractional abundance maps estimated by SL0SU, SUnSAL and the distribution maps produced by Tricorder software for the AVIRIS Cuprite scene. (a) USGS Tetracorder classification map for buddingtonite; (b) USGS Tetracorder classification map for chalcedony; (c) SUnSAL abundance map for buddingtonite; (d) SUnSAL abundance map for chalcedony; (e) SL0SU abundance map for buddingtonite; (f) SL0SU abundance map for chalcedony.
In the SL0SU algorithm, the parameters λ and σ are sensitive to the data and significantly influence the unmixing accuracy; they need to be adjusted until a satisfactory result is reached. Whether to enforce the ASC is also of interest. The regularization parameter λ plays a significant role in controlling the relative contribution of the data-fidelity term in the SL0SU model, and a good choice of λ leads to an improved result. To analyze the impact of the regularization parameter on the SL0SU and SUnSAL algorithms, Fig. 4(a) plots the SRE values as a function of λ for both algorithms on simulated data with SNR = 40 dB and k = 4 endmembers. From these results, a better unmixing quality is obtained when λ reaches about 1e-4 for SL0SU and about 1e-3 for SUnSAL; we also find that the ability of our algorithm to reconstruct the fractional abundances is stronger than that of SUnSAL, which indicates that our algorithm is a feasible and efficient approach to hyperspectral unmixing. Regarding the parameter σ, choosing a proper value is critical for obtaining good unmixing quality. Fig. 4(b) shows the SRE values as a function of σ for the SL0SU algorithm on the simulated data with SNR = 40 dB and k = 4 endmembers; the proposed algorithm obtains better SRE results than SUnSAL over the considered range of σ, and the best SRE is obtained at σ = 0.03. Although it is difficult to determine theoretically the optimal σ for the constrained smoothed-ℓ0–ℓ2 optimization problem, the experiments show that smaller σ values generally achieve better results.

It should also be noted that the SUnSAL algorithm explicitly enforces the ANC but not the ASC. To verify the influence of the ASC on the sparse unmixing algorithms, we compare the results obtained by three different models, SUnSAL, SL0SU (without ASC), and SL0SU (with ASC), on the simulated data with SNR = 40 dB and k = 4 endmembers. Fig. 5 shows the different impacts of the ASC on the sparse unmixing: adding the ASC to the SL0SU algorithm yields a better SRE (15.42 dB) than both SUnSAL (13.36 dB) and SL0SU without the ASC (13.72 dB). Since unmixing is used for material quantification, both the ASC and the ANC should be satisfied at the same time so that the estimates reflect the true abundance fractions of the materials (see Figs. 6 and 7).

4.3. Experimental results with real data

In our real-data experiments, the scene is the well-known Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Cuprite data set, available online in reflectance units (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). This scene has been widely used to validate the performance of endmember extraction algorithms. The portion used in our experiments corresponds to a 250 × 191-pixel subset. The scene comprises 224 spectral bands in the interval 0.4–2.5 μm, with a nominal spectral resolution of 10 nm. Prior to the analysis, bands 1–2, 105–115, 150–170, and 223–224 were removed because of water absorption and low SNR, leaving a total of 188 spectral bands. To ensure accuracy, we select a USGS library containing 498 pure endmember signatures, so that the library A ∈ R^{188×498}.
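For readers reproducing this preprocessing, the band removal described above can be expressed as a simple index selection (Y and A_full below are hypothetical names for the 224-band scene matrix and library):

```python
import numpy as np

# Bands removed from the 224-band Cuprite scene because of water absorption and low SNR
# (band numbers are 1-based in the text, hence the shift to 0-based indices).
removed = np.concatenate([np.arange(1, 3), np.arange(105, 116),
                          np.arange(150, 171), np.arange(223, 225)]) - 1
keep = np.setdiff1d(np.arange(224), removed)
print(keep.size)                 # -> 188 retained spectral bands

# The same selection is applied to the scene and to the full USGS library:
# Y_sub = Y[keep, :]             # hypothetical 224 x N matrix of pixel spectra
# A_sub = A_full[keep, :]        # hypothetical 224 x 498 library matrix
```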
For illustrative purposes, Fig. 6 shows the mineral map produced by the USGS Tricorder 3.3 software (available at http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif). We used this mineral map as a reference for a qualitative analysis of the performance of the different sparse unmixing methods. To ensure convergence, the algorithms were stopped after a maximum number of iterations. Fig. 7 shows a qualitative comparison between the classification maps produced by the USGS Tetracorder algorithm [30]
and the fractional abundances estimated by the SUnSAL and SL0SU algorithms for two different minerals (buddingtonite and chalcedony). The unmixing results of the SL0SU algorithm are closer to the classification maps produced by the USGS Tetracorder algorithm than those of the SUnSAL algorithm. In general, we can conclude that our algorithm outperforms the SUnSAL algorithm on this scene.
5. Conclusions

In this paper, we have proposed a new model that uses the smoothed ℓ0 regularizer in place of the ℓ1 regularizer in a sparse regression model for hyperspectral unmixing. The smoothed ℓ0 regularizer produces sparser and more accurate unmixing results than the ℓ1 regularizer. We have also used an effective method based on variable splitting and the augmented Lagrangian algorithm to solve the resulting smoothed ℓ0 problem. Experiments on simulated and real hyperspectral data sets demonstrate the effectiveness and accuracy of our method and show that the SL0SU algorithm gives sparser and more accurate results than the SUnSAL algorithm. This indicates that the smoothed-ℓ0 regularized sparse regression method is feasible and effective for hyperspectral unmixing. Although our experimental results are encouraging, the proposed method analyzes the hyperspectral data without incorporating the spatial structure of the image. In future work, we will include spatial-contextual information in the sparse regression based unmixing to further improve its performance.
Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgements

The authors would like to thank Prof. J. Bioucas-Dias and M. Iordache for their generous help and for making the source code of the latest sparse unmixing algorithms available to the community. The authors would also like to thank the referees for their outstanding comments and suggestions. This work was supported by the National Natural Science Foundation of China under Grants 61162022 and 61362036, the Natural Science Foundation of Jiangxi, China under Grant 20132BAB201021, the Jiangxi Science and Technology Research Development Project of China under Grant KJLD12098, and the Jiangxi Science and Technology Research Project of the Education Department of China under Grant GJJ12632.

References

[1] N. Keshava, J.F. Mustard, Spectral unmixing, IEEE Signal Process. Mag. 19 (1) (2002) 44–57.
[2] J. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, J. Chanussot, Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 5 (2) (2012) 354–379.
[3] J.W. Boardman, F.A. Kruse, R.O. Green, Mapping target signatures via partial unmixing of AVIRIS data, in: Proceedings of the Fifth JPL Airborne Earth Science Workshop, California, 1995.
[4] M. Winter, N-FINDR: an algorithm for fast autonomous spectral end-member determination in hyperspectral data, in: Proceedings of the SPIE Conference on Imaging Spectrometry, Denver, 1999.
[5] R.A. Neville, K. Staenz, T. Szeredi, J. Lefebvre, P. Hauff, Automatic endmember extraction from hyperspectral data for mineral exploration, in: Proceedings of the 21st Canadian Symposium on Remote Sensing, Canada, 1999.
[6] M. Berman, H. Kiiveri, R. Lagerstrom, A. Ernst, R. Dunne, J.F. Huntington, ICE: a statistical approach to identifying endmembers in hyperspectral images, IEEE Trans. Geosci. Remote Sens. 42 (10) (2004) 2085–2095.
[7] M.D. Iordache, J. Bioucas-Dias, A. Plaza, Unmixing sparse hyperspectral mixtures, in: Proceedings of the IEEE Geoscience and Remote Sensing Symposium, Cape Town, 2009.
[8] F. Chen, Y. Zhang, Sparse hyperspectral unmixing based on constrained Lp-L2 optimization, IEEE Geosci. Remote Sens. Lett. 10 (5) (2013) 1142–1146.
[9] Z.W. Shi, W. Tang, Z. Duren, Z.G. Jiang, Subspace matching pursuit for sparse unmixing of hyperspectral data, IEEE Trans. Geosci. Remote Sens. PP (99) (2013) 1–19.
[10] M.D. Iordache, J. Bioucas-Dias, A. Plaza, Collaborative sparse regression for hyperspectral unmixing, IEEE Trans. Geosci. Remote Sens. 52 (1) (2014) 341–354.
[11] L. Sun, Z.B. Wu, L. Xiao, J.J. Liu, Z.H. Wei, F. Dang, A novel L1/2 sparse regression method for hyperspectral unmixing, Int. J. Remote Sens. 34 (20) (2013) 6983–7001.
[12] X.L. Zhao, F. Wang, T.Z. Huang, M.K. Ng, R.J. Plemmons, Deblurring and sparse unmixing for hyperspectral images, IEEE Trans. Geosci. Remote Sens. 51 (7) (2013) 4045–4058.
[13] M.D. Iordache, J. Bioucas-Dias, A. Plaza, Total variation spatial regularization for sparse hyperspectral unmixing, IEEE Trans. Geosci. Remote Sens. 50 (11) (2012) 4484–4502.
[14] Y.F. Zhong, R.Y. Feng, L.P. Zhang, Non-local sparse unmixing for hyperspectral remote sensing imagery, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 7 (6) (2014) 1889–1909.
[15] M.D. Iordache, J. Bioucas-Dias, A. Plaza, Sparse unmixing of hyperspectral data, IEEE Trans. Geosci. Remote Sens. 49 (6) (2011) 2014–2039.
[16] G.H. Mohimani, M. Babaie-Zadeh, A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm, IEEE Trans. Signal Process. 57 (1) (2009) 289–301.
[17] M. Rostami, Z. Wang, Image super-resolution based on sparsity prior via smoothed ℓ0 norm, in: Proceedings of the Symposium on Advanced Intelligent Systems, Waterloo, 2011.
[18] X.X. Chun, Y.H. Zhang, Fast compressive sensing radar imaging based on smoothed ℓ0 norm, in: Proceedings of the 2nd Asian-Pacific Conference on Synthetic Aperture Radar, Xi'an, 2009.
[19] S. Ashkiani, M. Babaie-Zadeh, C. Jutten, Error correction via smoothed ℓ0-norm recovery, in: Proceedings of the IEEE Statistical Signal Processing Workshop, Nice, 2011.
[20] E. Candès, T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory 51 (12) (2005) 4203–4215.
[21] E. Candès, T. Tao, Near-optimal signal recovery from random projections: universal encoding strategies, IEEE Trans. Inform. Theory 52 (12) (2006) 5406–5424.
[22] S.G. Mallat, Z.F. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Trans. Signal Process. 41 (12) (1993) 3397–3415.
[23] J.B. Adams, M.O. Smith, A.R. Gillespie, Imaging spectroscopy: interpretation based on spectral mixture analysis, Remote Geochem. Anal.: Elem. Mineral. Compos. 7 (1993) 145–166.
[24] J. Bioucas-Dias, M.A.T. Figueiredo, A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration, IEEE Trans. Image Process. 16 (12) (2007) 2992–3004.
[25] J. Bioucas-Dias, M.A.T. Figueiredo, Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing, in: Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Iceland, 2010.
[26] J. Eckstein, D. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators, Math. Program. 55 (3) (1992) 293–318.
[27] A.M. Bruckstein, M. Elad, M. Zibulevsky, On the uniqueness of nonnegative sparse solutions to underdetermined systems of equations, IEEE Trans. Inf. Theory 54 (11) (2008) 4813–4820.
[28] E. Candès, J. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Commun. Pure Appl. Math. 59 (8) (2006) 1207–1223.
[29] J. Bioucas-Dias, J. Nascimento, Hyperspectral unmixing based on mixtures of Dirichlet components, IEEE Trans. Geosci. Remote Sens. 50 (3) (2012) 863–878.
[30] R. Clark, G. Swayze, K. Livo, R. Kokaly, S. Sutley, Imaging spectroscopy: Earth and planetary remote sensing with the USGS Tetracorder and expert systems, J. Geophys. Res. 108 (E12) (2003) 5131–5135.