Radar target HRRP recognition based on reconstructive and discriminative dictionary learning

Daiying Zhou
School of Electronic Engineering, University of Electronic Science and Technology of China, China
Article history: Received 27 August 2015; received in revised form 16 November 2015; accepted 5 December 2015.

Abstract
A novel dictionary learning algorithm, namely reconstructive and discriminative dictionary learning based on the sparse representation classification criterion (RDDLSRCC), is proposed in this paper for radar target high resolution range profile (HRRP) recognition. The core of the proposed algorithm is to incorporate both the reconstructive power and the discriminative power of the atoms during the atom updates. By constructing the objective function from the sparse representation classification criterion (SRCC), the discriminative performance of the atoms is improved while preserving their same-class reconstruction ability and reducing their reconstruction contribution to other classes. Moreover, the sparse coding coefficients of the samples are updated using class-optimal SVD vectors of the class-reconstruction residual matrices, thereby accelerating convergence. Compared with other dictionary learning algorithms, RDDLSRCC is more robust to target-aspect variation and to the effect of noise. Extensive experimental results on measured data illustrate that the proposed algorithm achieves promising target recognition performance. © 2015 Elsevier B.V. All rights reserved.
Keywords: Radar target recognition; HRRP; Sparse representation classification; Reconstructive and discriminative dictionary learning
Abbreviations: RDDLSRCC, reconstructive and discriminative dictionary learning based on sparse representation classification criterion; HRRP, high resolution range profile; SRCC, sparse representation classification criterion; RDDL, reconstructive and discriminative dictionary learning; DDL, discriminative dictionary learning; D-KSVD, discriminative K-SVD; LC-KSVD, label consistent K-SVD

1. Introduction

The high-resolution range profile (HRRP) is the amplitude of the radar return as a function of range; it represents the distribution of the target's scattering centers along the radar line of sight. It provides geometric structure information such as target size and scatterer distribution, which is very useful for target classification. Radar target recognition using HRRPs has therefore attracted intensive attention from the radar target recognition community. For instance, Li et al. [1,2] use HRRPs directly as feature vectors to identify aerospace objects. Zyweck et al. [3,4] propose subspace methods for HRRP recognition. Eom et al. [5] study a hierarchical autoregressive moving average (ARMA) model that describes HRRPs at multiple scales for classifying radar targets. Shaw et al. [6–9] extract features from sequential HRRPs. Invariant features have been proposed for HRRP recognition [10,11], and statistical model methods based on the probability distribution of scattering centers in HRRPs have been presented [12–15]. In addition, new signal processing tools and classification technologies are widely applied to radar HRRP recognition. Nelson et al. use the wavelet transformation to reduce noise and extract features from HRRPs [16,17]. Li et al. [18] present a sparse representation based denoising method to improve recognition performance using HRRPs. Shi et al. [19] propose a novel neural network classifier.
However, robust target recognition using HRRPs still faces big challenges in practical applications, because an HRRP projects three-dimensional (3-D) information onto a single dimension and the resulting ambiguities must be resolved. These ambiguities include: (1) the shape and structure of the target are collapsed from 3-D to 1-D; (2) the scattering centers observed in an HRRP are composite rather than real. For example, the shape and geometrical structure of a target cannot be obtained from an HRRP directly, and it is difficult to extract the real scattering centers, because the real scattering centers within each range cell are summed up to form composite scattering centers. In addition, HRRP signatures exhibit a high degree of variability, i.e., the HRRPs vary considerably with only a small variation in target aspect [1]. To cope with this target-aspect sensitivity, the target-aspect coverage of interest is usually divided into many sectors, within each of which the HRRPs vary only slightly [1,13,14]. However, to obtain high recognition accuracy, these methods require very small target-aspect sectors, which is not practical in real situations. Thus, effective feature extraction and classification methods for radar HRRP recognition are needed.

It has been shown that sparse representation is very effective for object recognition tasks such as face recognition and human action recognition [20–25]. Sparse representation describes an object (e.g., a face image) as a linear combination of a few atoms selected from an over-complete dictionary, which is analogous to the selectivity of human vision: human vision is highly selective to specific stimuli such as shape, color and illumination. Sparse representation has therefore been very successful in many pattern recognition applications [26–30]. According to sparse analysis theory, sparse representation classification consists of two stages: dictionary learning and signal reconstruction. First, an over-complete dictionary is learned from the training samples; then, testing samples are reconstructed via sparse coding over the learned dictionary. Dictionary learning is thus a key step in sparse representation classification.

In this paper, we propose a reconstructive and discriminative dictionary learning algorithm based on the sparse representation classification criterion (RDDLSRCC) for radar target HRRP recognition. The objective function of RDDLSRCC is formed from the sparse representation classification criterion, so RDDLSRCC trains a discriminative dictionary by minimizing the same-class reconstruction residual of the atoms and maximizing their other-class reconstruction residual during the atom updates. Here, "same-class" means that the class label of an atom is the same as the class label of the reconstructed samples, and "other-class" means that the labels differ. Meanwhile, the algorithm preserves the same-class reconstructive power of the atoms while reducing their reconstruction contribution to other classes. Therefore, the learned atoms represent same-class samples well and impair the reconstruction of samples from other classes.
Moreover, the sparse coding coefficients of the samples are updated using class-optimal SVD vectors of the class-reconstruction residual matrices, which accelerates convergence. As a result, the proposed method is more robust to target-aspect variation and to the effect of noise, and achieves better recognition performance than other methods. The proposed algorithm has the following advantages:

(1) The discriminative power of the dictionary atoms is enforced by constructing the objective function from the sparse representation classification criterion. The update reduces the same-class reconstruction residual of the atoms and increases their other-class reconstruction residual, making the method more robust to target-aspect variation.

(2) The same-class reconstructive power of the dictionary atoms is preserved while their reconstruction contribution to other classes is reduced. Therefore, the method is more robust to noise.

(3) The sparse coding coefficients of the samples are updated with class-optimal SVD vectors of the class-reconstruction residual matrices during the atom updates, which accelerates convergence.
2. Related work

Recently, sparse signal representations have drawn much attention in pattern recognition. They select as few atoms as possible from an over-complete dictionary to reconstruct each sample feature vector. "Over-complete" means that the dimension of the coefficient space is higher than that of the target features; the geometric structure of the target features may then become separable in the high-dimensional space, yielding good recognition performance. For example, Wright et al. [20] present a sparse representation classification algorithm (SRC) for robust face recognition. SRC uses the features of all training data to form the dictionary, sparsely encodes testing samples over that dictionary, and makes the classification decision by selecting the class label whose atoms minimize the class-reconstruction error; it is robust to occlusion and noise in face recognition. Yang et al. [21] propose an SRC-steered discriminative projection method that maximizes the ratio of between-class to within-class reconstruction residual. Zhuang et al. [22] use a non-negative low-rank and sparse representation matrix among data samples to construct a non-negative low-rank sparse adjacency graph for classification. Liu et al. [23] recognize human actions using sparse coding to represent each frame as a linear combination of all frames in the training sequences. Zhang et al. [24] propose a Laplacian group sparse coding model for human action recognition that preserves spatio-temporal structural information in video sequences. Although successful in many applications, these methods directly use the entire set of training samples as the dictionary atoms; consequently, the reconstruction accuracy is limited and the classification performance degrades when the within-class variation
becomes large. Therefore, it is necessary to optimize the dictionary through dictionary learning. Inspired by this idea, Lewicki et al. [25,26] propose an iterative method that updates the dictionary by maximizing a likelihood function. Engan et al. [27,28] present the method of optimal directions (MOD), which directly computes a new dictionary from the training samples and sparse coefficients and is a simple way of updating the dictionary. Yang et al. [29] learn metafaces from original images and use them as the dictionary for face recognition; the metafaces improve the representation capability of the dictionary and thus achieve better recognition accuracy. In addition, Yang et al. [30] model sparse coding as a sparsity-constrained robust regression problem and propose an iteratively reweighted sparse coding algorithm based on maximum likelihood estimation; this algorithm avoids solving for the distribution parameters and is more robust to occlusions and corruptions. Wei et al. [31] introduce two locality adapters into dictionary learning and sparse coding, improving data representation and classification by preserving the data structure. Zheng et al. [32] incorporate the local manifold structure of the data into sparse representation learning for image representation, which makes the sparse representation vary smoothly along the geodesics of the data manifold. Zhou et al. [33] propose a double shrinking sparse model that compresses image data in both dimensionality and cardinality; double shrinking can be applied to manifold learning and feature selection for better interpretation of features, and can boost classification performance. Gui et al. [34] study a group sparse multiview patch alignment framework for image classification that performs joint feature extraction and feature selection, making the features more discriminative.

The above methods yield impressive results on some benchmark datasets thanks to the optimization of the dictionary. However, they mainly focus on the reconstruction ability of dictionary learning and are not optimal for classification; the discriminative power must also be considered in dictionary learning. Yang et al. [35] propose a Fisher discrimination dictionary method that uses the Fisher discrimination criterion to optimize both a structured dictionary with class labels and the sparse coding coefficients. Huang et al. [36] present a theoretical framework for sparse representation classification that combines discriminative power with the reconstruction property and sparsity of sparse coding, and is thus robust to noise, missing data and outliers. Lu et al. [37] propose locality repulsion projections and a sparse reconstruction-based similarity measure to improve face recognition with a single sample per person (SSPP). Pham et al. [38] obtain the sparse encoding coefficients in a supervised manner and jointly seek the most discriminative sparse over-complete encoding and the optimal classifier parameters, which differs from unsupervised learning methods. Mairal et al. [39] study an energy formulation that jointly optimizes the sparse reconstruction and class discrimination components during dictionary learning. Lian et al. [40] propose a
loss function that links a hinge-loss term with dictionary learning for binary classifiers, which increases the margins of the binary classifiers and decreases the error bound of the aggregated classifier. Qiao et al. [41] first construct an adjacency weight matrix of the data set based on a modified sparse reconstruction framework, then compute the low-dimensional embedding that best preserves the sparse reconstructive weights; as a result, some intrinsic structure information and discriminative information are preserved automatically. Yang et al. [42] apply back projection to perform supervised dictionary learning and present a supervised hierarchical sparse coding model with translation-invariant properties. Zhang et al. [43] optimize SRC by learning a discriminative projection and a dictionary simultaneously, extracting discriminative information from raw data while respecting the sparse representation assumption. Feng et al. [44] propose to jointly learn the projection matrix for dimensionality reduction and the discriminative dictionary for face representation, which makes face classification more effective. Jiang et al. [45] use a greedy optimization algorithm to learn a compact and discriminative dictionary with the property that feature points from the same class have very similar sparse codes. Ma et al. [46] optimize an objective function combining sparse coefficients, class discrimination and rank minimization, and learn a discriminative low-rank dictionary for face recognition. These methods further improve recognition performance by incorporating discrimination ability into dictionary learning. However, the dictionary update phase and the sparse coding phase are carried out separately, which slows down convergence.

Fortunately, the K-SVD algorithm proposed by Aharon et al. [47] addresses this problem: it performs dictionary learning and sparse coding simultaneously, thereby accelerating convergence. The essence of K-SVD is to quickly design a dictionary and solve for the corresponding best possible sparse representation coefficients of the training samples using the singular value decomposition (SVD). However, K-SVD is not optimal for classification, because it only minimizes the reconstruction error and does not consider the dictionary's utility for discrimination. It is therefore necessary to incorporate discrimination ability into K-SVD for classification tasks. Zhang et al. [48] extend K-SVD to obtain a reconstructive and discriminative dictionary by directly incorporating the classification error into the objective function of the dictionary learning stage. Jiang et al. [49] combine a discriminative sparse-code error with the reconstruction error and the classification error into a unified objective function, proposing label consistent K-SVD (LC-KSVD). Kong et al. [50] present a supervised discriminative dictionary learning algorithm (DDL) designed for classifying HEp-2 cell patterns; it considers the discriminative power of the dictionary atoms and reduces the intra-class reconstruction error while accounting for the inter-class reconstruction error during each update. These algorithms have been shown to achieve higher recognition performance than K-SVD and SRC on natural object images such as face images. However, their objective
function terms for increasing the discriminative ability of the dictionary atoms are not constructed exactly from the sparse representation classification criterion, which implies that these algorithms may not be optimal for pattern recognition under the SRC framework.
3. Sparse representation classification criterion

Let $X = [x_{11} \cdots x_{1N_1} \cdots x_{C1} \cdots x_{CN_C}]$ denote a training sample set, where $x_{ij}$ is the $j$th $n$-dimensional HRRP vector of the $i$th class. Each class contains $N_i$ training samples, and the total number of training samples over the $C$ classes is $N$ ($N = N_1 + N_2 + \cdots + N_C$). Let $D = [d_{11} \cdots d_{1K_1} \cdots d_{C1} \cdots d_{CK_C}]$ denote an $n \times K$ class-specific over-complete dictionary, where $d_{lk}$ is the $k$th atom of the $l$th class, $K_l$ is the number of atoms of the $l$th class, and $K$ is the total number of dictionary atoms, i.e., $K = K_1 + K_2 + \cdots + K_C$. Note that $K$ is much larger than the sample dimension $n$, so the over-complete dictionary is redundant in describing a sample $x_{ij}$. According to sparse analysis theory, the dictionary $D$ and the sparse codes $A = [\alpha_{11} \cdots \alpha_{1N_1} \cdots \alpha_{C1} \cdots \alpha_{CN_C}]$ of the training set are given by

$$(D, A) = \arg\min_{\{D, A\}} \|X - DA\|_F^2 \quad \text{s.t.} \quad \|\alpha_{ij}\|_0 \le \kappa,\ \|d_{lk}\|_2 = 1, \tag{1}$$

for $i = 1, \dots, C$; $j = 1, \dots, N_i$; $l = 1, \dots, C$; $k = 1, \dots, K_l$, where $\|\cdot\|_F$ is the Frobenius norm, $\|\cdot\|_2$ is the $\ell^2$ norm, $\|\cdot\|_0$ counts the non-zero elements of a vector, and $\kappa$ is a sparsity threshold limiting the number of non-zero coefficients in each sparse coding vector $\alpha_{ij}$. For a testing sample $x_t$, its sparse coding coefficients $\alpha_t$ are computed as

$$\alpha_t = \arg\min_{\{\alpha_t\}} \|x_t - D\alpha_t\|_2^2 \quad \text{s.t.} \quad \|\alpha_t\|_0 \le \kappa. \tag{2}$$

The class-reconstruction residual over the dictionary atoms and sparse coding coefficients of the $i$th class is

$$r_i = \|x_t - D_i \delta_i(\alpha_t)\|_2, \tag{3}$$

where $r_i$ is the class-specific reconstruction residual of the $i$th class, $D_i = [d_{i1}\, d_{i2} \cdots d_{iK_i}]$ contains the atoms of the $i$th class, and $\delta_i(\cdot)$ selects only the coefficients associated with the $i$th class. Then $x_t$ is identified as belonging to the $p$th class, where

$$p = \arg\min_{\{i\}} r_i. \tag{4}$$
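To make the decision rule of Eqs. (1)–(4) concrete, the following minimal Python/numpy sketch implements the SRC classifier with a textbook greedy OMP solver. It is an illustration under stated assumptions, not the author's code: the function names, the unit-norm dictionary `D` (columns are atoms) and the per-atom label array `atom_labels` are hypothetical.

```python
import numpy as np

def omp(D, x, kappa):
    """Textbook greedy OMP for Eq. (2): min ||x - D a||_2 s.t. ||a||_0 <= kappa.
    D is n x K with unit-norm columns; returns the sparse code a."""
    alpha = np.zeros(D.shape[1])
    residual, support = x.copy(), []
    for _ in range(kappa):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None) # re-fit on the support
        residual = x - D[:, support] @ coef
    alpha[support] = coef
    return alpha

def src_classify(D, atom_labels, x_t, kappa=3):
    """Eqs. (3)-(4): class-wise residuals r_i over D_i, label = argmin_i r_i."""
    alpha_t = omp(D, x_t, kappa)
    classes = np.unique(atom_labels)
    r = [np.linalg.norm(x_t - D[:, atom_labels == i] @ alpha_t[atom_labels == i])
         for i in classes]                     # r_i = ||x_t - D_i delta_i(alpha_t)||_2
    return classes[int(np.argmin(r))]          # Eq. (4)
```

Here $\delta_i(\cdot)$ is realized simply by masking the coefficient vector with the atoms' class labels.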
4. Reconstructive and discriminative dictionary learning (RDDL)

For the sparse representation task, the dictionary atoms are designed to minimize the global reconstruction residual, and there is no need to attach class information to each atom. For the classification task, however, we expect each dictionary atom to have discriminative power, which implicitly requires that every atom be associated with a class label. Moreover, each atom should minimize the reconstruction residual of its own class's samples and maximize the reconstruction residual of samples from other classes. Under the SRC framework, this means that a dictionary atom should have strong class-specific reconstruction power for its own class and weak class-specific reconstruction ability for other classes. More specifically, the subset $D_i$ of the $i$th class's atoms should represent the $i$th class's sample set $X_i$ well, but not the sample sets $X_l$ ($l \ne i$) of the other classes, because $D_i$ is associated only with the $i$th class. Consequently, $\delta_i(A_i)$ should contain significant coefficients so that $\|X_i - D_i \delta_i(A_i)\|_F^2$ is small, while $\delta_i(A_l)$ ($l \ne i$) should contain only very small coefficients so that $\|X_l - D_i \delta_i(A_l)\|_F^2$ is large, where $A_i = [\alpha_{i1}\, \alpha_{i2} \cdots \alpha_{iN_i}]$ and $A_l = [\alpha_{l1}\, \alpha_{l2} \cdots \alpha_{lN_l}]$ are the sparse coding coefficients of the $i$th class's and the $l$th class's samples, respectively. Thus, to update the atom $d_{ij}$ ($i = 1, \dots, C$; $j = 1, \dots, K_i$), where $d_{ij}$ denotes the $j$th atom of the $i$th class and $K_i$ is the number of atoms of the $i$th class in the dictionary, we solve the following minimization problem:

$$
d_{ij} = \arg\min_{\{d_{ij}\}} \Bigg\{ \Big\| X_i - \sum_{l=1}^{C} \sum_{\substack{k=1\\ k\ne j\ \text{if}\ l=i}}^{K_l} d_{lk}(A_i)^{m_{lk}} - d_{ij}(A_i)^{m_{ij}} \Big\|_F^2
+ \Big\| X_i - \sum_{\substack{k=1\\ k\ne j}}^{K_i} d_{ik}(\delta_i(A_i))^{m_k} - d_{ij}(A_i)^{m_{ij}} \Big\|_F^2
- \sum_{\substack{g=1\\ g\ne i}}^{C} \Big\| X_g - \sum_{l=1}^{C} \sum_{\substack{k=1\\ k\ne j\ \text{if}\ l=i}}^{K_l} d_{lk}(A_g)^{m_{lk}} - d_{ij}(A_g)^{m_{ij}} \Big\|_F^2
- \sum_{\substack{g=1\\ g\ne i}}^{C} \Big\| X_g - \sum_{\substack{k=1\\ k\ne j}}^{K_i} d_{ik}(\delta_i(A_g))^{m_k} - d_{ij}(A_g)^{m_{ij}} \Big\|_F^2 \Bigg\} \tag{5}
$$

where $(\cdot)^{m_{lk}}$, $(\cdot)^{m_k}$ and $(\cdot)^{m_{ij}}$ denote the $(m_{lk})$th, $(m_k)$th and $(m_{ij})$th rows of a matrix, respectively, with

$$m_{lk} = \sum_{p=1}^{l} K_p + k - K_l, \tag{6}$$

$$m_{ij} = \sum_{p=1}^{i} K_p + j - K_i, \tag{7}$$

$$m_k = k. \tag{8}$$
In Eq. (5), the first term uses all the dictionary atoms to represent the samples of the $i$th class, while the second term uses only the subset of the $i$th class's atoms to reconstruct them. The first term reflects that atoms of any class may contribute to reconstructing the $i$th class's samples; the second term reflects that only the atoms of the $i$th class should reconstruct them well. Combining the two terms therefore captures the reconstruction power of the atom $d_{ij}\ (\in D_i)$ for the samples of its own class, i.e., its same-class reconstruction power; here "same-class" means that the class label of $d_{ij}$ equals the class label of the reconstructed samples. The third term uses all the dictionary atoms to reconstruct the samples of the other classes, and the fourth term uses only the $i$th class's atoms to do so. Analogously, combining the third and fourth terms expresses the other-class reconstruction power of $d_{ij}$, where "other-class" means that the class label of $d_{ij}$ differs from that of the reconstructed samples.

Comparing Eq. (5) with Eq. (3), the second and fourth terms of Eq. (5) have exactly the same form as the class-specific reconstruction residual $r_i$ in Eq. (3). This demonstrates that the objective function is constructed from the sparse representation classification criterion, so the discriminative power of the atoms is implicitly optimized under the SRC framework. Minimizing Eq. (5) means minimizing the first two terms and maximizing the last two. The sparse coding coefficient associated with $d_{ij}$ might not be large when only the first term is minimized, but some coefficients associated with $d_{ij}$ are certain to be significant when the first and second terms decrease simultaneously. Combining these two terms therefore shows that the contribution of $d_{ij}$ to reconstructing the sample subset $X_i$ is large, i.e., its same-class reconstructive power is good. Similarly, maximizing the third and fourth terms shows that the contribution of $d_{ij}$ to reconstructing $X_g$ ($g \ne i$) is small, i.e., its other-class reconstructive power is weak. Therefore, after solving the optimization problem in Eq. (5), the dictionary atoms have good discriminative power while preserving their same-class reconstructive power and contributing as little as possible to the reconstruction of other classes' samples.

At first sight the objective function in Eq. (5) resembles those proposed by Aharon et al. [47] and Kong et al. [50]. However, the objective function of K-SVD only minimizes the global reconstruction error of the samples over all dictionary atoms; it emphasizes the representation capability of the atoms rather than their discriminative power. Although the objective function of Kong et al. incorporates an intra-class representation error term and an inter-class representation error term, the inter-class term is not exactly the SRC criterion, so the discriminative power of their dictionary atoms is not optimal under the SRC framework.

Let

$$R_i^D = X_i - \sum_{l=1}^{C} \sum_{\substack{k=1\\ k\ne j\ \text{if}\ l=i}}^{K_l} d_{lk}(A_i)^{m_{lk}}, \tag{9}$$

$$R_i^{D_i} = X_i - \sum_{\substack{k=1\\ k\ne j}}^{K_i} d_{ik}(\delta_i(A_i))^{m_k}, \tag{10}$$

$$R_g^D = X_g - \sum_{l=1}^{C} \sum_{\substack{k=1\\ k\ne j\ \text{if}\ l=i}}^{K_l} d_{lk}(A_g)^{m_{lk}}, \quad g \ne i, \tag{11}$$

$$R_g^{D_i} = X_g - \sum_{\substack{k=1\\ k\ne j}}^{K_i} d_{ik}(\delta_i(A_g))^{m_k}, \quad g \ne i, \tag{12}$$

where $R_i^D$ is the residual matrix of the $i$th class's samples over the whole dictionary when the $j$th atom of the $i$th class, $d_{ij}$, is removed, namely the same-class global residual matrix of $d_{ij}$; $R_i^{D_i}$ is the residual matrix of the $i$th class's samples over the subset of the $i$th class's atoms when $d_{ij}$ is removed, namely the same-class local residual matrix of $d_{ij}$. Similarly, $R_g^D$ and $R_g^{D_i}$ are the other-class global and local residual matrices of $d_{ij}$, respectively. Substituting Eqs. (9)–(12) into Eq. (5) gives

$$d_{ij} = \arg\min_{\{d_{ij}\}} \Big\{ \|R_i^D - d_{ij}(A_i)^{m_{ij}}\|_F^2 + \|R_i^{D_i} - d_{ij}(A_i)^{m_{ij}}\|_F^2 - \sum_{\substack{g=1\\ g\ne i}}^{C} \|R_g^D - d_{ij}(A_g)^{m_{ij}}\|_F^2 - \sum_{\substack{g=1\\ g\ne i}}^{C} \|R_g^{D_i} - d_{ij}(A_g)^{m_{ij}}\|_F^2 \Big\}. \tag{13}$$
Eq. (13) is not a convex optimization problem, so it is difficult to solve with gradient descent methods. One could modify the third and fourth terms of the objective in Eq. (13) so that it becomes a convex problem solvable by traditional gradient descent; however, this is not straightforward, and it would still slow down convergence because the dictionary update phase and the sparse coding phase would be carried out separately. Instead, in a similar way to Ref. [47], we approximate the solution for $d_{ij}$ using the singular value decomposition (SVD).
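Before turning to the SVD-based solver, note that the residual matrices of Eqs. (9)–(12) are straightforward to form. The numpy sketch below is a hypothetical helper, not the author's code: the data layout (per-class sample matrices `X_by_class`, per-class code matrices `A_by_class`, and an `atom_labels` array) is an assumption for illustration, and the global row index follows Eq. (7).

```python
import numpy as np

def rddl_residuals(X_by_class, A_by_class, D, atom_labels, i, j):
    """Residual matrices of Eqs. (9)-(12) for atom d_ij (a sketch).
    X_by_class[g]: n x N_g samples; A_by_class[g]: K x N_g codes over D."""
    m_ij = np.flatnonzero(atom_labels == i)[j]            # row index of d_ij, Eq. (7)
    keep = np.ones(D.shape[1], bool); keep[m_ij] = False  # all atoms except d_ij
    own = (atom_labels == i); own[m_ij] = False           # class-i atoms except d_ij
    R_global, R_local = {}, {}
    for g, (Xg, Ag) in enumerate(zip(X_by_class, A_by_class)):
        R_global[g] = Xg - D[:, keep] @ Ag[keep, :]  # Eq. (9) if g == i, else Eq. (11)
        R_local[g]  = Xg - D[:, own]  @ Ag[own, :]   # Eq. (10) if g == i, else Eq. (12)
    return R_global, R_local, m_ij
```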
First, K-SVD decomposes the residual matrix by SVD; the left singular vector corresponding to the largest singular value becomes the updated atom, and the corresponding right singular vector multiplied by the largest singular value becomes the updated sparse coefficient vector. K-SVD can thus perform dictionary learning and sparse coding simultaneously, accelerating convergence. However, K-SVD cannot be applied directly to the non-convex optimization problem in Eq. (13); our optimization algorithm is an extension of K-SVD.

The matrices $\Omega^i_{m_{ij}}$, $\Omega^i_{m_k}$, $\Omega^g_{m_{ij}}$ and $\Omega^g_{m_k}$ are defined as in Ref. [47]. The product $(A_i)^{m_{ij}} \Omega^i_{m_{ij}}$ discards the zero entries of the row vector $(A_i)^{m_{ij}}$, which reduces the computational complexity. $R_i^D \Omega^i_{m_{ij}}$ selects the subset of the residual matrix's columns corresponding to the samples that use the atom $d_{ij}$, thereby enforcing the sparsity constraint during the update of $d_{ij}$. The meanings of $\Omega^i_{m_k}$, $\Omega^g_{m_{ij}}$ and $\Omega^g_{m_k}$ are analogous. SVD decomposes the four residual matrices of Eq. (13) to obtain their principal component directions:

$$R_i^D \Omega^i_{m_{ij}} = \sum_m \lambda^D_{i,m}\, u^D_{i,m} (v^D_{i,m})^T, \tag{14}$$

$$R_i^{D_i} \Omega^i_{m_k} = \sum_m \lambda^{D_i}_{i,m}\, u^{D_i}_{i,m} (v^{D_i}_{i,m})^T, \tag{15}$$

$$R_g^D \Omega^g_{m_{ij}} = \sum_m \lambda^D_{g,m}\, u^D_{g,m} (v^D_{g,m})^T, \quad g \ne i, \tag{16}$$

$$R_g^{D_i} \Omega^g_{m_k} = \sum_m \lambda^{D_i}_{g,m}\, u^{D_i}_{g,m} (v^{D_i}_{g,m})^T, \quad g \ne i. \tag{17}$$

From Eq. (15), selecting the column vector $u^{D_i}_{i,1}$ corresponding to the largest singular value $\lambda^{D_i}_{i,1}$ as the solution for $d_{ij}$ minimizes the term $\|R_i^{D_i} - d_{ij}(A_i)^{m_{ij}}\|_F^2$. However, this solution may not minimize $\|R_i^D - d_{ij}(A_i)^{m_{ij}}\|_F^2$, nor maximize both $\|R_g^D - d_{ij}(A_g)^{m_{ij}}\|_F^2$ and $\|R_g^{D_i} - d_{ij}(A_g)^{m_{ij}}\|_F^2$ simultaneously. Thus, it is necessary to seek a principal component direction $u^{D_i}_{i,m}$ in Eq. (15), with as large a singular value as possible, that is close to the top-ranking principal component direction $u^D_{i,m}$ of Eq. (14) while being distant from the major directions $u^D_{g,m}$ of Eq. (16) and $u^{D_i}_{g,m}$ of Eq. (17), i.e.

$$u^{D_i}_{i,m} = \arg\min_{\{u^{D_i}_{i,m}\}} \left\{ \min_{\{p\}} \frac{\|u^{D_i}_{i,m} - u^D_{i,p}\|_2^2}{\lambda^D_{i,p}} - \sum_{\substack{g=1\\ g\ne i}}^{C} \max_{\{p\}} \left\{ \lambda^D_{g,p} \|u^{D_i}_{i,m} - u^D_{g,p}\|_F^2 \right\} - \sum_{\substack{g=1\\ g\ne i}}^{C} \max_{\{p\}} \left\{ \lambda^{D_i}_{g,p} \|u^{D_i}_{i,m} - u^{D_i}_{g,p}\|_F^2 \right\} \right\} \tag{18}$$

where $\lambda^D_{i,p}$, $\lambda^D_{g,p}$ and $\lambda^{D_i}_{g,p}$ are the non-zero singular values. After solving for the optimal $u^{D_i}_{i,m}$ as the update of the atom $d_{ij}$, the corresponding $\lambda^{D_i}_{i,m}(v^{D_i}_{i,m})^T$ is used to update the $(m_{ij})$th row vector $(A_i)^{m_{ij}}$ of the coefficient matrix of the $i$th class. The $\lambda^{D_i}_{g,p}(v^{D_i}_{g,p})^T$ corresponding to the $u^{D_i}_{g,p}$ that is most distant from $u^{D_i}_{i,m}$ is chosen to update the $(m_{ij})$th row vector $(A_g)^{m_{ij}}$ of the coefficient matrix of the $g$th class ($g \ne i$). The steps of the proposed algorithm are given in Algorithm 1.

After each update, the ratio of the same-class reconstruction residual to the other-class reconstruction residual is computed as

$$\eta_i = \frac{\|X_i - D_i\delta_i(A_i)\|_F^2}{\sum_{\substack{g=1\\ g\ne i}}^{C} \|X_g - D_i\delta_i(A_g)\|_F^2}, \tag{19}$$

where $\eta_i$ is the ratio for the $i$th class. When the variation of $\eta_i$ falls below a threshold, the update of the $i$th class's dictionary atoms is stopped; the algorithm terminates when all classes reach this condition. Since the ratio incorporates the same-class and other-class reconstruction residuals simultaneously, the termination condition also emphasizes the discriminative ability of the dictionary atoms.

Algorithm 1. Reconstructive and discriminative dictionary learning

Task: Learn a reconstructive and discriminative dictionary from the training data set $X = [x_{11} \cdots x_{1N_1} \cdots x_{C1} \cdots x_{CN_C}]$.

Initialization: Set the dictionary $D^{(0)}$ with $\ell^2$-normalized columns; the atoms of the $i$th class in $D^{(0)}$ are randomly selected from the $i$th class's training samples. In general, the number of atoms per class is set to the number of training samples of that class. Set $T = 1$.

Repeat until the termination condition on Eq. (19) is reached:

Sparse coding phase: For each sample $x_{ij}$ ($i = 1,\dots,C$; $j = 1,\dots,N_i$), apply orthogonal matching pursuit (OMP) to compute the sparse coding vector
$$\alpha_{ij} = \arg\min_{\{\alpha_{ij}\}} \|x_{ij} - D\alpha_{ij}\|_2^2 \quad \text{s.t.} \quad \|\alpha_{ij}\|_0 \le \kappa.$$

Dictionary update phase: For each dictionary atom $d_{ij}$ ($i = 1,\dots,C$; $j = 1,\dots,K_i$) in $D^{(T-1)}$:
(1) Compute the same-class global reconstruction residual matrix $R_i^D$ by Eq. (9).
(2) Compute the same-class local reconstruction residual matrix $R_i^{D_i}$ by Eq. (10).
(3) Compute the other-class global reconstruction residual matrices $R_g^D$ ($g \ne i$) by Eq. (11).
(4) Compute the other-class local reconstruction residual matrices $R_g^{D_i}$ ($g \ne i$) by Eq. (12).
(5) Decompose $R_i^D\Omega^i_{m_{ij}}$, $R_i^{D_i}\Omega^i_{m_k}$, $R_g^D\Omega^g_{m_{ij}}$ and $R_g^{D_i}\Omega^g_{m_k}$ by SVD as in Eqs. (14)–(17).
(6) Seek the optimal vector $u^{D_i}_{i,m}$ as the update of $d_{ij}$ by Eq. (18).
(7) Update the same-class sparse coding coefficients: $(A_i)^{m_{ij}} \leftarrow \lambda^{D_i}_{i,m}(v^{D_i}_{i,m})^T$.
(8) Update the other-class sparse coding coefficients: $(A_g)^{m_{ij}} \leftarrow \lambda^{D_i}_{g,p}(v^{D_i}_{g,p})^T$ ($g \ne i$).

Set $T = T + 1$.
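The per-atom update of Algorithm 1 can be sketched in numpy as follows. This is a simplified illustration of Eqs. (14)–(19), not the author's implementation, with two stated shortcuts: the Ω selector is applied only through the non-zero support of the row $(A_i)^{m_{ij}}$, and the other-class residual matrices are decomposed without their own Ω selectors. All helper names are hypothetical.

```python
import numpy as np

def update_atom(R_same_global, R_same_local, R_other, A_i_row):
    """One RDDL atom update (sketch of Eqs. (14)-(18)).
    R_same_*: Eqs. (9)-(10) residuals; R_other: list of Eqs. (11)-(12)
    residual matrices for g != i; A_i_row: row m_ij of the class-i codes."""
    omega = np.flatnonzero(A_i_row)            # samples whose codes use d_ij (Omega)
    if omega.size == 0:
        return None                            # unused atom: caller may re-seed it
    U14, s14, _ = np.linalg.svd(R_same_global[:, omega], full_matrices=False)    # Eq. (14)
    U15, s15, V15t = np.linalg.svd(R_same_local[:, omega], full_matrices=False)  # Eq. (15)
    other_svds = [np.linalg.svd(R, full_matrices=False)[:2] for R in R_other]    # Eqs. (16)-(17)

    def eq18(m):                               # objective of Eq. (18) for candidate u_{i,m}
        u = U15[:, m]
        close = np.min(np.sum((u[:, None] - U14) ** 2, 0) / np.maximum(s14, 1e-12))
        far = sum(np.max(sg * np.sum((u[:, None] - Ug) ** 2, 0)) for Ug, sg in other_svds)
        return close - far

    m = min(range(s15.size), key=eq18)
    d_new = U15[:, m]                          # updated, unit-norm atom d_ij
    row_new = s15[m] * V15t[m]                 # lambda * v^T, to be scattered back
    return d_new, row_new, omega               # into columns omega of row m_ij of A_i

def eta_ratio(X_by_class, D_i, coded_by_class, i):
    """Eq. (19): same-class / other-class residual ratio used for termination.
    coded_by_class[g] holds delta_i(A_g), i.e. the class-i rows of A_g."""
    num = np.linalg.norm(X_by_class[i] - D_i @ coded_by_class[i]) ** 2
    den = sum(np.linalg.norm(X_by_class[g] - D_i @ coded_by_class[g]) ** 2
              for g in range(len(X_by_class)) if g != i)
    return num / den
```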
5. Experimental results

5.1. Data description

The data set used in the experiments consists of HRRPs measured from three aircraft: An-26, Jiang and Yark-42.

[Fig. 1. The projections of the target trajectories onto the ground plane. (a) An-26. (b) Jiang. (c) Yark-42.]
[Fig. 2. The HRRPs of the three airplanes (normalized amplitude versus range bin). (a) An-26. (b) Jiang. (c) Yark-42.]
The projections of the target trajectories onto the ground plane are illustrated in Fig. 1. The measured data for each aircraft are collected from several segments of its flying trajectory; the solid and dashed lines in Fig. 1 represent the different segments. The HRRPs of the three aircraft are illustrated in Fig. 2. For each aircraft, 260 HRRPs over a wide range of aspects are adopted. Since the SNR is large, the effect of noise in the measured data on recognition performance can be ignored. The SNR (in dB) is computed as

$$\mathrm{SNR} = 10\log_{10}\left(\frac{P_x}{P_N}\right) = 10\log_{10}\left(\frac{\|x\|^2}{n\,P_N}\right), \tag{20}$$

where $x$ is an HRRP, $P_x$ denotes the average power of $x$, and $P_N$ denotes the average power of the noise. In addition, each HRRP is preprocessed with the following steps before running the experiments:
(1) Normalize each HRRP, i.e., $\|x_{ij}\|_2 = 1$.
(2) Apply the Fast Fourier Transform (FFT) to each HRRP to achieve shift alignment, since the FFT amplitude is invariant to time shifts.
(3) Keep half of the FFT amplitude, exploiting its symmetry for real vectors; the preprocessed HRRP is then a 128-dimensional vector.

To decrease the computational complexity, we use principal component analysis (PCA) to reduce the dimension of the HRRPs. The PCA projection subspace is formed from the eigenvectors accounting for 98% of the total energy, which reduces the dimension of the HRRPs to 26. Besides, the sparse coding coefficients of the samples are computed using orthogonal matching pursuit (OMP). After dictionary training, the sparse representation classification criterion is used for classification.
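A minimal numpy/scikit-learn sketch of steps (1)–(3) and the PCA reduction follows. It assumes raw profiles stored row-wise with an even number of range cells (256 cells would give the paper's 128-dimensional vectors); the function name and data layout are illustrative, not from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def preprocess_hrrps(H):
    """Steps (1)-(3): l2-normalize, take the shift-invariant FFT amplitude,
    keep half the spectrum (symmetric for real signals). H is (num_profiles, cells)."""
    H = H / np.linalg.norm(H, axis=1, keepdims=True)   # step (1): ||x_ij||_2 = 1
    spec = np.abs(np.fft.fft(H, axis=1))               # step (2): |FFT| is shift invariant
    return spec[:, : spec.shape[1] // 2]               # step (3): e.g. 256 -> 128 dims

# PCA subspace keeping 98% of the total energy (26 dimensions in the paper):
# pca = PCA(n_components=0.98).fit(preprocess_hrrps(H_train))
# features = pca.transform(preprocess_hrrps(H_test))
```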
[Fig. 3. The average recognition rates of RDDLSRCC versus sparsity threshold for three cases. (a) Dictionary size K = 210; (b) dictionary size K = 390; (c) dictionary size K = 390 and SNR = 30 dB.]
5.2. Sparsity threshold

This experiment considers the effect of the sparsity threshold on the recognition rates. For each airplane, the first half of the HRRPs is used for training and the rest for testing. The sparsity threshold $\kappa$ ranges from 1 to 10, and three cases are considered: (a) dictionary size K = 210, i.e., 70 atoms per class; (b) dictionary size K = 390, i.e., 130 atoms per class; (c) dictionary size K = 390 with SNR = 30 dB. Fig. 3 shows the average recognition rates (ARR) of RDDLSRCC versus the sparsity threshold for the three cases. As seen in Fig. 3(a), the average recognition rate increases as $\kappa$ grows from 1 to 3 and decreases when $\kappa$ exceeds 3; similar conclusions can be drawn from Fig. 3(b) and (c). Hence $\kappa = 3$ is an appropriate sparsity threshold for the following experiments. Evidently, $\kappa = 3$ means that a linear combination of only three selected atoms best reconstructs the samples: for $\kappa < 3$ the model under-fits and loses information needed for sample reconstruction, while for $\kappa > 3$ it over-fits and may introduce spurious information. In both cases the reconstruction error increases for $\kappa \ne 3$, which degrades the recognition accuracy.

5.3. Dictionary size

In this experiment, we investigate the effect of the dictionary size on the recognition rates. The training and test data are the same as in the previous experiment. The dictionary size K is set to 150, 210, 270, 330 and 390. We also apply SRC [20], K-SVD [47], DDL [50], D-KSVD [48] and LC-KSVD [49] for comparison, with the parameters of these methods set experimentally. After dictionary training, the sparse representation classification criterion is used for classification. The average recognition rates of the six methods
versus dictionary size are illustrated in Fig. 4(a). It can be observed that the proposed method achieves better recognition performance than the other methods for all dictionary sizes, because it optimizes the dictionary according to the SRC criterion. In addition, we compare the dictionary training time of four methods (RDDLSRCC, DDL, D-KSVD and LC-KSVD) for different dictionary sizes, shown in Fig. 4(b). From Fig. 4(b), the training time of the proposed method is comparable with that of the other methods, because the sparse coding coefficients of the samples are updated using class-optimal SVD vectors of the class-reconstruction residual matrices, which accelerates convergence. All programs were run in MATLAB 2010a on a laptop with an i5-2410M CPU and 8 GB of memory.

[Fig. 4. Average recognition rates and training time for different dictionary sizes. (a) Average recognition rates of six methods versus dictionary size; (b) training time of four methods versus dictionary size.]

5.4. Target aspect variation

In this experiment, we consider the effect of target aspect variation on the recognition performance, again comparing with DDL, D-KSVD and LC-KSVD. Three subsets of samples containing 300, 540 and 780 HRRPs are selected, corresponding to 100, 180 and 260 HRRPs per target class, respectively. Obviously, the target aspect variation increases with the number of samples. For each subset, one half of the HRRPs is used for training and the rest for testing. The parameters of the methods are set experimentally to give the best result in each experiment. The recognition results of the four methods for the three subsets are reported in Tables 1–4. It can be observed that the recognition rates of all four methods decrease as the subset size grows from 300 to 780, i.e., as the target aspect variation becomes large. However, RDDLSRCC still outperforms the other methods: for the subset of size 540, the average recognition rates of RDDLSRCC, DDL, D-KSVD and LC-KSVD are 93%, 90%, 89% and 91%, respectively. At SNR = 30 dB, the average recognition rates of the four methods for the three subsets are given in Table 5, from which similar conclusions can be drawn. This demonstrates that RDDLSRCC is more robust to target aspect variation than the other methods, because it incorporates the discriminative power of the dictionary atoms during each atom update, reducing the same-class reconstruction residual and increasing the other-class reconstruction residual. Therefore, it still achieves high recognition accuracy when the within-class variation of the HRRPs becomes large.
Table 1
Confusion matrix and average recognition rates (%) of RDDLSRCC for the three subsets.

            Subset size 300          Subset size 540          Subset size 780
          An-26  Jiang  Yark-42    An-26  Jiang  Yark-42    An-26  Jiang  Yark-42
An-26       98      0       2        96      1       3        95      2       3
Jiang        7     90       3         7     88       5        10     86       4
Yark-42      2      3      95         1      4      95         2      5      93
ARR (%)         94                       93                       91
Table 2
Confusion matrix and average recognition rates (%) of DDL for the three subsets.

            Subset size 300          Subset size 540          Subset size 780
          An-26  Jiang  Yark-42    An-26  Jiang  Yark-42    An-26  Jiang  Yark-42
An-26       97      1       2        95      1       4        94      4       2
Jiang       10     87       3         9     85       6        12     81       7
Yark-42      2      5      93         2      7      91         4      5      91
ARR (%)         92                       90                       89
Table 3
Confusion matrix and average recognition rates (%) of D-KSVD for the three subsets.

            Subset size 300          Subset size 540          Subset size 780
          An-26  Jiang  Yark-42    An-26  Jiang  Yark-42    An-26  Jiang  Yark-42
An-26       97      0       3        96      1       3        93      5       2
Jiang        7     88       5        12     84       4        13     83       4
Yark-42      3      8      89         2     10      88         2     12      86
ARR (%)         91                       89                       87
Table 4
Confusion matrix and average recognition rates (%) of LC-KSVD for the three subsets.

            Subset size 300          Subset size 540          Subset size 780
          An-26  Jiang  Yark-42    An-26  Jiang  Yark-42    An-26  Jiang  Yark-42
An-26       98      0       2        97      3       0        93      5       2
Jiang        8     87       5         9     86       5         8     82      10
Yark-42      3      5      92         4      5      91         4      7      89
ARR (%)         92                       91                       88
Table 5
Average recognition rates (%) of the four methods for the three subsets at SNR = 30 dB.

            Subset 300   Subset 540   Subset 780
RDDLSRCC        93           91           90
DDL             91           88           86
D-KSVD          90           88           87
LC-KSVD         91           89           88
5.5. Effect of noise

This experiment evaluates the performance of RDDLSRCC, DDL, D-KSVD and LC-KSVD at SNR = 5 dB, 10 dB, 15 dB, 20 dB and 25 dB. For a given SNR level, the average noise power, i.e., the noise variance $\sigma_n^2$, is computed from Eq. (20), and zero-mean Gaussian noise with variance $\sigma_n^2$ is added to the corresponding HRRP. The training and testing data are the same as in the experiment of Section 5.2. The parameters of the methods are set experimentally, so the parameter values differ across data subsets and SNRs. For each SNR, the recognition results are averaged over 50 runs. Fig. 5 shows the average recognition rates of the four methods versus SNR. As can be seen, RDDLSRCC achieves higher recognition rates than the other methods as the SNR increases from 5 dB to 25 dB. At SNR = 5 dB, the average recognition rates of all methods fall below 70%, which suggests that the proposed method is not robust to very low SNR; in practice, however, the SNR is above 13 dB, because it is difficult even to detect a radar target below 13 dB. At SNR = 10 dB, the average recognition rates of RDDLSRCC, DDL, D-KSVD and LC-KSVD are 84%, 79%, 81% and 82%, respectively. This indicates that the proposed method is more robust to noise than DDL, D-KSVD and LC-KSVD, because it considers the reconstructive power of the dictionary atoms while enhancing their discriminative power.

[Fig. 5. The average recognition rates of four methods versus SNR.]
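The noise model of this experiment follows directly from Eq. (20): given a target SNR, solve for the noise power and draw zero-mean Gaussian noise. A small numpy sketch (a hypothetical helper, not the author's code):

```python
import numpy as np

def add_noise_at_snr(x, snr_db, rng=None):
    """Corrupt an HRRP x with zero-mean Gaussian noise whose variance
    sigma_n^2 is chosen so that Eq. (20) yields the prescribed SNR."""
    rng = np.random.default_rng() if rng is None else rng
    p_x = np.sum(x ** 2) / x.size               # average signal power ||x||^2 / n
    sigma2 = p_x / 10.0 ** (snr_db / 10.0)      # Eq. (20) solved for the noise power
    return x + rng.normal(0.0, np.sqrt(sigma2), size=x.shape)
```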
6. Conclusions

This paper proposes a novel dictionary learning algorithm for radar target recognition using HRRPs, namely reconstructive and discriminative dictionary learning based on the sparse representation classification criterion (RDDLSRCC). RDDLSRCC has two main advantages. First, it accounts for the discriminative power of the atoms while preserving their same-class reconstructive ability and reducing their reconstruction contribution to other classes' samples, by constructing the objective function from the sparse representation classification criterion. Second, the sparse coding coefficients of the samples are updated using class-optimal SVD vectors of the class-reconstruction residual matrices, thereby accelerating convergence. Extensive experimental results on measured data show that the proposed method outperforms other methods such as D-KSVD and DDL.
Acknowledgments

The author would like to thank the radar laboratory of the University of Electronic Science and Technology of China (UESTC) for providing the measured data, and Prof. Qilian Liang of the Wireless Communication Lab at the University of Texas at Arlington (UTA) for his help and advice.
References

[1] H.J. Li, S.H. Yang, Using range profiles as feature vectors to identify aerospace objects, IEEE Trans. Antennas Propag. 41 (1993) 261–268.
[2] S.K. Wong, Non-cooperative target recognition in the frequency domain, IEE Proc. Radar Sonar Navig. 151 (2004) 77–84.
[3] A. Zyweck, R.E. Bogner, Radar target classification of commercial aircraft, IEEE Trans. Aerosp. Electron. Syst. 32 (1996) 598–606.
[4] J.S. Fu, X.H. Deng, W.L. Yang, Radar HRRP recognition based on discriminant information analysis, WSEAS Trans. Inf. Sci. Appl. 8 (4) (2011) 185–201.
[5] K.B. Eom, R. Chellappa, Noncooperative target classification using hierarchical modeling of high-range resolution radar signatures, IEEE Trans. Signal Process. 45 (1997) 2318–2326.
[6] A.K. Shaw, R. Vasgist, R. Williams, HRR-ATR using eigen-templates with observation in unknown target scenario, in: Proceedings of the SPIE, vol. 4053, 2000, pp. 467–478.
[7] S.P. Jacobs, J.A. Sullivan, Automatic target recognition using sequences of high range resolution radar range profiles, IEEE Trans. Aerosp. Electron. Syst. 36 (2000) 364–381.
[8] R. Wu, Q. Gao, J. Liu, H. Gu, ATR scheme based on 1-D HRR profiles, Electron. Lett. 38 (2002) 1586–1587.
[9] X.J. Liao, P. Runkle, L. Carin, Identification of ground targets from sequential high-range-resolution radar signatures, IEEE Trans. Aerosp. Electron. Syst. 38 (2002) 1230–1242.
[10] K.T. Kim, D.K. Seo, H.T. Kim, Efficient radar target recognition using the MUSIC algorithm and invariant feature, IEEE Trans. Antennas Propag. 50 (2002) 325–337.
[11] J. Zwart, R. Heiden, S. Gelsema, F. Groen, Fast translation invariant classification of HRR range profiles in a zero phase representation, IEE Proc. Radar Sonar Navig. 150 (2003) 411–418.
[12] R.A. Mitchell, J.J. Westerkamp, Robust statistical feature based aircraft identification, IEEE Trans. Aerosp. Electron. Syst. 35 (1999) 1077–1093.
[13] L. Du, H.W. Liu, Z. Bao, Radar HRRP statistical recognition: parametric model and model selection, IEEE Trans. Signal Process. 56 (5) (2008) 1931–1944.
[14] L. Shi, P.H. Wang, H.W. Liu, L. Xu, Z. Bao, Radar HRRP statistical recognition with local factor analysis by automatic Bayesian Ying-Yang harmony learning, IEEE Trans. Signal Process. 59 (2) (2011) 610–617.
[15] C.Y. Wang, J.L. Xie, The T-mixture model approach for radar HRRP target recognition, Int. J. Comput. Electr. Eng. 5 (5) (2013) 500–503.
[16] D.E. Nelson, J.A. Starzyk, D.D. Ensley, Iterated wavelet transformation and discrimination for HRR radar target recognition, IEEE Trans. Syst. Man Cybern. Part A: Syst. Hum. 33 (2003) 52–57.
[17] B.M. Huther, S.C. Gustafson, R.P. Broussad, Wavelet preprocessing for high range resolution radar classification, IEEE Trans. Aerosp. Electron. Syst. 37 (2001) 1321–1331.
[18] M. Li, G.J. Zhou, B. Zhao, T.F. Quan, Sparse representation denoising for radar high resolution range profiling, Int. J. Antennas Propag. 2014 (2014) 1–8.
[19] Y. Shi, X.D. Zhang, A Gabor atom network for signal classification with application in radar target recognition, IEEE Trans. Signal Process. 49 (2001) 2994–3004.
[20] J. Wright, A. Yang, S. Sastry, Y. Ma, Robust face recognition via sparse representation, IEEE Trans. Pattern Anal. Mach. Intell. 31 (2) (2009) 210–227.
[21] J. Yang, D. Chu, L. Zhang, Y. Xu, J. Yang, Sparse representation classifier steered discriminative projection with applications to face recognition, IEEE Trans. Neural Netw. Learn. Syst. 24 (7) (2013) 1023–1035.
[22] L. Zhuang, H. Gao, Z. Lin, Y. Ma, X. Zhang, N. Yu, Non-negative low rank and sparse graph for semi-supervised learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 2328–2335.
[23] C. Liu, Y. Yang, Y. Chen, Human action recognition using sparse representation, in: Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems, 2009, pp. 184–188.
[24] X.R. Zhang, H. Yang, L.C. Jiao, Y. Yang, F. Dong, Laplacian group sparse modeling of human actions, Pattern Recognit. 47 (2014) 2689–2701.
[25] B.A. Olshausen, D.J. Field, Sparse coding with an overcomplete basis set: a strategy employed by V1? Vis. Res. 37 (23) (1997) 3311–3325.
[26] M.S. Lewicki, T.J. Sejnowski, Learning overcomplete representations, Neural Comput. 12 (2) (2000) 337–365.
[27] K. Engan, S.O. Aase, J.H. Hakon-Husoy, Method of optimal directions for frame design, in: Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 5, 1999, pp. 2443–2446.
[28] K. Engan, B.D. Rao, K. Kreutz-Delgado, Frame design using FOCUSS with method of optimal directions (MOD), in: Proceedings of the Norwegian Signal Processing Symposium, 1999, pp. 65–69.
[29] M. Yang, L. Zhang, J. Yang, D. Zhang, Metaface learning for sparse representation based face recognition, in: Proceedings of the 2010 17th IEEE International Conference on Image Processing (ICIP), 2010, pp. 1601–1604.
[30] M. Yang, L. Zhang, J. Yang, D. Zhang, Robust sparse coding for face recognition, in: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 625–632.
[31] C.P. Wei, Y.W. Chao, Y.R. Yeh, C.F. Wang, Locality-sensitive dictionary learning for sparse representation based classification, Pattern Recognit. 46 (5) (2013) 1277–1287.
[32] M. Zheng, J. Bu, C. Chen, C. Wang, L. Zhang, G. Qiu, D. Cai, Graph regularized sparse coding for image representation, IEEE Trans. Image Process. 20 (5) (2011) 1327–1336.
[33] T. Zhou, D. Tao, Double shrinking sparse dimension reduction, IEEE Trans. Image Process. 22 (1) (2013) 244–257.
[34] J. Gui, D. Tao, Z. Sun, Y. Luo, X. You, Y.Y. Tang, Group sparse multiview patch alignment framework with view consistency for image classification, IEEE Trans. Image Process. 23 (7) (2014) 3126–3137.
[35] M. Yang, L. Zhang, X. Feng, D. Zhang, Fisher discrimination dictionary learning for sparse representation, in: Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), 2011, pp. 543–550.
[36] K. Huang, S. Aviyente, Sparse representation for signal classification, Adv. Neural Inf. Process. Syst. 19 (2007) 609.
[37] J. Lu, Y.P. Lu, G. Wang, G. Yang, Image-to-set face recognition using locality repulsion projections and sparse reconstruction-based similarity measure, IEEE Trans. Circuits Syst. Video Technol. 23 (6) (2013) 1070–1080.
[38] D.S. Pham, S. Venkatesh, Joint learning and dictionary construction for pattern recognition, in: Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), 2008, pp. 1–8.
[39] J. Mairal, F. Bach, J. Ponce, G. Sapiro, A. Zisserman, Discriminative learned dictionaries for local image analysis, in: Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), 2008, pp. 1–8.
[40] X.C. Lian, Z. Li, B.L. Lu, L. Zhang, Max-margin dictionary learning for multiclass image categorization, in: K. Daniilidis, P. Maragos, N. Paragios (Eds.), Computer Vision – ECCV 2010, Springer-Verlag, Berlin, Heidelberg, 2010, pp. 157–170.
[41] L. Qiao, S. Chen, X. Tan, Sparsity preserving projections with applications to face recognition, Pattern Recognit. 43 (1) (2010) 331–341.
[42] J. Yang, K. Yu, T. Huang, Supervised translation-invariant sparse coding, in: Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 3517–3524.
[43] H. Zhang, Y. Zhang, T.S. Huang, Simultaneous discriminative projection and dictionary learning for sparse representation based classification, Pattern Recognit. 46 (1) (2013) 346–354.
[44] Z. Feng, M. Yang, L. Zhang, Y. Liu, D. Zhang, Joint discriminative dimensionality reduction and dictionary learning for face recognition, Pattern Recognit. 46 (8) (2013) 2134–2143.
[45] Z. Jiang, G. Zhang, L. Davis, Submodular dictionary learning for sparse coding, in: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 3418–3425.
[46] L. Ma, C. Wang, B. Xiao, W. Zhou, Sparse representation for face recognition based on discriminative low-rank dictionary learning, in: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 2586–2593.
[47] M. Aharon, M. Elad, A. Bruckstein, K-SVD: an algorithm for designing over-complete dictionaries for sparse representation, IEEE Trans. Signal Process. 54 (11) (2006) 4311–4322.
[48] Q. Zhang, B. Li, Discriminative K-SVD for dictionary learning in face recognition, in: Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 2691–2698.
[49] Z. Jiang, Z. Lin, L. Davis, Learning a discriminative dictionary for sparse coding via label consistent K-SVD, in: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 1697–1704.
[50] X. Kong, K. Li, J.J. Cao, Q. Yang, W. Liu, HEp-2 cell pattern classification with discriminative dictionary learning, Pattern Recognit. 47 (2014) 2379–2388.