Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga–Koontz Transform

Hamidullah Binol a,*, Shuhrat Ochilov b, Mohammad S. Alam b, Abdullah Bal a

a Department of Electronics and Communications Engineering, Yildiz Technical University, Istanbul 34220, Turkey
b Department of Electrical and Computer Engineering, University of South Alabama, Mobile, AL 36688, USA

* Corresponding author. E-mail address: [email protected] (H. Binol).
Article history: Received 20 December 2015; received in revised form 16 February 2016; accepted 8 March 2016.

Keywords: Dimensionality reduction; Fukunaga–Koontz Transform; Hyperspectral imagery; Kernel methods; Tuned basis functions

Abstract

Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving the features that belong to the target is crucial to the performance of target detection/recognition techniques. A supervised band reduction technique based on the Fukunaga–Koontz Transform (FKT) can meet this requirement. FKT achieves feature selection by transforming the data into a new space in which the feature classes have complementary eigenvalues. Analyzing the resulting eigenvectors for the two classes, target and background clutter, enables target-oriented band reduction, since each basis function best represents the target class while carrying the least information about the background class. By selecting the few eigenvectors that are most relevant to the target class, the dimension of hyperspectral data can be reduced, which offers significant advantages for near-real-time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. We therefore propose a kernel FKT (KFKT) for target-oriented band reduction. The performance of the proposed KFKT-based target-oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral data sets, and the results are reported.
1. Introduction

Hyperspectral imaging (HSI) provides very fine spectral discrimination among different substances, yielding a high signal-to-clutter ratio compared to broadband imagery [1,2]. High-spectral-resolution sensors may be used for classification when spatial target detection techniques are not adequate [3]. Such techniques require hundreds of wavelength bands, and each of these bands conveys specific information about the target. Unfortunately, this tremendous volume of data makes real-time exploitation of HSI extremely difficult and challenging [4]. Dimensionality reduction is therefore expected to improve detector performance. Several unsupervised [5–7], supervised [8], and semi-supervised [9] techniques have been proposed in the literature for dimensionality reduction. These techniques may omit valuable target information during band reduction, information which can be very important for detection or classification problems. One of the most widely used reduction techniques is principal component analysis (PCA), where the main issue is analyzing and finding the dominant directions of signal variation [10]. The filters used for reduction are then selected so as to best approximate the main principal component vectors [11]. In contrast, the Fukunaga–Koontz Transform (FKT) provides the best eigenvectors for discriminating between the target and the background clutter. Once the filters are selected from these eigenvectors, the data are reduced while the features of the target class are preserved and enhanced. FKT is also known as the "best" low-rank approximation to quadratic discriminant analysis [12]. This property is important for realizing a low-rank approximation in classification applications, since the number of basis functions determines the complexity of the algorithm [13,14]. The drawback of FKT is that it only considers second-order statistics. To overcome this drawback, some researchers have extended the FKT to its nonlinear version (KFKT) [15,16] using kernel machines. Our framework consists of applying KFKT to the high-dimensional data for feature extraction. For comparison, we classified the reduced data employing 10-fold discriminant analysis (DA) [17]. Experimental results on two real hyperspectral scenes demonstrate that the suggested target-oriented dimension reduction approach outperforms its linear version and PCA.
The rest of the paper is organized as follows. The following section describes the formulation of KFKT for addressing the dimensionality reduction problem in HSI. In Section 3, we present a KFKT-based procedure for performing dimensionality reduction. Comparative performance analyses are presented in Section 4, and concluding remarks are given in Section 5.
2. Review of Fukunaga–Koontz Transform

In this section, we briefly summarize the procedure for calculating the eigenspace of the training image sets and introduce the notation used throughout the paper. HSI data can be viewed as a cube comprising two spatial dimensions and one spectral dimension. The spectral dimension contains many contiguous data layers of high spectral resolution. Each pixel of a hyperspectral cube corresponds to a spectral reflectance vector in the space $\mathbb{R}^d$, where $d$ is the number of spectral channels. For the training stage, we collect spectral vectors from two different classes, target and background clutter. Fig. 1 illustrates conceptually how multiple spectral bands are collected for each pixel and converted into a characteristic reflectance curve for each material. Let $T = [t_1, t_2, \ldots, t_M]$ and $C = [c_1, c_2, \ldots, c_M]$ be two $d \times M$ matrices, where $M$ is the number of training pixels for target and clutter, respectively. These sets contain normalized image vectors in which the mean has been subtracted from each spectral vector.

2.1. Linear model

The covariance matrices can be calculated for the target and background classes:

$\Sigma_T = TT^T$ and $\Sigma_C = CC^T$   (1)

where the superscript $T$ represents the transpose operation. The sum of these matrices, $\Sigma_T + \Sigma_C$, is positive and can be factorized as follows:

$\Sigma = \Sigma_T + \Sigma_C = V \Lambda V^T$   (2)

where $V$ is the eigenvector matrix of $\Sigma$ and $\Lambda$ is a diagonal matrix whose diagonal elements are the eigenvalues. By combining these matrices, we can define a transformation operator $P = V \Lambda^{-1/2}$. Using the transformation operator $P$, the input vectors can be transformed by

$\hat{T} = P^T T$ and $\hat{C} = P^T C$   (3)

and the sum of the covariance matrices of the transformed data becomes

$P^T (\Sigma_T + \Sigma_C) P = I$   (4)

where $I$ is the identity matrix. Thus, the corresponding covariance matrices for each class can be calculated easily:

$\Sigma_{\hat{T}} = \hat{T}\hat{T}^T = P^T TT^T P = P^T \Sigma_T P$ and $\Sigma_{\hat{C}} = \hat{C}\hat{C}^T = P^T CC^T P = P^T \Sigma_C P$   (5)

where $\Sigma_{\hat{T}}$ denotes the target class and $\Sigma_{\hat{C}}$ the clutter. From Eq. (5), it is obvious that

$\hat{\Sigma} = \Sigma_{\hat{T}} + \Sigma_{\hat{C}} = P^T (\Sigma_T + \Sigma_C) P = I$   (6)

Assuming $\theta_i$ is an eigenvector of $\Sigma_{\hat{T}}$ with corresponding eigenvalue $\lambda_i$, the following equation can be written:

$\Sigma_{\hat{T}} \theta_i = \lambda_i \theta_i$   (7)

Since Eq. (6) gives $\Sigma_{\hat{T}} = I - \Sigma_{\hat{C}}$, substituting into Eq. (7) yields

$(I - \Sigma_{\hat{C}}) \theta_i = \lambda_i \theta_i$ and $\Sigma_{\hat{C}} \theta_i = (1 - \lambda_i) \theta_i$   (8)

From Eq. (8), $\Sigma_{\hat{C}}$ has the same eigenvectors as $\Sigma_{\hat{T}}$, but where the eigenvalue of $\Sigma_{\hat{T}}$ is $\lambda_i$, the corresponding eigenvalue of $\Sigma_{\hat{C}}$ is $(1 - \lambda_i)$; that is, the two classes share complementary eigenvalues. This property of the eigenvalues can be used to discriminate the two classes. Fig. 2 shows the complementary eigenvalues produced in experiments using the training set of the Catalca data, the details of which are described in Section 4. The tuned basis functions are defined according to the distribution of the complementary eigenvalues. First, the eigenvalues are sorted for each class. Then, the dominant eigenvectors corresponding to the dominant eigenvalues of $\Sigma_{\hat{T}}$ are used to construct the tuned basis functions. Let the columns of $\Theta = [\theta_1, \theta_2, \ldots, \theta_m]$ be the eigenvectors of $\Sigma_{\hat{T}}$, arranged so that the corresponding eigenvalues are in descending order, $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m \geq 0$. Assuming the first $m_1$ eigenvectors best represent class $\Sigma_{\hat{T}}$, the tuned basis vectors for the target class are

$\Theta_1 = [\theta_1, \theta_2, \ldots, \theta_{m_1}]$   (9)
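To make the linear transform concrete, the following numpy sketch (our illustration of Eqs. (1)–(9) under the paper's definitions, not the authors' code; the function name and the small-eigenvalue guard are our own choices) computes $P$ and the tuned target basis $\Theta_1$ from the mean-removed training matrices T and C:

```python
import numpy as np

def fkt_tuned_basis(T, C, m1):
    """Linear FKT of Section 2.1, a minimal sketch.

    T, C : d x M arrays of mean-removed target / clutter spectra.
    m1   : number of dominant target eigenvectors to keep, Eq. (9).
    """
    Sigma_T = T @ T.T                              # Eq. (1)
    Sigma_C = C @ C.T
    lam, V = np.linalg.eigh(Sigma_T + Sigma_C)     # Eq. (2)
    lam = np.clip(lam, 1e-12, None)                # guard against numerically zero eigenvalues
    P = V * lam ** -0.5                            # P = V Lambda^{-1/2} (scales each column)
    Sigma_T_hat = P.T @ Sigma_T @ P                # Eq. (5); note Sigma_T_hat + Sigma_C_hat = I, Eq. (6)
    mu, Theta = np.linalg.eigh(Sigma_T_hat)        # Eq. (7); clutter eigenvalues are 1 - mu, Eq. (8)
    order = np.argsort(mu)[::-1]                   # descending order, as above Eq. (9)
    Theta1 = Theta[:, order[:m1]]                  # tuned target basis, Eq. (9)
    return P, Theta1
```

The product P @ Theta1 then realizes the reduction operator W used later in Section 3.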
Fig. 1. Hyperspectral image exploitation concept.
Similarly, the remaining $m_2$ eigenvectors are used to construct the tuned basis functions for class $\Sigma_{\hat{C}}$, which are given by

$\Theta_2 = [\theta_m, \theta_{m-1}, \ldots, \theta_{m-m_2+1}]$   (10)

The theoretical foundation and classification practice of FKT may be found in [13,15].

2.2. Nonlinear model with kernel machines

Although the FKT is a powerful technique for two-pattern discrimination problems, it extracts only second-order correlations and therefore cannot capture the higher-order statistical properties of the input data. To overcome this drawback, some researchers [15] have extended the FKT to a nonlinear version. Through a mapping procedure $\phi$ [18], the input data $x \in \mathbb{R}^d$ are projected into a high-dimensional feature (Hilbert) space, $\phi(x) \in F$. With a suitable mapping function, the nonlinear discrimination problem becomes a linearly separable classification model in the feature space.
In most applications, the classification algorithm needs only the dot products in $F$ (i.e., $\langle \phi(x_i), \phi(x_j) \rangle$) rather than an explicit form of $\phi$. It can be shown that there is a function, called the kernel function, that evaluates this inner product directly:

$K(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$   (11)

Several kernel functions have been proposed in the literature [19]. The most popular of them is the Gaussian radial basis function (RBF), formulated as

$K(x_i, x_j) = \exp(-\lVert x_i - x_j \rVert^2 / 2\sigma^2)$, $\sigma \in \mathbb{R}^+$   (12)

After computing the kernel matrices of the target and clutter vectors, the nonlinear FKT, called kernel FKT (KFKT), proceeds as in the classical FKT. Let $\Sigma^K$ be the sum of the input kernel matrices $\Sigma_T^K$ and $\Sigma_C^K$; the matrix factorization is then accomplished by

$\Sigma^K = \Sigma_T^K + \Sigma_C^K = \Phi \Lambda^K \Phi^T$   (13)

where the columns of $\Phi$ are the eigenvectors and the corresponding elements of $\Lambda^K$ are the eigenvalues. Although the sizes of the training sets for target and clutter need not be equal in FKT, they must be equal for Eq. (13). The rest of the KFKT algorithm is similar to the conventional one, except for its application to the dimensionality reduction problem described in the next section.
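As a worked sketch of Eqs. (12)–(13) (our illustration; we take the class kernel matrices $\Sigma_T^K$ and $\Sigma_C^K$ to be the RBF Gram matrices of the two training sets, which is our reading of [15,16] rather than a verbatim reproduction):

```python
import numpy as np

def rbf_kernel(X, Z, sigma):
    """Gaussian RBF kernel of Eq. (12) between the columns of the
    d x M matrix X and the d x N matrix Z; returns an M x N matrix."""
    sq = (np.sum(X**2, axis=0)[:, None]
          + np.sum(Z**2, axis=0)[None, :]
          - 2.0 * X.T @ Z)                     # ||x_i - z_j||^2 for all pairs
    return np.exp(-sq / (2.0 * sigma**2))

# Toy demonstration with random "spectra" (d bands, M samples per class);
# per the remark after Eq. (13), both classes use the same M.
rng = np.random.default_rng(0)
d, M, sigma = 100, 50, 0.6                     # sigma: arbitrary demo value
T = rng.normal(size=(d, M))                    # stand-in target training matrix
C = rng.normal(size=(d, M))                    # stand-in clutter training matrix
K_T = rbf_kernel(T, T, sigma)                  # Sigma_T^K
K_C = rbf_kernel(C, C, sigma)                  # Sigma_C^K
lamK, Phi = np.linalg.eigh(K_T + K_C)          # Eq. (13)
```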
3. Dimensionality reduction

As mentioned in the previous section, we generally deal with the $d$-dimensional vectors of the hyperspectral image cube corresponding to each pixel $z$; we call these pixel vectors. Thus, our task is to find a transformation matrix $W = [w_1, w_2, \cdots, w_l]$ that projects the data into $l$ dimensions, where $l < d$. Accordingly, we obtain a reduction using the following transform:

$z' = zW$   (14)

Fig. 2. Complementary eigenvalues of target and clutter for the training part of the Catalca data set.

In the PCA case, $W$ is obtained from the eigenvectors that represent the data in a few principal components by projecting it along the directions of largest data variance [11]. However, for small targets, variance may not be a good criterion for detection, and the target information may be sacrificed as a result of compression [20].
Fig. 3. Subimages of the Catalca scene. (a) Training set, including Vehicle 1 and background clutter. (b) Testing set, consisting of Vehicle 2 and background clutter. False-color composite images obtained from bands {R:85, G:52, B:19}.
In order to achieve target-oriented band reduction via FKT, we project the data using tuned basis functions consisting of the $l$ dominant target eigenvectors, resulting in the following transform matrix:

$W = P \Theta_1$   (15)

where $\Theta_1$ is the low-rank tuned basis matrix that best represents the target class and $P$ is the transform operator. Once this transform is applied to the hyperspectral cube, the number of bands in the final image cube is equal to the number $l$ of selected eigenvectors.
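Concretely, the projection can be applied to every pixel at once by reshaping the cube; the sketch below assumes P and Theta1 from the Section 2.1 sketch, and the helper name reduce_cube is ours:

```python
import numpy as np

def reduce_cube(cube, P, Theta1):
    """Apply the target-oriented reduction of Eqs. (14)-(15) to a whole
    hyperspectral cube (a sketch; P, Theta1 as returned by fkt_tuned_basis).

    cube : H x W x d array, mean-removed like the training data.
    Returns an H x W x l cube, where l = Theta1.shape[1].
    """
    Wmat = P @ Theta1                    # Eq. (15): W = P Theta_1, shape d x l
    h, w, d = cube.shape
    pixels = cube.reshape(-1, d)         # one pixel vector z per row
    reduced = pixels @ Wmat              # Eq. (14): z' = zW for every pixel
    return reduced.reshape(h, w, Wmat.shape[1])
```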
In KFKT, $\Sigma_T^K$ is first centralized by

$\tilde{\Sigma}_T^K = \Sigma_T^K - I_M \Sigma_T^K - \Sigma_T^K I_M + I_M \Sigma_T^K I_M$   (16)

where $I_M$ is the $M \times M$ matrix whose elements all equal $1/M$. Based on the $l$ largest eigenvalues $(\lambda_1^K, \lambda_2^K, \ldots, \lambda_l^K)$ of $\tilde{\Sigma}_T^K$, the corresponding eigenvectors $(\theta_1^K, \theta_2^K, \ldots, \theta_l^K)$ are selected. For the input vectors $[t_1, t_2, \ldots, t_M]$ of the target class, the $i$th feature ($f_i$, $i = 1, \ldots, l$) of a testing vector $z$ is then obtained as

$f_i = \frac{1}{\sqrt{\lambda_i^K}} (\theta_i^K)^T [K(t_1, z), K(t_2, z), \ldots, K(t_M, z)]^T$   (17)

where $\theta_i^K$ is the eigenvector of $\tilde{\Sigma}_T^K$ corresponding to the eigenvalue $\lambda_i^K$. At the end of this process, the reduced (extracted) features are contained in $f = [f_1, f_2, \cdots, f_l]$.
Fig. 4. Subimages of the Pavia University scene. (a) Training set. (b) Reference map of the training set, containing asphalt and bricks. (c) Testing set. (d) Reference map of the test set, containing asphalt and bricks. False-color composite images obtained from bands {R:85, G:52, B:19}.
The details of the KFKT-based dimensionality reduction technique are as follows.

Input: A hyperspectral scene consisting of $N$ spectral vectors $x_i \in \mathbb{R}^d$, and $M$ training images for each group, target and background clutter.
Output: Projected data $F = [f_1, f_2, \cdots, f_N]$, where $f_i \in \mathbb{R}^l$.
Procedure:
Step 1. Select a kernel function and its parameter(s), if any, and calculate the kernel matrices of the target and clutter training images, i.e., $\Sigma_T^K$ and $\Sigma_C^K$ in Eq. (13). Then centralize $\Sigma_T^K$ via Eq. (16).
Step 2. Perform the eigendecomposition of $\tilde{\Sigma}_T^K$ and obtain its eigenvectors $\theta_i^K$ and eigenvalues $\lambda_i^K$. Determine the dominant eigenvectors that constitute the subspace representing the target class.
Step 3. For all spectral vectors $z_i \in \mathbb{R}^d$ of the scene, compute the feature vectors using Eq. (17). The reduced features are thereby obtained.
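A compact sketch of Steps 1–3 for the target subspace is given below (our illustration; rbf_kernel is repeated from the Section 2.2 sketch so the snippet stands alone, and the eigenvalue guard is ours):

```python
import numpy as np

def rbf_kernel(X, Z, sigma):                       # Eq. (12), as in Section 2.2
    sq = (np.sum(X**2, axis=0)[:, None] + np.sum(Z**2, axis=0)[None, :]
          - 2.0 * X.T @ Z)
    return np.exp(-sq / (2.0 * sigma**2))

def kfkt_features(T, Z, sigma, l):
    """Steps 1-3 for the target side: centralize the target kernel matrix,
    keep the l dominant eigenpairs, and evaluate Eq. (17) for every pixel.

    T : d x M target training matrix;  Z : d x N matrix of scene pixels.
    Returns an N x l array, one reduced feature vector per pixel.
    """
    M = T.shape[1]
    K_T = rbf_kernel(T, T, sigma)                  # target kernel matrix (Step 1)
    I_M = np.full((M, M), 1.0 / M)                 # the 1/M matrix of Eq. (16)
    K_c = K_T - I_M @ K_T - K_T @ I_M + I_M @ K_T @ I_M   # Eq. (16)
    lam, theta = np.linalg.eigh(K_c)               # Step 2
    order = np.argsort(lam)[::-1][:l]              # l largest eigenvalues
    lam = np.clip(lam[order], 1e-12, None)         # guard against numerical zeros
    theta = theta[:, order]
    K_z = rbf_kernel(T, Z, sigma)                  # columns: [K(t_1,z), ..., K(t_M,z)]^T
    return ((theta / np.sqrt(lam)).T @ K_z).T      # Eq. (17), Step 3
```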
4. Experimental results

4.1. Data sets and parameter settings

We chose two hyperspectral data sets. (1) The Catalca data set (June 7, 2015; Catalca, Istanbul, Turkey), acquired by the SPECIM sensor. The image dimensions are 810 x 1091 x 196, with spectral bands spanning the wavelength range 0.4–1 μm. Thirty-six channels were removed due to noise, and the remaining 160 spectral bands are processed. We used two 140 x 100 pixel subimages of the Catalca data, a training scene and a testing scene, containing two similar targets, Vehicle 1 (V1) and Vehicle 2 (V2) (see Fig. 3). (2) The Pavia University data set (Engineering School at the University of Pavia, Pavia, Italy), acquired by the Reflective Optics System Imaging Spectrometer (ROSIS-03) airborne optical sensor. The image dimensions are 610 x 340 x 115, with spectral bands spanning the wavelength range 0.43–0.86 μm. After removing noisy bands, 103 of the 115 bands are used. For evaluation purposes, two 200 x 150 pixel hyperspectral images, a training scene and a testing scene, are segmented, containing two man-made materials, asphalt and bricks. Three-channel color composite images of the training and testing subscenes of the Pavia University data and the corresponding ground-truth information are given in Fig. 4. The numbers of training and test samples employed for each subimage are provided in Table 1.

We utilize three metrics for comparing the techniques: accuracy, the ratio of correctly classified samples to the total number of samples; recall, the ratio of correctly detected targets to the number of test targets; and the kappa coefficient [21]. The kappa coefficient is a value less than or equal to 1 that quantifies the agreement of a classification; a value of 1 implies perfect agreement and 0 implies chance agreement. The kappa coefficient can be calculated from the elements of a classical confusion matrix:
$\text{kappa} = \frac{\text{total accuracy} - \text{random accuracy}}{1 - \text{random accuracy}}$   (18)

where the total accuracy is simply the sum of the confusion matrix diagonal divided by the total number of items:

$\text{total accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$   (19)

In Eq. (19), TP is the number of true positives, FN the number of false negatives, TN the number of true negatives, and FP the number of false positives. The random accuracy is the sum, over the classes, of the products of the actual and observed probabilities of each class. In terms of the elements of the confusion matrix,

$\text{random accuracy} = \frac{(TN + FP)(TN + FN) + (FN + TP)(FP + TP)}{\text{total}^2}$   (20)

where total denotes the total number of items in the classification.
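Eqs. (18)–(20) translate directly into code; the following sketch (helper name ours) returns all three metrics from the confusion counts:

```python
def classification_metrics(TP, FN, TN, FP):
    """Accuracy, recall and kappa from confusion counts, Eqs. (18)-(20)."""
    total = TP + TN + FP + FN
    total_acc = (TP + TN) / total                             # Eq. (19)
    recall = TP / (TP + FN)                                   # detected targets / test targets
    random_acc = ((TN + FP) * (TN + FN)
                  + (FN + TP) * (FP + TP)) / total**2         # Eq. (20)
    kappa = (total_acc - random_acc) / (1.0 - random_acc)     # Eq. (18)
    return total_acc, recall, kappa
```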
Table 1
Targets and training/test samples for the data sets.

Data set            Target       No. of training samples    No. of test samples
Catalca             Vehicle 1    157                        –
                    Vehicle 2    –                          273
Pavia University    Asphalt      147                        851
                    Bricks       135                        1876
Fig. 5. Kappa coefficient vs sigma for Vehicle 2 (top), asphalt (middle), and bricks (bottom).
Fig. 6. First row: three original spectral bands of the test set of the Catalca data, at wavelengths of 450.34, 550.09, and 651.85 nm, from left to right. Second row: most informative principal components. Third row: most informative FKT components. Fourth row: most informative KFKT components.
In this experiment, RBF is the selected kernel for KFKT. A parameter selection technique for the RBF in KFKT for classification problems is presented by Binol et al. [22]. The RBF parameter $\sigma$ is not optimized in this study; instead, it is searched over a fixed interval for each target. One of the three training sample sets given in Table 1 is used for each target to search the parameter over the normalized data set. Once KFKT is applied, the new dimension of the data set is defined as the number of eigenvalues containing 95% of the cumulative total information. For each target, the kappa coefficient versus $\sigma$ over the range [0.1, 2], in increments of 0.02, is plotted in Fig. 5. The optimal $\sigma$ values found for Vehicle 2, asphalt, and bricks are 0.62, 0.23, and 0.18, respectively.

4.2. Results

In this section, we present comparative results for PCA, FKT and KFKT for dimensionality reduction in hyperspectral images. The performance of the dimension reduction is tested by applying 10-fold cross-validation with DA. Our aim is to reduce the number of bands so that hyperspectral images can be exploited in near-real-time target detection or classification applications. For this purpose, we use the dominant eigenvectors, corresponding to the dominant eigenvalues, which best represent the target class. In the simulations, the covariance matrices in Eq. (1) for FKT and the kernel matrices in Eq. (13) for KFKT are computed using the training sample sizes given in Table 1. As the number of training samples $M$ increases, the estimated covariance and kernel matrices approach the true ones [20]. Unfortunately, in remote sensing applications it is generally difficult to obtain a large number of target class samples, since targets occupy a relatively small number of pixels.

The first row of Fig. 6 shows three original channels of the Catalca test image at wavelengths of 450.34, 550.09 and 651.85 nm. The leading three components of the same image after dimensionality reduction with each of the three techniques are shown in the remaining rows of Fig. 6. In the first experiment, the relationship between the number of dimensions and the corresponding kappa value is investigated for the three methods (FKT, KFKT and PCA) on Vehicle 2, asphalt and bricks. It is clear from Fig. 7 that KFKT provides better separability for each target at the same dimension size. In the second experiment, preserving 99% of the total variance is used as the criterion for selecting the number of bands. According to this criterion, the dimension of the Catalca data becomes 16, 94 and 71 for PCA, FKT and KFKT, respectively. Table 2 provides the details of the second experiment. The classification performance for Vehicle 2 using FKT features is better than with PCA features, whereas PCA outperforms FKT for the other targets. The proposed target-oriented band reduction via KFKT consistently gives the best performance for all three targets on all measured metrics.
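Both cumulative-eigenvalue criteria (95% in Section 4.1 and 99% here) amount to choosing the smallest $l$ whose leading eigenvalues carry the required fraction of the total; the following hypothetical helper (our sketch, not the authors' code) illustrates this, together with the $\sigma$ grid described in Section 4.1:

```python
import numpy as np

def n_components_for_energy(eigvals, fraction=0.95):
    """Smallest number of leading eigenvalues whose cumulative sum reaches
    the given fraction of the total (95% in Section 4.1, 99% in Section 4.2)."""
    v = np.sort(np.asarray(eigvals, dtype=float))[::-1]   # descending eigenvalues
    cum = np.cumsum(v) / np.sum(v)                        # cumulative information fraction
    return int(np.searchsorted(cum, fraction) + 1)

# The sigma search grid of Section 4.1: [0.1, 2] in increments of 0.02.
sigma_grid = np.arange(0.1, 2.0 + 1e-9, 0.02)
```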
Fig. 7. The relationship between the number of dimensions and the corresponding kappa value of the methods for Vehicle 2 (top), asphalt (middle) and bricks (bottom). The classification was done by discriminant analysis.
5. Discussion and conclusions

In this paper, an improved band reduction technique based on KFKT has been proposed for HSI. The Fukunaga–Koontz Transform becomes nonlinear after a kernel function is applied. The resulting transform is employed to project the data into the desired number of bands, in which the target signature differs markedly from the clutter. The proposed band reduction and detection framework has been evaluated on subimages of the Catalca and Pavia University hyperspectral scenes.
Table 2
Accuracy (%), recall (%) and kappa statistic, κ, of all the test images with the different dimensionality reduction approaches. The results for the raw data are also provided. The best scores for each target are marked with an asterisk.

Data set            Target      Feature   No. of features   Accuracy   Recall    κ
Catalca             Vehicle 2   Raw       160               99.38      99.41     0.8568
                                PCA       16                99.21      99.26     0.8218
                                FKT       94                99.23      99.27     0.8269
                                KFKT      71                99.80*     99.93*    0.9469*
Pavia University    Asphalt     Raw       103               76.40      76.02     0.1328
                                PCA       3                 73.34      72.58     0.1294
                                FKT       85                70.18      70.08     0.0747
                                KFKT      66                96.70*     96.97*    0.5859*
Pavia University    Bricks      Raw       103               75.55      74.73     0.2308
                                PCA       3                 69.40      67.41     0.2033
                                FKT       85                67.52      67.57     0.1103
                                KFKT      80                91.88*     92.29*    0.5295*
FKT is a linear data transformation approach like PCA. The experimental results demonstrate that FKT carries similar information content to classical PCA, particularly in higher-dimensional spaces. By taking advantage of kernel theory, traditional linear techniques can be used in a nonlinear manner, which allows the construction of nonlinear mappings that account for higher-order relationships among the features in the data. On further investigation, it is confirmed that the proposed KFKT-based technique can effectively reduce the spectral dimensionality while maintaining good separability and detection results.

Other notable nonlinear approaches include manifold learning techniques such as Isomap, Laplacian eigenmaps, and locally linear embedding [23]. These approaches create a projected data representation by applying a cost function that preserves local properties of the data; this procedure can be regarded as designing a graph-based kernel for kernel PCA. A comparative study could be performed in future work. The recommended technique is similar in operation to linear discriminant analysis (LDA), in which class labels are taken into account. Nevertheless, LDA works under the assumption that the classes are normally distributed. A more meaningful comparison would therefore be against Fisher-LDA, which does not make the distributional assumption of classical LDA. In this context, it is worth noting the difficulty of generating class covariance matrices for the target, since the amount of target data is usually too limited to fit any distribution function [24]. The main drawback of the suggested technique is thus the estimation of the true covariance matrix of the target class. In the future, we would like to test this method in scenarios where limited data are available to estimate the target class distribution; this direction is motivated by the fact that the dimension of a kernel matrix depends directly on the number of training samples. In addition, we are working on a one-class version of the proposed technique that utilizes only the training samples of the target class. Finally, the proposed algorithm provides an efficient way to reduce a hyperspectral cube to meaningful bands for target detection, which alleviates the complexity of hyperspectral image processing, such as huge data storage requirements and slow processing times.
Acknowledgments This research was supported by a Grant from The Scientific and Technological Research Council of Turkey (TUBITAK – 112E207). The authors would like to thank the YTU-YAZGI Laboratory Team for providing the Catalca HSI data and the anonymous reviewers for their comments which helped to improve the paper quality and presentation.
References

[1] Cheriyadat A, Bruce LM. Why principal component analysis is not an appropriate feature extraction method for hyperspectral data. In: Proceedings of the IEEE international geoscience and remote sensing symposium. IEEE; 2003. p. 3420–2.
[2] Manolakis D, Shaw G. Detection algorithms for hyperspectral imaging applications. IEEE Signal Process Mag 2002;19(1):29–43.
[3] Sadjadi FA, Chun CSL. Remote sensing using passive infrared Stokes parameters. Opt Eng 2004;43(10):2283–91.
[4] Mukherjee K, Bhattacharya A, Ghosh JK, Arora MK. Comparative performance of fractal based and conventional methods for dimensionality reduction of hyperspectral data. Opt Laser Eng 2014;55:267–74.
[5] Jolliffe I. Principal component analysis. New York, NY, USA: Springer; 2002.
[6] Kaewpijit S, Le Moigne J, El-Ghazawi T. Feature reduction of hyperspectral imagery using hybrid wavelet-principal component analysis. Opt Eng 2004;43(2):350–62.
[7] Price JC. Band selection procedure for multispectral scanners. Appl Opt 1994;33(15):3281–8.
[8] Balakrishnama S, Ganapathiraju A. Linear discriminant analysis: a brief tutorial. Inst Signal Inf Process 1998.
[9] Zhang D, Zhou ZH, Chen S. Semi-supervised dimensionality reduction. In: Proceedings of the 7th SIAM international conference on data mining (SDM); 2007. p. 629–34.
[10] Feng S, Chen Q, Zuo C, Sun J, Tao T, Hu Y. A carrier removal technique for Fourier transform profilometry based on principal component analysis. Opt Laser Eng 2015;74:80–6.
[11] Karlholm J, Renhorn I. Wavelength band selection method for multispectral target detection. Appl Opt 2002;41(32):6786–95.
[12] Mahalanobis A, Muise RR, Stanfill SR. Quadratic correlation filter design methodology for target detection and surveillance applications. Appl Opt 2004;43(27):5198–205.
[13] Huo X. A statistical analysis of Fukunaga–Koontz transform. IEEE Signal Process Lett 2004;11(2):123–6.
[14] Fukunaga K. Introduction to statistical pattern recognition. California: Academic Press; 1990.
[15] Liu R, Liu E, Yang J, Zhang T, Wang F. Infrared small target detection with kernel Fukunaga–Koontz transform. Meas Sci Technol 2007;18(9):3025–35.
[16] Binol H, Bilgin G, Dinc S, Bal A. Kernel Fukunaga–Koontz transform subspaces for classification of hyperspectral images with small sample sizes. IEEE Geosci Remote Sens Lett 2015;12(6):1287–91.
[17] Welling M. Fisher linear discriminant analysis. Department of Computer Science, University of Toronto; 2005. p. 3.
[18] Aizerman M, Braverman E, Rozonoer L. Theoretical foundations of the potential function method in pattern recognition learning. Autom Remote Control 1964;25:821–37.
[19] Shawe-Taylor J, Cristianini N. Kernel methods for pattern analysis. Cambridge: Cambridge University Press; 2004.
[20] Manolakis D, Shaw G. Directionally constrained energy minimization adaptive matched filter: theory and practice. In: Proceedings of SPIE 4480, imaging spectrometry VII. San Diego (CA): SPIE; 2002. p. 57–64.
[21] Richards JA, Jia X. Remote sensing digital image analysis: an introduction. New York, NY, USA: Springer; 1999.
[22] Binol H, Bal A, Cukur H. Differential evolution algorithm-based kernel parameter selection for Fukunaga–Koontz Transform subspaces construction. In: Proceedings of SPIE 9646, high-performance computing in remote sensing V; Oct 2015.
[23] Balasubramanian M, Schwartz EL. The Isomap algorithm and topological stability. Science 2002;295(5552):7–13.
[24] Manolakis D, Marden D, Shaw GA. Hyperspectral image processing for automatic target detection applications. Linc Lab J 2003;14(1):79–116.