An Approach for Hyperspectral Image Classification by Optimizing SVM using Self Organizing Map

Deepak Kumar Jain a, Surendra Bilouhan Dubey b, Rishin Kumar Choubey b, Amit Sinhal b, Siddharth Kumar Arjaria b, Amar Jain c, Haoxiang Wang d,e

a Institute of Automation, Chinese Academy of Sciences, Beijing, China
b Technocrats Institute of Technology, Bhopal, India
c Samrat Ashok Technological Institute, Vidisha, India
d Department of ECE, Cornell University, NY, USA
e R&D Center, GoPerception Laboratory, NY, USA

HIGHLIGHTS
• This paper gives an efficient technique for the classification of hyperspectral images, based on the concept of optimizing a Support Vector Machine (SVM) using Self Organizing Maps (SOM).
• The objective of this research is to improve the recognition of pixels with a degree of uncertainty and to improve the classification results.
• We also calculate the posterior probability of each pixel and compare it with a threshold value.

Abstract

In this paper, an efficient technique for the classification of hyperspectral images taken from satellites is realized. The proposed methodology is based on the concept of optimizing a Support Vector Machine (SVM) using Self Organizing Maps (SOM); interior and exterior pixels are then classified by comparing the posterior probability of each pixel intensity with a threshold. The methodology works in two phases: the first trains the important features from the image by optimizing the SVM using SOM, and the second finds the interior and exterior pixels by comparing the optimal threshold and the posterior probability. Experimental results are obtained on two datasets consisting of 16 and 9 classes, including corn-no till, corn, soybeans-no till, corn-min till, soybeans-clean till, soybeans-min till, alfalfa, grass/trees, grass/pasture, grass/pasture-mowed, oats, hay-windrowed, wheat, woods, stone-steel towers and building-grass-trees-drives. The proposed methodology outperforms the existing classification methodologies in terms of accuracy, kappa coefficient and confusion matrix.

Keywords: Self Organizing Map (SOM), Support Vector Machine (SVM), Classification, Hyperspectral image.

1. Introduction

With the increase in remote sensing activities, various new pattern recognition approaches have been developed, most of which concentrate on speeding up classification and improving classification accuracy by combining different algorithms. Researchers have applied the support vector machine (SVM) to supervised classification, fuzzy clustering algorithms and evolutionary algorithms, including genetic algorithms (GA), to unsupervised classification, and Artificial Neural Networks (ANNs) to both supervised and unsupervised classification. The Maximum Likelihood Classifier (MLC) works like Bayes' formula: it calculates the probability of a pixel belonging to each class and assigns the pixel to the class with the highest probability. The main drawback of the MLC is that it requires and assumes the class distributions to be normal, while some classes do not follow a normal distribution. For example, radar image intensities are exponentially distributed, and the DN values of high-resolution multispectral data such as QuickBird images are non-normally distributed. Multimodality of a class distribution is also sometimes a problem that causes the MLC to fail. Moreover, the Gaussian distribution has a continuous space with an infinite range of values, while DN values in remote sensing images are integers distributed over a finite domain.
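To make the MLC decision rule concrete, the following minimal sketch fits one Gaussian model per class on toy two-band data and assigns a pixel to the class with the highest likelihood. The class names, band values and the use of NumPy/SciPy are illustrative assumptions and not part of the original study.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy per-class Gaussian models estimated from labeled pixels (two bands, two classes).
rng = np.random.default_rng(0)
classes = {
    "water": rng.normal([0.2, 0.3], 0.05, size=(200, 2)),
    "vegetation": rng.normal([0.6, 0.8], 0.05, size=(200, 2)),
}
models = {c: (s.mean(axis=0), np.cov(s, rowvar=False)) for c, s in classes.items()}

def mlc_classify(pixel):
    # Evaluate the Gaussian likelihood of the pixel under each class and take the maximum.
    scores = {c: multivariate_normal(mean, cov).pdf(pixel)
              for c, (mean, cov) in models.items()}
    return max(scores, key=scores.get)

print(mlc_classify(np.array([0.55, 0.75])))   # expected: "vegetation"
```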

1.1 Support Vector Machines

As a modern discipline, pattern analysis has undergone three revolutions in algorithm design, namely the formulation of robust linear algorithms for vector data in the 1960s, the introduction of non-linear algorithms (artificial neural networks and decision trees) in the 1980s, and the introduction of kernel-based learning in the mid-1990s. Kernel-based learning is a statistical learning technique first proposed by [1]. Kernel methods combine the theoretically well-founded approaches of linear pattern analysis algorithms with the flexibility of non-linear algorithms such as artificial neural networks. The support vector machine (SVM) classifier [1] has emerged as a theoretically superior and popular binary, kernel-based statistical classifier. [2] proposed one-class or anomaly-detector variants of the SVM. In this research the potential of a free-parameter and attribute-optimized one-class kernel classifier, operating on object-based data, for classifying sparse natural features is investigated. The one-class variant is an unsupervised learning algorithm [3][4] that clusters the data on the basis of a threshold value so that samples can be classified and trained easily and quickly [5, 6, 7, 8]. Figure 1 shows a separating hyperplane whose margin determines whether a training sample is classified as +1 or -1. A support vector machine is a binary statistical classifier composed of two parts. The first part is a component that uses quadratic or linear programming to find an optimal separating hyperplane (line) between the two sets of samples in some defined feature space. The second part is a mapping function, also called a kernel function, which transforms the input space into an arbitrary higher-dimensional feature space in which a better linear discrimination can be constructed by the first component. The first component is explained by example. Say a training set is represented by [x_i, y_i], where i = 1, 2, ..., N and y_i ∈ {-1, +1}. The symbol x denotes the samples and y their corresponding labels.

Assume that the two classes are linearly separable. An SVM attempts to find an optimal line, also called a hyperplane, which separates the two populations. The hyperplane is defined by a standard linear function

f(x) = w · x + b

such that

w · x_i + b ≥ +1 for all i with y_i = +1, and
w · x_i + b ≤ -1 for all i with y_i = -1.     (2.1)

Although numerous linear functions may possibly separate the samples of the two classes, only one optimal hyperplane separates the two classes with the greatest margin. Figure 1 illustrates such an example, with the optimal separating hyperplane shown as a blue line and samples of the +1 and -1 classes on either side of this hyperplane. The brown lines define the bounds of the maximum-margin hyperplane and are called the support planes.

Figure 1: Optimal margin hyperplane (separating the samples of two arbitrary classes)
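As a rough illustration of the maximum-margin idea in equation (2.1) and Figure 1, the sketch below fits a linear SVM on two toy classes and reports the hyperplane parameters w and b. The use of scikit-learn and the toy data are assumptions for illustration; this is not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Two well-separated toy classes with labels -1 and +1 (illustrative data only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-2.0, scale=0.5, size=(50, 2)),
               rng.normal(loc=+2.0, scale=0.5, size=(50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

# A linear SVM finds the hyperplane f(x) = w.x + b with the greatest margin.
clf = SVC(kernel="linear", C=1.0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# Support vectors lie on the support planes, where w.x + b = +1 or -1 (eq. 2.1).
margins = y * (X @ w + b)
print("w =", w, " b =", b)
print("margin width 2/||w|| =", 2.0 / np.linalg.norm(w))
print("min y_i*(w.x_i + b) =", margins.min())        # close to 1 for separable data
print("number of support vectors:", len(clf.support_vectors_))
```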

1.2 Kohonen Self Organizing Feature Map

Kohonen Self Organizing Feature Maps (SOM) were developed by Teuvo Kohonen; his work provided a method for representing multidimensional data in much lower dimensional spaces, commonly one or two dimensions. This process of reducing the dimensionality of vectors is essentially a data compression technique known as vector quantization. In addition, the Kohonen approach creates a network that stores information in such a way that any topological relationships within the training set are maintained. The most important goal of an SOM is to transform an incoming signal pattern of arbitrary dimension into a one- or two-dimensional discrete map, and to perform this transformation adaptively in a topologically ordered fashion. Consequently, an SOM can be set up by placing neurons at the nodes of a one- or two-dimensional lattice. Higher-dimensional maps are also feasible, but less common. The neurons become selectively tuned to various input patterns (stimuli) or classes of input patterns during the course of competitive learning. The locations of the neurons so tuned (i.e., the winning neurons) become ordered, and a meaningful coordinate system for the input features is created on the lattice. The SOM consequently forms the required topographic map of the input patterns. The self-organization method includes four noteworthy parts:
Initialization: All the connection weights are initialized with small random values.
Competition: For every input pattern, the neurons compute their respective values of a discriminant function, which provides the basis for competition. The particular neuron with the smallest value of the discriminant function is declared the winner.
Cooperation: The winning neuron determines the spatial location of a topological neighborhood of excited neurons, thereby providing the basis for cooperation among neighboring neurons.
Adjustment: The excited neurons decrease their individual values of the discriminant function in relation to the input pattern through suitable adjustment of the associated connection weights, such that the response of the winning neuron to the subsequent application of a similar input pattern is enhanced.

Per-pixel image data representation is a traditional technique for the classification of remotely sensed images. Classification based on the resolution of land cover images relies on feature points in the image [10]; the more feature points the image contains, the higher the accuracy of the image classification [9]. Other feature properties, such as scale dependency, are also used [11, 12]. The techniques used for the classification of image pixels are supervised or unsupervised (for example SVM), but the main task is to extract the feature points from the image so that the images can be classified accurately [13, 14]. The major drawback of per-pixel classification is that a pixel seldom corresponds to a real-world object; it is an arbitrary spatial unit, and homogeneous regions may consist of a collection of heterogeneous pixels [15, 16]. The existing methodologies implemented for the classification of remotely sensed images can classify them in an effective manner, but they fail to extract features based on the dataset or type of image. Hence our objective focuses on providing an effective methodology for intelligent processing of data acquired through remote sensing and on finding an image classification method, based on existing methods, that performs better than the standard SVM [17] and other methods in terms of classification accuracy, that is, to increase the effectiveness of classification methods for remotely sensed hyperspectral images. The aim is to find a classifier that simplifies the use of SVMs for identifying and modeling the large number of classes in mixed pixels by using SOM along with a new algorithm.

2. Literature Survey

Zhou Zhang et al. proposed a new and efficient methodology for the classification of hyperspectral images [18]; an active learning algorithm is combined with hierarchical segmentation, which provides effective classification of hyperspectral images. Hua Zhang et al. proposed a fuzzy-topology-integrated support vector machine algorithm for the classification of satellite images [19]; a probabilistic approach for optimizing the support vector machine using fuzzy topology is implemented. Junfei Chen et al. implemented an integrated SVM and fuzzy AHP approach that grades candidates into excellent, good, medium and bad classes [20]; fuzzy AHP is applied in the first stage to evaluate the candidate enterprises. Giorgos Mountrakis et al. reviewed the use of SVMs for aerial and satellite imagery [21]; the study covers a wide assortment of strategies, and SVMs are particularly appealing for remote sensing applications because of their ability to generalize well from the limited training samples typical of such applications. Liu Zhibin et al. integrated a multi-layer SVM classifier with a multistage dynamic fuzzy judgement technique and applied it to SCDA measurement [22]; the technique retains the advantages of a single multi-layer SVM classifier, but the complexity of searching for high-quality training samples cannot be removed. Yan Li et al. presented the developments and changes in remote sensing image classification over the past decade [23]. Later, the image classification problem was viewed as an image texture learning problem by Qiu Zhen Ge, Zhang Chun Ling, Li Qiong, Xin Xian Hui and Guo Zhang [24]; an approach was introduced that performs better and proved helpful in quality mapping through a chosen metric distance function. Farid Melgani et al. addressed the problem of classifying hyperspectral remote sensing images with support vector machines [25]; a theoretical discussion and experimental analysis were performed, aimed at understanding and assessing the potential of SVM classifiers in hyperdimensional feature spaces. The effect of outliers and noise in data points is reduced by the fuzzy SVM (FSVM), which is suitable for applications in which data points have unmodeled characteristics [26]. Chiang and Hao [27] proposed an SVM-based fuzzy inference system combining the advantages of a statistical learning framework with a fuzzy basis function inference scheme, which provides reliable performance for classification and prediction. Unclassifiable regions in multiclass problems were resolved using fuzzy LS-SVMs. Fuzzy topology is generalized from ordinary topology by introducing the concept of membership values in a fuzzy set; this was discussed by Tsujinishi and Abe [28].
3. Dataset Description

The proposed framework was validated experimentally on two widely used hyperspectral remote sensing datasets: 1) Indian Pines and 2) Pavia University. The Indian Pines dataset was acquired by the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the Indian Pines agricultural site in northwestern Indiana, at 20-m spatial resolution and 10-nm spectral resolution over the range of 400–2500 nm.

Twenty noisy and water-absorption bands (104–108, 150–163, and 220) were removed, resulting in a 200-band image. The ground reference data comprised 10366 labeled samples assigned to 16 classes: corn-no till, corn-min till, corn, soybeans-no till, soybeans-min till, soybeans-clean till, alfalfa, grass/pasture, grass/trees, grass/pasture-mowed, hay-windrowed, oats, wheat, woods, building-grass-trees-drives, and stone-steel towers. The crop-tillage-related classes relate to the management practices used for tilling a particular field. For example, soybeans-no till refers to a field in which soybeans are planted but no tillage was performed after the last harvest, leaving a significant quantity of crop residue. Soybeans-min till refers to tillage that breaks up the soil, reducing the quantity of remaining residue, while soybeans-clean till refers to fields that are completely ploughed after the previous harvest, leaving essentially no residue. The corn-tillage-related classes are defined likewise. A true-color composite and the related ground reference map are shown in Figure 2.
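For readers who want to reproduce the band-removal step described above, a minimal sketch is given below. The .mat file name and variable key are assumptions based on commonly distributed copies of the Indian Pines scene, not file names given by the authors; only the band indices (104–108, 150–163, 220) come from the text.

```python
import numpy as np
from scipy.io import loadmat

# Hypothetical file/key names for the 220-band AVIRIS Indian Pines cube.
cube = loadmat("Indian_pines.mat")["indian_pines"]          # assumed shape (145, 145, 220)
print("original bands:", cube.shape[-1])

# Noisy / water-absorption bands listed in the text (1-based): 104-108, 150-163, 220.
bad_bands_1based = list(range(104, 109)) + list(range(150, 164)) + [220]
keep = [b for b in range(cube.shape[-1]) if (b + 1) not in bad_bands_1based]

cube_200 = cube[:, :, keep]
print("remaining bands:", cube_200.shape[-1])               # 200
```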

Figure 2 Indian Pines dataset. (a) True-color composite (bands R: 26, G: 14, B: 8). (b) Ground reference map.

4. Proposed Methodology

4.1. Steps of Methodology
The methodology adopted here classifies hyperspectral images by optimizing a Support Vector Machine with Self Organizing Maps. The proposed methodology consists of the following phases:
Step 1: Take input image. First, an input training dataset of hyperspectral images is taken.
Step 2: Convert to gray scale. Each hyperspectral image is converted to a gray-scale image so that the classes can be separated properly.
Step 3: Select region of interest. The input hyperspectral satellite image consists of different pixel regions for classification; hence a region of interest (ROI) is selected to define the region of classification in the image. The image contains pairs (X, Y), where X is the pixel region and Y is the respective class label.
Step 4: SOM-based training. Self Organizing Maps (SOM) are applied to the selected region of interest (ROI) for the proper grouping of pixels on the basis of their intensity levels and features; here the SOM is used to optimize the Support Vector Machine (SVM). Training occurs in several steps and over many iterations:

(i) The training algorithm must be given a set of input values to be trained.
(ii) The SOM training algorithm assigns an input and a weight to each edge in the image.
(iii) Training starts; at each step the best matching unit (BMU) within the neighborhood is calculated.
(iv) Each neighboring node's weights are adjusted to make them more like the input vector.
(v) Repeat from step (ii) for N iterations.

Step 5: Algorithm for SVM. Now train the optimized model using a Support Vector Machine (SVM) with a Gaussian (RBF) kernel function, a class index and a gamma coefficient. Consider training samples {(x_i, d_i)}, where x_i is the input pattern and d_i is the desired output:

w_0^T x_i + b_0 ≥ +1, for d_i = +1
w_0^T x_i + b_0 ≤ -1, for d_i = -1
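The sketch below gives one plausible, simplified reading of Steps 4 and 5: a minimal NumPy SOM implementing initialization, competition, cooperation and adjustment, followed by an RBF (Gaussian-kernel) SVM. Since the text does not fully specify how the SOM output feeds the SVM, using each pixel's best-matching-unit (BMU) index as an extra feature, together with the random data and parameter values, is an illustrative assumption.

```python
import numpy as np
from sklearn.svm import SVC

def train_som(X, grid=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM: initialization, competition (BMU), cooperation, adjustment."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.normal(size=(rows * cols, X.shape[1]))              # (i)  random initial weights
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))             # (ii)-(iii) competition: BMU
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))                    # (iv) cooperation: neighborhood
        W += lr * h[:, None] * (x - W)                          # (v)  adjustment of weights
    return W

# Hypothetical inputs: flattened ROI pixels (n_pixels, n_bands) and their class labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 200))
y = rng.integers(0, 3, size=500)

W = train_som(X)
# Use each pixel's BMU index as a compact SOM-derived feature alongside its spectrum,
# then train an RBF (Gaussian-kernel) SVM with a gamma coefficient, as in Step 5.
bmu_index = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
X_aug = np.column_stack([X, bmu_index])
svm = SVC(kernel="rbf", gamma="scale", C=10.0, probability=True).fit(X_aug, y)
print("training accuracy:", svm.score(X_aug, y))
```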

Figure 3 Basic architecture of SVM

The distance from the hyperplane to the nearest data point is called the margin of separation ρ. The main aim of the SVM is to find the particular hyperplane for which the margin ρ is maximized. The optimal hyperplane is w_0^T x + b_0 = 0. For example, if we choose our model from the set of hyperplanes in R^n, then we have

f(x; {w, b}) = sign(w · x + b)

We can try to learn f(x; α) by choosing a function that performs well on the training data:

R_emp(α) = (1/m) Σ_{i=1}^{m} l(f(x_i, α), y_i)

Step 6: Classification of outer and interior pixels.
(i) Compute the posterior probability of each pixel and the optimal threshold value used to decide whether the pixel belongs to the interior or the exterior, using

P_{k0}(x_0) ≤ α

where α is the threshold value.
(ii) If P_{k0}(x_0) > α, then pixel x_0 belongs to the interior of class k_0; the pixel is classified as belonging to that particular class and output.
(iii) If P_{k0}(x_0) ≤ α, then pixel x_0 belongs to the boundary of a certain class, and such boundary pixels have to be further treated in the next step.
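A minimal sketch of Step 6 follows, using the predicted class probabilities of a probabilistic RBF-SVM as the posterior P_k0(x_0) and a hypothetical threshold α = 0.8; the random data and the threshold value are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

# A probabilistic RBF-SVM trained on hypothetical labeled pixels (see the Step 5 sketch).
rng = np.random.default_rng(3)
X_train = rng.normal(size=(300, 20))
y_train = rng.integers(0, 3, size=300)
X_pixels = rng.normal(size=(100, 20))
alpha = 0.8                                    # hypothetical threshold value

svm = SVC(kernel="rbf", gamma="scale", probability=True).fit(X_train, y_train)

proba = svm.predict_proba(X_pixels)            # posterior probability per class
p_max = proba.max(axis=1)                      # P_k0(x_0): probability of the best class k_0
k0 = svm.classes_[proba.argmax(axis=1)]

interior = p_max > alpha                       # (ii)  confident pixels: assign to class k_0
boundary = ~interior                           # (iii) uncertain pixels: defer to the next step
print("interior pixels:", interior.sum(), " boundary pixels:", boundary.sum())
```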

Figure 4: Flow chart of proposed methodology

5. Result Analysis

Table 1 below compares the baseline and proposed methods for the classification of hyperspectral images on the Pavia University dataset, which contains 9 classes. The analysis is carried out in terms of overall accuracy (OA), average accuracy (AA), the kappa coefficient and the 9 per-class accuracies. The proposed methodology performs better than the baseline methods.

Table 1 Analysis of various parameters on the Pavia University dataset

Metric | Adseg_AddFeat | Adseg_AddSamp | Adseg_AddFeat+AddSamp | SOM-SVM (proposed)
OA     | 90.71 | 86.58 | 92.23 | 95.46
AA     | 90.71 | 87.50 | 92.66 | 94.27
Kappa  | 88.05 | 82.73 | 90.05 | 92.74
C1     | 89.41 | 81.53 | 90.00 | 92.00
C2     | 92.50 | 89.46 | 93.59 | 95.19
C3     | 82.63 | 76.42 | 86.21 | 89.34
C4     | 91.81 | 90.25 | 92.65 | 94.73
C5     | 96.79 | 98.68 | 97.62 | 98.91
C6     | 86.74 | 81.81 | 90.34 | 93.27
C7     | 89.40 | 88.35 | 95.59 | 96.24
C8     | 89.75 | 82.31 | 90.72 | 93.24
C9     | 97.35 | 98.66 | 97.25 | 98.85

Table 2 below compares the baseline and proposed methods for the classification of hyperspectral images on the Indian Pines dataset, which contains 16 classes. The analysis is carried out in terms of OA, AA, the kappa coefficient and the 16 per-class accuracies. The proposed methodology performs better than the baseline method.

Table 2 Analysis of various parameters on the Indian Pines dataset

Metric | Adseg_AddFeat+AddSamp | SOM-SVM (proposed)
OA     | 82.77 | 85.29
AA     | 85.98 | 86.23
Kappa  | 80.55 | 83.29
C1     | 75.44 | 78.11
C2     | 78.50 | 80.47
C3     | 74.64 | 76.20
C4     | 80.58 | 82.49
C5     | 81.95 | 83.20
C6     | 77.08 | 79.13
C7     | 90.31 | 93.82
C8     | 86.55 | 87.91
C9     | 89.36 | 93.28
C10    | 96.92 | 97.22
C11    | 95.06 | 96.35
C12    | 97.14 | 98.19
C13    | 97.33 | 97.82
C14    | 90.05 | 92.37
C15    | 73.21 | 75.93
C16    | 91.51 | 93.29

The baseline "Adding Samples" strategy depends on the parameter m, i.e., the number of pseudo-labeled samples that can be added for each labeled sample already present in the image. The average OAs obtained for both datasets by varying this parameter in {0, 1, 3, 5, 7} are reported in Tables 3 and 4. Note that m = 0 corresponds to the case in which no pseudo-labeled samples are added to the training set. In general, for both datasets, OA increases as m increases. However, more computational time is also needed as m increases. Considering the trade-off between classification accuracy and computational time, we fixed m = 5 for both datasets in all the experiments. A generic self-training sketch of this pseudo-labeling strategy is given after Table 4.

Table 3 Average OA on the Indian Pines dataset

m | Existing method | Proposed method
0 | 61.96 | 65.34
1 | 68.26 | 71.83
3 | 72.48 | 75.29
5 | 74.78 | 76.82
7 | 75.48 | 77.36

Table 4 Average OA on the Pavia University dataset

m | Existing method | Proposed method
0 | 80.95 | 83.28
1 | 84.08 | 85.21
3 | 85.82 | 87.83
5 | 86.58 | 88.41
7 | 87.20 | 89.00
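The following generic self-training sketch illustrates the kind of "Adding Samples" strategy parameterized by m (add up to m pseudo-labeled samples per labeled sample). It is not the exact procedure of the baseline in [18]; the data shapes and the confidence-based selection rule are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def add_pseudo_labels(X_lab, y_lab, X_unlab, m):
    """Generic self-training sketch: add up to m pseudo-labeled samples per labeled sample."""
    clf = SVC(kernel="rbf", gamma="scale", probability=True).fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    confidence = proba.max(axis=1)
    n_add = min(m * len(X_lab), len(X_unlab))
    pick = np.argsort(confidence)[::-1][:n_add]        # most confident unlabeled pixels
    X_new = np.vstack([X_lab, X_unlab[pick]])
    y_new = np.hstack([y_lab, clf.classes_[proba[pick].argmax(axis=1)]])
    return X_new, y_new

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(40, 10)), rng.integers(0, 4, size=40)
X_unlab = rng.normal(size=(400, 10))
X_aug, y_aug = add_pseudo_labels(X_lab, y_lab, X_unlab, m=5)
print(len(X_lab), "->", len(X_aug), "training samples")   # 40 -> 240
```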

In the baseline segmentation-hierarchy pruning strategy, the parameter Threshold is a measure of homogeneity based on the class probability distribution. To set this parameter, the following strategy was adopted: at the first active-learning iteration, the mean region sizes of the segmentation maps obtained with the parameter evaluated at {0.1, 0.3, 0.5, 0.7, 0.9} are calculated, and the parameter is set to the value yielding the mean region size nearest to 20, i.e., 0.7 for Pavia University and 0.9 for Indian Pines. The average OAs obtained for the two datasets when applying the proposed framework with different threshold values are reported in Tables 5 and 6.

Table 5 Average OA on the Indian Pines dataset for different threshold values

Threshold | Existing method | Proposed method
0.1 | 61.90 | 64.23
0.3 | 61.96 | 65.28
0.5 | 68.49 | 72.35
0.7 | 78.44 | 80.18
0.9 | 82.77 | 85.34

Table 6 Average OA on the Pavia University dataset for different threshold values

Threshold | Existing method | Proposed method
0.1 | 81.31 | 83.25
0.3 | 83.28 | 85.45
0.5 | 84.01 | 85.00
0.7 | 92.23 | 94.31
0.9 | 86.86 | 88.40

Figure 5 shows the comparison between the existing and proposed methods in terms of average OA on the Indian Pines dataset, and Figure 6 shows the corresponding comparison on the Pavia University dataset.

Figure 5 Average OA on the Indian Pines dataset (existing vs. proposed method; x-axis: number of labeled samples)

Figure 6 Average OA on the Pavia University dataset (existing vs. proposed method; x-axis: number of labeled samples)

6. Conclusion and Scope of Future Work

Experiments were carried out on two datasets (the Pavia University and Indian Pines datasets) to evaluate the performance of the proposed method using different remotely sensed hyperspectral images. Compared with the standard existing technique, the proposed method obtains comparatively high accuracy. These results show that the proposed method is a very effective classifier for remotely sensed hyperspectral images. Theoretically, this research has contributed to the development of the SOM-SVM based method. With this, the misclassified pixels are reclassified; consequently, the problem of misclassification in traditional SVM methods is solved to a certain extent, particularly for those pixels located at the boundary between classes. This is the theoretical basis for the improvement in classification accuracy by the newly proposed method. In future work, a new method for determining the threshold value can be applied, and the approach can be extended to other types of remotely sensed images.

References

[1] Vapnik VN. The Nature of Statistical Learning Theory. New York: Springer, 1995.
[2] Tax DMJ, Duin RPW. Support vector domain description. Pattern Recognition Letters 20: 1191-1199, 1999.
[3] Vapnik V. Statistical Learning Theory. John Wiley and Sons, 1998.
[4] Vapnik VN. The Nature of Statistical Learning Theory. New York: Springer, 1995.
[5] Skapura DM. Building Neural Networks. ACM Press, 1996.
[6] Mitchell T. Machine Learning. McGraw-Hill Computer Science Series, 1997.
[7] Cristianini N, Shawe-Taylor J. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, 2000.
[8] Burges C. A tutorial on support vector machines for pattern recognition. In Data Mining and Knowledge Discovery, Volume 2. Kluwer Academic Publishers, Boston, 1998.
[9] Ozdarici A, Turker M. Field-based classification of agricultural crops using multiscale images. Proceedings of the 1st International Conference on Geographic Object-Based Image Analysis, Salzburg, Austria, 4-5 July 2006.
[10] Wang L, Sousa WP, Gong P. Integration of object-based and pixel-based classification for mapping mangroves with IKONOS imagery. International Journal of Remote Sensing 25: 5655-5668, 2004.
[11] Atkinson PM, Aplin P. Spatial variation in land cover and choice of spatial resolution for remote sensing. International Journal of Remote Sensing 25: 3687-3702, 2004.
[12] Johansen K, Nicholas CC, Gergel SE, Stange Y. Application of high spatial resolution satellite imagery for riparian and forest ecosystem classification. Remote Sensing of Environment 110: 29-44, 2007.
[13] Camps-Valls G, Gomez-Chova L, Muñoz-Marí J, Rojo-Alvarez JL, Martinez-Ramon M. Kernel-based framework for multi-temporal and multi-source remote sensing data classification and change detection. IEEE Transactions on Geoscience and Remote Sensing 46: 1822-1835, 2007.
[14] Camps-Valls G. Kernel classifiers in remote sensing tutorial. Academic course material. Valencia: Department of Electronics Engineering, Universitat de Valencia, 2008.
[15] Navulur K. Multispectral Image Analysis Using the Object-Oriented Paradigm. Boca Raton: CRC Press, 2007.
[16] Hay GJ, Castilla G, Wulder MA, Ruiz JR. An automated object-based approach for the multiscale image segmentation of forest scenes. International Journal of Applied Earth Observation and Geoinformation 7: 339-359, 2005.
[17] Zurada JM. Introduction to Artificial Neural Networks System. Jaico Publishing House, 2003.
[18] Zhou Zhang, Edoardo Pasolli, Melba M. Crawford. An active learning framework for hyperspectral image classification using hierarchical segmentation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2015.
[19] Hua Zhang, Wenzhong Shi, Kimfung Liu. Fuzzy-topology-integrated support vector machine for remotely sensed image classification. IEEE Transactions on Geoscience and Remote Sensing, Vol. 50, No. 3, March 2012.
[20] Junfei Chen, Jiameng Zhong. An integrated SVM and fuzzy AHP approach for selecting third party logistics providers. Electrotechnical Review (Electrical Review), ISSN 0033-2097, R. 88, NR 9b/2012.
[21] Giorgos Mountrakis, Jungho Im, Caesar Ogole. Support vector machines in remote sensing: a review. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 66, Issue 2, pp. 247-259, 2011.
[22] Zhibin Liu, Haifen Yang, Shaomei Yang. Integration of multi-layer SVM classifier and multistage dynamic fuzzy judgment and its application in SCDA measurement. Journal of Computers, Vol. 4, No. 11, November 2009.
[23] Yan Li, Li Yan, Jin Liu. Remote sensing image classification development in the past decade. Proceedings of SPIE, Vol. 7494, 74941D, pp. 338-343, 2009.
[24] Qiu Zhen Ge, Zhang Chun Ling, Li Qiong, Xin Xian Hui, Guo Zhang. High efficient classification on remote sensing images based on SVM. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B2, Beijing, 2008.
[25] Farid Melgani, Lorenzo Bruzzone. Classification of hyperspectral remote sensing images with support vector machines. IEEE Transactions on Geoscience and Remote Sensing, Vol. 42, No. 8, August 2004.
[26] C. F. Lin, S. D. Wang. Fuzzy support vector machines. IEEE Transactions on Neural Networks, Vol. 13, No. 2, pp. 464-471, March 2002.
[27] J. H. Chiang, P. Y. Hao. Support vector learning mechanism for fuzzy rule-based modeling: a new approach. IEEE Transactions on Fuzzy Systems, Vol. 12, Issue 1, pp. 1-12, February 2004.
[28] D. Tsujinishi, S. Abe. Fuzzy least squares support vector machines for multiclass problems. Neural Networks, Vol. 16, No. 5-6, pp. 785-792, June/July 2003.

Author biographies

Deepak Kumar Jain received his Bachelor of Engineering degree from Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal, India, in 2010 and his M.S. degree from Jaypee University of Engineering & Technology, Guna, India, in 2012. He is currently pursuing his PhD degree at the Chinese Academy of Sciences, Beijing, China. His research interests include computer vision, artificial intelligence, face recognition and expression recognition.

Surendra Dubey received the Bachelor of Engineering degree in computer science and engineering from TIEIT, RGPV, Bhopal, in 2005 and the Master of Technology degree from SOIT, RGPV, Bhopal, in 2008.

Rishin Kumar Choubey received the Bachelor of Engineering degree in computer science and engineering from Shri Ram Institute of Technology, Jabalpur, and the Master of Technology degree from Technocrats Institute of Technology, Bhopal, India.

Amar Jain received his Bachelor of Engineering degree from Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal, India, in 2013 and his M.E. degree from Samrat Ashok Technological Institute, Vidisha, in 2016.