International Journal of Applied Earth Observation and Geoinformation 52 (2016) 192–203
Fusion of HJ1B and ALOS PALSAR data for land cover classification using machine learning methods

X.Y. Wang a,b,∗, Y.G. Guo a,b, J. He c, L.T. Du a,b

a State Key Laboratory Breeding Base of Land Degradation and Ecological Restoration of Northwest China, Ningxia University, Yinchuan 750021, Ningxia, China
b Key Laboratory for Restoration and Reconstruction of Degraded Ecosystem in Northwestern China of Ministry of Education, Ningxia University, Yinchuan 750021, Ningxia, China
c School of Resources and Environment, Ningxia University, Yinchuan 750021, Ningxia, China
Article info

Article history: Received 7 December 2015; received in revised form 7 June 2016; accepted 13 June 2016.

Keywords: HJ1B; ALOS/PALSAR; Image fusion; Land cover classification
Abstract

Image classification from remote sensing is becoming increasingly important for monitoring environmental changes, and exploring effective algorithms to increase classification accuracy is critical. This paper explores the use of multispectral HJ1B and ALOS (Advanced Land Observing Satellite) PALSAR (Phased Array type L-band Synthetic Aperture Radar) L-band data for land cover classification using learning-based algorithms. Pixel-based and object-based image analysis approaches for classifying the HJ1B data and the HJ1B and ALOS/PALSAR fused images were compared using two machine learning algorithms, support vector machine (SVM) and random forest (RF), to test which algorithm achieves the best classification accuracy in arid and semiarid regions. The overall accuracies of the pixel-based (fused data: 79.0%; HJ1B data: 81.46%) and object-based classifications (fused data: 80.0%; HJ1B data: 76.9%) were relatively close when using the SVM classifier. The pixel-based classification achieved a high overall accuracy (85.5%) with the RF algorithm for classifying the fused data, whereas the RF classifier with the object-based image analysis produced a lower overall accuracy (70.2%). The study demonstrates that the pixel-based classification utilized fewer variables and performed relatively better than the object-based classification for both the HJ1B imagery and the fused data. Overall, the integration of the HJ1B and ALOS/PALSAR imagery improved the overall accuracy by 5.7% when the pixel-based image analysis and the RF classifier were used.

© 2016 Elsevier B.V. All rights reserved.
1. Introduction

Remotely sensed image classification is a core field of remote sensing data applications (Otukei and Blaschke, 2010). In the remote sensing community, land use and land cover change is regarded as a primary variable that influences the distribution and dynamics of terrestrial biodiversity, ecosystem structure and function, and the energy exchange between the land and the atmosphere (Chettri et al., 2013). Accurate land cover information is also an essential variable for improving the performance of ecosystem, hydrologic, atmospheric and land data assimilation models (Jia et al., 2014), and land cover data are vital for modeling ecosystem dynamics and for sustainable development and effective resource management (Tuanmu and Jetz, 2014). Such data have been widely applied in many fields such as carbon cycle and climate change modeling and crop yield estimation.
∗ Corresponding author at: State Key Laboratory Breeding Base of Land Degradation and Ecological Restoration of Northwest China, Ningxia University, Yinchuan 750021, Ningxia, China. E-mail addresses: wxy[email protected], wxy[email protected] (X.Y. Wang).
http://dx.doi.org/10.1016/j.jag.2016.06.014
Various algorithms have been developed in recent years to derive reliable land cover classification information from satellite data, and these methods have been demonstrated to obtain more accurate and more reliable information from remote sensing images (Gislason et al., 2006; Akar and Güngör, 2012). The most widely used classification approaches for remotely sensed imagery can be divided into two general image analysis approaches: (i) classifications based on pixels and (ii) classifications based on objects (Duro et al., 2012). Pixel-based classifications (e.g., maximum likelihood classification) became widely used approaches for classifying remotely sensed imagery in the last decade (Pal, 2005, 2006; Otukei and Blaschke, 2010; Duro et al., 2012); they use multi-spectral classification techniques that assign a pixel to a class by considering its spectral similarities with that class or with other classes (Gao et al., 2006; Myint et al., 2011). Pal (2006) used a genetic algorithm (GA) with an SVM-based fitness function (SVM/GA) for feature
selection using DAIS hyperspectral data and achieved a classification accuracy of 91.89%. Otukei and Blaschke (2010) compared classification mapping accuracies using three different classification algorithms, a maximum likelihood classifier (MLC), support vector machines (SVMs) and decision trees (DTs); their results indicated that the DT performed better than the MLC and SVM. Object-based classification methods use object features such as shape, texture and spectral information to segment images (Liu and Xia, 2010; Duro et al., 2012). Myint et al. (2011) used QuickBird imagery to identify urban classes using the maximum likelihood rule and discriminant analysis of spectral information; the study demonstrated that the object-based classifier (90.40% overall accuracy) outperformed the pixel-based classification (67.60%). Learning-based algorithms such as random forests (RFs), classification and regression trees (CARTs), artificial neural networks (ANNs) and SVMs have been widely used in the pixel-based and object-based land cover classification of remote sensing data (Cervone and Haack, 2012). Duro et al. (2012) compared pixel-based and object-based image analysis approaches for classifying broad land cover classes over agricultural landscapes using three supervised machine learning algorithms, decision trees (DTs), random forests (RFs) and support vector machines (SVMs); the pixel-based classification utilized fewer variables, achieved similar classification accuracies and required less time than the object-based classification. Learning-based algorithms have also been shown to obtain better classification accuracies than parametric classification methods. For example, Pal (2005) compared the classification accuracy of an RF classifier with that of SVM and found that the two performed equally well. Zhu et al. (2012) combined multi-season Landsat Enhanced Thematic Mapper Plus (ETM+) and ALOS/PALSAR data using an RF classifier to map land cover, achieving a classification accuracy of 93.82%. Zhu (2013) applied an RF classifier combined with post-hoc smoothing to classify land cover using Landsat Thematic Mapper (TM) images and topographic data; the overall accuracy improved by up to 6%, and the kappa value by up to 9%. Although optical sensors have been successfully used for land cover classification, cloud cover still presents a challenge to obtaining continuous observations. Synthetic aperture radars (SARs), which are active microwave sensors, are almost unaffected by atmospheric interference and can penetrate cloud cover (Li et al., 2012). Radar data have played an increasingly important role in land cover classification in the past decade (Qi et al., 2012; Flores-De-Santiago et al., 2013; Turkar et al., 2012). SAR data provide complementary structural and textural information on ground features relative to optical imagery (McNairn et al., 2009; Cervone and Haack, 2012), and the spatial and texture information inherent to radar data has become valuable for improving mapping accuracy (Li et al., 2012; Zhu et al., 2012).
Research has shown that integrating radar and optical remote sensing data achieves higher predictive accuracy than either sensor used independently (Li et al., 2012; Cervone and Haack, 2012). Image fusion methods such as the intensity-hue-saturation (IHS), principal component analysis (PCA) and Brovey transforms can often produce spectral degradation (Johnson et al., 2012; Basuki et al., 2013). Wavelet transforms are a relatively new and promising approach to fusing remotely sensed images (Núñez et al., 1999; Amolins et al., 2007). The wavelet approach preserves
the spectral characteristics of the multispectral image better than other methods (e.g., IHS or PCA). SAR data have been fused with optical images using wavelet transforms to enhance object features for land cover classification. Waske and Benediktsson (2007) classified multisensor datasets using SVM by fusing multitemporal SAR with optical images from Landsat TM-5 and SPOT 5 based on decision fusion; the proposed SVM-based fusion approach significantly improved classification accuracy. Classification accuracy also increased when Ban et al. (2010) conducted decision-level fusion to integrate Quickbird MS with RADARSAT-1 SAR imagery using an object-based and rule-based approach. Classifications that integrate multisensor datasets, such as multitemporal, multipolarization and multifrequency classifications, have yielded promising results (Larrañaga et al., 2011). However, more application case studies are still needed to achieve better classification accuracy. Moreover, no studies combining HJ1B and SAR data for land cover classification have been reported.

The objective of this research was to develop an algorithm to classify remotely sensed image data by combining optical and SAR imagery in arid and semiarid regions. Pixel-based and object-based image analysis approaches for classifying the HJ1B and ALOS/PALSAR fused image were compared using two machine learning algorithms, support vector machines (SVMs) and random forests (RFs), and the approach with the better classification accuracy was determined. Accurate classification maps would be utilized to monitor land cover and ecosystem change and their potential impacts on biodiversity conservation in arid and semiarid regions.

2. Study area and data

2.1. Study area

The study area is located in the Helan Mountain Nature Reserve Region and on the Yinchuan Plain (latitude 38°22′34″–39°13′50.4″ N, longitude 105°43′43″–106°33′12.86″ E) in the northwestern part of China. It covers a total area of 692,803.5 ha. The region belongs to a transitional zone between steppe and desert regions (Fig. 1). The most widespread vegetation types in the study area include evergreen coniferous forest (the dominant species are Picea crassifolia, Pinus tabulaeformis and Juniperus rigida), deciduous broadleaf species (the dominant species are Ulmus glaucescens Franch and Populus davidiana) and evergreen coniferous and deciduous broadleaf mixed forest (the dominant species are Picea crassifolia and Populus davidiana) (Jiang et al., 2000). The non-forested parts of the study area are largely composed of sand, water, grassland, building/urban uses and several types of agricultural cropland (e.g., wheat, rice, corn). Topographic variation occurs across the study site, and the terrain elevation varies between 1400 m and 3542 m above sea level.

2.2. HJ1B data

HJ, namely 'Huan-Jing', is a small earth observation satellite that is widely used for environmental and disaster monitoring. HJ satellite data are freely available for download from the China Center for Resource Satellite Data and Application website (www.cresda.com). HJ1B is equipped with a charge-coupled device (CCD) camera that measures the solar radiation reflected by ground objects in four spectral channels: three visible domains (blue: 0.43–0.52 μm, green: 0.52–0.60 μm and red: 0.63–0.69 μm) and one near-infrared domain (0.76–0.90 μm) (Meng et al., 2011).
A cloudless HJ1B CCD image (path/row: 17/68) with a 30-m spatial resolution was acquired on July 20, 2010, over the study area. The HJ1B data preprocessing mainly included radiance calibration and atmospheric, geometric and topographic corrections.
Fig. 1. The location of the study area. The black dot in the middle of the rectangle denotes the geographical location of the study area on a map of China.
The radiance calibration converted the digital number (DN) values of the raw image to radiance values. After the atmospheric correction, surface spectral reflectances for the four wavebands were obtained using the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm in the ENVI software package (www.exelisvis.com). The HJ1B image was geometrically rectified to a Universal Transverse Mercator (UTM) projection using 20 ground control points (GCPs). Nearest-neighbor resampling was used during image rectification, with a root mean square (RMS) error of less than 0.5 pixels. A topographic correction algorithm was also applied to remove the influence of terrain distortions on the image classification accuracy (Riaño et al., 2003).
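As a minimal sketch of the DN-to-radiance step (the linear form is standard, but the per-band gains and offsets below are placeholders; the operational coefficients are distributed with each HJ1B scene's metadata):

```python
import numpy as np

# Placeholder calibration coefficients; the real values come from the
# HJ1B scene metadata and differ per band and acquisition.
GAIN = {1: 0.57, 2: 0.53, 3: 0.60, 4: 0.57}    # (W m^-2 sr^-1 um^-1) / DN
OFFSET = {1: 2.2, 2: 4.1, 3: 3.1, 4: 2.8}      # W m^-2 sr^-1 um^-1

def dn_to_radiance(dn, band):
    """Convert raw digital numbers to at-sensor spectral radiance,
    L = gain * DN + offset, for one CCD band (1-4)."""
    return GAIN[band] * dn.astype(np.float64) + OFFSET[band]

# Example: convert a small DN patch of band 3
radiance_b3 = dn_to_radiance(np.array([[112, 87], [95, 140]], dtype=np.uint16), band=3)
```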
2.3. ALOS/PALSAR data

The ALOS/PALSAR data used in the study were acquired in the L-band (23.6 cm wavelength) in the fine-beam dual-polarization (FBD) mode, including HH (horizontal-horizontal) and HV (horizontal-vertical) channels, in ascending orbit with an average incidence angle of 38.7°. The SAR data were acquired on 30 June 2010, with a pixel spacing of 9.4 m (slant range) by 3.2 m (azimuth). The single-look complex data (SLC, level 1.1) were geocoded using the 3-arcsecond SRTM DEM oversampled to 25 m, radiometrically calibrated and speckle filtered using the GAMMA Remote Sensing software. A terrain correction was carried out to minimize the influence of terrain distortions on the radar backscatter (Zhou et al., 2011).
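The speckle filtering itself was performed in GAMMA; as an illustrative stand-in (a basic Lee filter, plainly a substitute rather than the filter actually applied in the study), the local-statistics logic looks like this:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(power_img, size=7, looks=4):
    """Basic Lee speckle filter for a SAR image in linear power units.

    Local mean/variance are estimated in a size x size window; `looks`
    is the effective number of looks, which fixes the assumed
    multiplicative speckle variance.
    """
    mean = uniform_filter(power_img, size)
    mean_sq = uniform_filter(power_img * power_img, size)
    var = np.maximum(mean_sq - mean * mean, 0.0)       # local total variance
    noise_var = (mean * mean) / looks                  # speckle contribution
    weight = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + weight * (power_img - mean)          # blend toward local mean
```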
A general calibration formula for the backscattering coefficient $\sigma^0$ can be expressed mathematically (Ulander, 1996) as

$\sigma^0 = \beta^0 \cos\psi$    (1)

and

$\cos\psi = \sin\theta\cos\alpha + \cos\theta\sin\alpha\sin\beta$    (2)

where $\psi$ is the complement of the smallest angle between the surface normal and the image plane, $\alpha$ and $\beta$ are the slope and aspect angles of the surface, respectively, and $\theta$ is the nominal incidence angle provided with the PALSAR data. The slopes and aspects were obtained from DEM (digital elevation model) data. The local incidence angle for every pixel of the image can be calculated as

$\cos\theta_{loc} = \cos\theta\cos\alpha + \sin\theta\sin\alpha\cos(\varphi - \beta)$    (3)

where $\theta_{loc}$ is the local incidence angle and $\varphi$ is the azimuth angle of the sensor. The surface tilt angles method was used to obtain the relationship between the projection factor and $\cos\psi$. The projection factor is related to $\cos\psi$ by

$\sin(\theta + u)\cos v = \dfrac{\cos\psi}{\sqrt{1 + \left(\frac{\tan\alpha\,\sin\alpha\,\sin 2\beta}{2}\right)^2}}$    (4)

where $u$ is the surface tilt angle in the range direction and $v$ is the surface tilt angle in the azimuth direction. The modified terrain slope correction factor for a tilted area is defined as

$\mathrm{SCF} = \sin(\theta + u)\cos v$    (5)

and

$\sigma_c^0 = \mathrm{SCF}^{\,q}\cdot\beta^0$    (6)

where $q \approx 1/p$, and $p$ takes a value between 0.5 and 1 for diffuse scattering.
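A hedged numpy sketch of Eqs. (1)-(6) follows; it is a minimal illustration rather than the GAMMA implementation used in the study, all angle grids are assumed to be in radians, and the diffuse-scattering exponent p = 0.7 is an assumed value within the stated 0.5-1 range.

```python
import numpy as np

def terrain_corrected_sigma0(beta0, theta, phi, slope, aspect, p=0.7):
    """Slope-correct SAR backscatter following Eqs. (1)-(6).

    beta0: radar brightness; theta: nominal incidence angle; phi: sensor
    azimuth; slope/aspect: DEM-derived surface angles. All 2-D arrays.
    """
    # Eq. (2): cosine of psi, the complement of the angle between the
    # surface normal and the image plane
    cos_psi = (np.sin(theta) * np.cos(slope)
               + np.cos(theta) * np.sin(slope) * np.sin(aspect))
    # Eq. (3): local incidence angle, used here to mask invalid geometry
    cos_theta_loc = (np.cos(theta) * np.cos(slope)
                     + np.sin(theta) * np.sin(slope) * np.cos(phi - aspect))
    # Eqs. (4)-(5): projection factor, i.e. the slope correction factor SCF
    tilt = np.tan(slope) * np.sin(slope) * np.sin(2.0 * aspect) / 2.0
    scf = cos_psi / np.sqrt(1.0 + tilt ** 2)
    scf = np.where(scf > 0, scf, np.nan)      # slopes facing away from sensor
    # Eq. (6): sigma_c0 = SCF**q * beta0 with q ~ 1/p
    sigma_c0 = scf ** (1.0 / p) * beta0
    return np.where(cos_theta_loc > 0, sigma_c0, np.nan)
```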
2.4. Ground reference data

The field data for the sample plots were collected in July and August of 2012. The land cover types were summarized into eight categories: building/urban, cropland, broadleaf forest, coniferous forest, mixed forest, sand, grassland and water (Tables 1 and 2). The sample data were collected by driving along sample polygons. Because the spatial resolution of the HJ1B CCD is 30 m, the same object attribute was selected within each sample polygon to match the field data to the image pixels, and the land cover class was recorded for each polygon. The polygons were located in the field using a handheld GPS (global positioning system) device (eXplorist 500) to ensure the correspondence between the sample image objects and the field observations. A total of 4232 reference pixels were used for training and 3755 for validation across the 8 land cover categories (Fig. 2). Field samples were randomly assigned to the training data and to the accuracy validation set used in the confusion matrices describing the final classification results.

Table 1. Description of the 8 land cover categories in the study area (Zhu et al., 2012).

Coniferous forest: Forested land with ≥80% needleleaved evergreen canopy cover.
Broadleaf forest: Forested land with ≥80% broadleaf evergreen canopy cover.
Mixed forest: Forested land with ≥20% conifer and <80% deciduous canopy cover.
Grassland: Land with herbaceous types.
Sand: Land of exposed sand with never more than 5% vegetated cover.
Urban/build: Land covered by buildings and other man-made structures.
Cropland: Land covered with temporary crops followed by harvest and a bare soil period.
Water: Lakes, reservoirs and rivers.

Table 2. Numbers of plots and pixels per class for the training and validation data.

Class               Training plots   Training pixels   Validation plots   Validation pixels
Building/Urban      27               500               16                 500
Cropland            6                732               22                 500
Broadleaf forest    17               500               15                 356
Coniferous forest   13               500               32                 595
Mixed forest        8                500               26                 304
Sand                14               500               16                 500
Grassland           23               500               18                 500
Water               16               500               11                 500
Total               124              4232              156                3755

3. Methods
3.1. Image fusion using discrete wavelet transforms

Image fusion combines information from multiresolution images of the same scene to produce a new fused image with high spatial and spectral resolutions (Pajares and Cruz, 2004; Lu et al., 2011). Various data fusion methods have been developed to integrate multispectral images and radar data into enhanced multispectral images of high spatial and spectral resolutions for land cover classification (Chibani, 2006; Ehlers et al., 2010; Lu et al., 2011). Multisensor fusion, combining SAR with optical data or hyperspectral with lidar data, has attracted considerable attention because the sensors collect and represent distinct, complementary features (Lu et al., 2011; Shimoni et al., 2009). Data fusion is generally used to improve spatial and spectral resolutions, to obtain better visual interpretations and color information, and to support computer processing such as segmentation, feature extraction and object recognition (Li et al., 1995; Basuki et al., 2013). It is also helpful for distinguishing mixed-class pixels (Lu et al., 2011).

Wavelet transforms can characterize features in an image and reconstruct multiscale signals with minimal distortion of the spectral characteristics of the original image; in particular, discrete wavelet transform (DWT) fusion preserves the rich spectral information of optical images (Choi et al., 2005; Pajares and Cruz, 2004). The two original images are first accurately co-registered so that the information content can be precisely extracted from both datasets; the optical image may also require radiometric and atmospheric calibration before the multisensor data are fused. The corrected images are then decomposed by the wavelet transform at the same resolution. Wavelet decomposition is increasingly being used to process images (Núñez et al., 1999; González-Audícana et al., 2005). Multiresolution analyses based on wavelet theory decompose an image into a set of new images containing low-frequency coarse information and high-frequency detailed information; the coarse component corresponds to larger structures, whereas the detail component corresponds to smaller structures (González-Audícana et al., 2005; Basuki et al., 2013). The DWT decomposes the image into different wavelet coefficients while preserving the image information. To fuse two images, the wavelet coefficients of the different images are combined into new coefficients that retain the appropriate information from the originals. The most important step in wavelet-based image fusion is the appropriate combination of the coefficients to achieve the highest-quality fused image.
Fig. 2. Collected field training and validation data in the study area.
The final fused image is obtained by taking the inverse wavelet transform (IDWT) once the fused wavelet coefficients have been merged (Li et al., 2002; Basuki et al., 2013). The HJ1B image was accurately registered to the ALOS/PALSAR data using 20 control points. Nearest-neighbor resampling was used, with a root mean square (RMS) error of less than 0.2 pixels during the image registration. Different polarization options such as HH and HV perform similarly when used in data fusion (Lu et al., 2011), so the HJ1B image was fused with the ALOS/PALSAR HH polarization using the DWT method. The original HJ1B and ALOS/PALSAR HH polarization images are depicted in Figs. 3a and 3b, and the final fusion result using the DWT is shown in Fig. 3c.
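The paper does not state which coefficient-combination rule was used, so the sketch below assumes a common one: keep the optical approximation coefficients (preserving spectral content) and select, per pixel, the detail coefficient with the larger magnitude (injecting SAR texture). It relies on the PyWavelets package and expects co-registered float arrays of identical shape.

```python
import numpy as np
import pywt

def dwt_fuse_band(optical_band, sar_hh, wavelet="db2"):
    """Fuse one optical band with SAR HH via a single-level 2-D DWT."""
    cA_o, (cH_o, cV_o, cD_o) = pywt.dwt2(optical_band, wavelet)
    cA_s, (cH_s, cV_s, cD_s) = pywt.dwt2(sar_hh, wavelet)

    def pick(d_opt, d_sar):
        # absolute-maximum selection rule for the detail coefficients
        return np.where(np.abs(d_sar) > np.abs(d_opt), d_sar, d_opt)

    fused = (cA_o, (pick(cH_o, cH_s), pick(cV_o, cV_s), pick(cD_o, cD_s)))
    return pywt.idwt2(fused, wavelet)   # inverse DWT reconstructs the band
```

Applying this per band to the four HJ1B channels would yield a four-band fused image analogous to Fig. 3c; other rules (e.g., averaging the detail coefficients) are equally valid design choices.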
3.2. Object-based image analysis The eCognition Developer 8.9 package was used to perform the object-based image segmentation. The object-based image analysis segments the image into image objects to acquire a variety of additional textural and spatial features. A multiresolution segmentation is performed to segment satellite images and delineate objects based on shape and color criteria. It is a bottom-up region merging technique that begins by considering each pixel as a separate object (Coillie et al., 2007). Image objects are extracted from images at hierarchical segmentation levels, and smaller image objects are merged to form larger ones based on heterogeneity criteria comprised of spectral similarity, contrast with neighboring objects and the shape characteristics of the resulting object (Gao et al., 2006). An optimization procedure is used to minimize the average heterogeneity of the resulting image objects weighted by their sizes during the merging process. The mergers are performed in a pairwise clustering process that finds the area of minimum spectral and spatial heterogeneity. The process stops when the smallest growth
exceeds a user-defined threshold of within-object homogeneity based on the underlying input layers (Benz et al., 2004). A segmentation algorithm first needs appropriate values for three key parameters, namely scale, shape and compactness, to group pixels with similar spectral or spatial characteristics into objects. The scale parameter determines the internal heterogeneity, the average size and the number of the image objects produced by the segmentation, which in turn influences the classification accuracy of the final map (Drăguţ et al., 2014); a larger scale parameter produces relatively larger image objects and vice versa. The shape and compactness factors control the homogeneity of the image objects at the various scale levels. The HJ1B and fused images were each used as inputs to produce a segmentation suitable for land cover classification mapping. A multiresolution segmentation algorithm was used to segment the fused image, and different scale parameters (5, 10, 15, 20 and 30) were tested to determine the optimal one (Fig. 4). The test showed that a scale parameter of 15 suitably delineated the image objects and achieved a better classification accuracy. The other segmentation parameters were held constant, with a shape factor of 0.8 and a compactness/smoothness factor of 0.8. Ten feature variables were calculated for each segmented object from statistical values, shape and texture features in the object-based classification. These features are divided into 2 major categories (Zhu et al., 2012):
(1) The statistical values of each object: mean and standard deviation of each layer.
Fig. 3. (a) The HJ1B image (bands 4, 2 and 1); (b) the ALOS/PALSAR HH polarization image; (c) the fused image produced using the DWT.
Fig. 4. Demonstration of the optimal scale determination for the segmentation of the HJ1B and ALOS/PALSAR HH fused data.
(2) Texture variables: eight textural measurements derived from the gray-level co-occurrence matrix (GLCM): homogeneity, contrast, dissimilarity, entropy, angular second moment, mean, standard deviation and correlation.
In this study, information based on 4 bands and 10 object features was used in the object-based classification. A total of 40 features were extracted from HJ1B and the fused image for each image object. An object-based classifier was then used to select features and create classification rules for the land cover classification. Finally, the image classification was performed using the built classification rules.
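An illustrative sketch of the two feature categories with scikit-image is given below; the bounding-box crop, the 32-level requantization and the single (distance 1, angle 0) GLCM offset are assumptions for the example, and eCognition's exact feature definitions may differ.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def object_features(band, mask, levels=32):
    """Mean/std plus GLCM textures for one image object on one band.

    `band` is a 2-D uint8 array (0-255), `mask` a boolean array marking
    the object's pixels.
    """
    # Category 1: statistical values over the object's own pixels
    feats = {"mean": float(band[mask].mean()), "std": float(band[mask].std())}

    # Category 2: GLCM textures over the object's bounding box
    rows, cols = np.where(mask)
    crop = band[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    crop = (crop.astype(float) / 256 * levels).astype(np.uint8)  # requantize

    glcm = graycomatrix(crop, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    for prop in ("homogeneity", "contrast", "dissimilarity", "ASM", "correlation"):
        feats[prop] = float(graycoprops(glcm, prop)[0, 0])
    p = glcm[:, :, 0, 0]
    feats["entropy"] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))  # GLCM entropy
    return feats
```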
3.3. Classification using a support vector machine

A support vector machine (SVM) classifier was selected for the land cover classification of the fused data. The SVM classifier is a widely used non-parametric statistical learning classifier that requires no distributional assumptions and can achieve good classification accuracies (Jia et al., 2014). The SVM algorithm derives an optimal separating hyperplane for a training dataset in a multidimensional feature space (Waske and Van der Linder, 2008), using only the samples close to the class boundary. A detailed introduction to the SVM algorithm is given by Burges (1998). In this study, the radial basis function (RBF) was selected as the kernel function for the SVM classifier. The kernel parameters C and γ
were the two basic parameters used for the RBF kernel and were set to 100 and 0.125, respectively. The training of the SVM with the RBF kernel and the generation of the rule images were performed in IDL/ENVI, which uses the LIBSVM library (https://www.csie.ntu.edu.tw/~cjlin/libsvm/) to train the SVM classifier.

3.4. Classification using random forests

Machine-learning methods are non-parametric in that they do not rely on assumptions about the data distribution, and such non-parametric approaches are widely used for forest inventory mapping applications (Franco-Lopez et al., 2001; Tomppo et al., 2009; Powell et al., 2010). Random forest is one such algorithm that has been widely applied by the remote sensing research and applications community. It is an ensemble classifier that builds a forest of classification trees (Breiman, 2001). Each tree is fitted to a different bootstrapped training sample and a randomly selected set of predictor variables, and unweighted voting across the trees produces the overall prediction for each site in the sample. The method is not sensitive to noise or overfitting and has achieved overall accuracies of over 90% in land cover classification (Zhu et al., 2012). Random forests have been used to classify remote sensing imagery with very high accuracy in many fields, such as tree species distribution (Freeman et al., 2012; Naidoo et al., 2012), urban and peri-urban mapping (Zhu et al., 2012) and crop mapping (Long et al., 2013). To produce the maps, the two image analysis approaches, pixel-based and object-based, were used with the machine learning algorithms. The HJ1B and fused images were classified using an RF fitted with the 'randomForest' R package (Liaw and Wiener, 2002). The default value of 500 trees was selected because values greater than the default have little influence on the overall classification accuracy (Duro et al., 2012).
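The study trained the SVM through IDL/ENVI (LIBSVM) and the RF with the 'randomForest' R package; the sketch below is a scikit-learn equivalent that mirrors the stated settings (RBF kernel with C = 100 and γ = 0.125; 500 trees) rather than the original toolchain, and uses synthetic stand-in data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# X: (n_samples, n_features) feature matrix, y: class labels 0-7.
# Synthetic stand-in data; in the study X would hold the 4 fused bands
# (pixel-based) or the 40 object features (object-based).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(4232, 4)), rng.integers(0, 8, size=4232)

# RBF-kernel SVM with the paper's reported settings
svm = SVC(kernel="rbf", C=100.0, gamma=0.125).fit(X, y)

# Random forest with 500 trees, mirroring the 'randomForest' default used
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

pred_svm, pred_rf = svm.predict(X), rf.predict(X)
```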
4. Results

4.1. Pixel-based and object-based classification mapping using fusion data

A comparison between the pixel-based and object-based image analyses based on multiresolution segmentation was used to test the performance of the machine learning methods for land cover classification using the HJ1B and ALOS/PALSAR fused data. The fused image was classified into 8 dominant land cover types: building/urban, cropland, broadleaf forest, coniferous forest, mixed forest, sand, grassland and water. For the pixel-based classification (Fig. 5A and B), the SVM and RF classifiers were used to classify the fused image. The major difference between the classification maps produced by the two algorithms was the amount of sand and building/urban land cover depicted in the northwest corner of the forested area. In the SVM classification (Fig. 5A), the northwest corner of the forested area was depicted as grassland, whereas the map produced by the RF algorithm (Fig. 5B) depicted this area as dominated by sand. The map above the forested area produced by the SVM algorithm showed few building/urban areas and was dominated by the grassland type, whereas the RF algorithm depicted that region as primarily building/urban.

The differences in the thematic maps produced using the object-based image analysis can be visually interpreted from Fig. 5C and D. The main difference between Fig. 5C and D is the relative amount of building/urban land cover depicted in the northern half of the study area. In Fig. 5A and B, the northern half of the study area depicts larger patches of grassland vegetation, whereas the object-based algorithms (Fig. 5C and D) classified this area as building/urban. The object-based classification map produced by the RF algorithm (Fig. 5D) misclassified the grassland in the northeast corner and at the edge of the forested region as building/urban. Moreover, the cropland type was also misclassified as building/urban by the RF algorithm. Overall, the grassland and cropland types showed more indications of omission and commission errors.

4.2. Pixel-based and object-based classification mapping using the HJ1B data

A comparison between the pixel-based and object-based image analyses was also made using the HJ1B data. For the pixel-based classification (Fig. 6A and B), the SVM and RF classifiers were used to classify the HJ1B image. The major difference between the classification maps produced by the two algorithms was the relative amount of forest and building/urban land cover depicted in the northwest corner of the forested area. In the SVM classification (Fig. 6A), the northwest corner of the forested area was depicted as grassland, whereas the map produced by the RF algorithm (Fig. 6B) depicted this area as dominated by building/urban land cover. The map produced by the RF algorithm depicted larger patches of the forest class (Fig. 6B), whereas the SVM algorithm depicted this area as grassland.

The thematic maps produced from the HJ1B data with the object-based classification are shown in Fig. 6C and D. The maps in Fig. 6C and D differ in the relative amounts of building/urban, cropland and grassland. In Fig. 6C, a larger patch of cropland was misclassified as grassland. In Fig. 6D, a substantial amount of grassland was misclassified as building/urban, as in Fig. 5D. Moreover, the grassland and cropland types again showed more indications of omission and commission errors. The visual interpretations of the thematic maps produced using the pixel-based image analyses (Figs. 5A, 5B, 6A and 6B) were similar. The pixel-based classifications based on the SVM and RF algorithms produced more visually accurate maps in arid and semiarid regions than the object-based classifications.

4.3. Accuracy assessment and statistical comparisons

An accuracy assessment was made to test the performance of the machine learning methods for land cover classification using the HJ1B and ALOS/PALSAR fusion data. The overall accuracy (OA), producer's accuracy (PA), user's accuracy (UA) and kappa coefficient (Kappa) were computed from the confusion matrices of the validation data to evaluate the quality of the 8-category land cover classification. Detailed statistics of the classification accuracies based on the validation data are shown in Table 3. In general, the pixel-based classification achieved a similar overall accuracy to the object-based image analysis using the SVM algorithm, whereas the overall classification accuracy of the pixel-based classification was superior to that of the object-based classification using the RF algorithm.

The overall performance of the pixel-based classification (OA: 79.0%; Kappa: 0.75) was similar to that of the object-based classification (OA: 80.0%; Kappa: 0.77) using the SVM classifier. All of the land cover types obtained greater than 80% UA, except the coniferous, broadleaf and mixed forest types, which achieved less than 80% when the pixel-based classification was used to classify the fused image with the SVM and RF algorithms. However, slight differences between the machine learning algorithms were found; for example, cropland had a UA of 54.68% using the SVM algorithm. The PA for all of the land cover types was similar to the UA when using the pixel-based image analysis and the SVM classification algorithm. The pixel-based classification using the RF algorithm obtained a better PA than the SVM algorithm, except for the coniferous forest, which had a PA of 37.65%.
Fig. 5. Comparison of the pixel-based and object-based classifications using the fusion data: A) pixel-based classification using the SVM algorithm; B) pixel-based classification using the RF algorithm; C) object-based classification using the SVM algorithm; D) object-based classification using the RF algorithm.
The three forest types had lower user's and producer's accuracies and were confused with cropland using the SVM classification; these classes had greater than 40% omission and commission errors. The broadleaf and mixed forests were easily confused with coniferous forest using the RF algorithm based on the pixel-based image analysis. In contrast to the pixel-based image analysis, the SVM classifier (OA: 80.0%; Kappa: 0.77) produced a higher overall accuracy than the RF classification (OA: 70.2%; Kappa: 0.66) using the object-based image analysis (Table 3); the RF classifier thus performed worse than the SVM classification with the object-based image analysis. Using the SVM algorithm, the object-based classification achieved greater than 80% user's and producer's accuracies only for the building/urban, sand, water and cropland types; with the RF classifier, only the building/urban, sand and cropland types exceeded 70%. The grassland class produced a relatively low UA (50.96%).
An accuracy assessment was also made to test the performance of the machine learning methods using the HJ1B data. Detailed statistics of the classification accuracies based on the validation data are shown in Table 4. Generally, the pixel-based classification achieved a similar overall accuracy to the object-based image analysis using the SVM algorithm, whereas the pixel-based classification performed better than the object-based classification in overall accuracy using the RF classifier. The overall performance of the SVM classifier (OA: 81.46%; Kappa: 0.79) was similar to that of the RF classifier (OA: 79.8%; Kappa: 0.77) using the pixel-based classification. All of the land cover types had greater than 80% user's accuracies, except the coniferous, broadleaf and mixed forest types, which had less than 80% user's accuracies when the pixel-based classification was used to classify the HJ1B data with the SVM and RF algorithms. Cropland had a UA of 67% using the SVM algorithm. In contrast to the pixel-based image analysis, the object-based image analysis produced a lower OA than the pixel-based classification.
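All four measures follow directly from a confusion matrix; a minimal sketch (rows = classified, columns = reference, matching the layout of Tables 3 and 4):

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, per-class producer's/user's accuracy and the
    kappa coefficient from a confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                        # overall accuracy
    pa = np.diag(cm) / cm.sum(axis=0)            # producer's accuracy (columns)
    ua = np.diag(cm) / cm.sum(axis=1)            # user's accuracy (rows)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, pa, ua, kappa
```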
Table 3. Confusion matrices and classifier accuracies based on the test data using the fused data (rows: classified; columns: reference).

(a) Pixel-based, SVM (OA = 79.0%; Kappa = 0.75)

Class   BU     BF     G      CF     S      W     MF     C     Total   UA(%)
BU      497    0      12     0      0      0     0      0     509     97.64
BF      0      196    0      84     0      0     12     120   412     47.57
G       2      0      484    17     2      0     3      0     508     95.28
CF      0      33     0      305    0      0     111    0     449     67.93
S       0      0      0      0      498    0     0      0     498     100
W       1      0      4      0      0      500   0      0     505     99.01
MF      0      17     0      71     0      0     91     0     179     50.84
C       0      110    0      118    0      0     87     380   695     54.68
Total   500    356    500    595    500    500   304    500   3755
PA(%)   99.4   55.06  96.8   51.26  99.6   100   29.93  76

(b) Object-based, SVM (OA = 80.0%; Kappa = 0.77)

Class   BU     BF     G      CF     S      W     MF     C     Total   UA(%)
BU      449    0      0      0      0      0     0      0     449     100
BF      0      222    0      164    0      0     16     3     405     54.81
G       0      5      414    34     45     34    21     2     555     74.59
CF      14     68     25     338    0      0     103    0     548     61.68
S       0      0      0      0      455    0     0      0     455     100
W       37     0      16     0      0      466   0      0     519     89.79
MF      0      17     20     59     0      0     164    0     260     63.08
C       0      44     25     0      0      0     0      495   564     87.77
Total   500    356    500    595    500    500   304    500   3755
PA(%)   89.8   62.36  82.8   56.81  91     93.2  53.95  99

(c) Pixel-based, random forest (OA = 85.5%; Kappa = 0.84)

Class   BU     BF     G      CF     S      W     MF     C     Total   UA(%)
BU      486    0      13     8      0      7     0      0     514     94.55
BF      0      289    0      162    0      0     9      6     466     62.02
G       13     0      482    1      19     0     0      0     515     93.59
CF      0      20     3      224    0      0     27     0     274     81.75
S       0      0      0      1      481    0     0      0     482     99.79
W       0      0      2      0      0      493   0      0     495     99.6
MF      1      37     0      149    0      0     263    0     450     58.44
C       0      10     0      50     0      0     5      494   559     88.37
Total   500    356    500    595    500    500   304    500   3755
PA(%)   97.2   81.18  96.4   37.65  96.2   98.6  86.51  98.8

(d) Object-based, random forest (OA = 70.2%; Kappa = 0.66)

Class   BU     BF     G      CF     S      W     MF     C     Total   UA(%)
BU      408    9      61     0      0      49    14     0     541     75.42
BF      0      150    0      132    0      0     9      61    352     42.61
G       53     9      423    65     45     156   54     25    830     50.96
CF      0      133    0      349    0      0     69     0     551     63.34
S       0      0      0      0      455    0     0      0     455     100
W       39     0      16     0      0      295   0      0     350     84.29
MF      0      55     0      49     0      0     158    18    280     56.43
C       0      0      0      0      0      0     0      396   396     100
Total   500    356    500    595    500    500   304    500   3755
PA(%)   81.6   42.13  84.6   58.66  91     59    51.97  79.2

Note: BU = Building/Urban, BF = Broadleaf forest, G = Grassland, CF = Coniferous forest, S = Sand, W = Water, MF = Mixed forest, C = Cropland, OA = Overall accuracy, PA = Producer's accuracy, UA = User's accuracy.
Table 4. Confusion matrices and classifier accuracies based on the test data using the HJ1B data (rows: classified; columns: reference).

(a) Pixel-based, SVM (OA = 81.46%; Kappa = 0.79)

Class   BU     BF     G      CF     S      W     MF     C     Total   UA(%)
BU      444    0      16     2      0      0     0      0     462     96.1
BF      0      222    0      112    0      0     7      27    368     60.33
G       12     0      483    22     6      0     2      0     525     92
CF      31     36     0      288    0      0     59     0     414     69.57
S       0      0      0      0      494    0     11     0     505     97.82
W       12     0      1      0      0      500   0      0     513     97.47
MF      1      16     0      90     0      0     155    0     262     59.16
C       0      82     0      81     0      0     70     473   706     67
Total   500    356    500    595    500    500   304    500   3755
PA(%)   88.8   62.36  96.6   48.4   98.8   100   50.99  94.6

(b) Object-based, SVM (OA = 76.9%; Kappa = 0.74)

Class   BU     BF     G      CF     S      W     MF     C     Total   UA(%)
BU      377    0      0      0      0      72    4      0     453     83.22
BF      45     186    0      183    0      0     19     0     433     42.96
G       9      0      430    35     44     0     13     0     531     80.98
CF      0      43     0      296    0      0     53     0     392     75.51
S       0      0      0      0      456    0     0      0     456     100
W       25     0      28     0      0      428   0      0     481     88.98
MF      44     77     25     81     0      0     215    0     442     48.64
C       0      50     17     0      0      0     0      500   567     88.18
Total   500    356    500    595    500    500   304    500   3755
PA(%)   75.4   52.25  86     49.75  91.2   85.6  70.72  100

(c) Pixel-based, random forest (OA = 79.8%; Kappa = 0.77)

Class   BU     BF     G      CF     S      W     MF     C     Total   UA(%)
BU      453    0      59     0      0      32    1      0     545     83.12
BF      0      285    0      164    0      0     30     51    530     53.77
G       29     12     413    10     20     0     12     2     498     82.93
CF      11     24     2      259    0      0     56     0     352     73.58
S       0      0      0      0      480    0     5      0     485     98.97
W       0      0      21     0      0      468   0      0     489     95.71
MF      0      32     0      148    0      0     192    0     372     51.61
C       7      3      5      14     0      0     8      447   484     92.36
Total   500    356    500    595    500    500   304    500   3755
PA(%)   90.6   80.06  82.6   43.53  96     93.6  63.16  89.4

(d) Object-based, random forest (OA = 68.8%; Kappa = 0.64)

Class   BU     BF     G      CF     S      W     MF     C     Total   UA(%)
BU      474    0      109    2      0      2     48     39    674     70.33
BF      0      186    0      175    0      0     11     103   475     39.16
G       0      18     355    64     44     0     13     183   677     52.44
CF      0      129    0      291    0      0     59     0     479     60.75
S       0      0      0      0      456    0     0      0     456     100
W       26     0      36     0      0      498   0      25    585     85.13
MF      0      23     0      63     0      0     173    0     259     66.8
C       0      0      0      0      0      0     0      150   150     100
Total   500    356    500    595    500    500   304    500   3755
PA(%)   94.8   52.25  71     48.91  91.2   99.6  56.91  30

Note: BU = Building/Urban, BF = Broadleaf forest, G = Grassland, CF = Coniferous forest, S = Sand, W = Water, MF = Mixed forest, C = Cropland, OA = Overall accuracy, PA = Producer's accuracy, UA = User's accuracy.
Fig. 6. Comparison of the pixel-based and object-based classification using the HJ1B data: A) pixel-based classification using the SVM algorithm; B) pixel-based classification using the RF algorithm; C) object-based classification using the SVM algorithm; D) object-based classification using the RF algorithm.
The SVM classifier (OA: 76.9%; Kappa: 0.74) produced a higher overall accuracy than the RF classification (OA: 68.8%; Kappa: 0.64) using the object-based image analysis (Table 4); the RF classifier again performed worse than the SVM classification with the object-based image analysis.

5. Discussion

According to the visual observations and the quantitative classification accuracy assessment, the pixel-based image analysis achieved better land cover classification results than the object-based image analysis for both the fused data and the HJ1B data using the SVM and RF classifiers. The main difference in the classification results between the SVM and RF classifiers was that the forest vegetation classes in the mountain regions were misclassified as cropland by the SVM classifier. Each class could be distinguished well from the other types, except that the forest classes had lower classification accuracies. The accuracy of the SVM classification was slightly lower than that of the random forest classifier when the pixel-based classification was used to classify the fused data (Table 3). This may have been due in part to the influence of the "forest" and "cropland" classes. Forest and cropland were confused because
they coexist along the mountain boundaries. Moreover, cropland, forest and grass may have similar reflectance signals, and the various forest classes can easily be confused with one another because of their similar spectral characteristics (Waser et al., 2011). Another study demonstrated that the RF classifier outperformed the SVM and MLC classifiers in terms of accuracy when classifying multi-sensor data sets, achieving an overall accuracy of greater than 90% (Zhu et al., 2012). The thematic maps in Figs. 5 and 6 show a considerable amount of signature confusion between building/urban and sparse grassland areas. Such signature confusion between building/urban and grassland areas is typical when using object-based classification (Myint et al., 2011). This could be because sparse grasslands have spectral reflectance and spatial textural properties similar to those of building/urban areas in mountain areas covered by sparse grass and small shrub vegetation. Although we extracted both statistical values and texture variables from the segmented objects, the statistical values of each object still played an important role in determining the segmented objects and the classification accuracy. The pixel-based classification achieved better overall accuracies than the object-based classification when the fused data and the HJ1B image were used for the land cover classification.
A comparison of the fused image and the HJ1B data using the pixel-based and object-based image analyses showed that the integration of the multispectral and SAR imagery could increase the overall accuracy (Tables 3 and 4). The overall accuracy of the object-based classification was lower than that of the pixel-based classification with the RF classifier. In part, this could be because the RF classifier produced suboptimal results with scattered misclassifications and generated more commission errors than the SVM (Zhu, 2013).

6. Conclusion

Land cover classifications based on pixel-based and object-based image analyses were made to quantitatively compare two machine learning algorithms. Differences between the pixel-based and object-based image analyses were found when using the same classification algorithm. The classification accuracies of the pixel-based and object-based image analyses were compared, and the pixel-based image analysis resulted in better classification accuracies. The RF algorithm outperformed the SVM when classifying the various land covers using the pixel-based image analysis. The results showed that the pixel-based image classification performed much better than the object-based classification for land cover classification using the HJ1B and ALOS/PALSAR fusion data, and that the integration of the multispectral and SAR imagery can increase the overall accuracy and kappa coefficient. Furthermore, random forest outperformed the SVM classifier in terms of accuracy when classifying the fused image with the pixel-based classification, whereas the SVM classifier performed better than RF when the object-based classification was used. The fusion of the multispectral and SAR data was worthwhile and improved the classification accuracy, although the accuracy for the forest classes remained lower than that of the other classes. Further research is needed to develop better image segmentation and image fusion algorithms to improve classification accuracies.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 41261089, 41201393, 41201438 and 41271344). The authors thank the China Center for Resource Satellite Data and Application for the free data and the anonymous reviewers for their valuable comments and suggestions. We also thank the Forest Ecosystem Research Station, Helan Mt. National Nature Reserve Administration, for cooperation in the field work.

References

Akar, Ö., Güngör, O., 2012. Classification of multispectral images using Random Forest algorithm. J. Geod. Geoinf. 1, 105–112.
Amolins, K., Zhang, Y., Dare, P., 2007. Wavelet based image fusion techniques: an introduction, review and comparison. ISPRS J. Photogramm. 62, 249–263.
Ban, Y.F., Hu, H.T., Rangel, I.M., 2010. Fusion of Quickbird MS and RADARSAT SAR data for urban land-cover mapping: object-based and knowledge-based approach. Int. J. Remote Sens. 31, 1391–1410.
Basuki, T.M., Skidmore, A.K., Hussin, Y.A., Duren, I.V., 2013. Estimating tropical forest biomass more accurately by integrating ALOS PALSAR and Landsat-7 ETM+ data. Int. J. Remote Sens. 34, 4871–4888.
Benz, U.C., Hofmann, P., Willhauck, G., Lingenfelder, I., Heynen, M., 2004. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. 58, 239–258.
Blaschke, T., 2010. Object based image analysis for remote sensing.
ISPRS J. Photogramm. 65, 2–16.
Breiman, L., 2001. Random forests. Mach. Learn. 45, 5–32.
Burges, C.J.C., 1998. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Disc. 2, 121–167.
Cervone, G., Haack, B., 2012. Supervised machine learning of fused RADAR and optical data for land cover classification. J. Appl. Remote Sens. 6 (063597), 1–18.
Chettri, N., Uddin, K., Chaudhary, S., Sharma, E., 2013. Linking spatio-temporal land cover change to biodiversity conservation in the Koshi Tappu Wildlife Reserve, Nepal. Diversity 5, 335–351.
Chibani, Y., 2006. Additive integration of SAR features into multispectral SPOT images by means of the à trous wavelet decomposition. ISPRS J. Photogramm. 60, 306–314.
Choi, M., Kim, R.Y., Nam, M.-R., Kim, H.O., 2005. Fusion of multispectral and panchromatic satellite images using the curvelet transform. IEEE Geosci. Remote Sens. Lett. 2, 136–140.
Coillie, F.M.B.V., Verbeke, L.P.C., Wulf, R.R.D., 2007. Feature selection by genetic algorithms in object-based classification of IKONOS imagery for forest mapping in Flanders, Belgium. Remote Sens. Environ. 110, 476–487.
Drăguţ, L., Csillik, O., Eisank, C., Tiede, D., 2014. Automated parameterization for multi-scale image segmentation on multiple layers. ISPRS J. Photogramm. 88, 119–127.
Duro, D.C., Franklin, S.E., Dubé, M.G., 2012. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens. Environ. 118, 259–272.
Ehlers, M., Klonus, S., Åstrand, P.J., Rosso, P., 2010. Multi-sensor image fusion for pansharpening in remote sensing. Int. J. Image Data Fusion 1, 25–45.
Flores-De-Santiago, F., Kovacs, J.M., Lafrance, P., 2013. An object-oriented classification method for mapping mangroves in Guinea, West Africa, using multipolarized ALOS PALSAR L-band data. Int. J. Remote Sens. 34, 563–586.
Franco-Lopez, H., Ek, A.R., Bauer, M.E., 2001. Estimation and mapping of forest stand density, volume, and cover type using the k-nearest neighbors method. Remote Sens. Environ. 77, 251–274.
Freeman, E.A., Moisen, G.G., Frescino, T.S., 2012. Evaluating effectiveness of down-sampling for stratified designs and unbalanced prevalence in Random Forest models of tree species distributions in Nevada. Ecol. Model. 233, 1–10.
Gao, Y., Mas, J.-F., Maathuis, B.H.P., Zhang, X.M., Dijk, P.M.V., 2006. Comparison of pixel-based and object-oriented image classification approaches: a case study in a coal fire area, Wuda, Inner Mongolia, China. Int. J. Remote Sens. 27, 4039–4055.
Gislason, P.O., Benediktsson, J.A., Sveinsson, J.R., 2006. Random Forests for land cover classification. Pattern Recogn. Lett. 27, 294–300.
González-Audícana, M., Otazu, X., Fors, O., Seco, A., 2005. Comparison between Mallat's and the 'à trous' discrete wavelet transform based algorithms for the fusion of multispectral and panchromatic images. Int. J. Remote Sens. 26, 595–614.
Jia, K., Wei, X.Q., Gu, X.F., Yao, Y.J., Xie, X.H., Li, B., 2014. Land cover classification using Landsat 8 operational land imager data in Beijing, China. Geocarto Int. 29, 941–951.
Jiang, Y., Kang, M.Y., Liu, S., Tian, L.S., Lei, M.D., 2000. A study on the vegetation in the east side of Helan Mountain. Plant Ecol. 149, 119–130.
Johnson, B.A., Tateishi, R., Hoan, N.T., 2012. Satellite image pan-sharpening using a hybrid approach for object-based image analysis. ISPRS Int. J. Geo-Inf. 1, 228–241.
Larrañaga, A., Álvarez-Mozos, J., Albizua, L., 2011. Crop classification in rain-fed and irrigated agricultural areas using Landsat TM and ALOS/PALSAR data. Can. J. Remote Sens. 37, 157–170.
Li, H., Manjunath, B.S., Mitra, S.K., 1995. Multisensor image fusion using the wavelet transform. Graph. Model. Im. Proc. 57, 235–245.
Li, S.T., Kwok, J.T., Wang, Y.N., 2002. Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images. Inform. Fusion 3, 17–23.
Li, G.Y., Lu, D.S., Moran, E., Dutra, L., Batistella, M., 2012. A comparative analysis of ALOS PALSAR L-band and RADARSAT-2 C-band data for land-cover classification in a tropical moist region. ISPRS J. Photogramm. 70, 26–38.
Liaw, A., Wiener, M., 2002. Classification and regression by randomForest. R News 2, 18–22.
Liu, D.S., Xia, F., 2010. Assessing object-based classification: advantages and limitations. Remote Sens. Lett. 1, 187–194.
Long, J.A., Lawrence, R.L., Greenwood, M.C., Marshall, L., Miller, P.R., 2013. Object-oriented crop classification using multitemporal ETM+ SLC-off imagery and random forest. GISci. Remote Sens. 50, 418–436.
Lu, D.S., Li, G.Y., Moran, E., Dutra, L., Batistella, M., 2011. A comparison of multisensor integration methods for land cover classification in the Brazilian Amazon. GISci. Remote Sens. 48, 345–370.
McNairn, H., Shang, J.L., Jiao, X.F., Champagne, C., 2009. The contribution of ALOS PALSAR multipolarization and polarimetric data to crop classification. IEEE Trans. Geosci. Remote Sens. 47, 3981–3992.
Meng, J.H., Wu, B.F., Chen, X.Y., Du, X., Niu, L.M., Zhang, F.F., 2011. Validation of HJ-1B charge-coupled device vegetation index products with spectral reflectance of Hyperion. Int. J. Remote Sens. 32, 9051–9070.
Myint, S.W., Gober, P., Brazel, A., Grossman-Clarke, S., Weng, Q.H., 2011. Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sens. Environ. 115, 1145–1161.
Núñez, J., Otazu, X., Fors, O., Prades, A., Palà, V., Arbiol, R., 1999. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 37, 1204–1211.
Naidoo, L., Cho, M.A., Mathieu, R., Asner, G., 2012. Classification of savanna tree species in the Greater Kruger National Park region, by integrating hyperspectral and LIDAR data in a Random Forest data mining environment. ISPRS J. Photogramm. 69, 167–179.
Otukei, J.R., Blaschke, T., 2010. Land cover change assessment using decision trees, support vector machines and maximum likelihood classification algorithms. Int. J. Appl. Earth Obs. Geoinf. 12, S27–S31.
Pajares, G., Cruz, J.M.D.L., 2004. A wavelet-based image fusion tutorial. Pattern Recogn. 37, 1855–1872.
Pal, M., 2005. Random forest classifier for remote sensing classification. Int. J. Remote Sens. 26, 217–222.
Pal, M., 2006. Support vector machine-based feature selection for land cover classification: a case study with DAIS hyperspectral data. Int. J. Remote Sens. 27, 2877–2894.
Powell, S.L., Cohen, W.B., Healey, S.P., Kennedy, R.E., Moisen, G.G., Pierce, K.B., Ohmann, J.L., 2010. Quantification of live aboveground forest biomass dynamics with Landsat time-series and field inventory data: a comparison of empirical modeling approaches. Remote Sens. Environ. 114, 1053–1068.
Qi, Z.X., Gar-On Yeh, A., Li, X., Lin, Z., 2012. A novel algorithm for land use and land cover classification using RADARSAT-2 polarimetric SAR data. Remote Sens. Environ. 118, 21–39.
Riaño, D., Chuvieco, E., Salas, J., Aguado, I., 2003. Assessment of different topographic corrections in Landsat-TM data for mapping vegetation types. IEEE Trans. Geosci. Remote Sens. 41, 1056–1061.
Shimoni, M., Borghys, D., Heremans, R., Perneel, C., Acheroy, M., 2009. Fusion of PolSAR and PolInSAR data for land cover classification. Int. J. Appl. Earth Obs. Geoinf. 11, 169–180.
Tomppo, E.O., Gagliano, C., Natale, F.D., Katila, M., McRoberts, R.E., 2009. Predicting categorical forest variables using an improved k-nearest neighbor estimator and Landsat imagery. Remote Sens. Environ. 113, 500–517.
Tuanmu, M.-N., Jetz, W., 2014. A global 1-km consensus land-cover product for biodiversity and ecosystem modeling. Glob. Ecol. Biogeogr. 23, 1031–1045.
Turkar, V., Deo, R., Rao, Y.S., Mohan, S., Das, A., 2012. Classification accuracy of multi-frequency and multi-polarization SAR images for various land covers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 5, 936–941.
Ulander, L.M.H., 1996. Radiometric slope correction of synthetic-aperture radar images. IEEE Trans. Geosci. Remote Sens. 34, 1115–1122.
Waser, L.T., Ginzler, C., Kuechler, M., Baltsavias, E., Hurni, L., 2011. Semi-automatic classification of tree species in different forest ecosystems by spectral and geometric variables derived from Airborne Digital Sensor (ADS40) and RC30 data. Remote Sens. Environ. 115, 76–85.
Waske, B., Benediktsson, J.A., 2007. Fusion of support vector machines for classification of multisensor data. IEEE Trans. Geosci. Remote Sens. 45, 3858–3866.
Waske, B., Van der Linder, S., 2008. Classifying multilevel imagery from SAR and optical sensors by decision fusion. IEEE Trans. Geosci. Remote Sens. 46, 1457–1466.
Zhou, Z.-S., Lehmann, E., Wu, X., Caccetta, P., McNeill, S., Mitchell, A., Milne, A., Tapley, I., Lowell, K., 2011. Terrain slope correction and precise registration of SAR data for forest mapping and monitoring. In: 34th International Symposium on Remote Sensing of Environment (ISRSE 2011), Sydney, Australia, April 2011, pp. 1–4.
Zhu, Z., Woodcock, C.E., Rogan, J., Kellndorfer, J., 2012. Assessment of spectral, polarimetric, temporal, and spatial dimensions for urban and peri-urban land cover classification using Landsat and SAR data. Remote Sens. Environ. 117, 72–82.
Zhu, X., 2013. Land cover classification using moderate resolution satellite imagery and random forests with post-hoc smoothing. J. Spat. Sci. 58, 323–337.