Unsupervised saliency-guided SAR image change detection

Yaoguo Zheng, Licheng Jiao, Hongying Liu, Xiangrong Zhang, Biao Hou, Shuang Wang

Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, International Research Center for Intelligent Perception and Computation, Joint International Research Laboratory of Intelligent Perception and Computation, Xidian University, Xi'an 710071, China

Article history: Received 16 December 2015; Received in revised form 31 May 2016; Accepted 26 July 2016; Available online 4 August 2016.

Abstract
In this paper, a novel unsupervised saliency-guided synthetic aperture radar (SAR) image change detection method is proposed. Salient areas of an image are discriminative and different from the other areas, which makes them easily noticed. This strong local visual contrast makes saliency suitable for guiding the change detection of SAR images, where a difference exists between the two images. By applying saliency extraction to an initial difference map obtained via the log-ratio operator, a saliency map can be obtained in which most of the changed areas are included while the false changed pixels caused by speckle noise are largely suppressed. Then, by thresholding the saliency map, most of the regions of interest can be preserved and further used to extract the corresponding regions from the original SAR images to generate a difference image. The principal component analysis (PCA) method is used to extract features from local patches in order to incorporate spatial information and reduce the influence of isolated pixels. Finally, k-means clustering is employed on the extracted features to obtain the change map, in which the pixels are clustered into two classes: changed areas and unchanged areas. Experimental results on five real and two simulated SAR image data sets demonstrate the effectiveness of the proposed method. © 2016 Elsevier Ltd. All rights reserved.

Keywords: Unsupervised change detection Saliency map Principal component analysis K-means clustering Synthetic aperture radar (SAR) images

1. Introduction

Synthetic aperture radar (SAR) imagery has been widely studied for its ability to image scenes under extreme weather conditions [1]. It provides an effective means of all-day and all-weather imaging that other sensors cannot match. Various SAR image processing tasks have been investigated, such as segmentation [2], classification [3], target recognition [4], denoising [5] and change detection [6,7]. In this paper, we focus on the change detection task, whose goal is to accurately identify the change information between two SAR images captured over the same area at different times. Applications of change detection include disaster monitoring, supervision of national resources and changed target detection [8]. According to whether labeled information is used, the existing SAR image change detection methods can be grouped into three classes: supervised, semi-supervised and unsupervised. Supervised methods need labeled samples to train a suitable classifier, which limits them in real applications where labeled samples are lacking [9].

Semi-supervised approaches utilize labeled and unlabeled samples together to learn a classifier [10] or a distance measure [11], or use active learning to generate informative samples [12]. Although supervised and semi-supervised methods can theoretically achieve better performance than unsupervised ones, unsupervised methods are more popular because they require no labeled samples and generate the difference image directly [13–15]. Therefore, we focus our work on unsupervised change detection. Traditional unsupervised change detection methods mainly include three steps: preprocessing, difference image generation and postprocessing [16]. Different from the traditional methods, our method uses an initial difference image generated by the log ratio to guide the extraction of informative regions, rather than generating the change map directly from it. Various unsupervised methods have been proposed for the SAR image change detection task. Bazi et al. [14] proposed to obtain the change map from the log-ratio map via a reformulated Kittler–Illingworth threshold, under the assumption that pixels in changed and unchanged areas follow a generalized Gaussian distribution. An adaptive semi-parametric method is presented in [15] to estimate the statistical terms of the difference image, and a Markov random field is then used to describe the contextual information for the generation of the difference image. Hou et al. [17] used a discrete wavelet transform strategy to fuse the difference images obtained via the Gauss-log ratio operator and the log-ratio operator.


The noise in the fused image is reduced via the nonsubsampled contourlet transform, and compressive projection is then used to extract a feature for each pixel; finally, the change map is obtained via k-means clustering. Celik [18] adopted principal component analysis (PCA) to construct an eigenvector space over nonoverlapping patches of the difference image, and then projected the blocks centered at each test pixel onto this eigenvector space to obtain representation features. Finally, k-means clustering is adopted to cluster the features into two classes, changed and unchanged. Other traditional clustering methods used for change detection can be found in [19,20], and change detection in real applications can be found in [21,22].

As is well known, k-means clustering is more suitable for data with a Gaussian distribution. With a large number of isolated and singular pixels in the difference image, the performance of k-means clustering is limited. In the difference image obtained via the log ratio without denoising, there are many isolated pixels that differ markedly from their neighboring regions. These isolated pixels are mostly caused by speckle noise and appear as false alarms in the change map. Due to their existence, the performance of PCA-based change detection decreases. Therefore, how to handle isolated false changes in an appropriate manner should be considered in PCA-based methods.

The difference images generated via the subtraction operator or the log-ratio operator have a close relationship with the true reference image, especially in the shape and position of the changes. The gray values of the regions corresponding to changes are higher than those of other regions, which makes these regions salient and distinctive. This observation inspires us to treat the change detection problem as a saliency extraction/detection problem, where saliency can be used as prior knowledge to guide the change detection of SAR images.

Following the framework of change detection from rough localization to fine detection, two important questions need to be considered. How can the interesting (informative and discriminative) areas be found and located automatically? Which kind of feature is better for exact evaluation: the pixel-wise gray value, or an extracted feature that incorporates the contextual information of local areas? For the first question, saliency extraction from images is a feasible approach. Tian et al. [23] proposed to linearly combine the saliency maps of texture, intensity and orientation for the change detection of remote sensing images. An information-theoretical entropy method in a scale-invariant form is used to detect salient regions in polarimetric SAR images [24]. Zhang et al. [25] adopted a saliency extraction method to explore the regions of interest in very high resolution satellite images. A bottom-up visual attention method based on Hebbian neural networks is adopted in [26] for ship detection in SAR images. Saliency has thus shown good performance in the field of remote sensing. Other visual attention mechanisms and saliency extraction methods can be found in [27–35].

Pixel-wise detection can theoretically obtain the best performance, since the pixel is the smallest statistical unit of an image. However, the speckle noise in SAR images seriously limits the performance of pixel-wise methods.
Features in a transformation domain that contain the contextual information of neighboring pixels can reduce the influence of speckle noise and improve the local consistency of neighboring pixels. Therefore, in this paper, we use extracted features for change detection.

In this paper, we propose an unsupervised saliency-guided SAR image change detection method with k-means clustering. A context-aware saliency detection method is employed to detect saliency in an initial difference image. The saliency map is then utilized to guide the extraction of the corresponding regions from the two original SAR images, which are further used to generate a

new difference image via the log-ratio operator. Then, PCA is adopted for feature extraction, where the local contextual information is incorporated. Finally, k-means clustering is used to group the features into two classes, changed areas and unchanged areas, in the feature space. The two main contributions of the proposed method can be summarized as follows: (1) We explore a saliency-guided region extraction method to find the interesting and informative regions in the SAR images, which differs from existing methods. (2) We introduce a spatial-information-based PCA method to unsupervised change detection, whose effectiveness is demonstrated by extensive experiments. The remainder of this paper is organized as follows. Section 2 describes the proposed saliency-guided SAR image change detection method, where k-means clustering is used to obtain the change map. Section 3 presents the experimental results of SGK and the compared methods on five real and two simulated SAR data sets. Finally, we conclude this paper in Section 4.

2. Saliency-guided SAR image change detection with k-means clustering

The flowchart of the proposed unsupervised saliency-guided SAR image change detection method with k-means clustering (referred to as SGK) is shown in Fig. 1. It includes six parts: the log ratio for the initial difference image, saliency map extraction, thresholding for the extraction of the corresponding areas, the log ratio for the difference image, PCA for feature extraction, and k-means clustering to obtain the change map. In the following subsections, we introduce the motivation and the process of the proposed method in detail.

2.1. From saliency to change detection

The saliency map of an image highlights the areas with a strong local or global contrast. They attract the human eye first and hold its attention most of the time. Such a strong contrast always comes from a great difference in local or global texture, gray value, shape, etc. For an initial difference image obtained via the log-ratio operator without denoising, as shown in Fig. 2(a), there exists an area whose gray values are much larger than those of the neighboring regions. This area is salient and first attracts human visual attention due to the strong visual contrast. Fig. 2(b) and (c) shows the saliency map and the reference map, respectively. It can be clearly observed from the two images that the salient areas are similar in shape and position, which can be utilized to guide the change detection of SAR images.

There exists a latent and essential consistency between the saliency detection of an initial difference image and change detection. Saliency detection mainly finds the salient regions that are much different from the other regions; change detection determines the distinctive regions where great changes have occurred. Changes occurring in multi-temporal SAR images result in a difference in the difference image, which is salient compared to other regions. Likewise, the initial difference image (obtained via the log-ratio operator) has a region that is greatly different from the other regions. By extracting the salient areas from Fig. 2(b), we find a shape similarity between the extracted saliency and the reference. From this point of view, the saliency can be used as a rough localization to guide the change detection of SAR images. Inspired by this experimental observation and the relationship between saliency extraction and change detection, we propose to use this knowledge to guide our change detection task under the framework from rough localization to exact detection.

[Fig. 1 pipeline: Image 1 and Image 2 → Log Ratio → Saliency Map → Extracted Saliency → Extracted Areas → Log Ratio → PCA Feature Extraction → K-means Clustering → Change Map]
Fig. 1. Scheme of the proposed unsupervised saliency-guided SAR image change detection method with k-means clustering (SGK).


Fig. 2. The similarity in shape between the saliency map and the reference (in the red box). (a) Initial difference image obtained via the log-ratio operator, (b) saliency map, and (c) reference. (The Bern data set is taken as an example.) (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

2.2. Initial difference image generation

Traditional difference image generation methods include the subtraction operator and the log-ratio operator, which are defined as follows:

\( D_S = \lvert I_1 - I_2 \rvert \)  (1)

\( D_L = \log_{10} \frac{I_1 + 1}{I_2 + 1} \)  (2)

where I1 = {I1(i, j) | 1 ≤ i ≤ I, 1 ≤ j ≤ J} and I2 = {I2(i, j) | 1 ≤ i ≤ I, 1 ≤ j ≤ J} are two SAR images captured at different times, both of size I × J. Although the subtraction operator can find the changes to some extent, it is not suited to SAR images, whose noise is multiplicative. The log ratio has been widely used for SAR images, since its difference image generation can theoretically reduce the influence of speckle noise. In this paper, we adopt the log ratio to generate the initial difference image DL.
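As a concrete illustration of Eqs. (1) and (2), the following NumPy sketch computes both operators. The paper's experiments were implemented in MATLAB; this Python version and its toy input images are purely illustrative.

```python
import numpy as np

def subtraction_difference(i1, i2):
    """Eq. (1): D_S = |I1 - I2| (element-wise)."""
    return np.abs(i1.astype(np.float64) - i2.astype(np.float64))

def log_ratio_difference(i1, i2):
    """Eq. (2): D_L = log10((I1 + 1) / (I2 + 1)); the +1 avoids division by zero."""
    return np.log10((i1.astype(np.float64) + 1.0) / (i2.astype(np.float64) + 1.0))

# Toy multi-temporal pair standing in for the real SAR images I1 and I2.
rng = np.random.default_rng(0)
I1 = rng.gamma(shape=4.0, scale=25.0, size=(301, 301))
I2 = I1.copy()
I2[100:140, 120:180] *= 3.0          # a simulated "changed" patch
D_L = log_ratio_difference(I1, I2)
print(D_L.shape, float(D_L.min()), float(D_L.max()))
```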

2.3. Saliency extraction

As discussed in Section 2.1, visually salient areas always contain informative and distinctive information for visual processing. In this paper, we adopt this idea to guide the change detection of SAR images. Since the initial difference image DL has a region with a relatively strong contrast against its neighborhood, this region is exactly the visually salient area. We therefore propose to use a saliency detection method to locate the change-like regions, based on which our change detection task is then conducted. We use the context-aware saliency detection method [36] for its simplicity and good performance. This method measures the similarity of each patch with the other image patches extracted over


the entire image, from which local and global saliency values for a given patch can be obtained. Let x_i and x_j be two vectorized patches extracted from DL, let d_v(x_i, x_j) be the Euclidean distance between the values of x_i and x_j, normalized to the range [0, 1], and let d_p(x_i, x_j) be the Euclidean distance between the positions of x_i and x_j, normalized to [0, 1] by the larger of the image width and height. The dissimilarity between the two patches is then defined as:

\( d(x_i, x_j) = \frac{d_v(x_i, x_j)}{1 + c \cdot d_p(x_i, x_j)} \)  (3)

where the parameter c is set to 3 in all our experiments, as in [36]. When the number of patches is large, computing the dissimilarities for every pair of patches is time consuming. In practice, it is unnecessary to use all patches; the K most similar patches are enough. Considering the saliency of an image patch at multiple scales, the saliency value of patch x_i at scale r is defined as:

\( S_i^r = 1 - \exp\left\{ -\frac{1}{K} \sum_{k=1}^{K} d\left( x_i^r, x_k^r \right) \right\} \)  (4)

Considering the saliency maps at multiple scales, and the mechanism that patches close to the foci of attention should be emphasized, the saliency value of patch x_i is generated as:

\( \hat{S}_i = \frac{1}{M} \sum_{r \in R} \left[ S_i^r \right] \left( 1 - d^r_{foci}(i) \right) \)  (5)

where M is the number of scales and [·] is a normalization operator that also interpolates the saliency map at the current scale back to the original size. R includes four scales, {100%, 80%, 50%, 30%}. d_foci^r(i) is the Euclidean distance between patch i and the closest focus of attention at scale r, normalized to [0, 1]. We follow the parameter settings in [36]: K = 64, the patch size is 7 × 7 with 50% pixel overlap, and M = 4 in all our experiments. More details on saliency extraction can be found in [36].

2.4. From saliency to area extraction

Once the saliency map of the initial difference image DL has been obtained, the general shape and position of the final change map are determined. However, change detection of SAR images requires an exact change map in order to count and locate the changed and unchanged pixels, and such a direct extraction from the initial difference image is too rough to satisfy the demands of real applications. Therefore, we utilize the saliency map to guide the change detection task and locate the position of the changes in an unsupervised way. With a thresholding method, a pixel is preserved in the extracted areas when its saliency value is larger than a given threshold τ; otherwise the pixel is discarded. The thresholding function is:

\( S_E = \begin{cases} 1, & S \ge \tau \\ 0, & \text{otherwise} \end{cases} \)  (6)

where SE is the thresholding map, in which "1" indicates that the corresponding pixel is preserved in the extracted areas and "0" indicates that it is discarded. By thresholding the saliency map S, the regions of interest with discriminative information are well preserved, while the false changes generated by speckle noise are largely discarded. Therefore, we can extract the areas corresponding to the 1s in SE for further difference image generation.
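To make the saliency-guided extraction concrete, the sketch below strings together a simplified, single-scale version of the patch saliency of Eqs. (3)–(4) with the thresholding of Eq. (6) and the masking step formalized in Eq. (7) below. The non-overlapping patches, the single scale (the multi-scale average of Eq. (5) is skipped) and the way patch saliency is painted back onto pixels are simplifications of this sketch, not the procedure of [36] used in the paper.

```python
import numpy as np

def patch_saliency(d_l, patch=7, K=64, c=3.0):
    """Simplified single-scale version of Eqs. (3)-(4).

    The paper uses 7x7 patches with 50% overlap and four scales; here
    non-overlapping patches and one scale keep the sketch small, so the
    result only approximates the context-aware saliency of [36].
    """
    hgt, wid = d_l.shape
    vecs, pos = [], []
    for r in range(0, hgt - patch + 1, patch):
        for q in range(0, wid - patch + 1, patch):
            vecs.append(d_l[r:r + patch, q:q + patch].ravel())
            pos.append((r + patch / 2.0, q + patch / 2.0))
    vecs = np.asarray(vecs, dtype=np.float64)
    pos = np.asarray(pos, dtype=np.float64)

    # d_v: pairwise Euclidean distance between patch values, scaled to [0, 1].
    sq = np.sum(vecs ** 2, axis=1)
    dv = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * vecs @ vecs.T, 0.0))
    dv /= dv.max() + 1e-12
    # d_p: pairwise positional distance, normalized by the larger image side.
    dp = np.sqrt(np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=2)) / max(hgt, wid)

    d = dv / (1.0 + c * dp)                          # Eq. (3)
    d_knn = np.sort(d, axis=1)[:, 1:K + 1]           # K most similar patches (skip self)
    sal = 1.0 - np.exp(-d_knn.mean(axis=1))          # Eq. (4)

    # Paint each patch's saliency back onto its pixels and normalize to [0, 1].
    s_map = np.zeros_like(d_l, dtype=np.float64)
    for value, (r, q) in zip(sal, pos):
        r0, q0 = int(r - patch / 2.0), int(q - patch / 2.0)
        s_map[r0:r0 + patch, q0:q0 + patch] = value
    return s_map / (s_map.max() + 1e-12)

def threshold_and_mask(i1, i2, s_map, tau):
    """Eqs. (6)-(7): keep pixels whose saliency is at least tau and mask the images."""
    s_e = (s_map >= tau).astype(np.float64)          # Eq. (6)
    return s_e, i1 * s_e, i2 * s_e                   # Eq. (7), element-wise product
```

A typical call would be `s_map = patch_saliency(D_L)` followed by `threshold_and_mask(I1, I2, s_map, tau=0.7)`, with τ searched over a range of values as described in Section 2.7.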

A direct representation of this process is

\( I_{S_i} = I_i \odot S_E, \quad i = 1, 2 \)  (7)

where ⊙ denotes the element-wise product, and IS_i (i = 1, 2) are the images extracted from the two original SAR images without denoising.

2.5. Feature extraction via PCA

Before using PCA for feature extraction, a traditional image processing filter, the mean filter, is adopted to denoise the extracted images for its simplicity and good performance. Considering the influence of the filter window size on the preservation of details in the denoised images, we set the window size h to a small value. For the selected areas after mean filtering, IS1 and IS2, the new difference image DSL is obtained via the log-ratio operator as:

\( D_{SL} = \log_{10} \frac{I_{S1} + 1}{I_{S2} + 1} \)  (8)

Comparing DSL with DL, a direct observation is the difference between IS1 and I1: by thresholding with a given τ, IS1 is obtained from I1, where the non-salient areas with little discriminative information are discarded. For the obtained difference image DSL, we use PCA for feature extraction on nonoverlapping image patches with a window size of h × h. Compared with applying PCA directly to DL, the obtained features are less affected by noise-induced false changes, because PCA is more suitable for data with a Gaussian distribution. The speckle noise produces many isolated pixels in the difference image DL, and this effect is reduced by the mean filtering and the salient-area extraction. First, we partition the difference image DSL into h × h nonoverlapping image blocks. By reshaping the blocks into vectors and arranging them into a data matrix, where each row is a vectorized block, we apply PCA to generate an eigenvector space. Then, over the entire difference image DSL, an overlapping h × h block is created for each pixel and projected onto the eigenvector space to generate its feature vector. With a block size of h × h, the contextual information of the local area is taken into account in feature extraction. Denoting by S the dimension of the extracted feature vector, where 1 ≤ S ≤ h², we obtain a matrix of size S × (I × J) in the feature space to represent the I × J pixels of the difference image DSL. In this way, the change detection problem in the image space is converted to the feature space, where the clustering method is applied to the extracted features rather than directly to the original intensity values of DSL. More details about the feature extraction can be found in [18].
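A minimal sketch of the patch-based PCA feature extraction described above, following the PCA-K procedure of [18]; whether the block mean is subtracted before projection is not stated in the text, so the centring used here is an assumption.

```python
import numpy as np

def pca_patch_features(d_sl, h=3, s_dim=1):
    """PCA features over local patches of the difference image (PCA-K style [18]).

    Eigenvectors are learned from non-overlapping h x h blocks; every pixel is
    then represented by the projection of the overlapping h x h block centred
    on it onto the leading s_dim eigenvectors.
    """
    rows, cols = d_sl.shape
    blocks = [d_sl[r:r + h, c:c + h].ravel()
              for r in range(0, rows - h + 1, h)
              for c in range(0, cols - h + 1, h)]
    x = np.asarray(blocks, dtype=np.float64)
    mean_vec = x.mean(axis=0)
    xc = x - mean_vec
    cov = xc.T @ xc / max(len(xc) - 1, 1)            # h^2 x h^2 covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]                 # sort eigenvectors by eigenvalue
    basis = eigvec[:, order[:s_dim]]                 # keep the S leading components

    pad = h // 2
    padded = np.pad(d_sl, pad, mode='edge')
    feats = np.empty((s_dim, rows * cols), dtype=np.float64)
    k = 0
    # Loop kept for clarity; a strided/vectorized version would be faster.
    for r in range(rows):
        for c in range(cols):
            patch = padded[r:r + h, c:c + h].ravel() - mean_vec
            feats[:, k] = basis.T @ patch
            k += 1
    return feats
```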

Fig. 3. Multitemporal images relating to the city of Bern. (a) Image acquired in April 1999, (b) image acquired in May 1999, and (c) reference.

2.6. K-means clustering

Here, we use k-means clustering to group the obtained feature vectors into two classes, the changed areas and the unchanged areas. K-means is a traditional clustering method with good performance and a low time cost compared with other traditional clustering methods. The number of clusters k is set directly to 2, since we only need to distinguish whether an area is changed or not; k is thus an application-determined parameter. The change map CM = {cm(i, j) | 1 ≤ i ≤ I, 1 ≤ j ≤ J} is obtained via the Euclidean distance ‖·‖₂ in the feature space as follows:

\( cm(i, j) = \begin{cases} 255, & \lVert v(i, j) - V_c \rVert_2 \le \lVert v(i, j) - V_u \rVert_2 \\ 0, & \text{otherwise} \end{cases} \)  (9)

where v(i, j) is the feature extracted for the pixel at coordinate (i, j), and Vc and Vu are the cluster mean feature vectors of the changed and unchanged classes, respectively.

2.7. Discussion about parameters

With the selection of similar patches, the normalization of patch distances and the linear combination of saliency maps at multiple scales, the saliency detection result is robust to its parameters. The parameter τ controls the size of the areas extracted from the two SAR images for change detection: a larger τ results in a smaller proportion of extracted areas, in which the most informative regions of interest are preserved, and vice versa. Therefore, the size of the extracted areas can be tuned via τ. In our experiments, we search the value of τ over a wide range, from 0 to 0.9 with an interval of 0.1. The influence of the speckle noise in the SAR images is also reduced to some extent by discarding the non-salient (uninteresting) areas, which contain a number of isolated pixels. A median filter with a small window size of 3 × 3 is used to reduce the influence of speckle noise while preserving the details of the original SAR images. For the feature extraction of local patches by PCA, we use the largest principal component of the local patches, with a window size such as 3 × 3, as the feature for the subsequent clustering, because the performance of SGK is relatively stable over several dimensionalities. A small window size is used to incorporate spatial information without excessively smoothing the change map. The parameter settings of the PCA feature extraction are consistent with those of PCA-K [18]. Finally, the features extracted from the local patches are clustered into two classes, changed areas and unchanged areas, so the number of clusters in k-means is set to 2 in our experiments. Based on the above analysis, the work in our experiments can be greatly simplified.
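Before moving to the experiments, the final clustering step of Eq. (9) can be sketched as follows, using scikit-learn's k-means; the rule used to decide which of the two clusters is the "changed" one is an assumption of this sketch, since the paper does not spell it out.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_change_map(feats, rows, cols, d_sl):
    """Eq. (9): cluster the per-pixel features into changed / unchanged pixels.

    feats : (S, I*J) matrix from the PCA step, one column per pixel
    d_sl  : difference image, used only to decide which of the two clusters is
            'changed' (the one with the larger mean log-ratio value - an
            assumed rule, not stated in the paper).
    """
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats.T)
    label_img = labels.reshape(rows, cols)
    mean0 = d_sl[label_img == 0].mean()
    mean1 = d_sl[label_img == 1].mean()
    changed = 1 if mean1 > mean0 else 0
    return np.where(label_img == changed, 255, 0).astype(np.uint8)
```

With the earlier sketches, the whole SGK pipeline is roughly: log ratio on the original images, saliency extraction, thresholding and masking, log ratio on the masked images, mean filtering, PCA features, and this clustering step.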

3. Experiments

In this section, we evaluate the performance of the proposed SGK method on five real and two simulated SAR data sets. We compare SGK with the log-normal generalized Kittler and Illingworth thresholding algorithm (LN-GKIT) [37], principal component analysis with k-means clustering (PCA-K) [18], the fuzzy local information c-means clustering (FLICM) method [38], the modified Markov random field with fuzzy c-means (MRFFCM) [39], SGK without PCA (i.e., DSL + saliency + k-means clustering, referred to as DSK), SGK without saliency (i.e., DL + PCA + k-means clustering, which is exactly the PCA-K method, so this comparison is omitted), and SGK without saliency and PCA (i.e., DL + k-means clustering, referred to as DK). LN-GKIT and PCA-K are compared because they are classical change detection methods for remote sensing images. FLICM and MRFFCM are compared because they are spatially aware change detection methods that have shown good performance, both statistically and visually, by incorporating local spatial information. DSK and DK are compared to show the effectiveness of PCA and of saliency extraction in SGK. False alarms (FA), missed alarms (MA), overall error (OE) and the Kappa coefficient (Kappa) [40] are used as evaluation criteria in our experiments. All experiments are conducted on a personal computer with a 2 GHz Core 2 Duo CPU and 4 GB RAM using MATLAB; the CPU time costs are recorded via the MATLAB functions tic and toc.

3.1. Experimental data sets

The first data set is the Bern data set, consisting of two images of 301 × 301 pixels acquired by the European Remote Sensing 2 satellite SAR sensor over an area near the city of Bern, Switzerland, in April and May 1999, respectively. Fig. 3 shows the two images and the manually defined reference map. The second data set is the Ottawa data set, two SAR images of 290 × 350 pixels acquired over the city of Ottawa by the Radarsat SAR sensor with a spatial resolution of 10 m × 10 m. They were provided by Defence Research and Development Canada (DRDC) – Ottawa and acquired in July and August 1997, respectively. Fig. 4(a) and (b) presents the flood-afflicted areas and Fig. 4(c) shows the manually defined reference map. The third and fourth data sets are parts of SAR images of the Yellow River Estuary (7666 × 7692 pixels), taken over Dongying in the Shandong Province of China. They were acquired at different times with different levels of noise; the spatial resolution is 8 m × 8 m. The image shown in Fig. 5(a) was acquired in June 2008 with four looks and the image shown in Fig. 5(b) was acquired in June 2009 with a single look, which means the noise levels of the two images are greatly different. Figs. 5(c)–(d) and 6(a)–(b) show the areas selected from Fig. 5(a) and (b), with sizes of 257 × 289 pixels and 400 × 300 pixels, respectively. Fig. 5(e) shows the manually defined reference map, as used in [39].


Fig. 4. Multitemporal images relating to Ottawa. (a) Image acquired in July 1997, (b) image acquired in August 1997, and (c) reference.

Fig. 5. Multitemporal images relating to Yellow River Estuary. (a) Image acquired in June 2008, (b) image acquired in June 2009, (c) area selected in (a), (d) area selected in (b), and (e) reference of the selected area.


Fig. 6. Images captured from the Yellow River Estuary (denoted as Yellow River 2). (a) Image acquired in June 2008 and (b) image acquired in June 2009.

Fig. 7. Images captured from the region of Erqi Changjiang River Bridge, Wuhan, China. (a) Image acquired in June 2006, (b) image acquired in March 2009.

Fig. 8. Simulated data set 1. (a) Image acquired in time 1, (b) image acquired in time 2, and (c) the ground truth.

We denote the fourth data set as Yellow River 2; it has no reference image, so we only show the visual results of each algorithm for subjective comparison. The fifth data set, shown in Fig. 7, was captured over the region of the Erqi Changjiang River Bridge, Wuhan, China, in June 2006 and March 2009, respectively. Its size is 500 × 500 pixels with a resolution of 10 m, and the Equivalent Number of Looks (ENL) of the data set equals 1 [41]. Changes with different values occur in this data set. The sixth and seventh data sets are two simulated data sets. Simulated data set 1, shown in Fig. 8, has a size of 251 × 282 pixels with an ENL of 1, in which changes with regular shapes are manually inserted; the inserted changes have different values. Simulated data set 2 is a 250 × 350 pixel section of the data set described in [42]. Fig. 9(a) shows the image with a strong level of speckle noise, whose ENL is 1. Fig. 9(b) is obtained by manually inserting some simulated changes into the original image and then adding speckle noise with an ENL of 1. Fig. 9(c) shows the reference image, which has 2109 changed pixels and 85,391 unchanged pixels. Simulated data set 2 is used to test the performance of SGK against several levels of speckle noise, with ENL equal to 1, 2 and 4, respectively.
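For reference, the evaluation criteria used below (FA, MA, OE and the Kappa coefficient [40]) can be computed from a binary change map and the reference map as in the following sketch; the Kappa formula is the standard agreement coefficient, and any deviation from the paper's exact convention is an assumption of this sketch.

```python
import numpy as np

def change_detection_metrics(change_map, reference):
    """FA, MA, OE and the Kappa coefficient for a binary change map."""
    cm = np.asarray(change_map).astype(bool).ravel()
    ref = np.asarray(reference).astype(bool).ravel()
    n = cm.size
    tp = int(np.sum(cm & ref))       # correctly detected changed pixels
    fa = int(np.sum(cm & ~ref))      # false alarms
    ma = int(np.sum(~cm & ref))      # missed alarms
    tn = int(np.sum(~cm & ~ref))     # correctly detected unchanged pixels
    oe = fa + ma                     # overall error
    pcc = (tp + tn) / n
    # Chance agreement of the two binary labelings (standard Cohen's kappa).
    pe = ((tp + fa) * (tp + ma) + (tn + ma) * (tn + fa)) / (n * n)
    kappa = (pcc - pe) / (1.0 - pe) if pe < 1.0 else 1.0
    return fa, ma, oe, kappa
```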


Fig. 9. Simulated data set 2 relating to the village of Feltwell, U.K. (ENL = 1). (a) Image acquired in time 1, (b) image acquired in time 2, and (c) the ground truth.

3.2. Experimental results

The visual results and statistical values on the Bern data set are shown in Fig. 10 and listed in Table 1, respectively. SGK [see Fig. 10(h)] yields fewer isolated pixels and better local consistency than the compared methods [see Fig. 10(a)–(f)]. The change map obtained via DK contains many isolated pixels and holes caused by speckle noise. The result in Fig. 10(e) appears better than that of LN-GKIT thanks to the salient-area extraction: in DSK, the non-salient areas generated by speckle noise are discarded, which yields a more consistent change map with fewer isolated pixels. Comparing the change maps in Fig. 10(b) and (f), a clear and direct observation is that the isolated pixels are greatly reduced and the local consistency of continuous regions is significantly improved; this improvement is mainly due to the PCA features, which incorporate the contextual information of local patches. The change maps obtained via FLICM and MRFFCM have better local consistency than those of LN-GKIT, DSK and DK due to the incorporation of the spatial relationship among neighboring pixels; however, the isolated pixels are not handled well. As listed in Table 1, although the consistency is improved, the FA values of FLICM and MRFFCM are much higher than those of the other methods. The related parameters of SGK are set to h = 3 and S = 1. Fig. 10(g) shows the extracted saliency map of the Bern data set; the most focused region is where the changes occurred. With the thresholding method, the non-salient areas, including the noise-generated false changes, are removed from the difference image. By including the salient-area extraction and the PCA feature extraction, SGK obtains an OE of 290, which is much lower than the OEs of 479 and 686 obtained by DSK and DK, respectively. The values in brackets in Table 1 are the best threshold values τ used in our experiments for the saliency-based methods. The running time is 128 s for saliency extraction and 0.38 s for PCA and k-means clustering. Since the changes in the Bern data set lie within a small region, the threshold τ is set to a large value of 0.7.

Fig. 11 shows the change maps of the compared methods on the Ottawa data set. The change maps obtained via PCA-K, MRFFCM and SGK have clear edges and few isolated pixels, and the consistency of these three maps is significantly higher than that of the other methods. The performance of PCA-K and SGK is greatly improved compared with that of DSK and DK, due to the incorporation of the contextual information of local patches and of the saliency. The change map obtained via DSK is much clearer than that of DK, because the non-salient areas, which contain isolated pixels, are removed by the thresholded saliency map. The DK method directly uses the pixel-wise calculation for difference image generation, so its local uniformity is greatly affected by speckle noise. The OE of DK is 4827; with saliency extraction it improves to 4068, and with the PCA features it further improves to 2407, which shows the importance of saliency and of the extracted features in change detection. Combining saliency with PCA on top of DK, the OE improves to 1063, which is much lower than that of DSK, DK and PCA-K. The change map shown in Fig. 11(f) and the statistics listed in Table 2 demonstrate the superiority of SGK. The running time is 92 s for saliency extraction and 0.71 s for PCA and k-means clustering. Fig. 11(g) shows the extracted saliency map of the Ottawa data set, which is quite different from that of the Bern data set. In the Bern data set the changes occur in a nearly continuous region, whereas in Ottawa the changes are spread over nearly the entire image, so in order to capture the change area sufficiently, the threshold τ is set to a small value of 0.4. The related parameters of SGK are set to h = 3 and S = 1.

Figs. 12 and 13 show the change maps of the compared methods on the Yellow River and Yellow River 2 data sets, respectively; the statistical values for Yellow River are listed in Table 3. Due to the strong speckle noise, the pixel-wise methods LN-GKIT, DSK and DK do not perform very well, quite differently from the results on Bern and Ottawa. Although LN-GKIT and DK can find the entire region of changes, they are sensitive to speckle noise, which results in many isolated pixels. By adding the salient-area extraction to DK, the change map obtained via DSK has nearly no isolated changes outside the areas of interest. PCA-K considers the local contextual information of patches, which results in fewer isolated pixels and large smooth regions. FLICM and MRFFCM preserve the shape of the changes better; however, there are still many isolated changes or over-smoothed regions. Although SGK fails to detect the changes at the bottom of the Yellow River image, its detail preservation is better than that of PCA-K and the spatially related MRF methods. The running time is 111 s for saliency extraction and 0.93 s for PCA and k-means clustering on the Yellow River data set.


Fig. 10. Change detection results by using different methods on the Bern data set. (a) LN-GKIT, (b) PCA-K, (c) FLICM, (d) MRFFCM, (e) DSK, (f) DK, (g) saliency map, (h) SGK, and (i) reference.

Table 1
Comparison of detection results on the Bern data set.

Method      FA    MA    OE    Kappa
LN-GKIT     88    226   314   0.8537
PCA-K       158   146   304   0.8674
FLICM       724   84    808   0.8045
MRFFCM      364   47    411   0.8413
DSK(0.3)    74    405   479   0.7554
DK          360   326   686   0.7035
SGK(0.7)    124   166   290   0.8705

Similar performance can also be observed on the Yellow River 2 data set. The related parameters of SGK are set to h = 3 and S = 1.

Fig. 14 shows the change maps of the compared methods on the data set of the Erqi Changjiang River Bridge, Wuhan, China. LN-GKIT, FLICM and SGK obtain results with few isolated pixels and clear edges. With the extraction of the saliency-related regions from the original SAR images, many of the isolated pixels visible in Fig. 14(f) are removed, which gives Fig. 14(e) a better visual result. The related parameters of SGK are set to h = 3 and S = 1.

Fig. 15 shows the change maps of the compared methods on simulated data set 1, and Table 4 lists the statistical values. The changes in this data set have different regular shapes with different levels of gray values, which makes one region very salient in the obtained saliency map (shown on the left of Fig. 15(g)). Although this increases the difficulty of the threshold-based region extraction for change detection, the obtained result is still superior to those of the other compared methods.


Fig. 11. Change detection results by using different methods on the Ottawa data set. (a) LN-GKIT, (b) PCA-K, (c) FLICM, (d) MRFFCM, (e) DSK, (f) DK, (g) saliency map, (h) SGK, and (i) reference.

Since there are large areas of smooth regions in simulated data set 1, we increase the patch size to 5 × 5 to obtain a smoother change map. It should be noted that with a patch size of 3 × 3, the obtained OE is 2556, which is still much lower than that of the other compared methods. The running time for k-means clustering is 2.2 s. As can be seen from Fig. 15(h), the obtained change map has few isolated pixels and preserves edges well. The related parameters of SGK are set to h = 5 and S = 1.

3.3. Influence of parameters

Fig. 16 shows the effect of τ on the Kappa coefficient for the four referenced SAR image data sets (Bern, Ottawa, Yellow River, and simulated data set 1).


Table 2
Comparison of detection results on the Ottawa data set.

Method      FA    MA    OE    Kappa
LN-GKIT     1674  583   2257  0.9187
PCA-K       955   1515  2470  0.9049
FLICM       2608  369   2977  0.9052
MRFFCM      1636  712   2348  0.9151
DSK(0.5)    819   3249  4068  0.8396
DK          2086  2741  4827  0.8184
SGK(0.4)    127   936   1063  0.9598


It can be clearly observed that, within a wide parameter range, SGK achieves good and stable performance on the three real SAR image data sets. Taking the results on the Bern data set as an example, the Kappa coefficients are almost unchanged for every value of τ. This is because there is only a small changed region in the Bern data set, so varying τ does not greatly change the size of the areas extracted from the two SAR images. For the Ottawa data set, there are large areas of changes scattered nearly throughout the entire image. A smaller value of τ results in a larger size of extracted areas; however, with the speckle noise in the uninteresting regions, the performance of SGK will be degraded to some extent.

Fig. 12. Change detection results by using different methods on the Yellow River data set. (a) LN-GKIT, (b) PCA-K, (c) FLICM, (d) MRFFCM, (e) DSK, (f) DK, (g) saliency map, (h) SGK, and (i) reference.


Fig. 13. Change detection results by using different methods on the Yellow River 2 data set. (a) LN-GKIT, (b) PCA-K, (c) FLICM, (d) MRFFCM, (e) DSK, (f) DK, (g) saliency map, and (h) SGK.

The informative regions of interest are preserved as τ increases, and the influence of speckle noise is also reduced owing to the removal of the uninteresting regions. Therefore, the Kappa coefficient first increases with the value of τ. When τ is too large, the extracted areas become very small and, correspondingly, many informative regions are discarded; lacking enough regions for change detection naturally leads to an unsatisfactory result. Fortunately, as observed from Fig. 16, the performance of SGK is relatively good and stable within a wide parameter range, which shows the robustness of the proposed method to its parameters. The Kappa coefficient of simulated data set 1 decreases when τ is larger than 0.2 because, as shown in Fig. 15(g), the obtained saliency map has one region whose values are much higher than those of the other regions; as τ increases, fewer regions are extracted, resulting in a higher MA in change detection.

Furthermore, we analyze the influence of the patch size h and the feature dimensionality S. Considering the complex scenes of the three real SAR data sets, the patch size h is set to 3 × 3 in our experiments. We have tested the performance of SGK for several values of h, and the OE clearly increases with the value of h (i.e., 3, 5, 7 and 9). The consistency of the feature vectors within a local region increases with h, which is reflected in the change map as larger smooth areas. Overly smoothed areas increase the OE due to the incorrect detection of changes, especially for complex scenes with irregularly shaped changes. For the feature dimensionality S, we have run SGK with values from 1 to h² and found no clear differences, so we directly fix S to 1, which is consistent with the parameter setting of PCA-K in [18]. Fig. 17 shows the influence of the parameters h and S on the performance of SGK on the four referenced SAR image data sets. A clear phenomenon can be observed: the OE changes rapidly with the increase of h. A small value of h preserves the details of the SAR images while reducing the influence of speckle noise to some extent. However, too large a value of h leads to large smooth regions, where the edges and details are morphologically dilated; therefore, the accuracy of exact change detection is greatly affected. Taking the results on the Bern data set as an example, when the window size is set to 3, the performance of SGK changes little over the dimensions from 1 to 9. This stability comes from the local consistency of the pixels in the window: the features extracted for neighboring pixels are extremely similar, so their cluster labels are also the same. Such consistency of features from a local patch greatly increases the smoothness of the change map. For simulated data set 1, there are large areas of smooth regions, so we increase the patch size to 5 in order to improve the homogeneity of the change map.


Fig. 14. Change detection results by using different methods on the data set of Erqi Changjiang River Bridge, Wuhan, China. (a) LN-GKIT, (b) PCA-K, (c) FLICM, (d) MRFFCM, (e) DSK, (f) DK, (g) saliency map, and (h) SGK.

It should be noted that with a patch size of 3 × 3, the obtained OE is 2556, which still outperforms the other compared methods.

3.4. Influence of saliency map

The comparison between DSK and DK shows the influence of saliency extraction on the change detection of SAR images. It can be clearly observed from panels (e) and (f) of Figs. 10–15 that, with the saliency map extraction, there are fewer isolated pixels in the change maps in (e). This is mainly due to the removal of the non-salient areas, which include some noise-induced false changes. Saliency map extraction removes the non-salient areas and preserves the salient areas simultaneously. Since k-means clustering is sensitive to outliers, removing the noise-induced regions before clustering improves the clustering performance to some extent. The statistical results of DK, DSK and SGK are listed in the last three rows of Tables 1–4. There is a distinct improvement in OE and Kappa when comparing DSK with DK, which shows the effect of saliency extraction in SAR image change detection. Taking the results in Table 1 as an example, without the saliency extraction, the OE and Kappa of DK are 686 and 0.7035, respectively, whereas with the salient-area extraction, the OE and Kappa of DSK are 479 and 0.7554, i.e., improvements of 207 in OE and about 0.05 in Kappa. Similar performance improvements can also be found in Tables 2–4. Both the visual results and the statistical values of the compared algorithms have shown the effectiveness of salient-area extraction in the change detection of SAR images.


Fig. 15. Change detection results by using different methods on the simulated data set 1. (a) LN-GKIT, (b) PCA-K, (c) FLICM, (d) MRFFCM, (e) DSK, (f) DK, (g) saliency map, (h) SGK, and (i) reference.


Fig. 16. Influence of the parameter τ on the Kappa coefficient (curves shown for the Bern, Ottawa, Yellow River and simulated data set 1 data sets).

3.5. Influence of speckle noise

Fig. 18 shows the influence of speckle noise with several ENLs on the change detection results for simulated data set 2. A clear observation is that the OE decreases as the ENL increases. When the ENL equals 1, there is a strong level of speckle noise in the two SAR images, which results in an OE of 600. When the ENL increases to 2 or 4, the intensity of the speckle noise is greatly reduced, and the OE decreases to 400, which is smaller than 600. This indicates that the lower the intensity of the speckle noise, the smaller its influence. Fig. 19 shows the change maps for each ENL. As the ENL increases, the obtained change map shows more regular changes compared with the reference map.
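The different noise levels can be reproduced, at least qualitatively, with the usual multiplicative model in which fully developed intensity speckle with a given ENL is gamma-distributed with unit mean; how the simulated data sets were actually generated is not specified beyond the ENL, so the sketch below is only an illustration.

```python
import numpy as np

def add_speckle(intensity, enl, rng):
    """Multiply an ideal intensity image by unit-mean gamma speckle with the given ENL."""
    noise = rng.gamma(shape=enl, scale=1.0 / enl, size=intensity.shape)
    return intensity * noise

rng = np.random.default_rng(0)
clean = np.full((250, 350), 100.0)
clean[60:120, 80:200] = 400.0            # a hypothetical "changed" region
noisy = {L: add_speckle(clean, L, rng) for L in (1, 2, 4)}
# Estimated ENL (mean^2 / variance) over a homogeneous background patch.
print({L: round(float(img[:50, :50].mean() ** 2 / img[:50, :50].var()), 2)
       for L, img in noisy.items()})
```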


Fig. 17. Influence of parameters in PCA on the OE. (a) Bern data set, (b) Ottawa data set, (c) Yellow River data set, and (d) the simulated data set 1.

Table 3
Comparison of detection results on the Yellow River data set.

Method      FA      MA    OE      Kappa
LN-GKIT     22,974  919   22,893  0.3378
PCA-K       2899    1979  4878    0.7841
FLICM       1415    2951  4366    0.7695
MRFFCM      1260    1182  2442    0.8791
DSK(0.4)    1112    7191  8303    0.5419
DK          11,120  5472  16,592  0.3522
SGK(0.55)   611     2462  3073    0.8524

Table 4
Comparison of detection results on the simulated data set 1.

Method      FA    MA      OE      Kappa
LN-GKIT     250   8717    8967    0.5896
PCA-K       725   8292    9017    0.5959
FLICM       27    11,969  11,996  0.4025
MRFFCM      336   4480    4816    0.7998
DSK(0.2)    4764  8826    13,590  0.4361
DK          8878  8230    17,108  0.3545
SGK(0.2)    202   1881    2083    0.9177

3.6. Discussion

In our experiments, the mean filter with a small window size is used directly for denoising, for its simplicity, and still yields good experimental results. Fig. 20 shows the residuals between the change maps obtained via SGK and the reference maps.

Fig. 18. The performance of SGK to different levels of speckle noises on the simulated data set 2.

It can be clearly observed that the residuals mostly lie along the edges, which may be caused by using such a simple denoising method against the speckle noise. As is well known, speckle noise has a great effect on the change detection result, so a better denoising method that preserves the image details would improve the performance to some extent. Likewise, a fused difference image that considers local information would improve the performance, since in this paper we only use a simple difference image obtained via the log-ratio operator.
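The simple denoising step referred to here (and used before Eq. (8)) is just a small mean filter; a one-function sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mean_filter(img, h=3):
    """h x h mean (box) filtering, the simple despeckling step used on I_S1 and I_S2."""
    return uniform_filter(img.astype(np.float64), size=h, mode='nearest')
```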


Fig. 19. The change maps of SGK to different levels of speckle noises on the simulated data set 2. (a)–(c) Images in time 1 with ENLs 1, 2, 4. (d)–(f) Images in time 2 with ENLs 1, 2, 4. (g)–(i) Change maps with ENLs 1, 2, 4.


Fig. 20. Residuals between the obtained change maps and the reference maps. (a) Bern data set, (b) Ottawa data set, (c) Yellow River data set, and (d) the simulated data set 1.

One problem with SGK is that it cannot work very well for very small changes far away from the focus region, such as changes of point targets covering only a few pixels. The reason is that, with the salient-area extraction, the regions of the initial difference image corresponding to such very small changes are ignored, so they will not appear in the final change map. For specific applications that require accurate statistics, prior information about, or a previous detection of, the small point targets should be used to preserve them. On another note, local neighborhood information is also important for obtaining an accurate change map, and it can be incorporated into the extraction of the saliency map, the generation of the difference image or the design of the final clustering algorithm. The main time cost of the proposed method lies in the saliency extraction, so reducing the time cost and making SGK suitable for real-time processing is also one direction of our future work.

4. Conclusion

In this paper, a novel unsupervised change detection method for SAR images, which combines a saliency map with k-means clustering, is proposed. We follow a locate-then-identify procedure for the change detection problem of SAR images. The saliency map is used to guide the search for the interesting (salient) areas in an initial difference image obtained via the log-ratio operator, where the effect of speckle noise can be reduced to some extent because the small or false changes are removed from the thresholded saliency map. Then, the areas corresponding to the saliency map are extracted for change detection. We further use PCA for feature extraction, which incorporates the contextual information of local patches. Finally, a simple yet effective clustering method, k-means clustering with k = 2, is used to obtain the change map. Quantitative experimental results have demonstrated the effectiveness of the proposed method. Several directions remain to further improve its performance. In this paper, we use a purely unsupervised approach; the detected change information could be utilized as labeled samples for supervised or semi-supervised change detection [10,11] (e.g., similarity measurement, neural-network-based difference image generation), especially for complex scenes. The automatic selection of parameters for change detection will also be part of our future work.

Acknowledgments The authors would like to thank Dr. L. Su from Xidian University for sharing the results of MRFFCM. This work was supported by the National Basic Research Program (973 Program) of China (2013CB329402), the National Natural Science Foundation of China (61272282, 61501353, 61573267, 61473215), the Program for New Century Excellent Talents in University (NCET-13-0948), the Major Research Plan of the National Natural Science Foundation of China (91438201, 91438103), the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project) (B07048) and the Program for Cheung Kong Scholars and Innovative Research Team in University (IRT_15R53).

References [1] C. Oliver, S. Quegan, Understanding Synthetic Aperture Radar Images, SciTech Publishing, 2004. [2] X. Zhang, L. Jiao, F. Liu, L. Bo, M. Gong, Spectral clustering ensemble applied to sar image segmentation, IEEE Trans. Geosci. Remote Sens. 46 (7) (2008) 2126–2136. [3] G.P. Bernad, L. Denise, P. Réfrégier, Hierarchical feature-based classification approach for fast and user-interactive sar image interpretation, IEEE Trans. Geosci. Remote Sens. Lett. 6 (1) (2009) 117–121. [4] M. Liu, Y. Wu, P. Zhang, Q. Zhang, Y. Li, M. Li, Sar target configuration recognition using locality preserving property and Gaussian mixture distribution, IEEE Geosci. Remote Sens. Lett. 10 (2) (2013) 268–272. [5] C.-.A. Deledalle, L. Denis, F. Tupin, Iterative weighted maximum likelihood denoising with probabilistic patch-based weights, IEEE Trans. Image Process. 18 (12) (2009) 2661–2672. [6] X. Zhang, Y. Zheng, J. Feng, S. Gou, Sar image change detection based on low rank matrix decomposition, in: 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), IEEE, Munich, Germany, 2012, pp. 6271– 6274. [7] G. Liu, L. Jiao, F. Liu, H. Zhong, S. Wang, A new patch based change detector for polarimetric sar data, Pattern Recognit. 48 (3) (2015) 685–695. [8] S.-q. Huang, D.-z. Liu, G.-q. Gao, X.-j. Guo, A novel method for speckle noise reduction and ship target detection in sar images, Pattern Recognit. 42 (7) (2009) 1533–1542. [9] D. Fernàndez-Prieto, M. Marconcini, A novel partially supervised approach to targeted change detection, IEEE Trans. Geosci. Remote Sens. 49 (12) (2011) 5016–5038. [10] M. Roy, S. Ghosh, A. Ghosh, A novel approach for change detection of remotely sensed images using semi-supervised multiple classifier system, Inf. Sci. 269 (2014) 35–47. [11] Y. Yuan, H. Lv, X. Lu, Semi-supervised change detection method for multitemporal hyperspectral images, Neurocomputing 148 (2015) 363–375. [12] M. Roy, S. Ghosh, A. Ghosh, A neural approach under active learning mode for change detection in remotely sensed images, IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 7 (4) (2014) 1200–1206. [13] F. Bovolo, L. Bruzzone, M. Marconcini, A novel approach to unsupervised change detection based on a semisupervised svm and a similarity measure, IEEE Trans. Geosci. Remote Sens. 46 (7) (2008) 2070–2082. [14] Y. Bazi, L. Bruzzone, F. Melgani, An unsupervised approach based on the generalized gaussian model to automatic change detection in multitemporal sar images, IEEE Trans. Geosci. Remote Sens. 43 (4) (2005) 874–887. [15] R.J. Radke, S. Andra, O. Al-Kofahi, B. Roysam, Image change detection algorithms: a systematic survey, IEEE Trans. Image Process. 14 (3) (2005) 294–307.


[16] L. Bruzzone, D.F. Prieto, An adaptive semiparametric and context-based approach to unsupervised change detection in multitemporal remote-sensing images, IEEE Trans. Image Process. 11 (4) (2002) 452–466. [17] B. Hou, Q. Wei, Y. Zheng, S. Wang, Unsupervised change detection in sar image based on gauss-log ratio image fusion and compressed projection, IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 7 (8) (2014) 3297–3317. [18] T. Celik, Unsupervised change detection in satellite images using principal component analysis and k-means clustering, IEEE Geosci. Remote Sens. Lett. 6 (4) (2009) 772–776. [19] Y. Zheng, X. Zhang, B. Hou, G. Liu, Using combined difference image and k-means clustering for sar image change detection, IEEE Geosci. Remote Sens. Lett. 11 (3) (2014) 691–695. [20] A. Ghosh, N.S. Mishra, S. Ghosh, Fuzzy clustering algorithms for unsupervised change detection in remote sensing images, Inf. Sci. 181 (4) (2011) 699–715. [21] Y. Ban, O. Yousif, Multitemporal spaceborne sar data for urban change detection in China, IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 5 (4) (2012) 1087–1094. [22] N. Longbotham, F. Pacifici, T. Glenn, A. Zare, M. Volpi, D. Tuia, E. Christophe, J. Michel, J. Inglada, J. Chanussot, et al., Multi-modal change detection, application to the detection of flooded areas: outcome of the 2009–2010 data fusion contest, IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 5 (1) (2012) 331–342. [23] M. Tian, S. Wan, L. Yue, A novel approach for change detection in remote sensing image based on saliency map, in: Computer Graphics, Imaging and Visualisation, 2007. CGIV'07, IEEE, Bangkok, Thailand, 2007, pp. 397–402. [24] M. Jäger, O. Hellwich, Saliency and salient region detection in sar polarimetry, in: 2005 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2005, pp. 2791–2794. [25] F. Zhang, B. Du, L. Zhang, Saliency-guided unsupervised feature learning for scene classification, IEEE Trans. Geosci. Remote Sens. 53 (4) (2015) 2175–2184. [26] Y. Yu, B. Wang, L. Zhang, Hebbian-based neural networks for bottom-up visual attention and its applications to ship detection in sar images, Neurocomputing 74 (11) (2011) 2008–2017. [27] L. Itti, C. Koch, E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Trans. Pattern Anal. Mach. Intell. (11) (1998) 1254–1259. [28] M. Carrasco, Visual attention: the past 25 years, Vis. Res. 51 (13) (2011) 1484–1525.

[29] A. Borji, L. Itti, State-of-the-art in visual attention modeling, IEEE Trans. Pattern Anal. Mach. Intell. 35 (1) (2013) 185–207. [30] A. Borji, M.-.M. Cheng, H. Jiang, J. Li, Salient object detection: a survey, arXiv preprint arxiv:1411.5878. [31] X. Huang, W. Yang, X. Yin, H. Song, Saliency detection in sar images, in: EUSAR 2014. [32] L. Zhang, K. Yang, H. Li, Regions of interest detection in panchromatic remote sensing images based on multiscale feature fusion, IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 7 (12) (2014) 4704–4716. [33] J.-.Y. Zhu, J. Wu, Y. Xu, E. Chang, Z. Tu, Unsupervised object class discovery via saliency-guided multiple class learning, IEEE Trans. Pattern Anal. Mach. Intell. 37 (4) (2015) 862–875. [34] J. Li, L.-.Y. Duan, X. Chen, T. Huang, Y. Tian, Finding the secret of image saliency in the frequency domain, IEEE Trans. Pattern Anal. Mach. Intell. 37 (12) (2015) 2428–2440. [35] A. Borji, M.-.M. Cheng, H. Jiang, J. Li, Salient object detection: a benchmark, IEEE Trans. Image Process. 24 (12) (2015) 5706–5722. [36] S. Goferman, L. Zelnik-Manor, A. Tal, Context-aware saliency detection, IEEE Trans. Pattern Anal. Mach. Intell. 34 (10) (2012) 1915–1926. [37] G. Moser, S.B. Serpico, Generalized minimum-error thresholding for unsupervised change detection from sar amplitude imagery, IEEE Trans. Geosci. Remote Sens. 44 (10) (2006) 2972–2982. [38] S. Krinidis, V. Chatzis, A robust fuzzy local information c-means clustering algorithm, IEEE Trans. Image Process. 19 (5) (2010) 1328–1337. [39] M. Gong, L. Su, M. Jia, W. Chen, Fuzzy clustering with a modified mrf energy function for change detection in synthetic aperture radar images, IEEE Trans. Fuzzy Syst. 22 (1) (2014) 98–109. [40] G.H. Rosenfield, K. Fitzpatrick-Lins, A coefficient of agreement as a measure of thematic classification accuracy, Photogramm. Eng. Remote Sens. 52 (2) (1986) 223–227. [41] S. Wang, L. Jiao, S. Yang, Sar images change detection based on spatial coding and nonlocal similarity pooling, IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. (2016), http://dx.doi.org/10.1109/JSTARS.2016.2547638. [42] L. Bruzzone, D.F. Prieto, A technique for the selection of kernel-function parameters in rbf neural networks for classification of remote-sensing images, IEEE Trans. Geosci. Remote Sens. 37 (2) (1999) 1179–1184.

Yaoguo Zheng received the B.S. degree from Xi'an University of Posts and Telecommunications, Xi'an, China, in 2010. He is currently a Ph.D. candidate from the Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, China. His current research interests include machine learning and hyperspectral image classification.

Licheng Jiao received the B.S. degree from Shanghai Jiaotong University, Shanghai, China, in 1982 and the M.S. and Ph.D. degrees from Xi'an Jiaotong University, Xi'an, China, in 1984 and 1990, respectively. Since 1992, he has been a Professor with the School of Electronic Engineering, Xidian University, Xi'an, where he is currently the Director of the Key Laboratory of Intelligent Perception and Image Understanding of the Ministry of Education of China. His research interests include image processing, natural computation, machine learning, and intelligent information processing.

Hongying Liu received her B.E. and M.S. degrees in Computer Science and Technology from Xi'an University of Technology, China, in 2002 and 2006, respectively, and Ph.D. in Engineering from Waseda University, Japan in 2012. Currently, she is a Faculty Member at the School of Electronic Engineering, and also with the Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, China. Her major research interests include intelligent signal processing, machine learning, compressive sampling, etc.

Xiangrong Zhang received the B.S. and M.S. degrees in Computer Science and Technology from Xidian University, Xi'an, China, in 1999 and 2003, respectively, and the Ph.D. degree in Pattern Recognition and Intelligent System from Xidian University, Xi'an, China, in 2006. She is currently a professor in the Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Electronic Engineering at Xidian University, China. Her current research interests include image analysis and understanding, pattern recognition, and machine learning.

Biao Hou received the B.S. and M.S. degrees in mathematics from Northwest University, Xi'an, China, in 1996 and 1999, respectively, and the Ph.D. degree in circuits and systems from Xidian University, Xi'an, in 2003. Since 2003, he has been with the Key Laboratory of Intelligent Perception and Image Understanding of the Ministry of Education, Xidian University, where he is currently a Professor. His research interests include compressive sensing and Synthetic Aperture Radar image interpretation.

Shuang Wang received the B.S. and M.S. degrees from Xidian University, Xi'an, China, in 2000 and 2003, respectively, and the Ph.D. degree in circuits and systems from Xidian University, Xi'an, China, in 2007. Currently, she is a Professor with the Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education of China, Xidian University. Her research interests include sparse representation and high-resolution SAR image processing.