Multispectral and panchromatic image fusion using a joint spatial domain and transform domain for improved DFRNT

Accepted Manuscript

PII: S0030-4026(15)01266-8
DOI: http://dx.doi.org/10.1016/j.ijleo.2015.09.185
Reference: IJLEO 56390

To appear in: Optik - International Journal for Light and Electron Optics
Received date: 21-11-2014
Accepted date: 26-9-2015

Please cite this article as: Q. Guo, Q. Wang, Z.J. Liu, A. Li, H.Q. Zhang, Z.K. Feng, Multispectral and panchromatic image fusion using a joint spatial domain and transform domain for improved DFRNT, Optik - International Journal for Light and Electron Optics (2015), http://dx.doi.org/10.1016/j.ijleo.2015.09.185

Joint spatial-transform domain fusion method for improved DFRNT


Q. Guo a,*, Q. Wang b, Z.J. Liu c, A. Li a, H.Q. Zhang a, Z.K. Feng a

a Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing, 100094, China
b School of Physics and Optoelectronic Engineering, Guangdong University of Technology, Guangzhou, 510006, China
c Department of Automation Measurement and Control, Harbin Institute of Technology, Harbin, 150001, China

Abstract

Traditional spatial domain fusion methods such as intensity-hue-saturation (IHS) and principal component analysis (PCA) achieve good spatial quality but often lead to spectral distortion. The discrete fractional random transform (DFRNT) fusion method, a transform domain method previously proposed by the authors, maintains high spectral information. Hence, in this paper we exploit the respective advantages of DFRNT and of IHS or PCA through combined IHS-DFRNT and PCA-DFRNT approaches to obtain both good spectral quality and high spatial quality; the combined approaches also serve as improved DFRNT methods. At the same time, instead of fusing in the spatial domain or the transform domain alone, as conventional methods do, the proposed IHS-DFRNT and PCA-DFRNT approaches fuse in a joint spatial domain and transform domain. Moreover, different random distributions of DFRNT produce different fusion results, which can meet different application demands. The fused images are found to preserve both the spectral information of the multispectral image and the high spatial information of the panchromatic image. In particular, the proposed combined methods are much faster than DFRNT and keep better comprehensive information. Objective quantitative evaluation indexes are also used to illustrate the effectiveness of the proposed methods.

* Corresponding author. Tel.: +86 010 8217 8083; fax: +86 010 8217 8087. E-mail address: [email protected]

Keywords: Discrete fractional random transform; IHS; image fusion; pansharpening; PCA; remote sensing

1. Introduction

In a remote sensing image fusion process, a multispectral (MS) image of high spectral but low spatial resolution is fused with a high spatial resolution panchromatic (PAN) image to produce an image of both high spectral and high spatial resolution [1-7]. The major objective of image fusion is therefore to generate a high spatial resolution MS image. With the development of sensor technology, the number of available high spatial resolution images is increasing; nevertheless, image fusion remains an important and popular way to interpret the data and obtain an image better suited to a variety of applications, such as visual interpretation and digital classification. For human visual perception and further image processing, both the detailed textures and the color information of surface features are important. However, due to the limitations of satellite technology, no single sensor can provide both types of information for one scene, which limits the further application of the imagery. Hence, image fusion is an effective approach to combining these types of information.

Various fusion algorithms have been proposed. They can be broadly classified into spatial domain methods and transform domain methods, according to the domain in which the fusion is performed. The spatial domain methods usually generate a fused image in which each spatial pixel is determined from a set of spatial pixels of the input sources, as is done in the intensity-hue-saturation (IHS) method [8-10] or the principal component analysis (PCA) method [11-13]. The transform domain methods convert the input images into a common transform domain, such as a wavelet domain [2, 14-16], a pyramid domain [17] or a discrete fractional random

transform (DFRNT) domain [18]. Fusion is then performed by combining their transform coefficients.

The IHS method converts three MS bands from red-green-blue (RGB) color space into IHS space to separate the spatial component, the intensity (I), from the spectral components, hue (H) and saturation (S). After the I component is replaced with the PAN image, the merged result is converted back into RGB space. All of these fusion procedures are performed in the spatial domain. Although the IHS method preserves the high spatial information of the PAN image, it severely distorts the spectral information [19]. In fact the I component still carries some spectral information, but that information is discarded entirely when the component is replaced. The procedures of the PCA method are similar to those of IHS, except that the first principal component (PC1) represents the spatial information and PCA can fuse more than three bands. The performance of PCA is similar to that of IHS, and PCA also usually introduces distortion to the spectral information.
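The component-substitution idea behind IHS fusion can be made concrete with a short sketch. This is a minimal NumPy illustration of the fast (additive) IHS formulation, assuming a simple linear intensity I = (R + G + B)/3; the function name and array layout are ours, not the paper's.

```python
import numpy as np

def ihs_substitution_fusion(ms, pan):
    """Sketch of IHS-style component substitution (fast additive form).

    ms  : (H, W, 3) float array, MS image resampled to the PAN size
    pan : (H, W) float array, panchromatic band

    Replacing the intensity I = (R + G + B) / 3 with PAN and converting
    back to RGB is equivalent to adding the difference PAN - I to every band.
    """
    intensity = ms.mean(axis=2)        # spatial component I
    detail = pan - intensity           # what the substitution injects
    return ms + detail[..., None]      # H and S are left untouched
```

By construction the fused image has exactly the PAN intensity, which illustrates why the spectral content can be distorted whenever PAN and I differ strongly.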

The DFRNT method first transforms both the MS and PAN images into the DFRNT domain. In the DFRNT domain, the high amplitude spectrum (HAS) and the low amplitude spectrum (LAS) components carry different information from the original images, and different fusion rules are adopted for the HAS and LAS components, respectively. Performing fusion in a transform domain changes the original image indirectly, based simultaneously on spatial image features and on the distribution features of the transform spectrum. The fused image is observed to preserve both the spectral information of MS and the spatial information of PAN. Compared to the IHS method, the DFRNT method preserves spectral information more effectively but spatial

information less effectively. The low spectral distortion obtained can be attributed to the fact that the spectrum distribution of DFRNT is random and dispersive [18].

In this paper, our motivation is to combine DFRNT with IHS or PCA to generate a fused image of both high spectral quality and high spatial quality. Compared to the DFRNT method the authors proposed before, the IHS-DFRNT and PCA-DFRNT methods proposed here obtain higher spatial quality at faster speed, since the I component or the PC1 of MS is extracted first and then fused with PAN in the DFRNT domain. In the DFRNT method, every MS band needs to be transformed into the DFRNT domain; in the proposed methods, only the I or the PC1 of MS, equivalent to a single MS band, needs to be transformed. On the other hand, since the I component and the PC1 component of MS still carry some spectral information, we further decompose I or PC1 into low frequency and high frequency components during fusion to obtain high spectral quality. The proposed IHS-DFRNT and PCA-DFRNT methods are thus improved versions of the DFRNT method. DFRNT was proposed and first introduced into the fusion field by the authors, and the improved DFRNT fusion is worthy of study.

Meanwhile, the proposed IHS-DFRNT and PCA-DFRNT methods operate in a joint spatial domain and transform domain, instead of fusing in the spatial domain or the transform domain alone as in the conventional way. The proposed methods perform fusion in the spatial domain in the beginning and final steps, and in the transform domain in the intermediate steps. Previously reported combinations of the IHS and wavelet methods and of the PCA and wavelet methods [20-21] also used both the spatial domain and the transform domain. However, the DFRNT domain is very different from the wavelet domain, even though they both are

types of transform domains. In the DFRNT domain there is no intuitive correspondence to the spatial information of the image; it contains only transform spectrum information. In the wavelet domain, by contrast, one can see an approximate spatial image and the detailed edges of the image. The specific difference between the wavelet domain and the DFRNT domain is shown in Fig. 1.

The rest of this paper is organized as follows. The proposed methods are described in Section 2. Section 3 provides the experimental results, including the test data, the fusion methods used for comparison, the evaluation criteria and the performance analysis. The conclusion is drawn in Section 4.

2. The proposed fusion method

2.1 DFRNT and its properties

DFRNT originates from the discrete fractional Fourier transform (DFrFT), which is a joint space-frequency transform. The transform coefficients of DFrFT represent the contribution of each basis function at each frequency; thus DFrFT has an exact transform spectrum domain, and the corresponding spatial image can be recovered only by the inverse DFrFT. DFrFT can clearly display features of signals that are difficult to display in the spatial domain. It converts the grayscale distribution of an image into a frequency distribution; moreover, its fine frequency resolution indicates the extent of change in gray scale. DFRNT shares the excellent mathematical properties of DFrFT and adds inherent randomness. Therefore, performing fusion in such a transform domain is an indirect change of the original image

simultaneously based on spatial image features and different spectrum distribution features.

For discrete transforms, the kernel matrix is key; it can be expressed as the product of eigenvector and eigenvalue matrices through eigendecomposition. DFRNT has the same eigenvalues as DFrFT, but with random eigenvectors. The DFRNT of a two-dimensional signal $x$ can be expressed as

$X^{\alpha} = R^{\alpha} x (R^{\alpha})^{t}.$  (1)

The kernel transform matrix $R^{\alpha}$ is written as

$R^{\alpha} = V D^{\alpha} V^{t},$  (2)

where $D^{\alpha}$ is the diagonal matrix generated by the eigenvalues $\{\exp(-2\pi i n \alpha / T) : n = 0, 1, 2, \ldots, N-1\}$ of DFRNT, $\alpha$ is the fractional order of DFRNT, $V$ is the eigenvector matrix, $T$ is the period of the eigenvalues with respect to the fractional order, and $t$ denotes the transpose. The randomness of DFRNT comes from the matrix $V$, which is generated from the eigenvectors of a symmetric random matrix $Q$. The matrix $Q$ can be obtained from the $N \times N$ real random matrix $P$ by

$Q = (P + P^{t})/2.$  (3)
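Eqs. (1)-(3) translate almost line by line into NumPy. The sketch below is an illustration under the eigenvalue convention reconstructed above; the function names, the default period `T` and the uniform distribution used for `P` are our assumptions, not specifications from the paper.

```python
import numpy as np

def dfrnt_kernel(N, alpha, T=1.0, seed=None):
    """Sketch of the DFRNT kernel R^alpha = V D^alpha V^t of Eq. (2)."""
    rng = np.random.default_rng(seed)
    P = rng.random((N, N))                  # real random matrix P
    Q = (P + P.T) / 2                       # symmetric random matrix, Eq. (3)
    _, V = np.linalg.eigh(Q)                # random orthonormal eigenvectors
    # Eigenvalues exp(-2*pi*i*n*alpha/T), n = 0..N-1
    D = np.exp(-2j * np.pi * np.arange(N) * alpha / T)
    return V @ np.diag(D) @ V.T

def dfrnt2(x, R):
    """Two-dimensional DFRNT of Eq. (1): X^alpha = R^alpha x (R^alpha)^t."""
    return R @ x @ R.T
```

With the same seed (same $V$), unitarity and index additivity $R^{a}R^{b} = R^{a+b}$ follow directly, since $V$ is orthonormal and each eigenvalue has unit modulus.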

DFRNT has several useful mathematical properties, such as linearity, unitarity, index additivity and periodicity. Moreover, its random and dispersive spectrum distribution is its most important property [22]. Owing to these special properties, a DFRNT transform spectrum, shown in Fig. 1 as an example, spreads its amplitude values out, so that many amplitudes can play a role in the fusion. A change of a given strength in the spectrum therefore has a lower influence than it would in a concentrated spectrum distribution. This guarantees the low spectral distortion. Meanwhile, the majority of the fusion results in the DFRNT domain are acceptable, including the amount of

distortion occurring at any position. The dispersivity of the DFRNT spectrum distribution confirms that a certain robustness is obtained.

In addition, the result of DFRNT is completely random, since the transform kernel is random, which follows from the randomness of the matrix $Q$. The eigenvectors of DFRNT depend on the random matrix $Q$, and hence on $P$; therefore the results of DFRNT change whenever the matrix $P$ is changed. Random matrices $P$ of many distribution types can be used, such as Poisson, exponential, beta, gamma, Rayleigh and uniform distributions.
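As a hypothetical helper, the random matrix $P$ of Eq. (3) could be drawn from any of the listed families; the distribution parameters in this sketch are arbitrary illustrative choices, and the function name is ours.

```python
import numpy as np

def random_matrix(N, kind="rayleigh", seed=None):
    """Draw an N x N real random matrix P from one of the distribution
    families mentioned in the text (parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    draws = {
        "poisson":     lambda: rng.poisson(5.0, (N, N)).astype(float),
        "exponential": lambda: rng.exponential(1.0, (N, N)),
        "beta":        lambda: rng.beta(2.0, 5.0, (N, N)),
        "gamma":       lambda: rng.gamma(2.0, 1.0, (N, N)),
        "rayleigh":    lambda: rng.rayleigh(1.0, (N, N)),
        "uniform":     lambda: rng.uniform(0.0, 1.0, (N, N)),
    }
    return draws[kind]()
```

Whatever the family, symmetrizing via Eq. (3) yields a valid $Q$, so the choice of distribution only changes which random eigenvector basis $V$ is obtained.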

2.2 Fusion scheme

In this paper, the MS and PAN images are registered before fusion. The flowchart of the proposed IHS-DFRNT fusion scheme is shown in Fig. 2. The low resolution MS image is first resampled with cubic interpolation to the size of the high resolution PAN image. The three MS bands are converted from RGB color space into IHS space to separate the spatial component (I) from the spectral components (H, S). To minimize spectral distortion, histogram matching [23] is applied to the PAN image so that its brightness and contrast best match those of the I component of the MS image. To preserve the spectral information of MS as much as possible, the H and S components of the MS image are kept untouched, while the spatial component of the MS image and the histogram-matched PAN image are transformed into the DFRNT domain and fused. For the PCA-DFRNT method, histogram matching is done between PAN and the PC1 of MS. The PC1 of MS is fused with the histogram-matched PAN in the DFRNT domain, keeping the other principal components untouched.

It is known that the detailed spatial information of the PAN image is mostly carried by its high frequency components, while the spectral information of the MS image is mostly

carried by its low frequency components. Since both the I component and the PC1 component of the MS image still carry some spectral information, we further decompose I or PC1 into low frequency and high frequency components, and different fusion rules are applied to the different frequency components. In the DFRNT transform domain, the HAS component is the nominal low frequency component, and the LAS component is the nominal high frequency component. As a rule, the majority of the spectral energy is carried in the low frequency component. Thus, the energy is calculated from the spectrum amplitudes sorted in descending order, and the ratios of different parts of the energy to the total energy are computed to extract the HAS and LAS components. When the ratio is extremely small, the LAS component predominates in the transform spectrum, and the fusion result cannot preserve enough spectral information; conversely, when the ratio is too large, the fusion result cannot improve the spatial resolution properly. The ratio giving the best balance of spatial and spectral information, with the best visual quality and evaluation indexes as a whole, serves as the separation threshold to extract the HAS and LAS components.
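The sorting-and-thresholding step described above can be sketched as follows; taking squared amplitude as the energy measure is an assumption of this sketch, and the function name is illustrative.

```python
import numpy as np

def has_mask(spectrum, ratio):
    """Boolean mask of the high amplitude spectrum (HAS) coefficients.

    Amplitudes are sorted in descending order; the fewest largest
    coefficients whose cumulative energy reaches `ratio` of the total
    form the HAS, and the remaining coefficients form the LAS.
    """
    amp = np.abs(spectrum).ravel()
    order = np.argsort(amp)[::-1]          # descending amplitudes
    energy = amp[order] ** 2               # assumed energy measure
    cum = np.cumsum(energy) / energy.sum()
    k = np.searchsorted(cum, ratio) + 1    # number of HAS coefficients
    mask = np.zeros(amp.size, dtype=bool)
    mask[order[:k]] = True
    return mask.reshape(spectrum.shape)
```

A small `ratio` leaves most coefficients in the LAS, matching the trade-off discussed in the text: more spatial detail from PAN, less preserved spectral content.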

For the HAS component, the main objective is to preserve, as much as possible, the spectral information of the MS image while adding the spatial details of PAN into the fusion result. Therefore, only the individual spatial details of PAN are added to each corresponding MS band. The common information $MP_{HAS(com)}$ of MS and PAN in the HAS component can be expressed as

$MP_{HAS(com)} = \min[M_{HAS}, P_{HAS}],$  (4)

where $M_{HAS}$ and $P_{HAS}$ are the HAS components of MS and PAN, respectively. The individual spatial details of PAN, $P_{HAS(own)}$, in the HAS component are written as

$P_{HAS(own)} = P_{HAS} - MP_{HAS(com)}.$  (5)

The details of PAN in the HAS component are simply added to the HAS component of MS to obtain the fused HAS component.

For the LAS component, the main goal is to improve the spatial details of the fused image. Thus, the LAS component of PAN is used directly as the LAS component of the fused result. Finally, the fused image is obtained by taking the inverse DFRNT and then the inverse IHS or PCA back into RGB space.

the inverse IHS or PCA into RGB space.

an

3. Experimental results

M

3.1 Test data, fusion methods for comparison and evaluation criteria Two pairs of remote sensing images are used in our experiments, including

d

QuickBird MS and PAN images, TM MS and SPOT PAN images. The MS and PAN of

te

QuickBird images have 2.8 m and 0.7m spatial resolutions, respectively. The color

Ac ce p

composite of MS bands 2, 4, and 3 in red, green, and blue (243RGB) is selected for the illustration. The spatial resolutions of TM and SPOT images are 30 m and 10 m, respectively. The color composite of the TM MS bands is 5, 4, and 1 in red, green, and blue (541RGB). The sizes of the PAN and resampled MS images are 256 x 256. All MS bands are linearly stretched which leads to an effective use of the whole pixel value range. The QuickBird PAN and resampled MS images are given in Fig. 3(a)-(b), respectively. The SPOT PAN and resampled TM images are shown in Fig. 4(a)-(b), respectively. For comparative purposes, two spatial fusion methods and one transform method are applied to the test data, including IHS, PCA and digital wavelet transform (DWT)

10

Page 10 of 71

methods. For the DWT method, the wavelet decomposition level is two and the fusion rule is replacement. The computing speeds of these methods are also compared. For the QuickBird image, the resulting fused image has a 0.7 m spatial resolution and should be compared with a real 0.7 m MS image in order to assess its quality. Since the latter does not exist, spatially degraded images are used in this paper [24]. The original MS and PAN images are thus degraded to 11.2 m and 2.8 m spatial resolutions, respectively, to simulate the fusion of MS and PAN images with a spatial resolution ratio of 4:1. The fused image (2.8 m) is obtained by fusing the degraded PAN (2.8 m) and MS (11.2 m) images. The result of each fusion method is evaluated by comparing the fused images with the true MS (2.8 m) image. For the TM and SPOT images, the same performance evaluation rule at this degraded scale is used.

Two sets of criteria are used to evaluate the spectral and the spatial performance, respectively. The correlation coefficient (CC) measures the spectral quality of each fused band image, and the Q index [25-26] evaluates the global spectral quality of all the fused bands as a whole. The spatial quality is evaluated by both the average gradient (AG) and the standard deviation (STD).
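The two simplest of these indexes have direct closed forms. In the sketch below, the forward-difference variant of the average gradient is our assumption, since several definitions of AG circulate in the literature.

```python
import numpy as np

def correlation_coefficient(a, b):
    """CC between a fused band and the corresponding reference MS band."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def average_gradient(img):
    """AG: mean magnitude of local gradients, a spatial-quality index.

    Forward differences are cropped to a common shape before averaging;
    this specific normalization is one of several common variants.
    """
    gx = np.diff(img, axis=0)[:, :-1]
    gy = np.diff(img, axis=1)[:-1, :]
    return float(np.sqrt((gx ** 2 + gy ** 2) / 2).mean())
```

CC is invariant to affine rescaling of either band, while AG grows with edge sharpness, which is why the two are paired as spectral and spatial indicators.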

3.2 Visual comparison

Fig. 3(c)-(g) and Fig. 4(c)-(g) illustrate the images fused by the IHS method, the PCA method, the proposed IHS-DFRNT method, the proposed PCA-DFRNT method and the DWT method for the QuickBird and TM-SPOT data, respectively. The random distribution of DFRNT is Rayleigh. For the proposed methods, the energy ratio is 20%, except for the PCA-DFRNT method on the QuickBird image, where it is 10%. From these figures, it is observed that the fused images using the proposed methods preserve more of the spectral

information of the MS images than the traditional IHS and PCA methods and improve the spatial details simultaneously. The image fused by the DWT method has some unclear edges. To demonstrate spectral information retention, Fig. 3(h)-(i) and Fig. 4(h)-(i) show the fusion results of the proposed methods with an energy ratio of 80% for the QuickBird and TM-SPOT images, respectively.

As for the images fused by IHS and PCA, it is obvious that the spectral information is distorted. The tonal variations in the PCA and IHS fused images are noticeable; in particular, the white areas are significantly distorted in the PCA results for the QuickBird and TM-SPOT images. In contrast, the IHS-DFRNT fused images have tonalities very similar to those of the MS image. The PCA-DFRNT fused images also show obvious spectral distortion in white areas when the energy ratio is 10%; however, the distortion is reduced or removed when the ratio is 80%. By observing the results carefully, one can see that the proposed methods give more even, more natural and richer colors than the IHS and PCA methods. In terms of spatial quality, the proposed fusion results are roughly the same as, or slightly inferior to, those of the traditional IHS, PCA and DWT methods. Overall, the proposed methods with an energy ratio of 20% or 10% have almost the same spatial quality as, but slightly better spectral quality than, the IHS and PCA methods, and are comparable with the DWT method. When the energy ratio is 80%, the proposed methods preserve much more spectral quality, at the cost of a little spatial quality, compared with the IHS and PCA methods, and are still comparable with the DWT method. Since the proposed approaches provide a good spectral response while sacrificing only a little spatial information, this tradeoff is worthwhile. These visual observations of the fusion results are quite similar for the high resolution QuickBird image and the mid-resolution TM-SPOT image.

To demonstrate the multiple distribution types of the randomness of DFRNT, differently distributed random matrices P are used for the IHS-DFRNT and PCA-DFRNT fusion methods. We perform the experiments with Poisson, exponential, beta, gamma, Rayleigh and uniform distributions. Moreover, different energy ratios for extracting the HAS and LAS components lead to different fusion results for the same random distribution of DFRNT. Here, the beta and uniform distributions with energy ratios from 10% to 80% are illustrated as samples. Fig. 5 and Fig. 6 show the results of the IHS-DFRNT method for the QuickBird and TM-SPOT images, respectively. Fig. 7 and Fig. 8 show the results of PCA-DFRNT for the QuickBird and TM-SPOT images, respectively. As the energy ratio increases, the spatial quality becomes lower and the spectral quality becomes higher. In particular, the previously obvious spectral distortion in white regions in PCA-DFRNT disappears as the energy ratio increases. The experimental results are consistent with the preceding theoretical analysis. Therefore, one can select the specific random distribution and energy ratio suited to one's needs according to the application purpose.

3.3 Statistical comparison

Besides the subjective evaluation, a statistical quantitative assessment of fusion performance is also necessary. Each comprehensive quality index offers a global view of the fused image quality. Higher quality of the fused image is indicated by larger values of CC, AG, STD and Q; the ideal Q value is 1. Table 1 and Table 2 list the objective evaluation indexes of the different fusion results shown in Fig. 3 and Fig. 4 using the QuickBird and TM-SPOT images, respectively. From these values, it can be seen

that all the methods preserve both the spectral information of the MS image and the high spatial information of the PAN image.

Table 1 Objective evaluation of different fusion results using QuickBird images.

Method        Band    CC       Q        AG        STD
PCA           R       0.8826   0.8267   24.3180   33.8670
              G       0.8668            21.7623   31.6849
              B       0.8654            25.3403   36.3155
PCA-DFRNT a   R       0.8912   0.8346   27.1497   36.2722
              G       0.8812            24.0749   33.5464
              B       0.8772            28.4195   38.7282
IHS           R       0.9242   0.8731   29.3425   43.4974
              G       0.9080            24.2388   36.6385
              B       0.9201            30.5590   45.8985
IHS-DFRNT b   R       0.9309   0.8795   30.0869   44.5284
              G       0.9162            24.8709   37.5472
              B       0.9266            31.4808   47.0285
DWT           R       0.9231   0.8608   29.2249   45.0011
              G       0.9021            24.5177   37.9589
              B       0.9252            31.3054   48.1742

a PCA-DFRNT with Rayleigh random distribution and energy ratio 10%.
b IHS-DFRNT with Rayleigh random distribution and energy ratio 20%.

Table 2 Objective evaluation of different fusion results using TM-SPOT images.

Method        Band    CC       Q        AG        STD
PCA           R       0.8319   0.7381   26.3251   35.0548
              G       0.8061            20.6553   28.7675
              B       0.9067            21.7901   37.9364
PCA-DFRNT a   R       0.8539   0.7544   28.9732   37.6383
              G       0.8373            22.7078   30.6777
              B       0.9208            23.4391   39.6341
IHS           R       0.8052   0.7111   28.1823   40.1837
              G       0.7748            24.2756   35.5547
              B       0.8998            25.0753   45.3715
IHS-DFRNT b   R       0.8313   0.7276   30.5956   42.2831
              G       0.8029            25.8922   36.5474
              B       0.9139            25.9292   46.4058
DWT           R       0.8594   0.7148   34.0566   46.2074
              G       0.8194            25.6531   33.7623
              B       0.9069            27.9379   42.8405

a PCA-DFRNT with Rayleigh random distribution and energy ratio 20%.
b IHS-DFRNT with Rayleigh random distribution and energy ratio 20%.

As demonstrated in Table 1 for the QuickBird data, the IHS-DFRNT fused image provides the largest CC, AG, STD and Q values, while the PCA fused image gives the lowest. With respect to every comprehensive quality index, the IHS-DFRNT method statistically performs better than IHS, the PCA-DFRNT

method performs better than PCA, and the proposed methods are roughly equal to DWT. In Table 2 for the TM-SPOT data, most indexes show the same behavior. Moreover, for the proposed methods with an energy ratio of 80%, the spectral quality is much better than that of the traditional IHS and PCA methods. For example, with the IHS-DFRNT method for QuickBird, the CC values for the R, G and B bands are 0.9576, 0.9490 and 0.9544, respectively, and the Q value is 0.9122; with the IHS method, the CC values are 0.9242, 0.9080 and 0.9201, respectively, and the Q value is 0.8731. With the PCA-DFRNT method for TM-SPOT at an energy ratio of 80%, the CC values for the R, G and B bands are 0.9218, 0.9106 and 0.9566, respectively, and the Q value is 0.8328; with the PCA method, the CC values are 0.8319, 0.8061 and 0.9067, respectively, and the Q value is 0.7381.

It is well known that it is very important for the fused image to preserve as much as possible of the spectral information of the original MS image. The proposed methods do a good job in this respect. The spectral information is first separated from the spatial information by the IHS or PCA transform. Then, keeping the spectral components untouched, the spatial component, which still carries some spectral information, is further separated into high frequency and low frequency components by the DFRNT transform. Finally, during the fusion procedures in the DFRNT domain, only the individual spatial details of PAN are added to each corresponding MS band. All three steps minimize the spectral distortion; the second step in particular ensures that the proposed methods are superior to the IHS and PCA methods.

For the spatial quality, the main spatial information of the PAN image is added into the MS image during the fusion process in the different fusion methods. All the quantitative values in the tables show that the spatial qualities of the fused images based on the proposed algorithms are a little superior to the qualities obtained using the IHS and

PCA algorithms and evenly match that of the DWT algorithm. The spatial indexes of the DWT approach are higher than those of the IHS and PCA methods. However, in terms of visual spatial quality, the proposed methods are roughly the same as, or slightly inferior to, the IHS and PCA methods, and the spatial quality of the DWT method is not always superior to that of the IHS and PCA methods. This may be because the spectral distortion causes gray value distortion, which leads to inconsistent AG and STD values, since AG and STD respond to the change characteristics within one image, not between two images. In addition, an evaluation function itself is only a rough indication of image quality and has some inherent drawbacks.

For the computing speed comparison, we use a computer with an Intel(R) Core(TM) i7-3770 3.40 GHz CPU and 4.00 GB memory, running Matlab R2013a. The computing speeds of the different fusion methods, including IHS-DFRNT, PCA-DFRNT, IHS, PCA, DFRNT and DWT, are shown in Table 3. Clearly, the IHS, PCA and DWT methods are the fastest and the DFRNT method is the slowest. The speeds of the IHS-DFRNT and PCA-DFRNT methods are about one third of that of DFRNT when fusing three MS bands. Although the IHS and PCA methods are fast, they always give the same fusion results, which is a drawback when serious spectral distortion occurs. For the proposed DFRNT methods, different parameters can be adjusted to obtain balanced fusion results. The proposed methods do require more computing time than IHS, PCA and DWT, but the method is still being developed, and we hope that these new combinations improve the DFRNT method enough to obtain far superior fused results.

Table 3 The computing speed comparison for different fusion methods (QuickBird image).

Method      IHS-DFRNT   PCA-DFRNT   DFRNT     PCA      IHS      DWT
Speed (s)   3.7651      3.7090      10.9645   0.4939   0.5256   0.6610

4. Conclusions

We have discussed image fusion methods for multisource remote sense images. The

cr

proposed two methods combine the IHS or PCA approach with DFRNT to obtain high

spectral quality and high spatial quality, which are the improved DFRNT fusion

us

proposed by the authors before. The computing speed of IHS-DFRNT and PCA-

an

DFRNT is about one-third of that of DFRNT when fuse three MS bands. The I and PC1 components of MS are further decomposed into low frequency and high frequency in

M

DFRNT domain during fusion to reduce spectral distortion. At the same time, the proposed methods are spatial-spectral pansharpening algorithms, instead of fusion in

d

spatial domain or transform domain respectively as in the conventional way. Moreover,

te

different random distributions of DFRNT produce different fusion results; different energy ratios with the same random distribution also produce different fusion results in

Ac ce p

a certain change rule. All these characteristics can better meet the demands of different users according to their different application purposes. In order to prove the effectiveness of the proposed methods, both the qualitative

visual evaluation and quantitative analyses are employed. Comparison of the experimental results demonstrates that our methods are generally better than several existing methods.

Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant no. 61101204, the Director Youth Foundation of the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences under Grant no. Y3SJ6400CX, and the Youth Innovation Promotion Association, Chinese Academy of Sciences under Grant no. Y4YR1400QM. The authors would like to thank the Editors and anonymous reviewers for their valuable comments and suggestions.


Figure Captions

Fig.1 Illustrations of the differences between the wavelet and DFRNT transform domains: (a) spatial image; (b) wavelet transform domain; (c) DFRNT transform domain.

Fig.2 The flowchart of the proposed fusion scheme.

Fig.3 Illustrations of the fusion results using QuickBird images: (a) panchromatic image; (b) resampled multispectral image; (c) IHS fused image; (d) PCA fused image; (e) IHS-DFRNT fused image with energy ratio 20% and Rayleigh random distribution; (f) PCA-DFRNT fused image with energy ratio 10% and Rayleigh random distribution; (g) DWT fused image; (h) IHS-DFRNT fused image with energy ratio 80% and Rayleigh random distribution; (i) PCA-DFRNT fused image with energy ratio 80% and Rayleigh random distribution.

Fig.4 Illustrations of the fusion results using TM-SPOT images: (a) panchromatic image; (b) resampled multispectral image; (c) IHS fused image; (d) PCA fused image; (e) IHS-DFRNT fused image with energy ratio 20% and Rayleigh random distribution; (f) PCA-DFRNT fused image with energy ratio 20% and Rayleigh random distribution; (g) DWT fused image; (h) IHS-DFRNT fused image with energy ratio 80% and Rayleigh random distribution; (i) PCA-DFRNT fused image with energy ratio 80% and Rayleigh random distribution.

Fig.5 Illustrations of the IHS-DFRNT fusion results for QuickBird images using beta and uniform random distributions with different energy ratios: (a) 10% beta; (b) 50% beta; (c) 80% beta; (d) 10% uniform; (e) 50% uniform; (f) 80% uniform.

Fig.6 Illustrations of the IHS-DFRNT fusion results for TM-SPOT images using beta and uniform random distributions with different energy ratios: (a) 10% beta; (b) 50% beta; (c) 80% beta; (d) 10% uniform; (e) 50% uniform; (f) 80% uniform.

Fig.7 Illustrations of the PCA-DFRNT fusion results for QuickBird images using beta and uniform random distributions with different energy ratios: (a) 10% beta; (b) 50% beta; (c) 80% beta; (d) 10% uniform; (e) 50% uniform; (f) 80% uniform.

Fig.8 Illustrations of the PCA-DFRNT fusion results for TM-SPOT images using beta and uniform random distributions with different energy ratios: (a) 10% beta; (b) 50% beta; (c) 80% beta; (d) 10% uniform; (e) 50% uniform; (f) 80% uniform.

Table 1 Objective evaluation of different fusion results using QuickBird images.

Table 2 Objective evaluation of different fusion results using TM-SPOT images.

Table 3 The computing speed comparison for different fusion methods.
