Remote sensing image fusion using the curvelet transform

Information Fusion 8 (2007) 143–156

Filippo Nencini (a), Andrea Garzelli (a), Stefano Baronti (b), Luciano Alparone (c,*)

(a) DII: Department of Information Engineering, University of Siena, Via Roma 56, 53100 Siena, Italy
(b) IFAC-CNR: Institute of Applied Physics "Nello Carrara," National Research Council, Via Madonna del Piano 10, 50019 Sesto Fiorentino (FI), Italy
(c) DET: Department of Electronics and Telecommunications, University of Florence, Via Santa Marta 3, 50139 Florence, Italy

* Corresponding author. Tel.: +39 055 4796 563/380 (off./lab.); fax: +39 055 494569 (L. Alparone).

Received 16 March 2005; received in revised form 1 February 2006; accepted 27 February 2006. Available online 2 May 2006. doi:10.1016/j.inffus.2006.02.001

Abstract

This paper presents an image fusion method suitable for pan-sharpening of multispectral (MS) bands, based on nonseparable multiresolution analysis (MRA). The low-resolution MS bands are resampled to the fine scale of the panchromatic (Pan) image and sharpened by injecting highpass directional details extracted from the high-resolution Pan image by means of the curvelet transform (CT). The CT is a nonseparable MRA whose basis functions are directional edges with progressively increasing resolution. The advantage of the CT with respect to conventional separable MRA, either decimated or not, is twofold. Firstly, directional detail coefficients matching image edges may be preliminarily soft-thresholded to achieve a noise reduction better than that obtained in the separable wavelet domain. Secondly, the modeling of the relationships between the high-resolution detail coefficients of the MS bands and of the Pan image fits better, being accomplished in a directional multiresolution domain. Experiments are carried out on very-high-resolution MS + Pan images acquired by the QuickBird and Ikonos satellite systems. Fusion simulations on spatially degraded data, whose original MS bands are available for reference, show that the proposed curvelet-based fusion method performs slightly better than the state of the art. Fusion tests at the full scale reveal that the proposed method achieves an accurate and reliable Pan-sharpening, little affected by local inaccuracies even in the presence of complex and detailed urban landscapes.

© 2006 Elsevier B.V. All rights reserved.

Keywords: "À trous" wavelet transform; Curvelet transform; Image fusion; Interband structure modeling (IBSM); Multispectral imagery; Multiresolution analysis; Pan-sharpening; Satellite remote sensing; Ridgelet transform

1. Introduction

Remote sensing image fusion aims at integrating the information conveyed by data acquired with different spatial and spectral resolutions, for purposes of photoanalysis, feature extraction, modeling, and classification [1]. A notable application is the fusion of multispectral (MS) and panchromatic (Pan) images collected from space. Image fusion techniques take advantage of the complementary spatial/spectral resolution characteristics to produce spatially enhanced MS observations. This specific aspect of fusion is often referred to as band sharpening [2]. More specifically, Pan-sharpened MS data are available as a commercial product, in which the higher-resolution Pan image has been used to sharpen the lower-resolution MS bands. In fact, the former is usually acquired with the maximum resolution allowed by the imaging sensor and by the data-link throughput; the MS data are acquired with coarser resolution, typically two or four times lower, because of SNR constraints and the transmission bottleneck. After being received at ground stations, the MS image may be fused with the Pan image to enhance its spatial resolution.

Image fusion methods based on injecting high-frequency components taken from the Pan image into resampled versions of the MS data have demonstrated a superior capability of translating the spectral information of the coarse-scale MS data to the finer scale of the Pan image


with minimal introduction of spectral distortions [3]. Multiresolution analysis (MRA) based on wavelets [4], filter banks [5], and Laplacian pyramids [6] has been recognized as one of the most efficient tools to implement fusion of images at different resolutions [7,8]. Redundant multiresolution structures, like the generalized Laplacian pyramid (GLP) [9], matching even fractional scale ratios between the images to be fused, the undecimated discrete wavelet transform (UDWT) [10], and the "à trous" wavelet transform (ATWT) [11], have been found to be particularly suitable for image fusion, thanks to their translation-invariance property (not strictly possessed by the GLP). Such decompositions do not require sharp, i.e., frequency-selective, digital filters and are not critically subsampled. Therefore, the typical injection artifacts appearing as impairments in images fused by means of conventional wavelet analysis, like ringing (Gibbs) effects and canvas-like patterns originated by aliasing, are missing when images are fused by using redundant MRA. This topic was first addressed in [12] and recently developed in a couple of papers [9,13], where the spatial details injected by fusion methods using either decimated or undecimated wavelet analysis are visually compared.

In recent years, nonseparable MRA has been proposed for image processing. In particular, the curvelet transform, originally proposed for image denoising [14], has been found to be an invaluable tool for image enhancement [15]. The curvelet transform is obtained by applying the ridgelet transform [16] to square blocks of the detail frames of an undecimated wavelet decomposition. Since the ridgelet transform possesses basis functions matching directional straight lines, the curvelet transform is capable of representing piecewise linear contours on multiple scales through few significant coefficients. This property leads to a better separation between geometric details and background noise, which may be easily reduced by thresholding the curvelet coefficients before they are used for fusion [14].

Image fusion based on MRA, however, requires the definition of a model establishing how the missing highpass information to be injected into the resampled MS bands is extracted from the Pan image. The goal is to make the fused bands as similar as possible to what the narrow-band MS sensor would image if it had the same spatial resolution as the broad-band sensor by which the Pan image is captured. This model is referred to in the literature [17–19] as an interband structure model (IBSM); it deals with the radiometric transformation (gain and offset) of spatial structures (edges and textures) when they are passed from the Pan to the MS images. The model is usually space-varying; it is calculated at a coarser resolution and extrapolated to the finer resolution. However, to increase its specificity, it would be desirable to calculate such a model in a different domain, in which the linear structures that are injected are represented by few sparse coefficients [20–22].

In this work, we propose an image fusion method for Pan-sharpening of very-high-resolution MS images, which

operates in the nonseparable transform domain of the curvelet transform. The algorithm is defined for either QuickBird or Ikonos imagery, whose scale ratio between Pan and MS equals 4, but may be easily extended to other scale ratios that are powers of two. A thorough performance comparison on both QuickBird and Ikonos datasets is carried out among a number of advanced methods described in the literature. The results highlight the benefits of the proposed method for Pan-sharpening of satellite remote sensing imagery.

2. Image fusion based on the curvelet transform

In this section, a novel MRA-based scheme with improved geometrical adaptivity and suitable for image data fusion is described. The reader is referred to [9] for a concise, yet comprehensive, review of multiresolution wavelet analysis in the case of separable processing and octave frequency subbands. After reviewing the "à trous" wavelet transform, which represents the starting point of the curvelet transform, the ridgelet transform, from which the curvelet transform is derived, is introduced. A discrete implementation of the curvelet transform is briefly outlined. The main feature of the curvelet transform is that it is sensitive to directional edges and capable of representing the highpass details of object contours at different scales through few sparse nonzero coefficients. Then the flowchart of a scheme based on the curvelet transform and suitable for Pan-sharpening of MS images is presented; problems occurring in developing an IBSM in the curvelet domain are also discussed.

2.1. "À trous" wavelet transform

The "à trous" wavelet transform (ATWT) is a nonorthogonal multiresolution decomposition defined by a filter bank {h_n} and {g_n = δ_n − h_n}, with the Kronecker delta δ_n denoting an allpass filter. The filter bank does not allow perfect reconstruction to be achieved if the output is decimated. In the absence of decimation, the lowpass filter is upsampled by 2^(j−1) before processing the jth level; hence the name "à trous," which means "with holes." In two dimensions, the filter bank becomes {h_m h_n} and {δ_m δ_n − h_m h_n}, which means that the 2-D detail signal is given by the pixel difference between two successive approximations. The prototype 1-D lowpass filter h must be strictly zero-phase and symmetric. The frequency responses of the equivalent 1-D analysis filters are shown in Fig. 1(a) and (b) for two different lowpass filters h: the 23-tap GLP filter [9] and the Gaussian-like filter derived by Starck and Murtagh [23] from a cubic spline scaling function and widely used for image fusion [11,24,25]. Apart from the lowpass filter (leftmost), all the other filters are bandpass, with bandwidths roughly halved as j increases by one. The spline filter, however, does not yield octave subbands, since the prototype exhibits its cutoff at approximately one third of the bandwidth. Fig. 2 displays the ATWT of a sample image.

Fig. 1. Frequency responses of the equivalent filter bank of an octave ATWT, for J = 3, achieved by (a) the 23-tap half-band filter and (b) the 5-tap Starck and Murtagh Gaussian-like filter (magnitude vs. normalized frequency, 0–0.5).

Fig. 2. "À trous" wavelet transform of Lenna, achieved by the filter bank in Fig. 1(a).

The levels of the ATWT are obtained by separable filtering of the original image with the equivalent filters shown in Fig. 1(a). For a J-level decomposition, the ATWT exhibits a number of coefficients J + 1 times greater than the number of pixels. Due to the absence of decimation, the synthesis is simply obtained by summing the detail levels to the approximation:

$$ f(m,n) = c_J(m,n) + \sum_{j=1}^{J} d_j(m,n) \qquad (1) $$

in which c_J(m, n) and d_j(m, n), j = 1, ..., J, are obtained through 2-D separable linear convolution with the equivalent lowpass and highpass filters, respectively.
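As a concrete illustration of the decomposition and of the synthesis formula (1), the following minimal Python sketch assumes the 5-tap cubic-spline prototype of Starck and Murtagh [23]; the names atwt/iatwt and the reflective boundary handling are illustrative choices of ours, not part of the paper.

```python
import numpy as np
from scipy.ndimage import convolve1d

H0 = np.array([1, 4, 6, 4, 1], dtype=float) / 16   # cubic-spline prototype [23]

def atwt(img, levels):
    """'A trous' analysis: returns the approximation c_J and details [d_1..d_J]."""
    c, details = img.astype(float), []
    for j in range(levels):
        # upsample the prototype by 2**j: insert 2**j - 1 zeros ("holes") between taps
        h = np.zeros((len(H0) - 1) * 2**j + 1)
        h[::2**j] = H0
        # separable 2-D lowpass filtering, rows then columns
        c_next = convolve1d(convolve1d(c, h, axis=0, mode='reflect'),
                            h, axis=1, mode='reflect')
        details.append(c - c_next)   # d_{j+1}: difference of successive approximations
        c = c_next
    return c, details

def iatwt(c_J, details):
    """Synthesis, Eq. (1): the approximation plus all detail planes."""
    return c_J + sum(details)
```

By construction, iatwt(*atwt(f, J)) recovers f up to floating-point error, and the decomposition holds J + 1 planes of the original size, matching the redundancy noted above.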

2.2. Ridgelet transform

The next step is finding a transformation capable of representing straight edges with different slopes and orientations. A possible solution is the ridgelet transform [16], which may be interpreted as the 1-D wavelet transform of the Radon transform of f. This is the basic idea behind the digital implementation of the ridgelet transform. An inconvenience with the ridgelet transform is that it is not capable of representing curves. To overcome this drawback, the input image is partitioned into square blocks and the ridgelet transform is applied to each block. Assuming a piecewise linear model for the contour, each block will contain straight edges only, which may be analyzed by the ridgelet transform. Fig. 3 portrays the operation of a block ridgelet transform and highlights that the discrete Radon transform is obtained through a recto-polar resampling of the 2-D FFT of the image block.

Fig. 3. Flowchart of the discrete ridgelet transform applied to square blocks of a bitmap image.
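Under the interpretation given above (a 1-D wavelet transform of the Radon transform), a block ridgelet can be sketched in a few lines. For brevity, the recto-polar FFT route of Fig. 3 is replaced by scikit-image's direct Radon transform, and PyWavelets supplies the 1-D DWT; this substitution is our simplification, not the implementation used in the paper.

```python
import numpy as np
import pywt
from skimage.transform import radon

def block_ridgelet(block, wavelet="db4"):
    """Simplified discrete ridgelet of a square block: Radon transform,
    then a multilevel 1-D DWT along the radial axis of each projection."""
    n = block.shape[0]
    theta = np.linspace(0.0, 180.0, n, endpoint=False)      # one angle per column
    sino = radon(block.astype(float), theta=theta, circle=False)
    # one list of wavelet coefficient arrays per projection angle
    return [pywt.wavedec(sino[:, k], wavelet) for k in range(sino.shape[1])]
```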

2.3. Curvelet transform

The main benefit of curvelets is their capability of representing a curve as a set of superimposed functions of various lengths and widths. The curvelet transform is a multiscale transform but, unlike the wavelet transform, contains directional elements. Curvelets are based on multiscale ridgelets with a bandpass filtering that separates an image into disjoint scales. The side length of the localizing windows is doubled at every other dyadic subband, hence maintaining the fundamental property of the curvelet transform, which states that elements of length 2^(j/2) serve for the analysis and synthesis of the jth subband. In practice, the curvelet transform consists of applying the block ridgelet transform of Fig. 3 to the detail frames of the ATWT (see Fig. 2). The algorithm used to implement the digital curvelet transform (CT) [14] is outlined below:

• Apply the ATWT algorithm with J scales; this transform decomposes an image f into its coarse version c_J and the details {d_j}, j = 1, ..., J, at scale 2^(j−1), as in (1).
• Select the minimum block size, Q_min, at the finest scale level d_1.
• For a given scale j, partition d_j into blocks having side

$$ Q_j = \begin{cases} 2^{j/2}\, Q_{\min}, & j \text{ even} \\ 2^{(j-1)/2}\, Q_{\min}, & j \text{ odd} \end{cases} \qquad (2) $$

• Apply the ridgelet transform to each block.

Alternatively, the block partition may be replaced with an overlapped set of blocks. The size of the blocks is properly increased, e.g., doubled for 50% overlap between any two adjacent blocks. During curvelet synthesis, the overlapped regions of the detail blocks that are produced are bilinearly interpolated to yield the ATWT detail levels of the synthesized MS image. This strategy mitigates possible blocking effects in fused images, which may be noticed if injection models varying in the curvelet domain are devised.
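The partition rule (2) collapses to an integer floor division, and the digital CT then amounts to tiling each ATWT detail plane with ridgelet blocks. The outline below reuses the atwt and block_ridgelet sketches given earlier; it covers non-overlapping tiles only, with the overlapped variant and the synthesis omitted.

```python
def block_size(j, q_min):
    """Side Q_j of the ridgelet blocks at ATWT level j, Eq. (2): since
    j // 2 equals j/2 for even j and (j-1)/2 for odd j, the size doubles
    at every other scale."""
    return 2**(j // 2) * q_min

def curvelet(img, levels, q_min=16):
    """Digital CT sketch: ATWT, then a block ridgelet on every Q_j x Q_j
    tile of each detail plane d_j."""
    c_J, details = atwt(img, levels)        # sketch from Section 2.1
    layers = []
    for j, d in enumerate(details, start=1):
        q = block_size(j, q_min)
        layers.append([block_ridgelet(d[r:r + q, s:s + q])
                       for r in range(0, d.shape[0] - q + 1, q)
                       for s in range(0, d.shape[1] - q + 1, q)])
    return c_J, layers
```

With q_min = 16, the sides are Q_1 = 16, Q_2 = Q_3 = 32, Q_4 = 64, ..., i.e., doubled at every other level as required.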

2.4. Curvelet-based Pan-sharpener of MS images

Fig. 4 shows the flowchart of a CT-based scheme suitable for fusion of MS and Pan data whose scale ratio is an integer p = 4, as used in the present study. Let f^(P)(m, n) be the dataset constituted by a single Pan image having smaller scale, i.e., finer resolution, and size Mp × Np. Let also {f^(l)(m, n), l = 1, ..., L} be the dataset made up of the L bands of an MS image. Such bands have a scale larger by a factor p, i.e., coarser resolution, and thus size M × N. The goal is to obtain a set {f̂^(l)(m, n), l = 1, ..., L} of MS bands, each having the same spatial resolution as Pan. The enhancement of each band f^(l) to the spatial resolution of Pan is synthesized from the CT levels s1 and s2 of the Pan image.

The MS bands {f^(l)(m, n), l = 1, ..., L} are preliminarily interpolated by p to match the scale of the Pan image. A new dataset, {f̃^(l), l = 1, ..., L}, is thus produced. It constitutes the lowpass component to which details extracted from Pan by means of the CT are added, to yield a spatially enhanced set of MS observations. The two sets of CT coefficients, one for each layer, calculated from the Pan image are possibly soft-thresholded to reduce the noise, weighted by the injection model, and used to synthesize, by means of the inverse ridgelet transform, two maps of zero-mean spatial edges and textures that are added to the corresponding detail frames of the ATWT of {f̃^(l)(m, n), l = 1, ..., L}. The Pan-sharpened MS image {f̂^(l)(m, n), l = 1, ..., L} is obtained by ATWT synthesis, that is, by summing the approximations and enhanced detail frames of each band.
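To fix the data flow of Fig. 4 without reproducing the curvelet machinery, the sketch below injects the ATWT detail planes of Pan with a single global per-band gain — deliberately the degenerate case recalled in Section 4, in which the CT and the ATWT coincide. The bicubic interpolation and the global std-ratio gain are illustrative assumptions of ours.

```python
import numpy as np
from scipy.ndimage import zoom

def pansharpen_atwt(ms_bands, pan, levels=2):
    """Simplified injection flow of Fig. 4 (ridgelet stage elided):
    resample each MS band to the Pan scale and add the ATWT details
    of Pan, weighted by a global cross-gain."""
    p = 2**levels                        # scale ratio, 4 for QuickBird/Ikonos
    _, pan_details = atwt(pan, levels)   # atwt() as sketched in Section 2.1
    fused = []
    for band in ms_bands:
        up = zoom(band.astype(float), p, order=3)   # interpolate MS by p
        gain = up.std() / pan.std()                 # crude global IBSM
        fused.append(up + gain * sum(pan_details))  # inject zero-mean details
    return fused
```

In the actual scheme, the two finest Pan detail planes would instead pass through the block ridgelet transform, soft thresholding, and the adaptive cross-gain of Section 2.5, followed by the inverse ridgelet transform and the ATWT synthesis.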

Fig. 4. Flowchart of curvelet-based fusion of MS and Pan data with 4:1 scale ratio. The MS bands resampled to the Pan scale undergo an ATWT (approximation c2, details d1 and d2); the Pan image undergoes a curvelet transform (layers s1 and s2), whose coefficients are denoised by soft thresholding and equalized; the IBSM is calculated and applied to both layers; the inverse ridgelet transform yields the detail maps entering the ATWT synthesis of the high-resolution MS bands.

2.5. Injection model

The definition of an IBSM suitable for ruling the transformation of the highpass details before they are injected into the resampled MS bands is a crucial point. In this work, the model is stated as an adaptive cross-gain. Such a gain weights the CT coefficients after they have been possibly soft-thresholded to reduce the background noise. In fact, spatially uncorrelated noise is uniformly spread over the CT coefficients, whereas correlated spatial details are concentrated in few sparse coefficients. In order to adaptively weight the thresholded CT coefficients coming from the high-resolution (s1) and middle-resolution (s2) layers, the cross-gain must be calculated in a domain indexed by exactly the same coordinates, namely the spatial block position and the scale–angle pair within the block.

Therefore, the approximations c2 of the MS bands and of the Pan image are further analyzed with a block ridgelet transform, as shown in Fig. 3. As two block sizes exist for fine and coarse details, i.e., s1 and s2, respectively, the ridgelet transform is applied to two replicas of c2 from MS and Pan, with two different block sizes. In this way, there is a 1:1 correspondence between the cross-gain map and the CT coefficients from Pan on both layers s1 and s2. A simple IBSM is given by the ratio of local standard deviations of the ridgelet coefficients calculated from the approximations (c2) of an MS band and of the Pan image, respectively. Fig. 5 summarizes the calculation of the cross-gain map at the finer resolution level s1. Analogously, the map of s2 is calculated with a coarser partition.
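In code, the two ingredients of the model read as below — a minimal sketch in which the threshold t and the block-wise bookkeeping are left to the caller; only the local standard-deviation ratio comes from the text, the rest is illustrative.

```python
import numpy as np

def soft_threshold(c, t):
    """Soft thresholding of ridgelet/curvelet coefficients: every coefficient
    is shrunk toward zero by t, zeroing the noise-dominated ones."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def cross_gain(ms_c2_coeffs, pan_c2_coeffs, eps=1e-6):
    """Simple IBSM: ratio of local standard deviations of the ridgelet
    coefficients of c2 of an MS band and of c2 of Pan, one gain per block
    and scale-angle pair."""
    return ms_c2_coeffs.std() / (pan_c2_coeffs.std() + eps)

# Detail injected for one block: the gain times the denoised Pan coefficients,
# e.g., cross_gain(ms_blk, pan_blk) * soft_threshold(pan_detail_blk, t)
```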

Fig. 5. Flowchart of cross-gain calculation in a block-wise directional domain.

3. Experimental results and comparisons

3.1. Quality assessment of fusion products

The quality assessment of Pan-sharpened MS images is a difficult task [26,3,27]. Even when spatially degraded MS images are processed for Pan-sharpening, and therefore reference MS images are available for comparisons, assessment of fidelity to the reference requires the computation of several indexes. Examples are the band-to-band correlation coefficient (CC), the bias in the mean, and the root mean square error (RMSE).

Given two spectral vectors v and v̂, both having L components, in which v = {v_1, v_2, ..., v_L} is the original spectral pixel vector, v_l = f^(l)(i, j), while v̂ = {v̂_1, v̂_2, ..., v̂_L} is the distorted vector, v̂_l = f̂^(l)(i, j), the spectral angle mapper (SAM) denotes the absolute value of the spectral angle between the two vectors:

$$ \mathrm{SAM}(\mathbf{v},\hat{\mathbf{v}}) \triangleq \arccos\!\left(\frac{\langle \mathbf{v},\hat{\mathbf{v}}\rangle}{\|\mathbf{v}\|_2 \cdot \|\hat{\mathbf{v}}\|_2}\right) \qquad (3) $$

SAM is measured in either degrees or radians and is usually averaged over the whole image to yield a global measurement of spectral distortion. A SAM value equal to zero denotes absence of spectral distortion, but possible radiometric distortion (the two vectors are parallel but have different lengths).

Ranchin et al. [17,18] proposed an error index that gives a global picture of the quality of a fused product. This error is called ERGAS, after its acronym in French, which stands for relative dimensionless global error in synthesis, and is given by

$$ \mathrm{ERGAS} \triangleq 100\,\frac{d_h}{d_l}\sqrt{\frac{1}{L}\sum_{l=1}^{L}\left(\frac{\mathrm{RMSE}(l)}{\mu(l)}\right)^{2}} \qquad (4) $$

where d_h/d_l is the ratio between the pixel sizes of the Pan and MS images, e.g., 1/4 for QuickBird and Ikonos data, μ(l) is the mean (average) of the lth band, and L is the number of bands. The score index (4), which measures a distortion and thus must be as small as possible, is a notable effort to encapsulate several measurements in a unique number. However, it does not consider CCs but only radiometric distortions (RMSE), and it fails to measure spectral distortions.

An image quality index for MS images having four spectral bands was recently proposed [28] for assessing Pan-sharpening methods. The quality index Q4 is a generalization to 4-band images of the Q index [29,27], which can be applied only to monochrome images. Q4 is obtained through the use of the CC between hypercomplex numbers representing spectral vectors. Analogously to Q, the highest value of Q4, attained if and only if the test MS image is equal to the reference, is one.
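The global indexes translate directly from Eqs. (3) and (4) and from the definition of Q [29]; in this sketch images are stacked as (L, height, width) arrays, and the full hypercomplex Q4 [28] is omitted for brevity.

```python
import numpy as np

def sam_deg(ref, test):
    """Average SAM, Eq. (3), in degrees, over all pixel vectors."""
    num = (ref * test).sum(axis=0)
    den = np.linalg.norm(ref, axis=0) * np.linalg.norm(test, axis=0) + 1e-12
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()

def ergas(ref, test, ratio):
    """ERGAS, Eq. (4); ratio = dh/dl, e.g., 1/4 for 4:1 fusion."""
    mse = ((ref - test) ** 2).mean(axis=(1, 2))   # per-band MSE = RMSE(l)**2
    mu = ref.mean(axis=(1, 2))                    # per-band mean
    return 100.0 * ratio * np.sqrt((mse / mu**2).mean())

def q_index(x, y):
    """Wang-Bovik universal image quality index Q [29] for two monochrome
    images; Q4 extends it to 4-band data via a hypercomplex CC [28]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    vxy = ((x - mx) * (y - my)).mean()
    return 4 * vxy * mx * my / ((vx + vy) * (mx**2 + my**2))
```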

3.2. Dataset and compared methods

The proposed curvelet-based fusion procedure has been assessed on two very-high-resolution image datasets collected by QuickBird and Ikonos. The former displays the urban and suburban areas of Pavia, Italy, and was acquired on June 23, 2002; the latter was taken on May 15, 2000 and shows the city of Toulouse, France. The four MS bands of QuickBird span the visible and near-infrared (NIR) wavelengths and are spectrally disjoint: blue (B1 = 450–520 nm), green (B2 = 520–600 nm), red (B3 = 630–690 nm), and NIR (B4 = 760–900 nm). The Pan band approximately covers the whole interval (Pan = 500–900 nm). The four MS bands of Ikonos span the visible and NIR (VNIR) wavelengths and are nonoverlapping, with the exception of B1 and B2: B1 = 440–530 nm, B2 = 520–600 nm, B3 = 630–700 nm, and B4 = 760–850 nm. The bandwidth of Pan covers the interval 450–950 nm.

Both datasets are geometrically and radiometrically calibrated. They are available as geocoded products, resampled to uniform ground resolutions of 2.8 m (MS) and 0.7 m (Pan) for QuickBird, and 4 m (MS) and 1 m (Pan) for Ikonos. All pixel values are packed in 16-bit words, of which only 11 are utilized, because the maximum radiance value is 2047. Square regions of about 2 km² (QuickBird) and 4 km² (Ikonos) were analyzed. The original Pan images are of size 2048 × 2048, while the original MS images are of size 512 × 512. To allow quantitative distortion measures to be achieved, the Pan and MS bands are preliminarily lowpass filtered and decimated by 4, to yield 2.8 m Pan and 11.2 m MS (QuickBird), and 4 m Pan and 16 m MS (Ikonos). Such spatially degraded data are used to re-synthesize the four spectral bands at 2.8 m and 4 m, respectively. Thus, the true 2.8 m/4 m 512 × 512 MS data are available for objective distortion measurements. To highlight the trend in performance with varying scale ratio, 2:1 fusion simulations are also carried out on both datasets, after the original MS data have been spatially degraded by 2; the Pan images are still degraded by 4 (QuickBird: 5.6 m MS, 2.8 m Pan; Ikonos: 8 m MS, 4 m Pan).

Means, standard deviations, and interband CCs are reported in Tables 1–4 for the QuickBird and Ikonos datasets. Notice that the mean radiance changes significantly between the two datasets, being two to three times larger for QuickBird. Instead, the standard deviations are comparable, except for that of the NIR band, which is almost double for QuickBird. Also, the NIR band (B4) of QuickBird is almost uncorrelated with the visible spectral bands, unlike that of Ikonos.

Table 1
Means (μ) and standard deviations (σ) of the test QuickBird image (radiance units): original 2.8 m MS and 0.7 m Pan, and spatially degraded by 4 (11.2 m MS and 2.8 m Pan)

              B1       B2       B3       B4      Pan
Original
  μ_O       356.40   488.59   319.66   463.73   415.79
  σ_O        15.13    34.15    48.96    90.28    57.79
Degraded
  μ_D       356.10   488.17   319.31   463.30   415.79
  σ_D        12.82    28.30    40.99    72.64    52.67

Table 2
Means (μ) and standard deviations (σ) of the test Ikonos image (radiance units): original 4 m MS and 1 m Pan, and spatially degraded by 4 (16 m MS and 4 m Pan)

              B1       B2       B3       B4      Pan
Original
  μ_O       194.92   187.42   146.81   138.60   151.04
  σ_O        21.17    37.50    46.23    52.34    51.67
Degraded
  μ_D       194.92   187.43   146.82   138.62   151.07
  σ_D        19.41    35.59    43.09    44.63    46.34

Table 3
QuickBird: CCs between the original 2.8 m MS bands and the Pan image degraded to the same pixel size

        B1      B2      B3      B4      Pan
B1     1.0     0.950   0.823   0.060   0.505
B2     0.950   1.0     0.911   0.182   0.625
B3     0.823   0.911   1.0     0.192   0.650
B4     0.060   0.182   0.192   1.0     0.798
Pan    0.505   0.625   0.650   0.798   1.0

Table 4
Ikonos: CCs between the original 4 m MS bands and the Pan image degraded to the same pixel size

        B1      B2      B3      B4      Pan
B1     1.0     0.959   0.903   0.614   0.784
B2     0.959   1.0     0.965   0.662   0.827
B3     0.903   0.965   1.0     0.701   0.856
B4     0.614   0.662   0.701   1.0     0.854
Pan    0.784   0.827   0.856   0.854   1.0

True-color composites (bands 3-2-1) of the 11.2 m and 2.8 m MS data are reported in Fig. 6(a) and (i), and in Fig. 8(a) and (i), for QuickBird and Ikonos, respectively. Local misalignments between MS and degraded Pan, which are likely to affect fusion results, are introduced by cartographic resampling and are hardly noticeable.

A thorough performance comparison was carried out between the novel method and a number of state-of-the-art image fusion methods, which are listed in the following.

• Multiresolution IHS [11] with additive model (AWL), based on the ATWT.
• ATWT-based method with the spectral distortion minimizing (SDM) model [19].
• ATWT-based method with the Ranchin–Wald–Mangolini (RWM) model [18].
• ATWT-based method with the context-based decision (CBD) model [9].
• Gram–Schmidt spectral sharpening (GS) [30], as implemented in ENVI [31].

Fig. 6. Detail of fused QuickBird MS image (256 × 256): (a) resampled 11.2 m MS, (b) CT fusion, (c) SDM fusion, (d) RWM fusion, (e) CBD fusion, (f) HPF fusion, (g) GS fusion, (h) AWL fusion and (i) true 2.8 m MS.


• High-pass filtering (HPF) [26]: 3 × 3 and 5 × 5 box filters for 2:1 and 4:1 fusion, respectively.

The case in which the MS data are simply resampled (through the 23-tap filter) and no details are added is also presented (under the label EXP), to discuss the behavior of the different fusion methods.
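Of the compared methods, HPF is simple enough to sketch completely; the bicubic interpolation and border handling below are our assumptions, with the box size chosen as stated above.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def pansharpen_hpf(ms_bands, pan, ratio=4, box=5):
    """HPF fusion [26]: add to each MS band, resampled to the Pan scale,
    the highpass residual of Pan under a box filter (5x5 for 4:1 fusion,
    3x3 for 2:1)."""
    detail = pan.astype(float) - uniform_filter(pan.astype(float), size=box)
    return [zoom(b.astype(float), ratio, order=3) + detail for b in ms_bands]
```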

3.3. Performance comparison on QuickBird data

Mean bias (Δμ) and root mean squared error (RMSE) between each pair of fused and original MS bands are reported in Tables 5 and 6, respectively. Mean biases are practically zero for all entries, as expected, since all methods actually inject zero-mean edges and textures into the resampled MS data. What immediately stands out from the values in Table 6 is that, for 2:1 fusion, only a few methods are capable of providing average RMSE values lower than those obtained by plain resampling of the MS bands (EXP).

Table 5
Mean bias (Δμ) between 2.8 m originals and fused MS bands obtained from 11.2 m MS through 4:1 fusion with 2.8 m Pan and from 5.6 m MS through 2:1 fusion with 2.8 m Pan

         EXP     CT      AWL     SDM     RWM     CBD     GS      HPF
4:1
  B1    0.002   0.054   0.002   0.234   0.080   0.027   0.002   0.005
  B2    0.007   0.032   0.002   0.310   0.127   0.058   0.007   0.001
  B3    0.011   0.045   0.007   0.134   0.118   0.117   0.011   0.004
  B4    0.027   0.077   0.025   0.097   0.024   0.204   0.028   0.020
  Avg.  0.012   0.052   0.009   0.194   0.087   0.101   0.012   0.007

2:1
  B1    0.000   0.023   0.002   0.197   0.028   0.000   0.000   0.005
  B2    0.001   0.004   0.000   0.245   0.046   0.002   0.001   0.003
  B3    0.001   0.001   0.000   0.122   0.089   0.006   0.001   0.003
  B4    0.007   0.068   0.005   0.103   0.013   0.001   0.007   0.002
  Avg.  0.002   0.010   0.001   0.167   0.044   0.001   0.002   0.002

Table 6
RMSE between original 2.8 m MS bands and fusion products obtained from 11.2 m MS through 4:1 fusion with 2.8 m Pan and from 5.6 m MS through 2:1 fusion with 2.8 m Pan

         EXP     CT      AWL     SDM     RWM     CBD     GS      HPF
4:1
  B1     7.31    6.61   15.85   25.51    9.78    7.45    6.35   31.97
  B2    17.51   13.67   15.74   31.03   19.98   16.48   14.05   27.57
  B3    25.12   18.93   20.74   20.06   29.66   23.70   19.55   26.24
  B4    50.83   35.65   44.85   32.76   40.12   36.26   36.66   34.12
  Avg.  29.89   21.56   27.11   27.79   27.31   23.47   22.16   30.15

2:1
  B1     4.57    4.30   13.54   14.95    4.92    4.71    4.24   22.89
  B2    10.47    9.23   12.81   19.08   10.33   10.34    9.26   21.05
  B3    13.93   12.20   14.44   13.39   13.65   14.21   12.47   20.47
  B4    29.07   23.72   28.86   22.80   25.01   25.18   24.83   24.13
  Avg.  17.10   14.28   18.63   17.94   15.35   15.53   14.80   22.18

Avg. = √(average MSE across the four bands).

In this sense the best algorithms are the curvelet one (CT), strictly followed by Gram–Schmidt (GS) sharpening and by ATWT with the CBD model. The multiresolution IHS (AWL) and the RWM model, both exploiting the ATWT as well, are comparable with plain resampling, even if slightly superior. The ATWT-SDM algorithm, devised to minimize the spectral error with respect to the resampled MS bands, and especially HPF, yield larger average errors, mostly concentrated in the blue band. The main reason is the absence of a space-varying model (IBSM) changing with the bands: all details are injected, thereby leading to over-enhancement. We can also see that the RMSE is directly related to the square root of the variance of the original band and inversely related to its correlation with Pan. Two things can be noticed: the RMSE is approximately reduced by a factor √2 when the scale ratio is halved, i.e., when passing from 4:1 to 2:1 fusion; and, for the resampled MS bands (EXP), it holds that σ_O²(B_l) ≈ σ_D²(B_l) + RMSE²(B_l) (see Table 1) — e.g., for B4 of QuickBird, √(72.64² + 50.83²) ≈ 88.7, close to σ_O = 90.28.

Table 7 reports the CCs between fused products and the reference original MS bands. From a general view, the blue band (B1) is the most difficult to synthesize, while the NIR band (B4) is the easiest. The simple explanation is that the enhancing Pan is weakly correlated with B1 and strongly with B4 (see Table 3). For 4:1 fusion, CT, GS, and CBD yield average CCs greater than that of the resampled bands (EXP). For 2:1 fusion, RWM also outperforms EXP. Practically all schemes can adequately enhance B4, but only CT, GS, and CBD provide acceptable enhancement of B1 for 4:1 fusion (RWM as well for 2:1). The best average scores in terms of CCs between true and fused products follow trends similar to those of the RMSE, GS now being the best for 4:1, closely followed by CT, and HPF being the poorest. Obviously, the correlation loss of the resampled data is proportional to the amount of detail possessed by each band.

Table 7
CC between 2.8 m originals and fused MS bands obtained from 11.2 m MS through 4:1 fusion with 2.8 m Pan and from 5.6 m MS through 2:1 fusion with 2.8 m Pan

         EXP     CT      AWL     SDM     RWM     CBD     GS      HPF
4:1
  B1    0.860   0.887   0.706   0.598   0.794   0.860   0.899   0.588
  B2    0.843   0.907   0.876   0.771   0.829   0.870   0.907   0.812
  B3    0.852   0.918   0.904   0.912   0.837   0.878   0.919   0.833
  B4    0.819   0.916   0.879   0.929   0.898   0.912   0.921   0.922
  Avg.  0.843   0.907   0.841   0.803   0.840   0.880   0.911   0.789

2:1
  B1    0.947   0.953   0.761   0.713   0.939   0.944   0.955   0.655
  B2    0.946   0.958   0.923   0.866   0.948   0.958   0.959   0.874
  B3    0.957   0.967   0.953   0.961   0.958   0.955   0.966   0.931
  B4    0.944   0.963   0.950   0.966   0.959   0.958   0.960   0.964
  Avg.  0.948   0.960   0.897   0.877   0.951   0.954   0.960   0.856

Table 8
Average cumulative quality/distortion indexes between original 2.8 m MS bands and fusion products obtained from 11.2 m MS through 4:1 fusion with 2.8 m Pan and from 5.6 m MS through 2:1 fusion with 2.8 m Pan

                 EXP     CT      AWL     SDM     RWM     CBD     GS      HPF
4:1
  Q4            0.750   0.888   0.827   0.864   0.865   0.881   0.880   0.857
  SAM (deg)     2.14    1.76    2.59    2.14    2.07    1.85    1.80    2.33
  ERGAS         1.760   1.281   1.611   1.676   1.694   1.430   1.316   1.904
2:1
  Q4            0.930   0.954   0.924   0.932   0.948   0.947   0.949   0.914
  SAM (deg)     1.20    1.09    1.32    1.20    1.15    1.14    1.13    1.35
  ERGAS         2.005   1.687   2.237   2.152   1.828   1.858   1.745   2.815

Although CC measurements between fused MS and reference originals may be valid detectors of fusion artifacts, and especially of possible misalignments, the parameters in Table 8, measuring the global distortion of pixel vectors, either radiometric (ERGAS) or spectral (SAM), and both

radiometric and spectral (Q4), will give a more comprehensive measure of quality. CT attains global scores better than those of the other methods, followed by GS, which is not described in the literature because it is patented [30]. The SAM attained by CBD and RWM is lower than that of SDM (identical to that of the resampled MS data). This means that the two space-varying injection models are capable of unmixing the coarse MS pixels using the Pan data, even if to a small extent. The ranking of methods confirms that HPF is spectrally better than AWL (lower SAM), but radiometrically poorer (higher ERGAS). The novel index Q4 [28] trades off both types of distortion and yields a unique quality index, according to which HPF is slightly better than AWL.

The case of 2:1 fusion points out the favorable performance of RWM, which is comparable with CBD, whereas it was slightly poorer in the 4:1 case (not surprisingly, because it was originally conceived for 2:1 fusion). The interscale comparison highlights that Q4, representing the fidelity to the reference original, and SAM, measuring the average spectral distortion, are perfectly in trend. Instead, ERGAS yields higher values for the 2:1 fusion case. The factor d_h/d_l in (4) was presumably introduced to obtain invariance with the scale ratio, such that the value of ERGAS may be thresholded to decide whether fusion products are satisfactory or not. However, given the trend of the RMSE with the scale ratio, shown in Table 6, the factor d_h/d_l should be taken inside the square root to yield values of ERGAS almost independent of the scale ratio for a given test image.

Only visual results of 4:1 fusion, carried out on 1:4 degraded images, are presented in this paper; in the 2:1 case the extent of enhancement is hardly perceivable. Fig. 6(a)–(h) shows 64 × 64 tiles of the 11.2 m MS expanded to the 2.8 m scale, i.e., to size 256 × 256, and its spatially enhanced fusion products achieved by means of CT, SDM, RWM, CBD, HPF, GS, and AWL. The original 2.8 m MS is also included for visual reference in Fig. 6(i). The icons are shown as R–G–B true-color composites. True-color visualization has been deliberately chosen, because Pan-sharpening of MS bands falling partly outside the bandwidth of Pan, as in the case of the blue band B1, is particularly critical [32]. The displayed area is one fourth of the 512 × 512 area in which the performance scores have been calculated.

All fused images are more similar to the reference original (Fig. 6(i)) than to the expanded MS (Fig. 6(a)). HPF (Fig. 6(f)) shows marked spectral and geometric distortion, as well as over-enhancement. Since HPF implicitly relies on undecimated MRA, the same as the other schemes, its poor performance is mainly due to the boxcar filter having little frequency selectivity, as well as to the absence of an IBSM. The RWM model (Fig. 6(d)) and the CBD model (Fig. 6(e)) try to recover spectral signatures by unmixing pixels; however, artifacts originated by statistical instabilities of the model give a grainy appearance to the former. The most visible effect of the SDM model is that the color hues of the MS image are exactly transferred to the fused image in Fig. 6(c). AWL is spatially rich, but spectrally poorer than SDM, even though the low spatial frequency components of the Pan image do not affect the fusion, unlike in conventional IHS approaches [33,34]. GS yields a result apparently different from those of the other methods, yet similar to the reference.

Eventually, CT is compared with GS and with SDM on 0.7 m fusion products, as happens in practice. Fig. 7 displays the resampled 2.8 m MS bands and the enhanced bands at 0.7 m. A visual inspection highlights that all the spectral signatures of the original MS data are carefully incorporated in the sharpened bands. SDM is geometrically rich and detailed, but over-enhanced. The CT and GS products are visually superior and substantially similar to each other. A basic consideration is that aliasing impairments are already present in the original MS data (see the color stripes along the main road in the upper left part); unfortunately, they are carefully preserved by all fusion methods.

Fig. 7. Examples of full-scale spatial enhancement of the fusion algorithms, displayed as 512 × 512 true-color compositions at 0.7 m pixel spacing for the QuickBird image: (a) original MS bands (2.8 m) resampled to the scale of the Pan image (0.7 m), (b) CT-based fusion product, (c) Gram–Schmidt fusion product and (d) ATWT-SDM fusion product.


fidelity to the reference original, and SAM, measuring the average spectral distortion, are perfectly in trend. Instead, ERGAS yields higher values for the 2:1 fusion case. The factor dh/dl in (4) was presumably introduced to obtain invariance with the scale ratio, such that the value of ERGAS may be thresholded to decide whether fusion products are satisfactory or not. However, given the trend of RMSE with the scale ratio, shown in Table 6, the factor dh/dl should be taken inside the square root to yield values of ERGAS almost independent of the scale ratio, for a given test image. Only visual results of 4:1 fusion, carried out on 1:4 degraded images, are presented in this paper. In the 2:1 case the extent of enhancement is hardly perceivable. Fig. 6(a)– (h) shows 64 · 64 tiles of the 11.2 m MS expanded to 2.8 m scale, i.e., to size 256 · 256, and its spatially enhanced fusion products achieved by means of CT, SDM, RWM, CBD, HPF, GS, and AWL. The original 2.8 m MS is also included for visual reference in Fig. 6(i). The icons are shown in R–G–B true-color composites. True-color visualization has been deliberately chosen, because Pan-sharpening of MS bands falling partly outside the bandwidth of Pan, as in the case of the blue band B1, is particularly critical [32]. The displayed area is one fourth of the 512 · 512 area in which performance scores have been calculated. All fused images are more similar to the reference original (Fig. 6(i)) than to the expanded MS (Fig. 6(a)). HPF (Fig. 6(f)) shows marked spectral and geometric distortion, as well as over-enhancement. Since HPF implicitly relies on undecimated MRA same as the other schemes, its poor performance is mainly due to the boxcar filter having little frequency selection, as well as to the absence of an IBSM. The RWM model (Fig. 6(d)) and the CBD model (Fig. 6(e)) try to recover spectral signatures by unmixing pixels. However, artifacts originated by statistical instabilities of the model give a grainy appearance to the former. The most visible effect of the SDM model is that the color hues of the MS image are exactly transferred to the fused image in Fig. 6(c). AWL is spatially rich, but spectrally poorer than SDM, notwithstanding the low spatial frequency components of the Pan image do not affect the fusion, unlike conventional IHS approaches [33,34]. GS yields a result apparently different from the other methods, yet similar to the reference. Eventually, CT is compared with GS and with SDM on 0.7 m fusion products, as it happens in practice. Fig. 7 displays the resampled 2.8 m MS bands and the enhanced bands at 0.7 m. A visual inspection highlights that all the spectral signatures of the original MS data are carefully incorporated in the sharpened bands. SDM is geometrically rich and detailed, but over-enhanced. CT and GS products are visually superior and substantially similar to each other. A basic consideration is that aliasing impairments are already present in the original MS data (see the color stripes along the main road in the upper left part). Unfortunately, they are carefully preserved by all fusion methods. This drawback motivates the decision of using

another test set of images taken from a different sensor, but again of an urban landscape. 3.4. Performance comparison on Ikonos data The experiments carried out on the Ikonos dataset are aimed firstly at validating the proposed method, and secondly at revealing differences between the two satellite sensors that might contribute to further refinement of the fusion methods and the performance indexes utilized in this work. For the sake of conciseness, band-to-band parameters have been calculated but are not reported here. Mean biases are negligible and perfectly in trend with those calculated on the QuickBird dataset. Also RMSEs are slightly larger on an average, but comparable. The same changes hold for CCs, which are slightly lower, but still comparable. The global indexes Q4, SAM, and ERGAS are reported in Table 9 for 4:1 and 2:1 fusion carried out on spatially degraded data. The values of Q4 are slightly lower than those measured on the fused QuickBird images, as expected since band-to-band scores are less favorable as well. The average SAM is almost doubled in value, thereby revealing that the extremely detailed urban area imaged with a slightly lower resolution exhibits a considerably higher spectral information, that can be recovered only in small part by the fusion process. Indeed, the aliasing artifacts appearing in the QuickBird MS image, and totally missing in the Ikonos MS image, suggest that the former is more spatially but less spectrally informative than the latter. This idea is sustained by the scores achieved in the case of plain resampling (EXP). The numerical values of ERGAS are somewhat in trend among algorithms, as well as across scales. By adopting a different normalization by the square root of the ratio of small to large pixel spacings, the values would be almost scale-independent. However, compared with those reported in Table 9, the ERGAS values are abnormally higher for Ikonos than for QuickBird. The obvious explanation is that ERGAS is proportional to the RMS value across bands of RMSEs normalized to band means. Given the mean radiances of Ikonos that are two to three times lower than those of QuickBird, as

Table 9 Average cumulative quality/distortion indexes between original 4 m MS bands and fusion products obtained from 16 m MS through 4:1 fusion with 4 m Pan and from 8 m MS through 2:1 fusion with 4 m Pan

4:1 Q4 SAM (deg) ERGAS 2:1 Q4 SAM (deg) ERGAS

EXP

CT

AWL

SDM

RWM

CBD

GS

HPF

0.608 4.85

0.886 3.58

0.846 4.82

0.867 4.85

0.886 3.52

0.880 3.60

0.839 3.99

0.824 4.46

5.936

3.608

4.214

4.273

3.691

3.740

4.066

4.568

0.875 3.07

0.933 2.59

0.914 3.33

0.907 3.07

0.928 2.69

0.927 2.68

0.926 2.66

0.912 2.85

7.460

5.659

6.426

6.505

5.894

5.974

5.795

6.989

Given the mean radiances of Ikonos, which are two to three times lower than those of QuickBird, as shown in Tables 1 and 2, it is not surprising that in practice the interval of ERGAS values is strictly sensor-dependent. No conclusions about which values are suitable to guarantee the quality of fusion products can be carried over from one MS scanner to another. While the change in values with the scale ratio can be easily modeled, and hence predicted, there is no way to overcome the problem stemming from the difference in mean value across sensors, given the intrinsic definition of ERGAS. Conversely, the Q4 index is steady between the two datasets. The values of Q4 reveal the improved performance of RWM, which is comparable with the curvelet method on the Ikonos dataset.

The numerical values in Table 9 are corroborated by the visual analysis of the fusion results. Fig. 8(a)–(h) shows 64 × 64 tiles of the 16 m MS, expanded to the 4 m scale, and its spatially enhanced fusion products achieved by means of the same methods compared for QuickBird. The original 4 m MS is displayed for reference in Fig. 8(i). Despite the lower performance scores, the quality of the Ikonos fusion products seems better than that of the QuickBird data. The image produced by RWM lacks the artifacts appearing in the QuickBird dataset (perhaps the statistical instabilities there are due to aliasing); hence, RWM is now definitely comparable with CBD and even with CT. The visual (and numerical) performance of GS is somewhat poorer than that achieved on the QuickBird data; due to the lack of information about the GS algorithm and its implementation, this issue will not be discussed further.

Again, CT is compared with GS and with SDM on full-resolution fusion products (1 m). Fig. 9 displays the resampled 4 m MS bands and the spatially enhanced bands at 1 m. A visual inspection reveals that the quality of the Pan-sharpened Ikonos products is better than that of the corresponding QuickBird products. No spectral distortion with respect to the resampled original MS data can be noticed. SDM is still over-textured.

Fig. 8. Detail of fused Ikonos MS image (256 × 256): (a) resampled 16 m MS, (b) CT fusion, (c) SDM fusion, (d) RWM fusion, (e) CBD fusion, (f) HPF fusion, (g) GS fusion, (h) AWL fusion and (i) true 4 m MS.


Fig. 9. Examples of full-scale spatial enhancement of the fusion algorithms, displayed as 512 × 512 true-color compositions at 1 m pixel spacing for the Ikonos image: (a) original MS bands (4 m) resampled to the scale of the Pan image (1 m), (b) CT-based fusion product, (c) Gram–Schmidt fusion product and (d) ATWT-SDM fusion product.

The CT and GS products are visually superior and similar to each other, notwithstanding that GS was poorer than CT in the simulation on degraded data.

4. Conclusions and future developments

A novel image fusion method, based on the nonseparable MRA provided by the discrete curvelet transform and suitable for Pan-sharpening of MS bands, has been described and assessed. The low-resolution MS bands are resampled to the scale of the Pan image and sharpened by injecting highpass directional details extracted from the high-resolution Pan image. Thanks to the curvelet basis functions, which are directional edges with progressively increasing resolution, a superior spatial enhancement is provided. Curvelet coefficients matching image edges may be preliminarily soft-thresholded to achieve denoising far better than in the separable wavelet domain [14]. This feature, which has not been exploited in the present study, mainly for a fairer performance comparison with the other methods, is important for fusion of MS and synthetic aperture radar (SAR) data [35], or of Pan and hyperspectral data [36], and more generally whenever the sharpening image exhibits a lower SNR than the image to be enhanced: the high SNR of the latter might be diminished by the former if proper denoising of the detail contribution is not devised.

The second advantage of performing fusion in the curvelet domain is that the modeling of the relationships between the high-resolution detail coefficients of the MS bands and of the Pan image fits better, being carried out in a directional multiresolution domain. This issue is still open: the tradeoff between the spatial selectivity of the models and the statistical stability of their estimation requires further investigation (why does RWM perform far better on Ikonos than on QuickBird data?). Computational complexity issues are also relevant in application contexts. As a matter of fact, the main drawback of the proposed curvelet method is that it is algorithmically and computationally several times more complex than the others.

ods, is important for fusion of MS and synthetic aperture radar (SAR) data [35], or of Pan and hyperspectral data [36], and more generally, whenever the sharpening image exhibits lower SNR the image to be enhanced. The high SNR of the latter might be diminished by the former, having lower SNR, if proper denoising of the detail contribution is not devised. The second advantage of performing fusion in the curvelet domain is that modeling of the relationships between the high-resolution detail coefficients of MS bands and of the Pan image is better fitting, being carried out in a directional multiresolution domain. This issue is still open: the tradeoff between spatial selectivity of models and the statistical stability of their estimation requires further investigations (why RWM performs far better on Ikonos than on QuickBird data?). Also computational complexity issues are relevant in application contexts. As a matter of fact, the main drawback of the proposed curvelet method is that it is algorithmically and computationally several times more complex than the others. We wish to recall that, if thresholding of Pan coefficients is not per-


We wish to recall that, if thresholding of the Pan coefficients is not performed and the IBSM is nonadaptive (a global gain, or gain–offset pair, different for each band but not for each coefficient), there is no advantage whatsoever in using the curvelet transform, rather than the ATWT, for image fusion. This property may be easily verified by referring to the flowchart in Fig. 4.

Experiments carried out on very-high-resolution QuickBird and Ikonos images have shown that the proposed curvelet method is quantitatively comparable with state-of-the-art image fusion methods. Depending on the dataset, however, the performance ranking of the fusion methods may differ. In particular, on the Ikonos dataset the proposed method outperforms the Gram–Schmidt spectral sharpening method, which is implemented in the software package ENVI and constitutes the baseline algorithm from which commercial Pan-sharpened products are obtained. The different origin of the two algorithms (GS exploits an orthogonal component substitution, CT a nonseparable MRA) and the fact that the GS algorithm is likely to have been strongly optimized for commercial exploitation lead us to the conclusion that the novel CT fusion method may be useful for Pan-sharpening of MS imagery.

For the sake of completeness, it is worth noting that the idea of using the curvelet transform for image fusion was independently pursued by Choi et al. [20], who implemented the RWM injection model in the curvelet domain. Unfortunately, owing to several factors in their paper (lack of a flowchart of the algorithm, different dataset, i.e., Landsat 7 ETM+ MS and Pan, use of ERGAS as the only global performance index), it was not possible to attempt any comparison with an otherwise valuable counterpart.

Acknowledgments

The authors are grateful to Bruno Aiazzi of IFAC-CNR for the computer simulations and to the Centre National d'Études Spatiales (CNES) for providing the Ikonos dataset. This work has been supported in part by the Italian Ministry of Research under PRIN 2005.

References

[1] C. Pohl, J.L. van Genderen, Multisensor image fusion in remote sensing: concepts, methods, and applications, International Journal of Remote Sensing 19 (5) (1998) 823–854.
[2] A.S. Kumar, B. Kartikeyan, K.L. Majumdar, Band sharpening of IRS-multispectral imagery by cubic spline wavelets, International Journal of Remote Sensing 21 (3) (2000) 581–594.
[3] L. Wald, T. Ranchin, M. Mangolini, Fusion of satellite images of different spatial resolutions: assessing the quality of resulting images, Photogrammetric Engineering and Remote Sensing 63 (6) (1997) 691–699.
[4] S.G. Mallat, A Wavelet Tour of Signal Processing, second ed., Academic Press, New York, 1999.
[5] F. Argenti, L. Alparone, Filterbanks design for multisensor data fusion, IEEE Signal Processing Letters 7 (5) (2000) 100–103.
[6] B. Aiazzi, L. Alparone, S. Baronti, F. Lotti, Lossless image compression by quantization feedback in a content-driven enhanced Laplacian pyramid, IEEE Transactions on Image Processing 6 (6) (1997) 831–843.


[7] P. Blanc, T. Blu, T. Ranchin, L. Wald, R. Aloisi, Using iterated rational filter banks within the ARSIS concept for producing 10 m Landsat multispectral images, International Journal of Remote Sensing 19 (12) (1998) 2331–2343.
[8] G. Piella, A general framework for multiresolution image fusion: from pixels to regions, Information Fusion 4 (4) (2003) 259–280.
[9] B. Aiazzi, L. Alparone, S. Baronti, A. Garzelli, Context-driven fusion of high spatial and spectral resolution data based on oversampled multiresolution analysis, IEEE Transactions on Geoscience and Remote Sensing 40 (10) (2002) 2300–2312.
[10] S. Li, J.T. Kwok, Y. Wang, Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images, Information Fusion 3 (1) (2002) 17–23.
[11] J. Núñez, X. Otazu, O. Fors, A. Prades, V. Palà, R. Arbiol, Multiresolution-based image fusion with additive wavelet decomposition, IEEE Transactions on Geoscience and Remote Sensing 37 (3) (1999) 1204–1211.
[12] D.A. Yocky, Artifacts in wavelet image merging, Optical Engineering 35 (7) (1996) 2094–2101.
[13] M. González Audícana, J.L. Saleta, R. García Catalán, R. García, Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition, IEEE Transactions on Geoscience and Remote Sensing 42 (6) (2004) 1291–1299.
[14] J.L. Starck, E.J. Candès, D.L. Donoho, The curvelet transform for image denoising, IEEE Transactions on Image Processing 11 (6) (2002) 670–684.
[15] J.L. Starck, F. Murtagh, E.J. Candès, D.L. Donoho, Gray and color image contrast enhancement by the curvelet transform, IEEE Transactions on Image Processing 12 (6) (2003) 706–717.
[16] M.N. Do, M. Vetterli, The finite ridgelet transform for image representation, IEEE Transactions on Image Processing 12 (1) (2003) 16–28.
[17] T. Ranchin, L. Wald, Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation, Photogrammetric Engineering and Remote Sensing 66 (1) (2000) 49–61.
[18] T. Ranchin, B. Aiazzi, L. Alparone, S. Baronti, L. Wald, Image fusion—the ARSIS concept and some successful implementation schemes, ISPRS Journal of Photogrammetry and Remote Sensing 58 (1–2) (2003) 4–18.
[19] A. Garzelli, F. Nencini, Interband structure modeling for Pan-sharpening of very high resolution multispectral images, Information Fusion 6 (3) (2005) 213–224.
[20] M. Choi, R.Y. Kim, M.-R. Nam, H.O. Kim, Fusion of multispectral and panchromatic satellite images using the curvelet transform, IEEE Geoscience and Remote Sensing Letters 2 (2) (2005) 136–140.
[21] A. Garzelli, F. Nencini, L. Alparone, S. Baronti, Multiresolution fusion of multispectral and panchromatic images through the curvelet transform, in: Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, 2005, pp. 2838–2841.
[22] L. Alparone, S. Baronti, A. Garzelli, F. Nencini, The curvelet transform for fusion of very-high resolution multi-spectral and panchromatic images, in: Proceedings of the 25th EARSeL Symposium, Porto, Portugal, Millpress Science Publishers, The Netherlands, 2006.
[23] J.L. Starck, F. Murtagh, Image restoration with noise suppression using the wavelet transform, Astronomy and Astrophysics 288 (1994) 342–350.
[24] Y. Chibani, A. Houacine, The joint use of IHS transform and redundant wavelet decomposition for fusing multispectral and panchromatic images, International Journal of Remote Sensing 23 (18) (2002) 3821–3833.
[25] S. Teggi, R. Cecchi, F. Serafini, TM and IRS-1C-PAN data fusion using multiresolution decomposition methods based on the "à trous" algorithm, International Journal of Remote Sensing 24 (6) (2003) 1287–1301.
[26] P.S. Chavez Jr., S.C. Sides, J.A. Anderson, Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic, Photogrammetric Engineering and Remote Sensing 57 (3) (1991) 295–303.

[27] G. Piella, H. Heijmans, A new quality metric for image fusion, in: Proceedings of the IEEE International Conference on Image Processing, vol. III/IV, 2003, pp. 173–176.
[28] L. Alparone, S. Baronti, A. Garzelli, F. Nencini, A global quality measurement of Pan-sharpened multispectral imagery, IEEE Geoscience and Remote Sensing Letters 1 (4) (2004) 313–317.
[29] Z. Wang, A.C. Bovik, A universal image quality index, IEEE Signal Processing Letters 9 (3) (2002) 81–84.
[30] C.A. Laben, B.V. Brower, Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening, Eastman Kodak Company, 2000, US Patent #6,011,875.
[31] ENVI Version 4.1 User Manual, Research Systems Inc., 2004.
[32] Y. Zhang, Understanding image fusion, Photogrammetric Engineering and Remote Sensing 70 (6) (2004) 657–661.

[33] T.-M. Tu, S.-C. Su, H.-C. Shyu, P.S. Huang, A new look at IHS-like image fusion methods, Information Fusion 2 (3) (2001) 177–186.
[34] T.-M. Tu, P.S. Huang, C.-L. Hung, C.-P. Chang, A fast Intensity-Hue-Saturation fusion technique with spectral adjustment for IKONOS imagery, IEEE Geoscience and Remote Sensing Letters 1 (4) (2004) 309–312.
[35] L. Alparone, S. Baronti, A. Garzelli, F. Nencini, Landsat ETM+ and SAR image fusion based on generalized intensity modulation, IEEE Transactions on Geoscience and Remote Sensing 42 (12) (2004) 2832–2839.
[36] M.T. Eismann, R.C. Hardie, Hyperspectral resolution enhancement using high-resolution multispectral imagery with arbitrary response functions, IEEE Transactions on Geoscience and Remote Sensing 43 (3) (2005) 455–465.