Structure-guided unidirectional variation de-striping in the infrared bands of MODIS and hyperspectral images


Accepted Manuscript

Yaozong Zhang, Zhang Tianxu

PII: S1350-4495(16)30085-8
DOI: http://dx.doi.org/10.1016/j.infrared.2016.05.022
Reference: INFPHY 2053

To appear in: Infrared Physics & Technology

Received Date: 22 February 2016
Revised Date: 24 April 2016
Accepted Date: 19 May 2016

Please cite this article as: Y. Zhang, Z. Tianxu, Structure-Guided Unidirectional Variation De-striping in the Infrared Bands of MODIS and Hyperspectral images, Infrared Physics & Technology (2016), doi: http://dx.doi.org/10.1016/ j.infrared.2016.05.022

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


Highlights

- A novel de-striping method for MODIS and hyperspectral infrared bands is proposed.
- A novel stripe detection method that can distinguish between texture and stripes is proposed.
- Spatial information extracted by the stripe detection method is used to construct spatially weighted parameters.
- All parameters have default settings, which greatly improves the practicability of our method.

Abstract—Images taken by moderate resolution imaging spectroradiometer (MODIS) and hyperspectral imaging systems, especially in their infrared bands, usually suffer from undesired stripe noise, which seriously degrades image quality. Variational de-striping models have been proven to perform well, but detecting stripes effectively, and in particular distinguishing them from edges and textures, is still challenging. In this paper, a structure-guided unidirectional variational (SGUV) model that considers the structure of stripes is proposed. Because it uses structural information that textures and edges do not have, the proposed algorithm can effectively distinguish stripes from image textures and hardly blurs details while removing stripes. Comparative experiments on real stripe images demonstrate that the proposed method provides the best qualitative and quantitative results.

Keywords—De-striping, hyperspectral image, MODIS, structure, stripe noise, unidirectional variation.

1 INTRODUCTION

Stripe noise commonly exists in imaging systems with multiple detectors, such as moderate resolution imaging spectroradiometer (MODIS) images [1] and hyperspectral images [2]. MODIS data contain 36 spectral bands, ranging from the visible (0.4μm) to the long-wave infrared (14.4μm), of which bands 20-25 (3.660-4.549μm) and 27-36 (6.535-14.385μm) are emissive bands. The striping effect is clearly visible in the MODIS emissive bands [3] and is particularly serious in bands 27 (6.535–6.895μm), 30 (9.580–9.880μm), 33 (13.185–13.485μm), and 36 (14.085–14.385μm) [1], as shown in Fig.1(a) and (b). Hyperion satellite images contain 196 spectral bands, ranging from the visible (356nm) to the short-wave infrared (2577nm), and the striping effect exists in almost every band, as shown in Fig.1(c) and (d). Its presence cannot be attributed only to imperfect relative calibration of the sensor detectors, because other factors, such as the source spectral distribution and polarization or random noise in the internal calibration system, can intervene [4,5]. This type of noise seriously affects image quality and creates difficulties in data classification and the restoration of useful information [6].

Fig.1 Stripe noise in MODIS and hyperspectral data: (a) MODIS image (band 27), (b) MODIS image (band 36), (c) Hyperion image (band 26), (d) Hyperion image (band 189)

In the past two decades, many useful methods have been developed to address this type of noise, e.g., histogram modification [7], moment matching methods [8], and transformed-domain filtering algorithms such as the Fourier transform and wavelet decomposition (wavelet-FFT) [9-11]. In recent years, image de-striping methods based on the variational/PDE framework have attracted more and more attention. Bouali and Ladjal [1] introduced a unidirectional variational (UV) model to remove stripes in MODIS images. UV is superior to traditional methods and has successfully been used to detect the detector biases in the MODIS thermal emissive bands [12]. Later, many methods improved the efficiency of UV by using different regularizations or adaptive technology [13-16]. For example, Zhou et al. [17] developed a stripe-reweighted version of UV (SAUTV), and Chang et al. [18] added framelet regularization to retain more details. Zhou et al. [19] introduced a weighted matrix to combine two different unidirectional total variation models (HUTV), which can deal with both "weak" and "heavy" stripes. Wang et al. [3] adjusted the de-striping strength adaptively by introducing difference curvature into spatially weighted parameters. Zhang et al. [20] combined the TV-Stokes model and the UV model (UTV-S) and obtained a better result due to the introduction of a "divergence-free" prior [21,22]. Chang et al. [23] combined the UV model with sparse representation over a learned dictionary to de-stripe and denoise at the same time. All of the methods above must strike a balance between removing stripes and preserving details, which leads to residual stripes and/or excessive blurring. In general, the more information (priors) an algorithm uses, the better the results it reaches.
In this letter, we propose a structure-guided unidirectional variational (SGUV) de-striping algorithm, which makes full use of the structural information of stripes and can effectively distinguish between image texture and stripes. The key idea is to construct a weighted matrix that considers the structure of stripes and guides the behavior of the regularization with different weighted parameters in three types of areas: "weak" stripe areas, "heavy" stripe areas, and truncation areas. A specialized experiment is carried out to obtain a suitable parameter configuration, and the comparison of experimental results shows that, because more information is introduced, the algorithm leaves almost no residual stripes or excessive blurring and performs slightly better than SAUTV and HUTV. The remainder of this paper is organized as follows. Section 2 describes the proposed model in detail. Experiments regarding parameter selection and the comparison of different algorithms are described in Section 3. Finally, conclusions are drawn in Section 4.

2 THE PROPOSED METHOD

2.1 Degradation Model

The striping effect is often modeled as an additive process [24-26]; the degradation process can be formulated as

f(x, y) = u(x, y) + s(x, y),    (1)

where f is the image degraded by the instrument at pixel (x, y), u is the potential clean image to be recovered, and s is the stripe noise. Stripes can be viewed as structured noise whose variations are concentrated mainly along the x-axis or y-axis. When their direction is horizontal, most pixels of the stripe noise hold the following property:

|∂_x s(x, y)| ≈ 0,    (2)

i.e., the noise is nearly constant along each stripe. In this paper, we only consider horizontal stripes. For a stripe image f, we first calculate Σ|∂_x f| and Σ|∂_y f|. If Σ|∂_x f| is larger than Σ|∂_y f| (suggesting vertical stripes), we transpose f first.

2.2 UV for De-striping

The energy functional of the UV model [1] is

E(u) = ∫ |∂_y u| dxdy + λ ∫ |∂_x u − ∂_x f| dxdy.    (3)

In (3), the first integral is called the regularization term, which is used for removing stripes, and the second is called the fidelity term, which keeps the image close to the original image f; ∂_y u denotes the component of the gradient of u in the y-direction, while ∂_x u denotes the component in the x-direction. The UV model assumes that the changes of gray level in the y-direction of an image are mainly caused by stripes; thus, by minimizing Eq. (3), |∂_y u| becomes smaller, and the stripes are removed. The UV model has three disadvantages:

1. It has an infinite number of solutions.
2. It may generate artificial stripes [19].
3. It cannot distinguish between texture and stripes.

The first two disadvantages have been addressed by improved UV methods such as HUTV and UTV-S. However, the third disadvantage, as far as we know, has not been solved, or even considered, in any algorithm.
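A discrete version of the UV energy in Eq. (3) is straightforward to evaluate; this sketch uses forward differences and the split into regularization and fidelity terms described above:

```python
import numpy as np

def uv_energy(u, f, lam):
    """Discrete UV energy of Eq. (3):
    sum |D_y u| + lam * sum |D_x u - D_x f|.
    The first term penalizes across-stripe (y) variation; the second
    keeps the along-stripe (x) gradients of u close to those of f.
    """
    reg = np.abs(np.diff(u, axis=0)).sum()
    fid = np.abs(np.diff(u, axis=1) - np.diff(f, axis=1)).sum()
    return reg + lam * fid
```

For a stripe-free image the energy of u = f is zero; adding a horizontal stripe raises only the regularization term.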

The key reason UV and its improved methods cannot address the third disadvantage is that the a priori assumption they use is unreasonable: it is not peculiar to stripes. Texture and edges, even when they are not straight lines, also cause changes of gray level in the vertical direction of an image, as shown in Fig.2.

Fig.2. (a) Stripe-free image, (b) thresholded |∂_y u| of (a), (c) simulated stripe image, (d) thresholded |∂_y u| of (c). Values that are too small in (b) and (d) have been removed, using the same threshold.

Fig.2(b) is the result of calculating the gradient in the y-direction of a stripe-free image and using a threshold to remove small values; Fig.2(d) is the corresponding result for a stripe image, using the same threshold. Obviously, even though there are no obvious stripes in the image, the stripe detection used by UV still finds some "stripes" (Fig.2(b)), caused by texture or edges. HUTV and UTV-S have the same problem because they rely on the same a priori assumption as UV. The main risk is that they will blur these details as if they were stripes.

2.3 Structure-guided stripe detection method

Before proposing the structure-guided stripe detection (SGSD) method, which can distinguish between texture and stripes effectively, we introduce a new a priori assumption for stripe images.

Hypothesis: There are no edges or textures that have straight characteristics in the horizontal direction in an image.

This hypothesis means that edges or textures are allowed to exist in an image even if they are straight, but their direction must not be horizontal. According to this hypothesis, what distinguishes stripes from image texture is spatial consistency along the horizontal direction. In the following, we propose the SGSD method and show how to detect this spatial consistency along the x-direction. The term "point" refers to a pixel in an image.

First, we calculate the matrix of vertical differences. All points that have a non-zero value in this matrix may be on the edge of some stripe. We use D_ij to denote the element of D with coordinates (i, j):

D_ij = ∂_y f_ij if |∂_y f_ij| ≥ S1, and 0 otherwise,    (4)

where S1 is a threshold that removes points whose difference value is too small. Second, for each point that has a non-zero value in D, we check whether there exists a sequence that satisfies four conditions. If so, the point must be on the edge of some "heavy" stripe.
The four conditions are as follows:

1. The sequence contains at least Lm points of D, all lying in a line along the x-direction and adjacent to each other, where k is the y-coordinate shared by the points of the sequence.
2. The point being checked belongs to the sequence.
3. All of the points of the sequence are non-zero in D.
4. Every point of the sequence has a value close to m, where m is the mean of the sequence values; a similarity threshold (the larger, the stricter) measures how close the values must be.

After checking each non-zero point in D, we get the matrix D_h:

D_h(i, j) = 1 if a sequence through (i, j) satisfies the four conditions, and 0 otherwise,    (5)

which labels the locations of the edges of "heavy" stripes.
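The detection loop can be sketched as follows. The parameter names `s1` (difference threshold), `lm` (minimum run length) and `c` (similarity level), and the relative-deviation form of the similarity test, are assumptions standing in for symbols lost from the source PDF:

```python
import numpy as np

def sgsd(f, s1=10.0, lm=5, c=0.9):
    """Structure-guided stripe detection (sketch of Sec. 2.3).

    Thresholds the vertical differences (Eq. (4)), then keeps only
    points lying on horizontal runs of at least lm adjacent non-zero
    values whose values stay close to the run mean (conditions 1-4).
    Returns a 0/1 matrix marking edges of "heavy" stripes.
    """
    d = np.diff(f.astype(float), axis=0)
    d[np.abs(d) < s1] = 0.0                        # Eq. (4)
    dh = np.zeros_like(d)
    for i in range(d.shape[0]):
        row, j = d[i], 0
        while j < row.size:
            if row[j] == 0.0:
                j += 1
                continue
            k = j
            while k < row.size and row[k] != 0.0:  # adjacent run along x
                k += 1
            run = row[j:k]
            m = run.mean()
            # space consistency: long run whose values stay near the mean
            if k - j >= lm and m != 0.0 and np.all(np.abs(run - m) <= (1 - c) * abs(m)):
                dh[i, j:k] = 1.0
            j = k
    return dh
```

On a toy image with one biased row and an isolated bright pixel, only the row's edges are marked; the isolated point fails the run-length condition.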


Fig.3. (a) Stripe-free image, (b) detection result of (a) using SGSD, (c) simulated stripe image, (d) detection result of (c) using SGSD

Fig.4. (a) MODIS stripe image, (b) |∂_y u| of (a), (c) result of removing the values of (b) that are too small, (d) detection result of (a) using SGSD

Fig.3 shows the results of applying SGSD to the original image and the stripe image, respectively. Compared with Fig.2, it is clear that our method detects only stripes, for both the stripe-free image (Fig.3(a)) and the stripe image (Fig.3(c)), and not the textures or edges that appear in Fig.2(b) and Fig.2(d). Fig.4 shows comparative results on a real MODIS image obtained using the original method (Fig.4(c)) and SGSD (Fig.4(d)). There are not only stripes but also textures in Fig.4(c), which demonstrates that the original method cannot distinguish between texture and stripes, because both have large values of |∂_y u|; in Fig.4(d), there are only stripes. The advantages of our method for detecting stripes are as follows:

1. We use similarity to overcome the shortcomings of a "hard" threshold on |∂_y u|;
2. Many isolated points, which belong to textures but not stripes, are ignored thanks to the spatial consistency of stripes (the adjacency constraint).

2.4 Weight Matrix

In this letter, we use different weighted parameters in different regions of a stripe image. We consider three types of regions: "weak" stripe regions, "heavy" stripe regions, and truncation regions. Y. Zhang et al. [20] discussed that disregarding truncation is the cause of disadvantage 2 of the UV model. In fact, the direct cause of artificial stripes is an unreasonable choice of filtering weights. We also consider this situation, but in a manner different from [20].

We construct a weight matrix w to label the three different regions mentioned above, using three matrices to mark them:

1. the regions polluted by weak stripes, denoted M1; the filtering weight corresponding to them is α, the basis weight;
2. the regions polluted by heavy stripes whose gray values do not reach the limit of the gray-level range, denoted M2; the filtering weight corresponding to them is β2·α, where β2 is a gain factor;
3. the regions polluted by heavy stripes whose gray values reach the limit of the gray-level range, denoted M3; the filtering weight corresponding to them is β3·α, where β3 is a gain factor.

Thus, we have

w = α (M1 + β2 M2 + β3 M3).    (6)

M2 is calculated as

M2 = D_h(f) ⊙ (1 − M3),    (7)

where D_h(f) denotes the "heavy"-stripe matrix of Eq. (5) for the image f, and f is the original stripe image. M3 is the position matrix of truncation:

M3(i, j) = 1 if f(i, j) = 0 or f(i, j) = 2^N − 1, and 0 otherwise,    (8)

where N is the bit width of the sensor; for a gray image, N = 8. During iterations, the heavy-stripe regions are re-detected on the current estimate:

M2 = D_h(u^k) ⊙ (1 − M3),    (9)

where u^k is the image resulting after k iterations, and the weak-stripe mask is the remainder:

M1 = 1 − M2 − M3.    (10)
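Under the assumptions above, the weight matrix can be assembled from the truncation mask of Eq. (8) and a heavy-stripe mask. The combination rule follows the reconstructed Eq. (6); the default gains β2 = 30 and β3 = 35 are illustrative stand-ins (symbols and values were partly lost from the source PDF):

```python
import numpy as np

def weight_matrix(f, dh, alpha=1.0, beta2=30.0, beta3=35.0, n_bits=8):
    """Weight matrix w of Sec. 2.4 (sketch): w = alpha*(M1 + beta2*M2 + beta3*M3).

    dh: 0/1 heavy-stripe mask aligned with f (e.g. from stripe detection).
    M3 marks truncated pixels (gray value 0 or 2**n_bits - 1, Eq. (8));
    M2 marks detected heavy-stripe pixels that are not truncated;
    M1 marks the remaining ("weak" stripe) pixels.
    """
    m3 = ((f <= 0) | (f >= 2 ** n_bits - 1)).astype(float)
    m2 = np.clip(dh, 0.0, 1.0) * (1.0 - m3)
    m1 = 1.0 - m2 - m3
    return alpha * (m1 + beta2 * m2 + beta3 * m3)
```

For a row containing a truncated pixel, a detected stripe pixel, and a plain pixel, the weights come out as β3·α, β2·α, and α, respectively.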

During iterations, we need to recalculate M2 periodically. Experience shows that updating this matrix five to ten times during the whole process is enough. By introducing the weight matrix w, we can overcome disadvantages 2 and 3 of the UV model.

2.5 SGUV Model

One solution to disadvantage 1 is to introduce a gray-level fidelity term into the UV model, as in HUTV. Thus, the variational model we use is

E(u) = λ2 ∫ |w ⊙ ∂_y u| dxdy + ∫ |∂_x u − ∂_x f| dxdy + λ1 ∫ (u − f)^2 dxdy,    (11)

where ⊙ represents the element-wise multiplication operator, and λ1 and λ2 are the weights of the corresponding terms. The differences between our model and SAUTV are that we use a very different weight matrix on the regularization term and add a gray-level fidelity term.

2.6 Gradient Descent Optimization

The iteration formula of gradient descent for solving Eq. (11) is

u^{k+1} = u^k − Δt · E'_u(u^k),    (12)

where Δt is the unit step length of gradient descent and E'_u is the derivative of the energy functional with respect to u, computed as

E'_u = 2λ1 (u − f) − D1^- [ (D1^+ u − D1^+ f) / |D1^+ u − D1^+ f| ] − λ2 D2^- [ w ⊙ D2^+ u / |D2^+ u| ],    (13)

where D1^+, D1^-, D2^+, D2^- are difference operators, computed as

D1^+ u(i, j) = u(i, j+1) − u(i, j),  D1^- u(i, j) = u(i, j) − u(i, j−1),    (14)

D2^+ u(i, j) = u(i+1, j) − u(i, j),  D2^- u(i, j) = u(i, j) − u(i−1, j).    (15)

The iterative solving procedure is stopped when

||u^{k+1} − u^k|| / ||u^k|| ≤ ε,    (16)

where ε is a tolerance parameter.
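The update of Eqs. (12)-(16) can be sketched with replicated-boundary forward/backward differences. The small EPS in the denominators and the default weights are assumptions; the structure of the gradient follows the reconstructed Eq. (13):

```python
import numpy as np

EPS = 1e-8  # avoids division by zero in the sign terms

def dp(a, axis):  # forward difference D+, zero at the far boundary
    return np.diff(a, axis=axis, append=np.take(a, [-1], axis=axis))

def dm(a, axis):  # backward difference D-, zero at the near boundary
    return np.diff(a, axis=axis, prepend=np.take(a, [0], axis=axis))

def sguv_step(u, f, w, lam1, lam2, dt):
    """One gradient-descent update, Eqs. (12)-(13)."""
    t1 = dp(u, 1) - dp(f, 1)   # D1+ u - D1+ f (x-direction)
    t2 = dp(u, 0)              # D2+ u (y-direction)
    grad = (2.0 * lam1 * (u - f)
            - dm(t1 / (np.abs(t1) + EPS), 1)
            - lam2 * dm(w * t2 / (np.abs(t2) + EPS), 0))
    return u - dt * grad

def sguv_destripe(f, w, lam1=0.1, lam2=0.1, dt=0.1, tol=1e-6, max_iter=500):
    """Iterate Eq. (12) until the relative change of Eq. (16) is below tol."""
    u = f.astype(float).copy()
    for _ in range(max_iter):
        u_new = sguv_step(u, f, w, lam1, lam2, dt)
        if np.linalg.norm(u_new - u) <= tol * (np.linalg.norm(u) + EPS):
            return u_new
        u = u_new
    return u
```

On a toy stripe image with w set to all ones, the y-direction variation of the result drops below that of the input while the x-direction detail is untouched.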

3 EXPERIMENTS AND DISCUSSION

3.1 Experiment for parameter selection

There are nine parameters in the SGUV model, covering stripe detection and filtering.

3.1.1 Parameters of stripe detection

For the stripe detection parameters, we used simulated stripe images to help determine suitable settings. We selected eight different images (size 256*256, as shown in Fig.5(a-h)) cut from two stripe-free MODIS images (size 2030*1354) extracted from band 32 (11.770–12.270 μm). For convenience of calculation and display, all of the experimental data were coded to an 8-bit scale [27]. We added two types of simulated stripes to them: bright stripes, and bright and dark stripes (e.g., Fig.6(a) and (c)). Knowing the accurate "heavy" stripe locations, we can quantitatively measure the accuracy of the stripes detected with different parameter settings and then optimize those parameters.
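A simple stripe generator of the kind described above might look as follows; the period and the per-row random amplitudes are assumptions, since the paper does not spell out its generator:

```python
import numpy as np

def add_stripes(u, intensity=30.0, kind="bright", period=3, seed=0):
    """Add simulated horizontal stripes to a clean 8-bit image.

    kind="bright" biases every `period`-th row upward;
    kind="bright_dark" alternates the sign of the bias.
    """
    rng = np.random.default_rng(seed)
    rows = np.arange(0, u.shape[0], period)
    amp = intensity * rng.uniform(0.5, 1.0, size=rows.size)
    if kind == "bright_dark":
        amp *= np.where(np.arange(rows.size) % 2 == 0, 1.0, -1.0)
    s = np.zeros((u.shape[0], 1))
    s[rows, 0] = amp
    return np.clip(u + s, 0.0, 255.0)  # stay on the 8-bit scale
```

Because the stripe rows and amplitudes are known exactly, the ground-truth "heavy" stripe locations needed for the SPE metric below are available for free.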

Fig.5 Stripe-free MODIS images for parameter optimization

Fig.6 Simulated stripe images and accurate "heavy" stripe locations

We again use a matrix to label the "heavy" stripe positions; all non-zero points in this matrix stand for the locations of stripe edges. We use D_g to represent the real stripe positions (ground truth), D_h to represent the real "heavy" stripe positions, and D_e to represent the result of our stripe detection method; the stripe position error (SPE) is then calculated as

SPE = ||D_h − D_e ⊙ D_g||_1 / ||D_h||_1 + S_error · ||D_e ⊙ (1 − D_g)||_1 / ||1 − D_g||_1,    (17)

where D_h(i, j) = 1 if D_g(i, j) ≠ 0 and |∂_y u|_ij exceeds the "heavy" threshold, and 0 otherwise.

In Eq. (17), the first term measures the detection error in the stripe area and the second term measures the wrong detections in the stripe-free area. The parameter S_error controls the strength used to punish the second kind of error, which leads to detail blurring. SPE measures the error in only the real stripe regions when S_error = 0 and in only the stripe-free regions when S_error is very large. All parameter settings are shown in Table Ⅰ; we select the setting with the best (lowest) SPE.

TABLE Ⅰ

Parameter settings

| Intensity of simulated stripes | 9 | 12 | 15 | 30 |
| P | 9 | 12 | 15 | 10, 13, 15, 25, 30 |

variation range of S1: [7, 17] (intervals of 1)
variation range of Lm: [5, 15] (intervals of 1)
variation range of the similarity threshold: [0.5, 0.9] (intervals of 0.05)
variation range of S_error: 0, 1, 50, 1000000

The best settings for the eight simulated stripe images when the stripe intensity is 30 and P = 15 are shown in Table Ⅱ.

TABLE Ⅱ
Optimal settings (intensity of simulated stripes: 30, P: 15)

Optimal S1

| S_error | Striped Fig.5(a) | (b) | (c) | (d) | (e) | (f) | (g) | (h) |
| 0 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 |
| 1 | 7 | 11 | 9 | 12 | 15 | 15 | 16 | 8 |
| 50 | 7 | 11 | 9 | 12 | 15 | 7 | 14 | 8 |
| 1000000 | 7 | 11 | 9 | 12 | 15 | 7 | 14 | 8 |

Optimal Lm

| S_error | Striped Fig.5(a) | (b) | (c) | (d) | (e) | (f) | (g) | (h) |
| 0 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| 1 | 15 | 15 | 15 | 15 | 10 | 14 | 12 | 13 |
| 50 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 |
| 1000000 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 |

Optimal similarity threshold

| S_error | Striped Fig.5(a) | (b) | (c) | (d) | (e) | (f) | (g) | (h) |
| 0 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |
| 1 | 0.9 | 0.9 | 0.9 | 0.9 | 0.75 | 0.9 | 0.8 | 0.9 |
| 50 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 |
| 1000000 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 |

Best SPE

| S_error | Striped Fig.5(a) | (b) | (c) | (d) | (e) | (f) | (g) | (h) |
| 0 | 0.038062 | 0.013629 | 0.0036609 | 0.0033606 | 0.045717 | 0.023081 | 0.044336 | 0.022755 |
| 1 | 0.088393 | 0.11772 | 0.10119 | 0.18951 | 0.49769 | 0.49452 | 0.45585 | 0.46976 |
| 50 | 0.088393 | 4.4548 | 4.6073 | 8.9814 | 4.778 | 9.2129 | 4.6892 | 1.7538 |
| 1000000 | 0.088393 | 88512.8 | 91960.7 | 179427.2 | 86622.1 | 177534.4 | 85145.3 | 25991.5 |

The observations from Table Ⅱ are as follows:

1. For different stripe images, when S_error = 0, Lm and the similarity threshold always reach the bottom (5 and 0.5) of their respective ranges to obtain the best SPE, and when S_error is large, they reach the top (15 and 0.9). This means that increasing Lm and the similarity threshold helps remove fake stripes, which lie in the stripe-free regions, but hinders detecting the real stripes that lie in stripe and texture regions simultaneously.
2. For different stripe images, the optimal settings of Lm and the similarity threshold are almost the same.
3. The best settings of S1 are always no larger than P.
4. SPE is not sensitive to the change of S1 when Lm and the similarity threshold are fixed, as shown in Fig.7(a).
5. Stripes form a peak in the histogram of |∂_y u|, as shown in Fig.7(b).

Observations 1-5 are similar when the stripe intensity is 30 and P = 10, 13, 25, 30.


Fig.7 (a) SPE versus S1 (S_error = 1000000, Lm and the similarity threshold fixed), (b) the peak in the histogram of |∂_y u| caused by stripes (only the peak shown), (c) SPE versus Lm (S_error = 1000000, S1 and the similarity threshold fixed)

By observing the best settings for the eight simulated stripe images when the stripe intensity is 12 and P = 12, we get the following observations:

6. For different stripe images, the best setting of S1 is always close to P.
7. When Lm is large enough, the similarity threshold always reaches the top (0.9) of its range to obtain the best setting.
8. For fixed S1 and similarity threshold, SPE is not sensitive to Lm when Lm is large enough, as shown in Fig.7(c).

Observations 6-8 are similar when the stripe intensity is 9 and 15 and P = 9 and 15, respectively. Finally, considering all our experimental results, we conclude:

1. For different stripe images, Lm should be as large as possible.
2. For different stripe images, S1 should be close to P but not larger than it.
3. For different stripe images, the similarity threshold should be as large as possible.

Thus, the suggested settings are Lm and the similarity threshold at the top of their ranges, and S1 = P, where P is the value corresponding to the peak of the histogram of |∂_y u|. For real stripe images there may be no obvious peak, or many; when this happens, S1 should be tested in the range (7, 30).

3.1.2 Parameters of filtering

Among the parameters related to filtering, λ1 and Δt almost never need to change for different stripe images; our experiments show that fixed settings are suitable for most stripe images. The settings of λ2, β2, and β3 vary between stripe images. When de-striping an image, we first use their default settings. By observing the de-striping result, we learn how to adjust these three parameters: in general, if details are blurred, the regularization weight λ2 should be decreased; if there are residual stripes in the image, λ2 should be increased; and if residual stripes in the truncation area are unaffected, β3 should be increased. These rules have worked for almost all stripe images we have seen.

The default settings of λ2, β2, and β3 were determined as follows. We selected two stripe-free MODIS images and added bright and bright-and-dark stripes to them, using peak signal-to-noise ratio (PSNR) for optimal parameter selection. Testing λ2 from 0.01 to 0.8 at intervals of 0.03, we found that for the different simulated stripe images the best setting is always 0.1, as shown in Fig.8(a). Setting λ2 = 0.1 and changing β2 from 1 to 150 at intervals of 4, we found that PSNR is not sensitive to the variation of β2, as shown in Fig.8(b); the same holds for β3, as shown in Fig.8(c). Finally, re-testing λ2 from 0.01 to 0.8 under different settings of β2 and β3, the best λ2 is always 0.1, which means its selection is insensitive to the settings of β2 and β3. Thus, we suggest the default settings λ2 = 0.1, β2 = 30, and β3 = 35.


Fig.8 PSNR versus (a) λ2 (β2 = 30, β3 = 35), (b) β2 (λ2 = 0.1), (c) β3 (λ2 = 0.1, β2 = 30)

So far, we have discussed the settings of all nine parameters. In the remaining experiments, we always used the suggested detection settings for all test images and adjusted λ2, β2, and β3 starting from their defaults. In our experience, λ2 often ranges from 0.1 to 0.5, β3 is often close to β2, and β2 can range from 10 to 150 for different stripe images. The complete procedure for minimizing Eq. (11) with gradient descent iteration is summarized in Algorithm 1.

Algorithm 1: SGUV based on the gradient descent optimization method
1: set the filtering parameters λ1, Δt, λ2, β2, β3;
2: calculate the histogram of the stripe image f, and then set S1, Lm, and the similarity threshold;
3: set u^0 = f and k = 0;
4: while Eq. (16) is not satisfied and the maximum iteration count is not reached, update u^k with Eq. (12), recalculate M2 periodically, and set k = k + 1.

3.2 Real Experimental Results

To test the performance of SGUV on real stripe images, we selected three real stripe images (two MODIS L1B images, size 512*512, and one EO-1 Hyperion L0 image, size 256*256, downloaded from https://ladsweb.nascom.nasa.gov/ and http://glovis.usgs.gov/, respectively, as shown in Fig.9), which have different stripe noises. In this experiment, we used the following indices: image distortion (ID), inverse coefficient of variation (ICV), improvement factor of radiometric quality (IF), and unidirectional variation difference (UVD). UVD is defined as follows:

UVD = ∫∫ |∂_x u_R − ∂_x u_E| dxdy,    (18)

where u_R and u_E are the raw and de-striped images, respectively.

Fig.9 Real stripe images: (a) from Terra MODIS band 24 (4.433-4.498μm), collected from the east coast of China on September 5, 2012; (b) from Terra MODIS band 27 (6.535-6.895μm), captured from the Japan coast on December 1, 2012; (c) captured by NASA's EO-1 Hyperion satellite on June 15, 2009 over Lake Monona, band 134 (933-2396nm)

ID is a critical and widely used metric for evaluating de-striping performance, i.e., the degree to which information parallel to the stripe direction is preserved. A larger ID means less information lost. Note that even when there are residual stripes in an image, this index can still take a very large value. ICV and IF are specially designed to evaluate the variation of stripe noise before and after de-striping; the larger the ICV/IF value, the better the de-striping performance. These two indices cannot measure information loss, unlike ID and UVD, so we analyzed the results of the different algorithms by combining the four indices. We also compared the performances of the different algorithms via visual inspection and drew the corresponding mean cross-track profiles and mean column power spectra.
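UVD of Eq. (18) has a direct discrete form; this sketch sums the absolute change of the x-direction differences:

```python
import numpy as np

def uvd(u_raw, u_destriped):
    """Unidirectional variation difference, Eq. (18), discretized:
    sum |D_x u_R - D_x u_E|. It measures how much the de-striping
    changed the along-stripe (x-direction) gradients, i.e. detail altered."""
    return np.abs(np.diff(u_raw, axis=1) - np.diff(u_destriped, axis=1)).sum()
```

An unchanged image scores zero; any edit that alters horizontal gradients raises the score, which is why over-smoothed results (e.g. HUTV in the tables below) show large UVD values.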

Fig.10 De-striping result for Fig.9(a), with (a) wavelet-FFT (L = 3, 'Wname' = db42, 15), (b) SAUTV (10, 0.1), (c) HUTV (Th = 0.15, 0.025, 1.25), and (d) SGUV (0.15, 150, 150)

Fig.11 Close-up of Fig.10: (a) area of the original stripe image and the same area using (b) wavelet-FFT, (c) SAUTV, (d) HUTV, and (e) SGUV; (f) another area of the original stripe image and the same area using (g) wavelet-FFT, (h) SAUTV, (i) HUTV, and (j) SGUV; the gray-scale range of (f-j) has been adjusted to fit the display

Fig.10 and Fig.11 show the de-striping results of the four algorithms for Fig.9(a). It can easily be seen from Fig.10 that wavelet-FFT and SAUTV leave obvious residual stripes in the right dark area (marked by red rectangles), which cannot be seen in the results of HUTV and SGUV. Fig.11 gives clearer information on the differences between SAUTV, HUTV, and SGUV. SAUTV shows no detail blur (Fig.11(c)) but obvious residual stripes (Fig.11(h)). On the contrary, HUTV blurs details that look like stripes (Fig.11(d)) but leaves few residual stripes (Fig.11(i)). Because they use a poor prior, these two methods cannot strike a good balance between keeping details and removing stripes. Our method performs well in both Fig.11(e) and Fig.11(j) thanks to its ability to distinguish stripes from texture. The indices summarized in Table Ⅲ support a similar conclusion: HUTV achieves the smallest ID and the biggest UVD among the three variation-based methods, which means it blurs the most details; this is also why its IF is the best. SGUV achieves the best ID and ICV simultaneously among the three variation-based methods, which means it retains details while removing stripes.

TABLE Ⅲ

ID/ICV/IF/UVD comparisons for Fig.10

| Index | Stripe image | Wavelet-FFT | SAUTV | HUTV | SGUV |
| ID | \ | 0.98759 | 0.98353 | 0.96963 | 0.98933 |
| UVD | \ | 61286 | 57739 | 237067 | 60280 |
| ICV (sample 1) | 18.3888 | 31.942 | 26.1298 | 27.8144 | 28.4019 |
| ICV (sample 2) | 21.959 | 23.3768 | 30.5243 | 25.7309 | 32.4318 |
| IF | \ | 26.2144 | 25.4626 | 27.3215 | 25.26 |

Fig.12. Mean cross-track profiles of the images in Fig.10: (a) stripe image, (b) wavelet-FFT, (c) SAUTV, (d) HUTV, (e) SGUV.

Fig.12 shows the mean cross-track profiles of Fig.10 before and after processing with wavelet-FFT, SAUTV, HUTV, and SGUV. There are several peaks in Fig.12(a), belonging to the original stripe image. All four methods remove them, but their curves differ in detail. There are several burrs (marked by red rectangles) in Fig.12(c), belonging to SAUTV; from Fig.10(b), we know this is because SAUTV leaves several residual stripes. The curve of HUTV is smoother than the other curves because it always blurs details. We emphasize that the smoothness of the mean cross-track profile alone cannot tell whether the corresponding image has residual stripes or blurred details. Fig.13 shows the mean column power spectra for Fig.10. For the original image (only for MODIS data), the mean column power spectrum shows abrupt values at frequencies of 0.1, 0.2, 0.3, 0.4, and 0.5. All four methods yield smooth curves, which means they remove most stripes easily; however, we cannot tell whether there are residual stripes or blurred details from Fig.13(b-e) alone.


Fig.13. Mean column power spectrum of the images of Fig. 10. (a) Stripe image, (b) Wavelet-FFT, (c) SAUTV, (d) HUTV, (e) SGUV.

Fig.14 De-striping result for Fig.9(b), with (a) wavelet-FFT (L = 4, 'Wname' = db42, 16), (b) SAUTV (10, 0.5), (c) HUTV (Th = 0.2, 0.1, 3.5), and (d) SGUV (0.3, 20, 20)

Fig.15 Close-up of Fig.14: (a) area of the original stripe image and the same area using (b) wavelet-FFT, (c) SAUTV, (d) HUTV, and (e) SGUV

Fig.14 and Fig.15 show the de-striping results of the four algorithms for Fig.9(b), and Table Ⅳ gives the four indices for Fig.14. It can easily be seen from Fig.14(a) that many residual stripes remain around the texture (marked by red rectangles) for wavelet-FFT, whose UVD and ICV are the worst among the four methods. From Fig.15, we can see that wavelet-FFT has residual stripes in the right red-rectangle area; SAUTV blurs the left red-rectangle area; HUTV blurs the bottom and right red-rectangle areas; SGUV has neither residual stripes nor blurred details. In Table Ⅳ, due to blurred details, SAUTV and HUTV have small ID and large UVD values. Due to over-blurring, HUTV has the best IF. SGUV strikes a better balance between blurring details and removing stripes; it achieves a better ID than SAUTV and HUTV and the two best ICVs. Fig.16 shows the mean column power spectra for Fig.14.

TABLE Ⅳ
ID/ICV/IF/UVD comparisons for Fig.14

| Index | Stripe image | Wavelet-FFT | SAUTV | HUTV | SGUV |
| ID | \ | 0.98161 | 0.92784 | 0.93964 | 0.96012 |
| UVD | \ | 109118 | 217634 | 604738 | 183294 |
| ICV (sample 1) | 1.9041 | 3.1369 | 3.69 | 3.699 | 3.9549 |
| ICV (sample 2) | 1.664 | 11.5521 | 14.6551 | 13.9162 | 17.1811 |
| IF | \ | 39.4351 | 31.9176 | 36.4872 | 33.7297 |

Fig.16. Mean column power spectrum of the images of Fig.14. (a) Stripe image, (b) Wavelet-FFT, (c) SAUTV, (d) HUTV, (e) SGUV.

Fig.17. De-striping result for Fig.9(c), with (a) wavelet-FFT (L = 4, 'Wname' = db42, 14), (b) SAUTV (2, 0.3), (c) HUTV (Th = 0.29, 0.11, 3.5), and (d) SGUV (0.15, 30, 25)

Fig.17 shows the de-striping results of the four algorithms for Fig.9(c), and Table Ⅴ gives the four indices for Fig.17. The main characteristics of Fig.9(c) are that the stripes are very dense and the contrast of the whole image is low. There is obvious detail blurring (marked by the red rectangle) for SAUTV in Fig.17(b) and there are residual stripes (marked by the red rectangle) for HUTV in Fig.17(c); accordingly, their IF values are the best and the worst among the three variation-based methods, respectively. SGUV has neither residual stripes nor blurred details, as shown in Fig.17(d), and it achieves the best ID and a better ICV than SAUTV and HUTV. Wavelet-FFT also strikes a good balance between blurring details and removing stripes. Fig.18 shows the mean cross-track profiles for Fig.17. In Fig.18(d), there is an obvious bump (marked by the red rectangle) near 0, which corresponds to the residual stripes at the top of the image in Fig.17(c).

TABLE Ⅴ

ID/ICV/IF/UVD comparisons of Fig.17

index         Stripe image  Wavelet-FFT  SAUTV    HUTV     SGUV
ID            \             0.97185      0.95075  0.94042  0.99027
UVD           \             16612        17627    47261    11172
ICV(sample1)  4.4688        9.514        8.2889   7.3365   9.4083
ICV(sample2)  8.1109        13.8267      12.0636  12.1205  12.8986
IF            \             31.1864      30.985   25.4526  26.4755
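Fig.18 below compares mean cross-track profiles. Under the usual convention in the MODIS de-striping literature (assumed here, since this excerpt does not spell it out), the profile is the mean of each cross-track line (row) plotted against the line index, so per-row stripe offsets appear as a high-frequency oscillation riding on the smooth scene signal. A minimal sketch; the function name and synthetic scene are illustrative:

```python
import numpy as np

def mean_cross_track_profile(image):
    """Mean of each cross-track line (row), indexed by line number.

    Stripe noise adds a per-row offset, so stripes appear as a
    high-frequency oscillation superimposed on the scene profile."""
    return np.asarray(image, dtype=float).mean(axis=1)

# A smooth 32x32 ramp scene vs. the same scene with alternating row offsets.
scene = np.tile(np.linspace(50.0, 150.0, 32)[:, None], (1, 32))
striped = scene + 8.0 * (np.arange(32) % 2)[:, None]
clean_profile = mean_cross_track_profile(scene)     # smooth ramp
stripe_profile = mean_cross_track_profile(striped)  # ramp + oscillation
print(np.abs(np.diff(stripe_profile)).max() > np.abs(np.diff(clean_profile)).max())  # → True
```

A successful de-striping result should restore a profile close to the smooth scene curve, which is why localized bumps in Fig.18(d) diagnose the residual stripes left by HUTV.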

Fig.18. Mean cross-track profiles of the images of Fig.17: (a) stripe image, (b) Wavelet-FFT, (c) SAUTV, (d) HUTV, (e) SGUV.

4. Conclusion

In this paper, we have proposed a structure-guided UV model that can effectively distinguish stripes from image texture. By introducing into the variational model a new prior assumption that accounts for the structure of stripes, SGUV removes stripes while hardly blurring details, whereas many existing techniques fail to achieve this. The proposed method is especially valuable for images containing many stripe-like textures. Although our model has many (nine) parameters, most of them do not need to be changed when de-striping different images. Through simulated experiments, a default setting for each of the nine parameters was obtained, so that at most three parameters need to be readjusted when de-striping a given image. All of this makes our method well suited to real applications. Comparative results on simulated and real stripe images demonstrated the effectiveness and superiority of our model.

The authors declare that there are no conflicts of interest regarding the publication of this paper.

This work was supported in part by the Fundamental Research Funds for the Central Universities (HUST: CXY13Q026), in part by the key project (ID: 60736010), and in part by the instrumentation-special project (ID: 61227007) of the National Natural Science Foundation of China.


