Total Variation with Overlapping Group Sparsity for Speckle Noise Reduction
Jun Liu1,2∗, Ting-Zhu Huang1†, Gang Liu1, Si Wang1, Xiao-Guang Lv3
1. School of Mathematical Sciences/Research Center for Image and Vision Computing, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, P.R. China
2. School of Mathematics and Statistics, Northeast Normal University, Changchun, Jilin, 130024, P.R. China
3. School of Science, Huaihai Institute of Technology, Lianyungang, Jiangsu, 222005, P.R. China
Abstract
The staircase effect often appears in total variation (TV) regularized solutions, whereas total variation with overlapping group sparsity (OGSTV) has proved to be an effective regularization for alleviating this drawback. For coherent imaging systems, such as synthetic aperture radar, the acquired images are corrupted by speckle. In this paper, we propose a speckle noise reduction model based on the OGSTV regularization. Under the framework of the efficient alternating direction method of multipliers, we develop the corresponding algorithm for solving the proposed model. Numerical experiments are presented to illustrate the superiority of the proposed model and the efficiency of the corresponding algorithm.
Key words: Total variation; Speckle noise; Synthetic aperture radar; SSIM
1 Introduction
Images captured by coherent imaging systems, such as synthetic aperture radar (SAR) [1], laser imaging [2], ultrasound imaging [3] and optical coherence tomography [4], acquire a peculiar granular appearance. This phenomenon is usually referred to as speckle. The statistical properties of speckle have been elaborately introduced in [5, 6].
∗ E-mail: [email protected]
† Corresponding author. E-mail: [email protected]
Speckle noise degrades valuable information in images, such as edges and patterns, and thereby affects subsequent tasks such as segmentation and classification. Consequently, it is important to work on the problem of speckle noise reduction.

In different applications, the probability distributions of speckle noise are different. In this paper, we consider noise that follows a Gamma distribution, which typically occurs in SAR images. Let $g \in \mathbb{R}^{n^2}$ be the observed noisy image. According to the degradation mechanism of speckle noise, the image observation model is mathematically expressed as

\[ g = u \circ \eta, \tag{1} \]

where $u \in \mathbb{R}^{n^2}$ is the unknown true image, $\circ$ denotes componentwise multiplication, and $\eta \in \mathbb{R}^{n^2}$ is the speckle (multiplicative) noise, which follows a Gamma distribution with probability density function (pdf)

\[ p_H(\eta) = \prod_{m=1}^{n^2} \frac{L^L \eta_m^{L-1}}{\Gamma(L)}\, e^{-L \eta_m}, \qquad \eta_m \ge 0, \tag{2} \]
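As a quick illustration of the observation model (1)-(2), Gamma-distributed speckle with mean one can be simulated and applied to a clean image as follows (a minimal MATLAB sketch; gamrnd requires the Statistics Toolbox, and the image file name is only a placeholder):

```matlab
% Simulate the speckle observation model g = u .* eta, with eta ~ Gamma(L, 1/L).
u   = double(imread('cameraman.tif'));   % any clean test image (placeholder)
L   = 3;                                 % number of looks (noise level)
eta = gamrnd(L, 1/L, size(u));           % mean 1, standard deviation 1/sqrt(L)
g   = u .* eta;                          % observed speckled image
```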
where $\Gamma(\cdot)$ is the usual Gamma function and $L$ is an integer that indicates the noise level: the smaller the value of $L$, the more severe the corruption. The mean of $\eta_m$ is 1 and its standard deviation is $1/\sqrt{L}$.

In the literature, variational approaches for despeckling have attracted great attention. The aim of speckle noise reduction is to estimate the underlying scene (true image) from the noisy observation. Mathematically, it is an inverse problem that requires regularization [7]. One of the most successful and popular regularization techniques is the TV regularization proposed by Rudin, Osher and Fatemi [8], owing to its capability to preserve sharp edges in denoised images. In [9], Rudin, Lions, and Osher first proposed a TV-based multiplicative noise removal model, which used a constrained optimization approach with two Lagrange multipliers. However, their approach only copes well with noise that follows a Gaussian distribution. For multiplicative Gamma noise, Aubert and Aujol [10] derived a new model based on a maximum a posteriori (MAP) estimator. Their model, often called the AA-model in the literature, reads

\[ \min_u \; \sum_{m=1}^{n^2} \lambda \left( \log u_m + \frac{g_m}{u_m} \right) + \|Du\|_1, \tag{3} \]

where $u \in \mathbb{R}^{n^2}$, $\|Du\|_1$ is the TV regularization term, $D = (D_x; D_y) \in \mathbb{R}^{2n^2 \times n^2}$ is the first-order difference matrix with respect to the $x$- and $y$-directions, $\|Du\|_1 = \sum_{m=1}^{n^2} \|(Du)_m\|_2$, where $\|\cdot\|_2$ is the Euclidean norm and $(Du)_m = ((D_x u)_m; (D_y u)_m) \in \mathbb{R}^2$, and $\lambda$ is a positive parameter that balances the data fidelity term and the regularization penalty.

In [10], Aubert and Aujol used the gradient projection algorithm to solve the minimization problem (3). Since the objective function in (3) is non-convex, it is difficult to obtain a global optimal solution.
In particular, the solution obtained by their method strongly depends on the initial guess. To resolve this non-convexity, the authors of [7, 11, 12] used the log transformation $z = \log u$, replaced the regularizer $\|Du\|_1$ by $\|Dz\|_1$, and derived the following convex model:

\[ \min_z \; \sum_{m=1}^{n^2} \lambda \left( z_m + g_m e^{-z_m} \right) + \|Dz\|_1. \tag{4} \]
With this logarithm-transformation technique, the solution no longer depends on the initial guess. Different from the idea of reformulating the data fidelity term into a convex one by a logarithm transformation, Steidl and Teuber [13] replaced the data fidelity term in (3) by the I-divergence:

\[ \min_u \; \sum_{m=1}^{n^2} \lambda \left( u_m - g_m \log u_m \right) + \|Du\|_1. \tag{5} \]
They proved that the solution of this convex model is theoretically equivalent to the exponential of the solution of the minimization problem (4), i.e., $u_\lambda = e^{z_\lambda}$; we refer the reader to [13] for the detailed proof. This model has the advantage that it does not need a nonlinear log transformation. There are also other speckle noise reduction methods using the TV regularization, such as the $m$th root relaxation method in [14] and the proximal linearization approach in [15].

However, as shown in [16, 17], the TV regularization sometimes produces unwanted staircase artifacts, i.e., it transforms smooth regions (ramps) into piecewise constant regions (staircase effects). Staircase solutions lead to "false edges" that do not exist in the true images. To remedy this disadvantage of the TV regularization for multiplicative noise removal, Liu et al. [18] proposed to employ the combination of TV and high-order TV [19] as the regularizer. Besides, Feng et al. [20] considered the total generalized variation (TGV) [16] as a penalty term. These methods work well on alleviating the staircase effects.

In this paper, to reduce these artifacts, we assume that the derivative of the underlying image is not only sparse but also exhibits overlapping group sparsity, i.e., large values of the derivative tend to congregate. This prior information on the true image is encoded in the regularization function named total variation with overlapping group sparsity (OGSTV). It has been successfully applied to signal denoising [21] and to image deblurring under Gaussian and impulsive noise contamination [22, 23], respectively. The experiments in those papers demonstrated the staircase-artifact alleviation ability of the OGSTV regularizer, so it is natural to use the OGSTV regularizer for despeckling here. The numerical experiments will demonstrate that the OGSTV regularizer is indeed suitable for despeckling.

The main contributions of this work are summarized as follows: (i) We propose an OGSTV regularization based model for speckle noise reduction. (ii) We show, through various numerical experiments, that the proposed model is suitable for despeckling since it alleviates TV's drawback of the staircase effect.

The rest of the paper is organized as follows. In the next section, we first give the definition of the OGSTV function and then derive the new speckle noise reduction model. In section 3,
based on the alternating direction method of multipliers, we develop an efficient algorithm for solving the corresponding despeckling problem. In section 4, we give numerical experiments for speckle noise reduction to demonstrate the effectiveness of the proposed method. Finally, concluding remarks are presented in section 5.
2 OGSTV for despeckling
In this section, we propose a despeckling model based on the OGSTV regularization. To facilitate understanding of the proposed model, we first briefly review the definition of the OGSTV regularization function, which we introduced in previous works [22, 23].
2.1 OGSTV
For a matrix $u \in \mathbb{R}^{n \times n}$, we define a $K \times K$-point group at the location $(i, j)$ of $u$ as

\[ u_{i,j,K} = \begin{bmatrix} u_{i-m_1, j-m_1} & u_{i-m_1, j-m_1+1} & \cdots & u_{i-m_1, j+m_2} \\ u_{i-m_1+1, j-m_1} & u_{i-m_1+1, j-m_1+1} & \cdots & u_{i-m_1+1, j+m_2} \\ \vdots & \vdots & \ddots & \vdots \\ u_{i+m_2, j-m_1} & u_{i+m_2, j-m_1+1} & \cdots & u_{i+m_2, j+m_2} \end{bmatrix} \in \mathbb{R}^{K \times K}, \tag{6} \]

where $m_1 = \lfloor \frac{K-1}{2} \rfloor$, $m_2 = \lfloor \frac{K}{2} \rfloor$, and $\lfloor x \rfloor$ denotes the greatest integer not greater than $x$. For the image $u \in \mathbb{R}^{n^2}$ considered in this paper, to define a $K^2$ group at the $m$th location of the vector $u$, we set $m = n(j-1) + i$ with $i, j = 1, 2, \cdots, n$ and let $u_{i,j}$ denote the $(n(j-1)+i)$th entry of the vector $u$; then we have

\[ P^{m,K}_u = \mathrm{Vec}(u_{i,j,K}) \in \mathbb{R}^{K^2}, \tag{7} \]

where $P^{m,K}_u$ denotes the $K^2$ group at the $m$th location of the vector $u$ and $\mathrm{Vec}$ stacks the columns of a matrix into a vector. Based on the notation in (7) and the definition in (6), the OGS function of $u$ is defined as

\[ \phi(u) = \sum_{m=1}^{n^2} \big\| P^{m,K}_u \big\|_2. \tag{8} \]

Consequently, we set the OGSTV regularization function $\varphi$ of $u$ to be of the form

\[ \varphi(u) = \phi(D_x u) + \phi(D_y u) = \sum_{m=1}^{n^2} \left( \big\| P^{m,K}_{D_x u} \big\|_2 + \big\| P^{m,K}_{D_y u} \big\|_2 \right). \tag{9} \]

Since $\varphi(u)$ is the sum of the convex function $\phi$ composed with the linear operators $D_x$ and $D_y$, the function $\varphi(u)$ is convex. It is interesting to note that $\varphi(u)$ in (9) reduces to the common anisotropic TV function [24] when $K = 1$.
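For intuition, the OGS function (8) and the OGSTV function (9) can be evaluated for a square image with a few lines of MATLAB. The sketch below is ours, not the authors' implementation: it uses periodic forward differences, and conv2 with the 'same' option, which places each K-by-K window only approximately as in (6) and zero-pads at the borders.

```matlab
function val = ogstv(u, K)
% OGSTV function (9): phi(Dx*u) + phi(Dy*u) for a square image u.
    Dxu = u(:, [2:end, 1]) - u;          % forward difference in x (periodic)
    Dyu = u([2:end, 1], :) - u;          % forward difference in y (periodic)
    val = ogs(Dxu, K) + ogs(Dyu, K);
end

function val = ogs(v, K)
% OGS function (8): sum over all pixels of the l2 norm of the K-by-K group.
    E   = conv2(v.^2, ones(K), 'same');  % sum of squares in each K-by-K window
    val = sum(sqrt(E(:)));
end
```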
2.2 Proposed model
In this subsection, we propose an OGSTV regularization based model for speckle noise reduction. In section 1, we introduced the TV-based exponential model (4) and the I-divergence model (5). In [20], Feng et al. proposed two despeckling models, PDTGV-Exp and PDTGV-Idiv, by replacing the TV regularization in models (4) and (5) with TGV, respectively. Their experiments showed that PDTGV-Idiv behaved slightly better than PDTGV-Exp. Furthermore, the computations related to the I-divergence based model are simpler than those for the exponential model. We therefore adopt the I-divergence based formulation in this work. By combining the I-divergence data fidelity term with the OGSTV regularization, we propose the following convex despeckling model:

\[ \min_u \; \sum_{m=1}^{n^2} \lambda \left( u_m - g_m \log u_m \right) + \varphi(u), \tag{10} \]

where $\varphi(u)$ is the OGSTV regularization function defined in (9). Since the model (10) is convex in $u$, it admits a global solution. In the next section, we will develop an efficient algorithm for solving the minimization problem (10).

Note that the regularization parameter $\lambda$ in (10) is a scalar constant. Commonly, the regularization parameter $\lambda$ plays an important role in improving the quality of denoised images. Since images always consist of features on different scales, a scalar regularization parameter may not be sufficient to handle different image scales. In [25], Chen and Cheng proposed a spatially adapted regularization parameter updating scheme for the despeckling problem. In the next section, we will also consider their spatially varying parameter updating scheme.
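Before moving on to the algorithm, we note that the objective in (10) can be evaluated directly for any candidate image; a small MATLAB sketch (reusing the ogstv helper sketched in section 2.1, and assuming u is strictly positive so that the logarithm is defined) is:

```matlab
% Objective of model (10): lambda * sum(u - g.*log(u)) + OGSTV(u).
objective = @(u, g, lambda, K) lambda * sum(u(:) - g(:).*log(u(:))) + ogstv(u, K);
```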
3 Corresponding algorithms
To solve the problem (10), we develop an efficient algorithm based on a popular and efficient tool named the alternating direction method of multipliers (ADMM). This method is a splitting version of the augmented Lagrangian method [26, 27]. By introducing three new auxiliary variables $w$, $v_x$ and $v_y$, we rewrite the minimization problem (10) as follows:

\[ \min_{u, w, v_x, v_y} \; \sum_{m=1}^{n^2} \lambda \left( w_m - g_m \log w_m \right) + \phi(v_x) + \phi(v_y) \quad \text{s.t.} \quad w = u, \; D_x u = v_x, \; D_y u = v_y. \tag{11} \]

The augmented Lagrange function of (11) is

\[ \mathcal{L}(u, w, v_x, v_y, b_0, b_1, b_2) = \sum_{m=1}^{n^2} \lambda \left( w_m - g_m \log w_m \right) + \phi(v_x) + \phi(v_y) + \frac{\sigma}{2} \left( \| u - w + b_0 \|_2^2 + \| D_x u - v_x + b_1 \|_2^2 + \| D_y u - v_y + b_2 \|_2^2 \right), \tag{12} \]
where $\sigma > 0$ is the penalty parameter and $b_i$, $i = 0, 1, 2$, are the Lagrangian multipliers. The ADMM for solving (11) updates the variables alternately by minimizing the augmented Lagrange function (12) over $u$, $w$, $v_x$ and $v_y$. In particular, the iterative scheme for solving (11) can be expressed as follows.

Fixing $v_x^k$, $v_y^k$ and $b_i^k$, $i = 0, 1, 2$, we minimize $\mathcal{L}$ in (12) with respect to $u$:

\[ u^{k+1} = \arg\min_u \; \| u - w^k + b_0^k \|_2^2 + \| D_x u - v_x^k + b_1^k \|_2^2 + \| D_y u - v_y^k + b_2^k \|_2^2. \tag{13} \]

The minimization problem (13) with respect to $u$ is a least squares problem, which is equivalent to the normal equation

\[ (D_x^T D_x + D_y^T D_y + I)\, u^{k+1} = (w^k - b_0^k) + D_x^T (v_x^k - b_1^k) + D_y^T (v_y^k - b_2^k). \tag{14} \]

The difference matrices $D_x$ and $D_y$ are block circulant with circulant blocks (BCCB) when periodic boundary conditions [28] are used, so the computations with BCCB matrices can be performed efficiently using fast Fourier transforms (FFTs).
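A minimal MATLAB sketch of the FFT-based solution of the normal equation (14) under periodic boundary conditions is given below. The function and variable names are ours, and psf2otf (Image Processing Toolbox) is used to build the transfer functions of the difference operators.

```matlab
function u = solve_u(w, b0, vx, b1, vy, b2, otfDx, otfDy)
% Solve (Dx'Dx + Dy'Dy + I) u = (w - b0) + Dx'(vx - b1) + Dy'(vy - b2) by FFTs.
% Precompute once: otfDx = psf2otf([1, -1], size(w)); otfDy = psf2otf([1; -1], size(w));
    numer = fft2(w - b0) ...
          + conj(otfDx) .* fft2(vx - b1) ...
          + conj(otfDy) .* fft2(vy - b2);
    denom = abs(otfDx).^2 + abs(otfDy).^2 + 1;   % eigenvalues of Dx'Dx + Dy'Dy + I
    u     = real(ifft2(numer ./ denom));
end
```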
For $w^{k+1}$, it can be obtained by

\[ w^{k+1} = \arg\min_w \; \sum_{m=1}^{n^2} \lambda \left( w_m - g_m \log w_m \right) + \frac{\sigma}{2} \| u^{k+1} - w + b_0^k \|_2^2. \tag{15} \]

With respect to each component of $w$, it is simple to show that the minimizer of (15) is

\[ w_m^{k+1} = \frac{1}{2} \left( \big( u^{k+1} + b_0^k \big)_m - \frac{\lambda}{\sigma} + \sqrt{ \left( \big( u^{k+1} + b_0^k \big)_m - \frac{\lambda}{\sigma} \right)^2 + \frac{4 \lambda g_m}{\sigma} } \right), \quad m = 1, 2, \cdots, n^2. \tag{16} \]
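The closed-form update (16) is a purely componentwise operation; a sketch with our own variable names is:

```matlab
function w = update_w(u, b0, g, lambda, sigma)
% Componentwise minimizer (16) of the w-subproblem (15).
    c = u + b0 - lambda/sigma;
    w = 0.5 * (c + sqrt(c.^2 + 4*lambda*g/sigma));
end
```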
For $v_x^{k+1}$ and $v_y^{k+1}$, we have

\[ v_x^{k+1} = \arg\min_{v_x} \; \frac{\sigma}{2} \| D_x u^{k+1} - v_x + b_1^k \|_2^2 + \phi(v_x), \qquad v_y^{k+1} = \arg\min_{v_y} \; \frac{\sigma}{2} \| D_y u^{k+1} - v_y + b_2^k \|_2^2 + \phi(v_y). \tag{17} \]

The minimizations with respect to $v_x$ and $v_y$ can be treated as evaluating the well-known Moreau proximity operator [7, 29] of $(1/\sigma)\phi$, denoted $\mathrm{prox}_{(1/\sigma)\phi}$, at $\tilde{v}$, i.e., for $\gamma \ge 0$,

\[ \mathrm{prox}_{\gamma \phi}(\tilde{v}) \equiv \arg\min_v \; \frac{1}{2} \| v - \tilde{v} \|_2^2 + \gamma \phi(v). \tag{18} \]

We use the majorization-minimization (MM) method [30, 31] to compute $\mathrm{prox}_{(1/\sigma)\phi}$ in (17). Since the related algorithm has been elaborately introduced in [23], we do not repeat it here.
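For completeness, one MM iteration for the proximity operator (18) with the overlapping-group penalty (8), in the spirit of [21, 23], can be sketched as follows. This is our simplified illustration, not the authors' exact implementation: it uses conv2 with the 'same' option for the window sums and a small constant to avoid division by zero.

```matlab
function v = prox_ogs(vtil, gamma, K, iters)
% Approximate prox of gamma*phi(.) as in (18), with phi the OGS function (8), via MM.
    v = vtil;                                            % initialize with the input
    for t = 1:iters
        gnorm = sqrt(conv2(v.^2, ones(K), 'same'));      % K-by-K group norms
        d     = conv2(1 ./ (gnorm + eps), ones(K), 'same');  % majorizer weights per pixel
        v     = vtil ./ (1 + gamma * d);                 % minimizer of the quadratic majorizer
    end
end
```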
At last, the Lagrangian multipliers $b_i^{k+1}$ are updated as follows:

\[ \begin{pmatrix} b_0^{k+1} \\ b_1^{k+1} \\ b_2^{k+1} \end{pmatrix} = \begin{pmatrix} b_0^k + \sigma \left( u^{k+1} - w^{k+1} \right) \\ b_1^k + \sigma \left( D_x u^{k+1} - v_x^{k+1} \right) \\ b_2^k + \sigma \left( D_y u^{k+1} - v_y^{k+1} \right) \end{pmatrix}. \tag{19} \]
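Putting the sub-steps (14), (16), (17) and (19) together, one sweep of the resulting ADMM iteration can be sketched in MATLAB as follows; solve_u, update_w and prox_ogs are the illustrative helpers sketched above, not the authors' code, and the inner MM iteration count is an assumption. The iteration is summarized formally as Algorithm 1 below.

```matlab
% One ADMM pass for model (10); repeat until the stopping criterion (22) holds.
u   = solve_u(w, b0, vx, b1, vy, b2, otfDx, otfDy);          % step 1: (14)
w   = update_w(u, b0, g, lambda, sigma);                     % step 2: (16)
Dxu = real(ifft2(otfDx .* fft2(u)));                         % apply Dx and Dy via FFTs
Dyu = real(ifft2(otfDy .* fft2(u)));
vx  = prox_ogs(Dxu + b1, 1/sigma, K, 5);                     % step 3: (17)
vy  = prox_ogs(Dyu + b2, 1/sigma, K, 5);
b0  = b0 + sigma * (u - w);                                  % step 4: (19)
b1  = b1 + sigma * (Dxu - vx);
b2  = b2 + sigma * (Dyu - vy);
```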
Based on the discussions above, we can now state the resulting algorithm for solving (10), which we name OGSTVD (OGSTV for Despeckling).

Algorithm 1 OGSTVD: OGSTV for Despeckling
initialization: noisy image $g$, group size $K$; initialize $\lambda$, $\sigma$, $v_x^1 = v_y^1 = g$, $b_i^1 = 0$, $i = 0, 1, 2$.
iteration:
1. Compute $u^{k+1}$ according to (14);
2. Compute $w^{k+1}$ according to (16);
3. Compute $v_x^{k+1}$ and $v_y^{k+1}$ according to (17);
4. Update $b_i^{k+1}$, $i = 0, 1, 2$, according to (19);
5. $k = k + 1$;
until a stopping criterion is satisfied.

It is clear that the minimizations in steps 1 and 2 have closed-form solutions. If the minimization in step 3 could also be solved exactly, the convergence of Algorithm 1 would be guaranteed. Thanks to Theorem 8 in [26], although step 3 in Algorithm 1 cannot be solved exactly, the convergence of Algorithm 1 is not compromised as long as the sequence of errors between the approximate and exact solutions of (18) at each iteration is absolutely summable. A similar conclusion can also be found in [7]. The stopping criterion of Algorithm 1 is given in equation (22).

If the parameter $\lambda$ is chosen as the spatially adapted parameter reported in Chen and Cheng's work [25], we obtain the following Algorithm 2 for despeckling, which we name SaOGSTVD (OGSTV with Spatially adapted parameter for Despeckling).
Algorithm 2 SaOGSTVD: OGSTV with Spatially adapted parameter for Despeckling
initialization: noisy image $g$; local window size $r^2$; step size $\rho = 5L$; $k = 1$; regularization parameter $\tilde{\lambda}^1 = \lambda^1$; pixel location $m = (j-1)n + i$ with $i, j = 1, \cdots, n$; positive constant $\delta = 1/(2L) + 1/(12L^2)$.
iteration:
1. Get $u^k$ by solving the minimization problem (10) using Algorithm 1 with $\lambda = \lambda^k$;
2. Update $\lambda^{k+1}$ based on the previous $u^k$:
   $z = g / u^k - \log(g / u^k)$;
   $\zeta(u^k)_m = \frac{1}{r^2} \sum_{s=1}^{r^2} \big( P^{m,r}_{z} \big)_s$;
   $(\tilde{\lambda}^{k+1})_m = (\tilde{\lambda}^k)_m + \rho \max\big( \zeta(u^k)_m - 1 - \delta, \, 0 \big)$;
   $(\lambda^{k+1})_m = \frac{1}{r^2} \sum_{s=1}^{r^2} \big( P^{m,r}_{\tilde{\lambda}^{k+1}} \big)_s$;
3. $k = k + 1$;
until a stopping criterion is satisfied.

For the details of the spatially adapted parameter update, we invite the reader to consult [18, 25]. The stopping criterion of Algorithm 2 is a maximum of three outer iterations, which is enough for a good performance.
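The local sums in step 2 are simply $r \times r$ window averages, so the update can be sketched in MATLAB as follows (variable names are ours; conv2 with a normalized box kernel approximates the window means, with 'same' handling at the borders):

```matlab
% One update of the spatially adapted parameter (step 2 of Algorithm 2).
box    = ones(r) / r^2;                          % r-by-r averaging window
z      = g ./ u - log(g ./ u);                   % pointwise residual statistic
zeta   = conv2(z, box, 'same');                  % local mean of z
lamTil = lamTil + rho * max(zeta - 1 - delta, 0);
lam    = conv2(lamTil, box, 'same');             % smoothed spatially varying lambda
```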
4 Numerical experiments
In this section, we carry out numerical experiments comparing the performance of the proposed methods OGSTVD and SaOGSTVD with two representative methods that use different regularization functions: MIDAL [7], which uses the TV regularization, and PDTGV [20], which is based on the TGV regularization. MIDAL was proposed by Bioucas-Dias and Figueiredo, who applied the ADMM to solve the exponential model (4). Due to the staircase-effect alleviation property of TGV, Feng et al. [20] proposed the despeckling model PDTGV (named PDTGV-Idiv in their paper) and employed the primal-dual method to solve it. All runs are implemented under 64-bit Windows 8 and Matlab v7.11.0 (R2010b) on a laptop equipped with an Intel Core i5-3317 CPU at 1.70 GHz and 4 GB of RAM. Note that all methods are coded in Matlab m-files. Twelve test images, including six real SAR images obtained from http://www.sandia.gov/radar/complex_data, are used for the experiments. These images, with sizes ranging from 256 × 256 to 420 × 420, are shown in Figure 4.1. In the experiments, the first six images are degraded by Gamma noise with mean one; the noise level is determined by L, and we set L = 1, 3 and 8 in the synthetic tests. The six real SAR images are dealt with in the subsection on real SAR experiments.
Figure 4.1. The test images: bldgA (420 × 420), bldgB (350 × 350), barbara (256 × 256), cmrman (256 × 256), parrot (256 × 256), beauty (256 × 256); SAR1 (258 × 258), SAR2 (256 × 256), SAR3 (358 × 358), SAR4 (410 × 410), SAR5 (328 × 328), SAR6 (386 × 386).
4.1 Synthetic experiments
We use the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) to quantitatively compare the quality of the restoration results of the synthetic tests by the different methods. The SSIM is a well-known quality metric for measuring the similarity between two images. It was developed by Wang et al. [32] and is based on three statistical measures that are much closer to how the human eye perceives differences between images. Let $u$ and $\tilde{u}$ be the original image and the despeckled image, respectively. The PSNR and SSIM are defined as follows:

\[ \mathrm{PSNR} = 10 \log_{10} \frac{n^2 \, \mathrm{Max}_u^2}{\| u - \tilde{u} \|_2^2}, \tag{20} \]

where $\mathrm{Max}_u$ is the maximum possible pixel value of the image $u$; for example, when the pixels are represented by 8 bits per sample, it is 255. Unless stated otherwise, the pixel values of the images in the following experiments are all in $[0, 255]$.

\[ \mathrm{SSIM} = \frac{(2 \mu_u \mu_{\tilde{u}} + C_1)(2 \sigma_{u\tilde{u}} + C_2)}{(\mu_u^2 + \mu_{\tilde{u}}^2 + C_1)(\sigma_u^2 + \sigma_{\tilde{u}}^2 + C_2)}, \tag{21} \]

where $\mu_u$ and $\mu_{\tilde{u}}$ are the means of $u$ and $\tilde{u}$, $\sigma_u^2$ and $\sigma_{\tilde{u}}^2$ are their variances, and $\sigma_{u\tilde{u}}$ is the covariance of $u$ and $\tilde{u}$. The positive constants $C_1$ and $C_2$ stabilize the division with a near-zero denominator; they are set to the default values in the original Matlab code ssim written by the authors of [32], which can be downloaded from Wang's website https://ece.uwaterloo.ca/~z70wang/research/ssim/ssim.m.

The stopping criterion for MIDAL, PDTGV and the proposed OGSTVD is the same: the relative difference between successive iterates of the despeckled image should satisfy

\[ \frac{\| u^{k+1} - u^k \|_2}{\| u^{k+1} \|_2} \le \epsilon, \tag{22} \]

and we set $\epsilon = 3 \times 10^{-4}$ for all experiments in this subsection. The regularization parameters for all methods (except our spatially adapted method, Algorithm 2, which updates the parameter automatically) are tuned by a grid search [33, 34] to give the best PSNR improvement: by searching through a given subset of the parameter values, the grid search selects the best parameter based on a performance metric, here the PSNR. For fair comparisons, at each noise level we fix the noise realization to be the same for all methods.

Table 4.1 summarizes the PSNR, SSIM and CPU time of the despeckled images obtained by MIDAL, PDTGV, and our proposed methods. We first analyze the behavior of the methods with a constant scalar regularization parameter, MIDAL, PDTGV, and the proposed OGSTVD, and then conclude this subsection by illustrating the performance of SaOGSTVD. From Table 4.1, it is easy to see that the proposed OGSTVD achieves the highest PSNR and SSIM compared with MIDAL and PDTGV. Regarding the computing time, the table shows that OGSTVD is, on average, 2 to 4 times faster than MIDAL and 6 to 20 times faster than PDTGV.
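For reference, the quality measure (20) and the stopping test (22) above each amount to a single line of MATLAB (variable names are ours; images are assumed to be in [0, 255]):

```matlab
% PSNR (20) and relative-change stopping test (22).
psnr_val = 10 * log10(numel(u) * 255^2 / norm(u(:) - utilde(:))^2);
stop     = norm(unew(:) - uold(:)) / norm(unew(:)) <= 3e-4;
```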
L   Image     MIDAL                  PDTGV                   OGSTVD                 SaOGSTVD
              PSNR   SSIM    Time    PSNR   SSIM    Time     PSNR   SSIM    Time    PSNR   SSIM    Time
1   bldgA     20.84  0.6136  21.0    20.69  0.6004  122.9    21.32  0.6305  5.4     21.36  0.6359  17.2
1   barbara   20.77  0.4562  2.9     20.93  0.4813  21.9     21.24  0.5094  1.6     21.24  0.5036  5.0
1   bldgB     19.14  0.4417  6.4     18.88  0.4335  40.7     19.29  0.4600  3.2     19.33  0.4703  9.2
1   cmrman    21.15  0.6660  3.4     20.99  0.6204  21.0     21.46  0.6460  1.7     21.57  0.6707  6.6
1   parrot    19.98  0.6139  2.7     19.76  0.6183  17.7     20.71  0.6628  1.8     20.85  0.6796  5.2
1   beauty    22.37  0.6251  2.9     23.01  0.7206  21.3     23.42  0.7450  2.0     23.50  0.7404  5.2
3   bldgA     22.62  0.7160  17.4    22.56  0.7078  57.9     22.92  0.7284  5.3     23.03  0.7403  13.4
3   barbara   22.16  0.5382  2.6     22.21  0.5474  10.6     22.52  0.5720  1.9     22.64  0.5913  3.4
3   bldgB     20.92  0.5429  5.3     20.69  0.5310  18.5     21.02  0.5561  4.0     21.21  0.5856  6.1
3   cmrman    23.23  0.7193  2.7     23.08  0.6845  10.3     23.54  0.7214  1.7     23.79  0.7477  5.1
3   parrot    22.78  0.6933  2.3     22.60  0.7051  8.5      23.29  0.7421  1.7     23.42  0.7483  3.8
3   beauty    25.43  0.7661  2.7     25.58  0.7834  9.8      26.19  0.8157  1.6     26.22  0.8115  3.9
8   bldgA     24.38  0.8004  19.4    24.08  0.7840  30.5     24.56  0.8073  3.9     24.71  0.8170  10.0
8   barbara   23.51  0.6232  2.6     23.46  0.6230  6.0      24.11  0.6657  1.2     24.39  0.6967  2.7
8   bldgB     22.98  0.6407  6.1     22.72  0.6294  9.7      23.02  0.6559  3.0     23.27  0.6829  5.1
8   cmrman    25.42  0.7549  2.9     25.26  0.7437  6.5      25.64  0.7686  1.2     25.93  0.7975  3.4
8   parrot    25.26  0.7777  2.8     25.09  0.7694  5.9      25.56  0.8001  1.2     25.73  0.8108  3.1
8   beauty    27.70  0.8271  2.8     27.95  0.8380  6.3      28.41  0.8653  1.1     28.45  0.8732  3.6

Table 4.1. Comparison of the performance of different methods in terms of PSNR, SSIM and computational time. The time is reported in seconds. Note that the blue number denotes the second best performance, while the red number denotes the best.
Figure 4.2. Despeckling results of the images "bldgA" and "bldgB" obtained by different methods: MIDAL [7], PDTGV [20], and the proposed OGSTVD and SaOGSTVD, with L = 1.
Figure 4.3. Despeckling results of the images "barbara" and "cmrman" obtained by different methods: MIDAL [7], PDTGV [20], and the proposed OGSTVD and SaOGSTVD, with L = 3.
Figure 4.4. Despeckling results of the images "parrot" and "beauty" obtained by different methods: MIDAL [7], PDTGV [20], and the proposed OGSTVD and SaOGSTVD, with L = 8.
Figure 4.5. Visualization of Table 4.3: PSNR obtained by MIDAL, PDTGV and OGSTVD for each test image (bldgA, barbara, bldgB, cmrman, parrot, beauty).
Figure 4.6. Despeckling results of the real SAR images "SAR1" and "SAR2" obtained by different methods: MIDAL [7], PDTGV [20], and the proposed OGSTVD.
For some experiments, such as the image "bldgA", MIDAL gives better denoising results than PDTGV with respect to PSNR and SSIM, while for other experiments, such as the image "beauty", PDTGV behaves much better than MIDAL. In other words, in terms of PSNR and SSIM, MIDAL and PDTGV are roughly comparable across the experiments. Clearly, MIDAL is much faster than PDTGV.

In Figure 4.2, we show the despeckled images "bldgA" and "bldgB" for L = 1. From the figures, we see that edges in the despeckled images are preserved very well by MIDAL; however, the despeckled results are also very blocky, i.e., they exhibit the so-called staircase effects. By contrast, the proposed OGSTVD alleviates this drawback well. It is also clear from Figure 4.2 that PDTGV alleviates the staircase artifacts very well; this phenomenon is especially obvious in the zoomed-in regions of the despeckled images in Figures 4.2 (e)-(h) and (m)-(p). However, we can also observe that the results of PDTGV are somewhat blurry compared with the other methods, whereas the proposed OGSTVD strikes a good balance between edge preservation and staircase-effect reduction.
Figure 4.7. Evolution of the relative difference between successive iterates of the restored image along the iterations for different methods on the real SAR images: (a) SAR1; (b) SAR2.

For L = 3, the despeckled results of the images "barbara" and "cmrman" are displayed in Figure 4.3. MIDAL produces serious staircase effects, and both PDTGV and MIDAL erase almost all details of the scarf; in comparison, OGSTVD gives better results. As for despeckling the image "cmrman", the columns of the domed building behind the tripod disappear with MIDAL and PDTGV, while OGSTVD recovers them much better. The middle silvery pole of the tripod is also recovered very well by OGSTVD compared with PDTGV.

Figure 4.4 shows the despeckled "parrot" and "beauty" for L = 8. For the image "parrot", it is clear that MIDAL gives staircase artifacts in the despeckled result, while PDTGV overcomes this drawback to a great extent. As can be seen from Figures 4.4 (b) and (c), in the white region of the parrot's head, OGSTVD restores a much smoother result than PDTGV. Besides, we can also observe from the zoomed-in regions in Figures 4.4 (f) and (g) that the beak of the parrot obtained by OGSTVD contains more details compared with PDTGV. From the visual appearance of the despeckled images in Figures 4.4 (i)-(k), we see that OGSTVD reduces the staircase effects well compared with MIDAL, and that PDTGV also overcomes this drawback, as can be clearly seen from the face of "beauty". In the same region, however, PDTGV produces many white spots which do not belong to the original image. From the zoomed-in beauty's eye shown in Figures 4.4 (m)-(o), we find that OGSTVD gives a sharper result with more details.

To show the competitive advantage of the proposed OGSTVD, for the noise level L = 3, we repeat the experiments 50 times with different noise realizations. In Tables 4.2 and 4.3 we give the means and standard errors of the relative error and the PSNR. They show that all methods are very robust, i.e., the results in terms of PSNR and relative error do not change much over different noise samples from the same distribution. Furthermore, we can also conclude that the proposed OGSTVD behaves better than MIDAL and PDTGV. Figure 4.5 illustrates the results in Table 4.3 and visually displays the superiority of OGSTVD with respect to PSNR.

Method     bldgA       barbara     bldgB       cmrman      parrot      beauty
MIDAL      0.22 ± 1.0  0.14 ± 0.8  0.22 ± 1.0  0.13 ± 1.1  0.13 ± 1.4  0.10 ± 1.3
PDTGV      0.22 ± 1.2  0.14 ± 0.9  0.23 ± 1.2  0.13 ± 1.3  0.13 ± 1.6  0.10 ± 1.8
OGSTVD     0.22 ± 1.0  0.14 ± 1.0  0.22 ± 1.1  0.12 ± 1.2  0.13 ± 1.4  0.09 ± 1.3

Table 4.2. Relative errors in denoising different images for the various methods. The experiments are repeated 50 times under the same noise distribution with L = 3. Entries of the form μ ± σ mean that the mean is μ and the standard error is σ × 10⁻³.

Method     bldgA         barbara       bldgB         cmrman        parrot        beauty
MIDAL      22.63 ± 0.05  22.08 ± 0.05  20.93 ± 0.04  23.20 ± 0.08  22.65 ± 0.10  25.18 ± 0.11
PDTGV      22.53 ± 0.05  22.13 ± 0.06  20.70 ± 0.04  23.08 ± 0.09  22.46 ± 0.10  25.39 ± 0.14
OGSTVD     22.84 ± 0.04  22.39 ± 0.06  20.96 ± 0.04  23.39 ± 0.09  23.06 ± 0.09  25.86 ± 0.11

Table 4.3. PSNR in denoising different images for the various methods. The experiments are repeated 50 times under the same noise distribution with L = 3. Entries of the form μ ± σ mean that the mean is μ and the standard error is σ.

We next analyze the results obtained by the proposed SaOGSTVD, i.e., OGSTV-based despeckling with the spatially adapted regularization parameter. The PSNR and SSIM of the despeckled results in Table 4.1 obtained by SaOGSTVD are generally higher than those of OGSTVD with the scalar parameter. Visually, SaOGSTVD is capable of preserving more details, such as the texture of the scarf in Figure 4.3(h), whose zoomed-in regions show that SaOGSTVD retains more details than the other methods. Similar phenomena can also be observed in Figure 4.2(h)
(building roof) and Figure 4.4(p) (eyeball), and so on. However, the price is that SaOGSTVD needs three or more times the computing time of OGSTVD to obtain these better results, as can be seen from Table 4.1.
4.2 Real SAR experiments
In this subsection, we demonstrate the performance of MIDAL, PDTGV, and the proposed OGSTVD for despeckling two real SAR images, shown in Figures 4.1(g) and (h). The despeckled images are shown in Figure 4.6. It is clear that all methods remove the speckle noise very well. However, MIDAL recovers results with a blocky appearance, while PDTGV and OGSTVD effectively reduce this undesired effect. In Figure 4.7, we present the convergence curves of the different methods, measured by the relative difference between successive iterates of the despeckled image as defined in (22); the threshold ε in (22) is set to 1 × 10⁻⁴ for all methods here. Obviously, our method OGSTVD converges faster than MIDAL and PDTGV. Furthermore, we show the despeckling results of the proposed OGSTVD on the other four real SAR images in Figure 4.8.
5 Conclusion
In this paper, we work on despeckling problems based on the OGSTV regularization. Under the framework of ADMM, we develop an efficient algorithm for finding solutions of the resulting minimization problems. Since the regularization parameter is very important for despeckling, apart from the usual constant scalar parameter, we also consider a spatially adapted one. To demonstrate the superiority of the proposed methods, we carry out numerical experiments in comparison with other methods. The results show that our methods combine fast computation with effective reduction of the staircase effects usually caused by common TV-based methods. It should be mentioned that the regularization method in this paper can be extended to image deblurring with speckle noise, as in Dong et al.'s work [35] and Zhao et al.'s work [36].
Acknowledgments
The authors would like to express their great thankfulness to the Editor-in-Chief Prof. Zidong Wang and the reviewers for their helpful suggestions for revising this paper. The authors would like to thank Wensen Feng, Hong Lei and Yang Gao for providing their code and original test images. This research is supported by the 973 Program (2013CB329404), NSFC (61370147, 61401172, 11401081), NSF of Jiangsu Province (BK20131209), and the Fundamental Research Funds for the Central Universities (ZYGX2013 Z005).
Figure 4.8. Despeckling results of the real SAR images SAR3-SAR6 (Figure 4.1 (i)-(l)) by the proposed OGSTVD.
References
[1] Sara Parrilli, Mariana Poderico, Cesario Vincenzo Angelino, and Luisa Verdoliva. A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage. IEEE Transactions on Geoscience and Remote Sensing, 50(2):606–616, 2012.
[2] A Völker, Pavel Zakharov, B Weber, F Buck, and Frank Scheffold. Laser speckle imaging with an active noise reduction scheme. Optics Express, 13(24):9782–9787, 2005.
[3] Manya Afonso and J Miguel Sanches. Image reconstruction under multiplicative speckle noise using total variation. Neurocomputing, 150:200–213, 2015.
[4] Joseph M Schmitt, SH Xiang, and Kin M Yung. Speckle in optical coherence tomography. Journal of Biomedical Optics, 4(1):95–105, 1999.
[5] Joseph W Goodman. Speckle Phenomena in Optics: Theory and Applications. Roberts and Company Publishers, 2007.
[6] Chris Oliver and Shaun Quegan. Understanding Synthetic Aperture Radar Images. SciTech Publishing, 2004.
[7] José M Bioucas-Dias and Mário AT Figueiredo. Multiplicative noise removal using variable splitting and constrained optimization. IEEE Transactions on Image Processing, 19(7):1720–1730, 2010.
[8] Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268, 1992.
[9] Leonid Rudin, Pierre-Louis Lions, and Stanley Osher. Multiplicative denoising and deblurring: theory and algorithms. In Geometric Level Set Methods in Imaging, Vision, and Graphics, pages 103–119. Springer, 2003.
[10] Gilles Aubert and Jean-François Aujol. A variational approach to removing multiplicative noise. SIAM Journal on Applied Mathematics, 68(4):925–946, 2008.
[11] Yu-Mei Huang, Michael K Ng, and You-Wei Wen. A new total variation method for multiplicative noise removal. SIAM Journal on Imaging Sciences, 2(1):20–40, 2009.
[12] Jianing Shi and Stanley Osher. A nonlinear inverse scale space method for a convex multiplicative noise model. SIAM Journal on Imaging Sciences, 1(3):294–321, 2008.
[13] Gabriele Steidl and Tanja Teuber. Removing multiplicative noise by Douglas-Rachford splitting methods. Journal of Mathematical Imaging and Vision, 36(2):168–184, 2010.
[14] Sangwoon Yun and Hyenkyun Woo. A new multiplicative denoising variational model based on m-th root transformation. IEEE Transactions on Image Processing, 21(5):2523–2533, 2012.
[15] Hyenkyun Woo and Sangwoon Yun. Proximal linearized alternating direction method for multiplicative denoising. SIAM Journal on Scientific Computing, 35(2):B336–B358, 2013.
[16] Kristian Bredies, Karl Kunisch, and Thomas Pock. Total generalized variation. SIAM Journal on Imaging Sciences, 3(3):492–526, 2010.
[17] Konstantinos Papafitsoros and Carola-Bibiane Schönlieb. A combined first and second order variational approach for image reconstruction. Journal of Mathematical Imaging and Vision, 48(2):308–338, 2014.
[18] Jun Liu, Ting-Zhu Huang, Zongben Xu, and Xiao-Guang Lv. High-order total variation-based multiplicative noise removal with spatially adapted parameter selection. Journal of the Optical Society of America A, 30(10):1956–1966, 2013.
[19] Marius Lysaker, Arvid Lundervold, and Xue-Cheng Tai. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Transactions on Image Processing, 12(12):1579–1590, 2003.
[20] Wensen Feng, Hong Lei, and Yang Gao. Speckle reduction via higher order total variation approach. IEEE Transactions on Image Processing, 23(4):1831–1843, 2014.
[21] Ivan W Selesnick and Po-Yu Chen. Total variation denoising with overlapping group sparsity. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 5696–5700. IEEE, 2013.
[22] Gang Liu, Ting-Zhu Huang, Jun Liu, and Xiao-Guang Lv. Total variation with overlapping group sparsity for image deblurring under impulse noise. PLoS ONE, 10(4), 2015.
[23] Jun Liu, Ting-Zhu Huang, Ivan W Selesnick, Xiao-Guang Lv, and Po-Yu Chen. Image restoration using total variation with overlapping group sparsity. Information Sciences, 295:232–246, 2015.
[24] Zhi-Kai Huang, Zhi-Hong Li, Han Huang, Zhi-Biao Li, and Ling-Ying Hou. Comparison of different image denoising algorithms for Chinese calligraphy images. Neurocomputing, 188:102–112, 2016.
[25] Dai-Qiang Chen and Li-Zhi Cheng. Spatially adapted total variation model to remove multiplicative noise. IEEE Transactions on Image Processing, 21(4):1650–1662, 2012.
[26] Jonathan Eckstein and Dimitri P Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming, 55(1-3):293–318, 1992.
[27] Daniel Gabay and Bertrand Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics with Applications, 2(1):17–40, 1976.
[28] Per Christian Hansen, James G Nagy, and Dianne P O'Leary. Deblurring Images: Matrices, Spectra, and Filtering, volume 3. SIAM, 2006.
[29] Patrick L Combettes and Valérie R Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling & Simulation, 4(4):1168–1200, 2005.
[30] David R Hunter and Kenneth Lange. A tutorial on MM algorithms. The American Statistician, 58(1):30–37, 2004.
[31] Mário AT Figueiredo, José M Bioucas-Dias, and Robert D Nowak. Majorization-minimization algorithms for wavelet-based image restoration. IEEE Transactions on Image Processing, 16(12):2980–2991, 2007.
[32] Zhou Wang, Alan Conrad Bovik, Hamid Rahim Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
[33] Viren Jain, Joseph F Murray, Fabian Roth, Srinivas Turaga, Valentin Zhigulin, Kevin L Briggman, Moritz N Helmstaedter, Winfried Denk, and H Sebastian Seung. Supervised learning of image restoration with convolutional networks. In Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on, pages 1–8. IEEE, 2007.
[34] Pekka Ruusuvuori, Tarmo Äijö, Sharif Chowdhury, Cecilia Garmendia-Torres, Jyrki Selinummi, Mirko Birbaumer, Aimée M Dudley, Lucas Pelkmans, and Olli Yli-Harja. Evaluation of methods for detection of fluorescence labeled subcellular objects in microscope images. BMC Bioinformatics, 11(1):1, 2010.
[35] Yiqiu Dong and Tieyong Zeng. A convex variational model for restoring blurred images with multiplicative noise. SIAM Journal on Imaging Sciences, 6(3):1598–1625, 2013.
[36] Xi-Le Zhao, Fan Wang, and Michael K Ng. A new convex optimization model for multiplicative noise and blur removal. SIAM Journal on Imaging Sciences, 7(1):456–475, 2014.