ARTICLE IN PRESS
JID: NEUCOM
[m5G;August 31, 2019;15:32]
Neurocomputing xxx (xxxx) xxx
Contents lists available at ScienceDirect
Neurocomputing journal homepage: www.elsevier.com/locate/neucom
An efficient nonlocal variational method with application to underwater image restoration

Guojia Hou a,∗, Zhenkuan Pan a, Guodong Wang a, Huan Yang a, Jinming Duan b

a College of Computer Science and Technology, Qingdao University, Qingdao, PR China
b School of Computer Science, University of Birmingham, Birmingham B15 2TT, UK
Article info

Article history: Received 25 February 2019; Revised 4 July 2019; Accepted 14 August 2019; Available online xxx. Communicated by Prof. Liu Guangcan.

Keywords: Underwater image formation model; Variational model; Nonlocal differential operator; ADMM; Image restoration
Abstract

Light is absorbed and scattered when it travels through water, which often causes captured underwater images to suffer from blurring, low contrast and color degradation. To overcome these problems, we propose a novel variational model based on nonlocal differential operators, in which the underwater image formation model is successfully integrated into the variational framework. The underwater dark channel prior (UDCP) and quad-tree subdivision methods are applied in the construction of the underwater image formation model to estimate the transmission map and the global background light. Furthermore, we employ a fast algorithm based on the alternating direction method of multipliers (ADMM) to speed up the solving procedure. To reproduce color saturation, we perform a Gamma correction operation on the recovered image. Both the real underwater image application test and the simulation experiment demonstrate that the proposed underwater nonlocal total variational (UNLTV) approach achieves superb performance on dehazing, denoising, and improving the visibility of underwater images. In addition, six state-of-the-art algorithms are compared under different challenging scenes to evaluate their effectiveness and robustness. Extensive qualitative and quantitative experimental comparisons further validate the superiority of the proposed method. © 2019 Elsevier B.V. All rights reserved.
1. Introduction

On account of numerous marine scientific applications such as sea life monitoring, ocean environment inspection, engineering exploration, and underwater object detection and recognition, research on underwater image processing is growing rapidly. Unfortunately, owing to the effects of absorption and scattering of light as it travels through water, captured underwater images often suffer from low contrast, blurring or invisibility, color distortion, non-uniform illumination and noise. To address these problems, many efforts have been devoted to enhancing and restoring underwater images [1–5]. For improving contrast, Petit et al. [6] used a geometric quaternion transformation instead of the underwater background color for the hue axis, and achieved an enhanced underwater image based on an optical attenuation inversion calculation. The method can effectively solve the problem of underwater images being bluish due to light attenuation, but uneven illumination cannot be eliminated, and strong-light areas will be excessively enhanced. Ghani
∗ Corresponding author. E-mail address: [email protected] (G. Hou).
and Isa [7] improved the unsupervised color correction method (UCM) proposed by Carlevaris-Bianco et al. [8] to increase contrast and balance over-enhanced and under-enhanced areas. In [9], a novel approach was proposed to enhance contrast and retain more details, based on contrast limited adaptive histogram equalization (CLAHE) and the wavelet transform (WT). To correct color, Ancuti et al. [10] presented a scheme based on two-image fusion, in which they employ color compensation and white balance to define the related weight maps. Furthermore, the multiscale fusion strategy is adopted to avoid generating artifacts in the low-frequency components of the reconstructed image during sharp weight map conversion. Bianco et al. [11] changed the chromatic component by moving its distribution around the white point, and stretched the luminance component to improve image contrast. In [12], a weakly supervised color transformation technique inspired by cycle-consistent adversarial networks (CCAN) was introduced for color correction. In terms of eliminating non-uniform brightness, Kaeli and Singh [13] estimated the attenuation coefficient and the speed of the illumination source based on overlapping color imagery from a Doppler velocity log and a sequence of sound ranges, respectively. Afterward, Zhang et al. [14] studied the characteristics of underwater imaging under an artificial light source and achieved
https://doi.org/10.1016/j.neucom.2019.08.041 0925-2312/© 2019 Elsevier B.V. All rights reserved.
Please cite this article as: G. Hou, Z. Pan and G. Wang et al., An efficient nonlocal variational method with application to underwater image restoration, Neurocomputing, https://doi.org/10.1016/j.neucom.2019.08.041
underwater image enhancement via color correction and illumination adjustment. They also mitigated the effect of shadow and background light on image quality and successfully solved the problem of uneven illumination. In part, the process of underwater degradation is similar to the outdoor fogging model. Recently, a large number of underwater image dehazing methods have emerged based on the image formation model derived from the dark channel prior (DCP) proposed by He et al. [15]. Chiang and Chen [16], Serikawa and Lu [17], and Li et al. [18] have applied the DCP method to improve the visibility of underwater images. Afterward, Wen et al. [19] and Drews et al. [20,21] improved the DCP method to overcome its limitations when directly applied to underwater image restoration. The above-mentioned methods can improve visibility and brightness and suppress noise. However, each is usually aimed at a single problem of underwater degraded images. To address this shortcoming, multi-algorithm integrated techniques [22–25] are conducted step by step. On account of the influence of multi-factor coupling and the independence of each step, the traditional multi-step method is often regarded as 'caring for this and losing that'. For example, noise will be simultaneously enlarged in the process of enhancing contrast; analogously, the image will become blurred while reducing noise. Fortunately, many studies on variational methodology [26–28] have been promoted by its superiority in convenient expression and the stability of its numerical algorithms. In recent years, the variational method has been widely used in land-based image processing, especially in image enhancement [29–33], defogging [34–36], segmentation [37–40], registration [41–43], and inpainting [44,45]. Nevertheless, few studies [46,47] have applied the variational approach to underwater image processing.
To address the previously discussed problems, we propose a novel approach which enables restoring underwater images based on nonlocal differential operators. The contributions of this paper are summarized as follows: (a) the underwater image formation model is successfully integrated into the variational framework; (b) the UDCP and quad-tree subdivision methods are combined to estimate the transmission map and the global background light; (c) a novel nonlocal variational model is proposed for dehazing, denoising, and improving color rendition simultaneously; (d) a fast algorithm is designed based on the ADMM to accelerate the whole process. The rest of our work is organized as follows. Related work is briefly reviewed in Section 2. Section 3 introduces the proposed method. Section 4 illustrates real-world application and simulation experimental results; furthermore, the qualitative and quantitative comparison results are presented. Finally, the conclusion and future work are provided in Section 5.

2. Background and foundation

2.1. Underwater image formation model

Based on the Jaffe–McGlamery model [48,49], the underwater imaging model can be divided into three components: direct illumination (Ed), forward scattering (Ef) and background scattering (Eb), which is given by
E = Ed + Ef + Eb.   (1)
Due to scattering and absorption, only part of light reaches the camera, then the direct component can be defined as:
Ed = J(x) · t(x),   (2)

where J is the scene radiance, t is the transmission map, and x is the pixel coordinate.
The forward scatter component can be regarded as a convolution of the direct component and the point spread function (PSF). It can be written as
Ef = J(x) ∗ k(x),   (3)
where k is the convolution kernel function representing the diffusion of light due to forward scattering. The background scattering component does not originate from reflection off the scene object; rather, it is partial ambient light received by the camera. Therefore, Eb can be stated as
Eb = B · (1 − t(x)),   (4)
where B denotes global background light (GBL). Assuming that the scene is not far from the camera, the forward scatter can be ignored. Then, the simplified underwater imaging model can be expressed as
I(x) = J(x) · t(x) + B(1 − t(x)),   (5)
where I is the captured underwater image. In order to obtain the haze-free image J, it is essential to estimate B and t. The estimation of the global background light B is usually replaced empirically by the brightest pixel in the observed intensity I, which is estimated in (6):

Bc = max_{x∈I} min_{y∈Ω(x)} Ic(y),   (6)
where Ω(x) is a square local patch centered at x, and c ∈ {r, g, b} represents the color channel. As is well known, DCP is the most popular and effective method to estimate the transmission t. For restoring underwater images, Wen [19] and Drews-Jr [20] improved the DCP by considering only the blue and green channels, which is called the underwater dark channel prior (UDCP). The UDCP is defined as
Judark(x) = min_{c∈{g,b}} min_{y∈Ω(x)} Jc(y),   (7)
Empirically, in most cases, the intensity value of Judark tends to zero. The UDCP denotes that Judark(x) → 0, which indicates that

min_{y∈Ω(x)} min_{c∈{g,b}} ( Jc(y) / Bc ) = 0.   (8)
Combining (5) with (8), the estimation of t is described as

t(x) = 1 − min_{c∈{g,b}} min_{y∈Ω(x)} ( Ic(y) / Bc ).   (9)
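As an illustration of Eqs. (6)–(9), the following minimal numpy sketch estimates B from the brightest dark-channel pixel and t from the green/blue channels. It is a sketch under our own naming, with a simple loop-based minimum filter standing in for an optimized one; it is not the authors' implementation:

```python
import numpy as np

def min_filter(img, r):
    """Per-pixel minimum over a (2r+1)x(2r+1) patch, edge-padded.
    Illustrative sketch, not the authors' code."""
    img = np.asarray(img, dtype=float)
    p = np.pad(img, r, mode="edge")
    out = np.full_like(img, np.inf)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out = np.minimum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def udcp_transmission(I, r=3):
    """Estimate background light B and transmission t from an RGB image
    I in [0, 1] (H x W x 3), following Eqs. (6)-(9): B at the brightest
    dark-channel pixel, t from the green/blue underwater dark channel."""
    dark_rgb = min_filter(I.min(axis=2), r)              # dark channel over r, g, b
    y, x = np.unravel_index(np.argmax(dark_rgb), dark_rgb.shape)
    B = I[y, x, :]                                       # global background light, Eq. (6)
    gb = np.minimum(I[..., 1] / B[1], I[..., 2] / B[2])  # green/blue channels only (UDCP)
    t = 1.0 - min_filter(gb, r)                          # Eq. (9)
    return B, np.clip(t, 0.05, 1.0)
```

The clipping of t to a small positive floor is a common practical safeguard against division by zero in the later recovery step.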
2.2. Basic nonlocal operators

Here, we present some basic definitions of the nonlocal differential operators introduced in [50]. The concept of a nonlocal operator is based on the similarity between a small neighborhood (patch) of the current pixel and small neighborhoods (patches) of other pixels in the image. The pixel similarity is computed from these small patches. Given an image f(x, y): Ω → R, the similarity between x and y can be described as
w(x, y) = exp( −(1/h²) ∫ Ga(t) |f(x + t) − f(y + t)|² dt ),   (10)

where Ga is a Gaussian kernel with standard deviation a, t is the displacement of the pixel coordinate, and h acts as a similarity threshold controlling the decay of the weights. Fig. 1 illustrates the scheme for computing the pixel similarity.
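The patch-similarity weight of Eq. (10) can be discretized as below; this is a minimal sketch with our own function and parameter names, where a normalized discrete Gaussian stands in for the continuous kernel Ga:

```python
import numpy as np

def nl_weight(f, x, y, r=2, a=1.0, h=0.1):
    """Nonlocal similarity weight w(x, y) of Eq. (10), discretized as a
    Gaussian-weighted squared difference of the (2r+1)x(2r+1) patches
    around pixels x and y (row/col index pairs). Illustrative sketch."""
    # Discrete Gaussian kernel Ga over patch displacements t
    d = np.arange(-r, r + 1)
    ty, tx = np.meshgrid(d, d, indexing="ij")
    Ga = np.exp(-(ty**2 + tx**2) / (2 * a**2))
    Ga /= Ga.sum()
    px = f[x[0]-r:x[0]+r+1, x[1]-r:x[1]+r+1]   # patch f(x + t)
    py = f[y[0]-r:y[0]+r+1, y[1]-r:y[1]+r+1]   # patch f(y + t)
    return np.exp(-np.sum(Ga * (px - py) ** 2) / h**2)
```

Identical patches give a weight of 1, and the weight decays toward 0 as the patches differ, consistent with Fig. 1.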
Fig. 1. Schematic diagram of non-local mean weight distribution.
Therefore, the nonlocal gradient operator ∇NL u(x, y): Ω → Ω × Ω can be defined as

(∇NL u)(x, y) := (u(y) − u(x)) √w(x, y).   (11)

For a nonlocal vector v(x, y): Ω × Ω → R, we define its dot product at x

(v1 · v2)(x) := ∫Ω v1(x, y) v2(x, y) dy,   (12)

and its inner product

⟨v1, v2⟩ := ⟨v1 · v2, 1⟩ = ∫Ω×Ω v1(x, y) v2(x, y) dx dy.   (13)

The magnitude of a vector is

|v|(x) := √(v · v) = √( ∫Ω (v(x, y))² dy ).   (14)

Then, the nonlocal divergence can be defined as the adjoint of the nonlocal gradient

divNL v(x) := ∫Ω (v(x, y) − v(y, x)) √w(x, y) dy.   (15)

Now, the nonlocal Laplacian can be expressed by

ΔNL u(x) := (1/2) divNL(∇NL u(x)) = ∫Ω (u(y) − u(x)) w(x, y) dy.   (16)
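On a discrete image these operators reduce to weighted sums over pixel pairs. The toy sketch below (our own discretization, with u a length-n vector and the weights a symmetric n × n matrix W) also checks the identity ΔNL = (1/2) divNL(∇NL ·) stated in Eq. (16):

```python
import numpy as np

def nl_grad(u, W):
    """Discrete nonlocal gradient, Eq. (11): g[x, y] = (u[y] - u[x]) * sqrt(W[x, y]).
    Illustrative discretization, not the authors' code."""
    return (u[None, :] - u[:, None]) * np.sqrt(W)

def nl_div(v, W):
    """Discrete nonlocal divergence, Eq. (15)."""
    return np.sum((v - v.T) * np.sqrt(W), axis=1)

def nl_laplacian(u, W):
    """Discrete nonlocal Laplacian, Eq. (16)."""
    return np.sum(W * (u[None, :] - u[:, None]), axis=1)
```

For symmetric W the identity holds exactly, and the nonlocal Laplacian of a constant signal is zero, as expected of a (graph) Laplacian.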
3. The proposed UNLTV method
3.1. Nonlocal total variational model
Fig. 2. Framework of the proposed approach.
Considering that underwater images captured under challenging scenes are often accompanied by noise, we simply modify (5) as follows:
I(x) = J(x) · t(x) + B(1 − t(x)) + n(x),   (17)
where n(x) represents the noise, the definitions of I, J, B and t have been introduced in Section 2.1. Rearranging (17), we can obtain that
(B − I(x) + n(x)) · (1/t(x)) = B − J(x).   (18)
To convert product form into addition, we apply a logarithmic transformation on (18). Then, (18) can be rewritten as
u = f + t̂,   (19)

where f = log F = log(B − I(x) + n(x)), u = log U = log(B − J(x)), and t̂ = log(1/t(x)). According to Kimmel's variational Retinex algorithm and nonlocal differential operators, we design the minimization energy functional as follows:
min_u E(u) = ∫Ω |∇NL u| dx + α ∫Ω |∇(u − f)| dx + (β/2) ∫Ω (u − f)² dx,   (20)
where α and β are both small free parameters. Furthermore, in order to avoid an over-enhanced or under-enhanced restored image, we also add a Gamma correction
Fig. 3. Sample application test. (a) Original testing images; (b) their underwater dark channels; (c) their estimated TM and BL (marked with a red dot); (d) recovered results using our proposed method; (e) contrast enhancement after the Gamma correction operation.
operator on the image F = exp(f). The Gamma correction is defined as

F′ = W · (F/W)^(1/γ),   (21)

where W = 255 in 8-bit images, and γ is a free parameter. After that we obtain a newly corrected F′, and then multiply it by 1/t to produce the image U′ = F′ · (1/t). So U′ is

U′ = U · (F′/F) = U · (F/W)^(1/γ − 1).   (22)
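Eqs. (21) and (22) amount to a single power-law rescaling; a minimal sketch (variable names ours) also checks that correcting F and then dividing by t matches rescaling U = F · (1/t) directly:

```python
import numpy as np

def gamma_correct(F, gamma, W=255.0):
    """Gamma correction of Eq. (21): F' = W * (F / W) ** (1 / gamma).
    Illustrative sketch, not the authors' code."""
    return W * (F / W) ** (1.0 / gamma)

# Eq. (22): applying the correction before or after dividing by t agrees.
t, F = 0.5, 63.75
U = F / t                                       # U = F * (1 / t)
U1 = gamma_correct(F, 2.0) / t                  # correct F, then divide by t
U2 = U * (F / 255.0) ** (1.0 / 2.0 - 1.0)       # rescale U directly, Eq. (22)
```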
Afterward, we can get the outcome image J = B − U′. Additionally, in our underwater nonlocal total variational (UNLTV) framework, the initial value of the transmission t is acquired via the UDCP method mentioned in Section 2.1. Because the background of an underwater scene tends to be green or blue, the traditional method of estimating the global background light is ill-suited. To robustly estimate B, we employ a hierarchical searching method based on the quad-tree subdivision illustrated in [3]. The framework of the proposed method is presented in Fig. 2.

3.2. ADMM algorithm for UNLTV

In order to accelerate the whole process, we utilize the alternating direction method of multipliers (ADMM) [51,52] to solve the energy functional (20). By introducing two auxiliary vector variables w = ∇NL u and v = ∇(u − f), (20) can be transformed into the following iterative formulation
min_{u,w,v} E(u, w, v) = argmin_{u,w,v} { ∫Ω |w| dx + α ∫Ω |v| dx + (β/2) ∫Ω (u − f)² dx + ∫Ω λ1 · (w − ∇NL u) dx + (θ1/2) ∫Ω |w − ∇NL u|² dx + ∫Ω λ2 · (v − ∇(u − f)) dx + (θ2/2) ∫Ω |v − ∇(u − f)|² dx },   (23)

where λ1 and λ2 denote the Lagrange multipliers, and θ1, θ2 are free nonnegative compensating parameters. The iterative optimization problem of (23) can be decomposed into the following three subproblems by solving for one variable while temporarily fixing the other two:

ε1(u) = min_u { (β/2) ∫Ω (u − f)² dx + ∫Ω λ1 · (w − ∇NL u) dx + (θ1/2) ∫Ω |w − ∇NL u|² dx + ∫Ω λ2 · (v − ∇(u − f)) dx + (θ2/2) ∫Ω |v − ∇(u − f)|² dx },   (24a)

ε2(w) = min_w { ∫Ω |w| dx + ∫Ω λ1 · (w − ∇NL u) dx + (θ1/2) ∫Ω |w − ∇NL u|² dx },   (24b)

ε3(v) = min_v { α ∫Ω |v| dx + ∫Ω λ2 · (v − ∇(u − f)) dx + (θ2/2) ∫Ω |v − ∇(u − f)|² dx }.   (24c)
Fig. 4. The convergence of our method. (a) From left to right: two original images and their restored results with different numbers of iterations (iter = 1, iter = 3, iter = 5, iter = 7); (b) their corresponding energy curves, respectively.
Their corresponding Euler–Lagrange equations (for the kth loop) are

β(u^(k+1) − f) + divNL λ1^k + θ1 divNL(w^k − ∇NL u^(k+1)) + div λ2^k + θ2 div(v^k − ∇(u^(k+1) − f)) = 0  in Ω,
( λ1^k + θ1(w^k − ∇NL u^(k+1)) + λ2^k + θ2(v^k − ∇(u^(k+1) − f)) ) · n = 0  on ∂Ω,   (25a)

w^(k+1)/|w^(k+1)| + λ1^k + θ1(w^(k+1) − ∇NL u^(k+1)) = 0,   (25b)

α v^(k+1)/|v^(k+1)| + λ2^k + θ2(v^(k+1) − ∇(u^(k+1) − f)) = 0.   (25c)
To solve (25a), (25b) and (25c), we employ the Gauss–Seidel iteration method and the generalized soft threshold formulation (GSTF), respectively. Then, we can obtain
θ1 ΔNL u^(k+1) + θ2 Δu^(k+1) − β u^(k+1) = divNL λ1^k + div λ2^k + θ1 divNL w^k + θ2 div v^k + θ2 Δf − β f,   (26a)
w^(k+1) = max( |∇NL u^(k+1) − λ1^k/θ1| − 1/θ1, 0 ) · (∇NL u^(k+1) − λ1^k/θ1) / |∇NL u^(k+1) − λ1^k/θ1|,  with the convention 0 · (0/0) = 0,   (26b)

v^(k+1) = max( |∇(u^(k+1) − f) − λ2^k/θ2| − α/θ2, 0 ) · (∇(u^(k+1) − f) − λ2^k/θ2) / |∇(u^(k+1) − f) − λ2^k/θ2|,  with 0 · (0/0) = 0.   (26c)
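The shrinkage in (26b) and (26c) is the standard vector soft threshold; a small sketch (our naming), where the last array axis holds the vector components and the 0/0 convention maps the zero vector to zero:

```python
import numpy as np

def gstf(z, tau):
    """Generalized soft-threshold of Eqs. (26b)-(26c):
    max(|z| - tau, 0) * z / |z|, with the zero vector mapped to zero.
    The last axis of z holds the vector components. Illustrative sketch."""
    mag = np.linalg.norm(z, axis=-1, keepdims=True)
    # Replace zero magnitudes by 1 in the divisor; numerator is 0 there anyway.
    return np.maximum(mag - tau, 0.0) / np.where(mag > 0.0, mag, 1.0) * z
```

Vectors with magnitude below the threshold tau are set exactly to zero, which is what drives the sparsity of the gradient fields w and v.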
According to (16), Δu^(k+1) and ΔNL u^(k+1) in (26a) can be discretized as

Δu^(k+1) = u^k_{i+1,j}(x) + u^k_{i−1,j}(x) + u^k_{i,j+1}(x) + u^k_{i,j−1}(x) − 4u^(k+1)_{i,j}(x),
ΔNL u^(k+1) = Σ_{N=1,…,n; y∈N(x)} ( u^(k+1)_{i,j}(y_N) − u^(k+1)_{i,j}(x) ) w(x, y_N),   (27)
Fig. 5. The influence of parameters α and β. (a) α = 0.0001, α = 0.01, α = 1, α = 100, α = 10000 with constant β = 1; (b) β = 0.0001, β = 0.01, β = 1, β = 100, β = 10000 with constant α = 1.
Fig. 6. Example of restoring an underwater image with different estimated TM. (a) The transmission map estimation via the DCP (left), DCPr (middle) and UDCP (right) for underwater images; and (b) their corresponding restoration results using proposed method.
Fig. 7. Example of restoring an underwater image with different estimated BL. (a) The background light (marked with a red dot) obtained by selecting the brightest pixel (left), using an improved algorithm in [15] (middle), and the quad-tree subdivision algorithm [3] (right); (b) their corresponding restoration results using the proposed method.
Fig. 8. Underwater benchmark images. Three representative underwater natural-scene images in (a) and their histogram distributions of each color channel (R, G, and B) in (b).
where y ∈ N(x) represents the neighborhood of x, and n denotes the number of neighbors. Finally, we update the Lagrange multipliers:
λ1^(k+1) = λ1^k + θ1( w^(k+1) − ∇NL u^(k+1) ),
λ2^(k+1) = λ2^k + θ2( v^(k+1) − ∇(u^(k+1) − f) ).   (28)
The solving process of the ADMM method for UNLTV is summarized in Algorithm 1.
Both AHE and MSRCR are typical image enhancement methods. The KVR method is a variational model based on Retinex theory that improves the illumination of the image. The UDCP and WCID methods are designed based on the same underwater image formation model as our method. Furthermore, some quantitative comparisons on synthesized underwater images are conducted. All experiments are implemented using Matlab 2016b on a Windows 10 platform with an Intel(R) Core(TM) i7-6500U CPU at 3.50 GHz and 16 GB memory.
Algorithm 1 ADMM for the UNLTV model (20).
1: Input: underwater degraded image I
2: Estimate the transmission map t and the global background light B using the UDCP and quad-tree subdivision algorithms
3: Initialization: set (u, w, v) = 0, (α, β) > 0
4: Compute u according to (26a)
5: Compute w according to (26b)
6: Compute v according to (26c)
7: Update the Lagrange multipliers λ1 and λ2 according to (28)
8: Repeat steps 4–7 until convergence of u
9: Return u
10: Apply Gamma correction to u
11: Output: restored image J
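To make the flow of Algorithm 1 concrete, the following is a deliberately simplified 1D analogue that keeps only the local TV term (no nonlocal weights, no underwater image formation step; all names are ours, not the authors' code). It alternates the quadratic u-solve, a shrinkage step in the spirit of (26b), and the multiplier update of (28):

```python
import numpy as np

def grad(u):
    """Forward difference with Neumann boundary (last entry zero)."""
    return np.append(u[1:] - u[:-1], 0.0)

def div(w):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.empty_like(w)
    d[0] = w[0]
    d[1:] = w[1:] - w[:-1]
    return d

def shrink(z, tau):
    """Scalar soft threshold, cf. Eqs. (26b)-(26c)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def admm_tv(f, beta=20.0, theta=1.0, iters=300):
    """Toy 1D ADMM for min_u |grad u|_1 + (beta/2)||u - f||^2,
    mirroring steps 4-8 of Algorithm 1 with the split w = grad(u).
    Illustrative sketch under our own simplifications."""
    n = len(f)
    # u-subproblem is linear: (beta*I - theta*Laplacian) u = rhs
    L = np.array([div(grad(e)) for e in np.eye(n)]).T
    A = beta * np.eye(n) - theta * L
    u, w, lam = f.copy(), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        u = np.linalg.solve(A, beta * f - div(lam) - theta * div(w))
        w = shrink(grad(u) - lam / theta, 1.0 / theta)  # shrinkage step
        lam = lam + theta * (w - grad(u))               # multiplier update, Eq. (28)
    return u
```

In the full 2D UNLTV model the dense solve is replaced by the Gauss–Seidel sweep of (26a) and the gradients include the nonlocal terms, but the alternation pattern is the same.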
4. Experimental results and discussion

In this section, to demonstrate the performance of the proposed method, we first describe and analyze the experimental results on some real underwater images which suffer from low contrast, haze, and noise. We then compare the proposed method with several state-of-the-art methods, including the adaptive histogram equalization (AHE) algorithm, the multiscale Retinex with color restoration (MSRCR) algorithm, Kimmel's variational Retinex model (KVR) [27], the underwater dark channel prior (UDCP) algorithm [19], the wavelength compensation and image dehazing (WCID) algorithm [16], and Li's color correction algorithm [12].
4.1. Real underwater image test

In order to illustrate the effectiveness of the proposed method, we test several real underwater images captured under different conditions; several of them are shown below, and other examples are listed in the qualitative comparison subsection. These degraded images are characterized by low contrast, blurring, and diminished color. In all of the following experiments, we set θ1 = 0.01 and θ2 = 0.01 in Eq. (23). Several visual examples in Fig. 3 present the performance of the proposed UNLTV method. In Fig. 3(a), four degraded underwater images from different challenging scenes are chosen for the application test, and their corresponding underwater dark channels are shown in Fig. 3(b). We can observe that the intensity of the underwater dark channel correctly reflects the thickness of the haze in the original image. Fig. 3(c) presents the estimated transmission map and background light obtained by employing the UDCP and quad-tree subdivision algorithms. The outcomes of the tested images in Fig. 3(d) all show that the proposed method can effectively remove haze, suppress noise, and correct color, which proves that the values of TM and BL are well estimated. Furthermore, the gamma correction operation can effectively balance too-bright and too-dark regions and reveal more details. Observing Fig. 3(e), it can be seen that our UNLTV achieves good visual quality in terms of contrast, color
Fig. 9. Histogram distributions of raw underwater images in (a), and our restored results in (b).
and visibility. Additionally, to show the efficiency of the proposed algorithm, the recovered results and the corresponding energy curves obtained by using the ADMM algorithm for the proposed model are depicted in Fig. 4. From Fig. 4(a), we can see that the density of haze in the image decreases rapidly at first and then remains constant as the number of iterations increases. The satisfactory restoration results demonstrate its high efficiency, converging within a few iterations, as shown in Fig. 4(b). To illustrate the robustness of the proposed algorithm to parameter variation, we present the influence of the free parameters α and β in Fig. 5. We test the effect of one parameter by fixing the other at a constant value. As observed from Fig. 5(a), the value of α varies from 1e−4 to 1e4 with little effect on the output results, which indicates that our algorithm is robust to this parameter over a wide range. Analogously, Fig. 5(b) shows that when β > 1, increasing β has only a minor influence on the results. However, when β < 1, water ripples and spots begin to appear in bright regions, and the smaller the value, the more obvious they become. Moreover, as described previously, the traditional algorithms for estimating the transmission map (TM) and background light (BL) are
Fig. 10. Comparisons of gradient maps. (a) Original underwater images; (b) the gradient maps of original underwater images; (c) the corresponding gradient maps of recovered results.
not appropriate for underwater images, which may lead to poorly restored results. Figs. 6 and 7 illustrate two examples of the influence of TM and BL estimation on the proposed method. The images in the second row of Figs. 6 and 7 are the recovered results with different values of TM and BL, where both the UDCP and quad-tree subdivision methods work well. As shown in the first row of Fig. 6(a), the inaccurate TM estimation obtained by employing the DCP and DCPr [2] algorithms generates unsatisfying results with low visibility. Besides, a correct BL estimation often cannot be obtained by simply picking the brightest pixel or by using the improved algorithm in [19], because they erroneously regard bright foreground pixels as BL. The wrong BL causes the proposed method to fail to correct the distorted color, as shown in the first restored result of Fig. 7(b). To evaluate the effectiveness of the proposed UNLTV method in terms of contrast enhancement and color correction, we further extract a set of underwater natural-scene images and underwater degraded images, and then present their histogram distributions to explore their relationships. The underwater natural-scene images and their histogram distributions are presented in Fig. 8. It can be observed from Fig. 8(b) that the histogram distributions of the green and blue channels of underwater natural-scene images are wider and more consistent, while the histogram distribution of the red channel concentrates on the darkest side due to the effects of absorption and scattering. Fig. 9 shows the histogram distributions of three underwater degraded images and their corresponding results after using the proposed method. Intuitively, the histogram distribution of each color channel of the underwater degraded images is separate and shifted in the horizontal direction, as shown in Fig. 9(a). In contrast, in Fig. 9(b), the histogram distributions of the restored images prove to be consistent with the results of Fig. 8(b). Furthermore, the gradient maps shown in Fig. 10 demonstrate that the restored images are improved in terms of contrast and visibility; they recover more detail than the original images.
4.2. Qualitative comparison

In our comparison experiments, we first apply our numerical algorithm and the above-mentioned compared methods to several representative real underwater images. The restored results are illustrated in Fig. 11(b)–(h), respectively. As we can observe in Fig. 11(a), to varying degrees, these original degraded underwater images suffer from color fading, blur, haze, low contrast, and noise. As shown in Fig. 11(b), the AHE algorithm performs well at improving local contrast and revealing more detail in the image. However, AHE over-amplifies noise in relatively uniform areas and easily introduces color deviation. From Fig. 11(c), we can see that the MSRCR method can generate a well-enhanced recovered image, but it has no effect on reducing noise and also overexposes some bright regions in the foreground. In Fig. 11(d), the KVR algorithm performs better for underwater images taken in low-light scenes but fails to remove the haze and generates unsatisfying restoration results. Fig. 11(e)–(f) present the results obtained on the seven raw underwater images using the UDCP and WCID methods. It can be seen that both methods succeed in defogging because they derive from the same image formation model as discussed previously. Nevertheless, their estimated GBL is simply selected as the brightest pixel, which leads to a darker background; furthermore, the WCID algorithm easily produces patch artifacts around foreground objects, owing to the complexity and challenge of underwater scenes. From Fig. 11(g), we can observe that Li's algorithm achieves a better outcome among these compared methods. Visually, Li's method can correct color distortion, but it has no obvious effect on denoising. Lastly, Fig. 11(h) illustrates the restored results after applying the proposed UNLTV method to all the challenging scenes.
Based on visual observation, the UNLTV method discloses more details
Fig. 11. Restored results of several compared methods. (a) Underwater raw images; (b)–(h): recovered images after using AHE, MSRCR, KVR, UDCP, WCID, Li’s and proposed method, respectively.
and makes the foreground object better distinguished from the background, which provides a satisfactory outcome. The qualitative comparison results demonstrate that the proposed method can simultaneously address contrast enhancement, dehazing, color correction, and noise reduction, and that it outperforms the other methods. To further examine the detailed superiority of the proposed method, we crop a red rectangle area from the tested images as shown in Fig. 12(a). The corresponding zoomed sub-regions are displayed in Fig. 12(b). From Fig. 12(b), one can observe that the six compared methods have similar problems in preserving texture features and eliminating non-uniform illumination. In contrast, after using the proposed UNLTV method, the detail information in the water pot is well preserved, and the box in the images is enhanced moderately. Furthermore, the restored results in the last row of Fig. 12(b) demonstrate that the proposed method outperforms the other compared algorithms in reducing noise.

4.3. Quantitative comparison

In order to quantitatively evaluate the effectiveness and superiority of the proposed method, we first quantify the performance of these compared methods by employing several no-reference (NR) evaluation metrics. We conduct a quantitative analysis on the real underwater images of Fig. 11 by calculating the rate of visible edges e [53] and the saturation σ [53]. For e, higher values indicate better results; for σ, lower values indicate better results. Given an original image Io and the corresponding contrast-restored image Ir, the
definitions of e and σ can be written as

e = (nr − no)/no,
σ = ns/(M × N),   (29)
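These metrics are straightforward to compute; the sketch below (our own naming) uses a simple gradient-magnitude threshold as a stand-in for the visible-edge detector of [53] and counts pixels at the intensity extremes as saturated:

```python
import numpy as np

def visible_edges(img, thresh=0.1):
    """Count 'visible edges' as pixels whose gradient magnitude exceeds a
    threshold -- a simple stand-in for the edge detector used in [53]."""
    gy, gx = np.gradient(img.astype(float))
    return int(np.count_nonzero(np.hypot(gx, gy) > thresh))

def e_metric(Io, Ir, thresh=0.1):
    """Rate of new visible edges, Eq. (29): e = (n_r - n_o) / n_o."""
    n_o = visible_edges(Io, thresh)
    n_r = visible_edges(Ir, thresh)
    return (n_r - n_o) / n_o

def sigma_metric(Ir, lo=0.0, hi=1.0):
    """Fraction of saturated pixels, Eq. (29): sigma = n_s / (M * N)."""
    n_s = np.count_nonzero((Ir <= lo) | (Ir >= hi))
    return n_s / Ir.size
```

Negative e means the restoration removed visible edges, which is how the negative entries for MSRCR and WCID in Table 1 arise.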
where no and nr are, respectively, the numbers of visible edges in Io and Ir, and ns represents the number of saturated pixels. M × N is the size of the input image. As shown in Table 1, the evaluation values of these two indicators for the restored results shown in Fig. 11 are listed one by one. In the table, the average values of each indicator obtained by our method represent the best results. From row-7 of Table 1, we can observe that the UDCP, WCID and Li's methods achieve higher values of e. The reason is that amplified noise and stains are mistaken for edges. Meanwhile, there are some negative values of e, which demonstrate that the number of visible edges in the image is reduced after using the MSRCR and WCID methods. The lowest σ (zero in all cases) indicates that MSRCR and our method outperform the other five compared methods. Moreover, we employ another two no-reference (NR) metrics, namely UIQM [54] and UCIQE [55], to provide an associated quantitative comparison. The UIQM metric is generated by linearly combining colorfulness, sharpness and contrast, while UCIQE quantifies three other criteria: chroma, saturation, and contrast. Tables 2 and 3 present the values of these two metrics obtained in Fig. 11 after using the seven methods. As can be observed from Tables 2 and 3, for UIQM and UCIQE, our method achieves a higher value, which demonstrates that they
Please cite this article as: G. Hou, Z. Pan and G. Wang et al., An efficient nonlocal variational method with application to underwater image restoration, Neurocomputing, https://doi.org/10.1016/j.neucom.2019.08.041
Fig. 12. Zoomed small sub-regions (red rectangles in (a)) for detail comparison; (b) the corresponding results by the compared methods (AHE, MSRCR, KVR, UDCP, WCID, Li's, and UNLTV, respectively).

Table 1. Quantitative comparison of underwater restored images shown in Fig. 11 using the seven compared methods (the best metric values were bolded in the original). Each cell lists e / σ.

| Image   | AHE            | MSRCR           | KVR           | UDCP          | WCID           | Li            | UNLTV         |
|---------|----------------|-----------------|---------------|---------------|----------------|---------------|---------------|
| row-1   | 1.200 / 0.001  | 0.349 / 0.000   | 0.205 / 0.000 | 0.549 / 0.000 | 0.414 / 0.001  | 0.815 / 0.000 | 0.872 / 0.000 |
| row-2   | 0.567 / 0.000  | −0.317 / 0.000  | 0.004 / 0.000 | 1.048 / 0.000 | 0.344 / 0.000  | 0.840 / 0.000 | 0.978 / 0.000 |
| row-3   | 0.091 / 0.008  | −0.182 / 0.000  | 0.023 / 0.000 | 0.098 / 0.000 | −0.14 / 0.023  | 0.186 / 0.000 | 0.187 / 0.000 |
| row-4   | 0.152 / 0.024  | −0.105 / 0.000  | 0.034 / 0.024 | 0.375 / 0.000 | 0.327 / 0.000  | 0.426 / 0.002 | 0.370 / 0.000 |
| row-5   | −0.08 / 0.000  | −0.313 / 0.000  | 0.004 / 0.003 | 0.064 / 0.107 | 0.034 / 0.086  | 0.073 / 0.003 | 0.201 / 0.000 |
| row-6   | 0.424 / 0.000  | 0.070 / 0.000   | 0.091 / 0.000 | 0.137 / 0.150 | 0.263 / 0.000  | 0.166 / 0.000 | 0.001 / 0.000 |
| row-7   | 0.650 / 0.000  | 0.011 / 0.000   | 0.043 / 0.001 | 0.705 / 0.088 | 0.789 / 0.000  | 0.798 / 0.000 | 0.780 / 0.000 |
| Average | 0.429 / 0.005  | −0.070 / 0.000  | 0.058 / 0.004 | 0.425 / 0.049 | 0.290 / 0.016  | 0.472 / 0.001 | 0.484 / 0.000 |
Table 2. Quantitative comparison of UIQM for underwater images in Fig. 11.

| Image   | AHE    | MSRCR  | KVR    | UDCP   | WCID   | Li     | UNLTV  |
|---------|--------|--------|--------|--------|--------|--------|--------|
| row-1   | 1.4348 | 1.0478 | 0.9900 | 1.1304 | 1.1133 | 1.5501 | 1.5728 |
| row-2   | 1.3504 | 0.9052 | 0.9989 | 1.3943 | 1.2640 | 1.2028 | 1.3758 |
| row-3   | 1.9013 | 1.1651 | 1.4528 | 1.6362 | 1.8276 | 1.4956 | 1.7749 |
| row-4   | 1.5386 | 1.2112 | 1.2993 | 1.5373 | 1.5960 | 1.4608 | 1.6007 |
| row-5   | 1.7286 | 1.1817 | 1.5714 | 1.7127 | 1.7330 | 1.3853 | 1.3689 |
| row-6   | 1.5651 | 1.2755 | 1.1529 | 1.4068 | 1.6683 | 1.5631 | 1.5357 |
| row-7   | 1.6039 | 1.2983 | 1.2786 | 1.6475 | 1.6332 | 1.6725 | 1.6499 |
| Average | 1.5890 | 1.1550 | 1.2491 | 1.4950 | 1.5493 | 1.4757 | 1.5541 |
can effectively balance the chroma, saturation, and contrast of the restored underwater images. Additionally, we extracted fifty real underwater degraded images and calculated the average values of these four NR evaluation indicators, as shown in Fig. 13. Fig. 13 demonstrates that the UNLTV approach attains consistently high values of the e, UIQM and UCIQE metrics. Overall, considering all these metrics, our dehazing and denoising method stands out among the compared methods.
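The NR metrics of Eq. (29) can be sketched in NumPy as follows. Counting "visible edges" by thresholding the gradient magnitude is a simplification of the edge segmentation of [53], and the threshold value is an illustrative choice, not the metric's official setting:

```python
import numpy as np

def visible_edges(gray, thresh=0.1):
    """Approximate visible-edge count by thresholding the gradient
    magnitude (a simplification of the segmentation used in [53])."""
    gy, gx = np.gradient(gray.astype(float))
    return int(np.count_nonzero(np.hypot(gx, gy) > thresh))

def metric_e(original, restored, thresh=0.1):
    """Rate of newly visible edges: e = (n_r - n_o) / n_o."""
    n_o = visible_edges(original, thresh)
    n_r = visible_edges(restored, thresh)
    return (n_r - n_o) / max(n_o, 1)  # guard against a featureless original

def metric_sigma(restored):
    """Fraction of saturated (pure black or white) pixels: sigma = n_s / (M*N),
    for images scaled to [0, 1]."""
    n_s = np.count_nonzero((restored <= 0.0) | (restored >= 1.0))
    return n_s / restored.size
```

A restoration that reveals edges without clipping intensities should give e > 0 and σ close to zero.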
4.4. Simulation experiment

Since no ground-truth database is available for underwater images, typical full-reference (FR) metrics such as the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) [56] cannot be applied to real underwater images directly. To evaluate the performance of the proposed approach more objectively, in what
Table 3. Quantitative comparison of UCIQE for underwater images in Fig. 11.

| Image   | AHE    | MSRCR  | KVR    | UDCP   | WCID   | Li     | UNLTV  |
|---------|--------|--------|--------|--------|--------|--------|--------|
| row-1   | 0.5654 | 0.4271 | 0.3377 | 0.3750 | 0.4388 | 0.4136 | 0.4325 |
| row-2   | 0.5353 | 0.4653 | 0.4470 | 0.5551 | 0.5609 | 0.5746 | 0.6107 |
| row-3   | 0.5690 | 0.5263 | 0.5530 | 0.6216 | 0.6292 | 0.6238 | 0.6398 |
| row-4   | 0.5766 | 0.5102 | 0.4612 | 0.6091 | 0.6053 | 0.6026 | 0.5911 |
| row-5   | 0.5444 | 0.5402 | 0.5662 | 0.5419 | 0.5428 | 0.6432 | 0.6023 |
| row-6   | 0.6175 | 0.5480 | 0.4457 | 0.5377 | 0.4942 | 0.5209 | 0.6271 |
| row-7   | 0.6377 | 0.5220 | 0.4376 | 0.6175 | 0.5705 | 0.5785 | 0.5827 |
| Average | 0.5780 | 0.5056 | 0.4641 | 0.5511 | 0.5488 | 0.5653 | 0.5837 |
Fig. 13. The results of (a) e, (b) σ , (c) UIQM and (d) UCIQE metrics obtained using different methods, respectively.
follows, simulation tests are further conducted. According to (1), several simulated degraded underwater images are generated from ground-truth images captured in outdoor scenes with known transmission t, global background light B, and noise. Here, we select three real underwater images under different challenging scenes as source images and estimate the values of t and B using the UDCP and quad-tree subdivision methods. Three sets of simulated underwater images (upper) and their corresponding restored results (bottom) are illustrated in Fig. 14(a)–(c). For each set, the underwater images are synthesized with varying levels of Gaussian random noise and different values of the medium transmission map and global background light. To clarify the synthesis process, the simulation parameters are listed in Table 4. For instance, the two synthesized underwater images of Fig. 14(a) use tG = 0.7038, tB = 0.6632, BR = 0.5176, BG = 0.5804, BB = 0.6784 and n = 0.0005/0.005, respectively. As can be observed in Fig. 14, the synthesized underwater images are characterized by low contrast, color distortion and noise; in contrast, the proposed method generates a visually satisfying outcome. In addition, the RMSE and PSNR indicators mentioned earlier are used to assess the ability to suppress noise, while the SSIM indicator measures the recovered luminance, contrast and structure information.
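The synthesis step above can be sketched with the image formation model of Eq. (1), I = J·t + B·(1 − t) + n. This is a minimal NumPy illustration assuming t and B act as per-channel scalars, as suggested by Table 4; it is not the authors' exact implementation:

```python
import numpy as np

def synthesize_underwater(J, t, B, noise_var, rng=None):
    """Degrade a ground-truth image J (H, W, 3 floats in [0, 1]) via
    I = J * t + B * (1 - t) + n, with per-channel scalars t and B and
    zero-mean Gaussian noise of variance noise_var."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.asarray(t, dtype=float).reshape(1, 1, -1)
    B = np.asarray(B, dtype=float).reshape(1, 1, -1)
    I = J * t + B * (1.0 - t)                     # attenuation + backscatter
    I += rng.normal(0.0, np.sqrt(noise_var), size=I.shape)
    return np.clip(I, 0.0, 1.0)
```

For Fig. 14(a) one would pass t ≈ (tR, 0.7038, 0.6632) and B = (0.5176, 0.5804, 0.6784); the red-channel transmission is not listed in Table 4, so any tR here is an assumption.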
Fig. 14. Simulation results, from left to right: source image, ground-truth image, two simulated images (upper: S1 and S2), and corresponding recovered images (bottom).

Table 4. Simulation parameters.

| Image      | Transmission map t (G, B) | Global background light B (R, G, B) | Noise variance |
|------------|---------------------------|-------------------------------------|----------------|
| Fig. 14(a) | (0.7038, 0.6632)          | (0.5176, 0.5804, 0.6784)            | 0.0005 / 0.005 |
| Fig. 14(b) | (0.5162, 0.5619)          | (0.8745, 0.8863, 0.7725)            | 0.001 / 0.01   |
| Fig. 14(c) | (0.4043, 0.5680)          | (0.6118, 0.7608, 0.5608)            | 0.002 / 0.02   |
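The transmission values in Table 4 are obtained with the underwater dark channel prior, which builds the dark channel from the G and B channels only (the red channel is attenuated too strongly under water to be informative). A minimal pure-NumPy sketch of this idea follows; the patch size and the 0.05 lower bound are illustrative choices, not the authors' exact settings:

```python
import numpy as np

def local_min(a, k):
    """Minimum over a k x k window (edge-padded), pure NumPy."""
    r = k // 2
    p = np.pad(a, r, mode="edge")
    out = a.copy()
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + a.shape[0], dx:dx + a.shape[1]])
    return out

def udcp_transmission(I, B, patch=15):
    """UDCP-style transmission estimate.
    I: (H, W, 3) RGB image in [0, 1]; B: background light (R, G, B)."""
    # Normalize G and B channels by the background light, take the
    # pixel-wise minimum, then a local minimum over a patch x patch window.
    norm = np.minimum(I[..., 1] / B[1], I[..., 2] / B[2])
    dark = local_min(norm, patch)
    # The lower bound keeps the later restoration step well conditioned.
    return np.clip(1.0 - dark, 0.05, 1.0)
```

A pixel whose G/B intensities approach the background light gets a small transmission (distant, hazy), while dark foreground pixels get a transmission near 1.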
Fig. 15. Illustration of 10 extracted images from 100 test underwater images for quantitative comparison. (a) the ground-truth images, (b) the corresponding synthesized underwater images with Gaussian noise added (variance = 0.001).

Table 5. Quantitative comparison in RMSE, PSNR and SSIM (S1 and S2 are the two simulated images of each set in Fig. 14).

| Metric | Image     | 14(a) S1 | 14(a) S2 | 14(b) S1 | 14(b) S2 | 14(c) S1 | 14(c) S2 |
|--------|-----------|----------|----------|----------|----------|----------|----------|
| RMSE   | Simulated | 2.1874   | 7.9228   | 2.8993   | 8.7383   | 4.6029   | 9.6448   |
| RMSE   | Restored  | 4.3176   | 5.1114   | 2.7246   | 4.7154   | 4.6193   | 7.3229   |
| PSNR   | Simulated | 41.3323  | 30.1533  | 38.8850  | 29.3023  | 34.8702  | 28.4449  |
| PSNR   | Restored  | 35.4259  | 33.9601  | 39.4248  | 34.6604  | 34.8393  | 30.8371  |
| SSIM   | Simulated | 0.4850   | 0.2887   | 0.6476   | 0.5206   | 0.6481   | 0.3324   |
| SSIM   | Restored  | 0.6041   | 0.5592   | 0.8179   | 0.7257   | 0.7184   | 0.5770   |
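For reference, the RMSE and PSNR values reported here can be computed as below (a NumPy sketch; SSIM involves local statistics and is typically taken from an existing implementation such as scikit-image's `structural_similarity`):

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between a reference and a test image."""
    return float(np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB: 20 * log10(peak / RMSE)."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```

Lower RMSE and higher PSNR against the ground truth indicate stronger noise suppression, matching the reading of Tables 5 and 6.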
Table 6. Comparison of average RMSE, PSNR and SSIM of restored results over 100 tested underwater images.

| Method | RMSE    | PSNR    | SSIM   |
|--------|---------|---------|--------|
| AHE    | 15.8257 | 24.1435 | 0.7552 |
| MSRCR  | 14.1675 | 25.1049 | 0.7231 |
| KVR    | 10.1312 | 28.0176 | 0.7457 |
| UDCP   | 9.4253  | 28.6449 | 0.7233 |
| WCID   | 8.2574  | 29.7939 | 0.7059 |
| Li     | 14.5685 | 24.8625 | 0.8045 |
| UNLTV  | 6.7247  | 31.5773 | 0.8256 |

As shown in Table 5, the values of these three FR metrics computed between the simulated images and the original land-based images confirm that the simulated images are degraded by low contrast, haze, and noise. In contrast, the lower RMSE and the higher PSNR and SSIM values of the restored results show that our method can effectively improve contrast, remove haze, and reduce noise; moreover, our method is more effective when the added noise is larger. Furthermore, we calculate these three indicators between the ground-truth images and the corresponding synthesized underwater images restored using the other compared methods, where the simulated underwater images are produced in the same way as above. Owing to limited space, we extract 10 of the 100 tested underwater images for illustration, as shown in Fig. 15. The average RMSE, PSNR and SSIM values over the 100 tested underwater images for all the compared methods are reported in Table 6. As can be observed from Table 6, our restoration method achieves the best performance among the compared methods in terms of the average RMSE, PSNR and SSIM values. In practice, our method achieves an SSIM exceeding 0.9 in some
cases. The qualitative and quantitative comparison results both validate the superiority of the proposed method on real and simulated underwater images.

5. Conclusion

In this paper, we present a novel variational method for underwater image restoration based on nonlocal differential operators.
Here, we successfully integrate the underwater image formation model into the proposed variational model. Moreover, we design an ADMM algorithm to improve the efficiency of the proposed UNLTV model. The UNLTV method is evaluated on a set of representative real and synthesized underwater images, and the results demonstrate that it succeeds in removing haze, suppressing noise, and correcting color. A large number of qualitative and quantitative experimental comparisons further confirm that the recovered underwater images have better quality than those of previous works. In addition, the variational model itself can be extended by adding a regularizer based on the properties of a pre-processed image (e.g., it can also be applied to foggy image dehazing). All of these directions will be explored in our future work.

Declaration of Competing Interest

None.

Acknowledgment

The research work is supported by the National Natural Science Foundation of China (No. 61901240), the Natural Science Foundation of Shandong Province (No. ZR2019BF042, ZR2019MF050), the China Scholarship Council (No. 201908370002) and the China Postdoctoral Science Foundation (No. 2017M612204).

References

[1] R. Schettini, S. Corchs, Underwater image processing: state of the art of restoration and image enhancement methods, EURASIP J. Adv. Signal Process. 14 (1) (2010) 2–7, doi:10.1155/2010/746052.
[2] A. Galdran, D. Pardo, A. Picón, A. Alvarez-Gila, Automatic red-channel underwater image restoration, J. Vis. Commun. Image Represent. 26 (2015) 132–145, doi:10.1016/j.jvcir.2014.11.006.
[3] C.-Y. Li, J.-C. Guo, R.-M. Cong, Y.-W. Pang, B. Wang, Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior, IEEE Trans. Image Process. 25 (12) (2016) 5664–5677, doi:10.1109/TIP.2016.2612882.
[4] Y.-T. Peng, P.C. Cosman, Underwater image restoration based on image blurriness and light absorption, IEEE Trans. Image Process. 26 (4) (2017) 1579–1594, doi:10.1109/TIP.2017.2663846.
[5] S. Zhang, T. Wang, J. Dong, H. Yu, Underwater image enhancement via extended multi-scale Retinex, Neurocomputing 245 (5) (2018) 1–9, doi:10.1016/j.neucom.2017.03.029.
[6] F. Petit, A.S. Capelle-Laize, P. Carre, Underwater image enhancement by attenuation inversion with quaternions, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2009, pp. 1177–1180, doi:10.1109/ICASSP.2009.4959799.
[7] A.S.A. Ghani, N.A.M. Isa, Underwater image quality enhancement through integrated color model with Rayleigh distribution, Appl. Soft Comput. 27 (2015) 219–230, doi:10.1016/j.asoc.2014.11.020.
[8] N. Carlevaris-Bianco, A. Mohan, R.M. Eustice, Initial results in underwater single image dehazing, in: Proceedings of OCEANS MTS/IEEE Seattle, 2010, pp. 1–8, doi:10.1109/OCEANS.2010.5664428.
[9] X. Qiao, J.-H. Bao, H. Zhang, L.-H. Zeng, D.-L. Li, Underwater image quality enhancement of sea cucumbers based on improved histogram equalization and wavelet transform, Inf. Process. Agric. 4 (3) (2017) 206–213, doi:10.1016/j.inpa.2017.06.001.
[10] C.O. Ancuti, C. Ancuti, C.D. Vleeschouwer, P. Bekaert, Color balance and fusion for underwater image enhancement, IEEE Trans. Image Process. 27 (1) (2018) 379–393, doi:10.1109/TIP.2017.2759252.
[11] G. Bianco, M. Muzzupappa, F. Bruno, R. Garcia, L. Neumann, A new color correction method for underwater imaging, Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. XL-5/W5 (2015) 25–32, doi:10.5194/isprsarchives-XL-5-W5-25-2015.
[12] C.-Y. Li, J.-C. Guo, C.-L. Guo, Emerging from water: underwater image color correction based on weakly supervised color transfer, IEEE Signal Process. Lett. 25 (3) (2018) 323–327, doi:10.1109/LSP.2018.2792050.
[13] J.W. Kaeli, H. Singh, Illumination and attenuation correction techniques for underwater robotic optical imaging platforms, IEEE J. Ocean. Eng. (2014).
[14] W.-H. Zhang, G. Li, Z.-Q. Ying, A new underwater image enhancing method via color correction and illumination adjustment, in: Proceedings of Visual Communications and Image Processing (VCIP), 2017, doi:10.1109/VCIP.2017.8305027.
[15] K. He, J. Sun, X. Tang, Single image haze removal using dark channel prior, IEEE Trans. Pattern Anal. Mach. Intell. 33 (12) (2011) 2341–2353, doi:10.1109/TPAMI.2010.168.
[16] J.Y. Chiang, Y.C. Chen, Underwater image enhancement by wavelength compensation and dehazing, IEEE Trans. Image Process. 21 (4) (2012) 1756–1769, doi:10.1109/TIP.2011.2179666.
[17] S. Serikawa, H. Lu, Underwater image dehazing using joint trilateral filter, Comput. Electr. Eng. 40 (1) (2014) 41–50, doi:10.1016/j.compeleceng.2013.10.016.
[18] X. Li, Z. Yang, M. Shang, J. Hao, Underwater image enhancement via dark channel prior and luminance adjustment, in: Proceedings of OCEANS 2016, pp. 1–5, doi:10.1109/OCEANSAP.2016.7485625.
[19] H. Wen, Y. Tian, T. Huang, W. Guo, Single underwater image enhancement with a new optical model, in: Proceedings of the IEEE International Symposium on Circuits and Systems, 2013, pp. 753–756, doi:10.1109/ISCAS.2013.6571956.
[20] P. Drews Jr., et al., Transmission estimation in underwater single images, in: Proceedings of the IEEE ICCV Workshop on Underwater Vision, 2013, pp. 825–830, doi:10.1109/ICCVW.2013.113.
[21] P.L.J. Drews, E.R. Nascimento, S.S.C. Botelho, M.F.M. Campos, Underwater depth estimation and image restoration based on single images, IEEE Comput. Graph. Appl. 36 (2) (2016) 24–35, doi:10.1109/MCG.2016.26.
[22] C.-Y. Li, J.-C. Guo, Underwater image enhancement by dehazing and color correction, J. Electron. Imaging 24 (3) (2015) 033023, doi:10.1117/1.JEI.24.3.033023.
[23] C.-Y. Li, J.-C. Guo, C.-L. Guo, R.-M. Cong, J.-C. Gong, A hybrid method for underwater image correction, Pattern Recognit. Lett. 94 (15) (2017) 62–67, doi:10.1016/j.patrec.2017.05.023.
[24] X. Luan, et al., Underwater color image enhancement using combining schemes, Mar. Technol. Soc. J. 48 (3) (2014) 57–62, doi:10.4031/MTSJ.48.3.8.
[25] H.-M. Lu, et al., Underwater image enhancement method using weighted guided trigonometric filtering and artificial light correction, J. Vis. Commun. Image Represent. 38 (2016) 504–516, doi:10.1016/j.jvcir.2016.03.029.
[26] Z.-G. Tu, N. Aa, C.V. Gemeren, R.C. Veltkamp, A combined post-filtering method to improve accuracy of variational optical flow estimation, Pattern Recognit. 47 (5) (2014) 1926–1940, doi:10.1016/j.patcog.2013.11.026.
[27] R. Kimmel, M. Elad, D. Shaked, R. Keshet, I. Sobel, A variational framework for Retinex, Int. J. Comput. Vis. 52 (1) (2003) 7–23, doi:10.1023/A:1022314423998.
[28] Z.-G. Tu, et al., Variational method for joint optical flow estimation and edge-aware image restoration, Pattern Recognit. 65 (2017) 11–25, doi:10.1016/j.patcog.2016.10.027.
[29] Z.-Y. Zha, et al., Non-convex weighted lp nuclear norm based ADMM framework for image restoration, Neurocomputing 211 (15) (2018) 209–224, doi:10.1016/j.neucom.2018.05.073.
[30] G.-J. Hou, et al., Efficient L1-based nonlocal total variational model of Retinex for image restoration, J. Electron. Imaging 27 (5) (2018), doi:10.1117/1.JEI.27.5.051207.
[31] R. Zhang, X.C. Feng, L.X. Yang, L.H. Chang, C. Xu, Global sparse gradient guided variational Retinex model for image enhancement, Signal Process. Image Commun. 58 (2017) 270–281, doi:10.1016/j.image.2017.08.008.
[32] Y.-F. Pu, et al., A fractional-order variational framework for Retinex: fractional-order partial differential equation-based formulation for multi-scale nonlocal contrast enhancement with texture preserving, IEEE Trans. Image Process. 27 (3) (2018) 1214–1229, doi:10.1109/TIP.2017.2779601.
[33] W. Cao, J. Yao, J. Sun, G. Han, A tensor-based nonlocal total variation model for multi-channel image recovery, Signal Process. 153 (2018) 321–335, doi:10.1016/j.sigpro.2018.07.019.
[34] F. Fang, F. Li, T. Zeng, Single image dehazing and denoising: a fast variational approach, SIAM J. Imaging Sci. 7 (2) (2014) 969–996, doi:10.1137/130919696.
[35] A. Galdran, J. Vazquez-Corral, D. Pardo, M. Bertalmio, Fusion-based variational image dehazing, IEEE Signal Process. Lett. 24 (2) (2017) 151–155, doi:10.1109/LSP.2016.2643168.
[36] Z. Wang, G. Hou, Z. Pan, G. Wang, Single image dehazing and denoising combining dark channel prior and variational models, IET Comput. Vis. 12 (4) (2018) 393–402, doi:10.1049/iet-cvi.2017.0318.
[37] J. Lu, G. Wang, Z. Pan, Nonlocal active contour model for texture segmentation, Multimed. Tools Appl. 76 (8) (2017) 10991–11001, doi:10.1007/s11042-016-3462-7.
[38] Y. Ding, et al., Novel methods for microglia segmentation, feature extraction and classification, IEEE/ACM Trans. Comput. Biol. Bioinform. 14 (6) (2017) 1366–1377, doi:10.1109/TCBB.2016.2591520.
[39] Y.-F. Wu, C.-J. He, Indirectly regularized variational level set model for image segmentation, Neurocomputing 171 (1) (2016) 194–208, doi:10.1016/j.neucom.2015.06.027.
[40] G. Wang, J. Lu, Z. Pan, Q. Miao, Color texture segmentation based on active contour model with multichannel nonlocal and Tikhonov regularization, Multimed. Tools Appl. 76 (2) (2017) 24515–24526, doi:10.1007/s11042-016-4136-1.
[41] W.-R. Hu, Y. Xie, L. Li, W.-S. Zhang, A total variation based nonrigid image registration by combining parametric and non-parametric transformation models, Neurocomputing 144 (20) (2014) 222–237, doi:10.1016/j.neucom.2014.05.031.
[42] J.-M. Duan, W.O.C. Ward, L. Sibbett, Z.-K. Pan, B. Li, Introducing diffusion tensor to high order variational model for image reconstruction, Digit. Signal Process. 69 (2017) 323–336, doi:10.1016/j.dsp.2017.07.001.
[43] J. Wang, et al., Gaussian field estimator with manifold regularization for retinal image registration, Signal Process. 157 (2019) 225–235, doi:10.1016/j.sigpro.2018.12.004.
[44] T.F. Chan, J.H. Shen, H.-M. Zhou, Total variation wavelet inpainting, J. Math. Imaging Vis. 25 (1) (2006) 107–125, doi:10.1007/s10851-006-5257-3.
[45] J.-M. Duan, Z.-K. Pan, B.-C. Zhang, W. Liu, X.-C. Tai, Fast algorithm for color texture image inpainting using the non-local CTV model, J. Glob. Optim. 62 (4) (2015) 853–876, doi:10.1007/s10898-015-0290-7.
[46] U.A. Nnolim, Improved partial differential equation-based enhancement for underwater images using local–global contrast operators and fuzzy homomorphic processes, IET Image Process. 11 (11) (2017) 1059–1067, doi:10.1049/iet-ipr.2017.0259.
[47] M.-Z. Song, H.-S. Qu, G.-X. Zhang, S.-P. Tao, G. Jin, A variational model for sea image enhancement, Remote Sens. 10 (2018) 1313, doi:10.3390/rs10081313.
[48] J. Jaffe, Computer modeling and the design of optimal underwater imaging systems, IEEE J. Ocean. Eng. 15 (2) (1990) 101–111, doi:10.1109/48.50695.
[49] B. McGlamery, A computer model for underwater camera systems, in: Proceedings of SPIE 0208, Ocean Optics VI, 1980, pp. 221–231, doi:10.1117/12.958279.
[50] G. Gilboa, S. Osher, Nonlocal operators with applications to image processing, Multiscale Model. Simul. 7 (3) (2008) 1005–1028, doi:10.1137/070698592.
[51] T. Goldstein, B. O'Donoghue, S. Setzer, R. Baraniuk, Fast alternating direction optimization methods, SIAM J. Imaging Sci. 7 (3) (2014) 1588–1623, doi:10.1137/120896219.
[52] D. Sun, S. Roth, M.J. Black, Secrets of optical flow estimation and their principles, in: Proceedings of CVPR, 2010, pp. 2432–2439, doi:10.1109/CVPR.2010.5539939.
[53] N. Hautière, J. Tarel, D. Aubert, E. Dumont, Blind contrast enhancement assessment by gradient ratioing at visible edges, Image Anal. Stereol. 27 (2) (2011) 87–95, doi:10.5566/ias.v27.p87-95.
[54] K. Panetta, C. Gao, S. Agaian, Human-visual-system-inspired underwater image quality measures, IEEE J. Ocean. Eng. 41 (3) (2016) 541–551, doi:10.1109/JOE.2015.2469915.
[55] M. Yang, A. Sowmya, An underwater color image quality evaluation metric, IEEE Trans. Image Process. 24 (12) (2015) 6062–6071, doi:10.1109/TIP.2015.2491020.
[56] Z. Wang, et al., Image quality assessment: from error visibility to structural similarity, IEEE Trans. Image Process. 13 (4) (2004) 600–612, doi:10.1109/TIP.2003.819861.

Guojia Hou is now a lecturer in the College of Computer Science and Technology, Qingdao University. He received his BS degree in computer science in 2010 and his MS and Ph.D.
degrees in computer applications technology from the Ocean University of China in 2012 and 2015, respectively. He is the author of more than 20 journal and conference papers. His current research interests include image processing and pattern recognition.
Guodong Wang is now an associate professor in the College of Computer Science and Technology, Qingdao University. He received his bachelor degree in 2001 and master degree in 2004 in control theory and control engineering from the Qingdao University of Science and Technology, and received his Ph.D. degree in pattern recognition and intelligent systems from Huazhong University in 2008. His research interests include variational image science, face recognition, intelligent video surveillance, 3D reconstruction, and medical image processing and analysis.
Huan Yang received the Ph.D. degree in computer engineering from Nanyang Technological University, Singapore, in 2015, the M.S. degree in computer science from Shandong University, China, in 2010, and the B.S. degree in computer science from the Heilongjiang Institute of Technology, China, in 2007. Currently, she is working in the College of Computer Science and Technology, Qingdao University, Qingdao China. Her research interests include image/video processing and analysis, perception-based modeling and quality assessment, object detection/recognition, and machine learning.
Jinming Duan received his Ph.D. in computer science from the University of Nottingham, UK. Between 2017 and 2019, he was a research associate at the Imperial College London, UK. He is now a lecturer at the University of Birmingham, UK. His research interests include deep neural networks, variational methods, partial/ordinary differential equations, numerical optimization, and finite difference/element methods, with applications to image processing, computer vision and medical imaging analysis.
Zhenkuan Pan received his Ph.D. in 1992 from the Shanghai Jiao Tong University of Science and Technology. Since 1996, he has been a full professor at Qingdao University. He is also a member of the virtual reality professional committee of China graphic image association. He is the author of more than 300 papers. His interests include computer vision, image processing, and pattern recognition. He is also working on the application of multibody system dynamics and control.