Computers and Electrical Engineering 40 (2014) 41–50
Underwater image dehazing using joint trilateral filter

Seiichi Serikawa a, Huimin Lu a,b,*

a Department of Electrical Engineering and Electronics, Kyushu Institute of Technology, Japan
b Japan Society for the Promotion of Science, Japan
Article history: Available online 20 November 2013
Abstract

This paper describes a novel method to enhance underwater images by image dehazing. Scattering and color change are the two major sources of distortion in underwater imaging. Scattering is caused by large suspended particles, which are abundant in turbid water. Color change, or color distortion, corresponds to the varying degrees of attenuation experienced by light of different wavelengths traveling through water, which leaves ambient underwater scenes dominated by a bluish tone. Our key contributions are a new underwater imaging model that compensates for the attenuation discrepancy along the propagation path, and a fast joint trigonometric filtering dehazing algorithm. The enhanced images are characterized by a reduced noise level, better exposure of dark regions, and improved global contrast, while the finest details and edges are enhanced significantly. In addition, assessment with recent image evaluation systems shows that our method achieves higher quality than state-of-the-art methods.

© 2013 Elsevier Ltd. All rights reserved.
1. Introduction

Underwater vision is an important issue in ocean engineering. Unlike natural images, underwater images suffer from poor visibility due to medium scattering and light distortion. Capturing images underwater is difficult mostly because of attenuation: light reflected from a surface is deflected and scattered by particles, and absorption substantially reduces the light energy. The random attenuation of light is the main cause of the hazy appearance, while the fraction of light scattered back from the water along the line of sight considerably degrades scene contrast. In particular, objects at a distance of more than 10 m are almost indistinguishable, and colors are faded because characteristic wavelengths are cut off according to the water depth [1].

There have been many techniques to restore and enhance underwater images. Schechner and Averbuch [2] exploited a polarization dehazing method to compensate for visibility degradation. They also utilized an image fusion method in turbid media to reconstruct a clear image [3]. Hou et al. [4] combined the point spread function (PSF) and the modulation transfer function to reduce blurring. Although the aforementioned approaches can enhance image contrast, they have several drawbacks that reduce their practical applicability. First, the imaging equipment is difficult to use in practice (e.g., a range-gated laser imaging system). Second, multiple input images are required (e.g., two illumination images [3], or a white-balanced image and a color-corrected image [1]).

To solve these two issues, single-image dehazing methods have been proposed. Fattal [5] first estimated the scene radiance and derived the transmission image from a single image; however, this method cannot handle images with heavy haze well. He et al. [6] proposed a dark channel prior dehazing algorithm based on scene depth information, which uses a matting Laplacian and can be computationally intensive. To overcome this disadvantage, He et al. also proposed a new guided image filter [7]
with the foggy image as a reference image, which leads to incomplete haze removal. Lan et al. [8] proposed a non-local regularized method for haze removal that fully considers the influence of sensor noise and multiplicative noise; however, this method is designed for natural images rather than underwater images.

In this paper, we introduce a novel approach that enhances underwater images from a single image and overcomes the drawbacks of the above methods. We propose a new joint trigonometric filter instead of the matting Laplacian [6] to solve the alpha mattes more efficiently. In summary, our technical contributions are threefold. First, the proposed joint trigonometric guided filter performs as an edge-preserving smoothing operator like the popular bilateral filter, but behaves better near edges. Second, the novel guided filter has a fast, non-approximate constant-time algorithm whose computational complexity is independent of the filtering kernel size. Third, the proposed transmission model is suited to the underwater environment.

2. Underwater image dehazing

2.1. Underwater imaging model

In the optical model, the acquired image is composed of two components: the direct transmission of light from the object, and the transmission due to scattering by particles of the medium (e.g., airlight). Mathematically, it can be written as
I(x) = J(x) t(x) + (1 − t(x)) A    (1)
where I is the acquired image, J is the scene radiance or haze-free image, t is the transmission along the cone of vision, t(x) = e^{−β d(x)}, β is the attenuation coefficient of the medium, d(x) is the distance between the camera and the object, A is the veiling color constant, and x = (x, y) is a pixel. The optical model assumes a linear correlation between the reflected light and the distance between object and observer. The light propagation model of Eq. (1) is slightly different in the underwater environment. In the underwater optical imaging model, absorption plays an important role in image degradation. Furthermore, unlike scattering, the absorption coefficient differs for each color channel, being highest for red and lowest for blue in seawater. This is expressed by the following simplified hazy image formation model based on Eq. (1):
I(x) = J(x) e^{−(β_s + β_a) d(x)} + (1 − e^{−β_s d(x)}) A
     = J(x) e^{−β_s d(x)} e^{−β_a d(x)} + (1 − e^{−β_s d(x)}) A
     = K J(x) e^{−β_s d(x)} + (1 − e^{−β_s d(x)}) A
     = K J(x) t(x) + (1 − t(x)) A    (2)
where β_s is the scattering coefficient, β_a is the absorption coefficient of the light, and K is a constant (the absorption factor e^{−β_a d(x)}, which we assume constant at a given depth). In this paper, we simplify the model at a certain water depth, where the transmission t is defined only by the distance between the camera and the scene (see Fig. 1); this slightly modifies Schechner's underwater imaging model [9]. With these simplifications, Eq. (2) can be computed as easily as Eq. (1).
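To make the simplified model concrete, the following Python sketch synthesizes a hazy underwater image from Eq. (2); the scene radiance J, the distance map d, and all coefficient values below are illustrative assumptions, not parameters from our experiments:

import numpy as np

def underwater_formation(J, d, beta_s, K, A):
    # Eq. (2): I(x) = K*J(x)*t(x) + (1 - t(x))*A, with t(x) = exp(-beta_s*d(x))
    t = np.exp(-beta_s * d)[..., None]        # scattering transmission per pixel
    return K * J * t + (1.0 - t) * A          # direct component plus veiling light

J = np.random.rand(240, 320, 3)               # hypothetical haze-free scene radiance
d = np.full((240, 320), 5.0)                  # assumed camera-to-scene distance (m)
A = np.array([0.3, 0.6, 0.8])                 # bluish veiling light constant
I = underwater_formation(J, d, beta_s=0.2, K=0.8, A=A)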
Fig. 1. Underwater optical imaging model (Y. Schechner et al.).
2.2. Estimating the transmission

According to recent research, the red color channel is attenuated at a much higher rate than the green or blue channel [10]. At the same depth, the natural illumination changes only slightly with water quality. We further assume that the transmission in the water is constant over short distances, and denote the patch transmission as t̃(x).
We take the maximum intensity of the red color channel and compare it with the maximum intensities of the green and blue color channels. We define the dark channel J^{dark}(x) for the underwater image J(x) as

J^{dark}(x) = \min_r ( \min_{y ∈ Ω(x)} J^c(y) )    (3)
where J^c(y) refers to a pixel y in the red color channel of the observed image, and Ω(x) refers to a patch centered at x. Low values in the dark channel are mainly caused by three factors: shadows, colorful objects or surfaces, and dark objects or surfaces. Applying the min operation over the local patch to the haze imaging function of Eq. (1), and treating the patch transmission t̃(x) as constant, we obtain
\min_{y ∈ Ω(x)} (I^c(y)) = t̃^c(x) \min_{y ∈ Ω(x)} (J^c(y)) + (1 − t̃(x)) A^c    (4)
Since A^c is the homogeneous background light, we divide both sides of the above equation by A^c and perform one more min operation among all three color channels as follows:
\min_c ( \min_{y ∈ Ω(x)} I^c(y)/A^c ) = t̃(x) \min_c ( \min_{y ∈ Ω(x)} J^c(y)/A^c ) + (1 − t̃(x))    (5)
Following Ref. [11], let V(x) = A^c(1 − t(x)) be the transmission veil, and let W(x) = min_c(I^c(x)) be the minimum color component of I(x); we have 0 ≤ V(x) ≤ W(x). For a grayscale image, W = I. Using the joint trilateral filter (JTF), we compute T(x) = median(x) − JTF_Ω(|W(x) − median(x)|), and then obtain V(x) = max{min[wT(x), W(x)], 0}, where w is a parameter in (0, 1). Finally, the transmission of each patch can be written as
t̃(x) = 1 − V(x)/A^c    (6)
The background light A^c is usually assumed to be the intensity of the brightest pixel in the image. In practice, however, this simple assumption often yields erroneous results due to the presence of self-luminous organisms. Consequently, in this paper we take the brightest value among all local minima as the background light A^c:
A^c = \max_{x ∈ I} \min_{y ∈ Ω(x)} I^c(y)    (7)

where I^c(y) are the local color components of I(x) in each patch.
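A minimal Python sketch of this estimation pipeline (Eqs. (3)–(7)) is given below; a plain median filter stands in for the joint trilateral filter of Section 2.5, a single scalar background light is used for simplicity, and the patch size and the parameter w are illustrative assumptions:

import numpy as np
from scipy.ndimage import median_filter, minimum_filter

def background_light(I, patch=15):
    # Eq. (7): the brightest value among the local minima of each channel
    return np.array([minimum_filter(I[..., c], size=patch).max() for c in range(3)])

def estimate_transmission(I, A, patch=15, w=0.95):
    W = I.min(axis=2)                                      # W(x) = min_c I^c(x)
    med = median_filter(W, size=patch)                     # local median of W
    T = med - median_filter(np.abs(W - med), size=patch)   # veil estimate T(x); median stands in for JTF
    V = np.clip(np.minimum(w * T, W), 0.0, None)           # V(x) = max{min[wT(x), W(x)], 0}
    return 1.0 - V / A.max()                               # Eq. (6): scalar A^c as a simplification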
2.3. Bilateral filter

In this subsection, we briefly review the bilateral filter. Tomasi and Manduchi [12] proposed bilateral filtering in 1998. Bilateral filtering smooths images while preserving edges by means of a nonlinear combination of nearby image values; the method is non-iterative, local, and simple. The traditional bilateral filter weights pixels based on both their spatial distance from the center pixel and their distance in tone. The domain filter weights pixels based on their distance from the center,

t(x − y) = e^{−(x−y)(x−y)/2σ_D^2}    (8)
where x and y denote pixel spatial positions and the spatial scale is set by σ_D. The range filter weights pixels based on the photometric difference,
w(f(x) − f(y)) = e^{−(f(x)−f(y))(f(x)−f(y))/2σ_R^2}    (9)
where f(·) denotes the image tonal values and the degree of tonal filtering is set by σ_R. The bilateral filter can be written as
h(x) = \frac{\int_{R^d} f(y) t(x − y) w(f(x) − f(y)) dy}{\int_{R^d} t(x − y) w(f(x) − f(y)) dy}    (10)

Note that kernels other than Gaussian kernels are not excluded.
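For reference, a direct (brute-force) Python implementation of Eqs. (8)–(10) for a grayscale image is sketched below; it is written for clarity rather than speed, and the kernel scales are arbitrary illustrative values:

import numpy as np

def bilateral_filter(f, sigma_d=3.0, sigma_r=0.1, radius=6):
    H, W = f.shape
    out = np.zeros_like(f)
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    spatial = np.exp(-(dx**2 + dy**2) / (2 * sigma_d**2))            # domain kernel, Eq. (8)
    pad = np.pad(f, radius, mode='reflect')
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            tonal = np.exp(-(patch - f[i, j])**2 / (2 * sigma_r**2)) # range kernel, Eq. (9)
            k = spatial * tonal
            out[i, j] = (k * patch).sum() / k.sum()                  # normalized average, Eq. (10)
    return out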
2.4. Trilateral filter

Bilateral filtering [12] has been applied to smooth images while preserving edges. However, to avoid over-smoothing structures of sizes comparable to the image resolution, a narrow spatial window has to be used, which leads to the necessity of performing more iterations in the filtering process. The standard Gaussian bilateral filter is given by
f̃(x) = \frac{1}{g(x)} \int_{Ω} G_{σ_s}(y) G_{σ_r}(f(x − y) − f(x)) f(x − y) dy    (11)
where g(x) = \int_{Ω} G_{σ_s}(y) G_{σ_r}(f(x − y) − f(x)) dy is the normalization coefficient, G_{σ_s} is the Gaussian spatial kernel, and G_{σ_r} is the one-dimensional Gaussian range kernel. Assume the intensity values f(x) are restricted to the interval [−T, T]; G_{σ_r} is approximated by raised-cosine kernels. This is motivated by the observation that, for all −T ≤ s ≤ T,
\lim_{N → ∞} [ \cos( cs / (q\sqrt{N}) ) ]^N = \exp( −c^2 s^2 / (2q^2) )    (12)
where c = π/2T and q = cσ_r are used to control the variance of the target Gaussian on the right, and to ensure that the raised cosines on the left are non-negative and unimodal on [−T, T] for every N. The trigonometric-function-based bilateral filter [13] makes it possible to express the otherwise nonlinear transform in Eq. (11) as a superposition of Gaussian convolutions, applied to simple point-wise transforms of the image, through a series of spatial filterings,
(F_0 * G_{σ_r})(x), (F_1 * G_{σ_r})(x), …, (F_N * G_{σ_r})(x)    (13)
where the image stack F_0(x), F_1(x), …, F_N(x) is obtained from point-wise transforms of f(x). Each of these Gaussian filterings is computed using an O(1) algorithm, so the overall algorithm has O(1) complexity.
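The limit in Eq. (12) is easy to verify numerically. The following Python sketch compares the raised-cosine kernel with the target Gaussian for growing N; the values of T and σ_r are chosen arbitrarily for illustration:

import numpy as np

T, sigma_r = 1.0, 0.3
c = np.pi / (2 * T)                               # c = pi/2T
q = c * sigma_r                                   # q = c*sigma_r
s = np.linspace(-T, T, 101)
gauss = np.exp(-c**2 * s**2 / (2 * q**2))         # target Gaussian range kernel
for N in (1, 10, 100, 1000):
    cosN = np.cos(c * s / (q * np.sqrt(N)))**N    # raised-cosine approximation
    print(N, np.abs(cosN - gauss).max())          # approximation error shrinks with N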
2.5. Joint trilateral filter

In this subsection, we propose the joint trilateral filter (JTF) to overcome gradient reversal artifacts. The filtering is performed under the guidance of an image G, which can be another reference image or the input image I itself. Let I_p and G_p be the intensity values at pixel p of the minimum-channel image and the guidance image, respectively, and let w_k be the kernel window centered at pixel k. To be consistent with the bilateral filter, the JTF is formulated as

JTF(I)_p = \frac{\sum_{q ∈ w_k} W_{pq}^{JTF}(G) I_q}{\sum_{q ∈ w_k} W_{pq}^{JTF}(G)}    (14)
where the kernel weight function W_{pq}^{JTF}(G) can be written as
W_{pq}^{JTF}(G) = \frac{1}{|w|^2} \sum_{k:(p,q) ∈ w_k} ( 1 + \frac{(G_p − μ_k)(G_q − μ_k)}{σ_k^2 + ε} )    (15)
where μ_k and σ_k^2 are the mean and variance of the guidance image G in the local window w_k, and |w| is the number of pixels in this window. When G_p and G_q are concurrently on the same side of an edge, the weight assigned to pixel q is large; when they are on different sides, a small weight is assigned to pixel q. In this paper, we adopt the fast O(1) bilateral filter of Section 2.4 to reduce the computational complexity.
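Because the weights in Eq. (15) coincide with the guided filter kernel of He et al. [7], the output of Eq. (14) can be computed with box filters, without forming W_{pq}^{JTF} explicitly. The following Python sketch follows that reading; the window radius r and the regularizer ε are illustrative values:

import numpy as np
from scipy.ndimage import uniform_filter

def joint_filter(I, G, r=8, eps=0.04):
    box = lambda x: uniform_filter(x, size=2 * r + 1)   # O(1) box (mean) filter
    mu_G, mu_I = box(G), box(I)
    var_G = box(G * G) - mu_G**2                        # sigma_k^2 in Eq. (15)
    cov_GI = box(G * I) - mu_G * mu_I
    a = cov_GI / (var_G + eps)                          # per-window linear coefficients
    b = mu_I - a * mu_G
    return box(a) * G + box(b)                          # average over overlapping windows

Here G is the guidance image and I is the minimum-channel image to be refined; averaging the coefficients a and b over overlapping windows realizes the normalization in Eq. (14).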
2.6. Recovering the scene radiance

With the transmission depth map, we can recover the scene radiance according to Eq. (1). We restrict the transmission t(x) to a lower bound t_0, which means that a small amount of haze is preserved in very dense haze regions. The final scene radiance J(x) is written as

J(x) = \frac{I^c(x) − A^c}{\max(t(x), t_0)} + A^c    (16)
Typically, we choose t_0 = 0.1. The recovered image may be too dark, so we apply color histogram processing for contrast enhancement.
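A Python sketch of this recovery step is shown below, with t_0 = 0.1 as in the text; the per-channel percentile stretch is only one plausible reading of the color histogram processing mentioned above:

import numpy as np

def recover_radiance(I, t, A, t0=0.1):
    t = np.maximum(t, t0)[..., None]           # lower-bound the transmission, Eq. (16)
    J = (I - A) / t + A                        # invert the imaging model channel-wise
    return np.clip(J, 0.0, 1.0)

def stretch_contrast(J, lo=1.0, hi=99.0):
    # simple per-channel percentile stretch to brighten the dark recovered image
    pl, ph = np.percentile(J, [lo, hi], axis=(0, 1))
    return np.clip((J - pl) / (ph - pl + 1e-6), 0.0, 1.0)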
3. Experimental results

The performance of the proposed algorithm is evaluated both objectively and subjectively utilizing ground-truth color patches, and the proposed method is compared with state-of-the-art methods. Both evaluations demonstrate the superior haze-removal and color-balancing capabilities of the proposed method. In the experiments, we compare our method with the work of Fattal, He, and Xiao. We select patch radius r = 8 and ε = 0.2 × 0.2, running on Windows XP with an Intel Core 2 (2.0 GHz) CPU and 1 GB RAM in MATLAB 7.1.

Fig. 2 shows the results of the different methods; the size of the images is 345 × 292 pixels. The drawbacks of Fattal's method are elaborated in Ref. [6]. Compared with He's method, our approach performs better: in He's approach, visible mosaic artifacts are observed because of the soft matting, some regions are too dark (e.g., the right corner of the coral reefs), and some haze is not removed (e.g., the center of the image). There are also some halos around the coral reefs in Xiao's model [14]. Our approach not only works well for haze removal but also incurs little computational cost. The refined transmission depth maps of these methods can be compared clearly in Fig. 3.
Fig. 2. Different models for underwater image dehazing. (a) Input image. (b) Fattal's model. (c) He's model. (d) Xiao's model. (e) Our proposed model. (f) Result after contrast enhancement.
The visual assessment demonstrates that our proposed method performs well. In addition to the visual analysis of these figures, we conducted a quantitative analysis, mainly from the perspective of mathematical statistics and the statistical parameters of the images (see Table 1). This analysis includes the High Dynamic Range Visual Difference Predictor 2 (HDR-VDP2) [15], JPEG-QI [16], and CPU time. In HDR-VDP2, "Ampl." denotes the amplification of contrast in the image (values between 0 (worst) and 100 (best)), and "Loss" denotes the loss of visible contrast (values between 0 (best) and 100 (worst)). Similarly, the Q-MOS value lies between 0 (worst) and 100 (best). Table 1 displays the Q-MOS of the pixels filtered by applying HDR-VDP2-IQA, which uses the input image as the reference image in these experiments. The results show that the proposed method is the most effective.

Fig. 4 presents the probability-of-detection map produced by the HDR-VDP2-IQA measurement. The metric employs a model of the human visual system that is sensitive to three types of structural change: loss of visible contrast (green)¹, amplification of invisible contrast (blue), and reversal of visible contrast (red). Generally speaking, our approach mainly amplifies contrast (blue), while few locations exhibit reversal (red) and only a few exhibit loss (green) of contrast. This merit is also confirmed by the experiments in Fig. 5, where the size of the images is 397 × 264 pixels. Fig. 5 shows the depth maps of underwater fish, evaluated with the HDR-VDP2-IQA index. We see that our method almost exclusively amplifies contrast (blue), with nearly no reversal (red) or loss (green) of contrast.
¹ For interpretation of color in Figs. 4 and 5, the reader is referred to the web version of this article.
Fig. 3. Transmission maps and zoomed-in images. (a) Fattal's model. (b) He's model. (c) Xiao's model. (d) Our proposed model.
Table 1
Quantitative analysis of dehazed images.

Methods              CPU time (s)   JPEG-QI   HDR-VDP2-IQA (%)
Fattal (2008)        20.05          8.79      1.8 (Ampl.), 16.7 (Loss)
He et al. (2010)     30.85          8.82      5.9 (Ampl.), 24.3 (Loss)
Xiao et al. (2012)   14.64          7.23      10.6 (Ampl.), 30.8 (Loss)
Our proposed         4.42           9.31      35.4 (Ampl.), 1.8 (Loss)
Fig. 4. Probability of detection map of coral reefs. (a) Fattal’s model. (b) He’s model. (c) Xiao’s model. (d) Our proposed model.
Fig. 5. Probability of detection map of underwater fish. (a) Fattal’s model. (b) He’s model. (c) Xiao’s model. (d) Our proposed model.
Fig. 6 shows underwater tank images of size 380 × 287 pixels. Fattal's model does not perform well on underwater images: the result is quite blurred. He's model performs well, but the iron chain in the lower right corner of the image is hardly visible. In Xiao's model, the image still contains many suspended solids and its quality is poor. In our processed image, the cable, iron chain, and other small objects can be seen clearly. As shown in Table 2, the processing speed of our proposed method is the fastest, because the proposed joint trilateral filter can be computed efficiently in O(N). We conclude that our model handles underwater image and video processing better than the state-of-the-art models.

In the fourth experiment, we simulate underwater object recognition in a tank in a darkroom. We use an OLYMPUS STYLUS TG-2 (15 m/50 ft) underwater camera to capture images. The distance between the light/camera and the objects is 3 m. The size of the images is 640 × 480 pixels. The captured images contain many suspended solids caused by plankton,
Fig. 6. Enhanced underwater tank images. (a) Fattal’s model. (b) He’s model. (c) Xiao’s model. (d) Our proposed model.
Table 2
CPU time (s) of dehazing for the images in Figs. 5 and 6.

Methods              Fish    Tank
Fattal (2008)        17.14   14.40
He et al. (2010)     32.20   34.54
Xiao et al. (2012)   12.59   10.93
Our proposed         6.42    6.75
and electrical noise caused by the optoelectronic imaging device. The artificial light also causes some haze in the right corner of the image. The processed result is presented in Fig. 7. Before processing, the captured image is seriously distorted; after the proposed dehazing method, the image is much clearer than with the other methods. Our algorithm is fast, costing only about 30 ms using OpenCV 2.4 and Intel TBB parallel algorithms. In Table 3, the PSNR, SSIM [17], and Q-MOS of our method are the highest among all methods, with competitive CPU running time. The results indicate that our approach not only works well for haze removal but also achieves low computation time.
4. Conclusions

In this paper, we explored and successfully implemented a simple and effective image dehazing technique for underwater imaging systems. We proposed a simple prior based on the difference in attenuation among the color channels, which inspired us to estimate the transmission depth map through the red color channel of underwater images. Another contribution is to compensate for the attenuation discrepancy along the propagation path and to apply the joint trilateral filter to the transmission depth map. The trilateral filter utilizes reflectivity in addition to spatial and intensity information, so that geometrical features such as jumps and edges are preserved during smoothing. Besides this, the trilateral filter achieves edge-preserving smoothing with a narrow spatial window in only a few iterations. Moreover, the proposed joint trilateral filter removes the overly dark regions of underwater images by refining the transmission depth map with the trilaterally filtered source image and the estimated transmission depth map. The experiments also demonstrate that our algorithm is faster than state-of-the-art algorithms, which makes it suitable for practical real-time underwater imaging systems in terms of both image quality and computational cost. The proposed algorithm still has some limitations: the possible presence of an artificial lighting source is not considered, and the quality assessment may be unsuitable for underwater images. In future work, we will attempt to handle artificial lighting by utilizing data fusion [18] based methods (e.g., Ancuti et al. [19], who use two input images; the color correction method [20]; and Treibitz and Schechner [3], who fuse differently illuminated images), and to design a new index for underwater image quality assessment.
Fig. 7. Simulations of underwater dehazing. (a) Captured image in clean water. (b) Captured image in turbid water. (c) Fattal’s model. (d) He’s model. (e) Xiao’s model. (f) Our proposed model.
Table 3
Quantitative analysis of simulation images.

Methods              CPU time (s)   PSNR     SSIM     Q-MOS
Fattal (2008)        20.61          11.207   0.4225   46.1112
He et al. (2010)     10.66          16.935   0.6992   88.4452
Xiao et al. (2012)   5.21           24.626   0.8030   92.3767
Our proposed         5.46           28.337   0.8859   93.8854
Acknowledgments

This work was partially supported by Grants-in-Aid for Scientific Research of Japan (No. (B)19500478) and by a Research Fellowship for Young Scientists from the Japan Society for the Promotion of Science (No. 25-10713).

References

[1] Ancuti C, Ancuti CO, Haber T, Bekaert P. Enhancing underwater images and videos by fusion. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2012. p. 81–8.
[2] Schechner YY, Averbuch Y. Regularized image recovery in scattering media. IEEE Trans Pattern Anal Mach Intell 2007;29(9):1655–60.
[3] Treibitz T, Schechner YY. Turbid scene enhancement using multi-directional illumination fusion. IEEE Trans Image Process 2012;21(11):4662–7.
[4] Hou W, Gray DJ, Weidemann AD, Fournier GR, Forand JL. Automated underwater image restoration and retrieval of related optical properties. In: Proceedings of the IEEE international symposium on geoscience and remote sensing; 2007. p. 1889–92.
[5] Fattal R. Single image dehazing. In: SIGGRAPH; 2008. p. 1–9.
[6] He K, Sun J, Tang X. Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell 2011;33(12):2341–53.
[7] He K, Sun J, Tang X. Guided image filtering. In: Proceedings of the 11th European conference on computer vision, vol. 1; 2010. p. 1–14.
[8] Lan X, Zhang L, Shen H, Yuan Q, Li H. Single image haze removal considering sensor blur and noise. EURASIP J Adv Signal Process 2013;2013:86.
[9] Schechner YY, Karpel N. Clear underwater vision. In: Proceedings of the IEEE computer society conference on computer vision and pattern recognition, vol. 1; 2004. p. 536–43.
[10] Chambah M, Semani D, Renouf A, Courtellemont P, Rizzi A. Underwater color constancy: enhancement of automatic live fish recognition. In: Proceedings of SPIE, vol. 5293; 2004. p. 157–68.
[11] Tarel J, Hautiere N. Fast visibility restoration from a single color or gray level image. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2008. p. 2201–08.
[12] Tomasi C, Manduchi R. Bilateral filtering for gray and color images. In: Proceedings of the IEEE international conference on computer vision; 1998. p. 839–46.
[13] Chaudhury KN, Sage D, Unser M. Fast O(1) bilateral filtering using trigonometric range kernels. IEEE Trans Image Process 2011;20(12):3376–82.
[14] Xiao C, Gan J. Fast image dehazing using guided joint bilateral filter. Visual Comput 2012;28(6–8):713–21.
[15] Mantiuk R, Kim KJ, Rempel AG, Heidrich W. HDR-VDP2: a calibrated visual metric for visibility and quality predictions in all luminance conditions. ACM Trans Graph 2011;30(4):40–52.
[16] Wang Z, Sheikh HR, Bovik AC. No-reference perceptual quality assessment of JPEG compressed images. In: Proceedings of the international conference on image processing, vol. 1; 2002. p. 477–80.
[17] Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 2004;13(4):600–12.
[18] Lu H, Zhang L, Serikawa S. Maximum local energy: an effective approach for image fusion in beyond wavelet transform domain. Comput Math Appl 2012;64(5):996–1003.
[19] Ancuti CO, Ancuti C, Bekaert P. Effective single image dehazing by fusion. In: Proceedings of the 17th IEEE international conference on image processing; 2010. p. 3541–5.
[20] Lu H, Li Y, Serikawa S. Underwater image enhancement using guided trigonometric bilateral filter and fast automatic color correction. In: Proceedings of the 20th IEEE international conference on image processing; 2013. p. 3412–6.

Seiichi Serikawa received the B.S. and M.S. degrees in Electronic Engineering from Kumamoto University in 1984 and 1986, and the Ph.D. degree in Electronic Engineering from Kyushu Institute of Technology in 1994. He is currently the Dean of the Department of Electrical Engineering and Electronics at Kyushu Institute of Technology. His current research interests include computer vision, sensors, and robotics.

Huimin Lu received the B.S. degree in Electronics Information Science and Technology from Yangzhou University in 2008, and M.S. degrees in Electrical Engineering from Kyushu Institute of Technology and Yangzhou University in 2011. He is currently a Ph.D. candidate at Kyushu Institute of Technology. His current research interests include computer vision, communication, and deep-sea information processing.