Structure from Motion of Underwater Scenes Considering Image Degradation and Refraction


Available online at www.sciencedirect.com

ScienceDirect IFAC PapersOnLine 52-22 (2019) 78–82

Xiaorui Qiao ∗, Yonghoon Ji ∗∗, Atsushi Yamashita ∗, Hajime Asama ∗

∗ The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan (e-mail: qiao, yamashita, [email protected]).
∗∗ Chuo University, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan (e-mail: [email protected])

Abstract: Structure from Motion (SfM) can reconstruct three-dimensional (3D) structures using only image sequences. However, SfM applied to the underwater environment is different from the air environment because image formation in underwater environments suffers from two major problems: image degradation and refraction. Consequently, images captured in underwater environments lose contrast and are hazy and geometrically distorted. To achieve an accurate 3D reconstruction in such underwater environments, the image degradation and refraction problems should be solved simultaneously. In this work, we propose a systematic SfM pipeline containing image enhancement and refraction-reduced SfM. The image enhancement part improves the image quality for the following steps in the reconstruction part. We confirm the effectiveness of the proposed pipeline using images captured by a submerged robot in an extreme underwater environment, the disaster area of Unit 3 Primary Containment Vessel (PCV) at Fukushima Daiichi Nuclear Power Station. Experimental results show that our proposed method can successfully reconstruct the structures in that extreme underwater environment.

Copyright © 2019. The Authors. Published by Elsevier Ltd. All rights reserved.
Keywords: Underwater robots, 3D Reconstruction, Structure from Motion, Image Enhancement, Refraction

1. INTRODUCTION

Three-dimensional (3D) reconstruction has been studied intensively for underwater environments (Murino et al. (1998); Brandou et al. (2007); Yang et al. (2013); Wang et al. (2016); Xu et al. (2016)). Structure from Motion (SfM), as a 3D reconstruction technique, is easy to implement because it uses only a single camera. Generally, the camera is confined in a waterproof housing in underwater environments. Thus, the light rays are refracted twice before entering the camera: once at the water-housing interface, and once at the housing-air interface. Images captured from such scenes suffer from distortion due to this refraction. Severe distortions occur if the refractive distortions are not properly addressed.

Fig. 1. Concept of an underwater environment.

2405-8963 Copyright © 2019. The Authors. Published by Elsevier Ltd. All rights reserved. Peer review under responsibility of International Federation of Automatic Control. 10.1016/j.ifacol.2019.11.051

Moreover, if floating particles and haze exist in the underwater environment, the environment can be extreme for carrying out vision tasks, such as detection, recognition and visual simultaneous localization and mapping (SLAM). Figure 1 depicts a typical underwater scene with image degradation and refraction. Suspended particles are floating around the object, and the light ray is refracted twice before arriving at the image sensor. The quality of the captured image is reduced by floating particles and haze. Geometric distortion also occurs due to the refraction. Thus, the image formation in such an environment is quite different from that in an air environment. The image formation suffers from two major problems simultaneously: image degradation and refraction. Consequently, the captured images lose contrast and are hazy and geometrically distorted. In the conventional SfM (Hartley and Zisserman (2003)), the image sequences are regarded as clear scenes, and the image formation process adopts the perspective camera model. In the extreme underwater environment, if the two major problems mentioned above are not properly addressed, serious problems may



Xiaorui Qiao et al. / IFAC PapersOnLine 52-22 (2019) 78–82

happen in the 3D reconstruction. One is insufficient feature matches caused by the image degradation. The number of feature matches affects the camera pose estimation and the number of sparse points in the 3D reconstruction. The other is distortion in the 3D reconstruction caused by the refraction. Therefore, to achieve 3D reconstruction in extreme underwater environments, the image degradation and refraction problems should be solved simultaneously.

In this work, we propose a novel SfM pipeline to solve the two problems simultaneously for underwater environments with image degradation and refraction. First, the image sequences are enhanced by the enhancement method from our previous work (Qiao et al. (2018)). Meanwhile, the camera system is modeled using ray tracing (Schmitt et al. (1988)) and Snell's law. Then, the camera pose can be estimated based on the modeled system using the enhanced images. Finally, the 3D reconstruction can be obtained by triangulation. Experiments using investigation video from Unit 3 Primary Containment Vessel (PCV) at Fukushima Daiichi Nuclear Power Station show that the proposed pipeline can solve the image degradation and refractive distortion problems simultaneously and reconstruct the object successfully.

The remainder of this paper is organized as follows. In Section 2, previous work related to underwater 3D reconstruction is presented. Section 3 presents an overview of the proposed SfM pipeline. Section 4 gives a detailed description of the proposed pipeline. In Section 5, experiments using investigation video from Unit 3 PCV at Fukushima Daiichi Nuclear Power Station confirm the effectiveness of the proposed SfM pipeline. Section 6 presents the conclusion.

2. RELATED WORK

For 3D reconstruction in underwater environments, the related studies are mainly classified into two categories.

2.1 Considering Image Degradation

Research on solving the underwater image degradation problem is widely studied to reduce noise (Yamashita et al.
(2006); Farhadifard et al. (2017)) and to dehaze images (Ancuti et al. (2012); Drews Jr et al. (2013)). 3D reconstruction for clear underwater environments usually does not consider image enhancement as a first step. However, in turbid underwater environments, the image degradation problem should be taken into consideration first. Li et al. (2015) presented a method to simultaneously estimate scene depth and dehaze using a foggy video sequence. In this study, the depth cues from stereo matching and the fog information reinforce each other and outperform conventional stereo or dehazing algorithms. Xu et al. (2016) developed an underwater 3D object reconstruction model for a continuous video stream. In this study, image enhancement is taken into consideration as the first step. However, they used only the guided filter to acquire clear images; for hazy, low-contrast underwater images, the guided filter alone is unable to achieve satisfactory restoration.


2.2 Considering Refraction

To deal with the refractive problem, the camera systems were modeled by approximate methods in early works, such as focal length adjustment (Ferreira et al. (2005)), lens radial distortion (Pizarro et al. (2003)) and a combination of the two (Lavest et al. (2000)). However, due to the non-linearity of the camera system, these methods are inadequate to solve the refraction problem. Refractive Structure from Motion (RSfM), which models the camera system based on ray tracing and Snell's law, has been studied in recent years. Most studies focus on flat camera housings (Jordt-Sedlazeck and Koch (2013); Shibata et al. (2015, 2018)). However, a waterproof housing can be designed with various shapes depending on the application, such as the cylindrical camera housing equipped on the robot sent to Unit 3 PCV for investigation (Adachi et al. (2018)). Consequently, specific constraints for a flat camera housing, such as all the light rays lying on the same plane, become invalid in the general situation. Thus, the camera system modeling, the calibration method and the camera pose estimation method should be extended to a more general situation.

Gedge et al. (2011) proposed refractive epipolar geometry to find stereo correspondences for underwater image matching; the corresponding point of a pixel in one image is restricted to a curve, not a line. Jordt-Sedlazeck and Koch (2013) considered the refractive effects of a thick flat housing for 3D reconstruction, and virtual camera errors were proposed for the bundle adjustment. Kang et al. (2017) proposed two new concepts, the "Ellipse of Refrax" and the "Refractive Depth" of a scene point, and can simultaneously perform underwater camera calibration and 3D reconstruction automatically without using any calibration object; however, the camera rotation should be known first.
Although a multitude of methods can recover 3D structures in underwater environments, a miniature single-camera system using the SfM technique is suited to implementation in a narrow space. To achieve an accurate underwater 3D reconstruction, the image degradation and refractive distortion problems should be solved simultaneously. Generally, the current studies focus on only one problem, image degradation or refraction. Image enhancement is essential in turbid water environments because low image quality affects the feature extraction in the SfM pipeline. Refraction always exists when the camera captures images in water. The current studies mainly focus on flat refraction, yet the camera housing can be designed with various shapes depending on the application. Thus, a method considering refraction for a general camera housing is required. Therefore, underwater SfM is different from the conventional SfM due to the image degradation and refraction, and an approach addressing these two factors is required to obtain an accurate 3D reconstruction.

3. OVERVIEW OF THE PROPOSED SFM PIPELINE

As shown in Fig. 2, the proposed SfM pipeline considers image degradation and refraction. The blue parts differ from the conventional SfM pipeline. Based on the two major problems


Fig. 4. Conceptual image of RSfM.

Fig. 2. Proposed SfM pipeline.

existing in underwater environments, the different parts are summarized into two aspects. First, the image enhancement part is added to the pipeline to solve the image degradation problem and improve the image quality in underwater environments. The objective of this part is to achieve "clear" images for feature detection and feature matching. Second, refraction is considered in the 3D reconstruction part. The corresponding steps, such as calibration, pose estimation, and geometric reconstruction, use the modeled camera system to eliminate the refractive effects on these steps. In the refractive calibration part, the parameters between a camera and its housing are estimated. These parameters become known parameters for the camera system modeling. Then, based on the modeled camera system, the camera poses can be estimated for the following 3D reconstruction.

4. DETAIL OF THE PROPOSED SFM PIPELINE

4.1 Underwater Image Enhancement

In underwater environments, occlusion noise, such as floating particles and water bubbles, and haze exist. The main process of image enhancement is depicted in Fig. 3. There are two major steps for image enhancement: one is occlusion noise removal based on a multi-frame method, and the other is haze removal based on single-frame estimation.

Fig. 3. Image enhancement flowchart.
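As a concrete illustration of the multi-frame step in this flowchart, the following is a minimal NumPy sketch, not the paper's implementation: it assumes the frames are already motion-compensated, and a simple temporal-difference threshold stands in for the chromatic test and CTDF.

```python
import numpy as np

def remove_occlusion_noise(frames, ref_idx, thresh=30.0):
    """Detect transient occluders (particles, bubbles) in a stack of aligned
    grayscale frames by temporal differencing, then fill the detected regions
    from a temporal-median background estimate."""
    stack = np.stack(frames).astype(np.float32)
    background = np.median(stack, axis=0)           # static-scene estimate
    ref = stack[ref_idx]
    noise_mask = np.abs(ref - background) > thresh  # temporal discontinuity
    restored = np.where(noise_mask, background, ref)
    return restored, noise_mask
```

A bright particle visible in only one of several aligned frames differs strongly from the temporal median there, so it is masked and replaced by the background estimate.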

Occlusion Noise Removal Multiple frames can be obtained from the video sequences. There is slight motion between consecutive frames due to the motion of the underwater vehicle. Motion compensation is implemented to obtain aligned frames. Thus, the differential images can be acquired by subtraction and thresholding. The noise regions can be detected based on chromatic properties and the Combined Temporal Discontinuity Field (CTDF) (Qiao et al. (2018)). The detected noise regions are then restored from background pixels.

Haze Removal The noise-free image is the input of the dehazing process. The artificial light map is estimated based on the maximum reflectance prior (Zhang et al. (2017)), and the transmission map can be estimated from the red channel. Then, the haze can be removed based on the underwater light model (Jaffe (1990)).

4.2 RSfM

Refraction is considered in the calibration, camera modeling and camera pose estimation steps.

Refractive Calibration As shown in Fig. 4, a camera and a housing together are called the "local camera system" in this study. The center of the camera system is set at H. Estimation of the relative pose of the camera to the camera housing is called refractive calibration. After the refractive calibration, the local camera system can be modeled accurately. The back-projection (from an image pixel to a 3D point) can be represented based on ray tracing (Schmitt et al. (1988)) and Snell's law. A checkerboard pattern is used, following the method of Zhang (2000). When the checkerboard is captured by the camera system, a corner point on the checkerboard, projected onto the image sensor, corresponds to an image pixel. If we project the image pixel back toward the corner point, there is an error between the back-projected point and the corresponding ground-truth point on the checkerboard. Minimization of this back-projection error is used to estimate the unknown parameters.
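The haze-removal step of Section 4.1 amounts to inverting a simplified underwater image formation model, I = J·t + B·(1 − t). The following is a minimal sketch, not the paper's implementation: the background light B and the transmission map t are assumed to be already estimated (the paper obtains them from the maximum reflectance prior and the red channel, respectively).

```python
import numpy as np

def dehaze(I, B, t, t_min=0.1):
    """Invert the simplified formation model I = J*t + B*(1-t).
    I: HxWx3 observed image in [0,1]; B: per-channel background light (3,);
    t: HxW transmission map, clamped to t_min to limit noise amplification."""
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (I - B) / t + B
    return np.clip(J, 0.0, 1.0)
```

Clamping the transmission is a standard safeguard: where t is near zero, the division would otherwise amplify sensor noise in the recovered radiance J.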
Camera System Modeling After obtaining the parameters of the local camera system, the camera system can be modeled by ray tracing (Schmitt et al. (1988)) and Snell's law. As shown in Fig. 4, each feature point has a corresponding outer ray. Thus, the corresponding outer rays can be used for the relative camera pose estimation.
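The refraction applied at each housing interface during this ray tracing can be written in vector form. The sketch below is illustrative (the function name and interface setup are not from the paper):

```python
import numpy as np

def refract(d, n, eta):
    """Snell's law in vector form: refract unit direction d at an interface
    with unit normal n (oriented against d), where eta = n1/n2 is the ratio
    of refractive indices. Returns None on total internal reflection."""
    cos_i = -float(np.dot(n, d))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```

Chaining two such calls, one at the air-housing interface and one at the housing-water interface, traces an image ray out to the outer ray used in the pose estimation below.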




Camera Pose Estimation The camera pose estimation is based on the modeled camera system. A camera system with a waterproof housing is shown in Fig. 4. The origin of the local camera coordinate system, H, is set at the camera housing center. After the camera housing calibration, the relative pose of the camera to the camera housing is estimated, and the outer intersection point and outer ray can be obtained by ray tracing (Schmitt et al. (1988)) and Snell's law. The camera system undergoes a rotation R and a translation T. The first local camera system is assumed to be the world coordinate system. In the first local camera system, the outer line can be denoted by

L = (d, m) = (r_o, X_mo × r_o), (1)

where d and m are the direction and moment of the Plücker line, respectively.
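Eq. (1) can be mirrored directly in code. A small sketch with hypothetical helper names, assuming an outer intersection point X and outer ray direction r:

```python
import numpy as np

def plucker_line(X, r):
    """Plücker coordinates L = (d, m) of the outer ray through point X with
    direction r, cf. Eq. (1): d = r / |r|, m = X x d."""
    d = np.asarray(r, float) / np.linalg.norm(r)
    return d, np.cross(X, d)

def contains(L, P, eps=1e-9):
    """A point P lies on L = (d, m) iff P x d equals the moment m."""
    d, m = L
    return np.linalg.norm(np.cross(P, d) - m) < eps
```

The moment m encodes the line's position: every point on the line yields the same cross product with the direction, which is what `contains` checks.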


Fig. 5. Enhanced result: a) the original image, b) the enhanced result.

Similarly, in the second local camera system, the outer line can be denoted by

L' = (d', m') = (r'_o, X'_mo × r'_o). (2)

In order to determine the relative camera pose of the second camera, the Plücker vector of the outer line in the second coordinate system is transformed to the world coordinate system:

^w d' = R r'_o, (3)

and

^w m' = (R X'_mo + T) × (R r'_o) = R m' + [T]_× R r'_o. (4)

Thus, the outer line ^w L' in the world coordinate system can be written as

^w L' = (R r'_o, R m' + [T]_× R r'_o). (5)

Plücker lines L and ^w L' intersect at point X. The intersection of the two lines is determined by

d · ^w m' + m · ^w d' = 0. (6)

The following equation can be obtained according to (4), (5), and (6):

r_o^T (R m' + [T]_× R r'_o) + m^T R r'_o = [r_o^T  m^T] E_G [r'_o; m'] = 0, (7)

where [T]_× is the skew-symmetric matrix of T and E_G = [[T]_× R, R; R, 0_{3×3}] is the generalized essential matrix. This equation is called the generalized epipolar constraint (Pless (2003)). Therefore, we can calculate T and R from (7).
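As a numerical sanity check of Eq. (7), two outer rays passing through the same scene point should make the generalized epipolar product vanish for the true (R, T). The values below are illustrative, not from the paper:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]x such that [v]x @ u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def generalized_essential(R, T):
    """E_G from Eq. (7): [[ [T]x R, R ], [ R, 0 ]]."""
    return np.block([[skew(T) @ R, R],
                     [R, np.zeros((3, 3))]])

def plucker_vec(X, r):
    """6-vector (d, m) of the line through X with direction r."""
    d = r / np.linalg.norm(r)
    return np.concatenate([d, np.cross(X, d)])

# Illustrative relative pose of the second system: X_world = R @ X_2 + T.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
T = np.array([0.5, -0.2, 0.1])

X = np.array([1.0, 2.0, 5.0])                                # scene point, world frame
L1 = plucker_vec(X, np.array([0.2, -0.1, 1.0]))              # outer ray, first system
L2 = plucker_vec(R.T @ (X - T), np.array([-0.3, 0.4, 1.0]))  # outer ray, second system
residual = L1 @ generalized_essential(R, T) @ L2             # Eq. (7), ~0
```

Since both lines pass through X, the scalar triple products in Eq. (7) cancel exactly and `residual` is zero up to floating-point error; in practice, (R, T) would be recovered by solving Eq. (7) over many ray correspondences.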

Geometric Reconstruction After obtaining the estimated poses and outer rays, the 3D reconstruction can be achieved by triangulation methods (Hartley and Sturm (1997)).

5. EXPERIMENTS

Experiments using real images were conducted to confirm the effectiveness of the proposed method. In the experiments, the images captured by the submersible robot

sent to Unit 3 PCV at Fukushima Daiichi Nuclear Power Station were used. The inside environment of Unit 3 PCV is extreme because the water is turbid and filled with floating particles. We use the proposed SfM pipeline to reconstruct a typical scene in this underwater environment, as shown in Fig. 5.

Fig. 6. Reconstructed object in the Unit 3 PCV at Fukushima Daiichi Nuclear Power Station.

5.1 Image Enhancement

Figure 5 shows an example of a result enhanced by the proposed method. The enhanced image has more contrast, less noise and less haze compared with the original image. The image quality is noticeably improved.

5.2 Two-view Reconstruction

Two enhanced frames are selected for the 3D reconstruction. The dense optical flow method (Farnebäck (2003)) is applied to obtain per-pixel correspondences between the two consecutive frames. We remove the outliers in each local patch. Thus, a dense reconstruction can be obtained. As shown in Fig. 6, the object in the scene is successfully reconstructed using the proposed method.

6. CONCLUSION

We herein proposed a novel SfM pipeline for underwater environments to address the image degradation and refraction problems. To solve the two problems simultaneously, image enhancement and refractive SfM methods are combined for the 3D reconstruction. We confirmed the effectiveness of the proposed SfM pipeline using image data from an extreme underwater environment, the inside of


Unit 3 PCV at Fukushima Daiichi Nuclear Power Station. The results show that the proposed pipeline can successfully reconstruct the underwater scene with enhanced images. They also show the effectiveness of the proposed method in underwater environments with image degradation and refraction.

ACKNOWLEDGEMENTS

The authors would like to thank IRID and TOSHIBA for supporting this research. This work was supported in part by JSPS KAKENHI Grant Number JP18H03309.

REFERENCES

Adachi, H., Okada, S., and Matsuzaki, K. (2018). Investigation robots for inside primary containment vessel of Fukushima Daiichi Nuclear Power Station. Journal of the Robotics Society of Japan, 36(6), 395–398.
Ancuti, C., Ancuti, C.O., Haber, T., and Bekaert, P. (2012). Enhancing underwater images and videos by fusion. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, 81–88.
Brandou, V., Allais, A.G., Perrier, M., Malis, E., Rives, P., Sarrazin, J., and Sarradin, P.M. (2007). 3D reconstruction of natural underwater scenes using the stereovision system IRIS. In Proceedings of OCEANS 2007 - Europe, 1–6.
Drews Jr, P., do Nascimento, E., Moraes, F., Botelho, S., and Campos, M. (2013). Transmission estimation in underwater single images. In 2013 IEEE International Conference on Computer Vision Workshops, 825–830.
Farhadifard, F., Radolko, M., and von Lukas, U.F. (2017). Single image marine snow removal based on a supervised median filtering scheme. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, 280–287.
Farnebäck, G. (2003). Two-frame motion estimation based on polynomial expansion. In Scandinavian Conference on Image Analysis, 363–370.
Ferreira, R., Costeira, J.P., and Santos, J. (2005). Stereo reconstruction of a submerged scene. In Proceedings of the 2005 Iberian Conference on Pattern Recognition and Image Analysis, 102–109.
Gedge, J., Gong, M., and Yang, Y. (2011).
Refractive epipolar geometry for underwater stereo matching. In Proceedings of the 2011 Canadian Conference on Computer and Robot Vision, 146–152.
Hartley, R. and Zisserman, A. (2003). Multiple View Geometry in Computer Vision. Cambridge University Press.
Hartley, R.I. and Sturm, P. (1997). Triangulation. Computer Vision and Image Understanding, 68(2), 146–157.
Jaffe, J.S. (1990). Computer modeling and the design of optimal underwater imaging systems. IEEE Journal of Oceanic Engineering, 15(2), 101–111.
Jordt-Sedlazeck, A. and Koch, R. (2013). Refractive structure-from-motion on underwater images. In Proceedings of the 2013 IEEE International Conference on Computer Vision, 57–64.
Kang, L., Wu, L., Wei, Y., Lao, S., and Yang, Y. (2017). Two-view underwater 3D reconstruction for cameras with unknown poses under flat refractive interfaces. Pattern Recognition, 69, 251–269.

Lavest, J.M., Rives, G., and Lapresté, J.T. (2000). Underwater camera calibration. In Proceedings of the 2000 European Conference on Computer Vision, 654–668.
Li, Z., Tan, P., Tan, R.T., Zou, D., Zhou, S.Z., and Cheong, L.F. (2015). Simultaneous video defogging and stereo reconstruction. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, 4988–4997.
Murino, V., Trucco, A., and Regazzoni, C.S. (1998). A probabilistic approach to the coupled reconstruction and restoration of underwater acoustic images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1), 9–22.
Pizarro, O., Eustice, R.M., and Singh, H. (2003). Relative pose estimation for instrumented, calibrated imaging platforms. In Proceedings of the 2003 Digital Image Computing: Techniques and Applications, 601–612.
Pless, R. (2003). Using many cameras as one. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, II–587–93.
Qiao, X., Ji, Y., Yamashita, A., and Asama, H. (2018). Visibility enhancement for underwater robots based on an improved underwater light model. Journal of Robotics and Mechatronics, 30, 781–790.
Schmitt, A., Müller, H., and Leister, W. (1988). Ray tracing algorithms: theory and practice. In Theoretical Foundations of Computer Graphics and CAD, 997–1030.
Shibata, A., Fujii, H., Yamashita, A., and Asama, H. (2015). Absolute scale structure from motion using a refractive plate. In Proceedings of the IEEE/SICE International Symposium on System Integration, 540–545.
Shibata, A., Okumura, Y., Fujii, H., Yamashita, A., and Asama, H. (2018). Refraction-based bundle adjustment for scale reconstructable structure from motion. Journal of Robotics and Mechatronics, 30, 660–670.
Wang, Y., Negahdaripour, S., and Aykin, M.D. (2016). Calibration and 3D reconstruction of underwater objects with non-single-view projection model by structured light stereo imaging. Applied Optics, 55(24), 6564–6575.
Xu, X., Che, R., Nian, R., He, B., Chen, M., and Lendasse, A. (2016). Underwater 3D object reconstruction with multiple views in video stream via structure from motion. In Proceedings of OCEANS 2016 - Shanghai, 1–5.
Yamashita, A., Kato, S., and Kaneko, T. (2006). Robust sensing against bubble noises in aquatic environments with a stereo vision system. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, 928–933.
Yang, Y., Zheng, B., Zheng, H., Wang, Z., Wu, G., and Wang, J. (2013). 3D reconstruction for underwater laser line scanning. In 2013 MTS/IEEE OCEANS - Bergen, 1–3.
Zhang, J., Cao, Y., Fang, S., Kang, Y., and Chen, C.W. (2017). Fast haze removal for nighttime image using maximum reflectance prior. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, 7016–7024.
Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 1330–1334.