Autonomous inspection of underwater structures


Contents lists available at ScienceDirect

Robotics and Autonomous Systems journal homepage: www.elsevier.com/locate/robot

Marco Jacobi

Fraunhofer IOSB, Am Vogelherd 50, 98693 Ilmenau, Germany

Highlights

• Underwater inspection for a safe and secure harbor; AUVs reduce the costs.
• Basic tasks: find the object, then approach and follow it until the whole area is covered.
• The pier walls are described as lines in the horizontal plane.
• A wall following algorithm with cross-track error control is used.
• Adaptive path planning considering the sensor constraints to fully cover the area.

Article info

Article history: Available online xxxx

Keywords: AUV; Guidance; Inspection; Autonomous; Underwater

Abstract

In the future, resources on land will be overexploited, and the exploration of new resources in the ocean is already in progress: mining will be done on the sea bottom, and the sea is also a large source of renewable energy, with offshore wind parks and tidal plants being built. In addition, the major part of world trade is handled over sea routes and through several big harbors. All these maritime facilities are aging and suffer from effects such as corrosion or malfunctions; in general, they need to be inspected frequently. For deep sea applications, for security reasons and for cost reduction, autonomous underwater vehicles (AUVs) will be the first choice. The project ‘‘CView’’ addresses one of these inspection problems, harbor inspection, but the algorithms we present in this article can be adapted to many other inspection tasks. One of the main goals of this project is to find cracks or damaged areas on underwater buildings, or to observe critical sections under water, with cost-effective methods.

The platform for developing the guidance algorithms for inspection is the AUV ‘‘SeaCat’’. This underwater vehicle has a control software system with a user interface for mission planning, a mission control system, a precise navigation system, optimized motor control with an autopilot, and sensors for obstacle detection and inspection. For obstacle and inspection target detection, a scanning sonar is used. The sonar images are automatically processed with edge detection and line extraction algorithms to obtain a simplified environment description, which is used by the guidance methods presented in this article. A pan–tilt enabled sensor head with camera, laser measurement unit and MBES (multi-beam echo sounder) is used to inspect the detected objects. Additionally, these sensors provide distance information to the inspection object, which can be used by the inspection guidance.

This article presents methods for inspection that use the online information from the vehicle sensors to guide the vehicle efficiently and safely. It is also important to handle the interaction between mission planning and execution. During mission planning, the operator defines the type of the inspection object (wall, vessel, sluice, etc.). The algorithms we develop use the information from the mission planning and the online data from the vehicle sensors to guide the vehicle toward optimal inspection results. For this, precise distance control to the inspection object, collision avoidance and object recognition are needed.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

Autonomous underwater vehicles are gaining more and more importance now and in the future [1]. Nowadays, they are mostly used for exploration and other scientific applications. Sometimes they are also used for simple inspection tasks, which are mostly surveys similar to exploration tasks. The inspection of buildings, machines, etc. has been a very common task for a long time. Many regulations govern the operation of a machine, as for any kind of man-made object. These regulations are intended to ensure the safety and functionality of these devices.

E-mail address: [email protected]. http://dx.doi.org/10.1016/j.robot.2014.10.006 0921-8890/© 2014 Elsevier B.V. All rights reserved.


Some of these objects are underwater constructions or floating objects such as vessels or offshore working platforms. Underwater constructions can be harbor facilities, offshore drilling or pumping platforms, offshore wind park foundations, reservoir dams, sluices, etc. All of them need regular inspection to detect damage or corrosion. The hulls of ships also need inspection to discover leakages or smuggled goods. Most of these areas are difficult to reach and expensive to monitor frequently.

Usually, these inspections are done by divers and/or remotely operated vehicles (ROVs). Unfortunately, divers need special training and the diving assignments are often dangerous. The use of ROVs requires a crew onshore or on a mother ship operating the ROV during the complete mission; the ROV needs a steady communication link to the operator through a cable. The operating crew has to pay close attention to control every movement of the vehicle and to examine the sensor data. To minimize the personnel and financial effort and to enable regular inspections, routine tasks need to be automated. ROVs improve safety and also reduce costs, but they need an operating crew and the communication cable; they cannot operate freely like divers. The use of an AUV can reduce the manpower needed for the same work done by an ROV by around 50%, counting vehicle operation and logistics [2]. Autonomous vehicles can perform routine tasks and do not require a cable connection to an operator; they operate and maneuver freely in their environment.

Solutions with autonomous underwater vehicles (AUVs) exist for unstructured or less structured environments. They are comparable to sea bottom exploration tasks: mostly, a predefined path is followed and a distance to potentially dangerous objects is kept.
The challenge is maneuvering in complex environments such as harbors and operating at short distance to the inspection object, which is needed for visual inspection. This can be done by ROVs, but with the limitations described above. An example of the use of AUVs for underwater inspection is the HAUV project [3,4], which is able to inspect ship hulls using a DIDSON sonar and a Doppler velocity log (DVL) for navigation relative to the ship. The project ‘‘CView’’ has the goal of inspecting underwater structures in harbors at visual range with AUVs. This article describes the vehicle guidance in harbor environments for inspection tasks.

2. Inspection

‘‘Underwater inspection is the continuation of the onshore fabrication of a facility through shipment, installation, to operation. It is first required during installation and then periodically when the facility is operational’’ [5, p. 23]. Inspection is generally the task of monitoring the actual state of an object, machine or construction. Based on the inspection results, maintenance can be planned, as can the time for the replacement of the inspected object when required. Regular inspection will detect failures at an early stage, but inspection itself is not free of charge: a balance between the maintenance costs and the inspection costs has to be found. Additionally, safety issues have to be considered in planning and executing the inspections, while automatic inspections will reduce the costs.

2.1. Harbor inspection

Harbor facilities need regular inspections for damage and corrosion. The constructions above the water line are easily reachable, but the underwater constructions can only be reached by divers or when the water is removed. Usually, the underwater constructions of harbors are inspected for damage such as: collisions of vessels with the piers, scouring on walls or piers, or corrosion on walls or sheet pile walls. Different types of damage are shown in Figs. 2 and 3.


Fig. 1. Typical harbor, Hamburg Harbor [6].

Fig. 2. Surface damages [7].

Fig. 3. Laminar corrosion [7].

Collisions and known damages are examined and observed by divers and ROVs, as mentioned in the introduction. Unknown damages are difficult to discover with these means. Often, large areas have to be examined regularly, but the specially trained personnel required are expensive. When using divers, the inspected areas have to be restricted. The communication and power cable of ROVs is a problem for the traffic in the harbor. Fig. 1 shows a typical harbor with its basins and traffic routes; it is easy to recognize the problem of an extensive and regular inspection.

The common behavior of AUVs is the automatic path following of the planned tracks and the recording of the sensor information. The analysis of the data can be done online by the vehicle or post-mission. The post-mission analysis can be done manually by the operator or automatically. The advantage of online damage detection is that the AUV can approach the interesting area a second time and examine it in detail.

3. Hardware platform

The vehicle used for the inspection is the AUV ‘‘SeaCat’’ (Fig. 5) built by ATLAS ELEKTRONIK (AE). It builds on the experience gained with the AUV ‘‘SeaWolf’’, also developed by AE.


Fig. 4. SeaCat AUV thruster configuration.

Fig. 7. Overview software modules for inspection.

Fig. 5. SeaCat AUV [8].

Fig. 6. CView inspection sensor head with pan–tilt functionality [9].

To extend the mission time, the AUV ‘‘SeaCat’’ needs less energy for propulsion due to the use of only one main engine and fins for maneuvering. There are additional thrusters, illustrated in Fig. 4, to improve the vehicle maneuverability at slow speed or when the vehicle is hovering. With the horizontal bow thruster, the vehicle can turn on its current position. The elevator fins are used for depth control, supported by vertical thrusters at slow speed, which can also be used to descend on the spot. The additional thrusters provide the maneuverability at low speed required by the sensor constraints and by the complex structured harbor environment.

Navigation sensors. A laser gyroscope is used for navigation and positioning, supported by a Doppler velocity log (DVL), a pressure sensor and, when not submerged, a GPS receiver. Additionally, a scanning sonar provides data for collision avoidance. This sonar is mainly used for the reactive vehicle control by the vehicle guidance system (VGS) and for detection of the inspection objects. The guidance of the vehicle and the interoperation with the collision avoidance sonar are described later, in Section 4.

Inspection sensors. The sensors for inspection are mounted on the bow of the vehicle. It is a head with a pan–tilt unit for adjusting the sensors into the right position relative to the inspection object, independently of the vehicle position, as seen in Fig. 5. The sensors used are a video imaging system with laser illumination and a multi-beam echo sounder (MBES), which are used for anomaly detection. The sensor head is shown in Fig. 6. The inspection sensors also provide the VGS with distance values to the inspection object. The anomaly detection system can trigger events on the VGS, and the vehicle can examine the anomaly a second time at a slower speed and a shorter distance.

4. Vehicle guidance

Besides the inspection sensors, the vehicle guidance system plays one of the most important roles for a successful mission. It guides the vehicle and is responsible for autonomous actions. A mission can be divided into three main stages: planning, execution and evaluation of the gained data. The autonomous vehicle guidance and control take place during the mission execution stage.

During the planning stage, all known information has to be prepared to describe the mission in detail: what has to be inspected, where the inspection track is, what the inspection distance and speed are, where the vehicle can be deployed and recovered, where areas dangerous for the vehicle are, etc. All these points must be compiled together to make the inspection successful. Besides the known issues, there are many unknowns during mission execution: there can be obstacles which are not in the harbor maps, and usually the desired tracks in the mission plan are inexact. Therefore, the VGS must handle the unknown parts on the basis of the known information and fuse both together to guide the vehicle safely through its mission.

4.1. General structure

The vehicle guidance system used in this project for inspection is based on the vehicle control software used for the ‘‘SeaFox’’ and ‘‘SeaWolf’’ vehicles from AE. It was adapted to the new vehicle ‘‘SeaCat’’ and extended with an inspection software module which guides the vehicle during the inspection. The vehicle guidance during the inspection is done by the module OIS, which is described in detail in Section 5. This module interacts with the mission control module, the object identification module, the autopilot and the control module for the inspection sensor system, as shown in Fig. 7.

4.2. Mission description

The inspection task is an extension of the current vehicle behavior, as mentioned before. Therefore, the vehicle software needs a new module for inspection, and the mission planning needs to be enhanced. As seen in Fig. 8, the inspection is done in a specific area, where the vehicle guidance system (VGS) can act freely without being bound to specific tracks which need to be followed. When the vehicle leaves the inspection area, the inspection task is finished; the VGS returns to the mission plan and executes the next task, such as following the next track. This inspection area has to contain the object which needs to be inspected. Additionally, it needs a start point where the inspection begins and an end point where the mission plan continues after the inspection.
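The inspection-area logic described above (a planned track widened to both sides, with the task ending when the vehicle leaves the area) can be sketched as follows. This is an illustrative reconstruction, not code from the CView software; the struct and member names are assumptions, and planar north/east coordinates in meters are assumed.

```cpp
#include <cmath>

struct Point { double north, east; };

// Hypothetical sketch: an inspection area built around a planned track,
// widened to the left and right (Section 4.2). The OIS acts freely inside;
// leaving the rectangle ends the inspection task.
struct InspectionArea {
    Point start, end;   // way points delimiting the inspection track
    double half_width;  // lateral extension on each side of the track

    // True while a position p is inside the rectangle around the track.
    bool contains(const Point& p) const {
        const double tx = end.north - start.north;
        const double ty = end.east  - start.east;
        const double len = std::hypot(tx, ty);
        // Longitudinal and lateral coordinates of p relative to the track.
        const double along = ((p.north - start.north) * tx +
                              (p.east  - start.east)  * ty) / len;
        const double across = ((p.north - start.north) * ty -
                               (p.east  - start.east)  * tx) / len;
        return along >= 0.0 && along <= len &&
               std::fabs(across) <= half_width;
    }
};
```

A VGS loop would test `contains()` against the current vehicle position each cycle and hand control back to mission control on the first `false`.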


Fig. 8. Inspection area.

The inspection area is defined around a simple track, provided by the current vehicle software, which is extended by an area on the right and left sides of the track. Fig. 9 shows an inspection track, highlighted in green (or light gray box), created with the mission planning software.

Fig. 9. Planning of an inspection mission.

4.3. Wall detection

The obstacle data used by the VGS is provided by the object detection module. This module analyzes the data from the scanning sonar and extracts lines, as can be seen in Fig. 10.

Fig. 10. Sonar wall detection—image creation (left), object detection (right).

The VGS internally represents these walls as lines. It is easier and more efficient to use lines in the vehicle control algorithms than the unprocessed sonar image representation. The lines are represented by contact points on the detected walls, each being the nearest point on the wall with respect to the vehicle. This point PW is described by the distance db to the current vehicle position PV and the north bearing angle αb seen from the vehicle. Fig. 11 illustrates this description. In addition to this minimal wall description, the geodetic coordinates are also delivered.

Fig. 11. Line representation of a wall.

The guidance system, especially the object inspection module, uses the point–direction equation x = PW + a · aW as line description for further calculations (see Section 5). The values for the point PW, which lies on the line, and the direction vector aW are calculated using the vehicle position PV, the north bearing angle αb and the distance db between the points PV and PW. The bearing line PVPW is normal to the wall line; the direction vector aW can be calculated from the angular relation between aW and PVPW, where aW ⊥ PVPW and therefore aW · PVPW = 0. This description of the walls is used by the guidance algorithms described in the next section.

5. Object inspection system (OIS)

The object inspection system is the part/module of the vehicle control software which guides the vehicle during an inspection mission. The OIS module is activated by mission control when the vehicle enters the inspection area. When the object which has to be inspected is found, the OIS module takes control of the vehicle.

5.1. Phases of inspection

The inspection is subdivided into the phases Searching, Approach, Following, Turn and Finish.

During the phase Searching, the OIS module is activated but has no vehicle control. It analyzes the object detection module data for the object which has to be inspected, using the data of the object identification module and the information from the mission plan. When the information matches, the OIS module requests vehicle control from the mission control module and the inspection task enters the next phase.

When the inspection object has been found, the phase Approach begins and the vehicle adapts its course so that it starts to follow the wall, as shown in Fig. 12. These calculations are repeated until the vehicle reaches the inspection distance to the wall.

Fig. 12. Basic maneuvers: wall approach.

The phase Following is activated when the vehicle reaches the inspection distance to the wall during the phase Approach. The OIS module calculates new virtual target points at inspection distance to follow the structure of the wall (Fig. 13). The calculation of the virtual target points is described in detail in Section 5.2 and illustrated in Fig. 14.
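The wall representation of Section 4.3 can be turned into code directly: given the vehicle position PV, the north bearing angle αb and the distance db, the contact point PW and a direction vector aW with aW ⊥ PVPW follow from elementary trigonometry. A minimal sketch, with illustrative function names (not from the original software), angles in radians and planar north/east coordinates assumed:

```cpp
#include <cmath>

struct Vec2 { double north, east; };

// Contact point P_W of the wall line x = P_W + a * a_W, computed from the
// vehicle position P_V, the north bearing angle alpha_b and the distance d_b.
Vec2 contactPoint(const Vec2& pv, double alpha_b, double d_b) {
    // The bearing line P_V P_W is normal to the wall (Section 4.3).
    return { pv.north + d_b * std::cos(alpha_b),
             pv.east  + d_b * std::sin(alpha_b) };
}

// A wall direction a_W perpendicular to the bearing, so a_W . (P_W - P_V) = 0.
// The sign ambiguity (two possible directions) is resolved later by the
// factor k of the wall-following controller (Section 5.2).
Vec2 wallDirection(double alpha_b) {
    return { -std::sin(alpha_b), std::cos(alpha_b) };
}
```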


Fig. 13. Basic maneuvers: wall following.

The inspection is successfully finished when the vehicle leaves the inspection area and has inspected the whole surface of the object below the water line. When the OIS module loses track of the wall, the inspection is also finished, but marked as unsuccessful. The OIS module informs mission control and returns vehicle control. The VGS then guides the vehicle to the next inspection object, where OIS will be activated again, or returns to the recovery area.

5.2. Calculation of virtual target points

The virtual target points are a kind of way point which guides the vehicle during the inspection while following a wall. They are recalculated frequently to adapt the vehicle behavior to the current situation, and they are the basic concept of the reactive control approach presented in this article.

The line description as point–direction equation x = PW + a · aW, the vehicle position PV and the current vehicle course aV are provided by the object detection and navigation modules (Figs. 15 and 11). The inspection distance di and the look-ahead distance dh to the next virtual target point are defined in the mission plan data. With this information the virtual target point PT can be calculated:

PT = PW + vh + vi    (1)

with

vh = k · dh · aW / ∥aW∥    (2)

and

vi = di · (PV − PW) / d,    (3)

where d = ∥PV − PW∥ is the current distance to the wall.

Fig. 14. Wall following with virtual target points.

Fig. 15. Calculation of the virtual target.

It has to be taken into account whether the direction vector of the line aW, with north bearing angle αW, points in the same or in the opposite direction as the actual vehicle heading αV. The factor k in Eq. (2) accounts for this: k = +1 when |αW − αV| ≤ π/2, otherwise k = −1. During wall following, the virtual target point is thus placed in the direction of the vehicle course, see Fig. 14. With the newly calculated virtual target point PT, the new vehicle course can be set and submitted to the autopilot module, which controls the thrusters and fins so that the new course is reached.

The virtual target point calculation during the Approach phase is basically the same as in the Following phase, but differs in the calculation of the direction indicator k used in Eq. (2). The vehicle has to follow the wall in the direction defined in the mission plan (Fig. 9). Way points 2 and 3 of the inspection track in the mission plan describe the direction in which the wall has to be inspected. These points have to be chosen carefully by the operator during mission planning; the track between these two way points needs to be parallel to the object which has to be inspected (Fig. 9). In summary, the calculation of the virtual way point is the same as for wall following, but the direction in which the wall is followed is defined by the direction of the two way points WP2WP3 instead of the current course of the vehicle.

5.3. Vertical path planning
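Eqs. (1)–(3) and the direction factor k can be combined into a single routine. The following sketch assumes planar north/east coordinates, angles in radians and d = ∥PV − PW∥; all identifiers are illustrative, not taken from the OIS implementation.

```cpp
#include <cmath>

struct Vec2 { double north, east; };

// Place the virtual target point P_T a look-ahead distance d_h along the
// wall and the inspection distance d_i away from it, on the vehicle's side.
Vec2 virtualTarget(const Vec2& pv, const Vec2& pw, const Vec2& aw,
                   double alpha_w, double alpha_v,
                   double d_i, double d_h) {
    const double half_pi = std::acos(0.0);
    const double two_pi  = 4.0 * half_pi;

    // Eq. (2): k = +1 when |alpha_w - alpha_v| <= pi/2, else -1, so that
    // the normalized wall direction points along the current heading.
    double diff = std::remainder(alpha_w - alpha_v, two_pi);
    double k = (std::fabs(diff) <= half_pi) ? 1.0 : -1.0;
    double na = std::hypot(aw.north, aw.east);
    Vec2 vh { k * d_h * aw.north / na, k * d_h * aw.east / na };

    // Eq. (3): offset from the wall toward the vehicle, scaled to d_i,
    // with d = ||P_V - P_W|| the current distance to the wall.
    double d = std::hypot(pv.north - pw.north, pv.east - pw.east);
    Vec2 vi { d_i * (pv.north - pw.north) / d,
              d_i * (pv.east  - pw.east)  / d };

    // Eq. (1): P_T = P_W + v_h + v_i.
    return { pw.north + vh.north + vi.north,
             pw.east  + vh.east  + vi.east };
}
```

In the Approach phase the same routine would be used, only with αV replaced by the bearing of the way-point leg WP2WP3 when computing k, as described above.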

As mentioned at the beginning of Section 5.1, when the vehicle leaves the inspection area (as formerly shown in Fig. 8) it turns and descends or ascends to another diving depth to scan the wall or object vertically, as illustrated in Fig. 17. The inspection sensors can only cover a small area of the inspection object or wall during one track alongside the object. The size of the covered area (dopt for the optical sensor and dMBES for the multi-beam echo sounder) depends on the field-of-view angle of the sensor, its resolution and the distance di to the inspected object, as shown in Fig. 16. Additionally, the inspection distance di also depends on the range of view for optical sensors and on the acoustic properties for the sonar sensors. The covered area is a function of the inspection distance and the field of view of the specific sensor:

dsensor = 2di · tan(γsensor).    (4)
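Eq. (4) as a small helper, assuming γsensor denotes the half field-of-view angle in radians (an assumption consistent with the factor 2 in the equation):

```cpp
#include <cmath>

// Eq. (4): the wall strip covered in one pass grows linearly with the
// inspection distance d_i; gamma_sensor is the half field-of-view angle.
double coveredWidth(double d_i, double gamma_sensor) {
    return 2.0 * d_i * std::tan(gamma_sensor);
}
```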

Fig. 16. Inspection sensor surface covering.

Fig. 17. Vertical wall inspection path.

To cover the inspection object vertically and completely, it has to be inspected in several depth layers, as seen in Fig. 17. The diving depths are calculated such that, depending on the inspection distance, the wall or inspection object is optimally covered by the sensors. The coverage has to be complete while as little energy and time as possible are consumed. An overlap of the individual scan areas is required to support the post-mission analysis with redundant information gathered at different depths. Additionally, vehicle movements such as variations in the diving depth and the inspection distance can be compensated, and complete coverage is guaranteed.

The calculation of the different diving depth planes needs the sensor parameters, such as the range of view and the minimal and maximal distance to the inspection object, as well as the safety distance, the vertical size of the wall or inspected object dwater, and the degree of overlap δoverlap ∈ [0; 1]. The minimal number of depth planes n∗ is calculated with the integer division:

n∗ = dwater ÷ ((1 − δoverlap) · dsensor).    (5)

When this number of vertical paths does not cover the object completely, an additional path has to be added:

n = n∗ + { 1, if dwater mod ((1 − δoverlap) · dsensor) > 0; 0, otherwise }.    (6)

The data for the water depth profile dwater(PV) are recorded during the first track along the inspected object. The actual diving depth and the height over ground are used to obtain the actual water depth. The water depth used for the calculation of the vertical lanes can be the maximal, minimal or average water depth; the selection of the water depth type is made during mission planning. Depending on this selection, several depth lanes may be scanned multiple times, because the safety distance to the sea bottom, the sea bottom itself and the water surface are natural constraints for the vehicle. A change between the different vertical lanes has to be done when the vehicle leaves the inspection area, as mentioned in Section 5.1. When turning, the high maneuverability of the vehicle due to the vertical and horizontal thrusters is advantageous; it can turn and dive on the spot without moving forward.

6. Example and simulation

Fig. 18. Visualization of a mission and simulation results in Google Earth.

Fig. 19. Test of OIS in the lake.
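Eqs. (5) and (6) can be sketched together as one function; δoverlap < 1 is assumed and the names are illustrative:

```cpp
#include <cmath>

// Number of depth layers needed to cover a wall of vertical extent d_water
// with scan strips of width d_sensor and overlap fraction delta in [0, 1).
int depthPlanes(double d_water, double d_sensor, double delta) {
    const double step = (1.0 - delta) * d_sensor;    // effective strip height
    int n = static_cast<int>(d_water / step);        // Eq. (5): integer division
    if (std::fmod(d_water, step) > 0.0)              // Eq. (6): remainder left?
        n += 1;                                      // add one extra lane
    return n;
}
```

For example, a 10 m wall, a 4 m strip and 50% overlap yield an effective 2 m step and five lanes; reducing the overlap to 25% yields a 3 m step, three full lanes and one extra lane for the remainder.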

A simulation environment was developed for testing the vehicle guidance during inspection missions. This environment simulates the controlled vehicle and all needed modules, such as mission control, navigation and object detection, as shown in Fig. 7. To reduce the implementation effort of the inspection module OIS, interfaces to the general vehicle control software were defined to gain an abstract and transparent interface which can be used by the simulation environment, as mentioned in [10]. With these interfaces, the OIS module and algorithms, implemented in C++, can run in the simulation environment and with the vehicle software from Atlas Elektronik or ConSys [11] without changes. The simulated vehicle movements can be output as log files, or visualized in Google Earth [12] or in CViewVR [13,14].

A scenario for testing the basic inspection functionality of OIS (approaching a wall, inspecting it and leaving the inspection area) was defined and simulated. The vehicle movements and the mission are shown in Fig. 18. A similar scenario was used for sea trials in the testing lake of AE, where the algorithms were tested on the vehicle: the vehicle has to approach a wall, follow the wall while it is in the inspection area and finally return to the tracks of the mission plan. The plan is shown and annotated in Fig. 9. Thereby, the activation and deactivation of the OIS module by the mission control module and the wall following behavior were successfully tested. A photo from this test is shown in Fig. 19. The vertical path planning has so far only been tested in simulation; a sea test is scheduled for the coming months.

7. Conclusion

As mentioned in the introduction, the inspection of underwater structures and installations is becoming more and more important. Therefore, new algorithms have to be implemented, and ideas from aerial and land vehicles have to be adapted. An algorithm and its implementation on an AUV have been presented in this article and were successfully tested in initial sea trials. With the basic functions to approach walls and follow them for underwater inspection, the foundation for more complex behavior is laid. In the future, the OIS module will be extended to inspect more complex regions; edges and corners need to be recognized and successfully inspected.

Acknowledgments

The author would like to thank the German Federal Ministry for Economics (BMWi) for funding this research work under the project number 03SX262A. It is a project with several partners from German industry, Fraunhofer institutes and other German research facilities to develop an autonomous underwater vehicle for harbor inspection.

References

[1] P. Newman, R. Westwood, J. Westwood, Market prospects for AUVs, Marine Technology Reporter.
[2] H. Kermorgant, D. Scourzic, Interrelated functional topics concerning autonomy related issues in the context of autonomous inspection of underwater structures, in: Oceans 2005—Europe, Vol. 2, 2005, pp. 1370–1375.
[3] J. Vaganay, M. Elkins, S. Willcox, F. Hover, R. Damus, S. Desset, J. Morash, V. Polidoro, Ship hull inspection by hull-relative navigation and control, in: OCEANS 2005, Proceedings of MTS/IEEE, Vol. 1, 2005, pp. 761–766.
[4] J. Vaganay, M. Elkins, D. Esposito, W. O’Halloran, F. Hover, M. Kokko, Ship hull inspection with the HAUV: US Navy and NATO demonstration results, in: OCEANS 2006, 2006, pp. 1–6.


[5] M. Bayliss, D. Short, M. Bax, Underwater Inspection, Taylor and Francis, 1990. URL: http://books.google.com/books?id=H4nRvSM44gcC.
[6] M. Senger, Wikimedia Commons, licensed under Creative Commons License CC BY-SA 3.0, May 2004. http://creativecommons.org/licenses/by-sa/3.0/deed.en.
[7] Bundesanstalt für Wasserbau, Merkblatt: Schadensklassifizierung an Verkehrswasserbauwerken, May 2011.
[8] ATLAS ELEKTRONIK GmbH, Bremen.
[9] DFKI, Bremen.
[10] M. Jacobi, T. Rauschenbach, A tool chain for AUV system testing, in: OCEANS 2010 IEEE—Sydney, 2010, pp. 1–5.
[11] T. Pfuetzenreuter, H. Renkewitz, ConSys—a new software framework for underwater vehicles, in: OCEANS 2010 IEEE—Sydney, 2010, pp. 1–6.
[12] Google Earth. www.google.com/earth.
[13] M. Jacobi, T. Pfützenreuter, T. Glotzbach, M. Schneider, A 3D simulation and visualisation environment for unmanned vehicles in underwater scenarios, in: 52. Internationales Wissenschaftliches Kolloquium (IWK, International Scientific Colloquium) at Ilmenau University of Technology, Vol. 1, 2007, pp. 371–367.
[14] T. Glotzbach, T. Pfützenreuter, M. Jacobi, A. Voigt, T. Rauschenbach, CViewVR: A high-performance visualization tool for team-oriented missions of unmanned marine vehicles, in: 8th International Conference on Computer Applications and Information Technology in the Maritime Industries, COMPIT 2009, 2009.

Marco Jacobi graduated in 2005 as an engineer in Electrical and Information Technology from the University of Technology in Ilmenau. From 2005 to 2009 he was a Research Associate at the Department for System Analysis at the University of Technology in Ilmenau. Since 2009 he has been a Research Associate at Fraunhofer IOSB, Department of Maritime Systems.