Proceedings of the 2nd IFAC Conference on Embedded Systems, Computational Intelligence and Telematics in Control
June 22-24, 2015. Maribor, Slovenia

Available online at www.sciencedirect.com

ScienceDirect
IFAC-PapersOnLine 48-10 (2015) 105–110
Evaluation of Methods for Robotic Mapping of Cultural Heritage Sites

Dorit Borrmann ∗  Robin Hess ∗  Daniel Eck ∗  Hamidreza Houshiar ∗  Andreas Nüchter ∗  Klaus Schilling ∗

∗ Informatics VII: Robotics and Telematics, University of Würzburg, Am Hubland, 97074 Würzburg, Germany (e-mail: [email protected])
Abstract: In archaeological studies the use of new technologies has moved into focus in the past years, creating new challenges such as the processing of the massive amounts of data. In this paper we present steps and processes for smart 3D modelling of environments by use of the mobile robot Irma3D. The robot is equipped with multiple sensors, most importantly a photo camera and a laser scanner, and enables the automation of most of the processes, including data acquisition and registration. The robot was tested in the Würzburg Residence. Methods for automatic 3D color reconstructions of cultural heritage sites are evaluated in this paper.

© 2015, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

Keywords: laser scanning, 3D modeling, multi-sensors

1. INTRODUCTION

Archaeology is a historical science of high social interest. It studies the human being and its legacy, such as buildings, tools and art. Cultural heritage sites can be found all over the world and they tell us the story of humanity in different areas of the world. Remote sensing has become state of the art in modeling archaeological sites. The digitization of entire buildings or areas gives a unique opportunity to preserve the current state of prehistoric buildings and to join forces of experts all over the world. Collecting the data is tedious work. It includes finding the best position for a laser scan, moving the equipment to the position and georeferencing of the scanning position. Letting a robotic system take over this work reduces the time spent in the field by 75 % and decreases the impact on the sites. We present the robot Irma3D, which was designed to create digital 3D models of environments in a tele-operated fashion. This paper describes the setup and the capabilities of the robot and the steps to create these 3D models automatically from multiple sensor sources. The entire process is demonstrated by means of experiments carried out at cultural heritage sites.

In previous work we presented results of tests at Ostia Antica and the Würzburg Residence that showed that the robot can help to automate the process of reconstructing 3D environments. In this paper we focus on the quantitative evaluation of the results. For this purpose we use the data acquired in the Würzburg Residence and evaluate the final model using an additional sensor, the iSpace system from Nikon. The Würzburg Residence is a unique baroque palace in the city center of Würzburg, Germany, that was named a UNESCO World Heritage Site in 1981. The Würzburg Residence is ideally suited for the experiments as the large halls allow the setup of the iSpace system so that the entire environment can be observed by this high precision localization system.

In this paper we describe the data collection with the robot Irma3D in this renowned historic site, the post-processing needed to create a full 3D color model, and evaluate the quality of the resulting model comparing different mapping algorithms.

2. EXPERIMENTAL SETUP AND DATA ACQUISITION

Hardware  The data was acquired with the mobile robot Irma3D (Intelligent Robot for Mapping Applications in 3D). Irma3D is a small, battery-powered, light-weight three-wheeled vehicle. It consists of a modified Volksbot RT 3 chassis with two front wheels. Each is actuated by an individual 150 W DC Maxon motor. The motors are powerful enough to move the robot at a maximum velocity of 2.2 m/s. The third wheel is in the back of the chassis and is swivel-mounted and thus completely passive, as it follows the directions of the front wheels. The high-powered electrical two-wheel drive powered by the two 150 W DC Maxon motors is equipped with rotary encoders to measure wheel rotations. This information is used to provide pose estimates of the robot via odometry.

Fig. 1. Irma3D in the Imperial Hall of the Würzburg Residence. Visible in the background is one of the transmitters of the iSpace localization system and a box with two of the reflective targets used for calibration. The handvector bar is used to measure distinctive points in the environment that are used to evaluate the quality of the final model.
2405-8963 © 2015, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. Peer review under responsibility of International Federation of Automatic Control. 10.1016/j.ifacol.2015.08.116
Fig. 2. The Irma3D robot with the setup used in the Residence. Labeled components: sensor frame, camera, radio frequency module, 3D laser scanner, 2D laser scanner.

The pose estimates are improved using data from the Xsens MTi IMU device that is also attached to the robotic platform. For obstacle avoidance when moving autonomously, a Sick LMS 100 2D laser scanner is added to the front of the robot. This sensor can also be used to improve the localization of the robot. The central sensor of Irma3D is the 3D laser scanner VZ-400 by RIEGL Laser Measurement Systems GmbH. The scanner is mounted on top of the Volksbot RT 3 chassis. Attached to the top of the scanner is a Canon 1000D DSLR camera. After a 3D scan has been acquired, the camera is used to acquire color information for the point cloud. Also mounted on top of the laser scanner is an iSpace sensor frame. iSpace is a high-precision position and tracking system from Nikon Metrology Nikon Metrology (2014). The optical laser-based system consists of several transmitters. These are mounted on a wall or on tripods to cover the experimental area both indoors and outdoors. The rotating head of each transmitter emits two perpendicular fan-shaped laser beams at a unique distinguishable frequency near 40 Hz. The vertical opening angle of the laser beams is limited to 40 degrees and the detectable range lies between 2 and 55 meters. Several sensor frames can be located within the system. A sensor frame consists of at least one detector, a photo diode with a horizontal opening angle of 360 degrees and a vertical opening angle of 90 degrees. A small radio frequency module transmits the sensor data wirelessly to the base station of the iSpace system, a PC running the iSpace control software. A sensor frame with one detector is sufficient to acquire 3D position information. To also measure the rotation and to increase the accuracy of the position data, the sensor frame used on the robot has a total of four detectors. A sensor frame with two detectors, the handvector bar, is used to measure points in the environment to evaluate the quality of the resulting model.

The iSpace system differs from other position and tracking systems in that the transmitters do not actively observe the position of the sensor frames. Instead, each sensor frame receives the laser data from the transmitters and sends the information on to the control PC. The control PC calculates the elevation and azimuth angles between all detectors of a sensor frame and each visible transmitter based on the received data, defining a straight line between transmitter and detector. Given the relative transformation between the transmitters, the length of the lines is calculated using triangulation. To determine the position of the transmitters, a calibration procedure using a few hundred points from a special sensor frame is applied. An optimization process calculates the position of all transmitters in a self-defined coordinate system. Three points, the origin, a point on the x-axis and a point on the y-axis, allow the user to define their own coordinate system. In typical environments the iSpace system is able to perform measurements at a sampling rate of 40 Hz with a maximum error of ±0.25 mm. In practice environmental factors such as the size of the environment, the reflectivity of the surfaces and occlusions of the transmitters have to be taken into consideration.
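To illustrate the triangulation principle only, the following is a generic least-squares line-intersection sketch, not Nikon's implementation; all function and variable names are our own assumptions. Each transmitter with a known position defines a line of sight towards a detector from the measured azimuth and elevation angles, and the detector position is the point closest to all of these lines.

    import numpy as np

    def direction_from_angles(azimuth, elevation):
        """Unit line-of-sight vector from azimuth/elevation angles (radians)."""
        return np.array([np.cos(elevation) * np.cos(azimuth),
                         np.cos(elevation) * np.sin(azimuth),
                         np.sin(elevation)])

    def triangulate_detector(origins, directions):
        """Least-squares intersection of several lines of sight.

        origins:    (n, 3) transmitter positions known from calibration
        directions: (n, 3) unit vectors from each transmitter towards the detector
        Returns the 3D point minimizing the summed squared distance to all lines.
        """
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to d
            A += P
            b += P @ o
        return np.linalg.solve(A, b)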
Experimental environments  The robot was tested in two halls in the Würzburg Residence, namely the White Hall and the Imperial Hall. The Residence Palace in Würzburg, Germany, was labeled a UNESCO World Cultural Heritage site in 1981. Built from 1720 to 1744, with the interior finished in 1780, it is now one of Europe's most renowned baroque palaces Bayerische Verwaltung der staatl. Schlösser, Gärten und Seen (2014). It was laboriously reconstructed after being heavily damaged during World War II. Among the parts that survived the war were the large unsupported trough vault above the main staircase designed by architect Balthasar Neumann, the Garden Hall with ceiling paintings by Johann Zick, the White Hall with the impressive stucco work by Antonio Bossi, and the Imperial Hall with frescos by Giovanni Battista Tiepolo. With its large colorful paintings by the Venetian painter Giovanni Battista Tiepolo and fine stucco work by the stuccoist Antonio Giuseppe Bossi in many of the almost 400 rooms, the Würzburg Residence is a unique example of baroque style. Experiments were carried out in both the White Hall and the Imperial Hall, two large halls with impressive 3D structure. Together with the colorful paintings in the Imperial Hall, the environment can only be captured by the combination of two technologies, e.g., laser scanning and photography.

Data collection  To capture the entire environment, data has to be collected at several locations. This is especially crucial due to the restricted field of view of the camera. Accordingly, the robot is moved to a scanning location and stops there for data collection. The Riegl VZ-400 laser scanner emits a beam of light into a specific direction. After the light is reflected from a distant object and returns to the scanner, the time between sending and receiving is used to calculate the distance to this object. The sensor is able to capture 125,000 points per second with an opening angle of 360° × 100°. Thus, a typical laser scan with a resolution of 0.04° to 0.03° takes between 3 and 5 minutes. To achieve the full 360° horizontal field of view, the scanner head is rotated around its vertical axis. This feature is also used to capture the full 360° with the camera. After the scan is taken, the scanner head is rotated in discrete steps to take 12 pictures with a resolution of 3888 × 2592 at each scanning location. Assuming that the transformation between the camera and the laser scanner is known, the point cloud from the laser scanner can then be enhanced with the color information from the camera. The calibration procedure used to determine this transformation between the camera and the laser scanner is described in detail in Borrmann et al. (2014).
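As a rough consistency check of these numbers (our own back-of-the-envelope calculation, not taken from the paper), the scan duration follows from the field of view, the angular resolution and the point rate:

    # Back-of-the-envelope scan duration for the stated VZ-400 parameters.
    H_FOV, V_FOV = 360.0, 100.0      # opening angle in degrees
    POINT_RATE = 125_000             # points per second, as stated above

    for res in (0.04, 0.03):         # angular resolution in degrees
        points = (H_FOV / res) * (V_FOV / res)
        print(f"res {res} deg: {points / 1e6:.1f} M points, "
              f"~{points / POINT_RATE / 60:.1f} min per scan")
    # res 0.04 deg: 22.5 M points, ~3.0 min per scan
    # res 0.03 deg: 40.0 M points, ~5.3 min per scan

This matches the stated 3 to 5 minutes per scan.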
The data acquired at different positions has to be brought into one common coordinate system. This process is called point cloud registration. In robotics research, methods have evolved over the past years that are specifically designed to register point clouds collected by a mobile robot.
3. SCIENTIFIC APPROACH

3.1 Calibration of the localization sensors

To achieve precise pose estimates for the 3D laser scanner, the localization sensors of the robot have to be calibrated to the laser scanner. The method used for odometry, IMU and the 2D laser scanner is explained in Elseberg et al. (2012). In this paper robotic mapping methods using a 3D laser scanner are evaluated using the iSpace system. To localize the robot in the iSpace coordinate system, an iSpace sensor frame is attached to the VZ-400 laser scanner. After setting up the transmitters of the iSpace system, several reflective markers were attached to objects in the environment. The centers of the markers are measured with the iSpace handvector bar, thus determining their position in the iSpace coordinate system. These markers show up nicely in the reflectance data of the laser scanner. To measure their precise position, first a full scan of the environment is carried out. The RiScan Pro software is used to detect the markers in the environment. An automatic procedure exists, but because it creates a high number of false detections, the markers were selected manually from the data; they are easily visible due to their high reflectivity. In a second step, fine scans of the markers are performed. The software controls the scanner automatically to scan the area around the selected markers with a very high resolution. If the existence of the marker can be verified, its precise position in the local coordinate system of the scan is calculated. Third, the coordinates of the markers in the coordinate system defined by iSpace are imported as control points and the scans are registered to these control points based on the marker positions. This yields the position and orientation of the laser scanner in the iSpace coordinate system at the time the scan was taken. Additionally, the pose of the sensor frame is also recorded.

In the following, poses are treated as transformation matrices T, consisting of the rotation R and the translation t. Repeating this procedure for n scans gives n pairs of poses for the Riegl laser scanner T_{r,i} and the sensor frame T_{m,i}. From these poses the transformation T_{m\to r} between the coordinate systems is calculated as:

    T_{m\to r,i} = T_{r,i} T_{m,i}^{-1}.                                            (1)

To reduce noise, the average over all transformation matrices T_{m\to r,i} is calculated as:

    \bar{T}_{m\to r} = \frac{1}{n} \sum_{i=0}^{n-1} T_{m\to r,i}.                   (2)

This procedure works for the translation but is not guaranteed to yield valid solutions for the rotation, i.e., an orthogonal matrix with determinant one. Instead we search for the nearest orthonormal matrix by projecting \bar{R}_{m\to r} onto SO(3) Horn et al. (1988). Let e_1, e_2, e_3 be the eigenvectors and \lambda_1, \lambda_2, \lambda_3 the eigenvalues of the square matrix

    HH^s = \bar{R}_{m\to r} \cdot \bar{R}_{m\to r}^T                                (3)

then the final rotation matrix is calculated as:

    \hat{R}_{m\to r} = \left( \frac{e_1 e_1^T}{\sqrt{\lambda_1}} + \frac{e_2 e_2^T}{\sqrt{\lambda_2}} + \frac{e_3 e_3^T}{\sqrt{\lambda_3}} \right) \cdot \bar{R}_{m\to r}.    (4)

Afterwards, for each new scan position the position of the laser scanner in the iSpace coordinate system is calculated:

    \hat{T}_{r,i} = \hat{T}_{m\to r} T_{m,i}.                                       (5)
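Numerically, Eqs. (1)-(5) can be carried out as sketched below. The SVD-based polar decomposition used here yields the same nearest orthonormal matrix as the eigenvector form of Eqs. (3)-(4); the function and variable names are our own assumptions, not the authors' implementation.

    import numpy as np

    def average_calibration(T_r_list, T_m_list):
        """Estimate T_{m->r} from paired scanner poses T_{r,i} and sensor-frame
        poses T_{m,i} (4x4 homogeneous matrices), following Eqs. (1)-(4)."""
        Ts = [Tr @ np.linalg.inv(Tm) for Tr, Tm in zip(T_r_list, T_m_list)]  # Eq. (1)
        T_bar = sum(Ts) / len(Ts)                                            # Eq. (2)

        # Project the averaged rotation onto SO(3). The SVD-based polar
        # decomposition gives the same nearest orthonormal matrix as the
        # eigenvector formulation in Eqs. (3)-(4).
        U, _, Vt = np.linalg.svd(T_bar[:3, :3])
        R_hat = U @ Vt
        if np.linalg.det(R_hat) < 0:      # enforce determinant +1
            U[:, -1] *= -1
            R_hat = U @ Vt

        T_hat = np.eye(4)
        T_hat[:3, :3] = R_hat
        T_hat[:3, 3] = T_bar[:3, 3]
        return T_hat

    def scanner_pose_in_ispace(T_hat_m_to_r, T_m_i):
        """Eq. (5): scanner pose for a newly measured sensor-frame pose."""
        return T_hat_m_to_r @ T_m_i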
3.2 Pairwise scan registration

In robotics research, methods have evolved over the past years to join data from several positions automatically. Commonly used for this task is the Iterative Closest Point (ICP) algorithm Besl and McKay (1992). The algorithm takes advantage of the fact that robots usually have a rough estimate of their current pose (position and orientation). Starting from such initial pose estimates, the algorithm effectively calculates the correct transformation between two point clouds by minimizing distances between corresponding points. Corresponding points are chosen based on Euclidean distances. The algorithm is described in Algorithm 1.

Algorithm 1 The ICP algorithm
Require: point clouds m and d
1: find point correspondences
2: minimize for rotation R and translation t
       E_{ICP}(R, t) = \frac{1}{N} \sum_{i=1}^{N} ||m_i - (R d_i + t)||^2
3: iterate 1 and 2
4: return pose (R, t)
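Before the individual steps are discussed in more detail, a minimal sketch in the spirit of Algorithm 1 is given below. It uses an SVD-based closed-form alignment step (cf. Horn 1987); the function names, the use of a k-d tree for the correspondence search and the parameter defaults are our own choices, not the 3DTK implementation.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(m, d):
        """Closed-form (R, t) minimizing sum ||m_i - (R d_i + t)||^2."""
        cm, cd = m.mean(axis=0), d.mean(axis=0)
        H = (d - cd).T @ (m - cm)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # avoid reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = cm - R @ cd
        return R, t

    def icp(model, data, iterations=50, max_dist=1.0):
        """Register 'data' (n x 3) against 'model' (m x 3); returns (R, t)."""
        tree = cKDTree(model)
        R, t = np.eye(3), np.zeros(3)
        for _ in range(iterations):
            transformed = data @ R.T + t
            dist, idx = tree.query(transformed)      # step 1: correspondences
            keep = dist < max_dist                   # discard distant point pairs
            if keep.sum() < 3:
                break
            R_new, t_new = best_rigid_transform(model[idx[keep]], transformed[keep])
            R, t = R_new @ R, R_new @ t + t_new      # step 2: accumulate transform
        return R, t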
Given a registered point cloud M and a point cloud D with an initial pose estimate, ICP first tries to find for each point d_i from D the point m_i in M that is closest to d_i. Then one needs to solve for a transformation (R, t) (rotation R and translation t) that minimizes the error function E_{ICP}. Nüchter et al. (2010) presents several minimization methods. These two steps are iterated to find the best transformation between the two point clouds. For best results a threshold t_{dist} is introduced and all point pairs with a distance larger than t_{dist} are discarded from the calculations. In practice this procedure is adopted as follows. Using the first scanning position as reference, the nth scan is always registered against the (n-1)th scan. This way all scans are sequentially transformed into the same coordinate system.

3.3 Global optimization

Small local errors for the registration of each pair add up and lead to larger errors for long sequences. To overcome this issue a global optimization was proposed by Borrmann et al. (2008). Given a graph of corresponding scans, point pairs are determined for each pair of scans and the new error function E_{opt} solves for the transformations of all scans simultaneously:

    E_{opt} = \sum_{j\to k} \sum_{i} ||R_j m_i + t_j - (R_k d_i + t_k)||^2          (6)
Methods to minimize E_{opt} are presented in Nüchter et al. (2010). In the experiments global optimization is performed using a complete graph connecting all scans. The implementation used here is freely available from The 3D Toolkit (3DTK) Andreas Nüchter et al. (2014).
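For illustration, the error of Eq. (6) for a given set of scan poses and point correspondences could be evaluated as follows; this is our own sketch, the actual minimization is performed by the 3DTK implementation referenced above.

    import numpy as np

    def global_error(poses, links):
        """Evaluate Eq. (6).

        poses: dict scan_id -> (R, t) with R (3x3) and t (3,)
        links: list of (j, k, m_pts, d_pts) where m_pts/d_pts are (n, 3)
               arrays of corresponding points from scans j and k
        """
        e = 0.0
        for j, k, m_pts, d_pts in links:
            Rj, tj = poses[j]
            Rk, tk = poses[k]
            diff = (m_pts @ Rj.T + tj) - (d_pts @ Rk.T + tk)
            e += np.sum(diff ** 2)
        return e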
3.4 Acquiring initial pose estimates

ICP and the global optimization rely on initial pose estimates to determine the correct transformation between laser scans. Different methods to acquire pose estimates are described in the following.

Odometry  On mobile robots pose estimates are usually attained from odometry. Odometry for wheeled robots such as Irma3D is based on calculating the distance traveled by the robot from the wheel rotations. For this purpose the count c of the wheel encoders and the wheel rotations are related to each other using a factor f. Knowing the diameter d of the tires, the distance traveled by one wheel is calculated as \Delta s = \pi \cdot d \cdot f \cdot c. Considering the distance B between the two wheels and the distances traveled by each wheel \Delta s_l, \Delta s_r, the pose (x, y, \theta) of the robot at time step n is calculated as:

    \theta_n = \theta_{n-1} + (\Delta s_r - \Delta s_l)/B
    x_n = x_{n-1} + 0.5 \cdot (\Delta s_r + \Delta s_l) \cdot \cos(-\theta_n)        (7)
    y_n = y_{n-1} - 0.5 \cdot (\Delta s_r + \Delta s_l) \cdot \sin(-\theta_n)

The quality of these measurements depends highly on the behavior of the robot on the ground. If the floor is slippery and the wheels spin uncontrolled, the estimates lack precision. To increase the precision of the position estimates, the Xsens IMU is attached to the robot and the measured accelerations are integrated into the position estimates. This is especially helpful for rotations. Odometry works sufficiently well when short distances are covered and the robot follows smooth trajectories. With increasing path lengths, many rotations and frequent stops along the trajectory, errors sum up. (A small sketch of this odometry update follows at the end of this subsection.)

2D SLAM  When searching for the next best scanning position, the robot is started and stopped several times, leading to bad odometry results. To overcome this problem roboticists commonly use SLAM (Simultaneous Localization and Mapping) solutions. The SLAM problem is the problem of localizing a robot in a map while creating that map at the same time. Commonly used for SLAM approaches are grid maps built from 2D laser scanners. We use the 2D SICK laser scanner and the GMapping Grisetti et al. (2007) ROS (2014) module to create an initial map. GMapping uses Rao-Blackwellized particle filters where each particle holds its own map of the environment. The movement of the robot and the last observation are used to predict the next possible state, thus maintaining a map of the already explored environment.

Feature-based registration  If no pose estimates were acquired during data collection, the remaining option is to determine them directly from the data. Apart from range information, modern laser scanners often capture the amount of light that is returned to the sensor. This information, known as the reflectance value, can be used to detect features in the data. Owing to the buildup of the laser scanning platform, the points are captured as range and reflectance values in spherical coordinates. This facilitates the generation of a panorama image using the spherical coordinates and the reflectance information. The 3D data is thus projected onto an image. Different projection methods are evaluated in Houshiar et al. (2012). The panorama generation enables the use of image-based feature matching methods.
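The differential-drive odometry update of Eq. (7) can be written compactly as below; this is a minimal sketch with our own naming, not code from the Irma3D software.

    import math

    def odometry_update(x, y, theta, ds_left, ds_right, wheel_base):
        """One odometry step following Eq. (7).

        ds_left, ds_right: distances traveled by the left/right wheel,
                           each computed as pi * d * f * c from encoder count c
        wheel_base:        distance B between the two wheels
        """
        theta_n = theta + (ds_right - ds_left) / wheel_base
        ds = 0.5 * (ds_right + ds_left)
        x_n = x + ds * math.cos(-theta_n)
        y_n = y - ds * math.sin(-theta_n)
        return x_n, y_n, theta_n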
Fig. 3. Illustration of a scan pair with Mercator projection and matched features.

These methods analyze the image and create a description of areas with high changes in intensity. The most common features are SIFT (Scale-Invariant Feature Transform) features. They also show superior performance for feature-based point cloud registration. As the SIFT feature detector works on gray-scale images, the panorama images computed from the reflectance values of laser scans are ideally suited for feature matching. For the automatic registration of point clouds using these panorama images, corresponding features are detected in the panorama images of scan pairs. Feature correspondences found in two reflectance panoramas are used to calculate pairwise transformation matrices of the point clouds with a RANSAC-like Fischler and Bolles (1981) approach. For this purpose the algorithm identifies the feature in one image that is closest to the sampled feature from the other image based on a comparison of their descriptors (see Figure 3). The registration proceeds by testing a subset of 3 point pair matches and examining the two triangles that are defined by these points. The algorithm translates the triangles so that their centroids lie at the center of the common reference frame. The orientation that minimizes the error between the points is then computed by the closed-form solution proposed by Horn (1987).

4. EXPERIMENTAL RESULTS

Weißer Saal  During the first experiments in the White Hall of the Residence, the robot was manually driven to 9 positions to collect data. The starting position was close to the door leading to the Imperial Hall. The other 8 positions were chosen in the corners of the hall, facing once towards the wall and once the other way to account for the tilted mount of the laser scanner. Fig. 4 illustrates some of the localization methods, showing scan positions, marked by a black line, and the resulting 3D model. It is obvious that the pose estimates from odometry contain large errors. In an environment such as the one used here they still suffice as initial pose estimates for ICP; in more complex environments, however, they easily render the registration impossible. The map generated from GMapping seems to represent the environment well. The structure of the hall is clearly visible. Some faulty black lines are seen in the center of the hall. Using the pose estimates to position the 3D scans directly reveals some inaccuracies, as the walls from different scans do not align perfectly. This is illustrated by using different colors for the individual scans.
Fig. 4. Floor plan of the White Hall. Projection of the 3D point cloud without the roof using poses from odometry and IMU (a), Gmapping (b), slam6D with pose estimates from Gmapping (c). Each scan is drawn in a different color. The black line connects the scan positions. (d) shows the map created from Gmapping using the 2D laser scanner.

Fig. 6. Final 3D model of the Imperial Hall with transmitters and robot poses.
Fig. 5. The final 3D model of the White Hall in the Würzburg Residence.

Applying the global optimization on top of the pose estimates from GMapping improves the results visibly. Not only do the walls overlap, but the color distribution in the maps also changes. Close to the individual scanning positions the density of the color of that scan becomes higher, while in the other maps the distribution is more random. This effect is the same when comparing the results with start estimates from different methods. The final 3D model is displayed in Fig. 5 or can be seen at http://youtu.be/_wPug_So_iE.

Kaisersaal  To achieve a quantitative evaluation of the methods, the iSpace system is set up before data collection in the Imperial Hall. Six transmitters are set up and calibrated to define a coordinate system as depicted in Fig. 6. iSpace is then able to measure the robot position with high precision using the sensor frame attached to the top of the laser scanner. Furthermore, points in the environment are measured using the handvector bar to evaluate the quality of the final model (cf. Fig. 1).
The robot was manually driven to 11 scanning positions. For the registration methods the pose of the first scan is anchored in the iSpace coordinate system. At each scanning position the position of the robot was recorded using the iSpace system. Figure 7(a) shows the scan positions determined using the different localization systems. Again, odometry has large errors. All other systems yield similar pose estimates. After applying the global optimization to all of these estimates, the final results are identical.

For a quantitative evaluation of the pose estimates, 7 distinct points in the environment were measured using the iSpace handvector bar. These points were identified in each individual point cloud and transformed into the iSpace coordinate system using the pose estimates calculated with the different localization methods. For each scan the average of the distances between the transformed points and the globally measured point coordinates is calculated. Points that cannot clearly be identified in the point cloud are ignored in the averaging. The results are plotted in Fig. 7(b). As expected, the error for odometry pose estimates is tremendous. A registration is still possible due to the fact that the environment consists of a single large hall. However, this requires two phases of the ICP method, first with a maximum distance of 250 cm between point pairs and then with a maximum distance of 100 cm, to generate a correct model. Even though at first sight GMapping produces a feasible map, the missing accuracy becomes obvious here. This is caused by several factors: the limitations of the 2D laser scanner, the computational requirements to perform the 2D registration continuously, and the use of a voxel map with its inherent loss of precision. Decreasing the voxel size automatically increases the computational requirements. Despite the promised accuracy of the iSpace system, the error in the model using this system directly for localization is surprisingly high. Possible reasons for this are interferences from the large windows and the glass chandeliers.
Additionally, the rotational accuracy promised by the system is not as high as the positional accuracy. However, small rotation errors already have a large impact on the final model given the large distances to the walls. Furthermore, to capture as much as possible of the environment, the robot was moved to the corners of the hall, where the transmitter coverage was not optimal. For each positional measurement the system calculates an uncertainty, which is shown in Fig. 7(c). It can be seen that the large errors for the reference points correspond to the scans with high uncertainty. Feature-based registration yields the best initial pose estimates. They differ only very slightly from the final results. The remaining error is caused by three factors. First, not all points could be measured with high precision. Second, some of the reference points were in areas that were hard to capture with the laser scanner due to the reflective properties of the material. In combination with the low resolution in areas far away from the laser scanner, the manually selected points lack precision. Third, the registration methods use the first scan as reference. The errors in the localization of this scan are therefore propagated to the remaining scans. Nevertheless, the evaluation shows that the error is minimal for the ICP approach independent of the method used for pose estimates. In the future we will work on increasing the robustness of the approach by analyzing the scene to choose the most appropriate option for generating the pose estimates. The final model of the Imperial Hall is depicted in Fig. 6. A video is available at http://youtu.be/jKVxlLvu7Pk.
REFERENCES

Andreas Nüchter et al. (2014). 3DTK – The 3D Toolkit. http://www.threedtk.de.
Bayerische Verwaltung der staatl. Schlösser, Gärten und Seen (2014). http://www.residenz-wuerzburg.de.
Besl, P.J. and McKay, N.D. (1992). A Method for Registration of 3-D Shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 239–256.
Borrmann, D., Elseberg, J., Lingemann, K., Nüchter, A., and Hertzberg, J. (2008). Globally consistent 3D mapping with scan matching. Robotics and Autonomous Systems, 56(2), 130–142.
Borrmann, D., Nüchter, A., Dakulović, M., Maurović, I., Petrović, I., Osmanković, D., and Velagić, J. (2014). A mobile robot based system for fully automated thermal 3D mapping. Advanced Engineering Informatics, 28(4), 425–440.
Elseberg, J., Borrmann, D., and Nüchter, A. (2012). Automatic and Full Calibration of Mobile Laser Scanning Systems. In Proc. of the 13th Intern. Symposium of Experimental Robotics (ISER '12), volume 79. Springer Tracts in Advanced Robotics, Quebec City, Canada.
Fischler, M.A. and Bolles, R.C. (1981). Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 24, 381–395.
Grisetti, G., Stachniss, C., and Burgard, W. (2007). Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Transactions on Robotics, 23, 34–46.
Horn, B.K.P. (1987). Closed-form Solution of Absolute Orientation using Unit Quaternions. Journal of the Optical Society of America A, 4(4), 629–642.
Horn, B.K.P., Hilden, H.M., and Negahdaripour, S. (1988). Closed-form solution of absolute orientation using orthonormal matrices. Journal of the Optical Society of America A, 5(7), 1127–1135.
Houshiar, H., Elseberg, J., Borrmann, D., and Nüchter, A. (2012). A Study of Projections for Key Point Based Registration of Panoramic Terrestrial 3D Laser Scans. Journal of Geo-spatial Information Science.
Nikon Metrology (2014). iSpace – Portable Metrology System User Manual and Startup Guide. http://www.nikonmetrology.com.
Nüchter, A., Elseberg, J., Schneider, P., and Paulus, D. (2010). Study of Parameterizations for the Rigid Body Transformations of The Scan Registration Problem. Journal of Computer Vision and Image Understanding (CVIU), 114(8), 963–980.
ROS (2014). Robot Operating System. http://www.ros.org.

ACKNOWLEDGEMENTS
The authors are grateful to the “Bayerische Verwaltung der staatlichen Schl¨osser, G¨arten und Seen” for access to the W¨ urzburg Residence.
Fig. 7. Evaluation of the iSpace measurements. (a) The poses using the different localization methods. Odo = odometry and IMU, ICP = global optimization with a maximum distance of 100 cm between points, ICP250 = global optimization with a maximum distance of 250 cm between points, GMap = GMapping, FBR = feature-based registration using the Mercator projection, SIFT features and the SIFT descriptor for panorama matching. (b) Average error in cm of the measured control points for the map generated using the different mapping methods. (c) Positional uncertainty calculated from the iSpace system for each scan position.