A File Structure and Reference Data Set for High Resolution Hyperspectral 3D Point Clouds

10th IFAC Symposium on Intelligent Autonomous Vehicles, Gdansk, Poland, July 3-5, 2019

Available online at www.sciencedirect.com

IFAC PapersOnLine 52-8 (2019) 403–408

Thomas Wiemann *, Felix Igelbrink *, Sebastian Pütz *, Joachim Hertzberg *,**

* Knowledge Based Systems Group, University of Osnabrück, Osnabrück, Germany ([email protected])
** DFKI Robotics Innovation Center, Osnabrück Branch, Osnabrück, Germany (e-mail: [email protected])

Abstract: Hyperspectral imaging has been extensively studied in remote sensing. In this community, several approaches exist for classifying different organic and inorganic materials. However, this data is usually collected from large distances (flight or satellite data) and hence lacks the geometric precision that is required for robotic applications like mapping and navigation. In this paper, we present a reference data set that maps hyperspectral intensity data to a terrestrial 3D laser scanner to generate what we call hyperspectral point clouds (HPCs). To organize and distribute the resulting massive data, we designed an HDF5 file structure that is the basis to feed information derived from the raw data into robot control frameworks like ROS.

© 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. Peer review under responsibility of International Federation of Automatic Control. 10.1016/j.ifacol.2019.08.101

Keywords: 3D Mapping, Hyperspectral Imaging, Data Storage, Perception and Sensing, Information and Sensor Fusion

1. INTRODUCTION

Over recent years, the rapid development of laser scanning technology has allowed the generation of highly precise 3D models of large scale environments. Via co-calibration of different kinds of cameras, additional modalities can be added to the measured distance and reflectivity values. Adding color information from RGB cameras is state-of-the-art in commercial laser scanners. Adding information other than color is seldom seen but desirable to ease typical problems like segmentation and classification. One example is the addition of infrared data to detect thermal leaks (Borrmann et al., 2016).

In contrast to thermal cameras, which capture only a narrow part of the electromagnetic spectrum, hyperspectral cameras can capture intensity information over several hundred narrow wavelength intervals. The analysis of such hyperspectral distributions for classification is a well-known technique in geosciences and remote sensing. However, freely available data from remote sensing has a poor spatial resolution, which makes this data more or less useless for robotic applications, where a high local resolution is necessary to safely operate a robot.

In this paper, we close this gap by presenting a freely available reference data set of high resolution 3D point clouds, where each 3D point carries a spectral intensity distribution over 150 spectral channels. With this work, we initiate a basis for identifying new methods for typical problems that are hard to solve by relying on RGB information alone. One obvious example is the classification of vegetation to distinguish between crops and weeds in precision farming, where both classes of objects are more or less green in RGB images. Other possible applications include classification between materials in urban areas (concrete, asphalt, stones) to distinguish between roads and sidewalks, or assessing building damages by analyzing the integrity of, e.g., walls. Our data set was collected in the Botanical Garden at the University of Osnabrück. The garden is located in a former stone quarry and therefore features a large number of different materials from the aforementioned application domains in a relatively small area, making it an excellent environment to generate a reference data set of hyperspectral point clouds.

In the remainder of this paper, we present our measurement systems and the characteristics of the collected data, as well as a proposal to organize and distribute the huge amount of data that arises when dealing with such high resolution spectral data. We present reference images in different spectral channels and a preliminary application in segmentation, where we computed the well-known NDVI index to recognize vegetation in the collected reference data. All collected data is stored in an HDF5 file structure that can be accessed with a number of open source libraries and software packages. The presented reference data set and our open source processing software "Las Vegas Surface Reconstruction Toolkit" are available on the project's website 1.

⋆ DFKI in Osnabrück is generally supported by the state of Niedersachsen. The 3D scanner and hyperspectral camera used for acquiring the data set were financed by a grant from DFG. Thomas Wiemann is currently financed by a BMBF grant in the project SoilAssist-2. Sebastian Pütz is financed by a doctoral grant from the Friedrich-Ebert-Stiftung. All this sponsoring is gratefully acknowledged.

1 www.las-vegas.uni-osnabrueck.de
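The NDVI segmentation mentioned above reduces each point's spectrum to a single index. Below is a minimal sketch, assuming each point carries a list of 150 per-channel intensities; the red and near-infrared band indices used here are hypothetical and depend on the camera's actual channel layout.

```python
# Sketch of the NDVI computation for a single hyperspectral point.
# The band indices for red (~670 nm) and near-infrared (~800 nm) are
# illustrative assumptions, not the data set's actual channel layout.
def ndvi(spectrum, red_band=45, nir_band=100):
    """Normalized Difference Vegetation Index for one point."""
    red = float(spectrum[red_band])
    nir = float(spectrum[nir_band])
    if red + nir == 0.0:
        return 0.0  # avoid division by zero on dark points
    return (nir - red) / (nir + red)

# Vegetation reflects strongly in the NIR band and absorbs red light,
# so high NDVI values indicate plant matter.
leaf = [10.0] * 150
leaf[45], leaf[100] = 20.0, 180.0   # low red, high NIR
stone = [80.0] * 150                # flat spectrum
print(ndvi(leaf))   # 0.8
print(ndvi(stone))  # 0.0
```

Thresholding this index per point is enough for a coarse vegetation mask; a real pipeline would pick the band indices from the spectral metadata rather than hard-coding them.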


2. RELATED WORK

Most work that describes the use of 3D hyperspectral point cloud data is based on image data collected by UAVs. Although the calibration of hyperspectral cameras against 3D point clouds generated from UAVs is technically feasible, the actual generation of 3D point clouds with hyperspectral annotations is seldom seen. In Nevalainen et al. (2017), photogrammetrically generated point clouds were fused with hyperspectral images from a UAV for tree classification. Similar results were shown in Näsi et al. (2015). However, the reported point density in these studies is relatively low at about 500 points per square meter, with significant noise in the range of several centimeters. Using terrestrial laser scanners, point density can be increased by several orders of magnitude, while reducing the noise level to millimeters.

Fusing point clouds from terrestrial scanners and color data from RGB camera images is state of the art in commercial systems. The most common solution to compute the extrinsic calibration is to use specific reference patterns with detectable feature points in both domains, e.g., a checkerboard (Zhang and Pless, 2004; Buckley et al., 2013). This idea was extended to thermal camera data in Borrmann et al. (2016). In Nieto et al. (2010), a calibrated RGB camera on top of a laser scanner is used to create an RGB-colored point cloud, which is registered to the hyperspectral image using SIFT features and a piecewise linear transformation. However, this approach requires an additional camera for calibration. In Buckley et al. (2013), a hyperspectral camera was calibrated against a terrestrial laser scanner to generate hyperspectral point clouds. In contrast to our system, an external hyperspectral camera was used, presumably resulting in a large parallax error in near range. Furthermore, the approach required manually placed markers for registration.
In our setup, we used a hyperspectral line camera that was mounted on top of the laser scanner and rotated with it during scanning. To calibrate point cloud and camera data, we used a similar camera model as in Buckley et al. (2013), but implemented a GPU-based mutual information optimization method for markerless and featureless ad-hoc calibration (Igelbrink et al., 2018).

In recent years, several different data formats have become popular that allow efficient storage of and fast access to large amounts of block-shaped data coupled with attached metadata. These formats include, but are not limited to, ROOT (Antcheva et al., 2011) and HDF5 (Folk et al., 1999), which are among the most widely supported and mature. For this data set, we decided to use HDF5 for storing our multi-modal data (laser scanner, RGB camera, hyperspectral camera, pose information, and more) in a single file per data set.
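The mutual information criterion behind the markerless calibration (Igelbrink et al., 2018) can be illustrated on two quantized intensity images. The following is a plain-Python sketch of the score itself, not the authors' GPU-based optimization: a calibration search would re-project the hyperspectral image under candidate extrinsic parameters and keep the parameters that maximize this score against the scanner's reflectance image.

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """Mutual information between two equally sized images with
    intensities in [0, 1), quantized into `bins` levels."""
    qa = [min(int(v * bins), bins - 1) for v in img_a]
    qb = [min(int(v * bins), bins - 1) for v in img_b]
    n = len(qa)
    pa, pb = Counter(qa), Counter(qb)
    pab = Counter(zip(qa, qb))
    mi = 0.0
    for (a, b), count in pab.items():
        p_joint = count / n
        # p(a,b) * log( p(a,b) / (p(a) * p(b)) )
        mi += p_joint * math.log(p_joint * n * n / (pa[a] * pb[b]))
    return mi

# A well-calibrated re-projection lines up with the reflectance image,
# so it shares more information than a constant (misaligned) one.
reflectance = [i / 100 for i in range(100)]
aligned = reflectance
constant = [0.5] * 100
print(mutual_information(reflectance, aligned) >
      mutual_information(reflectance, constant))  # True
```

Because the score only compares intensity co-occurrence statistics, it needs no markers or matched features across the two modalities, which is what makes the ad-hoc calibration possible.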

3. GENERATING AND STORING HYPERSPECTRAL POINT CLOUDS

In this section, we present the mobile mapping system that was used for generating hyperspectral point clouds, as well as the HDF5 file structure for storing the data from the different sensors.

Fig. 1. The Pluto robot with Riegl VZ400i laser scanner and Resonon Pika L hyperspectral camera in the Botanical Garden at the University of Osnabrück.

3.1 System Setup

We installed our hyperspectral scanning system on the mobile robot Pluto, which is based on the VolksBot XT platform. It features a Riegl VZ400i high resolution 3D laser scanner and a Resonon Pika L hyperspectral line camera, mounted on top of the rotating laser scanner. The camera records up to 297 independent spectral channels between 400 nm and 1000 nm, which are accumulated into a hyperspectral panorama image while the scanner is rotating and recording the point cloud data. In addition, we installed a regular RGB camera on the scanner system to capture high resolution images. For precise pose estimation, the robot is equipped with 3 IMU units and a differential GPS sensor. The robot with laser scanner and hyperspectral camera is shown in Fig. 1.

3.2 Data Acquisition

The data was collected in a stop-and-go fashion: The robot was manually steered to a desired scanning position. There, we took a 360° laser scan and simultaneously recorded the hyperspectral data. After each scan, we replaced the Resonon camera with a Nikon D610 camera to capture RGB images. This manual switching had to be done due to spatial restrictions. Since switching the sensors resulted in small but noticeable changes in the extrinsic orientation of the sensors, we had to re-calibrate the system every time we switched the cameras. Since our hyperspectral calibration procedure (Igelbrink et al., 2018) requires no external features, and the calibration of the RGB camera can easily be re-adjusted after scanning using the software provided by the scanner manufacturer, the switching did not significantly affect the quality of the final data set. The data collected by the sensors was stored in dedicated directories on the on-board computer. This kind of data organization has several drawbacks.
First, this approach generates many files. For hyperspectral registration, we need to fuse the single lines of the Resonon camera into a consistent panorama image. Since the reference driver does not provide time-stamped data and does not deliver data at a fixed frame rate, we need to save all received frames and fuse them according to the average rotation speed of the laser scanner to generate consistent panorama images for re-projection of the points. Currently, we save each frame into a time-stamped file. In the generation process, frames are duplicated or dropped


depending on the rotation speed of the laser scanner; this fusion is currently done offline after a scan has been collected. Similarly, we have to store all RGB images and pose information. Our initial idea was to use ROS bags as a container for all data, but in practice the hyperspectral data alone exceeded the 1 GB default limit of ROS bags. Increasing this limit resulted in other side effects like missing frames, which led us to the current solution. Distributing the data in a file system structure is generally not desirable. Compressing the data in a ZIP file would solve the problem of scattered data, but uncompressing only parts of it is inefficient. From our point of view, HDF5 files overcome these drawbacks. Hence, we have implemented an HDF5 structure that fits the requirements of our data set. We implemented an interface that is compatible with common exchange formats and puts the data into the proposed structure within the HDF5 file. This structure is presented in the next section.

3.3 Data Storage and Structure

The HDF5 file format has several advantages over using directory structures. An HDF5 file has an internal structure similar to a file system, consisting of groups (directories) and data sets (files), to support flexible hierarchical storage of data. Additionally, the format supports attaching metadata to each object. This makes it easy to store data of different modalities in a self-explaining format, as required for our data sets. Moreover, it enables us to keep related data closely together, e.g., the metadata related to a data set is attached directly to it, rather than being kept in a separate file. The HDF5 format is very flexible regarding data storage, access, and memory layout. As an alternative to storing a data set as a single continuous block of memory, a chunked layout is available, where a data set is split into several equally sized chunks that are stored separately in the file.
This significantly improves I/O performance when working on subsets of the data, because not all data has to be loaded into memory. Storing the data in a chunked layout also allows compressing a data set with one of several available algorithms. Memory layouts and compression settings can be freely mixed within one file, which provides great flexibility. The reference implementation of HDF5 is developed by The HDF Group in the C programming language. Bindings for nearly all popular programming languages are available. This makes it easy to switch between languages in a clean manner, using, e.g., Python for exploration and analysis and C++ for fast processing of the same files, without having to rely on multiple different libraries to parse and load a complex directory structure.

Data Organization in HDF5

Fig. 2 presents the general structure used to organize our spectral and point cloud data. All collected sensor data is organized in a collection called raw. It contains meta information about where and when the data was acquired. The data collected by the individual sensors of the system is stored separately for each scanning position in sub-collections referring to the different sensors. For laser scan data, we store the initial pose estimation, the final registration in the global reference frame, the field of view and angular resolution, the axis-aligned bounding box as well as the single points in a sub-collection called scans. The corresponding spectral data is stored in the spectral sub-collection. It contains the raw spectral panorama images and the calibration parameters for the cylindrical calibration model. The spectral data is organized in an image cuboid where each layer refers to an intensity image of one spectral channel. Information about the mapping of layers to spectral channels is encoded in the meta attributes of this spectral data set. Similarly, we store the acquired RGB images in a sub-collection images for each position.

Fig. 2. Proposed organization of captured data in HDF5: (a) scan data, (b) spectral data, (c) annotations.

Besides the raw data that comes from the system, we store data derived from the original data in an additional group called annotations. Here, we store the annotated point clouds with hyperspectral data directly as an array consisting of 150 unsigned characters per point, as well as RGB data in a separate array. In addition to annotated point clouds, the user can store all other representations that can be derived from the raw data in specific groups and data sets. This can also include spatial indices like octrees and k-d trees that represent the stored point clouds. Our intention here is to keep all information that was derived from the original data closely together with the original data. The representations stored in these additional sections are not limited to structures related to point cloud data. They may also include meshes with textures computed from the raw data, like the ones presented in Wiemann et al. (2018). In future work, we intend to store map representations there that can be used in robotic frameworks. Current ideas are OctoMaps (Hornung et al., 2013), grid maps (Kohlbrecher et al., 2011) or polygonal navigation maps (Pütz et al., 2016). The idea is to use the HDF5 data format as the basic storage for all static data about an environment that can be re-used for different applications.
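The layout sketched in Fig. 2 can be expressed directly with h5py, one of the Python bindings mentioned above. The following is a minimal sketch: group names follow the description (raw, scans, spectral, annotations), while the file name, dataset shapes and attribute names are illustrative assumptions, not the exact schema of the data set.

```python
import h5py
import numpy as np

# Dummy stand-ins for one scan position (real scans hold ~70M points
# and a 150-channel panorama; shapes here are illustrative).
points = np.random.rand(1000, 3).astype(np.float32)
panorama = np.random.randint(0, 255, (150, 100, 200), dtype=np.uint8)

with h5py.File("scan_project.h5", "w") as f:
    raw = f.create_group("raw")                      # all collected sensor data
    raw.attrs["location"] = "Botanical Garden, Osnabrueck"

    pos = raw.create_group("position_00000")         # one scanning position
    scans = pos.create_group("scans")
    pts = scans.create_dataset("points", data=points,
                               chunks=True, compression="gzip")
    pts.attrs["fov_deg"] = [360.0, 100.0]            # field of view
    pts.attrs["angular_resolution_deg"] = 0.05

    spec = pos.create_group("spectral")
    cube = spec.create_dataset("panorama", data=panorama,
                               chunks=(50, 50, 50), compression="gzip")
    cube.attrs["wavelength_range_nm"] = [400, 1000]  # layer-to-channel mapping

    f.create_group("annotations")                    # derived data goes here
```

Note how the metadata (field of view, wavelength range) travels with the data set it describes, instead of living in a separate file.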
The aim is to reduce the scattering of processed data away from the original raw data that usually happens over time when a data set is used for different experiments and application scenarios. We see this kind of organization within HDF5 files as a persistence layer on file servers or other long-term data storage. To make the data accessible in robotic contexts, an API layer and interface nodes to the robot control architecture in use have to be implemented.



Fig. 3. Components to make high resolution data available for robotic applications.

The desired interaction between static storage and robot control architectures is sketched in Fig. 3. We mainly distinguish between a persistence layer, where all high resolution data is stored on dedicated file storage with high bandwidth and high capacity, and pre-processing modules that generate the derived data. The raw and derived data are stored in the HDF5 file. The pre-processing modules add new representations to the HDF5 file base. This includes map generation, scan registration, computation of the annotated point clouds, filtering, computation of indices, and generally all computations that are done on static environment data.
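Such a pre-processing module essentially opens the project file in append mode and adds a new data set under the derived-data group. A minimal h5py sketch with hypothetical file, group and attribute names:

```python
import h5py
import numpy as np

def add_derived_map(path, name, grid, resolution_m):
    """Attach a derived 2D map to the annotations group of a project file."""
    with h5py.File(path, "a") as f:              # append mode: raw data untouched
        ann = f.require_group("annotations")     # create group only if missing
        ds = ann.create_dataset(name, data=grid,
                                chunks=True, compression="gzip")
        ds.attrs["resolution_m"] = resolution_m  # metadata stays attached

# A dummy occupancy grid as a stand-in for a generated map.
grid = np.zeros((256, 256), dtype=np.uint8)
add_derived_map("project.h5", "grid_map", grid, 0.05)
```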

Fig. 4. Aerial view of a part of the Botanical Garden at the University of Osnabrück with an indication of the scanned area (image provided by the City of Osnabrück, geo.osnabrueck.de).

To make the computed information available, interface nodes need to be implemented that access the file base and generate representations that can be used in different applications. For example, stored grid maps from the HDF5 files could be converted to ROS messages and then published via an existing ROS master. Another possible application is feeding static information into semantic mapping applications like our SEMAP framework (Deeken et al., 2018) for qualitative spatial reasoning about the environment. The interface nodes could also be used to send relevant data on demand, since a robot usually does not need to keep all information about an environment in working memory, but only the information that is needed for the current task, e.g., the navigation map of a certain area.
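On-demand access maps naturally onto HDF5's partial I/O: when an interface node requests a slice of a stored map, only the chunks intersecting that slice are read from disk. A small h5py sketch, where the dataset name and shape are hypothetical:

```python
import h5py
import numpy as np

# Build a demo file with one chunked map (hypothetical name and shape).
with h5py.File("maps.h5", "w") as f:
    data = np.arange(1000 * 1000, dtype=np.uint32).reshape(1000, 1000)
    f.create_dataset("annotations/grid_map", data=data,
                     chunks=(100, 100), compression="gzip")

# An interface node can request just the tile it needs; HDF5 loads only
# the chunks intersecting the slice instead of the whole map.
with h5py.File("maps.h5", "r") as f:
    tile = f["annotations/grid_map"][200:300, 400:500]

print(tile.shape)  # (100, 100)
```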

Fig. 5 shows exemplary 3D views of the data set with different modalities. The top row shows renderings of the same scenes with annotated spectral intensities at wavelengths of 400, 600 and 800 nm, respectively. For rendering, the measured intensities were normalized and mapped to a blue-to-red color gradient, visualizing the different intensity distributions at the different wavelengths. The pictures in the bottom row show overview renderings of the whole data set with RGB annotation (left) and mapped NDVI values (right). Human-made inorganic structures like pathways are clearly distinguishable in this representation. We added this picture to demonstrate the potential of combining hyperspectral and spatial information for robotic applications. Especially for semantic classification, the use of such sensor combinations seems to be extremely beneficial, in particular in combination with methods from remote sensing, where the classification of hyperspectral images is an established discipline.
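The NDVI rendering combines a near-infrared and a red channel per point. A minimal sketch of the computation over the spectral cuboid; the channel indices below (for roughly 670 nm and 800 nm in the 150-channel cuboid) are assumptions, not the exact indices used for the figure:

```python
import numpy as np

def ndvi(cube):
    """NDVI per pixel from a (channels, height, width) spectral cuboid."""
    red = cube[67].astype(np.float64)    # bucket near 670 nm (assumed index)
    nir = cube[100].astype(np.float64)   # bucket near 800 nm (assumed index)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # guard zero division

# Synthetic cuboid in place of a real spectral panorama.
cube = np.random.randint(1, 255, (150, 10, 10), dtype=np.uint8)
values = ndvi(cube)   # values in [-1, 1]; vegetation high, gravel low
```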

Currently, we have implemented interface nodes to ROS to query polygonal meshes and cost maps from such HDF5 files. These maps are then used for local navigation. This so-called Mesh Map Server is able to perform several range queries and deliver the relevant information via dedicated ROS messages. For this kind of interaction between persistence layer and application, it is important that the relevant data can be extracted from the data storage without high latencies. Hence, we have evaluated the performance of HDF5 files for our application.

4. THE BOTANICAL GARDEN DATA SET

In this section we describe the main features of the generated reference data set. It consists of 16 hyperspectral laser scans and covers an area of approximately 70 × 75 m. An aerial photo of the scanned area and the rough layout of the scan positions and covered area is shown in Fig. 4. Each 3D laser scan was taken with a field of view of 360° × 100° at a horizontal and vertical angular resolution of 0.05°, resulting in approximately 70 million points per scan. For each scan position, we collected the full spectral data in 150 buckets between 400 nm and 1000 nm. Additionally, we took five 24 megapixel RGB images per scan for RGB annotation. The total amount of collected raw data sums up to 45 GB for this relatively small area. The single scans were automatically registered based on the GPS pose estimations provided by the Riegl laser scanner and the robot's odometry using slam6d 2 .

Note that the spectral intensities of the different scans have not been normalized yet. This task would require a reference spectrum. Due to the rotating camera in our setup, acquiring such a spectrum is difficult, since it is only valid for small variations of the camera's field of view. We plan to resolve this issue in future work.

5. HDF5 STORAGE PERFORMANCE

So far, we have presented the organization and features of the collected data. In this section we analyze the benefits of using HDF5 files for such data in terms of data compression and access speed. All experiments were performed on an Intel Core i7-4930K processor with 6 physical cores and 32 GB RAM. For accessing HDF5 files, we used libhdf5 in version 1.10.

2 www.slam6d.sourceforge.net
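The 150 spectral buckets described above span 400 nm to 1000 nm, i.e. 4 nm per bucket. Hypothetical helper functions (not part of the published tooling) to convert between wavelength and channel index:

```python
def channel_index(wavelength_nm, start_nm=400.0, width_nm=4.0, channels=150):
    """Map a wavelength to its spectral bucket (150 buckets over 400-1000 nm)."""
    idx = int((wavelength_nm - start_nm) // width_nm)
    if not 0 <= idx < channels:
        raise ValueError("wavelength outside sensor range")
    return idx

def channel_center(idx, start_nm=400.0, width_nm=4.0):
    """Center wavelength of a spectral bucket."""
    return start_nm + (idx + 0.5) * width_nm

print(channel_index(400))    # 0
print(channel_index(999))    # 149
print(channel_center(100))   # 802.0
```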




Fig. 5. Visualizations of the Botanical Garden data set. The top row shows renderings of the spectral intensities at 400 nm (left), 600 nm (middle) and 800 nm (right). The lower row shows an overview of the whole data set with RGB annotations (left) and normalized NDVI values (right); the gravel pathways clearly stick out in the NDVI representation.

Table 1. Maximum compression rates and HDF5 file generation times for different storage modes.

Method                     File size   Generation time
No compression             39.1 GB     27:26 min
1 channel image aligned    21.3 GB     47:26 min
5 channel image aligned    21.3 GB     47:30 min
Chunked 50                 21.0 GB     49:56 min

In a first experiment, we evaluated the resulting file sizes using different compression methods available in libhdf5. The maximum achievable compression rates and average file creation times for the different compression methods are shown in Tab. 1. The first row shows the numbers without compression enabled. With compression enabled, the HDF5 library splits the data into chunks, which are organized in a binary search tree. The size of the chunks has a significant impact on the achievable compression ratios and influences the file generation time. When experimenting with the data, we found that chunking the float arrays of the point cloud data initially did not affect the file size noticeably, probably due to the encoding of floating point values. The chunking of the hyperspectral data, however, delivered higher compression rates. Therefore, we evaluated different chunk sizes, as displayed in the remainder of Tab. 1.
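The effect of the chunk shape on the compressed size can be explored with a short h5py script. This is a sketch on synthetic data (the real panorama dimensions and intensity statistics differ), using the compressed storage size that HDF5 reports per data set:

```python
import h5py
import numpy as np

# Synthetic hyperspectral cuboid (low-entropy values, so gzip has an effect).
data = np.random.randint(0, 16, (150, 200, 200), dtype=np.uint8)

sizes = {}
for chunk in [(1, 200, 200), (5, 200, 200), (50, 50, 50)]:
    with h5py.File("bench.h5", "w") as f:
        ds = f.create_dataset("cube", data=data, chunks=chunk,
                              compression="gzip", shuffle=True)
        f.flush()
        sizes[chunk] = ds.id.get_storage_size()  # compressed bytes on disk

print(sizes)  # compressed size per chunk shape
```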

First, we stored one chunk per hyperspectral channel of the panorama images ("1 channel image aligned") as well as chunks consisting of 5 adjacent hyperspectral channels ("5 channel image aligned"). This reduced the initial file size, but showed no measurable difference between 1- and 5-channel data access. However, creating chunks of 50 elements over all dimensions of the image cuboid ("Chunked 50") reduced the file size further at the cost of a higher generation time. To find out how this kind of chunking affected the files, we varied the chunk sizes from 10 to 1000 as displayed in Fig. 6. Here, a minimum at the tested chunk size of 50 is clearly visible with the default compression enabled. In addition, we activated the data shuffle algorithm implemented in the HDF5 library. Here, the file size initially increased. To test our hypothesis that the internal floating point representation prevents further compression, we converted the scan data to integer values by converting from meters to rounded units of 0.1 millimeters, which is well below the distance accuracy of the laser scanner. In this integer representation, the compression ratio increased, and combining the integer representation with shuffling finally led to very high compression rates: we were able to achieve compression rates of about 64% compared to the initial raw data.

Fig. 6. HDF5 file sizes of the Botanical Garden data set with different compression methods and chunk sizes (compression without shuffle, with shuffle, and with shuffle on integer data; chunk sizes 10 to 1000).

To evaluate the time to access our data stored in HDF5 files, we benchmarked the time needed to load a single scan position from an HDF5 file and compared it to direct loading of raw data from a directory and from a gzip-compressed directory. For the gzip comparison, we extracted the compressed data to a separate directory and loaded the extracted raw data. The results of these experiments are shown in Tab. 2. As expected, the access time for gzipped data is significantly slower than direct data access. Surprisingly, the HDF5 access is about 45% faster than direct reading of the individual files. The high HDF5 reading performance can be explained by the internal chunking, which is optimized for fast data access. For directory access, we store the single hyperspectral channel images in PNG files, which may produce reading overhead. Generally, this evaluation shows that HDF5 files are a good choice for permanent storage and fast access. The most significant drawback is the comparatively long time needed to generate the HDF5 files from the initial directory representation of the raw data.

Table 2. Comparison of creation and access times for a single scan position.

Representation   Creation Time   Access Time   Size
Directory        n/a             52 s          2.1 GB
gzip             4:33 min        68 s          1.5 GB
HDF5             4:41 min        32 s          1.6 GB

6. CONCLUSION

In this paper we presented a reference data set consisting of 16 hyperspectral laser scans. In addition, we presented an HDF5-based organization of the acquired information that provides compact storage and fast access to the recorded data, as shown in the evaluation. Besides these features, the use of HDF5 allows indexing the data and makes it self-consistent. Currently, we are creating the files from an intermediate directory structure on a hard drive. In future work, we plan to implement a solution that writes the acquired data directly into HDF5. The challenge here will be to deal with the relatively high latencies when creating such files. Furthermore, we plan to formalize the storage of map representations and other relevant data in HDF5 files for data distribution scenarios as indicated in Fig. 3.

REFERENCES

Antcheva, I., Ballintijn, M., Bellenot, B., Biskup, M., Brun, R., Buncic, N., Canal, P., Casadei, D., Couet, O., Fine, V., et al. (2011). ROOT — a C++ framework for petabyte data storage, statistical analysis and visualization. Computer Physics Communications, 182(6), 1384–1385.
Borrmann, D., Leutert, F., Schilling, K., and Nüchter, A. (2016). Spatial projection of thermal data for visual inspection. In 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), 1–6. IEEE.
Buckley, S.J., Kurz, T.H., Howell, J.A., and Schneider, D. (2013). Terrestrial lidar and hyperspectral data fusion products for geological outcrop analysis. Computers & Geosciences, 54, 249–258.
Deeken, H., Wiemann, T., and Hertzberg, J. (2018). Grounding semantic maps in spatial databases. Robotics and Autonomous Systems, 105, 146–165.
Folk, M., Cheng, A., and Yates, K. (1999). HDF5: A file format and I/O library for high performance computing applications. In Proceedings of Supercomputing, volume 99, 5–33.
Hornung, A., Wurm, K.M., Bennewitz, M., Stachniss, C., and Burgard, W. (2013). OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots.
Igelbrink, F., Wiemann, T., Pütz, S., and Hertzberg, J. (2018). Markerless ad-hoc calibration of a hyperspectral camera and a 3D laser scanner. In Proc. 15th International Conference on Intelligent Autonomous Systems (IAS-15). Springer.
Kohlbrecher, S., von Stryk, O., Meyer, J., and Klingauf, U. (2011). A flexible and scalable SLAM system with full 3D motion estimation. In 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics, 155–160.
Näsi, R., Honkavaara, E., Lyytikäinen-Saarenmaa, P., Blomqvist, M., Litkey, P., Hakala, T., Viljanen, N., Kantola, T., Tanhuanpää, T., and Holopainen, M. (2015). Using UAV-based photogrammetry and hyperspectral imaging for mapping bark beetle damage at tree-level. Remote Sensing, 7(11), 15467–15493.
Nevalainen, O., Honkavaara, E., Tuominen, S., Viljanen, N., Hakala, T., Yu, X., Hyyppä, J., Saari, H., Pölönen, I., Imai, N.N., and Tommaselli, A.M.G. (2017). Individual tree detection and classification with UAV-based photogrammetric point clouds and hyperspectral imaging. Remote Sensing, 9(3).
Nieto, J., Monteiro, S., and Viejo, D. (2010). 3D geological modelling using laser and hyperspectral data. In Geoscience and Remote Sensing Symposium (IGARSS), 2010 IEEE International, 4568–4571. IEEE.
Pütz, S., Wiemann, T., Sprickerhof, J., and Hertzberg, J. (2016). 3D navigation mesh generation for path planning in uneven terrain. IFAC-PapersOnLine, 49(15), 212–217. 9th IFAC Symposium on Intelligent Autonomous Vehicles IAV 2016.
Wiemann, T., Mitschke, I., Mock, A., and Hertzberg, J. (2018). Surface reconstruction from arbitrarily large point clouds. In Robotic Computing (IRC), 2018 Second IEEE International Conference on, 278–281. IEEE.
Zhang, Q. and Pless, R. (2004). Extrinsic calibration of a camera and laser range finder (improves camera calibration). In Proc. Intelligent Robots and Systems (IROS 2004), 2301–2306. IEEE.