IAV2004 - PREPRINTS
5th IFAC/EURON Symposium on Intelligent Autonomous Vehicles
Instituto Superior Técnico, Lisboa, Portugal, July 5-7, 2004

METHODS FOR PERSONAL LOCALISATION AND MAPPING  

Jari Saarinen 1, Roman Mazl 2, Miroslav Kulich 3, Jussi Suomela 1, Libor Preucil 2 and Aarne Halme 1

1 Helsinki University of Technology, Automation Technology Laboratory, PL 5500, 02015 HUT, Finland, Email: [email protected]
2 Czech Technical University, Gerstner Laboratory, Technická 2, CZ 166 27, Prague 6, Czech Republic
3 Center of Applied Cybernetics, Czech Technical University, Technická 2, CZ 166 27, Prague 6, Czech Republic

Abstract: This paper presents methods for beaconless localisation of a human being in indoor conditions. The localisation is based on so-called human dead reckoning, implemented with inertial sensors and a purpose-built stride length measurement sensor. The focus of the paper is the possibility of using a laser range finder for human localisation and mapping. The paper discusses the differences between the traditional robot localisation problem and human localisation. Methods for scan matching, mapping and localisation using an a priori map are presented. Copyright © 2004 IFAC

Keywords: Indoor localisation, Mapping, Scan matching, Human dead reckoning

1. INTRODUCTION

Search and rescue missions have gained increasing attention in the past years. Utilising robotics in these missions can decrease the execution time of the mission and save human lives. The fire-fighting scenario is one of the most common rescue situations. Typical searching and mapping missions for fire fighters are accidents with leaking gases or liquids and, of course, the traditional smoke diving in a burning building. Presently a typical fire-fighting scenario includes several fire-fighter pairs and a mission coordinator. The coordinator stays outside the building and gives instructions via a short-wave transmitter. Usually there is a rough map of the building at hand, but not a detailed one. According to the fire fighters, knowledge of the positions of the fire-fighting crew is crucial both for safety and for mission control. For localisation, a frame of reference is also needed, and in some cases there is no a priori map available for the mission. This means that the localisation system has to be capable of either using an existing map of the building or mapping the unknown environment during the mission.

The European Community supported project PeLoTe (Building Presence through Localisation for Hybrid Telematic Systems) is studying a search and rescue concept based on cooperative human and robotic entities. The aim of the project is to map a totally or partially unknown environment with human and robotic explorers and to generate a common map, a shared presence, from the mapped data. One of the main topics is localisation, which is the common denominator for both kinds of entities in generating the feeling of presence. The accurate position of all crew members on a map gives the operator a clear view of the situation. The exploring entities will know the positions of the others as well as their own, and above all, accurate positioning makes it possible to map the environment. The a priori map, the positions of all entities and the real-time mapping data are fused into a model, or representation, of the environment. Both the robots and the humans can use this representation, the common presence, for navigation and for updating their sensor information.

The challenges of the work are how to generate a common presence that both the humans and the robots can understand, how to map the environment and, finally, how to localise a human without preinstalled beacons. The equipment for human localisation is called the Personal Navigation system (PeNa). Additionally, a human carries equipment used for data visualisation and communication. The whole system is called the Personal Assistance System (PAS), outlined in Fig. 1.

The core of the localisation is human dead reckoning, based on a compass, an Inertial Measurement Unit (IMU), a wireless step-length measurement unit and a laser. The laser data is also used for map building and global localisation.
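As an illustration of the dead-reckoning principle (a minimal sketch; the function and variable names are ours, not taken from the PeNa implementation), each detected stride advances the position estimate along the fused heading:

```python
import math

def dead_reckoning_step(x, y, stride_m, heading_rad):
    """Advance a 2D position estimate by one stride along the heading."""
    return (x + stride_m * math.cos(heading_rad),
            y + stride_m * math.sin(heading_rad))

# Example: five 0.7 m strides while turning slowly to the left.
x, y = 0.0, 0.0
for i in range(5):
    x, y = dead_reckoning_step(x, y, 0.7, math.radians(5 * i))
```

In the system described here the stride length would come from the step-length measurement unit and the heading from the compass/gyro fusion; in this sketch both are passed in as plain numbers.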


Fig. 1. PAS system.

The human can also use higher-level understanding for mapping and localisation. The operator can insert predefined objects (such as dangerous areas, victims found, etc.) into the model on request. The human can also identify higher-level concepts (corridor crossings, elevators, etc.) and use this information for localisation. This paper concentrates on the laser-based methods; the output of the dead reckoning system is used as input. A more detailed description of the developed human dead reckoning can be found in (Saarinen et al., 2004).

2. PERSONAL MAP BUILDING AND LOCALISATION

In a rescue situation, no infrastructure to support the localisation can be expected. Thus the core of the localisation must be based on dead reckoning. The SICK laser was chosen as an environmental perception sensor to support the human localisation. The laser is too clumsy to be carried by a fire fighter, and a laser range finder is not well suited to smoky conditions: as the visibility drops, the maximum measurement distance decreases rapidly. However, the intention is to develop methods for future emerging technologies, and thus the laser was selected. The longer wavelength of the new 1.5 µm laser technology will both decrease the size of the laser scanners and improve laser visibility in difficult conditions. Other possible sensors could be e.g. ultrasonic sensors and millimetre wave radar.

The localisation methods based on a laser range finder can be divided into scan matching, global localisation methods using an a priori map, and Simultaneous Localisation And Mapping (SLAM) methods. Scan matching is the process of estimating the transformation that fits consecutive measurements (Lu and Milios 1994, Gutmann and Schlegel 1996, Röfer 2002, Mázl and Preucil 2000); it is also called laser odometry or laser-based dead reckoning (Bailey and Nebot 2001). Localisation using an a priori map is the process of finding the correspondence between the map and the measurement. There is a large variety of solutions to a priori map localisation, using particle filters (Thrun et al. 2000), extended Kalman filters (Dissanayake et al. 2001), correlation (Konolige and Chou 1999) and many others. SLAM seeks a solution to the problem where both the map and the vehicle position are unknown a priori (Dissanayake et al. 2001, Bailey and Nebot 2001). This paper concentrates on scan matching and on localisation using an a priori map; mapping with a known location is also addressed.

The transfer of algorithms developed for robots to a human is, however, not straightforward. On mobile robots the sensor is usually installed statically on top of the robot, which is not possible in the case of a human. Walking (or, in the worst case, running) is a dynamic process in which the whole body is involved. During the movement the body rotates around all three axes. Rotation around the vertical (yaw) axis causes large differences in heading between consecutive scans (Fig. 2a). The heading variations are typically 20 to 40 degrees/s, but the worst case can be up to 200 degrees/s (when turning rapidly). Rotations around the pitch and roll axes violate the 2D assumption. Changes in roll cause, for example, the apparent corridor width to vary. Most problematic are the variations in pitch: when the laser rotates around the pitch axis it starts to measure the floor or the ceiling (Fig. 2b). In corridor-type environments this causes problems for algorithms that rely on geometric features such as lines. This is also a problem for mapping; the map cannot be directly constructed from line-segmented scans, since a floor reflection would appear on the map as an obstacle.

Fig. 2. Problems of human scan matching. a) The heading error can be significant between scans, which also makes many of the data points outliers. b) Two consecutive scans; the current measurement is influenced by floor reflections.

Another difference from the robotic application is that the inputs of the system are unknown. This makes prediction of the movement impossible; there is no model that would apply to all situations. Finally, a human does not provide any odometry sensor outputs, and even the placement of such sensors is difficult without restricting the human's movements.
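To put the yaw rates quoted above into perspective, the heading change accumulated between two consecutive scans is simply the rate divided by the scan frequency (the 4 Hz SICK rate mentioned later in the paper; this small arithmetic sketch is ours):

```python
scan_rate_hz = 4.0                    # SICK measurement rate used in the paper
for yaw_rate in (20.0, 40.0, 200.0):  # typical range ... worst-case turn, deg/s
    per_scan = yaw_rate / scan_rate_hz
    print(f"{yaw_rate:5.1f} deg/s -> {per_scan:5.2f} deg between scans")
```

At 200 degrees/s, consecutive scans can thus differ by 50 degrees of heading, which is why the angular dimension dominates the pose search discussed below.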


A positive aspect of human-constructed environments is that vertical installations are very dominant. If the pitch angle α differs from the horizontal plane, the measured distance L̂ can be calculated from equation (1):

L̂ = L / cos α,   (1)

where L is the real distance to the object. If the variations in α stay within 0…10 degrees, the variations in distance stay within 0…1.5%. This shows that the bias in the observed distance to the object is a less significant error source than the floor (and ceiling) reflections. If the sensor is mounted on the body at a height of 1.2 m and α = 10 degrees, the sensor measures the floor at a distance of 6.8 m. Knowledge of the sensor orientation would help the estimation process: the outliers could be detected and removed, and the measurements could be projected onto the correct plane. However, past work has shown that long-term estimation of the sensor orientation is very challenging (Saarinen et al., 2004). Thus in this paper no orientation estimation is used (except for heading).

To test the feasibility of the scan matching, a correlation-based algorithm was implemented. The algorithm was selected because of its simplicity. Methods from (Kulich et al. 2001) were also tested, with no success. These methods included line-to-line matching, point-to-line matching and histogram matching. The main difficulty in these approaches appeared to be the line representation: finding corresponding lines is difficult in situations such as those presented in Fig. 2. The correlation-based algorithm tries to maximise the overlap between the scans. The algorithm is very similar to the one presented in (Schultz and Adams 1998), except that the matching is done with raw data instead of evidence grids. The basic idea is simple: the fitted scan is tested in different poses against the reference scan and the pose with the maximum correlation is selected. The correlation criterion is

Cor = Σ_{i=1…n} c_i,  with c_i = 1 if d(NN_i) < t_h and c_i = 0 if d(NN_i) ≥ t_h,   (2)

where n is the number of points in the fitted scan, d(NN_i) is the distance from the i-th fitted point to its nearest neighbour in the reference scan and t_h is a predefined threshold. The whole algorithm is as follows:
1. Generate a set of poses relative to the reference scan (even distribution, expected position in the middle of the grid).
2. Transform the fitted scan according to the pose.
3. Calculate the correlation.
4. After going through all the poses, select the pose that has the biggest hit value.
5. Calculate the least squares estimation for the selected pose.

The 5th step assumes that the nearest neighbours in the best estimate are correct, which holds if the match was successful.

If the reference map is sorted for the nearest neighbour search, the computational complexity of the algorithm is O(N_x N_y N_α N log(M)), where N_x is the number of poses to search in the x direction, N_y in the y direction, N_α is the number of headings, N is the number of points in the fitted scan and M is the number of points in the reference scan. The normal speed of a human is around 1 m/s and the motion is usually forward. However, the speed can vary (the world record in the 100 m run is less than 10 s) and the possibility of backward and sideways movements should also be considered. The measurement rate used with the SICK laser is 4 Hz. Without odometry this gives an approximate search area of 80 cm x 60 cm x 80 degrees. Even with a rough resolution of 10 cm x 10 cm and 1 degree, this gives 3840 poses to be evaluated. For a real-time application with limited computing power this is too much.

2.1 Human Dead Reckoning

To reduce the computational load, additional sensors are required. The most significant error source is the heading. Adding a gyro to the system reduces the required angular search from 80 degrees to approximately 4 degrees (since the synchronisation is not perfect). To obtain information about the translational movement, a Stride Length Measurement Unit (SiLMU) was designed and added to the system. SiLMU is based on ultrasound and measures the distance between the ankles. In addition, to estimate the absolute angle, the compass and the gyro are fused with a Kalman filter.

The initial estimate for the scan matching is calculated using SiLMU and the heading estimate. The accuracy of this estimate is on average 5% of the distance walked, but can vary in the short term. The initial estimate is situated in the middle of the scan-matching search grid. Currently the search is done within a 60 cm x 36 cm area, the angle is searched within 3 degrees, and the grid resolution is 6 cm x 6 cm and 0.6 degrees. The threshold is 6 cm.

Fig. 3. A dead reckoning result. Dots are the position estimates and the map is drawn at every position to illustrate the accuracy.

Fig. 3 illustrates an example of the functionality of the dead reckoning. The walk is a closed loop and the walked distance was 170 m in 231 s. The error between the end and the start is 3.7 m, giving a relative error of 2% of the distance travelled. All the calculations are done in real time using a 1 GHz Pentium.
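The correlation criterion (2) and the five-step pose search can be sketched as follows. This is a simplified sketch with our own naming: a brute-force nearest-neighbour search stands in for the sorted reference map, and the step-5 least-squares refinement is omitted.

```python
import math

def correlation(fitted, reference, th=0.06):
    """Criterion (2): count fitted points whose nearest neighbour in the
    reference scan lies closer than the threshold th (6 cm here)."""
    score = 0
    for px, py in fitted:
        nn = min(math.hypot(px - rx, py - ry) for rx, ry in reference)
        if nn < th:
            score += 1
    return score

def transform(scan, pose):
    """Rotate and translate a scan into a candidate pose (step 2)."""
    x, y, a = pose
    c, s = math.cos(a), math.sin(a)
    return [(c * px - s * py + x, s * px + c * py + y) for px, py in scan]

def match(fitted, reference, x0=0.0, y0=0.0, a0=0.0):
    """Steps 1-4: evaluate an even grid of poses around the initial
    estimate (60 cm x 36 cm x 3 deg at 6 cm / 0.6 deg resolution, as in
    the paper) and keep the pose with the biggest hit value."""
    best_pose, best_score = (x0, y0, a0), -1
    xs = [x0 + i * 0.06 - 0.30 for i in range(11)]
    ys = [y0 + i * 0.06 - 0.18 for i in range(7)]
    angles = [a0 + math.radians(i * 0.6 - 3.0) for i in range(11)]
    for x in xs:
        for y in ys:
            for a in angles:
                score = correlation(transform(fitted, (x, y, a)), reference)
                if score > best_score:
                    best_pose, best_score = (x, y, a), score
    return best_pose, best_score
```

Sorting the reference scan into a spatial index would turn the inner minimum into the O(log M) lookup assumed by the complexity estimate above.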

3. MAP BUILDING

The maps used for localisation mainly include geometric maps such as line maps, occupancy grid maps (Thrun et al. 2000) or feature-based maps (Bailey and Nebot 2001). The problems in mapping are related to the uncertainty of the position estimate and the uncertainty of the observation. The laser as a sensor is very accurate, but as discussed above, the orientation changes cause additional noise. Especially the floor reflections (Fig. 2b) are problematic: using e.g. direct line segmentation causes extra obstacles to appear on the map, which can result in a map that is completely unusable for localisation. The occupancy grid (Elfes, 1990) is a widely used probabilistic solution for mapping. The problem of the occupancy grid is that if an accurate model of a large environment is required, the size of the map grows rapidly. In PeLoTe the aim is to share the map information between the entities over a wireless link. To keep the bandwidth requirements reasonable, a line map representation was selected. Thus, to exploit the properties of the occupancy grid while maintaining a line model, a method for line segmentation from an occupancy grid was developed.

The gathered sensor data is first processed into an occupancy grid, from which geometric primitives (line segments and polygons) representing the obstacles' boundaries are extracted using computer-vision techniques.

The occupancy grid is a two-dimensional field that maintains stochastic estimates of the occupancy state of each cell in a spatial lattice (Elfes, 1990). The cell estimates are obtained by interpreting the sensor range data using probabilistic models that capture the uncertainty in the spatial information provided by the sensors. Readings taken from multiple points of view are combined in a Bayesian manner to allow incremental updating of the occupancy grid.

Fig. 4. An occupancy grid generated from laser rangefinder data. Light pixels represent a high probability of occupancy; dark ones stand for high emptiness. Gray pixels denote an unknown level.

The second step of the map-building process consists of the following sub-steps:
- The occupancy grid obtained from the sensor data is transformed into a binary image by segmentation. Because of the accuracy of the laser measurements, mainly three values are present in the grid: empty, occupied and unknown. Therefore, simple thresholding is fully sufficient for the segmentation.
- The binary grid is adapted by morphological filtering, making use of chained dilation and erosion operations. The filtering repairs possible splitting of obstacles into isolated islands. As a side effect, the morphological operations also improve the compactness and smoothness of the obstacles and their boundaries.
- The thickness of the object boundaries obtained in the previous steps is several grid cells, which is induced by the inaccuracy of the position determination and the sensor measurements. Assuming symmetry of the errors, the real objects' boundaries correspond to a skeleton of the filtered binary grid.
- The obtained skeleton typically consists of several continuous regions, where cells representing one obstacle lie in the same region. In order to obtain a geometric description, these regions have to be split so that each region corresponds to one geometric primitive.
- Finally, each split region is approximated by a geometric primitive. This is done by recursive boundary splitting followed by least squares fitting.

Fig. 5. Final geometric representation obtained from the occupancy grid in Fig. 4.

The method gives the best results when an accurate localisation is available and the whole data set is processed at once (offline). The method can also be used online with a local occupancy grid; the segmentation is then done when the edge of the grid is reached.

Fig. 6 illustrates noisy raw data and a line map segmented from it using local occupancy grids. The map is fused from several partial maps. The drawback is that the number of line segments grows, since many segments represent the same line. The floor hits visible in the raw data have, however, disappeared.
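The last two sub-steps, region splitting and line fitting, can be sketched as follows. This is our own minimal version under the stated assumptions: recursive boundary splitting with a fixed distance threshold, and a total-least-squares line fit per split region.

```python
import math

def split(points, tol=0.05):
    """Recursive boundary splitting: cut the chain at the point farthest
    from the end-to-end chord until every deviation is below tol."""
    if len(points) < 3:
        return [points]
    (x1, y1), (x2, y2) = points[0], points[-1]
    nx, ny = y1 - y2, x2 - x1                     # normal of the chord
    norm = math.hypot(nx, ny) or 1.0
    dists = [abs(nx * (x - x1) + ny * (y - y1)) / norm for x, y in points]
    k = max(range(len(points)), key=dists.__getitem__)
    if dists[k] <= tol:
        return [points]
    return split(points[:k + 1], tol) + split(points[k:], tol)

def fit_segment(points):
    """Total least-squares line fit; the segment is returned as the
    projections of the first and last point onto the fitted line."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # line direction
    dx, dy = math.cos(theta), math.sin(theta)
    def project(p):
        t = (p[0] - mx) * dx + (p[1] - my) * dy
        return (mx + t * dx, my + t * dy)
    return project(points[0]), project(points[-1])
```

Applied to an L-shaped boundary chain, `split` cuts at the corner and `fit_segment` then returns one line segment per wall.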


Fig. 6. A segmented map from partial occupancy grid maps.

4. LOCALISATION USING A GLOBAL MAP

4.1 Operator-assisted Localisation

The operator can also help in the localisation process. He observes the map received from the entities and makes corrections based on it. Fig. 7 illustrates corrupted localisation data and manually corrected data. From the corrupted data it is easy to observe that in the bottom left corner there is a mismatch and the heading is lost. This can be corrected by defining three points on the map: a rotation origin, the wrong direction and the correct direction. The operator chooses these three points by clicking them on the map. The data from the rotation origin onwards is then rotated into the correct coordinate system. Translational errors can be corrected by defining the point to be translated and the correct position.

Fig. 7. Top: localisation data received directly from the human entity. Bottom: the same data manually corrected by the operator.

4.2 Virtual Scan Matching

The virtual scan-matching algorithm is based on the same algorithm as the laser odometry. The reference scan is calculated from the line map using the expected position. The range of each virtual scan measurement is obtained by finding the minimum distance to an intersection between a map line and a line cast from the estimated position in the scan direction (Fig. 8).

Fig. 8. An example of the virtual scan.

The virtual scan is then matched against the current measurement. The global matching is done every time the reference scan is changed in the scan-matching algorithm. If the correlation (number of corresponding pairs) between the virtual scan and the measurement is above a threshold, the position is corrected. If not, the algorithm continues using dead reckoning, but the search area is increased for the next round. This has two positive effects: if the measurement is noisy, the algorithm does not try to correct the position, and it allows going outside the map. In the PeLoTe project it is expected that some kind of a priori knowledge of the building exists. However, most probably the a priori map is somehow incorrect. Fig. 9 shows a result of using the algorithm with a partial map. Most of the time the localisation is done within the map (in the corridors), but at the bottom there are two rooms that do not belong to the map, and on the left there are two small rooms that also do not belong to the map.

The algorithm does not support global localisation, and therefore the initial position should be known. Also, if the position error grows so large that there is not enough overlap between the virtual scan and the measurement, the algorithm cannot recover. This could happen e.g. if too long a distance is travelled outside the map.

Fig. 9. Localisation with a partial map.
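The virtual-scan generation described in Section 4.2 amounts to casting one ray per scan direction from the expected position and keeping the nearest map-line intersection. A minimal sketch, with our own naming and parameters (the 180-degree field of view and the 8 m maximum range are illustrative assumptions, not values from the paper):

```python
import math

def ray_segment_range(ox, oy, angle, seg):
    """Distance from (ox, oy) along `angle` to the line segment seg,
    or None if the ray misses it."""
    (x1, y1), (x2, y2) = seg
    dx, dy = math.cos(angle), math.sin(angle)
    ex, ey = x2 - x1, y2 - y1
    den = dx * ey - dy * ex
    if abs(den) < 1e-12:                            # ray parallel to segment
        return None
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / den     # range along the ray
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / den     # position on the segment
    return t if t >= 0.0 and 0.0 <= u <= 1.0 else None

def virtual_scan(pose, line_map, fov_deg=180.0, n_beams=181, max_range=8.0):
    """Build the virtual reference scan: nearest map-line intersection
    per beam, clipped to the sensor's maximum range."""
    x, y, heading = pose
    ranges = []
    for i in range(n_beams):
        a = heading + math.radians(-fov_deg / 2 + i * fov_deg / (n_beams - 1))
        hits = [max_range]
        for seg in line_map:
            r = ray_segment_range(x, y, a, seg)
            if r is not None:
                hits.append(r)
        ranges.append(min(hits))
    return ranges
```

The resulting range list can be converted to points and fed to the same correlation matcher as the real scans; beams that leave a partial map simply saturate at the maximum range.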


5. CONCLUSIONS AND FUTURE WORK

The localisation of rescue personnel can give valuable information both to the personnel and to the mission coordinator. If an environmental perception sensor such as a laser range finder is used, a map is obtained as a by-product. Localisation and mapping has been a hot topic in robotics; this paper focused on transferring these methods to human localisation and mapping. The following methods were presented and tested:
- A correlation-based method for scan matching.
- A mapping method that tolerates floor (and ceiling) reflections.
- A localisation method (derived from scan matching) that uses line-based maps.

Based on this work and the results, it seems that a human can be localised with a laser with reasonable accuracy. Future work concentrates on improving the reliability of the presented methods, with the aim of making them more tolerant of a partial map or no map at all. This basically means solving the SLAM problem when outside the map. The aim is to build a centralised system which fuses the local maps received from multiple entities into a global map. The work presented in (Gutmann and Konolige, 2000) seems to be a very promising starting point for this purpose.

6. ACKNOWLEDGEMENTS

The work of Jari Saarinen has been supported by the MIRACLE project (grant of the European Commission no. ICA1-CT-2000-70002: MIRACLE, in the context of the "Centres of Excellence" scheme) and the Finnish Academy of Sciences. The work of Roman Mazl has been supported by the Ministry of Education of the Czech Republic within the frame of the project "Decision Making and Control for Manufacturing", number MSM 212300013. The support of the Ministry of Education of the Czech Republic to Miroslav Kulich, under project no. LN00B096, is also gratefully acknowledged. All the work has also been supported within the IST-2001-FET framework under project no. 38873 "PeLoTe". All support is gratefully acknowledged.

REFERENCES

Bailey, T. and Nebot, E. (2001). Localisation in large-scale environments. Robotics and Autonomous Systems, vol. 37(4), pages 261-281.
Dissanayake, M. W. M. G., Newman, P., Durrant-Whyte, H. F., Clark, S. and Csorba, M. (2001). A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, 17(3), 229-241.
Elfes, A. (1990). Occupancy grids: A stochastic spatial representation for active robot perception. In Proceedings of the Sixth Conference on Uncertainty in AI. Morgan Kaufmann Publishers, San Mateo, CA.
Gutmann, J.S. and Schlegel, C. (1996). AMOS: Comparison of scan matching approaches for self-localization in indoor environments. In Proc. of the 1st Euromicro Workshop on Advanced Mobile Robots (EUROBOT 96), IEEE Computer Society Press, pp. 61-67.
Gutmann, J.S. and Konolige, K. (2000). Incremental mapping of large cyclic environments. In Proc. IEEE International Symposium on Computational Intelligence in Robotics and Automation, pages 318-325.
Konolige, K. and Chou, K. (1999). Markov localisation using correlation. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 99), Stockholm.
Kulich, M., Mázl, R. and Preucil, L. (2001). Self-localization methods for autonomous mobile robots based on range scan-matching. In Proceedings of Field and Service Robotics, FSA - Finnish Society of Automation, Helsinki, vol. 1, pp. 373-378.
Lu, F. and Milios, E. (1994). Robot pose estimation in unknown environments by matching 2D range scans. In IEEE Computer Vision and Pattern Recognition Conference (CVPR), pp. 935-938.
Mázl, R. and Preucil, L. (2000). Building a 2D environment map from laser range-finder data. In Proceedings of the IEEE Intelligent Vehicles Symposium 2000, Dearborn, Michigan, USA, pp. 290-295.
Röfer, T. (2002). Using histogram correlation to create consistent laser scan maps. In Proceedings of the IEEE International Conference on Robotics Systems (IROS 2002), EPFL, Lausanne, Switzerland, pp. 625-630.
Saarinen, J., Mazl, R., Ernest, P., Suomela, J. and Preucil, L. (2004). Sensors and methods for human dead reckoning. In The 8th Conference on Intelligent Autonomous Systems, March 10-14, 2004.
Schultz, A. and Adams, W. (1998). Continuous localization using evidence grids. In Proceedings of the IEEE International Conference on Robotics and Automation, Leuven, Belgium, May 1998, pp. 2833-2839.
Thrun, S., Fox, D., Burgard, W. and Dellaert, F. (2000). Robust Monte Carlo localization for mobile robots. Artificial Intelligence, Volume 128, 1-2, pages 99-141.