SENSOR COOPERATION AND WORLD PERCEPTION OF THE AUTONOMOUS MOBILE ROBOT ROMANE

Copyright © IFAC Intelligent Autonomous Vehicles, Madrid, Spain, 1998

Alex LALLEMENT, Michel DUFAUT and René HUSSON

Centre de Recherche en Automatique de Nancy, Institut National Polytechnique de Lorraine, 2 avenue de la Forêt de Haye, 54516 Vandœuvre-lès-Nancy, FRANCE

Abstract: This paper presents a method for building a map of an autonomous mobile robot's environment. It is based on an original « focus of attention » cooperation scheme using a laser rangefinder as the main source of information and a video camera mounted on a pan-tilt unit to perform close inspection of selected objects in the scene, the selection of such objects being performed by analysing data from the laser rangefinder. This cooperation process provides a description of the robot's environment rich enough to support more complex tasks such as localisation. Copyright © 1998 IFAC

Keywords: multisensor integration, autonomous mobile robots, image processing, range data, scene analysis

1. INTRODUCTION

In this paper, an approach to fusing information from complementary sensors in order to perform robust modelling of the environment is presented. The mobile robot ROMANE (RObot Mobile Autonome pour la Navigation en Extérieur) was used as an experimental platform. This work is part of a larger project whose goal is to make robot localisation and navigation safer and more reliable. The intended application is the autonomous transport of physically handicapped persons under safety constraints for the passenger, the robot and the environment. Mobile robot localisation or navigation using natural landmarks requires highly reliable features obtained by the external sensors of the robot in order to build a robust local representation of the environment. ROMANE's perception system is based on the association of two sensors: a 2D laser rangefinder and a camera. The laser rangefinder can quickly provide range and angular measurements of the scene, revealing occlusion phenomena, but since the scan performed is parallel to the floor, it cannot detect geometrical landmarks with sufficient precision to discriminate and recognise them. On the other hand, monocular vision can handle many different kinds of geometrical landmarks but provides no direct distance measurement and has a low processing speed if the whole image is considered. One way to achieve robust feature detection is the use of redundancy, that is, combining environmental information obtained by the laser rangefinder and the camera (Weckesser and Dillmann, 1996; Delahoche, et al., 1996). Such an approach provides more reliable and accurate information about what the sensory system of the robot really observes than the use of one sensor alone. The credibility of the observed features is thereby enhanced, since they have been observed by two different sensors.

Robust modelling of the environment of ROMANE is performed using high-level features obtained by both a laser rangefinder and a CCD camera. Rather than fusing (Castellanos, et al., 1996) or associating (Kortenkamp and Weymouth, 1994) sensory information, the laser rangefinder and vision cooperation consists in validating and enriching the laser information with vision information. An active vision process is thus proposed in order to associate these complementary sensors: potential landmarks are first detected in the laser rangefinder signal, and the vision system is then directed so as to identify the landmarks. The technique relies on the detection of corner primitives extracted from the laser signal, which are matched with straight vertical segments obtained by a « focus of attention » image process. The aim of such an approach is to obtain the most robust detection of characteristic elements of the environment, allowing the reliable generation of a local map for localisation and navigation purposes.

This paper is structured as follows: Section 2 begins with a brief overview of the sensors used and describes the cooperation process. Section 3 then discusses the world modelling stages in detail. Section 4 describes the experimental results. The conclusion and future research are presented in Section 5.

2. SENSOR COOPERATION

The experiments described in this paper were carried out on the laboratory's experimental mobile robot ROMANE, shown in Figure 1.

Fig. 1. The mobile robot ROMANE.

ROMANE is equipped with several sensors: a camera on a pan-tilt unit, a 2D laser rangefinder on a tilt unit, odometers, a magnetic compass, a gyroscope, two inclinometers and a GPS receiver.

2.1 Description of the sensors used

The perception system used for this study is composed of two exteroceptive sensors:
- a monocular vision system (B&W PULNIX 765 camera, CANON 10-100 mm zoom) mounted on a pan-tilt unit on top of ROMANE. The focal length, focus and shutter speed can be controlled externally. The tilt angle ranges from -45° to +60°, whereas the pan angle ranges from -160° to +160°; both can be set electrically in steps of 0.1°. Camera calibration makes it possible to compute, for different focus and focal settings, the mean position of the camera's optical centre in the image plane, in pixels, and the horizontal and vertical angular resolutions for each focal setting, in degrees per pixel;
- a commercially available 2D time-of-flight laser rangefinder (SICK PLS 101) mounted on the front of ROMANE. The laser rangefinder provides planar 180° scans at a rate of 25 Hz with an angular resolution of 0.5°. The power output of the laser enables distances of up to 50 m to be measured with an accuracy of 7 cm.

The two sensors and their parameters can be controlled independently and driven by software. The camera unit can be aimed at an object by computing the object's location in the laser reference frame and bringing the image of the object into the centre of the camera's field of view.

The reference frame attached to the laser rangefinder is selected as the base reference for expressing all the available features. The relative transformation between the camera reference frame and the laser reference frame has been obtained by calibration. Figure 2 depicts the three reference frames.

Fig. 2. Reference frames of the mobile robot (the laser reference frame spans 0° to +180°).
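As an illustration of how these frames are used, the following sketch (ours, not from the paper; the calibration values are placeholders and the function name is hypothetical) expresses a point measured in the laser reference frame in the camera reference frame through a calibrated 2D rigid transformation.

```python
import numpy as np

# Placeholder calibration: pose of the camera frame in the laser frame
# (hypothetical values standing in for the calibrated transformation).
CAM_IN_LASER_T = np.array([0.10, 0.35])    # translation, metres
CAM_IN_LASER_YAW = np.deg2rad(0.0)         # relative heading, radians

def laser_to_camera(p_laser: np.ndarray) -> np.ndarray:
    """Express a 2D point given in the laser frame in the camera frame."""
    c, s = np.cos(CAM_IN_LASER_YAW), np.sin(CAM_IN_LASER_YAW)
    R = np.array([[c, -s], [s, c]])          # camera frame -> laser frame
    return R.T @ (p_laser - CAM_IN_LASER_T)  # invert the rigid transform

# Example: a feature at range D = 4 m and bearing 30 deg in the laser frame.
D, theta = 4.0, np.deg2rad(30.0)
print(laser_to_camera(np.array([D * np.cos(theta), D * np.sin(theta)])))
```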

2.2 « Focus of attention » approach

The « focus of attention » approach is based on using one of the sensors (the laser rangefinder) as the main source of information and the other (the video camera) to perform close inspection of selected objects in the scene, the selection of such objects being performed by analysing data from the laser rangefinder. This process focuses attention on a local decision: one sensor is brought into service alongside the other to strengthen a piece of local information, validate a choice and improve precision.

ROMANE's sensory system thus combines two complementary kinds of information: on the one hand, the straight-line segments coming from the rangefinder measurements of the vertical planes and, on the other hand, the junctions between these planes, which form vertical edges detectable by an active vision system.

An overview of the « focus of attention » process can be sketched as follows. First, segmentation of the laser rangefinder measurements (i.e. 2D points) provides a set of laser segment primitives representing the structure of the environment of the mobile robot. Further processing of this set of segments produces higher-level geometric information, namely laser corner primitives, which correspond to the intersections of two consecutive segments.

The vision system is then used to validate and identify these corner primitives more precisely, as they may match known vertical edges of the environment recorded in a global map. The pan angle of the camera unit and the focus and focal length of the camera are set according to the position [D, θ] of the corner primitive in the laser reference frame. A window of interest of width W0 is then defined in the image (cf. Fig. 3).

Fig. 3. Sensor cooperation: the corner primitive at [D, θ] in the laser reference frame and the corresponding window of interest (-W0 to +W0) in the camera reference frame.

Specialised image operators are then used in the window of interest in order to detect the vertical edges present. This considerably increases the processing speed, as only a narrow window rather than the whole image is processed. The laser corner primitive is validated as a vertical edge if the length of the corresponding vertical segment exceeds a fixed threshold. After the validation step, the next step consists in searching for supplementary visual cues (door or window frames, ...) well suited to identifying the corner primitive more precisely. This is done by moving the window of interest to the right or the left of the vertical edge. The excellent angular resolution of the camera makes it possible to reduce the location uncertainty of the corner primitive in the laser reference frame. This sensor cooperation process makes it possible to:
- decrease the probability of taking noisy primitives into account during the localisation stage,
- ease the matching of local primitives with global reference landmarks,
- decrease the number of primitives needed to compute the localisation,
- increase the localisation resolution of the primitives.
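Putting the stages above together, the cooperation loop can be summarised by the following minimal sketch. It is our reading of the process, not the authors' code: `inspect_with_camera` is a placeholder for the whole vision stage, its behaviour and all names are assumptions, and the 1.5 m validation threshold is the one given in Section 3.2.

```python
from dataclasses import dataclass

@dataclass
class Corner:
    D: float            # range of the junction point in the laser frame (m)
    theta: float        # bearing of the junction point (deg)
    robustness: float   # criterion of the constitutive segments (Sect. 3.1)
    label: str = "unknown"

def inspect_with_camera(corner: Corner) -> float:
    """Placeholder for the vision stage: aim the pan-tilt unit at [D, theta],
    then return the length (m) of the vertical segment found in the window
    of interest (0.0 if none)."""
    return 2.0 if corner.robustness > 0.5 else 0.3

def focus_of_attention(corners, length_threshold=1.5):
    """Test the most robust corner primitives first (cf. Sect. 3.1)."""
    local_map = []
    for c in sorted(corners, key=lambda c: c.robustness, reverse=True):
        edge_length = inspect_with_camera(c)
        c.label = "landmark" if edge_length > length_threshold else "obstacle"
        local_map.append(c)
    return local_map

print(focus_of_attention([Corner(4.0, 30.0, 0.9), Corner(2.5, -60.0, 0.2)]))
```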

3. MODELLING OF THE ENVIRONMENT

The working area considered for ROMANE is a typical structured indoor environment composed of walls, corners, doors, etc. This environment is described by a global map made up of three different kinds of geometrical landmarks, which can be observed reliably all along the path of the robot:
- landmarks detectable by vision only: vertical lines such as door or window frames, heaters, panels, ...
- landmarks detectable by the laser rangefinder only: walls, ...
- landmarks detectable by both sensors: particular corners of walls.

Dynamic obstacles not recorded in the global map may be present in the working space of the robot. They can also move along unpredictable trajectories.

Generation of the local map is based on the extraction of segment and corner primitives from the laser scan data. The most reliable corner primitives are then analysed by the vision system and labelled as vertical edges or obstacles. This information is used to build a local map of the environment.

3.1 Processing of laser rangefinder information

The data provided by the laser sensor are 2-dimensional (scanning plane). From the scanner's polar representation of the measurements, a Cartesian representation is computed. The next step is the extraction of linear edge segments from the laser scan signal. This is done with the recursive Iterative End-Point Fit (IEPF) algorithm (Siadat, et al., 1997), which determines the points belonging to a line segment. A line defined by the first and the last point of the scan is computed. Then the point with the maximum distance to this line is detected. This point divides the scan into two intervals, and the algorithm recurses until the maximum distance is smaller than a given threshold. This distance criterion sets the accuracy of the segmentation and therefore the number of segments computed. Once the points belonging to a line segment are determined, a least-squares approach is used to obtain a straight line. The first point P1 and the last point P2 of the line segment are projected onto this line, and the projections P1' and P2' represent the new start and end points of the segment. This set of segments forms a segmented laser scan, as shown in Figure 4.


Fig. 4. Segmented laser scan (IEPF segmentation threshold: 10 cm).
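The split step and the least-squares refit described above can be sketched as follows. This is a minimal illustration of the IEPF idea under our own conventions (the names, the SVD-based line fit and the synthetic scan are ours), not the implementation of Siadat, et al. (1997).

```python
import numpy as np

def iepf(points: np.ndarray, threshold: float = 0.10):
    """Iterative End-Point Fit: recursively split a chain of 2D points
    wherever the deviation from the end-point chord exceeds `threshold`."""
    p1, p2 = points[0], points[-1]
    chord = p2 - p1
    # Perpendicular distance of every point to the line through p1 and p2.
    d = np.abs(chord[0] * (points[:, 1] - p1[1])
               - chord[1] * (points[:, 0] - p1[0])) / np.linalg.norm(chord)
    i = int(np.argmax(d))
    if d[i] <= threshold or len(points) <= 2:
        return [points]                      # no further split needed
    return iepf(points[: i + 1], threshold) + iepf(points[i:], threshold)

def fit_segment(points: np.ndarray):
    """Least-squares line through the points; project the first and last
    points (P1, P2) onto it to obtain the new start and end points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)   # principal direction
    u = vt[0]
    project = lambda p: centroid + np.dot(p - centroid, u) * u
    return project(points[0]), project(points[-1])

# Synthetic 180 deg scan at 0.5 deg resolution, converted polar -> Cartesian.
angles = np.deg2rad(np.arange(0.0, 180.0, 0.5))
ranges = 1.5 / np.maximum(np.abs(np.sin(angles)), 0.05)  # a flat wall, clipped
scan = np.column_stack([ranges * np.cos(angles), ranges * np.sin(angles)])
segments = [fit_segment(chunk) for chunk in iepf(scan)]
```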

Next, a local map of the environment is built from two geometrical primitives, namely segment and corner primitives (cf. Fig. 5).

For each segment primitive, the following are known:
- the number of constitutive laser points that make up the segment,
- the start and end points of the segment.

Complementary information is then computed:
- a virtual segment mark for segments characterised by few constitutive points (such segments generally correspond to an occlusion phenomenon),
- a robustness criterion linked to each segment, which depends on the length, the number of laser points and the orientation of the segment (see the sketch below).
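The paper does not give the exact form of the robustness criterion, so the following scoring function is purely our assumption, combining the three factors named above.

```python
import numpy as np

def robustness(length_m: float, n_points: int, orientation_rad: float,
               min_points: int = 5) -> float:
    """Hypothetical robustness score in [0, 1] for a segment primitive.
    The weighting and saturation constants are assumptions; the paper
    states only that the criterion depends on length, number of laser
    points and orientation of the segment."""
    if n_points < min_points:
        return 0.0      # virtual segment mark: likely an occlusion artefact
    # Assumption: segments seen face-on (orientation near perpendicular to
    # the line of sight) are sampled more densely and more trustworthy.
    facing = abs(np.cos(orientation_rad))
    return min(length_m / 2.0, 1.0) * min(n_points / 50.0, 1.0) * facing
```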

Corner primitives formed by two successive segments are then generated. Each corner primitive is defined by:
- the two laser segment primitives composing the corner,
- the coordinates [D, θ] of the junction point of the two segment primitives,
- the angle Ω formed by the two segments.
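A sketch of this generation step follows; it assumes, as our own convention, that consecutive IEPF segments share an endpoint, and the dictionary layout and names are ours.

```python
import numpy as np

def corner_primitives(segments):
    """Build corner primitives from pairs of consecutive segment primitives.
    Each segment is a pair of 2D endpoints (start, end) in the laser frame."""
    corners = []
    for (a1, a2), (b1, b2) in zip(segments, segments[1:]):
        junction = 0.5 * (np.asarray(a2) + np.asarray(b1))  # shared endpoint
        u = np.asarray(a1) - junction          # direction back along S1
        v = np.asarray(b2) - junction          # direction along S2
        cos_omega = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        omega = np.degrees(np.arccos(np.clip(cos_omega, -1.0, 1.0)))
        D = np.linalg.norm(junction)                              # range
        theta = np.degrees(np.arctan2(junction[1], junction[0]))  # bearing
        corners.append({"segments": ((a1, a2), (b1, b2)),
                        "junction": (D, theta),                   # [D, θ]
                        "omega": omega})                          # angle Ω
    return corners
```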

Fig. 5. Segment and corner primitives. A segment primitive S is characterised by its start and end points (P1, P2), the number of its constitutive points, a virtual segment mark and a robustness criterion; a corner primitive is characterised by its two constitutive segments (S1, S2), the junction point coordinates [D, θ] and the angle Ω.

Corner primitives formed by segments with a high robustness criterion and characterised by an angle Ω close to 90° or 270° are tested in priority.

3.2 Processing of monocular vision sensor information

Given the location of the tested corner primitive in the laser reference frame (cf. Fig. 6), the pan angle Φ of the camera unit is determined so as to place the tested corner primitive on the vertical optical axis of the camera. The optimal focus and focal length of the camera are then computed.
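A sketch of this aiming computation follows. The 3° window width below is the paper's value (given later in this section); the calibration constants, the assumed zero offset between the laser and pan axes and all names are ours.

```python
DEG_PER_PIXEL = 0.05    # placeholder: angular resolution from calibration
IMAGE_WIDTH = 768       # placeholder sensor width, pixels

def aim_camera(D_m: float, theta_deg: float, pan_offset_deg: float = 0.0):
    """Centre the corner primitive on the camera's vertical optical axis
    and size the window of interest around it."""
    # Assumption: laser bearings run 0..180 deg, pan runs -90..+90 deg.
    pan_deg = theta_deg - 90.0 + pan_offset_deg
    focus_m = D_m                              # focus at the corner's range
    half_width_px = int(round(1.5 / DEG_PER_PIXEL))   # 3 deg window in total
    centre = IMAGE_WIDTH // 2
    return pan_deg, focus_m, (centre - half_width_px, centre + half_width_px)
```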

Fig. 6. Tested corner primitive (a) in the laser reference frame. The junction point of the segment primitives is circled.

A window of interest of narrow width and maximal height is then defined in the image plane so as to be centred on the supposed vertical edge (cf. Fig. 7). The width Λ of the window has been set experimentally to 3° in order to take into account the angular imprecision of the location of the corner in the laser reference frame.

The extraction of vertical segments can be outlined as follows (see the sketch after this list):
1. compute the direction and magnitude of the image intensity gradient at each pixel. This is done with a Canny-Deriche recursive filter (Deriche, 1987) applied to the window of interest. The filter is tuned by a single parameter α, which can be set so as to favour either a good localisation of the edge or a good signal-to-noise ratio. With a recursive implementation, edge extraction requires only 14 additions and 18 multiplications per pixel, whatever the length of the filter;
2. keep only the local gradient maxima whose orientation is vertical within an angular tolerance of ±20°;
3. extract a representative straight line for each chain by a least-squares approach;
4. fuse these segments under constraints of end-point proximity.
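Steps 1 and 2 can be sketched as below. For self-containment, plain finite differences stand in for the Canny-Deriche filter, and the names are ours; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def vertical_edge_votes(window: np.ndarray, tol_deg: float = 20.0):
    """Gradient computation and vertical non-maximum suppression on a
    grayscale window of interest (steps 1-2 of the outline above)."""
    gy, gx = np.gradient(window.astype(float))     # intensity gradient
    magnitude = np.hypot(gx, gy)
    # A vertical edge has a mostly horizontal gradient direction.
    angle = np.degrees(np.arctan2(np.abs(gy), np.abs(gx)))
    is_vertical = angle < tol_deg
    # Keep local maxima of the magnitude along each row.
    left = np.roll(magnitude, 1, axis=1)
    right = np.roll(magnitude, -1, axis=1)
    maxima = (magnitude >= left) & (magnitude >= right) & is_vertical
    # Column-wise support: a long vertical chain gives a tall column count,
    # to which a representative line is then fitted (steps 3-4).
    return maxima.sum(axis=0)
```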

_--'---"r'--

4. EXPERIMENTAL RESULTS The experimental results have been obtained with ROMANE in a typical indoor environment (hall and corridor). An indoor standard local map involving typical vision-laser primitives configurations IS presented (cf. Fig.8) . 300

3.3 Local map strengthening

The position, orientation and angle of the corner primitive were obtained by the laser sensor; the integration of the vision edge, however, allows the identification of the corner and the reduction of the uncertainty associated with its location relative to the robot. For each corner primitive validated as a vertical edge, the following are thus known (see the sketch below):
- the accurate location of the landmark in the laser reference frame,
- a robustness criterion linked to the segments composing the corner,
- the angle Ω formed by the corner,
- the vision primitives in the neighbourhood of the edge.
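These attributes suggest a simple record per validated corner; the following layout is our own sketch, not a structure from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ValidatedCorner:
    """One local-map entry produced by the laser-vision cooperation.
    Field names are ours; the attributes are those listed above."""
    D: float                  # accurate range in the laser frame (m)
    theta: float              # bearing in the laser frame (deg)
    robustness: float         # criterion of the constitutive segments
    omega: float              # angle Ω formed by the corner (deg)
    visual_cues: list = field(default_factory=list)   # nearby vision primitives
    segment_labels: tuple = ("landmark", "landmark")  # or "obstacle", per segment
```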

Moreover, each segment of a corner primitive checked by vision can be labelled as an obstacle or a landmark.

This information is then used to build a local map composed of segment and corner primitives associated with landmarks or obstacles of the environment. This map is then used in the localisation and navigation stages.

4. EXPERIMENTAL RESULTS

The experimental results have been obtained with ROMANE in a typical indoor environment (hall and corridor). An indoor standard local map involving typical vision-laser primitive configurations is presented (cf. Fig. 8).

Fig. 8. Tested corner primitives (b) and (c) in the laser reference frame.

Two kinds of situations frequently arise:

- a correct landmark detection situation, validated by both the laser and vision sources. This is the case with corner primitive (b), which corresponds to a bookcase frame (cf. Fig. 9);

Fig. 9. Tested corner primitive (b) in the image plane (vision settings: focal 10 mm, focus 240 cm, rotation -37°). The extracted vertical segments are long enough to validate the corner primitive as a vertical edge.

- incorrect situations rejected by the vision system. When obstacles are present along the robot's path, correct primitives can be masked and false primitives can appear. This is the case with corner primitive (c), which corresponds to a box not recorded in the global map (cf. Fig. 10).

Fig. 10. Tested corner primitive (c) in the image plane (vision settings: focal 10 mm, focus 280 cm, rotation 75°). The extracted vertical segment is shorter than the threshold and the corner primitive is labelled as an obstacle.

The methodology developed above for accepting or rejecting landmarks through laser-vision cooperation is well suited to polygonal indoor environments. The results obtained show that the cooperation of the laser and vision sensors is a powerful means of elaborating solutions which allow a precise, efficient and reliable map-building function to be created.

5. CONCLUSION AND FURTHER WORK

Robust sensing of the environment of a mobile robot is an important task, both for localising the robot and for building a complete and reliable map of the environment. Sensor cooperation allows a robust extraction of the geometric primitives necessary to generate a coherent and reliable sensory model of the environment. The complementarity of the sensors is used to identify and classify characteristic elements of the environment; this classification could not be achieved with either sensor alone. The multisensor cooperation process thus significantly increases the robot's autonomy for indoor applications. Only corner primitives have been checked by the vision system here. Developments and improvements are currently under way to extract, by vision-laser cooperation, new laser areas of interest, such as areas composed of many small segments or of one long segment, in which vision can search for visual cues allowing these areas to be identified.

REFERENCES

Castellanos, J.A., J. Neira, O. Strauss and J.D. Tardos (1996). Detecting high level features for mobile robot localization. Proc. 1996 IEEE/SICE/RSJ Int. Conf. on Multisensor Fusion and Integration for Intelligent Systems, pp. 611-618.

Delahoche, L., C. Pegard, M. Mouaddib and P. Vasseur (1996). Localization based on target classification with a multisensor mobile robot. The Fourth International Conference on Control, Automation, Robotics and Vision, pp. 1111-1115.

Deriche, R. (1987). Using Canny's criteria to derive a recursively implemented optimal edge detector. Int. Journal of Computer Vision, pp. 167-187.

Kortenkamp, D. and T. Weymouth (1994). Topological mapping for mobile robots using a combination of sonar and vision sensing. AAAI-94.

Siadat, A., A. Kaske, S. Klausmann, M. Dufaut and R. Husson (1997). An optimized method for a 2D laser-scanner applied to mobile robot navigation. IFAC Intelligent Components and Instruments for Control Applications, pp. 153-158.

Weckesser, P. and R. Dillmann (1996). Sensor fusion of intensity and laser-range images. Proceedings of 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 501-508.