Copyright © IFAC Intelligent Autonomous Vehicles, Madrid, Spain, 1998

GRID MAP BUILDING FROM REDUCED SONAR DATA *

F. Blanes, G. Benet, M. Martínez, J. Simó

Dep. Ingeniería de Sistemas, Computadores y Automática. Fax: 34 (9) 6387 7579. Universidad Politécnica de Valencia. Cno. Vera 14, E-46022 Valencia. {pblanes,gbenet,mimar,[email protected]

Abstract: In this paper an ultrasonic sensor aimed at map building in autonomous robot applications is presented. By means of a rotary movement, the sensor can be set to up to 200 angular positions. The data processing performed in the sensor module gives more elaborate data than the usual ToF ultrasonic sensor systems, and it is possible to detect all the objects placed in the scenario together with their signal amplitude values. Due to the quality of these data, it is possible to reduce the number of angular readings required to construct the map. As the results indicate, good maps can be obtained using only a very reduced set of readings. Copyright © 1998 IFAC

Keywords: Environment Understanding, Perception systems, Sensor Fusion, Signal Processing, Ultrasonic transducers.

1. INTRODUCTION

In recent years the use of ultrasonic sensors for distance measurement by robots and for the construction of environment maps has become increasingly common. Some reasons why ultrasonic sensors have been chosen over other devices, such as laser or vision, are:

1. Low cost: for some applications the final cost of the robot is important.
2. Low energy consumption: the autonomy of the robot depends on the installed sensors and actuators.
3. Simple signal processing: interesting from the real-time point of view.

On the other hand, there are some factors that make ultrasonic sensors less effective:

1. Poor directionality: the beam of an ultrasonic sensor is wider (30°-45°) than that of other distance-measuring sensors such as lasers.
2. Non-constant sound speed: it depends mainly on the air temperature.
3. Specular reflection of the signal.
4. Low bandwidth: it is difficult to resolve closely spaced objects.

As a result, the measurements obtained by an ultrasonic sensor are uncertain, because the exact position of the object along the far edge of the beam cone cannot be determined. Consequently, the information provided by these sensors is poor and requires confirmation through repeated measurements with the sensor in different locations (robot exploration). The most common device used in robotics is the POLAROID module [Borenstein91, Lee96, Pagac96, Leonard92], which gives a measure of the closest object in front of the sensor within a cone around 30° wide.

In this work a new target detection technique is proposed that uses more of the information contained in the signal, in order to decrease the number of readings needed for map building and the time spent in map construction. This system is under development for integration into the YAIR architecture [Gil97, Simo97]. The robot carries enough sensors to be used in non-structured or unknown environments, and can make indoor displacements with two independent DC motors driving the robot. The use of multiple sensors calls for a meaningful way to combine the information provided by the individual sensors, in a process referred to as sensor integration or sensor fusion. This task is carried out in our software architecture.

* This work has been supported by the projects TAP97-1164-C03-03 of the Spanish Comisión Interministerial de Ciencia y Tecnología (CICYT) and GV-C-CN-05-058-96 under the program Projectes d'Investigació Científica i Desplegament Tecnològic of the Generalitat Valenciana.

2. SONAR SYSTEM

2.1 Ultrasonic sonar module

This is an intelligent module built around the 80C592 chip, a new version of a popular industrial microcontroller that includes a 10-bit A/D converter. The microcontroller is used to: 1. control the angular position of the rotating transducer by means of a stepper motor with 1.8 degrees per step; 2. generate the ultrasonic waveforms to be supplied to the transducer; 3. process the echoes received from surrounding objects.

There are two ultrasonic transducers, one acting as emitter and the other as receiver. They are piezoceramic devices tuned to operate at 40 kHz. The sensor has been prepared to have an effective measuring range from about 15 cm to 4 m. The module emits a short train of ultrasonic pulses in a given direction. The resulting echo is received, digitised and processed in order to obtain an estimate of the robot environment. This process can be repeated for each of the 200 angular positions that the stepper motor can be set to. Using these data, a map of the world surrounding the robot can be built. However, it is not necessary to take all 200 measurements to obtain a good map, as will be demonstrated later.

Depending on the requirements fixed by the main controller, the module can produce data of different quality. These can vary from simple rough detection of objects in one single direction, to more sophisticated and time-consuming measurements such as mapping with information on the instantaneous speed of each point in the scenario, or assessing the relative hardness of objects. However, the module is normally set up to measure distances to objects, using fast algorithms as described in [Benet92].

2.2 Ultrasonic signal processing

When the pair of ultrasonic transducers has been correctly positioned, a short train of sixteen pulses at 40 kHz is applied to the emitter using a full-bridge power driver. This produces a train of ultrasonic waves that propagates through the air at approximately 343 m/s. When a solid object is found, an echo returns to the receiver. The amplitude of this echo depends on the surface properties of the object. The time elapsed between the emission and the reception of an echo is called Time of Flight (ToF), and is used to determine the distance d between the object and the transducers. If the speed of sound is denoted by c, d can be obtained as follows:

d = c \cdot ToF / 2

Accurate measurement of the time taken for the echo to reach the receiver (ToF) is essential for achieving sufficient precision. Unfortunately, this is not an easy task, because of the shape of the signal supplied by the transducer. Most ultrasonic systems employ specific integrated circuits designed to convert the pulsed input signal into an edge that is used to find the ToF. These ICs do not detect the true ToF. Instead, they generate the edge at the signal transition over a fixed threshold, thus losing some initial cycles which do not reach the threshold amplitude. This method is very popular, and modules based on this approach are widely used in robotics; this is the case for the ubiquitous Polaroid-based modules.

Unfortunately, the pulse amplitude is very dependent on the distance and on the surface reflection characteristics, and cannot be easily modelled. Therefore, ToF measurements made with a fixed threshold are affected by a non-constant offset and produce large errors. In practice, the error suffered by these non-compensated ultrasonic systems amounts to some tens of millimetres when using low-cost piezoelectric transducers at frequencies around 40-50 kHz. Although this error may be tolerable for large distances, it is unacceptable for distances under 1 m.

Alternatively, digital signal processing methods can be used to determine d more accurately, but they require high computational loads and large amounts of data to be processed [Parrilla91]. Moreover, the frequency spectrum of the echo is centred around 40 kHz, with a narrow bandwidth of a few kilohertz. This implies sampling frequencies above 100 kHz to digitise the signal directly from the receiver.

Fortunately, it is not necessary to sample the signal at high frequencies to capture all the information present in the echo. Instead, the received signal can first be demodulated, eliminating the 40 kHz carrier and reducing the bandwidth of the signal to be digitised to a few kilohertz. Thus, the required sampling frequency and the total number of digitised points are drastically reduced. To retain all the information present in the echo, coherent demodulation must be used [Mehrdad94]. This means obtaining both the in-phase and the in-quadrature components of the echo envelope. These two components can be digitised at low sampling frequencies and processed digitally.

In our approach, the signal supplied by the receiver is first delivered to a programmable gain amplifier, which has 8 gains: 2, 4, 8, 16, 32, 64, 128 and 256. This range of gains is enough to compensate the attenuation of the signals due to propagation losses. After this amplification, the signal is passed to a coherent demodulator that uses the same 40 kHz clock signal used in the emission of the pulse train. The demodulation also uses a 90-degree shifted clock to obtain the quadrature component. These two components of the demodulated signal are passed through a fifth-order Butterworth filter stage to remove all frequencies above 4 kHz (the effective signal bandwidth). This filtering also serves as an anti-aliasing filter prior to the A/D conversion, which samples these signals at 10 kHz before storing them for digital signal processing. This sampling frequency corresponds to a spatial resolution of 1.71 cm between samples.
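The receiving chain just described can be mimicked numerically. The following Python sketch is only an illustration of the idea, not the module firmware: the simulated pulse, noise level and peak-detection rule are assumptions. It demodulates a synthetic 40 kHz echo into in-phase and quadrature components, low-pass filters them at 4 kHz, resamples the envelope at 10 kHz and derives a rough distance estimate from the envelope peak via d = c·ToF/2.

```python
# Illustrative sketch only (not the sensor firmware): coherent I/Q demodulation of a
# simulated 40 kHz echo, fifth-order Butterworth low-pass at 4 kHz, decimation of the
# envelope to 10 kHz, and a crude distance estimate from the envelope peak.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS_SIM = 1_000_000.0    # simulation rate standing in for the analog front end, Hz
F_CARRIER = 40_000.0    # transducer frequency, Hz
FS_OUT = 10_000.0       # envelope sampling rate, Hz (as in the paper)
C = 343.0               # speed of sound, m/s

def simulate_echo(distance_m, duration_s=0.025):
    """Sixteen-cycle 40 kHz burst returning from a single reflector, plus noise."""
    t = np.arange(0.0, duration_s, 1.0 / FS_SIM)
    tof = 2.0 * distance_m / C
    burst = (t >= tof) & (t < tof + 16.0 / F_CARRIER)
    echo = 0.3 * burst * np.sin(2.0 * np.pi * F_CARRIER * (t - tof))
    return echo + 0.01 * np.random.randn(t.size)

def coherent_demodulate(echo):
    """Mix with in-phase and quadrature 40 kHz references, low-pass, decimate."""
    t = np.arange(echo.size) / FS_SIM
    sos = butter(5, 4_000.0, fs=FS_SIM, output="sos")   # 5th-order Butterworth, 4 kHz
    step = int(FS_SIM / FS_OUT)
    i_env = sosfiltfilt(sos, echo * np.cos(2.0 * np.pi * F_CARRIER * t))[::step]
    q_env = sosfiltfilt(sos, echo * np.sin(2.0 * np.pi * F_CARRIER * t))[::step]
    return i_env, q_env

i_env, q_env = coherent_demodulate(simulate_echo(distance_m=1.0))
envelope = np.hypot(i_env, q_env)                       # samples 1.71 cm apart
tof_est = np.argmax(envelope) / FS_OUT
print("estimated distance: %.2f m" % (C * tof_est / 2.0))
```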

3. SCENARIO RECONSTRUCTION

3.1 The model

The solution to the scenario reconstruction from the sensor readings is based on an inverse formulation. The target model in this inverse problem is used to identify the properties (distance) of the objects. The model used in this work assumes that the target region is homogeneous (air) and is composed of M non-sized reflector objects. Each object has its own reflection coefficient f_n, which depends on its physical properties, and its coordinates x_n in the spatial domain. The model is shown in figure 1.

Fig. 1. Target model: a transmitter/receiver at x = 0 emits p(t) towards point targets with reflectivities f_1 ... f_4 at coordinates x_1 ... x_4. This model is adequate for our work because every object can be assumed to be a combination of infinite point reflectors on its surface.

3.2 Problem formulation

In order to simplify the following results, it is assumed that the signal has no propagation losses. The transmitter, located at x = 0, sends a time-dependent signal p(t). This is a short-duration signal (pulse) with a finite extent in the time domain. The signal reaches the first target at t_1 = x_1 / c (where c is the speed of wave propagation). The time-dependent signal that arrives there is p(t - t_1), and the echo is f_1 p(t - t_1), which reaches the receiver after a further time t_1. The receiver therefore records the signal s_1 = f_1 p(t - 2t_1), while the remaining signal, p(t - t_1) - f_1 p(t - t_1) = (1 - f_1) p(t - t_1), travels on towards the second object. Assuming that |f_n| << 1, then (1 - f_1) p(t - t_1) ≈ p(t - t_1), and it arrives at the second target at t_2 = x_2 / c. As for the previous target, the echo can be expressed as s_2 = f_2 p(t - 2t_2). Finally, the total echo signal that arrives at the receiver is

s(t) = \sum_{n=1}^{M} s_n(t) = \sum_{n=1}^{M} f_n \, p(t - 2 t_n)     (1)
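As a concrete illustration of this forward model, the short Python sketch below builds s(t) for a pair of hypothetical point reflectors by summing scaled, delayed copies of an assumed pulse envelope p(t), exactly as eq. (1) prescribes. The pulse shape and the reflector set are illustrative assumptions, not measured data.

```python
# Minimal sketch of the forward model in eq. (1): the received signal is a sum of
# scaled, delayed copies of the emitted pulse p(t), one per point reflector.
import numpy as np

C = 343.0       # speed of sound, m/s
FS = 10e3       # envelope sampling rate, Hz (1.71 cm per sample, as in the paper)

def emitted_pulse(t):
    """Assumed envelope of the emitted burst: a short raised-cosine pulse."""
    width = 0.4e-3
    return np.where((t >= 0) & (t < width),
                    0.5 - 0.5 * np.cos(2 * np.pi * t / width), 0.0)

def received_signal(reflectors, duration=0.025):
    """Eq. (1): s(t) = sum_n f_n * p(t - 2*t_n), with t_n = x_n / c."""
    t = np.arange(0, duration, 1.0 / FS)
    s = np.zeros_like(t)
    for x_n, f_n in reflectors:
        s += f_n * emitted_pulse(t - 2.0 * x_n / C)
    return t, s

# Two hypothetical point reflectors at 0.8 m and 2.1 m with different reflectivities.
t, s = received_signal([(0.8, 0.9), (2.1, 0.4)])
print("echo energy arrives near samples:", np.flatnonzero(s > 0.3 * s.max())[:5])
```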

3.3 Inversion: the deconvolution problem

The last equation can be expressed as

s(t) = p(t) * f_0(x)     (2)

that is, the signal received is the time-domain convolution of the emitted signal p(t) with the spatial-domain signal f_0(x), expressed as a sum of delta functions at the coordinates of the targets:

f_0(x) = \sum_{n=1}^{M} f_n \, \delta(x - x_n)     (3)

where x = c t / 2 is a linear transformation from t to x. The problem can then be solved by retrieving f_0(x) from s(t); this involves the deconvolution of the model in eq. (2). This deconvolution is easily obtained using the Fourier transform. The transform of eq. (2) gives

S(\omega) = P(\omega) \cdot F_0(k_x)     (4)

where k_x = 2\omega / c is a linear function of \omega. Thus one can obtain

F_0(k_x) = \frac{S(\omega)}{P(\omega)}     (5)

This last equation is valid only if P(\omega) \neq 0 for all \omega. This is not always true, because the noise sometimes invalidates this restriction. An alternative is to use an inverse filter C(\omega) instead of 1 / P(\omega), so as to limit the deconvolution bandwidth to those frequencies where a relatively high SNR is expected. Several authors have proposed methods to obtain this inverse filter. In the present work, the one proposed in [Anaya92] is used. This filter is expressed as

C(\omega) = \frac{1 - \left(1 - k \left| P_n(\omega) \right|^2 \right)^m}{P(\omega)}     (6)

where P_n(\omega) denotes P(\omega) normalised to its maximum, and the parameters k and m are obtained after an optimisation process. Using this filter in eq. (5) results in

\hat{F}_0(k_x) = C(\omega) \cdot S(\omega)     (7)

After the inverse Fourier transformation of \hat{F}_0(k_x), the estimate of the scenario, \hat{f}_0(x), is obtained, as desired. The above method is simple and suitable for implementation in real-time systems.
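A compact numerical sketch of this inversion is given below. It is only an illustration of eqs. (2)-(7) under stated assumptions (a synthetic pulse, hand-picked k and m rather than the optimised values), not the authors' real-time implementation: the echo is deconvolved in the frequency domain with the band-limiting inverse filter C(ω), and the result is read as reflectivity versus range.

```python
# Sketch of the frequency-domain inversion of eqs. (2)-(7). The pulse, the echo and
# the filter parameters k, m are illustrative assumptions; the paper obtains k and m
# through an optimisation process that is not reproduced here.
import numpy as np

C_SOUND = 343.0
FS = 10e3                                # envelope sampling rate, Hz

def inverse_filter(p, k=0.9, m=8):
    """Band-limiting inverse filter C(w) of eq. (6), built from the pulse spectrum."""
    P = np.fft.rfft(p)
    Pn = np.abs(P) / np.abs(P).max()     # P normalised to its maximum
    eps = 1e-12                          # avoid division by exact spectral zeros
    return (1.0 - (1.0 - k * Pn**2) ** m) / (P + eps)

def deconvolve(s, p):
    """Eq. (7): F0_hat = C(w) * S(w), then back to the spatial domain."""
    S = np.fft.rfft(s)
    f0_hat = np.fft.irfft(inverse_filter(p) * S, n=s.size)
    x = np.arange(s.size) * C_SOUND / (2 * FS)   # x = c*t/2, 1.71 cm per sample
    return x, f0_hat

# Toy example: pulse p and echo s built as in the forward-model sketch above.
t = np.arange(0, 0.025, 1.0 / FS)
p = np.where(t < 0.4e-3, 0.5 - 0.5 * np.cos(2 * np.pi * t / 0.4e-3), 0.0)
s = (0.9 * np.roll(p, int(2 * 0.8 / C_SOUND * FS))
     + 0.4 * np.roll(p, int(2 * 2.1 / C_SOUND * FS)))
x, f0_hat = deconvolve(s, p)
print("strongest reflector near %.2f m" % x[np.argmax(f0_hat)])
```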

4. GRID MAP CONSTRUCTION

4.1 Lateral resolution: number of angular readings

The sensor is capable of taking up to 200 readings at different angular positions, which means that the angular resolution is 1.8°. A complete scan at this resolution implies applying the above-described inverse filtering algorithm to each of the 200 readings, causing appreciable delays in the map construction.

However, it is not necessary to take all the readings to obtain good estimates of the environment. The transducer head used in the prototype shows the angular response plotted in figure 2. As can be seen, the main lobe has a width of approximately ±15°. This indicates that it is not necessary to take samples every 1.8°; angular spacings of 4, 5 or more steps between samples seem to be enough to reconstruct the original shape by means of interpolation. Due to the nature of the signal and its subsequent processing, it is difficult to establish a theoretical optimum angular spacing to reconstruct the real shape. Therefore, the optimal number of readings per scan must be obtained by comparing the results obtained after the fusion process.

Fig. 2. Normalised angular response of the transducer head of the sonar. Angles are in degrees (approximately -50° to 50°; amplitude normalised to 1).

Using the above reasoning, the complete scan of the scenario can be done by taking only a reduced number of angular readings. After the inversion algorithm has been applied to this reduced set of data, and before applying the fusion process, an angular interpolation must be made to reconstruct up to 200 angular values, as sketched below. This is necessary to obtain enough resolution in the grid data obtained after the fusion process, as will be described in the next section.
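The sketch below illustrates this angular up-sampling step: deconvolved profiles taken at a reduced set of angular positions are interpolated back onto the full 200-position grid before fusion. The use of linear interpolation over a circular angle index is an assumption; the paper does not specify the interpolation method.

```python
# Sketch of the angular interpolation described above (assumed linear, wrap-around).
import numpy as np

N_ANGLES = 200                           # full angular resolution (1.8 deg per step)

def interpolate_scan(reduced_scan, step):
    """reduced_scan: array (N_ANGLES // step, n_range) of deconvolved profiles
    measured every `step` motor steps. Returns an (N_ANGLES, n_range) array."""
    measured_idx = np.arange(0, N_ANGLES, step)
    full_idx = np.arange(N_ANGLES)
    n_range = reduced_scan.shape[1]
    full_scan = np.empty((N_ANGLES, n_range))
    for r in range(n_range):             # interpolate each range bin over angle
        full_scan[:, r] = np.interp(full_idx, measured_idx, reduced_scan[:, r],
                                    period=N_ANGLES)    # wrap around 360 degrees
    return full_scan

# Example: 50 readings per scan (every 4th step), 250 range samples each.
reduced = np.random.rand(50, 250)
full = interpolate_scan(reduced, step=4)
print(full.shape)                        # (200, 250)
```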

4.2 Sensor fusion algorithm

Before being used for map generation, the signal is passed through an exponential filter that compensates for the signal power losses. This filter gives a homogeneous view of the environment and allows the fusion process to be applied coherently.

Many algorithms are described in the bibliography, most of them based on a probabilistic Bayesian approach [Moravec88, Abidi92]. These algorithms are especially useful when the angular data readings are reduced (8 to 16 angular positions, typically). However, in our case, after the angular interpolation, up to 200 angular readings are available in each scan for the fusion process. This availability of data enables the use of a simpler data fusion algorithm. In fact, each scan consists of 200 angular readings spaced 1.8°, and each of these readings represents the values of f_0(x), spaced approximately 1.7 cm. Each scan has been taken from a different location in the scenario, and the origin of the angles has been kept common to all scans.

The reconstructed map of the scenario is based on a rectangular grid with cells of 2x2 cm. The fusion algorithm is based on accumulating, for each cell, the values of f_0 obtained in the inversion process. In this way, occupied cells that have been observed from more than one location increase their value in comparison with empty cells that have not been observed. This fusion algorithm is simple, but it gives good results because the quality of the input data is high; a further advantage of the method is its low computational cost. A minimal sketch of the accumulation step is given below.
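The following Python sketch shows one way this accumulation could be organised. The grid size, scan origins and the ray-casting discretisation are illustrative assumptions; the paper only states that f_0 values are accumulated into 2x2 cm cells.

```python
# Minimal sketch of the grid fusion step: accumulate deconvolved f0 profiles,
# ray by ray, into a 2x2 cm occupancy grid shared by all scan locations.
import numpy as np

CELL = 0.02                              # 2 x 2 cm cells
GRID_SHAPE = (250, 250)                  # 5 m x 5 m map (assumption)
N_ANGLES = 200
RANGE_STEP = 0.0171                      # 1.71 cm between range samples

def fuse_scan(grid, scan, origin_xy):
    """Accumulate one scan into the grid.
    scan: (N_ANGLES, n_range) array of f0_hat values; origin_xy: sensor position (m)."""
    angles = np.arange(N_ANGLES) * 2 * np.pi / N_ANGLES      # common angular origin
    ranges = np.arange(scan.shape[1]) * RANGE_STEP
    for a, profile in zip(angles, scan):
        xs = origin_xy[0] + ranges * np.cos(a)
        ys = origin_xy[1] + ranges * np.sin(a)
        ix = (xs / CELL).astype(int)
        iy = (ys / CELL).astype(int)
        ok = (ix >= 0) & (ix < GRID_SHAPE[0]) & (iy >= 0) & (iy < GRID_SHAPE[1])
        np.add.at(grid, (ix[ok], iy[ok]), profile[ok])        # accumulate f0 values
    return grid

grid = np.zeros(GRID_SHAPE)
for origin in [(1.0, 1.0), (2.5, 1.5), (4.0, 3.0)]:           # hypothetical scan origins
    grid = fuse_scan(grid, np.random.rand(N_ANGLES, 250), origin)
print("most supported cell:", np.unravel_index(np.argmax(grid), grid.shape))
```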

4.3 Tests

The tests of the system were performed in the environment shown in figure 3, which is mainly composed of walls plus two objects: a metallic cylinder and a square metal box. In this area the sensor was positioned at 10 different locations and performed scans with different numbers of angular readings (200, 50, 40 and 20 readings per scan). Each reading was deconvolved by inverse filtering and compensated for the power losses during the flight. Prior to the fusion process, each scan was interpolated up to 200 angular readings. Then the fusion process described above was performed, and a grid map was obtained for each angular resolution in order to compare the quality of each estimation.

The results of these tests are shown in figures 4 to 7. These figures are contour plots of the values reached in each grid cell after the fusion process, indicating the original angular resolution of the 10 scans. The coordinates in these figures are in grid units (each cell is 2x2 cm).

As can be observed, the degradation of the obtained map is only evident in figures 6 and 7. This result indicates that it is possible to obtain good maps of the scenario using only a very reduced set of angular readings per scan. However, due to the simple nature of the objects used in this work (walls, a cylinder and a square box), it is not possible to give an optimum value for this number. More tests with objects of different complexity have to be performed before valid conclusions can be established.

Fig. 3. Scenario of the tests performed. Solid lines indicate objects (walls, cylinder and a box). Small circles indicate the origin locations of the 10 scans.

Fig. 4. Contour plot of the grid map resulting after 10 scans performed at full angular resolution (200 readings per scan). Coordinates are in grid units.

Fig. 5. Contour plot of the grid map resulting after 10 scans performed with 4 angular steps per reading (50 readings per scan). Coordinates are in grid units.

Fig. 6. Contour plot of the grid map resulting after 10 scans performed with 5 angular steps per reading (40 readings per scan). Coordinates are in grid units.

Fig. 7. Contour plot of the grid map resulting after 10 scans performed with 10 angular steps per reading (20 readings per scan). Coordinates are in grid units.

5. CONCLUSIONS AND FUTURE WORK

5.1 Conclusions

The new ultrasonic sensor and the data fusion algorithm presented in this work give good results in map building, thanks to the signal processing performed before the fusion stage. Until now, most ultrasonic sensors have only given the distance to the first object in the sensor cone. With our proposal it is possible to detect the first object and the background ones, together with their signal amplitude values. All this information can increase the knowledge available to the fusion task, in order to obtain accurate maps and to decrease the time spent in this operation. The tests performed give an idea of the new possibilities.

5.2 Future work

This proposal is still under development and improvement. Our future developments will follow these lines:

- New sensor head implementation (2 receivers / 2 transmitters) with the beamwidth reduced to 10° and increased gain.
- Implementation of the signal processing algorithms on DSP architectures, with time measurement and characterisation for use in the YAIR robot together with other real-time software modules.
- Definition of a sensor behaviour for actuator-like use. Real-time map construction is a difficult task. Our goal is to use the sensor to focus the attention on unexplored areas depending on the time restrictions. The result is a new actuator for the robot, driven by a sensorial behaviour that determines where the sensor has to look and how much time to spend on it (number of angular and axial readings).

6. REFERENCES

[Abidi92] "Data fusion in robotics and machine intelligence". Mongi A. Abidi and Rafael C. Gonzalez. Academic Press, 1992.

[Anaya92] "A method for real-time deconvolution". Anaya et al. IEEE Trans. Instrum. Meas. 41(3), 413-419, 1992.

[Benet92] "An intelligent ultrasonic sensor for ranging in an industrial distributed control system". G. Benet et al. Proceedings of SICICA'92, 47-51. Malaga, 1992.

[Borenstein91] "Histogramic in-motion mapping for mobile robot obstacle avoidance". J. Borenstein and Y. Koren. University of Michigan. IEEE Journal of Robotics and Automation, Vol. 7, No. 4, 1991.

[Gil97] "A CAN architecture for an intelligent mobile robot". J.A. Gil et al. Universidad Politécnica de Valencia. 65-70. SICICA-97.

[Lee96] "The Map-Building and Exploration Strategies of a Simple Sonar-Equipped Mobile Robot". David Lee. Cambridge University Press, 1996.

[Leonard92] "Directed sonar sensing for mobile robot navigation". J.J. Leonard and H.F. Durrant-Whyte. Kluwer Academic Press, 1992.

[Mehrdad94] "Fourier Array Imaging". Mehrdad Soumekh, State University of New York at Buffalo. Prentice Hall, 1994.

[Moravec88] "Sensor fusion in certainty grids for mobile robots". Hans P. Moravec. AI Magazine, 1988.

[Pagac96] "An evidential approach to probabilistic map-building". Daniel Pagac et al. University of Sydney. ICRA-96.

[Parrilla91] "Digital signal processing techniques for high accuracy ultrasonic range measurements". M. Parrilla et al. IEEE Trans. Instrum. Meas. 40(4), 759-763, 1991.

[Simo97] "Behaviour Selection in the YAIR Architecture". J. Simó, A. Crespo, J.F. Blanes. In Proceedings of the IFAC Conference on Algorithms and Architectures for Real Time Control, AARTC'97. Vilamoura, Portugal.