Fast Triangulation Method for An Ultrasonic Sonar System

Bogdan Kreczmer
The Institute of Computer Engineering, Control and Robotics, Wrocław University of Technology, ul. Janiszewskiego 11/17, 50–372 Wrocław, Poland (Tel: +48 71-320-2741; e-mail: [email protected])

Abstract: The paper presents an approach to processing measurement data obtained from an ultrasonic system. The approach makes it possible to simplify the computation of object locations. An important advantage of the proposed method is that it eliminates operations on floating-point numbers. Thus an algorithm based on this approach can be implemented on a simple microcontroller.

1. INTRODUCTION

Sonar is the most common sensor for mobile robots. It provides indications of echo detection times. This is enough to compute the length of the path traveled by an emitted signal. Unfortunately, it isn't enough to determine the position of the object reflecting the signal, because the beam of the emitted wave packet is wide. To determine the position of the object, at least a stereo sonar system is needed, or two measurements performed at different places. This allows a standard triangulation approach to be applied and the coordinates of the object to be determined. The disadvantage of the method is that operations on floating-point numbers are necessary. The approach presented in this paper eliminates this drawback. Moreover, it makes it possible to perform a simple verification of the obtained results and to reject some artifacts (range readings that do not correspond to the locations of actual objects). In this way the approach allows simple microcontrollers to be used to process the data. The method presented in this paper is a kind of triangulation, but in contrast to the traditional approach it doesn't use functions which have to operate on floating-point numbers. The inspiration for the approach was the method proposed by Kuc [2007b].

2. RELATED WORK

For triangulation methods the crucial problem is the accuracy of distance measurements. It has been shown that for ultrasonic range-finders an accuracy of 1 mm can be reached (Clapp and Etienne-Cummings [2006], Egaña et al. [2008]). Another problem is artifacts. Most of them are sources of weak echoes. Kuc proposed a method allowing this type of artifact to be eliminated using a standard Polaroid sonar. The 6500 ranging module controlling a Polaroid sonar can detect echoes beyond the initial one by resetting the detection circuit. The device specification suggests inserting a delay before resetting to prevent the current echo from retriggering the detection circuit. Ignoring this suggestion, Kuc applied another method. He repeatedly reset the module immediately after each detection to generate a dense sequence of detection times. Since the time density of these time points increases with echo intensity, they are similar to biological spikes. Therefore Kuc [2008] termed these sequences pseudoaction potentials (PAPs). This approach makes it possible to distinguish weak and strong echoes. It can be used to eliminate reverberation artifacts and to produce robust sonar maps (Kuc [2007a]). Using this approach, Kuc [2007b] employed neuromorphic processing for classifying environmental reflections. It has also been used to create B-scan (brightness scan) maps (Kuc [2008]). Another biologically inspired method has been proposed by Schillebeeckx et al. [2008]; using specially formed antennae for ultrasonic sensors, it was possible to improve object localization.

3. DETERMINATION OF OBJECT DISCRETIZED POSITION

Because the presented approach is based on the method described in Kuc [2007b], in order to be coherent the notation used in this paper follows the notation applied in Kuc [2007b].

Fig. 1. A robot equipped with an ultrasonic sonar moves along a straight line and passes by an object.

The pass distance x_p (fig. 1) can be determined using equation (1):
$$ x_p = \sqrt{r_d^2 - y_p^2} \qquad (1) $$
Assuming that the measurements are performed at places which are regularly spread along the straight line (see fig. 2), the pass distance can be expressed by formula (2). In building this formula it is also assumed that the object is passed at the point determined by k = 0:
$$ x_{p,k} = \sqrt{r_k^2 - (k d_y)^2} \qquad (2) $$

Fig. 2. The measurements of the distance to an object are performed at points regularly spread along the robot path.

Considering a sequence of three places, the computed values of x_p can be expressed by the set of equations (3):
$$ x_{p,k} = \sqrt{r_k^2 - (k d_y)^2}, \quad x_{p,k-1} = \sqrt{r_{k-1}^2 - ((k-1) d_y)^2}, \quad x_{p,k-2} = \sqrt{r_{k-2}^2 - ((k-2) d_y)^2} \qquad (3) $$
It is more convenient to compute x_p^2 instead of x_p. The differences of x_p^2 for each two consecutive points are given by formulae (4):
$$ x_{p,k}^2 - x_{p,k-1}^2 = r_k^2 - r_{k-1}^2 - 2 k d_y^2 + d_y^2, \qquad x_{p,k-1}^2 - x_{p,k-2}^2 = r_{k-1}^2 - r_{k-2}^2 - 2 k d_y^2 + 3 d_y^2 \qquad (4) $$
The computed values x_{p,k}, x_{p,k-1}, x_{p,k-2} should meet the condition x_{p,k} = x_{p,k-1} = x_{p,k-2} = x_p. In this way we obtain
$$ 0 = r_k^2 - r_{k-1}^2 - 2 k d_y^2 + d_y^2, \qquad 0 = r_{k-1}^2 - r_{k-2}^2 - 2 k d_y^2 + 3 d_y^2 \qquad (5) $$
These formulae allow us to determine the parameter k:
$$ k = \frac{1}{2 d_y^2} \left( r_k^2 - r_{k-1}^2 \right) + \frac{1}{2}, \qquad k = \frac{1}{2 d_y^2} \left( r_{k-1}^2 - r_{k-2}^2 \right) + \frac{1}{2} + 1 \qquad (6) $$
The general form of the formula exploiting r_{k-l} and r_{k-l-1} is
$$ k = \frac{1}{2 d_y^2} \left( r_{k-l}^2 - r_{k-l-1}^2 \right) + \frac{1}{2} + l \qquad (7) $$
This formula can be simplified further when the method of distance determination is taken into account. The most popular method is based on measuring the time of flight (TOF) of the signal. Because the time is measured by a digital clock, the measured distance can be expressed as
$$ r = \frac{c_s \Delta t}{2} = \frac{c_s (m \Delta\tau)}{2} = m \frac{c_s \Delta\tau}{2} $$
where Δτ is the elementary time slice measured by the clock and m is the number of elementary time slices. We can choose the duration of the elementary slice Δτ arbitrarily. Thus we can choose the value of the slice so as to meet equation (8):
$$ d_y = \frac{c_s \Delta\tau}{2} \qquad (8) $$
This allows us to simplify formula (7) as follows:
$$ k = \frac{1}{2} \left( m_{k-l}^2 - m_{k-l-1}^2 + 1 \right) + l \qquad (9) $$
where m_{k-l} and m_{k-l-1} are the numbers of time slices counted while the distances r_{k-l} and r_{k-l-1} are measured. It is worth noting that all elements of equation (9) are integer numbers. In this way the software implementation is simplified, and even a hardware implementation of the approach becomes possible.
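To illustrate the integer-only character of equation (9), a minimal sketch in C is given below. It is not part of the original paper; the function name, the fixed-width types and the truncating division are illustrative assumptions.

```c
#include <stdint.h>

/* Minimal sketch (not from the paper): object index k of equation (9)
 * computed from the time-slice counts m_kl = m_{k-l}, m_kl1 = m_{k-l-1}
 * and the step index l.  With d_y = c_s * delta_tau / 2 the whole
 * computation stays in integer arithmetic.                             */
static int32_t object_index_k(uint32_t m_kl, uint32_t m_kl1, uint32_t l)
{
    /* 64-bit intermediates guard against overflow of the squared counts. */
    int64_t diff = (int64_t)m_kl * m_kl - (int64_t)m_kl1 * m_kl1;

    /* Integer division truncates toward zero; a different rounding rule
     * could be substituted without leaving integer arithmetic.           */
    return (int32_t)((diff + 1) / 2) + (int32_t)l;
}
```

A call such as object_index_k(m_now, m_prev, l) directly yields the index used as a bit position in the next section, without any floating-point operation.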

4. OBJECT DETECTION AND IDENTIFICATION

Having determined the parameter k, it is possible to compute x_p. But formula (9) also allows us to create an algorithm which can be used to trace an object while the robot moves. The parameter k can be treated as a kind of object index. In the general case its value isn't unique, so the contents of the register should rather be considered a stamp of the robot's surroundings. However, to simplify the further description, the index concept will be used.

The approach based on formula (9) makes it possible to represent the state of the observed environment by a register (fig. 3). When an object is detected and its index is computed, the corresponding bit of the register can be marked. The bit position in the register corresponds to the index. During a single step a few objects can be detected, which means that several bits can be marked. Each bit is associated with a single object. In this way, in each successive step l a new register of determined parameters k is obtained.

Fig. 3. A simple register can be used to store the indexes of detected objects.

In order to trace objects in the robot's surroundings during its motion, a horizon of observation can be defined. This is the number of previous steps of observation which are taken into account. Assuming that h denotes this number, we can perform a logical multiplication (AND) of all the registers obtained from step l − h up to the current step l (see fig. 4). It allows us to extract the commonly marked bits. The result can be stored in a separate register. Because this data contains only information about objects observed during every step, artifacts are filtered out and rejected. This is due to the fact that artifacts are very sensitive to a dislocation of the observation position (Kuc [2007a]). Thus the final register storing the result of the logical multiplication can be treated as the register of stable objects (RSO). The size of the observation horizon depends on the distance range of the sonars and the width of their beam.

Fig. 4. The indexes of objects detected in successive steps can be stored in registers. Comparing the registers, the objects observed in the current and the previous steps can be determined.
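The register representation and the logical multiplication over the observation horizon can be sketched as follows. This is a minimal illustration under assumed names (step_reg, rso_over_horizon) and an assumed 32-bit register width; it is not the author's implementation.

```c
#include <stdint.h>

#define HORIZON 3u                     /* observation horizon h (assumed) */

/* step_reg[0] holds the current step l, step_reg[i] the step l - i.     */
static uint32_t step_reg[HORIZON];

/* Mark the bit whose position is the object index k (formula (9)).      */
static void mark_object(uint32_t k)
{
    if (k < 32u)
        step_reg[0] |= (uint32_t)1u << k;
}

/* Register of stable objects: logical AND over the whole horizon.       */
static uint32_t rso_over_horizon(void)
{
    uint32_t rso = step_reg[0];
    for (uint32_t i = 1u; i < HORIZON; ++i)
        rso &= step_reg[i];
    return rso;
}
```

Bits that stay set in rso_over_horizon() correspond to objects confirmed in every step of the horizon, which is how the artifacts are rejected; the drawback of the unbounded index k discussed next is what motivates the κ variant.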


The main disadvantage of this approach is that, using formula (2), the parameter l must be remembered and consecutively increased. Moreover, at each step the parameter k for a new object becomes bigger and bigger, and its value cannot be bounded. Therefore it also isn't possible to choose a proper size for the register. Thus it is much more useful to apply a uniform procedure which guarantees that the size of the registers won't depend on the step number l. This procedure should also allow us to remove objects which have been passed and are not observed any longer. It can be done when, instead of k, the result of the subtraction k − l is computed. Transforming formula (9), it can be written as follows:
$$ \kappa_l = k - l = \frac{1}{2} \left( m_{k-l}^2 - m_{k-l-1}^2 + 1 \right) \qquad (10) $$
Exploiting the same idea of registers, it can be noticed that the marked bits corresponding to observed objects move to the left from step to step (see fig. 5). As a result, a simple logical multiplication of all the registers to extract the observed objects isn't possible.

Fig. 5. While the robot moves, the values of the indexes κ decrease and the marked bits corresponding to observed objects move to the left.

Fortunately this problem can be solved in a very simple way. Because the data stored in the register of step i must be moved to the next register in order to make room for the new data obtained at step i + 1, it is enough to shift the bits by a single position at each step before moving the data to the next register (see fig. 7). This procedure makes it possible to obtain the proper location of all bits in order to perform the logical multiplication of all the registers.

Fig. 6. Before placing a new result in a register, the results obtained at prior steps are moved to the next registers. Shifting their contents, it is possible to establish the same position of the bits corresponding to the same value of κ in all registers.

Fig. 7. When the register contents are shifted before moving them to the next registers, the same position can be established for the bits corresponding to the same values of κ in all registers.

The result stored in the RSO can be exploited for robust position tracking of a mobile robot. The marked bits in the RSO combine the robot position with the objects visible in the local robot surroundings. To be able to trace the robot position, it is necessary to compare the result obtained in Reg 0 with the previous value of the RSO. In the general case this procedure doesn't seem to be simple.

The presented approach doesn't take into account the problem of measurement errors. It can be solved by creating fuzzy bits: instead of marking a single bit, its two neighbors should also be marked. A similar approach has been applied in Kuc [2007b].

Fig. 8. A fuzzy bit is created by extending a marked bit in order to take into account measurement errors.
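A sketch of the κ-based variant is given below, again under assumed names and a 32-bit register width; it is only meant to illustrate the shift-before-move step and the fuzzy bits, not to reproduce the author's code.

```c
#include <stdint.h>

#define HORIZON 3u

/* reg[0] holds the current step, reg[i] the i-th previous step.         */
static uint32_t reg[HORIZON];

/* Advance one measurement step: age the registers.  Because kappa of a
 * tracked object drops by one per step, the stored bits are shifted by
 * one position so that equal kappa values line up in all registers.     */
static void advance_step(void)
{
    for (uint32_t i = HORIZON - 1u; i > 0u; --i)
        reg[i] = reg[i - 1u] >> 1;
    reg[0] = 0u;
}

/* Mark a detection as a fuzzy bit: the bit for kappa and both of its
 * neighbours, so that an error of +/-1 in kappa still gives an overlap. */
static void mark_fuzzy(uint32_t kappa)
{
    if (kappa >= 31u)
        return;                            /* outside the sketch's range  */
    reg[0] |= (kappa == 0u) ? 3u : (7u << (kappa - 1u));
}

/* Register of stable objects over the horizon.                           */
static uint32_t stable_objects(void)
{
    uint32_t rso = reg[0];
    for (uint32_t i = 1u; i < HORIZON; ++i)
        rso &= reg[i];
    return rso;
}
```

In each step the controller would call advance_step(), then mark_fuzzy() for every κ_l obtained from formula (10), and finally read stable_objects() as the RSO.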

5. MULTI-SONAR SYSTEM

An important advantage of the method presented in the previous section is the elimination of artifacts, which are sensitive to a change of the sender and detector position. Using the method, a sonar must be moved by the same distance step by step. When the sonar is mounted on a mobile robot, this can be achieved during the robot's motion. But this solution introduces an additional source of errors, namely the measurement of the distance traveled by the robot. It can be avoided when a set of sonars is used instead of a single moving sonar (see fig. 9). The sonars can be mounted with a regular spacing, and the error can be drastically reduced. This arrangement of sonars creates a sonar array.



Fig. 9. Instead of a regular sonar displacement, a sonar array can be used.

In its simplest form the array can be reduced to a tri-aural sonar system. Considering the presented method, the system can be treated as a double stereo system. The second pair of sonars is used to verify the results obtained by the first pair. This shows that the method presented in Kuc [2007b] can be transformed into the known approach of Kreczmer [1998], which was applied to reduce false sonar readings. However, the method described in Kuc [2007b] offers important new benefits in the sense of the discretization presented in the previous section. It speeds up the calculations and makes the results more robust.

The proposed method introduces an additional error into the calculation of the distance x_p. It is caused by the discretization, and it should be noticed that the discretization step d_y is quite big. Comparing the approach based on moving a single sonar with the tri-aural system, it can be shown that the latter has an important advantage. It is due to the orientation of the discretized coordinate axis with respect to the object and the sonar acoustic axis. Fig. 9 shows the case where the situations for both systems are exactly equivalent. Because the width of a sonar beam isn't very large (30° for Polaroid transducers), an object has to be close to the acoustic axis of a sonar in order to be noticed. When a sonar is pointed towards the movement direction (see fig. 10a), an object cannot be very far from this direction. Considering a tri-aural sonar system, or in general a linear array system, an object cannot be far from the acoustic axis of the system (see fig. 10b).

Fig. 10. a) When a sonar is pointed in the direction of its movement, an object cannot be far from this direction. Discretization of the distance y_p is applied along the movement direction. b) The object cannot be far from the acoustic axis of the sonar linear array. Discretization of the distance y_p is applied along the sonar array.

The main difference between these two cases is that for a moving sonar x_p < y_p or even x_p ≪ y_p, while the opposite relation holds for a multi-sonar system, i.e. x_p ≫ y_p. This feature has a very strong impact on the error of x_p. Considering the case of a moving sonar pointed in the direction of the movement, it can be noticed that an error of k equal to 1 can cause a very big change of the value x_p (see fig. 11a). The same situation considered for a multi-sonar system causes a much smaller change of x_p (see fig. 11b) and, in consequence, a much lower error value.

Fig. 11. a) While x_p ≪ y_p, a small change of k can cause a very big change of the value x_p. b) While x_p ≫ y_p, the same change of k causes a very small change of the value x_p.

The type of discretization presented in fig. 11b gives an additional advantage. When a value of x_p needs to be established in order to locate an object in the robot's surroundings, formula (2) should be used. But the value of r_k can also be digitized, and then formula (2) isn't needed any longer to show where the object is. Digitizing y_p and r_k, a kind of grid is created (see fig. 12b) which is a mix of Cartesian and polar coordinate discretization. Creating the same type of grid for a sonar pointed towards its movement direction, much bigger cells are obtained. Moreover, the angle coordinate is much more fuzzy (see fig. 12a) than in the case of the discretization applied to a multi-sonar system.

Fig. 12. The mixed Cartesian and polar coordinate discretization for a) a sonar pointed towards its movement direction, b) a multi-sonar system.

The presented approach to processing the measurement data obtained from a multi-sonar system makes it possible to reduce the necessary resources of the system controller. Using the final discretized representation of the data and the local environment, the controller can perform all calculations using only integer data, which is a very important benefit of the described method.

6. ERROR ANALYSIS

To estimate the computation error of the parameter k, formula (7) has to be considered. This formula shows that the parameter k is a function of r_{k-l}, r_{k-l-1} and d_y. Thus the error can be estimated as follows:
$$ \Delta k = \frac{1}{d_y^2} \left( r_{k-l} \Delta r_{k-l} + r_{k-l-1} \Delta r_{k-l-1} \right) + \frac{\Delta d_y}{d_y^3} \left| r_{k-l}^2 - r_{k-l-1}^2 \right| \qquad (11) $$
The interpretation of the symbols used in the formula is presented in fig. 13; it is shown in the context of a stereo sonar system.

Fig. 13. The scheme of the sonar arrangement used to determine the error values.

Formula (11) indicates that the error can be reduced by enlarging the distance d_y between the sonars. The chart in fig. 14 shows that the value of the error drops rapidly while d_y is enlarged up to about 9 cm. This conclusion isn't true with regard to the real distance y = k d_y, because
$$ \Delta (k d_y) = \frac{1}{d_y} \left( r_{k-l} \Delta r_{k-l} + r_{k-l-1} \Delta r_{k-l-1} \right) + \frac{\Delta d_y}{d_y^2} \left| r_{k-l}^2 - r_{k-l-1}^2 \right| \qquad (12) $$
The chart in fig. 15 illustrates this relation.

Fig. 14. The error estimation of the parameter k as a function of the parameter b (the distance between the sonars). The error is estimated for an object placed at a distance of 2 m and an azimuth of 30°.

Fig. 15. The error estimation of the coordinate y of the object position. This error is a function of the parameter b (the distance between the sonars).

Unfortunately, it isn't possible to increase the distance between the sonars significantly. If it is enlarged too much, an object cannot be detected by all sonars. This is due to the restricted size of the emission angle. Another reason is that sonars have a much lower sensitivity when a signal is received from a direction far from their acoustic axis. For Polaroid transducers a distance between the sonars of up to 8 cm seems to be reasonable.

The error estimations presented in fig. 14 and fig. 15 are computed for a specific direction, but the conclusions hold for the whole range of the sonar sensitivity. This is due to the fact that the error almost doesn't depend on the direction: the second component of formula (11) has a very small influence on the final value. This is also the reason why the error is almost linear in the distance to the object. The same can be said about the influence of the distance measurement error Δr. Increasing the accuracy of the distance measurement makes it possible to reduce the error of the parameter k. It can also be done by enlarging the distance d_y between the sonars, but unfortunately only within a very limited range, because it isn't possible to increase the distance between the sonars without losing detection of objects located in the nearest robot surroundings.

At the end of section 4 the creation of fuzzy marked bits was suggested in order to take measurement errors into account (see fig. 8). This procedure is proper if the error Δk doesn't exceed 1. The error analysis presented in this section shows that this is possible if the accuracy of the distance measurement is 1 mm and the distance between the sonars is 8 cm. In this way the condition Δk ≤ 1 is guaranteed up to a distance of about 3 m to the object.
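The bound (11) and the claim above can be checked with a short sketch. The function name and the parameter choice are illustrative assumptions; the sonar spacing error Δd_y is taken as negligible.

```c
#include <math.h>
#include <stdio.h>

/* Minimal sketch: error bound (11) for the parameter k.  r1 = r_{k-l},
 * r2 = r_{k-l-1}, dr = range accuracy, dy = sonar spacing, ddy = its
 * accuracy (all in metres).                                            */
static double delta_k(double r1, double r2, double dr, double dy, double ddy)
{
    return (r1 * dr + r2 * dr) / (dy * dy)
         + ddy * fabs(r1 * r1 - r2 * r2) / (dy * dy * dy);
}

int main(void)
{
    /* dr = 1 mm, dy = 8 cm, object at about 3 m, ddy neglected:
     * delta_k is about 0.94, i.e. the condition delta_k <= 1 holds.     */
    printf("delta_k = %.2f\n", delta_k(3.0, 3.0, 0.001, 0.08, 0.0));
    return 0;
}
```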

7. EXPERIMENTS

This section presents the results of preliminary experiments. The controller of the sonar system used in these experiments didn't make it possible to obtain a sufficiently good accuracy of the distance measurement (it was about 5 mm). Therefore the experiments concentrated mainly on examining the features discussed in section 5. The main goal of the experiments was a comparison of the results obtained from a single moving sonar and from the tri-aural sonar system. The comparison has been done in the context of the discussion presented in section 5. The general scheme of the test bed is shown in fig. 16. The experiment consisted in performing measurements of the distance to a pole. The pole was located at five different places, denoted in fig. 16 as P_A, P_B, P_C, P_D and P_E. The distance between them was 5 cm.

Fig. 16. The scheme of the testbed for the experiment.

The computations have been performed for two cases, i.e. for a single sonar and for the tri-aural sonar system. In the first case only the results of measurements obtained from a single sonar have been considered (see fig. 17). Executing the measurements for a sequence of different pole locations gives the same effect as measurements obtained from a moving sonar.

Fig. 17. The scheme of the test-bed for measurements performed using a single sonar.

The second row of table 1 contains the measured distances r_d to a narrow pole. The next row of the table shows the computed values of the parameter κ. The values of κ are computed using formula (9). They were calculated for each two consecutive locations of the pole, i.e. {P_A, P_B}, {P_B, P_C}, {P_C, P_D} and {P_D, P_E}. The values of κ should differ by 1 and should decrease from P_B to P_E. Due to measurement errors, the values of the computed parameter κ differ from the proper ones. After determining the value of the parameter κ, the value of the second coordinate x_p can be calculated using the formula
$$ x_p = \sqrt{r_d^2 - (\kappa\, d_y)^2} $$

Table 1. The results obtained for a single sonar.

                       P_A     P_B     P_C     P_D     P_E
  r_d [cm]             59.2    54.3    50.1    45.9    41.6
  κ                     ×      11       9       8       8
  x_p [cm]              ×      21.9    30.4    30.2    22.5
  Expected κ            ×       9       8       7       6
  Expected x_p [cm]     ×      38.5    36.7    35.8    34.7
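The κ and x_p rows of table 1 can be reproduced from the r_d row with a short sketch. The truncation of κ toward zero and the use of the larger distance of each pair in the x_p formula are assumptions made here to match the tabulated values; they are not stated explicitly in the text.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Measured distances r_d [cm] for P_A..P_E and the step d_y = 5 cm. */
    const double rd[] = { 59.2, 54.3, 50.1, 45.9, 41.6 };
    const double dy = 5.0;

    for (int i = 0; i + 1 < 5; ++i) {
        double m1 = rd[i] / dy;                /* counts in units of d_y  */
        double m2 = rd[i + 1] / dy;
        int kappa = (int)(0.5 * (m1 * m1 - m2 * m2 + 1.0));  /* (9), truncated */
        double y  = kappa * dy;
        double xp = sqrt(rd[i] * rd[i] - y * y);
        printf("kappa = %2d   xp = %.1f cm\n", kappa, xp);
    }
    return 0;
}
```

With these assumptions the printed pairs (11/21.9, 9/30.4, 8/30.2, 8/22.5) agree with the κ and x_p rows of the table.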

The fourth row of table 1 shows the results. The last two rows of the table present the expected values of κ and the values of x_p corresponding to them. It can be noticed that the final error of the x_p value is quite big. The sources of the errors are the low accuracy of the distance measurement (Δr = 0.5 cm) and the big step of discretization (Δy = 5 cm). Moreover, the error value is enlarged by the effect discussed in section 5, because the condition x_p < y_p is met.

In the second part, the measurements have been performed by the tri-aural sonar system. The middle sonar of the system is the sender, and after emitting a signal all of the sonars are used as receivers. After measuring the TOF by each sonar, the distances to the obstacle are computed using formulae (13):
$$ d_E = v_s \left( t_{FE} - \frac{t_{FF}}{2} \right), \quad d_F = v_s \frac{t_{FF}}{2}, \quad d_G = v_s \left( t_{FG} - \frac{t_{FF}}{2} \right) \qquad (13) $$
where v_s is the velocity of the ultrasonic wave and t_{FE} is the TOF of a signal emitted by the sonar F, reflected by an object and received by the sonar E. The notations t_{FF} and t_{FG} have analogous meanings.

Fig. 18. The scheme of the test-bed for measurements performed for the tri-aural sonar system.

Table 2 presents the obtained results. The rows denoted T_E, T_F and T_G contain the measurement data obtained from the sonars E, F and G. The distances were computed using formulae (13). The rows κ_EF and κ_FG contain the values of the parameter κ computed for the data obtained from the sonar pairs EF and FG respectively. The last two rows contain the real values of x_p and the computed values x̂_p. The proper κ should meet the condition κ_EF = κ_FG + 1.
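A sketch of the distance computation of formula (13) is given below; the struct and function names are illustrative assumptions.

```c
/* Minimal sketch: distances of formula (13) for the tri-aural system.
 * Sonar F emits; E, F and G receive.  t_fe, t_ff, t_fg are the measured
 * times of flight and vs is the velocity of the ultrasonic wave.        */
typedef struct { double dE, dF, dG; } tri_dist;

static tri_dist tri_distances(double vs, double t_fe, double t_ff, double t_fg)
{
    tri_dist d;
    d.dF = vs * t_ff / 2.0;            /* F -> object -> F, halved        */
    d.dE = vs * (t_fe - t_ff / 2.0);   /* remaining leg object -> E       */
    d.dG = vs * (t_fg - t_ff / 2.0);   /* remaining leg object -> G       */
    return d;
}
```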

Table 2. The results obtained for the tri-aural sonar system.

              P_A     P_B     P_C     P_D     P_E
  T_E [cm]    60.8    55.2    50.5    45.9    41.6
  T_F [cm]    59.2    54.3    50.1    45.9    41.6
  T_G [cm]    59.0    54.5    51.1    47.0    44.8
  κ_EF         2       1       0       0       0
  κ_FG         0       0       0       0      -2
  x_p [cm]    58.8    54.5    50.1    45.8    41.4
  x̂_p [cm]    59.0    54.8    50.5    45.9    40.5

It is met only for P_B. However, for all of the pole locations the values of κ are acceptable in the sense of the fuzzy bits described at the end of section 4. Despite the errors, the values x̂_p are very close to the expected values x_p. The discussion presented in section 5 explains this phenomenon.

8. CONCLUSIONS

The approach presented in this paper makes it possible to implement the method using a simple microcontroller. It can be done because only operations on integer numbers are needed. This feature is obtained by exploiting the discretization proposed in the paper. Unfortunately, the discretization introduces an additional source of errors. In order to reduce it, the accuracy of the distance measurement should be as good as possible. The error analysis presented in this paper shows that an accuracy of 1 mm should be obtained in order to use the method at distances up to 3 m (with a distance between the sonars of 8 cm).

REFERENCES

M. A. Clapp and R. Etienne-Cummings. Single ping – multiple measurements: Sonar bearing angle estimation using spatiotemporal frequency filters. IEEE Trans. on Circuits and Systems, 53(4), April 2006.

Aimar Egaña, Fernando Seco, and Ramón Ceres. Processing of ultrasonic echo envelopes for object location with nearby receivers. IEEE Trans. on Instrumentation and Measurement, 57(12), December 2008.

B. Kreczmer. Path planner for a car-like robot applying multi-layer grid map of the environment. In Proc. of the Int. Conference on Intelligent Autonomous Vehicles, volume 1, pages 324–329, Madrid, March 1998.

R. Kuc. Biomimetic sonar and neuromorphic processing eliminate reverberation artifacts. IEEE Sensors Journal, 7(3):361–369, March 2007a.

R. Kuc. Neuromorphic processing of moving sonar data for estimating passing range. IEEE Sensors Journal, 7(5):851–859, May 2007b.

R. Kuc. Generating B-scan of the environment with a conventional sonar. IEEE Sensors Journal, 8(2):151–160, February 2008.

F. Schillebeeckx, J. Reijniers, and H. Peremans. Probabilistic spectrum based azimuth estimation with a binaural robotic bat head. In 4th Inter. Conf. on Autonomic and Autonomous Systems, pages 142–147, 2008.