Robotics & Computer-Integrated Manufacturing, Vol. 13, No. 1, pp. 41-49, 1997. © 1997 Elsevier Science Ltd. All rights reserved. Printed in Great Britain. 0736-5845/97 $17.00 + 0.00
SONAR MAPPING OF A MOBILE ROBOT CONSIDERING POSITION UNCERTAINTY

BYUNG-KWON MIN,* DONG WOO CHO,†§ SANG-JO LEE‡ and YOUNG-PIL PARK‡

*Engineering Research Institute, Yonsei University, Seoul, Korea, †Department of Mechanical Engineering, POSTECH, Pohang 790-784, Korea and ‡Department of Mechanical Engineering, Yonsei University, Seoul, Korea

§Author to whom correspondence should be addressed.
This paper suggests a new sonar mapping method that considers the position uncertainty of a mobile robot. Sonar mapping is used for recognizing an unknown environment during navigation. Usually, the accumulated position error of a mobile robot causes considerable deterioration of the quality of the constructed map. In this paper, therefore, a new Bayesian probability map construction method is proposed which considers estimation of the position error of a mobile robot. In this method, we applied approximate transformation theory to estimate the position uncertainty of a real mobile robot and introduced the cell ordering uncertainty caused by the position uncertainty of the robot into cell-based map updating. Through simulation we showed the effect of a robot's position uncertainty on the quality of a reconstructed map. The developed methods were also implemented on a real mobile robot, AMROYS-II, which was built in our laboratory, and were shown to be useful in a real environment. © 1997 Elsevier Science Ltd. All rights reserved.
1. INTRODUCTION

An autonomous mobile robot should be able to perform a given task in an unknown environment without the guidance of a human. Therefore, acquisition of environmental information is one of the most important capabilities required of a mobile robot. Currently, much research on collision avoidance and on mapping the surroundings of a robot using vision or range sensors is being conducted. Absolute position estimation of a mobile robot and the correction of the position error have also been major issues in mobile robot research. However, because a robot only knows what it has seen in an unknown environment, its position should be estimated based on the accumulated information, which necessitates building a more accurate map.

There are a wide variety of methods for the representation of reconstructed maps. A free-space map stores and represents free space as its first consideration, whereas an object-oriented map uses sets of polygons and line elements to represent obstacles or occupied regions. A composite-space map, however, can represent both the free space and the obstacles. Among these methods, we adopt the concept of a composite-space map for probabilistic mapping. Moravec and Elfes4,8 proposed the probabilistic representation of a map using certainty grids. Borenstein and Koren1,2 suggested the HIMM method for fast navigation by simplifying the certainty grid concept. Then Moravec and Cho3,5,7 provided a sound theoretical basis for probabilistic mapping to enhance the quality of a sonar map.

A mobile robot uses various sensors to calculate its position and its distance from obstacles. The values sensed by the robot inevitably contain errors due to electrical noise, dimensional irregularity of the mechanical parts, and other factors. These errors also accumulate during operation. To reduce the position uncertainty while the robot is moving, Smith and Cheeseman9 suggested the approximate transformation (AT) method. They represented the position uncertainty of the robot at each position between movements by a covariance matrix, and then applied the approximate transformation to estimate the uncertainty between positions.

Previous research on the position uncertainty of a mobile robot mostly dealt with the estimation and correction of the robot position using the uncertain information. In this paper, we focus on probability mapping, especially using certainty grids. We investigate the effects of the position uncertainty of a mobile robot on the quality of the probability map, and we introduce a new method that minimizes the deterioration of map quality due to position uncertainty by incorporating the AT into a newly developed probability mapping method. In Section 2 we discuss the theory of probability mapping and the application of AT to real robots. In Section 3, we develop a new Bayesian probability mapping formula
considering the position uncertainty. Then, in Section 4 we suggest some other schemes for correction of the errors due to the position uncertainty, and compare them through experiments in a real environment, as well as by simulation. In Section 5, we draw the conclusions.
2. PROBABILITY MAP CONSTRUCTION
2.1. Review of the earlier Bayesian updating model In a probability map, the workspace of a robot is divided into square cells of the same size and the existence of obstacles in the workspace is expressed by an occupancy probability of the cells between zero and unity. A cell of zero occupancy probability stands for an empty cell, and a cell of unity stands for an occupied cell. A 0.5 probability value means that no information about the occupancy is available. The probability map is updated using the Bayesian conditional probability formula whenever a new sensing operation of the robot takes place.
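Before turning to the update formulas below, a minimal sketch of this grid representation is given; the cell size and workspace dimensions here are arbitrary illustrative values, not those used later in the paper.

```python
import numpy as np

# Minimal occupancy-grid sketch; the 0.25 m cell size and the 10 m x 10 m
# workspace used here are illustrative values only.
CELL_SIZE = 0.25                       # side length of one square cell [m]
grid = np.full((40, 40), 0.5)          # 10 m x 10 m workspace, every cell unknown

def world_to_cell(x, y):
    """Map a workspace coordinate (in meters) to the indices of its cell."""
    return int(y // CELL_SIZE), int(x // CELL_SIZE)

# A cell drifts toward 1.0 as evidence of occupancy accumulates, toward 0.0 as
# evidence of emptiness accumulates, and stays at 0.5 if the beam never reaches it.
```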
Fig. 1. Sensor footprint and sorted cells which belong to the footprint (empty region and occupied region).
P(o_i | M \cap A) = \frac{P(M | o_i \cap A)\, P(o_i | A)}{P(M | o_i \cap A)\, P(o_i | A) + P(M | \bar{o}_i \cap A)\, P(\bar{o}_i | A)}    (1)

P(\bar{o}_i | M \cap A) = \frac{P(M | \bar{o}_i \cap A)\, P(\bar{o}_i | A)}{P(M | o_i \cap A)\, P(o_i | A) + P(M | \bar{o}_i \cap A)\, P(\bar{o}_i | A)}    (2)

where P(o_i | M ∩ A) is the estimated occupancy probability of cell i given old information A and a new measurement M. Division of Eq. (1) by Eq. (2) gives

\frac{P(o_i | M \cap A)}{P(\bar{o}_i | M \cap A)} = \frac{P(M | o_i \cap A)}{P(M | \bar{o}_i \cap A)} \cdot \frac{P(o_i | A)}{P(\bar{o}_i | A)}.    (3)

In Eq. (3), P(o_i | A) is the occupancy probability of cell i and can be retrieved from the old information, so only P(M | o_i ∩ A) is left unknown for the calculation of P(o_i | M ∩ A). To evaluate P(M | o_i ∩ A), which is the probability that the new measurement M is obtained assuming that cell i is occupied and A is given, we arrange the cells in the sensor footprint as in Fig. 1. We then introduce P(H_i | A), the probability that the beam halts at cell i,

P(H_i | A) = P(H_i | o_i)\, P(o_i | A) + P(H_i | \bar{o}_i)\, P(\bar{o}_i | A)    (4)

where the probabilities P(H_i | o_i) and P(H_i | \bar{o}_i) are determined from the sensor characteristics and the distance between the object and the sensor. P(H_i | o_i) represents the probability of the beam halting at cell i when it is occupied, and P(H_i | \bar{o}_i) when it is empty. We assume both to be functions of the distance between sensor and object and of the angle from the transducer. Since the beam must pass through the cells in the empty region and stop at a certain cell in the occupied region, P(M | o_i ∩ A) can be determined as follows (the details are well documented in Refs 3 and 7):

P(M | o_i \cap A) = \sum_{n=0} \left[ \left( \prod_{k=j}^{j+n-1} P(\bar{H}_k | o_i \cap A) \right) P(H_{j+n} | o_i \cap A)\, P(H_{j+n} | M) \right].    (5)

Similarly,

P(M | \bar{o}_i \cap A) = \sum_{n=0} \left[ \left( \prod_{k=j}^{j+n-1} P(\bar{H}_k | \bar{o}_i \cap A) \right) P(H_{j+n} | \bar{o}_i \cap A)\, P(H_{j+n} | M) \right].    (6)

P(H_{j+n} | M) in Eqs (5) and (6) is the probability that the beam halts at cell j+n in the occupied region, provided that the measurement has been obtained. P(\bar{H}_k | o_i ∩ A) in Eq. (5) is the probability that the beam does not halt at cell k, given that cell i is occupied as well as the old information A. Dividing Eq. (5) by Eq. (6), we obtain the first term on the right-hand side of Eq. (3):

\frac{P(M | o_i \cap A)}{P(M | \bar{o}_i \cap A)} = \frac{\sum_{n=0} \left( \prod_{k=j}^{j+n-1} P(\bar{H}_k | o_i \cap A) \right) P(H_{j+n} | o_i \cap A)\, P(H_{j+n} | M)}{\sum_{n=0} \left( \prod_{k=j}^{j+n-1} P(\bar{H}_k | \bar{o}_i \cap A) \right) P(H_{j+n} | \bar{o}_i \cap A)\, P(H_{j+n} | M)}.    (7)
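To make the update loop concrete, the following sketch applies Eqs (3), (5) and (6) to the cells of a single footprint, sorted by distance from the transducer. It is a minimal illustration only: the Gaussian-shaped P(H_k | M) and the constant values assumed for halting at an occupied or empty cell are placeholder numbers, not the sensor model of Refs 3 and 7, and the sum simply runs over all footprint cells.

```python
import numpy as np

def halt_given_measurement(dists, measured_range, sigma=0.1):
    """P(H_k | M): probability that the beam halted at a cell at distance dists[k],
    given the returned range; a Gaussian shape is assumed here as a placeholder."""
    return np.exp(-0.5 * ((dists - measured_range) / sigma) ** 2)

def measurement_likelihood(p_occ, p_halt_m, i, cell_i_occupied):
    """Eqs (5)/(6): probability of the measurement given the state of cell i.
    The beam must pass every nearer cell and halt at exactly one cell."""
    total, pass_prob = 0.0, 1.0
    for k, p in enumerate(p_occ):
        occ_k = 1.0 if (k == i and cell_i_occupied) else (0.0 if k == i else p)
        halt_k = 0.9 * occ_k + 0.05 * (1.0 - occ_k)   # assumed P(H_k|occupied), P(H_k|empty)
        total += pass_prob * halt_k * p_halt_m[k]
        pass_prob *= 1.0 - halt_k
    return total

def update_footprint(p_occ, dists, measured_range):
    """Posterior P(o_i | M ∩ A) for every footprint cell, via the odds form of Eq. (3)."""
    p_halt_m = halt_given_measurement(dists, measured_range)
    post = np.empty_like(p_occ)
    for i in range(len(p_occ)):
        num = measurement_likelihood(p_occ, p_halt_m, i, True)    # Eq. (5)
        den = measurement_likelihood(p_occ, p_halt_m, i, False)   # Eq. (6)
        odds = (num / max(den, 1e-12)) * p_occ[i] / (1.0 - p_occ[i])
        post[i] = odds / (1.0 + odds)
    return post

# Ten footprint cells sorted by distance, all initially unknown (0.5).
dists = np.linspace(0.1, 1.0, 10)
print(update_footprint(np.full(10, 0.5), dists, measured_range=0.75))
```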
At the start of mapping, the probability values of all cells are set to be 0.5. As the construction progresses, the probability value of each cell that belongs to the sensor footprint increases or decreases. However, the probability value remains as 0.5 for the region where the sensor beam does not reach, such as the inside of an obstacle or a region far from the robot.
2.2. Estimation of robot position uncertainty
To estimate the position uncertainty of a mobile robot, we use the AT method,9 in which the robot's position error is characterized by a probability distribution. The mean values and a covariance matrix are used to represent the distribution. In a two-dimensional workspace, the position of a robot can be represented by (x, y, θ), its position and orientation. In Fig. 2, the arrows that connect the points represent ATs from one coordinate frame to another. Every ellipse represents a locus of equal probability of the two-degrees-of-freedom multivariate Gaussian distribution; therefore, a larger ellipse stands for larger uncertainty in the estimated robot position. In the figure, W represents the initial position of the robot and, at the same time, the origin of the absolute global coordinate frame. As the robot moves through the points L1, L2 and L3, the position uncertainty with respect to the point W increases. A, D and E represent the corresponding coordinate transformations from the absolute coordinate frame. According to the compounding theory of AT,9 the covariance matrix C3 that represents L3(x3, y3, θ3) with respect to the absolute coordinate frame W can be calculated by compounding the coordinate transformations A and B. When C1 and C2 are the covariance matrices of A and B, respectively, C3 becomes
C_3 \approx J \begin{bmatrix} C_1 & 0 \\ 0 & C_2 \end{bmatrix} J^{T}.    (8)
In Eq. (8), J is the Jacobian matrix of the compounding operation. Now, if we know the probabilistic characteristics of the position error of each robot movement, we can estimate the position uncertainty by successively compounding the ATs. After the estimated position has been found by dead reckoning, together with the covariance matrix of the robot, the probability p(x) that the robot is located at x is calculated from the multivariate Gaussian probability distribution:
p(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^{n} \det C}}\, e^{-\frac{1}{2}(\mathbf{x} - \bar{\mathbf{X}})^{T} C^{-1} (\mathbf{x} - \bar{\mathbf{X}})}.    (9)
In Eq. (9), n is the number of dimensions, X̄ is the nominal mean vector, x is the vector of a particular point and C is the covariance matrix. Here the nominal mean vector X̄ is taken to be the position of the mobile robot calculated by dead reckoning. From the above equation, the position error of the robot can be estimated from the covariance matrix of each movement. However, it is difficult to obtain the covariance matrices through experiments for all possible cases of movement. Thus, a simplification is made based on the following assumption: although dead reckoning is subject to many types of error, all the errors occurring during movements, excluding wheel slips when the robot starts and stops, are related to the types of movements. From this assumption, we can determine the covariance matrix C_A when the robot performs motion A as follows:
C_A = K_A C_M + C_S    (10)

where C_M is the covariance matrix of a movement (3 × 3), C_S is the covariance matrix of starting and stopping (3 × 3), and K_A is a matrix determined by the amount and type of movement of the mobile robot (3 × 3). Generally, since the error from the starting or stopping motion is negligibly small compared with the error from the movement itself, Eq. (10) can be approximated by Eq. (11):

C_A = K_A C_M.    (11)

Now, if we restrict the type of movement to only those cases that can be measured experimentally, the covariance matrix for each type of movement can be obtained from the predetermined covariance matrix of that movement type and the amount of that movement. If the movement A is one of the predetermined types and C_M is the covariance matrix for a unit movement determined in advance by experiments, K_A of Eq. (11) becomes

K_A = I \cdot (\text{amount of } A)^2    (12)

where I is the (3 × 3) unit matrix.

Fig. 2. Approximate transformations of the robot position.9
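A sketch of the compounding step of Eq. (8) and the scaling of Eqs (11) and (12) is given below. The compounding formula and its Jacobian follow Ref. 9, while the unit-motion covariance used in the example is only an illustrative value of the same order as the figures later reported in Table 3.

```python
import numpy as np

def compound(pose1, cov1, pose2, cov2):
    """Compound two uncertain relative transformations (x, y, theta), Eq. (8).

    pose1, cov1: first transformation and its 3x3 covariance;
    pose2, cov2: transformation of the following movement and its covariance.
    Returns the compounded pose and its first-order covariance J diag(C1, C2) J^T."""
    x1, y1, t1 = pose1
    x2, y2, t2 = pose2
    c, s = np.cos(t1), np.sin(t1)
    pose3 = np.array([x1 + c * x2 - s * y2,
                      y1 + s * x2 + c * y2,
                      t1 + t2])
    # Jacobian of the compounding operation with respect to (pose1, pose2).
    J = np.array([[1, 0, -s * x2 - c * y2, c, -s, 0],
                  [0, 1,  c * x2 - s * y2, s,  c, 0],
                  [0, 0,  1,               0,  0, 1]])
    big_cov = np.block([[cov1, np.zeros((3, 3))],
                        [np.zeros((3, 3)), cov2]])
    return pose3, J @ big_cov @ J.T

def motion_covariance(amount, unit_cov):
    """Eqs (11)/(12): covariance of a motion of the given amount, scaled from the
    experimentally identified unit-motion covariance C_M (start/stop error neglected)."""
    return (amount ** 2) * unit_cov     # K_A = I * (amount of A)^2

# Example: two successive 1 m straight moves with an assumed unit covariance.
C_M = np.diag([0.042, 0.020, 0.044]) ** 2
pose, cov = compound([1.0, 0.0, 0.0], motion_covariance(1.0, C_M),
                     [1.0, 0.0, 0.0], motion_covariance(1.0, C_M))
print(pose, np.sqrt(np.diag(cov)))
```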
3. BAYESIAN UPDATE MODEL CONSIDERING POSITION UNCERTAINTY

In the earlier Bayesian updating model, P(M | o_i ∩ A) was calculated based on the distance between the sensor and the center of each cell. As shown in Fig. 3, however, an object in a cell that is farther from the sensor can actually be closer to the sensor than an object in a nearer cell. This is due to the square shape of the cells, and we call it "cell ordering uncertainty". Therefore, P(M | o_i ∩ A) must be reconsidered to take this uncertainty into account, as discussed below.

Fig. 3. Ordering uncertainty.

When the order of the cells is A, B, C, D and E (according to the distance from the sensor), let the probabilities of the beam halting at each cell be p_A, p_B, p_C, p_D and p_E, and let the probability that the beam halting at C gives a range value be p_{HC}. In the conventional update model, p_{HC} is

p_{HC} = \bar{p}_A \bar{p}_B p_C.    (13)

To consider cell ordering uncertainty, we introduce a new term c_B which represents the probability that an object in cell C blocks an object in cell B. Using this probability, Eq. (13) becomes

p_{HC} = \bar{p}_A \bar{p}_B p_C \bar{c}_B + \bar{p}_A p_C c_B = \bar{p}_A p_C (\bar{p}_B \bar{c}_B + c_B).    (14)

In the same manner, including the probabilities that an object in cell C blocks an object in cell D, E or A (c_D, c_E and c_A, respectively), p_{HC} is modified as below:

p_{HC} = p_C\, (\bar{p}_A \bar{c}_A + c_A)(\bar{p}_B \bar{c}_B + c_B)(\bar{p}_D \bar{c}_D + c_D)(\bar{p}_E \bar{c}_E + c_E).    (15)

By generalizing c_A, c_B and so on, we define a cell ordering function P_ord(A, B, x, y) which represents the probability that cell A blocks cell B when the sensor is at (x, y). Since a direct derivation of this function from the geometric relationships is too complicated to formulate, an approximate function was obtained by Monte Carlo simulation. For the simulation we generated one hundred million random points for the obstacle and sensor positions and fitted the results to a function of the following form:

P_{ord}(A, B, x, y) = \frac{1}{1 + e^{-3.6 - 2d}},    (16)

where

d = \frac{(\text{distance between the sensor and the center of cell } B) - (\text{distance between the sensor and the center of cell } A)}{\text{diagonal length of a cell}}.

The uncertainty of the robot position makes the sensor footprint itself uncertain. This can be taken into account by expanding the sensor footprint as in Fig. 4, and the probability in the expanded region is then updated using the cell ordering function. When the robot position from dead reckoning is (X, Y), the probability p(x, y) that the robot is located at (x, y) can be obtained from Eq. (9). Reducing Eq. (9) to the two-dimensional case, the probability becomes

p(x, y) = \frac{1}{2\pi \sqrt{\det C}}\, e^{-\frac{1}{2}(\mathbf{x} - \bar{\mathbf{X}})^{T} C^{-1} (\mathbf{x} - \bar{\mathbf{X}})},    (17)

where \mathbf{x} = (x, y), \bar{\mathbf{X}} = (X, Y) and C is the (2 × 2) position covariance matrix.

Fig. 4. Expansion of the sensor footprint (dead reckoned robot position, sensor footprint at the dead reckoned robot position, expanded sensor footprint, and possible robot positions within the uncertainty ellipse).

Now, let P_blk(A, B) be the cell ordering function considering robot position uncertainty. Because p(x, y) is a probability density function which gives a true probability when integrated over its entire domain, P_blk(A, B) can be calculated by Eq. (18). We do not consider the effect of angular position uncertainty in calculating the cell ordering function; since the cell ordering function depends on the distance between a cell and the sensor, the effect of angular position uncertainty is negligible.

P_{blk}(A, B) = \iint P_{ord}(A, B, x, y)\, p(x, y)\, dx\, dy.    (18)

To integrate the above equation numerically, we take the 90% confidence region of the robot position:

P_{blk}(A, B) \approx \frac{1}{0.9} \iint_{\text{Ellipse}} P_{ord}(A, B, x, y)\, p(x, y)\, dx\, dy,    (19)

where the factor 1/0.9 compensates for restricting the integration to the 90% confidence region (the ellipse).
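The sketch below evaluates Eqs (16)-(19) numerically for one pair of cells. It is an illustration under stated assumptions: the sigmoid coefficients follow Eq. (16) as reconstructed above, the 90% confidence bound uses the chi-square value 4.605 for two degrees of freedom, the 0.35 m cell diagonal corresponds to the 0.25 m cells used later in Section 4.3, and all function names are ours.

```python
import numpy as np

CHI2_90_2DOF = 4.605      # Mahalanobis^2 bound of the 90% confidence ellipse (2 dof)

def p_ord(dist_a, dist_b, cell_diag):
    """Eq. (16): fitted cell ordering function; d is the distance difference in
    units of the cell diagonal (coefficients as reconstructed from the paper)."""
    d = (dist_b - dist_a) / cell_diag
    return 1.0 / (1.0 + np.exp(-3.6 - 2.0 * d))

def p_blk(cell_a, cell_b, mean, cov, cell_diag=0.35, n_grid=41):
    """Eq. (19): cell ordering function under position uncertainty, integrated
    numerically over the 90% confidence ellipse of the dead-reckoned position."""
    sx, sy = np.sqrt(np.diag(cov))
    xs = np.linspace(mean[0] - 3 * sx, mean[0] + 3 * sx, n_grid)
    ys = np.linspace(mean[1] - 3 * sy, mean[1] + 3 * sy, n_grid)
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    cov_inv, norm = np.linalg.inv(cov), 2 * np.pi * np.sqrt(np.linalg.det(cov))

    total = 0.0
    for x in xs:
        for y in ys:
            r = np.array([x - mean[0], y - mean[1]])
            m2 = r @ cov_inv @ r
            if m2 > CHI2_90_2DOF:          # outside the 90% confidence ellipse
                continue
            p_xy = np.exp(-0.5 * m2) / norm                     # Eq. (17)
            da = np.hypot(*(np.asarray(cell_a) - [x, y]))       # sensor-to-cell distances
            db = np.hypot(*(np.asarray(cell_b) - [x, y]))
            total += p_ord(da, db, cell_diag) * p_xy * dx * dy
    return total / 0.9                                          # Eq. (19)

# Probability that a cell 2.0 m ahead blocks one nominally 2.1 m ahead,
# for an assumed 2x2 position covariance.
print(p_blk((2.0, 0.0), (2.1, 0.0), mean=(0.0, 0.0), cov=np.diag([0.05, 0.03]) ** 2))
```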
Table 1. Assumed error in dead reckoning for simulation

Direction                         Move along straight line (%)    Turn (%)
Position error in x direction*    3                               0
Position error in y direction*    2                               0
Angular error*                    0                               2

*Confidence interval = 1.96σ.

Fig. 5. Model map for simulations (occupied region: P(o) = 1.0; empty region: P(o) = 0.0; inside of obstacle, unknown region: P(o) = 0.5).
Now, instead of using the cells sorted by range values as in Eq. (5), we introduce the cell ordering function [Eq. (15)], evaluated with the position uncertainty as in Eq. (19). Then a new formula for P(M | o_i ∩ A) can be obtained as follows:
P(M | o_i \cap A) = \sum_{n=0} \left[ \left( \prod_{k=j}^{j+n-1} \big( P(\bar{H}_k | o_i \cap A)\,(1 - P_{blk}(j+n, k)) + P_{blk}(j+n, k) \big) \right) P(H_{j+n} | o_i \cap A)\, P(H_{j+n} | M) \right].    (20)
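Relative to the earlier footprint sketch, the only change required by Eq. (20) is the factor inside the product: a nearer cell k fails to stop the beam either because the beam passes it or because, owing to position uncertainty, it does not actually lie in front of the halting cell. A minimal sketch of that modification, reusing the assumed sensor constants of the earlier sketch, is:

```python
def measurement_likelihood_blk(p_occ, p_halt_m, p_blk_matrix, i, cell_i_occupied):
    """Eq. (20): like measurement_likelihood above, but every 'beam passes cell k'
    factor is replaced by the blocking-aware factor of Eq. (20).
    p_blk_matrix[n][k] stands for P_blk(cell n, cell k) obtained from Eq. (19)."""
    total = 0.0
    for n in range(len(p_occ)):
        occ_n = 1.0 if (n == i and cell_i_occupied) else (0.0 if n == i else p_occ[n])
        halt_n = 0.9 * occ_n + 0.05 * (1.0 - occ_n)            # assumed sensor constants
        pass_prob = 1.0
        for k in range(n):
            occ_k = 1.0 if (k == i and cell_i_occupied) else (0.0 if k == i else p_occ[k])
            not_halt_k = 1.0 - (0.9 * occ_k + 0.05 * (1.0 - occ_k))
            # Cell k is either truly in front of cell n and must be passed, or it is
            # effectively blocked/behind due to position uncertainty and cannot stop the beam.
            pass_prob *= not_halt_k * (1.0 - p_blk_matrix[n][k]) + p_blk_matrix[n][k]
        total += pass_prob * halt_n * p_halt_m[n]
    return total
```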
4. EXPERIMENTAL VERIFICATION
4.1. Computer simulation of position error compensation
In this section, a comparative assessment is given between the map constructed by the proposed method and that constructed by the conventional method. The mobile robot concerned has the same dimensions as AMROYS-II, which will be described in the next section, and was assumed to have the error characteristics shown in Table 1. Four different cases were simulated under the conditions shown in Table 2; among them, Case 3 employs the new model proposed in this paper. The ideal map used in the simulation is shown in Fig. 5, and the simulation results are presented in Fig. 6. The lines in the reconstructed maps represent the locus of the robot movement; the letter "S" stands for the start position and "E" for the final position of the robot. The cells in the reconstructed maps are filled with circles of various radii according to the occupancy probability, and the scale is given in Fig. 6. As a quantitative measure of the reconstructed map quality, we adopt a weighted match function5 defined as follows:
Match(o) = \log_2 \left( \prod_{i} \big( P(o_i)_{model} P(o_i)_{recon} + P(\bar{o}_i)_{model} P(\bar{o}_i)_{recon} \big) \right)    (21)

Match(e) = \log_2 \left( \prod_{j} \big( P(o_j)_{model} P(o_j)_{recon} + P(\bar{o}_j)_{model} P(\bar{o}_j)_{recon} \big) \right)    (22)

\text{Weighted match} = \frac{Match(e) \cdot n(o) + Match(o) \cdot n(e)}{n(e) + n(o)}    (23)

where i and j run over the occupied and empty cells of the ideal map, respectively, and n(o) and n(e) represent the numbers of occupied and empty cells in the ideal map.
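A small sketch of the weighted match computation of Eqs (21)-(23) is given below, assuming the ideal (model) map contains only the values 1.0, 0.0 and 0.5.

```python
import numpy as np

def weighted_match(p_model, p_recon):
    """Eqs (21)-(23): weighted match between the ideal (model) map and a
    reconstructed map; both arguments are occupancy-probability grids of equal shape."""
    occupied = p_model > 0.5                    # cells occupied in the ideal map
    empty = p_model < 0.5                       # cells empty in the ideal map
    agree = p_model * p_recon + (1 - p_model) * (1 - p_recon)

    match_o = np.sum(np.log2(agree[occupied]))  # Eq. (21): log of a product = sum of logs
    match_e = np.sum(np.log2(agree[empty]))     # Eq. (22)
    n_o, n_e = occupied.sum(), empty.sum()
    return (match_e * n_o + match_o * n_e) / (n_e + n_o)   # Eq. (23)

# Perfect agreement gives 0; disagreement drives the value negative.
ideal = np.array([[1.0, 0.0], [0.0, 1.0]])
print(weighted_match(ideal, np.array([[0.9, 0.1], [0.2, 0.8]])))
```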
Table 2. Conditions of the simulation of position error compensation

Case number    Position error in movement    Position error compensation    Map updating method    Expansion of occupied region
1              No                            No                             Old                    None
2              Yes                           No                             Old                    None
3              Yes                           Yes                            New                    Exactly by ATs
4              Yes                           Yes                            Old                    = major axis
Fig. 6. Simulation result of the position error compensation: (a) Case 1 (weighted match = -99.1); (b) Case 2 (weighted match = -202.6); (c) Case 3 (weighted match = -117.1); (d) Case 4 (weighted match = -175.3).
Since Case 1 was simulated under an assumption of no position error, the resulting map shows the highest quality with the best weighted match value, as expected. Comparing Case 1 with Case 2 reveals that the robot position uncertainty considerably degraded the map quality when the old Bayesian model was used. The use of the proposed scheme (Case 3) enhances the map quality close to that of Case 1. Case 4 is not efficient enough to enhance the quality of the map; by unnecessary expansion of the occupied region, it becomes more difficult to update the occupancy probability. Figure 7 compares the quality of the reconstructed maps using the weighted matches with respect to moving distance. During the first few steps, there are no big differences between the four methods because the accumulated position error is small. As the robot moves a long distance, the plot shows that the new method distinguishes itself in maintaining map quality, in spite of the position uncertainty.
Fig. 7. Comparison between the weighted matches for various cases (weighted match vs distance of movement, m).
Fig. 8. Physical dimensions of the AMROYS-II.
Fig. 9. Sonar sensor configuration of the AMROYS-II.
4.2. Robot hardware
The mobile robot used in this research was built in our laboratory and named AMROYS-II (autonomous mobile robot of Yonsei-II). AMROYS-II has the appearance shown in Fig. 8 and consists of four wheel actuators, a microcomputer for control, an ultrasonic sensor system, and batteries for the power supply. Four Ilonator wheels6 actuated by independent servos enable AMROYS-II to move straight and turn in any direction simultaneously.
Fig. 11. Position error of the mobile robot: (a) after moving 1 m; (b) after turning 1 rad (axes: variance in the x direction, variance in the y direction).
Fig. 10. Coordinate system of the AMROYS-II.

Table 3. Statistical properties of position error (units: meters and radians)

Variable                            Moving 1 m    Turning 1 rad
Standard deviation of x             0.0423        0.0186
Standard deviation of y             0.0202        0.0173
Standard deviation of angle         0.0437        0.0220
Covariance of x and y               0.00020       -0.00005
Covariance of x and angle           0.00030       0.00006
Covariance of y and angle           0.00002       -0.00007
Correlation between x and y         0.2343        -0.1427
Correlation between x and angle     0.1640        0.1569
Correlation between y and angle     0.0272        -0.1719
Fig. 12. Real map of the experimental environment, with the Case 1 and Case 2 trajectories (occupied region: P(o) = 1.0; empty region: P(o) = 0.0; inside of obstacle, unknown region: P(o) = 0.5).
The microcomputer has an 80386 CPU and conducts the mapping, path planning and position control, including dead reckoning. The ultrasonic sensing controller is the Proximity Subsystem of TRC,10 which is equipped with 24 Polaroid sonar sensors.11 These sensors are arranged around the side of the robot as in Fig. 9, where the numbers stand for the firing sequence of the sensors. Robot motion is restricted to two types: moving along a straight line or turning without moving, that is, pure translation or pure rotation. The sonar sensors are operated only during translation, and the maximum sensing distance is 4 m. The coordinate system for the mobile robot is assigned as in Fig. 10.

4.3. Experimental verification
First, we found the covariance matrix of the robot by experiments. A set of experiments consists of only one point-to-point straight or rotating motion, to exclude the effects of accumulated error. We measured the real distance and angle in the experimental environment by triangulation, and then normalized the measured values by the position acquired from dead reckoning to compensate for control errors such as overshoot. The results of 75 sets of experiments on straight and rotating motion are shown in Fig. 11, and the statistical properties of the position and angular errors are listed in Table 3. It is found that the robot has considerable position errors but that little correlation exists between the variables. These values are used for map updating in the real world.
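The identification step just described can be summarized by the following sketch; the 75 poses used in the example are synthetic stand-ins for the measurements, not the experimental data.

```python
import numpy as np

def motion_error_covariance(measured_poses, dead_reckoned_poses):
    """Sample mean and covariance of the per-motion error (x, y, theta), the
    quantities summarized in Table 3. Each row pairs the true pose (from
    triangulation) with the dead-reckoned pose of one straight or turning motion."""
    errors = np.asarray(measured_poses) - np.asarray(dead_reckoned_poses)
    # Normalizing against dead reckoning removes control errors such as overshoot.
    return errors.mean(axis=0), np.cov(errors, rowvar=False)

# Synthetic stand-in for the 75 experimental runs described above.
rng = np.random.default_rng(0)
true_poses = rng.normal([1.0, 0.0, 0.0], [0.04, 0.02, 0.04], size=(75, 3))
dr_poses = np.tile([1.0, 0.0, 0.0], (75, 1))
mean_err, cov = motion_error_covariance(true_poses, dr_poses)
print(np.sqrt(np.diag(cov)))        # compare with the standard deviations in Table 3
```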
Figure 12 shows the experimental environment, a 12 × 6 m² classroom with a smooth floor. The reconstructed maps are shown in Figs 13 and 14. In both maps the cell size is 0.25 × 0.25 m² and the workspace of the robot is confined to a 7 × 11 m² area; the probability map is therefore composed of a grid of 28 × 44 cells. For both experiments, the robot was moved through a predetermined trajectory, and the line segments in the reconstructed maps stand for the locus of the robot movement. The weighted match values of both reconstructed maps indicate that the new method can build more accurate maps, and the comparison in regions far from the starting position shows an even larger difference in the weighted match values. The robot position error is more sensitive to angular error than to translational error. In our first experiment, shown in Fig. 13, the robot undergoes more turning motions and is therefore subjected to greater position error; the difference between the two methods ends up being larger than in the experiment shown in Fig. 14. The reconstructed map from the new method shows a clearer image of the center obstacle in Fig. 13. Also, in Figs 13 and 14, the wall reconstructed around the final position of the robot using the new method represents a better result. This implies that the new method improves the quality of the map after long-distance movement.
5. CONCLUSION
In this paper, a new mapping method considering robot position uncertainty was proposed. We surveyed the effects of the position error of a mobile robot on the map quality, and improved the earlier Bayesian mapping method by applying the approximate transformation and by establishing a cell ordering function. These approaches were implemented on our mobile robot and evaluated through simulations and experiments in a real environment. Although the task of improving the computational speed remains, the method enhanced the quality of the sonar probability map.
Fig. 13. Experimental result of the new Bayesian method (Case 1): (a) by the conventional Bayesian update method (weighted match = 334.2); (b) by the new Bayesian update method (weighted match = 214.4).
Fig. 14. Experimental result of the new Bayesian method (Case 2): (a) by the conventional Bayesian update method (weighted match = 149.5); (b) by the new Bayesian update method (weighted match = 126.3).
REFERENCES
1. Borenstein, J., Koren, Y.: The vector field histogram - fast obstacle avoidance for mobile robots. IEEE Trans. Robotics Automn RA-7: 278-288, 1991.
2. Borenstein, J., Koren, Y.: Histogramic in-motion mapping for mobile robot obstacle avoidance. IEEE Trans. Robotics Automn RA-7: 535-539, 1991.
3. Cho, D. W.: Certainty grid representation for robot navigation by a Bayesian method. Robotica 8: 159-165, 1990.
4. Elfes, A.: Sonar-based real-world mapping and navigation. IEEE Trans. Robotics Automn RA-3: 249-265, 1987.
5. Lim, J. H., Cho, D. W.: Physically based sensor modeling for a sonar map in a specular environment. In IEEE International Conference on Robotics and Automation, Nice, France. 1992, pp. 1714-1719.
6. McKerrow, P. J.: Introduction to Robotics. Sydney, Addison-Wesley. 1991.
7. Moravec, H. P., Cho, D. W.: A Bayesian method for certainty grids. AAAI Spring Symposium on Robot Navigation, Stanford. 1989, pp. 57-60.
8. Moravec, H. P., Elfes, A.: High resolution maps from wide angle sonar. IEEE International Conference on Robotics and Automation, St Louis. 1985, pp. 116-121.
9. Smith, R. C., Cheeseman, P.: On the representation and estimation of spatial uncertainty. Int. J. Robotics Res. 5(4): 56-68, 1986.
10. Proximity Subsystem User Manual. Release 4.6G, Danbury, CT, Transition Research Corporation. 1991.
11. Ultrasonic Ranging System. Cambridge, MA, Polaroid Corporation.