Robotics and Autonomous Systems 54 (2006) 159–164 www.elsevier.com/locate/robot
Appearance-based concurrent map building and localization

J.M. Porta*, B.J.A. Kröse

IAS Group, Informatics Institute, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands

Received 2 October 2004; accepted 13 September 2005. Available online 9 December 2005.

* Corresponding author. Present address: Institut de Robòtica i Informàtica Industrial (UPC-CSIC), Llorens i Artigas 4–6, 08028 Barcelona, Spain. Tel.: +34 93 4015751; fax: +34 93 4015750. E-mail addresses: [email protected] (J.M. Porta), [email protected] (B.J.A. Kröse).

doi:10.1016/j.robot.2005.09.025
Abstract

In appearance-based robot localization, the environment map does not represent geometrical features but consists of an appearance map: a collection of robot poses and corresponding sensor observations. In this paper, we describe a concurrent map-building and localization (CML) system based on a multi-hypothesis tracker that is able to autonomously build and refine the appearance map required for localization as the robot moves in the environment. The results included in this paper validate our approach.
© 2005 Published by Elsevier B.V.
Keywords: Appearance-based localization; Concurrent map building and localization
1. Introduction

Autonomous robot localization requires some kind of representation or map of the environment. If we pay attention to the type of features included in the map, robot localization methods can be divided into two families: geometric methods [8,15,17] and methods based on appearance modeling of the environment [6,12,14,20]. Geometric localization methods rely on the assumption that geometric information (i.e., the position of landmarks, etc.) can be accurately extracted from the sensor readings. However, this transformation is, in general, complex and prone to errors. In appearance-based methods, by contrast, the environment is not modeled geometrically but as an appearance map, that is, a collection of sensor readings obtained at known positions. The advantage of the appearance-based approach is that the sensor readings obtained at a given moment can be matched directly with the observations stored in the appearance map; thus, we can obtain pose information without any intermediate process to extract geometric information (see [20] for more details). A comparison between two localization methods, one geometric and one appearance-based, both using vision as sensory input, can be found in [23]. It shows that the appearance-based method is more robust to noise, to certain types of occlusion, and to changes in illumination (when an edge detector is used to pre-process the images) than the geometric method.
The main drawback of appearance-based methods is that the construction of the appearance map is usually a supervised process that can be quite time-consuming and that remains valid only as long as no important modifications of the environment occur. While much work has been done on concurrent mapping and localization (CML) using landmarks [7,9,16,25], this is not the case within the appearance-based approach. Recent work in this line [22] does not exploit all the potential of the appearance-based framework, such as, for instance, the ability to perform global localization (i.e., localization without any prior information on the robot's position).
In this paper, we replace the supervised map of the environment used in appearance-based localization by an approximation to it obtained autonomously by the robot. The basic idea we exploit is that, if the robot re-visits an already explored area, it can use the information previously stored to reduce the uncertainty in its position. Additionally, the improvements in the robot's position can be back-propagated to map points stored in previous time slices using trajectory reconstruction techniques. The result is a correction of both the robot's position and the map; thus, we achieve the objective of concurrently localizing and building a map of the environment. Similar ideas are exploited in map building based on cyclic trajectories [2,10].
However, these works aim at building a geometric map of the environment and not an appearance-based one.
In the following sections, we first describe how to estimate the position of the robot, assuming that we have a map. After that, we describe how to extract features from the input images and how to approximate on-line the feature-based map that is necessary for localization. Next, we show the preliminary results obtained with the new CML system and, finally, we summarize our work and draw some conclusions from it.

2. Robot position estimation

Probabilistic localization methods aim at estimating the pose (position and orientation) of the robot at time t, denoted $x_t$, taking into account the movements of the robot $\{u_1, \ldots, u_t\}$ and the observations of the environment taken by the robot $\{y_1, \ldots, y_t\}$ up to that time. In our notation, the Markov process goes through the sequence
$$x_0 \xrightarrow{u_1} (x_1, y_1) \xrightarrow{u_2} \cdots \xrightarrow{u_t} (x_t, y_t).$$
The Markov assumption states that the robot's pose can be updated from the previous state probability $p(x_{t-1})$, the last executed action $u_t$, and the current observation $y_t$. Applying Bayes' rule,
$$p(x_t \mid u_t, y_t) \propto p(y_t \mid x_t)\, p(x_t \mid u_t), \qquad (1)$$
where the probability $p(x_t \mid u_t)$ can be computed by propagating $p(x_{t-1} \mid u_{t-1}, y_{t-1})$ through the action model
$$p(x_t \mid u_t) = \int p(x_t \mid u_t, x_{t-1})\, p(x_{t-1} \mid u_{t-1}, y_{t-1})\, \mathrm{d}x_{t-1}. \qquad (2)$$
Eqs. (1) and (2) define a recursive system for estimating the position of the robot.
To compute the integral in Eq. (2), we have to make some assumption on the representation of $p(x_{t-1} \mid u_{t-1}, y_{t-1})$. Sometimes this probability is represented as a Gaussian [15], but this is a rather restrictive assumption on its shape. When a probabilistic occupancy grid [3,26] or a particle filter [19,28] is used, we can represent probability distributions of any shape. However, occupancy grids and particle filters are computationally expensive in both memory and execution time. In our work, the probability on the state $p(x_{t-1} \mid u_{t-1}, y_{t-1})$ is represented using a Gaussian mixture (GM) $X_{t-1}$ with $N$ components and parameters $X_{t-1} = \{(x_{t-1}^i, \Sigma_{t-1}^i, w_{t-1}^i) \mid i \in [1, N]\}$. As noted by several authors, GMs provide a good trade-off between flexibility and efficiency [1,5,11]. Thus, we have that
$$p(x_{t-1} \mid u_{t-1}, y_{t-1}) \propto \sum_{i=1}^{N} w_{t-1}^i\, \phi(x_{t-1} \mid x_{t-1}^i, \Sigma_{t-1}^i),$$
where $\phi(x_{t-1} \mid x_{t-1}^i, \Sigma_{t-1}^i)$ is a Gaussian centered at $x_{t-1}^i$ with covariance matrix $\Sigma_{t-1}^i$. The weight $w_{t-1}^i$ ($0 < w_{t-1}^i \leq 1$) provides information on the certainty of the hypothesis represented by the corresponding Gaussian.
The motion of the robot is modeled as $x_t = f(x_{t-1}, u_t, v_t)$, where $v_t$ is Gaussian noise with zero mean and covariance $Q$.
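To make the representation concrete, the following is a minimal illustrative sketch of such a Gaussian-mixture state in Python with NumPy; the container and field names are ours, not part of the original formulation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One component of the Gaussian mixture over robot poses."""
    mean: np.ndarray   # pose estimate x^i = (x, y, theta)
    cov: np.ndarray    # 3x3 covariance matrix Sigma^i
    weight: float      # confidence w^i in (0, 1]

# The state X_t is simply a list of hypotheses, e.g. a single
# initial hypothesis at the origin with a small uncertainty.
X_t = [Hypothesis(mean=np.zeros(3),
                  cov=np.diag([0.01, 0.01, 0.005]),
                  weight=1.0)]
```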
Using a linear approximation, we can express $p(x_t \mid u_t)$ in Eq. (2) as the GM resulting from applying $f$ to the elements of $X_{t-1}$, i.e., a GM with the same number of components as $X_{t-1}$ and with parameters $X_{u_t} = \{(x_{u_t}^i, \Sigma_{u_t}^i, w_t^i) \mid i \in [1, N]\}$, with
$$x_{u_t}^i = f(x_{t-1}^i, u_t), \qquad \Sigma_{u_t}^i = F \Sigma_{t-1}^i F^\top + G Q G^\top, \qquad (3)$$
where $F$ is the Jacobian of $f$ with respect to $x_{t-1}^i$ and $G$ is the Jacobian of $f$ with respect to $v_t$.
After we have an approximation of $p(x_t \mid u_t)$, we need to integrate the information provided by the sensor readings, $p(y_t \mid x_t)$, to obtain an estimation of the new robot pose $p(x_t \mid u_t, y_t)$ (see Eq. (1)). Since $p(x_{t-1} \mid u_{t-1}, y_{t-1})$ is represented as a GM, so is $p(x_t \mid u_t, y_t)$. Therefore, the problem is to determine a new GM, $X_t$, from the one representing $X_{t-1}$ and the additional information provided by the sensors.
At this point, we assume we have an appearance map of the environment from which we can define $p(y_t \mid x_t)$. A classical method for approximating the sensor model $p(y \mid x)$ from a supervised training set is kernel smoothing. This method scales linearly with the size of the map and is therefore inefficient for practical applications. For this reason, as proposed by Vlassis et al. [28], we use a GM to approximate the sensor model. The parameters of the GM $X_{y_t} = \{(x_{y_t}^j, \Sigma_{y_t}^j, w_{y_t}^j) \mid j \in [1, N']\}$ are obtained from the map point whose observation is closest to the current one, $y_t$: $x_{y_t}^j$ denotes a pose from which an observation similar to $y_t$ has been obtained, and $\Sigma_{y_t}^j$ and $w_{y_t}^j$ are the associated covariance and weight. In Section 4, we describe how to create and update the map from which $X_{y_t}$ is defined.
If $X_{y_t}$ has no components ($N' = 0$), the estimation of the robot's pose obtained by applying Eq. (3) cannot be improved and we have $X_t = X_{u_t}$. If $N' > 0$, we have to fuse the Gaussian functions in $X_{u_t}$ with those in $X_{y_t}$. The direct application of Eq. (1) amounts to multiplying each of the elements in $X_{u_t}$ (the predicted state after applying the action model) with those in $X_{y_t}$ (the sensor model). This would produce a quadratic ($N \times N'$) number of hypotheses. To keep the number of hypotheses below a reasonable limit, we only associate elements of $X_{u_t}$ and $X_{y_t}$ that are alternative approximations of the same positioning hypothesis. This raises the problem of data association: determining which elements of $X_{y_t}$ and $X_{u_t}$ are to be combined. We perform the data association using an innovation-based criterion. For each couple $(i, j)$ with $(x_{u_t}^i, \Sigma_{u_t}^i, w_t^i) \in X_{u_t}$ and $(x_{y_t}^j, \Sigma_{y_t}^j, w_{y_t}^j) \in X_{y_t}$, we compute the innovation as
$$\upsilon_{i,j} = x_{u_t}^i - x_{y_t}^j, \qquad S_{i,j} = \Sigma_{u_t}^i + \Sigma_{y_t}^j,$$
and we assume that hypothesis $i$ on the robot position and sensor reading $j$ match if the following condition holds
$$\upsilon_{i,j}^\top S_{i,j}^{-1} \upsilon_{i,j} \leq \gamma, \qquad (4)$$
where $\gamma$ is a user-defined threshold. A small innovation $\upsilon_{i,j}^\top S_{i,j}^{-1} \upsilon_{i,j}$ indicates two similar Gaussian functions and, thus, two Gaussian functions that are very likely to refer to the same pose hypothesis.
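As an illustration of Eqs. (3) and (4), the following is a minimal sketch of the prediction and data-association steps in Python with NumPy, reusing the Hypothesis container from the sketch above. The planar odometry model, the function names, and the default threshold are our own assumptions for illustration; the paper does not prescribe a particular motion model.

```python
import numpy as np

def predict(hypotheses, u, Q):
    """Propagate every GM component through a planar odometry model (Eq. (3)).
    u = (dx, dy, dtheta) is the displacement expressed in the robot frame."""
    predicted = []
    for h in hypotheses:
        x, y, th = h.mean
        dx, dy, dth = u
        c, s = np.cos(th), np.sin(th)
        mean = np.array([x + c * dx - s * dy,
                         y + s * dx + c * dy,
                         th + dth])
        # Jacobians of f w.r.t. the state (F) and the motion noise (G)
        F = np.array([[1, 0, -s * dx - c * dy],
                      [0, 1,  c * dx - s * dy],
                      [0, 0,  1]])
        G = np.array([[c, -s, 0],
                      [s,  c, 0],
                      [0,  0, 1]])
        cov = F @ h.cov @ F.T + G @ Q @ G.T
        predicted.append(Hypothesis(mean, cov, h.weight))
    return predicted

def associate(predicted, sensor_components, gamma=9.0):
    """Innovation-based gating (Eq. (4)): return the (i, j) pairs whose
    squared Mahalanobis distance is below the threshold gamma."""
    matches = []
    for i, h in enumerate(predicted):
        for j, s in enumerate(sensor_components):
            v = h.mean - s.mean                          # innovation
            v[2] = (v[2] + np.pi) % (2 * np.pi) - np.pi  # wrap heading difference
            S = h.cov + s.cov                            # innovation covariance
            if v @ np.linalg.solve(S, v) <= gamma:
                matches.append((i, j))
    return matches
```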
If there is a match, we update positioning hypothesis $i$ with the sensor information $j$ using the covariance intersection rule [27], since it permits estimation even with unmodeled correlations in the sensor readings. The more common Kalman update rule would be affected by the correlated noise in the sensor readings that is common when working with real data. The covariance intersection rule defines the new covariance matrix and the new mean as
$$(\Sigma_t^i)^{-1} = (1 - \omega)(\Sigma_{u_t}^i)^{-1} + \omega (\Sigma_{y_t}^j)^{-1},$$
$$x_t^i = \Sigma_t^i \left[ (1 - \omega)(\Sigma_{u_t}^i)^{-1} x_{u_t}^i + \omega (\Sigma_{y_t}^j)^{-1} x_{y_t}^j \right], \qquad (5)$$
where $\omega = |\Sigma_{u_t}^i| / (|\Sigma_{u_t}^i| + |\Sigma_{y_t}^j|)$.
Hypotheses on the state not matched by any Gaussian in $X_{y_t}$ are not modified, except for the weight update described below. Sensor components not matched by any state hypothesis must be introduced as new hypotheses in $X_t$. This allows us to deal with the kidnap problem.
The confidence on each hypothesis in $X_t$ is represented by the corresponding weight $w_t^i$. Following [24], we increase $w_t^i$ for the hypotheses that are properly matched in the data association process,
$$w_t^i = w_{t-1}^i + \alpha (1 - w_{t-1}^i), \qquad (6)$$
where $\alpha$ is a learning rate in the range $[0, 1]$. For the hypotheses that are not matched, the confidence is decreased as
$$w_t^i = (1 - \alpha)\, w_{t-1}^i. \qquad (7)$$
The parameter $\alpha$ (0.1 in our experiments) is a decay factor that sets the speed at which hypotheses are reinforced or forgotten: with $\alpha$ close to 0, the system has a large inertia and hypotheses are reinforced and forgotten very slowly. Hypotheses with a weight below a given threshold $\lambda$ (0.01 in our tests) are removed from $X_t$.
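A sketch of this update step (Eqs. (5)-(7)) in the same illustrative Python setting as above; omega, alpha and lam follow the definitions in the text, while the function names and the in-place weight update are our own choices.

```python
import numpy as np

def fuse(h, s):
    """Covariance intersection update of a matched pair (Eq. (5)).
    h is the predicted state hypothesis, s the matched sensor component."""
    omega = np.linalg.det(h.cov) / (np.linalg.det(h.cov) + np.linalg.det(s.cov))
    P_inv = (1 - omega) * np.linalg.inv(h.cov) + omega * np.linalg.inv(s.cov)
    cov = np.linalg.inv(P_inv)
    mean = cov @ ((1 - omega) * np.linalg.inv(h.cov) @ h.mean
                  + omega * np.linalg.inv(s.cov) @ s.mean)
    return Hypothesis(mean, cov, h.weight)

def update_weights(hypotheses, matched, alpha=0.1, lam=0.01):
    """Reinforce matched hypotheses (Eq. (6)), decay the rest (Eq. (7)),
    and prune hypotheses whose weight falls below the threshold lam."""
    kept = []
    for i, h in enumerate(hypotheses):
        if i in matched:
            h.weight = h.weight + alpha * (1 - h.weight)
        else:
            h.weight = (1 - alpha) * h.weight
        if h.weight >= lam:
            kept.append(h)
    return kept
```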
3. Image feature extraction

Our sensory inputs for localization are images taken by the camera mounted on the robot. A problem with images is their high dimensionality, which results in large storage requirements and high computational demands. To alleviate this problem, Murase and Nayar [18] built their appearance model not in the space of images but in a feature space. They compress images $z$ to low-dimensional feature vectors $y$ using a linear projection $y = W z$, in which the projection matrix $W$ is obtained by principal component analysis (PCA) [13] on a supervised training set $T = \{(x_i, z_i) \mid i \in [1, L]\}$ that includes images $z_i$ obtained at known states $x_i$. However, in our case we do not have a training set. For this reason, we use a standard linear compression technique: the discrete cosine transform (DCT) [21]. We select this transform since PCA on natural images approaches the discrete cosine transform in the limit [4]. The DCT computes the features using a $d \times n$ projection matrix $W$; the $j$-th feature of an image $z$ is
$$y_j = \sum_{k=1}^{n} W_{j,k}\, z_k, \qquad W_{j,k} = \cos\!\left(j \left(k - \tfrac{1}{2}\right) \frac{\pi}{n}\right), \qquad (8)$$
where $j$ is the index of the feature to be extracted, $z_k$ is the $k$-th pixel of image $z$ (considering the image as a vector), and $n$ is the total number of pixels in the image. For a given image, we compute a set of $d$ features, with $d$ typically around 10.
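For illustration, a small sketch of this feature extraction in Python with NumPy follows; it is a direct, unoptimized implementation of Eq. (8), whereas in practice a fast DCT routine (e.g., scipy.fft.dct) would normally be preferred.

```python
import numpy as np

def dct_features(image, d=10):
    """Project a grayscale image onto its first d DCT basis vectors (Eq. (8))."""
    z = np.asarray(image, dtype=float).ravel()   # image as a vector of n pixels
    n = z.size
    k = np.arange(1, n + 1)                      # pixel indices 1..n
    features = np.empty(d)
    for j in range(1, d + 1):
        w = np.cos(j * (k - 0.5) * np.pi / n)    # j-th row of the projection matrix W
        features[j - 1] = w @ z
    return features
```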
4. Environment mapping

The map of the environment used in the appearance-based framework is an approximation of the manifold of features $y$ that can be seen as a function of the pose of the robot $x$, $y = g(x)$. Actually, the objective of appearance-based mapping is to approximate $g^{-1}$, since this gives us information about the possible poses of the robot given a set of features.
Using the localization system introduced in the previous section, at a given time we have a pair $(X_t, y_t)$, where $y_t$ is a feature vector and $X_t$ is a GM that estimates the pose of the robot. We can use the sequence of such pairs obtained as the robot moves as a first approximation to the map needed for localization. Thus, our map $M$ can be defined initially as $M = \{(X_{y_t}, y_t)\}$, where $X_{y_t} = X_t$. As explained in Section 2, when the robot re-observes a given set of features $y$, we can use $X_y$ to improve the estimation of the location of the robot after application of the motion model, $X_{u_t}$. Due to noise, a given observation is never exactly re-observed, so we consider two observations $y$ and $y'$ to be equivalent if
$$\| y - y' \| < \delta. \qquad (9)$$
This discretizes the space of features with granularity $\delta$. However, if $y$ (within $\pm\delta$) is re-observed, we can also use the information the other way around: we can improve $X_y$ using the additional information provided by the current state approximation, $X_{u_t}$. For this, we need to introduce new information ($X_{u_t}$) into a given GM ($X_y$). A procedure to do this is exactly the one described in Section 2, but with the roles swapped: in the previous section we updated the position of the robot $X_t$ using the sensor information $X_y$; now we adjust the map point $X_y$ using the information provided by the position of the robot $X_{u_t}$. So, we can apply the same procedure, but swapping $X_{u_t}$ with $X_y$, $x_{u_t}$ with $x_y$, and $i$ with $j$. The only difference is that we assume the environment to be static and, thus, $X_y$ is not updated by any motion model.
At a given moment, the robot can only be at a single position. So, when the state $X_{u_t}$ includes more than one hypothesis, we are uncertain about the robot's location and, consequently, any map update using $X_{u_t}$ would include incorrect map modifications that we would need to undo later on. To avoid this problem, we use a conservative strategy in the map update: when $X_{u_t}$ is not a single hypothesis (i.e., a single Gaussian), no map update is performed.
If the state update produces an uncertainty reduction in $X_t$ w.r.t. $X_{t-1}$, we back-propagate this uncertainty reduction to previous map points. This back-propagation can result in adding points to the map or in reducing the uncertainty of already existing points. In either case, this map improvement is performed using the normal state and map update procedures, but virtually moving the robot backwards by applying a motion model that undoes the robot's path (registered using the odometry readings).
The back-propagation is carried on as long as there is an error reduction in the updated map points, or up to the time at which the back-propagated hypothesis was initially introduced in the state estimation.
There are two ways in which the uncertainty in $X_t$ can be lower than that in $X_{t-1}$. The first case is when $X_{t-1}$ includes many Gaussian functions but, due to the weight update and the removal of weak hypotheses (Eq. (7)), $X_t$ only contains one Gaussian. The second case is when $X_{t-1}$ is a single Gaussian but the application of the state update procedure (Eq. (5)) results in a reduction of the covariance of this Gaussian. Table 1 summarizes the CML algorithm that we introduce in this paper.
The mapping strategy just outlined allows us to build an appearance-based representation of a given environment along the paths followed by the robot. When the map $M$ is empty, we need to know the initial position of the robot, but when $M$ has some elements, global localization is possible. To perform global localization, we initialize the system with $X_0$ containing a uniform distribution. As soon as the robot observes features already stored in the map (i.e., it visits a previously explored area), new hypotheses are added to $X_t$ using the data stored in the corresponding map points. Hypotheses not reinforced by the observations lose weight and are eventually removed from the state. When the state estimation is reduced to a single Gaussian, we use the back-propagation procedure to add points to the map along the path followed by the robot (which we can now locate w.r.t. the given map).
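To make the map representation concrete, a small sketch of the map lookup based on Eq. (9) is given below, again in the illustrative Python setting used above; the MapPoint container and the function names are our own.

```python
import numpy as np

class MapPoint:
    """One appearance-map entry: a feature vector y and a GM X_y over the
    poses from which y has been observed."""
    def __init__(self, features, hypotheses):
        self.features = features          # DCT feature vector y
        self.hypotheses = hypotheses      # list of Hypothesis (the GM X_y)

def find_map_point(map_points, y, delta):
    """Return the stored map point whose features are closest to y and
    within the granularity delta (Eq. (9)), or None if the observation is new."""
    best, best_dist = None, delta
    for p in map_points:
        dist = np.linalg.norm(p.features - y)
        if dist < best_dist:
            best, best_dist = p, dist
    return best
```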
Table 1. The appearance-based CML algorithm. (In the table, $\|a\|$ denotes the norm of vector $a$, $|B|$ the determinant of matrix $B$, and size($A$) the number of elements of set $A$.)
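Pulling the pieces of Sections 2-4 together, one possible high-level sketch of a single time slice of the CML loop is shown below. It is a rough reconstruction from the text using the illustrative helpers defined above, with assumed default parameter values; it is not the exact listing of Table 1.

```python
def cml_step(X_prev, map_points, u, y, Q, delta=0.5, gamma=9.0, alpha=0.1, lam=0.01):
    """One time slice of the appearance-based CML loop (illustrative sketch)."""
    # 1. Prediction: propagate the state GM through the motion model (Eq. (3)).
    X_u = predict(X_prev, u, Q)

    # 2. Sensor model: look up the map point whose features match y (Eq. (9)).
    point = find_map_point(map_points, y, delta)
    X_y = point.hypotheses if point is not None else []

    # 3. Data association and covariance intersection update (Eqs. (4)-(5)).
    matches = associate(X_u, X_y, gamma)
    X_t = list(X_u)
    for i, j in matches:
        X_t[i] = fuse(X_u[i], X_y[j])

    # 4. Weight update and pruning (Eqs. (6)-(7)).
    X_t = update_weights(X_t, matched={i for i, _ in matches}, alpha=alpha, lam=lam)

    # Sensor components not matched by any state hypothesis start new hypotheses.
    matched_j = {j for _, j in matches}
    X_t.extend(s for j, s in enumerate(X_y) if j not in matched_j)

    # 5. Conservative map update: only when the state is a single hypothesis.
    if len(X_t) == 1:
        if point is None:
            map_points.append(MapPoint(y, [X_t[0]]))
        # else: refine point.hypotheses with X_u (roles of state and map swapped);
        # back-propagation along the stored path would also be triggered here.

    return X_t
```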
5. Experiments and results

We first tested the proposed CML system by mapping a 25 m long corridor. We drove a Nomad Scout robot with a joystick along the corridor and back to the origin. Fig. 1-A shows the path followed by the robot according to odometric information. The global frame of reference is defined at the initial pose of the robot, with the X axis pointing to the front of the robot and the Y axis toward its left. As we can see, when the robot returns to the origin, there is a large error in the final position estimate (about 0.40 m in the X axis, 3 m in the Y axis, and 20° in orientation). In the figure, each arrow corresponds to a robot pose at which an image was taken.
Our CML system detects that the final point in the trajectory is close to the initial one (see Eq. (9)). This coincidence allows a drastic reduction in the uncertainty in the robot's location after the application of Eq. (5), and this reduction can be back-propagated, improving the quality of the map. Fig. 1-B shows the corrected map points after the loop is closed. We can see that the correction affects mainly the points close to the end of the trajectory, where the back-propagated information is more certain than the previously stored information; close to the beginning of the trajectory, the initial information is more reliable than the back-propagated information and, therefore, these map points are not modified. In Fig. 1-B, the light grey arrows correspond to poses where there is perceptual aliasing: the same set of features is observed from all these positions.
Fig. 1. Path of the robot in a 25 m corridor: A — using only odometry; B — the map is corrected when the loop is closed; C — adding new portions to the map (so that the robot performs global localization).
Perceptual aliasing is one of the reasons why we need to keep track of many hypotheses simultaneously.
Fig. 2. Left: The 3.5 × 3 m circuit used for the kidnap experiment. Right: Robot’s trace according to the CML system (solid line) and according to odometry (dashed line). A, B, C, D, O, x1 and x2 are robot poses along the circuit and y are the features of the images taken at pose x2 . All units are in m.
Another situation where multiple hypotheses should be considered is global localization: when the robot is started at an unknown position (but in a previously mapped area) or after a kidnap. In the experiment reported in Fig. 1-C, we started the robot in a non-mapped area and drove it to the beginning of the previously mapped corridor. In this case, the robot operates with a uniform distribution over its pose. When consistent matches with the already existing map points occur, the robot determines its pose, the uniform distribution is replaced by a Gaussian defined from the map, and new map points along the robot's path are added.
The next experiment was aimed at testing the performance of the CML system in the presence of large errors in odometry. We manually drove the robot along a 3.5 × 3 m rectangular circuit, starting at O = (0, 0) (see Fig. 2-Left). The first loop is used to initialize the map. In the second loop, a large error in odometry was artificially induced by slightly lifting the robot and rotating it clockwise when it was at position D = (0, 0.75). After this, we continued to manually drive the robot along the desired path. The kidnapping at D makes the information provided by odometry invalid, and the position of the robot must be estimated by exploiting the map built in the first loop. Fig. 2-Right shows the path of the robot according to odometry (dashed line) and according to our localization method (solid line). After the introduction of the odometry error at point D, the main hypothesis on the robot's pose follows the information provided by the odometry. However, matches between the images taken by the robot and those in the map result in the introduction of a new hypothesis on the robot's position. This new hypothesis is reinforced as the robot gathers more images, while the weight of the hypothesis supported by odometry slowly decreases (Eq. (7)). At time t (see Fig. 2-Right), the odometry-related hypothesis is finally abandoned and the hypothesis supported by the map becomes the estimate of the robot's position. Observe that, after the kidnap, no incorrect map points are added, since the state includes more than one hypothesis.
In the last experiment, we tested the ability of our system to deal with the kidnap problem even while the map is in the early phases of its construction. We moved the robot along the 3.5 × 3 m circuit of Fig. 2-Left. In the first loop, at position A the robot is kidnapped: it is lifted, displaced to position B, and rotated 180°. Thus, according to odometry, the robot moves from A
Fig. 3. Error in localization per time slice when the robot is kidnapped at t = 50.
to C, while it is actually moving from B to O. Due to the kidnap, observations initially assigned to points on the path from A to C are actually taken on the path from B to O. When the robot gets to O, a new hypothesis (the correct one) is introduced into the state. This new hypothesis is reinforced by the observations and, after a few time slices, becomes the most relevant one, and the wrong hypothesis is removed. Observe that this is a case where we corrupt the map with incorrect information but later correct it. We can use an example to show how the map correction works. The set of features y is observed at position x2 but, initially, the incorrect association {(x1, Σ1, w1 = 1), y} is stored in the map. In the second loop, this map point is updated to {(x1, Σ1, w1 = 1), (x2, Σ2, w2 = 1), y}. In the following iterations, the weight of the association (x1, y) decreases, since it is not observed again, while the correct association (x2, y) is reinforced. After a few loops, the association (x1, y) is eventually removed from the map. Fig. 3 shows the error in localization as the robot moves around the circuit. The large errors at time slices 40-90 are caused by the initial kidnap but, as the robot gets to O again, the error is canceled and stays low. Every loop around the circuit takes 100 time slices, but the first one is shorter due to the kidnap.

6. Discussion and future work

We have introduced a system that is able to simultaneously build an appearance map of the environment and use this map to improve the localization of the robot when it re-visits already explored areas. The on-line construction and updating of the map allows us to overcome the major hurdles of traditional appearance-based localization. First, the robot can operate in previously unknown areas and, second, we can deal with changes in the environment, since only observations consistently associated with a given position are kept in the map.
An implicit assumption in our mapping strategy is that the robot moves repetitively through the same areas and paths. However, this is quite a reasonable assumption for service robots moving in relatively small offices or houses, which are the kinds of environments in which we plan to use our system.
Due to the way in which we define the map, the mapping error is small close to the areas where the map is started and grows for points far away from the origin. Actually, the
error for a given map point p is lower bounded by the error in odometry for a direct displacement from the origin of the map O to p. As a reference, our Nomad Scout robot can map an area of 20 × 20 m with an accuracy below 1 m. Since the error in odometry limits the area that we can map with a given accuracy, we would like to complement our localization system with additional odometric sensors (accelerometers, vision-based odometry, etc.) to determine more accurately the relative displacements of the robot.
In this paper we have introduced a novel CML algorithm and presented preliminary results that prove the validity of our approach. However, much work remains to be done to fully analyze and test the algorithm presented here.

Acknowledgments

This work has been partially supported by the European (ITEA) project "Ambience: Context Aware Environments for Ambient Services". J.M. Porta has been partially supported by a Ramón y Cajal contract from the Spanish Ministry for Science and Technology.

References

[1] K.O. Arras, J.A. Castellanos, R. Siegwart, Feature-based multi-hypothesis localization and tracking for mobile robots using geometric constraints, in: Proceedings of the IEEE International Conference on Robotics and Automation, 2002, pp. 1371–1377.
[2] H. Baltzakis, P. Trahanias, Closing multiple loops while mapping features in cyclic environments, in: Proceedings of the International Conference on Intelligent Robots and Systems, IROS, Las Vegas, USA, 2003, pp. 717–722.
[3] W. Burgard, D. Fox, D. Hennig, T. Schmidt, Estimating the absolute position of a mobile robot using position probability grids, in: Proceedings of the National Conference on Artificial Intelligence, AAAI, 1996, pp. 896–901.
[4] R.J. Clarke, Digital Compression of Still Images and Video, Academic Press, 1995.
[5] I.J. Cox, J.J. Leonard, Modeling a dynamic environment using a multiple hypothesis approach, Artificial Intelligence 66 (2) (1994) 311–344.
[6] J. Crowley, F. Wallner, B. Schiele, Position estimation using principal components of range data, in: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA, 1998, pp. 3121–3128.
[7] G. Dissanayake, P. Newman, S. Clark, H. Durrant-Whyte, M. Csorba, A solution to the simultaneous localization and map building (SLAM) problem, IEEE Transactions on Robotics and Automation 17 (3) (2001) 229–241.
[8] A. Elfes, Sonar-based real-world mapping and navigation, IEEE Journal of Robotics and Automation 3 (3) (1987) 249–265.
[9] H.J.S. Feder, J.J. Leonard, C.M. Smith, Adaptive mobile robot navigation and mapping, International Journal of Robotics Research 18 (7) (1999) 650–668 (special issue on Field and Service Robotics).
[10] J.S. Gutmann, K. Konolige, Incremental mapping of large cyclic environments, in: Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, CIRA, 1999, pp. 318–325.
[11] P. Jensfelt, S. Kristensen, Active global localization for a mobile robot using multiple hypothesis tracking, IEEE Transactions on Robotics and Automation 17 (5) (2001) 748–760.
[12] M. Jogan, A. Leonardis, Robust localization using eigenspace of spinning-images, in: Proceedings of the IEEE Workshop on Omnidirectional Vision, Hilton Head Island, South Carolina, 2000, pp. 37–44.
[13] I.T. Jolliffe, Principal Component Analysis, Springer Verlag, 2002.
[14] B.J.A. Kröse, N. Vlassis, R. Bunschoten, Omnidirectional vision for appearance-based robot localization, Lecture Notes in Computer Science 2238 (2002) 39–50.
[15] J.J. Leonard, H.F. Durrant-Whyte, Mobile robot localization by tracking geometric beacons, IEEE Transactions on Robotics and Automation 7 (3) (1991) 376–382.
[16] M. Montemerlo, S. Thrun, D. Koller, B. Wegbreit, FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges, in: Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI, 2003.
[17] H.P. Moravec, Sensor fusion in certainty grids for mobile robots, AI Magazine 9 (2) (1988) 61–74.
[18] H. Murase, S.K. Nayar, Visual learning and recognition of 3-D objects from appearance, International Journal of Computer Vision 14 (1995) 5–24.
[19] M.K. Pitt, N. Shephard, Filtering via simulation: auxiliary particle filters, Journal of the American Statistical Association 94 (446) (1999) 590–599.
[20] J.M. Porta, J.J. Verbeek, B.J.A. Kröse, Active appearance-based robot localization using stereo vision, Autonomous Robots 18 (2004) 59–80.
[21] K.R. Rao, P. Yip, Discrete Cosine Transform: Algorithms, Advantages, Applications, Academic Press, Boston, 1990.
[22] P.E. Rybski, S.I. Roumeliotis, M. Gini, N. Papanikolopoulos, Appearance-based minimalistic metric SLAM, in: Proceedings of the International Conference on Intelligent Robots and Systems, IROS, Las Vegas, USA, 2003, pp. 194–199.
[23] R. Sim, G. Dudek, Comparing image-based localization methods, in: Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, IJCAI, Acapulco, Mexico, 2003.
[24] B. Terwijn, J.M. Porta, B.J.A. Kröse, A particle filter to estimate non-Markovian states, in: Proceedings of the 8th International Conference on Intelligent Autonomous Systems, IAS, 2004, pp. 1062–1069.
[25] S. Thrun, Robotic mapping: A survey, in: Exploring Artificial Intelligence in the New Millennium, Morgan Kaufmann, 2003.
[26] S. Thrun, W. Burgard, D. Fox, A probabilistic approach to concurrent mapping and localization for mobile robots, Machine Learning 31 (1998) 29–53.
[27] J. Uhlmann, S. Julier, M. Csorba, Nondivergent simultaneous map building and localization using covariance intersection, in: Proceedings of the SPIE Aerosense Conference, vol. 3087, 1997.
[28] N. Vlassis, B. Terwijn, B.J.A. Kröse, Auxiliary particle filter robot localization from high-dimensional sensor observations, in: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA, Washington D.C., USA, May 2002, pp. 7–12.
Josep M. Porta received an Engineering degree in Computer Science in 1994 and a Ph.D. in Artificial Intelligence in 2001, both from the Universitat Politècnica de Catalunya. After that, he joined the IAS group of the University of Amsterdam, and he currently holds a post-doctoral position at the Institut de Robòtica i Informàtica Industrial (CSIC-UPC) in Barcelona. He has carried out research on legged robots, machine learning, vision-based methods for autonomous robot localization, and computational kinematics.

Ben Kröse is an Associate Professor at the University of Amsterdam, where he leads a group in Computational Intelligence and Autonomous Systems. He teaches courses on Computational Intelligence and Neurocomputing, and supervises four Ph.D. students and many M.Sc. students. As of 2004, he is a part-time lector at the Institute of Information Engineering, where he leads a group in the field of Digital Life and Smart Homes. He has published numerous papers in the fields of learning and autonomous systems. He is a member of IEEE, the Dutch Pattern Recognition Association, and the Dutch AI Association.