Robot motion planning with uncertainty in control and sensing


Artificial Intelligence 52 (1991) 1-47 Elsevier


Jean-Claude Latombe, Anthony Lazanas and Shashank Shekhar
Robotics Laboratory, Computer Science Department, Stanford University, Stanford, CA 94305, USA
Received April 1988; revised September 1990

Abstract

Latombe, J.-C., A. Lazanas and S. Shekhar, Robot motion planning with uncertainty in control and sensing, Artificial Intelligence 52 (1991) 1-47.

Most of the research in robot motion planning has focused on finding a collision-free path connecting two configurations of the robot among obstacles, under the assumptions that geometric models are complete and accurate and that robot control is perfect. In this paper we consider the problem of planning motion strategies in the presence of uncertainty in both control and sensing, using the preimage backchaining approach. Though attractive, this approach raises difficult computational issues. One of them, preimage computation, is the main topic of this paper. We describe two practical methods for computing preimages for a robot having a two-dimensional Euclidean configuration space. We also propose a combination of these two methods which yields the most powerful practical method devised so far for computing preimages. A motion planner based on these methods has been implemented and experimental results are given in this paper. We discuss non-implemented improvements of this planner, including a new way to embed knowledge in a motion command in order to produce larger preimages.

1. Introduction

One of the ultimate goals of robotics research is to create easily instructable autonomous robots. Such robots will accept high-level descriptions of tasks specifying what the user wants done, rather than how to do it, and will execute them without further human assistance. Progress toward this goal requires advances in many interrelated domains, including automatic reasoning, perception, and real-time control. One of the key topics in robot reasoning is motion planning. It is aimed at providing robots with the capability of deciding what motions to execute in order to achieve goal arrangements of physical objects. During the last ten years, it has emerged as a major research area [23].

Most of the research in robot motion planning has focused on the topological and geometrical problem of finding a collision-free path connecting two configurations of the robot among obstacles. Today, the mathematical and computational structure of the general path planning problem is reasonably well understood, and practical planners have been implemented in more or less specific cases [2, 4, 17, 30]. A major limitation of these planners, however, is that they assume complete and accurate geometric models of the robot and its workspace, and perfect control of the robot. These assumptions are reasonable as long as the errors in the planning models are small with respect to the tolerances of the task constraints. This is the case, for instance, when the motions are performed in a relatively uncluttered workspace and no delicate contact relations have to be established between objects. But there exists a variety of operations (e.g., grasping a part, mating two mechanical parts, navigating in a cluttered environment, docking and parking a vehicle) which cannot be achieved reliably by simply executing preplanned paths. These operations require uncertainty to be taken into account at the planning stage in order to generate motion strategies that combine motion and sensing commands. At execution time, these commands interact and take advantage of various sources of information to reduce uncertainty and lead the robot to the goal reliably.

During the past few years, a trend in artificial intelligence research on autonomous agents interacting with a dynamic and/or uncertain external world has been toward "reactive planning". This trend grew up in reaction to the more traditional approach, which considers planning and execution as two distinct, successive phases.
An extreme position related to this trend has been to build systems that use almost no prediction of future states. But a fundamental difficulty of motion planning with uncertainty in robot control and in sensing is that uncertainty exists not only at planning time, but also at execution time. In most cases, this difficulty cannot be solved through straightforward reactive schemes, since uncertainty is not simply eliminated at execution time, say, by reading sensory inputs. It is necessary for the robot to reason in advance about the knowledge that will become available during motion execution. It must be sure that the execution of the generated plan will make enough knowledge available to either guide the robot toward the goal, or recognize failure and feed back pertinent data to the planner so that the plan can be amended appropriately. A typical notion for planning with uncertainty is that of a landmark, an element of the workspace that can be detected reliably. Sometimes, a direct path to the goal may seem attractive, but if it does not allow enough landmarks to be detected along the way, the robot may fail to attain the goal due to the errors in its planning models. Instead, a less direct path with expected landmarks along its way may allow the robot to maintain sufficient knowledge
on its position relative to objects in the workspace and attain the goal recognizably. Anticipating pertinent landmarks, selecting motion commands that will make them perceptible to the sensors while progressing toward the goal, and combining these commands with appropriate sensing acts requires complex planning.

In this paper we address a limited class of robot motion planning problems with uncertainty. We assume that the robot is the only agent in a static workspace, and that the geometry of the robot and the workspace is completely and accurately known in advance. We thus assume that the only errors are in robot control (i.e., the robot does not perfectly execute the motion commands) and in sensing (i.e., the sensors do not return accurate data). Given an initial subset I of configurations in which the robot is known to be prior to execution and a goal subset G of configurations, the planning problem is to generate a motion strategy whose execution guarantees the robot to move from inside I into G. We want the strategy to be successful whenever the errors in control and sensing stay within some predefined uncertainty bounds.

The most powerful known approach to this kind of planning problem is the preimage backchaining approach originally proposed by Lozano-Pérez, Mason and Taylor [31] and later extended by various researchers [10, 14, 33]. Given a motion command, a preimage of a goal for that command is defined as a subset of starting configurations of the robot from which the motion command is guaranteed to reach the goal (goal reachability) and terminate in the goal (goal recognizability). Preimage backchaining consists of iteratively computing preimages of the goal G, then preimages of computed preimages taken as intermediate goals, for various selected motion commands, until a preimage contains the initial subset I. This very general approach, however, raises difficult computational issues which still prevent its widespread application.
In this paper we address some of them. The core part of the paper is a detailed description of two effective methods for computing preimages in two-dimensional configuration spaces. One method, backprojection from sticking edges, originated in Donald's work [10]. The other, backprojection from goal kernel, originated in a preliminary version of this paper [22]. Both methods proceed from the same general idea introduced by Erdmann [14], which consists of treating goal reachability and goal recognizability separately, and making use of the notion of backprojection, a concept weaker than that of preimage. The second method presents substantial advantages over the first because it usually computes larger preimages. In some cases, however, the first method is preferable. Fortunately, the two methods can be combined in order to obtain the best of each. We have implemented a motion planner based on this combination. We think it is critical for a planner to produce plans with such well-defined characteristics, so that if execution fails, it is possible to diagnose why by reviewing the assumptions made at planning time.
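To make the backchaining idea concrete, here is a minimal sketch in Python. This is our own illustration, not the paper's algorithm: the region representation and the `preimage` and `contains` callbacks are hypothetical placeholders, since computing actual preimages is precisely the hard problem studied in this paper.

```python
# Abstract sketch of preimage backchaining (illustrative only; the hard
# part -- computing preimage(region, command) -- is abstracted away).

def preimage_backchaining(initial, goal, commands, preimage, contains,
                          max_depth=10):
    """Search backward from `goal` for a sequence of motion commands.

    initial, goal : sets of configurations (abstract regions)
    commands      : iterable of candidate motion commands
    preimage(R, m): region from which command m is guaranteed to reach
                    and recognizably terminate in region R (or None)
    contains(R, S): True iff region R contains region S
    Returns a guaranteed command sequence, or None.
    """
    frontier = [(goal, [])]            # (region, commands to run after it)
    for _ in range(max_depth):
        next_frontier = []
        for region, plan in frontier:
            for m in commands:
                p = preimage(region, m)
                if p is None:
                    continue
                if contains(p, initial):
                    return [m] + plan  # found a guaranteed strategy
                next_frontier.append((p, [m] + plan))
        frontier = next_frontier
    return None
```

A toy instantiation with intervals on a line (a "command" d enlarges the preimage leftward by d) already exhibits the backward chaining of intermediate goals.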


We also discuss potential improvements of the implemented planner. One improvement, related to goal recognition, is aimed at computing larger preimages, so that the planner can solve more difficult problems more efficiently. Another improvement, the generation of conditional strategies, is aimed at solving trickier planning problems requiring a choice among multiple courses of action at execution time.

The detailed geometrical algorithms described in this paper require the robot's configuration space to be a two-dimensional Euclidean space. This corresponds to the case where the robot is modeled as a planar rigid object translating in the plane at fixed orientation. However, the concepts underlying our presentation are more general, though extending the geometrical algorithms to higher-dimensional configuration spaces with non-Euclidean structures would require substantial additional work.

Section 2 provides the reader with a broad background on motion planning with uncertainty. Section 3 describes the modeling of a motion planning task in the robot's configuration space, with uncertainty in control and sensing. It gives a detailed formalization of the class of planning problems addressed in this paper. Section 4 is a short overview of the preimage backchaining approach. Section 5, the main section of the paper, develops the two preimage computation methods cited above. It also shows how the two methods can be merged into a more powerful one. Section 6 presents the implemented planner based on these methods and shows some of the experimental results obtained with it. The last two sections (Sections 7 and 8) discuss potential improvements of the planner. Both sections present novel results related to the preimage backchaining approach.

2. Background

Research on robot motion planning became active in the mid-seventies, when the goal of automatically programming robots from geometrical descriptions of tasks was first considered attainable [27, 28, 43]. Since the early eighties, a great deal of effort has been devoted to this domain. Part of this effort was motivated, on the one hand, by the difficulties encountered in using explicit robot programming systems, and on the other hand, by the goal of introducing autonomous robots into hazardous environments (e.g., nuclear sites, space, undersea, mines).

During the last ten years, most of the research effort has been directed toward solving the path planning problem, i.e., the problem of planning motions without uncertainty. This has produced several major results, both theoretical and practical. Theoretical results mostly concern lower and upper bounds of the time complexity of multiple variants of the path finding problem [38, 40]. In particular, it has been shown that planning the motion of a robot
with arbitrarily many degrees of freedom is PSPACE-hard [38]. When the number of degrees of freedom is fixed, algorithms have been proposed which are polynomial in the number of algebraic surfaces bounding the objects and their maximal degree [6, 39]. Some path planning methods have been produced as a side-effect of these results, but they have hardly been implemented. Another important result has been the development of the notion of "configuration space" [1] as a representational tool for formulating and exploring motion planning problems. This notion was popularized by Lozano-Pérez in the early eighties [29] and has given birth to various techniques for computing collision-free paths among obstacles (cf. [4, 18, 25, 30]). Finally, relatively fast path planning algorithms have been defined and implemented. Although these algorithms are usually not complete (they may fail to find a path while one exists), they can solve many practical problems. Faverjon and Tournassoud [17] reported on a system which uses an adaptation of Khatib's potential field method [20] for planning the motion of a manipulator with eight degrees of freedom, operating in the piping environment of a nuclear reactor. More recently, Barraquand and Latombe [2] described a potential-field-based path planner that escapes local minima of the potential function by executing random motions. This planner is quite fast and is able to solve tricky problems for robots with many (more than 10) degrees of freedom. Such practical techniques could bring substantial improvement to the programming of robot operations such as painting, welding, and riveting.

The problem of planning motions in the presence of uncertainty is conceptually more difficult than the path finding problem. It has attracted less attention so far, and fewer results have been produced. Two approaches to this problem have been developed to some extent, in addition to preimage backchaining.
The first of these approaches was proposed independently by Lozano-Pérez [28] and Taylor [43], and is known as the skeleton refining approach. It consists of (1) retrieving a plan skeleton appropriate to the task at hand and taking it as an initial plan; and (2) iteratively modifying the skeleton by inserting complements (typically sensor readings). Complements are decided after checking the correctness of the skeleton, either by propagating uncertainty through the steps of the plan skeleton [43], or by simulating several possible executions [28]. Subsequent contributions to the approach have been made by Brooks [3], who developed a symbolic computation technique for propagating uncertainty forward and backward through plan skeletons, and by Pertin-Troccaz and Puget [35], who proposed techniques for verifying the correctness of a plan and amending incorrect plans.

The second approach to motion planning with uncertainty was proposed by Dufay and Latombe [13], and is known as the inductive learning approach. It
consists of assembling input partial strategies into a global one. First, during a training phase, the system uses the partial strategies to make on-line decisions and execute several instances of the task at hand. Second, during an induction phase, the system combines the execution traces generated during the training phase, and generalizes them into a global strategy. In fact, the training phase and the induction phase are interwoven. The generation of a strategy for the task ends when new executions no longer modify the current strategy. A system based on these principles has been implemented and tested successfully on several part-mating tasks.

Both the skeleton refining and inductive learning approaches deal with uncertainty in a second phase of planning. The plan skeleton and the local strategies used during the first phase could be produced using path planning methods assuming zero uncertainty. The second phase takes uncertainty into account, either by analyzing the correctness of the current plan, or by directly experimenting with the local strategies and combining them into execution traces shaped by actual errors. In contrast, the rationale of preimage backchaining is that uncertainty may affect the overall structure of a plan in such a way that a motion strategy cannot be generated by modifying or composing plans generated assuming no uncertainty. Preimage backchaining is also a much more rigorous approach to motion planning with uncertainty than the other two approaches. On the other hand, preimage backchaining raises difficult computational issues. While there have been practical implementations of the skeleton refining and inductive learning approaches, preimage backchaining is less advanced in that respect. This does not mean that the computational issues which have to be faced with preimage backchaining are completely absent from the other approaches.
Preimage backchaining only makes explicit issues that are hidden in the other approaches because they are more ad hoc. Solving these issues is a prerequisite to implementing preimage backchaining, but not to implementing the other approaches.

The preimage backchaining approach was first presented by Lozano-Pérez, Mason, and Taylor [31]. This early paper set up most of the basic framework. Large portions of Sections 3 and 4 below are based upon it. Mason [33] investigated several methods for searching the graph of preimages, including methods for generating conditional strategies, i.e., plans including conditional branching statements. He also analyzed the correctness and the completeness of the framework. Erdmann [14, 15] contributed to the approach in several ways. In particular, he separated the problem of computing a preimage into two subproblems, reachability and recognizability. By considering reachability alone, he introduced the notion of backprojection. (A backprojection of a goal for a given motion command is a subset of starting configurations of the robot from which the motion command is guaranteed to reach the goal.) The two methods for computing preimages presented in this paper draw upon Erdmann's work.

Donald [10, 11] extended the preimage backchaining approach by considering uncertainty in the initial model of the workspace. The proposed extension consists of adding a dimension to the robot's configuration space for every parameter in the workspace whose value is not known accurately. The resulting space is known as the "generalized configuration space". Donald also introduced the notion of Error Detection and Recovery (EDR) strategies. Unlike the strategies considered in this paper, an EDR strategy is not guaranteed to succeed. However, it is guaranteed to either succeed or fail recognizably. Buckley [5] proposed an application of preimage backchaining to the analysis of the correctness of a given motion strategy. He also described a procedure for planning motion strategies in the forward direction. This procedure discretizes the robot's configuration space into atomic regions based upon the consistent sensory data they should generate, and builds a transition graph between these regions. Buckley implemented a planner operating in a three-dimensional configuration space corresponding to a robot with three translations. The generation of sensorless motion strategies using techniques inspired by preimage backchaining has been investigated by Erdmann and Mason [16]. Canny and Reif [6, 8] proved that the three-dimensional compliant motion planning problem (the kind of problem attacked by the preimage backchaining approach) is nondeterministic exponential-time hard (NEXPTIME-hard). Using decision tools in the theory of the reals, Canny [7] gave an algorithm that computes motion strategies using the preimage backchaining approach when the envelope of the trajectories generated by the robot controller is described algebraically. However, the algorithm takes doubly exponential time in the number of motion commands contained in the generated strategy.
Donald [12] described a less general algorithm for planning motion strategies in the plane, which is singly exponential in the number of commands.

In parallel to the research mentioned above, there has been increasing interest in planning motions of objects in contact with other objects [19, 21, 26, 44]. Like the research on collision-free path planning, most of this work has assumed accurate and complete prior knowledge of the workspace and perfect control of the robot motions. Recently, however, some of the methods developed for planning motions in contact space have been extended to handle some uncertainty [9, 24], in a way similar to that introduced by Buckley [5].
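To illustrate Erdmann's notion of backprojection in the simplest possible setting, consider a toy example of our own devising (not from the paper): free space is empty, the goal is a horizontal segment, and the commanded motion points straight down with directional uncertainty half-angle eta. A start point belongs to the backprojection iff every direction in the uncertainty cone carries it into the goal, which for this axis-aligned case reduces to checking the two extreme rays:

```python
import math

# Toy backprojection test (our own example): goal is the segment from
# (a, 0) to (b, 0), the commanded motion is straight down (-y) with
# directional uncertainty half-angle eta, and free space is empty.
# A start point (x, y) is in the backprojection iff both extreme rays
# of the uncertainty cone land inside the goal segment.

def in_backprojection(x, y, a, b, eta):
    if y <= 0:
        return False
    spread = y * math.tan(eta)      # worst-case lateral drift while descending
    return a <= x - spread and x + spread <= b
```

Note how the backprojection shrinks as the start point moves farther from the goal: the admissible band of x-values narrows linearly with y, which is the trapezoidal shape of backprojections computed in Section 5.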

3. Task modeling

3.1. Configuration space

We are interested in planning the motion of an object A (the robot) in a workspace W populated by obstacles B_i, i ∈ [1, q]. A configuration of A is a specification of the position of every point in A with respect to a coordinate system embedded in W [1]. The configuration space of A, denoted by C, is the set of all the possible configurations of A. Each obstacle B_i maps in C to the subset CB_i of configurations where A intersects B_i, i.e.:

    CB_i = { q ∈ C | A(q) ∩ B_i ≠ ∅ },

where A(q) denotes the subset of W occupied by A at configuration q. The region CB_i is called a C-obstacle.

In general, C is a non-Euclidean manifold. For instance, if A is a rigid planar object moving freely in W = R², then C = R² × S¹, where S¹ denotes the unit circle. If A is a rigid three-dimensional object moving freely in W = R³, then C = R³ × SO(3), where SO(3) denotes the Special Orthogonal Group of orthonormal matrices with determinant +1 [1]. However, in the rest of the paper, things are much simpler from the geometrical point of view. We assume that A is a two-dimensional object that can only translate in the plane R², e.g., an omnidirectional mobile robot that cannot rotate. A configuration is represented as q = (x, y) ∈ R², where x and y are the coordinates of a specific point of A, known as the reference point, with respect to the coordinate system embedded in W. Hence, both W and C are copies of R². Both A and the B_i's are modeled as regular polygonal regions: A as a simple polygon, and each B_i as a region whose boundary is a simple polygonal curve that may, or may not, be a closed-loop curve (allowing for non-bounded obstacles). With these assumptions, each C-obstacle CB_i is also a regular polygonal region.

Figure 1 illustrates the above concepts. A simple setting in the workspace is depicted in Fig. 1(a). The moving object A is a rectangle and there is a single

Fig. 1. Peg-into-hole task.

(By definition, a set is regular iff it is equal to the closure of its interior.)


polygonal obstacle B with a rectangular depression. The goal of the task is to insert A into B's depression ("peg-into-hole" task). Figure 1(b) shows the mapping of this setting into A's configuration space. The moving object A maps to the point denoted by q. The obstacle B maps to the C-obstacle CB. The width of the rectangular depression in CB is equal to the difference between the width of the depression in B and the width of A. The goal of the peg-into-hole task is to move q (the configuration of A) to any location in the edge G at the bottom of CB's depression.

The union of the obstacles, ∪_{i=1..q} B_i, is called the obstacle region and is denoted by B. The union of the C-obstacles, ∪_{i=1..q} CB_i, is called the C-obstacle region and is denoted by CB. The complement of the C-obstacle region in C is called the free space and is denoted by C_free. The subset of configurations q where A(q) intersects the obstacle region without overlapping its interior is called the contact space and is denoted by C_contact.

The union of the free space C_free and the contact space C_contact is called the valid space and is denoted by C_valid. A valid path between two configurations q1 and q2 in C_valid is a continuous map τ : [0, 1] → C_valid such that τ(0) = q1 and τ(1) = q2. C_free is an open subset of C, whose boundary ∂C_free is a set of polygonal curves. C_contact is also made of polygonal curves. It can be shown that ∂C_free ⊆ C_contact and cl(C_free) ⊆ C_valid [19]. In Fig. 1, the strict inclusion of ∂C_free in C_contact occurs when A's width is exactly equal to the width of B's depression. Then the depression in CB degenerates to a line segment contained in C_contact, but not in ∂C_free. In order to avoid such pathological cases, we assume that each maximally connected subset of the C-obstacle region CB is homeomorphic to a closed disc or a closed half-space. This assumption entails that C_contact = ∂C_free and C_valid = cl(C_free); hence, a contact between A and the obstacle region can be broken by an arbitrarily small displacement of A. It also implies that every vertex in the contact space C_contact is the extremity of two edges only; thus, C_contact consists of disjoint simple polygonal lines. An edge (respectively a vertex) in C_contact is called a contact edge (respectively a contact vertex). The outgoing normal of a contact edge E is the unit vector n(E) normal to E pointing toward C_free. The vector −n(E) is the ingoing normal of E. Let n_A be the number of edges of A and n_B the number of edges of the obstacle region. C_contact contains O(n_A² n_B²) edges, which can be computed in O(n_A² n_B² log(n_A n_B)) time [41].
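For a robot translating at fixed orientation, each C-obstacle is the set of vertex-wise differences { b − a : b ∈ B_i, a ∈ A } (a Minkowski difference). As an illustrative sketch of our own (under the simplifying assumptions that both polygons are convex and the reference point is the origin of A's frame), the C-obstacle can be obtained as the convex hull of all pairwise vertex differences; real planners use a linear-time edge-merge instead of this brute-force construction:

```python
# Sketch: C-obstacle of a convex polygonal obstacle B for a convex
# polygonal robot A translating at fixed orientation, reference point
# at the origin of A's frame.  CB = { b - a : b in B, a in A }.

def cross(o, p, q):
    return (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def c_obstacle(A, B):
    """Vertices of the C-obstacle CB for robot A and obstacle B."""
    return convex_hull([(bx-ax, by-ay) for (bx, by) in B for (ax, ay) in A])
```

For a unit-square robot and a unit-square obstacle, the resulting C-obstacle is a square grown by the robot's dimensions, matching the "grown obstacle" intuition of Fig. 1(b).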

(Two topological sets E and F are homeomorphic iff there exists a bijection f : E → F such that both f and f⁻¹ are continuous.)


3.2. Motion command

A motion command M is a pair (CS, TC). CS is called the control statement. Given a starting configuration q_s, it specifies a (nominal) commanded trajectory of A, i.e., a curve:

    τ_c : t ∈ [0, +∞) → τ_c(t) ∈ C_valid ,

with τ_c(0) = q_s and t denoting time. TC is called the termination condition. The controller stops the motion when TC evaluates to true. TC's arguments may be sensory inputs during the execution of the motion and the elapsed time since the beginning of the motion. In the following, we assume that the controller continuously monitors TC during execution and that the motion of A can be stopped instantaneously.

We assume that A's motion is controlled according to the generalized damper compliance model [32, 36, 46], which is less sensitive to errors than pure position control. This means that CS is parameterized by a unit vector v*_θ of orientation θ ∈ S¹, called the commanded velocity. In the absence of control errors, a generalized damper motion command along v*_θ determines a unique trajectory described as follows. As long as the configuration of A is in the free space, it moves at constant velocity v*_θ in a straight line oriented along v*_θ. When it is in a contact edge, the configuration may move away from the edge, slide along it, or stick to it. If v*_θ projects positively along the outgoing normal of the edge, it moves away. Otherwise, assuming a frictionless contact space, A's configuration slides along the projection of v*_θ on the edge if this projection is nonzero, and sticks to the edge if v*_θ points perpendicularly to it. Figure 2

Fig. 2. Generalized damper compliance.

(The fact that these requirements cannot be exactly met by a real system should be taken into account in the specification of the uncertainty.)


illustrates three such trajectories with the same commanded velocity and different starting configurations. Note that they all ultimately reach the same contact vertex.

Friction in the contact space simply results in increasing the range of motion directions that stick to edges. Let μ ≥ 0 be the friction coefficient (Coulomb law). Still assuming no uncertainty in robot control, a generalized damper motion along v*_θ sticks to a contact edge if the magnitude of the angle between −v*_θ and the outgoing normal of the edge is less than or equal to φ = tan⁻¹ μ (0 ≤ φ < ½π), i.e., if −v*_θ lies in the half-cone of angle 2φ whose axis points along the outgoing normal of the edge (this half-cone is called the friction cone). The frictionless case corresponds to φ = 0.

For completeness, we must also define the behavior of A when the configuration is at a contact vertex. We assume that the range of reaction forces at a contact vertex can be any nonnegative linear combination of the reaction forces that can be generated by the two abutting edges. This corresponds to associating a friction cone with the contact vertex that has its sides parallel to the two most extreme rays of the friction cones in the incident edges. When the robot's configuration reaches a contact vertex, the motion sticks at it if −v*_θ lies in the friction cone. Otherwise, the robot consistently moves away from the vertex (in free or contact space) as if it had just reached the vertex by sliding along one of the two adjacent edges.

In the sequel we assume generalized damper control and we designate a motion command by (v*_θ, TC), where v*_θ is the commanded velocity.
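The error-free contact behavior just described (move away, stick, or slide) can be sketched as a small decision procedure. This is our own illustration; directions are given as angles in radians, and the vertex cases are omitted:

```python
import math

# Sketch of the error-free generalized-damper behavior at a contact
# edge: the motion moves away, sticks, or slides, depending on the
# commanded direction theta, the direction `normal` of the edge's
# outgoing normal, and the friction angle phi = atan(mu).

def damper_behavior(theta, normal, mu):
    """theta, normal: directions in radians; mu: friction coefficient.
    Returns 'away', 'stick', or 'slide'."""
    phi = math.atan(mu)
    v = (math.cos(theta), math.sin(theta))
    n = (math.cos(normal), math.sin(normal))
    dot = v[0]*n[0] + v[1]*n[1]
    if dot > 0:
        return 'away'          # v_theta projects positively on the normal
    # Angle between -v_theta and the outgoing normal:
    ang = math.acos(max(-1.0, min(1.0, -dot)))
    return 'stick' if ang <= phi else 'slide'
```

For example, with the outgoing normal pointing up, a command pointing straight into the edge sticks, while a command at 45 degrees into the edge slides if the friction cone is narrower than 45 degrees.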

3.3. Uncertainty in control

Uncertainty in control is modeled as follows. Let v*_θ be the commanded velocity. At any instant during the motion, the effective commanded velocity is a vector v_θ such that the magnitude of the angle between v*_θ and v_θ is less than a fixed angle η < ½π. In other words, v_θ lies in a cone of angle 2η whose axis points along the θ-direction. This cone is called the control uncertainty cone. In this paper we make almost no use of the velocity modulus. For simplification, we therefore assume that there is no uncertainty in the velocity modulus, i.e., ‖v_θ‖ = ‖v*_θ‖. This assumption has no significant impact on the methods described below and could easily be retracted. For further simplification (and without loss of generality) we assume that v_θ (hence, v*_θ) is a unit vector, so that the direction θ suffices to characterize the vector v*_θ (hence, the range of vectors v_θ compatible with v*_θ).

During motion, v_θ may vary arbitrarily, but continuously, between the two extreme orientations of the control uncertainty cone. Thus, if A is in the free space, it moves along a trajectory whose tangent at any configuration is contained in the control uncertainty cone anchored at this configuration. If A's configuration is in the contact space, at a point other than a vertex, it may move away, slide, or
stick, depending on the orientation of the effective commanded velocity v_θ relative to the contact edge. In particular, if all vectors v_θ lying in the control uncertainty cone project positively on the outgoing normal of the edge, then A is guaranteed to move away from the edge. Instead, if the inverted control uncertainty cone is entirely contained in the friction cone, A sticks to the edge; if it is completely outside the friction cone, with all the vectors in the control uncertainty cone projecting positively on the ingoing normal of the edge, A is guaranteed to slide in the direction of the projection of v_θ onto the edge.

3.4. Reaction force

In the generalized damper compliance control model, the robot exerts a force proportional to the effective commanded velocity v_θ, namely f_appl = B v_θ, on its environment. When the robot is in free space, no reaction force is applied to the robot and the force exerted by the robot is entirely used to create motion. When the robot is in contact space and pushes on an obstacle, the obstacle applies a reaction force. If the vector −v_θ lies in the friction cone at the contact configuration, the reaction force is −B v_θ, i.e., it completely cancels the force exerted by the robot, and there is no motion (Fig. 3(b)). If the vector −v_θ lies outside the friction cone, the reaction force only partially cancels the force exerted by the robot and the net resulting force makes the robot's configuration slide in contact space (Fig. 3(c)).

Let q be the actual configuration of the robot at some instant t. We denote by f_reac(q, v_θ) the vector in configuration space representing the reaction force applied at the robot's configuration when the effective commanded velocity is v_θ. f_reac(q, v_θ) is exhaustively defined as follows:

- if q ∈ C_free, then f_reac(q, v_θ) = 0;
- if q ∈ C_contact, q is in the interior of a contact edge E, and v_θ projects positively on the outgoing normal of E, then f_reac(q, v_θ) = 0;
- if q ∈ C_contact, q is in the interior of a contact edge E, and −v_θ lies in the friction cone at the contact point, then f_reac(q, v_θ) = −B v_θ (see Fig. 3(b));

Fig. 3. Reaction force. (a), (b), (c).


• if q ∈ C_contact, q is in the interior of a contact edge E, and -v0' projects positively on the outgoing normal of E, but lies outside the friction cone at the contact configuration, then f_reac(q, v0') = f, with f equal to the projection of -B v0' into the nearest side of the friction cone at q; the projection is taken normal to the axis of the cone (see Fig. 3(c));
• if q ∈ C_contact, q is at a contact vertex, and -v0' lies in the friction cone at this vertex, then f_reac(q, v0') = -B v0';
• if q ∈ C_contact, q is at a contact vertex X, and -v0' does not lie in the friction cone at X, then let f1 and f2 be the reaction forces that would be generated by the two edges abutting X; f_reac(q, v0') = max(f1, f2). (When q is at a contact vertex, the reaction force may not be determined. Then the above definition of f_reac is somewhat arbitrary, but it is consistent with our previous definitions.)
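As a concrete sketch, the edge cases of this definition (moving away, sticking, sliding) can be coded as follows for an interior point of a contact edge; the vertex cases are omitted, and the function name and the 2D vector representation are ours, not the paper's:

```python
import math

# Generalized damper reaction force at an interior point of a contact edge.
# v0 plays the role of the effective commanded velocity v0'; normal is the
# unit outgoing normal of the edge; phi is the friction angle; B the damping.
def freac_edge(v0, normal, phi, B=1.0):
    nx, ny = normal
    tx, ty = -ny, nx                      # unit tangent of the edge
    ux, uy = -B * v0[0], -B * v0[1]       # the force -B*v0'
    u_n = ux * nx + uy * ny               # normal component of -B*v0'
    u_t = ux * tx + uy * ty               # tangential component
    if u_n <= 0.0:
        return (0.0, 0.0)                 # v0' moves away from (or along) the edge
    if abs(u_t) <= math.tan(phi) * u_n:
        return (ux, uy)                   # -v0' inside the friction cone: sticking
    # Sliding: project -B*v0' onto the nearest side of the friction cone,
    # normal to the cone axis (the normal component is preserved).
    s = 1.0 if u_t > 0 else -1.0
    side = (nx * math.cos(phi) + tx * s * math.sin(phi),
            ny * math.cos(phi) + ty * s * math.sin(phi))
    k = u_n / math.cos(phi)
    return (k * side[0], k * side[1])
```

In the sliding case the net force B v0' + f_reac is tangential to the edge, which is what drives the configuration along contact space.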

3.5. Uncertainty in sensing

We assume that the robot A is equipped with two sensors, a position sensor and a force sensor, which we model below. Other sensors could be modeled as well.

The position sensor measures the current configuration of A. The sensed configuration is denoted by q*, while the current actual configuration is denoted by q. The uncertainty in the measurement is modeled as an open disc U_q(q*) ⊂ R² of fixed radius ρ_q centered at the sensed configuration q*. This means that if the position sensor returns q*, the current actual configuration q may be anywhere in U_q(q*) ∩ C_valid. Reciprocally, if the actual configuration is known to be q, the sensed configuration q* must lie in U_q(q).

The force sensor measures the reaction force exerted on A. The output of the sensor is a vector denoted by f*. For simplification, we assume in this paper that the modulus of the measured force is accurate. (Removing this assumption would only make some of the formal expressions given below slightly more complicated, with no significant impact.) On the other hand, the magnitude of the angle between f and f*, if they are nonzero vectors, is less than a fixed angle ε_f < π/2. Hence, f lies in an open cone of angle 2ε_f whose axis points along f*.
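These two sensing models reduce to two simple consistency predicates, sketched below (the function names are ours):

```python
import math

# Position readings are consistent when q and q* are closer than rho_q;
# nonzero force readings are consistent when f and f* differ in direction
# by less than eps_f (the force modulus is assumed accurate).
def position_consistent(q, q_star, rho_q):
    return math.dist(q, q_star) < rho_q

def force_consistent(f, f_star, eps_f):
    nf, ns = math.hypot(*f), math.hypot(*f_star)
    if nf == 0.0 or ns == 0.0:
        return nf == ns                  # consistent only if both are zero
    cosang = (f[0] * f_star[0] + f[1] * f_star[1]) / (nf * ns)
    return math.acos(max(-1.0, min(1.0, cosang))) < eps_f
```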

4. Preimage backchaining

Let I be a subset of C_valid in which it is known at planning time that A's configuration will be when the execution of the motion plan starts. I is called the initial region. Let G be another subset of C_valid input as the goal region. We want the planner to generate a motion plan whose execution moves A from its actual initial configuration in I to a final configuration in G.


Let T be a subset of C_valid and M = (v0, TC) be a motion command. A preimage of T for M is defined as any subset P of C_valid such that: if A's configuration is in P when the execution of M starts, then it is guaranteed that A will reach T (goal reachability) and that it will be in T when TC terminates the motion (goal recognizability).

Example 4.1. Consider the peg-into-hole problem illustrated in Fig. 4. The goal region T is the bottom edge of the depression of the C-obstacle region (see Fig. 4(a)). Let the commanded velocity be v0 as shown in the figure. We make the following assumptions: contact space is frictionless; the error in the orientation of the sensed force is bounded by ε_f < π/4; and the depth of the hole is greater than 2ρ_q. The region P shown with a striped contour in Fig. 4(b) is a preimage of T for the motion command M = (v0, TC), with TC terminating the motion when the position sensor measures a configuration consistent with T and the force sensor measures a nonzero force whose orientation is consistent with that of the outgoing normal of T. More formally, TC is defined as:

TC ≡ [q*(t) ∈ cylsphere(T, ρ_q)] ∧ [f*(t) ≠ 0 ∧ |angle(f*(t), n(T))| < ε_f],

where:
• t is the current time,
• q*(t) and f*(t) denote the sensed configuration and force at time t, respectively,
• cylsphere(T, ρ_q) is the edge T "grown" by ρ_q,
• n(T) is the outgoing normal of T,
• angle(f1, f2) designates the angle between two vectors f1 and f2.

Fig. 4. Example of preimage. (a), (b).

The execution of M generates a trajectory that (in free space) is locally contained in the control uncertainty cone and (in contact space) slides over edges that are not orthogonal to a direction contained in the control uncertainty cone. Thus, it is guaranteed to reach T whenever the starting configuration of A is in P. Since ε_f < π/4, the second condition of TC,

f*(t) ≠ 0 ∧ |angle(f*(t), n(T))| < ε_f,

guarantees that the motion will terminate in one of the three horizontal contact edges. Since the depth of the hole is greater than 2ρ_q, the first condition, q*(t) ∈ cylsphere(T, ρ_q), allows TC to distinguish between the bottom edge T and the two horizontal side edges bordering the entrance of the hole. Hence, TC will stop the motion in T, and P is a preimage of T for M.

Now, suppose that an algorithm is available for computing preimages. Given I and G, preimage backchaining consists of constructing a sequence of preimages P_1, P_2, ..., P_p such that:
• P_i, for all i ∈ [1, p], is a preimage of P_{i-1} for a selected motion command M_i (with P_0 = G);
• P_p contains the initial region I.

If the backchaining process terminates successfully, the inverse sequence of the motion commands which have been selected to produce the preimages, (M_p, M_{p-1}, ..., M_1), is the generated motion strategy. This strategy is guaranteed to achieve the goal successfully, whenever the errors in control and sensing remain within the ranges determined by the uncertainty intervals.

Example 4.2. Figure 5 illustrates an idealized preimage backchaining process. The C-obstacle region is shown dark; the initial region I is a rectangle located at the top-right of the C-obstacle; the goal region G is a vertical contact edge

Fig. 5. Preimage backchaining. (a), (b).


located on the right-hand side of the C-obstacle. First, a commanded velocity v01 is selected and the corresponding preimage P_1 of G is computed (Fig. 5(a)). This preimage does not contain I and is taken as an intermediate goal. Second, a commanded velocity v02 is selected and the preimage P_2 of P_1 is computed (Fig. 5(b)). P_1 is so thin at one end that the termination condition of this motion command can reliably recognize P_1's achievement only by detecting contact (using force sensing) in the edge denoted by E, which is part of P_1. Since P_2 contains I, the planning problem is thus solved with a sequence of two motions. Interestingly, the preimage backchaining process has resulted in selecting E as an intermediate "landmark" to help the reliable progression of the robot toward G. Adding other types of sensors than position and force sensors would make it possible to consider more landmarks.

The problem of generating the sequence of preimages can be transformed into that of searching a graph by selecting motion commands from a discretized set. The root of this graph is the goal region G, and each other node is a preimage region; each arc is a motion command, connecting a region to a preimage for this command. The construction of this graph requires that the set of possible control statements be discretized. With generalized damper control, this amounts to discretizing the set S¹ of commanded velocity directions.

Searching the graph of preimages introduced above leads us to generate sequential motion strategies, i.e., sequences of motion commands. In some cases, it is necessary or preferable to generate conditional strategies. We will address this issue in Section 8.

It is interesting to notice the relation between preimage backchaining and "goal regression", a classical planning method in artificial intelligence [34, 45].
Both methods consist of computing the precondition (ideally, the weakest one) whose satisfaction before executing an action guarantees that some goal condition will be satisfied after the action is executed. However, while preimage backchaining has a strong geometric flavor, goal regression is more logic-oriented.

In the sequel the word "goal" designates either the goal region G of the motion planning problem, or any preimage taken recursively as an intermediate goal. A goal is generically denoted by T.

Remark. The generalized damper control model has been used in most work related to preimage backchaining. The planner of Buckley [5], which makes use of the generalized spring model, is one exception. In reality, however, robot trajectories should be induced from a dynamic model of motion. In the presence of friction, deriving motions in contact from a dynamic model is not trivial (see [14, 37]). A practical illustration of a preimage for a peg-into-hole task where trajectories are derived from a dynamic model of motion in contact is given in [42].
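The graph search over preimages described above can be sketched as a breadth-first loop. Here `preimage` and `contains` stand for abstract subroutines (our names, not the paper's) that hide the geometric computations of Section 5; the toy test models regions as 1D intervals:

```python
from collections import deque

# Breadth-first preimage backchaining (sketch). `commands` is the discretized
# set of motion commands; preimage(region, M) computes a preimage of `region`
# for command M; contains(P, I) tests whether P contains the initial region.
def backchain(goal, initial_region, commands, preimage, contains, max_depth=10):
    frontier = deque([(goal, [])])
    while frontier:
        region, plan = frontier.popleft()
        if len(plan) >= max_depth:
            continue
        for M in commands:
            P = preimage(region, M)
            new_plan = [M] + plan          # M is executed first at run time
            if contains(P, initial_region):
                return new_plan
            frontier.append((P, new_plan))
    return None
```

On success the returned list is the strategy (M_p, ..., M_1) of the text, ordered for execution.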


5. Computation of preimages

5.1. General approach

The notion of preimage of a goal T for a motion command (v0, TC) combines two basic issues, known as goal reachability and goal recognizability [14]. Goal reachability relates to the fact that any trajectory obtained by executing a motion commanded along v0 from a preimage of T should reach T. Given a starting configuration, v0 only determines a commanded trajectory. Due to errors in control, any execution produces an actual trajectory τ that is slightly different. The planner must be certain that all the possible actual trajectories consistent with both v0 and the control uncertainty cone will reach T at some instant.

Reaching T, however, is not sufficient, since the termination condition could just let the robot traverse the goal without stopping it or, instead, it could halt the motion prematurely before the goal has been reached. In fact, the planner must also be certain that the termination condition TC will stop A in T (goal recognizability). TC is an observer of the actual trajectory τ being executed. Since sensing is imperfect, TC perceives τ as an observed trajectory τ*, which is most likely to be neither the commanded one, nor the actual one. Thus, the problem for the planner is to be sure that, for every possible observed trajectory τ*,

(1) TC becomes true at some instant, and
(2) at the instant tf when TC first becomes true, all the actual trajectories τ consistent with τ* (i.e., the trajectories which may be observed as τ* given the uncertainty in sensing) have reached the goal, i.e., τ(tf) ∈ T.

Preimage computation has been investigated in depth in [14, 22]. For a given commanded velocity v0, the ideal method would compute the maximal preimage of T, i.e., a preimage that is not contained in any other preimage⁵ of T for v0, and the method would also return the termination condition for the maximal preimage. Indeed, intuitively, a large preimage has more chance to include the initial region I than a small one; in addition, if it is considered recursively as an intermediate goal, a large preimage has more chance to admit large preimages than a small one. Thus, considering larger preimages may reduce the size of the search graph; it may also produce "simpler" strategies, i.e., strategies made up of fewer motion commands. However, Erdmann [14] showed that:

(1) there does not always exist a maximal preimage;

⁵ By definition, any subset of a preimage is also a preimage of the same goal for the same motion command.


(2) assuming there exists one, it may not be unique; and
(3) if there exists a unique maximal preimage, it may depend in a very subtle fashion on sensing history, the elapsed time since the beginning of the motion, and the knowledge embedded in the termination predicate (the predicate of TC).

Most of the difficulties related to maximal preimages are due to the subtle interdependence between goal reachability and goal recognizability. One approach to the preimage computation problem is to use restricted forms of the termination condition so that these two issues can be treated separately. The basic idea is to identify a subset of the goal in which achievement of the goal can be recognized independently of the way this subset has been attained. Then it remains to compute the region from where the robot is guaranteed to attain this subset. In the next subsections, we describe and compare two methods for computing preimages using this approach: backprojection from sticking edges and backprojection from goal kernel. In general, neither of these methods computes maximal preimages (over all the possible termination conditions), but both of them are easily implementable and we have used them in an operational planner.

5.2. Backprojection from sticking edges

Given a commanded velocity v0 and a goal T, backprojection from sticking edges consists of:⁶
(1) determining the subset T_S of T ∩ C_contact in which motions commanded along v0 are guaranteed to stick;
(2) computing the maximal region, denoted by B_θ(T_S), such that a motion commanded along v0 and starting from within this region is guaranteed to reach T_S.

The computation of T_S is simply done by comparing the orientation of each edge in T relative to v0. T_S is made of every edge E in T ∩ C_contact such that:

|angle(-v0, n(E))| ≤ φ - η,

where n(E) is the outgoing normal of E.
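This sticking test is directly computable; a minimal sketch (names are ours) is:

```python
import math

# Test of the sticking condition |angle(-v0, n(E))| <= phi - eta for a
# contact edge with unit outgoing normal n. phi is the friction angle,
# eta the half-angle of the control uncertainty cone.
def is_sticking_edge(v0, n, phi, eta):
    if phi <= eta:
        return False            # guaranteed sticking is impossible
    vx, vy = v0
    dot = -(vx * n[0] + vy * n[1]) / math.hypot(vx, vy)
    ang = math.acos(max(-1.0, min(1.0, dot)))
    return ang <= phi - eta
```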

(Notice that guaranteed sticking is possible only if φ > η, i.e., the friction cone is larger than the control uncertainty cone.)

We may also include isolated sticking contact vertices in T_S, i.e., those contact vertices where the friction cone contains the negated control uncertainty cone. In practice, only the isolated concave sticking vertices are interesting, since the sticking behavior at the other isolated sticking vertices is not stable under arbitrarily small displacements. Sticking at a concave vertex may in fact

⁶ Most of the components of this method were previously introduced in [10, 14].


be easier and more stable than sticking along a contact edge, since it may be reached by sliding along the two abutting edges. Hence, it may yield large preimages.

The region B_θ(T_S) is called the maximal backprojection of T_S for v0, θ being the direction of v0. Erdmann [14] gave the following simple algorithm to compute the maximal backprojection of a single contact edge S:

Step 1. Consider every vertex in C_contact. Mark every non-goal vertex at which sticking is possible (i.e., such that the friction cone intersects the inverted control uncertainty cone). Mark every goal vertex if sliding away from the vertex into the non-goal edge is possible.

Step 2. At every marked vertex erect two rays parallel to the sides of the inverted control uncertainty cone. Compute the intersections of these rays among themselves and with C_contact. Ignore each ray beyond its first intersection. The remaining segment in each ray is called a free edge.

Step 3. Trace out the backprojection region. To that purpose, trace the edge S in the direction that leaves the C-obstacle region on the right-hand side, until an endpoint of S is attained; then, let all the contact and free edges incident at this point be sorted clockwise in a circular list; trace the first edge following S in this list; proceed in the same way until S is encountered again.

This algorithm requires the backprojection to be bounded, which can be achieved by imposing that C_valid or (if η ≠ 0) the C-obstacle region be bounded.

Figure 6 illustrates this algorithm on an example for two commanded velocities v01 and v02. The vertices marked at Step 1 are depicted as small circles. Erecting the inverted control uncertainty cone at the marked vertices produces constraints, the free edges, which prevent the backprojection from containing configurations from where a motion may reach a vertex or an edge

I/"

I I

~l

i I I

i , / \ \// \ ,/;

v~,~,/ :;

/ I

IJ

I ,, ,'

;%/,,;

I

|

(a)

i/I sJ

li

(b) Fig. 6. Computation of backprojection.

I I

,I , ,,"

;,

|1

|l I



20

J.-C. L a t o m b e et al.

in which it can stick or slide away from the goal with no possibility of returning to it. The constructed backprojections are displayed in Fig. 7.

Erdmann's algorithm has been generalized by Canny and Donald [10] to an algorithm that generates the maximal backprojection of any region S described as the union of contact edges and polygonal regions in C_valid. The principle of the algorithm is the following. It sweeps a line L across the plane perpendicularly to the commanded velocity v0. The sweep starts at a position of L where it is tangent to S, with S lying entirely on the side of L pointed to by the vector -v0. The sweep proceeds in the direction of the vector -v0. During the sweep, the algorithm maintains the status of L, i.e., the description of its intersection with C_contact, S and the free edges erected from contact vertices and goal vertices. This status changes qualitatively at events occurring when L passes through a vertex of C_contact, a vertex of S, the intersection of two free edges, the intersection of a free edge and C_contact, or the intersection of a free edge and S. At each of these events, the algorithm updates the status of the line and the list of future events. During the sweep, the algorithm traces out the backprojection. The sweep stops when the last event is encountered.

The key idea underlying this algorithm is that the backprojection's boundary consists of straight segments and that at every event there exists a local criterion (which does not depend on what will happen later during the sweep) for deciding how to trace the backprojection's boundary. This criterion leads to erecting rays (yielding free edges) from both contact and goal vertices, in order to prevent the backprojection from containing configurations from where a motion may reach a vertex or an edge in which it may stick or slide away from the goal with no possibility of returning to it. The algorithm does not require the backprojection to be bounded.
But it does require that the actual velocity of the robot never project negatively onto the v0 direction. This condition is achieved if the

Fig. 7. Constructed backprojections. (a), (b).


friction cone is larger than the control uncertainty cone, i.e., φ > η, an assumption previously made in order to make guaranteed sticking possible. Another algorithm that does not require φ ≥ η has been proposed in [22], but it is computationally less efficient.

Let n be the number of vertices in C_contact. Assume the combined complexity of C_contact and S, i.e., the number of vertices in C_contact, plus the number of vertices of S, plus the number of intersections between S and C_contact, to be O(n). The sweep-line algorithm erects O(n) rays. The contour of B_θ(S) consists of goal edges (edges bounding S), free edges and contact edges. Each erected ray can contribute at most one edge to the backprojection. Each goal edge contributes at most one backprojection edge. Each non-goal edge in contact space can contribute one or several backprojection edges, but whenever it contributes k backprojection edges, it also "consumes" O(k) free edges. Hence, the boundary of the backprojection consists of O(n) edges. Since each ray is interrupted at its first intersection with a goal edge, a contact edge, or another ray, the backprojection can be computed in O(n log n) time.

At this point we still have to construct the termination condition that will recognize sticking in T_S. Indeed, while sticking "physically" terminates a motion, the robot must recognize sticking in order to "logically" terminate the motion and execute the next motion command in the motion strategy or return success. One way to construct the termination condition is to project B_θ(T_S) onto a line parallel to v0. Let D be the length of the projection. Since the friction cone is larger than the control uncertainty cone, the actual velocity v of the robot has a positive component along v0 everywhere in B_θ(T_S). By examining all the contact edges in the boundary of B_θ(T_S), one can compute the minimal value v_min of the component of v along v0 in B_θ(T_S). A possible termination condition is:

TC ≡ t > D / v_min.
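A sketch of constructing this time-based termination condition, representing the backprojection by its polygon vertices (our representation; v_min is assumed given):

```python
import math

# Build the sticking termination condition TC: t > D / v_min, where D is the
# extent of the backprojection's projection onto the v0 direction.
def sticking_termination(backproj_vertices, v0, v_min):
    ux, uy = v0
    norm = math.hypot(ux, uy)
    ux, uy = ux / norm, uy / norm
    proj = [x * ux + y * uy for (x, y) in backproj_vertices]
    D = max(proj) - min(proj)
    return lambda t: t > D / v_min
```

For instance, with the unit square as backprojection, v0 = (0, -1) and v_min = 0.5, D = 1, so the condition holds at t = 3 but not at t = 1.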

The region B_θ(T_S) is a preimage of T_S for this termination condition, which will be referred to in the rest of the paper as the sticking termination condition. (Note that the construction of this termination condition would hardly be changed if we assumed some uncertainty in the commanded velocity modulus. Only the computation of v_min would be made slightly more involved.)

If T is input as a list of O(n) contact edges, the computation of T_S and B_θ(T_S) takes O(n) time and O(n log n) time, respectively. The construction of the expression TC takes O(n) time. The evaluation of TC takes constant time.

5.3. Backprojection from goal kernel

Given a commanded velocity v0 and a goal T, backprojection from goal kernel consists of:


(1) identifying a subset K_θ(T) of T whose achievement by a motion commanded along v0 guarantees that a termination condition whose only arguments are q*(t) and f*(t) (sensing at the current time) recognizes the achievement of T;
(2) computing the maximal backprojection B_θ(K_θ(T)) (see Section 5.2).

The subset K_θ(T) is called the θ-kernel of T. Its definition requires the two notions of θ-consistency and θ-distinguishability to be first introduced.

We let F_reac(q, θ) denote the set of all the possible reaction forces at configuration q when the commanded velocity is v0. It is the union of all the forces f_reac(q, v0'), as defined in Section 3.4, for all the effective commanded velocities v0' compatible with v0. A straightforward algorithm computes F_reac(q, θ) in constant time.

We say that a configuration q ∈ C_valid is θ-consistent with q* and f* iff, when the robot commanded along v0 is at configuration q, q* and f* are possible sensed configuration and force given the geometry of C_valid, the friction angle φ, and the uncertainty in control and sensing. The direction θ of the commanded velocity plays an important role in θ-consistency because the force that is sensed in contact space depends on it. (The robot may be in contact space without sensing any reaction force, if it moves tangentially to a C-obstacle.)

We define C_θ(q*, f*) as the set of configurations q that are θ-consistent with q* and f*. We have:
• If f* = 0, C_θ(q*, f*) consists of all the valid configurations q which are distant from q* by less than ρ_q and lie either in free space, or in contact space at a point where the actual reaction force may be zero:

C_θ(q*, 0) = {q ∈ C_valid | ||q - q*|| < ρ_q, 0 ∈ F_reac(q, θ)}.

• If ||f*|| ≠ 0, C_θ(q*, f*) consists of all the contact configurations q which are distant from q* by less than ρ_q and which may generate a nonzero reaction force f whose angle with f* is less than ε_f:

C_θ(q*, f*) = {q ∈ C_contact | ||q - q*|| < ρ_q, ∃f ≠ 0 ∈ F_reac(q, θ): |angle(f, f*)| < ε_f}.
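A membership test for C_θ(q*, f*) at a single configuration q can be sketched as follows; F_reac(q, θ) is approximated here by a finite list of sample reaction forces, a simplification of the actual cone of forces (representation and names are ours):

```python
import math

# Is configuration q theta-consistent with the sensed pair (q*, f*)?
# reaction_forces approximates F_reac(q, theta) as a finite sample list.
def theta_consistent(q, q_star, f_star, rho_q, eps_f, reaction_forces):
    if math.dist(q, q_star) >= rho_q:
        return False
    if f_star == (0.0, 0.0):
        # zero sensed force: a zero reaction force must be possible at q
        return (0.0, 0.0) in reaction_forces
    for f in reaction_forces:
        if f == (0.0, 0.0):
            continue
        cosang = (f[0] * f_star[0] + f[1] * f_star[1]) / (
            math.hypot(*f) * math.hypot(*f_star))
        if math.acos(max(-1.0, min(1.0, cosang))) < eps_f:
            return True
    return False
```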

(If we assumed some uncertainty in the modulus of the sensed force, these expressions would be slightly more complicated, but their construction would remain quite straightforward.)

The condition defined as

C_θ(q*(t), f*(t)) ⊆ T


is such that, when it evaluates to true, it is guaranteed that the robot is in the goal. Its arguments are q*(t) and f*(t), i.e., the sensed configuration and force at the time t when it is evaluated. In the rest of the paper, this condition will be referred to as the instantaneous sensing termination condition.

It remains now to define a subset of T such that if it is attained by a motion, it is guaranteed that the above termination condition evaluates to true. This subset is precisely the kernel of the goal. Two configurations q1 and q2 in C_valid are said to be θ-distinguishable iff:

{(q*, f*) | q1, q2 ∈ C_θ(q*, f*)} = ∅.

In other words, if two configurations q1 and q2 are θ-distinguishable, it is guaranteed that during a motion commanded along v0 there is no instant when the sensed configuration and force are θ-consistent with both q1 and q2. Two configurations q1 and q2 which are not θ-distinguishable are θ-confusable. The θ-kernel of T is:

K_θ(T) = {q ∈ T | ∀q' ∈ C_valid\T: q and q' are θ-distinguishable}.

Again notice the importance of θ. Without taking the commanded velocity into account, we would be forced to distinguish between configurations using position sensing only, since no configuration in contact space is guaranteed to generate a nonzero reaction force under all commanded velocities. This would simply lead to isotropically shrinking the goal T in order to get its kernel.

The computation of K_θ(T) is detailed in the next subsection. We assume that T consists of a finite collection of edges in C_contact and generalized polygons⁷ in C_valid. It turns out that the θ-kernel of such a goal is of the same general form. As an example, consider Fig. 8. There is a single rectangular C-obstacle and the goal region T is a rectangle whose boundary partly lies in contact space (Fig. 8(a)). The θ-kernel for the commanded velocity v0 pointing downward is the region T'_1 ∪ T'_2 shown in Fig. 8(b). T'_1 is a portion of a contact edge and T'_2 is a generalized polygon in valid space. Indeed, based on position and force sensing, a configuration in T'_1 is θ-confusable only with configurations lying in the portion of the contact edge contained in the goal; based on position sensing only, a configuration in T'_2\T'_1 is θ-confusable only with configurations inside the goal or inside the C-obstacle (but the latter are not achievable). Any configuration outside T'_1 ∪ T'_2 is θ-confusable with a configuration in C_valid not located in the goal.

We can compute the backprojection B_θ(K_θ(T)) using Canny-Donald's sweep-line algorithm (see Section 5.2). Since K_θ(T) may include regions

⁷ A generalized polygon is a compact subset of R² whose boundary is a simple closed-loop curve made up of straight line segments and/or circular arcs.

Fig. 8. A goal and its θ-kernel. (a), (b).

bounded by both straight edges and circular arcs, the only modification to the algorithm is to include the positions of the sweep-line L where it is tangent to the circular arcs as additional events. The boundary of the constructed backprojection consists of straight edges and circular arcs.

Preimage backchaining may hence lead to computing the kernel of a goal consisting of contact edges and generalized polygons in C_valid. As mentioned above, this kernel also consists of contact edges and generalized polygons in C_valid. Hence, the general form of the goals and their kernels is closed under backchaining.

The region B_θ(K_θ(T)) is the computed preimage of T. Indeed, a motion starting from within B_θ(K_θ(T)) and commanded along v0 is guaranteed to reach K_θ(T) (if it is not terminated before). By definition of K_θ(T), it is guaranteed that the termination condition C_θ(q*(t), f*(t)) ⊆ T will become true at some instant during the motion. The condition will certainly be true when K_θ(T) is attained, but it may become true before. When the condition becomes true, the definition of C_θ guarantees that the robot is in the goal, even if the θ-kernel has not been attained yet. Hence, B_θ(K_θ(T)) is a preimage of T for a motion commanded along v0 with:

TC ≡ [C_θ(q*(t), f*(t)) ⊆ T]

as the termination condition.

Using the definition of C_θ given previously, a straightforward algorithm computes the region C_θ(q*, f*) in O(n) time. Thus, the termination condition TC is computable by checking T for containment of C_θ. However, the time


taken by the evaluation of TC depends on the complexity of C_contact and T. This might be a drawback in practice, since the termination condition has to be monitored in real time during motion. A possible improvement is to do as much precomputation as possible at planning time. But there seems to be no way of making TC into a "compiled" expression whose evaluation takes constant time.

5.4. Computation of goal kernel

We now give an algorithm for computing the θ-kernel of a goal T. We assume that T consists of a finite collection of open edges T^e_i, i = 1, 2, ..., in C_contact and closed generalized polygons T^p_j, j = 1, 2, ..., in C_valid. The T^e_i's are called goal edges. The T^p_j's are called goal polygons. The various goal edges and goal polygons are called goal components. The input goal components are allowed to overlap.

The algorithm starts by making some cosmetic (though crucial for the rest of the algorithm) changes to the input description of the goal. Basically, the input goal components are transformed into a new set of goal components such that no two of them overlap. First, every edge E of every T^p_j such that E ⊂ C_contact is inserted in the collection of goal edges. Second, if two goal edges overlap they are merged into a single one, and if two goal polygons overlap, they are merged into a single one (we assume that this step produces only simple polygons). Third, every extremity of every goal edge T^e_i which is currently not a vertex of C_contact is inserted as a new vertex of C_contact. Finally, every vertex of C_contact that lies in an edge of a goal polygon T^p_j is inserted as a new vertex of T^p_j, and every vertex of every goal polygon T^p_j that lies in an edge of C_contact is inserted as a new vertex of C_contact.

For example, the goal region shown in Fig. 8(a) is input as a single goal polygon. It is made into a goal edge and a goal polygon respectively denoted by T_1 and T_2 in Fig. 9.
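The merging of overlapping goal edges lying on a common supporting line can be sketched as 1D interval merging (a simplification; the representation is ours):

```python
# Merge overlapping goal edges on a common supporting line, each edge
# represented by its (lo, hi) abscissa interval along that line.
def merge_goal_edges(intervals):
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1]:
            # overlaps the previous interval: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged
```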

"Jl

(a)

(b) Fig. 9. Goal components.


Let {T_1, ..., T_N} be the set of goal components after the above preprocessing. The algorithm considers them successively:
• For every goal edge T_i, it computes the subset T'_i ⊆ T_i of configurations that are θ-distinguishable from the configurations in C_valid\T.
• For every goal polygon T_j, it computes the subset T'_j ⊆ T_j of configurations that are distinguishable from the configurations in C_valid\T using position sensing only.

We detail these computations below. It is straightforward to verify that

K_θ(T) = ∪_{k=1}^{N} T'_k.

5.4.1. Kernel in goal edge

Consider a goal edge T_i. We want to compute the subset T'_i, as specified above. Let F*_reac(q, θ) denote the set of all possible sensed forces at a configuration q when the commanded velocity is v0. It is derived from F_reac(q, θ) as follows:
• If F_reac(q, θ) contains the null vector, then 0 ∈ F*_reac(q, θ).
• If F_reac(q, θ) contains a nonzero vector f, then all vectors f* such that ||f*|| = ||f|| and |angle(f, f*)| < ε_f are in F*_reac(q, θ).

We let F_reac(T_i, θ) designate the set F_reac(q, θ) for any q ∈ T_i (recall that T_i is open, i.e., it does not include its endpoints). A configuration in T_i is θ-distinguishable from a configuration in C_free iff 0 ∉ F*_reac(T_i, θ). Hence, if 0 ∈ F*_reac(T_i, θ), then T'_i = ∅.

Assume that 0 ∉ F*_reac(T_i, θ). Then, a configuration q ∈ T_i is θ-distinguishable from any configuration in C_free. It is θ-distinguishable from a configuration q' contained in an edge E ⊂ C_contact iff:
• either q' is distant from q by more than 2ρ_q,
• or F*_reac(E, θ) ∩ F*_reac(T_i, θ) = ∅, i.e., the two edges have sufficiently different orientations.

Therefore, the algorithm for computing T'_i is the following:

if 0 ∈ F*_reac(T_i, θ) then return ∅;
else
    S ← T_i;
    for every edge E ⊂ C_contact, E ⊄ T, do
        if F*_reac(E, θ) ∩ F*_reac(T_i, θ) ≠ ∅ then
            S ← S\cylsphere(E, 2ρ_q);
        endif;
    enddo;
    return S;
endif;

where cylsphere(E, 2ρ_q) designates the edge E grown by 2ρ_q.
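A point-membership test for such a grown edge ("cylsphere") can be sketched as follows (names are ours):

```python
import math

# Is point p inside segment ab grown by radius r (a "cylsphere")?
def in_cylsphere(p, a, b, r):
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    # parameter of the closest point of the segment, clamped to [0, 1]
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy) < r
```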


The above algorithm does not require the exhaustive computation of F*_reac(T_i, θ). The test 0 ∈ F*_reac(T_i, θ) can easily be performed by checking that the null vector is in F_reac(q_T, θ) for any q_T ∈ T_i. The test F*_reac(E, θ) ∩ F*_reac(T_i, θ) ≠ ∅ requires the cones spanned by the forces in F_reac(q_E, θ) and F_reac(q_T, θ) to be first computed for any q_E ∈ E and q_T ∈ T_i, next "grown" by ε_f, and finally intersected. T'_i is computed in O(n) time.
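Under the simplifying assumption that the two force cones are given as angle intervals in a common frame (and do not straddle the 2π branch cut, which a full implementation would have to handle), the grown-cone intersection test might look like:

```python
# Grow two force cones, given as (lo, hi) angle intervals in radians, by
# eps_f, and test whether the grown intervals intersect. No wrap-around
# handling, for brevity.
def grown_cones_intersect(cone_e, cone_t, eps_f):
    lo1, hi1 = cone_e[0] - eps_f, cone_e[1] + eps_f
    lo2, hi2 = cone_t[0] - eps_f, cone_t[1] + eps_f
    return lo1 <= hi2 and lo2 <= hi1
```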

5.4.2. Kernel in goal polygon

Consider a goal polygon T_j. We want to compute the subset T'_j of configurations that are distinguishable from the configurations in C_valid\T using position sensing only. This computation is not as simple as it first seems. It is clear that a configuration in T_j is distinguishable from any other configuration if the two configurations are distant by more than 2ρ_q. However, this is only a sufficient condition, since it does not check whether the second configuration is in C_valid, i.e., is achievable. Hence, if T_j is adjacent to the contact space, computing T'_j by "shrinking" T_j by 2ρ_q only produces a subset of the region we are interested in. This is illustrated by Fig. 10(a). In this figure, T_1 is a rectangle, one side of which is in the contact space. Only the three edges of T_1 which are not in the contact space should be "shifted in" by 2ρ_q in order to get T'_1. But shifting only the edges of T_j which lie in the free space does not always yield the region we want. If the C-obstacle region is thinner than 2ρ_q along an edge of T_j, that edge should be shifted in by 2ρ_q minus the thickness of the C-obstacle region along that

Fig. 10. Goal polygon adjacent to contact space.

Fig. 11. Kernel in goal polygon.

edge (see T_2 in Fig. 10(b)). The region we obtain by doing so is safe, but may now be too conservative. Indeed, it may happen that the other side of the C-obstacle region is also part of T (see T_3 in Fig. 10(c)), in which case the edge of T_j may not have to be shifted at all. More generally, we can compute T'_j using the following algorithm:

Step 1. Construct the maximal connected polygonal region S as follows: T_j is first included in S; then, every connected component of the C-obstacle region and every goal polygon T_k, k ≠ j, that shares an edge with a region already in S is iteratively included in S.
Step 2. S' ← S. For every edge E in the boundary of S do: S' ← S'\cylsphere(E, 2ρ_q).
Step 3. Return T'_j = T_j ∩ S'.
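For the simple situation of Fig. 10(a), where the free edges of a rectangular goal adjacent to the contact space are shifted in by 2ρ_q, the shrinking step can be sketched as follows. This is a toy, axis-aligned illustration; it does not handle the thin C-obstacle cases of Figs. 10(b) and 10(c), which Steps 1-3 above address:

```python
def rect_kernel(goal, contact_sides, rho_q):
    """Shrink a rectangular goal (xmin, ymin, xmax, ymax): sides lying in
    the free space are shifted in by 2*rho_q, sides lying in the contact
    space (named in contact_sides) are left in place."""
    xmin, ymin, xmax, ymax = goal
    d = 2 * rho_q
    if "left" not in contact_sides:
        xmin += d
    if "right" not in contact_sides:
        xmax -= d
    if "bottom" not in contact_sides:
        ymin += d
    if "top" not in contact_sides:
        ymax -= d
    if xmin >= xmax or ymin >= ymax:
        return None          # the kernel is empty
    return (xmin, ymin, xmax, ymax)

# Rectangle of Fig. 10(a): bottom side in the contact space, rho_q = 0.5.
print(rect_kernel((0, 0, 10, 4), {"bottom"}, 0.5))   # (1.0, 0, 9.0, 3.0)
```

Only the three free sides move inward; the side in contact stays put, exactly the asymmetry that makes the naive uniform shrinking too conservative.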

Figure 11 illustrates the operations performed by this algorithm for the goal polygon T_2 shown in Fig. 9. Figure 11(a) displays the region S computed at Step 1. The region S' computed at Step 2 is shown in Fig. 11(b). Finally, the returned region, T'_2, is shown in Fig. 11(c).

Let O(n) be the combined complexity of C_contact and T. The region constructed at Step 1 has O(n) edges. Using a sweep-line algorithm, Step 2 can be performed in O((n + c) log n) time, where c = O(n²) is the number of intersections of the cylspheres. T_j ∩ S' can be obtained from the sweep algorithm with the same time complexity.

(Footnote: S, as constructed at Step 1, is the transitive closure of all the connected components of the C-obstacle region and the goal polygons touching T_j.)

5.5. Combination

Backprojecting from the goal kernel usually generates preimages which are significantly larger than those produced by backprojecting from sticking edges. In particular, it can produce preimages of goals made of regions lying in the free space and/or nonsticking edges in the contact space. There are situations, however, where backprojecting from sticking edges produces larger preimages. This is the case when the goal is a sticking edge E (for the commanded velocity v̂_0) and one of the two edges adjacent to E, or both, cannot be 0-distinguished from E. In this case, the 0-kernel of the goal is obtained by shrinking E by 2ρ_q at both endpoints, and the backprojection of the shrunk edge is smaller than the backprojection of the full edge.

Fortunately, it is easy to combine the two methods. This consists of: (1) computing the union S ⊆ T of the sticking edges and the 0-kernel for the selected commanded velocity v̂_0; and (2) computing the backprojection B_0(S). This backprojection is the computed preimage P. The termination condition of the motion command is the disjunction of the two termination conditions given above, i.e.,

TC ≡ [t > ℓ/v_min] ∨ [K_0(q*(t), f*(t)) ⊆ T].

A motion starting from within P cannot terminate before the goal is reached, since it cannot stick at an edge that is outside the goal. It is also guaranteed to terminate, since it will either stick in a goal edge or reach the 0-kernel of the goal. Notice that the preimage computed by combining the two methods may be larger than the union of the preimages separately computed by the two methods. Indeed, there may exist starting configurations from which the motion commanded along v̂_0 is guaranteed to reach either a sticking edge or the 0-kernel, without its being known in advance which one.
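As a toy illustration, the disjunctive termination condition can be sketched in one dimension (interval goal, position sensing only; all parameter names and values are illustrative, not the paper's):

```python
def k0_subset_goal(q_star, rho_q, goal):
    """K_0(q*) ⊆ T in one dimension: every configuration within rho_q of
    the sensed configuration q* lies inside the goal interval T = [a, b]."""
    a, b = goal
    return a <= q_star - rho_q and q_star + rho_q <= b

def combined_tc(t, q_star, t_bound, rho_q, goal):
    """TC ≡ [t > t_bound] ∨ [K_0(q*(t)) ⊆ T]: disjunction of a time bound
    (standing in for the sticking-edge condition) and the kernel
    membership test."""
    return t > t_bound or k0_subset_goal(q_star, rho_q, goal)

goal = (3.0, 7.0)
print(combined_tc(t=1.0, q_star=5.0, t_bound=10.0, rho_q=0.5, goal=goal))   # True
print(combined_tc(t=1.0, q_star=3.2, t_bound=10.0, rho_q=0.5, goal=goal))   # False
print(combined_tc(t=12.0, q_star=3.2, t_bound=10.0, rho_q=0.5, goal=goal))  # True
```

The third call terminates on the time bound alone, the execution-time analogue of recognizing termination by sticking rather than by sensing.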

6. Implementation and experimentation

We have implemented a motion planner based on the preimage backchaining approach. This planner computes preimages using either the backprojection-from-sticking-edges method, the backprojection-from-kernel method, or the combination of the two. The user inputs the description of the configuration space, the goal region T, and the initial region I. If successful, the planner returns a motion plan in the form of a sequence of unit commanded velocities and the associated sequence of computed preimages. The method used for computing the preimages determines the termination condition of every motion command in the plan. The planner is implemented in Allegro Common Lisp on an Apple Macintosh II computer.

The planner constructs a graph of preimages by considering commanded velocities with the orientations {kπ/K}, k = 0, ..., 2K−1, where K is input by the


user. In our experiments, we used K = 2 or 4. The planner searches the graph in a breadth-first fashion, but other (and perhaps more efficient) search techniques could have been used as well (e.g., A*). The algorithms implemented in the planner are essentially those described above, with minor variations. In particular, the backprojection of a region is computed using an algorithm similar to that described in [22], rather than the sweep-line algorithm (which might be faster). The planner also approximates the generalized polygonal 0-kernels conservatively by regular polygons. These changes have no major impact on the visible operation of the planner.

Below we describe experimental results obtained with the planner. In this description, we call method 1 (respectively 2, 3) the backprojection-from-sticking-edges method (respectively the backprojection-from-kernel method, the combination of the two methods). In all the examples shown, the initial region is a single point; in the figures it is the center of a disc that depicts position uncertainty. The control uncertainty cone and the force uncertainty cone are not depicted. The figures are essentially hardcopies of those generated by the planner.

Figures 12(a) and 12(b) illustrate motion plans generated by methods 1 and 2, respectively. Neither of them shows the computed preimages. The goal is the rectangular region T. The plan constructed using method 1 consists of five steps, defined by the commanded velocities v̂_1 through v̂_5. The corresponding sticking edges are T_1 through T_5. (More precisely, T_i is the intersection of the preimage of T_{i+1} and the sticking subset of C_contact for v̂_i. A motion commanded along v̂_i and issued from within T_{i-1} is guaranteed to terminate in T_i, but some configurations in T_i may not be reachable.) The plan constructed using method 2 is simpler and consists of a single motion command. Figure 13 shows an example where method 1 (Fig. 13(a)) produces a simpler plan than method 2 (Fig. 13(b)). Figure 13(b) displays two intermediate preimages. The third preimage, which includes the initial configuration of the robot, is not displayed.

Fig. 12. Example 1.

Fig. 13. Example 2.

Figure 14 shows a typical example where method 1 works well. This is when the robot can "bounce" from one sticking edge into another. On the contrary, method 1 works poorly or fails when the C-obstacles are far apart and the goal mostly (or completely) lies in the free space. Figure 15 displays an example with a single C-obstacle that method 1 fails to solve. Using method 2, the planner generated the plan illustrated in the figure.

Fig. 14. Example 3.

Fig. 15. Example 4.

Figure 16 gives an example where method 3 resulted in a simpler plan than either method 1 or 2. Figure 16(a) shows a five-step plan generated using method 1. Figure 16(b) shows a three-step plan generated using method 2. Figure 16(c) shows a two-step plan generated using method 3. In this example, method 3 is not strictly needed, since the motion along v̂_1 could have been generated using method 2, and the motion along v̂_2 using method 1. However, it is more efficient to compute a single preimage for every goal and commanded velocity, using method 3, than two preimages, using methods 1 and 2 separately.

Fig. 16. Example 5.

All the examples given above were solved by the planner in a reasonable amount of time, a few minutes at worst. Nevertheless, the implemented software is far from optimal and could definitely be made faster. Note furthermore that many robotic tasks that require uncertainty in control and sensing to be considered at planning time, such as part mating, grasping, and docking, involve a space of rather small volume. Hence, apart from the fact that they are two-dimensional, the examples submitted to the planner are not unrealistically simple. In the next two sections we describe non-implemented conceptual improvements of the planner.
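The breadth-first construction of the preimage graph can be sketched as follows. This is a schematic reimplementation, not the planner's Lisp code; the 1-D toy instance and all names (`compute_preimage`, `contains`, etc.) are illustrative assumptions:

```python
from collections import deque
import math

def backchain(goal, initial, directions, compute_preimage, contains):
    """Breadth-first preimage backchaining. compute_preimage(region, theta)
    returns the preimage of `region` for a motion commanded along direction
    theta (or None if there is none); contains(region, initial) tests
    inclusion of the initial region. Returns the sequence of commanded
    directions, or None."""
    frontier = deque([(goal, [])])       # (region to achieve, plan suffix)
    while frontier:
        region, plan = frontier.popleft()
        for theta in directions:
            pre = compute_preimage(region, theta)
            if pre is None:
                continue
            step_plan = [theta] + plan   # prepend: we are chaining backward
            if contains(pre, initial):
                return step_plan
            frontier.append((pre, step_plan))
    return None

directions = [k * math.pi / 2 for k in range(4)]   # K = 2: four orientations

# Toy 1-D instance: a motion along direction 0 reaches the goal interval
# from anywhere within 2 units to its left; other directions yield nothing.
def pre(region, theta):
    a, b = region
    return (a - 2.0, b) if theta == 0.0 else None

def contains(region, point):
    return region[0] <= point <= region[1]

print(backchain((8.0, 10.0), 3.0, directions, pre, contains))  # [0.0, 0.0, 0.0]
```

The toy instance needs three chained motions before the accumulated preimage covers the initial configuration, mirroring the multi-step plans of Figs. 12 and 16.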

7. On the termination predicate's knowledge

7.1. Purpose

Consider the problem of achieving a rectangular goal region T in an empty two-dimensional configuration space, with a commanded velocity v̂_0 perpendicular to one edge of the rectangle (see Fig. 17). The only available sensing is position sensing, since force sensing is irrelevant in the absence of obstacles. We assume that both sides of the rectangular goal are longer than 4ρ_q. Figure 17(a) shows the 0-kernel X_0(T) of T and Fig. 17(b) the preimage P of T constructed using the backprojection-from-kernel method; the outlined region in Fig. 17(c) depicts the set of sensed configurations q* for which the instantaneous sensing termination condition K_0(q*(t)) ⊆ T evaluates to true. The knowledge embedded in the predicate of this condition consists of the geometry of both T and


Fig. 17. Rectangular goal in empty configuration space.


C_valid, the commanded velocity (although in this particular example the direction of v̂_0 plays no role in the condition), and the uncertainty in control and in sensing.

In this section we show that significantly larger preimages of T can be constructed by making the termination predicate know both the preimage from which the motion starts and itself. Knowing the preimage allows the predicate to rule out interpretations of the sensory inputs that suggest configurations of C_valid that are outside the range of configurations reachable from the preimage by a motion commanded along v̂_0. Knowing itself allows the predicate to rule out interpretations of the sensory inputs that suggest configurations of C_valid that are beyond the limit where the motion would have been stopped by the termination condition. The knowledge of both the preimage and itself clearly augments the "recognition power" of the termination predicate. As shown below, it yields larger kernels whose backprojections form larger preimages.

The result that the recognition power of the termination condition depends on the knowledge of the preimage is not new. It was first established in [31]. Its application to the above example led Erdmann [14] to construct a preimage that is larger than that shown in Fig. 17. However, we show below that, using the same knowledge, a slightly larger preimage can be built (Section 7.2). On the other hand, the fact that the recognition power of the termination condition is augmented by making the condition predicate know itself seems not to have been previously noticed. In the example considered, we show that this additional knowledge results in a preimage that is significantly larger than that constructed with the knowledge of the preimage alone (Section 7.3).

Unfortunately, as in previous studies on this issue [14, 31], our analysis of how the knowledge embedded in the termination predicate affects preimage construction is very partial.
The effective preimage construction results presented below are specific to the example of Fig. 17. The general problem of computing "large" (perhaps maximal) preimages, when the termination predicate is allowed to know both itself and the preimage from which the motion starts, is a difficult one, since both the preimage and the termination predicate then depend recursively on themselves. It might nevertheless be possible to identify useful particular cases and construct specific solutions for them (as we do below for one example). The planner would then select one such specific solution whenever one is applicable; otherwise it would compute a preimage using the more general methods presented in Section 5. The following two subsections first introduce the notions of termination conditions with initial and final states in a general fashion. Then they apply these concepts to the particular example of Fig. 17.

(Footnote: an additional way of increasing the recognition power of the termination predicate, which is not explored in this paper, is to give it access, through its arguments, to the whole sensing history since the beginning of the motion, not just instantaneous sensing [14, 22, 31, 33].)


7.2. Termination condition with initial state

Let F_0(P) be the set of all the possible actual configurations which may be reached by executing a motion commanded along v̂_0 starting from within P ⊆ C_valid. F_0(P) is called the forward projection of the region P for the commanded velocity v̂_0. If C_valid is the whole configuration space, as in the example of Fig. 17, F_0(P) is the union of all the control uncertainty cones anchored in P. We say that a configuration q ∈ C_valid is 0-P-consistent with a sensed configuration q* and a sensed force f* iff q ∈ K_0(q*, f*) ∩ F_0(P). This definition captures the fact that if a motion commanded along v̂_0 is known to start from within a region P, then at any time t during the motion, the current configuration q is in both K_0(q*(t), f*(t)) and F_0(P). We write:

K_{0,P}(q*, f*) = K_0(q*, f*) ∩ F_0(P).

Two configurations q_1 and q_2 in C_valid are said to be 0-P-distinguishable iff:

{(q*, f*) | q_1, q_2 ∈ K_{0,P}(q*, f*)} = ∅.

The 0-P-kernel of T is defined as:

X_{0,P}(T) = {q ∈ T | ∀q' ∈ C_valid\T: q and q' are 0-P-distinguishable}.

When a preimage P of a goal T is constructed for a motion command M, it is known, by definition of the preimage backchaining process, that the command M will be executed from a starting configuration in P. (Indeed, if P does not include the initial region I, it is the recursive responsibility of the backchaining process to find a way to achieve P.) Hence, P can be computed as the backprojection of the 0-P-kernel, i.e., B_0(X_{0,P}(T)), with TC ≡ [K_{0,P}(q*(t), f*(t)) ⊆ T] as the termination condition. This condition is called a termination condition with initial state.

Example 7.1. Consider the example of Fig. 17 again. The region whose contour is labeled ABCDEFGH in Fig. 18 is a generalized polygon constructed as follows. The straight edge BC is at distance 2ρ_q from the top horizontal edge of T. The circular edges AB and CD are circular arcs of radius 2ρ_q centered at P and Q, respectively. P (respectively Q) is selected in the top horizontal edge of T such that the intersection of T and a line passing through P (respectively Q) and parallel to the left (respectively right) side of the control


Fig. 18. Using initial state.

uncertainty cone is a segment PP' (respectively QQ') of length 4ρ_q. The circular edges AH and DE are circular arcs of radius 2ρ_q with centers at P' and Q', respectively. The straight edge GF is at distance 2ρ_q from the bottom edge of T. The straight edges HG and EF are at distance 2ρ_q from the left and right edges of T, respectively. It is rather easy to verify that the region thus outlined is the kernel X_{0,P}(T) with P = B_0(X_{0,P}(T)). In particular, assume that at some instant during the motion the actual configuration is the extreme point marked A in the figure. All the possible sensed configurations at this instant lie in the disc of radius ρ_q centered at A. If the forward projection is not taken into account, the set of all the interpretations of all these measurements is the disc of radius 2ρ_q centered at A. The intersection of this disc with the forward projection F_0(P) is completely contained in T. Hence, the point A and any configuration outside T are 0-P-distinguishable, so that A belongs to X_{0,P}(T). The same kind of verification can be extended to the other vertices B through H, the straight and circular edges connecting these points, and the interior of the outlined area. The resulting preimage is larger than that shown in Fig. 17(b). It is also slightly larger than that given in [14] (but Erdmann's construction is not precisely defined). The region outlined in a dashed line depicts the set of sensed configurations for which the termination condition K_{0,P}(q*(t)) ⊆ T evaluates to true.
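The gain from 0-P-consistency can be sketched in one dimension: the interpretations of a sensed configuration are intersected with the forward projection before being tested against the goal. This is a toy illustration; all values and names are made up, not taken from the example above:

```python
def k0(q_star, rho_q):
    """Interpretations of a sensed configuration: the 1-D 'disc' of radius
    rho_q centered at q*."""
    return (q_star - rho_q, q_star + rho_q)

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def k0P(q_star, rho_q, fwd_proj):
    """K_{0,P}(q*) = K_0(q*) ∩ F_0(P): interpretations restricted to the
    forward projection of the start region P."""
    return intersect(k0(q_star, rho_q), fwd_proj)

def subset(a, b):
    return a is not None and b[0] <= a[0] and a[1] <= b[1]

goal, fwd = (4.0, 8.0), (4.0, 12.0)      # goal T and forward projection F_0(P)
print(subset(k0(4.5, 1.0), goal))        # False: K_0 alone is ambiguous
print(subset(k0P(4.5, 1.0, fwd), goal))  # True: the initial state resolves it
```

The second test succeeds exactly because interpretations lying outside the forward projection, like the configurations left of point A in Fig. 18, are ruled out.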

Example 7.2. Another example in which using a termination condition with initial state allows us to construct a larger preimage is shown in Fig. 19. The



Fig. 19. Using initial state (other example).

configuration space contains a single C-obstacle bounded by an infinite line (the contact space). The goal region is a finite segment included in that line (Fig. 19(a)). The preimages P_1 and P_2, respectively computed as the backprojections of the 0-kernel and the 0-P_2-kernel, are shown in Figs. 19(b) and 19(c). Note that P_2 is the maximal preimage, since it is equal to the backprojection of the goal.

The apparent difficulty of using a termination condition with initial state is that both P and TC are defined in terms of the preimage P that we want to construct. Canny [7] showed that, under reasonable assumptions, the problem of deciding whether or not a region P is a preimage with a termination condition with initial state can be reduced to a decision problem in the theory of the reals. Based on this result, he proposed a complete planning method that requires doubly exponential time in the number of motion commands in the generated strategy.

7.3. Termination condition with initial and final states

Any termination condition TC divides a forward projection F_0(P) into three regions which we denote by F_1, F_2 and F_3:
• F_1 consists of all the configurations q ∈ F_0(P) such that, for every possible sensed configuration q* and sensed force f* 0-consistent with q, TC evaluates to false.
• F_2 consists of all the configurations q ∈ F_0(P) such that, for every possible sensed configuration q* and sensed force f* 0-consistent with q, TC evaluates to true.
• F_3 = F_0(P)\(F_1 ∪ F_2), i.e., any configuration in F_3 may nondeterministically produce sensed configurations and forces for which TC evaluates to either true or false.
At every instant during a motion commanded along v̂_0 and issued from within P, if the current configuration is in F_1, the motion keeps going; if it is in


F_2, the motion stops; if it is in F_3, the motion may either continue or stop. In general, there exist subsets of F_0(P) that are inaccessible from P, because reaching them would require previously traversing F_2, where the motion would have been terminated. This is precisely why making the termination condition know itself increases its recognition power. Given a termination condition TC, a motion commanded along v̂_0 and starting from within P can reach a configuration q_a iff there exist a configuration q_s ∈ P and a valid path compatible with v̂_0 which connects q_s to q_a without traversing F_2, except possibly at q_a itself. Let us denote the set of all such configurations q_a by F_{0,TC}(P) ⊆ F_0(P). We say that a configuration q ∈ C_valid is 0-P-TC-consistent with a sensed configuration q* and a sensed force f* iff q ∈ K_0(q*, f*) ∩ F_{0,TC}(P). We write:

K_{0,P,TC}(q*, f*) = K_0(q*, f*) ∩ F_{0,TC}(P).

Two configurations q_1 and q_2 in C_valid are said to be 0-P-TC-distinguishable iff:

{(q*, f*) | q_1, q_2 ∈ K_{0,P,TC}(q*, f*)} = ∅.

The 0-P-TC-kernel of a goal T is defined as:

X_{0,P,TC}(T) = {q ∈ T | ∀q' ∈ C_valid\T: q and q' are 0-P-TC-distinguishable}.

The backprojection of the 0-P-TC-kernel of T is a preimage P of the goal T for a motion commanded along v̂_0, with TC ≡ [K_{0,P,TC}(q*(t), f*(t)) ⊆ T] as the termination condition. This condition is called a termination condition with initial and final states.

Example 7.3. We show below that using a termination condition with initial and final states makes it possible to construct a preimage larger than that of Fig. 18. The region with striped contour shown in Fig. 20 is a generalized polygon constructed as follows. Let R (respectively S) be the point in the top horizontal edge of T such that the intersection of T and a line passing through R (respectively S) and parallel to the left (respectively right) side of the control uncertainty cone is a segment RR' (respectively SS') of length 2ρ_q. The line



Fig. 20. Using initial and final states.


that forms the upper portion of the striped contour consists of two circular arcs of radius 2ρ_q, with respective centers R and S, and a straight segment at distance 2ρ_q from the top edge of the goal T. The rest of the striped contour is the lower part of T's boundary. The region thus outlined is the kernel X_{0,P,TC}(T) with P = B_0(X_{0,P,TC}(T)) and TC ≡ [K_{0,P,TC}(q*(t)) ⊆ T].

The region outlined in a dashed line depicts the set of sensed configurations for which the termination condition evaluates to true. One can verify that the region P is a preimage of T. Indeed, all the motions starting from within P are guaranteed to reach the goal (if they are not terminated before) and none of them can be terminated before the goal has been attained. In addition, none of these motions can leave the goal without being terminated. In order to see how the termination condition uses the knowledge of itself, assume that at some instant during the motion the actual configuration is the point designated by R' in the figure. All the possible sensed configurations at this instant lie in the disc of radius ρ_q centered at R'. If neither the forward projection nor the termination condition is taken into account, the set of all the interpretations of all these sensed configurations is the disc of radius 2ρ_q


centered at R'. The intersection of this disc with the forward projection F_0(P) contains a sector that is not contained in T. This sector (the striped sector in Fig. 20) can only be attained from P by crossing the segment marked R'R''. Since for any configuration in this segment the termination condition evaluates to true, the sector cannot be attained. The preimage built in Fig. 20 is substantially larger than that shown in Fig. 18.

A termination condition with final state is specified as a recursive function of itself. We do not know as yet whether a motion planner using such a termination condition can be effectively built.
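The partition {F_1, F_2, F_3} of the forward projection induced by a termination condition can be sketched in one dimension. This is a toy illustration with sampled sensed readings; the goal interval, uncertainty values, and function names are ours, not the paper's:

```python
def classify(tc_values):
    """Region of the forward projection a configuration belongs to: F1 if
    TC is false for every sensed reading 0-consistent with it, F2 if TC is
    true for every reading, F3 if the readings disagree."""
    if tc_values == {False}:
        return "F1"
    if tc_values == {True}:
        return "F2"
    return "F3"

# 1-D sketch: goal T = [4, 8], rho_q = 1. TC fires when the sensed
# configuration q* satisfies K_0(q*) = [q*-1, q*+1] ⊆ T, i.e., q* in [5, 7].
def tc(q_star):
    return 5.0 <= q_star <= 7.0

def outcomes(q, rho_q=1.0, samples=101):
    # sensed readings 0-consistent with q lie within rho_q of q (sampled)
    return {tc(q - rho_q + 2 * rho_q * k / (samples - 1)) for k in range(samples)}

print(classify(outcomes(3.0)))   # F1: no reading can trigger TC
print(classify(outcomes(6.0)))   # F2: every reading triggers TC
print(classify(outcomes(4.5)))   # F3: readings disagree
```

Configurations beyond F_2 are exactly those pruned from the forward projection when the termination condition is made to know itself.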

8. On the generation of conditional strategies

The backchaining procedure presented in Section 4 can only generate linear plans, i.e., sequences of motion commands. However, as noticed in [31, 33], some planning problems only admit conditional strategies as solutions. Consider the "point-on-hill" example illustrated in Fig. 21(a) (this example was first given in [33]). The configuration space contains a single, noncompact C-obstacle bounded by three edges: the top edge G, the left edge E_1, and the right edge E_2 (both E_1 and E_2 are semi-infinite lines). The goal region is the top edge G. The initial region I is all of C_valid. We assume perfect control (zero control uncertainty), no position sensing (ρ_q = ∞), perfect force sensing (ε_f = 0), and frictionless edges (friction angle φ = 0). No linear strategy can reliably achieve G from I. Nevertheless, a motion commanded from any starting configuration in I, with a vertical commanded velocity pointing downward and ||f*(t)|| > 0 as the termination condition, is guaranteed to terminate in one of the three edges G, E_1 and E_2. Although it is not possible to know in advance which one of these edges will be achieved, it is known at planning time that this will be decidable at execution time by comparing the orientation of the sensed force

Fig. 21. Point-on-hill problem.


when the contact is made, to the normal vectors of the three edges. In addition, it is easy to plan a motion that starts in E_1 (respectively E_2) and terminates in G recognizably. Hence, the point-on-hill problem admits a conditional strategy as solution.

Planning a conditional strategy requires the backchaining process to compute preimages of goal sets of the form ST = {T_1, ..., T_s}, where T_i (i ∈ [1, s]) is either the original goal G or a previously computed preimage. (The T_i's may not be disjoint.) Let M = (v̂_0, TC) be a motion command and RC_i, i = 1, ..., s, be conditions called recognition conditions. A preimage of ST for M and the RC_i's is any subset P of C_valid such that executing M from anywhere within P is guaranteed to reach ∪_{i=1}^s T_i and terminate in it in such a way that, when the motion terminates, at least one recognition condition evaluates to true and, if the condition RC_i (for any i ∈ [1, s]) evaluates to true, T_i has been achieved.

In the example of Fig. 21, the backchaining process would ideally proceed as follows:
• First, it would successively generate two preimages of G (P_1 and P_2) for two motions commanded along v̂_01 and v̂_02 (see Fig. 21(b)), both with angle(f*(t), ν(G)) = 0 as their termination conditions. Notice that E_1 ⊆ P_1 and E_2 ⊆ P_2.
• Second, it would generate P = C_valid as the preimage of the goal set {G, P_1, P_2} for a motion commanded along v̂_0 (see Fig. 21(b)), with ||f*(t)|| > 0 as the termination condition and angle(f*(t), ν) = 0, with ν = ν(G), ν(E_1) and ν(E_2), as the three recognition conditions.

More generally, in order to generate a conditional plan, the backchaining procedure operates as follows:

Step 1. It creates a set of goals SG and initializes it to {G}.
Step 2. It selects a subset ST of SG and computes a preimage P of ST.
Step 3. If I ⊆ P, it exits with success. Otherwise, it inserts P in SG as a new goal and goes back to Step 2.

There remains the problem of computing the preimage of a set of goals. This can be done by adapting the methods presented in Section 5. For example, the backprojection-from-goal-kernel method applies as follows. Let ST = {T_1, ..., T_s}, with s > 1, and let v̂_0 be the selected commanded velocity. The 0-kernel X_0(T_i) of every goal T_i, i = 1, ..., s, is first computed. A preimage of ST is then constructed as the backprojection of the union ∪_{i=1}^s X_0(T_i). The recognition condition RC_i is:

RC_i ≡ [K_0(q*(t), f*(t)) ⊆ T_i]

for i = 1 to s. The termination condition is:


TC ≡ RC_1 ∨ ··· ∨ RC_s.
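The execution-time dispatch for the point-on-hill strategy, i.e., evaluating the recognition conditions angle(f*, ν) = 0 against the three edge normals, can be sketched as follows (the edge normal values are illustrative, not taken from the figure):

```python
import math

# Edge normals for the point-on-hill example (illustrative values): the top
# edge G has its normal pointing straight up; E1 and E2 are the two slopes.
NORMALS = {"G": math.pi / 2,
           "E1": math.pi / 2 + 0.6,
           "E2": math.pi / 2 - 0.6}

def recognize(f_star_angle, tol=1e-9):
    """Evaluate the recognition conditions angle(f*, nu(edge)) = 0: return
    the edge whose normal matches the sensed force direction (with perfect
    force sensing and frictionless edges, the reaction force is exactly
    the edge normal)."""
    for edge, normal in NORMALS.items():
        if abs(f_star_angle - normal) <= tol:
            return edge
    return None

print(recognize(math.pi / 2))         # 'G'
print(recognize(math.pi / 2 + 0.6))   # 'E1'
```

At most one recognition condition fires because the three normals are distinct and force sensing is assumed perfect; with sensing uncertainty ε_f > 0, the tolerance would have to grow accordingly and the normals would need to stay more than 2ε_f apart.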

A planner can be constructed by restricting the possible orientations of the commanded velocity to a finite set of predetermined orientations. However, the number of goal sets ST to be considered grows exponentially with the number of preimages generated since the beginning of the backchaining process. Additional techniques for selecting velocities and guiding the choice of goal sets remain to be developed.

9. Conclusion

In this paper, we addressed the problem of planning motion strategies in a two-dimensional configuration space in the presence of uncertainty in robot control and sensing. We established a precise formalization of the problem and we considered the preimage backchaining approach to it. In response to one of the main difficulties raised by this general approach, we described in detail two effective methods for computing preimages: backprojection from sticking edges and backprojection from the goal kernel. In general, the second method computes larger preimages than the first; in a few cases, however, the contrary holds. The two methods can be combined into what we think is the most powerful practical method developed so far for computing preimages. A motion planner based on these methods was implemented and we experimented with it in simulation on problems of reasonable complexity.

We discussed two non-implemented improvements of the planner. One improvement consists of increasing the recognition power of the termination predicate of a motion command by embedding in it the knowledge of both the preimage and itself (termination predicate with initial and final states). On a specific example, we showed how this knowledge makes it possible to construct significantly larger preimages. The other improvement extends the preimage backchaining approach to the generation of conditional strategies. We proposed a full computational framework for effectively planning such strategies.

An important shortcoming of the implemented planner is that it blindly discretizes motion directions in order to build the preimage graph and that it uses no heuristics to guide the search of this graph. This is acceptable only for rather simple problems. Although there exist some straightforward heuristics (e.g., move in priority in the direction of the goal), they would probably be easily deceived.
Donald [12] proposed a method based on the notion of "nondirectional backprojection" for computing a motion command whose execution is guaranteed to reach a goal from a given region, if one such command exists. The method relies on the fact that the topology of the


backprojection of a region can change at a finite number of "critical" orientations of the commanded velocity, and that the containment of the initial region in a backprojection changes at a finite number of "pseudo-critical" orientations where an edge of the current backprojection is tangent to the initial region. It computes the backprojections at these critical and pseudo-critical orientations, and tests the backprojection for containment of the initial region at each pseudo-critical orientation. The method can be used in connection with the preimage computation methods described in Section 5 in order to generate a one-step motion strategy, without blind discretization and search, whenever one exists. This is done by noticing that both the set of sticking edges in the goal and the 0-kernel of the goal change at a finite number of orientations of the commanded velocity. Hence, one can compute all the intervals of directions in which the sticking edges and the 0-kernel remain constant, and determine for each interval whether there is a backprojection of the region formed by the corresponding sticking edges and 0-kernel that contains the initial region. However, extending the approach to r-step strategies would require dealing with (pseudo-)critical (r−1)-dimensional surfaces rather than just orientations, and it might be quite complicated in practice.

Another major limitation of our planner is that it requires the robot's configuration space to be a two-dimensional Euclidean space. The general principles of the planning methods immediately extend to higher-dimensional spaces, but the detailed geometric algorithms do not. One may seek to devise "exact" algorithms by, say, reducing planning to an algebraic decision problem (e.g., [7]). Such algorithms certainly provide upper bounds on the complexity of motion planning with uncertainty, but they are unlikely to be practical solutions to this problem.
More pragmatic, but still systematic, methods - perhaps in the vein of those described in [2] - remain to be developed.

Acknowledgement

This research was funded by DARPA contract DAAA21-89-C0002 (Army), CIS (Center for Integrated Systems), and SIMA (Stanford Institute of Manufacturing and Automation). The paper has benefited from discussions with Bruce Donald and Mike Erdmann, and from suggestions by Randy Wilson.

References

[1] V.I. Arnold, Mathematical Methods of Classical Mechanics (Springer, New York, 1978).
[2] J. Barraquand and J.C. Latombe, Robot motion planning: a distributed representation approach, Report No. STAN-CS-89-1257, Department of Computer Science, Stanford University, Stanford, CA (1989); also: Int. J. Rob. Res. (to appear).
[3] R.A. Brooks, Symbolic error analysis and robot planning, Int. J. Rob. Res. 1 (4) (1982) 29-68.
[4] R.A. Brooks and T. Lozano-Pérez, A subdivision algorithm in configuration space for findpath with rotation, in: Proceedings IJCAI-83, Karlsruhe, FRG (1983) 799-806.
[5] S.J. Buckley, Planning and teaching compliant motion strategies, Ph.D. Dissertation, Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA (1986).
[6] J.F. Canny, The Complexity of Robot Motion Planning (MIT Press, Cambridge, MA, 1988).
[7] J.F. Canny, On computability of fine motion plans, in: Proceedings IEEE International Conference on Robotics and Automation, Scottsdale, AZ (1989) 177-182.
[8] J.F. Canny and J. Reif, New lower bound techniques for robot motion planning problems, in: Proceedings 28th IEEE Symposium on Foundations of Computer Science, Los Angeles, CA (1987) 49-60.
[9] R.S. Desai, On fine motion in mechanical assembly in presence of uncertainty, Ph.D. Dissertation, Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI (1988).
[10] B.R. Donald, Error detection and recovery for robot motion planning with uncertainty, Ph.D. Dissertation, Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA (1987).
[11] B.R. Donald, A geometric approach to error detection and recovery for robot motion planning with uncertainty, Artif. Intell. 37 (1988) 223-271.
[12] B.R. Donald, The complexity of planar compliant motion planning under uncertainty, in: Proceedings Fourth ACM Symposium on Computational Geometry, Urbana, IL (1988) 309-318.
[13] B. Dufay and J.C. Latombe, An approach to automatic robot programming based on inductive learning, Int. J. Rob. Res. 3 (4) (1984) 3-20.
[14] M. Erdmann, On motion planning with uncertainty, Tech. Report 810, AI Laboratory, MIT, Cambridge, MA (1984).
[15] M. Erdmann, Using backprojections for fine motion planning with uncertainty, Int. J. Rob. Res. 5 (1) (1986) 19-45.
[16] M. Erdmann and M.T. Mason, An exploration of sensorless manipulation, in: Proceedings IEEE International Conference on Robotics and Automation, San Francisco, CA (1986) 1569-1574.
[17] B. Faverjon and P. Tournassoud, A local based approach for path planning of manipulators with a high number of degrees of freedom, in: Proceedings IEEE International Conference on Robotics and Automation, Raleigh, NC (1987) 1152-1159.
[18] L. Gouzènes, Strategies for solving collision-free trajectories problems for mobile and manipulator robots, Int. J. Rob. Res. 3 (4) (1984) 51-65.
[19] J.E. Hopcroft and G. Wilfong, Motion of objects in contact, Int. J. Rob. Res. 4 (4) (1986) 32-46.
[20] O. Khatib, Real-time obstacle avoidance for manipulators and mobile robots, Int. J. Rob. Res. 5 (1) (1986) 90-98.
[21] A. Koutsou, Planning motion in contact to achieve parts mating, Ph.D. Dissertation, Department of Computer Science, University of Edinburgh, Scotland (1986).
[22] J.C. Latombe, Motion planning with uncertainty: the preimage backchaining approach, Report No. STAN-CS-88-1196, Department of Computer Science, Stanford University, Stanford, CA (1988).
[23] J.C. Latombe, Robot Motion Planning (Kluwer Academic Publishers, Boston, MA, 1991).
[24] C. Laugier, Planning fine motion strategies by reasoning in the contact space, in: Proceedings IEEE International Conference on Robotics and Automation, Scottsdale, AZ (1989) 653-659.
[25] C. Laugier and F. Germain, An adaptative collision-free trajectory planner, in: Proceedings International Conference on Advanced Robotics, Tokyo, Japan (1985).
[26] C. Laugier and P. Théveneau, Planning sensor-based motions for part-mating using geometric reasoning techniques, in: Proceedings ECAI-86, Brighton, England (1986).
[27] L.I. Liebermann and M.A. Wesley, AUTOPASS: an automatic programming system for computer controlled mechanical assembly, IBM J. Res. Dev. 21 (4) (1977) 321-333.
[28] T. Lozano-Pérez, The design of a mechanical assembly system, Tech. Report AI-TR 397, Artificial Intelligence Laboratory, MIT, Cambridge, MA (1976).
[29] T. Lozano-Pérez, Spatial planning: a configuration space approach, IEEE Trans. Comput. 32 (2) (1983) 108-120.
[30] T. Lozano-Pérez, A simple motion-planning algorithm for general robot manipulators, IEEE J. Rob. Autom. 3 (3) (1987) 224-238.
[31] T. Lozano-Pérez, M.T. Mason and R.H. Taylor, Automatic synthesis of fine-motion strategies for robots, Int. J. Rob. Res. 3 (1) (1984) 3-24.
[32] M.T. Mason, Compliance and force control for computer controlled manipulators, IEEE Trans. Syst. Man Cybern. 11 (6) (1981) 418-432.
[33] M.T. Mason, Automatic planning of fine motions: correctness and completeness, in: Proceedings IEEE International Conference on Robotics and Automation, Atlanta, GA (1984) 492-503.
[34] N.J. Nilsson, Principles of Artificial Intelligence (Morgan Kaufmann, Los Altos, CA, 1980).
[35] J. Pertin-Troccaz and P. Puget, Dealing with uncertainty in robot planning using program proving techniques, in: R.C. Bolles and B. Roth, eds., Robotics Research 4 (MIT Press, Cambridge, MA, 1987) 455-466.
[36] M.H. Raibert and J.J. Craig, Hybrid position/force control of manipulators, J. Dyn. Syst. Measurement and Control 102 (1981) 3-18.
[37] V.T. Rajan, R. Burridge and J.T. Schwartz, Dynamics of a rigid body in frictional contact with rigid walls, in: Proceedings IEEE International Conference on Robotics and Automation, Raleigh, NC (1987) 671-677.
[38] J.H. Reif, Complexity of the mover's problem and generalizations, in: Proceedings 20th Symposium on the Foundations of Computer Science (1979) 421-427.
[39] J.T. Schwartz and M. Sharir, On the piano mover's problem II: general techniques for computing topological properties of real algebraic manifolds, Adv. Appl. Math. 4 (1983) 298-351.
[40] J.T. Schwartz and M. Sharir, A survey of motion planning and related geometric algorithms, Artif. Intell. 37 (1988) 157-169.
[41] M. Sharir, Efficient algorithms for planning purely translational collision-free motion in two and three dimensions, in: Proceedings IEEE International Conference on Robotics and Automation, Raleigh, NC (1987) 1326-1331.
[42] S. Shekhar and O. Khatib, Force strategies in real-time fine motion assembly, in: Proceedings ASME Winter Annual Meeting, Boston, MA (1987).
[43] R.H. Taylor, Synthesis of manipulator control programs from task-level specifications, Ph.D. Dissertation, Department of Computer Science, Stanford University, Stanford, CA (1976).
[44] J. Valade, Automatic generation of trajectories for assembly tasks, in: Proceedings ECAI-84, Pisa, Italy (1984).
[45] R. Waldinger, Achieving several goals simultaneously, in: E. Elcock and D. Michie, eds., Machine Intelligence 8 (Ellis Horwood, Chichester, England, 1975) 94-136.
[46] D. Whitney, Force feedback control of manipulator fine motions, J. Dyn. Syst. Measurement and Control 99 (1977) 91-97.