Visually guided navigation*


Robotics and Autonomous Systems 21 (1997) 37-50

Jana Košecká 1
Robotics Laboratory, Department of Electrical Engineering and Computer Science, 333 Cory Hall 98, UC Berkeley, Berkeley, CA 94720-1772, USA

* This work has been carried out in the GRASP laboratory at the University of Pennsylvania, supported by grants ARMY DAAH4-96-1-007, ARPA N00014-9291647 and NSF IRI93-07126 and SBR89-2023.
1 E-mail: [email protected].

Abstract

Rich sensory information, robust control strategies and a proper representation of the environment are crucial for successful navigation of a mobile robot. We propose a model of the environment which is suitable for global navigation using visual sensing. At the lowest level of interaction of the robot with the environment we employ visual servoing techniques which facilitate robust local navigation by means of relative positioning. Further, we demonstrate how to use these local control strategies in a global setting. The environment is represented in terms of a place graph, where the nodes correspond to places and the arcs have associated servoing strategies for moving from one place to another. Global navigation is expressed as a sequence of relative positioning tasks obtainable from the search of the place graph. The proposed model facilitates the generation of motion plans which can be executed in a robust manner thanks to the presence of sensory information in the feedback loop.

Keywords: Visual servoing; Motion planning; Navigation

1. Introduction

Robust and reliable navigation of a mobile agent in the environment involves several subproblems. Depending on the extent to which the model of the environment is known, the navigation task consists of sensing the environment and planning the motion given a model. The difficulties of the off-line motion planning methods lie not only in the requirement for an a priori given model of the world, but also in the fact that it is very hard to deduce an efficient control scheme which would follow the planned path robustly with regard to the uncertainties in the sensing and the perturbations due to the imperfect modeling of the robot and the environment. Numerous approaches have been proposed in the past, emphasizing different aspects of the global navigation problem. The approaches vary in the type of sensing used as well as the amount of a priori information about the environment. In the motion planning literature [22] the authors typically assume that the model of the environment is known and that the odometric readings provide an accurate estimate of the mobile robot's position in the global coordinate frame. Very few, with the exception of the potential field based approaches (e.g., [10,18,30]), provide an actual control strategy for achieving the goal. In some cases even though the assumption about the exact model of
the environment can be relaxed [17,25], perfect knowledge of the goal and the global position of the mobile robot is still required. The models of the environment which are typically employed vary from quantitative to more qualitative ones. The quantitative models are described in terms of some geometric primitives, while the qualitative models capture the neighborhood relationship between different regions of the free space. Most of the geometric approaches concentrate on the localization problem: given a set of geometric primitives suitable for representing the environment and some a priori knowledge about the environment, the problem is to determine the pose of the mobile robot with respect to some stationary coordinate frame. Handling of the uncertainty associated with the sensory measurements and the uncertainty in the position estimate is done using various techniques from estimation theory. The localization problem is typically formulated in the Kalman filter setting or a similar recursive estimation framework with a partial explicit model of the environment expressed in terms of geometric features. In the case of ultrasound sensing the difficulty of the localization problem lies in establishing the correspondences between the sensed features and the features existing in the a priori model of the environment [6,24,26]. The Kalman filter has also been used for fusion of binocular and trinocular images between different views by Ayache and Faugeras [2,3]. The main and appealing objective was to achieve a representation of the environment which would capture both the geometry and the uncertainty. In [3] the authors developed an approach for combining the uncertainty arising from the imaging process with the uncertainty arising from the measure of displacement of the mobile base. Processing was based on points, lines and planes, which were chosen as the primitive parts of the environment model. The above-mentioned models based on geometric features handle uncertainty well. However, they do not address control and task related issues. This problem was partially alleviated by methods embedded in the theory of dynamical systems which emphasize the control issues, while still pertaining to the notion of an explicit model of the environment. The geometrical structure of the environment is either expressed explicitly [30] or merely enters the problem in a form which is convenient for control purposes [7]. Both the potential field methods and the dynamical systems approach concentrate mainly on the control issues and assume near to ideal sensing. Somewhat orthogonal efforts to the ones mentioned so far attempt to capture the structure of the environment in terms of relational maps [8,15,21,27,34] represented in the form of a graph. The nodes of the graph usually represent distinctive landmarks or places and the arcs represent the neighborhood relation between them and are typically labeled by the actions which need to be executed in order to get from one location to another. The choice of distinctive places/landmarks usually transfers the difficulty of establishing the correspondence between the measured data and the a priori model to the problem of recognition. These approaches often resulted in successful working systems but failed to be generalizable due to the weak link between the relational model of the environment and its intrinsic geometrical structure.
Visual servoing approaches [12] provide a suitable platform for taking into account sensing, incorporating the geometric assumptions about the environment and deriving stable control strategies for achieving the goal. Either by using geometric primitives alone [9] or by relating the parameters of the optical flow model with the motion of the mobile base [33], various local navigational tasks can be accomplished in a robust way. The issues pertaining to global navigation have so far not been addressed within this paradigm. We propose, given some a priori information about the environment, a model which employs certain geometric features in the environment and their spatial relationships. The model combines the geometry of the environment and the control of the mobile robot in such a manner that global navigation can be done in a robust and reliable way. We formulate the global navigation as a sequence of relative positioning problems, where the relative positioning is accomplished by a sensory based closed-loop strategy with respect to the environment. We show how to use visual sensing to derive such strategies and how to partition the model of the environment in such a manner that there exists an appropriate strategy for every configuration of the mobile robot. 2

2 The last condition of the existence of an appropriate closed-loop strategy for every point in the robot's configuration space is not necessary for successful navigation, since certain navigational tasks can be accomplished in an open-loop fashion.

By choosing distinctive landmarks in the environment and geometric features associated with them, the environment is partitioned into a set of visibility regions. Within these regions the mobile robot can either be reliably positioned with respect to a set of visible landmarks or can follow a boundary visible from a given region, leading to a landmark. The spatial relationship between visibility regions (places) is represented in a "place graph". The goals of the mobile robot are specified in terms of places which correspond to the nodes of the place graph, and the particular sequence of places which need to be visited while moving from one location to another results from the search of the place graph (using e.g. Dijkstra's algorithm). We will employ the visual servoing paradigm for accomplishment of relative positioning tasks and propose a partitioning of the environment in terms of visibility regions given its 2D polygonal model.

2. Visual servoing

Visual servoing generally refers to instances when the vision sensor is used in a closed loop with the control system. Existing visual servoing approaches are typically partitioned into two different categories: look-and-move and servo approaches. In the former case the actual 3D position of the sensor is computed first and the control law is derived in the task space, while "servo" approaches are typically image based. Various techniques have been successfully applied in hand-eye coordination problems, manipulation [1,11,14] and more recently in mobile robotics [29,33]. A more detailed overview of different approaches can be found in [13]. Our approach follows the theory developed in [9,32] and is referred to as the task function approach. The overall philosophy of this approach is to formalize a servoing task as regulating a particular error function to zero. The visual servoing task is represented by a particular alignment of the camera and the environment (target) frames (see Fig. 1) and the overall approach consists of three steps: (i) defining a set of features in the image, (ii) specifying their desired goal configuration, and (iii) applying a control law that ensures the convergence from any initial configuration to the desired goal configuration in the image. Suppose the camera measurements (sensor output) form a vector s = [s_1, ..., s_k]^T, where s_j depends only on the relative pose between F_C and F_T, and the pose of the camera C is an element r of SE(3): s_j = s_j(F_C, F_T) = s_j(r) ∈ R. The relation between the changing pose r and the sensory measurement is expressed as a differential of the above equation and can be written in the familiar form

    ṡ_j = L_j^T T_CT,    (1)

where T_CT = [v_x, v_y, v_z, ω_x, ω_y, ω_z]^T is the velocity screw between the sensor and the environment, V = [v_x, v_y, v_z] and Ω = [ω_x, ω_y, ω_z] are vectors corresponding to the translational and rotational velocities, and L_j^T is the Jacobian of the elementary sensor output s_j given the displacement T_CT between the camera and the environment.

Fig. 1. Relative relationship between the camera frame F_C and the target frame F_T. The normalized camera model with unit focal length and optical axis along the z axis is considered.

The robot task can then be formulated as a regulation of an output function e(r, t) to 0 during the time interval [0, T]. The generic form of the task (output) function can be written as e(r, t) = C(s(r, t) - s_d(t)), with s_d being the desired value or trajectory of the sensory input and s(r, t) the current value of the sensor measurement. The matrix C allows us to take into account more measurements than the actual number of task constraints. It can be shown that a simple gradient based scheme [31] is sufficient to ensure exponential convergence of the output to zero. The control law chosen has the following form: T_d = -λ e(r, t), where λ is a positive gain factor. In order to guarantee the local convergence of the above control law, the error has to be decreasing, i.e., ė < 0. The error rate is expressed as

    ė(r, t) = (∂e/∂s)(∂s/∂r) T_CT + ∂e/∂t,

which can be rewritten using Eq. (1):

    ė(r, t) = C L^T T_CT + ∂e/∂t,

where ∂e/∂t represents eventual motion of the target. In the case when ∂e/∂t = 0 we write

    ė(r, t) = C L^T T_CT = -λ C L^T e(r, t).

The exponential convergence of the error function will be ensured under the condition C L^T > 0. A simple way to satisfy this condition is to choose for the matrix C the pseudoinverse of the interaction matrix computed at the value s_d, C = (L^T_{s=s_d})^+. The resulting control law for the desired velocity screw has the following form:

    T_d = -λ (L^T_{s=s_d})^+ (s(r) - s_d).    (2)
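As an illustration, Eq. (2) amounts to a pseudoinverse of the interaction matrix applied to the image-space error. The following Python sketch is not the original implementation; numpy and the availability of the interaction matrix evaluated at s_d are assumed:

    import numpy as np

    def servo_velocity(s, s_d, L_sd, lam=0.5):
        """Task-function visual servoing law, Eq. (2):
        T_d = -lambda * pinv(L_sd) * (s - s_d).

        s    : current image measurements, shape (k,)
        s_d  : desired image measurements, shape (k,)
        L_sd : interaction matrix evaluated at s_d, shape (k, 6)
        lam  : positive gain factor
        Returns the desired velocity screw [vx, vy, vz, wx, wy, wz].
        """
        error = s - s_d
        # The Moore-Penrose pseudoinverse plays the role of the matrix C above.
        return -lam * np.linalg.pinv(L_sd) @ error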

In order to apply this framework to the global navigation problem, a set of geometric primitives has to be chosen, and the desired goal configuration of the primitives in the image plane and the associated interaction matrix L^T_{s=s_d} are determined. The form of the interaction matrix for point features is:

    L^T_{x_i} = [ -1/Z   0     x/Z   xy      -(1+x²)   y ],
    L^T_{y_i} = [  0    -1/Z   y/Z   1+y²    -xy      -x ],    (3)

which corresponds to the well-known optical flow equation written in matrix form. The derivation of the interaction matrix for line features can be found in [9,19]. The desired velocity screw is related to the error measured in the image plane and represents the commands to the actuators needed for accomplishment of the task. Choosing points and lines as elementary features we derive the appropriate control laws for the nonholonomic mobile base with the pan platform. Further on we formulate the global navigation problem as a sequence of door servoing and wall following tasks.

2.1. Kinematic model of the mobile camera platform

Since the desired velocity screw T_d is that of the camera frame with respect to the environment, T_CT, we need to derive the relationship between the motion of the mobile platform with associated frame F_M and the motion of the camera frame F_C. The angular and linear velocity of the camera frame F_C, expressed in F_C, are:

    ^C Ω_C = ^C_M R Ω_M,
    ^C v_C = ^C_M R (V_M + Ω_M × P_CO),    (4)

where V_M is a vector representing the linear velocity of the mobile base, Ω_M a vector representing the angular velocity of the mobile base with respect to some fixed frame F_O, ^M_C R the rotation matrix of the frame F_C with respect to F_M, and P_CO = [d_x, d_y, d_z]^T is the coordinate of the origin of frame F_C with respect to F_M. For a detailed derivation of Eq. (5) see [5]. Matrix ^C_M R corresponds to the relative rotation of F_M with respect to F_C. In general it holds that ^C_M R = (^M_C R)^T. The rotation matrix ^M_C R in our setting comes to

    ^M_C R = [ -sin ψ    sin φ cos ψ    cos φ cos ψ
                cos ψ    sin φ sin ψ    cos φ sin ψ
                0        cos φ         -sin φ     ],

where the angles φ and ψ correspond to the tilt and pan angles of the camera, respectively.

Fig. 2. Kinematic model of the platform.

Due to the nonholonomic constraints of our vehicle, the motion of the mobile base is constrained to forward translation along x_M with linear velocity v_M and rotation around the vertical axis z_M with angular velocity ω_M (see Fig. 2). The linear and angular velocities of the mobile base can then be written as V_M = [v_M, 0, 0]^T and Ω_M = [0, 0, ω_M]^T. The velocities v_M and ω_M correspond to the actual control inputs of the mobile platform. Depending on the relative position of the camera with respect to the mobile base, the desired velocity screw of the camera in the camera coordinate frame can be computed. In the case P_CO = [d_x, 0, d_z]^T and pan angle ψ = 0 (i.e., the camera pointing straight ahead), the T_CT of the camera is:

    T_CT = [ d_x ω_M,  v_M sin φ,  v_M cos φ,  0,  ω_M cos φ,  -ω_M sin φ ]^T.    (5)

In the general case where P_CO = [d_x, d_y, d_z]^T the T_CT of the camera is

    T_CT = [ d_x ω_M cos ψ - (v_M - d_y ω_M) sin ψ,
             (v_M - d_y ω_M) cos ψ sin φ + d_x ω_M sin ψ sin φ,
             (v_M - d_y ω_M) cos ψ cos φ + d_x ω_M sin ψ cos φ,
             0,
             ω_M cos φ,
             -ω_M sin φ ]^T.
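The kinematic relationship above can be made concrete with a short sketch (illustrative only; numpy is assumed, and the pan/tilt rotation matrix follows the reconstruction given in the text):

    import numpy as np

    def camera_velocity_screw(v_m, w_m, tilt, pan, d=(0.0, 0.0, 0.0)):
        """Velocity screw of the camera frame induced by the mobile base.

        v_m, w_m : forward and turning velocity of the nonholonomic base
        tilt, pan: camera tilt (phi) and pan (psi) angles [rad]
        d        : offset P_CO = (dx, dy, dz) of the camera origin in F_M
        Returns [vx, vy, vz, wx, wy, wz] expressed in the camera frame.
        """
        dx, dy, dz = d
        # Rotation of F_M into F_C (transpose of the pan/tilt matrix in the text).
        R_mc = np.array([
            [-np.sin(pan),                np.cos(pan),                0.0],
            [ np.sin(tilt) * np.cos(pan), np.sin(tilt) * np.sin(pan), np.cos(tilt)],
            [ np.cos(tilt) * np.cos(pan), np.cos(tilt) * np.sin(pan), -np.sin(tilt)],
        ])
        omega_m = np.array([0.0, 0.0, w_m])   # base rotates about z_M
        vel_m = np.array([v_m, 0.0, 0.0])     # base translates along x_M
        v_c = R_mc @ (vel_m + np.cross(omega_m, np.array([dx, dy, dz])))
        w_c = R_mc @ omega_m
        return np.concatenate([v_c, w_c])

With d_y = d_z = 0 and pan = 0 this reduces to Eq. (5).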

Individual servoing tasks are preceded by the choice of geometric features and their initial and desired positions in the image plane. Once the features to be used have been determined, the visual servoing scheme for generating the appropriate velocity control for the camera frame can be directly applied according to Eq. (2). In the course of doing so one has to make sure that the set of chosen features provides enough constraints to accomplish the task. 3 In the following we outline two relative positioning tasks, which can be accomplished by means of visual servoing.

3 This is done by checking the rank of the interaction matrix.
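For point features, Eq. (3) and the rank check of footnote 3 might be implemented along the following lines (a sketch under the normalized-camera assumption; the numerical values are invented):

    import numpy as np

    def point_interaction_matrix(x, y, Z):
        """Interaction matrix of a normalized image point (x, y) at depth Z, Eq. (3)."""
        return np.array([
            [-1.0 / Z,  0.0,      x / Z,  x * y,       -(1.0 + x * x),  y],
            [ 0.0,     -1.0 / Z,  y / Z,  1.0 + y * y, -x * y,         -x],
        ])

    def stacked_interaction_matrix(points, depths):
        """Stack the 2x6 blocks of several point features into one (2k x 6) matrix."""
        return np.vstack([point_interaction_matrix(x, y, Z)
                          for (x, y), Z in zip(points, depths)])

    # Footnote 3: the chosen features constrain the task only if the stacked
    # interaction matrix has sufficient rank for the controlled degrees of freedom.
    L = stacked_interaction_matrix([(0.1, -0.05), (-0.2, 0.0)], [3.0, 3.5])
    print(np.linalg.matrix_rank(L))   # rank of the 4x6 interaction matrix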

Fig. 3. Initial and desired view of the straight line feature. The line corresponds to the intersection of the ground plane and the vertical wall plane.

2.2. Wall following

This task can be characterized as the task of aligning the camera frame F_C with a wall, which is characterized as a straight line resulting from the intersection of the ground plane and a vertical wall plane perpendicular to it. Starting from any initial configuration where the line is visible, the task is to position the mobile base so that it is aligned with the wall at a particular distance from it. An example image pair of a possible initial and final configuration is shown in Fig. 3. For the kinematic model of the mobile base the interaction matrix relating the motion of the image features and the velocity screw is singular at the desired configuration, which corresponds to the situation when the mobile base is aligned with the wall at the correct distance. This intuitively corresponds to the fact that in those configurations the linear velocity of the base does not influence the motion of the image features. By setting the linear velocity to some positive desired value, we obtain the control law for the angular velocity in the following form:

    ω_M = -k_p (p - p_d),    (6)

where the constant k_p is a positive gain factor dependent on the geometry of the relative relationship between the camera frame F_C and the desired position of the line. The results of simulations from several initial configurations are shown in Fig. 4.
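A minimal sketch of the wall following law of Eq. (6) follows (the line parameter p is assumed to be extracted from the image by a separate tracking routine; the gain and forward speed are invented):

    def wall_following_command(p, p_d, k_p=0.8, v_forward=0.2):
        """Wall following, Eq. (6): fixed forward speed, angular velocity
        proportional to the error in the observed line parameter.

        p, p_d    : current and desired image-line parameter
        k_p       : positive gain factor
        v_forward : constant linear velocity of the base [m/s]
        Returns (v_M, omega_M) commands for the mobile base.
        """
        omega = -k_p * (p - p_d)
        return v_forward, omega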

2.3. Door servoing

For the door servoing we also employ the pan platform, providing an additional degree of freedom: the pan angle ψ of the camera mounted on the turntable. This allows search for the doorway over an extended field of view. The objective of the door servoing task is to position the mobile robot with respect to the doorway so it can pass through it. The control needs to take into account the nonholonomic constraints of the vehicle in a slightly different way. The doorway is defined by two points at the same height above the ground plane located at the door frame. A sample view of a doorway from an initial and desired configuration is shown in Fig. 5. In this scenario the control of the mobile base and the control of the pan platform are decoupled. Suppose that we want to position the mobile base (i.e., the origin of the base frame F_M) relative to the doorway at position P = [X, Y, 0]^T in the global coordinate frame in front of the doorway, with arbitrary orientation. 4 The linear velocity v_C of the origin of the camera frame F_C will be the same as the linear velocity of the mobile base V_M, namely:

    V_M = u_x x̂ + u_y ŷ.

4 The reason why the orientation is omitted from the specification of the desired goal configuration is so we can employ the closed-loop control law for generation of velocities in x and y as if it were a holonomic mobile base. The computed velocities are then projected to the current heading direction.
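The projection described in footnote 4 might look as follows (an illustrative sketch with invented gains, not the controller used in the experiments):

    import numpy as np

    def project_to_heading(goal_xy, pose_xytheta, k_v=0.5, k_w=1.0):
        """Generate (v_M, omega_M) by projecting a holonomic velocity command
        onto the current heading direction of the nonholonomic base.

        goal_xy      : desired position of the base frame in the plane
        pose_xytheta : current (x, y, theta) of the base
        """
        x, y, theta = pose_xytheta
        # Holonomic proportional law toward the goal, as if the base could move freely.
        v_des = k_v * (np.asarray(goal_xy, dtype=float) - np.array([x, y]))
        heading = np.array([np.cos(theta), np.sin(theta)])
        v_forward = float(v_des @ heading)            # component along the heading
        # Turn toward the direction of the desired velocity.
        angle_err = np.arctan2(v_des[1], v_des[0]) - theta
        angle_err = np.arctan2(np.sin(angle_err), np.cos(angle_err))  # wrap to [-pi, pi]
        return v_forward, k_w * angle_err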

[Fig. 4 panels: wall following trajectories recorded for initial headings of -30°, -120°, 0° and -90°; axis units in millimeters.]

Fig. 4. Trajectories for the wall following task. Several runs from various initial configurations were recorded based on the odometric readings. The desired position of the mobile base was in the end reached within a given threshold. The units on both axes are in millimeters and are scaled differently. The individual plots correspond to different initial configurations with different initial headings. The dashed line is the desired line along which the robot should align.

Fig. 5. Initial and desired view of the doorway features.

It is a known fact that given the coordinates of two points one can compute the pose of the camera with respect to them. By estimating the pose of the camera, the velocity control is governed by the control law based on the simple gradient based scheme which was introduced in [19]. Another approach, which would bypass the computation of the pose, is to use a different representation of the pose of the mobile base with respect to the doorway. The pose
can be expressed in terms of the visual angle α and depth ratio β between the two points, as proposed by Odobez [28]. The desired linear velocity can then be computed by differentiating this relationship. The angular velocity of the camera sensor frame ω_C around z_M is derived using the visual servoing paradigm. In this case we take into account the x_m coordinate of the midpoint between the features in the image, since it is the one affected by the overall turning velocity ω. The error for the door servoing in the x direction in the image has the following form:

    ė = ẋ_m = (-(1 + x_m²) cos φ - y_m sin φ) ω + ∂x_m/∂t,    (7)

where ∂x_m/∂t is the contribution due to the camera translation and ω is the total angular velocity of the camera frame around the z_M axis, which is due to both the angular velocity of the platform ω_M and the angular velocity of the turntable ω_C: ω = ω_C + ω_M. The desired angular velocity can then be computed from Eq. (7) and partitioned into the contributions of the mobile base and the pan platform.
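The angular part of the door servoing could be sketched as below; the even split between base and turntable is an assumed policy for illustration, not something prescribed by the text:

    import numpy as np

    def door_angular_velocities(x_m, y_m, tilt, dx_dt, k=1.0, base_fraction=0.5):
        """Total turning rate from Eq. (7) and its partition into base and pan motion.

        x_m, y_m : image coordinates of the midpoint between the two door features
        tilt     : camera tilt angle phi [rad]
        dx_dt    : contribution of the camera translation to the image motion
        k        : positive gain driving x_m exponentially to zero
        base_fraction : assumed share of the turning assigned to the mobile base
        """
        # Coefficient multiplying omega in Eq. (7).
        j_omega = -(1.0 + x_m ** 2) * np.cos(tilt) - y_m * np.sin(tilt)
        omega_total = (-k * x_m - dx_dt) / j_omega   # so that xdot_m = -k * x_m
        omega_base = base_fraction * omega_total
        omega_pan = omega_total - omega_base         # the turntable provides the rest
        return omega_base, omega_pan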

3. Relational model

In the previous sections we outlined two relative positioning strategies for the camera mounted on the mobile platform with respect to the environment. The choice of the model of the environment and the use of these elementary strategies within the model is motivated by the tasks a mobile robot is typically involved in. One type of task is the so-called find-and-fetch or delivery task, where the robot is asked to search for an object it can recognize (e.g., a barrel of chemical waste, a pipe) and deliver it to a desired location. In such a scenario the robot needs the capability of systematically exploring the environment and searching for the object, while building a model of the environment [35]. In order to be able to accomplish such a task, the capabilities of the robot involved in delivery tasks must include: recognizing a set of distinctive landmarks, localizing itself with respect to a landmark and following boundaries of obstacles. The position of the mobile robot in the global coordinate system is often not necessary. The type of maps which can typically be acquired by this means are relational maps, with local coordinate systems attached to the individual landmarks [36]. Such an exploration strategy, however, may be quite time consuming and in some cases ineffective. When some a priori model of the environment is already available, the location of the object to be delivered may be described qualitatively and there is no need for a thorough exploration. In the case of qualitative models there is still a need for the robot to be able to localize itself in a place and to navigate safely from one place to another. In the following section we present a qualitative model of the environment which facilitates basic delivery tasks. The mobile robot is modeled as a single point in the two-dimensional configuration space C. The environment is partitioned into a family of subsets, called places, such that the subsets form a cover of C. Places are characterized by visibility constraints and can be partitioned into two different categories: (i) visibility regions associated with landmarks and (ii) visibility regions associated with boundaries leading to landmarks. The first type of place 5 is associated with landmarks and is defined as a set of points in free space, such that from any point in the place there exists a path to any other point in the place along which the landmark is visible at all times. Landmarks are defined as a set of naturally occurring features which are visually distinctive and recognizable [23]. We assume that the set of landmarks and their relative relationships is determined ahead of time by the designer (i.e., the landmarks and their feature coordinates in the global coordinate system are known).

5 In this case the notion of a place is the same as the notion of a visibility region of a landmark [23].

Fig. 6. The visibility region of a doorway. The limited range of the sensor as well as the limited field of view is taken into account. The visibility region is delimited by two horopters corresponding to the viewing angles 60° and 10°, which correspond to the range 10 m given the width of the doorway of 1 m, intersected with two visibility polygons associated with the individual features (in a dark pattern). We include only the front half plane in the visibility region of the doorway. The restricted visibility region is depicted in a striped pattern.

In case the relationship between landmarks is not known, it can be established by exploration. An exploration strategy for discovering relationships between a given set of recognizable landmarks and building a relational model of the environment was proposed in [36]. Since in our case a landmark is composed from various features, we need to take into account the limited field of view of the camera, which restricts the points from which all features can be seen simultaneously. The camera is mounted on a turntable and can look in all directions. The visibility region of a landmark is defined in the following way: the visibility region of a landmark is the set of points from which all features belonging to the landmark are visible under the limited field of view assumption. A doorway is an example of a landmark and is characterized by two points located on opposite sides of the door frame. The visibility region of a doorway is delimited by two horopters and the visibility regions of the individual point/line features 6 and is depicted in Fig. 6. The inner horopter places a restriction on the viewing angle of the two points, which cannot be larger than the field of view of the camera (FOV), and the outer horopter puts a restriction on the maximal range (i.e., minimal viewing angle) from which the two points are visible. In order to assure that the visibility region is in free space, the area delimited by the two horopters is intersected with the visibility regions of the individual features. The delimitation of the visibility region of a doorway by horopters is incorporated for the purpose of conveniently employing the visual servoing strategy described in [28]. By using this strategy for two points the mobile base can be positioned relative to the doorway as described in Section 2.3.

6 In the context of stereo vision a horopter is the locus of points which can be seen by a stereo pair with a particular vergence angle. It turns out that this locus is a circle with its center on the axis going through the middle point between the two retinal planes, perpendicular to the line connecting them and passing through the two points. In our case the situation is reversed: we use the notion of a horopter as the locus of points (a circle) from which the two points can be seen at a given visual angle.
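The horopter-delimited region of Fig. 6 can be tested pointwise, as in the following sketch (the 60° and 10° bounds are taken from the figure caption; occlusion by obstacles and the front half-plane restriction are ignored here):

    import numpy as np

    def in_doorway_visibility_region(p, door_left, door_right,
                                     fov_deg=60.0, min_angle_deg=10.0):
        """Rough pointwise test of the horopter-delimited visibility region.

        p, door_left, door_right : 2D positions in the ground plane
        The visual angle subtended by the two door features at p must not
        exceed the camera field of view (inner horopter) and must not fall
        below the minimal viewing angle fixing the maximal range (outer horopter).
        Visibility polygons of the individual features are intentionally ignored.
        """
        p = np.asarray(p, dtype=float)
        a = np.asarray(door_left, dtype=float) - p
        b = np.asarray(door_right, dtype=float) - p
        cos_angle = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        visual_angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        return min_angle_deg <= visual_angle <= fov_deg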

Fig. 7. The approachability region of a corner. The corner landmark is defined through approachability via boundary following. Since in our case all the obstacles are polygonal, their boundaries consist of a set of linear segments, which can be reliably followed by the wall servoing strategy. For the visibility region of the boundary (filled with a striped pattern) we again take into account the limited range of the sensor. In order to be able to follow the boundary, the camera is tilted with respect to the ground plane pointing down, so the boundary is visible only when the mobile base is in its vicinity. Due to this particular camera configuration the visibility regions of the walls differ from the visibility region of the doorway. The approachability regions associated with the corner landmark correspond to the visibility regions of boundaries leading to the corner.

We term the second type of region associated with a landmark the approachability region. A landmark is approachable from a given point if there is a visible boundary of an obstacle which can be followed in order to approach the landmark; in other words, the point is in the approachability region of a landmark if it is in the visibility region of a boundary leading to the landmark. The approachability region allows us to eventually reach the landmark in spite of the fact that the landmark is initially not visible. This constraint is used for the second type of landmarks, which constitute corners of the room or intersections of two corridors and can be approached by following the walls. When the mobile base is at a point where the boundary which leads to a corner landmark is visible, the landmark is directly approachable. The corner landmark has two places associated with it, corresponding to the visibility regions of the walls leading toward the corner (see Fig. 7). By selecting naturally occurring features in office environments as a set of landmarks and computing the associated visibility region of each landmark and boundary leading to a landmark, we obtain a partitioning of the environment in terms of places. We are currently looking at some means to obtain this partitioning automatically from a 2D polygonal model of the world or by exploration. 7 Two landmarks are in the neighborhood of each other if their associated places overlap. The regions where places overlap are referred to as gateways [36]. For successful navigation between the places we have to make sure that the setpoints of the elementary servoing strategies are chosen in such a manner that they will bring the camera frame (or the mobile base frame for that matter) to a gateway region of two landmarks. The desired goal configuration of the door features in the image is chosen such that the mobile base is positioned correctly to pass through the door and at the same time is in the visibility region of another landmark or a boundary leading to a landmark (e.g., a wall). After approaching the doorway and entering the visibility region of a wall, the wall boundary needs to be recognized. Subsequently the wall following strategy can be invoked.

7 The obtained partitioning may often be quite redundant. It is sufficient to choose a subset of landmarks such that the associated places form a cover of the environment.

Fig. 8. Landmarks chosen from the 2D map of the laboratory. The gateway between the visibility region of a doorway and the visibility region of the wall boundary is depicted in a checkered pattern. The initial configuration of the mobile base is in the visibility region of the doorway. After accomplishing the relative positioning task with respect to the doorway, a wall following strategy is invoked and the mobile base aligns with the wall, eventually approaching the corner landmark.

Fig. 9. The gateway regions correspond to the nodes of the place graph. Some of the gateways were omitted for clarity.

The servoing strategy then guarantees convergence to a particular distance of the camera frame from the wall and terminates when the robot arrives at the end of the boundary which corresponds to the corner landmark (see Fig. 8 for details). Initially we require the position of the landmarks in the global coordinate frame F_O in order to partition the environment into a set of places. However, the position estimate of the mobile base in F_O is not necessary, since the objectives of the servoing strategies are specified in the local coordinate frames of the landmarks and boundaries. The partitioning of the environment in terms of places corresponding to visibility or approachability regions of doorways and corners is generalizable to most indoor environments. The automatic acquisition of such a model is more difficult. In order to facilitate global navigation the neighborhood relationships between landmarks are represented in a place graph (see Fig. 9). By first determining the initial and goal location of the
mobile base within a place graph in terms of places, the overall plan can be constructed as a result of a graph search.

Fig. 10. Place graph corresponding to a set of landmarks chosen in our laboratory. G2 corresponds to the gateway between the door and the boundary of the wall leading to two corner landmarks. The edge between G2 and G5 is labeled by a wall following strategy and the executed path is shown in Fig. 8. Some of the gateway nodes and edges were omitted from the graph for clarity.

3.1. Place graph

A corresponding place graph given a set of landmarks is depicted in Fig. 10. Individual nodes in the place graph represent the intersections of the visibility regions of specified landmarks or boundaries leading to landmarks. The edges coming out of a node correspond to the possible servoing strategies that can be applied from a given place. Some of the nodes and edges were omitted from the figure for clarity. A node is characterized by a set of visible or approachable landmarks, i.e., the set of landmarks with respect to which the mobile robot can be positioned. In case every point in the free 2D configuration space belongs to a certain place, the place graph is fully connected. In the current experimental setup the initialization of the servoing strategies as well as the recognition of landmarks is done by the user. What follows is an example of the door servoing task expressed in the task specification language introduced in [19]. The search for and initialization of doorway features can be written in the following way:

InitServo(door_i) := Search(s) || ServoC,

where we utilize the degree of freedom of the pan platform. Once the door features are found, the actual servoing task can be initiated:

Servo := Track(s_d, s) || ServoM || ServoC,

where ServoC and ServoM are control strategies for the mobile base and the camera platform, and Track is the feature detection and tracking routine which at each instant of time generates appropriate velocity commands to both the camera pan platform and the mobile base controllers. The operator || denotes parallel composition of two processes (strategies). A single relative positioning task, expressed in the task specification language, can then be written as

RelPosition(door_i, s_d) := InitServo(door_i)(s) : Servo(s, s_d),

where : denotes the conditional composition operator, which specifies that the first strategy must be completed successfully before the second one can be initiated. By labeling the individual arcs of the place graph with expressions of the above form, we can then generate the global plan as a result of a place graph search.
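The planning step itself is an ordinary shortest-path search over the labeled place graph, e.g. with Dijkstra's algorithm. The sketch below uses invented node and strategy names; it returns the sequence of servoing strategies labeling the traversed edges:

    import heapq

    def plan_route(place_graph, start, goal):
        """Dijkstra search over a place graph.

        place_graph maps a node (gateway/place) to a list of
        (neighbor, cost, strategy) triples; the returned plan is the
        sequence of servoing strategies labeling the traversed edges.
        """
        frontier = [(0.0, start, [])]
        visited = set()
        while frontier:
            cost, node, plan = heapq.heappop(frontier)
            if node == goal:
                return plan
            if node in visited:
                continue
            visited.add(node)
            for neighbor, edge_cost, strategy in place_graph.get(node, []):
                if neighbor not in visited:
                    heapq.heappush(frontier, (cost + edge_cost, neighbor, plan + [strategy]))
        return None  # goal not reachable in the place graph

    # Hypothetical fragment of a place graph (node and strategy names invented):
    graph = {
        "G1": [("G2", 1.0, "RelPosition(door_1, s_d)")],
        "G2": [("G5", 2.0, "WallFollow(wall_a)"), ("G1", 1.0, "RelPosition(door_1, s_d)")],
        "G5": [("G6", 1.0, "RelPosition(corner_1, s_d)")],
    }
    print(plan_route(graph, "G1", "G6"))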

4. Discussion

The outlined approach demonstrates the use of relative positioning strategies with respect to a set of naturally occurring landmarks for global navigation. Currently the disadvantage of the approach is that the desired position of the features in the image plane, which corresponds to the desired relative position of the mobile base with respect to the landmark, has to be determined ahead of time. The approach can be extended by developing relative positioning strategies for a more general class of landmarks or views. Some work along these lines has been reported in [4,16]. The authors proposed more complex methods for measuring the "distance" between two views, which was subsequently used for estimating the relative position of the mobile base. The reference views and the set of places they belong to were chosen ahead of time and stored in a database. Ideally it should be possible to acquire the set of reference views, which would correspond to the distinctive places in the environment, automatically through an exploration process. An attempt to address this difficult problem in a probabilistic framework was made by Yeh and Kriegman [37]. The approach outlined above proposed the servoing strategy as the only means of navigation, and the environment model was partitioned in such a manner that at every point in the free space some servoing strategy could be applied. This type of approach can be well combined with some open-loop motion planning. Relying purely on the odometric readings in the short range, servoing strategies could serve as means for resetting the position and/or heading of the mobile base. We took into account both the limited field of view and the limited range of the sensor and provided a partition of the environment such that every point in the free space belongs to a place (i.e., at any point one of the elementary servoing strategies can be employed). The combination of visual servoing and obstacle detection was tested earlier [20] in the context of simpler visual features.

References

[1] P. Allen, A. Timcenko, B. Yoshimi and P. Michelman, Automated tracking and grasping of a moving object with a robotic hand-eye system, in: Proc. Visual Servoing Workshop, IEEE Int. Conf. on Robotics and Automation (1994).
[2] N. Ayache and O. Faugeras, Building, registrating and fusing noisy visual maps, International Journal of Robotics Research 7 (6) (1988) 45-65.
[3] N. Ayache and O. Faugeras, Maintaining representations of the environment of a mobile robot, IEEE Transactions on Robotics and Automation 5 (6) (1989) 804-819.
[4] R. Basri and E. Rivlin, Localization and positioning using combinations of model views, in: Proc. ARPA Image Understanding Workshop (1993) 377-396.
[5] J.J. Craig, Introduction to Robotics: Mechanics and Control (Addison-Wesley, Reading, MA, 1989).
[6] J.L. Crowley, World modeling and position estimation for a mobile robot using ultrasonic ranging, in: Proc. IEEE Int. Conf. on Robotics and Automation, May (1989) 674-680.
[7] Ch. Engels and G. Schöner, Dynamic fields endow behavior-based robots with representations, Robotics and Autonomous Systems 14 (1995) 55-77.
[8] S. Engelson, Passive navigation and visual place recognition, Ph.D. Thesis, Yale University, 1993.
[9] B. Espiau, F. Chaumette and P. Rives, A new approach to visual servoing in robotics, IEEE Transactions on Robotics and Automation 8 (3) (1992) 313-326.
[10] R. Grupen and J. Connolly, Harmonic functions for navigation, Technical report, University of Massachusetts, Amherst, 1992.
[11] G.D. Hager, A modular system for robust positioning using feedback from stereo vision, Technical Report YALEU/DCS/RR-1074, Yale University, May 199~.
[12] G. Hager and S. Hutchinson, eds., Visual Servoing - Achievements, Applications and Open Problems, Visual Servoing Workshop, IEEE Int. Conf. on Robotics and Automation (IEEE Press, New York, 1994).
[13] K. Hashimoto, Visual Servoing (World Scientific, Singapore, 1993).
[14] N.J. Hollinghurst and R. Cipolla, Uncalibrated stereo hand-eye coordination, Image and Vision Computing 12 (3) (1994) 187-192.
[15] I. Horswill, Specialization of perceptual processes, Ph.D. Thesis, Massachusetts Institute of Technology, 1994.
[16] D.P. Huttenlocher and W.J. Rucklidge, A multi-resolution technique for comparing images using the Hausdorff distance, Technical Report TR92-1321, Cornell University, 1992.
[17] I. Kamon and E. Rivlin, Sensory based motion planning with global proofs, in: Proc. IROS-95, Pittsburgh, USA, 1995.

[18] O. Khatib, Real-time obstacle avoidance for manipulators and mobile robots, International Journal of Robotics Research 5 (1) (1986) 90-98.
[19] J. Košecká, A framework for modeling and verifying visually guided agents: Design, analysis and experiments, Ph.D. Thesis, University of Pennsylvania, 1996.
[20] J. Košecká, R. Bajcsy and M. Mintz, Control of visually guided behaviors, in: C. Brown and D. Terzopoulos, eds., Real-Time Computer Vision (Cambridge University Press, 1994).
[21] B. Kuipers and Y. Byun, A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations, Robotics and Autonomous Systems 8 (1991).
[22] J.C. Latombe, Robot Motion Planning (Kluwer Academic Publishers, Dordrecht, 1991).
[23] A. Lazanas and J.C. Latombe, Landmark-based robot navigation, Algorithmica 13 (1995) 472-501.
[24] J. Leonard, Directed sonar sensing for mobile robot navigation, Ph.D. Thesis, University of Oxford, Department of Engineering Science, 1990.
[25] V. Lumelsky and T. Skewis, Incorporating range sensing in the robot navigation, IEEE Transactions on Systems, Man and Cybernetics 5 (20) (1990) 1058-1068.
[26] R. Mandelbaum, Sensor processing for mobile robot localization, exploration and navigation, Ph.D. Thesis, University of Pennsylvania, 1995.
[27] M. Mataric, Environment learning using a distributed representation, in: Proc. IEEE Int. Conf. on Robotics and Automation (1990).
[28] J.-M. Odobez, A visually oriented representation of planar relative position, Technical report, GRASP Laboratory, University of Pennsylvania, 1995; ICPR-95, submitted.
[29] R. Pissard-Gibollet and P. Rives, Applying visual servoing techniques to control a mobile hand-eye system, in: Proc. IEEE Int. Conf. on Robotics and Automation, Nagoya, Japan (1995).
[30] E. Rimon and D.E. Koditschek, Exact robot navigation using artificial potential functions, IEEE Transactions on Robotics and Automation 8 (5) (1993) 501-519.
[31] P. Rives and R. Pissard-Gibollet, Reactive mobile robots based on a visual servoing approach, in: Artificial Intelligence Planning and Simulation (AIS), July, Perth, Australia (1992).
[32] C. Samson, M. Le Borgne and B. Espiau, Robot Control: The Task Function Approach, Oxford Engineering Science Series (Clarendon Press, London, 1991).
[33] J. Santos-Victor and G. Sandini, Docking behaviors via active perception, in: Proc. 3rd Int. Symp. on Intelligent Robotic Systems, Pisa, Italy (1995) 303-314.
[34] K. Sutherland, Landmark selection for accurate navigation, in: Proc. DARPA Image Understanding Workshop (1993).
[35] C.J. Taylor and D.J. Kriegman, Exploration strategies for mobile robots, in: Proc. IEEE Int. Conf. on Robotics and Automation, Atlanta, Georgia (1993).
[36] C.J. Taylor and D.J. Kriegman, Vision-based motion planning and exploration algorithms for mobile robots, in: K. Goldberg, ed., Proc. Workshop on the Algorithmic Foundations of Robotics (1994).
[37] E. Yeh and D.J. Kriegman, Toward selecting and recognizing natural landmarks, in: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (1995).

Jana Košecká received the M.S.E.E. degree in electrical engineering and computer science from Slovak Technical University, Bratislava in 1988, and an M.S. in computer science from the University of Pennsylvania in 1992. She obtained a Ph.D. in computer science from the University of Pennsylvania in 1996. Currently she is a postdoctoral researcher at the University of California, Berkeley. Her research interests include active vision, mobile robotics, and discrete event and hybrid modelling and control of autonomous systems.