Autonomy and remote control experiment for lunar rover missions


Control Eng. Practice, Vol. 5, No. 6, pp. 851-857, 1997. Copyright © 1997 Elsevier Science Ltd. Printed in Great Britain. All rights reserved. 0967-0661/97 $17.00 + 0.00

Pergamon. PII: S0967-0661(97)00070-1

AUTONOMY AND REMOTE CONTROL EXPERIMENT FOR LUNAR ROVER MISSIONS

M. Maurette, L. Boimier, M. Delpech, C. Proy and C. Quere

Centre National d'Etudes Spatiales, 18 ave E. Belin, 31055 Toulouse Cedex, France ([email protected])

(Received July 1996; in final form April 1997)

Abstract: Teleoperation will be used for rover control during lunar missions. However, the sites of interest for scientific investigation are located in difficult areas: the south pole, or deep craters, in which communications with the earth will be limited and possibly interrupted. To investigate the feasibility of such missions, a testbed has been built in order to check the feasibility of remote control of the rover in the presence of significant time delays and low-rate communications, as well as to demonstrate the capability of autonomous motion during communication blackout periods. The testbed consists of a 6x6 vehicle able to move on a moon-representative terrain, either in an autonomous mode or in a low-communication-rate teleoperation mode called TESA. This paper describes the vehicle, its control and the results obtained during the tests. Copyright © 1997 Elsevier Science Ltd

Keywords: Lunar exploration, robot control, space robotics, autonomous vehicles, vision, delay compensation


1. INTRODUCTION

Interest in moon exploration has been rising in recent years, and Europe is studying a technological and exploration mission to the moon, scheduled for the beginning of the next century. Different objectives can be considered for a candidate mission, not yet decided upon: exploration of scientifically interesting areas such as the south pole, preparation of future moon-based scientific instruments, and so on. These objectives imply access to some difficult areas, where communications with the ground will be poor because of the limited power available on board (limitations on the data rate), and/or because of the multiple data relays needed for permanent contact with the ground station (significant time delays). Furthermore, special care will have to be taken to avoid possible link interruptions by local obstacles, or while entering a crater.

The feasibility of missions of this kind, involving a 100 kg rover operating for several weeks, relies on satisfactory solutions to critical problems such as power supply and night survival. From the control point of view, it implies a capability of teleoperating the rover from earth, with limited telemetry and time delays of up to 10 seconds in the loop. Autonomous perception of the environment, path planning and locomotion may also be required during temporary interruptions of the data link, and to perform on-board security checks of the teleoperated paths.

An experiment has been built to check the feasibility of both modes of operation, and to evaluate their performance under representative conditions.

2. TESTBED DESCRIPTION

2.1 The EVE vehicle

The ground demonstrator robot, called "EVE" (Experimental Vehicle for Exploration), has been built on a locomotion chassis with six electrically motor-driven wheels, derived from Mars mission studies and manufactured by VNII Transmash in Russia. It includes NiCd batteries located in the pyramid-shaped wheels, and low-level co-ordination of the motors in order to perform open-loop trajectories.



The chassis has been equipped with an on-board computer based on a Power PC 601 board running the LynxOS operating system, together with input/output boards for analog and digital acquisition and serial communications (RS232, Ethernet). It can be operated from an external power supply, or self-powered with 2 hours' autonomy.

Fig. 1: The EVE robot

2.2 On-board equipment

2.2.1 Stereo vision system. The perception of the environment, presumed completely unknown at the beginning of the mission, is performed by a stereo vision subsystem based on digital cameras designed and manufactured by LAS (Laboratoire d'Astronomie Spatiale) and CNES, in order to provide fast stereo vision processing with moderate computing power, and to avoid calibration procedures during the test campaigns. The stereo algorithm takes less than 7 s on a Power PC 601-50 MHz for 280x384-pixel images. The camera's focal length is 5.8, and this gives a field of view of about 70° once the DTM has been reconstructed. The cameras are located on a 2-axis orientable platform mounted on a 1.2 m-high mast. A wide-field global vision camera is located on the stereo vision system, and is used by the operator to select the progressive goals.

Fig. 2: Stereo vision system

2.2.2 Localization sensors. As no inertial unit is installed on this testbed, position sensing is simulated by ground-based equipment and returned on board via the digital data link between the rover and the ground station. The position is obtained using an active beacon mounted on the vehicle (reflectors and infrared LEDs) and a tracking telemeter which is part of the GEROMS test site facilities (Delail, 1994). The positional accuracy is better than +/- 2 cm along the X, Y and Z axes, all over the site. Attitude measurement is performed on board by inclinometers and magnetometers located inside the stereo vision equipment; their accuracy is roughly 1° as long as the measurement is static.

2.2.3 Internal sensors. The roll and pitch angles of the front and rear axles with respect to the middle axle are sensed by potentiometers and acquired by the on-board computer. This information enables updating of the kinematic model that is used for computing the robot's position from the beacon position. The wheel rotations are obtained from incremental encoders (one encoder per wheel), and are used in combination with the heading information to implement odometry algorithms.
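The odometry equations are not given in the paper; the following is a minimal dead-reckoning sketch, assuming a simple planar scheme in which the mean wheel displacement is projected along the measured heading. The function name, wheel parameters and example values are illustrative, not taken from the EVE design.

import math

def odometry_step(x, y, encoder_ticks, ticks_per_rev, wheel_radius, heading_rad):
    """One dead-reckoning update from six wheel encoders and a heading measurement.

    A minimal planar sketch: the travelled distance is taken as the mean of the
    six wheel displacements and projected along the measured heading. The real
    system also uses the chassis roll/pitch potentiometers to refine the
    kinematic model, which is not modelled here.
    """
    # Convert encoder increments (one encoder per wheel) to linear displacements.
    distances = [2.0 * math.pi * wheel_radius * t / ticks_per_rev for t in encoder_ticks]
    d = sum(distances) / len(distances)          # mean displacement of the six wheels
    return x + d * math.cos(heading_rad), y + d * math.sin(heading_rad)

# Example: six encoders reporting ~410 ticks on 10 cm radius wheels, heading 30 deg.
x, y = odometry_step(0.0, 0.0, [410, 408, 412, 409, 411, 410],
                     ticks_per_rev=1000, wheel_radius=0.10, heading_rad=math.radians(30))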

2.3 Ground segments

Two different ground segments are associated with the two operational modes:
• The TESA mode is performed on a SILICON GRAPHICS workstation, allowing real-time display of a synthetic, predicted landscape and operator control inputs on joysticks;
• Autonomous-mode supervision and goal designation are implemented on a SUN SPARC workstation.

2.4 Functional architecture

The functional architecture is presented in Fig. 3. All the functions representative of the on-board ones have been implemented on the on-board computer, except for localization by the inertial unit, which is simulated. Data exchanges with the vehicle are performed over an Ethernet link.


The low-level control of the wheels (including speed-control loops and coordination in order to provide straight motion, turns with a constant turning radius, or rotations in place) is performed on a separate, rugged computer on the rear part of the vehicle. The trajectory-following control is performed on the main on-board computer, after acquisition of the chassis angles, odometers, attitude sensors, and the position transmitted from the ground.
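The coordination scheme itself is not detailed in the paper; the sketch below illustrates, under an assumed chassis geometry, how per-side wheel speed setpoints could be derived for the three motion primitives mentioned above (straight motion, constant-radius turn, rotation in place). The geometry values and function name are illustrative, not taken from the EVE design.

def wheel_speed_setpoints(v, turn_radius, half_track=0.4):
    """Per-side wheel speed setpoints for a 6x6 chassis (illustrative geometry).

    v           : desired forward speed of the chassis centre (m/s)
    turn_radius : signed turning radius (m); None for straight motion,
                  0.0 for a rotation in place (v is then the rim speed).
    half_track  : assumed half-distance between left and right wheel rows (m).
    Returns (left_speed, right_speed), each applied to the three wheels of a side.
    """
    if turn_radius is None:                 # straight motion: both sides identical
        return v, v
    if turn_radius == 0.0:                  # rotation in place: sides counter-rotate
        return -v, v
    # Constant-radius turn: each side follows an arc of radius R -/+ half_track.
    left = v * (turn_radius - half_track) / turn_radius
    right = v * (turn_radius + half_track) / turn_radius
    return left, right

# Example: 0.15 m/s forward speed on an assumed 2 m radius turn.
left, right = wheel_speed_setpoints(0.15, 2.0)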



Fig. 3. Functional architecture (space segment and ground segment)

3. CONTROL MODES

The two control modes to be used, embedded in a real mission, are implemented separately on the testbed:

3.1 Teleoperation mode (TESA)

3.1.1 Principle. For teleoperating a lunar rover from the ground, the classical method consists of transmitting video images to the control station. The operator drives the vehicle by using a control device and looking at these images to see the effects of his commands. This technique is easy to implement; it suffers, however, from the following constraints:
• The communication bandwidth has to be very high to accommodate a continuous flow of video images (of the order of 1 Mbit/s).
• In most situations, the operator must apply a "move-and-wait" strategy, since no time-delay compensation can be implemented, and this becomes tedious when the time delay exceeds a few seconds.

The method proposed under the name TESA (Teleoperation in an Acquired Synthetic Environment) belongs to the class of teleoperation modes with predictive display. The efficiency of this technique has already been demonstrated for ROTEX manipulator control during the SpaceLab D2 mission (Hirzinger, et al., 1992). In the rover control case, the ground is supplied with Digital Terrain Models (DTM) that are periodically generated on the vehicle from the images taken during the move. The operator is thus provided with 3D knowledge of the environment over a short range beyond the vehicle (8-10 m), which gives him a good understanding of the difficulties and navigable areas of the terrain. The major constraint is the computing cost of generating the DTM on board at a sufficient rate (0.1 Hz), but this is counterbalanced by the reduction of the bandwidth required for this type of telemetry: the size of a 5 cm resolution DTM is around 15 kbytes. The main advantage is the possibility of compensating for the time delay by displaying to the operator the predicted position of the vehicle in the DTM. Using this technique, the operator can drive the vehicle continuously, since he perceives the effect of his commands immediately.
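A short back-of-the-envelope comparison of the two telemetry options follows, using only the figures quoted above (a 15 kbyte DTM every 10 s versus a roughly 1 Mbit/s video stream); the sketch reproduces that arithmetic and makes no assumption beyond those figures.

# Telemetry budget comparison for DTM-based versus video-based teleoperation,
# using the figures given in the text (illustrative arithmetic only).
dtm_size_bits = 15e3 * 8          # ~15 kbytes per 5 cm resolution DTM
dtm_rate_hz = 0.1                 # one DTM every 10 s
dtm_bandwidth = dtm_size_bits * dtm_rate_hz      # ~12 kbit/s average

video_bandwidth = 1e6             # ~1 Mbit/s for a continuous video stream

print(f"DTM telemetry:   {dtm_bandwidth/1e3:.0f} kbit/s average")
print(f"Video telemetry: {video_bandwidth/1e3:.0f} kbit/s")
print(f"Reduction factor: ~{video_bandwidth/dtm_bandwidth:.0f}x")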

3.1.2 Teleoperation mode variants. Within the class of control modes with predictive display, different systems can be considered, depending on the availability of position information and its use, either on the ground or on board.
• Single-feedback predictive teleoperation: there is no localization system on the vehicle, and no reliable or available odometry; the only feedback that can be implemented is based on the periodic DTM, which enables position information to be inferred at a low rate (0.1 Hz) and with a significant delay (10 s).
• Double-feedback predictive teleoperation: there is a localization system on the vehicle that provides the ground segment with real positional information (absolute or relative) at a higher rate (>2 Hz) and with a shorter delay (5 s) than the DTM. The predictor is then implemented using two control loops:
- one for setting the vehicle position w.r.t. the new DTM, and
- one for updating the vehicle position in the current DTM.
• Predictive teleoperation with shared control: there is a localization system on the vehicle, and the positional information is now used on board for trajectory servoing. This information is also sent to the ground, but it is used only for operator supervision. Here, the operator defines the vehicle objective continuously, instead of driving it as in the two previous modes.

3.1.3 Algorithm description. Before giving details of the algorithm, it is necessary to define the different time delays and sampling periods that intervene in the system:
- T_AR: overall uplink and downlink communication time (distance, commutation, etc.)
- T_DTM: DTM construction time on board
- T_BW: DTM transfer time due to communication bandwidth.

For the single-feedback prediction mode the overall time delay becomes:

T_1 = T_AR + T_DTM + T_BW    (1)


For the double-feedback prediction mode, the time delay is essentially due to the communication link back and forth:

T_2 = T_AR    (2)

The sampling periods being considered are the following:
- t_s1: DTM construction
- t_s2: vehicle position (localization system)
- t_s3: control command generation.
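As an illustration of equations (1) and (2), the following sketch evaluates the two delays for an assumed set of values chosen to be consistent with the test conditions reported later (overall delays up to 15 s, one DTM every 10 s); the round-trip time, DTM construction time and link rate used here are assumptions, not figures from the paper.

# Illustrative delay budget for the two prediction modes (assumed values).
T_AR = 5.0                         # assumed round-trip communication time (s)
T_DTM = 6.0                        # assumed on-board DTM construction time (s)
link_rate_bps = 32e3               # assumed downlink rate (bit/s)
dtm_bits = 15e3 * 8                # 15 kbyte DTM (figure from the text)

T_BW = dtm_bits / link_rate_bps    # DTM transfer time
T1 = T_AR + T_DTM + T_BW           # single-feedback prediction delay, eq. (1)
T2 = T_AR                          # double-feedback prediction delay, eq. (2)

print(f"T_BW = {T_BW:.1f} s, T1 = {T1:.1f} s, T2 = {T2:.1f} s")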

The algorithm handles the following three variables:
• the open-loop predicted absolute position (X̂_ol), which is expressed w.r.t. the robot's location at a given time in the past, and is obtained by integrating the robot model without any positional updates (the so-called "open-loop" predicted position);
• the predicted absolute position (X̂_abs), which is deduced from the previous one by taking into account the localization estimates performed on board ("closed-loop" prediction);
• the predicted relative position (X̂_rel), which is expressed w.r.t. the frame of the last DTM, and which represents the information displayed to the operator.

A. Single/double-feedback prediction. The single-feedback prediction can be obtained by setting the predictor gain k_s to zero. The predictor equations are as follows:
• the predicted robot position expressed in the frame of the current DTM (X̂_rel) is:

X̂_rel[t'] = X̂_abs[t'] - X̂_abs[t - T_1]   for t < t' < t + t_s1    (3)

where t is the DTM reception time;
• the robot absolute position (X̂_abs) is obtained by:

X̂_abs[t_k] = X̂_ol[t_k] + k_s (X_loc[t_k - T_2] - X̂_abs[t_k - T_2])    (4)

when localization data (X_loc) is being received, or by:

X̂_abs[t] = X̂_ol[t]    (5)

between two receptions;
• the open-loop predicted absolute position (X̂_ol) is obtained by integrating the vehicle model from the command inputs, without positional updates.    (6)

The limited capability of the single-feedback prediction mode comes from the prediction horizon, which varies periodically between two values, T_1 and T_1 + t_s1.
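The following sketch illustrates the predictor structure of equations (3)-(5) in simplified scalar form; the integration of the vehicle model (equation (6)) is reduced to a commanded-velocity integration, and the class name, state layout and correction scheme are illustrative reconstructions rather than the authors' implementation.

class PositionPredictor:
    """Simplified 1-D predictor for TESA-style predictive display (illustrative)."""

    def __init__(self, k_s):
        self.k_s = k_s            # predictor gain (0 => single-feedback prediction)
        self.x_ol = 0.0           # open-loop predicted absolute position (eq. 6, simplified)
        self.x_abs = 0.0          # closed-loop predicted absolute position
        self.x_dtm_origin = 0.0   # predicted absolute position at the last DTM reception

    def propagate(self, v_cmd, dt):
        """Integrate the commanded velocity (stands in for the vehicle model, eq. 6)."""
        self.x_ol += v_cmd * dt
        self.x_abs += v_cmd * dt          # between updates the prediction follows the model

    def localization_update(self, x_loc_delayed, x_abs_delayed):
        """Correct the prediction when (delayed) localization data arrives, eq. (4).

        x_abs_delayed is the prediction the caller buffered T_2 seconds earlier.
        """
        self.x_abs = self.x_ol + self.k_s * (x_loc_delayed - x_abs_delayed)

    def new_dtm(self):
        """Record the prediction at DTM reception time; eq. (3) is expressed w.r.t. it."""
        self.x_dtm_origin = self.x_abs

    def relative_position(self):
        """Predicted position in the frame of the current DTM, eq. (3)."""
        return self.x_abs - self.x_dtm_origin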
B. Prediction with shared control. This mode relies on a control system on the vehicle, which performs trajectory tracking using a localization system (inertial unit or a combination of star and sun sensors).

Fig. 4. Single/double-feedback prediction

Fig. 5. Prediction with shared control

It is designed to maintain the vehicle within a specified corridor, and thus keeps the deviations between predictable values; these represent the prediction error bounds. If the deviations exceed the corridor limits (+/- 15 cm), the vehicle stops and waits for the operator's acknowledgement before moving again according to the new commands. The predicted relative position in the DTM (X̂_rel) is now:

X̂_rel[t'] = X_obj[t'] - X̂_abs[t - T_1]   for t < t' < t + t_s1    (7)

where X_obj is the objective position defined by the operator.

3.1.4 Vehicle model. The remote control system allows the vehicle to move according to three displacement modes: linear motion, turns at a constant rate, and rotations in place. Given the small velocities (15 cm/s max) and inertia of the vehicle (60 kg), its kinematic model constitutes a satisfactory approximation for deriving the vehicle's behaviour in each of these modes.
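Since the paper does not spell out the kinematic model, the sketch below uses a standard planar unicycle abstraction covering the three displacement modes (linear motion, constant rate of turn, rotation in place); the state layout, function name and example values are assumptions made for illustration.

import math

def kinematic_step(x, y, theta, v, omega, dt):
    """Planar kinematic update covering the three displacement modes.

    v and omega select the mode: omega = 0 gives linear motion, v = 0 gives a
    rotation in place, and both non-zero give a turn at constant rate
    (radius v/omega). Valid at EVE-like speeds (<= 0.15 m/s) where dynamics
    can be neglected.
    """
    if abs(omega) < 1e-9:                      # straight-line motion
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta
    # Exact integration along a circular arc of radius v/omega.
    r = v / omega
    theta_new = theta + omega * dt
    x_new = x + r * (math.sin(theta_new) - math.sin(theta))
    y_new = y - r * (math.cos(theta_new) - math.cos(theta))
    return x_new, y_new, theta_new

# Example: 0.15 m/s on a 2 m radius turn over one 0.1 s control period.
x, y, theta = kinematic_step(0.0, 0.0, 0.0, 0.15, 0.15 / 2.0, 0.1)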

3.1.5 Man-machine interface. The man-machine interface is designed to give the operator the highest level of comfort in the absence of a stereo-vision display. It includes the following features (Fig. 6):
• a 3D view of the DTM, with a representation of the vehicle in its predicted location and orientation; the simulated camera is located behind the vehicle so that the operator gets a better perception of obstacle distances and sizes. For a better rendering of the relief, the DTM display is generated by combining the luminance information from one of the original images with the map of altitudes. Furthermore, sharp discontinuities and steep slopes in the DTM are detected by analysis and coloured brightly to make them immediately interpretable by the operator;
• a top view of the DTM, indicating the location of the vehicle, with its current direction and potential position in the next few seconds; the altitudes are made understandable by the use of different colours;
• a section view ("cut") of the DTM in the direction of displacement, which gives an excellent perception of the shape and distance of obstacles ahead, as well as of the slope inclination;
• a compass display, using arrows to show the predicted direction (black arrow) as well as the real, delayed one (grey arrow).

For the shared-control teleoperation mode, what is displayed in the DTM perspective view is the desired attitude of the vehicle instead of the predicted position generated using telemetry data. However, the operator needs to know where the vehicle is located w.r.t. the current objective, as well as the corresponding objective at the date of position acquisition, in order to monitor the deviations. This information is preferably presented in the DTM top view, using appropriate symbols. When the operator stops moving the objective, the real position representation must become coincident with the objective (or, most likely, get close to it). If the deviation exceeds the authorized value, the operator is informed of the situation (the vehicle has already stopped) by flashing symbols.

Fig. 6. Man-machine interface

3.1.6 Test results. So far, only the single-feedback and double-feedback prediction modes have been implemented and fully tested on the GEROMS site. The testing phase has been limited to functional validation and qualitative evaluation (man-machine interface ergonomics, ...).

Feasibility of control has been demonstrated for both modes, and displacements at full speed (15 cm/s) have been achieved in the following conditions:
• terrains with medium block density (0.1 obstacle per m²) and small slopes (< 10°)
• overall time delays up to 15 s
• DTM refresh rate: one every 10 s

The first observable limitation comes from the low accuracy of open-loop motions, especially the rotations. With single-feedback prediction, this causes significant shifts when DTMs are switched, and may oblige the operator to stop the vehicle and perform a rotation on the spot. This phenomenon occurs less often in the double-feedback prediction mode, since the prediction error remains smaller, and this allows longer displacements without stopping. Another limitation of the present implementation comes from the blind area in front of the vehicle,


which becomes a real problem when the operator makes a stop. To initiate another motion, a small unknown area has to be crossed, and this makes the operator uncomfortable. This defect can be avoided by merging the previous DTM with the current one, but this is accurately achievable only in the double-feedback prediction mode, since it requires a localisation system (DTM merging algorithms have not yet been implemented). The last limitation, which is easy to circumvent, comes from the short-range vision provided to the operator. The DTM offers a range of view that is limited to the next 8 to 10 meters ahead of the vehicle, and this is insufficient for the avoidance of some obstacle areas. To prevent the vehicle from being blocked in a dead-end, the operator has to be provided with a periodic video image at a rather low rate (1 per minute) in order to be able to determine the long-term path.

3.2 Autonomous mode

3.2.1 Principle. The autonomous mode has been derived from Mars exploration studies, where a high degree of autonomy is required (Proy, et al., 1994). The control of the rover is performed according to the following scheme (a sketch of this cycle is given below):
• The operator sets the goal according to the mission objectives and the global vision camera image that has been transmitted.
• A DTM is generated from the stereo vision pair acquired by the vision subsystem. The terrain areas are labelled as "navigable" or "dangerous", according to the locomotion capacities in terms of the maximum obstacle that can be passed over and the maximum slope that can be crossed. For a real mission, the criterion of the absence of link occultation should also be added when determining the navigable areas.
• The maximal-length path in the direction specified by the operator, avoiding the dangerous and unknown areas, is then computed, taking into account the locomotion uncertainties.
• The computed trajectory is then executed up to a "safe point" that optimises the subsequent perception.
This sequence is repeated until the final goal is reached.
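A condensed sketch of this perception, planning and execution cycle follows; the rover methods (acquire_stereo_pair, build_dtm, label_navigability, plan_max_length_path, execute_to_safe_point, call_ground_segment) are placeholders standing in for the on-board modules described in the text, not actual interfaces of the EVE software.

def autonomous_cycle(goal, rover, max_cycles=100):
    """One goal-to-goal run of the autonomous mode (illustrative sketch).

    Each iteration: perceive, label the terrain, plan the longest safe path
    towards the operator-designated goal, then drive to a safe perception point.
    """
    for _ in range(max_cycles):
        left, right = rover.acquire_stereo_pair()
        dtm = rover.build_dtm(left, right)                    # on-board stereo vision
        nav_map = rover.label_navigability(dtm)               # navigable / dangerous / unknown
        path = rover.plan_max_length_path(nav_map, goal)      # avoids dangerous and unknown cells
        if path is None:
            rover.call_ground_segment()                       # no path found in any direction
            return False
        rover.execute_to_safe_point(path)                     # stop where the next view is best
        if rover.distance_to(goal) < rover.goal_tolerance:
            return True
    return False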

3.2.2 Test procedures. An autonomous mission starts by displaying an image of the terrain, taken with a panoramic camera, that has been transmitted to the ground. The operator points out a direction of progression, either directly on the image, or by giving heading and distance indications. The corresponding goal, in a frame known by the robot, is then computed and transmitted to the EVE demonstrator. Although the path generation and execution are then fully autonomous, a real-time display of the different computing results is presented to the operator, to allow a good comprehension of the behaviour of the robot. The following are displayed on the MMI:
• the right and left images from the stereo cameras,
• the on-board computed navigation map (40 mm resolution), with the computed path superimposed, and
• the actual trajectory performed by the robot.
A global view of the relief of the terrain (300 mm resolution) is also displayed, in order to follow the progress of the rover relative to the final goal. All the data related to a test, including stereo pairs and robot parameters, are stored for off-line analysis or to replay a particular test.

3.2.3 Test results. Several hundred tests have been performed, in different terrain areas and under different lighting conditions. The following conclusions have been reached. The stereo vision algorithm is quite robust and fairly insensitive to the presence of shadows, provided that the image exposure is correct. A specific auto-iris algorithm has been introduced in order to obtain a proper distribution of the grey levels.
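The auto-iris algorithm itself is not described in the paper; as an indication of the kind of adjustment involved, here is a minimal exposure-control sketch that nudges the exposure setting until the mean grey level of the image reaches a target value. The target value, gain, limits and function name are assumptions made for illustration only.

def adjust_exposure(exposure, image_pixels, target_mean=128.0, gain=0.005,
                    exp_min=0.1, exp_max=10.0):
    """One step of a simple proportional auto-exposure loop (illustrative).

    image_pixels : iterable of grey levels (0-255) from the current frame.
    Returns an updated exposure setting that pushes the mean grey level
    towards target_mean, clamped to the admissible range.
    """
    pixels = list(image_pixels)
    mean_grey = sum(pixels) / len(pixels)
    exposure *= 1.0 + gain * (target_mean - mean_grey)   # proportional correction
    return max(exp_min, min(exp_max, exposure))

# Example: a dark frame (mean grey ~60) leads to a longer exposure.
new_exposure = adjust_exposure(1.0, [60] * 1000)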

One of the corresponding DTMs obtained during the tests is shown in Fig. 7.

Fig. 7. DTM computed on board

The path-finding algorithms generate trajectories whose length is usually limited to 5 or 6 meters, mainly because the unknown areas behind the rocks are too large and numerous for the limited height of the cameras on a planetary rover. The ability to detect either positive (rocks) or negative (canyons) obstacles is reliable within the field of view of the robot, as shown in Fig. 8. The failure to find a possible path, which has been encountered in very difficult terrain, is mainly due to the fact that the robot tries to find a path within a single field of view. Although very wide-field lenses have been used, the presence of a large rock at a short distance does not leave sufficient room for the robot's size. An additional strategy has been added for such cases: the cameras are rotated in a more appropriate direction, according to the final goal position.
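The path-finding method is not detailed beyond the description above; the sketch below shows one plausible way to compute a maximal advance towards the operator's direction on a grid navigation map, treating dangerous and unknown cells as forbidden. The grid representation, cell labels and scoring rule are assumptions, not the authors' algorithm, and margins for robot size and locomotion uncertainty are left out.

from collections import deque

NAVIGABLE, DANGEROUS, UNKNOWN = 0, 1, 2

def plan_towards(nav_map, start, direction):
    """Breadth-first search on the navigation grid; returns the path to the
    reachable navigable cell with the largest advance along 'direction'.

    nav_map   : 2D list of cell labels (NAVIGABLE / DANGEROUS / UNKNOWN)
    start     : (row, col) of the rover
    direction : (drow, dcol) unit-ish vector towards the operator-designated goal
    """
    rows, cols = len(nav_map), len(nav_map[0])
    parent = {start: None}
    queue = deque([start])
    best, best_score = start, 0.0
    while queue:
        r, c = queue.popleft()
        score = (r - start[0]) * direction[0] + (c - start[1]) * direction[1]
        if score > best_score:
            best, best_score = (r, c), score
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in parent \
                    and nav_map[nr][nc] == NAVIGABLE:
                parent[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    path, node = [], best
    while node is not None:                 # reconstruct the path back to the start
        path.append(node)
        node = parent[node]
    return list(reversed(path))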


Fig. 8. Obstacle-avoidance trajectory

The present algorithms have been sufficient to cross the GEROMS test site, even when the final goal was located beyond a canyon, which necessitates a large deviation. The average speed of the robot is determined by the perception, navigation and path-execution cycle, performed every 2 minutes and allowing a 5-meter progression (half a kilometre per day) in difficult terrain, which is much more than what can be obtained in teleoperated mode on the moon.
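For orientation, a short calculation relating the cycle figures to the quoted daily progression follows; the reading that the half-kilometre figure corresponds to a few hours of effective driving per day is our assumption, not a statement from the paper.

# Relating the 2-minute cycle and 5 m progression to the quoted half kilometre per day.
cycle_time_s = 120.0
advance_per_cycle_m = 5.0
speed_m_per_h = advance_per_cycle_m / cycle_time_s * 3600.0   # = 150 m/h
hours_for_500_m = 500.0 / speed_m_per_h                       # ~3.3 h of effective driving
print(f"{speed_m_per_h:.0f} m/h -> about {hours_for_500_m:.1f} h of driving for 500 m")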

However, to maximise the efficiency of the path-finding algorithms, it will be necessary, in the future, to merge several perceptions in order to obtain a wider field of view around the robot. When the robot fails to find a path to its goal in any direction, a call to the ground segment is generated. The performance of the path-execution algorithms has to be included in the margins used by the path-finding between obstacles. In difficult terrain, where few possible paths exist, accurate path execution is necessary to avoid very slow progression with very short trajectories. Although the trajectory control has been improved, path deviation remains an important source of limitations. Future work will be oriented towards "vision during motion" algorithms to overcome this problem.

4. CONCLUSION

The feasibility of both teleoperated and autonomous control modes has been proved in a realistic environment, with a real implementation on a mass-representative vehicle and on-board execution of all the flight software. The main limitations to autonomous path finding come from the precision of path execution on the one hand, and from the field of view when a single perception is used on the other. Research to overcome these problems will be part of the on-going I-ARES program (Boissier and Marechal, 1995).

For teleoperated modes, TESA brings an effective solution to the data-rate limitation for planetary exploration (mainly due to the power available for telemetry). Its field of application remains limited to the moon: the transmission delays to other planets will not allow predictive models reliable enough for continuous driving. "Move-and-wait" strategies can, however, be applied with the TESA mode in such cases, at a lower expenditure in communications than in a classical video-operated mode.

5. REFERENCES

Boissier, L. and L. Marechal (1995). Journal of Autonomous Robots.
Delail, M. (1994). First campaigns on the GEROMS mobile robot test site. IARP, Montreal, June.
Hirzinger, G., et al. (1992). The sensory telerobotic aspects of the space technology experiment ROTEX. iSAIRAS, Toulouse, September.
Proy, C., M. Lamboley and L. Rastel (1994). Autonomous navigation system for the Marsokhod rover project. iSAIRAS, Pasadena, October 18-20.