Copyright © IFAC Intelligent Autonomous Vehicles, Madrid, Spain, 1998
SENSORIAL AND NAVIGATION SYSTEMS FOR A MOBILE ROBOT (ROGER)
Pere Ridao, Josep Forest, Lluís Pacheco, Rafael Garcia, Xavier Cufí
Computer Vision and Robotics Group, Institute of Informatics and Applications, University of Girona. Avda. Lluís Santaló s/n, 17003 Girona (Spain). Tel: +34-72-418474. Fax: +34-72-418098. Email: {pere.forest}@eia.udg.es
Abstract: This paper presents the sensorial system, the navigation systems and the data communications of the ROGER autonomous mobile robot. This robot is capable of operating in structured indoor environments and in certain outdoor environments. The robot can place itself independently in its workspace, and can perform some basic manipulation operations. These features make it possible to use this mobile robot in a wide range of applications (industrial, social, etc.). Copyright © 1998 IFAC

Keywords: mobile robots, computer vision, colour, teleoperation, autonomous navigation.
1. INTRODUCTION

Fig. 1 shows the first implemented prototype, which has been used in several applications such as: the recognition and tracking of objects based on their chromatic characteristics; static and dynamic trajectory planning in structured environments, avoiding collisions with obstacles; and the manipulation and transport of certain objects.
ROGER is designed as a work platform to test autonomous navigation systems and sensorial elements based on real-time image processing.
With the goal of obtaining an adequate behaviour of the robot's movement on irregular surfaces, a mechanical traction system based on two rubber belts has been used, driven by two DC motors that allow the robot to turn on itself without any significant displacement. The robot is made of a metal tube structure with three platform levels. The power supply system, based on batteries and an auxiliary electric generator, is located on the lower platform; it weighs approximately 15 kg and its location is chosen in order to improve the stability of the entire system. The control systems and sensors are placed on the intermediate platform. The cameras of the vision system are fixed on the top platform. The approximate dimensions of the robot are: 960 mm length, 840 mm width and 620 mm height, with a weight of 60 kg. The motored wheels of the robot (with a diameter of 140 mm) move the two rubber belts and are located at the upper vertices of the trapezoid. Its dimensions are: 600 mm for the lower base, 800 mm for the upper one and a height of 500 mm.

The power supply system of the robot consists of three 12 V, 24 Ah batteries. Two batteries power the traction motors of the robot, and the third supplies the control systems. For safety reasons, the power of the robot can be connected or disconnected through a switch that is remotely controlled by the link computer.

Fig. 1: First prototype of the autonomous mobile robot ROGER.

2. SENSORIAL SYSTEM

It is known that there exists an important relationship between the performance of a mobile robot and the characteristics of its sensorial system (Thorpe et al., 1995).

The core of the sensorial system of the robot is a basic PC-Pentium board computer that supervises the functioning of all the subsystems of the robot, as shown in the scheme of Fig. 2. The subsystems of the robot are: the colour image processing boards, the control unit of the ultrasonic sensors, the robot traction controllers and the camera movement controllers.

Fig. 2: Scheme of the sensorial system of ROGER (wheels control, radio module, ultrasonic sensors).

2.1 Colour Image Processing

One of the most important sensing elements of the robot is the computer vision system. The input of the system is provided by a composite-video, auto-iris and auto-focus CCD colour camera. The camera can sweep the robot's workscene with 2 DOF: the first DOF is a 270-degree rotation around an axis orthogonal to the robot's plane, and the second is the pitch of the camera's optical axis (90 degrees).

The hardware converts the composite video into the chromatic components HSI (Hue, Saturation, Intensity), as shown in Fig. 6. The processor is capable of segmenting images in real time, using a determined margin of the HSI components. If the chromatic characteristics (Hue and Saturation components) of the object to be manipulated are known and sufficiently well determined, the image can be segmented so that the new image contains only the object(s) of interest.

Ultrasonic sensors. The sensorial system also has a set of ultrasonic sensors that detect obstacles in the immediate surroundings of the robot. These sensors serve as an alarm system when obstacles are detected at a distance below a determined security threshold. The central computer periodically polls the status of the ultrasonic sensors.
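The real-time segmentation performed by the HSI hardware can be mimicked in software. The sketch below is our own illustration (not the robot's actual firmware): it uses Python's `colorsys` HSV conversion as a stand-in for HSI, and blanks every pixel whose hue/saturation falls outside a given margin.

```python
import colorsys

def segment_by_hue(pixels, hue_range, sat_min):
    """Keep only pixels whose hue lies in hue_range and whose saturation
    exceeds sat_min; other pixels are blanked to (0, 0, 0).
    `pixels` is a list of (r, g, b) tuples with components in [0, 1]."""
    lo, hi = hue_range
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)  # HSV stand-in for HSI
        if lo <= h <= hi and s >= sat_min:
            out.append((r, g, b))        # pixel belongs to the target object
        else:
            out.append((0.0, 0.0, 0.0))  # background suppressed
    return out
```

For a red object, a hue margin of roughly (0.0, 0.1) with a saturation floor of 0.5 keeps red pixels and removes the rest of the scene.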
2.2 Traction control of the robot

Two 12 V, 150 W DC motors, used for the traction of the robot, allow the robot to move at a speed of approximately 0.5 m/s. It is equipped with an odometric system consisting of two incremental encoders joined to the axes of two free wheels of sixteen centimetres diameter, placed close to the traction belts and in permanent contact with the ground.

The central computer sends to the motor controllers, through one of the parallel ports, a series of commands indicating which motors to drive, in which direction to turn and at what speed. The controllers generate pulse-width-modulation (PWM) outputs for each motor, as well as a signal that indicates the direction of the turn. These signals drive a power stage consisting of a FET transistor bridge.

The incremental encoders supply signals that indicate the position of the robot to the central computer through another parallel port. Each revolution of the encoders is detected by a count of 64 pulses (6 bits), which implies a resolution in the order of a centimetre.

2.3 Camera movement control

A stepper motor drives each joint, while an incremental encoder carries out the feedback of the position. The motor controllers of the joints receive from the central computer the movement commands, which indicate the direction of the movement and the number of steps. The controllers generate outputs that can directly drive the inputs of the drivers of conventional stepper motors. The incremental encoders fixed to the axes of the joints provide signals that indicate their positions to the central computer.

3. NAVIGATION SYSTEMS AND COMMUNICATIONS OF THE ROBOT

Hereafter we describe the three kinds of navigation allowed by our robot: teleoperation, indoor navigation and the object-following behaviour.

3.1 Teleoperated navigation

In this type of navigation, the user controls the movement of the robot through a series of commands transmitted from a remote workstation (Conway et al., 1987). The robot has a radio transceiver system which allows the reception of control commands and the emission of the signals provided by the sensors, in order to feed back the position. In addition, a TV emitter allows the transmission of the video signal captured by the camera. This exchange of information is performed by a link computer, established in the work environment of the robot, equipped with a TV receiver and a radio signal transceiver module similar to the one used on the mobile robot.

The user can teleoperate the robot from the link computer through the use of a peripheral (joystick, mouse, keyboard, ...) or from any remote computer connected to the link computer by Internet (using the TCP/IP protocol). Fig. 3 shows the scheme used.

Fig. 3: Communications diagram.
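The centimetre-order odometric resolution quoted in Section 2.2 follows directly from the figures given there. A quick check, assuming the stated 16 cm free-wheel diameter and 64 pulses per encoder revolution:

```python
import math

WHEEL_DIAMETER_M = 0.16   # free odometry wheels, 16 cm diameter
PULSES_PER_REV = 64       # 6-bit count per encoder revolution

def distance_per_pulse():
    """Ground distance travelled per encoder pulse."""
    circumference = math.pi * WHEEL_DIAMETER_M  # about 0.503 m
    return circumference / PULSES_PER_REV
```

This gives roughly 7.9 mm per pulse, consistent with the "resolution in the order of a centimetre" stated above.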
Position Based Navigation. The user specifies the initial and final points of the trajectory as well as the obstacles in the work environment. The link computer obtains the optimal trajectory using techniques for determining the configuration space of the work environment together with generalized Voronoi diagrams. This mixed technique basically uses the configuration space to determine the optimal path to follow (minimal-length trajectory). At the intersection points of the trajectory and in the presence of obstacles, generalized Voronoi diagrams (GVD) are used locally in order to obtain the trajectory equidistant between them, minimising the risk of collision (Lee et al., 1981; Lozano-Pérez, 1983). Fig. 4 shows the scheme of a trajectory in a determined work environment.

The coordinates of the trajectory points are sent from the link computer to the control computer in the mobile robot. The mobile robot follows the specified trajectory, comparing the programmed position with the position it occupies in the work environment (position feedback from the sensors). A PID controller acts on the motor controller according to the obtained error signal. The robot position is sent to the link computer with the aim of monitoring the trajectory followed by the robot, and of obtaining information about the behaviour of the robot movement on different types of soils or specified trajectories. The main problem in this kind of navigation is the accumulated error in the position estimation due to the use of incremental encoders in the odometry.

Fig. 4 (Left): Trajectories using the configuration space (continuous line) and Voronoi diagrams (dashed line). The Voronoi diagrams are used only near the point A (high risk of collision). (Right): Trajectory followed by the robot.

Landmark Based Navigation. In order to keep the accumulated errors low, we are working on another kind of navigation. The world is now modelled by corridors intersected by crossroads (Fig. 5, left). The robot position only has to be known within the actual corridor or crossroad. Each crossroad is identified by landmarks located on the walls of its neighbouring corridors. A landmark is always found at the right-hand side of the robot's movement direction. When entering a crossroad Ci the robot finds an Initial_Crossroad landmark, and when leaving it, the robot finds an Ending_Crossroad landmark. The navigation is then like that done by a person looking for an unknown place in a city: "Go straight ahead, and take the first turn to the right ...". Using a Follow_Corridor behaviour (like that explained in (Arkin, 1989)) the robot can go through a corridor keeping a constant distance from the walls. The robot realises that it is entering the crossroad Ci when it finds the Initial_Crossroad landmark; it then leaves the Follow_Corridor behaviour and turns around its own axis to take the direction of the next corridor. The robot realises that it is leaving Ci when it finds the Ending_Crossroad landmark. The initial position and the goal position of the robot are defined by the label of the related corridor or crossroad and its (x, y) position relative to the corridor. The path planner used at this level is very simple (see Fig. 8); in fact, it is a modified version of the Dijkstra algorithm applied to a graph obtained from the world model (see Fig. 5, right).
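The corridor-graph planner mentioned above can be sketched as a plain Dijkstra search over nodes (corridors and crossroads) with corridor lengths as edge weights. The graph layout and node names below are hypothetical, for illustration only:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra on a corridor graph.  `graph` maps a node label to a list
    of (neighbour, corridor_length) pairs.  Returns (length, node list),
    or (inf, []) if the goal is unreachable."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neigh, length in graph.get(node, []):
            if neigh not in visited:
                heapq.heappush(queue, (cost + length, neigh, path + [neigh]))
    return float("inf"), []
```

On a toy model with one crossroad and two alternative corridors to the goal, the search returns the shorter route together with its length, which the planner would then turn into FollowCorridor/TURN commands.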
Fig. 5 (Left): Indoor environment. The landmarks are used to identify when the robot is entering or leaving a crossroad. (Right): Graph used to find the minimal path. The edges are labelled with the lengths of the corridors.
Landmark detection hardware. In order to keep the load on the Control PC low, we have built custom hardware to detect the landmarks. Fig. 6 shows the block diagram of the system used. The composite video signal generated by an auto-focus, auto-iris CCD colour camera is converted to RGB, and from RGB to HSI in real time, by using a video/RGB conversion module and a specific RGB/HSI (Real Time Colour) preprocessor. The HSI signals are used as the input of another processor that interprets the landmarks. The segmentation of the colour bars forming the landmarks is much more efficient in the HSI colour space, as stated by Batlle (1993). The landmark detector processor allows the interpretation of the different landmarks, generating a binary output code associated with each landmark. This code is read and interpreted by the Control PC, which then generates the proper commands to actuate the DC Motor Control Module. Fig. 7 shows the landmark detector processor block diagram. The landmarks are composed of 3 different colour bands. When a pixel of one of these colours is detected, the MUX outputs the counting value stored in the corresponding latch; this value is loaded on the input counter, incremented, and loaded back into the latch. When a pixel of another of the colours is detected, the operation is repeated using the corresponding latch. The control block of the landmark detector processor generates the signals that drive this process. A landmark band detection is considered valid when a threshold is reached in the counter for an image frame; a specific flag is then set and input to the processor's state machine.
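The counter/latch/flag scheme can be paraphrased in software as follows. This is our own simplification of the hardware just described: the band labels, the single shared threshold and the flag-to-code packing are illustrative assumptions.

```python
def detect_landmark(frame_colours, threshold):
    """Software analogue of the latch/counter scheme: count the pixels of
    each of the three landmark colour bands over one frame, and set a flag
    bit when a band's count reaches the threshold.  `frame_colours` is an
    iterable of labels in {'band0', 'band1', 'band2', 'other'}.
    Returns a 3-bit code, one bit per validly detected band."""
    counts = {"band0": 0, "band1": 0, "band2": 0}
    for colour in frame_colours:
        if colour in counts:
            counts[colour] += 1      # the hardware increments the band's latch
    code = 0
    for bit, band in enumerate(("band0", "band1", "band2")):
        if counts[band] >= threshold:
            code |= 1 << bit         # flag fed to the state machine
    return code
```

The Control PC would then map each non-zero code to a specific landmark (e.g. Initial_Crossroad vs. Ending_Crossroad) and issue the corresponding motor commands.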
Fig. 6: Landmark detection hardware (video conversion modules feeding the Pentium processor).

Fig. 7: Landmark detector processor (multiplexer, counter, latches, control block and state machine).

The decoding of the different landmarks is made depending on the state of the flags, generating the binary code that can be read and interpreted by the Control PC. This processor has been designed using two ispLSI1024 PLDs (Lattice Semiconductor).
3.3 The Follow Object behaviour

When this behaviour is enabled, our robot is able to follow an object in an indoor or outdoor environment. The tracking algorithm makes use of well-defined chromatic parameters (HSI) in order to identify the object to follow. After the segmentation process, done by specialised real-time hardware (RTC), the gravity centre of the object and its area are computed. These parameters are used by a casuistic algorithm to compute the new velocity vector to be sent to the motor wheel controller. When the object is found at the right side of the scene, the robot turns to the right, and when it is found at the left side, the robot turns to the left. In the same way, when the area of the object is small the robot increases its velocity, and when it is big the robot decreases it. Thus, the robot tries to keep the object centred on the screen and its area within a range. Of course, it may lose the object if the object moves at very high speed. When the object disappears from the robot's field of view, the robot stops until the object appears again. As further work, we want to codify this algorithm using fuzzy logic.

Fig. 8: Obtaining the path plan to follow. The world model and the actual and goal positions yield a command sequence such as: Direction=Right; FollowCorridor; at P1 TURN -90; ... Direction=Down; FollowCorridor; at P4 TURN 90; GO (xg, yg).

4. CONCLUSIONS

ROGER is a versatile mobile platform that allows the testing of different behaviour algorithms depending on the application. Several robot navigation methods have been described: teleoperated navigation, indoor navigation and outdoor navigation. The platform also allows the testing of different perception methods; as described in the paper, several real-time vision systems have been developed to support the navigation. Teleoperated navigation, indoor navigation based on landmarks and tracking objects in outdoor environments have been successfully tested. In the latter, even though the HSI colour space and an auto-iris CCD camera have been used, the system's behaviour is slightly dependent on the workscene light conditions, mainly in outdoor environments. Work is also being focused on the application of landmarks in outdoor environments.

REFERENCES

Arkin, R.C. (1989). Motor Schema-Based Mobile Robot Navigation. The International Journal of Robotics Research, pp. 92-112.

Asada, M., Ichikawa, H., Tsuji, S. (1986). Determining Surface Properties by Projecting a Stripe Pattern. Proceedings of the 8th International Conference on Pattern Recognition, pp. 1162-1164.

Batlle, J. (1993). Movement detection contribution, independent of camera movement, based on colour image processing. PhD Thesis, Universitat Politècnica de Catalunya.

Batlle, J., Cufí, X., Forest, J., Garcia, R., Martí, J., Pacheco, Ll., Tomàs, J., Juárez, M., Villanueva, M. (1995). ROGER (Generic Operational Robot in Experimental Research). Competition of Intelligent Autonomous Vehicles.

Conway, L., Volz, R., Walker, M. (1987). Teleautonomous Systems: Methods and Architectures for Intermingling Autonomous and Telerobotic Technology. IEEE International Conference on Robotics and Automation.

Lee, D.T., Drysdale, R.L., III (1981). Generalized Voronoi Diagrams in the Plane. SIAM J. Comput., Vol. 10, No. 1, pp. 73-87.

Lozano-Pérez, T. (1983). Spatial Planning: A Configuration Space Approach. IEEE Trans. on Computers, Vol. C-32, pp. 108-120.

Salvi, J., Batlle, J. (1996). Contribution on the Mobile Navigation in Structured Indoor Environments using Laser Beam Patterns. EUROPTO'96-SPIE.

Salvi, J., Pacheco, Ll., Garcia, R. (1996). ROGER: A Mobile Robot for Research Experimentations. Proceedings of the 6th International Symposium on Robotics and Manufacturing (ISRAM-96), Vol. 6, pp. 745-750.

Thorpe, C.E., Hebert, M. (1995). Mobile Robotics: Perspectives and Realities. Proceedings of the 7th International Conference on Advanced Robotics.