‘Pull to position’, a different approach to the control of robot arms for mobile robots


Journal of Materials Processing Technology 123 (2002) 393–398

D.J.F. Toal*, C. Flanagan

The Department of Electronic and Computer Engineering, University of Limerick, Limerick, Ireland

Received 18 April 2001; received in revised form 18 April 2001; accepted 21 November 2001

* Corresponding author. Tel.: +353-61-202264; fax: +353-61-338176. E-mail address: [email protected] (D.J.F. Toal).

Abstract

This paper investigates an alternative approach to the current techniques used in robotics for the control of articulated limbs. With an interest in equipping mobile robots with limbs, the starting point was to consider approaches used in other robot application domains, specifically industrial robotics, as this is the domain where robotic arms have found most widespread application. The proposed control approach, however, differs radically from the inverse kinematic solution approach employed in current serially jointed robots. Creatures with limbs do not use inverse kinematics and transformation matrices when they move a limb from one position to another. Nature uses some other superior technique such that even insects are capable of controlling and coordinating the movement of six limbs each having up to five degrees of freedom. The approach used here is to develop a simple structured linked model of the articulated limb. This model is manipulated in simulation to 'pull' the end of the limb towards the desired destination position and orientation. Then the actual motion is realised by driving the joints and linkages through the trajectories followed in simulation. This control approach could be considered to be a primitive version of the perception-driven control which living creatures can employ. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Robotics; Kinematics; Model based control

1. Introduction

This paper describes the work being carried out on the design and control of light articulated limbs for mobile autonomous robots. The work follows and extends the work carried out by the authors in the design, control and navigation of simple autonomous mobile robots, which can operate, wander, avoid obstacles, explore, and map the unstructured environment in which they find themselves. The design of any robot must of necessity address the target domain, and thus the design of autonomous mobile robots must address the fact that such robots can move and therefore operate in a non-static environment. This implies that the designer should not attempt to control or predetermine the environment for the purposes of robot control or intelligence. To enable autonomous mobile robots to interact with and manipulate their environments, or to manipulate objects they encounter, they must be equipped with robot arms. With an interest in equipping mobile robots with limbs, the starting point was to consider design and control approaches used in other robot application domains, specifically industrial robotics, as this is the domain in which robotic arms have

found most widespread application. On investigation, however, the control approaches employed in the better structured industrial robotics domain were found not to be wholly appropriate for the control of limbs of simple mobile robots. In the industrial robotics scenario, object position and presentation are positively controlled by fixturing and part feeders. Mobile robots must, as stated, operate in unstructured environments and in situations not fully envisaged by the robot builders. Such robots will encounter unidentified or unrecognised objects at random locations. In one form or another, the control approaches used in the vast majority of industrial robot applications are variants on position control [1]. The robot's spatial geometry (link lengths, joint rotations) is known and the robot is programmed to manipulate objects whose geometry, positions and orientations in a world reference frame are known or can be ascertained with appropriate sensors. In the unstructured environments in which autonomous robots are targeted to operate, this is not the case. Potential objects of interest may be encountered and recognised by a robot's sensors, but the exact position, size and orientation of such objects relative to the robot will not be known. To manipulate such objects, the robot could utilise sophisticated measuring equipment such as laser ranging equipment and vision systems in order to establish object position, size and orientation relative to the mobile robot. Once target geometry and location are known, control strategies similar to those employed by industrial robots could be used for target manipulation. There is, though, a simpler approach.

2. Background



Autonomous mobile robotics is an area of significant research interest. Much of the literature describes work on the control of such robots employing artificial intelligence (AI). The real world is a complex, unstructured environment. Robots which are designed to operate in the real world must have the ability to deal with this environment. They must be able to operate in situations which their designers only vaguely envisaged and must also have the ability to respond appropriately and quickly to unexpected events. The "classical" AI approach to interacting with an environment is to divide the task into a number of major subsystems, typically: (1) perception; (2) world modelling; (3) planning; (4) task execution; and (5) motor control. The perceptual subsystem handles the sensing devices connected to the robot. The modelling subsystem converts the sensor input into a description of where the robot is in relation to its internal model of the environment. The planning subsystem attempts to work out how it will achieve its goals, given the current world state. Task execution breaks down the plan into detailed motion commands and, finally, motor control causes these commands to be executed. Each of these subsystems is a complex program, and all have to work together perfectly for the robot to operate at all. Some of the subsystems are extremely complex, e.g. the tasks of perception and world modelling. Currently, it is only possible to design such subsystems for structured environments. The noisy and random environment of the real world overwhelms them. In particular, as the complexity of the environment increases, the time needed to perceive, model and plan about the world increases exponentially. These points would indicate that this approach will not yield a solution to the control of such robots, or at the very least that the technology of AI has not sufficiently matured to solve such problems. If the environment is too complex for a given robot with this classical AI, the robot will simply fail to achieve its goals and will do nothing.
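To make the shape of this decomposition concrete, the sketch below chains the five subsystems as a minimal Python pipeline. It is purely illustrative: the class and function names, the confidence threshold and the toy planner are assumptions, not any particular robot's software; the point is simply that every stage must cope with the environment before the robot can act at all.

```python
# Illustrative sketch (not from the paper): the "vertical" classical AI pipeline,
# in which every subsystem must complete before the robot can act at all.
from dataclasses import dataclass

@dataclass
class WorldModel:
    robot_pose: tuple      # symbolic description of where the robot believes it is
    obstacles: list        # symbolic descriptions of perceived objects

def perceive(sensor_readings):
    """Perception: turn raw sensor data into detected features."""
    return [r for r in sensor_readings if r["confidence"] > 0.5]

def model_world(features):
    """World modelling: fuse the features into a symbolic internal model."""
    return WorldModel(robot_pose=(0.0, 0.0, 0.0),
                      obstacles=[f["position"] for f in features])

def plan(world, goal):
    """Planning: choose a sequence of abstract actions given the modelled world."""
    if world.obstacles:
        return ["avoid_obstacle", "replan_route"]
    return ["turn_towards_goal", "move_forward"]

def execute(plan_steps):
    """Task execution: expand abstract actions into motion commands."""
    return [(step, 0.2) for step in plan_steps]   # (command, speed in m/s)

def motor_control(commands):
    """Motor control: send the commands to the actuators."""
    for cmd, speed in commands:
        print(f"actuate {cmd} at {speed} m/s")

# The chain only works if every stage copes with the environment:
readings = [{"confidence": 0.9, "position": (1.0, 2.0)}]
motor_control(execute(plan(model_world(perceive(readings)), goal=(5.0, 5.0))))
```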

As shown in Fig. 1, this classical approach to robot control can be thought of as a type of vertical division of the control problem into functional units, each of which is responsible for a well-defined task and is incapable of controlling the robot by itself. An important feature of this approach to robot control is that the world perceived by the robot must be reduced to some sort of symbolic description which can then be reasoned about by the planning subsystem using the classical AI approach based on state-space search and tree-pruning. However, this task of grounding symbols in aspects of the real world has been, and remains, one of the most difficult and elusive goals of AI. The subsumption approach to robot control is an alternative to the classical approach. It is a minimalist approach. Instead of having a number of complex individual vertical tasks, the subsumption approach tackles the control problem by thinking of it as a number of horizontally arranged layers, as shown in Fig. 2. Each layer is designed to implement a competence, i.e. the ability to display a behaviour. Thus, each layer is fully capable of controlling the robot by itself, albeit in a possibly simple-minded way. Each of the horizontal layers therefore contains elements of all the vertical tasks found in a classical system, usually implemented very simply. Each layer implements one behaviour, such as the ability to move away from an obstacle, to move around an area in a random fashion, or to explore the robot's environment. Thus, each process need not tackle the whole complex problem of perceiving everything in its environment, or planning the whole business of getting the robot from A to B; instead, it performs only that part of perception, planning, etc. which is appropriate to the task it has to perform. As an example, a process which is designed to implement obstacle avoidance need only be able to move away from an obstacle noticed by the robot's sensors. A classical subsumption architecture, as described by Brooks [2], consists of a set of complete robot control systems, each of which achieves a particular level of competence. Brooks has defined eight such layers of competence, which he labelled layers 0–7. The layer 0 control system provides the robot with the ability to avoid contact with objects (whether these are moving or stationary). The layer 1 system allows the robot to wander around aimlessly without hitting things. Layer 2 endows the robot with the ability to explore the world by seeing places in the distance using its sensor array and heading for them if they appear to be reachable. Layers 3–7 add much more complex behaviour such as the ability to map the environment, formulate plans about it and reason about the state of the world.
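By contrast, the layered organisation can be sketched as a set of small, self-contained behaviours with a very simple arbiter. The sketch below is a simplified fixed-priority arbitration standing in for Brooks' suppression and inhibition wiring; the function names, sensor fields and thresholds are assumptions made for illustration, not the authors' code.

```python
# Simplified subsumption-style controller (illustrative only): a fixed-priority
# arbitration standing in for Brooks' suppression/inhibition wiring.
import random

def avoid_objects(sensors):
    """Obstacle-avoidance competence: steer away when something is close, else stay silent."""
    if sensors["front_range"] < 0.3:
        return {"turn": 90.0, "speed": 0.0}
    return None

def explore(sensors):
    """Explore competence: head for a distant place seen by the sensors, if any."""
    if sensors.get("beacon_bearing") is not None:
        return {"turn": sensors["beacon_bearing"], "speed": 0.3}
    return None

def wander(sensors):
    """Wander competence: always has something to do - move about randomly."""
    return {"turn": random.uniform(-20.0, 20.0), "speed": 0.2}

# Each entry is a complete (if simple-minded) controller; the first one that
# produces a command takes over the robot, subsuming those below it.
LAYERS = [avoid_objects, explore, wander]

def arbitrate(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command
    return {"turn": 0.0, "speed": 0.0}

print(arbitrate({"front_range": 1.2, "beacon_bearing": 15.0}))
```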

Fig. 1. The classical AI approach to robot control.

Fig. 2. Subsumption architecture.



Our initial work on mobile robotics [3–5] was to build robots employing the subsumption architecture and to control robots with minimalist sensory subsystems. With the clear-cut division of competencies in subsumption, we found that, on subsuming control, the controlling behaviour had no knowledge of the lower-level competence's task. This led to erratic behaviour under certain circumstances. For example, if a robot, while approaching a destination or target point, encountered an obstacle, the obstacle avoidance competence would take over control of the robot. In avoiding the obstacle the robot could turn in many directions. Some possible directions, if chosen by the obstacle avoidance competence, would in fact take the robot away from the desired target, whilst other trajectories, although not directly on course, would be preferable in that the robot would continue to progress towards the target. This problem was overcome by fusing the outputs of certain competencies using fuzzy logic techniques [6]. To date, we have built small autonomous mobile robots capable of moving around in an unstructured environment, mapping that environment, and avoiding obstacles in their path while still choosing a path which brings the robot closer to a target destination. Our current work is aimed at equipping such robots with arms capable of manipulating objects in their environment. The individual competencies we have equipped our robots with are very simple and primitive. Nevertheless, the resulting emergent behaviour appears complex and intelligent. One of the important lessons of subsumption for the operation of robots in unstructured real world scenarios is that the domain is too complex to model. Basing robot control on perceiving and modelling the world overwhelms the robot to an extent such that decision making becomes impossible and the robot achieves little. In many respects, the lesson of subsumption is not to model the world but to use the world as its own best model, based on direct sensory input. When addressing the problem of equipping autonomous robots with arms, we felt that the axiom still applies: use the world as its own best model. To build internal models of the size, location and orientation of target objects of interest would require sensory systems capable of accurately measuring these parameters, representing images of these objects in the robot's internal model of the world, and then using this model to make decisions and to act in order to effect changes in the robot's environment. Since this approach is complex and fragile, and has not been employed in controlling the robots as they navigate their environment, it is


not an appropriate approach to controlling the manipulation of encountered objects with robot arms. The robot limb control approach that we are using is not based on a world model but is instead driven directly by sensory input, in keeping with the subsumption architecture approach. The control of the autonomous robot arm is in reaction to stimulus/input from the world. Creatures with limbs do not use inverse kinematics and transformation matrices when moving a limb from place to place. To illustrate this, pick an object such as a pencil off the desk and then replace it. Swivel around in your chair, close your eyes and turn back again. Now see how easy it is to pick up the pencil while keeping your eyes shut. Not so easy; you may manage, then again you may not. If our limb control involved knowing and recording co-ordinates of objects in world co-ordinates and applying inverse transformations to derive joint motions, we could pick up that pencil every time with our eyes shut. This is how industrial robots operate. However, if the pencil were moved between pick-ups, a robot system would require sophisticated vision, recognition and measuring devices, whereas we, with our eyes open, can easily pick up the pencil from any location.

3. Robot control

The control approaches employed in the majority of industrial robots are variants on point-to-point position control [1]. The controller moves the robot arm from a known current position and orientation to a desired destination position and orientation. As seen in Fig. 3, direct kinematics involves determining the position and orientation of the tool centre point (TCP) of the robot's end effector in world co-ordinates from a knowledge of the robot joint angles (available from joint encoders), link lengths and offsets. Inverse kinematics yields possible joint co-ordinates or angles and link extensions (for linear drives) which will deliver the TCP to a given location at a certain orientation [7]. Encoder information fed back from the robot joints is used to determine the current robot position. Target points where the robot carries out operations such as grasp or release are transformed into robot joint co-ordinates, and these form the position reference input for the position control of the robot arm. This is the fundamental control approach employed in servo-controlled industrial robots. Trajectory control is likewise based on prior knowledge of the required trajectory geometry.

Fig. 3. Direct and inverse kinematics.
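To make the direct/inverse distinction concrete, a minimal planar two-link example is sketched below. The link lengths and symbols are assumptions for illustration only (this is not the geometry of any robot discussed in the paper): direct kinematics maps joint angles to the TCP, while inverse kinematics recovers one set of joint angles that places the TCP at a requested point.

```python
# Minimal planar two-link example (illustrative; link lengths are assumptions).
# Direct kinematics: joint angles -> TCP position.
# Inverse kinematics: desired TCP position -> candidate joint angles.
import math

L1, L2 = 0.4, 0.3   # link lengths in metres

def direct_kinematics(theta1, theta2):
    """TCP position in the world frame from joint angles (radians)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y):
    """One elbow-down solution for joint angles reaching (x, y), if reachable."""
    cos_t2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    if abs(cos_t2) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(cos_t2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2

# Round trip: pick joint angles, compute the TCP, then recover angles for it.
tcp = direct_kinematics(math.radians(30), math.radians(45))
print(tcp, inverse_kinematics(*tcp))
```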


Industrial robots are programmed in two main ways. They can be programmed under operator control via a teach pendant, where the operator moves the various robot joints to move the arm into a desired location and the point is then stored (encoders read) to memory. Such locations can be stored either in joint co-ordinates or in world co-ordinates. A program is written which controls the sequence of robot motions to the various points and the operations carried out at each point. An alternative approach often employed is off-line programming based on CAD models of robot arms and part geometry. Both of these approaches are based around control of the robot end effector relative to geometric references and dimensioning, or co-ordinate geometry. Another programming approach used in certain industrial robot applications is teach by showing. The pull to position control approach we are using is akin to the teach by showing approach used for industrial robot applications such as spray painting. Teach by showing robots are sometimes also known as 'lead by the nose' robots. Such robots are used where the path of the end effector or robot tool is of primary importance to the robot application. The robot arm is generally not required to come to a stop at unique positions/orientations and perform functions, as is common in applications employing point-to-point control such as assembly robots. Typically, teach by showing robots are taught by the operator physically grasping the arm and leading it through the desired path in the exact manner and at the speed that will be mimicked by the robot in replay. As the arm is moved through the desired path, the position of each joint/axis is recorded on a constant time base. This forms a continuous time history of each axis/joint position throughout the motion, and this is used to drive the robot in replay. Since the operator must grasp and move the robot through the desired path, the arm must be counterbalanced and free to move while being taught so that the operator can perform the task. Programming in teach by showing robots is direct and very instinctive. Possible collisions are dealt with directly by the operator, who can control the position or trajectory of each axis or linkage throughout the motion, moving the robot arm in a manner that avoids collisions while still achieving a desirable motion of the tool or end-effector.

3.1. Pull to position

In the case of autonomous mobile robots operating in unstructured environments, the position, geometry and orientation of objects of interest encountered by the robot are not known. A robot control approach to move the robot gripper towards target objects using inverse kinematics would first require the determination of the position and orientation of objects encountered relative to the robot. Then the arm could be controlled to address the objects suitably for grasping or manipulation. As stated, this would require sophisticated range finding, vision and object/image recognition capabilities so that the internal model of the robot's situation could

be updated, allowing planning and execution of the object manipulation. We aim to use sensor inputs from targets directly to control the motion of the robot and thereby use the world and its objects as its own best model, rather than internalising a facsimile of the world. The robot arm structure, dimensions and joint angles, however, are well known or directly available from joint encoders, and so it is natural to use these for control. So the robot arm's current configuration is internally modelled, but the surrounding encountered objects are not. There are a number of alternative possibilities for the implementation of pull to position. One approach is to develop a simple structured linked model of the articulated limb. The model records link lengths, joint angles and joint stiffness. The model is manipulated to pull the end of the limb towards the desired destination position and orientation, the direction of this pull being derived from robot sensors. Then the actual motion is realised by driving the joints and linkages through the trajectories followed by the corresponding joints and linkages in the model. By incorporating continuous sensory feedback of joint angles and deriving updated target vectors from target sensors, very simple limb models are sufficient for the control task. Simply pulling the limb end point towards the destination entails a simple vector mapping (Fig. 4). Thus the control sequence is to:

1. determine the desired direction of pull to the target from sensors,
2. apply the pull, or vector, to the model of the robot arm,
3. record the changes of joint angles in the modelled robot arm,
4. drive the joints of the robot through the associated model joint angle changes,
5. recalculate the direction to the target from the sensor,
6. go through the loop again.

The above steps can be considered to form the basis of a direction servo controller. Employing servo control of motion direction rather than position means that the limb will achieve the target point even with simplified modelling. The arm will move towards the target without a fix of target position in any co-ordinate system. There is also no need to accurately model linkage weights and friction: once the servo motors or joint torque control are adequate to drive the linkages, the arm will move towards the target.
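One possible realisation of this loop, for a planar two-link model, is sketched below. The 'pull' is mapped into joint-angle changes with a Jacobian-transpose step; this particular mapping, together with the link lengths, gain and simulated sensor, is an assumption made for illustration rather than the implementation described in the paper.

```python
# Minimal pull-to-position sketch for a planar two-link model (illustrative;
# link lengths, gain and the Jacobian-transpose mapping are assumptions, not
# the authors' implementation). A unit "pull" direction from the target sensor
# is applied to the modelled arm; the resulting joint-angle changes are then
# commanded to the joints, and the loop repeats with a fresh sensor reading.
import math

L1, L2 = 0.4, 0.3          # modelled link lengths (m)
GAIN = 0.05                # how far the model yields to a single pull

def tcp(theta):
    """Limb end point of the modelled arm."""
    t1, t2 = theta
    return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
            L1 * math.sin(t1) + L2 * math.sin(t1 + t2))

def pull_model(theta, pull_dir):
    """Map a Cartesian pull at the limb end into joint-angle changes (Jacobian transpose)."""
    t1, t2 = theta
    j = [[-L1 * math.sin(t1) - L2 * math.sin(t1 + t2), -L2 * math.sin(t1 + t2)],
         [ L1 * math.cos(t1) + L2 * math.cos(t1 + t2),  L2 * math.cos(t1 + t2)]]
    d1 = GAIN * (j[0][0] * pull_dir[0] + j[1][0] * pull_dir[1])
    d2 = GAIN * (j[0][1] * pull_dir[0] + j[1][1] * pull_dir[1])
    return d1, d2

def sensed_pull_direction(theta, target):
    """Stand-in for the target sensor: unit vector from the limb end towards the target."""
    x, y = tcp(theta)
    dx, dy = target[0] - x, target[1] - y
    dist = math.hypot(dx, dy)
    return (dx / dist, dy / dist), dist

theta = [math.radians(20), math.radians(40)]   # model initialised from joint encoders
target = (0.1, 0.55)                           # used only by the simulated sensor

for _ in range(500):
    pull, dist = sensed_pull_direction(theta, target)   # 1. direction of pull from sensors
    d1, d2 = pull_model(theta, pull)                    # 2-3. pull the model, record changes
    theta[0] += d1                                      # 4. drive the (here, simulated) joints
    theta[1] += d2
    if dist < 0.01:                                     # 5-6. close enough, stop looping
        break

print("final TCP:", tcp(theta))
```

Because each iteration re-reads the direction to the target, errors in the model simply bend the path rather than preventing arrival, which is the point of servoing on direction rather than position.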

Fig. 4. Arm control realised through model manipulation.


Fig. 5. Autonomous mobile robot with revolute jointed arm and insect eye sensor array.

Fig. 6. Target direction sensor housing.

The above pull to position control approach could be considered to be a primitive version of the perception-driven control which living creatures can employ. As said above, creatures with limbs do not use inverse kinematics and transformation matrices when moving a limb from place to place.

4. Experimental validation

The control approach described in this paper is being tested in two ways. First, the aim is to prove the concept of controlling a jointed arm by manipulating a model of that arm. To this end, work is underway on the model-based control of a bench-top educational robot, the MA2000, which, although not a very sophisticated robot arm in terms of control resolution, should nevertheless prove sufficient for the purpose of proving the concept. For this experiment the controller of the MA2000 robot is being replaced by a separate computer/controller, the output of which is interfaced directly with the individual robot arm joint servo motors, thereby allowing the robot arm to be driven directly. In order to test the control of jointed robot arms being derived from sensory input indicating the direction to the 'target',

a second experiment has been set up. The sensory input devices one might expect for determining direction to a target would be CCD (charge-coupled device) cameras. Such a setup could use either stereoscopic cameras or fixed cameras mounted orthogonally, viewing the robot arm work space. With such an arrangement the robot could recognise objects of interest in its field of view and derive target direction information relative to the robot end effector. Such a process is by no means trivial, with issues such as light level, image manipulation, greyscaling, feature recognition and extraction all being involved processes.

Fig. 7. Electrical schematic of target direction sensor.


Fig. 8. Trajectory followed, albeit not in a straight line.

Even with these issues catered for, deriving the direction to a target object would require two or more cameras, with the associated problem of cross-relating the same object between the two camera images. To avoid all the technical difficulties of employing vision systems and to concentrate effort on testing pull to position, a simpler, lower-cost sensory system was designed for our robot experiments. An array of light dependent resistors (LDRs), each mounted at a different angle, much like the honeycomb arrangement of insect eyes, is used for the purpose of determining target direction. For the testing of the concept, a very simple target is being used, namely a light source, and the experiments are carried out in the dark. The eye element detecting the most light governs the target direction. This sensory array will initially be mounted on a robot arm, which in turn is mounted on the mobile robot platform. For this experimental work the robot arm need carry nothing other than this sensory array and so can be lightweight (Fig. 5).

4.1. Sensor design

Fig. 6 shows the target direction sensor housing in which the LDRs are mounted. The individual LDRs are 4 mm in diameter and are recessed in the holes such that the deeper they are set, the more directionally sensitive is their response to incident light. The dark resistance of the LDRs is 1 MΩ and at a light level of 10 lx their resistance is in the range 10–50 kΩ. Fig. 7 shows an electrical schematic of the sensor array. Under computer control, the analogue multiplexer (MUX) selects the individual LDRs in turn and the light value from each is read. The LDR detecting the brightest light is then selected in software and this gives the angle to the target. The angular resolution of the target direction sensor is a function of the packing density of the LDRs. This packing density is limited by the size of the individual LDRs and the requirement of having a compact sensor. The packing density of LDRs in the prototype built for the experiment is quite low. Nevertheless, with the sensor-derived direction built into the direction servo loop, the robot arm will still move towards the target, albeit not in a straight line, as shown in Fig. 8.
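In software, the scan-and-select step described above reduces to reading each MUX channel in turn and taking the bearing of the brightest LDR. The sketch below illustrates this; the channel count, bearing table and ADC read function are assumptions standing in for the actual hardware interface.

```python
# Illustrative scan of the LDR array (channel count, ADC read function and
# angle mapping are assumptions, not the actual hardware interface).
# Each MUX channel corresponds to one LDR mounted at a known bearing; the
# brightest reading gives the estimated direction to the light-source target.
import random

NUM_CHANNELS = 16
# Bearings (degrees) at which each LDR is assumed to be mounted in the housing.
CHANNEL_BEARINGS = [i * (180.0 / (NUM_CHANNELS - 1)) - 90.0 for i in range(NUM_CHANNELS)]

def read_ldr(channel):
    """Stand-in for selecting a MUX channel and reading the ADC (0..1023).
    A higher value means more light is falling on that LDR."""
    return random.randint(0, 1023)

def target_bearing():
    """Scan every LDR in turn and return the bearing of the brightest one."""
    readings = [read_ldr(ch) for ch in range(NUM_CHANNELS)]
    brightest = max(range(NUM_CHANNELS), key=lambda ch: readings[ch])
    return CHANNEL_BEARINGS[brightest]

print("estimated direction to target:", target_bearing(), "degrees")
```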

5. Conclusions

The control approach for robot limbs presented here is more appropriate to limb control on autonomous mobile robots than the conventional position control approach used in industrial robotics. This is because accurate co-ordinate geometry information for the objects the robot encounters in its environment is not known at the programming or design stage. With such a control approach, mobile robots should be able to interact with and manipulate objects in the unstructured environments they encounter. This non-measurement-based control approach may well find good application in the area of industrial robotics where the parts being manipulated by the robots are not presented in a structured fashion.

References

[1] H. Chae, et al., Model Based Control of a Robot Manipulator, MIT Press, Cambridge, MA, 1988.
[2] R.A. Brooks, A robust layered control system for a mobile robot, IEEE J. Robot. Automation RA-2 (1986) 14–23.
[3] C. Flanagan, D. Toal, B. Strunz, Subsumption control of a mobile robot, in: Proceedings of the 16th Conference of Polymodel on Applications of Artificial Intelligence, Sunderland, UK, 1995, pp. 150–158.
[4] D. Toal, C. Flanagan, C. Jones, B. Strunz, Subsumption architecture for the control of robots, in: Proceedings of the IMC-13 Re-Engineering for World Class Manufacturing, Limerick, Ireland, 1996, pp. 703–711.
[5] P. Fitzpatrick, C. Flanagan, A new approach to control architectures for the control of robots, in: Proceedings of Expersys97, Sunderland, UK, 1997.
[6] M. Leyden, D. Toal, C. Flanagan, A fuzzy logic navigation system for a mobile robot, in: Proceedings of the Second Wismar Symposium on Automatic Control, Wismar, 1999.
[7] J.M. Selig, Introductory Robotics, Prentice-Hall, Englewood Cliffs, NJ, 1992.