Material-Handling Robots for Programmable Automation

C. A. Rosen
SRI International, 333 Ravenswood Avenue, Menlo Park, California 94025

ABSTRACT

This paper deals with the requirements, present development, and research issues concerning advanced material-handling systems based on programmable manipulators that are commonly called industrial robots. Industrial robots are becoming increasingly important as part of programmable automation systems being developed for the production of batch-produced discrete-part goods. As contrasted with hard or fixed automation, programmable systems are designed for flexibility, that is, for ease of set-up for new or changing products in small or variable lot sizes, and for adaptive response to a variable environment. Material handling entails the controlled manipulation of raw materials, workpieces, assemblies, and finished products. Together with inspection and assembly, these functions represent essential parts of the total manufacturing process that are still highly labor-intensive. The great majority of present industrial robots (first generation) are performing relatively simple repetitive material-handling tasks in highly constrained or structured environments. They are also increasingly used in spot-welding and paint-spraying operations. Second-generation robots, which incorporate computer processors or minicomputers, have begun to appear in factories, considerably extending rudimentary pick-and-place operations. More advanced robots are being developed which incorporate visual and tactile sensing, all under computer control. These integrated systems begin to emulate human capabilities in coping with relatively unstructured environments. There are two major requirements for advanced material-handling systems. The first requirement is for a programmed robot to be able to acquire randomly-positioned and unoriented workpieces, subassemblies, or assemblies from stationary bins or from moving conveyors, in a predetermined manner, and to convey them via a predetermined path, safely, to some desired position and orientation. The second requirement is for the robot to be able to place the acquired workpiece or assembly with a desired orientation in a position with a specified precision. These two requirements share the need for sensor-controlled manipulation to varying degrees. Tactile sensing may be sufficient for a simple task in which the workpieces are already oriented and require crude end-positioning at the destination, such as in a palletizing operation. Visual sensing may be required when the workpieces are unoriented originally, are in motion, or when the final position is not precisely determined. In some instances, both tactile and visual sensing would be the most effective method, as when packing an unoriented container with workpieces in a desired order. Finally, in automated assembly processes, visual and/or tactile sensing may be required to bring parts together in fitting operations, or for insertion of shafts or bolts into holes. In the latter cases, the precision requirements for a passive accommodation system, together with the completeness of jigging or fixturing, would govern how much sensor control is necessary. The major issues thus are the relative costs of orienting parts, of fixturing to preserve position and orientation, and of high-performance robots (speed and precision), and the trade-off attainable by the use of sensor control to reduce the cost of the other alternatives.

INTRODUCTION

This paper deals with the requirements, the present developments, and the research issues of advanced material-handling systems based on programmable manipulators, commonly designated industrial robots. The domain of interest is the fabrication of batch-produced discrete-part goods, for which programmable systems appear to be best matched, although programmable material-handling methods may also prove useful, or sometimes superior to fixed or hard automation methods, for high-volume mass-production applications. As contrasted with hard automation, advanced programmable automation is characterized by flexibility and adaptability, that is, by the ease with which one can set up for a new or changing product mix in small or variable lot sizes, and by the capability of adaptively responding to variations in workpieces, assemblies, and associated machines. Material handling entails the controlled manipulation of raw materials, workpieces, assemblies, and finished products. It is also an essential part of the inspection and the assembly processes; together, these are


the three major elements of the total batch-production process that are still highly labor-intensive (1)(2). For the purpose of this paper, programmable material-handling applications can be crudely divided into two classes that have important requirements in common. For convenience, these classes are tentatively labelled "predictable sequence material-handling" and "adaptive sensor-controlled material-handling"; they are described in the following sections.

Predictable Sequence Material-Handling Systems

In this class, raw materials, workpieces, assemblies, and finished goods are acquired, transported, repositioned, and oriented from one fixed location to another. The acquisition positions, the trajectory, and the deposition positions are known beforehand, and variations in these parameters are sufficiently small that no "fine tuning" is necessary. Typical examples include:
• The feeding of raw stock from a fixtured position to a stamping machine or to a press.
• The loading and unloading of parts to and from machine tools and furnaces, in which accurate part position and orientation is effected using fixtures or mechanical stops.
• The stacking of cartons on pallets, in which the position and orientation of the acquired cartons and of the pallet to be loaded are predetermined and fixed.
The majority of presently-used limited-sequence pick-and-place robots, and many 5 and 6 degree-of-freedom robots equipped with more sophisticated internal sensors and control systems, are successfully performing this class of operations. Occasionally, external photoelectric or electro-mechanical binary switches are incorporated into the handling systems as safety devices, for interlocking multiple machines, and to select one of several stored programs. To a mild degree, these systems are adaptive, but they do not have provisions for altering the programmed orientations of the end-effector (hand) nor the trajectories of the arm during execution of a previously programmed sequence.
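The defining property of this class, a taught sequence replayed verbatim with no sensing during execution, can be caricatured in a few lines. The program format, waypoints, and gripper actions below are invented for illustration and do not correspond to any particular robot.

```python
# Sketch of a "predictable sequence" pick-and-place cycle: every position is
# taught beforehand and replayed open-loop, with no sensing during execution.
# Coordinates and the program itself are hypothetical.

TAUGHT_PROGRAM = [
    ("move", (120.0, 40.0, 200.0)),   # hover over the fixtured pickup point
    ("move", (120.0, 40.0, 55.0)),    # descend to the workpiece
    ("grip", True),                   # close the gripper
    ("move", (120.0, 40.0, 200.0)),   # lift clear
    ("move", (480.0, 310.0, 200.0)),  # traverse to the pallet
    ("move", (480.0, 310.0, 80.0)),   # descend to deposit height
    ("grip", False),                  # release
    ("move", (480.0, 310.0, 200.0)),  # retract
]

def replay(program):
    """Execute the taught sequence step by step; return the trace of states."""
    trace = []
    position, holding = None, False
    for op, arg in program:
        if op == "move":
            position = arg
        elif op == "grip":
            holding = arg
        trace.append((position, holding))
    return trace

trace = replay(TAUGHT_PROGRAM)
```

Note that nothing in the loop consults the environment; that is precisely why this class of robot requires the workpieces and pallet to be fixtured in advance.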
It is expected that robots for this class of operation will be gradually improved, and their utility extended, by increasing performance factors such as speed and accuracy, and by developing better end-effectors. There is little doubt that these relatively expensive robots will be increasingly used in large numbers for simple repetitive jobs which do not require adaptability. As it becomes more expensive to constrain the associated environmental conditions, one must turn to a more sophisticated class of robot, an adaptive sensor-controlled robot, which is discussed next.

Adaptive Sensor-Controlled Material-Handling Systems

The second class of material-handling systems provides for the acquisition, the

safe transport, and the positioning or presentation of raw materials, workpieces, assemblies, and finished goods under variable and generally not fully predictable conditions. Examples include:
• The acquisition of randomly-oriented parts or assemblies from a fixed or moving conveyor belt, and their transport and deposition with a predetermined orientation and position into a tote box (buffer storage bin).
• The acquisition of randomly-oriented parts, one at a time, from a storage bin, and their repositioning and reorienting in a predetermined manner for presentation to some other manufacturing process, such as inspection, machining, or assembly.
• The acquisition of workpieces (or assemblies), one at a time, from an overhead moving conveyor, and the transporting and positioning of each workpiece on a rack, into a plating bath, a furnace, or a similar processing machine.
These tasks require that the present capabilities of robots be extended so that they perceive and interact with the surrounding environment. In particular, it appears desirable to develop sensor-mediated, computer-controlled robots which emulate human capabilities in identifying and locating parts, and in controlling the manipulation and presentation of these parts in a predetermined manner with a specified precision. Human workers make use of both contact (force, torque, and touch) sensing and noncontact (visual) sensing. Similarly, sensors for programmable automation may be classified into contact sensors (force, torque, and touch sensors) and noncontact sensors (television cameras, diode arrays, optical proximity sensors, and range-imaging sensors). A review of the use of sensors in programmable automation is given elsewhere (3). A potential strategy for control of manipulation is to make use of noncontact sensors for coarse resolution requirements and contact sensors for fine resolution.
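The division of labor just proposed, vision for coarse localization and touch for fine, force-limited acquisition, might be sketched as follows. The camera model, the force readings, and the safe-force limit are all invented for the sketch.

```python
# Illustrative coarse-to-fine acquisition: a simulated camera supplies a crude
# workpiece location, then simulated finger force sensors refine the grasp and
# cap the applied force. All numbers are hypothetical.

SAFE_FORCE = 5.0  # newtons; maximum force the fingers may apply (assumed)

def coarse_locate(image_blob):
    """Vision stage: centroid of a thresholded blob, accurate only crudely."""
    xs = [x for x, y in image_blob]
    ys = [y for x, y in image_blob]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def close_fingers(force_readings):
    """Touch stage: close until a reading reaches the safe-force limit."""
    applied = 0.0
    for f in force_readings:   # successive readings as the fingers close
        applied = f
        if f >= SAFE_FORCE:
            break              # stop closing; do not crush the workpiece
    return min(applied, SAFE_FORCE)

blob = [(10, 10), (12, 10), (10, 14), (12, 14)]          # simulated camera blob
target = coarse_locate(blob)                              # coarse position
grip_force = close_fingers([0.5, 2.0, 4.5, 6.0, 8.0])     # fine, force-limited
```

The point of the split is economic: the camera need only be good enough to bring the hand near the part, and the cheap contact sensors absorb the remaining uncertainty.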
Thus, randomly-oriented workpieces can be located crudely using a television camera, and acquisition of the workpiece can be effected using touch sensors which can control the forces applied by fingers to within safe limits. A number of researchers have made use of electro-optical imaging sensors and force/torque/touch sensors to identify, locate, and manipulate workpieces. A few examples follow:
• Gato (4) built a hand with two fingers, each having 14 outer contact sensors and 4 inner, pressure-sensitive, conductive-rubber sensors. He used the touch information to acquire blocks randomly located on a table and packed them tightly on a pallet.
• Bolles and Paul (5) describe a binary touch sensor consisting of two microswitches, one on either side of each












finger. They used it to determine whether a part was present or absent and to center the hand during automated assembly of a water pump.
• Hill and Sword (6) describe a hand incorporating force, touch, and proximity sensors, attached to a Unimate robot for material-handling applications. The sensors include a wrist force sensor, using compliant elements and potentiometers to sense the relative displacement of the hand, as well as touch and proximity sensors. This hand was used for the orderly packing of water pumps into a tote box: the Unimate moves rapidly to a previously programmed starting position, then successively moves along three orthogonal axes, stopping when threshold forces along these axes are reached, finally releasing the pump in a tightly packed configuration in the tote box.
• Takeda (7) built a touch-sensing device for object recognition. The device consists of two parallel fingers, each with an array of 8-by-10 needles that are free to move in a direction normal to each finger, and a potentiometer that measures the distance between the fingers. As the fingers close, the needles contact the object's contour in a sequence that depends on the shape of the object. Software was developed to use the sensed touch points to recognize simple objects, such as a cone.
• Johnston (8) describes the use of multiple photoelectric proximity sensors to control the positioning of a manipulator. Lateral positioning of a hand was controlled by signals from two sensors to center the hand over the highest point of the object.
• Heginbotham, et al. (9) describe the use of a visual sensor to determine the identity, position, and orientation of flat workpieces from a top-view image obtained by a television camera. The camera and a manipulator were mounted on a turret in the same fashion as lens objectives are mounted on the common turret of a microscope.
After the identity, position, and orientation of each workpiece had been determined, the manipulator rotated into a position coaxial with the original optical axis of the camera lens and acquired the workpiece.
• At Hitachi Central Research Laboratory (10), prismatic blocks moving on a conveyor belt were viewed, one at a time, using a vidicon television camera. A low-resolution image (64-by-64 pixels) was processed to obtain the outline of each block. A number of radius vectors from the center of area of the image to the outline were measured and processed by a minicomputer to determine the identity, position, and orientation of each block. The block was then picked up, transported, and stacked in an orderly fashion by means of a simple suction-cup hand whose motion was controlled by the minicomputer.
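The radius-vector scheme just described reduces to comparing a signature of centroid-to-outline distances against stored model signatures. The sketch below illustrates the idea only; the outline points, sample count, and matching tolerance are invented, not taken from the Hitachi system.

```python
# Sketch of radius-vector shape recognition: measure distances from the area
# centroid of a silhouette to sampled outline points, then match the resulting
# signature against stored model signatures. All data here are hypothetical.

import math

def signature(outline, n=8):
    """Radius vectors from the outline's centroid to n sampled outline points."""
    cx = sum(x for x, y in outline) / len(outline)
    cy = sum(y for x, y in outline) / len(outline)
    step = len(outline) / n
    return [math.hypot(outline[int(i * step)][0] - cx,
                       outline[int(i * step)][1] - cy) for i in range(n)]

def classify(outline, models, tol=0.5):
    """Return the name of the closest stored model, or None if none is close."""
    sig = signature(outline)
    best_name, best_err = None, float("inf")
    for name, model_sig in models.items():
        err = max(abs(a - b) for a, b in zip(sig, model_sig))
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err <= tol else None

# A hypothetical prismatic block seen from above: an 8-point square outline.
square = [(2, 0), (2, 2), (0, 2), (-2, 2), (-2, 0), (-2, -2), (0, -2), (2, -2)]
models = {"square_block": signature(square)}
label = classify(square, models)   # the block matches its own stored model
```

A real implementation would also normalize the starting point of the signature to make recognition independent of the block's rotation on the belt, which is how the orientation estimate falls out of the same measurement.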


• Olsztyn, et al. (11) describe an experimental system which was devised to mount wheels on an automobile. The locations of the studs on the hubs and the stud holes on the wheels were determined using a television camera coupled to a computer, and then a special manipulator mounted the wheel on the hub and engaged the studs in the appropriate holes. Although this experiment demonstrated the feasibility of a useful task, further development is needed to make this system cost-effective.
• Yachida and Tsuji (12) describe a machine-vision system, including a television camera coupled to a minicomputer, which was developed to recognize a variety of industrial parts, such as gasoline-engine parts, when they were viewed one at a time on a conveyor. A resolution of 128-by-128 elements, digitized into 64 levels of gray scale, was used. In lieu of the usual sequence of picture processing, extraction of relevant features, and recognition, the system uses predetermined part models that guide the comparison of the unknown part with stored models, suggesting the features to be examined in sequence and where each feature is located. This procedure reduced the amount of computation required. Further, by showing sample parts and indicating important features via an interactive display, an operator can quickly train the system for new objects; the system generates the new models automatically from the cues given by the operator. The system is said to recognize 20 to 30 complex parts of a gasoline engine. Recognition time and training time were 30 seconds and 7 minutes, respectively.
• Agin and Duda (13) describe a hardware/software system under minicomputer control developed to determine the identity, position, and orientation of each workpiece placed randomly on a table or on a moving conveyor belt. This system, using a Unimate industrial robot, acquires that workpiece and moves it to its destination.
The electro-optical sensors employed include a solid-state 100-by-100 area-array camera and a solid-state 128-by-1 linear-array camera. A workpiece is recognized by using either a method based on measuring the entire library of features ("nearest-reference" classification) or a method based on sequential measurement of the minimum number of features that can best distinguish one workpiece from the others ("decision-tree" classification). Selection of the distinguishing features for the second method is done automatically, by simply showing a prototype to the viewing station of the system. The decision-tree classification method was applied to recognition of different workpieces, such as foundry castings, water-pump parts, and covers of electrical boxes. The time required for the


system to build a new sequential tree after showing examples of objects, and the time for recognition of a given object, is a fraction of a second on a small minicomputer (Digital Equipment Corporation PDP 11/40). There still has not been widespread use of these new techniques in industry. However, solid-state television cameras and microcomputers have been rapidly decreasing in price, and it appears possible at present to provide a useful visual sensing subsystem for approximately $5,000; the remaining hurdle is the cost of generating appropriate picture-processing software and integrating the visual system with existing industrial manipulators.

RESEARCH ISSUES

There are numerous research issues that require serious study and attention in nascent robot technology. The basic issue is to develop a theory and understanding of manipulation sufficiently broad to encompass a wide variety of applications to material handling, assembly, and prosthesis. This is a complex area and has been very slow to develop. More current issues involve applied research aimed at the near-term goal of implementing economically-viable manipulators for industrial use. These latter issues include the choice of alternatives for open-loop or closed-loop control of manipulation, safety considerations, and ease of programming. These issues are discussed in the following sections.

MODELS AND THEORY OF MANIPULATION

Material-handling research and development using industrial robots has been conducted almost entirely by empirical and experimental means. Hardware configurations for robot manipulators and end-effectors crudely emulate simplified versions of the human arm, wrist, and hand, with far fewer degrees of freedom and far less manipulative capability.
It is true that engineering concepts in kinematics, dynamics, servomechanisms, sensors, electronic control, pneumatic, hydraulic, and electrical actuators, and so on, have been cleverly utilized in producing relatively high-performance industrial robots and teleoperators. These machines can exceed the performance of the human arm and hand for some applications. They cannot, however, match the human in dexterity and in adaptability over a wide range of applications. The development of a theory of manipulation is required that can lead to descriptive and analytic models of arm, wrist, and hand that are valid for both human and machine manipulation. With such models, designers can develop robots that are either adapted for more general-purpose use or optimized for peak performance on a restricted class of tasks. It should be noted that important work on subsets of the general problem has been proceeding, primarily in large universities and research laboratories, often directed at the

goal of providing a scientific base for developing good prosthetic limbs. In a recent symposium held in Warsaw (14), a number of papers were presented reporting on the results of research in dynamic and kinematic modeling of manipulators (15)(16), effects of strength and stiffness constraints on the control of manipulators (17), optimization of manipulator motions (18), and others. Of particular importance is the need to develop some insight into the design of general-purpose end-effectors. How many mechanical fingers do we need? How many degrees of freedom? What kind of sensors? How many are needed? Where should they be placed? Should we add a microprocessor to make a "smart" end-effector? Shall we dispense with mechanical fingers altogether and rely on easily-changeable specialized tools? At the very least, a canonical classification of dexterous manipulative operations, based on classes of tasks, is required.

OPEN LOOP AND CLOSED LOOP CONTROL

As previously noted, the majority of present material-handling robots are used in an open-loop control mode in well-structured environments. Objects to be manipulated must generally be carefully positioned and oriented by fixtures or other means. This is often too expensive to implement or causes intolerable delays. The trend in manipulator design has been to continually improve performance by increasing both precision and speed of response as new requirements for manipulation arise, such as those imposed by assembly processes. This trend follows the history of numerically controlled (NC) machines. Unfortunately, increasing the precision and speed of response is very costly.
This trend has led to the following requirements: more rigid structures, thus requiring more powerful drives to accelerate the increased masses; position sensors with increased resolution (12 to 15 bits); faster-acting valves to improve stopping characteristics; more complex servomechanisms (position, velocity, and pressure feedback) to ensure stability; and so on. This expensive approach is characteristic of all open-loop, feed-forward control systems. With the advent of economic visual and tactile servoing, it has become timely to explore an alternative approach, namely a closed-loop feedback robot system in which manipulator trajectories are mediated by real-time contact and noncontact sensory feedback, to achieve comparable speed and precision of performance with far less expensive manipulators. There are thus two contending methods of approach for achieving desired performance: open-loop, internally-controlled precision manipulation operating on highly-constrained (in position and orientation) objects, and closed-loop, externally-controlled manipulation with far fewer constraints on the positions and orientations of objects. Research is required to determine the

applicability of each of these methods to the various classes of tasks.

SAFETY

The size and power of large robots make it mandatory to develop equipment, software, and procedures to ensure that human workers are not hurt, and that expensive machinery and workpieces are not damaged, by any link of the fast-moving manipulator and its load. It is not yet feasible to have constant surveillance by one or more television cameras (with associated computers) to ensure safety in all possible configurations and trajectories; the processing of pictures by scene-analysis techniques is still in its infancy. At present, safety for humans can be partially assured by the use of fences, warning signs, pressure mats, proximity detectors, and so on. For the next generation of adaptively-controlled manipulators, the trajectories of each link and of the load will not be precisely known a priori. It may be possible to predetermine volumes in space which define the outer limits of possible paths for each link and load for any given task during training, and thus, by monitoring these volumes continuously with suitable sensors, avoid collisions (19)(20). This may require a separate microcomputer system devoted to this purpose, independent of the other computer(s) controlling the major functions of the manipulator. Novel approaches would be welcome.

TRAINING AND PROGRAMMING

The training and/or programming of a robot for a new task must be acceptably simple and rapid for the user to ensure acceptance in the factory. Training aids have been explored that include special input devices such as speech input (6), joystick controls that invoke coordinated simultaneous actions by all the manipulation links (6), and special software which automatically generates visual identification programs in a "training by showing" mode (12)(13). A number of robot software programs and languages have been and are being developed (21) that are designed to simplify the training procedure and provide some editing facilities to reduce the time for modifications and debugging.
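What such languages aim at can be suggested with a toy interpreter: motion programs written against named, re-teachable locations, so that a task can be edited or retargeted without re-recording every point. The command syntax and locations below are invented for this sketch and do not reflect any of the languages cited.

```python
# Toy illustration of a symbolic robot program: commands reference named
# locations, so editing one taught position retargets the whole task.
# The syntax and all coordinates are hypothetical.

locations = {                       # "taught" positions, editable by the user
    "conveyor": (300.0, 120.0, 60.0),
    "platform": (520.0, 400.0, 90.0),
}

program = [
    "MOVE conveyor",
    "GRASP",
    "MOVE platform",
    "RELEASE",
]

def run(program, locations):
    """Interpret the symbolic program; return the resolved step sequence."""
    steps = []
    holding = False
    for line in program:
        parts = line.split()
        if parts[0] == "MOVE":
            steps.append(("move", locations[parts[1]]))  # resolve the name
        elif parts[0] == "GRASP":
            holding = True
            steps.append(("grasp", holding))
        elif parts[0] == "RELEASE":
            holding = False
            steps.append(("release", holding))
    return steps

steps = run(program, locations)

# Retarget the task by editing a single taught location; the program text
# itself is unchanged, which is the editing facility the text refers to.
locations["platform"] = (100.0, 50.0, 90.0)
steps_retargeted = run(program, locations)
```

The contrast with the taped, point-by-point programs of first-generation robots is that here the program is data that can be inspected, edited, and shared.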
All of these programs are in a developmental state, and as yet there is no dominant program which more than one organization has adopted. Clearly it would be highly advantageous for all to arrive at some degree of standardization (similar to the use of APT in NC machines), such that the various programs for manipulation being generated could be shared. One inviting option for a good programming/training system is to provide a trained but nonprofessional user with a high-level interactive language with the capability of training a sensor-controlled manipulator system using high-level commands, such as "Pick up the box from the moving conveyor and transfer it to the platform". Obviously, this implies a highly-advanced "intelligent"


master program, able to understand the command, then able to interrogate the trainer to determine, unambiguously, the details implied in the command, and finally to assemble stored subprograms in an appropriate sequence to carry out the task. A more conventional and far simpler option is to develop a high-level language and procedure (with good editing facilities) that will enable a professional programmer to assemble the required program for the task, with an interface designed to permit minor modifications and fine tuning by an unsophisticated user. The latter option will most certainly be the first to be implemented by many. It is quite possible that a mixed strategy employing elements of both options will be attempted, and the programming system will become more "intelligent" with time, as experience is gained in constructing intelligent programs.

REFERENCES

(1) R. H. Anderson, Programmable automation: The future of computers in manufacturing, Datamation, 18, 46 (1972).

(2) D. Nitzan and C. A. Rosen, Programmable automation, IEEE Transactions on Computers, C-25, 1259 (1976).

(3) C. A. Rosen and D. Nitzan, Use of sensors in programmable automation, to be published in Computer (1977).

(4) T. Gato, Compact packaging by robot with tactile sensors, Proc. 2nd Int. Symp. on Industrial Robots, IIT Research Institute, Chicago, Illinois (1972).

(5) R. C. Bolles and R. Paul, The use of sensory feedback in a programmable assembly system, Stanford Artificial Intelligence Project, Memo No. 220, Stanford University, Stanford, California (1973).

(6) C. A. Rosen, et al., Exploratory research in advanced automation, Rpts. 1 through 6, prepared by Stanford Research Institute under National Science Foundation Grant GI38100X (December 1973 to December 1976).

(7) S. Takeda, Study of artificial tactile sensors for shape recognition--algorithm for tactile data input, Proc. 4th Int. Symp. on Industrial Robots, 199-208, Tokyo, Japan (1974).

(8) A. R. Johnston, Proximity sensor technology for manipulator end-effectors, Proc. 2nd Conf. on Remotely Manned Systems, California Institute of Technology, Pasadena, California (1975).

(9) W. B. Heginbotham, et al., Visual feedback applied to programmable assembly machines, Proc. 2nd Int. Symp. on Industrial Robots, IIT Research Institute, Chicago, Illinois, 77-88 (May 1972).

(10) Hitachi hand-eye system, Hitachi Review, 22, No. 9, 362-365.

(11) J. T. Olsztyn, et al., An application of computer vision to a simulated assembly task, Proc. 1st Int. Joint Conf. on Pattern Recognition, 505-513 (1973).

(12) M. Yachida and S. Tsuji, A machine vision for complex industrial parts with learning capacity, Proc. 4th Int. Joint Conf. on Artificial Intelligence, 819-826 (1975).

(13) G. J. Agin and R. O. Duda, SRI vision research for advanced industrial automation, Proc. 2nd USA-Japan Computer Conf., 113-117 (1975).

(14) A. Morecki and K. Kedzior, editors, On the Theory and Practice of Robots and Manipulators, 2nd CISM-IFToMM Symp., PWN-Polish Scientific Publishers, Warsaw, Poland (1976).

(15) A. Liegois, et al., Mathematical models of interconnected mechanical systems, presented at symp. ref. 14 (1976).

(16) B. Shimano and B. Roth, Ranges of motion of manipulators, presented at symp. ref. 14 (1976).

(17) W. J. Book, Characterization of strength and stiffness constraints on manipulator control, presented at symp. ref. 14 (1976).

(18) M. K. Ozgoren, Optimization of manipulator motions, presented at symp. ref. 14 (1976).

(19) T. Lozano-Perez, The design of a mechanical assembly system, Artificial Intelligence Laboratory, M.I.T., Cambridge, Massachusetts, Report AI-TR-397 (1976).

(20) B. Dobrotin and R. Lewis, A practical manipulator system, to be published in IJCAI (1977).