A Computer Vision and Robotic System for an Intelligent Workstation

Kuldip S. Rattan, Alfred J. Scarpelli and Randall E. Johnson
Digital Control and Robotics Laboratory, College of Engineering and Computer Science, Wright State University, Dayton, OH 45435, U.S.A.

This paper describes a computer vision and robotics system for an automated workstation. The system can be taught to find an object, recognize it, and direct a robot arm to move to the object, pick it up and place it in user-taught bins. The vision system recognizes an object in a frame and extracts the position and orientation information of the object, which the robotics system uses to retrieve the object. Direct and inverse kinematic equations are developed for the Mitsubishi Movemaster II RM-501, a small five-degree-of-freedom industrial robot arm. All software is written in "C" and runs under CP/M-68K on a CompuPro MC68008 system. The robotics software is written so as to be independent of the vision system and can easily be used with other computer vision systems or rehosted on other computers. The complete system is designed and implemented in a modular fashion so that a conveyor system may later be added to the workstation.
Keywords: Robotic systems, Vision systems, Automated manufacturing, Direct and inverse kinematics.
Kuldip S. Rattan received his BS degree in electrical engineering from Punjab Engineering College (India) in 1969, and his MS and PhD degrees in electrical engineering from the University of Kentucky in 1972 and 1975, respectively. Dr. Rattan's present research interests are in digital control systems, robotics, CAD/CAM, and microprocessor applications. He joined the Wright State University faculty in 1979 and currently holds the rank of professor of electrical and computer engineering. During the 1988 academic year, he was a visiting professor at Carnegie Mellon University's Robotics Institute. Dr. Rattan is the author of over eighty technical articles and reports. His external funding of over $500,000 includes grants from the National Science Foundation, the Air Force Office of Scientific Research, Wright-Patterson Air Force Base, and Intel Corporation. Dr. Rattan was a faculty research associate at the Wright-Patterson Flight Dynamics Laboratory during the summers of 1980, 1984, 1985, and 1986. He currently serves as a consultant to industrial organizations and the Wright-Patterson Armstrong Aeromedical Research Laboratory. He is the past president of the IEEE Control Systems Society (Dayton Chapter) and is a senior member of IEEE.

Alfred J. Scarpelli is an electronics engineer with the Wright Research and Development Center's Avionics Laboratory at Wright-Patterson Air Force Base, Ohio, working in the area of computer architecture development. He received a BS in Computer Science from the University of Dayton in 1979 and an MS in Computer Engineering from Wright State University in 1987.

1. Introduction

Computer vision is a technology which allows a computer to "see" a scene, understand that scene and recognize the objects within it, as well as determine their locations and orientations. Robotics is a field which has achieved prominence in the manufacturing world since the 1960s. Robots perform human work, both in repetitive labor and in tasks which are dangerous to humans. Highly accurate industrial robots are at work building automobiles, spot welding and binning objects from a conveyor belt, to name just a few applications.
The Wright State Robotics Laboratory has an in-house developed Computer Vision System (CVS) designed for use in the investigation of image processing, pattern recognition and computer vision [1,7]. The hardware consists of a standard black and white video camera, digitizing hardware, a black and white video monitor, and a host processor [1]. The software consists of a set of utilities and routines developed to give the feedback required for control of robotic devices [7]. Image acquisition, enhancement, processing and recognition algorithms are written in the "C" programming language and run under the CP/M-68K operating system [8]. In all, the CVS gives the host processor the ability to recognize and locate specific objects which the system has been trained to identify, and then to control and direct a robot to perform useful tasks. The robot is the Mitsubishi Electric Movemaster II RM-501 [10]. The RM-501 is a small, light-weight, entry-level industrial robot arm. It requires external control to specify body, arm and hand movements, and the function of the hand. External computers can control the operations of the RM-501 through
intelligence commands defined by a robot language. Combining the CVS with the RM-501 robotics system provides the facilities for an intelligent workstation [13]. An image of an object and its surrounding area (called a frame) is sampled by the host processor. The vision system then locates the object within the frame, providing positional information relative to the frame and rotational information which gives the orientation of the object (the angle at which the robot hand should pick it up). The object is then picked up and placed in a bin. The system is expandable for use with a conveyor system: hardware and software have been provided for input of a photoelectric limit switch signal which detects objects on the belt, and for starting and stopping the conveyor motor [13].

2. Computer Vision System Hardware and Software
Fig. 1. Computer vision and robotics system.

The hardware which comprises the CVS is shown in Fig. 1. The video digitizing system resides in its own chassis. Within this system is a TRW TDC-1007 8-bit "flash" Analog-to-Digital (A/D) converter which digitizes the video signal
from the camera, a Telemation TMC 1100 standard television raster-scan camera. The TDC-1016 video Digital-to-Analog (D/A) converter converts a binary image into a video signal that is amplified, combined with composite sync and blanking information, and output to a standard black and white monitor. Within the video digitizing system is 256K bytes of RAM; 240K is used to hold an image with a resolution of 480 lines of 512 8-bit samples per line. The camera and the monitor are directly connected to the video digitizing system [1].

The host processor is the CompuPro 68008 S-100 system, controlled by an 8 MHz Motorola MC68008 microprocessor. The system consists of a CPU card, a serial I/O card for the terminal and printer, a disk controller card that can control up to four disk drives, two 64K RAM cards for system memory, four 64K RAM cards for image workspace, and a custom S-100 interface card for handshaking with the video digitizing system. The video digitizing system's internal memory appears to the MC68008 processor as part of system memory, so accesses to the digitizing memory are done in the same fashion as to memory located within the CompuPro S-100 backplane [1]. Also included in the system is a card which interfaces the host to the RM-501 Drive Unit [13].

The CVS software is an interactive menu-driven program written entirely in "C", with many features and options. These options range from testing the vision system, to automatically learning and recognizing objects, to operations that save, retrieve and modify image information. Defined setups for the system can be stored and later recalled. These setups indicate the type of smoothing, edge enhancement and thresholding to be done; use of the freckle remover; recognition tolerance; and minimum object size for recognition.

The CVS software uses the concept of frames to store image data obtained by the video digitizing system as it samples an image. A frame is an abstraction of an image and can be defined to be any size, limited by the resolution of the system. The largest frame is 480 by 512 pixels (picture elements). Thus a digitized image can be partitioned into smaller frames as desired for further analysis or enhancement.

The CVS software uses image processing techniques to detect objects within a frame.
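The fragment below sketches the simplest of these operations, two-level thresholding of a sampled frame. The 480-by-512 8-bit buffer follows the digitizer description above; the function name, the in-place update and the fixed output values are assumptions of this sketch, not the CVS's actual routine.

    #define ROWS 480                 /* lines per frame (see above) */
    #define COLS 512                 /* 8-bit samples per line      */

    /* Force every pixel to one of two levels around a threshold.
       Illustrative only; the CVS also supports 4- and 8-level
       thresholding, smoothing, edge enhancement and freckle
       removal, which are not shown here. */
    void threshold2(unsigned char frame[ROWS][COLS], unsigned char level)
    {
        int r, c;

        for (r = 0; r < ROWS; r++)
            for (c = 0; c < COLS; c++)
                frame[r][c] = (frame[r][c] >= level) ? 255 : 0;
    }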
It provides an object's location within the frame, as well as its orientation (rotation of its coordinate axes) relative to the start of the frame.

In the recognize or learn modes, the CVS samples, or digitizes, the video image from the camera. At the user's preference, the binary image can then be smoothed, have the edges of objects within the image enhanced, have pixel values thresholded to two, four or eight levels, and have freckles removed. Then the system searches the frame for an object. Once found, characteristics of the object are computed as the object is analyzed. If the object is unknown, the user may name the object, give it an ID number and store it. The system then continues until the entire frame has been searched.

Object characteristics which are computed include its perimeter, its area, first moments (centroid), slope, max X, max Y, min X, min Y, and the number of holes within the object. Perimeter is the sum of all pixel edges that make up the object. Area is the sum of all pixels that make up the object. The first X moment is the summation of the product of the number of pixels in a column and its row; the first Y moment is the summation of the product of the number of pixels in a row and its column. (X, Y) is the object's location from the upper left corner of the display, with the X direction down the display and the Y direction across the display. The major slope is the X and Y coordinates in the long direction of the object; the minor slope is the X and Y coordinates in the short direction. With a robot that has a sliding-gripper style hand, the minor slope is usually the direction along which an object is picked up, since the distance between the grippers is small. From these statistics, a transformation matrix is computed representing the object's position and orientation (rotation) information.
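As an illustration of how the area and centroid statistics can be accumulated in a single pass over a binary frame, consider the sketch below. The coordinate convention (X down the display, Y across) follows the text; the function name, the nonzero-means-object convention and the double-precision centroid are assumptions of this sketch.

    #define ROWS 480
    #define COLS 512

    /* One-pass area and centroid (first moments divided by area)
       for a binary frame, with X running down the display and Y
       across it, as described above. */
    void object_stats(unsigned char img[ROWS][COLS],
                      long *area, double *cx, double *cy)
    {
        long sx = 0, sy = 0, n = 0;
        int x, y;

        for (x = 0; x < ROWS; x++)
            for (y = 0; y < COLS; y++)
                if (img[x][y]) {        /* nonzero = object pixel */
                    sx += x;
                    sy += y;
                    n++;
                }
        *area = n;
        if (n > 0) {
            *cx = (double)sx / (double)n;
            *cy = (double)sy / (double)n;
        }
    }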
3. Mitsubishi Movemaster II RM-501
The main components of the RM-501 are the Robot Unit and the Drive Unit. Options include the Teaching Box and the standard hand. The robot arm operates at variable speeds and has joints allowing five degrees of freedom. The hand is a sliding-gripper type which opens and closes. The RM-501 may be programmed using a hand-held Teaching Box to direct the robot movements, or via an external computer. In either case, the
RM-501 requires external control to specify body, arm and hand movements, and the function of the hand. Additionally, programs may be stored in ROM so the RM-501 can operate stand-alone or in conjunction with an external system controller. The Drive Unit provides the intelligence for the robot. External computers can control the operations of the RM-501 through intelligence commands defined by a robot language.
Table 2. Parameters and operating angles

Parameter    Joint                  Operating angle/pulse
a1           Waist                  0.025°
a2           Shoulder               0.025°
a3           Elbow                  0.025°
a4, a5       Wrist pitch & roll     0.075°
3.1. Robot Unit
The Robot Unit has five degrees of freedom consisting of the waist, shoulder, elbow, wrist pitch and wrist roll. The arm is a vertical multi-joint type: only the waist rotates horizontally; the shoulder, elbow and wrist move up and down only. The Robot Unit, including the base, weighs approximately 27 kg. The joints are driven by DC servomotors with a maximum speed of 400 mm/s. Table 1 indicates the range of movement for each joint.

Table 1. RM-501 range of movement

Waist rotation        300°
Shoulder rotation     130°
Elbow rotation        90°
Wrist pitch           ±90°
Wrist roll            ±180°

The standard hand is a sliding-gripper type with fingers that slide open and closed. Gripper pressure can be user specified to a maximum of 4 kg. The maximum handling weight of the Robot Unit is 1.2 kg, which includes the weight of the hand (0.7 kg).

3.2. Drive Unit
The RM-501 Drive Unit provides the intelligence and control for the Robot Unit. Externally, the Drive Unit can interface to other computers for its instructions via serial RS-232-C or parallel Centronics interfaces. Additionally, the Drive Unit can operate in a stand-alone mode where all instructions are received from an internal ROM; a ROM programmer is provided as part of the Drive Unit. Further, the Drive Unit may send and
receive control information from external I/O terminals located on the rear of the unit.

Embedded within the Drive Unit is a robot language consisting of intelligence commands that control the Robot Unit. There are 38 intelligence commands, broken down into operation control, I/O control and program control. Examples of operation control include nesting the robot to its mechanical origin, moving a joint or joints, opening and closing the gripper, and position setting. The Drive Unit computes all arm control information for each instruction; it handles arm velocity, acceleration, deceleration and movement trajectories for the user. The move immediate and position set instructions require a parameter for each joint angle, and up to four joints may be moved simultaneously with one instruction. The relationship between the parameters and the operating angles is shown in Table 2: the number of steps specified by the parameter, multiplied by the operating angle per pulse, gives the angular movement of that joint (a small conversion sketch is given at the end of this section).

The Teaching Box allows a user to manually move all joints of the Robot Unit and store positions as desired. Stored points may be moved to individually, sequentially or directly from one point to some other stored point.

The Drive Unit is interfaced to the CompuPro host processor via its Centronics parallel interface. An S-100 board was built to accept 8-bit data over the S-100 bus and pass that data on to the Drive Unit [13,14]. This board also accepts busy and acknowledge signals back from the Drive Unit. Additional functions provided for future use, but not currently used, are an output signal to start and stop a conveyor belt motor, and an input signal to the status register to determine if an object has tripped a photoelectric limit switch.
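Returning to the parameter-angle relationship of Table 2, the sketch below converts a desired angular movement into the pulse-count parameter for a joint. For example, a 90° waist move corresponds to 90/0.025 = 3600 pulses. The per-pulse values come from Table 2; the function name and the rounding convention are assumptions of this sketch.

    /* Degrees of movement per pulse, from Table 2:
       waist, shoulder, elbow, wrist pitch, wrist roll. */
    static double deg_per_pulse[5] = { 0.025, 0.025, 0.025, 0.075, 0.075 };

    /* Pulse-count parameter for a desired angular movement of one
       joint (joint index 0..4), rounded to the nearest pulse. */
    long angle_to_pulses(int joint, double degrees)
    {
        double p = degrees / deg_per_pulse[joint];

        return (long)(p + (p >= 0.0 ? 0.5 : -0.5));
    }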
4. RM-501 Direct and Inverse Kinematics
Fig. 2. Frame assignments for the RM-501 manipulator.

To compute the direct or forward kinematics for the RM-501 robot, frames must be established at each joint (Fig. 2). Since the choice is arbitrary, the base frame {x0, y0, z0} is established, for simplification, at the point where the waist rotation axis and the shoulder rotation axis intersect. The waist frame {x1, y1, z1} and the shoulder frame {x2, y2, z2} are assigned to this same point, with the z-axis in each case chosen so that the joint rotates about it. The elbow frame {x3, y3, z3}, wrist pitch frame {x4, y4, z4} and wrist roll frame {x5, y5, z5} are chosen similarly, always making sure that each coordinate system adheres to the right-hand coordinate rule. Table 3 lists the link parameters for the RM-501 based on the frames chosen for each link [3,5,9,11]. The general form of the transformation matrix which relates the position and orientation of a point in frame {i} to the coordinate system of frame {i-1} [3] is used to obtain a single transformation which relates the frame of the end-effector {5} to the frame of the base {0} (written ^0_5 T, where ^X_Y T denotes frame {Y} relative to frame {X}):

             | r11  r12  r13  Px |
    ^0_5 T = | r21  r22  r23  Py |        (1)
             | r31  r32  r33  Pz |
             |  0    0    0    1 |

where

    r11 = C1 C234 C5 - S1 S5,    r12 = -C1 C234 S5 - S1 C5,    r13 = -C1 S234,
    r21 = S1 C234 C5 + C1 S5,    r22 = -S1 C234 S5 + C1 C5,    r23 = -S1 S234,
    r31 = S234 C5,               r32 = -S234 S5,               r33 = C234,
    Px = C1 (C23 L3 + C2 L2),
    Py = S1 (C23 L3 + C2 L2),
    Pz = S23 L3 + S2 L2.
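A routine implementing the twelve equations of (1) might look like the sketch below, which fills a 3-by-4 [R | P] matrix from the five joint angles. The equations are the paper's; the function name, the argument layout and the use of radians are assumptions of this sketch.

    #include <math.h>

    /* Direct kinematics of (1): joint angles th[0..4] (radians)
       and link lengths L2, L3 in; the wrist frame relative to the
       base out, as a 3x4 matrix T = [R | P]. */
    void kin(double th[5], double L2, double L3, double T[3][4])
    {
        double c1 = cos(th[0]),          s1 = sin(th[0]);
        double c2 = cos(th[1]),          s2 = sin(th[1]);
        double c23 = cos(th[1] + th[2]), s23 = sin(th[1] + th[2]);
        double c234 = cos(th[1] + th[2] + th[3]);
        double s234 = sin(th[1] + th[2] + th[3]);
        double c5 = cos(th[4]),          s5 = sin(th[4]);

        T[0][0] =  c1 * c234 * c5 - s1 * s5;   /* r11 */
        T[1][0] =  s1 * c234 * c5 + c1 * s5;   /* r21 */
        T[2][0] =  s234 * c5;                  /* r31 */
        T[0][1] = -c1 * c234 * s5 - s1 * c5;   /* r12 */
        T[1][1] = -s1 * c234 * s5 + c1 * c5;   /* r22 */
        T[2][1] = -s234 * s5;                  /* r32 */
        T[0][2] = -c1 * s234;                  /* r13 */
        T[1][2] = -s1 * s234;                  /* r23 */
        T[2][2] =  c234;                       /* r33 */
        T[0][3] =  c1 * (c23 * L3 + c2 * L2);  /* Px  */
        T[1][3] =  s1 * (c23 * L3 + c2 * L2);  /* Py  */
        T[2][3] =  s23 * L3 + s2 * L2;         /* Pz  */
    }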
Table 3. RM-501 link parameters

 i     α(i-1)     a(i-1)     d(i)     θ(i)
 1       0°         0         0       θ1
 2      90°         0         0       θ2
 3       0°         L2        0       θ3
 4       0°         L3        0       θ4
 5     -90°         0         0       θ5

In these equations, L2 and L3 are the lengths of links 2 and 3, Ci and Si abbreviate cos θi and sin θi, and the subscripted sums use the formulas

    CAB = cos(A + B) = cos A cos B - sin A sin B,
    SAB = sin(A + B) = cos A sin B + sin A cos B.

The inverse kinematic problem is that of finding all possible sets of joint angles which will place the robot arm at a desired position and orientation, given the desired wrist frame relative to the base frame, ^B_W T. Some or all of the solutions may not be physically possible for the robot to attain, but for those positions and orientations within the valid workspace of the robot, at least one solution will exist. Using an algebraic solution technique on the direct kinematic equations, the following equations solve for the five joint angles of the RM-501 (θ1, θ2, θ3, θ4 and θ5) [3].

    θ1 = arctan2(Py, Px).        (2)

For θ1, the two-argument arc tangent function is used so that the correct quadrant in which the angle lies is computed.

    θ3 = arctan2(±S3, C3),       (3)

where

    C3 = (Px² + Py² + Pz² - L2² - L3²) / (2 L2 L3)

and, since sin²x + cos²x = 1, we can solve for S3:

    S3 = ±√(1 - C3²).

As seen by (3), there are two possible solutions for θ3.

    θ2 = arctan2(S2, C2),        (4)

where

    S2 = [Pz(C3 L3 + L2) - S3 L3 (C1 Px + S1 Py)] / [(C1 Px + S1 Py)² + Pz²].

As seen by (4), there are two possible solutions for θ2.

    θ4 = arctan2(S4, C4),        (5)

where

    S4 = -C23 [C1 r13 + S1 r23] - S23 r33,
    C4 = -S23 [C1 r13 + S1 r23] + C23 r33.

A second solution may be obtained by adding 90 degrees to the first solution, which amounts to flipping the wrist. When picking up an object, the wrist must always pitch down towards the workstation.

    θ5 = arctan2(S5, C5),        (6)

where

    S5 = C1 r21 - S1 r11,
    C5 = C1 r22 - S1 r12.

A second solution may be obtained by adding 180 degrees to the first solution, since the orientation of the gripper is the same in either case: the sliding gripper looks the same at 0 degrees as it does at 180 degrees.

The total number of potential solutions is 16. θ1 has only one solution, since the arm is a vertical-joint type and the waist can only be at one angle to pick up an object. The elbow, θ3, has two possible solutions. The shoulder, θ2, has two solutions for each θ3. The wrist pitch, θ4, has two solutions for each θ2, and finally the wrist roll, θ5, has two solutions for each θ4, making a total of 16 solutions.

5. Solving for a Goal Frame

Given a goal frame {G} attached to an object to be picked up by the robot hand, the transformation matrices make it fairly easy to solve for the position and orientation of the object. The goal frame, or desired tool frame {T}, is where the hand or tool is to move so that {T} equals {G}. This point, and the object's orientation (indicated by the approach and sliding vectors), are known relative to the station frame from the vision system, yielding ^S_T T. The station frame relative to the base frame, ^B_S T, is known beforehand from setup and physical measurement.
Multiplying ^B_S T and ^S_T T together gives the tool frame relative to the base frame, ^B_T T. Now subtract the hand length times the approach vector of ^B_T T from the position vector of ^B_T T in order to find the required position of the wrist. The effect is to subtract the length of the hand in the approach direction to get the true wrist location. The resulting matrix is the wrist frame relative to the base frame, ^B_W T. The inverse kinematic equations are now used to solve for the joint angles that will place the wrist at the required location and in the correct orientation.
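The step just described can be sketched directly in terms of homogeneous transforms: copy ^B_T T and back its position vector off along the approach vector by the hand length. Reading the approach vector from the third rotation column, and the function name, are assumptions of this sketch.

    /* Back the tool frame off along its approach vector by the
       hand length to obtain the wrist frame relative to the base.
       The orientation is unchanged; only the position column
       (fourth column) moves. */
    void tool_to_wrist(double bTt[4][4], double hand_len, double bTw[4][4])
    {
        int i, j;

        for (i = 0; i < 4; i++)
            for (j = 0; j < 4; j++)
                bTw[i][j] = bTt[i][j];
        for (i = 0; i < 3; i++)          /* P = P - hand_len * a */
            bTw[i][3] = bTt[i][3] - hand_len * bTt[i][2];
    }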
6. Robotics Software

The robotics software required to implement the direct and inverse kinematics of the RM-501 consists of the kinematic routines themselves plus additional floating-point support software, I/O routines, trigonometric functions, RM-501 command software, and hardware drivers. All software is written in the "C" programming language and runs under the CP/M-68K operating system on the CompuPro 68008 system.

6.1. Kinematics Routines

The kinematic software consists of four distinct modules which perform the direct and inverse kinematics. The routines "WHERE" and "KIN" deal with the direct kinematics. "WHERE" computes the location of the tool frame relative to the station frame; it may be used to locate the exact position of the end-effector. Inputs to this routine are the five current joint angles of the RM-501, the station frame relative to the base frame, and the tool frame relative to the wrist frame. "WHERE" computes

    ^S_T T = (^B_S T)^-1 · ^B_W T · ^W_T T.        (7)

To compute this matrix, "WHERE" invokes the routine "KIN". "KIN" computes the 12 RM-501 direct kinematic equations (1). Input to "KIN" is the five current joint angles; output from "KIN" is the wrist frame relative to the base frame, in other words, the position and orientation of the wrist.

The routine "SOLVE" solves for the robot arm joint angles given the tool frame relative to the station frame. In other words, given the desired location of the end-effector in relation to the station frame, "SOLVE" computes the five joint angles that the arm must be moved to in order to achieve the desired position and orientation for the end-effector. Inputs to "SOLVE" include the tool frame relative to the station frame (destination position and orientation), the five current joint angles of the arm, and the station frame relative to the base frame. Since multiple solutions may exist for the inverse kinematic equations, "SOLVE" returns two solutions: the set of valid angles closest to the current set of joint angles, and the next nearest valid set. Additionally, a flag is returned to indicate whether any solution is valid. A solution is considered valid if the set of joint angles is within the physical workspace, or capabilities, of the RM-501. "INVKIN" takes as input the wrist frame relative to the base frame and the five current joint angles. It implements (2)-(6) to produce, if achievable, the nearest joint angle solution and the next nearest solution.
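A fragment in the spirit of "INVKIN" is sketched below for the three position equations (2)-(4), producing one member of the elbow-up/elbow-down solution pair. The S2 expression follows (4); the C2 expression is the standard two-link companion to it, reconstructed here since the paper does not print it, and the names, radian units and omitted workspace checks are assumptions of this sketch.

    #include <math.h>

    /* Joint angles theta1..theta3 from the wrist position
       (Px, Py, Pz) per (2)-(4). elbow_up selects the sign of S3
       in (3); validity checks against the joint ranges of Table 1
       (and against |C3| <= 1) are omitted for brevity. */
    void invkin_pos(double Px, double Py, double Pz,
                    double L2, double L3, int elbow_up,
                    double *t1, double *t2, double *t3)
    {
        double c3 = (Px*Px + Py*Py + Pz*Pz - L2*L2 - L3*L3)
                    / (2.0 * L2 * L3);
        double s3 = sqrt(1.0 - c3 * c3);        /* the +/- in (3) */
        double c1, s1, r, den, s2, c2;

        if (!elbow_up)
            s3 = -s3;
        *t1 = atan2(Py, Px);                    /* (2), one solution */
        *t3 = atan2(s3, c3);                    /* (3)               */

        c1 = cos(*t1);  s1 = sin(*t1);
        r   = c1 * Px + s1 * Py;                /* horizontal reach  */
        den = r * r + Pz * Pz;
        s2 = (Pz * (c3 * L3 + L2) - s3 * L3 * r) / den;   /* (4) */
        c2 = (r * (c3 * L3 + L2) + s3 * L3 * Pz) / den;
        *t2 = atan2(s2, c2);
    }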
6.2. Trigonometric Functions

The RM-501 kinematic equations require the sine, cosine, arc tangent and square root functions to solve for the position and orientation matrices and joint angles. The "C" library used did not provide these functions, so they were developed using the floating-point format. The trigonometric functions are implemented using a table look-up scheme for execution speed, with linear interpolation for precision [2].
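A minimal sketch of such a scheme, for sine only, is shown below. Here the table is filled once using the standard library sin() for brevity, whereas the original system built its own tables in its floating-point format; the one-degree spacing, the table size and the function names are assumptions of this sketch.

    #include <math.h>

    #define PI    3.14159265358979
    #define TBL_N 361                /* one entry per degree, 0..360 */

    static double sin_tbl[TBL_N];

    void trig_init(void)             /* fill the look-up table once */
    {
        int i;

        for (i = 0; i < TBL_N; i++)
            sin_tbl[i] = sin((double)i * PI / 180.0);
    }

    /* Table look-up with linear interpolation between adjacent
       entries; deg is assumed to lie in [0, 360). */
    double fast_sin(double deg)
    {
        int i = (int)deg;
        double frac = deg - (double)i;

        return sin_tbl[i] + frac * (sin_tbl[i + 1] - sin_tbl[i]);
    }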
6.3. RM-501 Intelligence Commands

The RM-501 Intelligence Commands software provides the interface to the RM-501 Drive Unit. Seventeen of the RM-501 intelligence commands, those which are practical for use by an external computer, are supported. Each command is represented by a function which can easily be called to make the robot perform as desired. The commands supported are: nest, home, origin, move immediate, move, increment, decrement, position set, here, position clear, gripper pressure set, gripper open, gripper close, grip flag, speed, time and reset. Execution of an intelligence command is as easy as calling the function with any required arguments. For example, to return
the arm to its mechanical origin, simply call "NEST", or to open the gripper, call "GRIP OPEN".
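In this style, each wrapper formats a robot-language command and hands it to the S-100 interface driver of Section 6.4. The sketch below illustrates the idea; send_cmd() and the two-letter command mnemonics shown are assumptions of this sketch, not the documented RM-501 command set.

    #include <stdio.h>

    extern void send_cmd(char *cmd);    /* S-100 driver, Section 6.4 */

    void nest(void)      { send_cmd("NT"); }  /* assumed mnemonics:  */
    void grip_open(void) { send_cmd("GO"); }  /* nest, gripper open  */

    void speed(int s)                   /* set arm speed */
    {
        char buf[16];

        sprintf(buf, "SP %d", s);
        send_cmd(buf);
    }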
6.4. S-100 Interface Driver

The RM-501 Intelligence Commands software uses the S-100/Drive Unit interface software to communicate its instructions to the Drive Unit. This software comprises five functions to control the card in the CompuPro chassis that interfaces the S-100 bus to the Centronics parallel port of the RM-501 Drive Unit.
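The essence of such a driver is the busy/acknowledge handshake on each byte, matching the busy and acknowledge signals the interface board accepts from the Drive Unit. The sketch below illustrates it; the port addresses, status bits, inp()/outp() port-access helpers and the carriage-return terminator are all assumptions of this sketch, not the actual card's register map.

    #define DATA_PORT   0xC0      /* assumed card addresses   */
    #define STATUS_PORT 0xC1
    #define BUSY        0x01      /* assumed status-bit masks */
    #define ACK         0x02

    extern unsigned char inp(int port);          /* port access,  */
    extern void outp(int port, unsigned char v); /* assumed here  */

    void send_byte(unsigned char b)
    {
        while (inp(STATUS_PORT) & BUSY)
            ;                            /* wait until not busy   */
        outp(DATA_PORT, b);              /* present the data byte */
        while (!(inp(STATUS_PORT) & ACK))
            ;                            /* wait for acknowledge  */
    }

    void send_cmd(char *cmd)             /* one robot-language line */
    {
        while (*cmd)
            send_byte((unsigned char)*cmd++);
        send_byte('\r');                 /* assumed terminator */
    }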
7. System Operation

The following scenario illustrates how an object is found by the CVS and picked up by the robotics system. The system is first initialized and the robot is hand-taught the locations of the bins with the Teaching Box.

Prior to object recognition, an object must be taught to the CVS. The station frame in the camera's view is first sampled. Based on user options, the CVS applies smoothing, edge enhancement, thresholding and freckle removal to the sampled image. The frame is then searched for objects. The search begins at the origin of the station frame, which is the upper left-hand corner of the monitor at pixel location (0, 0), and is done in raster-scan fashion, row by row, always starting at the first column of each row. When an object is found, key parameters of the object are determined and the object is presented to the user on the display. The user is asked to either store or ignore the object. If the user decides to store it, a name and ID are given for the object, and the user must enter the height of the object in millimeters; the height is the location on the object where the grippers grab it. The height must be hand-entered since the CVS is only a two-dimensional system and does not have the capability to measure height. Also, the user must enter the bin number in which the object is to be stored. The station frame may be further searched for more objects, or the user may exit to the main menu.

After teaching the CVS an object, the CVS is run to find and recognize objects. Once again the station frame is sampled and the image smoothed, edge-enhanced, thresholded and freckle-removed, all based on the user setup. When an object is
found, the CVS computes the statistics on the object, which include perimeter, area, first moments (centroid), slope, max X, max Y, min X and min Y (all quantities in pixels), and the number of holes. It then presents the object to the user by highlighting it on the display.

It is the tool frame relative to the station frame transformation matrix that provides the interface between the CVS and the robotics software. The CVS software provides the (x, y) location of a recognized object in centimeters from the upper left-hand corner of the station frame, viewed as the upper left-hand corner of the monitor. The Px coordinate is the first X moment and the Py coordinate is the first Y moment. The Pz coordinate was determined when the object was taught to the CVS. The sliding angle is obtained from the X and Y values of the minor slope, since this is the short side of the object and can fit between the grippers. The minor slope is the rise over the run and is presented as two pixel values, rise and run, to the robotics system. The tool frame relative to the station frame, ^S_T T, is computed from the positional information (Px, Py, Pz) and the gripper sliding angle θ; the approach is in the -z direction. All pieces of information are now known: ^S_T T, ^B_S T, ^B_W T and ^W_T T. "SOLVE" can be called to compute the RM-501 inverse kinematics to find the valid sets of joint angles which will put the wrist over the object. The arm can now be moved to the object, and the object picked up and binned.
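To make the hand-off concrete, the sketch below assembles ^S_T T from the CVS outputs: the position (Px, Py, Pz) and the minor-slope rise and run. The approach along -z and the sliding direction in the station plane follow the text; the exact column layout (a right-handed frame with the approach as the third rotation column) and the function name are assumptions of this sketch.

    #include <math.h>

    /* Tool frame relative to the station frame from the CVS
       outputs. The third rotation column is the approach vector
       (0, 0, -1); the first two columns place the gripper sliding
       direction at theta = atan2(rise, run) in the station plane,
       keeping the frame right-handed. */
    void build_tool_frame(double Px, double Py, double Pz,
                          double rise, double run, double T[4][4])
    {
        double theta = atan2(rise, run);   /* gripper sliding angle */
        double c = cos(theta), s = sin(theta);

        T[0][0] = c;   T[0][1] =  s;   T[0][2] =  0.0;  T[0][3] = Px;
        T[1][0] = s;   T[1][1] = -c;   T[1][2] =  0.0;  T[1][3] = Py;
        T[2][0] = 0.0; T[2][1] =  0.0; T[2][2] = -1.0;  T[2][3] = Pz;
        T[3][0] = 0.0; T[3][1] =  0.0; T[3][2] =  0.0;  T[3][3] = 1.0;
    }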
8. Conclusion

The Wright State Computer Vision and Robotics System, utilizing the Mitsubishi RM-501 robot, provides a nearly automated workstation. The CVS identifies objects and presents their location and orientation to the RM-501, and the robot picks up the objects and places them in pre-taught bins. Extending this system to a totally automated one would involve adding a conveyor belt system to transport objects into the station frame for manipulation. Manipulation could involve binning as described above, or removal of an object for further processing, such as on a lathe. The object could then be returned to the conveyor for other work or packaging down the line.
The complete system as it exists today is designed for such future work. The hardware is designed to accommodate the addition of a conveyor system, with functions to start and stop a conveyor motor and an input designed to accept the output of a limit switch. It is easy to interface the robotics software with another vision system, as its interfaces are clearly defined: the only information required from a vision system is the position and orientation matrix of the object to be manipulated. The robotics software is written in "C" and can easily be transported to another computer system. In conclusion, the Wright State University Computer Vision and Robotics System has evolved over several years into the current successful system, which has the capability to recognize an object, move a robot arm to the object, pick it up and bin it. The complete system has been built in a modular fashion to make future enhancements, uses and rehosting a minimal task.
References

[1] M.J. Bakan, A. Cotterman, K. Howell and K. Rattan, "Wright State computer vision system", Wright State University, Dayton, OH, 1985.
[2] W.J. Cody, Jr. and W. Waite, Software Manual for the Elementary Functions, Prentice-Hall, Englewood Cliffs, NJ, 1980.
[3] J.J. Craig, Introduction to Robotics: Mechanics & Control, Addison-Wesley, Reading, MA, 1986.
[4] C.D. Crowell, "Floating-point arithmetic with the TMS32020", Application Report SPRA011, Texas Instruments, Inc., Dallas, TX, 1985.
[5] K.S. Fu, R.C. Gonzalez and C.S.G. Lee, Robotics: Control, Sensing, Vision, and Intelligence, McGraw-Hill, New York, 1987.
[6] V.C. Hamacher, Z.G. Vranesic and S.G. Zaky, Computer Organization, 2nd edn., McGraw-Hill, New York, 1984.
[7] R.E. Johnson, "Computer vision system for an automated workstation", Wright State University, Dayton, OH, 1987.
[8] B.W. Kernighan and D.M. Ritchie, The C Programming Language, Prentice-Hall, Englewood Cliffs, NJ, 1978.
[9] C.S.G. Lee, "Robot arm kinematics, dynamics, and control", IEEE Computer, December 1982, pp. 62-80.
[10] "Movemaster II RM-501 model instruction manual", Mitsubishi Electric Corporation, Tokyo, August 1984.
[11] R.P. Paul, Robot Manipulators: Mathematics, Programming and Control, MIT Press, Cambridge, MA, 1981.
[12] C.F. Ruoff, "Fast trig functions for robot control", Robotics Age, November 1981, pp. 12-20.
[13] A.J. Scarpelli, "An automated computer vision and robotics system for an intelligent workstation", Wright State University, Dayton, OH, 1987.
[14] W.E. Snyder, Industrial Robots: Computer Interfacing and Control, Prentice-Hall, Englewood Cliffs, NJ, 1985.
[15] TMS32020 User's Guide, Texas Instruments, Inc., Dallas, TX / Prentice-Hall, Englewood Cliffs, NJ, 1985.