Vision-guided fixtureless assembly of automotive components


Robotics and Computer Integrated Manufacturing 19 (2003) 79–87

Gary M. Bone*, David Capson

Department of Mechanical Engineering, McMaster Manufacturing Research Institute, McMaster University, Hamilton, Ont., Canada L8S 4L7
Department of Electrical and Computer Engineering, McMaster Manufacturing Research Institute, McMaster University, Hamilton, Ont., Canada L8S 4L7

*Corresponding author. E-mail address: [email protected] (G.M. Bone).

Abstract

Assembly operations in many industries make extensive use of fixtures that are costly and inflexible. The goal of "robotic fixtureless assembly" (RFA) is to replace these fixtures with sensor-guided robots. In this paper, the development of a vision-guided RFA workcell for automotive components is described. Each robot is equipped with a multiple degree-of-freedom programmable gripper, allowing it to hold a wide range of part shapes without tool changing. A 2D computer vision system is used to achieve part pickup that is robust to positioning errors. A novel 3D computer vision system is used to align the parts prior to joining them. The actions of the workcell devices are coordinated using a flexible distributed object-oriented approach. Experimental results are presented for the RFA of four automotive body components. © 2003 Elsevier Science Ltd. All rights reserved.

Keywords: Automated assembly; Automotive assembly; Computer vision; CORBA; Fixtureless assembly; Pose measurement; Programmable fixture; Servo gripper; 3D vision; Universal gripper; Vision-guided robotics

1. Introduction

Assembly operations in many industries make extensive use of dedicated fixtures to hold and align parts before they are joined together. These fixtures are part specific, and therefore must be modified or replaced when product design changes are introduced. The cost of redesigning, manufacturing and installing these fixtures is substantial (e.g. on the order of $100 million/plant/year for automotive manufacturers). The goal of "robotic fixtureless assembly" (RFA) is to eliminate the use of dedicated fixtures. This involves replacing the fixtures with sensor-guided robots equipped with programmable grippers. Reconfiguration for new products then involves only a software change (as opposed to changing the fixturing hardware). This would allow rapid product changeover in response to customer demand. Several products could also be made using a single RFA workcell, saving the costs associated with multiple installations. Our interest is limited to the RFA of rigid parts.

The RFA concept was first introduced in the literature by Hoska in 1988 [1]. He described the advantages of RFA and some of the technical challenges involved, and discussed potential solutions. Ro et al. [2] presented an approach for finding optimal kinematic postures for two robots performing RFA. They used velocity and force ellipsoids to develop a suitable performance criterion. Their algorithm was demonstrated for two 2D robots using computer simulations. Mills and his group have studied the control issues involved in the RFA of flexible parts [3,4]. In [3], they developed a dynamic model of the parts using finite element analysis and Guyan reduction. Combining this with a dynamic model of the two robots allowed them to investigate several control methods. Simulation results revealed that standard control methods could achieve stable part mating. In [4], they proposed a control algorithm which does not require measurements of the part deflections. Successful experimental mating control results are presented for a pair of flexible, fender-like parts. Plut and Bone [5,6] presented a grasp planning strategy for RFA. This strategy produces grasps which immobilize objects kinematically, requiring minimal friction or clamping forces. The grasping points are also planned such that initial positioning errors are corrected by the grasping action. In experiments performed on two automotive parts, the standard deviation of the part location was reduced from 0.43 mm (before grasping) to 0.01 mm (after grasping). Choi [7] presented dynamic models of two robots performing a



fixtureless assembly task which included uncertainty. He developed a robust fuzzy controller for this problem and demonstrated its performance using numerical simulations. Walczyk et al. [8] described a method for the fixtureless assembly of flat or nearly flat sheet metal parts. They employed CNC-machined holes for part alignment. This approach could not be easily applied to the complex-shaped parts used in our work. Stavnitzky and Capson [9] developed a multiple-camera visual servo system that can track and align objects in 3D using wire-frame models. Independent object pose estimates from each camera are computationally fused to provide tracking that is stable and robust to occlusion. Experiments are presented demonstrating the successful alignment of two sheet metal automotive body parts.

After spending several years developing the required technologies, we have completed an RFA workcell and have successfully demonstrated the vision-guided RFA of four automotive body parts. This paper describes the component technologies employed in this workcell and the extensible approach we used to integrate them. The results of real-time assembly experiments are then presented. The advantages and limitations of the workcell are discussed to conclude the paper.

2. Workcell overview

The workcell has been designed to be flexible and easily extensible. It currently consists of a PUMA-762 robot, a GMF-A1 robot, two programmable grippers, two 2D vision systems, one 3D vision system and four PCs. A distributed object-oriented control approach is used to coordinate the actions of the workcell devices in an efficient and flexible fashion. Details of this approach are presented in Section 6. The assembly sequence begins with the four parts roughly placed on a worktable. Each part is located prior to being picked up by using a gripper-mounted video camera and a feature-based 2D vision algorithm. Our part locating methodology is described in Section 4. The programmable grippers consist of a three-finger, nine-degree-of-freedom (DOF) device and a three-finger, four-DOF device. These are described in Section 3. Following pickup, the parts are successively aligned and mated by the robots using feedback from a novel 3D vision system. In this way, positioning errors due to the robots, grippers or parts are compensated for. The 3D pose measurement and correction methods are described in Section 5.

3. Programmable grippers

Traditional fixtures must be modified or replaced when the products being manufactured change shape. If suitable programmable grippers were used instead, only minor software changes would be required for product changeover. In industry today, parallel-jaw grippers are the norm. They are often as inflexible as fixtures, so they are not a suitable choice for fixtureless assembly. At the other end of the spectrum are the dexterous hands that have been developed in many laboratories. While these devices are highly reconfigurable, they lack the robustness needed for the manufacturing environment. Fixtureless assembly requires a gripper which is highly reconfigurable while still being mechanically robust. We have developed two such grippers.

The first design is shown in Fig. 1 [10,11]. This design features three fingers. Each fingertip contains a V-shaped circumferential groove (VCG) which provides a kinematic constraint when applied to the edge of the automotive body parts. Finger motion in X–Y–Z is produced by a hybrid parallel–serial mechanism driven by pneumatic servo actuators. This gives the gripper a total of nine computer-controlled DOF. When the fingers reach their target positions the actuators can be locked by pneumatic-powered brakes. This produces a mechanically rigid truss structure. The gripper has a workspace of 400 mm diameter × 75 mm deep. This large workspace and nine DOF make it capable of grasping a very wide range of parts.

Fig. 1. Nine-DOF programmable gripper.

An alternate philosophy was used when designing the second programmable gripper [12]. With the first design, high reconfigurability was the primary goal. For the second design, our goal was minimum complexity. The objective was to produce a design with the fewest DOF required to grasp a given set of parts. The range of the actuator strokes was also minimized. This in turn reduced the size and weight of the gripper, which is desirable. The second design is shown in Fig. 2. The set of parts for which the gripper was designed is shown in Fig. 5. Three fingers with VCGs are used as before. The first finger has two DOF, Y and Z. The motion range is 150 mm for both. Fingers two and three each have a single DOF with a 210 mm range in the X direction. Computer-controlled DC motor-driven lead screws are used with each DOF. These are heavier than the


pneumatic servos used with the first design, but much more accurate (0.1 mm vs. 0.4 mm). In spite of this, the gripper has a mass of 12 kg, which is less than half the 25 kg mass of the first design.

Fig. 2. Four-DOF programmable gripper.

4. Part locating for pickup using 2D vision

In industry, the parts would likely be delivered to the workcell by a pallet conveyor. We approximate this situation by roughly placing the parts on a flat worktable. If for a moment it is assumed that the part has been perfectly placed, then it is located in what we term its pickup location. A gripper has a corresponding pickup location for picking up this perfectly placed part. These locations can be easily taught off-line a priori. Of course, the part will never be located at its pickup location in practice, so the pickup location of the gripper must be corrected based on the actual position of the part. It is assumed that each part will be within the field of view of the palm-mounted camera when the gripper is at its pickup location. Since the parts are sitting on a flat worktable, the required correction is only 2D, i.e. X, Y and θ. A commercial 2D vision tool is used with custom C++ code to compute this correction. The "part locator" tool (PLT) from the HexSight [13] software package was selected for this application. The PLT uses edge contours in the image to locate objects. In theory, a single asymmetric edge contour could be used for locating. We chose instead to use several edge contours to increase the reliability and accuracy of the solution at the cost of increased computation time. This also allowed symmetric contours such as circular holes to be used. Unique edge contours were first selected from an image of the part at its goal location to form a template set. For example, if the part contains only one large-diameter hole and only one long straight edge, the corresponding edges would be good choices for this set. The PLT can then quickly find these contours in the image of the part obtained on-line. The 2D correction is calculated from the centroid locations of the contours. A correction can be calculated from a pair of centroid locations. We used four contours, resulting in six unique centroid pairs and six solutions for the 2D correction. Anomalous solutions are rejected (if they occur) and the remaining values are averaged to produce the 2D correction. The gripper is then moved to a position equal to its pickup position plus this correction and the part is picked up.
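To make the centroid-pair calculation concrete, the following C++ sketch computes a (ΔX, ΔY, Δθ) correction from the template and observed centroids of four contours. This is our reconstruction rather than the workcell's actual code: the function and type names, the mean-based anomaly rejection and its tolerances are all illustrative assumptions, and the HexSight API itself is not shown.

// Sketch of the 2D pickup correction of Section 4. Model: the observed
// centroids satisfy obs = R(dtheta) * tmpl + (dx, dy). All names and
// tolerances are illustrative assumptions.
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };
struct Correction { double dx, dy, dtheta; };   // dtheta in radians

// Correction from one pair of centroids: the rotation maps the template
// vector (i -> j) onto the observed vector; the translation then maps
// template point i onto observed point i.
static Correction pairCorrection(const Pt& t1, const Pt& t2,
                                 const Pt& o1, const Pt& o2) {
    Correction r;
    r.dtheta = std::atan2(o2.y - o1.y, o2.x - o1.x)
             - std::atan2(t2.y - t1.y, t2.x - t1.x);
    double c = std::cos(r.dtheta), s = std::sin(r.dtheta);
    r.dx = o1.x - (c * t1.x - s * t1.y);
    r.dy = o1.y - (s * t1.x + c * t1.y);
    return r;
}

// Average the pairwise solutions, rejecting anomalous ones. Since the
// initial placement errors are small, naive angle averaging is adequate.
Correction correction2D(const std::vector<Pt>& tmpl, const std::vector<Pt>& obs) {
    std::vector<Correction> sols;
    for (std::size_t i = 0; i < tmpl.size(); ++i)
        for (std::size_t j = i + 1; j < tmpl.size(); ++j)
            sols.push_back(pairCorrection(tmpl[i], tmpl[j], obs[i], obs[j]));

    Correction mean = {0, 0, 0};
    for (const Correction& s : sols) {
        mean.dx += s.dx; mean.dy += s.dy; mean.dtheta += s.dtheta;
    }
    double n = static_cast<double>(sols.size());
    mean.dx /= n; mean.dy /= n; mean.dtheta /= n;

    Correction out = {0, 0, 0};
    int kept = 0;   // (a production version would guard kept == 0)
    for (const Correction& s : sols) {
        if (std::fabs(s.dx - mean.dx) > 2.0 ||          // mm, assumed
            std::fabs(s.dy - mean.dy) > 2.0 ||          // mm, assumed
            std::fabs(s.dtheta - mean.dtheta) > 0.02)   // rad, assumed
            continue;
        out.dx += s.dx; out.dy += s.dy; out.dtheta += s.dtheta;
        ++kept;
    }
    out.dx /= kept; out.dy /= kept; out.dtheta /= kept;
    return out;   // added to the gripper's taught pickup position
}

Rejecting against the mean is cruder than a median-based test, but it suffices when at most one of the six pairwise solutions is anomalous.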

5. 3D pose measurement and correction

5.1. Assumptions

It is assumed that the 3D poses of the parts when they are accurately aligned are known a priori. These can be obtained off-line by carefully moving the robots using their teach pendants. The corresponding locations of the parts, robots and the pan/tilt unit of the 3D vision system are termed their respective goal locations. In practice, errors due to the robots, grippers, and the parts will prevent the parts from reaching their goal locations without corrections obtained from sensor measurements. The error between the actual location and the goal location of each part is assumed to be less than 10 mm in position and 2° in orientation. Even for the relatively old robots used in our experiments this is a reasonable assumption.

5.2. Pose measurement

An object's pose in 3D can be characterized by a minimum of six parameters corresponding to its six DOF (X, Y, Z, roll, pitch and yaw). A novel vision system was developed to measure the 3D pose of the parts. The pose information is used to align the parts for joining, as described in Section 5.3. The hardware for the vision system consists of a video camera, two line-generating lasers fastened to a computer-controlled pan and tilt unit, a video capture card, and a PC. This hardware is shown in Fig. 3.

Fig. 3. 3D pose measurement hardware.

The pan/tilt unit allows the vision system to be aimed at any location in the workcell where visual feedback is required. The first laser is located above the camera with its line oriented horizontally. The second laser is located to the side of the camera with its line oriented vertically. The lasers


can be independently switched on/off by the PC in a synchronized fashion with the video capture. This allows the laser line image data to be easily segmented from the background by subtracting the image captured with the laser(s) off from the image captured with the laser(s) on. The flexibility of this hardware allowed two distinct approaches to be developed for pose measurement, one based on the part itself, and the other based on a special target.

5.2.1. Part-based pose measurement

With this approach the 3D pose of the part is not determined explicitly. Instead, the 3D positions of a set of edge points on the part are measured and this information is used to correct the part's pose using the method described in Section 5.3. The edge point measurements are therefore a type of implicit pose measurement. The measurement locations are determined a priori and it is assumed the part will be close enough to its goal location for these locations to be usable on-line. The minimum number of edge points required to characterize an object's pose is four, but normally more points are used to improve the quality of the pose measurement. The procedure used to determine the part's implicit pose is as follows:

1. With one of the lasers turned on, the pan/tilt unit is servoed to aim at a measurement location.
2. The centre points of the laser line image data are determined.
3. To eliminate extraneous data, points which are beyond a specified distance from the edge of the part in its goal location are rejected.
4. A line segment is fitted to the remaining points using least squares, and the end point of this segment corresponding to the outer edge of the part is determined. The use of multiple image points to calculate the edge point increases the precision of the measurement (a sketch of steps 3 and 4 follows this list).
5. The position of the edge point is converted to 3D camera coordinates using calibrated models of the camera and laser.
6. Steps 1–5 are repeated to obtain the required number of edge point measurements.
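Steps 3 and 4 of this procedure can be sketched in C++ as follows, assuming the stripe centre points have already been extracted by differencing the laser-on and laser-off images. The names, the outlier threshold, and the representation of the goal-location edge by a single reference point are illustrative assumptions, not the workcell's code.

// Sketch of steps 3-4 of the part-based measurement: reject stray stripe
// points, fit a line by least squares, and report the stripe end point.
#include <cmath>
#include <vector>

struct Px { double u, v; };   // stripe centre point in image coordinates

// 'goalEdge' approximates the part edge at its goal location; points
// further than 'maxDist' from it are treated as extraneous (step 3).
Px stripeEndPoint(const std::vector<Px>& stripe, Px goalEdge, double maxDist) {
    std::vector<Px> pts;
    for (const Px& p : stripe)
        if (std::hypot(p.u - goalEdge.u, p.v - goalEdge.v) <= maxDist)
            pts.push_back(p);

    // Step 4a: total least-squares line fit from the second moments.
    double mu = 0, mv = 0;
    for (const Px& p : pts) { mu += p.u; mv += p.v; }
    mu /= pts.size(); mv /= pts.size();
    double suu = 0, svv = 0, suv = 0;
    for (const Px& p : pts) {
        suu += (p.u - mu) * (p.u - mu);
        svv += (p.v - mv) * (p.v - mv);
        suv += (p.u - mu) * (p.v - mv);
    }
    double theta = 0.5 * std::atan2(2.0 * suv, suu - svv);   // line direction
    double du = std::cos(theta), dv = std::sin(theta);

    // Step 4b: take the projection, onto the fitted line, of the point
    // lying furthest along the line direction (the part's outer edge;
    // which end is "outer" depends on the viewing geometry).
    double best = -1e300;
    for (const Px& p : pts) {
        double t = (p.u - mu) * du + (p.v - mv) * dv;
        if (t > best) best = t;
    }
    Px end = { mu + best * du, mv + best * dv };
    return end;   // converted to 3D camera coordinates in step 5
}

Using many stripe points in the fit averages out per-pixel noise, which is why step 4 improves the precision of the edge point.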

5.2.2. Target-based pose measurement

Sometimes it is not feasible to use points on the part to determine its pose, for example when the part is occluded by another part or by a gripper. To deal with this situation, a second pose measurement approach was developed which employs special targets. These targets are fixed to the grippers at several easily viewable locations. Two of these are shown in Fig. 2. Each target consists of a flat, white square marked with four black dots. The size of the square and the locations of the dots are precisely known. To determine a target's explicit pose the following procedure is employed:

1. The pan/tilt unit is first aimed approximately at the centre of the goal location for the target. Both lasers are then turned on, projecting a cross of light onto the target.
2. The centre points of the laser line image data are determined. These points are converted from image coordinates to 3D coordinates using calibrated models of the camera and lasers.
3. To eliminate data not due to the target, points which are beyond a specified distance from the plane of the target in its goal location are rejected.
4. A 3D plane is fitted to the remaining points using least squares (a sketch of this step is given after this list).
5. The four points where the laser lines reach the edge of the target are found in the image and are converted to 3D camera coordinates.
6. These points are used to estimate the coordinates of the centre of the target and its in-plane orientation.
7. Using the information from the previous step to start the search, the four dots are found in the image of the target. The centre positions of the dots are determined and converted to 3D camera coordinates.
8. The dot centre positions are used to refine the estimates of the target centre and in-plane rotation obtained in step 6. This completes the determination of the target's explicit pose.
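A minimal sketch of the least-squares plane fit in step 4 is given below, using the Eigen linear algebra library purely for illustration (the paper does not name a library). Parameterizing the plane as z = ax + by + c assumes the target is roughly facing the camera rather than edge-on; all names are ours.

// Least-squares plane fit for step 4 of the target-based procedure.
// Eigen and all names here are illustrative assumptions.
#include <Eigen/Dense>
#include <cstddef>
#include <vector>

struct P3 { double x, y, z; };   // laser point in 3D camera coordinates

// Fits z = a*x + b*y + c to the surviving laser points, returning (a, b, c).
Eigen::Vector3d fitTargetPlane(const std::vector<P3>& pts) {
    Eigen::MatrixXd A(pts.size(), 3);
    Eigen::VectorXd z(pts.size());
    for (std::size_t i = 0; i < pts.size(); ++i) {
        A(i, 0) = pts[i].x;
        A(i, 1) = pts[i].y;
        A(i, 2) = 1.0;
        z(i) = pts[i].z;
    }
    return A.colPivHouseholderQr().solve(z);   // least-squares solution
}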

5.3. Pose correction

The objective of pose correction is to move each part using the robot until it is as close as possible to its goal location. When two parts have been aligned in this fashion they can then be joined, using welding for example, to complete that stage of the RFA process.

The pose measurement must be converted into the correction required for the robot's commanded location. The procedure to compute and apply this correction is as follows:

1. A Jacobian matrix, J, relating the change in pose measurement to the change in the robot's commanded location is estimated off-line by repeatedly dithering the robot relative to its goal location and measuring the pose of the part or gripper target. For the target-based measurement this matrix will be 6 × 6. For the part-based measurement, this matrix will be 3N × 6, where N is the number of edge point measurements.
2. If J is ill-conditioned then use a different target or relocate the measurement locations for the edge points and repeat step 1.
3. Compute J+, the pseudoinverse of J, off-line and store the result.
4. On-line, measure the actual position of the part or target. Determine the new commanded location for the robot using:


Pk+1 = Pk + G J+ Ek,

where Pk+1 is the new commanded robot location, Pk is the previous commanded robot location (note: the robot is initially commanded to move to its taught goal location, so P0 equals this location), G is a scalar gain parameter, Ek = M* − Mk is the error vector, M* is the measurement vector for the part or target at its goal location, and Mk is the measurement vector for the part or target obtained at the previous commanded robot location. This corrective action is performed repeatedly until Ek is within the accuracy of the vision system. A G value of slightly less than one should be utilized to allow the iterations to converge smoothly and quickly. A significant advantage of this correction procedure is that it does not require the locations of the robot bases and the base of the pan/tilt unit to be precisely known. Obtaining precise measurements of these locations is very difficult in practice.
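A minimal sketch of this procedure is given below, again using Eigen for illustration. The robot and vision system are reduced to callback functions, and the gain, tolerance, dither step and iteration limit are assumed values rather than those used in the actual workcell.

// Sketch of the pose correction of Section 5.3. The commanded location P
// is 6-DOF; the measurement vector M has length 6 (target-based) or 3N
// (part-based). Names and numeric values are illustrative assumptions.
#include <Eigen/Dense>
#include <functional>

using Vec = Eigen::VectorXd;
using Mat = Eigen::MatrixXd;
using MoveFn = std::function<void(const Vec&)>;   // command robot to P
using MeasureFn = std::function<Vec()>;           // measure M

// Step 1: estimate J off-line by dithering each commanded DOF.
Mat estimateJacobian(const Vec& P0, double delta,
                     const MoveFn& moveRobot, const MeasureFn& measure) {
    moveRobot(P0);
    Vec M0 = measure();
    Mat J(M0.size(), 6);
    for (int i = 0; i < 6; ++i) {
        Vec P = P0;
        P(i) += delta;
        moveRobot(P);
        J.col(i) = (measure() - M0) / delta;   // forward difference
    }
    moveRobot(P0);
    return J;
}

// Steps 3-4: iterate Pk+1 = Pk + G * J+ * Ek until Ek is small.
void correctPose(const Vec& goalLocation,      // taught goal location (P0)
                 const Vec& goalMeasurement,   // M* at the goal location
                 const Mat& J,
                 const MoveFn& moveRobot, const MeasureFn& measure) {
    Mat Jpinv = J.completeOrthogonalDecomposition().pseudoInverse();   // J+
    const double G = 0.8;     // gain slightly below one (assumed value)
    const double tol = 0.1;   // vision accuracy threshold (assumed value)
    Vec P = goalLocation;
    for (int k = 0; k < 20; ++k) {
        moveRobot(P);
        Vec Ek = goalMeasurement - measure();   // Ek = M* - Mk
        if (Ek.norm() < tol) break;
        P += G * (Jpinv * Ek);
    }
}

Because the loop only needs the measured error to shrink at each step, small errors in J are absorbed by the iteration; this is consistent with the observation above that the robot and pan/tilt base locations need not be precisely known.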

6. Distributed object-oriented control

6.1. System architecture

The actions of the workcell devices are coordinated using a distributed object-oriented control approach. CORBA [14], an open standard for object-level communication over a network, is employed along with a 100 Mbit/s Ethernet LAN to implement this approach. With CORBA, an object created on one computer can be remotely accessed from any other

computer on the network as if it were created locally. The functional layout of the system is shown in Fig. 4. Physically, the objects are distributed over the four PCs on the LAN, and the controllers for the robots, grippers and pan/tilt unit connect to one PC using RS-232 serial links.

Fig. 4. Functional layout of the distributed object-oriented control system (note: two camera server objects are not shown).

To simplify the high-level control of the workcell, the machine-specific syntax is encapsulated within robot server objects. One such object is created for each device, as shown in the figure. Software to obtain image data from each of the video cameras is encapsulated in three camera server objects. This hides the specifics of the camera, lens and video capture hardware and also allows the data from the camera to be pre-processed. In this system, the image distortion due to the non-ideal lens characteristics is corrected by pre-processing. The part locator object encapsulates the software used to locate the parts for pickup. When this object receives a request to measure a part's location it contacts the appropriate camera server to obtain a corrected image of the part, analyses the image to determine the location, and returns the result. Similarly, the 3D pose measurement object contacts the server for the pan/tilt unit to point the device at the desired measurement location(s), turns the laser(s) on or off, obtains the required image(s) from the server for its camera, and computes and returns the pose measurement. Note that the laser illumination is controlled via the same camera server object. The system control object acts as the coordinating hub for the system. All user robot commands are directed through this object to provide a central point of control. The system control object and all robot server objects reside on a single PC for safety purposes. In this way, the workcell can be safely shut down even in the event of a network failure. The state information for the system is also stored here and can be queried by any of the other objects. The actions of the workcell are programmed using a high-level interpreted language, described further in the next section.
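To give a flavour of the contract a robot server object exposes, the following plain C++ abstract class mirrors the script-level commands that appear in the programming example of Table 1 (Section 6.2). In the actual system this contract would be written in CORBA IDL and accessed through generated stubs; the method names and signatures below are our assumptions inferred from the script functions, not the authors' interface.

// Illustrative (assumed) view of a robot server object's interface.
#include <array>

// Nine values per setpoint, to cover the nine-DOF gripper; robots with
// fewer DOF leave the trailing entries at zero.
using Setpoint = std::array<double, 9>;

class RobotServer {
public:
    virtual ~RobotServer() = default;

    // Non-blocking move to a world-coordinate setpoint.
    virtual void setWorld(const Setpoint& sp) = 0;

    // Blocking move: returns true once the actual position is within
    // 'threshold' of 'sp', or false after 'timeoutSeconds'.
    virtual bool setWorldAndWait(const Setpoint& sp,
                                 const Setpoint& threshold,
                                 double timeoutSeconds) = 0;

    // Current position in world coordinates.
    virtual Setpoint getWorld() = 0;
};

Hiding each device behind such an interface is what lets the scripting layer treat a PUMA, a GMF robot, a gripper or the pan/tilt unit uniformly.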


6.2. High-level workcell programming

To perform the fixtureless assembly task the actions of the workcell components must be properly sequenced. This was programmed using CorbaScript [15]. CorbaScript is a freely available interpreted scripting language that is specifically designed to interact with CORBA objects. The high-level control could have been programmed in C++, at the cost of increased complexity and lower flexibility. For example, with CorbaScript, if the operation that is running fails for some reason the user may interactively type commands into the interpreter to diagnose the problem and to move the robots to safe locations. Interactivity of this sort is not easily achievable with a compiled language. While this scripting language is single threaded, multithreaded execution can be simulated due to the non-blocking nature of the calls to the robot server objects. As well, it is capable of calling C language functions from pre-compiled libraries, so trigonometric calculations can be easily implemented, for example.

Since the programs used for fixtureless assembly are too long to be described here, a simple programming example is given in Table 1 for illustrative purposes.

Table 1
A simple workcell programming example

Exec("setup.cs")
A = CheckRobot("RobotA")
B = CheckRobot("RobotB")
A_sp = WorldSetpoint(10,20,30,40,50,60,0,0,0)
A_th = WorldSetpoint(0.1,0.1,0.1,0.1,0.1,0.1,0,0,0)
SetWorldAndWait(A, A_sp, A_th, A_timeout)
B_pos = GetWorld(B)
B_offset = WorldSetpoint(30,90,80,70,60,50,0,0,0)
B_pos = AddSetpoints(B_pos, B_offset)
SetWorld(B, B_pos)

In the first line, the "setup.cs" script is run to define the specialized functions and to connect the CorbaScript interpreter to the system control object. Then we check whether the robots RobotA and RobotB are available, and obtain their handles if they are. If either is not available then the program will immediately stop executing. Next, a setpoint is defined in world coordinates for RobotA. The setpoint format supports nine arguments to accommodate the values needed for the nine-DOF gripper. If the robot in use has fewer DOF then zeros are used for the remaining arguments. Here, both robots have six DOF. In the next two lines, RobotA is commanded to move to the setpoint A_sp. The function SetWorldAndWait also suspends execution at the current line until the actual position of the robot is within A_th of the commanded setpoint. If for some reason the move does not complete after A_timeout seconds, the program execution is stopped. In lines seven and eight, the current position of RobotB is assigned to B_pos and the setpoint variable B_offset is created. B_pos is then incremented by B_offset and RobotB is commanded to move to this new location using SetWorld. This is equivalent to a relative motion equal to B_offset. The function SetWorld is non-blocking, so the script will continue to execute while the robot is moving. This capability allows several robots to be moved simultaneously, for example.

7. Experimental results

RFA experiments were conducted on a set of four sheet metal automotive body parts. Each experiment began with the parts roughly placed on a flat worktable as shown in Fig. 5. This was intended to approximate the delivery of the parts by an industrial pallet conveyor. The approximate dimensions of parts 1–4 in millimetres are 150 × 160 × 60, 250 × 640 × 170, 180 × 800 × 200, and 290 × 110 × 70, respectively.

Fig. 5. Four parts to be assembled placed on a flat worktable.

The assembly sequence is executed as follows:

1. Visually locate and then pick up part 1 with the four-DOF gripper and GMF robot.
2. Visually locate and then pick up part 2 with the nine-DOF gripper and PUMA robot.
3. Pose part 2 using part-based pose correction.
4. Pose part 1 using target-based pose correction and join it with part 2 to form sub-assembly 1–2. (The term "sub-assembly" denotes a set of two or more parts which have been rigidly joined together.) Open the four-DOF gripper and retract the GMF robot.


5. Return sub-assembly 1–2 to the worktable using the PUMA robot.
6. Visually locate and then pick up part 3 with the PUMA robot.
7. Pick up sub-assembly 1–2 using the GMF robot.
8. Pose part 3 using part-based pose correction.
9. Pose sub-assembly 1–2 using target-based pose correction and join it with part 3 to form sub-assembly 1–2–3. Open the four-DOF gripper and retract the GMF robot.
10. Visually locate part 4 and pick it up using the GMF robot.
11. Pose part 4 using target-based pose correction and join it with sub-assembly 1–2–3 to form the completed sub-assembly. Open the four-DOF gripper and retract the GMF robot.
12. Place the completed sub-assembly on the worktable using the PUMA robot to finish the RFA operation.

In steps 4, 9 and 11, Velcro strips are used to join the parts together. Although spot welding is normally used with sheet metal parts, the use of Velcro (which is used extensively in the assembly of car interiors, under hoods and inside trunks) allowed the parts to be easily disassembled for re-use in our experiments. Note that the way the parts must be layered in the completed sub-assembly required a nonlinear assembly sequence, i.e. the parts cannot simply be joined one on top of the other. This was accomplished by swapping sub-assembly 1–2 from the PUMA robot to the GMF robot via the worktable in steps 5 and 7. This demonstrates a significant feature of the RFA workcell. Since both grippers can pick up any of the parts, nonlinear assembly sequences can be employed to build more complex sub-assemblies than would otherwise be possible using non-programmable grippers. To achieve the same flexibility with simple grippers would require a large number of them (one for each part used with the workcell) and a series of tool changes. Tool changing would increase the cycle time, and storing the grippers would require factory floor space, both of which are costly.

Sample experimental results are shown in Figs. 6–8. In Fig. 6, a 2D vision result for part 4 is shown. The found edge contours (located by the part locator tool) are outlined in white and their centroids are indicated by the coordinate axes shown. As previously mentioned, the orientation information from each contour is not used when calculating the part's orientation. Based on repeated measurements with all four parts, the accuracy of the 2D part locating was ±1 mm in X and Y and ±0.2° in θ. This was sufficient for reliable grasping to be achieved.

Fig. 6. A 2D vision result for part 4.

Fig. 7. During 3D pose measurement of part 3.

For the part-based 3D pose measurement employed in steps 3 and 8 of the assembly sequence, six and nine edge point measurements were utilized with parts 2 and 3, respectively. The more non-uniform dimensions of part 3 necessitated using a larger number of measurements. A photograph taken during the 3D pose measurement of part 3 is shown in Fig. 7. Here, the stripe from laser 2 is being used to measure a point on the part's top edge. A series of photographs documenting steps 10 and 11 of the assembly sequence is given in Fig. 8. Fig. 8a shows part 4 after it has been located and picked up by the GMF robot, and sub-assembly 1–2–3 held by the PUMA robot. In Fig. 8b, part 4 is in the process of being moved to its pose correction location. In Fig. 8c, the laser cross is being employed during the target-based pose correction. In Fig. 8d, part 4 has been joined to sub-assembly 1–2–3 and the four-DOF gripper

has been retracted to leave the completed sub-assembly with the PUMA robot.

Fig. 8. Photograph sequence taken during the pickup, alignment and joining of part 4.

Numerous assembly experiments were completed to verify the workcell's reliability. Based on these experiments, the cycle time for the assembly sequence was 3 min and the accuracy of the completed assembly was ±2 mm.

8. Conclusions

A vision-guided RFA workcell has been developed and verified experimentally. A 2D computer vision system is used to correct for errors in initial part placement prior to part pickup. A 3D computer vision system is used to correct for errors in the pose of the parts prior to joining. In this way, part pickup and joining were made reliable and robust to position and orientation errors. Two novel programmable grippers were developed which allow the workcell to be used with a wide range of part shapes without requiring additional grippers or tool changing. A distributed object-oriented approach, based on the open-standard CORBA and the CorbaScript scripting language, was used to coordinate and control the workcell devices. The use of objects hides the specifics of the hardware from the end-user, greatly simplifying re-programming for different RFA tasks. This integration approach is flexible and easily extensible, allowing new commands or robots to be quickly incorporated, for example. The assembly of four automotive components, requiring a non-linear assembly sequence, was successfully implemented with the workcell.

The workcell currently has several limitations. It is limited to working with rigid parts, and the grippers are only suitable for parts made from formed sheet materials. Modifications to the gripper fingers, such as knurling, could be made to allow other types of parts to be gripped. More significantly, the assembly speed and accuracy achieved in our tests are too low for most industrial applications. For automotive assembly, the cycle time would have to be less than 1 min and the accuracy better than ±0.5 mm. The cycle time could be greatly reduced by speeding up the robot motions. These were kept slow in our tests for reasons of lab safety. The assembly accuracy is dependent on the accuracy of the 3D pose measurements. This could be significantly improved by employing lasers which produce narrower light stripes, a higher resolution video camera and a higher quality lens.

Acknowledgements


This research was financially supported by the Institute for Robotics and Intelligent Systems. The HexSight vision software was donated by HexaVision Technologies Inc., and the automotive body parts were donated by MASSIV DIE-FORM Inc. The extensive software development work performed by J. Stavnitzky is also gratefully acknowledged.

References

[1] Hoska DR. Fixtureless assembly manufacturing. Manuf Eng 1988;100(4):49–54.

[2] Ro PI, Lee BR, Seelhammer MA. Two-arm kinematic posture optimization for fixtureless assembly. J Robotic Systems 1995;12:55–65.
[3] Mills JK, Ing JGL. Dynamic modeling and control of a multirobot system for assembly of flexible payloads with applications to automotive body assembly. J Robotic Systems 1996;13:817–36.
[4] Nguyen W, Mills JK. Multi-robot control for flexible fixtureless assembly of flexible sheet metal auto body parts. Proceedings of the IEEE International Conference on Robotics and Automation, 1996. p. 2340–5.
[5] Plut WJ, Bone GM. Limited mobility grasps for fixtureless assembly. Proceedings of the IEEE International Conference on Robotics and Automation, 1996. p. 1465–70.
[6] Plut WJ, Bone GM. 3-D flexible fixturing using a multi-degree of freedom gripper for robotic fixtureless assembly. Proceedings of the IEEE International Conference on Robotics and Automation, 1997. p. 379–84.
[7] Choi HS. Dynamics of cooperating manipulators for fixtureless assembly and robust control based on fuzzy logic. J Robotic Systems 1999;16:93–103.


[8] Walczyk DF, Raju V, Miller R. Fixtureless assembly of sheet metal parts for the aircraft industry. Proc Inst Mech Eng, Part B: J Eng Manuf 2000;214:173–82.
[9] Stavnitzky J, Capson D. Multiple camera model-based 3-D visual servo. IEEE Trans Robotics Autom 2000;16:732–9.
[10] van Varseveld RB, Bone GM. Design and implementation of a lightweight, large workspace, non-anthropomorphic dexterous hand. ASME J Mech Des 1999;121:480–4.
[11] Bone GM. Three orthogonal directions movable fingers for holding and/or manipulating a three-dimensional object. United States Patent 6,273,483, 2001.
[12] Balan L. Gripper design and grasp planning for fixtureless assembly. M.A.Sc. Thesis, McMaster University, Hamilton, Ont., Canada, 2001.
[13] HexSight Version 1.2, HexaVision Technologies Inc., 1999.
[14] Object Management Group. The OMG's CORBA website. http://www.corba.org/, 2000.
[15] Laboratoire d'Informatique Fondamentale de Lille. The CorbaScript language. http://corbaweb.lifl.fr/CorbaScript/, 2000.