Copyright © IFAC Low Cost Automation, Valencia, Spain, 1986
APPLICATION OF A LOW COST VISION SYSTEM TO AUTOMATIC ASSEMBLY

R. M. H. Cheng and T. Montor
Centre for Industrial Control, Concordia University, Montreal, Canada

Abstract. In this paper, a cost-effective optical system for the determination of the centroid and orientation of a class of 2½-dimensional objects is discussed. To test the method and its suitability to near-industrial conditions, an experimental vision station, consisting mainly of a low cost binary camera and an IBM PC microcomputer, has been mounted on a flat-top conveyor. For demonstration, the station has been serially linked to a Puma 560 manipulator to assist in performing some rudimentary tasks on the moving objects. The test results have successfully demonstrated that the resolving power and speed of this simple system can bring significant benefits to small and medium industries in automating their assembly lines.

Keywords. Identification; information retrieval; materials handling; vision; on-line operation; centroid; orientation.
1. INTRODUCTION
There are many applications in industry which require on-line information on the position and orientation of objects lying on a moving conveyor belt. In automating the factory of tomorrow, there is an increasing number of instances in which components of all sorts are transported from one work station to another via a conveyor belt. For economic reasons, the objects are usually placed on the belt with little attention paid to their relative location and orientation. Along the line, it may be necessary to remove the work pieces with a manipulator for assembly or packaging, for feeding into another machine, for sorting in the case of a batch of mixed work pieces, or for other processes. In all these situations, the robot controller must possess, a priori, information as to the location and orientation of the object on the belt. In addition, for the automated operation to be economically attractive, the information must be made available quickly and at a moderate system and operating cost. Although systems catering to the first condition exist, their costs, more often than not, are prohibitive for all but a select number of industries. The trend towards high resolution cameras with specialized computers and specialized electronics usually prices such systems beyond what most small and medium industries can afford.
For a large portion of industrial applications, the above-mentioned information can usually be extracted using inexpensive vision systems comprised, for the most part, of a low resolution camera and a general purpose microcomputer. As in customary production planning and scheduling, comparable performance can be achieved by optimising simple algorithms for the targeted production run of the day. It is in accordance with this philosophy that the proposed system has been developed.
2. DESCRIPTION OF THE SYSTEM

2.1 The Algorithms

The main algorithms at the centre of the system were first devised in their original form by the authors of [1] in 1980 and those of [2] in 1983. As discussed in [2], the centroid of a silhouette may be calculated by taking first moments about two mutually perpendicular reference lines. The method is straightforward and adapts itself particularly well to a binary environment. Essentially, each rectangular pixel of the digitized binary image is treated as a unit area, with its sides serving as x and y unit distances for the respective moment arm calculations.
Arranged in a checkerboard fashion, the x and y moment arms for each "on" pixel representing object presence are easily calculated by counting the number of pixels separating it from the two mutually perpendicular reference lines. Having done this throughout the image, the aligned moment arms are then summed and divided by the number of "on" pixels, i.e. the area. Mathematically, if a circumscribing rectangle is positioned around the silhouette and its top-left and bottom-right coordinates are given by (M, P) and (N, R) with respect to the reference origin, the centroid can be described by the following equations:
$$\bar{x} = \frac{\displaystyle\sum_{i=M}^{N} \sum_{j=P}^{R} x_j \,\Delta A_{ij}}{\displaystyle\sum_{i=M}^{N} \sum_{j=P}^{R} \Delta A_{ij}} \cdot X_{scale} \qquad (1)$$

$$\bar{y} = \frac{\displaystyle\sum_{i=M}^{N} \sum_{j=P}^{R} y_i \,\Delta A_{ij}}{\displaystyle\sum_{i=M}^{N} \sum_{j=P}^{R} \Delta A_{ij}} \cdot Y_{scale} \qquad (2)$$

where:
1. $X_{scale}$, $Y_{scale}$ are optical scale factors along the x and y axes.
2. $\Delta A_{ij}$ = 0 or 1 depending on whether the pixel at location $(x_j, y_i)$ is OFF or ON, indicating the absence or presence of the object at this position.

Fig. 1. Silhouette (optical image) of an engineering component with a circumscribing rectangle used to evaluate orientational parameters.
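A minimal sketch of the first-moment computation of equations (1) and (2) is given below in Python/NumPy. It illustrates the method rather than reproducing the authors' original implementation; the function name and the default unit scale factors are assumptions.

```python
import numpy as np

def centroid(binary_image, x_scale=1.0, y_scale=1.0):
    """Centroid of a binary silhouette by first moments, cf. Eqs. (1)-(2).

    binary_image : 2-D array of 0/1 values (1 = "on" pixel, object present).
    x_scale, y_scale : optical scale factors along the x and y axes.
    """
    rows, cols = np.nonzero(binary_image)        # indices of the "on" pixels
    if rows.size == 0:
        raise ValueError("empty silhouette")
    # Mean of the pixel indices = sum of moment arms divided by the number of
    # "on" pixels (the area); the scale factors convert to physical units.
    x_bar = cols.mean() * x_scale
    y_bar = rows.mean() * y_scale
    return x_bar, y_bar
```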
The procedure employed in determining the orientation of an object is organized in two phases. The first is a preparatory phase, which consists in creating a look-up table composed of a select number of geometric parameters extracted from the silhouette. Figure 1 shows, typically, how a circumscribing rectangle constructed around the image of a given object may produce a family of dimensionless geometric parameters. With every incremental rotation of the profile, the non-dimensional values are generated and stored together with the angle to which they belong. The number of parameters, as well as their selection, is a function of the image resolution and the silhouette geometry. For every profile, a subset is constructed consisting of the minimum and most discriminating combination of parameters. In the real-time phase, angular identification is realized by matching the parameters extracted from the target with those found in the corresponding table.
The entire preparation phase requires no time or effort on the part of the end user. An algorithm may be devised for this purpose, which rotates the profile, generates the four-parameter table, and decides, based on the table and the camera resolution, which and how many parameters will be sufficient to form the final look-up table. Dimensional information on the profile can be entered in one of several ways. If the silhouette is relatively simple, dimensions can be entered directly from the blueprint. For more complex components, an optical image may be used instead. As can be appreciated, the optical algorithms are particularly well suited to the row-by-row placement of the optical data in memory, requiring little computing effort and enabling the processing of the image before it is fully gathered. The interleaved nature of the process is so efficient, in fact, that at moderate conveyor speeds (15-20 cm/s) the processing algorithm must wait until another frame is available. This leads to a situation where, upon reception of the very last frame, all that remains to be done is to process the tail end of the silhouette contained in that frame and match the extracted parameters with those in the table. Thus, for a typical silhouette consisting of four frames (Fig. 1), the entire on-line evaluation takes an IBM PC under 1 s to completely determine the centroid and orientation of the object.
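The two-phase orientation procedure may be summarized by the following sketch. The routine profile_params is a hypothetical placeholder for whatever dimensionless ratios are extracted from the circumscribing rectangle; the paper does not specify the ratios or the matching rule, so the parameter names and the nearest-neighbour matching criterion used here are assumptions.

```python
import numpy as np

def build_lookup_table(profile_params, angle_step=1.0):
    """Preparation phase: tabulate the dimensionless parameters obtained
    after each incremental rotation of the stored profile."""
    table = []
    angle = 0.0
    while angle < 360.0:
        table.append((angle, np.asarray(profile_params(angle), dtype=float)))
        angle += angle_step
    return table

def identify_orientation(target_params, table):
    """Real-time phase: return the tabulated angle whose parameters best
    match those extracted from the target silhouette."""
    target = np.asarray(target_params, dtype=float)
    best_angle, _ = min(table, key=lambda entry: np.linalg.norm(entry[1] - target))
    return best_angle
```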
The role of detecting the object is performed by a pair of LEDs positioned on either side of the conveyor belt. As the object moves past the LEDs and well into the field of view of the overhead camera, the light beam is broken and an interrupt signal is sent to the computer. Once informed of the object's presence, the computer initiates image gathering and processing. By processing the image, the computer is able to extract the centroid and orientation of the silhouette. The camera has an optical array of 128 x 64 elements with a field aspect ratio of 4.8:1. To accommodate components of different sizes, a series of frames can be taken and concatenated by software to produce a composite image. As the object moves through the field, an opto-encoder assembly connected to the drive shaft of the conveyor supplies a tracking circuit with displacement as well as velocity data. For each displacement equivalent to one optical field length, the tracking circuit generates an interrupt to the computer so that another frame may be collected. After each image, program control is quickly transferred to the high-level language, where the conditioning/processing resumes. Among other things, the image conditioning is concerned with parallax errors. Image distortion varies greatly with the object's position and orientation. To cope with the varying distortion, knowledge of the object's height (h) and of the camera's distance from the belt surface (H) is used to calculate a simple correction factor (CF), which can be applied in real time to the perceived edge of the silhouette (B) (Fig. 2).
$$A = B \cdot (CF) = B \left(1 - \frac{h}{H}\right)$$

Fig. 2. Image conditioning due to parallax error.
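A short sketch of this correction is shown below. It assumes the reading of Fig. 2 given above, with B the perceived edge position projected onto the belt plane, h the object height and H the camera height above the belt.

```python
def correct_parallax(b, h, H):
    """Map a perceived edge position B to the corrected position
    A = B * (1 - h/H), per Fig. 2.

    b : perceived edge position B, measured from the camera's optical axis
    h : height of the component's top surface above the belt
    H : distance of the camera from the belt surface
    """
    cf = 1.0 - h / H          # correction factor (CF)
    return b * cf
```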
The transfer of each frame from the camera assembly to the computer's memory is done at a rate of 15.3 kbaud, requiring 67.5 ms for a complete transfer. With a typical exposure time of 8 ms per frame, the camera is thus capable of snapping 13 frames/s (one frame every 75 ms), which, for the present field of view of 30 x 6.25 cm, translates into a maximum belt speed of 80 cm/s. In terms of lighting, adequate flood lights have been installed to reduce the exposure time to approximately 8 ms while providing a sufficient contrast between dark objects and a lighter coloured background. This may not be adequate to cover all industrial requirements. Structured lighting may then be implemented to ensure highlighting irrespective of component/background colour. Most notable in this domain, for instance, is the surprisingly simple approach taken by General Motors Research Laboratories [3] in creating an artificial contrast scene between object and background. Although the exposure time is relatively long by comparison with CCD (charge-coupled device) and video type counterparts, the error introduced is negligible for the application. Typically, for a conveyor speed of 20 cm/s, the error at the silhouette edge is: (20 cm/s)(0.008 s)/2 = 0.8 mm.
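The timing figures quoted above can be reproduced with the short calculation below. The 6.25 cm field length along the direction of travel is inferred from the 30 cm field width and the 4.8:1 aspect ratio, and is therefore an assumption.

```python
# Quantities quoted in the text (the field length is an inferred value).
transfer_time_ms = 67.5      # serial transfer of one frame at 15.3 kbaud
exposure_time_ms = 8.0       # typical exposure per frame
field_length_cm = 6.25       # field of view along the direction of belt travel

frame_period_ms = transfer_time_ms + exposure_time_ms      # about 75 ms per frame
frame_rate_hz = 1000.0 / frame_period_ms                    # about 13 frames/s
max_belt_speed_cm_s = field_length_cm * frame_rate_hz       # about 80 cm/s

belt_speed_cm_s = 20.0                                       # typical conveyor speed
edge_error_mm = belt_speed_cm_s * (exposure_time_ms / 1000.0) / 2.0 * 10.0  # 0.8 mm
print(round(frame_rate_hz, 1), round(max_belt_speed_cm_s, 1), edge_error_mm)
```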
3. DISCUSSION AND CONCLUSION

At present, the vision system can only cater to one object/stable position at a time. As a natural extension, the capability of handling multiple objects and stable positions would be a desirable feature. In order to do so, identification of the target silhouette would be essential in selecting the correct look-up table. To this end, distinction on the basis of height could be used as a first discriminator. The status of LED pairs, mounted at various elevations on either side of the belt, could be used as GO/NO-GO gauges to be read at the time of object detection. In addition to the discriminating power this would provide, the correct parallax correction factor would be chosen, allowing, among other things, cases of the same elevation to be further discriminated on the basis of differing profile areas. Under those circumstances where differing areas and elevations would not be a sufficient discriminator, special precautions would have to be taken. For example, an object such as a spanner can be flipped over to yield a second distinct silhouette having the same area as the first. In this instance, the desired stable position for the spanner on the belt would have to be ensured.

The vision station that has been discussed forms part of a larger robotic assembly system. The system was developed for the purpose of demonstrating the feasibility of a low cost approach to automating some areas of the assembly line. The attempt has been to advance techniques and methodologies that not only perform well, but are affordable to those small and medium industries which are looking towards automation to remain competitive. In light of the fact that the cost of the purchased and developed equipment is easily one tenth the cost of existing packages, the authors are convinced that the approach taken is a worthy solution to the outlined conveyor problem.
REFERENCES

[1] Cheng, R. M. H. and A. E. Fahill (1980). An on-line system for identifying the angular orientation of a class of industrial engineering components. IEEE Transactions on Industrial Electronics and Control Instrumentation, Vol. IECI-17, No. 3.

[2] Cheng, R. M. H., S. LeQuoc, and D. Athanasoulias (1983). An on-line system for locating the position of the centroid of a flat object. Proceedings of the Ninth Canadian Congress of Applied Mechanics, University of Saskatchewan, Saskatoon.

[3] Ward, M. R., D. P. Rheaume, S. W. Holland, and J. H. Dunseth (1982). Production plant Consight installations. General Motors Research Laboratories, Warren, Michigan, internal report no. GMR-4156.