
Computers in Industry 47 (2002) 113–122

Machine vision system for inspecting electric plates

Franci Lahajnar, Rok Bernard, Franjo Pernuš, Stanislav Kovačič*

Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, 1001 Ljubljana, Slovenia

Received 11 December 2000; accepted 15 June 2001

Abstract

This paper presents a machine vision system for automated visual inspection of plates of electric cookers, with the goal to reduce labour cost and ensure consistent product quality. A number of dimensions of an electric plate are accurately determined by using two cameras with telecentric lenses, by applying sub-pixel edge detection techniques, and by semi-automatically calibrating the system. The dimensions of plates, which are manufactured by CNC lathes, may fall in or out of prescribed tolerance standards and are as such a valuable indicator of lathe tool bluntness or breakage. Extensive experiments showed that the inspection system is fast, accurate and robust. The machine vision system is, thus, able to keep up with the plate production process, inspect every plate and reject the defective ones. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Machine vision; Automated visual inspection; Dimensions; Electric plates; Sub-pixel precision

1. Introduction

Inspection is the process of determining whether products deviate from a given set of specifications [11]. Automated visual inspection (AVI) determines properties of products using visual information and is most often automated by employing machine vision techniques. The machine vision field is growing rapidly, thanks to research efforts in the computer vision community and advancements in computer and vision technology. Over the years vision technology has matured; it has become more powerful and is now readily accessible to many more users, simply because it costs less and because it is easier to use. Practically any standard PC can be upgraded to a reasonably powerful vision machine. Nowadays, AVI can provide cost-effective solutions to a number of

* Corresponding author. Tel.: +386-1-4768-310; fax: +386-1-4264-630. E-mail address: [email protected] (S. Kovačič).

"standard" visual inspection tasks, demanding accurate and reliable operation in real time, which only a few years ago was either too expensive, too slow, or sometimes even impossible. Vision approaches proved to be useful, and new vision applications are appearing. Major published vision applications have been surveyed in the papers by Chin [6], Chin and Harlow [5], Newman and Jain [11], and Thomas et al. [12]. Another driving force in the field of AVI is the inevitable requirement for improved quality control. Manufacturers look for reliable and consistent automated visual inspection of products to reduce manual involvement in the application of pass/fail criteria. An automated visual inspection system introduces fast, precise, efficient and objective inspection. It can be used for measuring dimensions, for inspecting surfaces, and for verifying parts (e.g. [4,10,13]). It does not disturb the manufacturing process or the products; therefore, there is no risk of damaging a product during inspection. It enables 100% inspection of part series. Automated visual inspection is valuable when



Fig. 1. Two views of an electric plate: (a) arbitrary view, (b) side view.

the product has large part volumes, demands precise measurement, requires consistent inspection, or is in a hazardous environment. However, high speed, high quality, and high resolution applications require innovative, customized solutions, outside the scope of standard "off-the-shelf" systems. This work has been motivated by a mass-producer of plates for electric cookers who was planning to automate the dimensional measurement of plates after being processed on a series of CNC lathes. The inspection system should keep up with the production process, inspecting every plate and rejecting the defective ones. This requirement limits the overall inspection time for one plate to about 1 s. Fig. 1 shows two views of a plate which needs to be inspected. The plates are transported from a series of lathes to the machine vision system by means of a conveyor belt. Dimensions of successive plates may fall out of the prescribed tolerance standards due to lathe tool bluntness or breakage. Such situations must be detected as soon as possible to stop the production of defective products. Thus, inspection must be done on-line to provide in-time feedback to the lathe (e.g. operator

intervention required due to tool damage). In this paper, we describe a machine vision system developed for fast and precise automated measurement of electric plate dimensions.

2. Problem formulation

The task of the automated visual inspection system is to determine whether certain dimensions of each electric plate (diameter of about 200 mm, height of about 25 mm) are within prescribed standards or not. Fig. 2 illustrates the widths (D1, D2, D3) and heights (H1, H2, H3) of a plate which have to be measured. The diameter D2 has the smallest tolerance of ±0.15 mm. All the other dimensions can deviate at most by ±0.25 mm. As a rule of thumb, for every measured dimension, the accuracy of the system has to be at least five times higher than the dimensional tolerances are. This means that the system has to be able to measure the diameter D2 with an accuracy of 0.03 mm, and all the other dimensions with an accuracy of 0.05 mm, while the repeatability of the measurements required by the

Fig. 2. The dimensions D1, D2, D3, H1, H2, H3 and the angle φ that have to be determined.


producer is 10 times higher than the dimensional tolerances are. This means that the repeatability of results has to be better than 0.03 mm for dimension D2 and 0.05 mm for all other dimensions. The production cycle of a plate, and therefore the maximal processing time allowed, is about one second. The required measurement accuracy, repeatability, and speed are quite demanding. In addition, the surface of the object is highly reflective (Fig. 1), which may pose serious problems in any image-processing task. In the following section, we describe how a machine vision system of adequate performance was designed.

3. Methodology

Various approaches exist for measuring dimensions using machine vision. They can vary considerably in price and performance. Perhaps the most relevant engineering question in that context is how to design a cost-effective system of adequate performance in terms of speed and accuracy. To design an effective measuring system based on machine vision techniques, the most important issues one has to consider are:

- selection of the image sensor and frame grabber;
- selection of lenses;
- selection of illumination;
- development of efficient image processing algorithms.

3.1. Sensor

The discrete nature of imaging sensors introduces a limited image resolution depending on the number of discrete sensing elements (pixels) available. For example, sensing an object (electric plate) of size 200 mm using a standard 2/3" CCD sensor containing 756 × 581 pixels yields 200/756 ≈ 0.26 mm effective resolution. The discretization effect can be reduced either by using a higher resolution and, therefore, more expensive CCD sensor, or several suitably arranged and calibrated lower resolution sensors. In our case, to meet the required accuracy at moderate costs, two parallel cameras with 756 × 581 sensing elements were used.
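As a quick check of this arithmetic, a minimal Python sketch reproduces both estimates, assuming the pixel count quoted above and the roughly 40 mm per-camera field of view introduced in the next paragraphs:

```python
# Effective resolution estimates for the sensor choice discussed above.
# Figures taken from the text: 756 x 581 pixel CCD, ~200 mm plate,
# roughly 40 mm field of view per camera in the two-camera set-up.

SENSOR_PIXELS_X = 756          # horizontal pixel count of the CCD
FOV_SINGLE_MM = 200.0          # one camera would have to see the whole plate
FOV_DUAL_MM = 40.0             # each camera sees only ~40 mm around one edge

res_single = FOV_SINGLE_MM / SENSOR_PIXELS_X   # ~0.26 mm per pixel
res_dual = FOV_DUAL_MM / SENSOR_PIXELS_X       # ~0.05 mm per pixel

print(f"single camera: {res_single:.3f} mm/pixel")
print(f"two cameras:   {res_dual:.3f} mm/pixel")
```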


Fig. 3. System set-up based on two cameras and back-illumination.

The system set-up is illustrated in Fig. 3. Using such a camera arrangement, only a small portion (approximately 40 mm × 40 mm in size) around the left and the right edge of the plate is sensed. Since a much smaller region is sensed, the discretization error is effectively reduced. For example, sensing a region 40 mm in size using a 756 × 581 pixel sensor results in 40/756 ≈ 0.05 mm resolution. Nevertheless, this approach still does not fulfill the requirements and, therefore, has to be further improved by using the sub-pixel approach.

3.2. Lenses

When using conventional lenses, the size of an object in the image varies with its distance to the lens. In cases like ours, when dimensions are of interest, image size variations should be either compensated or eliminated. An obvious way to approach this problem is to keep the object-to-lens distance constant. Otherwise, the distance has to be measured. Neither is trivial to achieve in an industrial environment. Telecentric lenses offer an effective solution to this problem. The main advantage of using telecentric lenses is that they ensure constant image size for substantial variations in object-to-lens distance. On the other hand, the field-of-view of a telecentric lens, and therefore the allowable object size, is directly related to the diameter of the lens. For example, to sense the whole electric plate by one camera only, the diameter of the lens should be about the size of the plate, i.e. 200 mm. Telecentric lenses of such a large diameter would be prohibitively expensive. Obviously, two cameras equipped with two telecentric lenses of smaller diameter (70 mm in our case) provide a cheaper solution. Since two cameras are used


to measure one dimension, e.g. the diameter D1, from the edge in one image and the corresponding edge in the other image, the cameras have to be set in parallel. Otherwise, the telecentric lenses would not yield the desired effect.

3.3. Illumination

An appropriate illumination of the object is crucial for measuring dimensions. Because the dimensions are derived from the object edges, the edges must be determined accurately and their position in the image should not depend on light variations. To determine the dimensions D1, D2, D3 and H1, H2, H3, the plate is illuminated from behind (Fig. 3). Such an illumination eliminates the non-uniform surface reflectance. Undesirable reflections at the edge of the plate, due to its circularity, are suppressed using a collimated light source emitting rays parallel to the optical axis of the camera.

3.4. The algorithms

To measure the required dimensions, a number of straight-line segments (Fig. 4) in the left and right portions of the plate are extracted and the distances between the mid-points of line segments are computed. In short, the algorithm performs the following steps:

1. isolates straight-line segments to sampling precision;
2. obtains sub-pixel positions of segments;
3. calculates the required dimensions.

First, in the left and the right image, the plate is segmented from the background by thresholding.

For that purpose, a global threshold is used. The contour-tracking algorithm [2] is then applied to the binary image to locate the edges of the plate to sampling precision. The algorithm scans the left image from top to bottom and left to right to locate the first point on a given curve. Once the first point is located, its eight-connected neighbors are checked in the clockwise direction to find the next tracking point. To attain the same order of line segments in both images, tracing in the right image goes from top to bottom and right to left, while the neighbors of a point are checked in the counterclockwise direction (Fig. 4). The procedure is repeated until all points forming the curve are traced. The final result is a pair of curves, one per image, expressed as a sequence of edge points. A corner detector [14] is then employed to partition the curves into straight-line segments. The method consists of two steps: (1) estimate the curvature at each point on the curve by a bending value, and (2) locate points with locally maximum bending values as corners. Since the resolution of the image is not sufficiently high, straight-line segments are further determined to sub-pixel precision. Various approaches for sub-pixel edge determination have been developed [1,3,7–9]. To achieve an appropriate performance, we have applied the method of linear interpolation of gray values [3] and statistical approaches [7]. Knowing the approximate slopes of the segments (greater or smaller than 45°), a one-dimensional linear interpolation of gray values is used to recalculate the positions of straight-line segments to sub-pixel precision. For each segment, the accuracy of its position is further improved by least-squares fitting of a line to the segment points. After that, for each line segment, its mid-point is computed. The plate dimensions D1, D2, H1, and H2 are then calculated as distances between the mid-points of corresponding lines at opposite sides of the plate. From dimensions D3 and H3, the angle φ is computed (see Eq. (1)). To further improve the results, this procedure is repeated several times and the median of the values is taken as the final result.
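To illustrate the contour-partitioning step, the sketch below splits a traced contour at high-curvature points. The turning angle between backward and forward chords is used here only as a simplified stand-in for the bending value of [14]; the function name, the support length k and the threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def split_contour_at_corners(curve, k=5, min_angle=0.8):
    """Return indices of high-curvature points on a traced contour.
    'curve' is an (N, 2) array of edge points in tracing order. The corner
    strength is the turning angle (radians) between the backward chord
    p[i] - p[i-k] and the forward chord p[i+k] - p[i]; a simplified proxy
    for the bending value of [14]."""
    pts = np.asarray(curve, dtype=float)
    n = len(pts)
    strength = np.zeros(n)
    for i in range(k, n - k):
        back = pts[i] - pts[i - k]
        fwd = pts[i + k] - pts[i]
        cosang = np.dot(back, fwd) / (np.linalg.norm(back) * np.linalg.norm(fwd) + 1e-12)
        strength[i] = np.arccos(np.clip(cosang, -1.0, 1.0))
    # keep points that are both strong enough and local maxima of the strength
    corners = [i for i in range(k, n - k)
               if strength[i] >= min_angle
               and strength[i] == strength[i - k:i + k + 1].max()]
    return corners
```

The contour points between two successive corner indices then form one candidate straight-line segment.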

Fig. 4. Left and right portions of the plate as sensed by the cameras, contour tracking directions, and straight-line segments which need to be found.
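The sub-pixel refinement and least-squares steps described above might be sketched as follows. This is a minimal threshold-crossing variant of gray-value interpolation for a near-vertical edge, shown only to convey the idea; it is not the exact method of [3] or the statistical approach of [7], and the function names, the threshold and the assumed bright-to-dark polarity are illustrative.

```python
import numpy as np

def subpixel_edge_columns(image, rows, threshold):
    """Estimate, for each given row, the column where the gray-value profile
    falls from bright (back-lit background) to dark (plate), to sub-pixel
    precision, by linear interpolation between the two pixels straddling
    the threshold."""
    samples = []
    for r in rows:
        profile = image[r].astype(float)
        below = np.nonzero(profile < threshold)[0]       # first dark pixel
        if len(below) == 0 or below[0] == 0:
            continue
        i = below[0]
        g0, g1 = profile[i - 1], profile[i]              # bright, dark neighbours
        frac = (g0 - threshold) / (g0 - g1)              # linear interpolation
        samples.append((r, (i - 1) + frac))
    return np.asarray(samples)                           # (row, sub-pixel column)

def segment_midpoint(samples):
    """Least-squares fit of a line x = a*y + b to the (row, column) samples of a
    near-vertical segment, and the mid-point of the segment on that line."""
    y, x = samples[:, 0], samples[:, 1]
    a, b = np.polyfit(y, x, 1)
    y_mid = 0.5 * (y.min() + y.max())
    return a * y_mid + b, y_mid                          # (column, row)
```

The plate dimensions are then obtained from the distances between mid-points of corresponding segments, as formalized in Eq. (1) below.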

3.5. Measurements

According to Figs. 2 and 5, the dimensions of a plate (D1, D2, D3, H1, H2, H3) can be determined solely from distances defined in the left and right


Fig. 5. The distances on the images needed for measurement and calibration.

images, provided that the two cameras are properly aligned and that the distance b between the cameras and the pixel size coefficients kx, ky in mm/pixel are known. If d1l, d2l, d1r, d2r are the distances measured from the left edge of the images, h1l, h1r, h2l and h2r are the distances between line segment mid-points, and h3l, h3r, d3l and d3r are the distances between corresponding line segment cross-sections, all expressed in pixels (see Fig. 5), the required dimensions can be calculated by using the following equations:

D1 = kx (b − d1l + d1r),    D2 = kx (b − d2l + d2r),    D3 = kx (d3l + d3r) / 2,
H1 = ky (h1l + h1r) / 2,    H2 = ky (h2l + h2r) / 2,    H3 = ky (h3l + h3r) / 2,
φ = arctan(H3 / D3).                                                          (1)
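A direct transcription of Eq. (1) might look as follows; the function name and dictionary keys are illustrative, and the angle is returned in radians:

```python
from math import atan

def plate_dimensions(m, kx, ky, b):
    """Evaluate Eq. (1). 'm' maps the pixel distances of Fig. 5 (d1l, d1r, d2l,
    d2r, d3l, d3r, h1l, h1r, h2l, h2r, h3l, h3r) to their measured values;
    kx, ky are the mm/pixel coefficients and b is the camera base distance
    in pixels."""
    D1 = kx * (b - m["d1l"] + m["d1r"])
    D2 = kx * (b - m["d2l"] + m["d2r"])
    D3 = kx * (m["d3l"] + m["d3r"]) / 2.0
    H1 = ky * (m["h1l"] + m["h1r"]) / 2.0
    H2 = ky * (m["h2l"] + m["h2r"]) / 2.0
    H3 = ky * (m["h3l"] + m["h3r"]) / 2.0
    phi = atan(H3 / D3)          # angle of Fig. 2, in radians
    return D1, D2, D3, H1, H2, H3, phi
```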

Because the coefficients kx, ky and the distance b between the cameras are not known in advance, they need to be determined before the individual plate measurements take place.

3.6. Calibration

The overall system calibration procedure consists of two stages. In the first stage, which is required only when the system is installed or maintained, the position and orientation of the two cameras and lenses are adjusted. The cameras have to be set in parallel; otherwise, the variation in plate-to-lens distance would cause errors in diameters D1 and D2, even though telecentric lenses are used. To correctly align

Fig. 6. A schematic representation of the calibration block.

the cameras, a calibration block, made of transparent plexiglass with dot patterns engraved on the opposite sides of the block, was accurately manufactured (Fig. 6). During the adjustment of the cameras, the block is placed in front of the cameras in place of the plate. Based on the captured images, the cameras are manually aligned such that the optical axes of both cameras point in the same direction. In the captured images, this situation is manifested as the back-side dots being centered with respect to the front-side dots. Moreover, the cameras are rotated around their optical axes (the z-axes) until the x- and y-coordinate axes in both images are aligned as well (Fig. 7). In the second stage, the parameters kx, ky and b are estimated. For that purpose, a plate of known dimensions, i.e. the calibration plate, was manufactured and all its dimensions were accurately determined by a non-vision technique. During system calibration, the calibration plate is placed into the working position in the same way as the plates which will be measured later. Under the assumption that the cameras are parallel, the distance b, the x-axis coefficient kx and the


Fig. 7. Image of the calibration block in the case of (a) a highly misaligned and (b) a properly aligned camera. Black dots: front side; gray dots: back side.

y-axis coefficient ky can be calculated by the following equations:

kx = (D1c − D2c) / ((d1r − d2r) + (d2l − d1l)),
ky = 2 H1c / (h1l + h1r),
b = D1c / kx + d1l − d1r,                                                     (2)

where D1c, D2c and H1c are the dimensions of the calibration plate in millimeters, while d1l, d1r, d2l, d2r, h1l, h1r are the corresponding distances in the two images, all in pixels (Fig. 5).
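Assuming the reconstruction of Eq. (2) above (the signs follow directly from applying Eq. (1) to the calibration plate), the second calibration stage amounts to a few lines of code; the function name and dictionary keys are illustrative:

```python
def calibrate(cal_dims, cal_meas):
    """Estimate the mm/pixel coefficients kx, ky and the camera base distance b
    (in pixels) from one image pair of the calibration plate, following Eq. (2).
    'cal_dims' holds the known dimensions D1c, D2c, H1c in mm; 'cal_meas' holds
    the pixel distances of Fig. 5 measured on the calibration plate."""
    D1c, D2c, H1c = cal_dims["D1c"], cal_dims["D2c"], cal_dims["H1c"]
    m = cal_meas
    kx = (D1c - D2c) / ((m["d1r"] - m["d2r"]) + (m["d2l"] - m["d1l"]))
    ky = 2.0 * H1c / (m["h1l"] + m["h1r"])
    b = D1c / kx + m["d1l"] - m["d1r"]
    return kx, ky, b
```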

3.7. System description

The automated visual inspection system consists of a Pentium II PC running under Windows NT 4.0, two built-in Matrox Meteor frame grabbers, and two SONY XC-77CE 2/3" CCD cameras with standard CCIR video output and 756 × 581 pixels of size 11 μm. The cameras are equipped with Carl Zeiss Visionmes 70/11/01 telecentric lenses. The lens diameter is 70 mm, i.e. for a 2/3" CCD sensor the field of view is 42 mm × 56 mm. The cameras are synchronized and connected to the frame grabbers. A Meilhaus ME-96 I/O PC card is used for the communication between the inspection system, acting as a slave, and the plate positioning system, acting as a master. A detail of the camera set-up in a stage of development is shown in Fig. 8a. The system has been integrated into the production line. A detail of the installed system is shown in Fig. 8b. Before measurements start, the system must be calibrated (e.g. once per week). During the calibration phase, the calibration plate has to be positioned in front of the cameras. After the system is calibrated, the measurements can start. Measurements are initiated by the plate positioning system. A measurement starts when a plate stops in front of the cameras and the start signal is asserted on the I/O card. After the dimensions are determined, an accept/reject signal is sent to the positioning system, the results are displayed on the computer monitor, and appended to a database for further statistical analysis.
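The measurement cycle just described (wait for the start signal, measure, return an accept/reject verdict, log the result) can be summarized in a short sketch. The I/O interface below is a hypothetical placeholder; the paper does not describe the Meilhaus ME-96 driver API, so all names here are illustrative.

```python
import time

class DigitalIO:
    """Hypothetical stand-in for the digital I/O card interface."""
    def start_signal_asserted(self) -> bool: ...
    def send_verdict(self, accept: bool) -> None: ...

def inspection_loop(io, measure_plate, within_tolerance, results_db):
    """Slave-side handshake with the plate positioning system (the master):
    wait for the start signal, measure the plate, return the pass/fail verdict
    and keep the result for statistical analysis."""
    while True:
        if io.start_signal_asserted():          # plate is in position
            dims = measure_plate()              # Eq. (1) measurement
            accept = within_tolerance(dims)     # compare against tolerances
            io.send_verdict(accept)             # accept/reject feedback
            results_db.append(dims)             # log for later analysis
        else:
            time.sleep(0.01)                    # poll the I/O card
```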

4. Experimental results

Several experiments were performed in order to validate the proposed machine vision system in terms of repeatability, accuracy and operational adequacy. To assess the repeatability of the measurements, an arbitrarily selected plate was kept in place and measured 20 times. The results obtained are shown in Fig. 9 and Table 1. One can notice that the highest scattering of results (Δ) occurred for the dimension D1 (Fig. 9a), namely 11 μm, although 50 μm is allowed (see Section 2). By observing the measurements of D2 (Fig. 9b), we conclude that the scattering of results is within a 10 μm range and is sufficiently small, i.e. it is less than the allowed 30 μm.

Fig. 8. The machine vision system: (a) detail from the lab, (b) detail from the factory.


Fig. 9. Repeatability of the measurements: (a) diameter D1, (b) diameter D2, (c) height H1, (d) height H2.

Table 1
Repeatability of D1, D2, H1, and H2, shown as minimum, maximum and Δ values

                          D1 (mm)    D2 (mm)    H1 (mm)    H2 (mm)
Minimum                   186.997    179.313    21.270     3.493
Maximum                   187.008    179.323    21.273     3.495
Δ = maximum − minimum       0.011      0.010     0.003     0.002
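For clarity, the quantities reported in Table 1 and in Table 2 below reduce to the following simple computations (a sketch; function and variable names are illustrative):

```python
import numpy as np

def repeatability(samples_mm):
    """Minimum, maximum and spread (Delta = max - min) of repeated
    measurements of one dimension, as reported in Table 1."""
    s = np.asarray(samples_mm, dtype=float)
    return s.min(), s.max(), s.max() - s.min()

def worst_case_errors(samples_mm, reference_mm):
    """Most negative and most positive deviation from the known
    calibration-plate value, as reported in Table 2 (WCE- and WCE+)."""
    err = np.asarray(samples_mm, dtype=float) - reference_mm
    return err.min(), err.max()
```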

The measurements of heights H1 and H2 (Fig. 9c and d) are even more reproducible. This can be explained by the fact that more edge pixels were used for determining the mid-points (segments 1 and 6 in

Fig. 4), and that only one camera was involved (left or right) to determine the distance. To estimate the dependence of the measurements on the position of the plate in the working place, the calibration plate was placed onto the conveyor belt 70 times, and measurements were obtained. In this way, the effect of positioning uncertainty introduced by the positioning system (about ±5 mm in all directions) was simulated. The results (Fig. 10), presented as worst case errors, are shown in Table 2. Observing the results, one can notice that the errors are higher than the Δ in the repeatability test. The highest positive error obtained is +25 μm for diameter D1, while the highest negative

Fig. 10. Measurement accuracy: (a) diameter D1, (b) diameter D2, (c) height H1, (d) height H2.


Table 2
The results of the worst case errors (WCE) test

          D1 (mm)    D2 (mm)    H1 (mm)    H2 (mm)
WCE−      −0.019     −0.019     −0.011     −0.010
WCE+      +0.025     +0.023     +0.013     +0.004

error is −19 μm for D1 and D2. Since the most restrictive worst case error is 30 μm for diameter D2, the highest estimated error of all dimensions is within the error limits. Therefore, all the tests confirm that the highest error never exceeds the allowed limits. To conclude, the results collected confirmed that the

measurements were clustered within a 10 μm interval, while the worst case error for all measured dimensions is smaller than 30 μm. In Fig. 11, we show some measurement results collected during the continuous operation of the system. A sequence of about 1200 measurements of plates produced on the same lathe is shown. The random scatter was mainly due to imprecise lathe tooling, and much less due to measuring uncertainty. Observing the measurements of diameters D1, D2 and angle φ (Fig. 11a, b, and e), a drift and sudden jumps can be noticed. The drift is caused by the lathe tool wearing out, while the jumps appear due to tool replacements. The drift and jumps are smaller for

Fig. 11. D1 (a), D2 (b), H1 (c), H2 (d), and φ (e) values for nearly 1200 sequentially produced plates on the same lathe.


dimensions H1 and H2 (Fig. 11c and d) than for diameters D1, D2 and angle φ. The angle φ increases due to the wearing out of the cutting tool. Moreover, the scattering of results is higher when the tool is worn out.

5. Conclusion

We described a machine vision approach to measure the dimensions of electric plates. The obtained results show that the methods and equipment were properly selected and efficiently integrated into a working system. The required precision and speed have been achieved: the system accuracy is better than 0.03 mm, and the total measuring time for one plate is about 0.3 s. Therefore, the installed system allows 100% inspection of electric plates.

Acknowledgements

The authors would like to thank the co-workers of ETA Cerkno and the Ministry of Science and Technology of Slovenia.

References

[1] H.K. Aghajan, C.D. Schaper, T. Kailath, Machine vision techniques for sub-pixel estimation of critical dimensions, Optical Engineering 32 (1993) 828–839.
[2] B. Bell, L.F. Pau, Contour tracking and corner detection in a logic programming environment, IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (1990) 913–917.
[3] D. Braggins, Achieving sub-pixel precision, Sensor Review (1990) 174–177.
[4] J.G. Campbell, F. Murtagh, Automatic visual inspection of woven textiles using a two-stage defect detector, Optical Engineering 37 (1998) 2536–2542.
[5] R.T. Chin, C.A. Harlow, Automated visual inspection: a survey, IEEE Transactions on Pattern Analysis and Machine Intelligence 6 (1982) 557–573.
[6] R.T. Chin, Automated visual inspection (1981–1987), Computer Vision and Image Processing 41 (1988) 346–381.
[7] L. O'Gorman, Sub-pixel precision of straight-edged shapes for registration and measurement, IEEE Transactions on Pattern Analysis and Machine Intelligence 18 (1996) 746–751.
[8] X. Liu, R.W. Ehrich, Sub-pixel edge location in binary images using dithering, IEEE Transactions on Pattern Analysis and Machine Intelligence 17 (1995) 629–634.


[9] E.P. Lyvers, O.R. Mitchell, M.L. Akey, A.P. Reeves, Subpixel measurements using a moment-based edge operator, IEEE Transactions on Pattern Analysis and Machine Intelligence 11 (1989) 1293–1309.
[10] J.W.V. Miller, V. Shridhar, E. Wicke, C. Griffth, Very low-cost in-process gauging system, in: Machine Vision Systems for Inspection and Metrology VII, Proceedings of SPIE 3521 (1998) 14–19.
[11] T.S. Newman, A.K. Jain, A survey of automated visual inspection, Computer Vision and Image Understanding 61 (1995) 231–262.
[12] A.D.H. Thomas, M.G. Rodd, J.D. Holt, C.J. Neill, Real-time industrial visual inspection: a review, Real-Time Imaging 1 (1995) 139–158.
[13] J.B. Zhang, Computer-aided visual inspection for integrated quality control, Computers in Industry 30 (1996) 185–192.
[14] M.J. Wang, W.Y. Wu, L.K. Huang, D.M. Wang, Corner detection using bending value, Pattern Recognition Letters 16 (1995) 575–583.

Franci Lahajnar received his BSc (1996) and MSc (1999) degrees in Electrical Engineering from the University of Ljubljana, Ljubljana, Slovenia. He is currently a PhD student working on texture analysis. His research interests are computer vision, image processing and analysis, and the development of machine vision systems and their integration in industry.

Rok Bernard received his BSc and MSc degrees in Electrical Engineering in 1995 and 1999, respectively, from the University of Ljubljana, Ljubljana, Slovenia. He is currently a PhD student working on segmentation of articulated anatomical structures by hierarchical statistical modelling. His research interests are computer vision and image processing in medicine and industry.

Franjo Pernuš received his BSc, MSc, and PhD degrees in Electrical Engineering from the University of Ljubljana, Ljubljana, Slovenia, in 1976, 1979, and 1991, respectively. Since 1976, he has been with the Department of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia, where he is currently an Associate Professor and Head of the Biomedical Image Processing Group (BIPROG). His research interests are in computer vision, medical imaging, and the application of pattern recognition and image processing techniques to various biomedical and industrial problems. He is the author or co-author of more than 100 papers published in international journals and conferences.


Stanislav Kovačič received his BS, MS, and PhD degrees in Electrical Engineering from the University of Ljubljana, Ljubljana, Slovenia, in 1976, 1979, and 1990, respectively. Since 1976, he has been with the Department of Electrical Engineering, University of Ljubljana. From 1985 to 1988, he spent four semesters as a visiting researcher in the

General Robotics and Active Sensory Perception Laboratory at the University of Pennsylvania. His research interests include computer vision, image processing and analysis, and biomedical and machine vision applications. He has authored or co-authored about 100 papers addressing several aspects of the above areas. He is currently an Associate Professor at the Department of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia, and President of the Executive Committee of the Pattern Recognition Society of Slovenia.