Computer-Aided Visual Inspection in Assembly

M. Lanzetta, M. Santochi (1), G. Tantussi
Dept. of Mechanical, Nuclear and Production Engineering, University of Pisa, Pisa, Italy
Received on January 4, 1999

Abstract
Some of the more critical aspects for the diffusion of vision systems in assembly plants are the skill required for the system set-up, the definition of algorithms and the programming phase. In this paper a new methodology is proposed to reduce the implementation time and cost by means of a computer-aided system working off-line. The designed system, named CAVIS (Computer-Aided Visual Inspection System), integrates several modules, such as product and algorithm databases, an expert system for decision support, a CAD modeller to generate synthetic images, and software design. Some of the modules are still at a development stage. The outputs are the vision device configuration and the inspection software. CAVIS has been tested on an industrial application for error detection in assembly: a new general-purpose algorithm for visual inspection is presented and results are discussed. The main features of the algorithm are consistent with the described approach: easy programming, no need for vision operator experience, and off-line preliminary estimation of parameters.

Keywords: Assembly, Visual inspection, Image analysis algorithm

1 INTRODUCTION
The importance of, and trends in, sensor technology for assembly systems have recently been investigated [1]. Among the various sensors used in current assembly plants, it seems that while most industries have installed very simple sensors, they have recently paid attention to high-technology sensors. An example of this evolution are vision systems, which are general-purpose and flexible tools and can successfully replace one of the most powerful human sensorial capabilities. They can be advantageously used in assembly operations like part recognition, part mating, gluing and sealing, and in inspection for quality control, which seems one of the most widespread applications both in the mechanical and in the electronic industries [2] [3]. Nevertheless, the critical aspects of a wider diffusion in industry, especially in case of small batches, include: layout design, selection of the lighting system, difficulties in the programming stage (still very expensive), preliminary tests on the assembly plant, and the need for on-line training on good and faulty parts. Most of these problems could be overcome by a computer-aided system able to simulate the results of the use of a vision system, to work connected to a CAD modeller, to help the user in the selection of the algorithms for the image analysis, and to avoid or reduce the programming phase. In addition such a system would reduce the on-line trial-and-error phase required to adjust the parameters to the actual environment. Such a complete system is not available, even if some attempts to solve single problems can be found in the literature. Reference data for the vision system automatically computed from the CAD model have been proposed in [4]. The CAD database is also used in [5] to predict and simulate the camera's actual view of a part. In [6] a multiscale algorithm for the visual inspection of assembled products that can be trained on synthetic images of the components is described.
In addition a method is presented to simulate and optimise the position of the light source and camera, taking into account the component material characteristics from the CAD model.

Annals of the CIRP Vol. 48/1/1999

Light modelling has also been considered to improve the effectiveness of rendering. A high-level programming language for optical inspection tasks and an automatic planning and simulation system to determine the optimal location and minimal number of sensors is presented in [7]. Lighting is one of the most important aspects in vision: a set of rules for the proper selection of the light source type and position and for camera location can be extracted from [8]. In [9] a new method for the optimised adjustment of imaging parameters is presented to improve the reliability of inspection. An application-independent scheme for model-based vision systems, structured as a knowledge base, can be found in [10]. Aiming to solve most of the mentioned critical aspects, an integrated general-purpose system named Computer-Aided Visual Inspection System (CAVIS) is under development. The system is the result of some experiences of application of vision systems in industrial plants [11] [12].

2 GENERAL DESCRIPTION
CAVIS (Figure 1) includes the following modules, which may work concurrently:

Figure 1: Diagram of the system.


Figure 2: Details of rendered good (a) and defective (b) car-lock CAD model.

Product Database (PDB)
In addition to CAD models, in this module the user industry should collect information on the requirements of the inspection system, an overview of typical defects deduced from previous experience on similar problems, and the geometric and tolerance information for the CADM. The PDB should include the coefficients to 'render' the environmental variability, such as the lighting source type, position and intensity, the object colour, material, surface conditions, reflectivity, transparency and opacity, and the effects of lubricants, dirt, ageing, etc.

Layout Design Module (LDM)
In this module, working in a CAD environment, the actual inspection cell (or a general-purpose parametric one) can be designed upon the suggestions of the DSM. The following elements are considered regarding image acquisition and lighting: the number, type and position of cameras and light sources, sensor features (resolution, linear/matrix, monochrome/colour, gain, etc.) and lens focal length. The parameters coming from the selected configuration directly affect the image rendering. To ensure a bi-directional correspondence with the CAD model, the vision and lighting devices, connected to a central unit, are represented in parametric form. This gives the system flexibility and expandability for a wide range of applications.

CAD Modeller (CADM)
The aim of this module is the generation of solid models of the components of the assembled product and of the views obtained using the configuration data coming from the LDM. Rendering is the usual way to produce realistic synthetic images (Figure 2). An alternative is the extraction of different features from boundaries [6] or regions [13].

Decision Supporting Module (DSM)
This module, structured as a rule-based expert system, helps the user in selecting the best viewing and lighting conditions and the proper algorithms. The operator's experience can also generate new rules.
For the algorithm selection the user finds a decision tree in the form of a menu with a classification of the available functions.

Algorithm Database (ADB)
At present it contains a large set of algorithms available in the literature or from developed applications. The algorithms are classified according to two criteria:
1. General-purpose high-level solutions for usual problems such as inspection, recognition, measurement, localisation, etc.
2. Tools for image analysis and preprocessing, hierarchically grouped, such as arithmetic and logic or morphological operations, statistical and blob analysis, etc., in addition to basic functions like image conversion, storing and retrieval.
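As an illustration, the parametric representation of acquisition and lighting devices described for the LDM could be sketched as below; all class and field names, and the default values, are hypothetical assumptions, not taken from CAVIS:

```python
from dataclasses import dataclass, field

@dataclass
class Camera:
    """Parametric camera model; changing a field re-parameterises the
    rendered view (illustrative fields only)."""
    position_mm: tuple = (0.0, 0.0, 500.0)   # (x, y, z) in the cell frame
    resolution: tuple = (768, 576)           # pixels (width, height)
    sensor: str = "matrix"                   # "matrix" or "linear"
    colour: bool = False                     # monochrome by default
    focal_length_mm: float = 16.0
    gain: float = 1.0

@dataclass
class LightSource:
    """Parametric light source used for rendering and layout design."""
    position_mm: tuple = (0.0, 0.0, 800.0)
    kind: str = "diffuse"                    # e.g. "diffuse", "spot", "ring"
    intensity: float = 1.0

@dataclass
class InspectionCell:
    """A cell layout: any number of cameras and lights connected to a
    central unit, all variable in parametric form."""
    cameras: list = field(default_factory=list)
    lights: list = field(default_factory=list)

# Build a minimal one-camera, one-light cell.
cell = InspectionCell()
cell.cameras.append(Camera(position_mm=(0.0, 0.0, 400.0)))
cell.lights.append(LightSource(kind="ring"))
```

Representing every device as a parameter set, rather than a fixed model, is what allows the same description to drive both the CAD rendering and the real hardware configuration.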


Programming Module (PM)
In this user-friendly environment the user can build the complete image analysis software on the basis of the suggested algorithms or his personal experience. In addition he can run the resulting software on the 2D views coming from the CADM or on real images, so that a simulated off-line test of the vision system is possible. A high-level language is used in order to record the operator's trials in a sequence of instructions. The programming language features are: modularity with custom controls, standard platform, data and image formats, protocols and communication with other commercial packages. The selection among different algorithms could also be performed automatically with an exhaustive method, trying all the possibilities with a function optimising reference parameters, such as the ratio of falsely or correctly detected defective parts to the total number.

On-line tests
An important aspect of the system is the on-line test. In the negative case a feedback on the main modules involved in this problem is necessary. The purpose of this feedback is twofold: to select a different configuration or algorithm and/or to modify the information included in the PDB.

3 AN EXAMPLE APPLICATION
The described system has been tested on a real industrial case: the semiautomatic assembly of car-locks. To detect out-of-tolerance parts, the system described in [12] has been successfully used. Concerning assembly errors, at present only a mechanical test is performed on the production line, consisting in the application of predefined forces and the measurement of the resulting lever excursion. This test is not able to detect all the possible assembly errors (Figure 3), in particular the exchange of similar parts (H, W) or their absence (K), which are detected only after the lock has been installed on the car. Therefore a visual inspection station has been located on the line, in order to allow a non-destructive recovery of materials in case of errors and to maximise the parts' visibility, according to the assembly sequence and before the deposition of lubricants. The CAVIS modules developed so far have been tested. The information in the PDB has been used for the generation of good and faulty parts by the CADM. The optimal light position has been selected in order to maximise the difference between good and defective parts, obtaining the synthetic images reported in Figure 2. Considering the assembly line layout, a top view of the car-lock on the pallet has been selected in the LDM. The peculiarities of the images have allowed the DSM to suggest the use of a new algorithm, previously stored in the ADB. The rendered images have been used for a preliminary estimation of the algorithm parameters in the PM.
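The exhaustive automatic selection mentioned for the PM above (trying every candidate algorithm and scoring it on the ratio of misclassified parts) can be sketched as follows; this is a minimal illustration with assumed names, not the actual CAVIS implementation:

```python
def exhaustive_select(algorithms, images, labels):
    """Try every candidate inspection algorithm on labelled images
    (label True = defective) and keep the one minimising the ratio of
    misclassified parts (false alarms + missed defects) to the total."""
    best_algorithm, best_ratio = None, float("inf")
    for algorithm in algorithms:
        errors = sum(1 for image, defective in zip(images, labels)
                     if algorithm(image) != defective)
        ratio = errors / len(images)
        if ratio < best_ratio:
            best_algorithm, best_ratio = algorithm, ratio
    return best_algorithm, best_ratio

# Toy example: each "image" is just a mean grey level, and dark
# images are the defective ones.
images = [10, 200, 30, 250]
labels = [True, False, True, False]
dark_rule = lambda image: image < 100      # correct rule
always_good = lambda image: False          # degenerate rule
best, ratio = exhaustive_select([always_good, dark_rule], images, labels)
```

Such a search would run off-line on the synthetic views produced by the CADM, so no production time is consumed while exploring the candidates.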

The proposed algorithm
The selected algorithm is based on the comparison between the grey-level distributions (Figure 3) in two areas, referred to in the remainder as the inspection mask, which can be selected manually by non-specialised staff with a pointing device. The reference area contains those pixels that are subject to small or no changes on images of different parts, e.g. a flat uniform surface of a part always present in the assembly. In the controlled area, changes occur in case of error. For instance, concerning the exchange of part H, a bright metal surface is compared with a dark area on the defective part. When different lots are assembled, or in case of new products, the corresponding inspection masks are loaded. The algorithm is divided in three steps (histograms of Figure 3):
1. For every image, the mean ImR and the variance s²R of the empirical grey-level distribution in the reference area are calculated.
2. ImR is used to evaluate the percentage nC of pixels in the corresponding controlled area with grey level IC lower than a threshold It, calculated as It = c1 × ImR.
3. The part is considered as defective if nC > c2 × nTOT, where nTOT is the number of pixels in the controlled area.
The algorithm parameters are: the position and dimensions of the inspection mask and the coefficients c1 and c2. The easiest way to increase the reliability is to increase the camera resolution or to zoom on the areas with defects, in order to increase nTOT. In case the observed assembly is subject to positioning errors, the window size should be reduced proportionally. The histogram parameters extracted from the reference area can also be monitored for further controls on positioning, cameras and lighting. The coefficients c1 and c2 only depend on the light intensity distribution. To optimise the off-line layout design and to preliminarily estimate the less and more changing areas (for the inspection mask) and the algorithm parameters c1 and c2, the environmental conditions can be simulated on the synthetic images using the different rendering parameters. For fine-tuning, real images are necessary.
The coefficients c1 and c2 can be calculated with two methods. If a theoretical grey-level distribution can be assumed, the position and dispersion parameters μ and σ can be estimated with a predefined confidence limit. In this example, a Gaussian distribution is assumed for the reference and controlled areas of a good part and a bimodal distribution for the controlled area of a defective part. Consequently the fiducial lower limit of μR and the upper limit of σR in the reference area can be estimated with a probability level P1. These limit parameters can be used to estimate, as a function of c1, the probability pR that a pixel value IC in the reference area is below the threshold It in the worst case. A similar operation is performed on the controlled area for defective assemblies. In the example, the bimodal distribution is split in three parts: the bright, the dark and the intermediate zone.
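The three steps above can be sketched in code; the window-coordinate convention and the NumPy-based formulation are illustrative assumptions, not the original implementation:

```python
import numpy as np

def inspect(image, ref_window, ctrl_window, c1, c2):
    """Classify an assembly image as defective (True) or good (False)
    using the reference/controlled window comparison described above.
    Windows are (row, col, height, width) tuples over a 2D grey image."""
    r, c, h, w = ref_window
    reference = image[r:r + h, c:c + w].astype(float)
    r, c, h, w = ctrl_window
    controlled = image[r:r + h, c:c + w].astype(float)

    # Step 1: mean (and variance, usable for monitoring the set-up)
    # of the grey-level distribution in the reference area.
    i_mr = reference.mean()
    s2_r = reference.var()

    # Step 2: threshold derived from the reference mean, then the
    # number of controlled pixels darker than the threshold.
    i_t = c1 * i_mr
    n_c = np.count_nonzero(controlled < i_t)

    # Step 3: defective if too many controlled pixels fall below
    # the threshold.
    n_tot = controlled.size
    return n_c > c2 * n_tot
```

Because both windows are taken from the same image, a global change in illumination shifts the threshold and the controlled pixels together, which is what makes the test robust to environmental drift.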
Assuming a Gaussian distribution in the dark zone, the corresponding fiducial higher limits μD and σD can be determined for a probability level P1, and consequently the theoretical probability pD that a pixel in the dark zone is below the threshold It. The probability that a pixel in the whole controlled area is below the threshold It in the worst case is pC = pD × nD / nTOT, where nD is the pixel number in the dark zone; pC is also a function of c1. If the hypothesis on the theoretical distributions is not applicable, the probabilities pR and pC can be obtained experimentally from images. In both cases the correct- and the defective-assembly curves pR and pC are obtained from several images. The highest correct and the lowest defective curves are displayed in Figure 4.

Figure 3: Correct and faulty parts in a car-lock. The inspection mask with the reference (R) and controlled (C) windows for part H and grey-level histograms.

c1 is selected in order to maximise the distance between the two curves, to increase the algorithm reliability. c2 is selected in the range A-B between the curves pR and pC for the selected value of c1. A lower value of c2 is to be preferred, in order to reduce the possibility of assessing a defective part as good.
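The experimental derivation of the pR and pC curves, and the selection of c1 and c2 from them, could be sketched as follows; this is a minimal illustration, and placing c2 at a fixed fraction of the A-B range is an assumption rather than the paper's rule:

```python
import numpy as np

def below_fraction(window, threshold):
    """Fraction of pixels in a window with grey level below threshold."""
    return np.count_nonzero(window < threshold) / window.size

def estimate_curves(good_pairs, bad_pairs, c1_values):
    """For each candidate c1, the highest correct-assembly value pR and
    the lowest defective-assembly value pC over the training images
    (the worst cases, as in the text). Each pair is the
    (reference_window, controlled_window) of one image."""
    p_r = [max(below_fraction(ctrl, c1 * ref.mean())
               for ref, ctrl in good_pairs) for c1 in c1_values]
    p_c = [min(below_fraction(ctrl, c1 * ref.mean())
               for ref, ctrl in bad_pairs) for c1 in c1_values]
    return np.array(p_r), np.array(p_c)

def select_coefficients(c1_values, p_r, p_c):
    """c1 maximises the distance pC - pR between the curves; c2 is
    taken inside the range between them, biased towards the lower
    curve to avoid accepting defective parts as good."""
    i = int(np.argmax(p_c - p_r))
    c2 = p_r[i] + 0.25 * (p_c[i] - p_r[i])   # nearer the lower curve
    return c1_values[i], float(c2)
```

Running this on a labelled training set reproduces the two worst-case curves of Figure 4 and returns an operating point between them.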

Results and discussion
Tests have shown that the designed vision system with the selected algorithm is able to detect all the possible assembly errors for the car-lock shown in Figures 2-3, and to point out which one has occurred. Regarding the algorithm reliability, in the example (Figure 4) the coefficients c1 = 0.8 and c2 = 0.15 have been finally determined from a set of one hundred real images with a probability level P1 = 0.9995. The consequent algorithm reliability is at least P1².

Figure 4: Curves of pR and pC as a function of the parameter c1.

The described method, which can also be used as a stand-alone inspection algorithm outside CAVIS, has the following advantages:
- Inspection is performed without a template image, because both the reference and the controlled windows belong to the image being examined. If contiguous windows are selected, they are influenced in the same way by the ever-changing environmental conditions, such as lighting or surface reflectivity due to lubricants or dirt.
- The algorithm can be used both in 2-D and in 3-D problems: it is not necessary that the observed areas belong to the same plane; it is only necessary that the projection of the observed zones in the image does not change.
- This method does not require vision system calibration or optical distortion correction, because no absolute measurement is necessary.

The algorithm effectiveness, reliability and consequently its applicability must be assessed in the field using real data, even if a preliminary estimation can be done on synthetic images. In fact matching algorithms always require real data to assess their applicability; on the other hand, they are usually faster and easier to implement.

4 CONCLUSIONS
The described system represents a new development opportunity for industrial vision, aiming to solve the more relevant problems for its diffusion by integrating several solutions such as off-line design and programming (LDM and PM). The simple algorithm proposed does not require expert staff for implementation, thanks to self-explanatory programmable parameters; a reliability estimation method is also provided in case the theoretical method is applicable. Further work to improve CAVIS involves different aspects, including:
- Hardware design to increase the system flexibility, through 'plug and play' peripherals, or through a bidirectional link between simulated configuration parameters and real devices.
- High-level languages and interfaces, for easier programming and for more general tasks.
- Further development of the expert system for the optimal configuration and algorithm selection.

ACKNOWLEDGEMENTS
The authors wish to thank Ing. Carlo Scarabeo for his contribution to this study and Atoma Roltra S.p.A. Magna Group for providing the material and their technical support. The technical staff of the Department of Mechanical, Nuclear and Production Engineering is acknowledged. The research work has been supported by Italian MURST.

REFERENCES
[1] Santochi, M., Dini, G., 1998, Sensor Technology in Assembly Systems, Annals of the CIRP, 47/2:1-22.
[2] Tönshoff, K., Janocha, H., Seidel, M., 1988, Image Processing in a Production Environment, Annals of the CIRP, 37/2:579-590.
[3] Feldmann, K., Krimi, S., 1999, Alternative Placement Systems for Three-Dimensional Circuit Boards, Annals of the CIRP, 48/1:23-26.
[4] Weck, M., Etscheidt, K., 1993, Avoiding Teach-In by Using CAD-Data for Model-Based Recognition of Complex 3D-Objects, Product. Eng., 1/1:167-170.
[5] Sallade, J.S., Philpott, M.L., 1997, Synthetic Template Methodology for CAD Directed Robot Vision, Int. Jou. of Mach. Tools and Manufacture, 37/12:1733-1744.
[6] Khawaja, K.W., Maciejewski, A.A., Tretter, D., Bouman, C.A., 1996, Camera and Light Placement for Automated Assembly Inspection, Int. Conf. on Robotics and Automation, Minneapolis, Minnesota: 3246-3252.
[7] Schuler, H., Hsieh, L.-H., Seliger, G., Crampton, S., 1994, A Flexible Laser Scanner System for Online Process Monitoring, Int. Symp. on Manufacturing Science and Technology for the 21st Century, Tsinghua Univ., Beijing, China: 1-6.
[8] Schroeder, H.E., 1986, Practical Illumination Concept and Technique for Machine Vision Applications, in Robot Sensors - Vision, Springer Verlag, 1:229-244.
[9] Pfeifer, T., Wiegers, L., 1998, Adaptive Control for the Optimized Adjustment of Imaging Parameters for Surface Inspection Using Machine Vision, Annals of the CIRP, 47/1:487-490.
[10] Roth, N., Günther, K.G., Rummel, P., Beutel, W., 1989, Model Generation for Sensor-Guided Flexible Assembly Systems, Annals of the CIRP, 38/1:4-8.
[11] Lanzetta, M., 1998, The Quality Control of Critical Assembly Components: Visual Inspection of O-rings, 2nd Int. Conf. on Planned Maintenance, Reliability and Quality, Oxford, UK.
[12] Lanzetta, M., Tantussi, G., 1999, Vision System Calibration and Sub-pixel Measurement of Mechanical Parts, 5th Int. Conf. on Advanced Manufacturing Systems and Technology, Udine, Italy, Springer-Verlag, to be published.
[13] Milutinovic, D.S., Milacic, V.R., 1987, A Model-Based Vision System Using a Small Computer, Annals of the CIRP, 36/1:327-330.