Image Processing in a Production Environment




Keynote Paper

H. K. Tönshoff (1), H. Janocha, M. Seidel

Advances in the semiconductor industry have had a tremendous impact on the field of image processing. Within a few years industrial image processing has found its way from crude binary systems that were extremely sensitive to changing light conditions, to rugged industrial systems processing grey level images at rates of several hundred millions of instructions per second. Image processing techniques that once took several minutes to perform under laboratory conditions can now be processed in real time in industrial applications. This paper will briefly discuss the development of image processing in the past and will then give a short introduction to image processing techniques, so that the unfamiliar reader may be able to appreciate the potential that image processing might have in his area of research. The emphasis of this contribution, however, will be on current research activities and on the different industrial applications, such as quality control, material flow and robotics, to mention but a few. Finally an outlook is presented on future trends and developments in industrial image processing.

1. Development of Image Processing

Image processing started about four decades ago. Some of the first applications were automatic character recognition (OCR) for purposes of mail sorting and remote sensing. Much work was also done in the biological and medical fields as well as in the material sciences. The first systems were binary, i.e. they could only distinguish between black and white, and therefore lighting had to be precisely controlled. Even up to a few years ago, most of the systems found in industry were binary.

The one-way, top-to-bottom data flow shown in Fig. 1 can be supplemented by data paths in the opposite direction, where information from later steps is used to adapt to situations that require different preprocessing. The outermost loop, for example, could be used to alter the illumination or the camera position if the analysis does not provide sufficient information. Research into this field has only recently begun and will be strongly linked to advances made in the field of artificial intelligence.

In recent years grey level processing, where the light intensity of each pixel (picture element) is represented by 8 bits, has led to new applications, found mainly in the quality control and inspection area. The main reason for the rapid growth of industrial image processing is the dramatic increase in processing power of today's image processing hardware. Many operations, like image filtering, are now performed by special hardware modules that can be freely configured in order to maintain the flexibility of the system. The development of CCD cameras has also contributed to the acceptance of industrial image processing, since CCD cameras are less sensitive to the adverse industrial environment than tube cameras.


2. Image Processing Methodology

When discussing image processing in general, one should not forget that optical methods, as well as digital computers, can be employed for image processing. Optical image processing has brought about the possibility of adding, subtracting, multiplying, storing and even performing 2-D Fourier transforms of images using optical devices such as lenses, mirrors and beam splitters [58]. The field of optical image processing is receiving much attention, since all processing is done in parallel at the speed of light. However, despite this tremendous advantage in speed, optical processing has the disadvantage of requiring complex and precise optical systems, and, to date, the sensitivity of such systems has limited their use in industry. Optical image processing is nevertheless a rapidly growing field of research with great potential for simplifying image pre-processing. For industrial applications, image processing by digital computers is by far the more important area and shall hereafter simply be referred to as image processing.

The standard methodology of image processing is shown in Fig. 1. In the process of image sampling, a scene from the environment is projected onto a one- or two-dimensional sensor, which digitizes and stores the image. Much attention has to be paid to the lighting, in order to extract the maximum useful, and the minimum useless, information from the scene. Although grey level systems are less dependent on precise and uniform lighting than binary systems, image processing often stands or falls by the methods used for illumination. Image sampling is followed by image conditioning, where frame averaging or low pass filtering can be employed to improve noisy images, and by feature enhancement, where edge and contour filters are applied in order to highlight the important features of the image. The next steps are segmentation of the image and the extraction of information relating to the segments, such as dimensions, areas and so forth. The last step is the analysis of the features, and some action arising from this information.

With contributions by other CIRP members
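As a rough illustration of this methodology, the stages can be chained as simple array operations. The following is a minimal, hypothetical sketch in Python/NumPy; the concrete stage implementations (a 3x3 mean filter for conditioning, a fixed threshold for segmentation, an area count for analysis) are illustrative choices, not prescriptions from the paper:

```python
import numpy as np

def condition(img):
    # Image conditioning: 3x3 moving-average low-pass filter (zero-padded)
    padded = np.pad(img.astype(float), 1)
    return sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def segment(img, threshold):
    # Segmentation: binary threshold produces a mask of "object" pixels
    return img > threshold

def analyse(mask):
    # Feature extraction/analysis: area (pixel count) of the segmented region
    return int(mask.sum())

# A bright 8x8 square on a dark background stands in for the sampled scene
scene = np.zeros((16, 16))
scene[4:12, 4:12] = 200.0
area = analyse(segment(condition(scene), 100))
```

After the mean filter softens the square's edges, thresholding at 100 keeps the interior plus the softened border, so the measured area (60 pixels) is slightly smaller than the original 64-pixel square, which is exactly the kind of effect a real system must calibrate for.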

Annals of the CIRP Vol. 37/2/1988


Image Sampling

Images can be sampled by sensors working in the infrared, visible light or X-ray range of electromagnetic radiation, as well as by sensors that work in the acoustical domain, for example ultrasonic image processing for welding seam inspection. For industrial applications, the visible range of light is the most important one, although X-ray techniques are gaining increasing interest. Examples include checking castings for shrinkage faults or checking complete assemblies for correct location of parts.

[Fig. 1 block diagram: environment → image sampling → image conditioning → feature enhancement → segmentation → analysis → action]

Fig. 1: Image processing methodology

In industrial applications, tube cameras have mostly been replaced by CCD (charge coupled device) cameras. Problems with tube cameras, such as sensitivity to magnetic fields and to vibration, ageing and possible destruction of the tube by excessive light intensities, are not encountered with CCD cameras. Today's resolution of standard CCD TV cameras is typically 640 x 512 non-square pixels, but the trend is towards 1024 x 1024 square pixels with identical X and Y resolution. Higher resolutions are unlikely, since optics for such systems would be too expensive (often, the optical system is the limiting factor in CCD metrological applications). One-dimensional transducers are available in sizes ranging from 128 to 5000 pixels in a row. These sensors are often used for scanning a part moving on a conveyor belt perpendicular to the line array. This setup is used for low cost systems or for applications requiring extreme resolution of 5000 pixels in each row. Stationary objects can be scanned by moving the line


array relative to the optical system. Additionally, arrays with single pixel access are currently under development.

Image Conditioning

Noise reduction and contrast enhancement are commonly used techniques in image conditioning. Noise reduction can be achieved by several methods, the most efficient being the averaging of several frames, as shown in Fig. 2. Frame averaging is especially useful for noise reduction in X-ray images; however, it is also used for noise reduction in some of the newest home video cassette recorders.
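The effect of frame averaging can be illustrated numerically: averaging N frames with independent additive noise reduces the residual noise standard deviation by roughly a factor of the square root of N. A small synthetic sketch (the scene, noise level and frame count are arbitrary assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((64, 64), 100.0)     # ideal, noise-free image

def noisy_frame():
    # each captured frame carries additive Gaussian sensor noise (sigma = 20)
    return truth + rng.normal(0.0, 20.0, truth.shape)

single = noisy_frame()
averaged = np.mean([noisy_frame() for _ in range(100)], axis=0)

# residual noise drops roughly with the square root of the frame count
err_single = np.std(single - truth)   # approx. 20
err_avg = np.std(averaged - truth)    # approx. 2 for 100 frames
```

This is why X-ray inspection, with its inherently noisy detectors, benefits so strongly: the scene is static, so many frames can be accumulated without motion blur.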

Fig. 2: Noise reduction by frame averaging (single distorted image; average of several images; average of several hundred images)

Noise reduction can also be achieved by substituting a pixel value by the weighted sum of its neighbours, a method referred to as moving average. This method can also be expressed as a 2-D convolution of the pixel neighbourhood with an array of weighting factors (also called a kernel). Convolutions are extremely important in image processing and can be used for lowpass, bandpass and highpass filtering, as well as for edge detection. Another important technique is histogram equalization for contrast enhancement, i.e. equalizing the distribution of grey levels by re-assigning new pixel values to intensities, or by manipulation of gain and offset of the A/D converter. If the image has to be transformed to binary form, a histogram may also be advantageous for finding the optimum threshold level. Another image conditioning technique for metrological applications is coordinate transformation, which may be required in order to compensate for distortions from the camera's optical system.

Feature Enhancement and Extraction

The next stage is feature enhancement and extraction. A great variety of techniques can be employed to highlight useful information, the most common technique being edge filtering by convolution to detect contours or edges. The type of filter can be easily changed by altering the convolution kernel, i.e. the array of weighting factors. Standard filters for edge detection are the Roberts, Sobel and Laplace filters. The Roberts filter is a first derivative filter and is sensitive to noise, due to its small kernel size of 2 x 2. The Laplace filter is omnidirectional and based on the second spatial derivative of the pixel intensity levels, whereas the Sobel is a directional first derivative filter. Although several other filters do exist, the Sobel filter is preferred for edge detection since it is robust, with little sensitivity to noise at an affordable kernel size of 3 x 3.
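As an illustration of edge filtering by convolution, the sketch below applies the standard 3x3 Sobel kernels to a synthetic step edge. The hand-written `convolve2d` is a plain, slow reference implementation for clarity, not production code:

```python
import numpy as np

def convolve2d(img, kernel):
    # direct 2-D convolution over the 'valid' region;
    # the kernel is flipped, per the definition of convolution
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(img[r:r + kh, c:c + kw] * k)
    return out

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

# a vertical step edge: dark left half, bright right half
img = np.zeros((8, 8))
img[:, 4:] = 10.0

gx = convolve2d(img, SOBEL_X)        # strong response along the edge
gy = convolve2d(img, SOBEL_Y)        # zero: no horizontal edges present
magnitude = np.hypot(gx, gy)
```

Because the Sobel pair is directional, a vertical edge shows up entirely in `gx` while `gy` stays zero; combining both into a gradient magnitude gives an orientation-independent edge map.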
Another method of feature enhancement is texture analysis, which describes the pattern of a region by location and distribution of grey levels in a neighbourhood. Texture can be used to detect defects in textured materials like fibre-glass, cloth or wood surfaces.

Arriving at a list of segment features is often the final aim, since a tremendous amount of iconic image data is reduced to a much smaller feature representation. Special hardware (blob processors) is available to deal with problems where segment features have to be computed for several hundred small segments in real time.

Analysis and Classification

The last stage in image processing is analysis of the features or classification of parts, which leads to some form of action, such as controlling a robot to pick a part. The methods of analysing and interpreting features are so diverse and specific to the particular application that structuring these processes is futile. In the analysis stage, dimensions of segments can be checked, areas can be computed, surface defects can be detected, fruit can be checked for colour, or assemblies may be inspected for all components being at their correct location. Classification of parts is necessary for many material handling applications, and for processes where actions depend on the part or class of parts, for example: detect all truck rear axles on a conveyor belt and spray-paint them red, paint all other axles black. During classification, features of the image are compared to the feature lists of all previously recorded parts. Another way of recognition is template matching, where an image of a part is directly compared to a previously recorded template by means of correlation [61]. The cross-correlation function can be used advantageously in cases where peaks have to be detected in the presence of unknown background intensities or in the presence of noise [64]. Correlation can also be used for micro-positioning by correlating the speckle pattern of a surface point with an initially recorded speckle pattern. Classification by correlation requires special hardware for fast correlation.
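Template matching by normalized cross-correlation can be sketched as follows. The brute-force search is purely illustrative (real systems use the special correlation hardware mentioned above), and the scene and template are synthetic random data; normalization against the local window mean is what makes the score insensitive to unknown background intensity:

```python
import numpy as np

def match_template(image, template):
    # exhaustive normalized cross-correlation; returns best (row, col) and score
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(1)
scene = rng.normal(0, 1, (20, 20))
template = scene[6:11, 9:14].copy()     # previously recorded part image
pos, score = match_template(scene, template)   # recovers (6, 9), score near 1
```

The score peaks at 1.0 where the template exactly matches, which is how a detection threshold (e.g. accept matches above 0.9) can be set independently of the image's absolute brightness.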
Feature matching requires efficient search strategies, especially when several hundred parts are involved.

Morphology

Morphology is a method that can be employed at several of the stages discussed so far. It can be described as a mathematical set of nonlinear neighbourhood operations with properties that are well understood and documented. Morphology represents a well structured method of image processing that deals directly with shapes and, therefore, is less baffling to the beginner in image processing than many other methods, which appear to be somewhat arbitrary. The most basic morphological operations are erosion and dilation. Erosion of white leaves a pixel white only when all neighbouring pixels are white. Dilation of white sets a pixel to white if any of its neighbours is white. An erosion followed by a dilation is called opening and has the effect of breaking small bridges between areas. Dilation followed by erosion has the opposite effect of closing regions or lines with missing pixels. The power of morphological methods is illustrated in Fig. 3a, where tablets have to be located. Segmentation would be difficult since the tablets touch and form one region. Multiple erosions solve the problem efficiently, as shown in Fig. 3c.
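The basic operations, and the bridge-breaking effect of opening, can be sketched directly on a binary array. A 3x3 structuring element is assumed here:

```python
import numpy as np

def _shifted_windows(mask):
    # the nine 3x3-neighbourhood views of the mask, zero-padded at the border
    p = np.pad(mask, 1, constant_values=False)
    return [p[i:i + mask.shape[0], j:j + mask.shape[1]]
            for i in range(3) for j in range(3)]

def erode(mask):
    # erosion: a pixel stays set only if its whole 3x3 neighbourhood is set
    return np.logical_and.reduce(_shifted_windows(mask))

def dilate(mask):
    # dilation: a pixel becomes set if any pixel in its neighbourhood is set
    return np.logical_or.reduce(_shifted_windows(mask))

def opening(mask):
    # erosion followed by dilation: removes thin bridges between regions
    return dilate(erode(mask))

# two 5x5 blobs joined by a 1-pixel-wide bridge (like touching tablets)
mask = np.zeros((7, 13), dtype=bool)
mask[1:6, 1:6] = True
mask[1:6, 7:12] = True
mask[3, 6] = True                      # the bridge
opened = opening(mask)                 # blobs restored, bridge removed
```

After opening, the two blobs come back at their original size while the one-pixel bridge is gone, so a subsequent segmentation step sees two separate regions.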

[Fig. 3 panels: a) touching tablets; b) unsuitable segmentation; c) result of multiple erosions]

Since many of the techniques are too complex and time consuming to be applied over the entire image in real time, the concept of dealing only with areas of interest (AOIs) of the image is often used to reduce processing time, when the approximate location of an image feature is known.

Segmentation

Most applications require segmentation, i.e. partitioning the image into areas of uniform content. Features for segmentation can be as diverse as areas within closed contours, grey levels, textures, colour or motion. From the shape of the segments further information can be extracted, such as areas, the largest distance between contour points and moments of inertia.


Fig. 3: Segmentation problem solved by morphology

3. Current Image Processing Research

3.1 Image Processing Techniques

The whole research area of image processing techniques is so broad that it can not be covered here completely. Only some topics which are of importance to production engineering will be briefly discussed, and the reader will be referred to literature that covers these topics in greater depth. A good introduction to optical image processing for inspection and automation is given by [E] and by [65]: they discuss the basic concepts and components required for addition, correlation and Fourier analysis of images by optical methods. Naturally, most of the work is done within the field of physics. One of the few production engineering faculties investigating the use of optical processing techniques is that at the University of Twente, Netherlands. Optical preprocessing using interference, holography, speckle methods and moiré topography is studied for use in robot vision.

Many publications on image processing stress the importance of the optical system and of the lighting methods. Recently some research projects were initiated to investigate the potential of dynamic and adaptive lighting in general image processing. Edge detection techniques for industrial vision are discussed by [15]. There exists a great number of papers which evaluate various edge filtering techniques by large kernel convolution, like the basic paper by [37]. However, these filters are too time-consuming for real time implementation. For a good source of information, the reader is referred to the IEEE Transactions on Pattern Analysis and Machine Intelligence, where new edge detection techniques are discussed in almost every issue. A good introduction to morphological methods is given by [69] and, with a more mathematical approach, by [19]. Morphological methods are well documented, and offer increasing potential as special modules for real time morphology become available. Applications which employ morphology are discussed by [17].

Fourier analysis and its application to machine vision is discussed by [59]. Image filtering in the Fourier transformed spectral domain can be faster than complex convolution filters based on large kernels. Analysis of image contents in the spectral domain is discussed in [44]. Subpixeling techniques are used to improve the measurement resolution of industrial metrology applications. Under certain circumstances a resolution 20 times as high as the pixel size can be achieved; however, these techniques require extremely fine tuning to the particular application. Several subpixeling techniques, and their limitations, are discussed by [43] and by [74]. New subpixeling techniques are frequently introduced in the IEEE Transactions on Pattern Analysis and Machine Intelligence, hereafter referred to as PAMI. Image sequence processing is a relatively new area of research. Some examples of applications are found in medical diagnosis (detecting heart defects from X-ray sequences), automatic tracking of measurement spots during car crash tests and 3-D reconstruction from image sequences of a moving object [42]. Industrial applications are not foreseen within the next years. An area where production engineering is strongly involved in the development of image processing techniques is workpiece recognition and classification. An excellent source of information concerning this area is again PAMI. Model based representation and analysis of images is discussed by [20]. The advantages of using probability methods for part recognition are discussed by [54]. Lines, corners and circular elements are used for representation of parts in the learning phase. In the analysis phase the most distinctive features in the image are determined and a heuristic search is started, with an assumption made about the location of the next feature. If this feature is occluded or hidden by shadows, the probability of correct identification decreases, but the part can still be recognized if the next feature is found. A problem in recognition is perspective distortion due to tilted parts. These distortions lead to poor probability of recognition since features are not found in their normal position. To our knowledge, no methods of recognition that can handle tilting of more than a few degrees have been reported so far.

3.2 Hardware Research and Development

Due to the growth of the vision industry, more attention has been paid to the requirements of this industry, leading to the first semiconductor devices being designed especially for image processing. Some examples are the TMC 2301 for real time geometry transformation, the AD 9502 for image digitization, fast video RAMs and the new chip family of LSI Logic, which can perform 8x8 convolutions in real time. In the last few months several suppliers have offered new modules for their systems based on LSI chips, thereby greatly enhancing the capabilities of their systems. New concepts and architectures for parallel and pipelined processing, for example systolic arrays and wavefront arrays [16], [31], have had an impact on image processing research, but industrial implementation can not be expected within the next few years.
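One common flavour of subpixeling fits a parabola through the sampled peak of an edge-gradient profile; the vertex of the parabola gives the edge position with sub-sample resolution. A minimal sketch (the Gaussian "edge response" with its true peak at 4.3 is fabricated test data, not from the paper):

```python
import numpy as np

def subpixel_peak(profile):
    # fit a parabola through the maximum sample and its two neighbours;
    # the parabola's vertex gives the peak position between the samples
    i = int(np.argmax(profile))
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    offset = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return i + offset

# sampled, blurred edge gradient whose true peak lies at x = 4.3
x = np.arange(10)
profile = np.exp(-0.5 * ((x - 4.3) / 1.5) ** 2)
pos = subpixel_peak(profile)           # close to 4.3, far below 1-pixel error
```

The integer argmax alone would report 4; the interpolation recovers the fractional position to within a few hundredths of a pixel in this clean case, which is why noise and precise calibration dominate the practical limits mentioned above.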

The main requirements for industrial vision systems are processing power, flexibility, software support and cost effectiveness, and few suppliers can meet all of these requirements. Real time image filtering, for example, requires special hardware, which limits flexibility. Flexibility, however, is indispensable for cost effective development of new applications. Systems that are designed to make exclusive use of relatively unified approaches, such as morphology or template matching, can cope with less hardware flexibility whilst still remaining powerful [56], [55]. A program aimed at developing image processing hardware is sponsored by the German Ministry for Research and Technology. Between 1985 and 1987, seven industrial suppliers of vision systems and seven research institutes developed specifications regarding buses, architectures and communication links for a family of image processing modules. Specialized modules, for example for preprocessing and for image segmentation, and software tools are currently being developed in accordance with those specifications.

Another project, entitled PIPE, is in progress at the National Bureau of Standards [28] in Washington, DC. PIPE is a hardware implementation of Tanimoto's hierarchical cellular logic (HCL) and is intended for pipelined processing during the early stages of image processing. PIPE consists of a sequence of identical modules that can perform a number of neighbourhood operations. One of the exceptional features of the system is the ability to link modules via forward, recursive or backward paths. Neighbourhood operations can thereby be performed in the spatial, temporal or joint spatial-temporal domain, such as three-dimensional convolution operations for detecting edges with a specified orientation, moving in a given direction. PIPE is complemented by the ISMAP device for feature extraction (generation of iconic images from feature lists). The powerful and versatile concept of PIPE is ideally suited to low level real-time image processing.

3.3 Stereo Vision and Range Sensing

Due to the enormous potential of 3-D techniques for analysing, classifying and locating parts, great effort has been spent on the development of distance measuring techniques. Some of these techniques, such as shape from shading or shape from texture gradient, are quite exotic and not sufficiently reliable. Only the techniques with industrial potential are discussed here. For a good overview of distance measurement techniques, the reader is referred to [62] and [26]. A relatively new method of distance measurement employs a laser spot which is projected onto the surface of an object. Distance is computed from the amount of blurring of an out-of-focus imaging system. The method is accurate and precise enough for industrial application [32], but measures distance only at one spot. Relatively new is an extension of this method, which detects distance in the entire image by measuring the depth of field gradient over the image from the amount of blurring of two out-of-focus images [48]. Interestingly, this method has been known for some time to be used by the human visual system (the focal length of the eye is modulated by up to one dioptre at about 2 Hz). The method avoids many of the shortcomings of other techniques, such as occluded areas or matching of images. Accuracy is comparable to stereo vision.


In industry, triangulation is the most widely used technique for range detection, since it is robust, cheap and fast. With a known base length and two known base angles in a triangle, the lengths of the other sides can be calculated. Active triangulation techniques employ a light source to project a point onto a surface, as shown in Fig. 4a. Distance is calculated from the location of the image point on the detector. Commercial triangulation sensors offer resolution down to the µm range. Triangulation can also be used with a line that is projected onto a surface, resulting in a height profile as shown in Fig. 4b. If the line is sequentially scanned across the surface, the complete surface profile can be extracted, as shown in Fig. 5. Scanning requires approximately 10 seconds, which is too slow for the majority of industrial applications. Passive triangulation is also referred to as stereo vision. This method evaluates the difference between images taken by two cameras spaced apart. The main problem is to find the matching features in the images, for example when two holes are seen in one image and only one hole in the other image. A priori information about the scene can often help to solve matching problems, for example information about corners in a scene [23]. Another example is [20], where additional information about buildings is used to match edges found in aerial photographs.
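The triangle relation behind active triangulation reduces to the law of sines: with the base (source-to-camera distance) and the two base angles known, the distance from the source to the illuminated surface point follows directly. A minimal sketch, with arbitrary example values for base length and angles:

```python
import math

def triangulation_distance(base, alpha, beta):
    # base:  distance between light source and camera
    # alpha: base angle at the light source (radians)
    # beta:  base angle at the camera (radians)
    # The third angle sits at the illuminated surface point;
    # the side opposite beta is the source-to-point distance.
    gamma = math.pi - alpha - beta
    return base * math.sin(beta) / math.sin(gamma)

# e.g. a 0.1 m base with 60 deg and 70 deg base angles
d = triangulation_distance(0.1, math.radians(60), math.radians(70))
```

In a real sensor, beta is not measured directly but derived from where the imaged spot lands on the detector, which is why the achievable resolution depends on the pixel (or subpixel) resolution of that position measurement.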


In time-of-flight measurement with light, the time of flight is in the pico-second range. Measurement error can be reduced by averaging several samples (typically 10). Results look promising, as shown in Fig. 6 from [41], where a camera image, a range image and a range map of a telephone are shown.

[Fig. 6 label: range map, height and intensity, 300 x 300 pixels taken in 10 seconds]

Fig. 4: Distance measurement by triangulation (a: point triangulation; b: line triangulation)

[Fig. 5 panels: photo of a hydraulic suspension part; image of the lines used for triangulation; range map of the part]

Great interest has also been shown in the automatic analysis of moiré and holographic fringe patterns [41], [52]. The main disadvantage of distance measurement by the moiré technique is the difficulty of detecting whether the distance difference between two fringe lines is positive or negative. By moving the object or one of the moiré gratings between the sampling of two images, the direction of change can be determined [6], and the moiré technique can be used for absolute distance measurement of continuous surfaces. Figure 7 shows a contour moiré pattern on a turbine blade (taken from [6]).

3.4 Image Processing and CAD

Lately, the possibility of linking image processing with CAD is being investigated, with the aim of making existing CAD information available to the vision process. Since CAD is a tool that strongly involves visual techniques, many similarities between CAD and vision would be expected. However, some fundamental differences exist due to the different purposes of the two fields.


Fig. 5: Surface measurement by triangulation

Applications of stereo vision in industry are few, and are only found where matching of the images is no problem. Examples are detecting the windshields of cars or finding the leads of electrical components [12]. In the latter example, only one camera is used while the workpiece is rotated by a known amount. This leads to a simpler and much smaller design. Occlusion of the light rays, as indicated in Fig. 4a, and feature matching are the main problems associated with triangulation techniques. In [13] a system is described where three cameras are used in a passive triangulation system, to overcome problems due to ambiguities in stereo matching. Another method of range finding is time-of-flight measurement, either of acoustic or of light waves. The time of flight of acoustic waves can easily be measured; however, spatial resolution is extremely poor. Light waves offer nearly infinite spatial resolution, at the disadvantage of having to measure time of flight in the pico-second range.


Fig. 6: Time-of-flight distance measurement (with computer generated perspective view)


Fig. 7: Moiré pattern on a turbine blade (source: A. Boehnlein)

These differences arise from the different purposes of CAD and image processing. Different modelling techniques have evolved in the two areas, and what is needed most is a unified method of representation which suits both CAD and vision. Constructive solid geometry (CSG), which is used by many CAD systems, is not such a representation, since it employs simple solids for modelling while vision works with images of surfaces. The importance of CSG might increase as 3-D vision techniques advance to a degree where 3-D segmentation can reliably be performed. Matching between 3-D CAD models and images involves matching points in the 2-D image space to the 3-D model space. Extended Gaussian surfaces (EGS) are a good form of representation for this task, since surfaces are mapped onto a Gaussian sphere according to their surface orientation and area. Therefore, EGS suits both image processing and CAD [5].

Linking CAD and vision for part recognition involves off-line and on-line computing [39], [24], [5]. During the off-line phase of the system described by [39], generic images for several orientations of a part are generated from a 3-D CAD model. For each incremental orientation (10 degree increments), features such as perimeter, centre of gravity and average distances between contour points are extracted from the generic images. During the on-line phase, the features found in the real image are compared with the feature tables for all incremental orientations, with the best match representing the detected orientation. The reference feature tables could also be extracted from real images. This, however, is a cumbersome and time-consuming task, since all incremental orientations for all stable states have to be considered. One technique which could be a tool over a wider spectrum of applications is CAD based strategy planning, for example for the recognition of occluded parts as described by [24]. A CAD model is used to evaluate the most distinctive features of objects that shall be detected in a scene with several occluding parts. In the on-line phase of the recognition process, only these optimum features are used to find possible matches with image features, thereby avoiding the combinatorial explosion of the search space. The idea of using CAD models and artificial intelligence to find the particular object features which allow maximum efficiency of image processing can be extended to other areas. For classification problems involving several hundred parts, it would be substantially easier to identify the unique features of each part with a CAD database than to acquire images of all stable states of all objects, picking features by hand and determining which of the features could serve in a unique feature set. CAD based image processing is still in its initial phase, and a lot of work is necessary to understand the problems as well as the future potential.
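The off-line/on-line split can be sketched in a toy form: an off-line phase builds a table of feature vectors from a model at incremental orientations, and an on-line phase classifies an observed silhouette by the nearest table entry. Everything here is a deliberately simplified stand-in for the CAD-based system cited above: the "CAD model" is a polygon, the "projection" is a crude foreshortening by the tilt angle (chosen because perimeter and area would be invariant under pure in-plane rotation), and the features are just perimeter and area:

```python
import math

def features(polygon):
    # simple 2-D silhouette features: perimeter and area (shoelace formula)
    per = sum(math.dist(polygon[i], polygon[(i + 1) % len(polygon)])
              for i in range(len(polygon)))
    area = 0.5 * abs(sum(x0 * y1 - x1 * y0
                         for (x0, y0), (x1, y1)
                         in zip(polygon, polygon[1:] + polygon[:1])))
    return per, area

def tilt(polygon, deg):
    # crude stand-in for a CAD projection: foreshorten y by cos(tilt angle)
    c = math.cos(math.radians(deg))
    return [(x, y * c) for x, y in polygon]

part = [(0, 0), (4, 0), (4, 2), (0, 2)]          # "CAD model": a 4 x 2 rectangle

# off-line phase: feature table at 10 degree increments
table = {deg: features(tilt(part, deg)) for deg in range(0, 90, 10)}

def classify(observed):
    # on-line phase: nearest feature vector in the table wins
    obs = features(observed)
    return min(table, key=lambda d: sum((a - b) ** 2
                                        for a, b in zip(table[d], obs)))

detected = classify(tilt(part, 30))              # recovers the 30 degree entry
```

The point of the design is that all the expensive work (rendering and measuring every orientation) happens once, off line, while the on-line step is a cheap table lookup, mirroring the argument made above for generating reference tables from CAD data rather than from real images.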
3.5 Implementation of Artificial Intelligence

The main areas of application of A1 in industrial vision over the next few years will be expert systems for image processing techniques and systems for defect interpretation. Artificial intelligence for industrial and autonomous assembly robots is currently being investigated by many researchers F57], [66], however, industrial use is not likely within the next years. 4. Industrial Applications of Image Processing 4.1 Quality Control

Approximately 70% of all industrial vision systems are employed in quality control. Especially in the automotive and in the consumable goods industry, quality has become one of the key factors for the consumer regarding buying decisions. Automation of quality control by means of image processing reduces costs for control and improves control by reducing subjective checking by humans. Sometimes, vision becomes indispensable, for example, when a 100% control is required at production speeds which exceed the human capabilities for visual control. Examples are checking tablets in the pharmaceutical industry, checking bottles for foreign objects in the food industry or verifying the correct assembly of oil filters at a rate of 1000 filters per minute. These applications are expensive since they require fast hardware and extensive software development to keep up with production speed. In some cases, such as tablet checking or printed circuit board (PCB) inspection, the cost of software can be divided between many customers. The majority of applications, however, requires unique software solutions, which often exceed the cost of hardware. Image processing can solve a wide range of problems encountered in quality control. Operations can be as diverse as control of surfaces, PCBinspection, X-ray inspection, assembly control or wood inspection. Examples from the following areas shall be discussed:

Applications of A1 concerning image processing in general are automatic diagnosis in the medical field, interpretation of remotely sensed images, representation and reasoning about 3-D objects from image data [34] and control of autonomous vehicles. Industrial applications include expert systems for image processing techniques, intelligent multiple sensor systems for robots, knowledge based image analysis and automatic software development. Several expert systems for image processing are currently being developed with the aim of speeding up the development of application software [50], [21]. There exists an abundance of image processing techniques and yet little information regarding a systematic comparison of their performance. An expert system for image processing techniques would therefore be a valuable knowledge source. Such systems are under development for special areas such as morphology or X-ray inspection. In [50] an expert system for automatic x-ray inspection is described, which contains approximately 700 rules. Task description and sample images lead to an evaluation of possible image processing techniques, to program generation and simulated testing. Many of the rules could also be used within an expert system for general image processing. Another aim of the system is the automatic fine tuning of the methods and parameters in order to relieve humans from time consuming and repetitive programming, and to find optimum techniques with regard to robust performance. Another system for knowledge-based program generation is discussed by [21]. The programming strategy is seen as a hierarchical planning process, i.e. the complex main task is repeatedly de-composed into smaller tasks of reduced complexity until tasks are at the level of single image processing operations. For part identification and classification the usual top down search, i.e pattern matching, can be replaced, by or combined with bottom up, model based methods. 
When reasonable assumptions can be made about the scene, a combined approach will greatly improve efficiency [45]. Knowledge based image analysis is employed advantageously in industrial applications where missing or conflicting data is involved, where a weak formulation is required or where probability methods need to be used without an existing a priori statistical database [60]. In addition, artificial intelligence can help to interpret detected faults during quality control [36].
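As a purely illustrative sketch, not taken from [50] or [21], such a rule base might map a task description to candidate techniques as follows; the rules, task attributes and function names below are invented for demonstration:

```python
# Illustrative sketch of rule-based selection of image processing
# techniques, loosely in the spirit of the expert systems discussed
# above. The rules and task attributes are invented examples.

RULES = [
    # (condition on the task description, suggested technique)
    (lambda t: t["contrast"] == "high" and t["lighting"] == "controlled",
     "binary thresholding"),
    (lambda t: t["contrast"] == "low",
     "grey-level edge detection"),
    (lambda t: t.get("texture_analysis", False),
     "morphological filtering"),
]

def suggest_techniques(task):
    """Return all techniques whose rule conditions match the task."""
    return [tech for cond, tech in RULES if cond(task)]

task = {"contrast": "low", "lighting": "variable", "texture_analysis": True}
print(suggest_techniques(task))
# -> ['grey-level edge detection', 'morphological filtering']
```

A real system of this kind would additionally rank the candidates and tune their parameters against sample images, as described above.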

- PCB inspection
- Surface control
- Assembly control
- X-ray inspection

4.1.1 Printed Circuit Board Inspection

An increasing number of vision systems is used for PCB inspection, especially in the automatic production of boards that carry surface mounted devices (SMT components). During production SMT components can be misplaced or they can get lost before soldering. Most errors, however, occur during the soldering process, for example tombstoning of the devices. Due to the high packing density and production speed, visual control by humans is not sufficiently reliable. Inspection systems for PCBs are able to locate misplaced or missing components and can detect some soldering errors. Inspection can be greatly simplified by intelligent use of lighting, as discussed in [30]. Two directional light sources are used in sequence to produce shadows of the SMT component, as shown in Fig. 8. By subtraction of the two images, information about height as well as location of the SMT device is acquired, while useless information, for example markings on the PCB, is reduced.
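The subtraction step of Fig. 8 can be sketched as follows; the 8x8 images and all intensity values are synthetic:

```python
import numpy as np

# Sketch of the two-light shadow technique described above: two images
# are taken with opposing directional light sources; subtracting them
# suppresses flat features (e.g. markings printed on the PCB), while a
# raised SMT component, which casts a shadow under each light, stands
# out. The images below are synthetic.

flat = np.full((8, 8), 120, dtype=np.int16)   # bare board intensity
img_left = flat.copy()
img_right = flat.copy()

# a printed marking darkens the same pixels in BOTH images
img_left[1, 1:4] = 60
img_right[1, 1:4] = 60

# a raised component casts a shadow on opposite sides under each light
img_left[4, 2] = 30    # shadow to the left of the component
img_right[4, 5] = 30   # shadow to the right of the component

diff = np.abs(img_left - img_right)   # |A - B| as in Fig. 8
shadow_pixels = np.argwhere(diff > 50)
print(shadow_pixels.tolist())   # -> [[4, 2], [4, 5]]: only the shadows remain
```

The marking cancels out completely in the difference image; only the height-dependent shadows survive the subtraction.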

Fig. 8: Advantageous use of lighting for PCB inspection

Inspection of the PCB layers for bridges and hairline cracks becomes increasingly important, since defects in the inner layers of a multilayer PCB cannot be corrected. Such an inspection requires extremely high resolution, which can only be offered by sequentially scanning small areas of the board.


This, however, is time consuming and expensive, and machine vision is unlikely to solve this problem within the next few years.

4.1.2 Surface Control

The type of surface conditions that can be evaluated by image processing ranges from colour measurement through small defect detection in sheet metal to surface roughness. Defects are often so small, compared to the dimensions of the surface, that only specialized machines with complex optical systems as shown in Fig. 9 can offer cost effective performance [51].


suffice, as shown in Fig. 10, which was taken from [27]. Other tasks may require sophisticated lighting, optical systems or software techniques, for example the verification that piston rings, which are conical in the range of less than a tenth of a degree, are inserted correct side up. Due to the small height of the rings compared to their diameter, evaluation of the conical shape can only be achieved with special lighting and software using subpixeling methods (source: same as [53]).

4.1.4 X-ray Techniques

In the digitized X-ray image shown in Fig. 11 oil filters are checked for the presence of a spring in the filter's bottom part (the spring is missing in the filter in the middle).
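A presence check restricted to a small area of interest (AOI), as in this oil-filter example, can be sketched as follows; the image values and thresholds are illustrative, not taken from the cited system:

```python
import numpy as np

# Minimal sketch of a presence check in a small area of interest, in
# the spirit of the oil-filter spring inspection: only the AOI where
# the spring should appear is evaluated, which keeps processing fast
# enough for high production rates. Values are synthetic.

def part_present(image, aoi, threshold=100, min_fraction=0.2):
    """True if enough dark (X-ray absorbing) pixels appear inside the AOI."""
    r0, r1, c0, c1 = aoi
    region = image[r0:r1, c0:c1]
    dark = np.count_nonzero(region < threshold)
    return dark / region.size >= min_fraction

xray = np.full((100, 100), 200, dtype=np.uint8)   # bright background
xray[40:60, 40:60] = 50                            # the spring absorbs X-rays

print(part_present(xray, (30, 70, 30, 70)))   # True: spring found
print(part_present(xray, (0, 20, 0, 20)))     # False: empty region
```

Restricting the evaluation to the AOI is what makes rates such as 1000 filters per minute feasible with a simple decision rule.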

Fig. 9: Laser-scanner for surface inspection (source: Sick)

The system shown can monitor the production of surfaces, such as sheet metal or foils, and distinguishes between twenty different surface defects. Statistical analysis provides the possibility of reacting quickly to possible defects or process deviations. The development cost of such a system can only be justified by multiple installations of the system. Another interesting application is in-process surface roughness measurement as described by [35]. Compared with non-contacting commercial systems that evaluate roughness from the distribution of light scattered from a single illuminated surface point, as discussed in [3], the system correlates roughness with the histogram data of a 2-D image. This has the advantage of acquiring information from a larger surface area. The coefficient of variation between roughness measurements is only 8.6%, compared with 15% for the tactile Talysurf method. Additionally, scratches can be detected by analysing the Fourier transformed image data.
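The histogram-based roughness correlation of [35] can be sketched in a much simplified form; the images are synthetic, the coefficient-of-variation indicator is an illustrative stand-in for the actual histogram features used, and any mapping to roughness values would have to be calibrated against reference surfaces:

```python
import numpy as np

# Sketch of correlating surface roughness with grey-level statistics
# of a 2-D image: a rougher surface scatters light more irregularly,
# widening the grey-level histogram. The indicator below (coefficient
# of variation) is an invented, simplified example.

rng = np.random.default_rng(0)

def roughness_indicator(image):
    """Coefficient of variation of grey levels over the imaged area."""
    return image.std() / image.mean()

smooth = 128 + 2 * rng.standard_normal((64, 64))    # narrow histogram
rough = 128 + 25 * rng.standard_normal((64, 64))    # wide histogram

print(roughness_indicator(smooth) < roughness_indicator(rough))  # -> True
```

Because the indicator integrates over the whole imaged area, it is less sensitive to a single unrepresentative surface point than single-spot scattering methods.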

Fig. 11: Assembly control of oil filters by X-ray inspection

The filters are checked at the rate of 1000 filters per minute [1]. This is only possible due to the small area of interest where the spring is located and the relatively simple task of checking for presence. Other applications of X-ray checking include nondestructive inspection of welded seams, detection of shrinkage faults in foundry parts and verifying that the single layers of a multilayer PCB are positioned correctly. Due to the health risk and the additional costs involved, X-rays are only used when no other means of checking parts exists. This will usually be the case when the interior of parts or the assemblage of parts has to be checked [70], [9].

4.2 Robotics

The fact that in 1988 two major conferences, i.e. ROBOTS 12 and VISION '88, were held for the first time as a joint conference is an indication of the trend towards integrating robotics and vision. Making robots "intelligent" by equipping them with "senses" such as vision or tactile sensors has been one of the main aims of the robotics industry for years. Today, the number of applications where vision is used to locate parts is rapidly increasing. The general "bin picking" problem, however, still remains unsolved, despite several publications stating the contrary. Problems are encountered when parts are partially occluded, hidden by shadows or when pattern matching fails due to perspective distortions. These problems are unlikely to be solved without 3-D range sensing. From Fig. 12, which is taken from [69], the advantages of a range map for picking up parts can easily be appreciated. The range map allows easy detection of the uppermost, non-occluded object.

Fig. 10: Assembly control employing AOI histogram evaluation

4.1.3 Assembly Control

In assembly control presence and correct location of parts are checked. If the presence of a part needs to be verified, relatively simple methods, such as the evaluation of an area of interest histogram, may


Many robot suppliers have integrated machine vision into their control systems. However, performance and flexibility are often limited, since the systems are often designed for easy programming by shop personnel and for the exclusive task of locating single, separated parts that have contours with high contrast. If the vision application requires a higher degree of sophistication, the design of the vision system is usually handled by a vision supplier or by a research institute specializing in image processing and automation.

"distance transform" which calculates the minimum distance from a point to a set of contour points. With some restrictions the method can be used to detect the location of a part with 0.1 pixel accuracy and with an orientation error of 0.2 degrees. For handling operations that involve parts of greatly varying dimensions, different grippers may be required. In the system described by [72], a CCD array is mounted in the robot side of a tool-changing system, and different optics are installed at the tool side, thereby easily adapting to the different requirements of the grippers without the need to equip each gripper with an own camera. system for precision assembly is introduced in and the advantages of using morphological methods for detecting object orientation are discussed. A special cyto-computer is used which can perform arithmetic and morphological operations at a rate of 10 million pixels per second. Processing power can easily be increased by pipelining of modules. A

[33]
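The distance-transform matching idea can be sketched as follows; this is a brute-force toy version on an invented contour, not the cited 0.1-pixel method, which uses fast chamfer algorithms and subpixel interpolation:

```python
import numpy as np

# Sketch of locating a part with a distance transform: for every
# pixel the minimum distance to the nearest image contour point is
# precomputed; a model contour is then shifted over the image and the
# offset minimising the mean distance is taken as the part location.
# Brute force, for small synthetic images only.

def distance_map(shape, contour_pts):
    ys, xs = np.indices(shape)
    d = np.full(shape, np.inf)
    for (cy, cx) in contour_pts:
        d = np.minimum(d, np.hypot(ys - cy, xs - cx))
    return d

image_contour = [(5, 5), (5, 8), (8, 5), (8, 8)]   # part corners in the image
model_contour = [(0, 0), (0, 3), (3, 0), (3, 3)]   # model contour at origin

dmap = distance_map((16, 16), image_contour)

best = min(
    ((dy, dx) for dy in range(13) for dx in range(13)),
    key=lambda o: np.mean([dmap[y + o[0], x + o[1]] for y, x in model_contour]),
)
print(best)   # -> (5, 5): the model matches the image contour at this offset
```

Because the distance map is computed once, evaluating each candidate offset costs only one lookup per model point.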

Fig. 12: Range map of flanges in a container

Finding the location of 4 and 6 cylinder engine blocks is described by [22]. A stationary camera finds the blocks within a larger field of view, and a camera at the robot hand locates the blocks to within 0.7 mm accuracy. Features for recognition are programmed via an interactive teach-in process, which is also used to define different searching strategies for cases where features are missing due to occlusion or shadows. Fig. 13 shows the original image and the processed image with detected features. A system for picking up objects from a moving conveyor belt is discussed by [49]. A camera which is attached to the robot hand watches objects passing on the conveyor belt without a priori information about belt speed. Speed measurements are continuously updated during tracking. A system consisting of an FMS cell, a robot and a vision system is described by the authors of [13]. Image analysis is used to recognize workpieces supplied in random order with no prefixed orientation. A vision system determines the best gripping position for the robot which loads the workpieces into the FMS cell. According to the gripping and loading position, the relevant NC machining programs are retrieved from the host computer. The same authors discuss robot vision for complete robotization of a tool adjustment room. Randomly distributed tools on a pallet are recognized, their main dimensions are measured and they are then arranged in magazines on the assembly bench.

Fig. 13: Image features for locating engine blocks (source: A. Hinkelmann)

Other applications for robot vision include seam tracking for welding. This application requires permanent, i.e. real time, evaluation of the images at video rates and consequently specialized image processing hardware. However, it is often possible to simplify processing by the use of specialized sensors [25]. Monitoring of welding processes will be discussed in the section Process Control. Some assembly tasks allow the use of stereo vision for robot guidance. Workpiece contours have to be simple to avoid problems in matching object features in the two images. Car wheels, for example, can be assembled by using the wheel bolts and holes as features for stereo matching. Other workpieces that are successfully assembled with the help of stereo vision are windshields and doors [63]. In some cases it is even possible to measure the position of the car body by stereo vision.

4.3 Metrology

In recent years several suppliers of 3-D coordinate measurement machines have integrated vision into their systems. The main advantage is that the sensing is non-contacting, therefore giving the possibility of measuring flexible objects which would deflect if tactile sensors were used. Additionally, flat workpieces with no edges, such as films, foils or printed circuit boards, can be measured. Finally, extremely fine object features such as holes in watch housings can be measured automatically. A basic discussion of the possibilities of coordinate measurement by image processing is given in [71] and [52]. Fig. 14 shows the measurement head of a 3-D coordinate measurement machine equipped with a tactile sensor and a binary vision system. Due to the well controlled lighting conditions in measurement rooms, binary systems will often suffice. Grey level systems can be used to increase resolution by subpixeling techniques. For subpixeling, additional a priori information about the object is needed, for example the information that a straight line is being measured. In this case subpixel accuracy can be achieved with a line fit algorithm. Even without subpixeling, measurement accuracy can be as high as 2 µm.
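The line fit mentioned above can be sketched as follows; the edge course and the whole-pixel quantisation model are synthetic:

```python
import numpy as np

# Sketch of subpixel accuracy by a line fit: when the object is known
# a priori to contain a straight edge, fitting a line through many
# pixel-quantised edge points averages out the quantisation error of
# the individual points.

x = np.arange(100, dtype=float)
true_edge = 0.25 * x + 10.37          # true (subpixel) edge course
measured = np.round(true_edge)        # detected positions, whole pixels only

slope, intercept = np.polyfit(x, measured, 1)   # least-squares line fit
print(abs(intercept - 10.37) < 0.05)  # -> True: recovered to a fraction of a pixel
```

Although every single measurement is off by up to half a pixel, the fitted intercept recovers the true edge position far more accurately than any individual point.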


In many robot applications two cameras are used, one for detecting workpieces within a greater field of view, and one camera mounted at the robot end effector for accurate measurement of workpiece position. This set-up allows easy detection of a part whilst maintaining the ability to perform operations that require a high resolution image of the whole part. Further merits of this set-up are discussed in [27] and [38]. The latter paper also discusses a

Fig. 14: 3-D coordinate measurement machine with vision sensor head


For the measurement of 3-D flexible surfaces, such as car seats, triangulation sensors can complement tactile sensor heads. Triangulation offers the additional advantage of measuring with an extremely small spot size, thereby avoiding the problem of measuring an equidistant contour with the ball tip of a tactile sensor. When a camera sensor head is available, triangulation can be achieved by simply adding a laser line or point source.

A distance sensor for 3-D measurement of castings is described in [29]. A five axis measurement machine is used to position the sensor square to the surface in order to reduce measurement error and problems due to occlusion.
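The underlying triangulation geometry can be sketched for the simple 2-D case; the baseline and angles below are illustrative numbers, not values from [29]:

```python
import math

# Sketch of laser triangulation: a laser and a camera view the same
# surface spot from the two ends of a known baseline; the spot's
# position follows from the intersection of the two viewing rays.
# Geometry and numbers are illustrative.

def range_from_triangulation(baseline_mm, laser_angle_deg, cam_angle_deg):
    """Height of the laser spot above the baseline (simple 2-D case)."""
    a = math.radians(laser_angle_deg)
    b = math.radians(cam_angle_deg)
    # intersection height of two rays rising from the baseline ends
    return baseline_mm * math.tan(a) * math.tan(b) / (math.tan(a) + math.tan(b))

z = range_from_triangulation(100.0, 60.0, 60.0)
print(round(z, 2))   # -> 86.6 (mm)
```

The small laser spot is what allows the sensor to follow fine surface contours that a tactile ball tip would average out.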

4.3.1 Measurement of Tools

Due to the desire for totally automated production, much interest has been paid to monitoring cutting tool wear. Research projects concerning tool wear monitoring are being carried out in, among other countries, France, Italy, Japan and both parts of Germany. The approaches, especially the choice of lighting, vary significantly. In a feasibility study by [14], several approaches to tool wear monitoring are described and several viewing angles and lighting conditions are evaluated. Another tool wear sensor is discussed by [18]. Again, lighting was found to play a fundamental role. Two different light sources are used to highlight flank wear and crater wear. The crater wear is analysed by projecting a diffraction pattern onto the tool surface, as shown in Fig. 15. The crater profile can be computed by triangulation. The sensor head is highly compact, allowing easy installation in a machine or tool magazine. In [75] a system for unmanned detection of wear and chippings of cutting tools is described. Three images are taken with different lighting conditions that highlight wear over different tool areas. The images are processed and combined into one image giving further information about the location and type of chipping. A method of adjusting cutting tools to within an accuracy of 1 µm is described in [53]. The precise adjustment of each of the tool tips of a multi-tip milling tool led to a considerable reduction in wear over single, badly positioned cutting tips.


electron microscopes. Optical methods approach their useful limits when the size of the object being measured approaches the wavelength of the light which is used, thus causing diffraction. With an electron microscope the line height can be measured to within an error of 0.04 µm. Other precision measurement applications include the testing of plotters used for the generation of computer holograms [47], and automatic evaluation of CRTs for geometrical distortions and trueness in colour [67].

4.5 Material Flow

Vision can be used to control material flow by either directly identifying parts or by reading various kinds of codes, for example a bar code. Simple bar code scanners, which read the code across one scanning line only, can easily become confused by labels which are defective or dirty. This prevents their use in rough industrial environments. With a vision system, a 2-D image of the label can be evaluated, resulting in an improved recognition rate and better reliability. Some bar-code reading stations employ two cameras. A wide angle camera detects the label within a larger field of view, and a tele-camera is then automatically aimed at the label for reading. Direct classification of the parts can be achieved by analysing areas, moments of areas, single distinct features like holes, or feature lists. Several examples of direct classification are given in the proceedings of the VISION conferences and in PAMI and will not be discussed here.
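Classification by such global features can be sketched as follows; the part masks, the elongation feature and the threshold are invented for illustration:

```python
import numpy as np

# Sketch of direct part classification by area and second moments:
# simple global features can distinguish part types without full
# pattern matching. Shapes and the threshold are synthetic examples.

def features(mask):
    """Area and elongation (from second central moments) of a binary mask."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    mu20 = ((ys - ys.mean()) ** 2).mean()   # vertical spread
    mu02 = ((xs - xs.mean()) ** 2).mean()   # horizontal spread
    elongation = max(mu20, mu02) / max(min(mu20, mu02), 1e-9)
    return area, elongation

square = np.zeros((20, 20), dtype=bool)
square[5:15, 5:15] = True            # compact part
bar = np.zeros((20, 20), dtype=bool)
bar[9:11, 2:18] = True               # elongated part

def classify(mask):
    _, elong = features(mask)
    return "bar" if elong > 4 else "square"

print(classify(square), classify(bar))   # -> square bar
```

Moment features of this kind are cheap to compute and invariant to part position, which suits sorting stations on conveyors.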

4.6 Process Control

In industry most vision systems for process control are used for robot welding. Usually a line is projected across the welding gap, and the gap width and the height difference of the workpieces are extracted from a camera image by triangulation. This information is used to control wire feed, welding speed and the positioning of the welding tool over the gap. Some researchers are investigating possibilities to extract further information about the welding process from images taken directly from the welding process. Several parameters like seam width and seam height can be measured in-process [7], and the beginning of welding a hole can be detected by comparing the seam height and width with the amount of wire fed to the seam. In [46] a system is described which allows evaluation of the quality of laser cutting and welding. Images are taken directly from the welding spot, with the configuration shown in Fig. 16. By further development, it should be possible to use the set-up for on-line control of the laser process.

Fig. 15: Optical set-up of a tool wear sensor (source: F. Giusti)

A simple and cost effective sensor for tool wear monitoring is discussed in [10]. A CCD line sensor and a light source are integrated in a single sensor head for measuring flank wear. An expert control system for tool-life management employing vision is described by [68]. In intervals, images of the tools are taken and feature lists are compared to identify new tool surfaces which correspond to decay affected areas.

4.4 Precision Metrology

Image processing can also be used for precision metrology, for example the measurement of photoresist line profiles on VLSI circuits, as discussed in [40]. This application requires a resolution in the nanometer range, which can only be offered by images of


Fig. 16: Set-up for monitoring a laser welding process

Another example of process control is described in [70]. The propellant in aerosol cans had to be overfilled, since the aerosol level could only be measured if it exceeded the can dome. By controlling the filling with X-ray image processing, the amount of propellant could be reduced by 50%.

4.7 Autonomous Vehicles

Applications of autonomous vehicles are, for example, self-guided transport systems for material handling. First systems are on the market, for example the FTS system by the Schenck AG. This system navigates by using real time vision to detect line markings on the factory floor. The lines, which may be intermittent, often already exist. Otherwise, painting them on the floor is considerably cheaper and faster than installing induction loops in the floor. Once the vehicles have complete information about the available paths, they independently search for the shortest path to their goal. No central control is required, but is available by means of radio, should a redirection be necessary. Due to its low costs and short installation time, the system will be most effectively used in situations with long transport distances and few vehicles. Further information on autonomous vehicles may be found in [73] and [11].
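The shortest-path search itself is standard; a sketch with Dijkstra's algorithm on an invented junction network (node names and distances are illustrative, not from the Schenck system):

```python
import heapq

# Sketch of the shortest-path search a guided vehicle might perform
# once it knows the available line network. Dijkstra's algorithm is a
# standard choice; the factory graph below is invented.

def shortest_path(graph, start, goal):
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, d in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (dist + d, nxt, path + [nxt]))
    return float("inf"), []

# distances in metres between floor-marking junctions (invented)
factory = {
    "store": {"A": 30, "B": 50},
    "A": {"B": 15, "mill": 40},
    "B": {"mill": 10},
    "mill": {},
}
print(shortest_path(factory, "store", "mill"))
# -> (55.0, ['store', 'A', 'B', 'mill'])
```

Since each vehicle holds the full path network, this search can run on board without any central control, as described above.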

ment time and costs. Additionally, they will lead to more robust processing due to automatic fine tuning of the processing techniques. Much attention will be paid to 3-D sensing and processing of range data, since this offers tremendous potential to production engineering. While only 5% of present vision systems are used for 3-D analysis, according to [2], the comparable percentage for 1990 is predicted to be 30%. Consequently, this will lead to a growing interest in linking CAD and 3-D vision. CAD models will be used for finding unique part features and for the fine tuning of part recognition.

5.2 Hardware

New hardware modules and architectures will increase processing power by orders of magnitude. The relative merits of parallel, pipelined and neural processing have still to be fully evaluated, and development of new architectures is likely. Modules such as PIPE, which was discussed earlier, or modules incorporating 16 transputers are already on the market. Digital signal processors will greatly contribute to increased data throughput, whilst also reducing the price of image processing systems. Standard algorithms such as convolution will be integrated in special image processing semiconductors, thereby dramatically reducing hardware development time and cost. CCD sensor technology is tending towards arrays of 1000 x 1000 square pixels and towards colour sensors. Additionally, different shutter speeds (of up to 1/10,000 second) will be available to "freeze" motion. Finally, the standards for high definition television (HDTV) should have some impact on sensor technology.

5.3 Industrial Applications

Fig. 17: Safety control in a robot cell by image processing (source: Niemann)

4.8 Safety Control

The surveillance of a robot work space by means of an independent vision system is discussed in [42]. Motion is detected by subtracting consecutive frames, thereby detecting the current robot position as well as the path of a person approaching the robot. In the case of a path that is converging with the robot path, as shown in Fig. 17, the robot is stopped.
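Frame subtraction of this kind can be sketched as follows; the frames and the threshold are synthetic:

```python
import numpy as np

# Sketch of motion detection by subtracting consecutive frames, as in
# the robot-cell surveillance described above: only pixels that change
# between frames (the moving robot, an approaching person) survive the
# difference. Frames here are synthetic.

def motion_mask(frame_a, frame_b, threshold=30):
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff > threshold

f0 = np.full((10, 10), 100, dtype=np.uint8)
f1 = f0.copy()
f1[2:4, 2:4] = 200          # something moved into this region

mask = motion_mask(f0, f1)
print(np.argwhere(mask).tolist())  # -> [[2, 2], [2, 3], [3, 2], [3, 3]]
```

Tracking the centroid of this mask over several frames would give the path that is then compared against the robot path.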

5. Future Trends and Developments

5.1 Research and Development

Image processing research over the next years will focus on areas where new hardware modules offer new possibilities for industrial applications. These especially include colour processing, analysis of Fourier transformed images, morphology and 3-D processing. Colour images can be decomposed into red, green and blue signals or into hue, saturation and intensity (HSI). While standard techniques can be used for processing RGB signals, they are inefficient since each signal has to be processed separately. Efficient filtering and analysis of HSI coded images will lead to new possibilities in image processing. Due to the processing speed of new hardware modules, it will be possible to perform a 2-D Fourier transformation of an image in near video real time. This technique can therefore be considered for complex filtering or analysis of images with regular patterns. The importance of morphology has already been discussed. It only remains to be said that its importance will increase further as new hardware modules for morphology come onto the market. Another major area of research will be pattern matching and pattern analysis. More efficient techniques are required for part recognition as the number of parts in sorting applications is increasing. The study of artificial intelligence techniques for automatic image understanding will be of great interest to the image processing research community in general; however, little impact on production engineering can be foreseen within the next years. Expert systems for image processing techniques will prove to be more valuable, since they promise a reduction of develop-

In the coming years machine vision will become a technology generally accepted by industry, and it will to some extent lose its image of necessarily being "high tech". Developments in sensors and computing hardware will greatly widen the industrial potential, although general purpose systems are unlikely to take over broad segments of the industrial image processing market unless software for specific industrial applications is available. The increasing complexity and the required sophistication of application software will lead to a further increase of the expenses for software development. Institutions that are dedicated to developing image processing applications will counteract this trend by employing highly adaptable and modular systems, which, on the other hand, require great initial capital expenditure and great effort to become fully acquainted with the possibilities of such a system. For demanding applications, the same systems will be installed at the production site. For applications that require less processing power, the software will, to some extent, be developed on the more powerful modular systems, which allow for greater efficiency during evaluation of image processing techniques. This software will then be cross-compiled and transferred to smaller and less expensive general purpose systems. Another way to avoid the excessive expenses for software development will be to focus on special areas such as metrology or control of material flow, where application software can be standardized to some extent. Quality control will benefit from colour processing and from higher resolution CCD cameras for metrological applications. Dynamic lighting will improve part inspection and simplify part recognition. Solving the "bin picking" problem would help to increase robot flexibility while reducing the expenses for mechanical separating and sorting devices; however, even major advances might be overestimated in their economic effect.
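The HSI decomposition discussed in Section 5.1 can be sketched with one common set of conversion formulas (channels normalised to [0, 1], hue in degrees); other HSI variants exist:

```python
import math

# Sketch of decomposing an RGB value into hue, saturation and
# intensity (HSI), the representation discussed in Section 5.1.
# This is one common geometric formulation, applied per pixel.

def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red: hue 0, saturation 1
print(rgb_to_hsi(0.0, 1.0, 0.0))   # pure green: hue near 120
```

Separating intensity from hue and saturation is what allows, for example, colour-based inspection that is largely insensitive to brightness changes.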
The general bin picking problem requires full-field 3-D sensing which is not affected by problems due to occlusion. The only technique to accomplish this is light time-of-flight measurement, which is too slow, too sensitive and much too expensive to be cost efficient. Even if major advances are made, bin picking will not be economical for the next few years. The number of machine vision applications in industry is expected to increase at an average rate of 30-50% over the next few years. While prices for machine vision hardware decrease, this will, as already discussed, not necessarily be the case for software and application development. Therefore, machine vision will be employed most successfully in applications that can be standardized to some extent. Current examples include the checking of LCD displays


and needle instruments or equipping coordinate measurement machines with vision systems. Many suppliers of image processing hardware realize the potential of the OEM market, especially for PCB inspection in the electronics industry.

5.4 Future Development of the Machine Vision Industry

Continuous growth of the machine vision industry has been predicted by several market research studies, such as that by Frost and Sullivan, the Delphi study for the Automated Vision Association and the study by the Prognos Institute. Predicted annual growth rates range from 30% to 62%. The more optimistic growth rates which were predicted three years ago have not been achieved. The new technology led to unrealistic expectations, and the difficulties in developing vision applications were grossly underestimated. Many vision suppliers did not survive the last years, and attendance of suppliers at the VISION '87 conference and exhibition dropped to only 50% of that of the previous year. This led to the statement in the Chairman's opening speech: "It has been a mistake to think that we have a machine vision industry". Today the possibilities as well as the difficulties of industrial machine vision are seen in a more realistic light, and results of recent market studies are expected to be more reliable. One of the major trends in machine vision is the fact that the leadership role of the automotive industry (an estimated 45% of all industrial vision systems in the US were sold in the Detroit area) is declining as the electronics industry takes over. According to the Delphi study, the automotive industry's share is expected to decline to 31% in 1990, with the electronics industry taking 36% of the market. Most of the systems in the electronics industry will be used for inspection of printed circuit boards, especially when boards are equipped with SMT components.
According to the Delphi study, the largest barrier to the implementation of machine vision is a shortage of user expertise to develop applications. This fact has led to the establishment of new companies, institutes and technology transfer centers that offer application development and system integration services. Meanwhile, the image processing industry has shifted its focus from service to production.

capital expenditure and long pay-back periods. Evaluating the advantages of machine vision only in terms of the pay-back period, however, will lead to a disregard for the innovative value of a new technique with great potential. Technology transfer centers and companies specializing in vision applications will help to overcome the problems of high costs and excessive development times. Their services should also include training of production engineers in image processing techniques. What is now needed most is a better education of production engineers in the basics of image processing and its potentials.

7. Acknowledgements

We would like to thank the following persons, who, by sending their contributions, suggestions and discussions, were of great help in preparing this paper: Prof. L. Alting, Technical University of Denmark, Lyngby, Denmark; Dr.-Ing. K.H. Breyer, Zeiss, Federal Republic of Germany; R. Geslot, Centre Technique des Industries Mecaniques, Senlis, France; Prof. F. Giusti, University of Pisa, Italy; Dr. K.G. Gunther, Siemens AG, Federal Republic of Germany; Dr. R.J. Hocken, Center of Manufacturing Engineering, NBS, Washington DC, United States of America; Prof. C.J. Heuvelman, University of Twente, Netherlands; T. Komatsu, Toshiba Corporation, Japan; Prof. H. Kunzmann, PTB, Braunschweig, Federal Republic of Germany; Prof. N. Martensson, Linkoping Institute of Technology, Stockholm, Sweden; Prof. V. Milacic, University of Belgrade, Yugoslavia; Prof. J.M. Peters, Catholic University of Leuven, Belgium; Dr.-Ing. W. Rienecker, Forschungsinstitut Industrielle Bildverarbeitung Hannover, Federal Republic of Germany; Dr. N. Roth, Siemens AG, Federal Republic of Germany; Prof. M. Santochi, University of Pisa, Italy; Prof. G. Spur, Technical University of Berlin, Federal Republic of Germany; Prof. H. Weber, Technical University of Karl Marx Stadt, German Democratic Republic; Prof. M. Weck, Rheinisch-Westfälische Technische Hochschule Aachen, Federal Republic of Germany.

8. References

Tables 1 and 2 give an overview of the distribution of vision systems in the US by industry and by application, according to the Delphi study. The figures for the European market are similar.

Market share by industry (1985 / 1987 / 1990): Automotive 49% / 40% / 31%; Electronics 26% / 30% / 36%; the remaining share is divided, in portions of 1% to 11% each, among the defense, equipment, commercial aerospace, food/beverage, biomed/pharmaceutical, military non-mfg and other sectors.

Table 1. Shipments of Machine Vision Systems, by Industry

Percent of unit sales by application (1985 / 1987 / 1990): gauging and inspection together account for roughly half of all unit sales in each year (27% and 20%, respectively, in 1985), followed by flaw detection; the remainder is divided in single-digit shares among verification, counting, character recognition, identification, sorting, adaptive control, seam tracking and process control.

Table 2. Shipments of Machine Vision Systems, by Specific Application

6. Conclusions

The importance of machine vision to production engineering will rapidly grow as production becomes increasingly automated. Vision will become a fundamental tool in surveillance, in process and quality control and in handling workpieces. Industrial machine vision is a relatively new technique that still involves many risks in terms of high initial

[1], Adam, W., Lehnert, T., Nickolay, B., 1987, "Verfahren der Bildverarbeitung für Anwendungen in der Produktionstechnik", Zeitschrift für wirtschaftliche Fertigung und Automatisierung, 9/1987, pp. 515-520

[2], Ahlers, R.J., 1986, "Industrial Applications of Automatic Optical Inspection", Proceedings of SPIE, Vol. 730, Automated Inspection and Measurement, 1986, pp. 28-31
[3], Ahlers, R.J., 1987, "Bildverarbeitung in der Blechteileprüfung", Stahl u. Eisen 107, Nr. 12, 1987, pp. 27-30

[4], Bhanu, B., Ho, C.-C., 1987, "CAD-Based 3D Object Representation for Robot Vision", IEEE Computer, August 1987, pp. 19-35
[5], Bhanu, B., 1987, "CAD-Based Robot Vision", IEEE Computer, August 1987, pp. 13-16
[6], Boehnlein, A., Harding, K.G., 1986, "Adaption of a Parallel Architecture Computer to Phase Shifted Moire Interferometry", SPIE, Cambridge meeting 1986

[71, Brummer, F., 1987, "Fernsehbild regelt SchweiRprozeb", Industrie-Anzeiger, 98/1987, pp.34-36

[ E l , Casasent, D., 1987, "Optical Processing Techniques for Inspection and Automation", Vision '87 Conference, Detroit, Ref. 7.1 [9], Chu, Y., Douglas, B., Marconcini, V., 1987, "Real-time X-ray System for Nondestructive Inspection", Vision '87 Conference, Detroit, Ref.4.41 [lo], Delius. M., Grumpelt, D., Scharfenort, U., 1985, "Optoelektronisches Sensorsystem zur Werkzeugzustandsidentifikation in CNC-Drehmaschinen", CAIP 1. Internationale Fachtagung Automatische Bildverarbeitung, 1985, Berlin, Ref. T3 [ll], Dickmanns, E.D., 1987, "4D-Szenenanalyse mit 9. DAGMintegralen Raum-/Zeitlichen Modellen", Symposium Mustererkennung 1987, Braunschweig, pp.257271

[12], Driels, M.R., Collins, E.A., 1985, "Assembly of Non-Standard Electrical Components Using Stereoscopic Image Processing Techniques", Annals of the CIRP Val. 34/ 1/1985 [13], Doemens, G., Kutzer, E., Roth. N., 1986, "General Purpose Sensory Controlled System for Parts Production", ESPRIT'85: Status Report of Continuing Work, The Commision of the European Communities, pp.1335-1354 1985, [14], Duchaine, J., Gabriel, S., Racine, D . J . , "Capteur optique pour la mesure des usures des OutllS de coupe a arete definie. Etude de faisabilite", La surveillance automatique des outils de coupe, Recueil de conference senlis 1985, pp.23-30 [15], Englander, A.C., 1987, "Edge Detection Techniques for Industrial Machine Vision", Vision '87 Conference, Detroit, Ref. 5.85

[16],Fortes, J.A., Wah, B.W., 1987, "Systolic ArraysFrom Concept to Implementation", IEEE Journal COmpUter, July 1987, pp.46-55 [17], Gasperi, M.L., 1986, "Introduction to Morphological Image Processing", Vision '86 Conference, Detroit, Ref. 5.63 [la], Giusti, F., Santochi, M., Tantussi, G., 1987, "On-Line Sensing of Flank and Crater Wear of Cutting Tools", Annals of the CIRP, Vo1.36/1/1987, pp.41-44 [l9], Haralick, H.M., Sternberg, S.R., Zhuang, X., 1987, "Image Analysis Using Mathematical Morphology", IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-9 No.4 1987, pp.532-551 [ZO], Herman, M., Kanade, T., 1986, "Incremental Reconstruction of 3D Scenes from Multiple, Complex Images", Artificial Intelligence, No.30 1986, pp.289341 [21], HeBe, R., Kelin, R., Klette, R., 1987, "Wissensbasierte Programmgenerierung fur maschinelles Sehen", Bild und Ton 40, 1987-8, pp.236-278 [22], Hinkelmann, A., Schafer, T., 1988, "Spaher fur den Roboter", Industrie Elektrik + Elektronik, Nr.5 1988, pp.64-66

[33], Lougheed, R.M., Tomko, L.M., 1986, "Robot Position Control Using Machine Vision", Vision ' 8 6 Conference, Detroit, Ref. 4.39 [34], Lowenfeld Walker, E . , Herman, M., Kanade, T . , 1987, "A Framework for Representing and Reasoning about Three-Dimensional Objects for Vision", Proceedings of the AAAI Workshop on Spatial Reasoning and Multisensor Fusion, Oct. 1987 [35], Luk, F., Huynh, V., 1987, "A Vision System for In-Process Surface Quality Assessment", Vision '87 Conference, Detroit, Ref. 12.43 1361, L U O , R.c., Chen, M.J.W., 1985, "Artificial Intelligence in Advanced Robotic Sensor Technology", IEEE journal IECON'85, pp.292-295 [37], Marr, D., Hildreth, E., 1980, "Theory of Edge Detection", Proceedings of the Royal Society London, B 207, pp.187-217 [38], Martensson, N., 1987, "The STU Research Program - Adaptively Controlled Industrial Robots 1983-1987", available from the Swedish National Board for Technical Development. [39], Milutionvic, D.S., Turchan, M.P., Kimura, F., Mllacic, V.R., 1987, "A Model-Based Vision System for Industrial Parts using a Small Computer", Robotics and Computer-Integrated Manufacturing, Vo1.3 No.4, pp.439-450 [40], Miyoshi, M., Kanoh, M., Yamaji, H., Okumura, 1986, "A Precise and Automatic Very Large Scale Integrated Circuit Pattern Linewidth Measurement Method Using a Scanning Electron Microscope", J . Vac. Sci. Technol. 84 (2), Mar/Apr.1986, pp.493-499 K.,

[41], Murakami, K., Murakami, Y., Seino, T., 1982, "On the Setting of the Light Source for the Grid Illuminating Type Moire Method", Annals of the CIRP Vol. 31/1/1982 [42], Nagel, H.-H., 1985, "Analyse und Interpretation von Bildfolgen", Informatik - Spectrum, 1985, Vol.8, pp.178-200 and pp.312-327 [431. Negin, M., 1987, "Subpixel Resolution Methocology Limitations", Vision '87 Conference, Detroit, Ref. 10.51

[23], Honda, T., Kanedo, S., Takeyama, H., 1985, "Pattern Recognition of Part and/or Workpieces for Automatic Setting in Production Processings", Annals of the CIRP Vol. 34/1/1985, pp.29-32

[44;, Nichols, S.M., Thompson, J.C., 1987, "Part Identification with FFT", Vision '87 Conference, Detroit, Ref. 12.59

[24], Horaud, R., Skordas, T., 1987, "Model-Based Strategy Planning for Recognizing Partially Occluded Parts", IEEE journal Computer, August 1987, pp.58-65

[45], Niemann, H., 1985, "Wissensbasierte Bildanalyse", Informatik - Spektrum, 1985, vo1.8, pp.201-214

[25], Janocha, H., 1987, "Sensorgefuhrtes Bearbeiten mit Robotern", Proceedings of the Conference Visuelle Industrieautomation, Hannover 1987

[46], Olsen, F.O., "Investigations in Methods for Adaptive Control of Laser Processing", available from Inst. of Manufacturing Engineering, Tech. Univ. of Denmark, DK-2800, Lyngby, Denmark.

[26], Jarvis, R.A., 1983, "A Perspective on Range Finding Techniques for Computer Vision", Pattern Analysis and Machine Intelligence, Vol. 5, No. 2, March 1983, pp.122-139

[471, Ono, A . , Wyatt, J.C., 1984, "Plotting Errors Measurement of CGH Using an Improved Interferometric Method", Applied Optics, 1984, Vo1.23, No.21, pp.3905-3910

[271, Keferstein, C.P., Rauh, W., t987, "Einsatzmoglichkeiten der Bildverarbeitung in der Qualitatssicherung", Qualitat und Zuverlassigkeit, Heft 6 1987, pp.297-302

[481, Pentland, A.P., 1987, "A New Sense for Depth of Fleld", IEEE Transactions on Pattern Analysis and Machine Intelligence, 1987, vol. 9, ~ 0 . 4 ,pp.523-531

[28], Kent, E.W., Shneier, M.O., Lumia R., "PIPE Pipelined Image Processing Engine", available from Industrial Systems Division, National Bureau of Standards, Washington, D.C., 20234, USA. 1291, Kishinami, T., Kanai, S., Yokkouchi, H., Nomura, N., Hayakawa T., 1987, "Automatic Recognition System for Relative Position of Required Shape in Workpiece Space", Annals of the CIRP, Vo1.36/1/1987, pp.391-394 [30], Komatsu, T., Uno, S., Inoue, M., Sekiguchi, S., 1987, "An Automated Inspection system for Chip Electronic Parts on a Printed Circuit Board", Annals of the CIRP, VOl 36/1 1987, pp.399-402 [311, Kung, S.X., Lo, S.C., Jean, S.N., Hwang, J.N., 1987, "Wavefront Array Processors - Concept and Implementation", IEEE journal Computer, July 1987, pp.18-33 1321, b i n , M., 1986, "Die Realitat holt die Zukunft eln, Roboter erlernen Tasten und Sehen", Opt0 Elektronik Magazin Vol. 2 , No. 5, 1986, pp.427-430

[49], Peters, J.M., 1988, Personal Communication with Prof. J.M. Peters, Catholic University of Leuven, Belgium [50], Pfitzner, K., Strecker, H., 1987, "XRAY - An Experimental Configuration Expert System for Automatic X-ray Inspection", 9. DAGM-Symposium Mustererkennung 1987, Braunschweig, pp.315-319 [51], Pietsch, K., Muller, P., 1988, "Laserprufung in der Stahlindustrie-Vollautomatische Qualtitatssicherung bei der Stahlblechherstellung". Optoelectronik Magazin, Vo1.4 No.2, 1988, pp.160-167 [521, Poulter, K.F., 1985, "Vision Systems for Dimensional Metrology", Annals of the CIRP voi. 34/2/1985 [ 5 3 ] , Rienecker, W., 1987, Bildverarbeitungsanwendunqen in neuen Losungen fur CIM-Konzeptionen und flexiblen Fertigungszellen", available from Forschungsinstitut Industrielle Bildverarbeitung, MisburgerstraRe 81, D3000 Hannover 61, FRG.

589

[54], Rummel, P., Beutel, w., 1984, Workpiece Recognition and Inspection by a Model-Based Scene Analysis System", Pattern Recognition vol. 17, NO. 1, pp.141-148, 1984 [55], schlichtig, R.J., Machine Vision System", Detroit, Ref. 8.19

1986, "A Hardware Based Vision '86 Conference,

[56], Schone, G., 1986, "Einsatz von Videosystemen mit Bildverarbeitung fur Realzeitaufgaben", Fortschritte in der MeB- und Automatisierungstechnik durch Informationstechnik, 1986, pp.196-208

!65], Tschudi, T., 1987, "Optische Bildverarbeitung der Feinwerktechnik", Technisches Messen, Heft 6/1987, pp. 253-260 in

[66], Unknown Author, 1987, "ARI, Assembly Robot with Intelligence", June/8/'87, available from Toshiba Corporation, Japan [67], Uno, S . , Douji, R., Inoue, M., Goto, Y., 1984, "Automatic Evaluation System for CPT Picture Characteristics", IEEE journal IECON'84, pp.432-437. Villa, A., Quaglia, G., Chiara, R., Rutelli, Levi, R., 1985, "An Expert Control System for Tool Life Management in Flexible Manufacturing Cells", Annals of the CIRP Vol. 34/1/1985 [68],

G.,

[57], Shneier, M.O., Lumia, R., Herman, M., 1987, "Prediction-Based Vision for Robot Control", IEEE journal Computer, Aug. 1987, pp.46-55 [581, Sluss, J.J., Veasey, D.L., Batchman, T.E., 1987, "An Introduction to Integrated Optics for Computing", IEEE journal Computer, Dec 1987, pp.9-23 [59], Smyth, B.E., 1987, "Fourier Analysis and its Application to Machine Vision", Vision '87 Conferenc? Detroit, Ref. 5.19 [60], Solinsky, J.C., 1986, "The Use of Expert Systems in Machine Vision", Vision '86 Conference, Detroit, Ref. 4.139 [61], Spur, G., Lehnert, T., Nickolay, B., 1988, "Erkennungsverfahren mittels Werkstuck-Kreis-Schnittmerkmalen fur den industriellen Einsatz", Zeitschrift fur wirtschaftliche Fertigung und Automatisierung, 6/1988, pp. 296-300 [621, Strand, T.C., 1985, "Optical Three-Dimensional Sensing for Machine Vision", Optical Engineering, 1985, V01.24, NO. 1, pp.33-40

[631! SVenSSOn, R., 1985, Carbody Assembly with Asea 3D-Vision", 15th International Symposion on Industrial Robots, 1985, pp.819-828 [64], Tonshoff, H.K., Brinksmeier, E., 1981, "Optimization of Computer Controlled X-Ray Stress Analysis", Annals of the CIRP Vol. 30/1/1981

[69], Vogt, R.C., 1987, "Formalized Approaches to Image Algorithm Development Using Mathematical Morphology", vision '87 conference, Detroit, Ref. 5.17 [70], Wagner, G., 1987, "Combining X-Ray Imaging and Machine Vision", Vision '87 Conference, Detroit, Ref. 4.27 [71], Warnecke, H.J., Keferstein, C.P., Schreiber, L., 1986, "Moglichkeiten der Bildverarbeitung in der Koodinatenmesstechnik", Werkstatt-Technik 76, Nr. 8, 1986, pp.461-466 [72], Weck, M., 1988, Report of the Sonderforschungsbereich 208, Grundlagen und Komponenten flexibler Handhabungsgerate im Maschinenbau, Teilprojekt B2, Ergebnisbericht 1986-88 [73], Weisbein, C.R., 1987, Guest editors introduction to several articles about autonomous vehicles, IEEE Expert, Winter 1987 [74], Woschni, H.G., Christoph, R., Kramer, H., 1986, "Erweiterung der Auflosungsgrenze von LangenmeBsystemen mit CCD-zeile", Feingeratetechnik, Nov.1986, pp.403-405 [751. Yamashina, H., Okumura, S . , Hosoe, K., Okamura K., 1987, "Unmanned Detection of Wear and Chippings of Cutting Tools by Image Processing Techniques", Proceedings of the 6th International Conference on Production Engineering, Osaka 1987, pp.200-206.