A new approach to high precision 3-D measuring system


Image and Vision Computing 17 (1999) 805–814

Lin Chi-Fang*, Lin Chih-Yang

Institute of Electrical Engineering and Computer Engineering and Science, Yuan-Ze University, Chungli, Taiwan, ROC

Received 16 February 1998; received in revised form 24 July 1998; accepted 26 August 1998

* Corresponding author. Tel.: +886-3-4638800-359; fax: +886-3-4638850. E-mail address: [email protected] (L. Chi-Fang)

Abstract

A 3-D measuring system is important in many industrial applications, such as object modeling, medical diagnosis, CAD/CAM, multi-media, and virtual reality systems. However, developing a high precision 3-D measuring system takes considerable effort and cost. A novel approach is introduced in this paper with a view to developing a 3-D measuring system that meets the requirements of high performance with low cost equipment. A laser projector with controllable light intensity and phase is designed and used as the active light source. When the light is projected on an object, the varying depths of the object surface cause phase variations in the projected active light. These phase changes are used to find the surface coordinates of the test object using simple Digital Signal Processing (DSP) techniques. The advantages of the proposed method include the following: (1) the measuring speed is fast, so the method is suitable for on-line 3-D applications, e.g. the monitoring of the human body or on-line detection; (2) only a single image is required to obtain the necessary phases, so vibration errors are avoided; (3) each CCD element is taken as a sample point, so a high resolution result can be obtained; and (4) the measuring result is accurate. The experimental results reveal the superiority of the proposed method. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Active stereo; Phase measuring; Camera calibration; Low pass filter

1. Introduction

A 3-D measuring system is important in many industrial applications, such as object modeling, medical diagnosis, CAD/CAM, multi-media, and virtual reality systems. Existing 2-D image monitoring approaches cannot meet the accurate measuring requirements of products of high density and high quality, nor support a full assessment of automatic fabrication. Although the development of 3-D measuring systems for the above fields is compelling, the goal is not easy to achieve without costly equipment and high performance computers. The purpose of this paper is therefore to provide a method for constructing a 3-D measuring system that meets the demand of high performance with low cost equipment.

Previous studies [1-18] of 3-D measuring systems can be broadly classified into two categories, the passive method and the active method, depending on the illumination source used. In the first category, the illumination of the test objects comes from the natural environment, in which the power and direction of the light source are not controlled. In contrast, the light source in the second category comes from a manipulated environment, in which controlled light, magnetic fields, or ultrasonic waves are directed at the test objects so as to obtain their depth information. A brief review of these methods is provided in the following sections.

1.1. Passive method

The passive measuring method functions like the human vision system. It can be further classified into the following three modes:

• the monocular mode [1],
• the binocular mode [2], and
• the trinocular mode [3].

No light projectors are required in this method, which accordingly reduces implementation costs. However, this approach is inferior to the active method in measuring reliability and precision.

1.2. Active method

The active measuring method is widely applied in industrial quantitative measurements. Related studies proposed previously include:

1. the structured light method [4-13], which is based on a triangulation measuring mechanism;


Fig. 1. The structure of the laser projector.

2. the phase deviation method [14] or the interference method [15], which are based on a phase measuring approach; and
3. the Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) approaches, which utilize penetrating energy sources to examine objects in medical environments.

To obtain the required measuring results, skillfully manipulated energy sources, e.g. light, magnetic fields, or ultrasonic waves, are directed at the test objects so as to obtain their depths. For instance, the depth of an object can be characterized through illumination by a grid patterned or a fringe patterned light source; the deformations of the projected pattern can then be used to derive the depth of the object. Since a manipulated active energy source is utilized, this method is well suited to quantitative measurements. Details are presented in the following sections.

1.2.1. Structured light method

Previously proposed structured light methods include the following:

• the point light source method [4],
• the single-slit light source method [8],
• the multiple-slit light source method [5],
• the binary-encoded light source method [6],
• the gray-encoded light source method [12,13], and
• the color-encoded light source method [11].

These methods are practical for simply shaped objects but perform poorly owing to limitations such as the confinement of the projection lines and the resolution of the CCD camera used. The measuring speed is limited because the number of sample points is confined by the corresponding projection lines. The need to take many images under many grating projections is a further problem. Though studies using a single image exist, the ambiguity in the measuring results is a problem yet to be solved.

1.2.2. Phase measuring method

By utilizing a phase measuring interferometer to measure the 3-D contour data of the test object, the method presented by Bahl and Liburdy [15] obtains satisfactory results. Their method makes use of the phase changes of

fringes projected from the interference patterns onto the test object. The depth measurement can then be obtained from the relationship between the phase changes and the depth variations. The Moire fringe approach [14,16,17] also utilizes the phase principle and performs well. The advantage of the phase approach is that the corresponding 3-D data can be accurately obtained at each pixel of the image taken by a CCD camera.

From the evaluation of the above approaches, the phase measuring approach is found to be the best choice for high precision measuring. However, this approach requires more than one grating projection and image, so an expensive, stable optical table must be used to prevent erroneous results caused by mechanical vibration. Other problems, such as mechanical positioning and the reduction in speed due to mirror or polarizer adjustments, also need to be solved. The Fourier Transform Profilometry (FTP) approach [19] requires only a single image to obtain the related phases; however, the large number of calculations for the fast Fourier transform and the inverse fast Fourier transform makes this approach impractical for 3-D measurement applications.

This study therefore proposes a novel approach that solves the above problems through the utilization of a simple Digital Signal Processing (DSP) device. The mechanism works in the following manner:

1. a laser projector (see Fig. 1) with controllable intensity and phase generates the active light source;
2. the varying surface depths of the test object cause phase changes in the active light projected onto it; and
3. a CCD camera and an image grabber card are used to obtain images.

The phase differences can then be obtained after a number of calculations and filtering steps. The CCD camera and the projector require calibration in order to obtain the related parameters, which are used to derive the 3-D coordinates of the test object.

The paper is organized as follows: Section 2 reviews the system flow of the proposed method and presents a brief introduction to the FTP approach. A detailed description of the proposed method is given in Section 3. Section 4 provides the experimental design and the results of this study. Section 5 gives the conclusions.


Fig. 2. The flowchart of the proposed 3-D measuring system.

2. System flowchart and FTP method

2.1. System flowchart

The following two steps are needed to conduct a 3-D measuring process:

1. the calibration of the CCD camera; and
2. the measurement of the 3-D contour data of the test object.

The purpose of the camera calibration is to find the camera parameters, including the intrinsic parameters, such as the

image center, focal length, and scale factor, and the extrinsic parameters, such as the camera position and orientation. These parameters are required to measure the 3-D contour data of the test object. The flowchart of the proposed system is depicted in Fig. 2, and the details are presented in Section 3.

2.2. The FTP approach

As shown in Fig. 3, plane R represents the reference plane, h signifies the 'negative' distance between the test point and the reference plane R, and L0 is the distance between the reference plane R and point C, the origin of the CCD coordinate system.

Fig. 3. The diagram of the proposed 3-D measuring system.


Fig. 4. The grating pattern used in this study.

Assume f0 is the fringe frequency of the grating pattern used in this study, depicted in Fig. 4 (i.e. the reciprocal of T0 as shown in Fig. 3). The illumination value of the grating pattern projected on the test object can be expressed by the equation [18]:

    g(x, y) = a(x, y) + b(x, y) cos(2πf0 x + φ(x, y)),    (1)

where g(x,y) is the illumination value of the object (i.e. the intensity at the corresponding point of the CCD camera), a(x,y) is the direct current (DC) value, b(x,y) is the brightness value of the fringe, and φ(x,y) represents the phase deviation at point (x,y). In general, a(x,y) must vary slowly, with spectral content far below f0; otherwise a(x,y) cannot be removed to extract the f0 component correctly after the Fourier transform process. An optical filter at 632.8 nm (the wavelength of a He-Ne laser) can be placed in front of the CCD camera to reduce the value of a(x,y), filtering out most of the background light and permitting only the laser fringes to pass.

The following equation is obtained by rewriting Eq. (1) in exponential form [20]:

    g(x, y) = a(x, y) + c(x, y) e^(2πi f0 x) + c*(x, y) e^(-2πi f0 x),    (2)

where c*(x,y) is the complex conjugate of c(x,y), and

    c(x, y) = (1/2) b(x, y) e^(iφ(x, y)).    (3)

A Fourier transform of Eq. (2) with respect to x gives

    G(f, y) = A(f, y) + C(f - f0, y) + C*(f + f0, y).    (4)

Taking out C(f - f0, y) (or C*(f + f0, y)) with a band pass filter [21-23] and shifting it to C(f, y), c(x,y) in Eq. (3) can then be obtained by an inverse Fourier transform, and a logarithmic calculation proceeds as follows:

    log c(x, y) = log[(1/2) b(x, y)] + iφ(x, y).    (5)

Finally, the phase value φ(x,y) is obtained by taking the imaginary part of Eq. (5).

To obtain a phase value, the FTP approach mentioned above requires a large number of calculations (Fourier transforms, inverse Fourier transforms, and band pass filtering), which cannot adequately be performed by simple hardware circuits alone. In the following sections, a simpler DSP circuit device and a one-shot image taking approach are presented with a view to finding the precise 3-D outer shape information of the test object.
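Purely for illustration, the FTP pipeline of Eqs. (1)-(5) can be sketched in C, the language in which the experiments of Section 4 were later implemented. This is a minimal sketch under stated assumptions, not the authors' code: it processes a single image row g[] of N samples, uses a naive O(N^2) DFT for clarity instead of an FFT, and treats the fringe bin K0 and the band half-width W as assumed, illustrative constants.

    /* Minimal sketch of FTP phase extraction, Eqs. (1)-(5), for one image row.
       A naive O(N^2) DFT is used for clarity; a real implementation would use
       an FFT.  N, K0 and W are assumed, illustrative constants.              */
    #include <math.h>
    #include <complex.h>

    #define N  512      /* samples per image row (assumed)                   */
    #define K0 42       /* spectral bin of the fringe frequency f0 (assumed) */
    #define W  20       /* half-width of the band-pass window (assumed)      */

    void ftp_phase(const double g[N], double phi[N])
    {
        double complex G[N] = {0}, C[N] = {0}, c;
        int k, n;

        /* Forward DFT, Eq. (4): G(f) = A(f) + C(f - f0) + C*(f + f0). */
        for (k = 0; k < N; k++)
            for (n = 0; n < N; n++)
                G[k] += g[n] * cexp(-2.0 * M_PI * I * k * n / N);

        /* Band pass: keep only C(f - f0) and shift it down to C(f). */
        for (k = -W; k <= W; k++)
            C[(k + N) % N] = G[K0 + k];

        /* Inverse DFT recovers c(x) = (1/2) b(x) e^(i phi(x)) of Eq. (3);
           by Eq. (5) the phase is the imaginary part of log c(x), i.e. arg c. */
        for (n = 0; n < N; n++) {
            c = 0;
            for (k = 0; k < N; k++)
                c += C[k] * cexp(2.0 * M_PI * I * k * n / N);
            phi[n] = carg(c / N);
        }
    }

Even with an FFT in place of the naive DFT, the forward transform, band pass selection, and inverse transform must all run per row, which is the computational burden the method proposed next avoids.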

3. Proposed methods

The measuring method proposed in this paper belongs to the active approach, in which a laser projector is utilized to generate fringes with sinusoidal intensity distributions. The equation for ideal fringes [18] is given by

    g(x, y) = DC + I cos(2πf0 x),    (6)

where g(x,y) is defined in Section 2.2, and DC and I represent the direct current and brightness values, respectively; the ideal values of DC and I are constants. Owing to the influences of background light sources, the object reflection coefficient, and the object depth, when the output fringe pattern is projected onto a test object the phase and intensity change according to the following equation:

    g(x, y) = a(x, y) + b(x, y) cos(2πf0 x + φ(x, y)).    (7)

As shown in Fig. 3, the depth of the test object affects the phase of the fringe patterns. A number of approaches have been proposed previously to find the phase values; however, they require complicated calculations and more than one image, which is time-consuming and inefficient. A new approach is therefore proposed in this study to solve these problems, as follows. First, multiply Eq. (7) by cos(2πf0 x) to obtain the following new equation:

    g(x, y) cos(2πf0 x) = a(x, y) cos(2πf0 x)
        + (1/2) b(x, y) cos(4πf0 x + φ(x, y))
        + (1/2) b(x, y) cos(φ(x, y)).    (8)

There is only one low frequency term in the above equation, namely (1/2) b(x, y) cos(φ(x, y)), because the first two terms carry the frequencies f0 and 2f0. Eq. (9) is generated by processing Eq. (8) with a low-pass filter that is discussed later:

    g1(x, y) = (1/2) b(x, y) cos(φ(x, y)).    (9)


Similarly, multiply Eq. (7) by sin(2πf0 x) to obtain the following new equation:

    g(x, y) sin(2πf0 x) = a(x, y) sin(2πf0 x)
        + (1/2) b(x, y) sin(4πf0 x + φ(x, y))
        - (1/2) b(x, y) sin(φ(x, y)),    (10)

in which the only low frequency term is -(1/2) b(x, y) sin(φ(x, y)). Processing Eq. (10) with the low-pass filter gives

    g2(x, y) = -(1/2) b(x, y) sin(φ(x, y)).    (11)

Divide Eq. (11) by Eq. (9) to obtain

    g2(x, y) / g1(x, y) = [-(1/2) b(x, y) sin(φ(x, y))] / [(1/2) b(x, y) cos(φ(x, y))] = -tan(φ(x, y)).    (12)

So we can obtain

    φ(x, y) = tan⁻¹(-g2(x, y) / g1(x, y)),    (13)

where φ(x,y) represents the phase value at point (x,y). The phase value found by the above equation is the same as that found in Eq. (5). The phase value calculated by Eq. (13) lies in (-π/2, π/2) and is a wrapped value: whenever the actual phase falls below -π/2, multiples of π are added, and whenever it exceeds π/2, multiples of π are subtracted, until the value lies in (-π/2, π/2). So the value calculated by Eq. (13) is not yet the required phase value. Let the actual phase value of the object at point (x,y) be denoted by φ0(x,y). The relation between φ0(x,y) and φ(x,y) can be expressed as

    φ0(x, y) = nπ + φ(x, y),    (14)

where n is an integer. To recover φ0(x,y) from φ(x,y), phase unwrapping approaches [21-23] can be adopted. An illustrative example of the relation between the wrapped and unwrapped values is depicted in Fig. 5.

Fig. 5. The relation between the wrapped and unwrapped values of φ(x,y). (a) The wrapped φ. (b) The unwrapped φ. (c) The comparison between the wrapped and unwrapped φ.

The phase value φr(x,y) of the reference plane can be obtained in the same manner. The phase deviation between the test object and the reference plane at point (x,y) is calculated by

    Δφ(x, y) = φ0(x, y) - φr(x, y).    (15)
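As a concrete illustration of Eqs. (8)-(14), the per-row demodulation can be sketched in C as below. This is a minimal sketch under stated assumptions, not the authors' implementation: the low-pass step is approximated by a plain moving average over one fringe period rather than the Parks-McClellan bidirectional design of Section 4, the unwrapping is the simplest 1-D scan, and N and FRINGE_PERIOD are assumed constants.

    /* Sketch of the proposed demodulation, Eqs. (8)-(14), for one image row.
       Low-pass filtering is approximated by a moving average whose window
       spans one fringe period, which suppresses the f0 and 2*f0 terms.     */
    #include <math.h>

    #define N             512   /* samples per row (assumed)                */
    #define FRINGE_PERIOD 12    /* T0 in pixels, cf. Table 1 (assumed)      */

    static void moving_average(const double in[N], double out[N], int w)
    {
        for (int x = 0; x < N; x++) {
            double sum = 0.0;
            int cnt = 0;
            for (int k = x - w / 2; k <= x + w / 2; k++)
                if (k >= 0 && k < N) { sum += in[k]; cnt++; }
            out[x] = sum / cnt;
        }
    }

    void demodulate_row(const double g[N], double phi0[N])
    {
        double gc[N], gs[N], g1[N], g2[N];
        const double f0 = 1.0 / FRINGE_PERIOD;

        /* Eqs. (8) and (10): multiply the row by the cos and sin carriers. */
        for (int x = 0; x < N; x++) {
            gc[x] = g[x] * cos(2.0 * M_PI * f0 * x);
            gs[x] = g[x] * sin(2.0 * M_PI * f0 * x);
        }

        /* Eqs. (9) and (11): keep only the low frequency terms.           */
        moving_average(gc, g1, FRINGE_PERIOD);  /* ->  (1/2) b cos(phi)    */
        moving_average(gs, g2, FRINGE_PERIOD);  /* -> -(1/2) b sin(phi)    */

        double phi_prev = 0.0;
        int n = 0;                              /* n of Eq. (14)           */
        for (int x = 0; x < N; x++) {
            /* Eq. (13): wrapped phase in (-pi/2, pi/2).                   */
            double phi = atan(-g2[x] / g1[x]);
            /* Eq. (14): simplest 1-D unwrap, choose n so that the phase
               stays continuous with the previous sample.                  */
            while (phi + n * M_PI < phi_prev - M_PI / 2.0) n++;
            while (phi + n * M_PI > phi_prev + M_PI / 2.0) n--;
            phi0[x] = phi + n * M_PI;
            phi_prev = phi0[x];
        }
    }

Running demodulate_row on a row of the object image and a row of the reference image and subtracting the results gives Δφ of Eq. (15); note that only multiplications, a short filter, and one arctangent per pixel are involved, with no Fourier transform.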

For simplicity, we use the notation Δφ to represent Δφ(x,y) hereafter. The relation between Δφ and the surface height h can then be derived from the geometry of Fig. 3, as follows.

Fig. 6. The camera model used in this study.


Fig. 7. The structure of an anti-causal, zero-phase bidirectional filter.

From Fig. 3,

    tan θp = mn / ma,    tan θc = mn / mb
        ⇒ ab = mn · (tan θp + tan θc) / (tan θp · tan θc)
        ⇒ h = mn = ab · tan θp / (1 + tan θp / tan θc),    (16)

where mn represents the Euclidean distance from point m to point n; the same rule applies to ma and mb. Based on Fig. 6, tan θc in Eq. (16) can be rewritten as

    tan θc = Fx / |X - Ix|,    (17)

and Eq. (16) can be rewritten as

    h = ab · tan θp / (1 + tan θp · (|X - Ix| / Fx)).    (18)

The displacement distance ab of the phase movement on the reference plane can be expressed as

    ab = Δφ · (T0 / 2π),    (19)

where T0 is the periodic distance of the grating pattern on the reference plane. The following equation is generated from Eqs. (18) and (19):

    h = Δφ · (T0 / 2π) · tan θp / (1 + tan θp · (|X - Ix| / Fx)).    (20)

The CCD camera needs to be calibrated before measuring the 3-D contour data of the test object. Suppose the origin (0,0,0) of the CCD coordinate system is the projection center, and (0,0,F) is the coordinate of the image center (Ix,Iy), where F = (Fx,Fy) is the focal length of the camera, as shown in Fig. 6. The studies in [24-28] are good references for obtaining the parameters of the CCD camera, including the image center (Ix,Iy), the focal length F = (Fx,Fy), the distance L0, and the projection angle θp (L0 and θp are depicted in Fig. 3). After obtaining these parameters, the following equations can be generated (to simplify the derivation of the 3-D coordinates, the orientation of the camera must be adjusted so that the image plane is parallel to the reference plane R; the method is presented in Section 4):

    x = ((X - Ix) / Fx) · (z + L0),    (21)

    y = ((Y - Iy) / Fy) · (z + L0).    (22)

From Eq. (20), the following equation is generated:

    z = L0 + h = L0 + Δφ · (T0 / 2π) · tan θp / (1 + tan θp · (|X - Ix| / Fx)).    (23)

The (x,y,z) obtained through Eqs. (21)-(23) is the desired 3-D coordinate.
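Eqs. (20)-(23) combine into one short routine that maps a pixel and its unwrapped phase deviation to a 3-D point. The sketch below is illustrative only; it transcribes the paper's equations literally and borrows the calibrated values reported later in Table 1 as assumed constants.

    /* Pixel (X, Y) with phase deviation dphi -> 3-D point (x, y, z),
       following Eqs. (20)-(23) as printed.  The constants are the
       calibration values of Table 1, used here purely for illustration. */
    #include <math.h>

    #define THETA_P (30.64 * M_PI / 180.0) /* projection angle (rad)       */
    #define T0        0.548                /* grating period on R (mm)     */
    #define IX      330.55                 /* image center (pixels)        */
    #define IY      233.19
    #define FX     1337.67                 /* focal lengths (pixels)       */
    #define FY     1725.56
    #define L0       60.759                /* camera-to-plane distance (mm) */

    void pixel_to_3d(double X, double Y, double dphi,
                     double *x, double *y, double *z)
    {
        double tp = tan(THETA_P);

        /* Eq. (20): height relative to the reference plane. */
        double h = dphi * (T0 / (2.0 * M_PI)) * tp
                 / (1.0 + tp * fabs(X - IX) / FX);

        /* Eq. (23). */
        *z = L0 + h;

        /* Eqs. (21) and (22). */
        *x = (X - IX) / FX * (*z + L0);
        *y = (Y - IY) / FY * (*z + L0);
    }

Applied to every pixel of the unwrapped Δφ map, this routine produces the wire-frame surfaces shown later in Figs. 15 and 16.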


The bidirectional anti-causal filter shown in Fig. 7 is utilized in this study to design the low-pass filter. Here X(z) is the z-transform of a sequence x(n), X(1/z) is the z-transform of the time-reversed sequence, and H(z) is the transfer function of the filter. When |z| = 1, i.e. z = e^jω, the output reduces to X(e^jω)·|H(e^jω)|². The filter is composed of two Finite Impulse Response (FIR) filters. Although an Infinite Impulse Response (IIR) filter requires a lower order of computation than an FIR filter, it produces phase distortion owing to its lack of the linear phase property. The bidirectional filter, in contrast, is provably zero-phase, so no compensating signal needs to be injected into the system to avoid unnecessary distortion. The anti-causal pass poses no practical difficulty here, since all the necessary signal samples become available as soon as an image is captured. Fig. 8 compares signals filtered by a single FIR filter and by the bidirectional filter; the latter obtains the better result, since zero phase distortion is achieved. References regarding the detailed design of low-pass filters are available in related studies on digital signal processing [21-23].

Fig. 8. The comparison between the FIR filter and the bidirectional filter. (a) Original signal. (b) Filtered by the FIR filter. (c) Filtered by the bidirectional filter. (d) The comparison of (a), (b) and (c).
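The forward and backward structure of Fig. 7 can be sketched as follows. This is a generic illustration, assuming an arbitrary FIR kernel h[] (the actual system uses the 85th-order Parks-McClellan design of Table 2). Filtering the row, reversing it in time, filtering again, and reversing back realizes the magnitude response |H(e^jω)|² with zero net phase, apart from edge transients.

    /* Zero-phase bidirectional FIR filtering (Fig. 7):
       out = reverse(h * reverse(h * in)).
       The two passes cancel the phase response, leaving |H|^2.          */
    #define N 512   /* samples per row (assumed)                         */
    #define M 86    /* FIR taps: order 85, cf. Table 2                   */

    static void fir(const double h[M], const double in[N], double out[N])
    {
        for (int x = 0; x < N; x++) {
            double acc = 0.0;
            for (int k = 0; k < M; k++)
                if (x - k >= 0)
                    acc += h[k] * in[x - k];   /* causal convolution     */
            out[x] = acc;
        }
    }

    static void reverse(double a[N])
    {
        for (int i = 0, j = N - 1; i < j; i++, j--) {
            double t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }

    void filtfilt(const double h[M], const double in[N], double out[N])
    {
        double tmp[N];
        fir(h, in, tmp);    /* X(z) H(z)                                 */
        reverse(tmp);       /* time reversal                             */
        fir(h, tmp, out);   /* second pass applies H once more           */
        reverse(out);       /* net result: zero-phase |H|^2 filtering    */
    }

The same routine serves both low-pass steps of Eqs. (9) and (11), one call per image row for each of the cosine and sine products.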

4. Experimental results

As shown in Fig. 9, a coin, together with a flat plate fabricated for assessing the accuracy of the proposed approach, are taken as the test objects. The CCD camera and the laser light projector are arranged as illustrated in Fig. 3, and the experimental setup is constructed as shown in Fig. 10. The calibration pattern shown in Fig. 11 is located on a flat plate treated as the reference plane.

Fig. 9. The coin and the flat plate used as the test objects.

Fig. 10. A picture of the 3-D measuring system constructed for the experiment.

Fig. 11. The pattern used for calibrating camera parameters.

Table 1
The parameters obtained after calibrating the CCD camera

θp            30.64 degrees
T0 (on R)      0.548 mm
T0 (on CCD)   12.059 pixels
Ix            330.55 pixels
Iy            233.19 pixels
Fx           1337.67 pixels
Fy           1725.56 pixels
L0            60.759 mm


Table 2
The specifications of the designed filter

Filter type             Parks-McClellan
Order number            85
Sampling frequency      1.0 Hz
Pass band frequency     0.035 Hz
Stop band frequency     0.06 Hz
Pass band ripple        0.1 dB
Stop band attenuation   40.0 dB

Fig. 12. Transformed fringes appear on the coin due to the changes of the coin surface depth.

Fig. 13. The spectral analyses of the fringe patterns. (a) The fringe patterns projected on the reference plane. (b) The spectral analysis of (a). (c) The fringe patterns projected on the coin. (d) The spectral analysis of (c).

The calibration pattern is used to calibrate the camera parameters, and also to check whether the image plane of the camera is parallel to the reference plane. The check proceeds by repeated trials, as follows. The camera is placed in front of the calibration pattern and an image is taken. If the image plane is parallel to the reference plane, the parallel lines of the pattern appear in the image plane as parallel lines; otherwise, they appear as lines intersecting at vanishing points. The positions of the vanishing points are then calculated and checked to see whether they are far

from the image center or not. If they are, the orientation of the camera is taken as fixed and the calibration procedure proceeds; otherwise, the orientation of the camera is adjusted and the test is repeated. The calibration procedure for the CCD camera and the light projector is then carried out, and the related calibration parameters of the constructed system are obtained, including the projection angle θp, the grating period T0, the image center of the CCD camera (Ix,Iy), the focal length (Fx,Fy), and L0, as listed in Table 1.

Owing to the changing depth of the coin surface, transformed fringes with sinusoidal intensity distribution appear on the coin, as shown in Fig. 12. The fringe patterns projected on the reference plane and on the coin surface are further processed by spectral analysis, and the results are depicted in Fig. 13. It can be seen from Fig. 13 that the spectral peaks of both fringe patterns appear at 0.083 Hz, which means that the frequency of the projected fringe is f0 = 1/T0 = 0.083 Hz. The specifications of the filter designed in this study are presented in Table 2, and the frequency response of the designed filter is illustrated in Fig. 14.

Fig. 14. The frequency responses of the designed filter.
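The peak detection underlying Fig. 13 amounts to locating the strongest non-DC bin of the row spectrum. A minimal sketch follows, again with a naive DFT for clarity and an assumed row length N; with the sampling frequency normalized to 1, the returned value for the constructed system would be about 0.083, i.e. 1/12.059, matching T0 in Table 1.

    /* Estimate the fringe frequency f0 as the strongest non-DC peak of the
       magnitude spectrum of one image row (cf. Fig. 13).  Naive DFT.      */
    #include <math.h>

    #define N 512   /* samples per row (assumed) */

    double estimate_f0(const double g[N])
    {
        int best_k = 1;
        double best_mag = -1.0;

        /* Scan bins 1 .. N/2 - 1: skip DC and the mirrored half.  In
           practice a few bins near DC may also need to be excluded if the
           background term a(x,y) is strong.                               */
        for (int k = 1; k < N / 2; k++) {
            double re = 0.0, im = 0.0;
            for (int n = 0; n < N; n++) {
                re += g[n] * cos(2.0 * M_PI * k * n / N);
                im -= g[n] * sin(2.0 * M_PI * k * n / N);
            }
            double mag = re * re + im * im;
            if (mag > best_mag) { best_mag = mag; best_k = k; }
        }
        return (double)best_k / N;   /* cycles per sample */
    }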


Fig. 15. The 3-D coordinates of the coin are drawn in a wire frame.

The phase values are obtained after the calculations and filtering described above are applied to the fringe patterns. The 3-D coordinates of the coin surface are then obtained by unwrapping the phase values and substituting them into the equations derived previously. The 3-D coordinates of the coin, drawn as a wire frame, are shown in Fig. 15. The program was written in the C language and run on a PC-486 personal computer. The processing time for

Fig. 16. The 3-D coordinates of the flat plate are drawn in a wire frame.


measuring the 3-D coordinates of the test coin is less than 3 s. If a real time application is considered, e.g. 3-D modeling of a test object with dynamic surfaces, DSP chips for the filtering process and a faster computer, such as a workstation, would be required to increase the performance of the system.

To measure the accuracy of the proposed approach, the 3-D coordinates of the flat plate are first calculated using our method and drawn as a wire frame, as shown in Fig. 16. They are next fitted to an ideal plane equation by the Least Mean Square Error (LMSE) technique. Finally, the error, measured in distance per pixel, is calculated to be 0.005 mm, which is accurate enough for most 3-D measurement applications.
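The LMSE flatness assessment can be reproduced with an ordinary least-squares plane fit. A minimal sketch, not the authors' code: it fits z = a·x + b·y + c to the measured points by solving the 3x3 normal equations with Cramer's rule, and returns the RMS vertical residual, which for a plate nearly parallel to the image plane approximates the point-to-plane distance.

    /* Fit z = a*x + b*y + c to n measured 3-D points by least squares
       (LMSE) and return the RMS residual of the fit.                   */
    #include <math.h>
    #include <stddef.h>

    double plane_fit_rms(const double *x, const double *y, const double *z,
                         size_t n, double *a, double *b, double *c)
    {
        /* Accumulate the normal equations (X^T X) p = X^T z. */
        double sxx = 0, sxy = 0, sx = 0, syy = 0, sy = 0, sn = (double)n;
        double sxz = 0, syz = 0, sz = 0;
        for (size_t i = 0; i < n; i++) {
            sxx += x[i] * x[i];  sxy += x[i] * y[i];  sx += x[i];
            syy += y[i] * y[i];  sy += y[i];
            sxz += x[i] * z[i];  syz += y[i] * z[i];  sz += z[i];
        }

        /* Solve the 3x3 system by Cramer's rule. */
        double det =  sxx * (syy * sn - sy * sy)
                    - sxy * (sxy * sn - sy * sx)
                    + sx  * (sxy * sy - syy * sx);
        *a = ( sxz * (syy * sn - sy * sy)
             - sxy * (syz * sn - sy * sz)
             + sx  * (syz * sy - syy * sz)) / det;
        *b = ( sxx * (syz * sn - sy * sz)
             - sxz * (sxy * sn - sx * sy)
             + sx  * (sxy * sz - syz * sx)) / det;
        *c = ( sxx * (syy * sz - syz * sy)
             - sxy * (sxy * sz - syz * sx)
             + sxz * (sxy * sy - syy * sx)) / det;

        /* RMS of the vertical residuals z - (a x + b y + c). */
        double sse = 0;
        for (size_t i = 0; i < n; i++) {
            double r = z[i] - (*a * x[i] + *b * y[i] + *c);
            sse += r * r;
        }
        return sqrt(sse / (double)n);
    }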


5. Conclusions

3-D measuring systems find wide industrial application in object modeling, medical science, CAD/CAM, and the development of multi-media and virtual reality systems. In this study, we developed methods to implement a high precision 3-D measuring system. The advantages of the proposed approach include the following:


1. the measuring speed is fast, making the method suitable for on-line spot measuring, e.g. the monitoring of the human body or on-line detection;
2. only a single image is required to obtain the necessary phases, thus avoiding the errors caused by mechanical vibrations;
3. every picture element can be considered a sample point, thus achieving a high image resolution; and
4. the result is accurate, as revealed by the experimental results.


The CCD camera can observe only the front image of the test object; the back and side images of the object are not accessible. In addition, the included angle between the projector and the CCD camera creates a shadow area on the test object. These problems could be solved by putting the test object on a turning stage and allocating two or more light projectors while the image taking proceeds. However, this is beyond the scope of this paper and requires further study and discussion.


References

[1] M. Ishii, S. Sakane, M. Kakikura, Y. Mikami, A 3-D sensor system for teaching robot paths and environments, Int. J. Robotics Research 6 (1) (1987) 45-59.
[2] K. Ikeuchi, Determining a depth map using a dual photometric stereo, Int. J. Robotics Research 6 (1) (1987) 15-31.
[3] S.W. Chen, A.K. Jain, Strategies of multi-view and multi-matching for 3-D object recognition, Image Understanding 57 (1) (1993) 121-130.
[4] P.J. Besl, Active optical range imaging sensors, Machine Vision and Applications 1 (1988) 127-152.
[5] M. Maruyama, S. Abe, Range sensing by projecting multiple slits with random cuts, IEEE Trans. PAMI 15 (6) (1993) 647-651.
[6] P. Vuylsteke, A. Oosterlinck, Range image acquisition with a single binary-encoded light pattern, IEEE Trans. PAMI 12 (2) (1990) 148-164.
[7] N. Shrikhande, G. Stockman, Surface orientation from a projected grid, IEEE Trans. PAMI 11 (6) (1989) 650-655.
[8] G. Hu, G. Stockman, 3-D surface solution using structured light and constraint propagation, IEEE Trans. PAMI 11 (4) (1989) 390-402.
[9] A.C. Sanderson, L.E. Weiss, S.K. Nayar, Structured highlight inspection of specular surfaces, IEEE Trans. PAMI 10 (1) (1988) 44-55.
[10] Y.F. Wang, A. Mitiche, J.K. Aggarwal, Computation of surface orientation and structure of objects using grid coding, IEEE Trans. PAMI 9 (1) (1987) 129-137.
[11] K.L. Boyer, A.C. Kak, Color-encoded structured light for rapid active ranging, IEEE Trans. PAMI 9 (1) (1987) 14-28.
[12] K. Sato, H. Yamamoto, S. Inokuchi, Tuned range finder for high precision, Proc. 8th Int. Conf. on Pattern Recognition, 1986, pp. 1168-1171.
[13] S. Inokuchi, K. Sato, F. Matsuda, Range imaging system for 3-D object recognition, Proc. Int. Conf. on Pattern Recognition, 1984, pp. 806-808.
[14] L. Salbut, K. Patorski, Polarization phase shifting method for Moire interferometry and flatness testing, Applied Optics 29 (10) (1990) 1471-1473.
[15] S. Bahl, J.A. Liburdy, Three-dimensional image reconstruction using interferometric data from a limited field of view with noise, Applied Optics 30 (29) (1991) 4218-4226.
[16] S.W. Kim, H.G. Park, Moire topography by slit beam scanning, Applied Optics 31 (28) (1992) 6157-6161.
[17] D.M. Meadows, W.O. Johnson, J.B. Allen, Generation of surface contours by Moire patterns, Applied Optics 9 (4) (1970) 942-947.
[18] D.C.D. Hung, 3-D scene modeling by sinusoid encoded illumination, Image and Vision Computing 11 (5) (1993) 251-256.
[19] M. Takeda, H. Ina, S. Kobayashi, Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry, J. Opt. Soc. Am. 72 (1) (1982) 156-160.
[20] R.K. Nagle, E.B. Saff, Fundamentals of Differential Equations and Boundary Value Problems, Addison-Wesley, New York, 1993.
[21] D.J. DeFatta, J.G. Lucas, Digital Signal Processing: A System Design Approach, Wiley, New York, 1988.
[22] W.D. Stanley, G.R. Dougherty, R. Dougherty, Digital Signal Processing, Reston Publishing Company, Virginia, 1984.
[23] A.V. Oppenheim, R.W. Schafer, Discrete-Time Signal Processing, Prentice-Hall, New Jersey, 1989.
[24] W. Chen, B.C. Jiang, 3-D camera calibration using vanishing point concept, Pattern Recognition 24 (1) (1991) 57-67.
[25] R.Y. Tsai, A versatile camera calibration technique for high-accuracy 3-D machine vision metrology using off-the-shelf TV cameras and lenses, IEEE Journal of Robotics and Automation RA-3 (4) (1987) 323-344.
[26] J.L. Crowley, P. Bobet, C. Schmid, Auto-calibration by direct observation of objects, Image and Vision Computing 12 (1) (1993) 67-81.
[27] J. Weng, P. Cohen, M. Herniou, Camera calibration with distortion models and accuracy evaluation, IEEE Trans. PAMI 14 (10) (1992) 965-980.
[28] Y. Liu, T.S. Huang, O.D. Faugeras, Determination of camera location from 2-D to 3-D line and point correspondences, IEEE Trans. PAMI 12 (1) (1990) 28-37.