ISPRS Journal of Photogrammetry and Remote Sensing 74 (2012) 101–109
Range camera self-calibration with scattering compensation

Derek D. Lichti *, Xiaojuan Qi, Tanvir Ahmed

Department of Geomatics Engineering, The University of Calgary, 2500 University Dr. NW, Calgary, AB, Canada T2N 1N4

* Corresponding author. Tel.: +1 403 210 9495; fax: +1 403 284 1980. E-mail address: [email protected] (D.D. Lichti).

Article info

Article history: Received 9 July 2012; Received in revised form 18 September 2012; Accepted 21 September 2012; Available online 23 October 2012

Keywords: Range camera; Geometric self-calibration; Error modelling; Scattering compensation

Abstract

Time-of-flight range camera data are prone to the scattering bias caused by multiple internal reflections of light received from a highly reflective object in the camera's foreground, which induce a phase shift in the light received from background targets. The corresponding range bias can have serious implications for the quality of data of captured scenes as well as for the geometric self-calibration of range cameras. In order to minimise the impact of the scattering range biases, the calibration must be performed over a planar target field rather than a more desirable 3D target field. This significantly impacts the quality of the rangefinder offset parameter estimation due to its high correlation with the camera perspective centre position. In this contribution a new model to compensate for scattering-induced range errors is proposed that allows range camera self-calibration to be conducted over a 3D target field. Developed from experimental observations of scattering behaviour under specific scene conditions, it comprises four new additional parameters that are estimated in the self-calibrating bundle adjustment. The results of experiments conducted on five range camera datasets demonstrate the model's efficacy in compensating for the scattering error without compromising model fidelity. It is further demonstrated that it actually reduces the rangefinder offset–perspective centre correlation and that its use with a 3D target field is the preferred method for calibrating narrow field-of-view range cameras.

© 2012 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.

1. Introduction

Time-of-flight range cameras directly measure 3D co-ordinates of objects within their field of view at video frame rates. Photonic mixer device cameras, the subject of this study, illuminate the scene with amplitude-modulated NIR light emitted from an array of integrated LEDs. The backscattered signal is focused onto a solid-state detector where it is synchronously demodulated at each detector site to obtain the phase difference and, hence, the range. See Lange and Seitz (2001) for a complete description of the principles involved. Range cameras offer great potential for engineering measurement applications such as structural deformation measurement, human-computer interaction and robotics, to name a few. Their calibration is an important quality assurance process in order to maximise the accuracy of range-camera derived measurements.

The geometric calibration of range cameras such as the SwissRanger SR4000 and the PMD[vision] CamCube 3.0 has been addressed by a number of researchers. The self-calibration method offers many advantages for this purpose (Lichti and Kim, 2011). The many variations of this approach can be summarised as belonging to one of two categories: two-step calibration or one-step integrated calibration.


Two-step methods, in which the camera-lens calibration is performed first followed by range error calibration, have been reported by Kahlmann and Ingensand (2008), Karel (2008), Chiabrando et al. (2009), Robbins et al. (2009), Boehm and Pattinson (2010) and Lindner et al. (2010). In the one-step integrated method (Lichti et al., 2010; Shahbazi et al., 2011) both processes are performed in the same self-calibration adjustment. In either case the calibration of the range measurements is conducted by imaging a planar surface, which may be featureless or comprise a set of signalised targets, at normal incidence. These restrictions on imaging network geometry lead to strong correlation between the range camera exterior orientation parameters (EOPs) and the rangefinder offset parameter, as explained in Lichti and Kim (2011). Naturally, there is much interest in reducing this correlation since the rangefinder offset is a key interior orientation parameter (IOP) that can influence the accuracy of range camera data. The inclusion of additional object space distance constraints in the self-calibration network has been tested but does not effectively reduce the correlation, regardless of whether the constraints are imposed between camera exposure stations or between object points in the target plane (Lichti and Qi, 2012). A 3D calibration range could likely achieve the de-correlation but its use has not yet been possible due to range biases caused by the internal light scattering phenomenon.




The cause of this error is the internal reflection of light received from a bright, foreground object that contributes to the light received directly from background targets, which biases both the amplitude and the phase of the measured signal from background objects (Mure-Dubois and Hügli, 2007a).

Many have observed and modelled the effect. Mure-Dubois and Hügli (2007b) model the scattering effect with a shift-invariant point spread function (PSF) comprising a set of additive Gaussian functions whose parameters are chosen by trial and error by an expert user. The scattering range errors are then compensated by de-convolution. Kavli et al. (2008) used a small circular reflector to measure the scattering effect and report non-Gaussian behaviour. They propose an iterative compensation algorithm to correct the effect. Karel et al. (2012) also used a small circular object to develop shift-variant and shift-invariant PSFs for image restoration in a SwissRanger SR3000. Chiabrando et al. (2010) performed a two-plane experiment on a SwissRanger SR4000 to study the scattering effect of the foreground plane on the background plane. They conclude that SR4000 measurements are not affected by scattering caused by a foreground object. Jamtsho and Lichti (2010) performed a different version of the two-plane experiment and report scattering-caused range biases in both SR3000 and SR4000 cameras, though the effects in the latter are much smaller. Their spline-based compensation model was built from dense experimental data captured at different distances and with different foreground areas. In the scattering compensation method of Dorrington et al. (2011), multiple measurements of the same scene are captured at different modulation frequencies. This permits separation of strong signals from the weak signals due to scattering, multipath and mixed pixels, thereby allowing recovery of the unbiased scene.

Here an empirical modelling procedure is proposed based on observations from the two-plane experiment of Jamtsho and Lichti (2010). It has been developed for the purpose of system self-calibration under specific conditions, namely a target field comprising two staggered, parallel planes. The aim of its development was to allow the extension of the target field from 2D to 3D, thereby improving the self-calibration solution by reducing the correlation mechanism between the rangefinder offset and the EOPs. The basic idea is to augment the range observation equations for the background objects with a model that compensates for the scattering bias due to the presence of the foreground object. The empirical model parameters are simultaneously estimated along with all other model variables in the self-calibration solution. The proposed model is not intended for use in object reconstruction due to the high dependence of the scattering error on the scene.

In the next section the functional models for the extended collinearity condition used in range camera self-calibration are presented. This comprises both the observation equations and the required systematic error models, including the new scattering-error model. Details of experiments conducted to test the efficacy of the new model are then provided. This is followed by the presentation and analyses of the results as well as concluding remarks.
2. Models

2.1. Observation models

The geometric model for a range camera is an extension of the collinearity condition in which an object point, its homologous image point and the camera's perspective centre (PC) lie on a straight line and the length of that line is the observed range. It comprises two image point co-ordinate equations

x_{ijk} + e_{x_{ijk}} = x_{p_k} - c_k \frac{U_{ij}}{W_{ij}} + \Delta x_{ijk}    (1)

y_{ijk} + e_{y_{ijk}} = y_{p_k} - c_k \frac{V_{ij}}{W_{ij}} + \Delta y_{ijk}    (2)

and one range equation

\rho_{ijk} + e_{\rho_{ijk}} = \sqrt{(X_i - X^c_j)^2 + (Y_i - Y^c_j)^2 + (Z_i - Z^c_j)^2} + \Delta\rho_{ijk}    (3)

where (x, y)_{ijk} are the observed co-ordinates of point i in image j captured with camera k; (X, Y, Z)_i are the co-ordinates of object point i; (X^c, Y^c, Z^c)_j are the object space co-ordinates of the PC of image j; (x_p, y_p, c)_k are the interior orientation parameters of sensor k, namely the principal point offset and principal distance; \rho_{ijk} is the observed range; (e_x, e_y, e_\rho)_{ijk} are the additive random errors and (\Delta x, \Delta y, \Delta\rho)_{ijk} are the additive systematic error correction terms. Models for both instrumental systematic errors and scattering systematic errors are considered here. The auxiliary quantities (U, V, W)_{ij} are computed from the rigid body transformation from object space to image space j

\begin{pmatrix} U \\ V \\ W \end{pmatrix}_{ij} = R_3(\kappa_j)\, R_2(\phi_j)\, R_1(\omega_j) \begin{pmatrix} X_i - X^c_j \\ Y_i - Y^c_j \\ Z_i - Z^c_j \end{pmatrix}    (4)

in which (\omega, \phi, \kappa)_j are the camera orientation angles and R_1, R_2 and R_3 are the fundamental rotation matrices.
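As an illustration, the following Python sketch evaluates Eqs. (1)–(4) in the forward direction (predicted image co-ordinates and range for one object point), ignoring the random error and correction terms. The function and variable names are illustrative only, not taken from the paper, and one common photogrammetric sign convention for the rotation matrices is assumed.

    import numpy as np

    def rot_x(w):   # R1: fundamental rotation about the X axis (one common convention)
        c, s = np.cos(w), np.sin(w)
        return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

    def rot_y(p):   # R2: fundamental rotation about the Y axis
        c, s = np.cos(p), np.sin(p)
        return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

    def rot_z(k):   # R3: fundamental rotation about the Z axis
        c, s = np.cos(k), np.sin(k)
        return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

    def forward_model(X, Xc, angles, iops):
        """Predict (x, y, rho) for one object point, Eqs. (1)-(4), no corrections.

        X      : (3,) object point co-ordinates
        Xc     : (3,) perspective centre co-ordinates of the image
        angles : (omega, phi, kappa) orientation angles in radians
        iops   : (xp, yp, c) principal point offset and principal distance
        """
        omega, phi, kappa = angles
        xp, yp, c = iops
        # Eq. (4): rigid body transformation of the point into the image frame
        U, V, W = rot_z(kappa) @ rot_y(phi) @ rot_x(omega) @ (np.asarray(X) - np.asarray(Xc))
        # Eqs. (1) and (2): collinearity equations
        x = xp - c * U / W
        y = yp - c * V / W
        # Eq. (3): the observed range is the PC-to-point distance
        rho = np.linalg.norm(np.asarray(X) - np.asarray(Xc))
        return x, y, rho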

2.2. Instrumental systematic error models

The first group of systematic error models comprises additional parameters (APs) that are independent of the scene and camera operating conditions (e.g. warm-up): lens distortions, the rangefinder offset and periodic range errors, whose many possible sources are described in Godbaz et al. (2012). Though a general systematic error model is often presented in the literature, it is common practice that only those APs that have been identified as significant are included in the final calibration model. Many tools are available to decide which APs belong to this set, including graphical and statistical analyses of least-squares adjustment residuals; hypothesis testing of the AP estimates; analyses of correlation coefficients between key parameters; and information-theory measures such as the Akaike criterion. Here, only the parameters identified as significant for the tested cameras are listed:

\Delta x_{ijk} = \bar{x}_{ijk}\,(k_{1_k} r^2_{ijk} + k_{2_k} r^4_{ijk})    (5)

\Delta y_{ijk} = \bar{y}_{ijk}\,(k_{1_k} r^2_{ijk} + k_{2_k} r^4_{ijk})    (6)

\Delta\rho_{1_{ijk}} = d_{0_k} + d_{4_k} \sin\left(\frac{4\pi}{U}\rho_{ijk}\right) + d_{5_k} \cos\left(\frac{4\pi}{U}\rho_{ijk}\right) + d_{6_k} \sin\left(\frac{8\pi}{U}\rho_{ijk}\right) + d_{7_k} \cos\left(\frac{8\pi}{U}\rho_{ijk}\right)    (7)

where (k_1, k_2)_k are radial lens distortion coefficients; d_{0_k} is the constant rangefinder offset; (d_4–d_7)_k are periodic error coefficients and U is the unit length (half the modulating wavelength) of the range camera's light source and

\bar{x}_{ijk} = x_{ijk} - x_{p_k}    (8)

\bar{y}_{ijk} = y_{ijk} - y_{p_k}    (9)

r^2_{ijk} = \bar{x}^2_{ijk} + \bar{y}^2_{ijk}    (10)

The additional subscript (1) for \Delta\rho in Eq. (7) indicates that these terms belong to the first category of systematic errors. While Lindner et al. (2010) model reflectance dependence in their range error terms, no such variation is modelled here since homogeneous white photogrammetric targets are used for the calibration.
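A minimal sketch of the instrumental corrections of Eqs. (5)–(10) is given below, again with illustrative names that are not from the paper. The unit length U is assumed to follow its definition above as half the modulation wavelength, i.e. U = c_light / (2 f_m).

    import numpy as np

    C_LIGHT = 299792458.0  # speed of light, m/s

    def unit_length(fm_hz):
        """Unit length U (half the modulation wavelength), e.g. about 5.0 m at 30 MHz."""
        return C_LIGHT / (2.0 * fm_hz)

    def lens_corrections(x, y, xp, yp, k1, k2):
        """Radial lens distortion corrections, Eqs. (5), (6) and (8)-(10)."""
        xb, yb = x - xp, y - yp                  # Eqs. (8) and (9)
        r2 = xb**2 + yb**2                       # Eq. (10)
        dx = xb * (k1 * r2 + k2 * r2**2)         # Eq. (5)
        dy = yb * (k1 * r2 + k2 * r2**2)         # Eq. (6)
        return dx, dy

    def range_correction_group1(rho, d0, d4, d5, d6, d7, U):
        """Rangefinder offset plus periodic range errors, Eq. (7)."""
        return (d0
                + d4 * np.sin(4 * np.pi * rho / U) + d5 * np.cos(4 * np.pi * rho / U)
                + d6 * np.sin(8 * np.pi * rho / U) + d7 * np.cos(8 * np.pi * rho / U))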


2.3. The scattering systematic error model

The second category comprises terms that model the biases in the range measurements to a background (BG) object caused by the internal scattering of light received from a foreground (FG) object. The model was developed based on a number of observations about BG range error behaviour reported in the aforementioned two-plane experiment of Jamtsho and Lichti (2010):

- The range error depends on the FG–BG object separation;
- The range error depends on the distance to the FG object;
- The range error depends on the FG surface area;
- The range error strongly depends on the image distance from the FG–BG boundary measured in the direction normal to the boundary; and
- The range error depends on the lateral position in the image plane, in other words the distance from the image centre measured parallel to the FG–BG boundary.

Guided by these observations, the following assumptions have been made to develop the compensation model:

- The target field scene comprises only two nominally parallel planes;
- All range images of the target field are acquired orthogonal to the planes and in landscape format, with negligible roll (\kappa), tilt (\omega) and convergence (\phi) angles;
- The separation between the FG and the BG planes is constant during the image acquisition;
- The FG plane reflectivity properties are scale independent; and
- The range error due to scattering can be parameterized in terms of image-space model variables.

Based on these assumptions the error in observed ranges to BG targets due to scattering can be modelled as the sum of three terms:

\Delta\rho_{2_{ijk}} = a\,A_{jk}\,e^{-b\,d_{ijk}} + c\,\bar{x}^2_{ijk} + d\,d_{ijk}    (11)

The first term models two behaviours: the normalised FG area (A) dependence and the exponential decay with image distance from the FG–BG boundary (d). See Fig. 1 for the graphical definition of these variables. The area is expressed as the proportion of the image format spanned by the FG object and is therefore unitless. The second term models the quadratic range error behaviour in the transverse direction, i.e. nominally parallel to the FG–BG boundary (\bar{x}). The final term models additional linear behaviour as a function of d. The model of Eq. (11) does not include any range dependence, a fact that will be addressed in later discussion.
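As an illustration only, the scattering correction of Eq. (11) can be evaluated for a single BG target as in the sketch below. The parameter values in the example are made up for demonstration, not the estimates reported later in the paper, and consistent units for b and d are assumed.

    import numpy as np

    def scattering_correction(A, d, x_bar, a, b, c, d_lin):
        """Scattering range correction for a background target, Eq. (11).

        A     : normalised FG area (fraction of the image format, unitless)
        d     : image distance from the FG-BG boundary
        x_bar : lateral image position parallel to the boundary
        a, b, c, d_lin : the four scattering additional parameters
        """
        return a * A * np.exp(-b * d) + c * x_bar**2 + d_lin * d

    # Example with hypothetical values: a 40 mm boundary error for unit FG area that
    # decays with a 0.5 mm length constant, plus small quadratic and linear terms.
    drho2 = scattering_correction(A=0.3, d=1.0, x_bar=1.5, a=40.0, b=2.0, c=0.3, d_lin=1.5)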

Fig. 1. Scattering model variables.


The scattering-error additional parameters to be estimated are thus (a, b, c and d), and the range error model implemented for self-calibration is given by \Delta\rho_{ijk} = \Delta\rho_{1_{ijk}} + \Delta\rho_{2_{ijk}}. It is recognised that the scattering compensation model is specific to a particular target scene and camera and that no physical principle has been attributed to its parameters. The basis of its development was the behaviour reported by Jamtsho and Lichti (2010) as well as new experimental data presented herein. Figs. 2–4 show differences between the observed and computed ranges to background targets as functions of A, d and \bar{x}, with visible trends that validate the choice of model terms in Eq. (11). In these examples the computed ranges were determined from independently-measured target centre co-ordinates and the camera PC co-ordinates estimated by redundant space resection.

3. Experiment description

3.1. Range cameras

Three SwissRanger cameras were tested: two SR4000s (denoted cameras 3 and 4) featuring a lens with a principal distance of 5.8 mm and programmable modulation frequency; and one SR3000 (camera 0) with an 8 mm principal distance and the modulation frequency (fm) set to 20 MHz. Both models feature arrays of 176 × 144 pixels with 40 µm pixel pitch. Two calibrations were performed for each SR4000: Camera 3 was calibrated with the modulation frequency set to both 29 and 30 MHz, and Camera 4 was calibrated at 30 and 31 MHz. Thus five calibration datasets in total were captured. Details about the self-calibration datasets are given in Table 1.

3.2. Target fields and networks

Two similar target fields were constructed for the testing (Fig. 5). Each comprised two nominally parallel planes separated in depth by 0.76 m, the height of the desks that were tipped on their sides and served as the foreground object next to the wall background. This foreground–background separation was chosen as a practical trade-off between a large separation, which would be advantageous from the perspective of parameter de-correlation, and a small separation, which would achieve a more favourable distribution of range observations over the ambiguity interval. The approximate width and height of the target arrays were 3.2 m and 2.8 m, respectively.

Fig. 2. Background target range differences vs. normalised foreground area (A) and linear trend, camera 0, 20 MHz.



Fig. 5. Calibration target field.

Fig. 3. Background target range differences vs. distance from the FG–BG boundary (d) and exponential plus linear trend, camera 4, 30 MHz.

Fig. 4. Background target range differences vs. lateral position (\bar{x}) and quadratic trend, camera 3, 29 MHz.

Two sizes of white targets (150 mm and 240 mm diameter) mounted on a black background were used. The target field used to calibrate Cameras 0 and 3 had 86 targets while that used for Camera 4 had 67 targets. In each case the number of FG and BG targets was approximately equal.

Following Lichti et al. (2010), the self-calibration networks comprised both convergent images and normal images captured at different stand-off distances (Fig. 6).

The networks' convergence angle ranged from 93° to 114°. Two orthogonal roll-angle images were captured at each convergent station. A dense set of normal images was captured with exposure stations separated by approximately 250 mm in stand-off distance and 200 mm in height in order to obtain a large sample as a function of the parameters A and d. At each location a sequence of 25 images was captured and averaged. The integration time (tint) for each camera was fixed during the data capture.

The target centre measurement in the image plane was performed by precise ellipse fitting with eccentricity correction. The range to the target centre was determined by bilinear interpolation from the ranges of the four surrounding pixels. Only the x and y observations of targets appearing in the convergent images were incorporated into the self-calibration adjustment, whereas all three observations (x, y and \rho) made in the normal images were included.

The target centre co-ordinates were independently determined with a Leica HDS6100 terrestrial laser scanner (TLS). The two target fields were scanned from a stand-off distance of 8.0 m. After performing model-based target fitting (i.e. a circle fit constrained to lie in the target plane) the estimated precision of each centre was 0.5 mm in depth (Y), 0.8 mm in height (Z) and 1.7 mm in the transverse dimension (X).
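The bilinear range interpolation described above can be sketched as follows; the pixel indexing and the function name are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def range_at_target_centre(range_image, x, y):
        """Bilinearly interpolate the range at sub-pixel image position (x, y)
        from the four surrounding pixels of a (rows, cols) range image."""
        c0, r0 = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - c0, y - r0
        q = range_image
        return ((1 - fx) * (1 - fy) * q[r0, c0] + fx * (1 - fy) * q[r0, c0 + 1]
                + (1 - fx) * fy * q[r0 + 1, c0] + fx * fy * q[r0 + 1, c0 + 1])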

3.3. Self-calibration cases

Three self-calibration adjustments were performed for each dataset. They are distinguished by the presence or lack of a scattering model and by which range observations were included:

1. The normal image ranges to both BG and FG targets were included but with no scattering error modelling;
2. The normal image ranges to both BG and FG targets were included with scattering error modelling for the BG targets; and
3. Only the normal image ranges to FG targets were included, i.e. no BG target ranges were included.

Table 1
Summary of Case 1 self-calibration adjustment details.

Camera                        0                        3                    3                    4                    4
fm (MHz)                      20                       29                   30                   30                   31
tint (ms)                     5.3                      10.3                 10.3                 4.3                  4.3
# Normal images               88                       68                   65                   95                   100
# Convergent images           22                       23                   14                   18                   20
# Image points                5174                     5021                 4608                 5608                 5564
Total # range observations    4254                     3962                 3876                 4715                 4618
# FG range observations       2137                     2129                 1847                 2560                 2525
Case 1 degrees-of-freedom     13681                    13198                12361                15050                14823
Group 1 APs                   k1, d0, d4, d5, d6, d7   k1, k2, d0, d6, d7   k1, k2, d0, d6, d7   k1, k2, d0, d6, d7   k1, k2, d0, d6, d7

Fig. 6. Self-calibration network geometry.

Table 4
Self-calibration results, camera 3, 30 MHz.

                       Case 1                Case 2                Case 3
                       Estimate    σ         Estimate    σ         Estimate    σ
σ̂vρ (mm)              6.3                   6.0                   5.4
IOPs
  c (mm)               5.8851      0.0022    5.8777      0.0026    5.9004      0.0031
  d0 (mm)              18.3        0.5       22.2        0.6       17.3        0.6
  a (mm)                                     38.61       1.90
  b (mm⁻¹)                                   2.12        0.17
  c (mm⁻¹)                                   0.34        0.06
  d (unitless)                               1.56        0.27
  b⁻¹ (mm)                                   0.47
Correlation
  Mean d0–Yc           0.70                  0.67                  0.74
  Max d0–Yc            0.79                  0.76                  0.82
  Max Δρ2 APs–Yc                             0.27
  Max Δρ2 APs–c                              0.48
  Max Δρ2 APs–d0                             0.42
  a–b                                        0.57

Table 2
Self-calibration results, camera 0, 20 MHz.

                       Case 1                Case 2                Case 3
                       Estimate    σ         Estimate    σ         Estimate    σ
σ̂vρ (mm)              13.3                  12.5                  12.0
IOPs
  c (mm)               8.1681      0.0030    8.1391      0.0036    8.1362      0.0036
  d0 (mm)              69.7        1.4       68.2        1.4       44.9        2.9
  a (mm)                                     42.42       2.33
  b (mm⁻¹)                                   0.77        0.11
  c (mm⁻¹)                                   0.28        0.08
  d (unitless)                               0.14        0.51
  b⁻¹ (mm)                                   1.31
Correlation
  Mean d0–Yc           0.84                  0.82                  0.95
  Max d0–Yc            0.93                  0.91                  0.98
  Max Δρ2 APs–Yc                             0.13
  Max Δρ2 APs–c                              0.26
  Max Δρ2 APs–d0                             0.14
  a–b                                        0.68

Table 5
Self-calibration results, camera 4, 30 MHz.

                       Case 1                Case 2                Case 3
                       Estimate    σ         Estimate    σ         Estimate    σ
σ̂vρ (mm)              7.3                   7.1                   6.8
IOPs
  c (mm)               5.8726      0.0022    5.8755      0.0028    5.8871      0.0031
  d0 (mm)              20.4        0.6       24.3        0.7       23.6        0.7
  a (mm)                                     37.49       3.53
  b (mm⁻¹)                                   1.73        0.20
  c (mm⁻¹)                                   0.10        0.07
  d (unitless)                               4.16        0.30
  b⁻¹ (mm)                                   0.57
Correlation
  Mean d0–Yc           0.59                  0.59                  0.63
  Max d0–Yc            0.73                  0.71                  0.77
  Max Δρ2 APs–Yc                             0.22
  Max Δρ2 APs–c                              0.46
  Max Δρ2 APs–d0                             0.41
  a–b                                        0.74

Table 3
Self-calibration results, camera 3, 29 MHz.

                       Case 1                Case 2                Case 3
                       Estimate    σ         Estimate    σ         Estimate    σ
σ̂vρ (mm)              7.4                   7.0                   5.6
IOPs
  c (mm)               5.8829      0.0021    5.8734      0.0026    5.8837      0.0030
  d0 (mm)              18.5        0.6       23.1        0.7       20.8        0.8
  a (mm)                                     47.18       2.59
  b (mm⁻¹)                                   1.69        0.13
  c (mm⁻¹)                                   0.33        0.06
  d (unitless)                               1.90        0.31
  b⁻¹ (mm)                                   0.59
Correlation
  Mean d0–Yc           0.67                  0.65                  0.71
  Max d0–Yc            0.77                  0.75                  0.80
  Max Δρ2 APs–Yc                             0.24
  Max Δρ2 APs–c                              0.47
  Max Δρ2 APs–d0                             0.39
  a–b                                        0.55

The network datum definition was by inner constraints imposed on object point co-ordinates. The APs included in the group 1 error model were determined by using the aforementioned model identification strategies and are listed in Table 1.

4. Results and discussion

4.1. Model fit

The pertinent results from each calibration dataset are tabulated in Tables 2–6. The standard deviation of residuals shows that the inclusion of the scattering error model (Case 2) improves the model fit in all five datasets. Although the overall numerical improvement is small (<1 mm), graphical analyses of the BG range residuals clearly reveal that the modelling is indeed important to compensate for the scattering range errors. Residual plot examples from Cases 1 and 2 (without and with scattering compensation, respectively) are given in Figs. 7 and 8. Note that the un-modelled trends have opposite sign to that of the actual systematic error.

The Case 3 adjustments comprise only FG ranges, so the total number of ranges was only 48–55% that of the other cases. Clearly Case 3 gives the best results in terms of model fit for all datasets. On this basis alone it would appear that this is the preferred solution, but other evidence described below will contradict this conclusion.



The higher dispersion of the residuals in Case 2 relative to Case 3 may be due to more complex error behaviour that is not accounted for in the scattering compensation model.

Table 6
Self-calibration results, camera 4, 31 MHz.

                       Case 1                Case 2                Case 3
                       Estimate    σ         Estimate    σ         Estimate    σ
σ̂vρ (mm)              7.4                   7.0                   6.7
IOPs
  c (mm)               5.8742      0.0022    5.8743      0.0027    5.8830      0.0030
  d0 (mm)              19.0        0.6       22.2        0.7       22.6        0.7
  a (mm)                                     33.14       2.69
  b (mm⁻¹)                                   1.22        0.15
  c (mm⁻¹)                                   0.20        0.07
  d (unitless)                               4.06        0.29
  b⁻¹ (mm)                                   0.82
Correlation
  Mean d0–Yc           0.55                  0.54                  0.59
  Max d0–Yc            0.70                  0.68                  0.74
  Max Δρ2 APs–Yc                             0.21
  Max Δρ2 APs–c                              0.43
  Max Δρ2 APs–d0                             0.40
  a–b                                        0.68

Fig. 7. Background target range residuals vs. normalised FG area (A), camera 4, 31 MHz, without (top) and with (bottom) scattering compensation (Cases 1 and 2, respectively). Best-fit trend lines are shown for reference.

4.2. Scattering model parameters

Analysis of the Case 2 results reveals that the a parameter, which represents the scattering error at the FG–BG boundary for unit FG area, ranges between 3 and 5 cm. While the two sets of a and b parameters estimated for Cameras 3 and 4 appear to differ, the estimated correction models (Fig. 9) match very well at 95% confidence except near the FG–BG boundary (i.e. d < 0.75 mm), where the exponential part of the model is not well observed. Observations to background targets near the boundary were excluded from the adjustment if they were partially occluded by the foreground plane, which is simply a practical consequence of the need to use large targets. The length constant, b⁻¹, is short (0.5 to 0.8 mm) for all SR4000 datasets but the linear correction term is more significant for these cameras. This term is most significant in the Camera 4 datasets; its omission from the model results in a slight inflation of the range residuals due to the un-modelled linear trend, and the number of iterations required for convergence increases by three. The exponential model component of the SR3000 decays more slowly (length constant 1.3 mm) but the linear term d is not statistically significant at 95% confidence. The quadratic term c exhibits good repeatability between the datasets captured with Camera 3 but its effect is very small in all cases. The differences between the scattering model parameters of the two SR4000 cameras can be explained by the differences in their corresponding target fields, to which the scattering model is ultimately coupled.

A very important consideration in augmenting the range model with the scattering correction terms is whether or not this introduces undesirable correlations between any parameters. For all five datasets no large correlations between the Δρ2 model APs (a, b, c and d) and any other model variables were found, though there is moderate correlation between a and b and, in the case of the SR3000 dataset, moderate correlation between a and d (0.59) as well as between b and d (0.67). In all other datasets the magnitude of these correlations was 0.36 or less. The largest correlation among the remaining pairs of scattering model variables (a–c, b–c and c–d) was 0.17. In all SR4000 datasets the largest correlation with the three most pertinent parameters (c, d0 and Yc) is only 0.48. The reported correlations are with the d parameter, which is somewhat intuitive since this is in effect a linear scale factor like the principal distance, c, albeit for only a portion of each image. The SR3000 correlations are lower and are also not of concern. It can therefore be concluded that the addition of the scattering compensation model does not degrade the self-calibration solution quality. In fact, the inclusion of the scattering model actually slightly reduces the correlation between d0 and the PC depth position, Yc, which as mentioned at the outset is the mechanism that has motivated this investigation.
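The parameter correlations discussed above follow directly from the parameter covariance matrix of the bundle adjustment. A small, generic sketch (not the authors' code) of how a correlation coefficient is obtained from that matrix, with made-up numbers:

    import numpy as np

    def correlation(cov, i, j):
        """Correlation coefficient between parameters i and j,
        given the parameter covariance matrix from the adjustment."""
        return cov[i, j] / np.sqrt(cov[i, i] * cov[j, j])

    # Hypothetical 2x2 covariance block for (d0, Yc):
    cov = np.array([[0.36, 0.50],
                    [0.50, 1.00]])
    rho_d0_Yc = correlation(cov, 0, 1)   # about 0.83, a strong correlation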

Fig. 8. Background target range residuals vs. distance from the FG–BG boundary (d), camera 0, 20 MHz, without (top) and with (bottom) scattering compensation (Cases 1 and 2, respectively). Best-fit trend lines are shown for reference.

4.3. Interior orientation parameters

Detailed analyses have been made of the estimates of the principal distance c and rangefinder offset d0 since these parameters can be strongly affected by network geometry. The estimates of these parameters differ between the three adjustment cases to varying degrees for each dataset. The Case 1 d0 estimates are known to be biased due to the presence of the scattering error and are 3–5 mm smaller than those of Case 2 for the SR4000 datasets. The largest differences among d0 estimates are in the SR3000 dataset; the reason for this is discussed below. The precision estimates for the rangefinder offset are largely unaffected by the addition of the scattering range model and by the 'degradation' of the target field to a plane in Case 3. The exception is the SR3000, for which the d0 precision is inflated by a factor of two. The principal distance precision is only slightly inflated for adjustment Cases 2 and 3.
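Whether such between-case (and between-camera) differences are statistically significant is summarised in Tables 7 and 8. The paper does not spell out the test statistic; one common choice, assuming approximately uncorrelated estimates, is sketched below using the Case 1 and Case 2 d0 estimates for camera 3 at 30 MHz from Table 4.

    import numpy as np

    def difference_is_significant(est1, sigma1, est2, sigma2, z_crit=1.96):
        """Two-sided 95% significance test on the difference of two estimates,
        assuming the estimates are uncorrelated (an assumption, see text)."""
        z = abs(est1 - est2) / np.sqrt(sigma1**2 + sigma2**2)
        return z > z_crit

    # Camera 3, 30 MHz: d0 of Case 1 (18.3 +/- 0.5 mm) vs. Case 2 (22.2 +/- 0.6 mm)
    print(difference_is_significant(18.3, 0.5, 22.2, 0.6))   # True, matching the 'Y' in Table 7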



Table 7
Between-case parameter difference significance test results. Y indicates a difference is significant at 95% confidence, N indicates it is not.

                          Camera 0    Camera 3    Camera 3    Camera 4    Camera 4
                          20 MHz      29 MHz      30 MHz      30 MHz      31 MHz
Principal distance (c)
  Case 1–2                Y           Y           Y           N           N
  Case 2–3                N           Y           Y           Y           Y
  Case 1–3                Y           N           Y           Y           Y
Rangefinder offset (d0)
  Case 1–2                Y           Y           Y           Y           N
  Case 2–3                Y           Y           N           N           Y
  Case 1–3                Y           N           Y           Y           Y

Comparison of all pairs of these two parameters reveals that most (22 out of 30) between-case pairings are significantly different (Table 7). This highlights the impact of both the scattering error compensation and of network geometry. Both parameters are significantly affected by the presence of scattering-caused range biases. Furthermore, the weaker geometry of the planar target field also significantly influences these two parameters. The between-camera comparisons (Table 8) reveal very repeatable results as only 3 of 12 parameter pairings differ significantly. This demonstrates the repeatability of the scattering error and of the methodology.

The huge difference that exists between the SR3000 d0 estimates with and without the scattering model (Cases 2 and 3, respectively) can be traced to the camera's field-of-view (FOV). While the SR4000 has a relatively wide FOV (62.5° × 52.8°), that of the SR3000 is comparably narrow (47.5° × 39.6°). This considerably impacts the correlation between d0 and Yc (Lichti and Kim, 2011) when a planar target field is used. This can be seen not only in the Case 3 correlation coefficients but also in the d0 estimate and its precision. Thus, while Case 3 might seem like a viable calibration option for wider-angle range cameras, it clearly is not for narrow-angle devices.

4.4. Accuracy assessment

For each dataset at least 10% of the normal images were randomly selected and excluded from the self-calibration solution. Target co-ordinates were computed by radiation from this set of images using the independently-determined parameters from each of the three self-calibration cases. The parameters of the 3D rigid body transformation between these co-ordinates and the TLS co-ordinates were estimated and root mean square error (rmse) measures were computed to assess the accuracy of each calibration case (Table 9). The differences in accuracy metrics among the various cases and datasets are on the order of only a few millimetres. In all datasets the depth (Y) accuracy is slightly improved by adding the scattering compensation model. This dimension is largely determined by the range, so the slight improvement is attributed to the improved range error parameter estimates. (Though the rangefinder offset has been the primary focus of the analyses, the long-wavelength periodic error coefficients are also correlated to the PC co-ordinates.) For some datasets this improvement is accompanied by a slight degradation in the transverse dimensions, where the principal distance, which is also affected by the choice of model (cf. Table 7), plays a greater role in positioning. In summary it can be deduced that adding the four scattering model parameters does not greatly compromise the measurement accuracy.
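A generic sketch of the accuracy assessment step (not the authors' code): estimate the rigid body transformation between the camera-derived and TLS target co-ordinates by a least-squares, Procrustes-type fit with unit scale, then report per-axis rmse values of the residuals.

    import numpy as np

    def rigid_body_fit(src, dst):
        """Least-squares rotation R and translation t such that R @ src + t approximates dst.
        src, dst: (n, 3) arrays of homologous points."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    def per_axis_rmse(src, dst):
        """Rmse of the X, Y, Z residuals after the rigid body transformation."""
        R, t = rigid_body_fit(src, dst)
        res = dst - (src @ R.T + t)
        return np.sqrt((res**2).mean(axis=0))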

Fig. 9. Estimated scattering range error models and 95% confidence envelopes: (a) Camera 3; (b) Camera 4; (c) Camera 0. For all cases A = 0.5 and \bar{x} = 0.


Table 8
Between-camera parameter difference significance test results. Y indicates a difference is significant at 95% confidence, N indicates it is not.

                          Camera 3 (29 and 30 MHz)    Camera 4 (30 and 31 MHz)
Principal distance (c)
  Case 1                  N                           N
  Case 2                  N                           N
  Case 3                  Y                           N
Rangefinder offset (d0)
  Case 1                  N                           N
  Case 2                  N                           Y
  Case 3                  Y                           N



Table 9
Accuracy assessment results.

                    Camera 0    Camera 3    Camera 3    Camera 4    Camera 4
                    20 MHz      29 MHz      30 MHz      30 MHz      31 MHz
# Images            9           7           7           10          10
# Points            402         365         434         500         518
Case 1
  Rmse X (mm)       3.9         7.7         7.3         8.2         6.9
  Rmse Y (mm)       10.8        6.6         7.2         7.5         7.1
  Rmse Z (mm)       3.1         6.4         7.2         7.5         6.4
Case 2
  Rmse X (mm)       4.5         10.1        9.2         8.3         7.5
  Rmse Y (mm)       9.3         6.5         6.5         6.9         6.9
  Rmse Z (mm)       3.4         7.8         8.3         6.9         6.6
Case 3
  Rmse X (mm)       9.4         7.7         5.4         7.4         6.6
  Rmse Y (mm)       10.5        6.7         7.3         7.5         7.1
  Rmse Z (mm)       6.6         6.3         6.0         7.0         6.2

4.5. Further discussion

The scattering bias is expected to decrease when the distance to the FG target is increased (Jamtsho and Lichti, 2010). However, this behaviour was not observed in any of the datasets in this study. A range-dependent version of the model given by Eq. (11) was also implemented and tested on all datasets. No improvement to the self-calibration results was realised by using this new model. This may be explained by the inclusion of the floor in the camera's FOV for longer range images, which could act as an additional scattering surface and therefore contribute to the range bias, but this would require deeper investigation to substantiate. Furthermore, illumination from outside the FOV can also contribute to the scattering error (Godbaz et al., 2012) and violate the stated assumptions. Another possibility is that the scattering signal is inherently weak due to the dark FG object used. The white target circles comprise only 23% of the total image area in the dataset shown in Fig. 10. The linear behaviour of this figure also demonstrates that the assumption of FG object surface homogeneity is well founded, though the spatial distribution of white target components is less invariant.

Fig. 10. Area of white targets in the foreground as a function of foreground area, all Camera 3 (30 MHz) normal images.

5. Conclusions

An empirical scattering-error compensation model implemented in the integrated self-calibrating bundle adjustment for range cameras has been proposed. It has been demonstrated to be effective in removing the systematic error trends in background target range residuals and results in measurable statistical improvement. Introduction of the model into the self-calibration solution does not cause any undesirable parameter correlations or degrade positioning accuracy. In fact it actually slightly reduces the correlation between certain EOPs and IOPs of interest. Though little difference between calibration adjustment cases was found in the accuracy assessment, the 3D target field with scattering compensation is the superior approach for the narrow-FOV SR3000 camera in terms of parameter correlation. Future work may include extension of the scattering compensation model to convergent imaging geometry.

Acknowledgements

Funding for this research was provided by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canada Foundation for Innovation (CFI).

References



Boehm, J., Pattinson, T., 2010. Accuracy of exterior orientation for a range camera. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 38 (Part 5), 103–108.
Chiabrando, F., Chiabrando, R., Piatti, D., Rinaudo, F., 2009. Sensors for 3D imaging: metric evaluation and calibration of a CCD/CMOS time-of-flight camera. Sensors 9 (12), 10080–10096.
Chiabrando, F., Piatti, D., Rinaudo, F., 2010. SR-4000 TOF camera: further experimental tests and first applications to metric surveys. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 38 (Part 5), 149–154.
Dorrington, A.A., Godbaz, J.P., Cree, M.J., Payne, A.D., Streeter, L.V., 2011. Separating true range measurements from multi-path and scattering interference in commercial range cameras. In: Three-Dimensional Imaging, Interaction, and Measurement, Proc. SPIE, vol. 7864, 24 January, pp. 786404-1–786404-10.
Godbaz, J.P., Cree, M.J., Dorrington, A.A., 2012. Understanding and ameliorating nonlinear phase and amplitude responses in AMCW lidar. Remote Sensing 4 (1), 21–42.
Jamtsho, S., Lichti, D.D., 2010. Modeling scattering distortion of 3D range camera. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 38 (Part 5), 299–304.
Kahlmann, T., Ingensand, H., 2008. Calibration and development for increased accuracy of 3D range imaging cameras. Journal of Applied Geodesy 2 (1), 1–11.
Karel, W., 2008. Integrated range camera calibration using image sequences from hand-held operation. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 37 (Part B5), 945–951.
Karel, W., Ghuffar, S., Pfeifer, N., 2012. Modelling and compensating internal light scattering in time of flight range cameras. The Photogrammetric Record 27 (138), 155–174.
Kavli, T., Kirkhus, T., Thielemann, J.T., Jagielski, B., 2008. Modelling and compensating measurement errors caused by scattering in time-of-flight cameras. In: Proc. Two- and Three-Dimensional Methods for Inspection and Metrology VI, SPIE, vol. 7066, 28 August, pp. 706604-1–706604-10.
Lange, R., Seitz, P., 2001. Solid-state time-of-flight range camera. IEEE Journal of Quantum Electronics 37 (3), 390–397.
Lichti, D.D., Kim, C., 2011. A comparison of three geometric self-calibration methods for range cameras. Remote Sensing 3 (5), 1014–1028.
Lichti, D.D., Kim, C., Jamtsho, S., 2010. An integrated bundle adjustment approach to range-camera geometric self-calibration. ISPRS Journal of Photogrammetry and Remote Sensing 65 (4), 360–368.
Lichti, D.D., Qi, X., 2012. Range camera self-calibration with independent object space scale observations. Journal of Spatial Science 57 (2), http://dx.doi.org/10.1080/14498596.2012.733623.
Lindner, M., Schiller, I., Kolb, A., Koch, R., 2010. Time-of-flight sensor calibration for accurate range sensing. Computer Vision and Image Understanding 114 (12), 1318–1328.
Mure-Dubois, J., Hügli, H., 2007a. Real-time scattering compensation for time-of-flight camera. In: Proc. ICVS Workshop on Camera Calibration Methods for Computer Vision Systems, Bielefeld, Germany, 21–24 March, 10 p.
Mure-Dubois, J., Hügli, H., 2007b. Optimized scattering compensation for time-of-flight camera. In: Proc. Two- and Three-Dimensional Methods for Inspection and Metrology V, SPIE, 25 September, pp. 67620H-1–67620H-10.
Robbins, S., Murawski, B., Schroeder, B., 2009. Photogrammetric calibration and colorization of the SwissRanger SR-3100 3-D range imaging sensor. Optical Engineering 48 (5), 053603-1–053603-8.
Shahbazi, M., Homayouni, S., Saadatseresht, M., Satari, M., 2011. Range camera self-calibration based on integrated bundle adjustment via joint setup with a 2D digital camera. Sensors 11 (9), 8721–8740.