Review Article
Error analysis of photometric stereo with colour lights

Martin Klaudiny, Adrian Hilton

Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford GU2 7XH, United Kingdom
Keywords: Photometric stereo; Photometric stereo with colour lights; Error analysis; Sensitivity analysis
Abstract

This paper presents a comprehensive error analysis of photometric stereo with colour lights (PSCL) for surface normal estimation. An analytic formulation is introduced for the error in albedo-scaled normal estimation with respect to all inputs to the photometric stereo – pixel colour, light directions and light-sensor-material interaction. This characterises the error in the estimated normal for all possible directions with respect to the light setup given discrepancies in the inputs. The theoretical formulation is validated by an extensive set of experiments with synthetic data. Example discrepancies in each input to the photometric stereo calculation show a complex distribution of the error of an albedo-scaled normal over the space of possible orientations. This is generalised in the empirical sensitivity analysis which demonstrates that the magnitude of the error propagation from the light directions and the light-sensor-material interaction depends on the surface orientation. However, the image noise is propagated uniformly to all normal directions. There is a linear relationship between the uncertainty in the individual inputs and in the output normals. The theoretical and experimental findings provide several recommendations on designing a capture setup which is the least sensitive to the inaccuracies in the pixel colours, the light directions and the light-sensor-material interaction. An example is provided showing how to assess the inaccuracies in PSCL calculation for a real-world setup.

© 2014 Elsevier B.V. All rights reserved.
1. Introduction

Photometric stereo recovers normals and reflectance properties of a surface by analysing its appearance under different lighting conditions. The approach works locally on a per-pixel basis, so fine surface details can be reconstructed. This is the main advantage over other 3D reconstruction approaches such as stereo, thus photometric stereo is commonly used for acquisition of detailed normal and reflectance maps of objects. Woodham [21] proposed the original photometric stereo with white lights (PSWL) for Lambertian surfaces. An object is separately illuminated by three white lights from different known directions. Three grey-scale images captured from the same viewpoint provide enough constraints to determine the normal and the grey-scale albedo at every pixel. In practice, the observed pixel intensities can be corrupted by image noise, shadows, specular highlights, inter-reflections or subsurface scattering of light which bias the normal and albedo estimation. Barsky and Petrou [3] employ four known light sources, so that a pixel corrupted under one illumination does not prevent correct normal estimation using the remaining three. Shadows and specular highlights in individual
images are detected by several complementary algorithms. This technique uses colour images which also enables estimation of colour albedo for every pixel. Extension of this work by Argyriou et al. [2] can cope with an arbitrary number of light sources larger than three. Standard photometric stereo with time-multiplexing of lights is not suitable for 3D capture of moving objects because pixel measurements are mis-aligned between the images due to motion. An engineering solution is high-speed image acquisition and light switching together with image alignment of the observations [17]. Photometric stereo with colour lights (PSCL) separates illumination conditions spectrally rather than in time. Therefore, all illumination conditions can be recorded simultaneously in a single colour image. This is the crucial property for dynamic capture because normal maps of the surface can be reconstructed at every frame with no motion limitation. The initial concept was presented by Drew [7]. They observe that a pixel colour changes only with the orientation of the corresponding normal assuming uniform colour albedo of the surface. Therefore, a single linear mapping between normals and pixel colours exists across the whole surface. This can be estimated by prior calibration of the surface material and the light setup [5,20]. The constant colour albedo assumption has been incrementally relaxed to cope with spatially-varying surface reflectance [11,1,20]. Common drawbacks of photometric stereo methods are sensitivity to image noise and impact of photometric calibration errors. The calibration involves light direction estimation and in the case
of PSCL, estimation of the light-sensor-material interaction. Error analysis has previously been conducted in the context of PSWL and was therefore focused on the inaccuracy of image colours and light directions. The majority of previous work revolves around finding the optimal light configuration in the presence of image noise [15,12,18,6,19]. There is a limited error analysis of small discrepancies in the light directions based on partial derivatives of the photometric stereo calculation [15,12]. However, the output errors are evaluated only for a few normal orientations. Alternatively, Barsky and Petrou [4] concentrate on the optimal parameters for shadow and highlight detection necessary for handling non-Lambertian surfaces. This paper extends the error analysis presented in previous work in several ways. The error in the albedo-scaled normal is analytically formulated for an arbitrary discrepancy in any of the inputs to the photometric stereo calculation (Section 5). This allows exact quantification of the error for large discrepancies and a proper understanding of how the error changes with the normal orientation. The inaccuracy in pixel colours (image noise) is revisited (Section 5.3) and the inaccuracy in the light directions is explored in greater depth than before (Section 5.1). Both are relevant for PSWL but this work focuses on PSCL which has not been addressed from this perspective previously. Hence, the inaccuracy in estimating the light-sensor-material interaction is examined as well (Section 5.2). The theoretical formulations are validated by experiments with synthetic hemisphere data (Section 6). Tests with example discrepancies in each input to PSCL show the distribution of errors in all normal directions in specific cases (Section 6.1). To generalise this, an empirical sensitivity analysis investigates the relationship between the general uncertainty in the individual inputs and the uncertainty in the reconstructed albedo-scaled normals (Section 6.2). The results of the sensitivity analysis on a virtual capture setup allow accuracy assessment of the inputs to PSCL from a similar real setup (Section 6.3). Moreover, a division of the output error between the normal direction and albedo is observed in all experiments.
2. Related work

2.1. Photometric stereo with colour lights

Accuracy of PSCL largely depends on the calibration of the capture setup which defines the mapping between image colours and albedo-scaled normals. The mapping for a given object can be recovered from example normal-colour pairs by least-squares fitting [10,5]. The example pairs are obtained by capturing a calibration board with a flat patch of the object material at various orientations. A surface normal can be fully calculated but material and light properties are not explicitly determined (they are included in the normal-colour mapping). Alternatively, the light directions are estimated separately using a mirror sphere and the light-sensor-material interaction is calculated from images of the object under the individual colour lights [11]. Hernández [9] proposes self-calibration by capturing a short sequence of the object undergoing rigid motion. The normal-colour mapping is optimised for the dominant chromaticity on the surface. Vogiatzis and Hernández [20] incorporate the Phong reflectance model which allows normal map computation for surfaces with mixed diffuse and specular reflectance. Monochromatic specular albedo of the surface is assumed to be constant together with the chromaticity. The assumption of uniform chromaticity is restrictive for textured objects where normals in regions with non-dominant chromaticities would be biased. Anderson et al. [1] extend PSCL to Lambertian surfaces with multiple piece-wise constant chromaticities. This is achieved using a coarse normal map from stereo 3D
reconstruction. Kim et al. [13] eliminate the uniform chromaticity assumption by resorting to time-multiplexing of three mixed colour illuminations. This enables estimation of normal and colour albedo maps of a moving object at half the frame-rate of the camera. The captured images need to be aligned by optic flow but pixel accuracy is not required. Fyffe and Yu [8] obtain per-pixel normal and colour albedo of the surface from a single image without time-multiplexing the illumination. This is achieved by constructing a camera system with 6 colour channels (bright and dark RGB). The multi-spectral pixel colour provides enough constraints for the 5 degrees of freedom in the normal and colour albedo.

2.2. Error analysis

The work published on this topic has been focused solely on PSWL. Initial work by Ray et al. [15] compiles a comprehensive list of potential sources of imprecision and identifies two major factors – image noise and calibration error in the light directions. Formulas describing the sensitivity of the estimated surface gradient with respect to errors in pixel intensity and light direction are derived. However, they are based on the first-order Taylor approximation using partial derivatives, hence they are valid only for small errors. Magnitudes of the reconstruction error are tabulated for a single normal given different discrepancies in the input parameters. Jiang and Bunke [12] simplify the formulation by Ray et al. [15] by expressing the surface orientation with a unit normal instead of the gradient. Drbohlav and Chantler [6] theoretically derive the optimality of the orthogonal set of three lights in the presence of image noise. They also provide the optimal configuration for more than three lights which should be placed equidistantly on a circle with a uniform slant of 54.74°. Spence and Chantler [18] optimise the configuration of three light sources by gradient descent. The magnitudes of the partial derivatives of the normal coordinates with respect to image intensities are used to estimate the variance in the normal given the variance in the intensities. The ratio between the variances is minimised which results in the orthogonal light directions. The same result is reached by Sathyabama and Divya [16] using differential evolution as the optimisation method. Sun et al. [19] theoretically derive the relationship between the uncertainty in an albedo-scaled normal and the uncertainty in measured intensities. Covariance matrices representing the respective uncertainties are related through the first-order Taylor expansion of the photometric stereo calculation. The minimal trace of the covariance matrix of the normal also confirms that the orthogonal light configuration minimises the impact of image noise. This is significantly influenced by the brightness of the surface albedo as well. Empirical results on a synthetic sphere are presented for various light configurations and albedo levels. Barsky and Petrou [4] focus the error analysis on shadow and specular highlight detection in their PSWL method [3]. Various stages of the algorithm are assessed with respect to errors in image colours and light directions and typically the optimal threshold settings are inferred. They also offer a theoretical relationship between the error in an albedo-scaled normal and the aforementioned input errors. The same conclusion about the orthogonality of lights is reached but is not experimentally evaluated.
3. Photometric stereo with white lights

Standard photometric stereo with time-multiplexed white lights proposed by Woodham [21] is based on several assumptions. The observed surface is assumed to be Lambertian resulting in a linear dependency between the grey-scale intensity of an image pixel and the associated normal. To preserve this linearity, the
camera sensor must have a linear response to incoming radiance. Moreover, the illumination is modelled with point light sources at a large distance which guarantees a constant direction of light rays across the capture volume. The grey-scale images {I_j}_{j=1}^J of the surface are taken from the same viewpoint. The image I_j captures the surface illuminated by the light j with a direction vector l_j (|l_j| = 1). Eq. (1) describes the relationship between the pixel intensity g_j in I_j for a particular surface point and its corresponding normal n.
g_j = l_j^T n \int E(\lambda) S(\lambda) R(\lambda)\, d\lambda = l_j^T a n \qquad (1)
The interaction between the light source, surface material and camera sensor is expressed by the integral over wavelength λ. The function E(λ) is the light spectrum which is assumed to be the same for all light sources. The function S(λ) is the spectral sensitivity of the grey-scale camera sensor. The function R(λ) is the reflectance of the material which varies across the surface. The integral is represented by a scalar factor a which is commonly referred to as an albedo [3]. Assuming a stationary surface and camera, the constraints defined by Eq. (1) can be stacked into a single linear system for each pixel location. In Eq. (2) the vector g contains the image intensities g_j and a J × 3 illumination matrix L consists of the light direction vectors l_j.
g = L\, a n \quad\Longleftrightarrow\quad \begin{bmatrix} g_1 \\ \vdots \\ g_J \end{bmatrix} = \begin{bmatrix} l_1^T \\ \vdots \\ l_J^T \end{bmatrix} a n \qquad (2)

In the case of 3 lights, a single solution is calculated by inversion of the known illumination matrix (Eq. (3)). Note that for L to be invertible, the l_j have to be linearly independent (they should not lie on the same 3D plane). Because n is a unit vector, the albedo a equals |L^{-1} g|. In the case of more than 3 lights, the linear system is overdetermined and is solved in a least-squares manner.

a n = \begin{cases} L^{-1} g, & J = 3 \\ (L^T L)^{-1} L^T g, & J > 3 \end{cases} \qquad (3)
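As a concrete illustration of Eqs. (2) and (3), a minimal numerical sketch follows (NumPy; the light directions, normal and albedo are illustrative values, not taken from the paper). It recovers the scaled normal by direct inversion for J = 3 and by least squares for J > 3.

```python
import numpy as np

def pswl_scaled_normal(g, L):
    """Recover the albedo-scaled normal a*n from intensities g (J,) and
    an illumination matrix L (J, 3) whose rows are unit light directions."""
    if L.shape[0] == 3:
        an = np.linalg.inv(L) @ g                   # Eq. (3), J = 3
    else:
        an = np.linalg.lstsq(L, g, rcond=None)[0]   # Eq. (3), J > 3
    albedo = np.linalg.norm(an)
    return an / albedo, albedo

# Synthetic check: three linearly independent unit light directions,
# a known normal and albedo, noise-free intensities from Eq. (2).
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])
n_true = np.array([0.3, 0.2, 1.0]); n_true /= np.linalg.norm(n_true)
a_true = 0.7
g = L @ (a_true * n_true)
print(pswl_scaled_normal(g, L))   # recovers n_true and a_true
```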
4. Photometric stereo with colour lights

The main drawback of PSWL is the time-multiplexing of lights which requires the camera and surface to be stationary while all illumination conditions are captured. Any motion of either of them causes errors in the estimated normal and albedo. This issue is addressed by photometric stereo based on colour lights [5]. Different lighting conditions are separated in wavelength rather than time. Therefore, the surface can be illuminated by all lights simultaneously. The number of lighting conditions is effectively limited to three by the number of sensors in a conventional colour camera (red, green, blue). A single colour image contains all the information necessary for the reconstruction of the normal and albedo map. Therefore, this technique is suitable for moving surfaces because the surface detail can be reconstructed at each frame independently. A single colour image I of the surface is captured under simultaneous illumination by spectrally different colour lights j ∈ [1 . . . 3]. Because of the independent reflection of different lights incident upon the surface, the observed RGB colour c of a surface point is the sum of contributions from the individual lights. This preserves the linearity of the photometric calculation defined for PSWL in Eq. (1). Eq. (4) defines an observed intensity in the red channel c_r as the sum of intensities contributed by three lights with different spectra E_j.

c_r = \sum_{j=1}^{3} l_j^T n \int E_j(\lambda) S_r(\lambda) R(\lambda)\, d\lambda \qquad (4)

The red camera sensor has its own spectral sensitivity S_r different from the green and blue sensor. Each combination of sensor and light has a specific interaction, thus the integral is not constant for different light conditions as for PSWL (Eq. (1)). Therefore, the albedo term a from PSWL needs to be reformulated in the context of PSCL. Surface reflectance can be separated into the wavelength-dependent chromaticity ρ(λ) and the spatially varying grey-scale albedo a (R(λ) = a ρ(λ)). In Eq. (5) the albedo a is factored out of the integral and scales the unit normal n.

c_r = \sum_{j=1}^{3} l_j^T a n \int E_j(\lambda) S_r(\lambda) \rho(\lambda)\, d\lambda \qquad (5)

To simplify the formulation, Eq. (6) defines a coefficient v_{rj} which encapsulates the integral from Eq. (5) (similarly for the green and blue sensor).

v_{rj} = \int E_j(\lambda) S_r(\lambda) \rho(\lambda)\, d\lambda \qquad (6)

A vector v_j = [v_{rj} v_{gj} v_{bj}]^T is then the response of the red, green and blue sensors to the light j given the surface material. Eq. (7) shows the full relationship of the RGB colour c with the albedo-scaled normal a n (later referred to as a scaled normal).

c = V L\, a n \quad\Longleftrightarrow\quad \begin{bmatrix} c_r \\ c_g \\ c_b \end{bmatrix} = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} \begin{bmatrix} l_1^T \\ l_2^T \\ l_3^T \end{bmatrix} a n \qquad (7)
The illumination matrix L has a size 3 × 3 and V denotes a 3 × 3 interaction matrix which consists of the v_j for the individual lights. A unique solution for the normal and albedo is calculated according to Eq. (8) if V and L are known. The vectors v_j cannot be linearly dependent so that V is invertible (a similar condition as for L in Eq. (3)).
a n = L^{-1} V^{-1} c \qquad (8)
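A minimal per-pixel sketch of the reconstruction in Eq. (8) follows (NumPy; the calibration matrices below are illustrative placeholders, not a calibrated setup):

```python
import numpy as np

def pscl_scaled_normal(c, L, V):
    """Recover the scaled normal a*n = L^-1 V^-1 c from one RGB pixel (Eq. (8))."""
    an = np.linalg.inv(L) @ np.linalg.inv(V) @ c
    albedo = np.linalg.norm(an)
    return an / albedo, albedo

# Illustrative calibration: three unit light directions (rows of L) and a
# diagonal V, i.e. each light only excites its matching sensor.
L = np.array([[ 0.57735,  0.57735, 0.57735],
              [-0.57735,  0.57735, 0.57735],
              [ 0.0,     -0.70711, 0.70711]])
V = np.diag([1.0, 0.9, 0.8])
n_true = np.array([0.1, -0.2, 1.0]); n_true /= np.linalg.norm(n_true)
c = V @ L @ (0.6 * n_true)            # forward model of Eq. (7)
print(pscl_scaled_normal(c, L, V))    # recovers n_true and albedo 0.6
```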
An important assumption of the technique is that the chromaticity ρ(λ) is uniform across the surface, thus one matrix V can be estimated for the whole surface. If ρ(λ) varies across the surface, V would be an additional unknown for each pixel which leads to an under-constrained problem. The uniform chromaticity assumption allows independent per-pixel estimation of a n using Eq. (8) which yields a normal and grey-scale albedo map for the single image I. The presented formulation of PSCL allows correct calculation of the scaled normals only for surfaces with uniform chromaticity.

5. Error analysis

Reconstruction of normal and albedo maps of an object using PSCL requires a prior calibration process and acquisition of the input colour image I. The calibration estimates the parameters describing a capture setup – the illumination matrix L and the interaction matrix V. Quality of the surface reconstruction is influenced by acquisition errors (image noise) and errors in the calibration of L and V. The aim of this analysis is to understand how these different types of input discrepancies affect the reconstructed normals and albedos across the surface. The error of a scaled normal with an arbitrary direction and magnitude is analytically formulated for each type of input error.

5.1. Error in the illumination matrix

The error analysis of PSCL is performed at first for a discrepancy in an arbitrary illumination matrix L. The actual parameters
of a capture setup L_S (denoted by the index S) deviate from the estimated parameters L_R (denoted by the index R). The interaction matrix V = V_S = V_R is correctly estimated and equals the identity matrix. This is equivalent to PSWL with three lights of the same intensity and a surface material with white chromaticity. The simulation-reconstruction chain in Eq. (9) firstly calculates a colour c for a scaled normal ñ_S = a_S n_S using the actual parameters L_S. Secondly, the scaled normal ñ_R = a_R n_R is reconstructed from c using the estimated parameters L_R. The error between the original ñ_S and the reconstructed ñ_R is expressed as a difference vector d.
c = L_S \tilde{n}_S \;\;\rightarrow\;\; \tilde{n}_R = L_R^{-1} c \;\;\rightarrow\;\; d = \tilde{n}_R - \tilde{n}_S = L_R^{-1} (L_S - L_R)\, \tilde{n}_S \qquad (9)
Assume a discrepancy in estimation of the blue light direction l_3, thus the illumination matrices L_S and L_R have different rows l_{3S}^T and l_{3R}^T. Fig. 1(a) illustrates an example case where the discrepancy (l_{3S}^T − l_{3R}^T) is in the tilt φ_3 of the blue light direction (the slant θ_3 is unchanged). Eq. (10) expands the expression of d from Eq. (9) and simplifies it for the case of error in l_3. Eq. (10) can be rearranged into Eq. (11) to separate the direction of d (term C) and the magnitude of d (term A · term B).
d = \begin{bmatrix} l_1^T \\ l_2^T \\ l_{3R}^T \end{bmatrix}^{-1} \left( \begin{bmatrix} l_1^T \\ l_2^T \\ l_{3S}^T \end{bmatrix} - \begin{bmatrix} l_1^T \\ l_2^T \\ l_{3R}^T \end{bmatrix} \right) \tilde{n}_S = \frac{1}{l_{3R}^T (l_1 \times l_2)}\, (l_{3S}^T - l_{3R}^T)\, \tilde{n}_S \, (l_1 \times l_2) \qquad (10)

d = \underbrace{\frac{1}{\cos\gamma_3}}_{\text{term A}} \; \underbrace{|l_{3S}^T - l_{3R}^T| \, |\tilde{n}_S| \cos\beta_3}_{\text{term B}} \; \underbrace{m_3}_{\text{term C}} \qquad (11)
The vector m_3 is the normalised vector (l_1 × l_2) and γ_3 is the angle between m_3 and l_{3R}. The term A is a scalar coefficient which scales d according to the spatial configuration of lights. The term B defines a zero-error plane whose normal is given by the discrepancy (l_{3S}^T − l_{3R}^T) and which always crosses the origin of WCS (illustrated in Fig. 1(a)). On this 3D plane |d| vanishes because the angle β_3 between ñ_S and (l_{3S}^T − l_{3R}^T) becomes 90° (cos β_3 = 0). The magnitude |d| linearly increases outside the zero-error plane with the distance between the tip of ñ_S and the plane which is expressed by |ñ_S| cos β_3. The slope of this dependency is given by the magnitude of discrepancy |l_{3S}^T − l_{3R}^T|. The term B also flips the direction of d depending on which side of the zero-error plane ñ_S is on (illustrated by the vectors d, d′ in Fig. 1). Eq. (12) expands Eq. (9) for the general case where there are discrepancies in all light directions.
d = \begin{bmatrix} l_{1R}^T \\ l_{2R}^T \\ l_{3R}^T \end{bmatrix}^{-1} \left( \begin{bmatrix} l_{1S}^T \\ l_{2S}^T \\ l_{3S}^T \end{bmatrix} - \begin{bmatrix} l_{1R}^T \\ l_{2R}^T \\ l_{3R}^T \end{bmatrix} \right) \tilde{n}_S = d_1 + d_2 + d_3 \qquad (12)

d_1 = \frac{1}{l_{1R}^T (l_{2R} \times l_{3R})} (l_{1S}^T - l_{1R}^T)\, \tilde{n}_S \, (l_{2R} \times l_{3R}) = \frac{1}{\cos\gamma_1} |l_{1S}^T - l_{1R}^T| \, |\tilde{n}_S| \cos\beta_1 \, m_1

d_2 = \frac{1}{l_{2R}^T (l_{3R} \times l_{1R})} (l_{2S}^T - l_{2R}^T)\, \tilde{n}_S \, (l_{3R} \times l_{1R}) = \frac{1}{\cos\gamma_2} |l_{2S}^T - l_{2R}^T| \, |\tilde{n}_S| \cos\beta_2 \, m_2

d_3 = \frac{1}{l_{3R}^T (l_{1R} \times l_{2R})} (l_{3S}^T - l_{3R}^T)\, \tilde{n}_S \, (l_{1R} \times l_{2R}) = \frac{1}{\cos\gamma_3} |l_{3S}^T - l_{3R}^T| \, |\tilde{n}_S| \cos\beta_3 \, m_3
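The decomposition in Eq. (12) is easy to verify numerically. The following sketch (made-up light directions and perturbations, NumPy) simulates the chain of Eq. (9) and checks that the resulting d equals the sum d_1 + d_2 + d_3:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Estimated light directions L_R (rows) and slightly perturbed actual ones L_S.
L_R = np.stack([unit(v) for v in ([1, 0, 1], [-1, 0, 1], [0, 1, 1])])
L_S = np.stack([unit(l + 0.05 * rng.standard_normal(3)) for l in L_R])

an_S = 0.8 * unit([0.2, 0.3, 1.0])          # ground-truth scaled normal

# Simulation-reconstruction chain of Eq. (9).
c = L_S @ an_S
an_R = np.linalg.inv(L_R) @ c
d = an_R - an_S                             # = L_R^-1 (L_S - L_R) an_S

# Per-light contributions d_j of Eq. (12).
cross = [np.cross(L_R[1], L_R[2]), np.cross(L_R[2], L_R[0]), np.cross(L_R[0], L_R[1])]
d_sum = sum(((L_S[j] - L_R[j]) @ an_S) * cross[j] / (L_R[j] @ cross[j]) for j in range(3))
print(np.allclose(d, d_sum))                # True
```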
An important observation is that d is a vector sum of the difference vectors d_j introduced by errors in the individual l_j (as defined by Eqs. (10) and (11)). Because of this combination the direction and magnitude of the resulting d can vary across the space of scaled normals in a complex way. This depends on the spatial configuration of light sources and the location of the zero-error planes created by the respective discrepancies in their estimated directions. Fig. 1(b) illustrates an example situation with all lights tilted in one direction by the same angle. The final difference vector d vanishes only for ñ_S aligned with the Z-axis where the zero-error planes cross each other. Otherwise, its magnitude increases with distance from the Z-axis. Note that |d| for any ñ_S directly depends on the spatial relationship between the l_{jR}. Each contribution d_j is scaled by 1/cos γ_j (term A in Eq. (11)) which is independent of the magnitude of discrepancy. The term A is minimal for each light (1/cos γ_j = 1) if the l_{jR} form an orthonormal set of vectors. Therefore, |d| is scaled the least for any ñ_S when the light directions are perpendicular to each other.

5.2. Error in the interaction matrix

The analysis in the previous section can be expanded to include discrepancies in an arbitrary interaction matrix V. It is assumed that only the estimation of V is incorrect (V_S ≠ V_R; L = L_S = L_R). The simulation-reconstruction chain in Eq. (13) is therefore modified to reflect this.
c = V_S L\, \tilde{n}_S \;\;\rightarrow\;\; \tilde{n}_R = L^{-1} V_R^{-1} c \;\;\rightarrow\;\; d = \tilde{n}_R - \tilde{n}_S = L^{-1} V_R^{-1} (V_S - V_R) L\, \tilde{n}_S \qquad (13)
A formula for d with individual vector components is in Eq. (14).
d = \begin{bmatrix} l_1^T \\ l_2^T \\ l_3^T \end{bmatrix}^{-1} \begin{bmatrix} v_{1R} & v_{2R} & v_{3R} \end{bmatrix}^{-1} \begin{bmatrix} v_{1S} - v_{1R} & v_{2S} - v_{2R} & v_{3S} - v_{3R} \end{bmatrix} \begin{bmatrix} l_1^T \\ l_2^T \\ l_3^T \end{bmatrix} \tilde{n}_S \qquad (14)
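Eq. (13) can be reproduced in the same way as Eq. (9); a short sketch with illustrative matrices (not a calibrated setup) compares the simulated error against the closed-form expression L^{-1} V_R^{-1} (V_S − V_R) L ñ_S:

```python
import numpy as np

L   = np.array([[ 0.707, 0.0, 0.707],
                [-0.707, 0.0, 0.707],
                [ 0.0,   0.8, 0.6]])
V_R = np.eye(3)                                   # estimated interaction matrix
V_S = np.array([[1.0, 0.0, 0.0],                  # actual interaction matrix:
                [0.0, 1.0, 0.1],                  # blue light leaks into the green sensor
                [0.0, 0.0, 0.9]])

an_S = 0.9 * np.array([0.1, 0.4, 0.91])           # ground-truth scaled normal

c    = V_S @ L @ an_S                             # simulate the pixel colour
an_R = np.linalg.inv(L) @ np.linalg.inv(V_R) @ c  # reconstruct with the estimated V_R
d    = an_R - an_S

d_closed = np.linalg.inv(L) @ np.linalg.inv(V_R) @ (V_S - V_R) @ L @ an_S  # Eq. (13)
print(np.allclose(d, d_closed))                   # True
```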
Fig. 1. Spatial relationship between an error d for a scaled normal ñ_S, the actual light directions l_{jS} and the estimated light directions l_{jR} in WCS (viewed against the Z-axis). The directions can also be expressed in terms of the tilt angle φ and the slant angle θ. A discrepancy between the sets l_{jS} and l_{jR} in the tilt of the blue light (a), in the tilt of all lights (b). (For interpretation of the references to colour in this figure caption, the reader is referred to the web version of this article.)
By decomposition of the matrix (V_S − V_R) it is possible to separate error contributions made by discrepancies in the individual vectors v_j. This results in the sum d = d_1 + d_2 + d_3 as for the illumination matrix L in Eq. (12). The difference vector d_1 expressed in Eq. (15) is related to a discrepancy in v_1.
d_1 = \frac{1}{l_1^T (l_2 \times l_3)} \begin{bmatrix} l_2 \times l_3 & l_3 \times l_1 & l_1 \times l_2 \end{bmatrix} \frac{1}{v_{1R}^T (v_{2R} \times v_{3R})} \begin{bmatrix} (v_{2R} \times v_{3R})^T \\ (v_{3R} \times v_{1R})^T \\ (v_{1R} \times v_{2R})^T \end{bmatrix} (v_{1S} - v_{1R})\, l_1^T \tilde{n}_S
= \underbrace{\begin{bmatrix} \frac{m_1}{\cos\gamma_1} & \frac{m_2}{\cos\gamma_2} & \frac{m_3}{\cos\gamma_3} \end{bmatrix} \begin{bmatrix} u_1^T / (|v_{1R}| \cos\delta_1) \\ u_2^T / (|v_{2R}| \cos\delta_2) \\ u_3^T / (|v_{3R}| \cos\delta_3) \end{bmatrix}}_{\text{term A}} \; \underbrace{|\tilde{n}_S| \cos\eta_1}_{\text{term B}} \; \underbrace{(v_{1S} - v_{1R})}_{\text{term C}} \qquad (15)
The magnitude |d_1| linearly increases from a zero-error 3D plane as for errors in the illumination matrix L. The normal of the plane is l_1 and term B defines the distance of ñ_S from it, where η_1 is the angle between ñ_S and l_1. The increase of |d_1| depends on the discrepancy vector (term C) but the final scaling coefficient is given by the magnitude of this vector after transformation by the matrices in term A. The direction of d_1 is also set by the transformed discrepancy vector and is the same for all ñ_S apart from the opposite sign on each side of the zero-error plane. The matrix V_R^{-1} in term A can be expressed in terms of unit vectors u_j in a similar manner to L^{-1} in Eq. (12), with the difference that the original v_j do not have to be unit vectors. The vector u_1 is the normalised vector v_{2R} × v_{3R} and the angle δ_1 is defined between u_1 and v_{1R} (respectively for the other u_j). Eq. (16) for the aggregate error vector d reformulates Eq. (14) according to Eq. (15).
d = d_1 + d_2 + d_3 = \begin{bmatrix} \frac{m_1}{\cos\gamma_1} & \frac{m_2}{\cos\gamma_2} & \frac{m_3}{\cos\gamma_3} \end{bmatrix} \begin{bmatrix} u_1^T / (|v_{1R}| \cos\delta_1) \\ u_2^T / (|v_{2R}| \cos\delta_2) \\ u_3^T / (|v_{3R}| \cos\delta_3) \end{bmatrix} \big( |\tilde{n}_S| \cos\eta_1 (v_{1S} - v_{1R}) + |\tilde{n}_S| \cos\eta_2 (v_{2S} - v_{2R}) + |\tilde{n}_S| \cos\eta_3 (v_{3S} - v_{3R}) \big) \qquad (16)

The zero-error planes are perpendicular to the light directions l_j and intersect at the origin of WCS. Each discrepancy vector (v_{jS} − v_{jR}) is multiplied by the common factor L^{-1} V_R^{-1} (term A in Eq. (15)) and scaled by a distance from the respective zero-error plane (term B in Eq. (15)). As was mentioned in Section 5.1, the scaling coefficients 1/cos γ_j have the minimal value 1 when the light directions are orthogonal. In this case, the length of any vector is not changed by multiplication with L^{-1}. Similarly, the coefficients 1/(|v_{jR}| cos δ_j) in V_R^{-1} are minimal when the v_{jR} form an orthogonal set of vectors and their magnitudes are large. Because all entries in V_R have to be positive (integrals over wavelength in Eq. (6)), V_R needs to be a diagonal matrix to fulfil the orthogonality criterion. Also, the length of any vector multiplied by V_R^{-1} is shortened if the diagonal entries of V_R have large values greater than 1. With these properties of V_R and L, |d| is scaled the least for any ñ_S regardless of the magnitude of discrepancies in the interaction matrix. In practice, the diagonal interaction matrix is achieved by matching light spectra and spectral sensitivity of sensors (E_1(λ) = S_r(λ), E_2(λ) = S_g(λ), E_3(λ) = S_b(λ)). Large magnitudes of v_{jR} are a result of strong light sources, highly sensitive sensors and a bright surface material.

5.3. Image noise

The last source of imprecision in reconstructed normals is the image noise. Assume for this case that the estimation of both V and L is correct (V = V_S = V_R; L = L_S = L_R). The simulation-reconstruction chain is then defined as Eq. (17) where a simulated colour c_S is perturbed by a noise vector Δ to obtain a noisy measurement c_R = c_S + Δ. The coordinates of Δ are drawn independently from the Gaussian distribution N(0, σ²). Note that d does not depend on ñ_S, unlike the other sources of errors.

c_S = V L\, \tilde{n}_S \;\;\rightarrow\;\; \tilde{n}_R = L^{-1} V^{-1} c_R \;\;\rightarrow\;\; d = \tilde{n}_R - \tilde{n}_S = L^{-1} V^{-1} \Delta \qquad (17)
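Because Eq. (17) maps the noise vector Δ linearly through L^{-1} V^{-1}, the covariance of d is simply σ² (L^{-1}V^{-1})(L^{-1}V^{-1})^T. A Monte Carlo sketch with illustrative matrices (not a calibrated setup) confirms this:

```python
import numpy as np

rng = np.random.default_rng(1)

L = np.array([[ 0.707, 0.0, 0.707],
              [-0.707, 0.0, 0.707],
              [ 0.0,   0.8, 0.6]])
V = np.diag([1.0, 1.0, 0.8])
M = np.linalg.inv(L) @ np.linalg.inv(V)       # d = M @ Delta  (Eq. (17))

sigma = 5.0
deltas = sigma * rng.standard_normal((100000, 3))
d = deltas @ M.T                              # propagate each noise sample

# Empirical covariance of d versus the analytic sigma^2 * M M^T.
print(np.allclose(np.cov(d.T), sigma**2 * M @ M.T, rtol=0.1, atol=1.0))
```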
Optimality of light directions with respect to the noise in intensities has been investigated in the context of PSWL [6,18,19]. The conclusion is that L has to be orthogonal to minimise the impact of image noise. The theoretical proof by Drbohlav and Chantler [6] can be generalised to include the interaction matrix V from the formulation for PSCL. As a result, the combined matrix VL has to be formed by orthogonal vectors. Assuming L to be orthonormal, the v_j need to be perpendicular to each other to retain the orthogonality of VL. Thus, V has to be a diagonal matrix with large entries to minimise amplification of the image noise. This is compatible with the conclusions for the errors in V directly.

6. Experiments

The presented theoretical formulations are investigated by experimental analysis on synthetic data. A general virtual capture setup is illustrated in Fig. 1 in relation to WCS. Several variants with different illumination and interaction matrices L, V are used in the experiments:

Setup1 – Orthogonal matrix L (θ_1 = θ_2 = θ_3 = 54.74°; φ_1 = 330°, φ_2 = 210°, φ_3 = 90°). Diagonal matrix V = I.
Setup2 – Non-orthogonal matrix L (θ_1 = θ_2 = θ_3 = 26°; φ_1 = 330°, φ_2 = 210°, φ_3 = 90°). Diagonal matrix V = I.
Setup3 – Orthogonal matrix L (θ_1 = θ_2 = θ_3 = 54.74°; φ_1 = 330°, φ_2 = 210°, φ_3 = 90°). Non-diagonal matrix V (v_{r1} = v_{g2} = 1, v_{b2} = v_{b3} = 0.5, otherwise 0).

The light directions have equidistantly spaced tilts and the same slant for both Setup1 and Setup2. However, Setup2 has non-orthonormal light directions which are closer together. This is often the case in practice when the amount of self-shadows needs to be limited. The spectra of the lights are matched to the spectral sensitivity curves of the camera sensors for both Setup1 and Setup2, thus red, green and blue light are used. The interaction matrix V is then the identity. For Setup3, V differs in the coefficients v_{b2} and v_{b3} which split the sensitivity of the blue sensor evenly between the green and blue light. The virtual capture setup observes a test object which is a hemisphere because it provides all possible normal directions visible from a single camera. The white albedo is uniform across the hemisphere to comply with the chromaticity assumption of PSCL (|ñ_S| = 255). The hemisphere is placed at the origin of WCS and the camera points along the Z-axis towards the origin (Fig. 1). A ground-truth normal and albedo map of the hemisphere are generated for the camera view assuming orthographic projection (Fig. 2(a) and (b)). The parameters L, V of the setup are used to compute a synthetic image of the hemisphere (Fig. 2(c)) captured by the virtual camera (resolution 500 × 500 pixels). Note that shadows are not modelled and negative RGB values occur in areas occluded from one or more lights. Also, colours are not clamped to 255 in each channel, so that regions of the image do not become saturated. These modifications of the actual acquisition process keep the focus on the reconstruction of scaled normals without additional hindering factors.
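A compact sketch of how such a synthetic hemisphere image can be rendered (orthographic camera along the Z-axis, no shadow handling or clamping, as described above; the resolution, albedo and matrices are illustrative):

```python
import numpy as np

def hemisphere_image(L, V, size=500, albedo=255.0):
    """Render per-pixel colours c = V L (a n) for a hemisphere seen along +Z."""
    xs = np.linspace(-1.0, 1.0, size)
    x, y = np.meshgrid(xs, xs)
    r2 = x**2 + y**2
    mask = r2 <= 1.0                          # pixels covered by the hemisphere
    z = np.sqrt(np.clip(1.0 - r2, 0.0, None))
    normals = np.stack([x, y, z], axis=-1)    # unit surface normals inside the mask
    colours = (albedo * normals) @ (V @ L).T  # Eq. (7); negative values kept, no clamping
    colours[~mask] = 0.0
    return colours, normals, mask

L = np.array([[ 0.707, 0.0, 0.707],
              [-0.707, 0.0, 0.707],
              [ 0.0,   0.8, 0.6]])
img, normals, mask = hemisphere_image(L, np.eye(3))
print(img.shape)   # (500, 500, 3)
```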
Fig. 2. Test hemisphere used in the experiments – the colour-coded normal map (a), the albedo map (b) and an example image generated using Setup2 (c). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
6.1. Investigation of error impact

The impact of errors in the inputs for PSCL is investigated by empirical replication of the simulation-reconstruction chain used in the theoretical analysis (Eqs. (9), (13) and (17)). Firstly, an image of the hemisphere is simulated from the ground-truth albedo-scaled normal map using the actual parameters of the capture setup (V_S, L_S). Secondly, the simulated image colours c_S can be corrupted by the Gaussian noise. Thirdly, the map is reconstructed back from the captured noisy colours c_R using the estimated parameters (V_R, L_R). The difference between the albedo-scaled normal maps is examined for different types of discrepancies. Patterns of |d| over the hemisphere show how the error vector spatially varies depending on the normal direction. Different root mean square (RMS) statistics across the whole hemisphere are plotted for increasing magnitude of individual discrepancies. They include the RMS of the magnitude of the difference vector |d|, the RMS of the angle between the actual and reconstructed normal ∠(ñ_R, ñ_S) and the RMS of the difference between the actual albedo and the reconstructed albedo (a_R − a_S). The angle ∠(ñ_R, ñ_S) and the difference (a_R − a_S) are added to illustrate the split of d between the normal direction and the albedo.

6.1.1. Error in the illumination matrix

The first set of experiments is focused on discrepancies in light directions and the simulation-reconstruction chain follows Eq. (9). The parameters for Setup2 are used and the interaction matrix V is fixed. In Experiment 1A the actual direction of the blue light l_{3S} is tilted away from the estimated direction l_{3R} by an angle Δφ as shown in Fig. 1(a). The pattern in Fig. 3(a) corresponds to the theoretical expression in Eq. (11). The black belt across the hemisphere indicates the zero-error plane. The magnitude |d| increases linearly as normals tilt away from this plane. With changing Δφ the plane rotates because (l_{3S}^T − l_{3R}^T) changes its direction. There is a linear dependency between |l_{3S}^T − l_{3R}^T| and |d| according to Eq. (11) but the relationship between |l_{3S}^T − l_{3R}^T| and Δφ is sinusoidal. However, the RMS of |d| has an effectively linear trend for small angles Δφ as visible in Fig. 4(a). The magnitude |d| is symmetrical around
Δφ = 0, so the error does not depend on the direction of tilt. The graph also shows that the error in normal direction and albedo has a similar trend as the error in the scaled normal. The RMS of ∠(ñ_R, ñ_S) is lower than the introduced angular discrepancy in the light direction. The remaining inaccuracy is propagated to the albedo which is proportionally more affected than the direction of the normal. In Experiment 1B the vector l_{3S} is slanted away from the estimated l_{3R} by an angle Δθ. In comparison to Experiment 1A the position of the zero-error plane is different but |d| changes across the hemisphere in a similar way (Fig. 3(b)). The RMS curves in Fig. 4(b) depict a more severe impact on the quality of reconstruction than for a tilt. This is due to the spatial location of the zero-error plane with respect to the hemisphere. The plane orientation also causes a small asymmetry of the RMS curves with higher values for positive Δθ. The reason is that a larger number of normals is perpendicular to the zero-error plane when it approaches the XY plane. All light directions l_{jS} are tilted in the same direction by an angle Δφ in Experiment 1C as depicted in Fig. 1(b). The error vector d is expressed theoretically in Eq. (12). The pattern of |d| in Fig. 3(c) shows that the only zero error is on top of the hemisphere and |d| increases radially from the Z-axis. This reflects the theoretical formulation of three zero-error planes crossing each other along the Z-axis. The graph in Fig. 4(c) depicts that the dependency between the RMS of |d| and Δφ is again effectively linear for small angular discrepancies (reflecting Eq. (12)). Interestingly, the albedo error is zero regardless of Δφ. This is caused by the fact that the scaled normals only change direction opposite to the rotation of the light setup and their scale is not modified. In Experiment 1D all light directions l_{jS} are slanted in the same direction by an angle Δθ. The zero-error planes for individual lights cut through the hemisphere in the same manner as in the single-light case (Experiment 1B). They intersect at the origin of WCS, therefore none of the normals in Fig. 3(d) are completely accurate. The magnitude |d| is smallest along the Z-axis and increases radially towards the edges of the hemisphere.
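The RMS statistics plotted in Fig. 4 can be reproduced with a few lines once the ground-truth and reconstructed scaled-normal maps are available. The helper below is a sketch (its name and interface are not from the paper); an_S and an_R are H×W×3 maps and mask selects the valid hemisphere pixels:

```python
import numpy as np

def error_stats(an_S, an_R, mask):
    """RMS of |d|, of the normal angle (deg) and of the albedo difference."""
    d = an_R[mask] - an_S[mask]
    a_S = np.linalg.norm(an_S[mask], axis=-1)
    a_R = np.linalg.norm(an_R[mask], axis=-1)
    cos_ang = np.sum(an_S[mask] * an_R[mask], axis=-1) / (a_S * a_R)
    ang = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    rms = lambda v: float(np.sqrt(np.mean(v**2)))
    return rms(np.linalg.norm(d, axis=-1)), rms(ang), rms(a_R - a_S)
```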
Fig. 3. The magnitude of error |d| in scaled normals across the hemisphere: Exp. 1A – the change Δφ = 10° in the tilt of the blue light (a), Exp. 1B – the change Δθ = 10° in the slant of the blue light (b), Exp. 1C – the change Δφ = 10° in the tilt of all lights (c), Exp. 1D – the change Δθ = 10° in the slant of all lights (d). The magnitude |d| is encoded in grey-scale – black = 0, white = 200 coordinate units.
Fig. 4. The RMS graphs across different magnitudes of discrepancy for Exp. 1A–1D (a)–(d): RMS of |d| – green (in coordinate units), RMS of (a_R − a_S) – red (in grey-scale levels) and RMS of ∠(ñ_S, ñ_R) – blue (in °). (For interpretation of the references to colour in this figure caption, the reader is referred to the web version of this article.)
The graphs in Fig. 4(b) and (d) show that a discrepancy in the slant angle generally has a stronger impact on the quality of the result than in the tilt angle. In all experiments apart from Experiment 1C the albedo has a higher relative error with respect to its maximal possible value 255 than the unit normal (maximal possible deviation of 180°). Thus, the error d in a scaled normal is propagated more to the albedo than the normal direction. The angular normal error is always the same or smaller than the discrepancies in the light directions.

6.1.2. Error in the interaction matrix

The second set of experiments is focused on discrepancies in the interaction matrix and the simulation-reconstruction chain follows Eq. (13). The parameters for Setup2 are used and the illumination matrix L is fixed. The matrix V_S differs in individual coefficients from the estimated identity matrix V_R. In Experiment 1E only the coefficient v_{b3S} which describes the interaction between the blue light and the blue camera sensor is changed. A decrease of v_{b3S} from the original value 1 simulates lowering intensity of the blue light. Eq. (16) describing the general discrepancy between V_S and V_R has the simpler form of Eq. (18) because only the contribution d_3 for the blue light is non-zero.
d = \frac{m_3}{\cos\gamma_3} \, |\tilde{n}_S| \cos\eta_3 \, (v_{b3S} - 1) \qquad (18)
Eq. (18) is an equivalent of Eq. (15) for the vector v_3. The term A in Eq. (15) is simplified to m_3 / cos γ_3 because V_R is the identity matrix and the discrepancy (v_{b3S} − 1) selects only the last column from L^{-1}. The normal of the zero-error plane equals l_3 (this stems from the term cos η_3). The magnitude |d| linearly increases with distance from the plane as shown in Fig. 5(a). The direction of d is given by m_3 across the whole hemisphere and does not change with (v_{b3S} − 1). The coefficients v_{b2S} and v_{b3S} are changed simultaneously in Experiment 1F. Their sum remains equal to 1, thus the discrepancy is expressed by (v_{b3S} − 1) as in Experiment 1E. This experiment simulates a blue sensor sensitive to both green and blue light which shifts the sensitivity curve from the blue to the green light spectrum with decreasing v_{b3S}. The inference from Eq. (16) simplifies the terms d_2 and d_3 in a similar way to Eq. (18). The combination of these two zero-error planes in Eq. (19) creates a single zero-error plane due to the constraint v_{b2S} + v_{b3S} = 1.
d = \frac{m_3}{\cos\gamma_3} \big( |\tilde{n}_S| \cos\eta_2 \, (v_{b2S} - 0) + |\tilde{n}_S| \cos\eta_3 \, (v_{b3S} - 1) \big) = \frac{m_3}{\cos\gamma_3} \, |\tilde{n}_S| (\cos\eta_3 - \cos\eta_2)(v_{b3S} - 1) \qquad (19)
The normal of the joint plane is (l_3 − l_2) represented by (cos η_3 − cos η_2). The plane is visible in Fig. 5(b) and the pattern of |d| is similar to the single-light tilt because of a similar direction of (l_3 − l_2).
In both Experiments 1E and 1F the zero-error plane does not move with the enlarging discrepancy (v_{b3S} − 1) because the discrepancy does not change the orientation of the plane normal. However, a change of (v_{b3S} − 1) influences |d| in a linear manner which is illustrated by the RMS curves in Fig. 5(c) and (d). Notice that a discrepancy in the intensity of the blue light leads to more severe errors than a discrepancy in the spectral relationship between the blue sensor, green and blue light. The RMS of (a_R − a_S) and ∠(ñ_R, ñ_S) show proportionally larger error in the albedo than in the normal direction for both experiments.

6.1.3. Image noise

The third set of experiments is focused on the image noise where the simulation-reconstruction chain follows Eq. (17). The same matrices V, L from Setup2 are used for the simulation and reconstruction of scaled normals. After simulating the captured image an RGB noise vector generated from a Gaussian distribution N(0, σ²) is added to every pixel. Experiment 1G explores different amounts of image noise by increasing the standard deviation σ. Figs. 6(a) and (b) demonstrate direct propagation of the acquisition errors into reconstructed scaled normals where the pattern of |d| is more noisy for the larger σ = 15. Also, the noise pattern is uniform across the hemisphere which proves no dependency of d on a normal direction as indicated by Eq. (17). The RMS of |d| across the hemisphere has a linear relationship with σ (Fig. 6(c)). The RMS of (a_R − a_S) and ∠(ñ_R, ñ_S) also depend linearly on σ and the noise is propagated more to the albedo than to the normal direction.

6.2. Empirical sensitivity analysis

The previous experiments showed complex behaviour of the error vector d in scaled normals for different types of discrepancies in the inputs to PSCL. The magnitude and direction of a discrepancy determines the distribution of the error across the hemisphere. However, these experiments were example cases for specific perturbations of the illumination matrix, the interaction matrix and the image. The following sensitivity analysis empirically investigates the relationship between the general uncertainty in the inputs and the uncertainty in the reconstructed scaled normals. This relationship is observed across the hemisphere of possible normal directions to establish the amount of uncertainty propagated depending on a surface orientation. The sensitivity analysis uses the same simulation-reconstruction chain described in Section 6.1, but this process is repeated many times with random discrepancies introduced to the inputs (5000 runs in all following experiments). The uncertainty of the matrices L, V and the image I is modelled by the Gaussian distribution N(0, σ²) (note that σ now has a broader use than in Section 5.3). A discrepancy is independently sampled from N(0, σ²) for each matrix element or colour channel.
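A condensed sketch of this Monte Carlo procedure for a single scaled normal, perturbing only the light directions (the σ value, number of runs and directions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def unit_rows(M):
    return M / np.linalg.norm(M, axis=1, keepdims=True)

L_R = unit_rows(np.array([[1.0, 0.0, 1.0], [-1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]))
V = np.eye(3)
an_S = 255.0 * unit_rows(np.array([[0.2, 0.1, 1.0]]))[0]   # one test scaled normal

sigma, runs = 0.2, 5000
d_norms = np.empty(runs)
for k in range(runs):
    L_S = unit_rows(L_R + sigma * rng.standard_normal((3, 3)))  # perturbed, re-normalised
    c = V @ L_S @ an_S                                           # simulate with perturbed L
    an_R = np.linalg.inv(L_R) @ np.linalg.inv(V) @ c             # reconstruct with estimated L
    d_norms[k] = np.linalg.norm(an_R - an_S)

print(np.sqrt(np.mean(d_norms**2)))    # RMS(|d|) for this normal direction
```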
Fig. 5. The magnitude of error |d| in scaled normals across the hemisphere: Exp. 1E – intensity of the blue light where (v_{b3S} − 1) = −0.5 (a), Exp. 1F – sensitivity of the blue sensor where (v_{b3S} − 1) = −0.5 (b). The RMS graphs across different magnitudes of (v_{b3S} − 1) for Exp. 1E (c) and Exp. 1F (d). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 6. The magnitude of error |d| in scaled normals across the hemisphere for σ = 5 (a) and σ = 15 (b) in Exp. 1G. The magnitude |d| is encoded in grey-scale – black = 0, white = 100 coordinate units. The RMS graphs across different magnitudes of σ (c).
The sampled values are added to the parameters of the capture setup or the pixel colours. The perturbed version is used for the simulation (L_S, V_S, c_S) and the unperturbed version for the reconstruction (L_R, V_R, c_R). Each run of the simulation and reconstruction provides an albedo-scaled normal map which is compared to the ground truth for the hemisphere. A cloud of error vectors d is accumulated for each scaled normal on the image of the hemisphere. This represents the uncertainty of the scaled normals reconstructed by PSCL. The sensitivity analysis is performed for individual inputs to PSCL and with a varying degree of their uncertainty given by σ. The output uncertainty of a scaled normal is expressed as the RMS of |d| across the runs (denoted RMS(|d|)). A distribution of RMS(|d|) for individual normals on the hemisphere is evaluated to establish a dependency of the uncertainty on the normal direction. The total RMS of |d| is accumulated across all normals for all runs to investigate an overall impact of the input uncertainties regardless of the surface orientation (denoted RMS_T(|d|)). Also, the total RMS_T(∠(ñ_R, ñ_S)) and RMS_T(a_R − a_S) are calculated to show the influence on the normal direction and the albedo separately.

6.2.1. Uncertainty of the illumination matrix

The uncertainty is modelled by introducing independent Gaussian noise into the Cartesian coordinates of the light directions l_j. The standard deviation σ is the same for each coordinate which results in a spherical cloud of the perturbed versions of l_j. However, each l_j needs to be normalised for PSCL calculations, thus the cloud is effectively projected onto the unit sphere. The result is a disk of angular discrepancies to l_j with a 2D Gaussian distribution which is not limited to slant and tilt deviations as in Experiments 1A–D. The Cartesian standard deviation σ can be mapped to the angular standard deviation ω on the unit sphere: ω = cos^{-1}(1 − σ²/2) (e.g. the maximal σ = 0.5 used in the experiments corresponds to ω = 29°).
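The quoted mapping between the Cartesian deviation σ and the angular deviation ω can be checked directly (a one-line sketch):

```python
import numpy as np

def angular_sigma(sigma):
    """omega = arccos(1 - sigma^2 / 2), in degrees."""
    return np.degrees(np.arccos(1.0 - sigma**2 / 2.0))

print(angular_sigma(0.5))   # ~28.96 degrees, matching the quoted ~29 deg for sigma = 0.5
```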
Experiment 2A uses the non-orthogonal light configuration of Setup2. This generalises Experiments 1A–D with example discrepancies in L. The pattern of RMS(|d|) across the hemisphere in Fig. 7(a) resembles the patterns of |d| for tilt and slant discrepancies in all light directions in Figs. 3(c) and (d). The uncertainty RMS(|d|) radially increases from the minimum at the pole of the hemisphere (Z-axis) towards the edge. Fig. 7(b) samples the central row in Fig. 7(a) across the hemisphere. The profile of RMS(|d|) is parabolic and increases by a similar amount between the regularly sampled values of σ (step 0.1). The paraboloid of RMS(|d|) is shallower for low σ. The total error RMS_T(|d|) has a linear dependency on σ stemming from the linearity of the PSCL calculation. The same trend is observed for the angle error RMS_T(∠(ñ_S, ñ_R)) and the albedo error RMS_T(a_R − a_S). The albedo is relatively more affected than the unit normal with respect to their maximal possible errors. RMS_T(∠(ñ_S, ñ_R)) is generally higher in comparison to the RMS of the angle discrepancies introduced to the light directions. This is in contrast to the lower angular RMS in the scaled normals than the discrepancy magnitude in Experiments 1A–C. Experiment 2B is based on the orthogonal L (Setup1) which leads to the constant RMS(|d|) across the hemisphere in Fig. 7(d). This suggests that the cloud of error vectors d has the same shape regardless of the normal direction. Fig. 7(e) shows that the constant profile is independent of σ. Also, note that the levels of RMS(|d|) are equal to RMS_T(|d|) in Fig. 7(f). The trend of RMS_T(|d|) is linear but noticeably lower than in Experiment 2A. This supports the theoretical claim about the optimality of orthogonal light directions in the context of errors in the illumination matrix (Section 5.1). Although RMS_T(a_R − a_S) has a decreasing slope towards higher values of σ, the albedo is still proportionally more affected than the normal direction in general. In contrast to Experiment 2A, RMS_T(∠(ñ_S, ñ_R)) is a bit lower than the RMS of the angle discrepancies in the light directions.

6.2.2. Uncertainty of the interaction matrix

The uncertainty is modelled by introducing independent Gaussian noise with standard deviation σ into the coordinates of the vectors v_j.
Fig. 7. The RMS graphs for Exp. 2A (a)–(c), Exp. 2B (d)–(f). RMS(|d|) of the scaled normals across the hemisphere for σ = 0.2 where black = 0, white = 280 coordinate units (a), (d). The profile of RMS(|d|) along the central row for different σ of the input uncertainty (b), (e). The total RMS over the whole hemisphere for increasing σ: RMS_T(|d|) – red (in coordinate units), RMS_T(∠(ñ_S, ñ_R)) – green (in degrees) and RMS_T(a_R − a_S) – blue (in grey-scale levels) (c), (f). (For interpretation of the references to colour in this figure caption, the reader is referred to the web version of this article.)
This creates perturbations of the original interaction matrix V in a similar manner as for L in the previous section. However, σ is defined in the domain of light-sensor-material interaction so there is no geometrical interpretation of noisy v_j as for the light directions. Any negative element of a perturbed V is culled to zero to fulfil the properties of the interaction matrix. Experiment 2C uses Setup2 with the ideal identity V and generalises Experiments 1E–F with example discrepancies. The pattern of RMS(|d|) in Fig. 8(a) is opposite to the pattern for the illumination matrix in Fig. 7(a) (both use the same Setup2). The uncertainty radially decreases from the maximum at the Z-axis towards the edge. The profile of RMS(|d|) in Fig. 8(b) is an upside-down parabola which scales by similar amounts with increasing σ. The total measures RMS_T(|d|), RMS_T(∠(ñ_S, ñ_R)) and RMS_T(a_R − a_S) of the output uncertainty again have a linear dependency on σ (Fig. 8(c)). Experiment 2D uses Setup3 with the non-diagonal V where the vectors v_2 = [0 1 0.5]^T and v_3 = [0 0 0.5]^T differ from the vector v_1 = [1 0 0]^T. This causes an asymmetry in the pattern of RMS(|d|) in comparison to Experiment 2C. The paraboloid has a similar profile over different values of σ but the minimum is shifted away from the Z-axis. RMS_T(|d|), RMS_T(∠(ñ_S, ñ_R)) and RMS_T(a_R − a_S) are dominantly linear but there is a slight change of the slope beyond σ = 0.3. This could be caused by a bias in the perturbed v_2 and v_3 which often have to be culled to positive values in the case of larger noise. The output uncertainty is generally higher than for the identity interaction matrix in Experiment 2C which supports the theoretical claim about the optimality of diagonal V in Section 5.2. In both experiments albedos are proportionally more affected than normal directions but the difference is more severe than for the uncertainty in the light directions (Experiments 2A–B).

6.2.3. Uncertainty of image colour

The uncertainty of the input image I is modelled by adding independent Gaussian noise in the colour channels as in Experiment 1G. The standard deviation σ is expressed in colour levels and a different range of values is evaluated than in Experiments 2A–D.
Experiment 2E extends Experiment 1G for the same Setup2 such that a random noise vector is introduced to the colour of the same pixel many times. RMS(|d|) over the hemisphere in Fig. 9(a) is fairly constant which further proves that the image noise is propagated to all scaled normals to the same extent. There is a small noise in RMS(|d|) across the hemisphere in contrast to the previous experiments (Fig. 9(b)) because each scaled normal is separately subjected to the image colour uncertainty whereas the uncertainty of L and V globally affects all normals. The profile of RMS(|d|) linearly increases with σ and has the same values as the corresponding total RMS_T(|d|) (Fig. 9(c)). Experiment 2F uses Setup3 to assess the case with the non-diagonal V. The outcome is that all RMS measures in Figs. 9(d)–(f) are significantly higher than for Experiment 2E because V is not the identity matrix. Setup1 with the orthogonal L and the identity V represents the optimal configuration with respect to the image noise according to the theoretical findings in Section 5.3. Experiment 2G with Setup1 empirically proves that the noise vector Δ in Eq. (17) is not amplified as for non-orthogonal L, V. The RMS errors in Figs. 9(g)–(i) are lower than in Experiments 2E and 2F. The graph of RMS_T(|d|) in Fig. 9(i) confirms the theoretical relationship RMS_T(|d|)² = 3σ² for the optimal setup presented in [6]. As with the uncertainty in the other inputs, the image noise has a bigger impact on albedos than unit normals. The split of the uncertainty of a scaled normal between its direction and albedo is more biased to the albedo than for L but less than for V.

6.3. Assessment of a real capture setup

Outcomes of the sensitivity analysis on the synthetic data can be used to assess a real capture setup for PSCL. The constructed setup is similar to Setup2 from the synthetic experiments. The differences are the right-handed WCS (the Y-axis points upwards in comparison to Fig. 1) and a vertical swap between the red-green pair of lights and the blue light. Due to physical constraints during the construction the angles θ, φ of the light directions partially
Fig. 8. The RMS graphs for Exp. 2C (a)–(c) and Exp. 2D (d)–(f).
Fig. 9. The RMS graphs for Exp. 2E (a)–(c), Exp. 2F (d)–(f) and Exp. 2G (g)–(i). RMSðjdjÞ of the scaled normals across the hemisphere for
r ¼ 25 (a), (d), (g).
Also, the spectra of the light sources only approximately match the spectral sensitivity curves of the RGB sensors, thus V is not perfectly diagonal.
A white sphere is captured by a calibrated camera (Fig. 10(a)) similarly to the simulation with the synthetic hemisphere (Fig. 2(c)). The sphere has a known diameter of 75 mm and a known position in the WCS. The normal and albedo maps of the sphere in Figs. 10(b) and (e) are computed using Eq. (8). The illumination matrix L and the interaction matrix V are calibrated beforehand following the method in [14]. Because the location of the sphere and the camera parameters are known, it is possible to generate the ground-truth normal map from the perspective of the camera (Fig. 10(c)). The ground-truth albedo is not directly available, but the sphere has a uniform colour. Exploiting this fact and the ground-truth normals, a single albedo can be estimated from the captured image. Eq. (8) has only one unknown, a, when the normal n and the matrices V, L are known. Solving the resulting overdetermined linear system yields an estimate of a at every pixel (a least-squares sketch is given below). The average of these estimates, 208, is used as the single albedo value for generating the ground-truth albedo map (Fig. 10(f)).
The reconstructed albedo-scaled normal map is compared to the ground truth in the same way as for the synthetic data in Section 6.1. The only difference is that the simulation part is now replaced with the real capture (the index S denotes the computed ground truth in this case). The pattern of the error |d| across the sphere is depicted in Fig. 10(d). A mask excludes regions with incorrectly reconstructed normals and albedos caused by self-shadows and non-Lambertian highlights. RMS(|d|) across the valid region of the sphere is 66.08, with the following split between the normal direction (RMS(∠(ñ_R, ñ_S)) = 15.70°) and the albedo (RMS(a_R − a_S) = 29.00 grey levels). The observed error is caused by a combination of image noise and discrepancies in the matrices V and L. The difference between the estimated and the actual matrices is due to imperfections of the calibration method.
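A minimal sketch of this per-pixel albedo estimate follows (not from the paper); it again assumes that Eq. (8) takes the linear form c = V L (a n), so that a is the only per-pixel unknown once the ground-truth normal and the calibrated V, L are fixed:

```python
import numpy as np

def estimate_uniform_albedo(colours, normals, V, L):
    """Per-pixel least-squares albedo and its average over the valid sphere region.

    colours : (N, 3) observed pixel colours inside the valid region
    normals : (N, 3) ground-truth unit normals at the same pixels
    V, L    : (3, 3) calibrated interaction and illumination matrices
    Assumes the PSCL model c = V @ L @ (a * n) (a hypothetical reading of Eq. (8)).
    """
    basis = normals @ (V @ L).T   # (N, 3): V L n for every pixel
    # For each pixel, c ~= a * basis is an overdetermined system in the scalar a;
    # its least-squares solution is the projection coefficient below.
    a_per_pixel = np.sum(colours * basis, axis=1) / np.sum(basis * basis, axis=1)
    return a_per_pixel, float(a_per_pixel.mean())
```

Averaging the per-pixel estimates corresponds to the single value of 208 reported above.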
Sensitivity analysis on the synthetic hemisphere provides relationships between the uncertainties in the different inputs to PSCL and the uncertainty in the output scaled normals. These functions can be used inversely to assess the image noise of the camera and the accuracy of the calibration procedure. The exact magnitudes of the discrepancies in the individual inputs cannot be estimated from the output error RMS(|d|) because they jointly influence the reconstruction. However, it is possible to calculate upper bounds on the discrepancies if each input is considered separately as the sole source of the output error. Therefore, for each case RMS(|d|) is projected back to the magnitude of uncertainty σ using the function of total RMS_T(|d|) across all normals. This is performed using the results of Experiments 2A, 2C and 2E for Setup2, which is similar to the real capture setup. Note that the estimated σ for V and L reflects, apart from the calibration inaccuracy, also the deviation of the real setup from the theoretical Setup2. The discrepancy in L has the upper bound σ = 0.13 for the elements of the light directions (corresponding to an angular deviation of 7.52°) based on Fig. 7(c). The discrepancy in V is at most σ = 0.11 for all elements based on Fig. 8(c). The standard deviation of the image noise is bounded by σ = 24.92 colour levels based on Fig. 9(c).
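Because the total-RMS curves are linear in σ, the back-projection amounts to dividing the observed error by the slope of the corresponding curve. A minimal sketch follows; the slope values are not taken from the paper but back-computed here from the bounds quoted above, purely to illustrate the mechanism (in practice they would be read off Figs. 7(c), 8(c) and 9(c)):

```python
# Upper bounds on the per-input uncertainty, assuming each input alone explains
# the observed error and that RMS_T(|d|) = slope * sigma (linear fits of Section 6.2).
rms_observed = 66.08   # RMS(|d|) measured on the real sphere

# Illustrative slopes of the total-RMS curves for Setup2 (back-computed, not measured).
slopes = {
    "illumination matrix L": 508.0,      # approx. 66.08 / 0.13
    "interaction matrix V": 600.7,       # approx. 66.08 / 0.11
    "image noise (colour levels)": 2.65, # approx. 66.08 / 24.92
}

for name, slope in slopes.items():
    print(f"{name}: sigma <= {rms_observed / slope:.2f}")
```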
Fig. 10. Experiment with a real white sphere – the captured image (a), the reconstructed normal and albedo map (b), (e), the ground-truth normal and albedo map (c), (f), the magnitude of error |d| between the reconstruction and the ground truth (masked by valid regions) (d).
6.4. Discussion
The experiments with example discrepancies in the illumination matrix L and the interaction matrix V showed that the error d in a scaled normal varies depending on the surface orientation. The complex behaviour of d across the hemisphere depends on the magnitude and direction of the discrepancy present. However, d changes smoothly with surface orientation, which causes a low-frequency bias in the normal and albedo maps of an object reconstructed by PSCL. The bias affects the overall shape and appearance of the object but does not threaten the reconstruction of small details. Only large inaccuracies in L and V could cause a significant flattening of the details in some ranges of normal orientation. Image noise is directly propagated into the scaled normals and the error is independent of their directions. The magnitude of the discrepancy in L, V or the image colours has a linear relationship with the overall RMS error of |d|. Note that the RMS errors cannot be directly compared between the experiments with L, V and image noise due to the different nature and magnitude of the discrepancies.
The sensitivity analysis generalises the experiments with example discrepancies and provides an empirical relationship between the magnitude of the error in the inputs to PSCL and the magnitude of the error in the output scaled normals. As indicated by the non-homogeneous distribution of errors over the hemisphere for the example discrepancies, the uncertainty in L and V is propagated to the individual scaled normals depending on their orientation. In the experiments with the illumination matrix L the uncertainty is lower for normals pointing towards the lighting setup. The opposite holds for the interaction matrix V, where normals pointing away from the lighting setup have lower uncertainty. The uncertainty in image colours is propagated evenly to all normal directions. The uncertainty of the scaled normals grows linearly with the uncertainty in any of the inputs. This also applies when the error in a scaled normal is split between its unit normal and albedo. In all experiments the albedo is proportionally more inaccurate than the unit normal, but it should be less affected for darker surface colours.
The relationships between the input uncertainties and the scaled-normal uncertainty derived for a virtual capture setup can be used to analyse a similar real setup. Upper bounds on the calibration errors and image noise are estimated from the reconstruction error of a captured sphere with ground-truth normal and albedo maps.
The form of L and V influences the magnitude and distribution of the uncertainty in the scaled normals. In the presence of image noise, the orthogonality of the vectors forming both matrices guarantees minimal amplification of the noise according to the theoretical proofs. This is validated experimentally on different virtual capture setups. The same properties of L and V were theoretically inferred in the case of errors in the matrices themselves. This is also supported by the experiments, in that the uncertainty of the scaled normals is generally lower for the orthogonal L and the identity V.
Several important guidelines on constructing a capture setup for PSCL are identified from the theoretical and experimental analysis. The light directions should be perpendicular to each other (orthogonal L). The light spectral characteristics should match the sensitivity curves of the corresponding camera sensors (diagonal V). The light sources should have high intensity (high values on the diagonal of V). A simple numerical check of these properties for a calibrated setup is sketched below.
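The sketch below is not from the paper; it reports the pairwise angles between the light directions, a diagonal-dominance ratio of V, and the noise-amplification factor trace(((VL)(VL)^T)^{-1}), which follows from the same linear error-propagation model assumed earlier and equals 3 for an orthonormal L combined with the identity V:

```python
import numpy as np

def setup_report(L, V):
    """Heuristic quality check of a PSCL setup from its calibrated L and V."""
    # Pairwise angles between light directions (rows of L); ideally 90 degrees.
    dirs = L / np.linalg.norm(L, axis=1, keepdims=True)
    angles = [np.degrees(np.arccos(np.clip(dirs[i] @ dirs[j], -1.0, 1.0)))
              for i in range(3) for j in range(i + 1, 3)]

    # Fraction of V's energy on its diagonal; ideally 1.0 (diagonal V).
    diag_ratio = np.trace(np.abs(V)) / np.sum(np.abs(V))

    # Amplification of independent image noise by the inverse mapping (VL)^-1;
    # equals 3 for orthonormal L and identity V, larger values mean noisier normals.
    M = V @ L
    amplification = np.trace(np.linalg.inv(M @ M.T))
    return angles, diag_ratio, amplification

# Example: an ideal setup gives angles of 90 degrees, ratio 1.0 and amplification 3.0.
print(setup_report(np.eye(3), np.eye(3)))
```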
7. Conclusions
A theoretical and empirical error analysis was presented for photometric stereo with colour lights. The error in the reconstructed albedo-scaled normals was formulated and assessed with respect to the errors in the image colours, the illumination matrix and the interaction matrix. An important observation is how the accuracy of a scaled normal changes with its direction in the case of inaccuracies in the matrices. Apart from the distribution of the output error, its overall magnitude across all possible directions is minimal if the illumination matrix is orthogonal and the interaction matrix is diagonal. These conclusions are valuable for the design of a capture setup for photometric stereo. The experimental results also give an indication of the error magnitude in the normal and albedo maps for given levels of inaccuracy in a typical setup.
Acknowledgements
This research was supported by EPSRC Platform Grant EP/F02827X and EU IST Project SCENE.
References
[1] R. Anderson, B. Stenger, R. Cipolla, Color photometric stereo for multicolored surfaces, in: Proceedings of the International Conference on Computer Vision, 2011, pp. 2182–2189.
[2] V. Argyriou, M. Petrou, S. Barsky, Photometric stereo with an arbitrary number of illuminants, Comput. Vision Image Understand. 114 (2010) 887–900.
[3] S. Barsky, M. Petrou, The 4-source photometric stereo technique for three-dimensional surfaces in the presence of highlights and shadows, IEEE Trans. Pattern Anal. Mach. Intell. 25 (10) (2003) 1239–1252.
[4] S. Barsky, M. Petrou, Design issues for a colour photometric stereo system, J. Math. Imag. Vision 24 (1) (2006) 143–162.
[5] G. Brostow, C. Hernandez, G. Vogiatzis, B. Stenger, R. Cipolla, Video normals from colored lights, IEEE Trans. Pattern Anal. Mach. Intell. (2011) 1–12.
[6] O. Drbohlav, M. Chantler, On optimal light configurations in photometric stereo, in: Proceedings of the International Conference on Computer Vision, 2005, pp. 1707–1712.
[7] M. Drew, Shape from color, Technical Report CSS/LCCR TR 92-07, Simon Fraser University, School of Computing Science, 1992.
[8] G. Fyffe, X. Yu, Single-shot photometric stereo by spectral multiplexing, in: Proceedings of the IEEE International Conference on Computational Photography, 2011, pp. 1–6.
[9] C. Hernández, Self-calibrating a real-time monocular 3d facial capture system, in: Proceedings of the International Symposium on 3D Data Processing, Visualization and Transmission, 2010.
[10] C. Hernández, G. Vogiatzis, G. Brostow, B. Stenger, R. Cipolla, Non-rigid photometric stereo with colored lights, in: Proceedings of the International Conference on Computer Vision, 2007, pp. 1–8.
[11] C. Hernández, G. Vogiatzis, R. Cipolla, Shadows in three-source photometric stereo, in: Proceedings of the European Conference on Computer Vision, 2008, pp. 290–303.
[12] X. Jiang, H. Bunke, On error analysis for surface normals determined by photometric stereo, Signal Process. 23 (3) (1991) 221–226.
[13] H. Kim, B. Wilburn, M. Ben-Ezra, Photometric stereo for dynamic surface orientations, in: Proceedings of the European Conference on Computer Vision, 2010, pp. 59–72.
[14] M. Klaudiny, A. Hilton, J. Edge, High-detail 3d capture of facial performance, in: Proceedings of the International Symposium on 3D Data Processing, Visualization and Transmission, 2010, pp. 17–24.
[15] R. Ray, J. Birk, R. Kelley, Error analysis of surface normals determined by radiometry, IEEE Trans. Pattern Anal. Mach. Intell. 5 (6) (1983) 631–645.
[16] B. Sathyabama, V. Divya, A population adaptive differential evolution strategy to light configuration optimization of photometric stereo, in: Proceedings of the International Conference on Swarm, Evolutionary, and Memetic Computing, 2010, pp. 46–53.
[17] M. Smith, L. Smith, Dynamic photometric stereo – a new technique for moving surface analysis, Image Vision Comput. 23 (9) (2005) 841–852.
[18] A. Spence, M. Chantler, Optimal illumination for three-image photometric stereo using sensitivity analysis, IEE Proc. Vision Image Signal Process. 153 (2) (2006) 149–159.
[19] J. Sun, M. Smith, L. Smith, A. Farooq, Examining the uncertainty of the recovered surface normal in three light photometric stereo, Image Vision Comput. 25 (7) (2007) 1073–1079.
[20] G. Vogiatzis, C. Hernández, Self-calibrated, multi-spectral photometric stereo for 3D face capture, Int. J. Comput. Vision 97 (1) (2012) 91–103.
[21] R. Woodham, Photometric method for determining surface orientation from multiple images, Opt. Eng. 19 (1) (1980) 139–144.