Pattern Recognition Letters 29 (2008) 871–877 www.elsevier.com/locate/patrec
Dichromatic illumination estimation without pre-segmentation

Javier Toro *

Centre de recherche MOIVRE, Université de Sherbrooke, Sherbrooke, Québec, Canada J1K 2R1

Received 7 August 2007; received in revised form 14 November 2007; available online 17 January 2008
Communicated by J.A. Robinson

* Present address: Grupo de Ingeniería Biomédica (GIBULA), Universidad de Los Andes, Mérida 5101, Venezuela. Tel.: +58 416 244 3264; fax: +58 274 240 2890. E-mail address: [email protected]
Abstract

An illumination estimation method based on the dichromatic reflection model is proposed. Unlike most previous dichromatic methods, no explicit image segmentation into regions of uniform surface reflectance or highlight areas is required. An estimate is obtained by optimizing a cost function on the color of the illuminating light.
© 2008 Elsevier B.V. All rights reserved.

Keywords: Color constancy; Physics-based vision; Dichromatic reflection model
1. Introduction

The goal of color constancy methods is to ensure that the perceived colors in a scene remain fairly constant under varying light conditions. An approach that has been broadly explored consists of determining the color of the illuminating light, which is then discounted in order to obtain an invariant color description of the observed surfaces. Two main approaches have been developed (Finlayson and Schaefer, 2001b); namely, statistics- and physics-based techniques. In general, both approaches require that differently colored surfaces be present in the viewed scene. An advantage of physics-based algorithms, however, is that fewer surfaces are needed, to the point that some algorithms are able to operate with a single one (Lee, 1990; Finlayson and Schaefer, 2001b). Among physics-based techniques, the dichromatic reflection model (Shafer, 1985) has been the most widely used account of the reflection properties of scene surfaces. Independent of how the illuminant is estimated under the dichromatic assumption, most dichromatic physics-based
techniques require a pre-processing step that identifies either highlight areas or regions of uniform surface reflectance. In practice, obtaining reliable information from either of these pre-processing tasks has proved difficult (Finlayson and Schaefer, 2001b; Tan et al., 2004). Nevertheless, the dichromatic assumption remains a promising approach to pursue further. The validity of the dichromatic reflection model has been verified for a variety of inhomogeneous dielectric materials commonly observed in natural scenes (Tominaga and Wandell, 1989; Lee et al., 1990). Moreover, a recently proposed approach that removes both the segmentation and the highlight identification problems (Toro and Funt, 2007) has shown positive results in a broad variety of scenes.

The approach of Toro and Funt (2007) removes the explicit image segmentation problem, but requires a discrete set of candidate lights in order to solve for the illuminant. Given a set of candidates, the illuminant is found by testing all candidates sequentially. In the approach of this paper a discrete set of candidate lights is not needed. The illuminant is found by optimizing the fit of candidate dichromatic planes to the image colors through a cost function defined on the color of the illuminating light. In the proposed formulation all image colors are simultaneously involved in the estimation.
2. The problem and the proposed approach

In RGB space, the color response of an inhomogeneous dielectric material lies on a plane (Shafer, 1985), referred to as the dichromatic plane of the material, which under the assumption of neutral interface reflection (Lee et al., 1990) contains the color of the illuminating light. Thus, if two or more different materials illuminated by the same light are observed and their dichromatic planes can be identified, the vector direction representing the illuminant can be estimated by intersecting two or more of these planes.

A fundamental problem with the dichromatic principle for illumination estimation is that it is not known a priori which observed colors originate from the same material, and this makes it difficult to estimate the potential dichromatic planes reliably. Some approaches assume that the partition of the image into regions of uniform surface reflectance is known (Finlayson and Schaefer, 2001b), or that in any small region of the scene only a single material is imaged (Finlayson and Schaefer, 2001a). Others incorporate a mechanism aimed at detecting where a material change has occurred, so as to decide which information must be used to estimate a single plane (Lee, 1986); and some others seek to assess which estimated planes are likely to be dichromatic and whether their intersections may lead to good illuminant estimates (Schaefer, 2004). Other types of approaches rely on the existence of highlight regions in the scene (Lehmann and Palm, 2001; Tan et al., 2004). In the approach proposed by Lehmann and Palm (2001) it is assumed that within any particular highlight region only a single material is observed. The approach proposed by Tan et al. (2004), by contrast, is able to cope with different materials within the same highlight area. More recently, the problems of having to identify colors from the same material or the location of highlight areas have been avoided by Toro and Funt (2007) by adopting a multilinear constraint that accounts for multiple materials given a candidate illuminant.

The formulation proposed in this paper is also a multilinear approach to solving for the color of the illuminating light. The formulation rests on a cost function that depends only on a vector representing the color of the light. By contrast, the cost function put forward by Toro and Funt (2007) has as parameters not only a vector representing the color of the light, but also a set of vectors representing the dichromatic planes of the scene. To assess how well a candidate light fits the observed scene colors, that approach must solve a secondary minimization problem: finding the dichromatic planes given the candidate light. In the approach proposed here the goodness of fit of a candidate light is evaluated directly.

In the formulation of this paper an estimate of the number of dichromatic planes and their partial location in RGB space is obtained first. To this end, the proposed technique relies on the observation that colors tend to form clusters (Klinker et al., 1988). Since all observed colors must lie on a dichromatic plane, it is argued that if there is a cluster,
a dichromatic plane must pass through it. Having hypothesized an arrangement of dichromatic planes, a constraint on the chromaticity of the illuminant is derived. The constraint explicitly incorporates the fact that all dichromatic planes must intersect along the same line. The most plausible illuminant is found by minimizing the distance between image colors and dichromatic planes.

The formulation put forward in this paper is derived assuming that:

(1) The spectral content of the light falling on visible surfaces remains the same throughout the observed scene.
(2) If a uniformly colored surface reflects different colors, these colors comply with the dichromatic reflection model under the assumption of neutral interface reflection.
(3) Material changes are abrupt and appreciably different.
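To make the plane-intersection idea described above concrete, the following minimal sketch (with made-up colors; it is not the method of this paper) recovers the illuminant direction from two dichromatic planes whose assignment to materials is assumed known:

```python
import numpy as np

# Hypothetical illustration of the classical dichromatic principle: the colors
# of each material lie on a plane through the origin that contains the
# illuminant color, so the illuminant direction is the intersection of two
# such planes. All vectors below are made up.

illum = np.array([0.9, 1.0, 0.8])    # "true" illuminant color
body1 = np.array([0.2, 0.6, 0.3])    # body reflection of material 1
body2 = np.array([0.7, 0.3, 0.2])    # body reflection of material 2

# Normals of the two dichromatic planes span(illum, body_k).
n1 = np.cross(illum, body1)
n2 = np.cross(illum, body2)

# The two planes intersect along the line orthogonal to both normals.
axis = np.cross(n1, n2)
axis /= np.linalg.norm(axis)

print(axis, illum / np.linalg.norm(illum))   # same direction, up to sign
```

In practice, of course, the assignment of colors to materials, and hence the planes themselves, is unknown; the formulation developed below avoids having to recover this assignment explicitly.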
3. A constraint on the light

The proposed method rests on the premise that it is possible to discover a set of colors that represents all the materials observed in the scene. In this paper, a color represents a given material if it lies on the actual dichromatic plane of the material. A mechanism for discovering these representative colors is described in Section 5.

Given an arbitrary scene composed of several materials, and assuming that a set of representative colors has been discovered, the color of the illuminating light is estimated by looking for the light-related vector under which every color in the scene can be represented as a linear combination of the sought vector and at least one of the discovered representative colors. Here, the linear dependency between a given color (vector c), the candidate light (denoted as vector v) and one of the representative colors (denoted by vector p) is assessed by measuring the Euclidean distance between c and the plane spanned by v and p. The distance from c to the plane span(v, p) can be written as c · (p × v)/|p × v|. The quantity p × v is a vector normal to the plane span(v, p), and the product of the color c with the unit normal (p × v)/|p × v| gives the distance between c and the plane defined by this normal. Clearly, if this distance is equal to zero, then c is a linear combination of v and p.

Observe that while the distance between a given color and the plane spanned by the candidate light and one of the representative colors may be equal to zero, the distance between the same color and the plane spanned by the candidate light and some other representative color need not vanish. Nonetheless, the constraint on v is satisfied when at least one of these distances assumes the value zero.

To formalize the framework of the proposed approach, consider the image of a scene composed of m different inhomogeneous dielectric materials illuminated by a light source of color c_s. Suppose that n (≥ m) color vectors,
{p_i}_{i=1}^{n}, which will be referred to as pivots, have been selected (a graphical illustration is given in Fig. 1). If among these n vectors there is a subset of m vectors such that each member represents a different material, then the expression

    \prod_{i=1}^{n} \frac{c^T P_i v}{|P_i v|} = 0                (1)

is referred to as the light constraint equation of the scene, where c is any observed color, P_i is the antisymmetric matrix derived from p_i = [p_{ir} p_{ig} p_{ib}]^T as

    P_i = \begin{bmatrix} 0 & -p_{ib} & p_{ig} \\ p_{ib} & 0 & -p_{ir} \\ -p_{ig} & p_{ir} & 0 \end{bmatrix}                (2)
and v is some vector related to the color of the light. Note that the quantity P_i v is the cross product of the vectors p_i and v.

Observe that (1) holds true for every color in the imaged scene when v = k c_s, k ≠ 0. Indeed, as discussed earlier, each term c^T P_i v / |P_i v|, i = 1, ..., n, is the Euclidean distance between c and the plane defined by the normal P_i v. As any observed color must lie on one of the m dichromatic planes of the scene, and all these planes intersect along the same line, if v = k c_s, k ≠ 0, then for any color response c there is at least one vector p_i such that the response lies on the plane spanned by p_i and v, thereby making the distance between c and this plane zero. Consequently, Eq. (1), which is the product of the distances between c and the planes span(v, p_i), i = 1, ..., n, also vanishes.

Note that (1) is valid for any vector that has the same direction as the vector c_s, so it is not possible to tell what the intensity of the illuminating light is. Nevertheless, the equation
gives a constraint on the most important quality of the illuminant: its chromaticity.

In the proposed formulation neither a partition of the image into material-homogeneous regions nor the location of highlight areas is required; a set of representative pivot colors is needed instead. Clearly, (1) does not hold if not all materials are represented among the chosen pivots. On the other hand, observe that a valid constraint remains valid if several representatives of the same material are found, or if an otherwise arbitrary pivot is incorporated into the formulation.

Note that, based on Eq. (1) alone, at least two different surfaces exhibiting a sufficiently varied chromaticity response are needed to obtain a unique solution. It is possible, however, to obtain a solution from a single chromaticity-diverse surface if an additional constraint is used.

4. Likely illuminants

In practice, the range of colors that typical light sources display is limited. Judd et al. (1964) found that the chromaticity points of most natural light sources cluster around a curve: the daylight locus. This fact can be used to restrict the solution space to those colors that are more likely to occur in nature. This approach has already been explored by Finlayson and Schaefer (2001b), who report a considerable improvement in the estimates of the chromaticity of the illuminating light. Another advantage of using this a priori knowledge about the color of typical light sources is that the illuminant can be solved for even if only a single material is observed (Finlayson and Schaefer, 2001b). Here, as proposed by Judd et al. (1964), a second-degree polynomial on one of the chromaticity components is used to represent typical daylight chromaticities.
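As an illustration of such a parameterization, the sketch below evaluates the well-known quadratic daylight-locus relation of Judd et al. (1964) in CIE xy chromaticity. The paper's own polynomial, fitted in its working chromaticity space, is not reproduced here, so the coefficients, range and function name below are illustrative only.

```python
import numpy as np

def daylight_locus_y(x):
    """CIE xy daylight locus of Judd et al. (1964): y = 2.870 x - 3.000 x^2 - 0.275."""
    return 2.870 * x - 3.000 * x ** 2 - 0.275

# Sampling the locus gives a one-parameter family of candidate illuminant
# chromaticities over which the cost function can later be minimized.
xs = np.linspace(0.25, 0.38, 14)            # roughly 25000 K down to ~4000 K
candidates = np.stack([xs, daylight_locus_y(xs)], axis=1)
print(candidates[:3])
```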
Fig. 1. The dichromatic constraint on the illumination for the case of two pivot colors. Each pivot represents a different material in a scene composed of two materials. Given the pivots, the illumination color v, which is free to vary, defines the set of dichromatic planes. Ideally, if v is the true illuminant color then, for every observed color c, its distance (d1 or d2) to at least one of the planes must vanish.

5. Choosing the pivots
Let us now turn to the problem of how to choose the pivots (representative colors) that are needed in the light constraint equation. It has been shown that scene colors tend to form clusters and that a color cluster signals the presence of a material in the scene (Klinker et al., 1988). The problem thus is how to detect these color clusters.

In this paper, the detection is carried out in the rg-chromaticity space, r = R/(R + G + B), g = G/(R + G + B). In this space, point- and radial-type color clusters in RGB space – the types of cluster commonly observed in typical scenes (Klinker et al., 1988) – project onto point-type clusters. To detect clusters of this type, a clustering approach based on the kernel density estimator (Comaniciu and Meer, 2002) is used. In this clustering approach, a density map of the space is produced by means of a kernel function, that is, a function that regulates the distance-based similarity between observations (chromaticity values). The pivots are then given by those locations where the density map has a local maximum.
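A minimal sketch of this kind of pivot detection is given below. It evaluates a Gaussian kernel density on a grid and keeps its local maxima, standing in for the mean-shift procedure of Comaniciu and Meer (2002); the bandwidth, grid size and function name are illustrative assumptions.

```python
import numpy as np

def find_pivot_chromaticities(rgb, bandwidth=0.05, grid=64):
    """Locate local maxima of a kernel density estimate in rg-chromaticity.

    rgb: (N, 3) array of color responses (for a real image, subsample the
    pixels first). Returns a (K, 2) array of (r, g) pivot locations.
    """
    s = rgb.sum(axis=1)
    mask = s > 0
    rg = (rgb[mask] / s[mask, None])[:, :2]          # (r, g) chromaticities

    # Evaluate a Gaussian kernel density on a regular grid over [0, 1]^2.
    axis = np.linspace(0.0, 1.0, grid)
    rr, gg = np.meshgrid(axis, axis, indexing="ij")
    pts = np.stack([rr.ravel(), gg.ravel()], axis=1)
    d2 = ((pts[:, None, :] - rg[None, :, :]) ** 2).sum(axis=2)
    density = np.exp(-0.5 * d2 / bandwidth ** 2).sum(axis=1).reshape(grid, grid)

    # Keep grid cells whose density is a local maximum of the density map.
    pivots = []
    for i in range(1, grid - 1):
        for j in range(1, grid - 1):
            patch = density[i - 1:i + 2, j - 1:j + 2]
            if density[i, j] >= patch.max() and density[i, j] > 0:
                pivots.append((axis[i], axis[j]))
    return np.array(pivots)
```

Each detected (r, g) location can then be lifted back to an RGB direction, e.g. (r, g, 1 - r - g), to serve as a pivot vector p_i in the light constraint equation.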
For simplicity, a radially symmetric kernel is employed, which requires only a single parameter (the bandwidth of the kernel) to define the degree of similarity. A brief description of the different methods that can be used to estimate an appropriate bandwidth is given by Comaniciu and Meer (2002).

Note that this procedure is, in a sense, a sort of segmentation carried out in chromaticity space. However, the procedure is less demanding than a typical segmentation, since only the locations of local maximum density are required. In the proposed formulation there is no need to work out how to partition the chromaticity space around these discovered locations.

6. Finding a solution

Given the constraint on the color of the illuminating light (Section 3) and on the color of typical light sources (Section 4), the problem is now how to find a vector v that fits these two restrictions simultaneously. This vector can be found by minimizing the sum of the errors in (1) over all image colors {c_j},

    \min_{v} \sum_{j} \left( \prod_{i=1}^{n} \frac{c_j^T P_i v}{|P_i v|} \right)^2                (3)

together with a quantity indicating how different v is from the color of typical light sources.

First, Eq. (3) is simplified by noticing that there is a linear form in which \prod_{i=1}^{n} c_j^T P_i v can be re-expressed. To begin, consider a single color c. Given this color, a pivot-modified color, denoted d_i, is defined as

    d_i = -P_i c                (4)

(where the fact that P_i^T = -P_i has been used). The sought formulation can be obtained by multiplying out \prod_{i=1}^{n} d_i^T v (= \prod_{i=1}^{n} c^T P_i v) to obtain an expression written as the sum of q = (n^2 + 3n + 2)/2 independent terms of the form v_1^{n_1} v_2^{n_2} v_3^{n_3} (v = [v_1 v_2 v_3]^T), where n_1 + n_2 + n_3 = n, with associated coefficients depending on {d_i}_{i=1}^{n}. If the totality of these independent terms is collected into a single vector, denoted here as w, and the coefficient multiplying the term v_1^{n_1} v_2^{n_2} v_3^{n_3} is denoted a_{n_1,n_2,n_3}, then \prod_{i=1}^{n} c^T P_i v can be written as

    \prod_{i=1}^{n} d_i^T v = \sum_{n_1+n_2+n_3=n} a_{n_1,n_2,n_3} \, v_1^{n_1} v_2^{n_2} v_3^{n_3} = a^T w,                (5)
which is a linear equation in the entries of the vector w.

Eq. (5) was obtained by considering a single color c. If j colors are contemplated, the following set of equations is obtained,

    K_j w,                (6)

where

    K_j = \begin{bmatrix} a_1^T \\ \vdots \\ a_j^T \end{bmatrix}                (7)
and a_j is the vector of coefficients associated with color c_j. Using (6), (3) can now be written as

    \frac{w^T K_j^T K_j w}{\prod_{i=1}^{n} v^T P_i^T P_i v},                (8)

where the numerator is a matrix form of \sum_{j} (a_j^T w)^2. An advantage of using this expression over that given in (3) is that all the information provided by the observed colors and the chosen pivots is compacted into the q × q matrix K_j^T K_j. Assessing how good or bad a candidate color v is then requires only a simple matrix multiplication.
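For concreteness, the sketch below evaluates the cost directly in the uncompacted form of Eq. (3), given a set of pivots and a candidate light; the actual implementation would instead precompute the q × q matrix K_j^T K_j of Eq. (8). Names and signatures are illustrative.

```python
import numpy as np

def skew(p):
    """Antisymmetric matrix P such that P @ v == np.cross(p, v) (Eq. (2))."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def dichromatic_cost(v, pivots, colors, eps=1e-12):
    """Sum over image colors of the squared product of point-to-plane
    distances (Eq. (3)). v: candidate light (3,), pivots: (n, 3) pivot
    colors, colors: (N, 3) image colors. Smaller is better; zero in the
    noise-free ideal case."""
    normals = np.array([skew(p) @ v for p in pivots])      # P_i v, shape (n, 3)
    norms = np.linalg.norm(normals, axis=1) + eps
    # distances[j, i]: signed distance from c_j to the plane with normal P_i v
    distances = (colors @ normals.T) / norms
    per_color = np.prod(distances, axis=1)                  # product over pivots
    return float(np.sum(per_color ** 2))
```

In the paper this quantity (in its compacted form) is minimized over a single parameter that moves the candidate v along the daylight locus of Section 4.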
The remaining issue is how to measure the departure of v from the color of typical light sources. Since a parametric form (a polynomial) of the chromaticities that typical light sources display is already available (Judd et al., 1964), instead of attempting to measure the departure of the solution from this curve, the error measure (8) is minimized with a standard optimization technique over the parameterized colors. Note that a parameterization in the RGB space also induces a parameterization in the space where w resides. As such, (8) can be expressed in terms of a single cost parameter.

7. Experiments

The approach proposed here was tested on the database generated at the Computational Vision Lab, Simon Fraser University (SFU), which comprises images of 32 scenes taken under a set of 11 different illuminants. The database is organized into four groups: a set of images with minimal specularities (labeled mondrian), images with non-negligible dielectric specularities (labeled specular), images with metallic specularities (labeled metallic), and images of scenes with at least one fluorescent surface (labeled fluorescent). A detailed description of how this dataset was generated is given by Barnard et al. (2002c). The database can be downloaded at http://www.cs.sfu.ca/~colour/data/colour_constancy_test_images/index.html. A sample image from each group of the database is shown in Fig. 2.

In the implementation of the proposed scheme the following ad hoc procedures were applied. Pixels exhibiting extreme values (either too high or too low) in any of the color channels were excluded. Pixels whose chromaticity values were isolated with respect to the bulk of the overall chromaticities, as well as those that lay within a neighborhood of the daylight locus, were also discarded. The latter is aimed at preventing the appearance of any cluster (pivot) that might cause the constraint of Eq. (1) to become unstable. Apart from this, no other strategy aimed at improving the performance of the proposed technique was used. In estimating the light, the approach was applied to any given image as a whole. Clusters in chromaticity space were discovered using the normal kernel.
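A rough sketch of this kind of pixel filtering is given below; the channel thresholds, the locus-distance test and all names are illustrative assumptions, since the paper does not report specific values (the removal of isolated chromaticities is likewise omitted here).

```python
import numpy as np

def prefilter_pixels(rgb, low=10, high=245, locus_fn=None, locus_tol=0.01):
    """Drop pixels with extreme channel values and, optionally, pixels whose
    rg chromaticity falls within locus_tol of a daylight-locus curve.
    All threshold values here are illustrative, not the paper's."""
    rgb = np.asarray(rgb, dtype=float)
    keep = np.all((rgb > low) & (rgb < high), axis=1)
    if locus_fn is not None:
        s = rgb.sum(axis=1) + 1e-12
        r, g = rgb[:, 0] / s, rgb[:, 1] / s
        near_locus = np.abs(g - locus_fn(r)) < locus_tol
        keep &= ~near_locus
    return rgb[keep]
```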
Fig. 2. Sample images from the database generated at the Computational Vision Lab, Simon Fraser University. The database is organized into four groups labeled as mondrian, specular, metallic and fluorescent.
In Table 1, the performance of the proposed technique on the overall data set and on each subgroup is reported as the Euclidean chromaticity distance between the estimated and the actual light. In Table 2, the performance is reported as the angular error in degrees. The results are given as the mean and the median, measures commonly used to evaluate color constancy algorithms (Barnard et al., 2002a,b; Hordley and Finlayson, 2004). Results are shown for the kernel bandwidth that provided the best performance. In these experiments two different constraints on the illuminant were tested. The label unconstrained search indicates that the illuminant can be any light located on the daylight locus; the label constrained search indicates that the illuminant must be one of the 11 lights used to construct the database. In both tables the results reported by Toro and Funt (2007) are also shown.

Note that, in general, the algorithm performs best on the mondrian subset. Specularities in this set are indeed minimal, but not nonexistent. Overall, it was observed that the color response of most imaged scenes in this set conforms to the type of response with which the proposed approach is expected to work best. It is naturally anticipated that the further the observations depart from the idealized model of the world, as stated by the assumptions of Section 2, the less reliable the estimates will be.

In any case, the algorithm was able to estimate the light with moderate accuracy. For the unconstrained search, 53 (out of 511) images had an error estimate of less than 0.02 (an error threshold heuristically adopted by Barnard et al. (2002a) as adequate color constancy for most color tasks); in the constrained search, 165 images were within the given threshold. In both cases a large portion of these images belong to the mondrian set (37 for the first group and 91 for the second).

The difference between the mean and median values in Table 1 suggests that there are cases where the error is very high. For the unconstrained search, 25 images had an error greater than 0.2; for the constrained search, this number is 5. Inspection of these instances revealed that the actual color of the light corresponded, in most cases, to a local minimum of the error function (Eq. (3)), and not to the required global one.
Table 1
Chromaticity error

              Unconstrained search    Constrained search    Toro and Funt (2007)
              Mean      Median        Mean      Median      Mean      Median
Database      0.080     0.064         0.050     0.043       0.055     0.043
Mondrian      0.068     0.055         0.036     0.034       0.047     0.026
Specular      0.067     0.060         0.046     0.043       0.051     0.044
Metallic      0.088     0.073         0.062     0.051       0.064     0.046
Fluorescent   0.124     0.089         0.076     0.074       0.065     0.050

The performance is shown as the Euclidean distance between the actual and the estimated light in the rg-chromaticity space. In the unconstrained search, the illuminant can be any light located on the daylight locus. In the constrained search, the illuminant must be one of the 11 lights used to construct the database. The chromaticity errors reported by Toro and Funt (2007) are also shown.
Table 2
Angular error (degrees)

              Unconstrained search    Constrained search    Toro and Funt (2007)
              Mean      Median        Mean      Median      Mean      Median
Database      10.81     8.86          6.89      6.05        7.49      5.80
Mondrian      9.28      7.38          5.09      4.27        6.37      3.76
Specular      9.21      8.03          6.41      5.55        7.10      6.05
Metallic      12.18     10.14         8.48      7.78        8.75      7.64
Fluorescent   15.74     13.17         10.42     11.14       8.93      7.78

The performance is shown as the angular error in degrees between the actual and the estimated light. The angular errors reported by Toro and Funt (2007) are also shown.
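For reference, the two error measures used in Tables 1 and 2 can be computed as in the following minimal sketch (names are illustrative):

```python
import numpy as np

def rg_chromaticity(rgb):
    """Project an RGB vector onto (r, g) chromaticity coordinates."""
    s = float(np.sum(rgb))
    return np.array([rgb[0] / s, rgb[1] / s])

def chromaticity_error(estimated_rgb, actual_rgb):
    """Euclidean distance between estimate and ground truth in rg space (Table 1)."""
    return float(np.linalg.norm(rg_chromaticity(estimated_rgb) -
                                rg_chromaticity(actual_rgb)))

def angular_error_degrees(estimated_rgb, actual_rgb):
    """Angle in degrees between estimated and actual illuminant vectors (Table 2)."""
    cosang = np.dot(estimated_rgb, actual_rgb) / (
        np.linalg.norm(estimated_rgb) * np.linalg.norm(actual_rgb))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```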
It was also observed that the pre-processing used to detect and remove isolated chromaticities in the original image left behind some chromaticities that, to a human observer, still look isolated. These singular clusters prompted a large number of pivots (nearly half or more of the total). In most cases, hand-picking a number of these pivots out (not the actual chromaticities) restored the correct light (or a close approximation) as the global minimum.

The number of pivots depends, in general, on the kernel bandwidth and the behavior of the chromaticities of the given image. From one bandwidth to another there may be a considerable difference in this number. For instance, an image that produced 10 pivots for a bandwidth of 0.2 may produce 30 or more with a bandwidth of 0.1. Nevertheless, variations of this order around the optimal bandwidth did not cause a dramatic change in performance.

In principle, if two or more surfaces reflect a sufficiently diverse set of chromaticities, an arbitrary illuminant can be estimated using the constraint of Eq. (1) alone. Nonetheless, the results improve considerably when a constraint on the likely color of the illuminant is added. This is seen in Table 1, where the mean error of the estimated light drops from 0.080 to 0.050 when the stronger constraint is used.

Under the same experimental conditions as Barnard et al. (2002b), the performance of the proposed algorithm compares well with that of the major statistics-based methods. A proper comparison with previously proposed dichromatic physics-based techniques is in most cases difficult to make, since the majority of published results have been produced using either a different image dataset or a subset of the database used in this paper. Compared to the approach proposed by Toro and Funt (2007), the approach of this paper shows, in the constrained-search case, a similar overall performance.

8. Concluding remarks

The results demonstrate that the proposed approach is able to estimate the color of the light without requiring prior knowledge about the region of support of material-homogeneous surfaces or highlight areas in the scene. The proposed approach relies on the observation that a representative color of any given material can be identified. This is a much easier task than segmentation and more dependable than locating highlights. The detection of these representatives involves finding clusters in chromaticity space. Using the estimated representative colors, a constraint on the color of the illuminant for the observed scene can then be formulated. The constraint explains the observed colors in terms of a single parameter (the chromaticity of the light), which is estimated via optimization. The fact that all dichromatic planes of the scene must have a common axis of intersection is explicitly embodied in the constraint of the proposed formulation.

Note that the problem of estimating the color of the illuminating light from the principles laid down by the dichromatic reflection model would pose no major difficulties if the partition of the image into regions of uniform surface reflectance were known. In principle, we could therefore think of the illumination estimation problem as one of simultaneous data segmentation and model estimation, for which a number of approaches have been proposed. Most existing approaches start from an initial guess and then iterate between data segmentation and model parameter estimation. These techniques, however, have been reported to be very sensitive to the initial guess, whose appropriate selection remains an elusive problem. A recently proposed algebraic-geometric approach (Vidal et al., 2003) finds the model parameters while avoiding the segmentation problem, but it is not clear how the technique behaves in the presence of noise (Vidal et al., 2003). In the problem that concerns this paper, the identification of the linear models (the dichromatic planes of the scene) is not the primary goal; the aim is to reliably estimate their common axis of intersection. In the presence of noise or any other disturbance, the linear models produced by a model estimation technique may not share a common axis. Some other mechanism would therefore be required, such as the one proposed by Finlayson and Schaefer (2001a), to find the best approximate axis of intersection from the estimated planes. Embedding in the constraint of the formulation the fact that all dichromatic planes must intersect along the same line may be a more reliable way of proceeding than blindly applying a generic model estimation technique to the observed data.

Acknowledgements

This work was supported in part by Canadian Heritage and Bell Canada.
J. Toro would like to thank Prof. Djemel Ziou, Centre de recherche MOIVRE, Université de Sherbrooke, and Prof. Brian Funt, Computational Vision Lab, Simon Fraser University.

References

Barnard, K., Cardei, V., Funt, B., 2002a. A comparison of computational color constancy algorithms – Part I: Methodology and experiments with synthesized data. IEEE Trans. Image Process. 11 (9), 972–984.
Barnard, K., Martin, L., Coath, A., Funt, B., 2002b. A comparison of computational color constancy algorithms – Part II: Experiments with images. IEEE Trans. Image Process. 11 (9), 985–996.
Barnard, K., Martin, L., Funt, B., Coath, A., 2002c. A data set for color research. Color Res. Appl. 27 (3), 148–152.
Comaniciu, D., Meer, P., 2002. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Machine Intell. 24 (5), 603–619.
Finlayson, G., Schaefer, G., 2001a. Convex and non-convex illuminant constraint for dichromatic color constancy. In: Internat. Conf. Comput. Vision Pattern Recognition, vol. I, pp. 598–604.
Finlayson, G., Schaefer, G., 2001b. Solving for colour constancy using a constrained dichromatic reflection model. Int. J. Comput. Vision 42 (3), 127–144.
Hordley, S., Finlayson, G., 2004. Re-evaluating colour constancy algorithms. In: Internat. Conf. Pattern Recognition, vol. 1, pp. 76–79.
Judd, D.B., MacAdam, D.L., Wyszecki, G.W., 1964. Spectral distribution of typical daylight as a function of correlated color temperature. J. Opt. Soc. Amer. 54 (8), 1031–1040.
Klinker, G.J., Shafer, S.A., Kanade, T., 1988. The measurement of highlights in color images. Int. J. Comput. Vision 2 (1), 7–32.
Lee, H.-C., 1986. Method for computing the scene-illuminant chromaticity from specular highlights. J. Opt. Soc. Amer. A 3 (10), 1694–1699.
Lee, H.-C., 1990. Illuminant color from shading. In: Proc. SPIE: Perceiving, Measuring, and Using Color, vol. 1250, pp. 236–244.
Lee, H.-C., Breneman, E.J., Schulte, C.P., 1990. Modeling light reflection for computer color vision. IEEE Trans. Pattern Anal. Machine Intell. 12 (4), 402–409.
Lehmann, T.M., Palm, C., 2001. Color line search for illuminant estimation in real-world scenes. J. Opt. Soc. Amer. A 18 (11), 2679–2691.
Schaefer, G., 2004. Robust dichromatic colour constancy. In: Internat. Conf. Image Analysis Recognition, vol. 2, pp. 257–264.
Shafer, S., 1985. Using color to separate reflection components. Color Res. Appl. 10 (4), 210–218.
Tan, R.T., Nishino, K., Ikeuchi, K., 2004. Color constancy through inverse intensity chromaticity space. J. Opt. Soc. Amer. A 21 (3), 321–334.
Tominaga, S., Wandell, B.A., 1989. Standard surface-reflectance model and illuminant estimation. J. Opt. Soc. Amer. A 6 (4), 576–584.
Toro, J., Funt, B., 2007. A multilinear constraint on dichromatic planes for illumination estimation. IEEE Trans. Image Process. 16 (1), 92–97.
Vidal, R., Ma, Y., Sastry, S., 2003. Generalized principal component analysis (GPCA). In: Internat. Conf. Comput. Vision Pattern Recognition, vol. 1, pp. 621–628.