Accurately estimating reflectance parameters for color and gloss reproduction


Computer Vision and Image Understanding 113 (2009) 308–316


Shiying Li a,b,*, Yoshitsugu Manabe b, Kunihiro Chihara b

a School of Computer and Communication, Hunan University, 326 Lushan South Road, Changsha, Hunan 410082, China
b Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara 630-0101, Japan

Article history: Received 7 April 2008; accepted 4 November 2008; available online 25 November 2008.
Keywords: Diffuse reflectance; Specular reflectance; Color reproduction; Gloss reproduction; Surface roughness; High dynamic range; Spectral image

Abstract

A new method is described to estimate diffuse and specular reflectance parameters using spectral images, which overcomes the dynamic range limitation of imaging devices. After eliminating the influences of the illumination and the camera on the spectral images, reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate diffuse reflectance parameters at each wavelength on each single surface particle. Based on the dichromatic reflection model, specular reflection components are obtained, and then subjected to the least squares method to estimate specular reflectance parameters for gloss intensity and surface roughness. Experiments were carried out using both simulation data and measured spectral images. Our results demonstrate that this method is capable of estimating diffuse and specular reflectance parameters precisely for color and gloss reproduction, without requiring preprocesses such as image segmentation and synthesis of high dynamic range images.

© 2008 Elsevier Inc. All rights reserved.

1. Introduction

The appearance of an object, such as its color and gloss, provides essential detail that allows us to distinguish one object from another. How to acquire intrinsic information about light reflected at the surface of a real object, with minimal influence from the measurement instruments, and how to accurately estimate reflectance parameters while overcoming limitations of imaging devices such as their dynamic range, are critical subjects in computer graphics and computer vision (CG&CV).

Reflection results from the interaction of a light source, material characteristics and a light detector. For opaque dielectric materials such as ceramic and plastic, the intensity of light reflected at an object is determined both by the light directly reflected at the interface between air and the object's surface, and by the light re-emitted out of the surface in all directions after penetrating into the subsurface of the object and scattering internally, as shown in Fig. 1. The former is called specular (also known as interface, surface or direct) reflection, and is related to the gloss of the surface material; the latter is called diffuse (also body or indirect) reflection, and is perceived as the color of the surface material. In the real world, inhomogeneous objects usually present a hybrid appearance of these two reflections.

When specular reflection on the surface is very strong relative to the limited dynamic range of

* Corresponding author. Address: School of Computer and Communication, Hunan University, 326 Lushan South Road, Changsha, Hunan 410082, China. E-mail addresses: [email protected], [email protected] (S. Li), [email protected] (Y. Manabe), [email protected] (K. Chihara).
1077-3142/$ - see front matter © 2008 Elsevier Inc. All rights reserved. doi:10.1016/j.cviu.2008.11.001

a camera, pixels in the captured images appear as highlights. In these highlight areas, the pixel value is recorded as the maximum of the dynamic range, such as 255 for an 8-bit camera, without information about color or surface roughness. Such areas are considered saturated in the present paper. It is generally a dilemma to capture an image with unsaturated values in highlight areas while retaining sufficient information about color and surface roughness in dark areas.

Many techniques have assumed that the input images are fully unsaturated, e.g., by controlling illumination intensity. Several have focused on obtaining a desired response function by capturing images under different exposure times or shutter speeds, and then reconstructing high dynamic range (HDR) images as input images [1–6]. These techniques are efficient, especially when the raw images of real objects with strong gloss are actually unsaturated. However, it is difficult to ensure that such input images are indeed unsaturated, and it is expensive to generate an effective response function or an HDR image in terms of storage and computation.

Since physical color is a function of wavelength, and shows a continuous spectral distribution in the visible range, images captured by a camera that integrates the spectral distribution into three primary values (the red, green and blue (RGB) of trichromatic theory) may represent the spectral distribution of reflected light inadequately, because of factors such as metameric match.

Our objectives in this study were to acquire physically intrinsic information about light reflected at the surface of an object, and to estimate diffuse and specular reflectance parameters accurately. We obtained spectral images for the estimation of diffuse and specular reflectance parameters at each wavelength, and used

Fig. 1. Physics of reflected light on the surface of a dielectric object.

the least squares method to estimate specular reflectance parameters beyond the limited dynamic range of a camera. Experiments using synthesized reflection values and measured spectral images were carried out to validate this method. Our method requires neither image segmentation, nor a specific pixel separation process, nor HDR image synthesis, which most existing techniques need.

In the remainder of this paper, representative algorithms for estimating diffuse and specular reflectance parameters are reviewed in Section 2, and the reflection models used in this work are introduced in Section 3. The proposed estimation method is described in Section 4. Experiments and their results are presented in Section 5, and conclusions in Section 6.

2. Related work

Several techniques have been developed to simultaneously estimate diffuse and specular reflectance parameters from input images using a nonlinear least squares fitting algorithm, but with expensive time and computation costs. Most techniques estimate diffuse and specular reflectance parameters by separating reflection values into diffuse and specular reflection components, based on the dichromatic reflection model and on several geometrical or physical differences between diffuse and specular reflections.

One approach separates reflection values into diffuse and specular reflection components using a polarization filter, since diffuse reflection tends to be unpolarized while specular reflection varies as a cosine function with rotation of the polarization filter. Using this approach, images are pre-segmented into highlight and non-highlight areas, at a smooth surface with uniform color [7–9]. Another approach is based on the spatial distribution of diffuse and specular reflections in color space.
As projected in color space, reflection values cluster as a T or L shape, with two vectors in different directions (for the diffuse and specular reflection components, respectively) when the surface of the object is smooth. By segmenting the two vectors using principal component analysis or the Hough transformation, diffuse and specular reflection components may be separated [10–13]. These two approaches can be combined, using both polarization filters and color information, for objects with complex colors [14].

An epipolar plane image (EPI) strip-pattern of specular reflection in a spatio-temporal EPI has a more vertical orientation than that of diffuse reflection for convex surfaces, and a more horizontal orientation for concave surfaces. Based on these EPI strip-patterns of diffuse and specular reflections, reflection values can be separated under several geometric and photometric constraints [15,16]. Furthermore, since the intensity of specular reflection varies dramatically with the lighting and viewing directions, whereas ideal diffuse reflection disperses uniformly in all directions with an intensity that depends on the spectral reflectance of the surface material, reflection values can be separated either by picking up pixels below a threshold in input images that contain diffuse-only reflection components, or by using images taken at positions of the light source where diffuse-only reflection occurs [5,17–19].

Many approaches for separating diffuse and specular reflection attempt to acquire diffuse-only reflection for vision techniques, such as image segmentation and motion detection, by eliminating specular reflection in the input images, since the position and intensity of specular reflection can produce erroneous results. Some approaches take advantage of the separated specular reflection components for object recognition and illumination localization [20,21], and for recovering the shape or surface curvature of objects [22–24]. Recently, since gloss is a fundamental attribute of surface materials and an important cue for realistic images, specular reflection itself has become a prominent subject, instead of being removed or ignored. Specular reflectance parameters for gloss intensity and surface roughness are estimated from the separated specular reflection components, in conjunction with known geometric information about the real objects [5,16–19].

In the aforementioned techniques, input images are considered unsaturated, achieved by either controlling the intensity of illumination or generating HDR images as the input images. With this assumption, the specular reflectance parameter for gloss intensity is approximated by the peak intensity of the specular reflection components, so that the specular reflectance parameter for surface roughness can be computed from the reflection models, and the surface normals can also be estimated. Moreover, RGB images, which may provide insufficient information on the spectral distribution of light reflected at the surface of an object, are used in these techniques to estimate diffuse and specular reflectance parameters.
Several methods have been introduced to obtain images using a multi-band camera or a spectrograph for material recognition and color reproduction of objects [2,25–29]. These images are captured with spectral distributions at wavelength intervals of 5 or 10 nm, at a point or a line on the surface of the objects. However, these methods take into account only color information about the objects, either assuming specular reflection (gloss) to be absent or eliminating it.

3. Reflection models

Appropriate reflection models are vital for the estimation of reflectance parameters. Since few surfaces in the real world are ideally diffuse or perfectly specular, the dichromatic reflection model [10,30] considers reflection at a surface patch of an inhomogeneous dielectric object as a linear summation of diffuse and specular reflections, as shown in Fig. 2. H is called the half-vector and represents the normalized vector sum of the illuminating vector L and the viewing vector V. α is the angle between the normal vector N of the surface patch and the half-vector H. θi and θr are the incident and reflection angles, respectively. The radiance I(λ, θr) is approximated as shown in


Fig. 2. Diffuse and specular reflection at an object surface. H is the half-vector and represents the normalized vector sum of illuminating vector L and viewing vector V. α is the angle between H and the normal vector N of the object surface. θi and θr are the incident and reflection angles, respectively.


Eq. (1), where λ represents the wavelength of incident light, and Id(λ) and Is(λ, θr) are the diffuse and specular reflection components, respectively:

I(λ, θr) = Id(λ) + Is(λ, θr).    (1)

The Lambert reflection model [31] is widely used in CG&CV to model diffuse reflection. With this model, the radiance Id(λ) at a diffuse-only surface patch is proportional to the cosine of the incident angle θi at each wavelength of reflected light, as shown in Eq. (2), where Rd(λ) is the diffuse reflectance parameter (or albedo) at wavelength λ:

Id(λ) = Rd(λ) cos(θi).    (2)

At a rough surface, specular reflection occurs as an angular spread around the direction of the peak intensity (see Fig. 2). The Torrance–Sparrow reflection model [30,32], based on geometric optics, assumes that a surface whose roughness is greater than the wavelength of incident light consists of mirror-like microfacets, at which perfect specular reflection occurs. The intensity of angular specular reflection at a surface patch is therefore a collection of fractions of light reflected at each microfacet whose normal is oriented toward the direction symmetric about the surface normal N with the illuminating vector L. These microfacets, on each of which the intensity of reflected light is given by the Fresnel reflectance F, are described with a Gaussian probability distribution, and shadowing and masking between adjacent microfacets are predicted with the geometrical attenuation factor G. This reflection model is formulated in the following equation:

Is(λ, θr) = F G Rs(λ) exp(-(α/σ)²) / cos(θr),    (3)

where Rs(λ) and σ are the specular reflectance parameters for gloss intensity and surface roughness, respectively. For simplicity, F and G are generally assumed to have constant values in CG&CV. Rd(λ) in Eq. (2), and Rs(λ) and σ in Eq. (3), are the reflectance parameters to be estimated in this work.
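Taken together, Eqs. (1)–(3) define a forward model that is straightforward to evaluate numerically. The following Python sketch assumes F = G = 1 and, for concreteness, the measurement geometry used later in the paper, with the camera on the surface normal (θr = 0) so that α = θi/2; the function name and parameter values are illustrative only:

```python
import math

def reflection(theta_i_deg, R_d, R_s, sigma, F=1.0, G=1.0):
    """Radiance of the dichromatic model, Eq. (1): diffuse (Lambert,
    Eq. (2)) plus specular (Torrance-Sparrow, Eq. (3)).
    Geometry assumption of this sketch: camera on the surface normal
    (theta_r = 0), so the half-vector angle is alpha = theta_i / 2."""
    theta_i = math.radians(theta_i_deg)
    theta_r = 0.0
    alpha = theta_i / 2.0
    I_d = R_d * math.cos(theta_i)                                            # Eq. (2)
    I_s = F * G * R_s * math.exp(-(alpha / sigma) ** 2) / math.cos(theta_r)  # Eq. (3)
    return I_d + I_s                                                         # Eq. (1)

# At normal incidence the diffuse term is R_d and the specular peak is R_s.
print(reflection(0.0, R_d=100.0, R_s=155.0, sigma=1.0))  # → 255.0
```

The parameter values (Rd = 100, Rs = 155, σ = 1.0) match those used in the simulation experiments of Section 5.1.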

4. Estimation method

In the proposed measurement system, constructed with an imaging spectrograph and a rotating light source, spectral images are captured at a sequence of incident positions. Reflection values extracted from the spectral images at each wavelength can therefore be separated into diffuse and specular reflection components, based on the physical fact that the intensities of diffuse and specular reflection vary in different ways with the incident angle. Simply by assuming the reflection values to be diffuse-only reflection components, diffuse reflectance parameters can be estimated without preprocesses such as selecting diffuse-only pixels in the images or taking diffuse-only images at particular incident positions. By applying the least squares method to specular reflection components that are lower than a threshold, specular reflectance parameters for gloss intensity and surface roughness can be estimated over the dynamic range of a camera, at low computational cost, and without requirements such as HDR image generation.

Several assumptions are made in this work:

(1) Target objects are opaque and dielectric; thus, only reflection is taken into account. For metal or transparent objects, interactions between incident light and surface materials include not only reflection, but also absorption and transmission. For these objects, another appropriate model is required to interpret reflection and the other interactions.

(2) A target object is illuminated by a single light source in a dark space; otherwise, ambient lighting and the interreflections that occur between the object and its surroundings, and between the surfaces of a complex-shaped object, may be embedded into the diffuse and specular reflections occurring at a surface patch.

(3) The geometry of the object is given. Geometrical information is an important attribute of objects, and greatly affects the intensity of light reflected at the surfaces, especially for specular reflection. Geometrical information, such as a cylindrical or spherical shape, may be given initially, or may be measured using a range finder or structured light.

(4) The color of the light source is white or calibrated. The spectral power distribution of a light source interferes with the spectral distribution of the surface material of an object, as a result of which diffuse reflectance parameters may be estimated inadequately. To obtain the intrinsic spectral reflectance O(λ) of the surface material, the spectral power distribution of the light source L(λ) and the camera sensitivity S(λ) can be removed in Eq. (4), using images of an object with a given spectral reflectance captured in the same measurement system as that in which the input images of the target objects are acquired:

I(λ) = O(λ) L(λ) S(λ).    (4)

The estimation method has two steps: estimating diffuse reflectance parameters and estimating specular reflectance parameters.

4.1. Estimating diffuse reflectance parameters

Reflection values, from which the influences of the illumination and the camera have been removed via Eq. (4), are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method. When the sum of the squares of the differences (SSD) δd(λ) between the experimental and theoretical reflection values at each wavelength reaches its minimum, the diffuse reflectance parameter Rd(λ) at that wavelength is determined, as shown in Eq. (5), where j indexes the incident positions and I'dj(λ) represents the experimental reflection values:

δd(λ) = Σj { I'dj(λ) - Rd(λ) cos(θij) }².    (5)
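Because Eq. (5) is quadratic in Rd(λ), its minimizer has a closed form: Rd = Σj I'dj cos(θij) / Σj cos²(θij). A minimal Python sketch (the function name is illustrative; the input is assumed to be already calibrated via Eq. (4)):

```python
import math

def estimate_R_d(I_d, theta_i_deg):
    """Closed-form least squares solution of Eq. (5): minimizing
    sum_j (I'_dj - R_d * cos(theta_ij))^2 over R_d gives
    R_d = sum_j I'_dj * cos(theta_ij) / sum_j cos^2(theta_ij)."""
    num = den = 0.0
    for I, t in zip(I_d, theta_i_deg):
        c = math.cos(math.radians(t))
        num += I * c
        den += c * c
    return num / den

# Noise-free Lambertian samples with R_d = 100 are recovered.
angles = [-60.0, -30.0, 0.0, 30.0, 60.0]
samples = [100.0 * math.cos(math.radians(t)) for t in angles]
print(estimate_R_d(samples, angles))  # ≈ 100.0
```

This per-wavelength fit is what the refinement process of Fig. 3 invokes repeatedly.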

The process used to refine the diffuse reflectance parameter at each wavelength λ is shown in Fig. 3. First, a diffuse reflectance parameter Rdn is estimated using Eq. (5), and temporary diffuse reflection components are computed. Next, temporary specular reflection

Fig. 3. Process used to refine a diffuse reflectance parameter: (a) assume reflection values to be diffuse-only reflection components; (b) estimate a diffuse reflectance parameter Rdn; (c) compute temporary diffuse reflection components; (d) compute temporary specular reflection components; (e) subtract the temporary specular reflection components; (f) estimate a diffuse reflectance parameter Rd(n+1); (g) if (Rdn - Rd(n+1)) < threshold, determine the diffuse reflectance parameter; otherwise repeat from (c) with Rd(n+1).


Fig. 4. Saturated reflection values and separated specular reflection components at different wavelengths (550, 650 and 720 nm). (left) Saturated reflection values; (right) separated specular reflection components.

components are obtained by subtracting the temporary diffuse reflection components from the reflection values. The reflection values are then replaced with values from which the temporary specular reflection components have been removed, whereupon a second diffuse reflectance parameter Rd(n+1) is estimated. In this way, the diffuse reflectance parameter Rd at the wavelength λ is finally determined when the difference between two successive diffuse reflectance parameters Rdn and Rd(n+1) is smaller than a threshold.
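The refinement loop of Fig. 3 can be sketched in Python as follows. One detail is not fully specified in the text, namely how the temporary specular components are formed; this sketch assumes they are the non-negative residuals max(0, I - Rd cos θi), so that subtracting them strips the specular lobe from the data between iterations:

```python
import math

def refine_R_d(values, theta_i_deg, threshold=0.001, max_iter=100):
    """Iterative refinement of a diffuse reflectance parameter (Fig. 3).
    Assumption of this sketch: the temporary specular components are the
    non-negative residuals max(0, I - R_d * cos(theta_i))."""
    cos_t = [math.cos(math.radians(t)) for t in theta_i_deg]

    def fit(vals):  # closed-form least squares solution of Eq. (5)
        return sum(v * c for v, c in zip(vals, cos_t)) / sum(c * c for c in cos_t)

    R_d = fit(values)  # (a)-(b): assume diffuse-only, estimate R_dn
    for _ in range(max_iter):
        # (c)-(e): temporary diffuse/specular components, subtract specular
        specular = [max(0.0, v - R_d * c) for v, c in zip(values, cos_t)]
        R_d_next = fit([v - s for v, s in zip(values, specular)])  # (f)
        if abs(R_d - R_d_next) < threshold:                        # (g)
            return R_d_next
        R_d = R_d_next
    return R_d

# Lambertian values (R_d = 100) plus a narrow specular lobe (sigma = 0.1 rad):
# the naive diffuse-only fit overshoots; the refined estimate is close to 100.
angles = [float(t) for t in range(-90, 91, 5)]
values = [100.0 * math.cos(math.radians(t))
          + 155.0 * math.exp(-((math.radians(t) / 2.0) / 0.1) ** 2)
          for t in angles]
print(refine_R_d(values, angles))  # close to 100
```

Note that this clamped-residual scheme works best when the specular lobe is narrow compared with the sampled range of incident angles, as in the example above.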

5.1. With simulation data

Theoretical reflection values were synthesized based on the reflection models described in Section 3, with a given diffuse reflectance parameter Rd (100) and given specular reflectance parameters for gloss intensity Rs (155) and surface roughness σ (1.0). Incident angles were assumed to be between -90° and 90°, at intervals of 0.75°. Reflection values for the experiments, which simulate the data in the measurement system, were generated as follows:

4.2. Estimating specular reflectance parameters

Using Rd(λ) estimated in the previous step, the diffuse reflection components at each wavelength are computed, and the specular reflection components are then obtained based on the dichromatic reflection model. The separated specular reflection components may be truncated, owing to reflection values being saturated by the limited dynamic range of a camera; and, since the intensity of diffuse reflection varies with wavelength, the specular reflection components are separated with different levels of intensity at different wavelengths (see Fig. 4). Therefore, specular reflectance parameters may be estimated inaccurately if the maximum values of the specular reflection components are taken as the specular reflectance parameters for gloss intensity. The least squares method is instead applied to estimate the specular reflectance parameters from the specular reflection components. The Torrance–Sparrow reflection model is transformed logarithmically to a linear form, as shown in Eq. (6), assuming F and G in Eq. (3) to be 1.0:

log(Is(λ, θr)) = log(Rs(λ)) - (α/σ)² - log(cos(θr)).    (6)

When the SSD δs(λ) between the logarithms of the experimental and theoretical specular reflection components reaches its minimum, the specular reflectance parameters for gloss intensity Rs(λ) and surface roughness σ are estimated from Eq. (7). I'sj(λ) and Isj(λ) represent the experimental and theoretical specular reflection components at each wavelength, respectively. Note that, for the estimation, specular reflection components larger than a threshold are ignored.

δs(λ) = Σj { log(I'sj(λ)) - log(Isj(λ)) }².    (7)
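Because Eq. (6) is linear in α², fitting Eq. (7) reduces to simple linear regression: with x = α² and y = log Is + log cos θr, the intercept is log Rs(λ) and the slope is -1/σ². A Python sketch under the F = G = 1 assumption (the function name is illustrative):

```python
import math

def estimate_specular(I_s, alpha, theta_r):
    """Fit Eq. (6) by linear least squares in the log domain (Eq. (7)):
    log(I_s) + log(cos(theta_r)) = log(R_s) - alpha^2 / sigma^2,
    i.e. a line in x = alpha^2 with intercept log(R_s) and slope -1/sigma^2.
    alpha and theta_r are in radians; F = G = 1 is assumed."""
    xs = [a * a for a in alpha]
    ys = [math.log(I) + math.log(math.cos(t)) for I, t in zip(I_s, theta_r)]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), math.sqrt(-1.0 / slope)  # R_s, sigma

# Noise-free Torrance-Sparrow samples (R_s = 155, sigma = 1.0, theta_r = 0)
# are recovered up to rounding.
alpha = [0.1, 0.2, 0.3, 0.4, 0.5]
I_s = [155.0 * math.exp(-(a / 1.0) ** 2) for a in alpha]
R_s, sigma = estimate_specular(I_s, alpha, [0.0] * len(alpha))
print(round(R_s, 3), round(sigma, 3))  # → 155.0 1.0
```

Because the fit uses all unsaturated samples rather than only the peak, it does not require the peak intensity itself to lie within the camera's dynamic range.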

5. Experiments

To validate this method, experiments were carried out with simulation data, which were synthesized theoretically from the reflection models and random noise, and with measured spectral images.

- Since, with a real camera, reflection values are clipped to the maximum of the dynamic range, specular reflection components were synthesized at 10 different saturation levels, ranging between 0% and 90% with respect to the given Rs (155), which is assumed to be the maximum of the unsaturated specular reflection components.
- These specular reflection components at the different saturation levels were combined with diffuse reflection components.
- Random noise in the range ±2.5% of 255 (Rs + Rd) was generated and added to the synthesized reflection values at the different saturation levels, since the reflection values in the measured images, computed by removing the influences of the light source and camera in Eq. (4), might include noise.

Unsaturated reflection values and reflection values at 10 saturation levels between 0% and 90% are shown in Fig. 5. These 10 types of reflection values were used in experiments to estimate diffuse and specular reflectance parameters. The standard deviation (SD) of the generated random noise was calculated, and specular reflection components below a value of [maximum - SD] were subjected to Eq. (7).

5.2. With spectral images

Spectral images were captured in the measurement system shown in Fig. 6. A light source was rotated between -90° and 90° at intervals of 0.75°, 1 m from the rotation axis, around a target object which was placed on a turntable and rotated through 360°. The camera was stationary with respect to the target object. The rotation axis of the light source coincides with the central axis of the target object, and the surface normal vector faces the camera. For complex-shaped objects, we can assume that the camera in the measurement system consistently tracks the surface of the object being measured, either by tilting the object, or by moving the camera in arbitrary directions around the object using an instrument such as a robotic arm when the object is fixed.
The measurement system was constructed with a halogen light, which was rotated around a simple-shaped target object using a turntable, and a line-based imaging spectrograph equipped with


Fig. 5. Synthesized noisy reflection values. (left) Unsaturated reflection values; (right) reflection values at saturation levels between 0% and 90%, the intensities of which (with noise) are between 259.41 and 121.43.
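The synthesis procedure for the simulation data can be sketched as below. Assumptions in this sketch: a saturation level s clips the specular component at (1 - s)·Rs, clipping is applied to the specular component before the diffuse component is added, the geometry is θr = 0 with α = θi/2 as in the forward model, and the noise is uniform; the seed and function name are illustrative:

```python
import math
import random

def synthesize(theta_i_deg, R_d, R_s, sigma, saturation, noise=0.025, seed=0):
    """Sketch of the simulation-data synthesis: specular components are
    clipped at a saturation level (a fraction of R_s), combined with
    Lambertian diffuse components, and perturbed by uniform random noise
    of +/- 2.5% of (R_d + R_s)."""
    rng = random.Random(seed)
    ceiling = (1.0 - saturation) * R_s  # e.g. 30% saturation clips at 0.7 R_s
    out = []
    for t in theta_i_deg:
        ti = math.radians(t)
        I_s = min(R_s * math.exp(-((ti / 2.0) / sigma) ** 2), ceiling)
        I = R_d * math.cos(ti) + I_s
        out.append(I + rng.uniform(-noise, noise) * (R_d + R_s))
    return out

angles = [t * 0.75 for t in range(-120, 121)]  # -90 to 90 deg, 0.75 deg steps
vals = synthesize(angles, 100.0, 155.0, 1.0, saturation=0.3)
print(max(vals))  # clipped near 0.7 * 155 + 100, plus noise
```

Feeding such clipped, noisy values to the estimators above reproduces the qualitative behavior reported in Table 1: the diffuse fit is unaffected by clipping, while the specular fit degrades only at high saturation levels.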


Fig. 6. Arrangement of the measurement system.

Fig. 7. Measurement system constructed with an imaging spectrograph, a halogen light and two turntables: one for the halogen light, and the other for the target object.

an objective lens and a monochrome CCD camera, as shown in Fig. 7. Since the imaging spectrograph can measure only a linear area of the surface at a time, a color cylinder and a teacup, as shown in Fig. 8, were measured while being rotated at intervals of 0.625° and 2.5°, respectively. The color cylinder was created using matte and glossy paper, on each of which were check patterns of six colors (red, green, blue, yellow, magenta and cyan), as well as several glued areas to vary the reflection properties of the surface. The teacup was a cylindrical ceramic work, with complex colors and textures on the half of its surface that was measured.

After removing the influences of the light source and the imaging spectrograph, the sequence of calibrated spectral images of a surface line measured at different incident positions was transformed into spectral images with two axes, wavelength and incident position, at each spatial position of the measured surface line, as in the example shown in Fig. 9. In each transformed spectral image, the spectral (wavelength) axis was calibrated between 380 and 780 nm using four order interference filters. To maximize the S/N ratio in the spectral images, reflection values were averaged over 5-nm wavelength bands along the spectral axis, and were then extracted across the sequence of incident positions, as shown in Fig. 10, to estimate diffuse and specular reflectance parameters.
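The 5-nm band averaging used to improve the S/N ratio can be sketched as a simple binning step; the bin convention (bins anchored at the first wavelength, bin centers reported) is an assumption of this sketch:

```python
def average_bands(wavelengths, values, band=5.0):
    """Sketch of the 5-nm band averaging used to improve S/N: group
    samples whose wavelength falls in the same `band`-nm bin (starting
    at the first wavelength) and average each group."""
    start = wavelengths[0]
    bins = {}
    for w, v in zip(wavelengths, values):
        k = int((w - start) // band)
        bins.setdefault(k, []).append(v)
    centers = [start + (k + 0.5) * band for k in sorted(bins)]
    means = [sum(bins[k]) / len(bins[k]) for k in sorted(bins)]
    return centers, means

# Samples every 1 nm from 380 nm collapse to one value per 5-nm band.
w = [380 + i for i in range(10)]          # 380..389 nm
v = [float(i % 5) for i in range(10)]     # 0,1,2,3,4,0,1,2,3,4
print(average_bands(w, v))  # → ([382.5, 387.5], [2.0, 2.0])
```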

Fig. 8. A color cylinder, with matte and glossy paper (left), and a teacup (right) for experiments.


Fig. 9. Transformation from a sequence of calibrated spectral images at different incident positions into that at different spatial positions. (left) Spectral images at different incident positions; (right) spectral images at different spatial positions.


Diffuse reflectance parameters Rd(λ), and specular reflectance parameters for gloss intensity Rs(λ) and surface roughness σ, were estimated at wavelength intervals of 5 nm for individual surface points. A point-based spectroradiometer was used to measure the standard spectral distributions of the color patches on the cylindrical object, to evaluate the spectral distributions estimated in the experiments. The SD of the current dark image was calculated, and the maximum value of the separated specular reflection components at each wavelength was detected. Specular reflection components lower than a value of [maximum - SD] were subjected to Eq. (7) to estimate the specular reflectance parameters.
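The sample-selection rule applied before the Eq. (7) fit, keeping only specular components below [maximum - SD], can be sketched as (function name illustrative):

```python
def below_saturation(specular, sd_dark):
    """Sketch of the sample selection before the Eq. (7) fit: find the
    maximum of the separated specular components and keep only samples
    below (maximum - SD), where SD is the standard deviation of the
    dark image, so that clipped (saturated) samples are excluded."""
    cutoff = max(specular) - sd_dark
    return [s for s in specular if s < cutoff]

# Samples clipped at 150 are dropped; unsaturated ones are kept.
components = [10.0, 80.0, 150.0, 150.0, 150.0, 80.0, 10.0]
print(below_saturation(components, sd_dark=2.0))  # → [10.0, 80.0, 80.0, 10.0]
```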


Fig. 10. Extraction of reflection values at each wavelength from a spectral image. (left) A spectral image at a spatial position; (right) reflection values extracted at each wavelength.

Table 1
Diffuse and specular reflectance parameters estimated with synthesized noisy reflection values at different saturation levels.

Saturation level (maximum with noise) | Rd (100) | Rs (155) | σ (1.0)
0% (259.47)   | 100.26 | 153.56 | 0.99
10% (245.43)  | 100.26 | 152.90 | 0.99
20% (229.93)  | 100.26 | 152.14 | 0.99
30% (214.43)  | 100.26 | 151.72 | 0.99
40% (198.93)  | 100.26 | 151.47 | 0.99
50% (183.43)  | 100.26 | 152.49 | 0.99
60% (167.93)  | 100.26 | 153.06 | 0.99
70% (152.43)  | 100.26 | 149.17 | 0.98
80% (136.93)  | 100.26 | 143.11 | 0.98
90% (121.43)  | 100.26 | 116.82 | 0.96

5.3. Experimental results

Experimental results using synthesized reflection values at saturation levels between 0% and 90% are shown in Table 1, compared with the given Rd (100), Rs (155) and σ (1.0). The threshold for (Rdn - Rd(n+1)) in Fig. 3 was 0.001. As shown in Table 1, using the synthesized noisy reflection values, the diffuse reflectance parameter Rd and the specular reflectance parameter for surface roughness σ at all saturation levels, and the specular reflectance parameter for gloss intensity Rs at saturation levels at and below 70%, were estimated with errors within 3% of the given values of Rd (100), Rs (155) and σ (1.0). Moreover, in experiments using these reflection values without noise, the diffuse and specular reflectance parameters Rd, Rs and σ were estimated to be identical to the given values (100, 155 and 1.0) at all saturation levels.

Using reflection values extracted from spectral images, the diffuse reflectance parameters estimated for matte and glossy paper of blue, green and magenta are shown in Figs. 11–13, compared with the standard spectral distributions of the same color surface


Fig. 11. Diffuse reflectance parameters estimated (estimated) for blue paper, compared with the spectral reflectance (standard) of the same color patch. (left) Matte paper; (right) glossy paper. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the paper.)


Fig. 12. Diffuse reflectance parameters estimated (estimated) for green paper, compared with spectral reflectance (standard) of the same color patch. (left) Matte paper; (right) glossy paper. (For interpretation of color mentioned in this figure the reader is referred to the web version of the article.)



Fig. 13. Diffuse reflectance parameters estimated (estimated) for magenta paper, compared with spectral reflectance (standard) of the same color patch. (left) Matte paper; (right) glossy paper.

Fig. 14. Colors in RGB of the two objects reproduced with estimated diffuse reflectance parameters. (left) Opened-up view of the color cylinder; (right) front of the teacup.

patches. The colors of the two objects were reproduced in RGB values using the estimated diffuse reflectance parameters, as shown in Fig. 14. As Figs. 11–13 show, the diffuse reflectance parameters were estimated precisely, in good agreement with the spectral distributions of the color surface patches; and in Fig. 14, the reproduced colors are a visually reliable match with those of the two target objects.

Since the target objects were paper and ceramic (dielectric), the specular reflectance parameters for gloss intensity Rs and surface roughness σ at a single surface patch were taken as the mean values of Rs(λ) and σ estimated over wavelengths between 380 and 780 nm for the patch. The specular reflectance parameters for gloss intensity Rs and surface roughness σ at each surface patch of the color cylinder are shown in Fig. 15, and those of the teacup in Fig. 16. In Figs. 15 and 16, in the images of estimated Rs, the brighter areas are glossy paper and the darker areas are matte paper and glued areas; in the images of estimated σ, the brighter areas are matte paper and the borders between different pieces of paper, and the darker areas are glossy paper, consistent with the physical appearance of the real objects.

[Fig. 15 images: estimated Rs (scale 0.00–410.90) and estimated σ (scale 0.00–62.65).]

Fig. 15. Specular reflectance parameters for gloss intensity Rs and surface roughness r of the color cylinder.

[Fig. 16 images: estimated Rs (scale 0.00–410.90) and estimated σ (scale 0.00–62.65).]

Fig. 16. Specular reflectance parameters for gloss intensity Rs and surface roughness r of the teacup.

Fig. 17. Diffuse reflection reproduced on the surfaces of two target objects. (left) Front side of the color cylinder; (middle) back side of the color cylinder; (right) front side of the teacup.

Based on the Lambert reflection model, the diffuse reflection of the cylinder and the teacup was reproduced in RGB using the diffuse reflectance parameters estimated at wavelength intervals of 5 nm for each surface patch (see Fig. 17). Based on the Torrance–Sparrow

Fig. 18. Diffuse and specular reflection reproduced on the surfaces of two target objects. (left) Front side of the color cylinder; (middle) back side of the color cylinder; (right) front side of the teacup.
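The reproduction in Figs. 17 and 18 combines a Lambertian diffuse term with a Torrance–Sparrow specular lobe under the dichromatic model. A minimal sketch of this model form follows; the simplified lobe (Fresnel and geometric attenuation folded into rs, as is common) and all parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def reflection(rd, rs, sigma, theta_i, theta_r, alpha):
    """Dichromatic reflection at a single wavelength: a Lambertian
    diffuse term plus a simplified Torrance-Sparrow specular lobe.
    theta_i: incidence angle; theta_r: viewing angle; alpha: angle
    between the surface normal and the bisector of the illumination
    and viewing directions (all in radians). Fresnel and geometric
    attenuation factors are folded into rs, a common simplification."""
    diffuse = rd * np.cos(theta_i)
    specular = (rs / np.cos(theta_r)) * np.exp(-alpha**2 / (2.0 * sigma**2))
    return diffuse + specular

# Near the mirror direction (alpha = 0) the specular lobe dominates;
# away from it, only the diffuse term remains.
peak = reflection(rd=0.6, rs=200.0, sigma=0.1, theta_i=0.3, theta_r=0.3, alpha=0.0)
off = reflection(rd=0.6, rs=200.0, sigma=0.1, theta_i=0.3, theta_r=0.3, alpha=0.8)
```

The roughness σ controls how quickly the Gaussian lobe falls off around the mirror direction, which is why matte paper (large σ) shows broad, weak highlights and glossy paper (small σ) shows narrow, strong ones.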


[Fig. 19 plots: relative intensity (0.0–1.0) versus wavelength (380–780 nm) for the CIE D65, halogen, green, and blue light sources.]
Fig. 19. Variable appearance of the color cylinder under different types of light source. (upper) Spectral power distributions of each light source; (lower) back side of the color cylinder reproduced from each light source.
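The relighting shown in Fig. 19 amounts to multiplying an illuminant's spectral power distribution by the estimated diffuse reflectance Rd(λ), wavelength by wavelength. A minimal sketch under assumed synthetic spectra (the reflectance bump and the two illuminants below are illustrative, not measured data):

```python
import numpy as np

wavelengths = np.arange(380, 781, 5)

# Hypothetical estimated diffuse reflectance of a green patch:
# a smooth bump centered near 540 nm (illustrative, not measured).
Rd = 0.7 * np.exp(-((wavelengths - 540.0) / 60.0) ** 2)

# Two illustrative illuminant spectral power distributions:
# a flat "white" source and a narrow-band blue source near 450 nm.
E_white = np.ones(wavelengths.size)
E_blue = np.exp(-((wavelengths - 450.0) / 20.0) ** 2)

# Relighting: the reflected spectrum is the per-wavelength product
# of illuminant power and diffuse reflectance.
L_white = E_white * Rd
L_blue = E_blue * Rd

# Total reflected energy (rectangle rule over the 5 nm grid): the
# green patch reflects far less energy under the blue source.
energy_white = L_white.sum() * 5.0
energy_blue = L_blue.sum() * 5.0
```

Converting such reflected spectra to displayable RGB additionally requires integrating against color matching functions, which is why spectral (rather than RGB) reflectance estimates make relighting under arbitrary sources possible.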

and the dichromatic reflection models, the colors and gloss of the color cylinder and the teacup were reproduced using the diffuse and specular reflectance parameters, together with the shapes of the two objects, which are approximated as cylinders (see Fig. 18). In Figs. 17 and 18, the colors and gloss show a realistic appearance on the matte and glossy surfaces of the objects (note that the influence of the halogen light used in the experiments was removed). Combining the spectral power distributions of different light sources with the estimated diffuse reflectance parameters Rd(λ) and the specular reflectance parameters for gloss intensity Rs(λ) and surface roughness σ, the colors and gloss of the back side of the color cylinder display dramatic variations among the light sources, as shown in Fig. 19.

6. Conclusions

A new method has been developed to estimate diffuse and specular reflectance parameters for color and gloss reproduction, using spectral images and overcoming the limited dynamic range of imaging devices. Our experimental results in Section 5.3, using both synthesized noisy reflection values and measured spectral images, indicate that the method is capable of adequately estimating diffuse and specular reflectance parameters. With such accurate parameters, cultural heritage objects can be preserved with high-fidelity reproduction on a computer, and objects made of various materials can be reliably recognized and classified, for example in industrial applications. Since geometrical information greatly affects the intensity of light reflected at a surface, especially for specular reflection, it is necessary to measure reflection properties in conjunction with the shape of an object.
Moreover, if accurate information about shape is available, interreflections may be taken into account when separating diffuse and specular reflection components, e.g., by selecting surface points where diffuse or specular interreflections occur, according to incident positions. The geometrical shape of objects may be given, for example for commercial goods in online shops; however, the shape of cultural heritage artifacts is often unknown. In future work, reflectance parameters will be refined for complex-shaped objects by acquiring their geometrical information and taking account of interreflections between their surfaces.

Acknowledgments

We sincerely thank Dr. Ian Smith for his helpful comments on the English in this paper and wish to express our appreciation to the editor, Dr. Ruud M. Bolle, and the anonymous reviewers for their valuable help in improving the manuscript.

References

[1] P.E. Debevec, J. Malik, Recovering high dynamic range radiance maps from photographs, in: Proc. of the International Conference on Computer Graphics and Interactive Techniques, Los Angeles, 1997, pp. 369–378.
[2] N. Tanaka, S. Tominaga, T. Kawai, A multi-channel camera and its application to estimation of a reflection model, Display and Imaging 8 (2000) 283–291 (in Japanese).
[3] S.K. Nayar, T. Mitsunaga, High dynamic range imaging: spatially varying pixel exposures, in: Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head Island, 2000, pp. 472–479.
[4] S.K. Nayar, V. Branzoi, Adaptive dynamic range imaging: optical control of pixel exposures over space and time, in: Proc. of the IEEE International Conference on Computer Vision, vol. 1, Nice, 2003, pp. 1–8.
[5] T. Machida, N. Yokoya, H. Takemura, Surface reflectance modeling of real objects with interreflections, in: Proc. of the International Conference on Computer Vision, vol. 1, Nice, 2003, pp. 170–177.
[6] C. Bloch, The HDRI Handbook: High Dynamic Range Imaging for Photographers and CG Artists, Rocky Nook, 2007.
[7] L.B. Wolff, Polarization-based material classification from specular reflection, IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (1990) 1059–1071.
[8] L.B. Wolff, T. Boult, Constraining object features using a polarization reflectance model, IEEE Transactions on Pattern Analysis and Machine Intelligence 13 (1991) 635–657.
[9] S.C. Pont, J.J. Koenderink, Bidirectional reflectance distribution function of specular surfaces with hemispherical pits, Journal of the Optical Society of America A 19 (2002) 2456–2466.
[10] S.A. Shafer, Using color to separate reflection components, Color Research and Applications 10 (1985) 210–218.
[11] G.J. Klinker, S.A. Shafer, T. Kanade, The measurement of highlights in color images, International Journal of Computer Vision 2 (1988) 7–32.
[12] G.J. Klinker, S.A. Shafer, T. Kanade, A physical approach to color image understanding, International Journal of Computer Vision 4 (1990) 7–38.


[13] R.T. Tan, K. Ikeuchi, Separating reflection components of textured surfaces using a single image, IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (2005) 178–193.
[14] S.K. Nayar, X.-S. Fang, T. Boult, Separation of reflection components using color and polarization, International Journal of Computer Vision 21 (1997) 163–186.
[15] A. Criminisi, S.B. Kang, R. Swaminathan, R. Szeliski, P. Anandan, Extracting layers and analyzing their specular properties using epipolar-plane-image analysis, Computer Vision and Image Understanding 97 (2005) 51–85.
[16] T. Oo, H. Kawasaki, Y. Ohsawa, K. Ikeuchi, Separation of reflection and transparency using epipolar plane image analysis, in: Proc. of the Asian Conference on Computer Vision, vol. 1, Hyderabad, 2006, pp. 908–917.
[17] K. Ikeuchi, K. Sato, Determining reflectance properties of an object using range and brightness images, IEEE Transactions on Pattern Analysis and Machine Intelligence 13 (1991) 1139–1153.
[18] Y. Sato, M.D. Wheeler, K. Ikeuchi, Object shape and reflectance modeling from observation, in: Proc. of the International Conference on Computer Graphics and Interactive Techniques, vol. 31, Los Angeles, 1997, pp. 379–388.
[19] K. Omata, H. Saito, S. Ozawa, Recovery of shape and surface reflectance of a specular object from relative rotation of light source, Image and Vision Computing 21 (2003) 777–787.
[20] Y. Wang, D. Samaras, Estimation of multiple directional light sources for synthesis of mixed reality images, in: Proc. of the Pacific Conference on Computer Graphics and Applications, Beijing, 2002, pp. 38–47.
[21] P. Lagger, P. Fua, Retrieving multiple light sources in the presence of specular reflections and texture, Computer Vision and Image Understanding 111 (2008) 207–218.

[22] M. Oren, S.K. Nayar, A theory of specular surface geometry, International Journal of Computer Vision 24 (1996) 105–124.
[23] R. Fleming, A. Torralba, E.H. Adelson, Specular reflections and the perception of shape, Journal of Vision 4 (2005) 798–820.
[24] S. Roth, M.J. Black, Specular flow and the recovery of surface structure, in: Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, New York, 2006, pp. 1869–1876.
[25] Y. Manabe, S. Inokuchi, Recognition of material types using spectral images, in: Proc. of the International Conference on Pattern Recognition, Vienna, 1996, pp. 840–843.
[26] Y. Manabe, S. Kurosaka, K. Chihara, Simultaneous measurement system of spectral distribution and shape, Transactions of IEICE J84-DII (2001) 1012–1019 (in Japanese).
[27] K. Devlin, A. Chalmers, A. Wilkie, W. Purgathofer, Tone reproduction and physically based spectral rendering, in: Proc. of the Eurographics, Saarbruecken, 2002, pp. 101–123.
[28] S. Nakauchi, Spectral imaging technique for visualizing the invisible information, in: Proc. of the Scandinavian Conference on Image Analysis, Joensuu, 2005, pp. 55–64.
[29] M. Doi, R. Ohtsuki, S. Tominaga, Spectral estimation of skin color with foundation makeup, in: Proc. of the Scandinavian Conference on Image Analysis, Joensuu, 2005, pp. 95–104.
[30] K.E. Torrance, E.M. Sparrow, Theory for off-specular reflection from roughened surfaces, Journal of the Optical Society of America A 57 (1967) 1105–1114.
[31] J.H. Lambert, Photometria, sive de Mensura et Gradibus Luminis, Colorum et Umbrae, Eberhard Klett, Augsburg, Germany, 1760 (in Latin).
[32] R.L. Cook, K.E. Torrance, A reflectance model for computer graphics, Computer Graphics 15 (1981) 307–316.