Adaptive importance photon shooting technique




Computers & Graphics 38 (2014) 158–166



Special Section on CAD/Graphics 2013

Xiao-Dan Liu a,b,⁎, Chang-Wen Zheng a

a Technology on Integrated Information System Laboratory, Institute of Software, Chinese Academy of Sciences, China
b University of Chinese Academy of Sciences, China

Article info

Abstract

Article history: Received 31 July 2013 Received in revised form 24 October 2013 Accepted 24 October 2013 Available online 11 November 2013

Photon mapping is an efficient technique for global illumination and participating media rendering, but it converges slowly in complex scenes. We propose an adaptive importance photon shooting technique to accelerate the convergence rate. We analyze the scene space and build cumulative distribution functions on the surfaces to adaptively shoot photons. The rendering space is partitioned by a kd-tree structure, and the photons traced through the scene are stored in the kd-tree nodes. An error criterion is proposed to estimate the feature error of the local light field in each node. In order to adaptively shoot photons, a novel adaptive cumulative distribution function is built in each node based on its neighbors' error values. When a photon hits a surface in the scene, the reflection direction of this photon is adaptively chosen by our cumulative distribution function. Our technique can be used in both photon mapping and progressive photon mapping. The experiments show that our adaptive importance photon shooting technique gives better results than previous methods in both visual quality and numerical error. © 2013 Elsevier Ltd. All rights reserved.

Keywords: Adaptive importance shooting; kd-tree; Progressive photon mapping; Cumulative distribution function

1. Introduction

There are many beautiful natural phenomena around us, such as an amazing sunset, light through a glass of wine, underwater sunbeams and fire flames. It is useful and significant to render high quality images of these phenomena, but rendering global illumination and specular–diffuse–specular paths is very expensive in computer graphics. Photon mapping was proposed to handle this problem [1]. It traces photons from the light sources and stores them on the surfaces of the scene. It produces high quality effects such as global illumination, caustics and volumetric rendering. Because storing photons costs a lot of memory, traditional photon mapping can only give high quality results for simple scenes. Some researchers use adaptive techniques to reduce the number of stored photons or to guide photons from the light sources. In order to overcome the memory limitation of traditional photon mapping, progressive photon mapping was introduced [2]. It is robust in rendering specular–diffuse–specular paths by progressively refining the estimation radius without storing any photons. This method employs a stochastic distributed strategy and gives the correct average radiance of a pixel footprint. Most effects that can be rendered by distributed ray tracing can also be rendered by progressive photon mapping. However, it shoots photons uniformly through the scene and converges slowly to the correct radiance.


Corresponding author. E-mail address: lxdfi[email protected] (X.-D. Liu).


Motivated by the previous work, we propose an adaptive importance photon shooting technique to accelerate the convergence rate, which can be used in both photon mapping and progressive photon mapping. In order to capture the local light field of the scene, we use a kd-tree structure to store the photons and analyze the rendering space. Based on our error criterion, we compute the feature error in each node from its photons. During the photon tracing process, the photons are shot more efficiently by using an adaptive importance technique. A novel adaptive cumulative distribution function (CDF) is built in each node to redirect photons. The CDF is estimated from the feature error in the scene. Our technique has the following contributions.

Kd-tree partition. Most adaptive methods in photon mapping only adaptively shoot photons from the light sources. Using the kd-tree structure, we partition the scene space and analyze the local light field in the scene space for adaptive sampling. Our technique can adaptively redirect photons at their hit points along the tracing path.

Adaptive importance shooting. In order to accelerate the rendering convergence rate, we propose an adaptive importance photon shooting technique. The photons stored in each node are used to compute the feature error, which indicates whether this node needs more photons. We build an adaptive probability density function (PDF) based on the feature error distribution around each node. The PDF is integrated into a CDF to adaptively determine the redirection of photons.

Progressive process. Using the information stored in the kd-tree, our technique can be used in progressive photon mapping with a bounded memory consumption.


During each rendering pass of the progressive process, each node stores the photons and the analyzed information such as the feature error and CDF. At the end of each pass, the photons in the kd-tree are cleared and the information in each node is updated. Our technique accelerates the convergence rate of progressive photon mapping.

2. Related work

Most photorealistic images are generated by solving the rendering equation [3]. The traditional solution of this rendering equation is based on Monte Carlo ray tracing. This kind of algorithm simulates view rays from the camera through the scene space. The final image is synthesized by integrating all the rays from the view point to the light sources [4]. These methods are expensive for rendering a high quality global illumination image.

Photon mapping simulates the light flux in the scene to generate photorealistic images [5]. Compared with Monte Carlo ray tracing, it generates better effects such as global illumination, caustics, dispersion and multiple scattering in participating media [6]. It has two passes: trace and store photons from the light sources, then estimate the radiance along eye rays from the camera. Because traditional photon mapping has to store all the photons traced from the light sources, and these photons cost a lot of memory [7], this method cannot generate high quality images of complex scenes. In order to improve the image quality using the same photons, photon differentials were introduced to improve illumination accuracy [8]. Photon relaxation can reduce the noise and discrepancy [9]. Different from vertex-based estimation, photon beams integrate the light contribution along the entire eye ray [10]; this robustly handles scenes containing participating media and specular materials. To render scenes with many glossy objects, multiple importance sampling is used to combine the contributions of photons [11]. Similar to multiple importance sampling, Georgiev et al. present a reformulation of photon mapping as a bidirectional path sampling technique [12].

Progressive photon mapping [2] solves the memory cost problem. It uses progressive refinement of photon statistics at each vertex and computes the radiance of the eye ray. These radiances are guaranteed to converge to the correct values with bounded memory consumption. Stochastic progressive photon mapping has the robustness of photon mapping in a wider class of scene settings than progressive photon mapping [13]. Adaptive progressive photon mapping gives an adaptive strategy to decide the estimation radius of a hit point using the current photons [14].

Adaptive and importance techniques have been introduced to reduce the variance by shooting photons more efficiently [4]. A robust adaptive photon tracing method focuses on outdoor scenes and illumination through a small gap based on photon path visibility [15]. A kd-tree is employed to build a splatting architecture to render high quality global illumination [16,17]. An importance-driven method is introduced to sample more photons at the hit points along the eye ray [18]. Density estimation can also be used in importance sampling to reduce noise and bias in global illumination [19]. Most of these adaptive methods only adaptively shoot photons from the light sources. Adaptively shooting photons through the scene based on the light field is still a challenge in photon mapping.


3. Overview

Photorealistic image synthesis depends on the rendering equation [3]. In this equation, the light L(x, ω_o) leaving point x in a certain direction ω_o is combined from the light L(x, ω_i) reflected at this point from all directions Ω based on the BRDF f_r. This equation can also be represented by photon mapping:

L(x,\omega_o) = \int_\Omega f_r(x,\omega_o,\omega_i)\, L(x,\omega_i)\cos\theta_i \, d\omega_i \approx \frac{1}{\pi r(x)^2} \sum_{i=1}^{k} f_r(x,\omega_o,\omega_i)\, \Phi_i \qquad (1)

Photon mapping shoots photons from the light sources and stores these photons on the surfaces. To integrate the final image, it samples eye rays from the camera. The photons represent the light flux inside the scene. The k photons Φ_i within the radius r(x) around the hit point x are used to estimate L(x, ω_o), the radiance leaving x toward direction ω_o. The eye ray's contribution is computed from this transferred radiance and is used to generate the photorealistic image.

Our adaptive importance photon shooting technique improves photon mapping by adaptively choosing the photon's direction based on the light field features of the scene. It can be used in both traditional photon mapping and progressive photon mapping. Our adaptive strategy is based on the kd-tree structure, an error criterion and a novel adaptive CDF. The kd-tree structure is used to analyze the scene space and store the feature information. The error criterion is used to compute the error value of the light field feature in each node. The adaptive CDF determines the reflection direction of the photons at their hit points; it is computed from the error values. Each node contains a part of the scene, typically a piece of surface. The CDF of a node gives, for photons hitting that surface, the probability over all reflection directions of this surface. When a photon hits a node, we use the node's CDF to choose its reflection direction. With our technique, areas having high feature error in the scene receive more photons and areas having low feature error receive fewer photons.

In the following sections, we first introduce our technique in traditional photon mapping, then we extend it to progressive photon mapping. Fig. 1 shows the overview of our technique. At the beginning, we coarsely shoot some photons and the whole scene is initialized as a kd-tree; each part of the scene belongs to a kd-tree node. Then, the photons are shot from the light sources in iterations. In each iteration, we shoot a certain number of photons, typically 1000–5000. These photons are adaptively reflected at each hit point along their tracing paths. The reflection direction is determined by the adaptive CDF of the kd-tree node, and each photon hitting a surface is also stored in that node. After a photon shooting iteration, we update the kd-tree and compute the error value of each node using our error criterion. The adaptive CDF of each node is built from these error values: the distribution of the error values around each node is combined with the original PDF of the node, and this combined PDF is integrated into the adaptive CDF. If our adaptive shooting technique is used in traditional photon mapping, after shooting all the photons, the eye rays from the camera are traced to generate the final image.

Fig. 1. If we use our technique in traditional photon mapping, after building a kd-tree over the scene space, our technique begins to adaptively shoot photons and ends at the eye ray tracing step. If it is used in progressive photon mapping, it begins at the eye ray tracing step and does not end until all the passes are finished.



Fig. 2. (a) shows the original space and (b) shows the space partitioned by the kd-tree. Photons in the scene are organized by the nodes.

If it is used in progressive photon mapping, after tracing all the eye rays from the camera, the information in each node is updated. The final image is synthesized after rendering all the passes.
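As a concrete illustration of the radiance estimate in Eq. (1), the following C++ sketch sums the photon contributions gathered within the radius r(x) around a hit point. It is only a sketch and not the authors' implementation: the Photon struct, the scalar flux and the constant diffuse BRDF are simplifying assumptions.

```cpp
#include <vector>

// Minimal photon record: the flux Phi_i carried by one photon (one channel).
// A real renderer also stores position, incoming direction and RGB power.
struct Photon {
    float power;
};

// Radiance estimate of Eq. (1): L(x, w_o) ~= (1 / (pi r(x)^2)) * sum_i f_r * Phi_i.
// 'photons' are the k photons found within radius r around the hit point x;
// 'brdf' stands in for f_r(x, w_o, w_i), assumed constant here (ideal diffuse).
float radianceEstimate(const std::vector<Photon>& photons, float radius, float brdf) {
    const float kPi = 3.14159265358979f;
    float sum = 0.0f;
    for (const Photon& p : photons)
        sum += brdf * p.power;             // f_r(x, w_o, w_i) * Phi_i
    return sum / (kPi * radius * radius);  // divide by the disc area pi * r(x)^2
}
```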

4. Kd-tree partition

In our technique, we use a kd-tree to store the photons and analyze the local light field features on the surfaces. Each kd-tree node contains photons, a weight, an error value and the adaptive CDF. The photons in a node represent its local light field. The weight gives the node's contribution to the final image. The error value computed from the photons indicates whether this node needs more photons. The node's CDF determines the reflection direction of photons based on the node's neighbors' error values. A kd-tree node has a maximum number of photons, typically 16–64.

Before the rendering stage, the whole scene space is one kd-tree node. At the beginning of each rendering pass, the entire scene is initialized by coarse sampling, typically 1000 photons. Then the kd-tree is split by these initial photons. Fig. 2 gives an example of the kd-tree initialization. During the following rendering, each time a photon hits a surface in a node, the photon is stored in this node. If the photons in this node exceed the maximum number, the node is split into two children along its longest dimension, and each child receives half of the photons from its parent. After shooting all the photons in one iteration, we compute the error value and adaptive CDF in each node. As shown in Fig. 1, if our technique is used in progressive photon mapping, at the beginning of each progressive pass we compute the weight of the nodes crossed by the eye rays from the camera. We also clear all the photons in the kd-tree without changing the kd-tree's structure: before each pass there are no photons in the kd-tree and every node is empty, while the structure is the same as at the end of the last pass. During rendering, photons are still stored in each node and a node is split if it holds too many photons.
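Below is a minimal sketch of such a node, assuming a median split along the longest dimension so that each child receives half of the parent's photons. The KdNode and Photon types, the field names and the bound handling are illustrative choices, not the paper's or LuxRender's data structures.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <memory>
#include <vector>

struct Photon {
    std::array<float, 3> position;  // hit point in world space
    float power;                    // carried flux
};

// One kd-tree node: photons, weight, error value and a discretized adaptive CDF.
// The root's bounds are assumed to be initialized to the scene's bounding box.
struct KdNode {
    std::array<float, 3> boundMin{}, boundMax{};
    std::vector<Photon> photons;    // photons stored during the current iteration
    float weight = 0.0f;            // contribution to the final image
    float error = 0.0f;             // feature error (Section 5.1)
    std::vector<float> cdf;         // discretized adaptive CDF (Section 5.3)
    std::unique_ptr<KdNode> left, right;
    int splitDim = 0;
    float splitValue = 0.0f;

    bool isLeaf() const { return left == nullptr; }

    // Store a photon; split along the longest dimension when the node is overfull.
    void insert(const Photon& p, std::size_t maxPhotons) {
        if (!isLeaf()) {
            (p.position[splitDim] < splitValue ? left : right)->insert(p, maxPhotons);
            return;
        }
        photons.push_back(p);
        if (photons.size() > maxPhotons) split();
    }

private:
    void split() {
        // Longest dimension of the node's bounds.
        splitDim = 0;
        for (int a = 1; a < 3; ++a)
            if (boundMax[a] - boundMin[a] > boundMax[splitDim] - boundMin[splitDim])
                splitDim = a;
        // Median split so each child receives half of the parent's photons.
        std::sort(photons.begin(), photons.end(),
                  [this](const Photon& a, const Photon& b) {
                      return a.position[splitDim] < b.position[splitDim];
                  });
        const std::size_t mid = photons.size() / 2;
        splitValue = photons[mid].position[splitDim];

        left = std::make_unique<KdNode>();
        right = std::make_unique<KdNode>();
        left->boundMin = right->boundMin = boundMin;
        left->boundMax = right->boundMax = boundMax;
        left->boundMax[splitDim] = splitValue;
        right->boundMin[splitDim] = splitValue;
        // Children inherit the parent's error value and weight (Section 6).
        left->error = right->error = error;
        left->weight = right->weight = weight;
        left->photons.assign(photons.begin(), photons.begin() + mid);
        right->photons.assign(photons.begin() + mid, photons.end());
        photons.clear();
    }
};
```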

5. Adaptive importance photon shooting

Similar to adaptive sampling in ray tracing, to generate a high quality image it is more efficient to shoot photons in high frequency areas than in low frequency areas. Based on this core idea, we propose an adaptive importance photon shooting technique. Compared with the original tracing strategy, our technique accelerates the convergence rate. To adaptively shoot photons, our technique has three key steps: computing the error values, getting each node's neighbors and building the adaptive CDF. The rendering pass (the orange part in Fig. 1) is separated into several repeated iterations. In each iteration, photons are traced from the light sources and stored on the surfaces. At each hit point of the tracing path, the photon's reflection direction is determined by the adaptive CDF of the node. The CDF is computed from the error values of the node's neighbors. The node's neighbors are obtained during the tracing step. The error value is computed based on the error criterion.

5.1. Error criterion

We propose an error criterion that uses the information in each node to estimate the local light field feature error. The error value in each node indicates whether this node needs more photons. It is computed after each photon shooting iteration and is used to build the adaptive CDF. In order to estimate the high frequency features in the scene, the error value combines the variance, weight, volume and photon count of each node:

\varepsilon_i = w_i \, \frac{\mathrm{var}_i \cdot \mathrm{vol}_i}{N_i} \qquad (2)

Eq. (2) calculates the error value of node i. vol_i is the volume of node i, N_i is the count of the photons in the node, and the weight w_i indicates the contribution of the node to the final image. var_i denotes the variance of the node, computed from the mean square difference of its photons. The error value can indicate the high frequency areas of the scene based on the variance and the node volume. According to the kd-tree structure, the volume becomes small in high frequency areas after adaptively shooting photons, so the error value does not get stuck in one area.

If our technique is used in progressive photon mapping, the variance computed by the traditional method [20] is biased from the real value. Inspired by Hachisuka et al.'s work [21], a bias value of each node is employed to evaluate the real variance:

B_i = \frac{k}{2 N_t} \sum_{j=1}^{N_i} \nabla^2 K \, f_r \, \Phi_j \qquad (3)

Here, B_i is the bias value of node i, k is a constant depending on the kernel function, f_r is the BRDF at the hit point of photon Φ_j, and K is the kernel function. The details of this equation are given in Hachisuka et al.'s work. After computing the bias value, the variance is computed as

\mathrm{var}_i = \frac{1}{N_i - 1} \sum_{j=0}^{N_i - 1} \left( \Phi_j - \bar{\Phi} + B_i \right)^2 \qquad (4)

The variance var_i of the photons in a node is calculated by summing the squared differences between each photon and the bias-corrected mean value, normalized by the photon count. The error value denotes whether the node needs more photons, just like adaptive sampling in ray tracing. If one node's error value is high, our algorithm may shoot more photons to this node.
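The error criterion of Eqs. (2)–(4) can be read as the following sketch. It is an illustrative interpretation, not the authors' code: the photon is reduced to a scalar flux, and the bias term B_i of Eq. (3) is assumed to be precomputed elsewhere.

```cpp
#include <vector>

// Statistics of one kd-tree node needed by the error criterion.
struct NodeStats {
    std::vector<float> photonFlux;  // flux Phi_j of the photons stored in the node
    float weight;                   // w_i: contribution of the node to the image
    float volume;                   // vol_i: volume of the node's bounding box
    float bias;                     // B_i from Eq. (3), assumed precomputed
};

// Bias-corrected variance of the photon flux, Eq. (4).
float photonVariance(const NodeStats& n) {
    const std::size_t N = n.photonFlux.size();
    if (N < 2) return 0.0f;
    float mean = 0.0f;
    for (float phi : n.photonFlux) mean += phi;
    mean /= float(N);
    float sum = 0.0f;
    for (float phi : n.photonFlux) {
        const float d = phi - mean + n.bias;  // difference to the bias-corrected mean
        sum += d * d;
    }
    return sum / float(N - 1);
}

// Feature error of the node, Eq. (2): eps_i = w_i * var_i * vol_i / N_i.
float featureError(const NodeStats& n) {
    const std::size_t N = n.photonFlux.size();
    if (N == 0) return 0.0f;
    return n.weight * photonVariance(n) * n.volume / float(N);
}
```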


5.2. Photon tracing path

During the photon tracing step of rendering, our photon shooting technique adaptively chooses the reflection direction of each photon. The reflection direction is determined by our novel adaptive CDF. In order to compute this CDF, we need to know the visible neighbor nodes around the CDF's node. The error values in these neighbor nodes give the feature error distribution around the node, which helps us to shoot photons adaptively.

In order to get each node's neighbors, a photon records its last hit point during its tracing path (as shown in Fig. 3). Every photon stored in a node carries its last hit position in the scene, and these last hit points indicate the node's visible neighbors. At the end of each photon shooting iteration, after computing the error value, each node uses its neighbors' error values to compute the CDF.

5.3. Adaptive cumulative distribution function

The main part of our technique is adaptively choosing the reflection direction when a photon hits a point of the scene. Inspired by importance methods in ray tracing [22], we use an adaptive cumulative distribution function (CDF) to select the direction. The CDF is built by integrating the probability density function (PDF) in each node. The PDF at a certain point gives the probability of the light contribution of an incoming direction. In this article, we give a new PDF instead of the original one. The original PDF of a point on the surface, typically a BRDF, is used to choose the ray or photon's reflection direction in Monte Carlo path tracing. We modify this original PDF by combining it with our computed feature error distribution: the new PDF is the original PDF multiplied by the error value distribution. After each photon shooting iteration, the feature error distribution around each node is represented by the error values and is denoted by the function p(ω):

Fig. 3. The photon tracing path in the scene. The half circle in the center indicates the reflection directions of the incoming light. The two bars at the bottom show the error value and weight of node A along the reflection direction. The weight value changes with the camera direction; the error value changes with the feature frequency of the scene. Because of the last positions recorded in the photons, node B and node C know that node A is their neighbor.


p(\omega) = \frac{\varepsilon_\omega}{\bar{\varepsilon}} \qquad (5)

Here, p(ω) denotes the shooting probability of direction ω, that is, the probability of shooting a photon in direction ω among all directions based on the error values. If the error value in direction ω is high, the probability p(ω) is high; otherwise, it is low. Our technique uses the error value to evaluate the feature frequency. ε_ω is the error value of the node in direction ω, and ε̄ is the mean of the error values over all possible directions. In order to adaptively shoot photons, we need to combine the probability function p(ω) with the original reflection function, such as the material's BRDF or an importance sampling function.

Fig. 4. The same p(ω) combined with different material reflection functions. The function p(ω) is built from the error values around the current node. p_brdf(ω) is the original PDF of the reflection function. p_i(ω) is the normalized result of p(ω) times p_brdf(ω). C_i(ω), the integration of p_i(ω), is our adaptive CDF.



Fig. 5. (a) shows the photon distribution of traditional photon mapping. (b) shows the photon distribution using our adaptive technique.

The reflection function is implemented as a CDF, which is the integration of the PDF. Different points in the scene may have different PDFs based on their materials and input directions. Our adaptive CDF is the combination of p(ω) and the original PDF for a certain input direction:

C_i(\omega) = \int_0^{\omega} p(t)\, p_{\mathrm{brdf}}(t)\, dt \qquad (6)

Here, p_brdf(t) is the original PDF. Our adaptive CDF C_i in node i is the integration of this PDF times our error probability function p(ω), and C_i is normalized. Fig. 4 shows the different situations of computing the CDF for different materials. The adaptive CDF is implemented as a discrete table integrated from the PDF; typically the resolution of the discretized CDF is 4 × 4 or 8 × 8. The PDF value of a certain direction is interpolated using the vectors between the current node and its neighbors. For each input direction and a random probability, the CDF gives an output direction, interpolated from the probability and the discrete directions in the table.

5.4. Modification of the photon radiance

In Fig. 5, we can see that the photon distribution is different between traditional photon mapping and our technique. If we only adaptively shoot the photons to the high frequency areas without changing their contributions, the radiance of each eye ray will not converge to the correct radiance: the areas receiving more photons under our technique become brighter than their real radiance, and the areas receiving fewer photons become darker. In order to solve this problem, the photon's contribution needs to be changed during its adaptive reflection. The function p(ω) is used to scale the photon's contribution: when a photon is reflected in direction ω, the scaling factor is ε̄/ε_ω. This guarantees that the contribution of each photon is correct.
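The following sketch combines Sections 5.3 and 5.4 for a table flattened to one dimension: it builds the combined PDF p(ω)·p_brdf(ω) from the neighbors' error values, integrates and normalizes it into a discrete CDF, samples a reflection bin from a uniform random number, and scales the photon power by ε̄/ε_ω. The bin layout, the clamping of p(ω) and all names are assumptions made for this example; the paper uses a 4 × 4 or 8 × 8 table over directions with interpolation.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Discretized adaptive CDF over a set of direction bins (flattened to 1D here).
struct AdaptiveCDF {
    std::vector<float> cdf;        // cumulative table C_i, normalized to end at 1
    std::vector<float> errorRatio; // eps_omega / eps_bar per bin, kept for power scaling
};

// Build the table from the neighbors' error values and the BRDF PDF per bin.
// p(omega) = eps_omega / eps_bar (Eq. 5); combined PDF = p(omega) * p_brdf(omega) (Eq. 6).
AdaptiveCDF buildAdaptiveCDF(const std::vector<float>& neighborError,
                             const std::vector<float>& brdfPdf) {
    const std::size_t bins = neighborError.size();
    AdaptiveCDF table{std::vector<float>(bins, 0.0f), std::vector<float>(bins, 1.0f)};
    if (bins == 0) return table;

    float meanError = 0.0f;
    for (float e : neighborError) meanError += e;
    meanError /= float(bins);

    float running = 0.0f;
    for (std::size_t b = 0; b < bins; ++b) {
        // Clamp so that no direction is starved completely (illustrative choice).
        const float p = (meanError > 0.0f)
                            ? std::max(neighborError[b] / meanError, 0.05f)
                            : 1.0f;
        table.errorRatio[b] = p;     // eps_omega / eps_bar for this bin
        running += p * brdfPdf[b];   // accumulate p(omega) * p_brdf(omega)
        table.cdf[b] = running;
    }
    for (float& c : table.cdf) c = (running > 0.0f) ? c / running : 1.0f;
    return table;
}

// Draw a reflection bin from a uniform random number u in [0,1), and scale the
// photon power by eps_bar / eps_omega so its expected contribution stays correct.
std::size_t sampleReflection(const AdaptiveCDF& table, float u, float& photonPower) {
    if (table.cdf.empty()) return 0;
    std::size_t b = 0;
    while (b + 1 < table.cdf.size() && table.cdf[b] < u) ++b;
    photonPower *= 1.0f / table.errorRatio[b];  // scaling factor eps_bar / eps_omega
    return b;
}
```

In this reading, the table would be rebuilt after every photon shooting iteration, once the neighbors' error values have been updated.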

6. Progressive process

Progressive photon mapping progressively renders the scene without storing photons [2]. Our technique can be used in progressive photon mapping as well as in traditional photon mapping. Progressive photon mapping has many passes (the blue part in Fig. 1). In each pass, it has two main steps: tracing the eye rays and shooting the photons. In order to use our technique in progressive photon mapping, four parts need to be modified.

First, after the eye ray tracing step, the weight of each kd-tree node is updated by

w_i \leftarrow w_i + \frac{N_{e,i}}{N_e} \qquad (7)

The weight w_i of node i is increased after the eye ray tracing step. N_{e,i} is the count of the eye rays crossing node i, and N_e is the total count of eye rays in the current pass. The weight indicates the contribution of this node to the final image; it drives our technique to shoot more photons to the nodes that can be seen directly.

Second, our technique still stores photons in each pass in the same way as traditional photon mapping, but the photons in the kd-tree are cleared after each pass. The error value, weight and CDF in each node are kept, so the memory cost is bounded.

Third, a new error criterion is used to compute the error value. It is a heuristic that combines the value calculated by Eq. (2) with the error value of the last pass. In order to compute the following equation, the error value of the last pass is saved in each node:

\varepsilon_i = (1 - \alpha)\,\Delta\varepsilon_i + \alpha\,\varepsilon'_i \qquad (8)

Eq. (8) is used instead of Eq. (2) to estimate the feature frequency. Δε_i is the additional value calculated by Eq. (2), ε'_i is the error value of this node in the last pass, and ε_i is the new error value. α ∈ [0, 1) is the parameter that controls the sensitivity of the error criterion: if α is large, the error value changes slowly with the new photons; otherwise, it changes fast. When a node is split, the error value of the last pass and the weight are copied to its children.

Fourth, because of the memory limitation, the kd-tree cannot be split infinitely. We define a maximum number of nodes; if the number of nodes exceeds this maximum, the kd-tree is not split further. This guarantees that the kd-tree structure has a bounded memory consumption as the number of passes increases. The remaining steps of our technique in progressive photon mapping are the same as in traditional photon mapping.
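A compact sketch of this per-pass bookkeeping is shown below. The NodeState struct and the function names are hypothetical; the weight update follows Eq. (7), the error blending follows Eq. (8), and the photon list is cleared while the learned statistics are kept so the memory stays bounded.

```cpp
#include <vector>

// Per-node statistics that survive across progressive passes.
struct NodeState {
    std::vector<float> photonFlux;  // photons of the current pass (cleared each pass)
    float weight = 0.0f;            // w_i, accumulated over passes
    float lastError = 0.0f;         // eps'_i from the previous pass
    float error = 0.0f;             // eps_i used to build the adaptive CDF
};

// Eq. (7): after the eye ray tracing step of a pass, accumulate the node weight.
void updateWeight(NodeState& node, int eyeRaysThroughNode, int eyeRaysTotal) {
    if (eyeRaysTotal > 0)
        node.weight += float(eyeRaysThroughNode) / float(eyeRaysTotal);
}

// Eq. (8): blend the error of the new pass with the previous pass, then clear the
// photons while keeping weight, error and CDF so memory stays bounded.
void finishPass(NodeState& node, float newPassError, float alpha) {
    node.error = (1.0f - alpha) * newPassError + alpha * node.lastError;
    node.lastError = node.error;  // saved for the next pass
    node.photonFlux.clear();      // photons are not kept between passes
}
```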

7. Results and discussion

Our algorithm and the previous approaches are implemented on LuxRender [23]. All results were rendered on an Intel Core i7 CPU at 2.80 GHz with 2 GB of RAM, using one thread. First, the photon distribution of our adaptive photon shooting technique is compared with standard progressive photon mapping (PPM). Then, the kd-tree structure and the error criterion of our technique are analyzed. Finally, we compare our algorithm with previous rendering methods. In the following experiments, our algorithm refers to our adaptive shooting technique used in progressive photon mapping.


7.1. Adaptive photon distribution

With our technique, the photons are adaptively reflected on the surfaces along their tracing paths. Fig. 6 compares our algorithm with PPM by rendering a glass ball scene at 600 × 600 resolution with 40 passes and 100 000 photons per pass. The photon distribution shows that our algorithm shoots more photons on the edges and highlight areas. The rendering result of our algorithm is clearer in the highlight areas than PPM, and the additional computation time is acceptable. During the rendering passes, because the error value of each node converges to zero, the photon distribution becomes smooth and does not get stuck in a few areas. Fig. 7 shows this change: at the beginning, the photons are focused on the edges and highlight areas; during the rendering process, the photon distribution becomes smooth; after a large number of passes, the photons are distributed over all areas of the scene.

7.2. Progressive error criterion analysis

When our technique is used in progressive photon mapping, we use a heuristic method to estimate the error value. The α parameter influences the photon shooting strategy by controlling the error value: if α is large, the error value of each node changes slowly with the photon contribution of the new pass; otherwise, the error value changes fast.


The parameter α is analyzed by rendering the magic lamp scene at 512 × 512 resolution with 40 passes. Fig. 8 shows the results of the same scene rendered with different α values. We compare the shadow and glass edge areas of the scene. The mean square error (MSE) table and the visual images show that this parameter should be neither too high nor too low. When the parameter is 0.4, the result gives a low numerical error and high visual quality.

7.3. Parameter analysis

Our technique has a few parameters that control the strategy of the adaptive photon shooting. The maximum depth controls the precision of the partition. The maximum photon number per node influences the adaptive strategy and the memory consumption. The CDF is computed in a discrete representation, and its resolution controls the shooting precision toward the high frequency areas. These parameters are analyzed by rendering the magic lamp scene at 512 × 512 resolution with 64 passes. Fig. 9(a–c) shows the MSE and memory consumption for different parameter values; the red line shows the memory consumption and the blue line shows the MSE. According to the analysis, the quality of the result and the memory consumption both increase when these parameters increase. When the maximum depth is 64 and the maximum node photon number is 32, our algorithm gives a high quality image with an acceptable memory consumption. Fig. 9(d) gives the convergence rate of our technique: as the number of passes increases, our strategy converges faster than traditional photon mapping.

7.4. Results

Fig. 6. Compared with standard progressive photon mapping (PPM), the photon distribution of our algorithm concentrates more quickly on the highlight and edge areas.

We compare the visual quality of images between our algorithm and previous methods such as Monte Carlo ray tracing, Metropolis light transport (MLT) [24] and standard progressive photon mapping (PPM) [2]. The results are shown in Fig. 10.

Fig. 10(a) is a chess scene. The resolution of the rendered images is 1024 × 1024 and each image took 2 h to render. Monte Carlo ray tracing and Metropolis light transport cause highlight noise near the glass chess pieces. Progressive photon mapping and our method give smooth results for this diffuse–specular–diffuse scene. The reference is rendered by progressive photon mapping in 6 h. Compared with these previous methods, our method gives a near-reference quality image.

Fig. 10(b) is a room scene. The resolution of the rendered images is 1024 × 1024 and each image took 4 h to render. Monte Carlo ray tracing renders the diffuse materials with high quality, but there are many highlight spots on the glass ball. Metropolis light transport generates a better image than the Monte Carlo method, but it still causes a few highlight spots. Progressive photon mapping gives a smooth result for this diffuse–specular–diffuse scene, but there is noise in the corners and on the ceiling where photons are hard to reach.

Fig. 7. The photon distributions of our algorithm as the number of passes increases.



Fig. 8. The error criterion is analyzed by rendering the same scene with different α values. The top row shows the visual quality of the results. The bottom row shows the photon distributions and the table of mean square error.

Fig. 9. (a–c) shows the mean square error and memory consumption of the rendering results using different maximum depths of the kd-tree, maximum node numbers and CDF precisions. (d) shows the convergence rate of our technique.

Compared with progressive photon mapping, our algorithm shoots more photons into the corners and onto the ceiling to reduce the noise. The reference is rendered by progressive photon mapping in 12 h. Compared with these previous methods, our result achieves near-reference quality.

Fig. 10(c) is an outdoor scene. The resolution of the rendered images is 1024 × 1024 and each image took 8 h to render. Because of the sun light and the glossy materials, the Monte Carlo method generates a lot of noise on the summerhouse's surface.

Metropolis light transport gives a better result by jittering its tracing paths through the scene. Progressive photon mapping reduces the highlight spots, but it shoots few photons onto the ceiling because the scene is complex. The reference is rendered by progressive photon mapping in 24 h. Our algorithm shoots more photons into the noisy areas and synthesizes a higher quality image than the previous methods.



Fig. 10. (a) shows a board with several glass and mirror chess pieces on it. (b) is a room with two mirror objects and a glass ball. (c) is a summerhouse in an outdoor scene. We render the images using Monte Carlo ray tracing, Metropolis light transport, progressive photon mapping and our algorithm.

From all the experiments, we can see that our technique costs more memory than the previous methods to store the photons and CDFs, but the consumption is still acceptable.

7.5. Limitations

There are a few limitations of our technique. Although the kd-tree can partition the scene space, its refinement is finite because of the memory consumption, so some small features may not be captured by the error value. The CDF is built with a discrete representation in each node, which limits the precision of the adaptive shooting, so some small features may not receive adaptively shot photons. Finally, to store the CDF and error values in each node, our technique costs more memory than the original methods.

8. Conclusions

We propose a novel technique to adaptively shoot photons based on the local light field, which can be used in both traditional photon mapping and progressive photon mapping.

During the rendering process, a kd-tree structure is used to partition and analyze the scene. The light field feature error in each node is estimated by our error criterion. Photons at hit points are redirected by their node's CDF, which is computed from the error values around the node. Our technique accelerates the convergence rate of photon mapping methods.

References

[1] Jensen HW. Realistic image synthesis using photon mapping. A.K. Peters, Ltd.; 2001.
[2] Hachisuka T, Ogaki S, Jensen HW. Progressive photon mapping. ACM Trans Graphics (SIGGRAPH Asia Proceedings) 2008;27(5) [Article 130].
[3] Kajiya JT. The rendering equation. In: Computer graphics (Proceedings of ACM SIGGRAPH 86); 1986. p. 143–50.
[4] Pharr M, Humphreys G. Physically based rendering: from theory to implementation. 2nd ed. San Francisco, CA, USA: Morgan Kaufmann; 2010.
[5] Jensen HW. Global illumination using photon maps. In: Proceedings of the eurographics workshop on rendering techniques; 1996. p. 21–30.
[6] Moon JT, Marschner SR. Simulating multiple scattering in hair using a photon mapping approach. ACM Trans Graphics (Proceedings of SIGGRAPH) 2006;25(3).



[7] Jensen HW, Christensen PH. Efficient simulation of light transport in scenes with participating media using photon maps. In: SIGGRAPH. ACM Press, New York, USA; 1998. p. 311–20.
[8] Schjøth L, Frisvad JR, Erleben K, Sporring J. Photon differentials. In: GRAPHITE; 2007. p. 179–86.
[9] Spencer B, Jones MW. Better caustics through photon relaxation. Comput Graphics Forum 2009;28(2):319–28.
[10] Jarosz W, Nowrouzezahrai D, Sadeghi I, Jensen HW. A comprehensive theory of volumetric radiance estimation using photon points and beams. ACM Trans Graphics (Presented at SIGGRAPH) 2011;30(1):5:1–19.
[11] Veach E, Guibas L. Optimally combining sampling techniques for Monte Carlo rendering. In: Proceedings of SIGGRAPH 95; 1995.
[12] Georgiev I, Křivánek J, Davidovič T, Slusallek P. Light transport simulation with vertex connection and merging. ACM Trans Graphics 2012;31(6):192:1–10.
[13] Hachisuka T, Jensen HW. Stochastic progressive photon mapping. ACM Trans Graphics (Proceedings of SIGGRAPH Asia) 2009;28(5):141:1–8.
[14] Kaplanyan AS, Dachsbacher C. Adaptive progressive photon mapping. ACM Trans Graphics 2013;32(2) [Article 16].
[15] Hachisuka T, Jensen HW. Robust adaptive photon tracing using photon path visibility. ACM Trans Graphics 2011;30(5) [Article 114].
[16] Herzog R, Havran V, Kinuwaki S, Myszkowski K, Seidel H-P. Global illumination using photon ray splatting. EUROGRAPHICS 2007;26(3).
[17] Liu XD, Wu JZ, Zheng CW. Kd-tree based parallel adaptive rendering. The Visual Computer 2012;28(6–8):613–23.
[18] Keller A, Wald I. Efficient importance sampling techniques for the photon map. In: Proceedings of the 5th fall workshop on vision, modeling, and visualization; 2000. p. 271–9.
[19] Schjøth L. Anisotropic density estimation in global illumination: a journey through time and space. PhD thesis, University of Copenhagen; 2009.
[20] Mitchell DP. Generating antialiased images at low sampling densities. In: Computer graphics (Proceedings of ACM SIGGRAPH 87), vol. 21; 1987. p. 65–72.
[21] Hachisuka T, Jarosz W, Jensen HW. A progressive error estimation framework for photon density estimation. ACM Trans Graphics (Proceedings of SIGGRAPH Asia) 2010;29(6) [Article 144].
[22] Clarberg P, Jarosz W, Akenine-Möller T, Jensen HW. Wavelet importance sampling: efficiently evaluating products of complex functions. In: Proceedings of ACM SIGGRAPH 2005. ACM Press; 2005.
[23] LuxRender, 〈http://src.luxrender.net〉; 2008.
[24] Kelemen C, Szirmay-Kalos L, Antal G, Csonka F. A simple and robust mutation strategy for the Metropolis light transport algorithm. Comput Graphics Forum 2002;21(3):531–40.

[17] Liu XD, Wu JZ, Zheng CW. Kd-tree based parallel adaptive rendering. The Visual Comput 2012;28(6–8):613–23. [18] Keller A, Wald I. Efficient importance sampling techniques for the photon map. In: Proceedings of the 5th fall workshop on vision, modeling, and visualization; 2000. p. 271–9. [19] Schjoh L. Anisotropic density estimation in global illumination: a journey through time and space. PhD thesis, University of Copenhagen; 2009. [20] Mitchell DP. Generating antialiased images at low sampling densities. In: Computer graphics proceedings, annual conference series, ACM SIGGRAPH, vol. 21, ACM, Anaheim; 1987. p. 65–72. [21] Hachisuka T, Jarosz W, Jensen HW. A progressive error estimation framework for photon density estimation. ACM Trans Graphics (Proceedings of the SIGGRAPH Asia Conference) 2010;29(6) [Article 144]. [22] Clarberg P, Jarosz W, Akenine-Möller T, Jensen HW. Wavelet importance sampling: Efficiently evaluating products of complex functions. In: Proceedings of ACM SIGGRAPH 2005. ACM Press; 2005. [23] 〈http://src.luxrender.net〉, 2008. [24] Kelemen C, László S-K, Antal G, Csonka F. A simple and robust mutation strategy for the metropolis light transport algorithm. Comput Graphics Forum 2002;21(3):531–40.