Visual Informatics 1 (2017) 1–8


Visual Informatics journal homepage: www.elsevier.com/locate/visinf

Visual simulation of clouds

Yoshinori Dobashi a, Kei Iwasaki b, Yonghao Yue c, Tomoyuki Nishita d,*

a Hokkaido University/UEI Research, Japan
b Wakayama University/UEI Research, Japan
c The University of Tokyo, Japan
d UEI Research/Hiroshima Shudo University, Japan

Article info

Article history: Available online 27 January 2017

Keywords: Clouds, Procedural modeling, Image-based modeling, Fluid simulation, Feedback control, Genetic algorithms

Abstract

Clouds play an important role when synthesizing realistic images of outdoor scenes. The realistic display of clouds is therefore one of the important research topics in computer graphics. In order to display realistic clouds, we need methods for modeling, rendering, and animating clouds realistically. It is also important to control the shapes and appearances of clouds to create certain visual effects. In this paper, we explain our efforts and research results to meet such requirements, together with related research on the visual simulation of clouds.

© 2017 Published by Elsevier B.V. on behalf of Zhejiang University and Zhejiang University Press. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

Clouds are important elements for enhancing the realism of synthesized outdoor scenes. Many methods have therefore been proposed for the visual simulation of clouds since the very beginning of computer graphics (Blinn, 1982; Max, 1986; Gardner, 1985). These methods are used in many applications such as flight simulators, movies, and computer games. There are several important factors in creating realistic images of clouds. The first is shape. The shapes of clouds are defined by the three-dimensional density distribution of cloud particles, or water droplets, so we need a method for synthesizing a realistic distribution of cloud particles. Once we have a realistic distribution, we need a method that can compute realistic colors of clouds, taking into account the attenuation and scattering of light inside the clouds. This requires simulating the interactions between light and the small particles, which is usually time-consuming. Moreover, when we want to synthesize animations of clouds, we also need a method that can compute their complex but fascinating motions. Finally, efficiency and controllability are also important for applications in computer graphics, such as movies and computer games.

A tremendous amount of previous work addresses the requirements described above. In this article, we review some of this previous work and introduce our continuing efforts to model, render, and animate clouds realistically. Our inverse approach to the visual simulation of clouds is also explained.

* Corresponding author.

E-mail address: [email protected] (T. Nishita). Peer review under responsibility of Zhejiang University and Zhejiang University Press.

2. Modeling clouds

In order to display realistic clouds, the density distribution of the clouds needs to be defined, and many methods have been developed for this purpose. There are two major approaches to modeling clouds: a procedural approach and a physically based approach.

Procedural modeling is the most popular approach and usually relies on noise functions of some kind. Voss used the idea of fractals for modeling clouds (Voss, 1983). Gardner proposed a method using textured ellipsoids for the visual simulation of clouds (Gardner, 1985). Ebert developed a method combining metaballs and a noise function (Ebert, 1997). Sakas modeled clouds by using spectral synthesis (Sakas, 1993). Schpok et al. developed a real-time system for the procedural modeling of clouds (Schpok et al., 2003). A more detailed explanation of the procedural modeling of clouds can be found in Ebert (2003). These methods can generate realistic clouds, but many parameters must be specified by trial and error to synthesize realistic clouds.
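For illustration, the sketch below shows the kind of noise-based procedural density field that underlies this class of methods. It is a generic example, not the formulation of any of the cited systems, and the function and parameter names (e.g., coverage, sharpness) are our own illustrative choices.

```python
import numpy as np

def value_noise_3d(p, seed=0):
    """Smoothed value noise: hash the lattice corners, then trilinearly interpolate."""
    p = np.asarray(p, dtype=float)
    i = np.floor(p).astype(int)
    f = p - i

    def hash_corner(c):
        # Simple integer hash of a lattice corner -> pseudo-random value in [0, 1).
        x, y, z = (int(v) for v in c)
        h = (x * 73856093) ^ (y * 19349663) ^ (z * 83492791) ^ seed
        return ((h * 2654435761) % (2 ** 32)) / float(2 ** 32)

    f = f * f * (3.0 - 2.0 * f)  # smoothstep fade for C1-continuous interpolation
    n = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                n += w * hash_corner(i + np.array([dx, dy, dz]))
    return n

def fbm(p, octaves=5, lacunarity=2.0, gain=0.5):
    """Fractal sum of noise octaves (fractional-Brownian-motion-style turbulence)."""
    amp, freq, total, norm = 1.0, 1.0, 0.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise_3d(np.asarray(p) * freq)
        norm += amp
        amp *= gain
        freq *= lacunarity
    return total / norm

def cloud_density(p, coverage=0.55, sharpness=4.0):
    """Map the noise to a cloud-like density: threshold, then a soft ramp."""
    return max(0.0, fbm(p) - coverage) * sharpness

if __name__ == "__main__":
    # Sample the density on a small grid, e.g. to fill a preview volume.
    res = 16
    grid = np.array([[[cloud_density((x * 0.3, y * 0.3, z * 0.3))
                       for z in range(res)] for y in range(res)] for x in range(res)])
    print("non-empty voxels:", np.count_nonzero(grid), "/", grid.size)
```

The coverage threshold plays the role of the hand-tuned parameters mentioned above: raising it erodes the clouds, while lowering it makes them denser, and such values typically have to be adjusted by eye.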

DOI: http://dx.doi.org/10.1016/j.visinf.2017.01.001


Fig. 1. Modeling of clouds using satellite images. The left image shows a typhoon synthesized by using the inset image. The right image shows a different typhoon viewed from space.

Fig. 3. Important factors that affect the intensity of clouds.

Fig. 2. Modeling of clouds from photographs. Three types of clouds in the synthetic image are modeled from the corresponding photographs shown in the inset images.

Clouds can also be generated by physically based simulation of the cloud formation process. Kajiya and Herzen numerically solved the equations of atmospheric fluid dynamics to model cumulonimbus clouds (Kajiya and Herzen, 1984). Miyazaki et al. proposed a method for modeling various types of cloud by using a coupled map lattice, an extended version of cellular automata (Miyazaki et al., 2001). They also proposed a method for simulating the cloud formation process by improving the method of Kajiya and Herzen (1984). These methods can create realistic clouds and can also be used for animating them (see Section 4). However, their computational cost is very high.

This problem can be addressed by employing an image-based approach, that is, modeling clouds from photographs of real clouds. Since taking multiple photographs of clouds from different directions is usually difficult, image-based methods for modeling clouds use a single image as input (Dobashi et al., 1998, 2010; Yuan et al., 2014). The purpose of these methods is not to reconstruct the exact shapes of the clouds in an image but to use the image as a guide to generate clouds that look similar to those in it.

We developed a method for modeling large-scale clouds by using infrared satellite images (Dobashi et al., 1998). We used metaballs to represent the density distribution of the clouds, and the parameters of the metaballs were automatically adjusted so that the synthesized clouds became similar to those in an infrared satellite image. The method can generate realistic images of a typhoon viewed from space, as shown in Fig. 1. The method, however, is not suitable for photographs taken from the ground, where the effects of sunlight illumination and the background sky need to be removed. We therefore developed a method that synthesizes clouds from a photograph taken from the ground (Dobashi et al., 2010). This method can generate three types of cloud: cirrus, altocumulus, and cumulus. Examples of synthetic clouds generated by this method are shown in Fig. 2. The method first creates an image of the sky by estimating the sky colors behind the clouds in the photograph. Then, the intensity and the opacity of the clouds are calculated by comparing the input photograph with the sky image.

We developed three methods that use the intensity and the opacity to generate the three types of cloud. Cirrus clouds are very thin and self-shadows are seldom observed, so we model cirrus as a two-dimensional texture. Altocumulus is also thin, but self-shadows are observed, so a three-dimensional density distribution must be defined; we use metaballs to define the density distribution, and the parameters of the metaballs are determined from the intensity and opacity information. Finally, for cumulus clouds, the method first generates a surface shape for the clouds by computing the thickness at each pixel from the opacity information, and the density inside the shape is then generated with a procedural approach. Yuan et al. (2014) also proposed an image-based method for modeling cumulus clouds by improving our method.
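The following is a minimal sketch of a metaball-based density field of the kind used in these image-based methods. The field function (a Wyvill-style polynomial), the per-ball weights, and all names are illustrative assumptions rather than the exact formulation of the cited papers; in the image-based setting, the centres, radii, and weights are the quantities adjusted so that the rendered clouds match the satellite image or photograph.

```python
import numpy as np

def metaball_field(r, radius):
    """Smooth, compactly supported field function (Wyvill-style): 1 at the
    centre, 0 at distances >= radius."""
    t = np.clip(r / radius, 0.0, 1.0)
    return (1.0 - t * t) ** 3

def cloud_density(points, centers, radii, weights):
    """Evaluate the metaball cloud density at an array of 3D points.

    points  : (N, 3) sample positions
    centers : (M, 3) metaball centres
    radii   : (M,)   effective radii
    weights : (M,)   per-metaball density (the parameter fitted to the image)
    """
    points = np.atleast_2d(points)
    density = np.zeros(len(points))
    for c, R, w in zip(centers, radii, weights):
        r = np.linalg.norm(points - c, axis=1)
        density += w * metaball_field(r, R)
    return density

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    centers = rng.uniform(-1.0, 1.0, size=(20, 3))   # e.g. placed at cloudy pixels
    radii = rng.uniform(0.3, 0.6, size=20)
    weights = rng.uniform(0.5, 1.0, size=20)          # adjusted to match image intensity
    samples = rng.uniform(-1.0, 1.0, size=(1000, 3))
    d = cloud_density(samples, centers, radii, weights)
    print("mean density:", d.mean(), " max density:", d.max())
```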

3. Rendering clouds

The best way to compute realistic colors of clouds is to simulate the optical phenomena that occur inside them. Since clouds are collections of small water droplets, the scattering and absorption of light by cloud particles are the most important factors. The atmosphere also affects the appearance of the clouds.

Let us first describe the important factors that determine the colors of clouds (see Fig. 3). When light reaches a point inside a cloud, it is scattered by a cloud particle and reaches the viewpoint. Light that is scattered only once before reaching the viewpoint is called the single scattering component. Inside the clouds, however, light is scattered multiple times, as shown in the figure, and this multiple scattering component is also important. Furthermore, since clouds are surrounded by the atmosphere, light is also scattered and attenuated by atmospheric particles; such atmospheric effects are also important for the appearance of the clouds. Finally, the light behind the clouds also reaches the viewpoint through the clouds. All of these factors should be taken into account to compute realistic colors of clouds. Many methods have been proposed for rendering clouds while taking these physical phenomena into account; let us review them by classifying them into real-time and offline methods.
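As a concrete illustration of the single scattering component described above, the following sketch marches a view ray through a toy density field, attenuating sunlight toward each sample and the scattered light back toward the viewpoint. The density field, coefficients, and isotropic phase function are simplified assumptions, not the model of any particular cited method.

```python
import numpy as np

def density(p):
    """Toy cloud density: a fuzzy sphere of radius 1 around the origin."""
    return max(0.0, 1.0 - np.linalg.norm(p))

def transmittance(p, direction, sigma_t, step=0.1, max_dist=4.0):
    """exp(-optical depth), accumulated by marching from p along `direction`."""
    tau, t = 0.0, 0.5 * step
    while t < max_dist:
        tau += sigma_t * density(p + t * direction) * step
        t += step
    return np.exp(-tau)

def single_scattering(origin, view_dir, sun_dir, sun_radiance=1.0,
                      sigma_s=0.8, sigma_t=1.0, phase=1.0 / (4.0 * np.pi),
                      step=0.1, max_dist=4.0):
    """March the view ray and accumulate single-scattered sunlight."""
    radiance, t = 0.0, 0.5 * step
    trans_view = 1.0  # transmittance from the eye to the current sample
    while t < max_dist:
        p = origin + t * view_dir
        rho = density(p)
        if rho > 0.0:
            light = sun_radiance * transmittance(p, sun_dir, sigma_t, step, max_dist)
            radiance += trans_view * sigma_s * rho * phase * light * step
            trans_view *= np.exp(-sigma_t * rho * step)
        t += step
    return radiance

if __name__ == "__main__":
    eye = np.array([0.0, 0.0, -3.0])
    view = np.array([0.0, 0.0, 1.0])
    sun = np.array([0.0, 1.0, 0.0])   # direction toward the sun
    print("pixel radiance:", single_scattering(eye, view, sun))
```

Real-time methods essentially avoid the expensive inner transmittance loop by precomputing or approximating the attenuation, e.g., with splatting or texture-based techniques.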


Fig. 6. Real-time rendering of dynamic clouds illuminated by sunlight and skylight with multiple scattering.

Fig. 4. Efficient rendering of clouds and shafts of light using GPU.

3.1. Real-time rendering of clouds

Real-time methods usually use GPUs to accelerate the computation involved in the rendering process. Stam used 3D hardware texture mapping to display gaseous objects (Stam, 1999). With the help of a high-end graphics workstation, this method can generate realistic images in real time by combining 3D textures and advecting cloud textures using the method developed by Max et al. (1992). We proposed a splatting method to render clouds interactively (Dobashi et al., 2000). The method can produce highly realistic images by taking atmospheric scattering effects into account, as shown in Fig. 4. Shafts of light through gaps between clouds can also be rendered efficiently by this method (see the images in the bottom rows of Fig. 4).

All of these methods aim at rendering clouds viewed from the ground. We presented an interactive system for realistic visualization of earth-scale clouds viewed from space (Dobashi et al., 2010). The system can generate realistic images while the viewpoint and the sunlight direction are changed interactively. The realistic display of earth-scale clouds requires rendering large volume data representing the density distribution of the clouds, so we employed a precomputation-based approach and a hierarchical data structure to accelerate the rendering process. Atmospheric effects and the shadows of clouds on the earth are also taken into account to improve the photorealism of the synthetic images. Fig. 5 shows examples of earth-scale clouds rendered by this method.

The above methods, however, take only the single scattering of light into account. Several real-time methods render clouds realistically while taking the multiple scattering of light into account. Harris and Lastra (2001) improved our method (Dobashi et al., 2000) to achieve faster rendering of clouds; their method partly takes the effects of multiple scattering into account. Szirmay-Kalos et al. (2005) proposed a real-time rendering method for static clouds that computes the multiple scattering of light at run-time. Sloan et al. (2002) proposed a precomputation-based approach, called precomputed radiance transfer, and rendered clouds illuminated by low-frequency lighting. Bouthors et al. (2006) proposed a real-time method for rendering stratiform clouds. They also proposed an interactive rendering method for clouds with multiple anisotropic scattering (Bouthors et al., 2008).

Although these methods can render realistic clouds, they assume that the density distribution inside the clouds is homogeneous and therefore cannot be applied to clouds with an inhomogeneous density distribution. For dynamic density distributions, Zhou et al. (2008) proposed a fast method for real-time rendering of dynamic smoke with multiple scattering, illuminated by low-frequency environmental lighting. We also proposed a real-time method for rendering dynamic clouds that takes the multiple scattering of light into account (Iwasaki et al., 2011). We developed an efficient method to create endless animations of dynamic clouds by preparing a database of dynamic clouds consisting of a finite number of volume data sets. The method can render dynamic clouds illuminated by sunlight and skylight with multiple scattering, and the viewpoint and the sunlight direction can be changed at run-time. Example images rendered by this method are shown in Fig. 6.

3.2. Offline rendering of clouds

Offline methods can produce much more realistic images by accurately computing the multiple scattering of light. Computing multiple scattering requires accurately solving for the state of energy equilibrium. Kajiya was the first to provide a solution to this problem, using spherical harmonics with a ray-tracing approach (Kajiya and Herzen, 1984). Clouds can be treated as inhomogeneous participating media, on which there has been much previous research (Cerezo et al., 2005; Gutierrez et al., 2009).

In the early days, researchers focused on voxel-based approaches for computing the multiple scattering of light, where the density distribution of the participating medium is represented on a three-dimensional grid. Rushmeier and Torrance developed a zonal method by extending the radiosity method to handle participating media (Rushmeier and Torrance, 1987). In this method, the simulation space is discretized into voxel elements and the state of energy equilibrium between the voxels is obtained by solving simultaneous linear equations. However, the method cannot handle an anisotropic phase function, so Max proposed an improved method that handles anisotropic phase functions by using direction bins (Max, 1994). Nishita et al. also developed an efficient method by exploiting the characteristics of the phase function of cloud particles (Nishita et al., 1996).

Fig. 5. Interactive rendering of earth-scale clouds.


Fig. 7. Realistic rendering of clouds by Monte Carlo sampling techniques.

A problem with these voxel-based methods is that the accuracy of the solution is limited by the resolution of the voxels. Jensen and Christensen addressed this problem by introducing the photon map (Jensen and Christensen, 1998). This method shoots many photons from the light sources and simulates light scattering using these photons. However, it essentially shares the same problem as the voxel-based approach: the accuracy depends on the number of photons. A more serious drawback of these methods is that the solution is biased; the numerical solution does not converge to the true solution even as the number of voxels or photons is increased.

This problem can be resolved by using techniques based on Monte Carlo sampling, and recent research focuses more on this approach (Lafortune and Willems, 1996; Pauly et al., 2000; Raab et al., 2006), since Monte Carlo sampling provides unbiased solutions. In this approach, the intensity at a pixel is calculated by integrating the contributions of a number of randomly generated light paths. A light path in the participating medium is constructed by randomly generating successive scattering events. To determine the location of a new scattering event, the distance from the previous scattering event, called the free path, needs to be sampled using random numbers. An importance sampling technique, called free path sampling, is often employed to sample the free path according to a probability density function derived from the optical depth of the participating medium. The major techniques for determining the locations of the scattering events are the ray-marching method (Perlin and Hoffert, 1989; Jensen and Christensen, 1998; Pauly et al., 2000) and its variants, e.g., Brown and Martin (2003). However, these are biased and produce different results for different sampling intervals. An unbiased solution can be obtained by using Woodcock tracking (Woodcock et al., 1965), which was proposed in the nuclear science community and is frequently used in nuclear science and medical physics (Badal and Badano, 2009). It was first introduced to the computer graphics community by Raab et al. (2006). Although it has been proven to be unbiased (Coleman, 1968), it is known to be inefficient for inhomogeneous media (Leppänen, 2007), because the mean free paths are much longer in sparse regions and the tracking must therefore take many small steps, often tens to thousands, before the next scattering event occurs. GPU implementations of Woodcock tracking have been proposed to accelerate the computation, e.g., by Badal and Badano (2009). Leppänen improved the efficiency of Woodcock tracking by separating dense regions from sparse regions and treating the two kinds of regions differently (Leppänen, 2007). However, unlike the problem setting in Leppänen (2007), the density distributions of the participating media appearing in computer graphics (such as smoke and clouds) are usually continuous, and it is not obvious how to adapt such a two-level separation.
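The following sketch illustrates standard Woodcock tracking as just described: tentative free paths are sampled against a constant majorant extinction, and each tentative collision is accepted as a real one with probability equal to the ratio of the local extinction to the majorant. The many rejected "null" collisions in sparse regions are the source of the inefficiency noted above. The toy extinction field and parameter names are illustrative.

```python
import numpy as np

def extinction(p):
    """Toy inhomogeneous extinction coefficient (a fuzzy sphere)."""
    return 2.0 * max(0.0, 1.0 - np.linalg.norm(p))

def woodcock_track(origin, direction, sigma_max, rng, max_dist=10.0):
    """Sample the distance to the next real scattering event, or return None
    if the ray leaves the medium. sigma_max must bound extinction() everywhere."""
    t = 0.0
    while True:
        # Tentative free path drawn against the constant majorant sigma_max.
        t -= np.log(1.0 - rng.random()) / sigma_max
        if t >= max_dist:
            return None                      # escaped the medium
        p = origin + t * direction
        # Accept as a real collision with probability sigma(p) / sigma_max;
        # otherwise it is a null collision and tracking continues.
        if rng.random() < extinction(p) / sigma_max:
            return t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    origin = np.array([0.0, 0.0, -2.0])
    direction = np.array([0.0, 0.0, 1.0])
    sigma_max = 2.0                          # global bound on extinction()
    dists = [woodcock_track(origin, direction, sigma_max, rng) for _ in range(10000)]
    hits = [d for d in dists if d is not None]
    print("scattering events:", len(hits), " mean distance of hits:", np.mean(hits))
```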

To overcome the above problem, we developed an adaptive and unbiased technique (Yue et al., 2010). The method partitions the bounding box of the medium into sub-spaces (partitions) according to the spatial variation of the mean free path in the medium. The partitioning is represented as a kd-tree. During rendering, the locations of the scattering events are determined adaptively using the kd-tree. This sampling technique is proven to be unbiased. One important contribution of the method is an automatic partitioning scheme based on a cost model for evaluating the sampling efficiency; we find the optimal partitioning with respect to this cost model by solving the largest empty rectangle problem. The method is one to two orders of magnitude faster in overall rendering speed than previous methods for highly inhomogeneous media. Fig. 7 shows examples of clouds rendered by this method.

4. Animating clouds

Fascinating animations of clouds with changing shapes and colors are often used in movies, commercial films, and so on. They are often created by filming real clouds in advance and replaying the footage quickly. Since generating such realistic animations by computer graphics is useful, many methods have been developed. One of the most popular and computationally inexpensive ways of animating clouds is the procedural approach (Voss, 1983; Gardner, 1985; Ebert and Parent, 1990; Ebert et al., 1990; Ebert, 1997; Neyret, 1997; Stam and Fiume, 1993; Stam, 1994; Dobashi et al., 2000). Although these techniques can create realistic images of clouds, they are limited when realistic cloud motion is required.

A more natural way to model the motion of clouds is to simulate the physical processes of cloud formation by solving their governing equations. The physical processes involved in cloud formation are illustrated in Fig. 8. First, the ground is heated by the sun and the air is then heated by the ground. This creates ascending air currents due to thermal buoyancy, and the temperature of the rising air decreases due to adiabatic cooling (left of Fig. 8). At a certain altitude, the vapor in the air parcel therefore undergoes a phase transition and condenses, and a cloud is generated, as shown on the right of Fig. 8. The phase transition process is very important for creating realistic cloud animations. Furthermore, when the phase transition occurs, latent heat is released, which creates additional buoyancy and promotes further growth of the clouds into higher regions. Fig. 9 shows examples of clouds generated by this type of simulation.

In computer graphics, Kajiya and Herzen were the first to use numerical fluid analysis for the visual simulation of clouds (Kajiya and Herzen, 1984). In their method, the equations of atmospheric fluid dynamics are solved numerically. However, this model does not include adiabatic cooling or the temperature lapse in the simulation space, which are important for cumulus dynamics, and the results are not very realistic. There has been much research on gas simulation in computer graphics (Foster and Metaxas, 1997; Stam and Fiume, 1993, 1995). Among this work, Stam introduced a stable fluid simulation model that is commonly used for simulating fluid phenomena (Stam, 1999). The method is, however, focused on the motion of smoke, and the possibility of applying it to clouds was not discussed.


Fig. 8. Overview of the cloud formation process. Rising air parcels are generated due to the heat from the ground and the temperatures of the parcels decrease due to adiabatic expansion, as illustrated by the left figure. Then, clouds are generated by phase transition from water vapor to water droplets (the right figure).

Fig. 10. Generating a 3D target shape from a 2D input contour (pink curve) for controlling the cloud simulation.

We proposed a qualitative model of cloud simulation (Miyazaki et al., 2001) using a coupled map lattice (CML). CML is an extension of cellular automata and an approximation technique that reduces the calculation cost (Yanagita and Kaneko, 1995). We developed a method that can create various clouds based on the simulation of ascending air currents and Bénard convection (Miyazaki et al., 2001); the CML model was originally designed for simulating Bénard convection. In this simulation of ascending air currents, the temperature distribution is assumed to be invariable in the simulation space. Although the resulting shape looks like a cumulus cloud, the advection of temperature, which is fundamental to the dynamics of cumulus clouds, cannot be simulated. We therefore later proposed a more practical and realistic cloud simulation model (Miyazaki et al., 2002). This method takes into account the cloud formation processes shown in Fig. 8; it includes the phase transition and adiabatic cooling that are not included in smoke simulations (Foster and Metaxas, 1997; Stam, 1999; Fedkiw et al., 2001) and can create more realistic cloud animations than previous cloud models (Kajiya and Herzen, 1984; Miyazaki et al., 2001) by simulating the interactions among the vapor, the cloud water, the temperature, and the velocity field.
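To make the role of the phase transition and latent heat concrete, the sketch below shows a heavily simplified per-cell update in the spirit of this model: vapor above a temperature-dependent saturation level condenses into cloud water, the released latent heat warms the cell, and the warmer cell would feed the buoyancy term of the fluid solver (not shown). The saturation curve, constants, and names are illustrative assumptions, not the actual formulation of Miyazaki et al. (2002).

```python
import numpy as np

def saturation(temp_c):
    """Simplified saturation vapor amount as a function of temperature (degC).
    A Magnus-style exponential; the constants are illustrative."""
    return 0.01 * np.exp(0.06 * temp_c)

def phase_transition_step(q_v, q_c, temp, dt, condense_rate=1.0, latent_heat=2.0):
    """One per-cell update of vapor q_v, cloud water q_c and temperature.

    Vapor above saturation condenses into cloud water; cloud water in
    sub-saturated cells evaporates back; the latent heat warms or cools the
    cell, which in turn drives the buoyancy force of the fluid solver.
    """
    q_s = saturation(temp)
    # Positive where supersaturated (condensation), negative where cloud
    # water can evaporate into dry air; bounded by the available amounts.
    delta = np.clip(condense_rate * dt * (q_v - q_s), -q_c, q_v)
    q_v = q_v - delta
    q_c = q_c + delta
    temp = temp + latent_heat * delta        # release/absorb latent heat
    return q_v, q_c, temp

def buoyancy(temp, ambient_temp, k_buoy=0.5):
    """Thermal buoyancy term added to the vertical velocity by the solver."""
    return k_buoy * (temp - ambient_temp)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 32
    temp = 20.0 - 10.0 * np.linspace(0.0, 1.0, n)     # cooler with altitude
    q_v = np.full(n, 0.02) + 0.005 * rng.random(n)    # humid rising column
    q_c = np.zeros(n)
    for _ in range(50):
        q_v, q_c, temp = phase_transition_step(q_v, q_c, temp, dt=0.1)
    print("cloud water in the top cells:", q_c[-5:])
```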

Fig. 9. Animation of clouds created by solving atmospheric fluid dynamics.

5. Inverse approach

Although realistic clouds can be synthesized by using the methods described so far, one serious problem remains unsolved: it is difficult to generate clouds with a desired appearance. Animators need to adjust many non-intuitive parameters manually by trial and error, and the expensive computational cost of the numerical simulation makes this process extremely tedious. We addressed this problem and developed two methods, based on an inverse problem approach, for determining the two important factors that affect the appearance: the shapes (Dobashi et al., 2008) and the colors (Dobashi et al., 2012). In the following, we briefly explain these two methods.

5.1. Creating desired shapes by controlling cloud simulation

A straightforward approach to generating a desired cloud shape is to apply previous methods for controlling smoke or water (Fattal and Lischinski, 2004; Shi and Yu, 2005) to clouds. However, we found that this approach does not produce convincing results, because several physical processes that are important in cloud formation, such as the phase transition from water vapor to water droplets, are not present in those other phenomena.

There are two requirements for modeling cloud shapes: (1) realistic shapes have to be generated, and (2) the shape should closely match the desired shape specified by the user. For the first requirement, we employ the method developed by Miyazaki et al. (2002) for the numerical simulation of the cloud formation process based on atmospheric fluid dynamics (see Section 4). For the second requirement, we control the physical parameters affecting the cloud formation process with a feedback control mechanism, which is described below.

The user specifies a contour line of the desired cloud shape on the screen, as indicated by the pink curve in Fig. 10. A three-dimensional target shape is then generated from the contour line, and the simulation is controlled so that the difference between the target shape and the simulated clouds approaches zero. We developed two controllers, a latent heat controller and a water vapor supplier, which automatically adjust the amounts of latent heat and water vapor. The latent heat controller increases the latent heat and the water vapor supplier adds water vapor in the regions where the clouds have not yet reached the top of the target shape. By combining these controllers, the vertical development and the generation of clouds are controlled until the target shape is formed. An important aspect of our control mechanism is that the simulation is controlled implicitly: neither external forces nor cloud densities are generated explicitly. This prevents the controller from disrupting the cloud dynamics and results in realistic cloud formation. Figs. 11 and 12 show clouds generated by our method. Fig. 11 shows a typical cumulonimbus shape generated by our control method. In Fig. 12, the user specifies an unnatural cloud shape, a skull; our method can successfully generate realistic clouds even for this unnatural shape.
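The sketch below shows the structure of such a feedback loop in a drastically simplified one-dimensional form: wherever the simulated cloud top is below the user's target contour, the water vapor supplier and the latent heat controller add their respective quantities, and a toy growth rule stands in for the full atmospheric simulation. The controller gains and the growth rule are illustrative assumptions, not the actual controllers of Dobashi et al. (2008).

```python
import numpy as np

def control_step(cloud_top, target_top, vapor, heat,
                 vapor_supply=0.05, heat_gain=0.1, heat_decay=0.5):
    """One feedback-control step over a row of columns.

    Where the simulated cloud top is below the target contour, the water
    vapor supplier adds vapor and the latent heat controller boosts heat;
    where the target is reached, both control inputs are switched off and
    the extra heat decays away.
    """
    below = cloud_top < target_top
    vapor = vapor + vapor_supply * below
    heat = heat_decay * heat + heat_gain * below
    return vapor, heat

def toy_growth_step(cloud_top, vapor, heat, dt=0.1):
    """Stand-in for the cloud simulation: vertical growth driven by buoyancy
    from heat, consuming vapor (the real system solves the full dynamics)."""
    growth = dt * heat * np.minimum(vapor, 1.0)
    vapor = np.maximum(vapor - 0.5 * growth, 0.0)
    return cloud_top + growth, vapor

if __name__ == "__main__":
    n = 64
    x = np.linspace(0.0, 1.0, n)
    target_top = 1.0 + 2.0 * np.exp(-((x - 0.5) ** 2) / 0.02)  # user contour
    cloud_top = np.zeros(n)
    vapor = np.full(n, 0.2)
    heat = np.zeros(n)
    for step in range(400):
        vapor, heat = control_step(cloud_top, target_top, vapor, heat)
        cloud_top, vapor = toy_growth_step(cloud_top, vapor, heat)
    print("max |cloud_top - target_top|:", np.abs(cloud_top - target_top).max())
```

Note that the control acts only through the vapor and heat fields, mirroring the implicit control described above: the cloud shape itself is never edited directly.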


Fig. 11. Examples of cumulonimbus clouds generated by our control method. The pink curve is specified by the user.

Fig. 12. Skull-shaped clouds generated by our control method.

5.2. Adjusting parameters for rendering clouds

Although the colors of clouds are computed by simulating the light scattering inside them, realistic images are not always obtained unless the user specifies good parameters for the light scattering simulation. Choosing appropriate parameters to produce the desired appearance of clouds is a difficult, tedious, and time-consuming task. To address this problem, we developed an automatic method for adjusting the parameters so that the appearance of the synthetic clouds looks similar to a photograph of clouds specified by the user.

The inputs to our system are volume data representing the density distribution of the synthetic clouds and a photograph of real clouds. The direction of the sunlight and the camera parameters used to render the synthetic clouds also need to be specified by the user. Our system then searches for the optimal parameters that minimize the visual difference between the synthetic clouds and the photograph; we use color histograms to measure the visual difference. The intensity of the clouds in the synthetic image is calculated based on the rendering equations for light scattering (Nishita et al., 1996; Cerezo et al., 2005; Zhou et al., 2008; Yue et al., 2010). The intensity of the clouds depends on many parameters, such as the intensities of the sunlight and the skylight and the optical properties of the atmospheric and cloud particles. We assume that the only light source illuminating the clouds is the sun. The attenuation and scattering of light due to atmospheric particles between the clouds and the viewpoint are also taken into account.
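The following is a schematic sketch of this optimization loop, with a toy renderer standing in for the volumetric path tracer: a small population of parameter vectors (here, sunlight intensity, scattering albedo, and a haze term) is evolved by a simple genetic algorithm so that the intensity histogram of the rendered image matches that of the target photograph. The parameterization and the GA operators are illustrative assumptions, not our actual implementation.

```python
import numpy as np

def toy_render(params, size=64):
    """Stand-in for the volumetric path tracer: a grayscale image whose
    brightness and contrast depend on the rendering parameters."""
    sun, albedo, haze = params
    rng = np.random.default_rng(42)          # fixed cloud 'structure'
    base = rng.random((size, size))
    return np.clip(sun * albedo * base + haze * (1.0 - base), 0.0, 1.0)

def histogram_difference(img_a, img_b, bins=32):
    """Visual difference: L1 distance between normalized intensity histograms."""
    h_a, _ = np.histogram(img_a, bins=bins, range=(0.0, 1.0), density=True)
    h_b, _ = np.histogram(img_b, bins=bins, range=(0.0, 1.0), density=True)
    return np.abs(h_a - h_b).sum()

def optimize(target_img, pop_size=24, generations=40, mutation=0.1, rng=None):
    """Minimal genetic algorithm over the rendering parameters."""
    if rng is None:
        rng = np.random.default_rng(0)
    pop = rng.uniform(0.0, 1.5, size=(pop_size, 3))           # (sun, albedo, haze)
    for _ in range(generations):
        fitness = np.array([histogram_difference(toy_render(p), target_img)
                            for p in pop])
        parents = pop[np.argsort(fitness)[:pop_size // 2]]     # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(3) < 0.5, a, b)         # crossover
            child = child + rng.normal(0.0, mutation, size=3)   # mutation
            children.append(np.clip(child, 0.0, 1.5))
        pop = np.vstack([parents, children])
    fitness = np.array([histogram_difference(toy_render(p), target_img) for p in pop])
    return pop[np.argmin(fitness)]

if __name__ == "__main__":
    photo = toy_render(np.array([1.2, 0.8, 0.2]))    # 'photograph' parameters
    print("estimated (sun, albedo, haze):", np.round(optimize(photo), 2))
```

Because several parameters affect the image in similar ways (for instance, the sunlight intensity and the albedo act through their product here), different parameter vectors can yield equally good histogram matches; the goal is a visually similar result rather than a unique parameter estimate.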

The minimization problem is solved by repeatedly rendering the clouds with various parameter settings using genetic algorithms. Images of the synthetic clouds are repeatedly created by an efficient volumetric path tracer with different parameter settings, and the genetic algorithm modifies the parameters according to the visual difference. Fig. 13 shows examples of cumulonimbus clouds rendered by our method. The clouds are generated by fluid simulation (Miyazaki et al., 2002). The inset in each image is the input photograph of the clouds. By estimating the parameters for rendering the clouds, the subtle color variations observed in the photograph are reproduced in the synthetic clouds. Fig. 14 shows an example of unnatural clouds, generated using the controlled simulation described in the previous section. In this example, we replaced the real clouds in the input photograph with the synthetic clouds, rendered using the optimized parameters. The synthetic clouds are naturally composited onto the real photograph.

6. Conclusion

In this article, we have reviewed previous work on the visual simulation of clouds. It is remarkable that such a large body of research has focused on this specific topic. For modeling clouds, methods based on three types of approach have been developed: procedural, physically based, and image-based. For rendering clouds, both real-time and offline methods have been developed. The real-time methods usually use GPUs, and precomputation-based approaches are sometimes employed to take multiple scattering into account; the offline methods can compute multiple scattering very accurately, and recent research mostly focuses on Monte Carlo sampling techniques to create highly realistic images. For computing cloud motion to create realistic animations, procedural methods have often been used because of their efficiency, and physically based methods using fluid simulation are also useful for computing realistic motion. With all of these methods, however, it is often difficult to choose appropriate parameters to generate clouds with the desired appearance.

We introduced our inverse design approach to the visual simulation of clouds in order to overcome this problem. Our approach takes two-dimensional information as input and computes three-dimensional information to create realistic images of clouds. For the cloud shapes, a two-dimensional contour line of the desired shape is used to generate the three-dimensional shapes of the clouds.

Fig. 13. Examples of our method for adjusting parameters for rendering clouds.


Fig. 14. Composition of synthetic clouds onto a photograph.

For the cloud colors, a photograph of real clouds is used to find optimal parameters that produce realistic images of the synthetic clouds.

Clouds are fascinating natural phenomena that always attract our attention. Although it has become possible to synthesize realistic images and animations of clouds, there is still room for further work on this topic. For example, the resolution of synthetic clouds is not yet sufficient compared with clouds in the real world, and the efficient rendering and simulation of large-scale clouds covering areas of hundreds of kilometers also remains to be addressed. The inverse approach should also be improved so that, for example, realistic cloud animations can be created in real time from the user's sketch. We will keep working on these issues.

References

Badal, A., Badano, A., 2009. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit. Med. Phys. 36 (11), 4878–4880.

Blinn, J., 1982. Light reflection functions for simulation of clouds and dusty surfaces. Comput. Graph. 16 (3), 21–29.

Bouthors, A., Neyret, F., Lefebvre, S., 2006. Real-time realistic illumination and shading of stratiform clouds. In: Proc. of Eurographics Workshop on Natural Phenomena. pp. 41–50.

Bouthors, A., Neyret, F., Max, N., Bruneton, E., Crassin, C., 2008. Interactive multiple anisotropic scattering in clouds. In: Proceedings of the 2008 Symposium on Interactive 3D Graphics and Games. ACM, pp. 173–182.

Brown, F.B., Martin, W.R., 2003. Direct sampling of Monte Carlo flight paths in media with continuously varying cross-sections. In: Proc. ANS Mathematics & Computation Topical Meeting.

Cerezo, E., Pérez, F., Pueyo, X., Serón, F.J., Sillion, F.X., 2005. A survey on participating media rendering techniques. Vis. Comput. 21 (5), 303–328.

Coleman, W., 1968. Mathematical verification of a certain Monte Carlo sampling technique and applications of the technique to radiation transport problems. Nucl. Sci. Eng. 32, 76–81.

Dobashi, Y., Iwasaki, W., Ono, A., Yamamoto, T., Yue, Y., Nishita, T., 2012. An inverse problem approach for automatically adjusting the parameters for rendering clouds using photographs. ACM Trans. Graph. 31 (6), Article 145.

Dobashi, Y., Kaneda, K., Yamashita, H., Okita, T., Nishita, T., 2000. A simple, efficient method for realistic animation of clouds. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., pp. 19–28.

Dobashi, Y., Kusumoto, K., Nishita, T., Yamamoto, T., 2008. Feedback control of cumuliform cloud formation based on computational fluid dynamics. ACM Trans. Graph. 27 (3), Article 94.

Dobashi, Y., Nishita, T., Yamashita, H., Okita, T., 1998. Using metaballs to model and animate clouds from satellite images. Vis. Comput. 15 (9), 471–482.

Dobashi, Y., Shinzo, Y., Yamamoto, T., 2010. Modeling of clouds from a single photograph. Comput. Graph. Forum 29 (7), 2083–2090.

Ebert, D.S., 1997. Volumetric modeling with implicit functions: A cloud is born. In: Visual Proc. of SIGGRAPH 97. p. 147.

Ebert, D.S., 2003. Texturing & Modeling: A Procedural Approach. Morgan Kaufmann.

Ebert, D.S., Carlson, W.E., Parent, R.E., 1990. Solid spaces and inverse particle systems for controlling the animation of gases and fluids. Vis. Comput. 10 (4), 179–190.

Ebert, D.S., Parent, R.E., 1990. Rendering and animation of gaseous phenomena by combining fast volume and scanline a-buffer techniques. Comput. Graph. 24 (4), 357–366.

Fattal, R., Lischinski, D., 2004. Target-driven smoke animation. ACM Trans. Graph. 23 (3), 439–446.

Fedkiw, R., Stam, J., Jensen, H.W., 2001. Visual simulation of smoke. In: Proc. SIGGRAPH 2001. pp. 15–22.

Foster, N., Metaxas, D., 1997. Modeling the motion of a hot, turbulent gas. In: Proc. of SIGGRAPH 97. pp. 181–188.

Gardner, G.Y., 1985. Visual simulation of clouds. In: Proceedings of SIGGRAPH 1985, Comput. Graph. 19 (3), 297–304.

Gutierrez, D., Jensen, H.W., Jarosz, W., Donner, C., 2009. Scattering. In: SIGGRAPH ASIA Courses.

Harris, M.J., Lastra, A., 2001. Real-time cloud rendering. Comput. Graph. Forum 20 (3), 76–84.

Iwasaki, K., Nishino, T., Dobashi, Y., 2011. Real-time rendering of endless cloud animation. In: Proc. of Pacific Graphics 2011 (Short Papers).

Jensen, H.W., Christensen, P.H., 1998. Efficient simulation of light transport in scenes with participating media using photon maps. In: ACM SIGGRAPH 1998. pp. 311–320.

Kajiya, J.T., Herzen, B.P.V., 1984. Ray tracing volume densities. Comput. Graph. (Proceedings of SIGGRAPH 1984) 18 (3), 165–174.

Lafortune, E.P., Willems, Y.D., 1996. Rendering participating media with bidirectional path tracing. In: Proc. EGWR 1996. pp. 91–100.

Leppänen, J., 2007. Development of a New Monte Carlo Reactor Physics Code (Ph.D. thesis). Helsinki University of Technology.

Max, N., 1994. Efficient light propagation for multiple anisotropic volume scattering. In: Proc. of the Fifth Eurographics Workshop on Rendering. pp. 87–104.

Max, N., Crawfis, R., Williams, D., 1992. Visualizing wind velocities by advecting cloud textures. In: Proc. IEEE Visualization 1992. pp. 179–183.

Max, N.L., 1986. Light diffusion through clouds and haze. Comput. Vis. Graph. Image Process. 33 (3), 280–292.

Miyazaki, R., Dobashi, Y., Nishita, T., 2002. Simulation of cumuliform clouds based on computational fluid dynamics. In: Proceedings of EUROGRAPHICS 2002 Short Presentations. pp. 405–410.

Miyazaki, R., Yoshida, S., Nishita, T., Dobashi, Y., 2001. A method for modeling clouds based on atmospheric fluid dynamics. In: Proceedings of the 9th Pacific Conference on Computer Graphics and Applications. pp. 363–372.

Neyret, F., 1997. Qualitative simulation of convective cloud formation and evolution. In: Proceedings of Eurographics Computer Animation and Simulation Workshop 1997. pp. 113–124.

Nishita, T., Dobashi, Y., Nakamae, E., 1996. Display of clouds taking into account multiple anisotropic scattering and sky light. In: ACM SIGGRAPH 1996. pp. 379–386.

Pauly, M., Kollig, T., Keller, A., 2000. Metropolis light transport for participating media. In: Proc. EGWR 2000. pp. 11–22.

Perlin, K., Hoffert, E.M., 1989. Hypertexture. ACM SIGGRAPH Comput. Graph. 23 (3), 253–262 (SIGGRAPH 1989).

Raab, M., Seibert, D., Keller, A., 2006. Unbiased global illumination with participating media. In: Proc. Monte Carlo and Quasi-Monte Carlo Methods 2006. pp. 591–605.

Rushmeier, H.E., Torrance, K.E., 1987. The zonal method for calculating light intensities in the presence of a participating medium. Comput. Graph. 21 (4), 293–302.

Sakas, G., 1993. Modeling and animating turbulent gaseous phenomena using spectral synthesis. Vis. Comput. 9 (4), 200–212.

Schpok, J., Simons, J., Ebert, D.S., Hansen, C., 2003. A real-time cloud modeling, rendering, and animation system. In: Proceedings of Symposium on Computer Animation 2003. pp. 160–166.

Shi, L., Yu, Y., 2005. Taming liquids for rapidly changing targets. In: Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. pp. 229–236.

Sloan, P.-P., Kautz, J., Snyder, J., 2002. Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments. ACM Trans. Graph. 21 (3), 527–536.

Stam, J., 1994. Stochastic rendering of density fields. In: Proc. of Graphics Interface 94. pp. 51–58.

Stam, J., 1999. Stable fluids. In: Proceedings of ACM SIGGRAPH 1999, Annual Conference Series. pp. 121–128.

Stam, J., Fiume, E., 1993. Turbulent wind fields for gaseous phenomena. In: Proc. of SIGGRAPH '93. pp. 369–376.

Stam, J., Fiume, E., 1995. Depicting fire and other gaseous phenomena using diffusion processes. In: Proc. of SIGGRAPH '95. pp. 129–136.


Szirmay-Kalos, L., Sbert, M., Umenhoffer, T., 2005. Real-time multiple scattering in participating media with illumination networks. In: Eurographics Symposium on Rendering 2005. pp. 277–282.

Voss, R., 1983. Fourier synthesis of Gaussian fractals: 1/f noises, landscapes, and flakes. In: SIGGRAPH '83: Tutorial on State of the Art Image Synthesis, vol. 10.

Woodcock, E., Murphy, T., Hemmings, P., Longworth, T., 1965. Techniques used in the GEM code for Monte Carlo neutronics calculations in reactors and other systems of complex geometry. In: Proc. Conference on the Application of Computing Methods to Reactor Problems, ANL-7050. pp. 557–579.

Yanagita, T., Kaneko, K., 1995. Rayleigh–Bénard convection: Patterns, chaos, spatiotemporal chaos and turbulence. Physica D 82, 288–313.

Yuan, C., Liang, X., Hao, S., Qi, Y., Zhao, Q., 2014. Modeling cumulus cloud shapes from a single image. Comput. Graph. Forum 33, 288–297.

Yue, Y., Iwasaki, K., Chen, B.-Y., Dobashi, Y., Nishita, T., 2010. Unbiased, adaptive stochastic sampling for rendering inhomogeneous participating media. ACM Trans. Graph. 29 (6), Article 177.

Zhou, K., Ren, Z., Lin, S., Bao, H., Guo, B., Shum, H.-Y., 2008. Real-time smoke rendering using compensated ray marching. ACM Trans. Graph. 27 (3), Article 36.