Future Generation Computer Systems 14 (1998) 223-229
Dynamic radiosity shadows for interactive virtual environments *

Frank Schöffel

Fraunhofer Institute for Computer Graphics, Rundeturmstraße 6, 64283 Darmstadt, Germany

* Expanded version of a talk presented at the Euro-VR Mini-Conference '97 (Amsterdam, November 1997). Color pictures available at http://www.elsevier.nl/locate/future. E-mail: [email protected]

Abstract

Current state-of-the-art virtual reality (VR) systems provide realistic illumination only for static scenes, since shadows are calculated off-line and cannot be updated at interactive rates. This paper presents a new method for incorporating realistic radiosity shadows into interactive VR applications as well. A radiosity process and the VR rendering process are coupled. User interactions trigger the radiosity process, which updates the illumination by providing corrected vertex colors. To speed up radiosity updates, coherences are exploited. Thus, on-line updated radiosity shadows can be integrated into interactive, dynamic virtual environments, providing a high degree of visual realism. © 1998 Elsevier Science B.V.

Keywords: Virtual reality; Radiosity; Soft shadows; Scene manipulation; Shadow update

1. Introduction

Realistic rendering is an important requirement in most virtual reality (VR) applications. Shadows and lighting effects increase the realistic impression dramatically. Furthermore, shadows help the user to perceive spatial relationships and to orient themselves in a 3D world [14], as depicted in Fig. 1. For calculating the illumination in virtual worlds, the so-called radiosity method can be applied. Radiosity shadows can be calculated in advance, and can then be used for rendering during walk-through. This technique (see Fig. 2) is state-of-the-art in today's VR systems [11,13]. However, this method is applicable only to static environments: Once material properties or geometry
are changed, the entire radiosity preprocess has to be repeated in order to generate correct illumination, which is much too slow for interactive applications. Applications that include dynamic, moving objects, or in which the user may manipulate the scene geometry, therefore currently cannot benefit from realistic soft shadows. In this paper, a new method is presented for integrating soft radiosity shadows into interactive, dynamic virtual worlds as well, by coupling the rendering process of a VR system more tightly with a radiosity update process. In Section 2, the radiosity method is briefly introduced, and in Section 3 an overview of traditional techniques for including physically based illumination in static VR applications is given. In Section 4, the new method of coupling the radiosity update process with the VR system is described. Results are presented in Section 5, and in Section 6 conclusions are drawn and an outlook is given.
Fig. 1. Shadows help to understand spatial relationships: The distance between table top and keyboard is hard to recognize in the left image, while it is obvious in the right image.
Fig. 2. State-of-the-art: Pre-calculating radiosity shadows for usage within VR applications.
2. Global illumination simulation using the radiosity method

Local illumination models can very efficiently calculate the illumination on a given surface by taking into account solely the surface's and the light source's geometry and characteristics. These methods are often available in hardware and therefore are used for fast rendering purposes, for example in VR applications. On the other hand, they cannot simulate shadows and interreflections between surfaces. Therefore, for simulating illumination on a physically correct basis, so-called global illumination techniques have to be applied. Those methods consider the whole 3D scene
for calculating the illumination on a given surface, and thus can provide shadows and interreflections, which are essential for achieving a realistic impression of the illumination in a virtual environment.

The raytracing method, which is the most popular global illumination method, calculates a still image from a given point of view. Results of raytracing processes typically are very brilliant pictures, including impressive mirror effects, specular reflections and sharp shadow boundaries. However, because of its view dependence, the raytracing method is unsuitable for use in VR applications, where the user is free to move around and change his point of view constantly.

Another global illumination technique is the radiosity method [6], which originated from radiant heat transfer simulation and traditionally treats each surface as Lambertian, i.e., as a perfectly diffuse reflector. Thus, the method is independent of the point of view, but cannot simulate specular reflections. Since area light sources can be treated with the radiosity method, realistic, soft shadows can be obtained. Because of its view independence, this method is very well suited for simulating global illumination in virtual environments.

When applying a radiosity simulation, all object surfaces have to be subdivided into a set of smaller patches. In the simplest case, this subdivision (meshing) can be a uniform grid, but more sophisticated approaches are possible as well. All patches - except the light sources - are initialized to a radiosity value of zero. The light sources have an initial radiosity value greater than zero. Then, the exchange of radiant energy between each pair of patches is calculated based on thermodynamic equations, solving the radiosity equation for each patch:

$$ B_r = E_r + \rho_r \sum_{s=1}^{n} B_s F_{r,s}. $$
In this formula, the radiosity B_r of a given patch r consists of the patch's emission E_r and the energy arriving at r from all n other patches s in the scene, weighted by r's reflectivity ρ_r. F_{r,s} denotes the form-factor between r and s, i.e., the fraction of the energy leaving r that arrives directly at s. The form-factor depends
on geometric relationships (distance, size, mutual visibility, etc.) of r and s only. Its calculation is the most computationally expensive part of the radiosity simulation.

In the original radiosity method [6], first all form-factors between each pair of patches are calculated, and then a linear system of equations is solved (in fact, the system is usually solved three times, once each for the red, green and blue color channels). The result is a radiosity value for each patch (or, to be more exact, three values for red, green and blue). These values are then mapped to colors which are used for display. The traditional method needs O(n²) memory for storing the n² form-factors, and O(n²) time for solving the equation system. This complexity makes the method impractical for real-world scenes. Therefore, incremental radiosity methods have been developed. One of these is the progressive refinement method, which reduces the complexity to O(n).

Progressive refinement radiosity [3] starts with the brightest patch in the scene and distributes its energy to all other scene patches according to the form-factors between the receivers and the sender patch. In the second iteration, the next brightest patch is selected as sender, and so on. Each iteration increases the current radiosity values at all patches by some amount, and the energy transported in each iteration decreases with the number of iterations performed. Therefore, one can stop the process after N iterations, or when the amount of energy transported falls below a given threshold. The result of the progressive refinement method converges to the correct result of the traditional method for N → ∞. But for use in VR applications, a very small number of iterations often suffices for a good, realistic impression; in fact, often only direct illumination is simulated.
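To make the shooting step concrete, the following sketch implements progressive refinement for a single color channel in Python. It is only an illustration under simplifying assumptions - the Patch layout, the constant toy form-factor and the selection of the shooter are stand-ins, not the implementation used in the system described in this paper.

```python
# Minimal progressive-refinement radiosity sketch for a single color channel.
# Illustrative only: the Patch layout, the constant toy form-factor and the
# shooter selection are stand-ins, not the implementation described here.

from dataclasses import dataclass

@dataclass
class Patch:
    emission: float         # E_r, non-zero only for light sources
    reflectivity: float     # rho_r in [0, 1)
    area: float
    radiosity: float = 0.0  # B_r, converges towards the radiosity solution
    unshot: float = 0.0     # energy not yet distributed to the scene

def progressive_refinement(patches, form_factor, iterations):
    """form_factor(s, r) must return F_{s,r}, the fraction of the energy
    leaving patch s that arrives directly at r (the expensive part)."""
    for p in patches:
        p.radiosity = p.unshot = p.emission
    for _ in range(iterations):
        # shoot from the patch with the most undistributed energy
        s = max(patches, key=lambda p: p.unshot * p.area)
        for r in patches:
            if r is s:
                continue
            # energy received at r from the current shooting patch s
            dB = r.reflectivity * s.unshot * form_factor(s, r) * s.area / r.area
            r.radiosity += dB
            r.unshot += dB
        s.unshot = 0.0      # all of s's energy has now been shot
    return patches

# Toy usage: one light source, two diffuse receivers, made-up form-factors.
scene = [Patch(emission=1.0, reflectivity=0.0, area=1.0),
         Patch(emission=0.0, reflectivity=0.5, area=1.0),
         Patch(emission=0.0, reflectivity=0.5, area=1.0)]
for p in progressive_refinement(scene, lambda s, r: 0.2, iterations=3):
    print(round(p.radiosity, 4))
```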
3. Incorporating radiosity shadows into virtual environments

For displaying the calculated illumination, the radiosity values have to be mapped to colors on the scene geometry. Two strategies are widely used for this task: either illumination maps are created for each
polygon, which are used as textures during rendering [11,13], or the radiosities are mapped to vertex colors which are displayed using Gouraud shading [1].

Textures can be generated easily if uniform meshing is used, assigning the color of each patch to one pixel of the texture. For other types of meshes, shadow textures can also be generated [11], but this process is more complicated. During rendering, simply the textured polygons are drawn. Modern graphics hardware supports texture filtering, so rendering can be done fast and shadow textures appear smooth. But since the number of polygons can be very large, enormous numbers of - usually small - textures have to be displayed, and texture memory is exhausted quickly.

Vertex colors get around this problem. Here, patch colors (derived from patch radiosities) are interpolated to patch vertex colors (a minimal sketch of this mapping is given at the end of this section). During rendering, the patches are drawn instead of the original polygons. Thus, a shortcoming of this method is that the number of polygons to render increases significantly. But this technique is directly applicable to all kinds of meshes, and furthermore, it is easy to apply to initially textured polygons - in contrast to the shadow texture method, in which two textures have to be blended. In addition, hybrid methods can be applied, using textures for polygons with high patch resolution and vertex colors for others.

Applying radiosity illumination to a virtual environment results in a new scene - enriched by shadows represented by either textures or vertex colors. This scene, resulting from the illumination preprocess, then acts as input for the VR system. During a walkthrough application, it is then rendered from different points of view, reacting to any camera movement controlled by the user. This technique of using pre-computed shadows, which is depicted in Fig. 2, is state-of-the-art in today's VR systems [1,11,13].

The drawback of this method is that the illumination is calculated off-line, and therefore any parameters influencing the lighting situation have to remain fixed in the VR world. View changes can be handled within the VR system because of the view independence of the radiosity simulation. But any modification of geometry or material properties necessitates a re-calculation of the
illumination. This cannot be done within the VR application, however, since it is too time-consuming. As a consequence, one has to restrict existing VR systems to either poorly illuminated worlds which can be interactively modified, or static worlds which include realistic shadows.
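As an illustration of the vertex-color strategy sketched above, the following Python fragment averages the radiosities of the patches sharing a vertex into a displayable color. The data layout, the plain averaging and the clamping are assumptions made for this example, not the mapping used in the systems cited above.

```python
# Illustrative mapping of patch radiosities to vertex colors for Gouraud
# shading. The data layout, the plain averaging and the clamping are
# assumptions made for this sketch.

from collections import defaultdict, namedtuple

Patch = namedtuple("Patch", "vertices radiosity")   # vertex ids, RGB radiosity

def radiosities_to_vertex_colors(patches):
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])
    counts = defaultdict(int)
    for patch in patches:
        for v in patch.vertices:
            for c in range(3):
                sums[v][c] += patch.radiosity[c]
            counts[v] += 1
    # average the radiosities of the patches meeting at each vertex and clamp
    # to the displayable range (a crude stand-in for proper tone mapping)
    return {v: tuple(min(1.0, s[c] / counts[v]) for c in range(3))
            for v, s in sums.items()}

# Two quadrilateral patches sharing vertices 1 and 2:
print(radiosities_to_vertex_colors([Patch((0, 1, 2, 3), (0.8, 0.8, 0.8)),
                                    Patch((1, 4, 5, 2), (0.2, 0.2, 0.2))]))
```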
4. Interlinking structure of online radiosity

In order to overcome this drawback and make radiosity shadows applicable in interactive scenarios, we coupled the radiosity process more tightly with the VR rendering process. We have implemented the radiosity process as a separate process running in parallel to the VR system's renderer. Both processes communicate via shared memory (Fig. 3).

Fig. 3. Online radiosity: Interlinking the VR rendering process with a parallel radiosity update process.

4.1. Initialization phase

At startup time, after setting up the inter-process communication, both processes load the same scene description. The radiosity process then calculates an initial radiosity solution by performing a number of progressive refinement iterations on a uniform patch mesh. The number of iterations to be performed is pre-defined by the user in a startup script.

In order to speed up the radiosity update steps later on, coherences have to be exploited [2,5,12]. Therefore, we applied our concept of the so-called shadow-form-factor-list (SFFL) [12], storing visibility and form-factor information as well as information about the current sending patch s_i of each progressive refinement iteration i in an efficient data structure for later re-use: at each iteration, a list is stored containing the form-factors F_{r,s} between the sender patch s_i and all receiving patches r in the scene. If a receiving patch r is occluded by an object, an identification of this blocking object O_{r,i} is stored instead of the form-factor.

The radiosity values resulting from the initial simulation are mapped to vertex colors, and the vertex coordinates and vertex colors of all patches are written into the shared memory segment. The VR system's rendering process starts rendering the - at first un-illuminated - scene as soon as loading has been finished. After each frame, it peeks whether data have been provided by the radiosity process. As long as no radiosity data are available, the scene is rendered with hardware shading only, but walkthrough is already possible. Once the radiosity process has finished the initial simulation and provided the patches' data and vertex colors, the VR process reads these data from the shared memory. Each polygon is then replaced by the set of patches it has been subdivided into by the radiosity process, and from that time on, vertex colors are used for display.

4.2. Interactive phase

While view modification is possible from startup time on, interactions with objects are allowed only once the radiosity preprocess has been finished. If the user then moves an object, a message is created consisting of the moved object's ID and a transformation matrix. This message is sent to the radiosity process, which updates the radiosity solution accordingly. The corrected vertex colors are transferred back to the VR system via shared memory.
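The following Python fragment mimics this coupling structure: a separate radiosity worker receives messages consisting of an object ID and a transformation, and hands corrected vertex colors back, while the render loop merely peeks for new data after each frame. Multiprocessing queues stand in for the shared-memory segment, and update_radiosity is a hypothetical placeholder; none of this is the actual implementation.

```python
# Structural sketch of the renderer/radiosity coupling. Multiprocessing
# queues stand in for the shared-memory segment, and update_radiosity is a
# hypothetical placeholder; this is not the actual system.

import multiprocessing as mp
import queue
import time

def update_radiosity(object_id, transform):
    # placeholder for the SFFL-based update; a real implementation would
    # return corrected vertex colors for all affected patches
    return {object_id: (0.5, 0.5, 0.5)}

def radiosity_process(requests, results):
    while True:
        message = requests.get()          # blocks until an interaction arrives
        if message is None:               # shutdown sentinel
            break
        object_id, transform = message    # moved object's ID + transformation
        results.put(update_radiosity(object_id, transform))

def render_loop(requests, results, frames=5):
    requests.put(("seat", "identity"))    # toy event: the user moved the seat
    for frame in range(frames):           # stands in for the VR render loop
        try:                              # after each frame, peek for new data
            print("frame", frame, "applying", results.get_nowait())
        except queue.Empty:
            print("frame", frame, "rendered with previous vertex colors")
        time.sleep(0.05)
    requests.put(None)

if __name__ == "__main__":
    requests, results = mp.Queue(), mp.Queue()
    worker = mp.Process(target=radiosity_process, args=(requests, results))
    worker.start()
    render_loop(requests, results)
    worker.join()
```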
For speeding up the radiosity update, the information stored in the SFFL is used: in order to quickly identify the regions of the old shadow cast by the moved object O_m (before its movement), we simply have to find those entries in the lists where O_m has been recorded as a blocking object (O_{r,i} = O_m). The light sources' radiosity then has to be re-sent only to the patches concerned. For identifying the region of the new shadow of O_m, shadow volumes similar to those proposed in [7] are used. They are constructed from the light source over the bounding box of O_m: every object which lies in the shaft behind O_m (as seen from the light source) is potentially in shadow and thus has to be re-illuminated. In addition, the patches of O_m itself of course have to be newly lit. The technique of reusing the information from the SFFL helps us to avoid many visibility checks and form-factor calculations, and thus speeds up the update process significantly. Not only direct light but also indirect illumination can be handled by this method - see [12] for further details on the SFFL method.

Since the renderer and the radiosity process are running in parallel, walkthrough is still possible while the update process calculates a new radiosity solution. On multi-processor hardware, this can be accomplished without loss of rendering speed.

Furthermore, the user specifies when to trigger radiosity updates: either implicitly, connected to certain events such as grabbing or releasing objects; or only on explicit demand, for example at some intermediate stage during object movement (e.g., reacting to a specific key press or to a command given via a speech recognition system); or continuously. Continuous update of the shadow during object movement clearly is the ultimate goal, but since update rates so far are too low for real-time feedback in complex scenes, it is often useful to trigger the update only implicitly when the interaction has been finished. Since it is up to the user to choose a triggering strategy, the system can be adjusted to the specific needs and complexity of a certain application.
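To make the reuse of the stored information more concrete, the following sketch records, for one shooting iteration, either a form-factor or the id of the blocking object per receiver, and then collects the patches that need re-lighting after an object has been moved. The data layout and the occlusion, form-factor and shaft predicates are assumptions for this illustration; the actual SFFL method is described in [12].

```python
# Illustrative sketch of a shadow-form-factor-list (SFFL) and its reuse
# after an object has been moved. The data layout and the occlusion,
# form-factor and shaft predicates are assumptions for this example.

def record_iteration(sffl, sender, receivers, occluder_of, form_factor):
    """Per shooting iteration, store for every receiver either the
    form-factor (if visible from the sender) or the blocking object's id."""
    entries = {}
    for r in receivers:
        blocker = occluder_of(sender, r)     # None, or the occluder's id
        entries[r] = form_factor(sender, r) if blocker is None else blocker
    sffl.append({"sender": sender, "entries": entries})

def patches_to_relight(sffl, moved_id, in_shaft):
    """Receivers whose stored entry names the moved object lay in its old
    shadow; receivers inside the shaft built from the light source over the
    moved object's bounding box may fall into its new shadow. Light is
    re-shot only to this set (plus the patches of the moved object itself)."""
    affected = set()
    for it in sffl:
        for receiver, entry in it["entries"].items():
            if entry == moved_id or in_shaft(it["sender"], receiver):
                affected.add(receiver)
    return affected

# Toy usage with hand-made occlusion and shaft predicates:
sffl = []
record_iteration(sffl, "lamp", ["floor_1", "floor_2", "wall_1"],
                 occluder_of=lambda s, r: "seat" if r == "floor_1" else None,
                 form_factor=lambda s, r: 0.1)
print(patches_to_relight(sffl, "seat", in_shaft=lambda s, r: r == "floor_2"))
# old shadow (floor_1) and potential new shadow (floor_2) need re-lighting
```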
5. Results The presented method has been successfully applied to several interactive virtual environments of different complexity. The times given below have been measured on an SGI Onyx IR with six 194 MHz R10000 processors and 1 GB of main memory.
Fig. 4. In a simple test scene, real-time shadow update is achieved (more than 5 updates per second).
Fig. 5. Moving a seat in a virtual environment. Shadows are updated on explicit request during movement (lower left), and finally implicitly when releasing the object (lower right). Radiosity update took approximately 0.25 s.
Fig. 6. Light emanating from a moved light source is updated together with the shadows of the lamp in approximately 0.6 s.

In small test environments, real-time shadow update could be accomplished. The scene shown in Fig. 4 consists of 1250 patches. The initial radiosity simulation was done with nine progressive refinement iterations. During user interaction, a minimum shadow update rate of 5 Hz was reached, enabling real interactive work. Here, continuous shadow update triggering was applied. Another - more complex - model is shown in Figs. 5 and 6. This scene consists of 7155 polygons,
subdivided into 10 820 patches for the radiosity simulation. The initial radiosity solution (four iterations) took 4.1 s. When the seat in Fig. 5 is moved, a total of 2386 patches (mainly on the floor and on the wall, as well as on the moved object itself) are affected, and each radiosity update is completed in approximately 0.25 s. In this example, the implicit triggering strategy has been applied, causing the illumination to be updated only once the object movement has been finished (lower right in Fig. 5). In addition, the update has been triggered explicitly once during the movement (lower left in Fig. 5).

As depicted in Fig. 6, not only the shadows cast by a moved object are updated, but also the light emanating from the object. Moving the lamp across the table caused 3952 patches to be re-illuminated, requiring 0.6 s per update. Note that in this example not only the light emanating from the spot light in the moved lamp has to be re-simulated, but also the shadows cast by the lamp object onto the table top (with respect to the ceiling lights). If ceiling lights are moved, a large portion of the whole scene is influenced, and thus the illumination update takes longer, even if only the light from one single light source has to be re-shot - for example, 0.7 s were needed for the movement of one ceiling lamp above the desktop (4976 patches are considered for updating the radiosities - see Fig. 7).

Exploiting coherences for the radiosity update reduces the number of affected patches and thus decreases the update time significantly. For complex scenes, it proved very helpful that the update triggering strategy can be adjusted by the user. It should be mentioned again that interactive walkthrough is not hindered while the
Fig. 7. If ceiling lights are moved, huge portions of the scene are affected, causing the update process to be more time-consuming: 0.7 s were needed for the update after the ceiling light above the desk had been moved.
update process is running - if both processes have their own processor, this does not even slow down the frame rate of the renderer.
6. Discussion and future outlook
The concept presented in this paper of interlinking a VR system with a radiosity process is a first step towards realistically illuminated, dynamic virtual worlds. The proposed method can be applied not only to interactive worlds in which the user modifies the scene geometry, but also to environments with autonomously moving objects, like cars driving by and casting realistic shadows onto the surrounding objects. Thus, the visual appearance of virtual environments can be improved significantly. Since the radiosity update process runs in parallel to the renderer, the performance of the VR system is not decreased significantly.

Uniform meshing has been chosen for the radiosity calculation, in combination with progressive refinement. This choice was made because the SFFL method, which requires a fixed patch structure, can then be applied very well. Of course, uniform patch grids cannot capture shadow boundaries well and efficiently. Although in our applications the visual quality was quite satisfying, one could easily imagine scenarios where this is not the case. More sophisticated meshing strategies (e.g., [4,8,10]) would help to improve shadow quality with moderate effort, but then the mesh would have to be adapted to the moving shadow boundaries, which in turn takes additional time. In any case, we feel that combining these more exact radiosity techniques with the method presented in this paper will be a promising approach, if efficient ways of re-organizing the patch mesh can be found.

In our test applications, it proved very useful that the user can choose the update triggering strategy according to the complexity of the scene and the type of interactions he intends to perform. It is possible to switch between implicit and explicit triggering as well as continuous updates even during a running session. Since real-time feedback currently cannot be achieved during user interaction in complex scenes, we suggest as an extension to incorporate fast shadow methods
(e.g., [9]) during interaction and to blend between fast shadows and more exact radiosity shadows after the interaction is finished. Ideally, blending between both kinds of shadows should be performed smoothly - i.e., without any flickering or shadows popping up suddenly. Thus, shadows would be present during object movement, helping the user to locate an object, for example, and in addition, the visual realism would be increased by soft radiosity shadows when no interaction takes place.

To further speed up the shadow update process, we intend to incorporate the viewpoint into the update process, and to consider perceptual properties of the human visual system. Objects which are in the middle of the user's current field of view, or which are close to the object that is currently being moved, should be considered more important than objects that are far away, hidden by other objects, or even outside the user's viewing frustum. In addition, objects which cannot be seen because of lacking contrast (e.g., small objects in very dark areas) might be of no importance. Thus, using some importance metric, those objects which are more important for the user could be updated before others.

To conclude, we feel that our method of coupling a radiosity update process and a VR system's renderer is a first, promising step towards realistically illuminated, dynamic virtual environments, but there are still many open issues for future work in order to further improve performance and visual quality.
References
[1] P. Astheimer, W. Felger, S. Müller, Virtual design: A generic VR system for industrial applications, Comput. Graph. 17 (1993) 617-678.
[2] S.E. Chen, Incremental radiosity: An extension of progressive refinement radiosity to an interactive image synthesis system, ACM Comput. Graph. 24 (1990) 135-144.
[3] M.F. Cohen, S.E. Chen, J.R. Wallace, D.P. Greenberg, A progressive refinement approach to fast radiosity image generation, ACM Comput. Graph. 22 (1988) 75-84.
[4] M.F. Cohen, D.P. Greenberg, D.S. Immel, J. Brock, An efficient radiosity approach for realistic image synthesis, IEEE Comput. Graph. Appl. 6 (1986) 26-35.
[5] D.W. George, F.X. Sillion, D.P. Greenberg, Radiosity redistribution for dynamic environments, IEEE Comput. Graph. Appl. 10 (1990) 26-34.
[6] C.M. Goral, K.E. Torrance, D.P. Greenberg, B. Battaile, Modeling the interaction of light between diffuse surfaces, ACM Comput. Graph. 18 (1984) 213-222.
[7] E.A. Haines, J.R. Wallace, Shaft culling for efficient ray-traced radiosity, in: Photorealistic Rendering in Computer Graphics (Proceedings of the Second Eurographics Workshop on Rendering), Springer, New York, 1994.
[8] P. Hanrahan, D. Salzman, L. Aupperle, A rapid hierarchical radiosity algorithm, ACM Comput. Graph. 25 (1991) 197-206.
[9] P.S. Heckbert, M. Herf, Fast soft shadows, in: Visual Proceedings, SIGGRAPH '96, p. 145.
[10] D. Lischinski, F. Tampieri, D.P. Greenberg, Combining hierarchical radiosity and discontinuity meshing, ACM Comput. Graph. 24 (1993) 199-208.
[11] T. Möller, Radiosity techniques for virtual reality - faster reconstruction and support for levels of detail, in: Proceedings of the Fourth International Conference in Central Europe on Computer Graphics and Visualization '96, Plzen, Czech Republic, 1996.
[12] S. Müller, F. Schöffel, Fast radiosity repropagation for interactive virtual environments using a shadow-form-factor-list, in: G. Sakas et al. (Eds.), Photorealistic Rendering Techniques, Springer, New York, 1995, pp. 339-356.
[13] K. Myszkowski, T.L. Kunii, Texture mapping as an alternative for meshing during walkthrough animation, in: G. Sakas et al. (Eds.), Photorealistic Rendering Techniques, Springer, New York, 1995, pp. 389-400.
[14] M. Slater, M. Usoh, Y. Chrysanthou, The influence of shadows on presence in immersive virtual environments, in: M. Göbel (Ed.), Virtual Environments '95, Springer, New York, 1995, pp. 8-21.
Frank Schöffel was born in 1968 in Mannheim, Germany. He studied Computer Science at the University of Technology in Darmstadt, Germany, where he received his Diploma degree in 1994. Since 1994, he has been a full-time researcher at the Fraunhofer Institute for Computer Graphics in Darmstadt. His research interests include virtual reality, the simulation of light and heat flow in buildings, and photorealistic image synthesis. His current research focuses on the integration of lighting simulation into virtual environments and on improving radiosity simulation techniques, especially for dynamic environments. Frank Schöffel holds seminars on radiosity and raytracing at the University of Technology in Darmstadt, and has supervised several Study and Diploma theses in this area. He has published several technical papers on radiosity and its applications in journals and at international conferences.