Rendering of landscapes for environmental assessment


Landscape and Urban Planning 54 (2001) 19–32

Eihachiro Nakamae a,*, Xueying Qin a, Katsumi Tadamura b

a Sanei Co., Room 402, 3-13-26 Kagamiyama, Japan; b Yamaguchi University, Ube, Japan

Received 4 November 1999; received in revised form 26 May 2000; accepted 12 July 2000

Abstract

These days, computer graphics play an important role in giving lifelike information for assessing landscapes after large-scale construction projects are finished. However, how to create vivid photo-realistic images based on exact geometry and optical phenomena is still an essential issue. The authors introduce rendering techniques for visual environmental landscape assessment using computer graphics and/or video sequence images. The techniques are classified into three categories: computer generated images and montages for visualizing landscapes with photo-realistic rendering techniques, panoramic images and panoramic montages employing image-based rendering techniques made from video sequence images, and panned/zoomed video sequence images composited with computer generated images. The basic techniques and their validity are discussed by using practical examples. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Photo-realistic images; Panoramic images; Video sequence montages

1. Introduction

The following three rendering techniques for environmental landscape assessment are discussed. First, rendering techniques for computer generated images and montages that create a photo-realistic landscape image with sufficient precision and efficacy based on optical phenomena are introduced. Still images created by photo-realistic computer graphics and montages are useful for observing the harmony between the landscape of a large-scale construction site and the candidate artificial structure(s) from various view points and under different weather conditions. To

* Corresponding author. Tel.: +81-824-20-0514; fax: +81-824-20-0531. E-mail address: [email protected] (E. Nakamae).
0169-2046/01/$20.00 © 2001 Elsevier Science B.V. All rights reserved. PII: S0169-2046(01)00123-2

create photo-realistic images, the following conditions are indispensable: the incident light, such as direct sunlight (taking the size of the sun into account) and distributed skylight; the effects of scattering and absorption due to media consisting of air molecules and aerosols such as fog and clouds; the effects of shadows (umbrae and penumbrae) cast by obstacles; and the reflection and/or transparency characteristics of objects. If any of the phenomena mentioned above is ignored, the created images can give the observer neither scientific exactness nor a natural impression. It should be noticed, however, that the weak point of these still images is that their view field is limited because they are projected on a flat plane. Secondly, a method for creating a precise panoramic image from panned video sequence images taken on a tripod is introduced. They are useful for


looking out over the landscape and for experiencing the atmosphere of the landscape as a whole. To offer a precise panoramic landscape image, the following techniques should be taken into account: correcting the varying exposure due to the change of camera view fields; removing the optical and geometrical discontinuities due to the vignetting and distortion of each frame, respectively; eliminating any geometrical distortion due to the camera lens and/or CCD by calibrating the video camera; removing the effects of the interlaced scanning commonly used in video systems; dealing with the motion blur of moving objects; and compositing video sequence images with computer generated images with precise geometry. One shortcoming of a panoramic image is that it remains a still image. Finally, an approach that gives quite realistic impressions to the observer is introduced: a panned/zoomed landscape video sequence is matched with photo-realistic computer generated still images of objects such as bridges and electric power transmission towers. By modifying the techniques used for a panned video sequence, a panoramic montage can be easily created. To create these sequential images, the following three phenomena should be dealt with differently from making panoramic images: the change of exposure and the distortion of each frame due to panning/zooming, and the difference of scanning systems between video and computer generated images (video sequence images are interlaced while computer generated images are non-interlaced). The

costs of composited video sequences for visual environmental assessment should be much less than those for entertainment products. The former should be producible with software rather than the costly equipment used for the latter, such as specific camera controllers. A video sequence can give vivid impressions, but difficulty in observing detail is unavoidable. In the following sections, the validity of the rendering techniques mentioned above is discussed mainly by using practical examples. Detailed explanations of the algorithms used in this paper are given in the references, because the main purpose of this paper is to show the usefulness of computer graphics techniques for visual environmental assessment.

2. Still images created by computer graphics and montages

2.1. Still images created by computer graphics

In order to satisfy the requests of observers working on visual environmental assessment, precise daytime landscape images, based on optical phenomena, should be offered. As shown in Fig. 1, the incident light on each calculation point, direct sunlight and distributed skylight passing through the atmosphere, is scattered and absorbed by the air molecules and aerosols in the atmosphere, such as fog and clouds

Fig. 1. Incident light on a calculation point.
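The extinction along each light path in Fig. 1 can be modeled with the usual exponential attenuation law; the sketch below is a minimal illustration (the coefficient values and the constant sky-radiance in-scatter term are assumptions for the example, not figures from the paper):

```python
import math

def attenuated_radiance(source_radiance, distance, beta_air, beta_aerosol, sky_radiance):
    """Radiance reaching the viewpoint after travelling `distance` through an
    atmosphere with molecular (beta_air) and aerosol (beta_aerosol) extinction
    coefficients; light scattered into the path is approximated by blending
    toward a constant sky radiance."""
    transmittance = math.exp(-(beta_air + beta_aerosol) * distance)
    return source_radiance * transmittance + sky_radiance * (1.0 - transmittance)

# A distant ridge fades toward the sky colour; a nearby wall barely changes:
near = attenuated_radiance(1.0, 100.0, 1e-5, 4e-5, 0.7)
far = attenuated_radiance(1.0, 20000.0, 1e-5, 4e-5, 0.7)
```

This aerial-perspective falloff is part of what makes distant objects in the rendered scenes read as distant.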


(Nishita and Nakamae, 1986; Kaneda et al., 1990; Nishita et al., 1993, 1996; Tadamura et al., 1993; Dobashi et al., 1994). The shadows (umbrae and penumbrae) cast by obstacles such as solid objects and clouds delicately affect the impression of the landscapes. In order to create photo-realistic landscape images, attention to the spectral characteristics of reflection, refraction, and transparency of objects (such as artificial objects and water surfaces), and the modeling of natural objects (such as trees), are also indispensable (Nishita and Nakamae, 1994; Tadamura and Nakamae, 1995; Tadamura et al., 1992). Computing costs for outdoor scenes, which have a great number of objects, become very high because the computing time for shadowing increases in proportion to the number of objects. Utilizing the advantages of space sub-division and a multi-layered parallelepiped (Nakamae et al., 1995) and the two-pass Z-buffer algorithm with the optimal number of plural shadow buffers (Tadamura et al., 1999) overcomes this problem; the former gives precise shading and the latter assures the same resolution of shadows as that of visible objects everywhere in the scene. Fig. 2(a) shows a street for city planning. The illuminance due to direct sunlight and skylight is taken into account, and the inter-reflection among the objects is also calculated. This scene consists of 90 205 polygons and 73 trees, the resolution of the image is 1024 × 768, and the calculation time was 835 min. Fig. 2(b) shows the image without penumbrae, that is, with no consideration given to the size of the sun nor to inter-reflection; the calculation time was 87 min.


Fig. 2(c) has no shadows or skylight; the calculation time was 5 s using OpenGL. Even though Fig. 2(c) is useful for observing the outline of a plan, Fig. 2(a) demonstrates the importance of rendering with consideration of precise optical phenomena. Fig. 3(a) demonstrates the efficacy of using the optimal number of plural sunlight depth buffers. When they are not used, the shadows in the foreground are cut apart as shown in Fig. 3(b). The number of polygons is 27 411 and the number of trees is 15 353. The trees were made as 2D textures, modified from AMAP 3D data. Direct sunlight and skylight are also taken into account. Fig. 4(a)–(c) demonstrate the shadow effects of clouds. It is easy to understand how much clouds and their shadows affect the impression of not only landscapes, but also the color design of a car. These scenes consist of 80 463 polygons and 13 776 trees, and the calculation times were 102.0, 103.5 and 102.7 min, respectively. The resolution of Figs. 3 and 4 is the same as Fig. 2.

2.2. Montage

For creating montages, the following conditions should be added to the rendering techniques of computer generated images: geometrical and light matching between the background photos and the models of the computer graphics objects. Processing of the latter is much more complicated and difficult than that of the former. The solar position (in other words, the direction of the sun), the weather conditions (i.e. fog effects), and the shadows cast not only onto the computer graphics models from the background

Fig. 2. A landscape of a street: (a) precise rendering taking into account the direct sunlight and skylight, umbrae and penumbrae, and inter-reflection; (b) without penumbrae and inter-reflection; and (c) without skylight and shadows.
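The plural sunlight depth buffers mentioned above are a shadow-mapping technique: a first pass records, for every buffer cell, the depth of the surface nearest the sun, and the shading pass compares each visible point against that record. A minimal sketch of the depth test (the buffer size, coordinates and bias are illustrative assumptions):

```python
import numpy as np

def in_shadow(depth_buffer, light_uv, light_depth, bias=1e-3):
    """True if the point lies farther from the sun than the surface recorded
    at its position in the sun's depth buffer, i.e. something occludes it."""
    u, v = light_uv
    return light_depth > depth_buffer[v, u] + bias

# Toy 4x4 sun depth buffer: an occluder at depth 2.0 covers the left half.
sun_buffer = np.full((4, 4), np.inf)
sun_buffer[:, :2] = 2.0
```

Using several such buffers, each covering part of the scene, is what lets the shadow resolution keep pace with the image resolution everywhere in large outdoor scenes.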


Fig. 3. Quality of shadows: (a) using the optimal number of plural sunlight depth buffers; (b) using a single coarse sunlight depth buffer.

objects, but also onto the background images from the computer graphics models, should be taken into account (Nakamae et al., 1986; Kaneda et al., 1989). Fig. 5 shows a montage of a new bridge. The photo was taken against the sunlight; therefore, most of the computer generated bridge is illuminated only by the skylight. Fig. 5(a) was created with commercially available software. Since only direct sunlight and ambient light were taken into consideration, and not skylight, the intensity of the shaded area becomes flat. Fig. 5(b) demonstrates the efficacy of optically precise simulation; the user can observe the image in detail by enlarging it, as shown in Fig. 5(c) and (d). Fig. 6(a), (b) and (c) demonstrate montages for assessing a yacht harbor. In Fig. 6(a) the landscape is partially illuminated by the sun, so the position of the sun was calculated from the date, latitude and longitude of when the photo was taken. Fig. 6(b) and (c) show the landscapes near the site of the harbor; the figures are useful for eliciting the

practical claims of residents by showing them how the landscapes near the harbor will look.

3. A precise panoramic image composited from panned video sequence images

3.1. Problems and issues to solve

Even though enlarged images give fairly precise information, as mentioned in the previous section, panoramic observation with a wide field is much more useful for environmental landscape assessment than the observation of a set of standard-sized still images. If a panoramic image is displayed on a wide screen, it can give a more lifelike scene and can achieve virtual reality. Panned video sequence images are useful for offering a precise panoramic image; it is easy and inexpensive to make quite a wide landscape

Fig. 4. Shadow effects of clouds.


Fig. 5. Effects of rendering tools: (a) using software on the market; (b), (c), and (d) using precise rendering techniques.

Fig. 6. Yacht harbor: the upper small pictures show the landscapes before construction, and the bottom ones show the montages.
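The sun position used for Fig. 6(a) can be recovered from the date and site coordinates; below is a rough sketch using standard astronomical approximations, not the authors' exact procedure (longitude would enter through the conversion of clock time to local solar time, omitted here):

```python
import math

def solar_elevation(day_of_year, solar_hour, latitude_deg):
    """Approximate solar elevation angle in degrees for a given day of the
    year, local solar hour and site latitude."""
    # Approximate solar declination (degrees) for the given day.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: 15 degrees per hour away from solar noon.
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))
```

With the elevation (and the analogous azimuth formula), the direction of the sun for shadow casting in the montage is fixed by the photo's metadata alone.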


image. In this case, however, the following conditions should be carefully considered for each frame: correcting variations of exposure due to panning the video camera; removing optical discontinuity due to vignetting; eliminating any geometrical distortion due to the camera lens and/or CCD by calibrating the video camera; removing the effects of the interlaced scanning commonly used in video systems; and dealing with the blur of moving objects. A number of techniques have been developed for capturing panoramic images of a real-world scene. Image mosaic methods for constructing panoramic images take many regular photographic images or a set of video images (Szeliski, 1996; Szeliski and Shum, 1997; Irani et al., 1995); these images must be aligned and composited to complete a panoramic image. Recently, mosaics with moving objects (Davis, 1998) and with global and local alignment (Shum and Szeliski, 1998) have been proposed. In previous work, however, optical effects have been left unaddressed: the effects of the auto-iris, vignetting and interlacing are ignored, and the geometrical mismatch due to the camera lens and/or CCD distortion is dealt with using de-ghosting or local alignment. How to make a panoramic montage is discussed in Section 4, because its algorithm is similar to that of compositing computer generated still images with panned landscape video sequence images.

3.2. Process of making a panoramic image

A panoramic image is created through the following steps: select a panoramic image space from a video

sequence, extract the camera parameters from each frame, extract the moving objects from each image (if there are any), and select the central part of each frame or appropriate interval frames (Qin et al., 1999). Then, repeat the following process for each frame: modify the interlaced image to a non-interlaced image and eliminate the distortion, correct the luminance, and map the central part of the frame image onto the panoramic image space without any moving objects. After these processes, the moving objects located at the central part of a frame are mapped onto the panoramic image. After making a panoramic background image, computer generated images are pasted onto it precisely with respect to both geometry and optics. The steps proceed as follows:

1. All frames of a video sequence are interlaced, while the panoramic still image is non-interlaced and must not show any jags caused by panning. When the camera is panned, jags appear due to the different scanning times of the odd and even fields, as shown in Fig. 7(b), which is enlarged from Fig. 7(a). In order to modify each interlaced video frame to a non-interlaced image, as shown in Fig. 7(c), bilinear interpolation techniques are applied.
2. When panning an auto-iris camera, the exposure changes frame by frame depending on the incident light and panning speed. If the panoramic still image is not corrected by using the luminance change rate of each frame image, the panoramic image seems unnatural, as shown in Fig. 8(a); the area around the first and second

Fig. 7. Eliminating jags caused by panning: (a) original pan image; (b) magnified sub-image of (a); (c) magnified sub-image of the non-interlaced image.
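The field interpolation that removes the jags of Fig. 7(b) can be sketched as follows; simple line averaging stands in here for the bilinear interpolation the authors apply:

```python
import numpy as np

def deinterlace_field(frame, keep_even=True):
    """Build a non-interlaced image from one field of an interlaced frame:
    the kept field's lines are copied, and the discarded field's lines are
    filled by averaging the neighbouring kept lines."""
    out = frame.astype(float).copy()
    start = 1 if keep_even else 0
    for y in range(start, frame.shape[0], 2):
        above = out[y - 1] if y > 0 else out[y + 1]
        below = out[y + 1] if y + 1 < frame.shape[0] else out[y - 1]
        out[y] = 0.5 * (above + below)
    return out
```

Because only one field's lines survive, the jagged offset between odd and even scan times disappears at the cost of some vertical resolution.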


Fig. 8. Panoramic images from a vertical panning video sequence with 405 frames: (a) auto-iris with no calibration; (b) standard frame set at no. 100 (arrow); (c) standard frame at no. 370 (arrow); (d) standard frame using average luminance at no. 180 (arrow); (e) fixed iris.

floors of the building is too bright because of the change of exposure due to the auto-iris. If the darker frame is selected as the standard frame, as shown by an arrow in Fig. 8(b), the whole panoramic image becomes too bright. On the contrary, if the brighter one is selected, it becomes too dark, as shown in Fig. 8(c). The system should select a suitable standard exposure rate by exploring an average value, as shown in Fig. 8(d). Fig. 8(e) was taken with a fixed iris. The image is moderate because the exposure was set to about the average value of all the frames; in this case, however, some skill is required of the cameraman.
3. In order to avoid the distortion of the video camera lens and/or CCD, frames are selected at set intervals (e.g. 0, 5, 10, . . .) depending on the panning speed. Then their central parts are mapped onto the panoramic image space.
4. The moving objects in a video sequence usually generate blur in the panoramic still image. For example, if in a horizontally panned video sequence a car moves in the same direction as the panning camera at a synchronous speed, the car lengthens into a straight bar in the panoramic still image. If it moves in the opposite direction, it shortens (e.g. the top two enlarged images in Fig. 9). Once the camera parameters are calculated, the moving objects in each frame can be

detected. The moving objects with blur located in the central part of the frame (e.g. the second-row images in Fig. 9) are pasted onto the panoramic still image. The position of a moving object, such as a car, changes in the panoramic space depending on its speed; its shape also changes due to its direction of movement and camera distortion. Furthermore, a moving object image and its still background image are located at different positions in each frame, and they cannot be matched pixel for pixel. The key to tracking the moving objects is how to compare two sub-images taken from different frames. Fig. 10(a) and (b) depict the sub-images of a moving car in frame nos. 1 and 2, respectively. Fig. 10(d) shows the differences between Fig. 10(a) and (b); thus, the moving objects' region can be detected. Once the sub-images of moving objects are extracted, the regions without any moving objects become known. The background image of Fig. 10(a), taken from frame no. 50, is shown in Fig. 10(c). Fig. 10(e) shows the difference between Fig. 10(a) and (c); the shape of a moving object is extracted. A panoramic image without any moving object is constructed first, then the moving objects located in the central part of one of the frames are pasted onto the panoramic image. Fig. 9 displays a 180° panoramic image.
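The two-frame comparison behind Fig. 10(d) and (e) can be sketched with thresholded differencing; the threshold value and the toy frames below are illustrative assumptions:

```python
import numpy as np

def moving_object_mask(frame, other_frame, threshold=10):
    """Flag pixels whose absolute difference between two registered frames
    exceeds a noise threshold as belonging to moving objects."""
    diff = np.abs(frame.astype(int) - other_frame.astype(int))
    return diff > threshold

# A bright "car" occupies different columns in two toy frames.
f1 = np.zeros((4, 8), dtype=np.uint8); f1[1:3, 1:3] = 200
f2 = np.zeros((4, 8), dtype=np.uint8); f2[1:3, 5:7] = 200
mask = moving_object_mask(f1, f2)
```

Differencing against a later frame whose background is exposed (Fig. 10(c)) isolates the object's own silhouette rather than the union of its two positions.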


Fig. 9. A 180° panoramic image (separated into upper and lower halves) from a horizontal panning video sequence (1152 frames); in the enlarged sub-images the moving cars located at the arrows are uncorrected (top row) and corrected (second row).
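The standard-frame selection and exposure correction of step 2 (Fig. 8(d)) can be sketched as follows, with the luminance change rate reduced to a simple ratio of frame means (a deliberate simplification of the authors' calibration):

```python
import numpy as np

def average_standard_index(frames):
    """Pick the frame whose mean luminance is closest to the sequence average,
    avoiding the over/under-exposure of Fig. 8(b) and (c)."""
    means = np.array([f.mean() for f in frames])
    return int(np.argmin(np.abs(means - means.mean())))

def normalize_exposure(frames, standard_index):
    """Scale every frame so its mean luminance matches the chosen standard
    frame, compensating the auto-iris."""
    target = frames[standard_index].mean()
    return [f * (target / f.mean()) for f in frames]
```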

4. Computer generated still images composited with panned/zoomed landscape video sequences

4.1. Problems and issues to solve

Even though the resolution of still images and panoramic images is good, they are still inferior to panning/zooming video sequences, because video sequences can give quite vivid impressions to the observer. In order to composite a video sequence with still computer generated images, one must composite them carefully, because the observer is very sensitive

to any floating and shivering images on the background images. As the panning/zooming speed for observing landscapes is usually relatively slow and the computer generated images are essentially still ones (such as bridges, buildings, roads, etc.), the observer can easily see the images dangling on the background images even if the discrepancy between subsequent frames is only half a pixel. An appropriate process should deal with the following three phenomena: the distortion of each panned/zoomed frame, the variations of exposure caused by changing camera view fields, and the

Fig. 10. Extracting a moving object: (a) the moving car in frame no. 1; (b) the car in frame no. 2; (c) its background sub-image in frame no. 50; (d) difference between (a) and (b) after suppressing noise; (e) difference between (a) and (c) after suppressing noise.


difference between the display systems of interlaced video sequence images and non-interlaced computer generated images. All these issues are similar to those in the process of making panoramic composited images, but the reference standard is different. Thus, for making a video sequence, every computer generated image is calibrated to match each of the original images; to create a


panoramic image, every selected frame image is mapped onto a cylindrical surface. Regarding cost, a composited video sequence for visual environmental assessment should be produced for much less than one for entertainment products. The former should be producible by using software rather than costly equipment, such as specific camera controllers.
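The mapping of each selected frame onto the cylindrical panoramic surface can be sketched with a pinhole model; the symbol names and the pixel-unit focal length are assumptions for the illustration:

```python
import math

def cylindrical_coords(x, y, focal_length):
    """Map a frame pixel (x, y, measured from the image centre, in the same
    units as focal_length) onto a cylinder of radius focal_length: the column
    becomes an angle around the cylinder, the row is foreshortened by the
    slant distance to the pixel."""
    theta = math.atan2(x, focal_length)      # pan angle on the cylinder
    h = y / math.hypot(x, focal_length)      # scanline height on the cylinder
    return focal_length * theta, focal_length * h
```

Because the pan angle, not the raw column, indexes the panorama, consecutive frames line up on the cylinder once the camera's rotation per frame is known.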

Fig. 11. Panoramic images: (a) panoramic image made from a panning video sequence; (b) panoramic computer generated images mapped onto a video sequence coordinate system; (c) panoramic image from a panning video sequence composited with computer generated images (in the evening).


Although computer graphics techniques are highly developed, the modeling of landscapes is still a time consuming process. Methods aiming to produce virtual worlds by compositing computer graphics images with video sequence images have been presented in many papers; they offer special effects and virtual special objects and animals, and they are

often applied in films and TV programs. The main disadvantage of these techniques, however, is that they restrict camera movement and require costly equipment with movement detectors or special reference points in the view field (Kansy et al., 1995). Image registration is another annoying problem, very difficult to deal with because of the interlaced scanning

Fig. 12. Computer generated images composited with video sequence images: (a) frame no. 220 in the composited video sequence; (b) magnified image of (a); (c) magnified image of frame no. 370; (d) magnified image of frame no. 220 without correction of camera distortion and without interlacing the computer generated images; (e) panoramic montage in the afternoon.


system of TVs, the low resolution of video sequence images compared with computer generated images, and the requirement of detecting precise camera parameter values.

4.2. Process of compositing computer generated images and video sequence images

1. In order to composite computer generated images and video sequence images, every coordinate system (i.e. computer generated local coordinates, world coordinates, camera coordinates and image coordinates) should be well matched. To paste computer generated images without any geometrical error, one or two target point(s) are selected in the first frame and traced in each frame to obtain the camera parameters, the rotation angles and/or the focal length. Fig. 11(a) shows a panoramic scene made with the panorama techniques introduced in Section 3; Fig. 11(b) shows the computer generated images, and Fig. 11(c) shows the synthesized panoramic image. Fig. 12(a) depicts frame no. 220 of the composited video sequence. Note that from now on all frame images are displayed as non-interlaced images to make them easier to observe; when the observer watches the video sequence, the scenes are seen as shown in the

Fig. 13. Frames of a composited zooming video sequence.


figures. Fig. 12(b) is enlarged from Fig. 12(a). Fig. 12(c) shows the enlarged image of frame no. 370. The trees in Fig. 12(b) and (c) are well fixed to the background frames. The error is limited to within one pixel anywhere in each frame. Note that the computer generated images appearing in each frame are distorted to match the corresponding background image. Fig. 12(d) is an example of a computer generated image pasted directly without any calibration. The roadside trees shift and shiver, and it is easy to see that the trees and the poles in front of them are separated.

2. Optical matching between the computer generated image and the video sequence images in each frame should be done by using techniques similar to those used for montages. Fig. 12(e) shows the landscape in the afternoon at the same position as in Fig. 12(a); optical brightness and color are well matched. Note that in Figs. 11 and 12(e), the video sequence images are non-interlaced to make panoramic images. In order to make a composited sequence image, however, the shapes of the computer generated images should be calibrated and transferred from non-interlaced images to interlaced ones, and their

Fig. 14. Frames of a composited panning and zooming video sequence.


brightness should be matched with each video sequence image.
3. The edges between the computer generated images and their background images should be perfectly antialiased. The foreground images that hide computer generated images should be masked without any jags by employing the referenced methods (Shinya, 1993; Amanatides and Mitchell, 1990). For matching the motion and focus blurs due to panning/zooming, blur effects can be added to the computer generated images by using a Gaussian filter.
4. Video sequence images are smeared in general, while computer generated images are sharp; the latter should be given the same look as the former. A Gaussian filter provides a good solution (refer to Rokita, 1993).
5. A video camera provides frames corresponding to a TV standard. Thus, if the computer generated images assigned to each frame are pasted directly onto each interlaced frame of the resultant animation, the non-interlaced computer generated images and the interlaced background video sequence images do not synchronize; the former's boundaries shiver in front of the latter, as depicted in Fig. 12(d). How to solve this problem is described in Nakamae et al. (1999).

Since it is difficult to demonstrate animations on paper, some frame images and panoramic images have been used. Note that the new bridge in front of the old bridge is a computer generated image. Figs. 13 and 14 demonstrate the usefulness of the proposed algorithms. Fig. 13 shows a few frames from a zooming sequence. The observer can confirm the excellent geometrical matching in every frame. Fig. 14 shows a few frames from a panning and zooming sequence. In this case, the video was taken on someone's shoulder, so the camera rocked from side to side; still, the new bridge and its shadow are exactly fixed on the corresponding background image. The camcorder used was a Sony Pro-Betacam SP UVW100 with a resolution of 768 × 492 pixels. The resolution of the video sequence frames is 640 × 480 pixels. Calculations were carried out on an IRIS Indy.
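The Gaussian filtering of steps 3 and 4 can be sketched with a separable kernel; this is a generic implementation for illustration, not the authors' exact filter:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized one-dimensional Gaussian, applied separably below."""
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-(xs ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def soften(image, sigma=1.0):
    """Blur a sharp computer generated image so it matches the smear of the
    video frames: convolve rows then columns with the same 1-D kernel."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    padded = np.pad(image.astype(float), radius, mode="edge")
    rows = np.apply_along_axis(np.convolve, 1, padded, k, mode="same")
    out = np.apply_along_axis(np.convolve, 0, rows, k, mode="same")
    return out[radius:-radius, radius:-radius]
```

A larger sigma also approximates the motion and focus blur of fast pans and zooms mentioned in step 3.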


5. Conclusions

Three types of rendering techniques for visual environmental assessment using computer graphics and/or video sequence images have been introduced with some examples. Still images created by computer graphics and montages are quite useful for observing landscapes in subtle detail from various view points. A panoramic image made from panned video sequence images gives fairly lifelike scenes compared with standard view field images (e.g. a three-fourths aspect ratio). Even though it is difficult to observe them in detail, panned/zoomed video sequence images of landscapes composited with computer generated still images give quite a vivid impression to the observers. The usefulness of all these techniques for visual environmental assessment has been confirmed by the examples. As mentioned above, each type of rendering technique has its own characteristics. The observer working on visual environmental assessment should use whichever of the three presented techniques suits the circumstances.

References

Amanatides, J., Mitchell, D.P., 1990. Antialiasing of interlaced video animation. Comput. Graphics 24 (4).
Davis, J., 1998. Mosaics of scenes with moving objects. In: Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 354–360.
Dobashi, Y., Kaneda, K., Nakashima, T., Yamashita, H., Nishita, T., Tadamura, K., 1994. Skylight for interior lighting design. Comput. Graphics Forum 13 (3), C/85–C/96.
Irani, M., Anandan, P., Hsu, S., 1995. Mosaic based representations of video sequences and their applications. In: Proceedings of the Fifth International Conference on Computer Vision (ICCV'95), pp. 605–611.
Kaneda, K., Kato, F., Nakamae, E., Nishita, T., Tanaka, H., Toguchi, T., 1989. Three-dimensional terrain modeling and display for environmental assessment. Comput. Graphics 23 (3), 207–213.
Kaneda, K., Okamoto, T., Nakamae, E., Nishita, T., 1990. Highly realistic visual simulation of outdoor scene under various atmospheric conditions. In: Proceedings of CG International'90, pp. 117–131.
Kansy, K., Berlage, T., Schmitgen, G., Wisskirchen, P., 1995. Real time integration of synthetic computer graphics into live video scenes. In: Proceedings of the Conference on Interface of Real and Virtual Worlds, Montpellier, France, 26–30 June, pp. 93–101.


Nakamae, E., Harada, K., Ishizaki, T., Nishita, T., 1986. A montage method: the overlaying of the computer generated images onto a background photograph. Comput. Graphics 20 (4), 207–214.
Nakamae, E., Jiao, G., Tadamura, K., Kato, F., 1995. A model of skylight and calculation of its illuminance. In: Proceedings of the ICSC'95, pp. 304–312.
Nakamae, E., Qin, X., Jiao, G., Rokita, P., Tadamura, K., 1999. Computer generated still images composited with panned/zoomed landscape video sequences. J. Visual Comput. 15 (9), 429–442.
Nishita, T., Nakamae, E., 1986. Continuous tone representation of three-dimensional objects illuminated by skylight. In: Proceedings of the Conference on Computer Graphics (SIGGRAPH'86), pp. 125–132.
Nishita, T., Nakamae, E., 1994. Method of displaying optical effects within water using accumulation buffer. In: Proceedings of the Conference on Computer Graphics (SIGGRAPH'94). ACM, New York, pp. 373–381.
Nishita, T., Sirai, T., Tadamura, K., Nakamae, E., 1993. Display of the earth taking into account atmospheric scattering. In: Proceedings of the Conference on Computer Graphics (SIGGRAPH'93), Vol. 27, pp. 175–182.
Nishita, T., Dobashi, Y., Nakamae, E., 1996. Display of clouds taking into account multiple anisotropic scattering and sky light. In: Proceedings of the Conference on Computer Graphics (SIGGRAPH'96), pp. 379–386.
Qin, X., Tadamura, K., Nagai, Y., Nakamae, E., 1999. Creating a precise panorama from panned video sequence images. J. Information Processing 40 (10), 3685–3693.

Rokita, P., 1993. Fast generation of depth of field effects in computer graphics. Comput. Graphics 17 (5).
Shinya, M., 1993. Spatial anti-aliasing for animation sequences with spatio-temporal filtering. In: Proceedings of the Conference on Computer Graphics, pp. 289–296.
Shum, H.Y., Szeliski, R., 1998. Construction and refinement of panoramic mosaics with global and local alignment. In: Proceedings of the Sixth International Conference on Computer Vision, pp. 953–958.
Szeliski, R., 1996. Video mosaics for virtual environments. In: IEEE Computer Graphics and Applications, pp. 251–258.
Szeliski, R., Shum, H.Y., 1997. Creating full view panoramic image mosaics and environment maps. In: Proceedings of the Conference on Computer Graphics, pp. 251–258.
Tadamura, K., Nakamae, E., 1995. Modeling water color in lighting design. In: Proceedings of Computer Graphics International'95, pp. 97–114.
Tadamura, K., Kaneda, K., Nakamae, E., Kato, F., Noguchi, T., 1992. A display method of trees by using photo images. J. Information Processing 15 (4), 526–534.
Tadamura, K., Nakamae, E., Kaneda, K., Baba, M., Yamashita, H., Nishita, T., 1993. Modeling of skylight and rendering of outdoor scene. In: Proceedings of EUROGRAPHICS'93, Computer Graphics Forum 12 (3), 189–200.
Tadamura, K., Qin, X., Jiao, G., Kato, F., Nakamae, E., 1999. Rendering optimal solar shadows using plural sunlight depth buffers. In: Proceedings of Computer Graphics International'99, pp. 97–114.