CVGIP: GRAPHICAL MODELS AND IMAGE PROCESSING
Vol. 54, No. 1, January, pp. 82-90, 1992

Realistic 3D Simulation of Shapes and Shadows for Image Processing

JEAN-PHILIPPE THIRION*

INRIA, Domaine de Voluceau, Rocquencourt, B.P. 105, 78153 Le Chesnay Cedex, France

* Now a research member of the Epidaure project at INRIA Rocquencourt, France; parts of this work were performed when the author worked at the LIENS, Paris, France.

Received November 26, 1990; accepted September 5, 1991
This paper illustrates the cooperation between Image Processing and Computer Graphics. We present a new method for computing realistic 3D images of buildings or of complex objects from a set of real images and from the 3D model of the corresponding real scene. We also show how to remove the real shadows from these images and how to simulate new lighting. Our system can be used to generate synthetic images, with total control over the position of the camera, over the features of the optical system, and over the solar lighting. We propose several methods for avoiding most of the artifacts that could be produced by a direct application of our approach. Finally, we propose a general scheme where these images are used to test Image Processing algorithms, long before the first physical prototype is built. © 1992 Academic Press, Inc.
1. INTRODUCTION
The scientific approach has long been based on a loop between theory and experimentation. First, the researcher looks at the real world and determines a theoretical model that allows him to make predictions. He performs experiments and compares the results with the predictions. Depending on the difference between theory and reality, the researcher changes the theoretical model or improves it and goes back to the first step. However, real experiments are often very expensive to perform, and many parameters are not totally under control. Gradually, simulation has been raised to a completely new field in science. In fact, simulation has always been part of the scientific approach. After all, predicting facts from a theoretical model is some kind of simulation, whether those predictions are "handmade" or computed. As computers become more and more powerful every day, the theory-experiment loop turns into a theory-simulation loop, with occasional reference to reality by way of real experiments. Simulations have been used by Image Processing researchers for a long time, but simulated images have been
produced mainly with two-dimensional techniques, in order to test two-dimensional algorithms. Three-dimensional modeling is much more expensive, and only recent progress in Computer Graphics allows this kind of simulation. This is why cooperation between computer graphics and image processing is so fruitful. Several recent works emphasize the cooperation between analysis and synthesis for solving 3D problems, as in [8] for the analysis of shape from motion or in [4] for robotic applications. We present in Section 2 a way to use Image Synthesis techniques to simulate realistic images. Kaneda et al. [7] proposed inserting purely synthetic buildings into a background image built with texture mapping on a terrain model. One major contribution of our method is that we deal with general 3D models and with the real texture of buildings or of complex objects. Our approach is not restricted to the texture of a terrain model (z = f(x, y)); it is closer to [3], but allows us to model any kind of camera. In short, we start from a 3D model of a real scene and a set of real images, and we solve the hidden surface problems in order to find the real texture of the objects and to map this texture onto the 3D model. We can then compute any view of the scene, with complete control over the geometric parameters, for example, the camera position or the shape of the objects. We can even simulate a complete movie, with total control over the trajectory of the camera. We use ray casting to compute the synthetic images, so we are able to model complex cameras, or any feature of the optical system, such as errors in the positions of the detectors of a CCD camera. Our method has been used to generate the images that will be produced by not-yet-existing devices, including satellite images. Those images can be used to test image processing algorithms designed to correct optical errors or to determine the shape of objects (as in [5, 9]). We then perform comparisons with the initial shapes to determine the efficiency of the image processing algorithms. Sometimes, however, image processing requires more. We present in Section 3 a way to remove the real shadows from the real images, and we show how to add synthetic shadows to these images. The two techniques
can be used at the same time to produce images with full control over both geometric and lighting features. Those images are then used to test algorithms dealing with shadows, for example, "shape from shadows" algorithms [9]. In Section 4 we propose a general scheme that uses these synthetic images to improve both the synthesis and the analysis processes.

2. GEOMETRIC SIMULATION
Let us focus on the geometric simulation of the scene. Suppose that we have a three-dimensional model of real buildings and a set of real images. How to obtain the 3D model is beyond the scope of this paper; it can be done by direct measurement of the buildings or objects, or from their construction drawings. In our case, the 3D models have been built from stereoscopic views of real scenes (with the TRAPU acquisition system of the IGN). This is a particularly interesting method because the real images are used both to extract the 3D model and to synthesize new images. We also need to know the position and the characteristics of the camera for each image. For rendering purposes, the scene is decomposed into elementary triangles.

2.1. Hidden Surfaces in the Real Images

In a given image, many parts of the scene remain hidden. Image synthesis researchers have classified hidden surface problems into several types and presented solutions for these problems (see [11]):
• Clipping. Only the parts of the objects in the scene which are in the pyramid of vision, starting from the focus point of the camera, can be seen.
• Back-face culling. For closed objects, a single test on the normal orientation, according to the position of the object and the position of the viewer, allows one to discard approximately half of the surface elements. Only the front-facing elements can be seen in the image.
• Direct hiding. For a given point of view, a surface can hide another surface. Many solutions have been provided, among them the z-buffer algorithm and the ray-casting algorithm.
If our camera can be approximated with a perspective projection, we can use a z-buffer algorithm to solve the hidden surface problems in the real image space (otherwise, a ray-casting algorithm is used): the front-facing elements of the scene are projected onto the image plane, where a 2D clipping is performed and the hidden surface problems are solved for each pixel, according to the distance between the camera and the objects. Hence, we build an item buffer (introduced in [14]) that contains, for each pixel in the image, a reference to the element visible through this pixel, or 0 if no element is visible. We can compute a synthetic image with this item buffer and compare it with the original image to verify the precision of our 3D model and of the position of the camera. In fact, there is always a slight difference, about the size of a pixel, due to the discrete sampling of the scene. We see later that these aliasing problems produce artifacts. An important point is that the computation of the item buffers for the real images is performed once and for all, as a preprocessing step for any further simulation.
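For concreteness, the following sketch shows how such an item buffer can be computed with a z-buffer pass. It is a minimal illustration under stated assumptions, not the paper's implementation: triangles are assumed to be already projected to image coordinates, each vertex carrying (x, y, depth), and element IDs start at 1 so that 0 can mean "nothing visible".

```python
# Minimal item-buffer sketch: for each pixel, store the ID of the nearest
# triangle, or 0 when no element is visible (assumed data layout, not the
# paper's code).
import numpy as np

def item_buffer(triangles, width, height):
    """triangles: list of (tri_id >= 1, 3x3 array of projected (x, y, depth))."""
    ids = np.zeros((height, width), dtype=np.int32)   # 0 means "no element visible"
    zbuf = np.full((height, width), np.inf)           # nearest depth so far
    for tri_id, v in triangles:
        (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = v
        # Bounding box of the triangle, clipped to the image (the 2D clipping).
        xmin = max(int(np.floor(min(x0, x1, x2))), 0)
        xmax = min(int(np.ceil(max(x0, x1, x2))), width - 1)
        ymin = max(int(np.floor(min(y0, y1, y2))), 0)
        ymax = min(int(np.ceil(max(y0, y1, y2))), height - 1)
        area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
        if area == 0:
            continue                                  # degenerate (edge-on) triangle
        for y in range(ymin, ymax + 1):
            for x in range(xmin, xmax + 1):
                # Barycentric coordinates of the pixel center.
                w0 = ((x1 - x) * (y2 - y) - (x2 - x) * (y1 - y)) / area
                w1 = ((x2 - x) * (y0 - y) - (x0 - x) * (y2 - y)) / area
                w2 = 1.0 - w0 - w1
                if w0 < 0 or w1 < 0 or w2 < 0:
                    continue                          # pixel outside the triangle
                z = w0 * z0 + w1 * z1 + w2 * z2       # interpolated depth
                if z < zbuf[y, x]:                    # keep the nearest element only
                    zbuf[y, x] = z
                    ids[y, x] = tri_id
    return ids, zbuf
```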
2.2. Hidden Surfaces for the Synthetic Images

We want to synthesize any possible view of the real scene; therefore we are once again obliged to solve the hidden surface problems for new points of view. If we want to model a virtual camera with a perspective projection, we can use the same technique as before. But we can also discretize the behavior of the optical system with ray casting [12]. For each pixel of the synthetic image, we cast a ray through that pixel toward the scene and compute the first intersected object. This allows us to deal with any type of camera, for example, complex CCD cameras. We are also able to model any aberration of the optical system.
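The sketch below illustrates this idea: the camera model is abstracted as a function from a pixel to a ray, so that an ideal pinhole camera, a CCD detector grid with position errors, or any other optical model can be plugged in. The `first_hit` intersection routine and the names here are assumed interfaces, not the paper's code.

```python
# Per-pixel ray casting with an arbitrary camera model (a sketch under
# assumed interfaces).
import numpy as np

def pinhole_ray(pixel, focus, image_plane_origin, du, dv):
    """Ray through one pixel of an ideal perspective camera."""
    x, y = pixel
    target = image_plane_origin + x * du + y * dv   # pixel center in 3D
    d = target - focus
    return focus, d / np.linalg.norm(d)             # (origin, unit direction)

def render_ids(width, height, pixel_to_ray, first_hit):
    """first_hit(origin, direction) -> element id (0 if none)."""
    ids = np.zeros((height, width), dtype=np.int32)
    for y in range(height):
        for x in range(width):
            # pixel_to_ray may encode detector positions or aberrations.
            origin, direction = pixel_to_ray((x, y))
            ids[y, x] = first_hit(origin, direction)
    return ids
```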
2.3. Mapping of the Real Texture

We are now able to map the texture of the real image onto the synthetic image. For each pixel of the synthetic image, the first intersected object and the point of intersection are determined with ray casting. This point is back projected onto the real image, as shown in Fig. 1 (it can have noninteger coordinates in the real image). We determine whether the object visible at this back-projected point in the associated item buffer is the same as the object intersected by the ray. If it is the same object, then the texture in the real image is linearly interpolated and applied to the synthesized image. For example, ray 2 in Fig. 1 corresponds to a point of the item buffer, but the element numbers of the intersected surface and of the item buffer do not match; therefore this image is not used.

FIG. 1. Finding the texture in the real images for a given point.

The ray-casting process is sometimes very time consuming, but, as we can see, the mapping of the real texture is very simple. If the new optical system can be modeled with a perspective projection, then we can use a z-buffer algorithm instead of ray casting to determine the first object seen through each pixel. A real-time synthesizer of realistic images can be foreseen, but the large amount of memory required to maintain the real images and the item buffers for a huge scene prevents this technique from being directly used for air flight simulation. To illustrate the method, the image in Fig. 2 is one of the two original images (the closest to the synthetic view) used to synthesize the new image in Fig. 3.
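The following sketch summarizes this consistency test; the `project` back-projection helper is an assumed interface, and the code is illustrative rather than the paper's implementation. A texture sample is accepted only when the real image's item buffer designates the same element as the cast ray.

```python
# Texture lookup with the item-buffer consistency check (a sketch, assumed
# helpers).
import numpy as np

def bilinear(image, x, y):
    """Linear interpolation of the real texture at noninteger (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    p = image[y0:y0 + 2, x0:x0 + 2].astype(float)
    return ((1 - fx) * (1 - fy) * p[0, 0] + fx * (1 - fy) * p[0, 1]
            + (1 - fx) * fy * p[1, 0] + fx * fy * p[1, 1])

def sample_real_texture(hit_point, hit_id, real_image, real_item_buffer, project):
    """project(hit_point) -> (x, y) in the real image; None when unusable."""
    x, y = project(hit_point)                        # back projection (Fig. 1)
    h, w = real_item_buffer.shape
    if not (0 <= x < w - 1 and 0 <= y < h - 1):
        return None                                  # outside the real image
    if real_item_buffer[int(round(y)), int(round(x))] != hit_id:
        return None                                  # elements do not match (ray 2)
    return bilinear(real_image, x, y)
```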
FIG. 2. One of the two original real images of the University of Jussieu (the closest to the next simulated image).

FIG. 3. The simulated image of Jussieu. Note that the other buildings, which have not been modeled, appear "flat."

2.4. Artifacts Produced with Direct Application of This Method

Several types of artifacts can appear during this process; we have classified these artifacts and tried to resolve most of them. They are:
• No visible surfaces. We simulate any view of the real scene; therefore there are points where no information is available in any of the real images.
• Tangential viewing. A surface can be viewed frontally in the synthetic image and tangentially in the real images. Therefore, we have some information about the texture at this point, but the information is very "diluted": in the synthetic image, the real texture is "stretched" (ray 3 in Fig. 1). This phenomenon can be seen on some sides of the building of Jussieu, in Fig. 3.
• Aliasing problems. We synthesize a discrete image, so aliasing errors appear, producing artifacts such as "stairs" along the edges of the buildings. Supersampling eliminates these artifacts, but can be time consuming. The correct solution to this problem is adaptive supersampling or distributed ray tracing [2] (but we have not implemented them). Unfortunately, aliasing also appears in the item buffers associated with the real images. For a frontal surface in the synthesized image and a tangential view in the real image, these artifacts are emphasized.
• Errors in the 3D model. Any slight error in the 3D model leads to a false association of the surfaces with the real image. For example, assume that the scene is composed of the ground and a building. When the image is simulated, if a ray intersects the ground at a point corresponding to the limit between the ground and the top of a building in the real image, then a slight difference between the item buffer and the real image, due to a modeling error or even to an aliasing error, leads to the attribution of the color of the roof to this pixel instead of the color of the ground (see ray 1 in Fig. 1). Sometimes the "ghost" of the edge of the roof can be seen on the ground of the synthetic image.
• Lighting changes. Lighting can change from one real image to another. If we choose to map the texture coming from one real image to a given pixel and the texture coming from another image to the next pixel, a difference can appear in the synthetic image.
2.5. Some Solutions to These Problems

First, we used the histogram to normalize the intensity of all the original images to lessen the lighting changes between images. Then, for each pixel in the synthetic image, we choose the most suitable image from the set of real images. We now present some criteria which can be used to select the best image (a scoring sketch follows at the end of this section):
• Priority. We set a priority among the real images depending on the viewing angle. The image whose viewing direction is nearest to that of the synthesized image has the highest priority. There is then less change from one pixel to the next, because the same image is almost always used. This step is a preprocessing for each simulation.
• Orientation. We use the orientation of each surface, in order to avoid tangential viewing, and select the most frontal view for this surface.
• Default value. We compute an average intensity over all the images and provide it when no other information is available. With the real images and the item buffers, we can also easily compute an average intensity for the surface elements when parts of those elements are visible, and attribute this intensity to the hidden parts of these surfaces.
• Neighborhood. We study the neighbors of the selected pixel in the real image. If they all correspond to the surface associated with the selected pixel, then there is less chance of a false association of the texture, due to aliasing or geometric errors, because we then know that the pixel is far from the edge of the surface element; otherwise we use the default value described previously.
These criteria must be weighted according to the kind of simulation to be performed. They gave us satisfying results, but other criteria can be foreseen. In general, with several views of the scene (a top view and four side views) and with supersampling for both the item buffers and the simulated image, the result is very satisfying. Unfortunately, we had only vertical views for performing our simulations, and we used the spread angle of the camera for the side views of the buildings. We generally use supersampling (not adaptive supersampling) to improve the quality of our simulations; however, no supersampling has been applied for the simulation of the images presented in this paper, in order to show the maximum effect of the artifacts (the synthetic and original images have about the same resolution). Nevertheless, only very few aliasing errors appear (see Figs. 2 and 3).
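As an illustration of how these criteria might be combined, the sketch below scores one real image for a given pixel. The paper gives no numerical weights, so the weighting and the reject-on-neighborhood behavior shown here are assumptions.

```python
# Hedged sketch of per-pixel image selection: score each candidate real
# image, keep the best one (weights are illustrative, not the paper's values).
import numpy as np

def score_image(view_dir, real_view_dir, surface_normal, neighborhood_ok,
                w_priority=1.0, w_orientation=1.0):
    """Higher is better; -inf when the neighborhood test fails."""
    if not neighborhood_ok:                           # near an element edge: reject
        return -np.inf
    priority = np.dot(view_dir, real_view_dir)        # closest viewing direction
    orientation = -np.dot(surface_normal, real_view_dir)  # most frontal view
    return w_priority * priority + w_orientation * orientation
```

A pixel whose best score is -inf would then fall back to the default value described above.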
3. SHADOW SIMULATIONS

We show in this section how to remove shadows from real images and how to add new shadows. Dealing with this problem may seem very strange, but much information about the shape of the objects can be extracted from their shadows. Taking the same image of a real scene, with exactly the same geometric characteristics but with different lighting, can sometimes be very difficult and expensive (for example, for aerial views). Our system can be used to test many algorithms dealing with shadows, with the possibility of defining any type of solar lighting, including boreal (almost horizontal) lighting, and with total control over the geometric characteristics.

3.1. Lighting Model

A number of increasingly accurate lighting models for illumination have been proposed in the field of computer graphics. Figure 4 describes the Phong lighting model, which takes both diffuse and specular aspects into account (see [10]). Kajiya proposed a more complete equation in [6]. In the present paper, we use only a direct lighting model. We model neither the diffusion of light from surface to surface (Fig. 5) nor the specular reflections. The use of the radiosity method (see [1]) could lead to more precise results.

FIG. 4. The Phong lighting model: an ambient term a, a diffuse term b cos(θ), and a specular term.

FIG. 5. Radiosity: diffusion of light from surface to surface.

We use a much simpler model, defined by

Intensity = [a + b · δ · cos(θ)] · Albedo,  (1)

where Intensity is the isotropic reemitted intensity at a point; Albedo is the albedo of the surface at this point; a is the ambient energy; b is the intensity of the solar lighting; δ is 1 if this point is visible from the light source and 0 otherwise; and θ is the angle between the incident light and the surface normal. We assume that the ambient term a is constant for the whole scene. We also assume that we have no information about the acquisition, development, and digitalization processing of the real image. The present paper emphasizes the geometric solutions which allow us to simulate new lightings for the real textures of the images; we do not try to build an elaborate lighting model, but other lighting models can be foreseen.
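As a minimal sketch, Eq. (1) translates directly into code; the function works equally on scalars or on whole numpy arrays (for example, an albedo image and a per-pixel shading buffer holding δ · cos(θ)).

```python
# Direct transcription of Eq. (1) (a sketch; names are illustrative).
def relight(albedo, a, b, shading):
    """Eq. (1): Intensity = [a + b * delta * cos(theta)] * Albedo.

    `shading` holds delta * cos(theta), per point or per pixel."""
    return (a + b * shading) * albedo
```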
3.2. Finding Shadows
Finding shadows is in fact a hidden surface problem. First, the back-facing surfaces with respect to the position of the light source are in their own shadow and are easily determined. One object can also cast a shadow on another object. We use ray casting to determine these shadows: for each point corresponding to a pixel of the real image, we cast a ray toward the light source. If this ray is intercepted, then the point is in the shade. We build an image, which we call the shading buffer, with "0" when the point is entirely in the shade, with "1" if there is no object of the model for this point, and with the cosine between the incident light and the surface normal (δ · cos(θ)) everywhere else (see Fig. 6). The value is set to 1 in order to keep the intensity of the background unchanged when the image is transformed with the shading buffer. As we can see, this is consistent with the value of cos(θ) when θ = 0. If we do not know when the real photographs were taken, we measure the position of the sun directly from the images, using the corners of the buildings and the corresponding shadows. This is possible because we have a precise 3D model of the scene. Figure 7 is a real image and Fig. 8 the associated shading buffer.

FIG. 6. Building the shadow buffer.
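A sketch of this construction follows; `pixel_to_point`, `occluded`, and `cos_incidence` are assumed helpers standing in for the 3D model lookup, the shadow-ray test, and δ · cos(θ), respectively.

```python
# Shading-buffer sketch: 0 in the shade, 1 on the background,
# delta * cos(theta) elsewhere (assumed helper interfaces).
import numpy as np

def shading_buffer(item_buffer, pixel_to_point, sun_dir, occluded, cos_incidence):
    h, w = item_buffer.shape
    buf = np.ones((h, w))                    # background stays unchanged (value 1)
    for y in range(h):
        for x in range(w):
            if item_buffer[y, x] == 0:
                continue                     # no object of the model: keep 1
            p = pixel_to_point(x, y)         # 3D point seen at this pixel
            if occluded(p, sun_dir):         # ray toward the sun is intercepted
                buf[y, x] = 0.0              # entirely in the shade
            else:
                buf[y, x] = cos_incidence(p, sun_dir)   # delta * cos(theta)
    return buf
```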
3.3. Removing Shadows from Real Images
The best method would be to find the albedo of the surface. We cannot find the physical albedo of these surfaces, because we do not have the physical intensity at each pixel, but rather a gray level I that comes through the acquisition, development, and digitalization processes. Therefore, we look for a pseudoalbedo A and we assume that Eq. (1) is satisfied. Inverting this formula gives

A = I/(a + b · δ · cos(θ)).

We call the corresponding image the pseudoalbedo image. In order to have about the same range of values in the original and in the pseudoalbedo images, we use the normalization A = I for all the sunlit horizontal surfaces of the scene.

FIG. 8. The shadow buffer associated with the image of Rungis.
The advantage of this formula is that it leaves most of the surfaces of the real images unchanged. If θ₁ is the angle between the incident light and the horizontal plane, we have relation (2) between a and b:

a + b · cos(θ₁) = 1.  (2)

We need a second relation to determine the coefficients a and b. We get this relation with a measurement in the real image. Assume a horizontal surface with a constant pseudoalbedo A₁. Assume also that there is a sunny part and a shady part on this surface, with respective intensities I_sun and I_shadow. Then we have

I_sun = (a + b · cos(θ₁)) · A₁  (3)

I_shadow = a · A₁.  (4)

Therefore

a = I_shadow/I_sun  (5)

b = (1 - I_shadow/I_sun)/cos(θ₁).  (6)

FIG. 7. A real image of Rungis.
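Equations (5) and (6), together with the inversion of Eq. (1), translate directly into code; the measured intensities below are illustrative values, not measurements from the paper.

```python
# Estimating a and b from one sunny/shady pair on a horizontal surface,
# then computing the pseudoalbedo image (a sketch).
def estimate_a_b(i_sun, i_shadow, cos_theta1):
    a = i_shadow / i_sun                            # Eq. (5)
    b = (1.0 - i_shadow / i_sun) / cos_theta1       # Eq. (6)
    return a, b

def pseudoalbedo(image, shading, a, b):
    """shading holds delta * cos(theta); A = I / (a + b * delta * cos(theta))."""
    return image / (a + b * shading)

a, b = estimate_a_b(i_sun=200.0, i_shadow=80.0, cos_theta1=0.7)  # sample values
```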
This model holds for many parts of the scene, but unfortunately there are parts where the fraction I_shadow/I_sun is far from constant. This is due to many different factors. First, the acquisition, development, and digitalization processes are logarithmic rather than linear. Second, radiosity effects, which are not modeled, locally disturb the value of the coefficient a. We have found experimentally that these disturbances appear more often in the shady parts of the scene; therefore we assume that for the shady points, the intensity depends on the albedo according to an independent formula. Generally, high-albedo surfaces enhance the local ambient light and, conversely, low-albedo surfaces reduce it (but reality is much more complicated, because of geometric factors). We choose to model this phenomenon with the following linear formula, with the same normalization as before (A = I_sun for the horizontal surfaces):

I_shadow = u + v · A.  (7)

To summarize the process, we measure two coupled intensities for two sunny-and-shady horizontal surfaces, (I_shadow1, I_sun1) and (I_shadow2, I_sun2), corresponding respectively to a high intensity and a low intensity, to determine the coefficients a and b of Eqs. (5) and (6) and the coefficients u and v of Eqs. (8) and (9):

u = (I_shadow2 · I_sun1 - I_shadow1 · I_sun2)/(I_sun1 - I_sun2)  (8)

v = (I_shadow1 - I_shadow2)/(I_sun1 - I_sun2).  (9)

Figure 9 is the pseudoalbedo image associated with the image of Fig. 7.

FIG. 9. The pseudoalbedo image associated with the image of Rungis.
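Equations (8) and (9) again translate directly; the sample measurements below are illustrative.

```python
# Shady-point coefficients u and v from two sunny/shady intensity pairs
# (direct transcription of Eqs. (8) and (9); values are illustrative).
def estimate_u_v(i_shadow1, i_sun1, i_shadow2, i_sun2):
    denom = i_sun1 - i_sun2
    u = (i_shadow2 * i_sun1 - i_shadow1 * i_sun2) / denom   # Eq. (8)
    v = (i_shadow1 - i_shadow2) / denom                     # Eq. (9)
    return u, v

u, v = estimate_u_v(i_shadow1=90.0, i_sun1=220.0, i_shadow2=30.0, i_sun2=70.0)
```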
3.4. Adding New Shadows on Real Images

Formulas (1) and (7) are used to preprocess the real images and to compute the pseudoalbedo images, for sunny and shady surfaces, respectively. Because we do not simulate radiosity phenomena, we use only formula (1) to simulate new lightings on both sunny and shady surfaces. Simulations with different lighting conditions can be performed with different values of the coefficients a and b. The geometric position of the new shadows is also determined with ray casting. We build a new shadow buffer, with δ · cos(θ) for each pixel of the simulated image, and we use formula (1) to compute the values in the simulated image (see Fig. 11).

FIG. 10. Simulating new shadows in a real image.

We have used a small coefficient a in order to emphasize the new shadows in the images presented in this paper. But different values of a and b generate images ranging from the completely diffuse lighting of the pseudoalbedo image in Fig. 9 to the sunny afternoon of the simulated image in Fig. 11. A second point is that the computation of a synthetic image with the same coefficients a and b and with the same position of the sun as in the real image leads to the simulation of almost the same image as the real one. The slight difference is due to radiosity effects only. We use this procedure to test our method.

FIG. 11. The synthetic view of Rungis with a new lighting.
3.5. Artifacts for Shadow Simulations
There are several causes of artifacts in the simulated shadows:
• Aliasing errors. We sample the shadow information with rays, and thus there are traditional aliasing errors. Stairs appear on the edges of the simulated shadows of the buildings. Once again, adaptive supersampling is the correct approach to these artifacts.
• Modeling errors. During the preprocessing of the image, the slightest errors in the model, or even aliasing errors, produce differences between the real image and the shadow buffer. Therefore, some shady points are taken to be sunny, and their low intensity is reduced further; conversely, some sunny points are taken to be shady, and their high intensity is enhanced. This effect can be seen in Fig. 9: there is an overhang of the roof of the buildings that was not modeled, so the shadows of the edges of these buildings are not accurately computed; sunny points are enhanced and can be seen as little white vertical segments starting from the roof. We use a very simple algorithm to reduce these effects, based on the observation that when modeling errors occur, the algorithm performs exactly the opposite of what it is supposed to do, and the albedo of those points goes out of the normal range of the histogram. Thus it is easy to find these points and to apply some correction with a filter (see Fig. 12 and the sketch at the end of this section). Figure 9 already incorporates the use of this filter; this kind of artifact is lessened, but not completely eliminated.
• Noise, radiosity, . . . Once again, we do not model radiosity phenomena. There are gray-level differences, even for a surface with a constant albedo, and sometimes the previous positions of the shadows can be seen in the simulated images. Differences also appear in particularly dark places, which are emphasized in the simulated images. As the intensity of the image is enhanced at these points, the noise of the image is enhanced too, and this difference can also reveal the previous position of the shadows.

FIG. 12. Filtering the out-of-range points.

Despite all these artifacts, the synthesized images remain a good first-order approximation of the new lighting of the scene. Supersampling and the use of radiosity techniques could help to simulate more precise images (we do not use supersampling in this paper, in order to show the artifacts, and radiosity leads to a much more complicated implementation). Some other effects, such as fog or atmospheric scattering, can interfere with the preprocessing of the pseudoalbedo images. Therefore we must choose the set of original images very carefully (a bright, sunny day, or a uniform cloud cover, which directly gives the pseudoalbedo images). The precision of the geometric modeling is also very important. Our scenes have been modeled with a precision of 10 cm, but some complex objects, such as the trees, have not been modeled at all and create artifacts.
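As announced above, here is a sketch of the out-of-range correction filter. The paper does not specify the exact filter, so the percentile bounds and the median-based correction are assumptions.

```python
# Hedged sketch: pseudoalbedo values pushed out of the normal range of the
# histogram by modeling errors are replaced by a local neighborhood estimate.
import numpy as np
from scipy.ndimage import median_filter

def filter_out_of_range(albedo, low_pct=1.0, high_pct=99.0):
    lo, hi = np.percentile(albedo, [low_pct, high_pct])  # normal spectrum bounds
    bad = (albedo < lo) | (albedo > hi)                  # points the model got wrong
    smoothed = median_filter(albedo, size=5)             # local estimate
    return np.where(bad, smoothed, albedo)
```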
3.6. Merging Geometric and Shadow Simulations
We perform the following steps in order to simulate any view of the scene with any lighting (a code sketch of the processing loop follows at the end of this section):
• Preprocessing. For all the real images:
1. Compute the item buffer associated with the real image.
2. Measure the position of the sun and several pairs of intensities.
3. Compute the shadow buffer associated with the image.
4. Compute the pseudoalbedo image, and filter this image to remove modeling and aliasing errors.
• Processing. For all simulated images and for all the pixels in these images:
1. Compute the ray and the first intersection between the ray and the 3D model. Send a new ray from this point toward the sun.
2. Interpolate the color of the point in one of the pseudoalbedo images (instead of the real images, as for the previous geometric simulations).
3. Apply Eq. (1) to the gray level at this point.
Figure 13 illustrates this process. Figure 14 is a simulated view of a building with a simulated lighting. The extension to color images is obvious; images are then decomposed into red, green, and blue components.

FIG. 13. Simulating a new view with a new lighting.
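A condensed sketch of the per-pixel processing loop follows; all scene-query helpers (`pixel_to_ray`, `first_hit`, `occluded`, `cos_incidence`, `sample_pseudoalbedo`) are assumed interfaces, not the paper's code, and a single preprocessed pseudoalbedo image is assumed.

```python
# Processing loop of Section 3.6 (a sketch under assumed interfaces).
import numpy as np

def simulate_view(width, height, pixel_to_ray, first_hit, occluded,
                  cos_incidence, sample_pseudoalbedo, sun_dir, a, b):
    out = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            origin, direction = pixel_to_ray((x, y))
            hit = first_hit(origin, direction)       # step 1: first intersection,
            if hit is None:                          # None when no surface is hit
                continue
            point, elem_id = hit
            albedo = sample_pseudoalbedo(point, elem_id)   # step 2: interpolation
            if albedo is None:
                continue                             # no texture available here
            shade = 0.0 if occluded(point, sun_dir) else cos_incidence(point, sun_dir)
            out[y, x] = (a + b * shade) * albedo     # step 3: apply Eq. (1)
    return out
```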
3.7. Performances
The example of Jussieu is based on two real 1000 × 1000 images and is modeled with 360 polygons. The simulated image is 500 × 1000. We use a new beam-tracing technique, which is faster than ray tracing, to compute the ray intersections (see [13]). For a 1000 × 1000 image, our beam tracer takes about 20 min of CPU time on a Silicon Graphics Iris 4D20; our z-buffer implementation takes about 2 min. The texture mapping process takes 16 min, but could be seriously optimized. Memory and time requirements are given in Fig. 15.

FIG. 14. The simulated image of Jussieu with synthetic lighting.

4. THE USE OF REALISTIC SIMULATIONS

These simulated images can be used in many possible ways. We present now a general scheme used to design a new optical system, to test it, and to improve its performance. The previous sections can be considered as an application of this scheme. Other types of modeling and lighting systems can be foreseen, for purposes other than airplane or satellite applications.

4.1. Design of the Optical System

The design of a new optical system raises many questions, such as:
• What will the performance of this optical system be?
• How can I improve this performance?
• Will the customers of this optical system be satisfied with the quality of the images?
• Will my image processing algorithms work well with the images that the optical system will produce?
There is a need to simulate exactly the images that the optical system will take, before beginning to build the first prototype. Depending on the optical system, it is sometimes possible to simulate these images with 2D techniques. But in many cases the artifacts are related to three-dimensional geometric problems. For example, let us consider a pushbroom airplane acquisition system. In this system, a line of CCD detectors "sweeps" a given field over time. Artifacts appear because of aberrations in the optical system, wrong positions of the CCD detectors in the focal plane, and trajectory errors of the airplane. The artifacts depend on the nature of the site and are different for flat fields, mountains, and city buildings.
What will the performance of an image processing algorithm be for such an image? We start from a set of real images and extract the 3D models of these scenes. We can then simulate images according to our knowledge of the future optical system and use the image processing algorithm, for example, to extract 3D models. If we compare the 3D model coming from the real images to the 3D model used to simulate the images, we are able to determine the performance of our image processing algorithm, long before the first real prototype is built. Such a loop between the simulating system and the analysis algorithm is shown in Fig. 16. We can compare the simulated images to the real ones and determine the performance of the optical system. We are also able to show the result to the users and improve the design of the optical system.

FIG. 16. Loop between analysis and synthesis with reference to reality.

4.2. Improving the Optical System

The simulating system can be used even when the first real prototype is built. Sometimes, some parameters have to be set only when the optical system is in use (for example, focusing; think of the Hubble telescope). Because it provides direct access to many geometric parameters, the simulating system helps us to understand what is happening and how to modify the parameters in order to produce better images.
FIG. 15. Memory and CPU time requirements.

3D model of the scene: 13 Kbytes
One real image (or albedo image if shadows are simulated): 1 Mbyte
One item buffer: 2 Mbytes
Total amount for the example of Jussieu: ≈7 Mbytes

4.3. Improving the Simulating Process
With this first prototype, we get the first real image that can be compared with a simulated image. The difference
comes from modeling errors or unforeseen events. Determining where the errors occur is very fruitful; a phenomenon could have been underestimated or ignored when the optical system was designed, and this information is very important both for the next generation of the optical system and for the improvement of the simulating process.

5. CONCLUSION
We have presented in this paper a way to simulate realistic images, with realistic shadows, in order to produce images and to test image processing algorithms long before the first prototype of a new optical system is built. The lighting model that we use is very basic, but leads to good results; a further direction might be to take radiosity phenomena into account. Adaptive supersampling can also be used at several levels in our simulating system in order to reduce aliasing artifacts. We have verified the quality of our simulations with the generation of synthetic stereoscopic pairs of images (see Fig. 17), and our system has been used for various applications, including airplane and satellite images. We hope that computer graphics techniques will help in the design of more complex optical systems and enable image processing researchers to build more powerful algorithms dealing with 3D geometry.

ACKNOWLEDGMENT

Several French companies and state organizations have directly participated in this work. I particularly thank Mr. Eric Savaria and Mr. Roberto Aloisi from the French Aerospatiale, Mr. Jean-Paul Arif from Matra MS2I, who also provided the 3D model of Jussieu, Mr. Philippe Campagne from the Institut Geographique National (IGN France), who provided the images and some of the 3D models, and Mr. Jean-Michel Lachiver from the Centre National d'Etudes Spatiales (CNES).
REFERENCES

1. M. F. Cohen and D. P. Greenberg, The hemi-cube: A radiosity solution for complex environments, Comput. Graphics 19(3), July 1985, 31-40; in Proceedings, ACM SIGGRAPH.
2. R. L. Cook, T. Porter, and L. Carpenter, Distributed ray tracing, Comput. Graphics 18(3), July 1984, 137-145.
3. R. N. Devich and F. M. Weinhaus, Image perspective transformations, in Proceedings, SPIE: Image Processing for Missile Guidance, 1980, Vol. 238, pp. 322-332.
4. A. Gagalowicz, Collaboration between computer graphics and computer vision, in Scientific Visualization and Graphics Simulation (D. Thalmann, Ed.), pp. 233-248, Wiley, New York, 1990.
5. A. Huertas and R. Nevatia, Detecting buildings in aerial images, Comput. Vision Graphics Image Process. 41, 1988, 131-152.
6. J. T. Kajiya, The rendering equation, Comput. Graphics 20(4), Aug. 1986, 143-150; in Proceedings, ACM SIGGRAPH.
7. K. Kaneda, F. Kato, E. Nakamae, T. Nishita, H. Tanaka, and T. Noguchi, Three dimensional terrain modeling and display for environmental assessment, Comput. Graphics 23(3), July 1989, 207-213.
8. R. Koch, Automatic modelling of natural scenes for generating synthetic movies, in Proceedings, EUROGRAPHICS '90, Sept. 1990, pp. 215-224.
9. Y. T. Liow and T. Pavlidis, Use of shadows for extracting buildings in aerial images, Comput. Vision Graphics Image Process. 49, 1990, 242-277.
10. B. T. Phong, Illumination for computer generated pictures, Commun. ACM 18(6), June 1975, 311-317.
11. I. E. Sutherland, R. F. Sproull, and R. A. Schumacker, A characterization of ten hidden-surface algorithms, Comput. Surv. 6, 1974, 1-55.
12. T. Whitted, An improved illumination model for shaded display, Commun. ACM 23(6), June 1980, 343-349.
13. J. P. Thirion, Interval arithmetic for high resolution ray tracing, Rapp. Rech. LIENS 90(4), Feb. 1990.
14. H. Weghorst, G. Hooper, and D. P. Greenberg, Improved computational methods for ray tracing, ACM Trans. Graphics 3(1), Jan. 1984, 52-69.