
Computers & Graphics 23 (1999) 719-728

Visibility - techniques and applications

A visibility algorithm for hybrid geometry- and image-based modeling and rendering

Thomas A. Funkhouser*

Department of Computer Science, Princeton University, 35 Olden Street, Princeton NJ 08540, USA

Abstract

Hybrid geometry- and image-based modeling and rendering systems use photographs taken of a real-world environment and mapped onto the surfaces of a 3D model to achieve photorealism and visual complexity in synthetic images rendered from arbitrary viewpoints. A primary challenge in these systems is to develop algorithms that map the pixels of each photograph efficiently onto the appropriate surfaces of a 3D model, a classical visible surface determination problem. This paper describes an object-space algorithm for computing a visibility map for a set of polygons for a given camera viewpoint. The algorithm traces pyramidal beams from each camera viewpoint through a spatial data structure representing a polyhedral convex decomposition of space containing cell, face, edge and vertex adjacencies. Beam intersections are computed only for the polygonal faces on the boundary of each traversed cell, and thus the algorithm is output-sensitive. The algorithm also supports efficient determination of silhouette edges, which allows an image-based modeling and rendering system to avoid mapping pixels along edges whose colors are the result of averaging over several disjoint surfaces. Results reported for several 3D models indicate the method is well suited for large, densely occluded virtual environments, such as building interiors. © 1999 Published by Elsevier Science Ltd. All rights reserved.

Keywords: Visibility map; Image-based rendering; Beam tracing

1. Introduction

Hybrid geometry- and image-based rendering (GIBR) methods are useful for synthesizing photorealistic images of real-life environments (e.g., buildings and cities). Rather than modeling geometric details and simulating global illumination effects, as in traditional computer graphics, only a coarsely detailed 3D model is constructed, and photographs are taken from a discrete set of viewpoints within the environment. The calibrated photographic images are mapped onto the surfaces of the 3D model to construct a representation of the visual appearance of geometric details and complex illumination effects on each surface. Then, novel images can be rendered for arbitrary viewpoints during an interactive visualization session by reconstruction from these hybrid geometry- and image-based representations.

* Tel.: +609-258-1748; fax: +609-258-1771.

This method allows photorealistic images to be generated of visually rich and large environments over a wide range of viewpoints, while avoiding the difficulties of modeling detailed geometry and simulating complex illumination effects.

Applications for GIBR systems include education, commerce, training, telepresence, and entertainment. For example, grammar school students can use such a system to 'visit' historical buildings, temples, and museums. Real estate agents can show a potential buyer the interior of a home for sale interactively via the internet [1]. Distributed interactive simulation systems can train teams of soldiers, fire fighters, and other people whose missions are too dangerous or too expensive to re-create in the real world [2,3]. Entertainment applications can synthesize photorealistic imagery of real-world environments to generate immersive walkthrough experiences for virtual travel and multi-player 3D games [4].

The research challenges in implementing an effective GIBR system are to construct, store, and re-sample a 4D representation for the radiance emanating from each surface of the 3D model.



Previous related methods date back to the movie-map system by Lippman [5]. Greene [6] proposed a system based on environment maps [7] in which images captured from a discrete set of viewpoints are projected onto the faces of a cube. Chen and Williams [8] described a method in which reference images are used with pixel correspondence information to interpolate views of a scene. Debevec et al. [9,10] developed a system in which photographs were used to construct a 3D model made from parameterized building blocks and to construct a 'view-dependent texture map' for each surface. Gortler et al. [11] used approximate 3D geometry to facilitate image reconstruction from their 4D Lumigraph representation. Coorg [12] constructed vertical facades from multiple photographs and mapped diffuse imagery onto the surfaces for interactive visualization. Several walkthrough applications and video games apply 2D view-independent photographic textures to coarse 3D geometry [13]. Other related image-based representations are surveyed in [14], including cylindrical panoramas [15,16], Light Fields [17], and layered depth images [18].

An important step in constructing a GIBR representation is to map pixel samples from every photograph onto the surfaces of a 3D model. The challenge is to develop algorithms that determine which parts of which 3D surfaces are visible from the camera viewpoint of every photograph. This is a classic hidden surface removal (HSR) problem, but with a unique combination of requirements motivated by GIBR systems.

First, unlike algorithms commonly used in computer graphics, the HSR algorithm must resolve visible surfaces with object-space precision. Image-space algorithms may cause small surfaces to be missed or radiance samples to be misaligned on a surface, causing artifacts and blurring in reconstructed images.

Second, the algorithm should compute a complete visibility map for each photograph, encoding not only the visible surfaces but also the visible edges and vertices with their connectivities on the view plane. From the visibility map, a GIBR system can detect pixels covering multiple disjoint surfaces (e.g., along silhouette edges) and avoid mapping them onto any single surface, which causes noticeable artifacts in resampled images. As an example, consider the situation shown in Fig. 1. The image on the left shows a 'photograph' taken with a synthetic camera of a simple 3D model comprising two rooms connected by a door. Using a standard HSR algorithm (e.g., z-buffering), pixels along the silhouette edge on the left side of the doorway might be mapped onto the wall, floor, and ceiling of the smaller room, even though their colors partially represent contributions from the edge of the door frame. The result is a 'line' of incorrectly colored pixels in resampled images (shown in the image on the right). The HSR algorithm must detect silhouette edges so that these artifacts can be avoided.

Third, the HSR algorithm must scale to support very large 3D models. Typical models of interesting real-world environments contain many polygons, most of which are occluded for any given camera viewpoint. For example, consider the building shown in Fig. 2. The left image shows a building (Soda Hall) from the exterior, while the right one shows the floorplan for one of seven floors.

Fig. 1. Image (on left) mapped onto surfaces of simple 3D model without detection of silhouette edges leads to artifacts appearing as a 'line' of partially yellow pixels on wall, floor, and ceiling in a reconstructed image (on right).


Fig. 2. A building (left) and a floorplan (right). The visibility for several camera viewpoints is shown in cross-hatch patterns in the floorplan.

The entire 3D model contains around 10,000 polygons. Yet, for most camera viewpoints, more than 95% of them are hidden from view. To be effective for such large and densely occluded 3D models, a hidden surface removal algorithm must be output-sensitive. That is, the expected case running time should depend only on the complexity of the visible portion of the model, not on the size of the entire model.

Finally, the algorithm should be tuned to accelerate visible surface determination for multiple camera viewpoints in a single execution. Since GIBR systems use many photographs for constructing a GIBR representation, the cost of precomputing a spatial data structure to accelerate hidden surface removal can generally be amortized over many computations for different viewpoints.

Despite the long history of prior work in hidden surface and hidden line removal in computer graphics [19] and computational geometry [20], there are no algorithms currently that meet all the requirements of a GIBR system. Research in computer graphics has focused mostly on HSR algorithms for image synthesis at screen resolution. Example methods include priority ordering [21,22], recursive subdivision [23,24], and depth buffering [25]. Meanwhile, research in computational geometry has focused mostly on proving asymptotic complexities of object-space algorithms. Lower and upper complexity bounds for the hidden surface and hidden line removal problems have been proven to be quadratic in the number of polygon boundaries, and algorithms have been described with optimal performance [26-28]. Yet, there is a dearth of practical object-space algorithms with attractive expected-case performance.

Debevec et al. [10] recently described a visibility algorithm for a GIBR system using both image-space and object-space methods.

First, for each camera viewpoint, polygon IDs are rendered with a z-buffer into an item buffer [29], forming an image-space representation of the visible surfaces. Then, for every front-facing polygon P in the 3D model, a uniform sampling of 3D points is projected onto the image plane and checked against the corresponding entries in the item buffer to form a list of polygons occluding P. For each such occluder, P is clipped in object-space against planes formed by the camera viewpoint and the occluder's boundary. The problems with this method are that it is not object-space precise, potentially missing occluders not found by a discrete set of samples; it is not output-sensitive, clipping every front-facing polygon against all of its occluders; and it does not detect silhouette edges, potentially leading to visibility artifacts in reconstructed images.

The contribution of this paper is an algorithm that computes a visibility map for an arbitrary camera viewpoint. The basic idea is to trace pyramidal beams [30] recursively from a camera viewpoint through a precomputed spatial subdivision of cells (convex polyhedra) connected by 'portals' (transparent boundaries between cells). Jones [31] has used this method to solve a hidden line removal problem for image generation, Teller [32] has used it for occlusion culling in an interactive walkthrough system, and Funkhouser et al. [33] have used it for acoustic modeling. A similar method has recently been developed by Fortune [34] for simulation of radio frequency wave propagation. We extend the recursive beam tracing method to compute a complete visibility map for a set of camera viewpoints and apply it to mapping photographs onto surfaces in a GIBR system. The algorithm executes at object-space precision, it is able to find silhouette edges efficiently, and its execution time is output-sensitive for each camera viewpoint after an initial precomputation.
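For concreteness, the occluder-gathering step of the item-buffer approach of Debevec et al. [10] summarized above can be sketched as follows. This C++ fragment is only an illustration under assumed helper routines (ItemBufferID, SamplePoints, ProjectToPixel, CloserThanZBuffer); the names are ours, not that system's actual interface.

```cpp
// Hedged sketch of the occluder-gathering step in the item-buffer method of [10].
// All types and helper routines below are illustrative stand-ins.
#include <set>
#include <vector>

struct Point3 { float x, y, z; };
struct Pixel  { int x, y; };
struct Polygon;                                            // input polygon of the 3D model

int   ItemBufferID(const Pixel &p);                        // polygon ID stored at pixel p
std::vector<Point3> SamplePoints(const Polygon &P);        // uniform 3D samples on P
Pixel ProjectToPixel(const Point3 &q);                     // camera projection to the image plane
bool  CloserThanZBuffer(const Point3 &q, const Pixel &p);  // depth comparison against the z-buffer

// Collect the IDs of polygons that occlude P at any of its sample points.
std::set<int> FindOccluders(const Polygon &P, int idOfP)
{
    std::set<int> occluders;
    for (const Point3 &q : SamplePoints(P)) {
        Pixel p = ProjectToPixel(q);
        int id = ItemBufferID(p);
        if (id != idOfP && !CloserThanZBuffer(q, p))
            occluders.insert(id);      // another polygon wins the depth test at this sample
    }
    return occluders;                  // P is subsequently clipped against each occluder
}
```

As the text notes, any occluder missed by the discrete samples is missed entirely, which is why this pass is not object-space precise.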


2. System organization

The organization of our complete GIBR system is shown in Fig. 3. The input to the system is: (1) a 3D polygonal model of the environment, and (2) a set of photographic images calibrated with 3D camera viewpoints. The output is a sequence of synthetic images generated in real-time as a simulated observer moves through the environment.

The system is divided into four phases, three of which perform off-line computations to pre-process the input data. During the first preprocessing phase, a spatial subdivision data structure is constructed to represent the topology and geometry of the 3D model. Next, beams are traced through the spatial subdivision to compute a visibility map for each camera viewpoint. Then, during the third and final preprocessing phase, the visibility maps are used to map regions of the calibrated images onto 3D polygons to create a set of radiance maps representing the 4D radiance emanating from each surface of the 3D model. Finally, during the fourth phase, synthetic images are generated from the radiance maps at interactive rates for an arbitrary viewpoint moving through the 3D model under user control.

In this paper, we focus on the first two phases of the system: spatial subdivision and beam tracing. The goal of these two phases is to compute a visibility map for each camera viewpoint so that photographic radiance samples can be mapped onto the surfaces of the 3D model. Detailed descriptions of the last two phases are purposely omitted in this discussion. Although we currently use a view-dependent texture map to store the radiance map for each surface (as in [10]), the reader can imagine use of any GIBR representation of the 4D radiance emanating from each surface of a 3D model that can be constructed from a set of photographs augmented with corresponding visibility maps, e.g., Light Fields [17] or Lumigraphs [11].
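The data flow through the four phases can be summarized in a short driver routine. The C++ outline below is only an illustration; all of the type and function names (WingedPair, TraceVisibility, BuildRadianceMaps, and so on) are hypothetical stand-ins for the system's actual interfaces.

```cpp
// Illustrative outline of the four-phase GIBR pipeline described above.
// Every name here is an assumption, not the paper's code.
#include <vector>

struct Model;          // 3D polygonal model of the environment
struct Photograph;     // calibrated image with a 3D camera viewpoint
struct WingedPair;     // spatial subdivision (Section 3)
struct VisibilityMap;  // per-photograph visibility map (Section 4)
struct RadianceMap;    // per-surface radiance representation

WingedPair    BuildSpatialSubdivision(const Model &m);                    // phase 1
VisibilityMap TraceVisibility(const WingedPair &w, const Photograph &p);  // phase 2
std::vector<RadianceMap> BuildRadianceMaps(const Model &m,
                                           const std::vector<Photograph> &photos,
                                           const std::vector<VisibilityMap> &vmaps);  // phase 3
void InteractiveWalkthrough(const Model &m, const std::vector<RadianceMap> &rmaps);   // phase 4

void RunGIBR(const Model &model, const std::vector<Photograph> &photos)
{
    // Phase 1: partition space and record topological adjacencies.
    WingedPair wp = BuildSpatialSubdivision(model);

    // Phase 2: one visibility map per calibrated photograph.
    std::vector<VisibilityMap> vmaps;
    for (const Photograph &photo : photos)
        vmaps.push_back(TraceVisibility(wp, photo));

    // Phase 3: map photograph pixels onto surfaces to form radiance maps.
    std::vector<RadianceMap> rmaps = BuildRadianceMaps(model, photos, vmaps);

    // Phase 4: render novel views from the radiance maps at interactive rates.
    InteractiveWalkthrough(model, rmaps);
}
```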

Fig. 3. Organization of GIBR system.

Fig. 4. Winged-pair declarations.

The remainder of this paper provides detailed descriptions of our spatial subdivision and beam tracing data structures and algorithms, followed by results of experiments with several 3D virtual environments and a brief conclusion.

3. Spatial subdivision

During the first preprocessing phase, our system builds a spatial subdivision representing a decomposition of 3D space, which we call a winged-pair data structure. The goal of this phase is to partition space into convex polyhedral cells whose boundaries are aligned with polygons of the 3D input model and encode the topological adjacencies of the cells in a data structure enabling output-sensitive traversals of sightlines through 3D space.

The winged-pair data structure is motivated by the well-known winged-edge data structure described by Baumgart [35]. The difference is that the winged-pair describes topological structures one dimension higher than the winged-edge. While the winged-edge represents a 2-manifold, the winged-pair represents a 3-manifold. The reader can think of the winged-pair data structure as a set of 'glued together' winged-edge data structures, each representing the 2-manifold boundary of a single cell. The winged-pair representation is also similar to the facet-edge representation described by Dobkin and Laszlo [36]. Both encode face-edge, face-face, and edge-edge adjacency relationships in a set of structures corresponding to face-edge pairs.


Fig. 5. Pair-pair references.

The difference is that Dobkin and Laszlo store a separate structure for each unique pair of face and edge orientations. For most practical purposes, the difference is insignificant, similar to the one between 2D quad-edge and winged-edge structures. Although storing separate structures for different orientations can eliminate a few conditional checks during traversal operations, it requires extra storage, a straightforward trade-off between time and space. We choose to store exactly one structure for each face-edge pair in our winged-pair representation to simplify code extensions and debugging.

Pseudocode declarations for the winged-pair structure are shown in Fig. 4. Topological adjacencies are encoded in fixed-size records associated with vertices, edges, faces, cells and face-edge pairs. Every vertex stores its 3D location and a reference to one attached edge, every edge stores references to its two vertices and one attached face-edge pair, each face stores references to its two cells and one attached face-edge pair, and every cell stores a reference to one attached face. The face-edge pairs store references to one edge E and to one face F along with adjacency relationships required for topological traversals.

Fig. 6. Pseudocode for the beam tracing algorithm.

Specifically, they store references (spin) to the two face-edge pairs reached by spinning F around E clockwise and counter-clockwise (see Fig. 5) and to the two face-edge pairs (clock) reached by moving around F in clockwise and counter-clockwise directions from E (see Fig. 5). The face-edge pair also stores a bit (direction) indicating whether the orientation of the vertices on the edge is clockwise or counter-clockwise with respect to the face within the pair.

These simple, fixed-size structures make it possible to execute output-sensitive topological traversals through cell, face, edge, and vertex adjacency relationships. For instance, finding all faces on the boundary of a given cell requires O(C_f + C_e) time, where C_f and C_e are the numbers of faces and edges attached to the cell, respectively. As in winged-edge traversals in 2D, simple conditional statements are often required to check the orientation of each structure before moving to the next. For instance, to find the cell adjacent to another across a given face, the C++ code looks like this:

Cell *CellAcrossFace(Face *face, Cell *cell) {
    return (cell == face->cell[0]) ? face->cell[1] : face->cell[0];
}
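As a concrete illustration of these records, the following C++ sketch mirrors the fixed-size declarations described above; the field and type names are our own and may differ from those in Fig. 4.

```cpp
// Hedged C++ sketch of the winged-pair records described above.
// Field and type names are illustrative, not the paper's exact declarations.
struct Vertex; struct Edge; struct Face; struct Cell; struct Pair;

struct Vertex {
    float position[3];   // 3D location
    Edge *edge;          // one attached edge
};

struct Edge {
    Vertex *vertex[2];   // the two endpoints
    Pair *pair;          // one attached face-edge pair
};

struct Face {
    Cell *cell[2];       // the two cells sharing this face
    Pair *pair;          // one attached face-edge pair
    bool opaque;         // coincides with an input polygon (vs. transparent); set during construction
};

struct Cell {
    Face *face;          // one attached face
};

// One record per face-edge pair (F, E).
struct Pair {
    Face *face;          // F
    Edge *edge;          // E
    Pair *spin[2];       // pairs reached by spinning F around E (clockwise, counter-clockwise)
    Pair *clock[2];      // pairs reached by moving around F from E (clockwise, counter-clockwise)
    bool direction;      // orientation of E's vertices with respect to F
};
```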


We build the winged-pair data structure for any 3D model using a Binary Space Partition (BSP) [21], a recursive binary split of 3D space into convex polyhedral regions (cells) separated by planes. To construct the BSP, we recursively split cells by candidate planes selected by the method described in [37]. As BSP cells are split by a polygon P, the corresponding winged-pair cells are split along the plane supporting P, and the faces and edges on the boundary of each split cell are updated to maintain a 3-manifold in which every face is convex and entirely inside or outside every input polygon. As faces are created, they are labeled according to whether they are opaque (coincide with an input polygon) or transparent (split free space). The binary splitting process continues until no input polygon intersects the interior of any BSP cell, leading to a set of convex polyhedral cells whose faces are all convex and cumulatively contain all the input polygons.

The resulting winged-pair is converted to an ASCII representation and written to a file for use by later phases of the GIBR system.
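As a rough illustration of this construction, the following C++ sketch shows the recursive splitting loop under assumed helper routines (SelectSplitPlane, SplitCell, PolygonsIntersectingInterior); these names are ours, and the winged-pair bookkeeping (splitting boundary faces and edges, labeling new faces opaque or transparent) is assumed to happen inside SplitCell.

```cpp
// Simplified sketch of the BSP-driven subdivision described above.
// Helper routines are illustrative stand-ins; SelectSplitPlane would implement
// the heuristic of [37], and SplitCell the winged-pair updates.
#include <vector>

struct Plane { float a, b, c, d; };  // plane equation ax + by + cz + d = 0
struct Polygon;                      // input polygon of the 3D model
struct Cell;                         // winged-pair cell

Plane SelectSplitPlane(const std::vector<Polygon*> &polys);
void  SplitCell(Cell *cell, const Plane &p, Cell **below, Cell **above);
std::vector<Polygon*> PolygonsIntersectingInterior(Cell *cell,
                                                   const std::vector<Polygon*> &polys);

void Subdivide(Cell *cell, const std::vector<Polygon*> &polys)
{
    // Stop when no input polygon intersects the interior of this cell.
    std::vector<Polygon*> inside = PolygonsIntersectingInterior(cell, polys);
    if (inside.empty()) return;

    // Split the cell by a candidate plane supporting one of its polygons;
    // SplitCell also updates the boundary faces and edges of the winged-pair.
    Plane p = SelectSplitPlane(inside);
    Cell *below = nullptr, *above = nullptr;
    SplitCell(cell, p, &below, &above);

    // Recurse into both halves (each call re-filters the relevant polygons).
    Subdivide(below, inside);
    Subdivide(above, inside);
}
```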

4. Beam tracing

In the second phase of our GIBR system, we use a beam tracing algorithm to compute a visibility map for every photograph. Beams containing feasible sightlines from each camera viewpoint are traced via a depth-first traversal through the winged-pair structure, while corresponding convex regions of the visibility map are partitioned recursively. The algorithm is based on recursive beam tracing methods [32-34], and it is related to recursive convex decompositions [38].

The key feature is that topological information stored in the winged-pair data structure (edge-face adjacencies) is used to construct a visibility map with topological information and explicit silhouette edges.

Pseudocode for the beam tracing algorithm is shown in Fig. 6. During the traversal for each camera viewpoint, the algorithm maintains: a visibility map M (a 2-manifold representing the geometry and topology of faces and edges visible to the camera), a current region R (a convex 2D region of the visibility map), a winged-pair W (as described in Section 3), a current cell C (a reference to a cell in the winged-pair structure) and a current beam B (an infinite convex 3D pyramidal beam emanating from the camera viewpoint containing all sightlines passing through R). Initially, M and R are initialized to one rectangular region enclosing the visible portion of the view plane, W is constructed from the 3D model as described in Section 3, C is set to be the cell of W containing the camera, and B is set to the four-sided pyramid corresponding to the view frustum of the camera.

During each recursive step, the function called TraceBeams partitions the current region of the visibility map into multiple convex subregions corresponding to intersections of the current beam with faces on the boundary of the current cell. For each face F_i on the current cell and intersecting the current beam, a convex region R_i is inserted into the visibility map. If F_i is transparent, R_i is recursively refined with a call to TraceBeams in which the current region of the visibility map is set to R_i, the new current cell C_i is set to be the cell adjacent to C across face F_i, and the new current beam B_i is formed by trimming B to include only rays intersecting F_i. Otherwise, F_i is opaque, the recursive search along this path terminates, and R_i is marked as a final region of the visibility map associated with face F_i. Contiguous regions of the visibility map associated with the same opaque winged-pair face are marked as one face during the process.

A nice feature of this algorithm is that it constructs a representation of the visibility map with both topological and geometric information. With the exception of silhouette edges, the topology of the visibility graph matches the topology of corresponding vertices, edges and faces in the winged-pair structure exactly. Silhouette edges can be found explicitly by checking the orientations of the faces attached to visible edges, which are readily available by traversing spin references stored in the winged-pair data structure.
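Fig. 6 gives the author's pseudocode for TraceBeams; as a hedged illustration, the C++ sketch below restates the recursion described above. The Beam, Region, and VisibilityMap types and the geometric helper routines are assumptions of ours, not the paper's actual code.

```cpp
// Hedged sketch of the recursive TraceBeams procedure described above.
// Types and helper routines are illustrative assumptions.
#include <vector>

struct Beam;                   // infinite convex pyramid of sightlines from the camera viewpoint
struct Region;                 // convex 2D region of the view plane
struct VisibilityMap;          // 2-manifold of visible faces, edges and vertices
struct Face { bool opaque; };  // winged-pair face (adjacency fields omitted here)
struct Cell;

// Assumed helpers: winged-pair traversal and beam/region geometry.
std::vector<Face*> BoundaryFaces(Cell *c);                        // O(C_f + C_e) traversal
Cell  *CellAcrossFace(Face *f, Cell *c);                          // as in Section 3
bool   BeamIntersectsFace(const Beam &b, Face *f);
Region ProjectFaceIntoRegion(Face *f, const Beam &b, const Region &r);
Beam   TrimBeamToFace(const Beam &b, Face *f);
void   InsertRegion(VisibilityMap &m, const Region &r, Face *f);  // record a map region

void TraceBeams(VisibilityMap &M, const Region &R, Cell *C, const Beam &B)
{
    // Visit every face on the boundary of the current cell hit by the beam.
    for (Face *F : BoundaryFaces(C)) {
        if (!BeamIntersectsFace(B, F)) continue;

        // Subregion of R covered by the intersection of B with face F.
        Region Ri = ProjectFaceIntoRegion(F, B, R);

        if (F->opaque) {
            // Opaque face: Ri becomes a final region of the visibility map.
            InsertRegion(M, Ri, F);
        } else {
            // Transparent face (portal): recurse into the adjacent cell with
            // the beam trimmed to only those rays that pass through F.
            TraceBeams(M, Ri, CellAcrossFace(F, C), TrimBeamToFace(B, F));
        }
    }
}

// Initial call (per photograph): R covers the view plane, C is the cell
// containing the camera, and B is the camera's four-sided view frustum.
```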

5. Experimental results

We have implemented the algorithms described in the previous sections, and they run on SGI/Irix and PC/Windows computers.

Fig. 7. Test models.

Table 1
Spatial subdivision statistics

Test model    # Polys    # Cells    # Faces    # Edges    # Verts    Time (s)
Rooms              20          6         38         66         35        0.26
Maze              310        315       1485       2113        944        0.84
Arena             665        294       2181       3625       1739        3.96
City             1125        829       4746       7568       3652        3.25
Floor            1772        808       5773       9623       4659       11.36
Bldg           10,057       4504     33,066     54,459     25,898       50.76

Table 2
Beam tracing statistics

Test model   # Polys   # Beams (Min/Avg/Max)   # Faces (Min/Avg/Max)   # Edges (Min/Avg/Max)   Time, ms (Min/Avg/Max)
Rooms             20        1 /    2 /    3        3 /  10 /   18        6 /   20 /   36          1 /    3 /     6
Maze             310        2 /   37 /  100        4 /  32 /   82        8 /   64 /  163          2 /   25 /    60
Arena            665       10 /   57 /  136       33 /  81 /  234       66 /  160 /  456         23 /  108 /   216
City            1125       15 /  642 / 1680        4 / 166 /  382        8 /  324 /  748         15 /  471 /  1092
Floor           1772        6 /   23 /   54       13 /  67 /  183       26 /  132 /  361          9 /   31 /    79
Bldg          10,057        6 /   25 /   67       13 /  73 /  183       26 /  144 /  361         10 /   36 /    97


Fig. 8. Visualization of winged-pair structure and visibility map computed for 'City' model.


To test the effectiveness of our methods, we executed a series of tests with several 3D models (shown in Fig. 7). For each one, we computed a winged-pair data structure and a visibility map for at least 100 camera viewpoints along a typical walkthrough path within the model. All tests were run on an SGI Onyx2 workstation with a 195 MHz R10000 processor.

Results of the spatial subdivision phase are shown in Table 1. For each test model, we list how many input polygons it has (# Polys), along with the numbers of cells, faces, edges, and vertices in the resulting winged-pair structure and the wall-clock time (in seconds) taken by the spatial subdivision algorithm. We note that the complexity of the winged-pair grows linearly with the numbers of polygons for these models. Also, the spatial subdivision processing times are reasonably quick (less than one minute), even for complex 3D models such as a small city or a large building.

Table 2 shows results of the beam tracing phase. The first two columns list the test model and how many input polygons it has. The remaining columns list the minimum, average, and maximum statistics measured over all tested camera viewpoints in each model. Specifically, groups of three columns list the numbers of beams traced by our algorithm, the numbers of faces and edges in the resulting visibility maps, and the wall-clock times (in milliseconds) required by our algorithm for each viewpoint. From these results, we observe that the time required by our algorithm correlates closely with the number of beams traced, and not necessarily with the number of polygons in the 3D model. For example, although the 'Floor' model contains polygons for only one of five floors in the 'Building' model, the statistics gathered during our tests with both models are similar because the same viewpoint path was used and the complexity of the computed visibility maps was similar. Similarly, although the 'Building' model contains around nine times more polygons than the 'City' model (10,057 versus 1125), the numbers of beams traced, the computation times, and the complexities of the visibility maps measured in tests are far less for the 'Building' due to the dense occlusions of walls. Finally, we note that the maximum time required to compute the visibility map in all our tests was a little more than one second.

Fig. 8 shows visualizations of our algorithms captured from an interactive program computing the visibility map for viewpoints in the 'City' model. The top two images ((a) and (b)) show the winged-pair structure constructed by our system. In these images, every face is drawn with a unique color, edges are drawn as solid white lines, and vertices are drawn as green dots. Note how few input polygons are split by binary space partitioning planes. The next three images ((c), (d), and (e)) show views from one camera flying over the city. As before, every face is drawn with a unique color, but computed silhouette edges are also shown as wide white lines in image (d), and intersections between beams and winged-pair faces are shown as yellow lines in image (e).


The bottom-right image (f) shows a bird's-eye view of the set of surfaces (blue polygons) visible to the viewpoint (looking from the bottom-left corner of the image towards the top-right corner) overlaid with edges of the winged-pair structure (white lines).

The images in the bottom row of Fig. 8 illustrate the most significant problem with the recursive beam tracing approach: beams get fragmented by cell boundaries as they are traced through free space [32,33]. In theory, the number of beams can be exponential in the number of winged-pair faces traversed. In practice, the number of beams traced depends on the complexity of the visible region of the model. As a result, these methods are best suited for use in densely occluded environments, such as building interiors. In future work, we plan to pursue topological beam tracing methods in which beams are split only at silhouette edges (as in [34]).

6. Conclusion

This paper presents data structures and algorithms useful for mapping images onto surfaces in a hybrid geometry- and image-based rendering system. Our method uses a preprocessing phase to construct a winged-pair data structure encoding the topology and geometry of the 3D input model. A second phase traces beams through the winged-pair structure to find visible surfaces. The beam tracing algorithm computes visible surfaces with object-space precision, it is able to find silhouette edges efficiently, and its execution time depends only on the complexity of the visible region for each camera viewpoint. Topics for future work include investigation of how topological relationships can be used to construct, store, and sample radiance maps more efficiently.

Acknowledgements

The author would like to acknowledge Steve Fortune and David Dobkin for their helpful discussions. Thanks also go to Seth Teller for use of the 'Maze' test model, and Bruce Naylor for the 'Arena' test model. Finally, I would like to thank Wilmot Li and Wagner Correa for their contributions to the beam tracing implementation.

References

[1] Homes M. www.modernhomes.com/demo.html, 1999.
[2] Calvin J, Dickens A, Gaines B, Metzger P, Miller D, Owen D. The SIMNET virtual world architecture. Proceedings of the IEEE Virtual Reality Annual International Symposium, September 1993, p. 450-55.


[3] Macedonia MR, Zyda MJ, Pratt DR, Brutzman DP, Barham PT. Exploiting reality with multicast groups. IEEE Computer Graphics and Applications 1995;15(5):38-45.
[4] id Software. Quake, 1996.
[5] Lippman A. Movie-maps: an application of the optical videodisc to computer graphics. Computer Graphics 1980;14(3):32-42.
[6] Greene N. Environment mapping and other applications of world projections. IEEE Computer Graphics and Applications 1986;6(11):21-9.
[7] Blinn JF, Newell ME. Texture and reflection in computer generated images. Communications of the ACM 1976;19:542-6.
[8] Chen SE, Williams L. View interpolation for image synthesis. In: Kajiya JT, editor. Computer Graphics (SIGGRAPH '93 Proceedings), vol. 27, August 1993, p. 279-88.
[9] Debevec PE, Taylor CJ, Malik J. Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach. In: Rushmeier H, editor. SIGGRAPH 96 Conference Proceedings, Annual Conference Series, ACM SIGGRAPH, Addison-Wesley, Reading, MA, held in New Orleans, Louisiana, 04-09 August 1996, p. 11-20.
[10] Debevec PE, Yu Y, Borshukov GD. Efficient view-dependent image-based rendering with projective texture-mapping. Eurographics Rendering Workshop, June 1998, p. 105-16.
[11] Gortler SJ, Grzeszczuk R, Szeliski R, Cohen MF. The Lumigraph. In: Rushmeier H, editor. SIGGRAPH 96 Conference Proceedings, Annual Conference Series, ACM SIGGRAPH, Addison-Wesley, Reading, MA, held in New Orleans, Louisiana, 04-09 August 1996, p. 43-54.
[12] Coorg S. Pose imagery and automated 3-D modeling of urban environments. Ph.D. thesis, Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, September 1998.
[13] Jepson W, Ligget R, Friedman S. An environment for real-time urban simulation. In: Hanrahan P, Winget J, editors. 1995 Symposium on Interactive 3D Graphics, ACM SIGGRAPH, April 1995, p. 165-6. ISBN 0-89791-736-7.
[14] Debevec P, Gortler S. Image-based modeling and rendering. SIGGRAPH 98 Course Notes. ACM SIGGRAPH, Addison-Wesley, Reading, MA, July 1998.
[15] Chen SE. QuickTime VR - an image-based approach to virtual environment navigation. In: Cook R, editor. SIGGRAPH 95 Conference Proceedings, Annual Conference Series, ACM SIGGRAPH, Addison-Wesley, Reading, MA, held in Los Angeles, California, 06-11 August 1995, p. 29-38.
[16] McMillan L, Bishop G. Plenoptic modeling: an image-based rendering system. In: Cook R, editor. SIGGRAPH 95 Conference Proceedings, Annual Conference Series, ACM SIGGRAPH, Addison-Wesley, Reading, MA, held in Los Angeles, California, 06-11 August 1995, p. 39-46.
[17] Levoy M, Hanrahan P. Light field rendering. In: Rushmeier H, editor. SIGGRAPH 96 Conference Proceedings, Annual Conference Series, ACM SIGGRAPH, Addison-Wesley, Reading, MA, held in New Orleans, Louisiana, 04-09 August 1996, p. 31-42.
[18] Shade JW, Gortler SJ, He L, Szeliski R. Layered depth images. SIGGRAPH 98 Conference Proceedings, Annual Conference Series, ACM SIGGRAPH, Addison-Wesley, Reading, MA, July 1998, p. 231-42.

[19] Sutherland IE, Sproull RF, Schumacker RA. A characterization of ten hidden-surface algorithms. ACM Computing Surveys 1974;6(1):1-55.
[20] Dorward SE. A survey of object-space hidden surface removal. International Journal of Computational Geometry and Applications 1994;4:325-62.
[21] Fuchs H, Kedem ZM, Naylor BF. On visible surface generation by a priori tree structures. Computer Graphics (SIGGRAPH '80 Proceedings), vol. 14, July 1980, p. 124-33.
[22] Newell ME, Newell RG, Sancha TL. A new approach to the shaded picture problem. Proceedings of the ACM National Conference, 1972, p. 443.
[23] Warnock J. A hidden-surface algorithm for computer generated half-tone pictures. Technical Report TR 4-15, NTIS AD-733 671, University of Utah, Computer Science Department, 1969.
[24] Weiler K, Atherton P. Hidden surface removal using polygon area sorting. Computer Graphics (SIGGRAPH '77 Proceedings) 1977;11(2):214-22.
[25] Catmull EE. A subdivision algorithm for computer display of curved surfaces. PhD thesis, Dept. of CS, U. of Utah, December 1974.
[26] Dévai F. Quadratic bounds for hidden line elimination. Proceedings of the Second Annual ACM Symposium on Computational Geometry, 1986, p. 269-75.
[27] McKenna M. Worst-case optimal hidden-surface removal. ACM Transactions on Graphics 1987;6:19-28.
[28] Schmitt A. On the time and space complexity of certain exact hidden line algorithms. Report 24/81, Fakultät Informatik, Univ. Karlsruhe, Karlsruhe, West Germany, 1981.
[29] Weghorst H, Hooper G, Greenberg DP. Improved computational methods for ray tracing. ACM Transactions on Graphics 1984;3(1):52-69.
[30] Heckbert PS, Hanrahan P. Beam tracing polygonal objects. In: Christiansen H, editor. Computer Graphics (SIGGRAPH '84 Proceedings), vol. 18, July 1984, p. 119-27.
[31] Jones CB. A new approach to the 'hidden line' problem. Computer Journal 1971;14(3):232-7.
[32] Teller S. Visibility computations in densely occluded polyhedral environments. PhD thesis (also TR UCB/CSD 92/708), CS Dept., UC Berkeley, 1992.
[33] Funkhouser TA, Carlbom I, Elko G, Pingali G, Sondhi M, West J. A beam tracing approach to acoustic modeling for interactive virtual environments. SIGGRAPH 98 Conference Proceedings, Annual Conference Series, ACM SIGGRAPH, Addison-Wesley, Reading, MA, July 1998, p. 21-32.
[34] Fortune S. Topological beam tracing. Proceedings of the ACM Symposium on Computational Geometry, 1999, p. 59-68.
[35] Baumgart BG. Geometric modeling for computer vision. AIM-249, STAN-CS-74-463, CS Dept, Stanford U., October 1974.
[36] Dobkin DP, Laszlo MJ. Primitives for the manipulation of three-dimensional subdivisions. Algorithmica 1989;4:3-32.
[37] Naylor B. Constructing good partition trees. Proceedings of Graphics Interface '93, Toronto, Ontario, Canada, May 1993, Canadian Information Processing Society, p. 181-91.
[38] Naylor BF. Partitioning tree image representation and generation from 3D geometric models. Proceedings of Graphics Interface '92, May 1992, p. 201-12.