Computers & Graphics 35 (2011) 483–491
SMI 2011: Full Paper
Localized Cocone surface reconstruction

Tamal K. Dey, Ramsay Dyer, Lei Wang

Department of Computer Science and Engineering, The Ohio State University, 2015 Neil Avenue, Columbus, OH 43210, USA
Article history: Received 9 December 2010; received in revised form 14 March 2011; accepted 14 March 2011; available online 30 March 2011.

Abstract
We introduce a surface reconstruction algorithm suitable for large point sets. The algorithm is an octree-based version of the Cocone reconstruction algorithm [4], allowing independent processing of small subsets of the total input point set. When the points are sufficiently sampled from a smooth surface, the global guarantee of topological correctness of the original algorithm is preserved, together with the geometric accuracy guarantees. © 2011 Elsevier Ltd. All rights reserved.
Keywords: Surface reconstruction; Delaunay triangulation; Large data
1. Introduction

The surface reconstruction problem refers to the task of creating a representation of an unknown surface given only a finite set of sample points from that surface. This may be viewed as a family of problems, where the nature of the specific problem varies greatly with the assumptions imposed on the sample set, and with the limitations to be tolerated in the algorithmic solution. Typically some form of regularity is assumed for the unknown surface; for example, it may be assumed to be smooth, or piecewise smooth.

Our algorithm provides a solution to the problem when the sample set is very large but of good quality, and guarantees on the topological correctness and geometric accuracy of the output are required. In the implementation of the algorithm, the guarantees on the topological and geometric correctness of the output are maintained only for datasets which meet an additional assumption on the sampling density, which is not required in theory. This restriction enables the implementation to also gracefully accommodate datasets which do not meet the quality requirements of the theoretical algorithm, although there are no guarantees in this case.

1.1. Related work

Since initial investigations in the early 1980s [9,29], the problem of surface reconstruction has remained an active area of research. A summary of the first two decades of work may be found in the survey by Schall and Samozino [31].
Seminal work by Amenta and Bern [1] presented the Crust algorithm, and established sampling conditions for a smooth surface which yield quantifiable guarantees on the quality of the output triangle mesh as a representation of the unknown surface. The Cocone algorithm [4] was then introduced as a simplification of the Crust algorithm, guaranteeing an output mesh that is homeomorphic to the unknown surface, and geometrically close to it; bounds are given for the distance between a point on any triangle in the mesh and the closest point on the surface, as well as for the difference between the normal vectors at these two points.

The Cocone algorithm exploits the Delaunay tetrahedralization of the sample set with respect to the ambient space. A problem with such algorithms is that they do not generally scale well. For large datasets, the data structures employed exceed the system memory and processing speed grinds to a crawl. To address this issue, Dey et al. [16] developed an octree-based version of the Cocone algorithm such that the Delaunay tetrahedralization is only performed upon small clusters of the total point set. However, this algorithm did not provide the guarantees of topological correctness and geometric accuracy that are a principal attraction of the original Cocone algorithm. In this paper we show that the octree-based algorithm of Dey et al. [16] can be modified so as to retain these guarantees.

Recently there has been a focus on creating robust surface reconstruction algorithms capable of handling very large, and often noisy, raw point set data. The Poisson surface reconstruction algorithm introduced by Kazhdan et al. [22] handles noisy data robustly. The algorithm was later extended to handle large point sets in a streaming surface reconstruction algorithm [7], and also in a parallel processing version [8]. These algorithms can handle noisy input data; however, they require that the input sample
points be oriented: each point is equipped with a vector indicating the outward normal direction of the sample point with respect to the unknown closed surface. Earlier work by Ohtake et al. [27] based upon moving least squares projection can also handle large incomplete datasets; it too requires the input points to be oriented. Manson et al. [25] introduced a streaming surface reconstruction algorithm based on wavelets. This algorithm is capable of quickly processing enormous sample sets, and it is robust with respect to noise in the sample set. It too requires that the sample points be oriented.

Recent work presented by Mullen et al. [24] introduced an algorithm that is capable of handling a noisy sample set that includes outliers. Previous surface reconstruction algorithms which handle noise generally assume a noise model that restricts the sample points to lie within a small distance of the unknown surface, and outliers are not well tolerated. Their algorithm deals well with outliers, but it is also an algorithm which acts globally on the input point set. The algorithm avoids the need for oriented input data, and demonstrates that assigning orientations to the points in unoriented data is a nontrivial task which cannot be considered a simple preprocessing step for algorithms which require it.

Another work related to reconstruction from large point clouds is that of Allègre et al. [3]. Like our algorithm, this one does not require the data to be oriented, and it also employs the ambient Delaunay structure, but it decimates the point cloud itself before reconstructing, finishing with a reconstruction on an adaptive subsampling of the input point set. These algorithms are designed to handle large noisy point sets; however, none of them provide any guarantees on the approximation quality of the output surface with respect to the original unknown surface.

Kil and Amenta [21] took a different approach. They developed a surface reconstruction algorithm which processes the sample points in parallel on the GPU. They are thus able to quickly process large point sets. Their algorithm requires that the sample set be noise free, i.e., that the sample points lie on the unknown smooth surface, and also that they satisfy a local uniformity condition. This means that there is a minimal distance between any given sample point and its nearest neighbour. They argue that although these restrictions on the sampling conditions are not reasonable to expect from a raw sample point set from an input device, such point sets can be produced and maintained in controlled settings, and the simplicity of their algorithm is a motivation for such processing. Our algorithm also requires a noise free sample set, but we do not require the local uniformity constraint when the sampling is sufficiently dense. Also, Kil and Amenta did not provide correctness guarantees with their algorithm.

There have been localized reconstruction algorithms which provide guarantees when the sample set is sufficiently dense and noise free. Funke and Ramos [18] demonstrated that the Cocone reconstruction algorithm could be modified so that no global computation of the Delaunay tetrahedralization is required. This involved decimating the input point set so that the working point set is a locally uniform subset of the original input. The resulting algorithm has an $O(n \log n)$ running time. However, it is a complicated theoretical algorithm and no implementation of it has been produced. Dumitriu et al.
[13] also developed a surface reconstruction algorithm with correctness guarantees. They use the technique introduced by Funke and Ramos [18] to extract a locally uniform point set from a dense noise free input sample set. Their primary contribution is the observation that purely local operations are sufficient to identify a unique piecewise linear approximation to
the surface. This is in contrast with the Cocone reconstruction algorithm, which requires a manifold extraction step in which an orientation decision is propagated around the entire sample set. Kil and Amenta [21] employed a similar technique, but Dumitriu et al. gave it a foundation of theoretical correctness. Although the algorithm is quite complicated, Dumitriu et al. have produced an implementation [14]. However, this implementation does not include the algorithm for extracting a locally uniform subset of sample points from an initial dense sample set. Thus their implementation requires that the input dataset be locally uniform.

The more robust reconstruction algorithms generally do not attempt to interpolate the given datapoints. Mederos et al. [26] proceed by resampling the input points using local moving least squares patches. The resulting point set is then meshed with a Delaunay-based technique. More commonly, an implicit function is generated to represent the surface. This has the advantage of guaranteeing a watertight surface. Samozino et al. [30] construct a signed distance function by way of radial basis functions centred on Voronoi vertices near the medial axis. Hornung and Kobbelt [20] take a volumetric approach and produce the final mesh via a graph cut algorithm on the volumetric grid. These techniques can handle larger datasets, but the resolution of the output is limited. More recently, Alliez et al. [5] introduced an elegant normal approximation and global optimization technique to generate an implicit function. The method is robust, but appears to be limited to smaller datasets. These algorithms do not require oriented input points, and their common property of not attempting to interpolate the original points gives them flexibility and robustness. However, there has been no quantifiable analysis of the quality of the output of these approximation algorithms; they come with no guarantees, besides the watertight property of the implicit representations.

Finally, in the reconstruction algorithm presented by Guibas and Oudot [19], guarantees are provided even in the presence of noise. The unknown surface is not required to be smooth, but only to conform to a weaker regularity requirement. The algorithm incrementally builds a locally uniform subset of the dense input sample set in a relatively simple manner compared with the technique of Funke and Ramos [18]. However, the algorithm requires operations on the full global sample set, and the extraction of the embedded triangle mesh requires the computation of the Delaunay tetrahedralization for the full locally uniform subset of sample points.
1.2. Contribution

Our algorithm is a modification of one that has been previously published [16]. Our main contribution is the recovery of the correctness guarantees. This was inspired by the recent success of Dey et al. [17], where a localized version of the Delaunay refinement surface meshing algorithm of Chew [11] was introduced. They were able to produce guarantees on the quality of the output by demonstrating that the generated mesh would in fact be the restricted Delaunay triangulation of the entire point set with respect to the surface being meshed. The sampling radius is a function on the surface which bounds the maximum distance to the nearest sample point. The work of Dey et al. [17] deals with a constant sampling radius and a known surface, whereas we are working with a sampling radius that is unknown, except that it depends on the local feature size of an unknown surface. However, we are able to extend the essential observation of Dey et al. to this case. This insight is embodied in Lemma 3.1.
The localized Cocone reconstruction algorithm presented by Dey et al. [16] uses an octree to partition the point set into smaller patches which are processed individually. We use the same idea. However, Dey et al. use a fixed-size buffer around each node to create an overlap between adjacent nodes. They recognized that if the width of this buffer was sufficiently large with respect to the local feature size, then the globally correct cocone triangles would be computed across the node boundaries, and the reconstructed patches would stitch together seamlessly. However, they did not discover how to ensure this, and the consequently arbitrary choice of a fixed buffer size can never be guaranteed to have sufficient width. We adaptively build a buffer around each node and demonstrate that it is possible to ensure that it is sufficiently wide.

Our algorithm has the ability to handle large sample sets, and produces guarantees when the sample points lie with sufficient density on a smooth surface. There is no requirement that the sample set conform to a local uniformity constraint, or that the sample points be oriented. The algorithm is simple and has been implemented.

As an implementation choice we have introduced heuristics which enable the algorithm to handle datasets which do not meet the theoretical sampling requirements. When these heuristics are employed, there is no longer any theoretical guarantee on the output. We therefore required a local mechanism to detect insufficient sampling. To this end we demand an additional assumption on the sampling for which the output will be guaranteed: the sampling must have bounded local density, as described in Section 4. Although the idea of bounded local density has already been introduced [2], the stronger local uniformity assumption is more often demanded for localized algorithms. The local uniformity assumption constrains the minimum distance between sample points, and dense sample sets will rarely meet this constraint. To our knowledge, there is no implementation available, short of a full-blown reconstruction algorithm [19], which can filter a dense sample set into one which meets this constraint. In contrast, the bounded local density assumption is only violated by pathological sample sets which can exist in theory, but are unlikely to appear in practice.

Other algorithms that can handle similarly large sample sets either require more stringent assumptions on the sample set, or they provide no guarantees, or they are not amenable to implementation.
2. Background

We briefly review here the Cocone reconstruction algorithm [4]. A detailed exposition of this algorithm may be found in the book by Dey [12]. The Cocone reconstruction algorithm takes as input a set of points, S, that is densely sampled from some unknown compact smooth surface without boundary, $\Sigma \subset \mathbb{R}^3$. Using the framework established by Amenta and Bern [1], the sampling density is specified by a positive function, $\mathrm{lfs}(x)$, called the local feature size, which is defined for any $x \in \Sigma$ as the distance from x to the medial axis of $\Sigma$. The local feature size is Lipschitz continuous, with Lipschitz constant one: $|\mathrm{lfs}(x) - \mathrm{lfs}(y)| \le d(x,y)$, where $d(x,y)$ is the Euclidean distance between x and y. The sampling density is given by the requirement that for any $x \in \Sigma$ there is a $p \in B(x; \varepsilon\,\mathrm{lfs}(x)) \cap S$, where $B(x; r)$ is the Euclidean open ball centred at x and with radius r.

Given such an input, the Cocone algorithm constructs $D(S)$, the Delaunay tetrahedralization of S, and then computes a manifold triangle mesh E whose vertices are S. The mesh E is a substructure of $D(S)$, and so is guaranteed to be embedded.

The algorithm proceeds by first estimating $\hat{n}_p$, the normal to the unknown surface $\Sigma$, at each sample point $p \in S$. The original algorithm chose the pole vector, $\hat{v}_p$, the vector from p to the farthest point from p in $V(p)$, the Voronoi cell of p. The pole vectors do not define a consistent orientation for the surface normals; only the normal line indicated by the pole vector is reliable in general. Then, the cocone at p is defined as the set
$$C_p = \left\{ x \in V(p) \;\middle|\; \angle_a(\hat{v}_p, \vec{px}) \ge \frac{3\pi}{8} \right\},$$
where $\angle_a(a,b)$ denotes the acute angle between the lines generated by vectors a and b.

The algorithm then filters from $D(S)$ the set of cocone triangles. A triangle is a cocone triangle if its dual Voronoi edge intersects the cocones of each of its vertices. It is shown that a cocone triangle with p as a vertex has a circumradius bounded by $O(\varepsilon\,\mathrm{lfs}(p))$, and thus the angle between the normal to the triangle and $\hat{n}_p$ is bounded by $O(\varepsilon)$. The cocone triangles contain a manifold triangle mesh that is homeomorphic to $\Sigma$. This is guaranteed because the restricted Delaunay triangulation of S with respect to $\Sigma$ is shown to have this property and to be contained in the set of cocone triangles.

Once the cocone triangles have been collected, there are two further stages to obtain the desired mesh. First is the pruning stage. Any cocone triangle that is incident to a sharp edge is removed from the set of candidate triangles. An edge e is sharp if all the cocone triangles incident to e are contained in a wedge with opening angle less than $\pi/2$ on e. The restricted Delaunay triangles are not removed in this process since together they form a manifold mesh with no sharp edges.

The final stage is the manifold extraction stage. The cocone triangles that survive the trimming stage do not necessarily form a manifold. However, each connected component of the trimmed cocone triangle complex has the form of a manifold with "bubbles". In other words, a component of the trimmed cocone triangles is contained between a manifold exterior surface and a manifold interior surface (the two are not disjoint in general). The manifold extraction stage chooses a cocone triangle on the convex hull of the component, and then extracts the outer manifold surface by a region growing algorithm, always choosing the outer triangle adjacent to triangles already chosen.

3. Algorithm
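As an illustration of the cocone membership condition above (not part of the paper's own code), a minimal C++ sketch follows; the Vec3 type and the round-off clamp are our own assumptions:

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static double norm(const Vec3& a) { return std::sqrt(dot(a, a)); }

// Acute angle between the *lines* spanned by a and b, in [0, pi/2].
static double acuteAngle(const Vec3& a, const Vec3& b) {
    double c = std::fabs(dot(a, b)) / (norm(a) * norm(b));
    return std::acos(c > 1.0 ? 1.0 : c);   // clamp against round-off
}

// x belongs to the cocone C_p iff the line px makes an angle of at
// least 3*pi/8 with the line of the pole vector v_p, i.e. x lies
// within pi/8 of the estimated tangent plane at p.
static bool inCocone(const Vec3& p, const Vec3& v_p, const Vec3& x) {
    const double kPi = 3.14159265358979323846;
    Vec3 px{ x.x - p.x, x.y - p.y, x.z - p.z };
    return acuteAngle(v_p, px) >= 3.0 * kPi / 8.0;
}

int main() {
    Vec3 p{0, 0, 0}, pole{0, 0, 1};
    std::printf("%d\n", inCocone(p, pole, Vec3{1, 0, 0.1})); // near tangent plane: 1
    std::printf("%d\n", inCocone(p, pole, Vec3{0, 0, 1}));   // along the pole line: 0
}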
The idea of our algorithm is that we can use a spatial partition to divide S into small manageable subsets, and then perform the Cocone algorithm on each subset in such a way that all the pieces match up to one consistent whole. We choose an octree for the spatial subdivision.

Three related obstacles have to be overcome with this approach. The first is that the Cocone algorithm depends upon global information supplied by the Delaunay tetrahedralization of S. For example, the pole vector that is used to estimate the normal line at each sample point may not be reliable if only a portion of the samples is examined. The work of Funke and Ramos [18] confirmed that this obstacle can be overcome in principle. The second problem is that we are now processing small portions of the surface; in general these will be patches with boundaries. The Cocone algorithm has not been adapted to provide guarantees for surfaces with boundaries. The third problem is that the individual patches that are produced must be consistent across these boundaries so as to form a closed manifold output.

Given a locally determined estimate of the normal line through a point, p, the resulting cocone of p is a local object in that, when the sampling conditions are met, it is completely determined by nearby samples. However, without some kind of local uniformity
Fig. 1. Meshes produced by our algorithm. Different colours distinguish the mesh patches produced in different nodes. The patches share their common boundaries and exhibit global consistency. (a) Bimba with 5.6M points. (b) Stringy with 4.6M points. (c) Bracelet with 5.8M points.
Fig. 2. A schematic overview of the algorithm: build octree → estimate normals → find cocones and prune → extract manifold → output surface.
constraint on the sampling, it is impossible to know a priori how many k-nearest neighbours of p are required to exactly construct the cocone $C_p$. Our principal observation, summarized by Lemma 3.1, is that we can at least determine when we have included enough local neighbours to completely determine $C_p$.

Algorithm 1. LocCocone(S, k).
1: compute octree
2: compute normals
3: while Q not empty do
4:   n := dequeue(Q)
5:   if first processing of n then
6:     FindCocones(n)
7:   end if
8:   Prune(n, Q)
9: end while
10: E := ExtractManifold()
11: return E

Our algorithm is summarized schematically in Fig. 2, and in pseudocode in Algorithm 1. It takes an integer parameter k, and then constructs an octree such that no leaf node contains more than k points of S. The resulting leaf nodes are dubbed user nodes. The octree is then further subdivided so that no leaf node contains more than $\kappa$ sample points, and we refer to the leaf nodes of the refined tree as fine leaf nodes. In the following an unqualified reference to a node always refers to a user node. A sketch of this two-level construction is given below.

The next step is to estimate the normal line to the surface at each sample point. We do not use the pole vector to estimate $\hat{n}_p$, but instead choose the normal to a particular triangle as our estimate. As described in Section 3.1, the operation is essentially local for each point, involving only points contained in the fine leaf node containing p, and possibly points from other nearby fine leaf nodes.
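A minimal sketch of the two-level octree refinement described above follows; the point type, the cube representation and the depth guard are our own assumptions, and re-running the split on each user node with threshold κ yields the fine leaf nodes:

#include <array>
#include <cstddef>
#include <memory>
#include <vector>

struct Pt { double x, y, z; };

struct OctNode {
    Pt lo, hi;                                   // bounding cube corners
    std::vector<Pt> pts;                         // points, if a leaf
    std::array<std::unique_ptr<OctNode>, 8> kid; // children, if split
    bool isLeaf() const { return kid[0] == nullptr; }
};

// Split n until no leaf holds more than cap points (cap = k for user
// nodes, cap = kappa for fine leaf nodes). maxDepth guards against
// near-coincident points forcing unbounded recursion.
static void split(OctNode& n, std::size_t cap, int maxDepth) {
    if (n.pts.size() <= cap || maxDepth == 0) return;
    const Pt m{ (n.lo.x + n.hi.x) / 2, (n.lo.y + n.hi.y) / 2,
                (n.lo.z + n.hi.z) / 2 };
    for (int i = 0; i < 8; ++i) {
        n.kid[i] = std::make_unique<OctNode>();
        n.kid[i]->lo = { (i & 1) ? m.x : n.lo.x, (i & 2) ? m.y : n.lo.y,
                         (i & 4) ? m.z : n.lo.z };
        n.kid[i]->hi = { (i & 1) ? n.hi.x : m.x, (i & 2) ? n.hi.y : m.y,
                         (i & 4) ? n.hi.z : m.z };
    }
    for (const Pt& p : n.pts) {                  // route points to octants
        int i = (p.x >= m.x ? 1 : 0) | (p.y >= m.y ? 2 : 0) | (p.z >= m.z ? 4 : 0);
        n.kid[i]->pts.push_back(p);
    }
    n.pts.clear();
    for (auto& c : n.kid) split(*c, cap, maxDepth - 1);
}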
Then all the leaf nodes are placed in a queue for processing. Denote by $S_n$ the subset of S that is contained in node n. The first time n is visited for processing, the set, $T_n$, of cocone triangles that have a vertex in $S_n$ is computed by the FindCocones() function, as described in Section 3.2. The Prune() algorithm then removes all triangles from $T_n$ which have a sharp edge that has a vertex in $S_n$. As described in Section 3.3, if Prune() removes a triangle, t, that has a vertex in another node, Z, then t is placed in a list $K_Z$, and Z is returned to the queue if it is not already there. Thus any one node will in general be pulled from the queue several times before trimming is complete. The manifold extraction step works in the same way, with all the nodes going to a queue. As described in Section 3.4, information is passed to a node n by means of a list, $O_n$, of cocone triangles on its boundary that have been selected and given an orientation. Thus the data structure for a node n contains four substructures, as listed in Structure 1.

Structure 1. Node: n.
$S_n$: sample points specifically in n
$T_n$: candidate cocone triangles
$K_n$: triangles to be trimmed (info from other nodes)
$O_n$: oriented triangles (info from other nodes)
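In C++, Structure 1 might be transcribed as follows; the index-based representation and the orientation flag are our own choices, not the paper's:

#include <utility>
#include <vector>

struct Triangle { int a, b, c; };    // indices into the global sample set

struct Node {
    std::vector<int>      S;         // S_n: samples assigned to this node
    std::vector<Triangle> T;         // T_n: candidate cocone triangles
    std::vector<Triangle> K;         // K_n: triangles other nodes ask us to trim
    std::vector<std::pair<Triangle, bool>> O; // O_n: triangles with a chosen orientation
};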
3.1. Normal estimation

In order to estimate the normal at a sample point p, we find a triangle with vertex p that is guaranteed to lie close to the tangent plane of $\Sigma$ at p. Any triangle that can be demonstrated to have a circumradius bounded by the sampling radius will suffice for this purpose. This follows from the fact [12, Lemma 3.5] that the angle between the normal to such a triangle t, with circumradius $r_t$, and the surface normal at p is bounded by $O(r_t/\mathrm{lfs}(p)) = O(\varepsilon)$. In particular, the cocone triangles are subject to this bound.

Let $q \in S$ be the nearest neighbour of p. We choose the triangle $t = [p, q, u]$, where $u \in S$ is chosen to maximize the angle $\angle qup$, which is necessarily acute. This is a Delaunay triangle [26], which has been shown to be a cocone triangle [28]. The construction of t is performed within the fine leaf node, $n_p$, that contains p. If the ball $B(x; 2r_t)$ is contained in $n_p$, then t is necessarily the desired triangle with respect to the global point set. Otherwise the construction is redone, this time including for consideration the sample points from all the fine nodes which intersect the ball. In this way we are able to guarantee that we have obtained the desired cocone triangle.
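A brute-force sketch of this construction is given below; for clarity it scans the whole point set (and assumes at least three points), whereas the actual implementation confines the search to the fine leaf node of p and its neighbours:

#include <cmath>
#include <cstddef>
#include <vector>

struct V3 { double x, y, z; };

static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static double len(V3 a) { return std::sqrt(dot(a, a)); }

// Estimate of the (unoriented) normal line at pts[ip]: q is the
// nearest neighbour of p, u maximizes the angle qup, and the normal
// of the triangle t = [p, q, u] is returned.
V3 estimateNormal(const std::vector<V3>& pts, std::size_t ip) {
    const V3 p = pts[ip];
    std::size_t iq = (ip == 0) ? 1 : 0;          // nearest neighbour q
    for (std::size_t i = 0; i < pts.size(); ++i)
        if (i != ip && len(sub(pts[i], p)) < len(sub(pts[iq], p))) iq = i;
    const V3 q = pts[iq];
    std::size_t iu = 0;                          // vertex u maximizing qup
    double best = -1.0;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        if (i == ip || i == iq) continue;
        V3 a = sub(q, pts[i]), b = sub(p, pts[i]);
        double c = dot(a, b) / (len(a) * len(b));
        double ang = std::acos(c > 1.0 ? 1.0 : (c < -1.0 ? -1.0 : c));
        if (ang > best) { best = ang; iu = i; }
    }
    return cross(sub(q, p), sub(pts[iu], p));    // normal of t = [p, q, u]
}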
Using the normal to a triangle t constructed in this way allows us to avoid the global computation that would be necessary to find the pole vector at p. Since t is a cocone triangle, the error in this normal estimate is linearly bounded by $\varepsilon$, as it is for the pole vector. Although the constant obtained [4] is larger than it is for the pole vector, the guarantees of topological and geometric correctness can be obtained with the exact same arguments as presented by Amenta et al. [4].

3.2. Cocone verification

The first time a node n is processed, $T_n$ needs to be initialized. The FindCocones() subroutine takes the unprocessed node n as input, and upon completion ensures that $T_n$ contains all cocone triangles that have a vertex in $S_n$, and only those triangles.

Let $\tilde{C}_p$ be the cocone of p with respect to some subset R of S. The true cocone, $C_p$, of p with respect to all of S must be contained in $\tilde{C}_p$; the Voronoi cell of p can grow, but not shrink, when points are removed from S. Thus if we ensure that all the points of S that could influence $\tilde{C}_p$ are contained in R, then it must be that $\tilde{C}_p = C_p$.

The cocone radius of p is the radius, $r_p$, of the smallest closed ball centred at p which contains $C_p$. We observe that a point $q \in S$ influences the boundary of $C_p$ only if $C_p \cap V(q) \ne \emptyset$. In this case we call q a cocone neighbour of p. By the triangle inequality, if q is a cocone neighbour of p, then $d(p,q) \le 2r_p$. In particular, if $\tilde{r}_p$ is the cocone radius of $\tilde{C}_p$, then $\tilde{r}_p \ge r_p$, all cocone neighbours of p are contained within $B(p; 2\tilde{r}_p)$, and $C_p$ is completely determined by these points.

Thus we desire a bound on $\tilde{r}_p$. If p has a full umbrella of triangles that are cocone with respect to R, then $\tilde{r}_p \le \max\{r_t \mid t \text{ cocone w.r.t. } R\}/\cos\theta$, where $\theta = \pi/4$ is the cocone angle, shown in Fig. 3. This follows since the farthest point from p in $\tilde{C}_p$ must lie on a Voronoi edge, which must pass through the circumcentre of a candidate cocone triangle. If t is a true cocone triangle, then the angle between $\hat{n}_p$ and a normal line through t is linearly bounded by the $\varepsilon$ specifying the sampling radius; however, in general t need not be a true cocone triangle, and its normal may differ greatly from $\hat{n}_p$. In this case its circumradius must be larger; our estimate of $\tilde{r}_p$ will always be at least as large as its true value. If we do not have a full umbrella of potential cocone triangles at p, then we set $\tilde{r}_p = \infty$. Lemma 3.1 summarizes these observations.

Lemma 3.1. Suppose the set of candidate cocone triangles, $T_n$, constructed from a subset $R \subseteq S$, contains a complete umbrella, $U(p)$, at p, and that $r_t$ is the largest circumradius possessed by a triangle in $U(p)$. If $B(p; 2\sqrt{2}\,r_t) \cap S \subseteq R$, then $\tilde{C}_p = C_p$.

(Indeed, with $\theta = \pi/4$ the bound above gives $2\tilde{r}_p \le 2r_t/\cos(\pi/4) = 2\sqrt{2}\,r_t$, so the ball in the lemma contains all cocone neighbours of p.)

Algorithm 2 outlines the FindCocones() subroutine, which is run when the node n is processed for the first time. The algorithm iteratively builds a buffer of points R, which includes $S_n$, until it can be verified that the set, T, of cocone triangles on R includes all the cocone triangles in $D(S)$ that have vertices in $S_n$. When the algorithm terminates, $T_n$ contains exactly these triangles.
Algorithm 2. FindCocones(n).
1: R := S_n
2: U := S_n
3: for i := 0 to 1 do
4:   repeat
5:     compute T
6:     for each p ∈ U do
7:       N_p := CheckCocone(p, T)
8:       R := R ∪ N_p
9:     end for
10:   until all p ∈ U are verified
11:   compute T; find T_n ⊆ T
12:   U := vertices(T_n) \ S_n
13: end for

The algorithm first verifies that the cocones of all points in $S_n$ have been found; then, on the second iteration of the for loop on Line 3, the cocones of all vertices that are cocone neighbours of points in $S_n$ are verified. This ensures that when the final set of cocone triangles is computed (Line 11), all vertices involved have verified cocones. At each iteration of the repeat loop on Line 4 the Delaunay tetrahedralization of R is updated and the cocone triangles, T, are extracted (Line 5).

The cocone checking algorithm, CheckCocone(), is outlined in Algorithm 3. To each sample point, p, we associate a flag which indicates whether or not its cocone has been verified. We also associate an integer, $j_p$, which indicates how many buffer expansions have been requested for p. We fix a small value, $\delta$, and each time p fails to be verified we collect points to ensure that a ball centred at p and of radius $j_p\delta$ is included in R. The subroutine CheckCocone() either sets the flag indicating that the cocone at p has been verified, or it returns a nonempty set, $N_p$, of sample points that meet the expansion requirement. In practice we set $\delta$ to be the diameter of the smallest enclosing ball of the smallest fine leaf node, and we return all the points in the fine leaf nodes that intersect $B(p; j_p\delta)$, rather than the precise set indicated in Line 7; see Fig. 4. The reason we perform these small expansions, rather than simply including all points in $B(p; 2\tilde{r}_p)$, is that even if $\tilde{r}_p$ is finite, it may still be arbitrarily large.

Algorithm 3. CheckCocone(p, T).
1: N_p := ∅
2: if p is not verified then
3:   if B(p; 2r̃_p) ∩ S ⊆ vertices(T) then
4:     flag p as verified
5:   else
6:     j_p := j_p + 1
7:     N_p := B(p; j_p δ) ∩ S
8:   end if
9: end if
10: return N_p

When FindCocones() terminates, $T_n$ is ensured to contain the desired set of cocone triangles.
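The following C++ sketch mirrors Algorithm 3 under our own simplifying assumptions: samples carry their verification flag and expansion counter, and the ball queries are brute force rather than octree-accelerated:

#include <set>
#include <vector>

struct Sample { double x, y, z; bool verified = false; int j = 0; };

// Indices of all samples within distance r of sample c (brute force).
static std::vector<int> ballPoints(const std::vector<Sample>& S, int c, double r) {
    std::vector<int> out;
    for (int i = 0; i < (int)S.size(); ++i) {
        double dx = S[i].x - S[c].x, dy = S[i].y - S[c].y, dz = S[i].z - S[c].z;
        if (dx*dx + dy*dy + dz*dz <= r*r) out.push_back(i);
    }
    return out;
}

// Mirrors Algorithm 3: verify p if B(p, 2*r~_p) is already covered by
// the vertices of the candidate triangles T; otherwise request one
// more buffer expansion of radius j_p * delta around p.
std::vector<int> checkCocone(std::vector<Sample>& S, int p,
                             const std::set<int>& verticesOfT,
                             double rTilde, double delta) {
    std::vector<int> Np;                         // points to add to R
    if (S[p].verified) return Np;
    bool covered = true;
    for (int q : ballPoints(S, p, 2.0 * rTilde))
        if (verticesOfT.count(q) == 0) { covered = false; break; }
    if (covered) {
        S[p].verified = true;                    // cocone certified
    } else {
        S[p].j += 1;                             // one more expansion
        Np = ballPoints(S, p, S[p].j * delta);
    }
    return Np;
}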
Fig. 3. Estimating a bound on the current cocone radius, $\tilde{r}_p$. The farthest point from p in $\tilde{C}_p$ must lie on a Voronoi edge within the cocone region. This implies $\tilde{r}_p \le r_t/\cos\theta$.
3.3. Pruning

The pruning and manifold extraction stages of the algorithm are both handled in a similar way. The local operations underlying each of these procedures are edge-based: pruning removes sharp edges, and extraction is a region-growing process where new triangles are selected by an examination of the edges that are adjacent to only one already-selected triangle; a sketch of the sharp-edge test follows.
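This wedge test admits a compact implementation. The sketch below is our own illustration (not the paper's code): it assumes the caller has already reduced each incident triangle to the angular position of its third vertex in the plane orthogonal to the edge, and computes the minimal enclosing arc as 2π minus the largest angular gap:

#include <algorithm>
#include <cstddef>
#include <vector>

// angles: one angle in [0, 2*pi) per candidate triangle incident to
// the edge, measured around the edge. The edge is sharp when all the
// triangles fit in a wedge of opening angle less than pi/2.
bool isSharp(std::vector<double> angles) {
    const double kTwoPi = 6.28318530717958647692;
    if (angles.size() < 2) return true;          // a lone triangle fits trivially
    std::sort(angles.begin(), angles.end());
    double maxGap = angles.front() + kTwoPi - angles.back();
    for (std::size_t i = 1; i < angles.size(); ++i)
        maxGap = std::max(maxGap, angles[i] - angles[i - 1]);
    return kTwoPi - maxGap < kTwoPi / 4.0;       // enclosing arc < pi/2 ?
}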
When a node n is being processed, we only operate on edges that have at least one vertex in $S_n$. When a triangle is processed that has a vertex in another node, Z, this information will be passed to Z, which itself must undergo further processing.

Algorithm 4. Prune(n, Q).
1: remove all triangles in K_n from T_n
2: while there exists in T_n a sharp edge e with a vertex in S_n do
3:   for each t ∈ T_n with edge e do
4:     remove t from T_n
5:     for each node Z ≠ n in which t has a vertex do
6:       K_Z := K_Z ∪ {t}
7:       Q := Q ∪ {Z}
8:     end for
9:   end for
10: end while

Fig. 4. Buffer points of a node on Armadillo. Red points belong to the node being processed. Blue points are points in the buffer of the node. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

The Prune() function is outlined in Algorithm 4. When node n is being processed, pruning is performed exactly as in the original algorithm, as described in Dey's book [12], but with the caveat that only edges that have a vertex in $S_n$ are examined. When a triangle t that has a vertex in another node, Z, is pruned, t is put into the list $K_Z$, and Z is put back into the queue for further processing.

Since FindCocones() guarantees that $T_n$ started with all the cocone triangles with vertices in $S_n$, no triangle is removed by the call to Prune(n, Q) that would not be removed by the pruning algorithm of the original global Cocone algorithm. This is an invariant of the pruning process. In particular, the triangles of the restricted Delaunay triangulation of $\Sigma$ are never removed. The call to Prune(n, Q) terminates when there are no sharp edges with a vertex in $S_n$ left in $T_n$. Any node that is not in the queue will be in this state.

The first thing that happens when Z is processed for pruning is that all the triangles in $K_Z$ are removed from $T_Z$. This enables the pruning process to pass seamlessly between nodes. Pruning is complete when the queue is empty. At this stage there can be no sharp edges in the set of remaining cocone triangles of all the nodes considered as a whole.

3.4. Extraction

The subroutine ExtractManifold() is described in Algorithm 5. Two nested while loops are employed. The body of the outer loop is executed once per connected component of $\Sigma$. The inner loop governs the processing of individual nodes during the extraction of a manifold component.
Algorithm 5. ExtractManifold().
1: E := ∅
2: U := {all nonempty nodes}
3: while U ≠ ∅ do
4:   n := any node in U
5:   Seed(n)
6:   Q := {n}
7:   while Q ≠ ∅ do
8:     n := dequeue(Q)
9:     E' := SurfTriangles(n, Q)
10:    E := E ∪ E'
11:    if n contains no unoriented sample points then
12:      U := U \ {n}
13:    end if
14:  end while
15: end while
16: return E

As argued in Dey's book [12], if $\Sigma'$ is a connected component of $\Sigma$, and T' is the component of the trimmed cocone triangles that have vertices on $\Sigma'$, then the medial axis of $\Sigma'$ is contained in two components of $\mathbb{R}^3 \setminus T'$. One of these, $\Omega_{\mathrm{in}}$, contains the portion of the medial axis of $\Sigma'$ which is interior to $\Sigma'$. The other component is denoted $\Omega_{\mathrm{out}}$. It is shown that the boundary of $\Omega_{\mathrm{out}}$, which is a subset of T', is a manifold surface without boundary. The exact same argument applies to the boundary of $\Omega_{\mathrm{in}}$.

In our algorithm, Seed(n) selects a cocone triangle t on the convex hull of a connected component of $S_n$ and assigns to t a normal orientation corresponding to the outward normal of that convex hull. Since no cocone triangle intersects the normal segment connecting the centres of the medial balls at a sample point [12, Lemma 4.6], and there are no sharp edges in the pruned cocone triangles, it follows that either t lies on the boundary of $\Omega_{\mathrm{in}}$ and $\hat{n}_t$ points towards $\Omega_{\mathrm{in}}$, or t lies on the boundary of $\Omega_{\mathrm{out}}$ and $\hat{n}_t$ points towards $\Omega_{\mathrm{out}}$. When Seed() selects a triangle t, it places $(t, \hat{n}_t)$ in $O_n$, and then n is put into the queue, Q, of nodes for processing.

The extraction of the manifold component within n that contains t proceeds as in the traditional Cocone algorithm, but with modifications analogous to those that were required for pruning. Specifically, the first thing SurfTriangles(n, Q) does is to extract all the triangles in $O_n$. When a triangle is extracted, it is added to the output mesh E', the vertices of the triangle are flagged as "oriented", and the triangle itself is flagged as "extracted". Then, for each edge e of t which has a vertex in $S_n$, $(e, t, \sigma)$ is placed in a queue, W, where $\sigma$ is the tetrahedron of $D(\mathrm{vertices}(T_n))$ on the side indicated by $\hat{n}_t$. When such a triple is pulled from W, the triangle adjacent to t on edge e is found by a sequential walk through the tetrahedra with edge e, and, if it has not already been flagged, the resulting triangle t' is extracted. In this case the triples corresponding to the other two edges of t' are placed in the queue W, provided they have at least one vertex in $S_n$. The process repeats until W is empty.

Whenever a triangle, t, is extracted that has a vertex in another node, Z, then t and its orientation are placed in $O_Z$, and Z is placed back in Q if it is not already there (a sketch of this hand-off is given below). In this way the extraction of the component proceeds seamlessly between nodes, just as was done with pruning. The argument given in Dey's book [12, Section 4.1.4] applies equally here to show that the extracted component is a manifold without boundary.

The nodes in Q are thus processed until Q is empty. A node n may be processed several times. Each time, extraction proceeds with one or more components of $T_n$, but all these belong to a single component of the global set of cocone triangles. If there are no unoriented sample points left in $S_n$, then n is removed from U.
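As an illustration of the inter-node hand-off, a C++ sketch follows; the owner lookup table, the queued-set guard and the trimmed-down Node are our own assumptions, redeclared here for self-containment:

#include <initializer_list>
#include <queue>
#include <set>
#include <utility>
#include <vector>

struct Triangle { int a, b, c; };                // global vertex indices
struct Node {
    std::vector<std::pair<Triangle, bool>> O;    // O_n: oriented triangles
};

// When an extracted triangle t has a vertex owned by another node z,
// pass (t, orientation) to O_z and re-queue z exactly once.
void handOff(const Triangle& t, bool orientation, int home,
             const std::vector<int>& ownerOfVertex,
             std::vector<Node>& nodes,
             std::queue<int>& Q, std::set<int>& queued) {
    for (int v : {t.a, t.b, t.c}) {
        int z = ownerOfVertex[v];                // node containing vertex v
        if (z == home) continue;
        nodes[z].O.push_back({t, orientation});  // orientation info for z
        if (queued.insert(z).second) Q.push(z);  // enqueue if not present
    }
}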
When Q is finally empty, the component has been extracted and another iteration of the outer loop of ExtractManifold() begins. This is repeated until there are no nodes left in U.

3.5. Guarantees

By ensuring that each node has the correct set of cocone triangles, and especially that triangles that have vertices in more than one node are included in the cocone triangles of each such node, we are able to maintain all the properties of the traditional Cocone algorithm while processing the nodes individually. Operations on one node which affect a neighbouring node are simply communicated through the lists $K_n$ and $O_n$. We thus maintain the guarantees of the Cocone algorithm.

Theorem 3.1. If S is sampled from a smooth surface, $\Sigma$, with sampling radius $\varepsilon\,\mathrm{lfs}(x)$, and $\varepsilon < 1/20$, then LocCocone(S, k) produces a manifold triangle mesh, E, such that the projection to the closest point on $\Sigma$ defines a homeomorphism $\xi: E \to \Sigma$. For any $x \in E$, $d(x, \xi(x))$ is $O(\varepsilon^2\,\mathrm{lfs}(\xi(x)))$ and $\angle_a(\hat{n}_t, \hat{n}_{\xi(x)})$ is $O(\varepsilon)$, where t is a triangle of E with $x \in t$.

4. Implementation

Fig. 6. HoledRing with 1.3M points.
The algorithm has been implemented using the CGAL library [10], version 3.6, to facilitate the computation of Delaunay triangulations. As shown in Figs. 1, 5 and 6, the algorithm performs as expected on well sampled smooth models, with the output mesh exhibiting no indication that the sample set was partitioned into distinct nodes.

However, models which meet the stringent sampling requirements of the Cocone reconstruction algorithm are rare. In order to accommodate sample sets which do not meet these requirements, it is necessary to introduce heuristics to prevent catastrophic failure. In the traditional Cocone algorithm, if the sampling density is insufficient to ensure a full umbrella of cocone triangles around a vertex, then in the subsequent trimming stage of the algorithm all cocone triangles are removed. The correctness of the trimming step is predicated on the existence of a closed manifold of cocone triangles (Figs. 5 and 6).

Dey and Giesen [15] developed a method for mitigating this problem. Regions of insufficient sampling can be detected by the characteristics of the Voronoi cells. A sample point whose local characteristics do not meet the expected criteria is flagged as a bad sample, and it is not consulted in the decision of whether or not a triangle is cocone. A triangle must have at least one good
Fig. 5. Buddha with 0.92M points.
vertex in order to be a cocone triangle. If bad sample points are detected, heuristics can be employed to prevent runaway trimming, but in this case there is no longer a guarantee that the output will be topologically correct.

The problem with this approach in the current setting is that we use the geometry of the Voronoi cells to drive the buffer expansion in the cocone verification stage of the algorithm, as described in Section 3.2. Thus it is not clear how to distinguish between the need to further expand the buffer in order to find a valid cocone, and the case where this is impossible due to insufficient sampling or lack of smoothness. In the naïve implementation, buffer expansion would continue until the entire point set is included in the set R in an attempt to find the cocone triangles that have vertices in $S_n$.

In order to obtain an implementation which can gracefully handle a more general class of input point sets, while maintaining the motivating theoretical guarantees for good point sets, we found the need to introduce an additional assumption on the good point sets. The theoretical $\varepsilon$-lfs sampling condition allows a very nasty phenomenon that does not occur in practice: if S contains n points, then any ball centred on $\Sigma$, no matter how small, can in principle contain $O(n)$ points. This unrealistic possibility invalidates many otherwise reasonable local assumptions. One way to get around this problem is to introduce a local uniformity assumption [13,18,21], which requires that there is some $\delta$, with $0 < \delta < 1$, such that any $p, q \in S$ must be separated by a distance of at least $\delta\varepsilon\,\mathrm{lfs}(p)$. However, this approach is very restrictive; sample sets which do happen to meet the local density requirements are likely to have some sample pairs which are too close together.

Instead, we adopt the sampling assumption suggested by Attali et al. [2]. We assume that the local sampling density is bounded by a constant. Specifically, we demand the following maximum local density requirement: for any point $x \in \Sigma$, $B(x; \mu\,\mathrm{lfs}(x))$ contains no more than c samples, where c is a constant and $\mu$ depends only on $\varepsilon$. We use this assumption during the cocone verification stage for a sample point p to decide when to abort the buffer expansion and declare p to be a bad sample point. Specifically, Algorithm 3 is modified so that this happens if $B(p; 2\tilde{r}_p) \cap S \not\subseteq \mathrm{vertices}(T)$ and $\mathrm{card}(B(p; 2\tilde{r}_p) \cap S) > c$.
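Under the same assumptions as the checkCocone sketch in Section 3.2 (the Sample type and the brute-force ballPoints helper), the abort test might read:

#include <cstddef>
#include <set>
#include <vector>

// Sample and ballPoints as in the checkCocone sketch of Section 3.2.
struct Sample { double x, y, z; bool verified = false; int j = 0; };
std::vector<int> ballPoints(const std::vector<Sample>& S, int c, double r);

// Abort buffer expansion and flag p as a bad sample when B(p, 2*r~_p)
// is still not covered by the vertices of T yet already holds more
// than c samples (c = 50 in the experiments reported below).
bool isBadSample(const std::vector<Sample>& S, int p,
                 const std::set<int>& verticesOfT,
                 double rTilde, std::size_t c) {
    std::vector<int> ball = ballPoints(S, p, 2.0 * rTilde);
    bool covered = true;
    for (int q : ball)
        if (verticesOfT.count(q) == 0) { covered = false; break; }
    return !covered && ball.size() > c;          // dense yet unverified
}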
Since the cocone radius should satisfy $r_p < (\varepsilon/(1-\varepsilon))\,\mathrm{lfs}(p)$ when S satisfies a sampling radius of $\varepsilon\,\mathrm{lfs}$ (this follows directly from the fact that lfs is 1-Lipschitz), this equates to using $\mu = 2\varepsilon/(1-\varepsilon)$ in the maximum local density requirement:
$$r_p < \frac{\varepsilon}{1-\varepsilon}\,\mathrm{lfs}(p) \quad\Longrightarrow\quad 2r_p < \frac{2\varepsilon}{1-\varepsilon}\,\mathrm{lfs}(p) = \mu\,\mathrm{lfs}(p).$$
In our experiments we used the value c = 50 for all models. Following Dey and Giesen [15], we also flag a sample point p as a bad sample if the angle between the normal estimate at p and that at one of its cocone neighbours exceeds a threshold $\alpha$, which we set at $\pi/3$ in our experiments.

In this way we retain the guarantees on the correctness of the output when the input sample set satisfies the maximum local density requirement as well as the sampling radius. At the same time, we are able to reconstruct models which do not meet these requirements.

For large models, the Delaunay triangulations of all the nodes will exceed the available system memory. When a node is not being processed, we store its Delaunay triangulation on the hard disk. This is more convenient than recomputing the Delaunay triangulation each time a node is selected for processing: we would need to create a separate data structure to enable identification of previously selected cocone triangles, and such a data structure would itself consume a lot of memory.
Table 1
Time and memory usage for our algorithm with k = 500K, c = 50, and κ = 30 (left columns) and for the traditional Cocone algorithm (right columns).

Size  | Model        | Mem. ours (MB) | Mem. Cocone (MB) | Time ours (h:min) | Time Cocone (h:min)
0.83M | 9HandleTorus | 508  | 757  | 00:13 | 00:02
0.92M | Buddha       | 576  | 855  | 00:18 | 00:02
1.2M  | OctaHandles  | 681  | 1155 | 00:19 | 00:03
1.3M  | HoledRing    | 997  | 1227 | 00:24 | 00:03
2.3M  | 9HandleTorus | 1212 | 2065 | 00:38 | 00:05
2.5M  | 3Holes       | 1347 | 2174 | 00:39 | 00:05
3.7M  | Bracelet     | 1876 | 3363 | 01:14 | 00:07
3.7M  | HoledRing    | 1696 | 3354 | 01:23 | 00:07
4.1M  | Homer        | 2134 | 3723 | 01:23 | 00:13
4.3M  | Bimba        | 2374 | 3928 | 01:15 | 08:22
4.6M  | Stringy      | 2448 | 4234 | 01:43 | 21:58
4.9M  | 9HandleTorus | 2357 | 4397 | 01:25 | 24:04
4.9M  | OctaHandles  | 2389 | n/a  | 01:21 | Aborted
5.6M  | Bimba        | 2752 | n/a  | 01:41 | Aborted
5.8M  | Bracelet     | 2787 | n/a  | 01:54 | Aborted
6.6M  | Stringy      | 3111 | n/a  | 02:19 | Aborted
5. Results

Fig. 7 shows plots of the times required for reconstructing various models using the traditional Cocone algorithm and our algorithm. The data are listed explicitly in Table 1. Both our algorithm and the traditional Cocone algorithm were implemented using version 3.6 of CGAL [10]. On our system, the traditional algorithm grinds to a crawl when reconstructing from inputs larger than four million points. All experiments were performed on a PC with 4 GB RAM, 2.25 GB swap space and a 2.8 GHz processor running Ubuntu 10.04. For this machine k = 500K gave good running times. We found that κ = 30 for the maximum size of the fine leaf nodes performed well; however, the optimal size generally depends on the model.

The difference between our algorithm and the earlier localized cocone algorithm introduced by Dey et al. [16] is primarily theoretical: we give guarantees whereas the previous algorithm cannot. In practice it would take an unusual dataset for this theoretical limitation of their algorithm to manifest itself. A highly adaptive sampling of a model that has large sparsely sampled smooth regions
Fig. 8. An example of a failure case in the SuperCocone algorithm of Dey et al. [16]. (a) In this sparsely sampled synthetic dataset, SuperCocone does not yield a sufficiently large buffer around the nodes, and holes are left at node boundaries. (b) Our algorithm correctly handles the same data.
together with densely sampled fine features could produce nodes small enough that the buffers are insufficient in the flat regions. We do not have such a dataset, but we demonstrate the problem by running their algorithm on a sparse dataset just larger than their default node size. The buffer size is set by a number of subdivisions below the leaf node level; the default is four. In Fig. 8, the buffer is given by nodes from a refinement depth of only three, i.e., nodes twice the size given by the default parameter. This is still insufficient, and a node boundary is mistaken for a boundary of the object, resulting in a hole. In principle, even if the entire leaf nodes are used as a buffer (the maximum the algorithm allows), it is still possible that this is insufficient.
5.1. Limitations
Fig. 7. Processing time versus size of S when the point set is processed as a whole by Cocone (blue) and in our octree approach (red), for various models. Larger models cannot be reconstructed with the traditional Cocone algorithm. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
The primary limitation of this algorithm is inherent in its roots as a cocone algorithm. Problems occur if the local curvature is not matched by a sufficient local sampling density. In such regions vertices are marked as bad, and we get holes in the mesh, as shown in Fig. 9. Similar problems occur in the presence of noise; our algorithm is not designed to handle noisy raw datasets. Such data could be better accommodated if it were first prefiltered by an algorithm such as LOP [23].

In practice the use of an octree to manage the surface patches limits the scalability of our implementation. Since the entire octree is stored in system memory, the upper bound on the number of points that can be processed is only about twice as large as the limit of the plain Cocone algorithm exhibited in Table 1.
Fig. 9. Neptune's head has regions of high curvature that are insufficiently sampled. This model, from the AIM@SHAPE repository [6], has 249K sample points.
6. Conclusions

Our algorithm demonstrates that the guarantees of the Cocone reconstruction algorithm can be preserved by an algorithm which operates locally. In particular, a global computation of the Delaunay tetrahedralization of the entire input point set is avoided. This observation has been previously made by Funke and Ramos [18], but our algorithm is much simpler, and has been implemented. We do not require the local uniformity assumption which demands a minimal separation between sample points. However, in order to accommodate sample sets which do not meet the theoretical requirements, we introduced heuristics which result in a practical algorithm which maintains the theoretical guarantees only for point sets which meet the additional assumption that the local density of the sample set is bounded by a constant.

Although we have employed the normal estimation technique introduced by Olson et al. [28], any local normal estimation technique may be employed, and the guarantees on the output will be preserved provided that the calculated normal is guaranteed to have an angle error with respect to the true normal which is linearly bounded by the $\varepsilon$ which governs the sampling radius. Furthermore, if the normal estimates are also consistently oriented, as is the case with data from some capture devices, for example, then our algorithm could be made fully parallelizable, since the only step which requires sequential processing is the manifold extraction step, where the orientation information is passed from node to node.

Our algorithm demonstrates a solution to the problem of implementing a surface reconstruction algorithm that can handle large input point sets while maintaining guarantees on the quality of the output surface.
Acknowledgements

Most of the models used in this paper were obtained from the AIM@SHAPE and the Stanford 3D repositories. The authors acknowledge the CGAL consortium for making the Delaunay triangulation code available for experiments. This research is supported by NSF Grants CCF-0830467 and CCF-0915996.

References

[1] Amenta N, Bern M. Surface reconstruction by Voronoi filtering. Discrete and Computational Geometry 1999;22(4):481–504.
[2] Attali D, Boissonnat JD, Lieutier A. Complexity of the Delaunay triangulation of points on surfaces: the smooth case. In: Symposium on computational geometry. ACM; 2003. p. 201–10.
[3] Allègre R, Chaine R, Akkouche S. A flexible framework for surface reconstruction from large point sets. Computers & Graphics 2007;31(2):190–204.
[4] Amenta N, Choi S, Dey TK, Leekha N. A simple algorithm for homeomorphic surface reconstruction. International Journal of Computational Geometry and Applications 2002;12(2):125–41.
[5] Alliez P, Cohen-Steiner D, Tong Y, Desbrun M. Voronoi-based variational reconstruction of unoriented point sets. In: Symposium on geometry processing; 2007. p. 39–48.
[6] AIM@SHAPE repository, accessed February 2011. URL: http://www.aimatshape.net/.
[7] Bolitho M, Kazhdan M, Burns R, Hoppe H. Multilevel streaming for out-of-core surface reconstruction. In: Symposium on geometry processing; 2007. p. 69–78.
[8] Bolitho M, Kazhdan M, Burns R, Hoppe H. Parallel Poisson surface reconstruction. Advances in Visual Computing 2009:678–89.
[9] Boissonnat J-D. Geometric structures for three-dimensional shape representation. ACM Transactions on Graphics 1984;3(4):266–86.
[10] CGAL, Computational Geometry Algorithms Library, Version 3.6. URL: http://www.cgal.org.
[11] Chew LP. Guaranteed-quality mesh generation for curved surfaces. In: Symposium on computational geometry; 1993. p. 274–80.
[12] Dey TK. Curve and surface reconstruction: algorithms with mathematical analysis. Cambridge University Press; 2007.
[13] Dumitriu D, Funke S, Kutz M, Milosavljević N. On the locality of extracting a 2-manifold in R³. In: Proceedings of the 11th Scandinavian workshop on algorithm theory (SWAT); 2008. p. 270–81.
[14] Dumitriu D, Funke S, Kutz M, Milosavljević N. How much geometry it takes to reconstruct a 2-manifold in R³. Journal of Experimental Algorithmics 2009;14.
[15] Dey TK, Giesen J. Detecting undersampling in surface reconstruction. In: Symposium on computational geometry; 2001. p. 257–63.
[16] Dey TK, Giesen J, Hudson J. Delaunay based shape reconstruction from large data. In: PVG '01: IEEE symposium on parallel and large-data visualization and graphics; 2001. p. 19–27.
[17] Dey TK, Levine JA, Slatton AA. Localized Delaunay refinement for sampling and meshing. Computer Graphics Forum 2010;29(5):1723–32.
[18] Funke S, Ramos EA. Smooth-surface reconstruction in near-linear time. In: Symposium on discrete algorithms; 2002. p. 781–90.
[19] Guibas LJ, Oudot SY. Reconstruction using witness complexes. Discrete and Computational Geometry 2008;40(3):325–56.
[20] Hornung A, Kobbelt L. Robust reconstruction of watertight 3D models from non-uniformly sampled point clouds without normal information. In: Symposium on geometry processing; 2006. p. 41–50.
[21] Kil YJ, Amenta N. GPU-assisted surface reconstruction on locally-uniform samples. In: Meshing roundtable; 2008. p. 369–85.
[22] Kazhdan M, Bolitho M, Hoppe H. Poisson surface reconstruction. In: Symposium on geometry processing; 2006. p. 61–70.
[23] Lipman Y, Cohen-Or D, Levin D, Tal-Ezer H. Parameterization-free projection for geometry reconstruction. ACM Transactions on Graphics (SIGGRAPH proceedings), vol. 26; July 2007.
[24] Mullen P, De Goes F, Desbrun M, Cohen-Steiner D, Alliez P. Signing the unsigned: robust surface reconstruction from raw pointsets. Computer Graphics Forum 2010;29(5):1733–41.
[25] Manson J, Petrova G, Schaefer S. Streaming surface reconstruction using wavelets. Computer Graphics Forum 2008;27(5):1411–20.
[26] Mederos B, Velho L, De Figueiredo LH. Moving least squares multiresolution surface approximation. In: XVI Brazilian symposium on computer graphics and image processing (SIBGRAPI 2003); 2003. p. 19–26.
[27] Ohtake Y, Belyaev A, Alexa M, Turk G, Seidel H-P. Multi-level partition of unity implicits. In: ACM Transactions on Graphics (SIGGRAPH proceedings), vol. 22; 2003. p. 463–70.
[28] Olson M, Dyer R, Sheffer A, Zhang H. Point set silhouettes via local reconstruction. Computers & Graphics, this issue. doi:10.1016/j.cag.2011.03.034.
[29] O'Rourke J. Polyhedra of minimal area as 3D object models. In: International joint conference on artificial intelligence; 1981. p. 664–6.
[30] Samozino M, Alexa M, Alliez P, Yvinec M. Reconstruction with Voronoi centered radial basis functions. In: Symposium on geometry processing; 2006. p. 51–60.
[31] Schall O, Samozino M. Surface from scattered points: a brief survey of recent developments. In: First international workshop on semantic virtual environments; 2005. p. 138–47.