Efficient partial-surface registration for 3D objects


Computer Vision and Image Understanding 98 (2005) 271–294 www.elsevier.com/locate/cviu

G. Xiao^a, S.H. Ong^a,b,*, K.W.C. Foong^c

a Department of Electrical and Computer Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore
b Division of Bioengineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore
c Department of Preventive Dentistry, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore

Received 24 September 2003; accepted 12 October 2004. Available online 23 November 2004.

Abstract

We present a new method for three-dimensional partial-surface registration that utilizes both regional surface properties and a shape rigidity constraint to align a partial object surface with its corresponding complete surface. The statistical properties of the vertices on the object surface are first computed and compared with each other to find the initial candidate correspondences. We use the overall object-shape rigidity constraint and a clustering method to obtain an approximation of the transformation parameters while, at the same time, rejecting correspondence outliers. The transformation parameters can be further refined with an iterative approach. The algorithm does not require any feature extraction or initial pose estimation, and is especially applicable when the object surfaces are formed by a large number of vertices, smooth with few salient features, and contain many regionally similar surface patches. Experiments confirm that the proposed scheme can achieve accurate registration results in an efficient manner.
© 2004 Elsevier Inc. All rights reserved.

Keywords: 3D registration; Partial surface; Statistical properties; Rigidity constraint; 4-Point tuple; Clustering; Iterative refinement

* Corresponding author. Fax: +65 6779 1103. E-mail address: [email protected] (S.H. Ong).

1077-3142/$ - see front matter © 2004 Elsevier Inc. All rights reserved. doi:10.1016/j.cviu.2004.10.001


G. Xiao et al. / Computer Vision and Image Understanding 98 (2005) 271–294

1. Introduction

The accurate registration of two objects in three dimensions is important in many applications, e.g., surface matching [1], pose estimation [2], object recognition [3], and data fusion [4]. In some cases, one of the object surfaces is incomplete and only the overlapping regions can be used for registration. An example of such an application is found in orthodontics: given the scanned 3D image of a dental plaster model, one method of estimating the size and position of a (hidden) tooth root is to match a generic tooth of the same type (e.g., a canine or incisor) with the visible crown.

Many registration methods for aligning 3D object surfaces have been proposed in the literature. These algorithms can be divided into several categories: iterative, feature-based, free-form surface representation, and exhaustive search.

Of the iterative methods, the iterative closest point (ICP) algorithm by Besl and McKay [5] and Zhang [1] is one of the most popular. This algorithm does not require the extraction of features and can therefore register two objects even if there are only a few features on the two surfaces or if part of a surface is occluded. Several improvements to the ICP have been proposed [6–8]. However, the ICP still has some drawbacks: (a) it requires a good initial estimate of the transformation parameters to prevent the iterative process from being trapped in a local minimum; (b) the computational cost of searching for the nearest points in each iteration is high when there is a large number of vertices on the two surfaces; and (c) there is no guarantee of achieving the correct solution even for noiseless data.

Feature-based approaches use point features [9], curves [10–12], or regions [13] to find the correspondences. They have the advantage that they do not require an initial estimate of the transformation. Their drawbacks are that (a) they cannot solve the problem when the 3D point data set contains very few salient local features, and (b) the final result depends on the accuracy of the feature extraction algorithm. Registration is likely to be biased when the local features are affected by noise.

Some registration methods are based on free-form surface representation. Johnson and Hebert [14–16] proposed the spin image. Other algorithms of this kind include spherical representation [17], COSMOS [18], point signature [19], surface signature [20,21], 3D point's fingerprint [22], harmonic shape image [23,24], distance-angle image [25], and symbolic surface signature [26]. These methods do not rely on prior estimation of the transformation parameters, and some of them can handle relatively featureless surface patches. However, when faced with smooth-surfaced objects that contain many regionally similar surface patches, the problem of multiple correspondences will inevitably occur. These correspondences include false ones that must be rejected in order to obtain a correct transformation. Johnson and Hebert [15] have suggested that geometric consistency be used to filter out correspondence outliers. For all the candidate point correspondences, the geometric consistency measures of every two correspondences are calculated and compared with all the others. Only when one pair of corresponding points is


geometrically consistent with at least one quarter of all the correspondences in the list can it be regarded as correct and kept in the list. With m pairs of candidate point correspondences, the computational complexity is O(m²). When the objects have many regionally similar surface patches, there will be a large number of multiple correspondences and hence a large value of m, which results in a heavy computational burden.

The exhaustive-search algorithms usually rely on checking all possible alignments of two 3D data sets and counting the votes for each of these transformations. Pose clustering [27–29] quantizes the space of candidate transformations and uses it as an accumulator in which each match increments a corresponding cell. In geometric hashing [30,31], a hash table is constructed and used to vote for the candidate corresponding k-tuples. While their method is still based on the hash-table strategy, Mokhtarian et al. [32] use several curvature values in selecting feature points. Chen et al. [33,34] proposed the RANSAC-based DARCES, in which every three control points are used to calculate a candidate transformation, and the final solution is obtained by verifying all those candidates. These algorithms do not necessarily require feature extraction, and since they use a large number of points on the object surfaces, they are robust, especially in the partially overlapping case. However, combinatorial explosion will occur when the number of surface vertices is huge. Although mesh simplification [35,36] may offer a solution, the decimation process is itself time-consuming and results in the loss of surface detail.

None of the above approaches is suitable for our application, for the following reasons. (a) We have no prior knowledge of the approximate transformation between the two surfaces to be registered; therefore, an initial estimate of the transformation parameters cannot be given if the ICP algorithm is to be used. (b) Since the objects that we mainly consider (e.g., tooth crowns) generally have smooth surfaces with relatively few salient features, feature-based methods are not appropriate. Given the large number of mesh vertices, a combinatorial explosion is likely if the two surfaces are registered by checking all possible transformations. (c) The 3D objects usually contain many regionally similar surface patches, which means there may initially be many false point correspondences when the comparison is based on free-form surface representations. Hence, in rejecting the correspondence outliers, it would be highly inefficient to compare every two pairs of candidate correspondences with each other.

From the above discussion, it is clear that special procedures are needed for the efficient partial-surface registration of 3D objects that are smooth with few salient features, densely sampled, and which contain many regionally similar surface patches. We propose a new 3D partial-surface registration algorithm that is efficient and accurate for such objects. Moreover, this method does not require an initial estimate of the transformation parameters (a significant advantage over traditional ICP) and can still cope when the number of vertices is


large. Our algorithm first uses regional surface properties to find a list of candidate point correspondences on the two surfaces. Correspondence outliers are rejected by employing an overall shape rigidity constraint and a clustering strategy. Finally, the recovered transformation parameters can be further refined with an iterative approach. Our strategy is discriminating and robust in representing the surface properties, and is efficient in rejecting false correspondences. Experiments show that the registration process is fast and that the registration results are accurate even when the 3D objects are smooth with few salient features and contain many locally similar surface patches. In Section 2, we give an overview of the algorithm. Section 3 describes in detail the surface statistical property representation and comparison strategies. In Section 4, we discuss our refinement technique for the candidate point correspondences, including the selection of corresponding tuples, transformation parameter computation, parameter clustering, and further iterative refinement. Section 5 presents experimental results and discussion. The paper ends with the concluding remarks in Section 6.

2. Overview of the algorithm

Our proposed algorithm first uses regional surface properties to obtain a list of candidate point correspondences. Instead of calculating exact surface curvature measures to represent the surface properties, we compute for every point the spatial distribution of all the other vertices in its vicinity, an approach similar to the spin image [16]. Based on this property, a histogram matrix is constructed to depict the local surface for every vertex in the curved regions of the two surfaces. We call these vertices candidate vertices, a term that will be explained in detail in Section 3. By comparing these matrices, a list of candidate point correspondences can be obtained. However, these correspondences contain some false ones that need to be rejected.

Our method of refining these correspondences begins with the selection of a number of corresponding rigid 4-point tuples from this list of candidate point correspondences. From these tuples, the centroid of the partial surface and the corresponding centroid (defined in Section 4.1) of the complete surface are determined. With the 3D coordinates of these two centroids, we derive an estimate of the transformation parameters that is used to reject most of the correspondence outliers. A novel clustering strategy is proposed (Section 4.3) and adopted in the computation of both the corresponding centroid and the approximate rotation (Section 4.4). This clustering process is based on an improved ISODATA algorithm. It is performed only on candidate transformations satisfying the rigidity constraint instead of on all possible transformations, the approach often adopted in traditional histogram-based pose clustering; the scale of the clustering problem is therefore smaller. In addition, in estimating the approximate rotation, the orthogonality constraint does not need to be satisfied, which further reduces the computational cost.
In the last step, the transformation parameters are refined in an iterative manner.


Fig. 1 shows the flowchart of the entire algorithm. All the individual procedures are grouped according to their specific functions and are explained in more detail in subsequent sections.

Fig. 1. Block diagram of our registration process.


3. Surface representation and comparison

3.1. Surface representation

A good 3D object surface representation ought to be view-independent, invariant to rigid transformations, and easy to compute and manipulate. In our application, the 3D free-form object surface data are given in the form of a cloud of points that can optionally be connected with one another by triangular meshes. Johnson proposed the spin image [15,16] to encode the surface properties using an object-centered coordinate system and assigned these descriptive images to each surface vertex. This surface representation has the advantages that it is adaptive to free-form objects, easy to compute, and invariant to rigid transformation. All the points on one surface can be characterized, compared, and matched with points on the other surface.

In representing surface properties, we adopt an approach similar to that of [16]. A local spherical coordinate system is defined at each vertex, with the vertex as its origin. The tangent plane passing through this local origin is defined as the plane to which the average distance from all the points in the vicinity of the origin is minimal. In calculating the normal of the tangent plane, principal component analysis (PCA) is performed on the nearby points, and the least important principal axis is chosen as the normal vector. All the other points in the vicinity are then indexed and accumulated according to their relative positions in this system, thereby constructing a histogram matrix that depicts the regional surface properties. The parameterization of the matrix is in the spherical domain. As pointed out by Johnson [16], this parameterization is better than one in the cylindrical domain in that it reduces inconsistencies in the surface representation due to surface-normal error. A histogram matrix can be constructed for every vertex on the object surface.
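The PCA step just described — taking the least important principal axis of the neighborhood as the tangent-plane normal — can be sketched as follows. This is a minimal illustration assuming the neighborhood is given as an n × 3 numpy array; the function name is ours, not the paper's.

```python
import numpy as np

def tangent_plane_normal(neighbors):
    """Estimate the tangent-plane normal at a vertex as the least important
    principal axis (smallest-eigenvalue eigenvector) of the covariance of
    the neighboring points."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    # eigh returns eigenvalues in ascending order, so column 0 is the
    # direction of least variance, i.e. the estimated normal.
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, 0]
```

For a neighborhood that lies (approximately) in a plane, the returned vector is (approximately) that plane's unit normal, up to sign.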
However, in our algorithm, we choose only a subset of the vertices and note their statistical properties. One reason for this is that the number of vertices involved in computing and comparing the statistical properties of the object surface is thereby reduced, shortening the computation time. We are also able to minimize the side effects arising from scanning inaccuracies: we observed that, in scanning an object, more gaps usually appear on the relatively flat areas of the object surface than on the more highly curved areas. Therefore, we compute the statistical properties only for the vertices in the curved regions. In this paper, such points are referred to as the candidate vertices. This point selection strategy is in contrast to that of Johnson [16]; in his application, he did not have to contend with this type of inaccuracy, which is inherent in our data acquisition. Another difference between his approach and ours is that instead of directly calculating curvature values with Taubin's curvature measure [37], we simply count the points on the two sides of the tangent plane. In counting each point, a weight proportional to its distance from the tangent plane is assigned. We consider the surrounding region a curved area, and the current point a candidate vertex, only when the difference between the numbers of points on the two opposite sides of the tangent plane is greater than a threshold.

Fig. 2. (A) Original surface vertices and (B) the candidate vertices.

In the example of Fig. 2, Fig. 2A shows the original 3D tooth surface and Fig. 2B the surface on which only the candidate vertices remain. It is seen in Fig. 2B that all the candidate vertices lie in the relatively curved areas of the surface and that the vertices in the relatively flat areas have been eliminated. By discarding the vertices in the flat areas of the object surface, the accuracy of the representation and comparison of surface properties is increased.

3.2. Surface comparison

After the histogram matrices for every candidate vertex have been generated, a list of corresponding points can be obtained by comparing these matrices. Johnson and Hebert [15] and Zhang and Hebert [24] used the similarity measure histogram to compare the local shape descriptors of every point, while Yamany and Farag [20] employed template matching. In our algorithm, we take an approach similar to the latter: the sum of absolute differences between two matrices, normalized by the total number of elements in a matrix, is used as the measure of similarity between two candidate vertices.

After comparing the histogram matrices of the candidate vertices on both surfaces, a list of candidate point correspondences is obtained. Since the histogram matrices only depict the property of the neighborhood around a candidate vertex, multiple correspondences are unavoidable, in which case some false correspondences must be rejected as outliers. This problem is especially severe when the 3D object surfaces are smooth with few salient features while at the same time having many similar surface patches (Fig. 3). Fig. 3A shows all

the candidate vertices on the partial surface (a tooth crown) and Fig. 3B their corresponding points among the candidate vertices on the complete tooth surface, after comparison of their histogram matrices. Although the correct correspondences in the crown part of the complete tooth have been found, some vertices in the root area are also mistakenly identified as corresponding points. Moreover, under noisy conditions, two candidate vertices that actually have somewhat different surrounding surface patches may appear similar, resulting in false correspondences. In order to obtain the correct transformation parameters, these point correspondences must be refined to reject those that are false.

Fig. 3. Corresponding points on two 3D surfaces: (A) all the corresponding points on the partial surface; (B) all the corresponding points on the complete surface. For instance, a candidate vertex A on the partial surface may have three corresponding points A′, B, and C on the complete surface.
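The comparison step above — the sum of absolute differences between two histogram matrices, normalized by the number of elements, with multiple matches per vertex retained — can be sketched as follows. This is an illustrative fragment with our own function names; the paper publishes no code, and the brute-force double loop is only for clarity.

```python
import numpy as np

def matrix_distance(h1, h2):
    """Normalized sum of absolute differences between two histogram matrices."""
    return np.abs(h1 - h2).sum() / h1.size

def candidate_correspondences(hists_p, hists_q, thresh):
    """All index pairs (i, j) whose histogram matrices are sufficiently
    similar. Multiple matches per vertex are deliberately kept; the false
    ones are rejected later by the rigidity constraint."""
    return [(i, j)
            for i, hp in enumerate(hists_p)
            for j, hq in enumerate(hists_q)
            if matrix_distance(hp, hq) < thresh]
```

A vertex on the partial surface may thus appear in several pairs, which is exactly the multiple-correspondence situation of Fig. 3.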

4. Refinement of candidate point correspondences

4.1. Outline of the refinement strategy

In this section, we discuss the strategy for refining the point correspondences. Johnson and Hebert [15] use geometric consistency to filter out correspondence outliers. This method is simple and easy to implement. However, for smooth-surfaced objects with many regionally similar surface patches, there will be a large number of multiple correspondences, which makes computing the geometric consistency measure of every two correspondences very time-consuming. In our work, we have found that the rigidity constraint of the overall object shape is effective in speeding up the rejection of correspondence outliers.


Notice that in Fig. 3A, a candidate vertex A on the partial surface may have several corresponding points on the complete surface, e.g., A′, B, and C in Fig. 3B. Of these points, A′ is the correct correspondence while B and C are false. Since a tooth crown is generally smooth and to some extent symmetric, it is not straightforward to identify B (which is also on the crown of the complete surface) as an incorrect corresponding vertex. However, it is easier to determine that C (on the root portion) is not a desirable one. Moreover, since point C is farther away from most of the corresponding points, the false correspondence A–C will have a greater negative impact on the calculation of the transformation parameters than the false correspondence A–B; the former therefore needs to be eliminated with higher priority than the latter.

In our correspondence refinement strategy, the coordinates of the centroid of all the candidate vertices on the partial surface are first calculated by computing their mean coordinates. Next, we determine its corresponding centroid in the interior of the complete surface. This corresponding centroid is different from the centroid of all the candidate vertices on the complete surface; it is, in fact, the centroid of all the correct corresponding vertices on the complete surface. Fig. 4 illustrates the difference: in Fig. 4A, C is the centroid of the partial surface; its corresponding centroid on the complete surface is C′ in Fig. 4B, while B is the actual centroid of the complete surface. The position of the corresponding centroid is obtained by constructing several rigid 4-point tuples from all the candidate correspondences, followed by clustering the results from all the corresponding tuples. In constructing the rigid tuples, it is required that two corresponding tuples (one

Fig. 4. Illustration of the difference between the corresponding centroid and the centroid of the complete surface. (A) C is the centroid of the partial surface. (B) B is the centroid of the complete surface, and C′ is the corresponding centroid of the complete surface.


from each surface) form two tetrahedra whose corresponding edges are equal in length. With this information, the translation between the two surfaces is known. An approximation of the rotation can then be determined linearly, so that an initial estimate of the transformation is achieved, which can subsequently be refined iteratively.

4.2. Determination of the corresponding centroid

After comparing the histogram matrices of all the candidate vertices on both the partial and complete surfaces, a list of m pairs of corresponding points is derived. Let this list be denoted by L = {C1, . . ., Cm}. Each correspondence Ci = {pi, qi} comprises two candidate vertices pi ∈ P and qi ∈ Q, where P and Q are the sets of candidate vertices on the partial and complete surfaces, respectively.

In order to find the corresponding centroid of the complete surface, several pairs of corresponding rigid 4-point tuples are selected from P and Q. The number of tuples to be selected on the partial surface, num_tuple, is set a priori. In the first step, the points in P are examined to form a list of 4-point tuples, TPL = {TP1, . . ., TPnum_tuple}, where each tuple TPi is defined as

    TPi = {pi1, pi2, pi3, pi4},  pij ∈ P,  i = 1, . . ., num_tuple,  j = 1, . . ., 4.

In forming the tuples, it is required that the four points form a tetrahedron, which means they cannot lie on the same plane, and that the lengths of all the edges be greater than a predefined value. In addition, any point in P may be selected at most once. This helps ensure that the selected candidate vertices are distributed over a large area of the partial surface, thus reducing the possible errors caused by regional noise.

Second, the points in Q are explored to find the corresponding tuples for all the elements in TPL. This is easily implemented by looking up the correspondences in L and checking whether the lengths of the corresponding edges are the same. Note that more than one corresponding tuple may be found for each tuple in TPL, because the object may have multiple regionally similar surface patches. A list of corresponding tuples on the complete surface is now obtained, denoted TCL = {TCL1, . . ., TCLnum_tuple}, where TCLi is a sub-list of corresponding tuples for each tuple TPi in TPL. We also have TCLi = {TC1, . . ., TCk(i)}, where k(i) is the number of corresponding tuples on the complete surface for TPi, and each corresponding tuple is represented as

    TCj = {qj1, qj2, qj3, qj4},  qjk ∈ Q,  j = 1, . . ., k(i),  k = 1, . . ., 4.
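The rigidity test used when searching Q for corresponding tuples — the two tetrahedra must have equal corresponding edge lengths — can be sketched as follows. This is a hedged illustration assuming 4 × 3 numpy arrays of vertex coordinates; the function names and tolerance parameter are ours.

```python
import numpy as np
from itertools import combinations

def edge_lengths(tup):
    """The six pairwise edge lengths of a 4-point tuple, in a fixed order."""
    return [np.linalg.norm(tup[a] - tup[b])
            for a, b in combinations(range(4), 2)]

def is_corresponding_tuple(tp, tc, tol=1e-3):
    """A tuple tc on the complete surface corresponds to tp on the partial
    surface when corresponding edge lengths agree within tol (i.e., the two
    tetrahedra are congruent under the ordering of the points)."""
    return all(abs(d1 - d2) <= tol
               for d1, d2 in zip(edge_lengths(tp), edge_lengths(tc)))
```

Any rigid motion of a tuple passes this test, while a scaled or deformed tuple fails it, which is exactly the filtering the tuple search requires.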

Next, the centroid of the set of candidate vertices in P is calculated. For every tuple in TPL, the position of the centroid relative to it can be uniquely determined; it will be used later to find the position of the corresponding centroid of the complete surface. For a tuple on the partial surface, TP = {p1, p2, p3, p4}, a local 3D coordinate system is defined as follows (Fig. 5): p1 is chosen as the origin, the vector from p1 to p2 defines the x-axis, the cross product of the vectors p1p2 and p1p3 defines the z-axis, and, consequently, the y-axis is the cross product of the z-axis and the x-axis.

Fig. 5. Local 3D coordinate system defined by the 4-point tuple [p1, p2, p3, p4]. C is the centroid.

The three-dimensional coordinates of the centroid C in this local coordinate system are therefore obtained by taking the dot products of the vector p1C with the x-axis, the y-axis, and the z-axis, respectively.

In determining the corresponding centroid for the complete surface, a local coordinate system is defined for every corresponding tuple in TCL in the same manner as for the tuples in TPL. For a tuple in TPL, the coordinates of the centroid in its local coordinate system should equal those of the corresponding centroid in the local coordinate system defined by its corresponding tuple in TCL. Hence, the coordinates of the corresponding centroid in the local coordinate system of every corresponding tuple in TCL are also known. These coordinates still have to be transformed into the original 3D coordinate system of the complete surface. The transformation can be decomposed into a translation vector and a rotation matrix. As the position of the local origin in the original coordinate system is already known, what remains is the rotation matrix. Given the coordinates of the three other points in a tuple, this rotation can be calculated by solving a linear equation system.

Let TC = {q1, q2, q3, q4} be a corresponding tuple on the complete surface, with q1 the origin of the local coordinate system, (qix, qiy, qiz) the coordinates of these points in the local coordinate system, and (q′ix, q′iy, q′iz), i = 1, . . ., 4, their coordinates in the original coordinate system. Since q1 is at the origin of the local system, we have q1x = q1y = q1z = 0, and the translation between the two coordinate systems is (q′1x, q′1y, q′1z). If the rotation matrix is represented as

    R = | r1 r2 r3 |
        | r4 r5 r6 |
        | r7 r8 r9 |,

we have, for i = 2, 3, 4,

    qix·r1 + qiy·r2 + qiz·r3 = q′ix − q′1x,
    qix·r4 + qiy·r5 + qiz·r6 = q′iy − q′1y,                (1)
    qix·r7 + qiy·r8 + qiz·r9 = q′iz − q′1z,

a system of nine linear equations in the nine unknowns r1, . . ., r9, whose coefficient matrix is block-diagonal, with the 3 × 3 matrix of local coordinates of q2, q3, q4 repeated in each block.
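Assuming the points are numpy arrays, the local-frame construction and the solution of Eq. (1) can be sketched as follows. Because the coefficient matrix of Eq. (1) is block-diagonal, the nine equations reduce to one 3 × 3 solve per row of R. The function names are ours, not the paper's.

```python
import numpy as np

def tuple_frame(p):
    """Local frame of a 4-point tuple: origin p1, x-axis along p1->p2,
    z-axis along p1p2 x p1p3, y-axis = z x x. Returns the origin and a
    3x3 orthonormal basis whose rows are the axes."""
    p1, p2, p3, _ = p
    x = (p2 - p1) / np.linalg.norm(p2 - p1)
    z = np.cross(p2 - p1, p3 - p1)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return p1, np.vstack([x, y, z])

def local_coords(points, origin, basis):
    """Coordinates of points in the local frame (dot products with the axes)."""
    return (points - origin) @ basis.T

def rotation_from_tuple(q_local, q_orig):
    """Solve Eq. (1): find R with R q_i = q'_i - q'_1 for i = 2, 3, 4.
    q_local: 4x3 local coordinates (first row is the zero vector);
    q_orig:  4x3 coordinates in the original system."""
    A = q_local[1:]              # 3x3 local coordinates of q2, q3, q4
    B = q_orig[1:] - q_orig[0]   # right-hand sides q'_i - q'_1
    # A X = B solved columnwise; the rows of R are the columns of X.
    return np.linalg.solve(A, B).T
```

Because the four points are non-coplanar, A is invertible and R is unique, matching the full-rank argument in the text; for exact rigid data the recovered R is orthogonal even though orthogonality is never enforced.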

Since the four points in the tuple cannot lie on the same plane, the coefficient matrix of Eq. (1) is of full rank, thus uniquely determining the rotation matrix R. Note that the orthogonality constraints for a rigid rotation are not imposed in this computation. These constraints demand that every two row (or column) vectors of a rotation matrix be orthogonal and that the determinant of the matrix be 1. The rotation matrix recovered from Eq. (1) does not necessarily meet these requirements; nevertheless, solving the linear equations involves much less computation than the method of [38]. More importantly, since these parameters are used only to obtain an estimate of the position of the corresponding centroid (while the whole set of transformation parameters is later refined iteratively), a strictly orthogonal rotation matrix, at the expense of more computation, is not necessary at this stage.

After the transformation parameters are obtained from Eq. (1), every corresponding tuple TC yields a candidate position for the corresponding centroid in the original coordinate system. If there were no false correspondences in the list L and no computational error, all these candidate corresponding centroids would coincide at exactly the same spatial position. However, as pointed out earlier, when the objects to be registered have smooth surfaces with many regionally similar patches, there will be a significant number of correspondence outliers after the comparison of surface properties. In addition, the roundoff error inherent in the computation can cause some disturbance to the result. These factors produce a cloud of candidate corresponding centroids instead of a unique corresponding centroid. Determining a suitable corresponding centroid from all these candidates cannot be done by simply computing the mean, because of the presence of outliers.
For this purpose, we propose a suitable clustering algorithm.

4.3. Clustering to reject outliers

As pointed out previously, the candidate corresponding centroids form a cloud of points because of correspondence outliers and rounding error. It is crucial that the false corresponding centroids due to the first factor be correctly identified and


rejected, while the algorithm should also be able to tolerate the relatively small rounding errors and minimize their side effects. This also makes our algorithm robust against correspondences that are slightly off from their true values. Therefore, in order to obtain the true corresponding centroid, these two types of candidate corresponding centroids should first be separated and identified. In our approach, the candidate corresponding centroids are allocated into several groups. For each group, the mean of all its points is regarded as its center. It is assumed that the majority of the correspondences are correct; at the end, the group having the largest number of points is considered to be composed of the correct candidate corresponding centroids.

In our algorithm, a clustering strategy is adopted to group all these candidate corresponding centroids. The k-means algorithm and the ISODATA algorithm are two of the most commonly used clustering methods in pattern recognition. In these algorithms, the number of final clusters needs to be specified in advance. In our application, the objects to be registered may have many similar surface patches, and the number of these surface patches is unknown before the clustering process, so the number of final clusters cannot be given beforehand. In addition, it is important that the algorithm be good at identifying outliers as well as robust against noise. Therefore, we propose a clustering strategy that is a modification of the traditional ISODATA algorithm. In the clustering process, our algorithm automatically decides on the appropriate number of clusters, either by assigning a new cluster to a single point or by changing its existing assignment to another cluster, depending on its relative position. However, the minimum distance between two cluster centers, dmin, must be fixed in advance. In measuring the distance between two points, we adopt the Euclidean measure.
Let N be the total number of points, where the assignment of each point is w(i), i = 1, . . ., N. Our clustering algorithm is described as follows:

Step 1: Let w(i) = NULL, i = 1, . . ., N; that is, no point belongs to any cluster at the beginning.

Step 2: Randomly select two points as the centers of two initial clusters. The distance between these two points should be greater than dmin. If no such pair of points can be found, all the points belong to one cluster and the clustering process terminates; otherwise continue.

Step 3: Go through all the existing cluster centers to find the two that are nearest to each other. If the distance between them is less than dmin, merge them into one cluster, decrease the total number of existing clusters by one, and go to Step 5; otherwise continue.

Step 4: For every point i, i = 1, . . ., N:

Step 4.1: Find the cluster (H) that is nearest to it and compute the distance (d) between them.

Step 4.2: If d ≤ dmin, go to Step 4.3. Otherwise, take this point as the center of a new cluster and increase the total number of existing clusters by one. If w(i) ≠ NULL, modify the center of its previous cluster and the number of points in it. Go to Step 4.4.


Step 4.3: If w(i) ≠ H, assign this point to cluster H. Calculate the new center of cluster H and increase the number of points in this cluster by one. If w(i) ≠ NULL, modify the center of its previous cluster and the number of points in it; in addition, if there was only one point in the previous cluster, eliminate that cluster altogether.

Step 4.4: Go back to Step 4 until all the points have been processed.

Step 5: Go back to Step 3 until the clustering results stabilize.

After all the candidate corresponding centroids have undergone this clustering procedure, several clusters are obtained. The center of the cluster with the largest number of points is taken to be the true corresponding centroid. In this way, the false corresponding centroids caused by correspondence outliers are rejected.

4.4. Computing transformation parameters

Once we have determined the correct corresponding centroid of the complete surface, we can obtain the translation parameter Tinitial between the complete and partial surfaces. We next proceed to compute the rotation parameter Rinitial.

In rejecting the false corresponding centroids of the complete surface, all the 4-point tuples associated with them can also be regarded as incorrect and are hence discarded. A candidate rotation matrix is derived from each pair of the remaining corresponding tuples on the two surfaces, yielding a list of candidate rotation matrices. This process is almost the same as the one for computing the candidate corresponding centroids by Eq. (1), the difference being that the translation vector is now not the coordinate of the first point in the tuple but the translation vector Tinitial linking the centroid of the partial surface and the corresponding centroid of the complete surface. For this reason, all four points can also be included in Eq. (1).
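As a rough illustration, the modified ISODATA clustering (Steps 1-5 above) might be sketched as follows. This is a simplified version, not the authors' implementation: it folds the per-point reassignment (Step 4) and the center-merging pass (Step 3) into a single loop, and the function name modified_isodata is our own.

```python
import math

def modified_isodata(points, dmin, max_iter=50):
    """Simplified sketch of the modified-ISODATA clustering.

    Each point joins its nearest cluster if that cluster's center lies
    within dmin, otherwise it seeds a new cluster; centers that drift
    within dmin of each other are merged.  The number of clusters is
    decided automatically -- only dmin is fixed in advance.
    """
    centers = [points[0]]
    for _ in range(max_iter):
        # assign every point to its nearest center, spawning a new
        # cluster for any point farther than dmin from all centers
        members = [[] for _ in centers]
        for p in points:
            d, k = min((math.dist(p, c), k) for k, c in enumerate(centers))
            if d <= dmin:
                members[k].append(p)
            else:
                centers.append(p)
                members.append([p])
        # recompute centers as member means (dropping empty clusters)
        new = [tuple(sum(x) / len(m) for x in zip(*m)) for m in members if m]
        # merge any pair of centers closer than dmin
        merged = []
        for c in new:
            if all(math.dist(c, q) >= dmin for q in merged):
                merged.append(c)
        if merged == centers:   # results have stabilized (Step 5)
            break
        centers = merged
    # count the members of each surviving cluster
    counts = [0] * len(centers)
    for p in points:
        counts[min(range(len(centers)), key=lambda k: math.dist(p, centers[k]))] += 1
    return centers, counts
```

The center of the cluster with the largest count is then taken as the true corresponding centroid.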
Although most of the corresponding tuples are considered to be correct at this stage, a small number of false correspondences may still remain. The candidate rotation matrices that have just been calculated must therefore also undergo the clustering process described in the previous section. This time, each rotation matrix is first converted into a nine-dimensional vector by sequentially concatenating its three row vectors, so that in the subsequent clustering process every candidate rotation matrix can be regarded as a point in nine dimensions. When the clustering procedure stops, the center of the cluster with the largest number of points is chosen as the optimal rotation parameter; by rearranging this vector into a matrix, we obtain the rotation matrix Rinitial.

As pointed out in Section 4.2, the resulting rotation matrix does not meet the orthogonality requirements, and the clustering result is only an estimate of the final parameters; thus, these transformation parameters cannot be used directly to perform the registration. Nevertheless, the estimate is useful in rejecting many incorrect correspondences. For every possible correspondence Ci = {pi, qi}, let the candidate vertex pi on the partial surface undergo this initial transformation to obtain p′i = Rinitial · pi + Tinitial, and check whether the distance ‖p′i − qi‖ between p′i and its corresponding point qi on the complete surface is smaller than a given threshold. If this


is not so, Ci is regarded as a false correspondence and is removed from the list of corresponding points L. In this way, a large percentage of the false correspondences can be eliminated. Fig. 6 shows an example: after applying the procedure for initial outlier rejection, all the incorrect corresponding points in the root portion of the complete surface (e.g., point C in Fig. 3B) have been discarded. However, there may still be some incorrect corresponding points in the crown part of the complete surface (such as point B in Fig. 3B). These remaining outliers can cause minor errors in the final registration result, so an iterative strategy (such as ICP) is needed to further refine the correspondences.

In our approach, the method proposed by Umeyama [39] is chosen for its simplicity during each iteration. A set of transformation parameters {Rtemp, Ttemp} is obtained from all the remaining correspondences. Each candidate vertex pi on the partial surface undergoes the transformation p′i = Rtemp · pi + Ttemp, and the residual between p′i and qi is ‖p′i − qi‖. The correspondence with the largest residual error is rejected. The iteration stops when the largest error is smaller than a pre-defined threshold value. The final transformation parameters {Rfinal, Tfinal} can then be used directly to perform the registration.

It is clear that the largest component of the execution time is the selection of several suitable rigid corresponding 4-point tuples from a large number of candidate corresponding points. Given m pairs of candidate point correspondences, the computational complexity of this process by brute force is O(m²) in the worst case and O(m) on average.
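The iterative refinement just described can be sketched as follows. This is an illustrative reconstruction that assumes the SVD-based least-squares rigid solution of Umeyama [39] (without scaling); the function names umeyama_rigid and refine are our own, not the authors' code.

```python
import numpy as np

def umeyama_rigid(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ R·P + t, following
    Umeyama [39]: rotation from the SVD of the cross-covariance, with a
    sign correction so that R is a proper rotation (det R = +1)."""
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mp).T @ (Q - mq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mq - R @ mp

def refine(P, Q, tol=1e-3):
    """Re-estimate (Rtemp, Ttemp) from the remaining correspondences and
    drop the one with the largest residual, until the largest residual
    falls below the threshold tol."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    while True:
        R, t = umeyama_rigid(P, Q)
        res = np.linalg.norm(P @ R.T + t - Q, axis=1)
        worst = int(res.argmax())
        if res[worst] < tol or len(P) <= 4:
            return R, t
        P, Q = np.delete(P, worst, 0), np.delete(Q, worst, 0)
```

With one grossly wrong correspondence among otherwise exact matches, the first iteration rejects the outlier and the second converges to the true rigid transform.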

Fig. 6. Corresponding points on the two surfaces after rough correspondence-outlier rejection: (A) corresponding points on the partial surface; (B) corresponding points on the complete surface.


5. Experimental results and discussion

We first tested the proposed algorithm on 3D surface data of teeth, whose surfaces often contain many regionally similar patches. A laser-based high-resolution scanner (Cyberware 3030R/HIREZ/MM, sampling pitch 0.15–1, 0.3, and 0.05–0.2 mm in the x, y, and z directions, respectively) digitized the samples, and the resultant surface data were stored in VRML format. Parts of the complete surfaces were deliberately removed to produce the partial surfaces. Gaussian noise of zero mean and standard deviation 0.02 was then added to every vertex on the partial surface, so that the coordinate values were changed although the overall shape remained very similar to the original part of the complete tooth. For a high-resolution mesh, this surface with added noise is similar to that obtained by remeshing.

Our experiments were done on several different types of teeth, including incisors, canines, premolars, and molars, with two of each type. Fig. 7 shows the registration results for one tooth of each type. It can be seen that the proposed algorithm produces largely correct registration. A numerical assessment is provided in Table 1: the ground-truth transformation parameters are known because we constructed the partial surfaces ourselves, and the recovered transformation parameters are the results generated by the registration algorithm. The two sets of values are very close.

Although Table 1 can be used as a reference to check the registration results, it is still somewhat difficult to interpret the accuracy of the recovered transformations directly from the numerical data. In order to further quantify the registration accuracy of our method, the overlapping regions in the final registration results were cut into slices along the long axis of the tooth, so that the partial surface as well as the complete surface has a closed contour on each slice. The two contours were compared slice by slice.
For each slice, the area (D) of the region between the two contours was computed and compared with the area (A) of the region inside the contour of the partial surface or the region inside the contour of the complete surface, whichever is larger. In our experiments, the overlapping regions in the registration results were each cut into 19 slices. For every registration result, the mean and standard deviation of D/A were obtained. As listed in Table 2, the non-overlapping areas between the two contours comprise a small percentage of the contour area. Since these small errors are also partly caused by the round-off error inherent in the computation process, the registration results are satisfactory.

We compared the time spent on rejecting correspondence outliers using our algorithm and using geometric consistency. The simulations were performed on a Pentium IV PC (1.8 GHz, 512 MB RAM) using Matlab, and the results are shown in Table 2. Although the registration accuracies of the two algorithms are comparable, our method is far more efficient when there is a large number of candidate point correspondences.

To further test the robustness of our algorithm, experiments were done in which outliers were added to the data. These outliers are small patches on the partial surface that have no correspondences on the complete surface. In the segmentation of the digitized dental model, the tooth crown that is extracted retains part of


Fig. 7. Registration results for canine, incisor, molar, and premolar (from top row to bottom row). The left column shows the two surfaces before registration; the right column shows the two surfaces after registration.


Table 1
Comparison between the ground-truth transformation parameters and the recovered transformation parameters. For each of the eight test objects (Canine1, Canine2, Incisor1, Incisor2, Molar1, Molar2, Premolar1, Premolar2), the table lists the ground-truth rotation matrix R and translation vector t (mm) alongside the recovered R and t; the two sets of parameters agree closely for every object.


Table 2
Registration accuracy and execution time in rejecting correspondence outliers: our algorithm versus geometric consistency

Object      No. of candidate point   Mean of D/A            Std. dev. of D/A       Execution time (s)
            correspondences          Ours     Geom. cons.   Ours     Geom. cons.   Ours     Geom. cons.
Canine1     6743                     0.0145   0.0159        0.0091   0.0113        28       5550
Canine2     3891                     0.0144   0.0144        0.0066   0.0066        18       1838
Incisor1    7968                     0.0164   0.0153        0.0088   0.0071        18       7764
Incisor2    8468                     0.0143   0.0270        0.0111   0.0235        39       8701
Molar1      22016                    0.0451   0.0112        0.0543   0.0115        1018     59353
Molar2      16709                    0.0212   0.0169        0.0389   0.0400        447      34293
Premolar1   8144                     0.0307   0.0184        0.0394   0.0154        358      7934
Premolar2   6640                     0.0244   0.0220        0.0304   0.0253        26       5408

the gum surface and hence forms the outliers. Two examples of these outliers are shown in Fig. 8: Fig. 8A shows the crown of a canine with gum added, and Fig. 8B shows the crown of a molar with gum added. Gaussian noise with zero mean and standard deviation 0.02 was added to these two data sets. Fig. 9 shows the registration results; it can be seen that the registration is once again satisfactory.

The slice-comparison strategy used for Table 2 was also adopted to test the accuracy of these registration experiments. Fig. 10A shows the registration result of a canine together with a cutting plane, while Fig. 10B shows the two closed contours in two dimensions. The two contours are so close to each other that it is difficult to distinguish them.
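The per-slice error D/A can be illustrated for the simple case in which one contour lies entirely inside the other, so that the between-contour area D reduces to the difference of the two enclosed areas (the general case requires polygon boolean operations). The shoelace formula gives each enclosed area; the function names below are ours, not the authors'.

```python
def shoelace_area(poly):
    """Enclosed area of a closed 2-D polygon via the shoelace formula."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1]
            - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2.0

def slice_error(contour_a, contour_b):
    """D/A for nested contours: D is the difference of the enclosed
    areas, A the larger of the two enclosed areas."""
    a, b = shoelace_area(contour_a), shoelace_area(contour_b)
    return abs(a - b) / max(a, b)

# toy example: a unit square versus a slightly larger 1.2 x 1.2 square
inner = [(0, 0), (1, 0), (1, 1), (0, 1)]
outer = [(-0.1, -0.1), (1.1, -0.1), (1.1, 1.1), (-0.1, 1.1)]
```

For each of the 19 slices, the mean and standard deviation of these per-slice ratios give the figures reported in Table 2.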

Fig. 8. Crown with part of the gum added as outliers: (A) canine with gum added; (B) molar with gum added.


Fig. 9. Registration results for canine with outlier (top row) and molar with outlier (bottom row).

We also tested our registration method on two data sets, "Balloon" (from www.3dcafe.com) and "Bunny" (from www.cc.gatech.edu/projects/large_models/). For "Balloon," the upper part was extracted to form the partial surface, while for "Bunny," the head was extracted. Gaussian noise with the same magnitude as in the previous experiments was added to the two partial surfaces. These two surfaces (especially "Balloon") also contain many regionally similar surface patches. The registration results for "Balloon" (Fig. 11) and "Bunny" (Fig. 12) are again satisfactory. The execution times of our method


Fig. 10. Slice comparison for the registration result of the canine. The blue solid curve represents the contour of the partial surface, and the red dotted curve represents the contour of the complete surface.

Fig. 11. Registration result of balloon model: (A) shows the two surfaces before registration; (B) shows the two surfaces after registration.

are also much shorter than those obtained with the geometric consistency method (Table 3).

In all the experiments, the number of tuples to be selected on the partial surface (num_tuple) is set to 4. On one hand, if num_tuple is larger, more tuples can be used in the correspondence refinement process, making the process more robust against noise. On the other hand, more tuples on the partial surface mean that a larger number of corresponding tuples on the complete surface will be selected, which increases the computational time for computing the corresponding centroid and for clustering. Therefore, a compromise must be made in deciding on the actual value of num_tuple.


Fig. 12. Registration result of bunny model: (A) shows the two surfaces before registration; (B) shows the two surfaces after registration.

Table 3
Execution time of our algorithm and geometric consistency

Object     No. of candidate point   Execution time (s)
           correspondences          Our algorithm   Geometric consistency
Bunny      114                      <1              2
Balloon    1090                     11              179

Although every rigid tuple adopted in our algorithm contains only four points, more points can in fact be used to construct a tuple. Doing so improves the robustness of computing the rotation matrix by Eq. (1), but the time spent searching for suitable tuples also becomes longer. For this reason, we decided on a four-point tuple, since this is the minimum number of points that ensures the coefficient matrix in Eq. (1) is of full rank.

Currently, our algorithm can only register two surfaces of the same resolution, because in constructing the histogram matrix we simply count the number of surrounding vertices in each specific location interval. However, as pointed out by Johnson [16], the algorithm would be able to deal with the multi-resolution case if the areas of the small surface patches surrounding each nearby vertex were used to calculate the histogram matrices.

Finally, we would like to point out that although the proposed algorithm provides an overall solution for 3D partial-surface registration, the procedure for refining point correspondences (Section 4) can also be combined with methods in which other surface attributes are used to obtain the initial point correspondences. For example, when there are many salient features on the object surfaces, a feature-based method can be adopted to obtain the point correspondences, followed by our refinement strategy to reject the correspondence outliers.


6. Conclusion

In this paper, we have presented a three-dimensional partial-surface registration algorithm that uses both regional surface properties and the shape rigidity constraint of the objects. The statistical properties of the points on the three-dimensional object surface are used to determine the initial candidate correspondences. The overall shape-rigidity constraint and a specially designed clustering method are employed to reject the correspondence outliers. The final transformation parameters are obtained after an iterative refinement approach.

Our algorithm has the following important advantages: (a) there is no requirement for exact curvature computation or feature extraction; (b) an initial estimate of the transformation parameters is not needed; (c) we do not require an exhaustive search for the closest point in each iteration; instead, there is only a one-time search for suitable corresponding tuples; and (d) by using the rigid tuples to derive the initial transformation parameters, we avoid the problem of combinatorial explosion when there are a large number of multiple correspondences. Therefore, our algorithm is especially applicable to the 3D partial registration of smooth-surface objects that may also have many regionally similar surface patches.

References

[1] Z. Zhang, Iterative point matching for registration of free-form curves and surfaces, Int. J. Comput. Vision 13 (2) (1994) 119–152.
[2] S. Lavallee, R. Szeliski, Recovering the position and orientation of free-form objects from image contours using 3D distance maps, IEEE Trans. Pattern Anal. Mach. Intell. 17 (4) (1995) 378–390.
[3] P.J. Besl, R.C. Jain, Three-dimensional object recognition, ACM Comput. Surveys 17 (1985) 75–145.
[4] R. Bergevin, D. Laurendeau, D. Poussart, Registering range views of multipart objects, Comput. Vision Image Und. 61 (1) (1995) 1–16.
[5] P.J. Besl, N.D. McKay, A method for registration of 3-D shapes, IEEE Trans. Pattern Anal. Mach. Intell. 14 (2) (1992) 239–256.
[6] C. Langis, M. Greenspan, G. Godin, The parallel iterative closest point algorithm, in: Proc. 3rd Internat. Conf. on 3D Digital Imaging and Modeling, 2001, pp. 195–202.
[7] T. Jost, H. Hugli, A multi-resolution ICP with heuristic closest point search for fast and robust 3D registration of range images, in: Proc. 4th Internat. Conf. on 3D Digital Imaging and Modeling, 2003, pp. 427–433.
[8] M. Greenspan, M. Yurick, Approximate K-D tree search for efficient ICP, in: Proc. 4th Internat. Conf. on 3D Digital Imaging and Modeling, 2003, pp. 442–448.
[9] J.P. Thirion, Extremal points: definition and application for 3D image registration, in: Proc. IEEE Internat. Conf. on Computer Vision and Pattern Recognition, 1994, pp. 587–592.
[10] F. Stein, G. Medioni, Structural indexing: efficient 3-D object recognition, IEEE Trans. Pattern Anal. Mach. Intell. 14 (2) (1992) 125–145.
[11] J.B.A. Maintz, P.A. van den Elsen, M.A. Viergever, Evaluation of ridge seeking operators for multimodality medical image matching, IEEE Trans. Pattern Anal. Mach. Intell. 18 (4) (1996) 353–365.
[12] P. Krsek, T. Pajdla, V. Hlavac, Differential invariants as the base of triangulated surface registration, Comput. Vision Image Und. 87 (2002) 27–38.
[13] P.J. Besl, R.C. Jain, Invariant surface characteristics for 3D object recognition in range images, Comput. Vision Graph. Image Process. 33 (1986) 33–80.
[14] A.E. Johnson, M. Hebert, Using spin images for efficient object recognition in cluttered 3D scenes, IEEE Trans. Pattern Anal. Mach. Intell. 21 (5) (1999) 433–449.


[15] A.E. Johnson, M. Hebert, Surface matching for object recognition in complex three-dimensional scenes, Image Vision Comput. 16 (1998) 635–651.
[16] A.E. Johnson, Surface landmark selection and matching in natural terrain, in: Proc. IEEE Internat. Conf. on Computer Vision and Pattern Recognition, vol. 2, 2000, pp. 413–420.
[17] M. Hebert, K. Ikeuchi, H. Delingette, A spherical representation for recognition of free-form surfaces, IEEE Trans. Pattern Anal. Mach. Intell. 17 (7) (1995) 681–690.
[18] C. Dorai, A.K. Jain, COSMOS—a representation scheme for 3D free-form objects, IEEE Trans. Pattern Anal. Mach. Intell. 19 (10) (1997) 1115–1130.
[19] C.S. Chua, R. Jarvis, Point signature: a new representation for 3D object recognition, Int. J. Comput. Vision 25 (1) (1997) 63–85.
[20] S.M. Yamany, A.A. Farag, Free-form surface registration using surface signatures, in: Proc. IEEE Internat. Conf. on Computer Vision, vol. 2, 1999, pp. 1098–1104.
[21] S.M. Yamany, A.A. Farag, Surface signatures: an orientation independent free-form surface representation scheme for the purpose of objects registration and matching, IEEE Trans. Pattern Anal. Mach. Intell. 24 (8) (2002) 1105–1120.
[22] Y. Sun, M.A. Abidi, Surface matching by 3D point's fingerprint, in: Proc. IEEE Internat. Conf. on Computer Vision, vol. 2, 2001, pp. 263–269.
[23] D. Zhang, M. Hebert, Harmonic maps and their applications in surface matching, in: Proc. IEEE Internat. Conf. on Computer Vision and Pattern Recognition, vol. 2, 1999, pp. 524–530.
[24] D. Zhang, M. Hebert, Experimental analysis of harmonic shape images, in: Proc. IEEE Internat. Conf. on 3D Digital Imaging and Modeling, 1999, pp. 209–218.
[25] Q. Li, M. Zhou, J. Liu, Multi-resolution mesh based 3D object recognition, in: Proc. IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications, 2000, pp. 37–43.
[26] S. Ruiz-Correa, L.G. Shapiro, M. Meila, A new paradigm for recognizing 3-D object shapes from range data, in: Proc. IEEE Internat. Conf. on Computer Vision, vol. 2, 2003, pp. 1126–1133.
[27] S. Linnainmaa, D. Harwood, L. Davis, Pose determination of a three-dimensional object using triangle pairs, IEEE Trans. Pattern Anal. Mach. Intell. 10 (5) (1988) 634–647.
[28] C.F. Olson, Efficient pose clustering using a randomized algorithm, Int. J. Comput. Vision 23 (2) (1997) 131–147.
[29] G. Stockman, Object recognition and localization via pose clustering, Comput. Vision Graph. Image Process. 40 (3) (1987) 361–387.
[30] Y. Lamdan, H.J. Wolfson, Geometric hashing: a general and efficient model-based recognition scheme, in: Proc. IEEE Internat. Conf. on Computer Vision, 1988, pp. 238–249.
[31] H.J. Wolfson, I. Rigoutsos, Geometric hashing: an overview, IEEE Comput. Sci. Eng. 4 (4) (1997) 10–21.
[32] F. Mokhtarian, N. Khalili, P. Yuen, Multi-scale free-form 3D object recognition using 3D models, Image Vision Comput. 19 (2001) 271–281.
[33] C.S. Chen, Y.P. Hung, J.B. Cheng, A fast automatic method for registration of partially-overlapping range images, in: Proc. IEEE Internat. Conf. on Computer Vision, 1998, pp. 242–248.
[34] C.S. Chen, Y.P. Hung, J.B. Cheng, RANSAC-based DARCES: a new approach to fast automatic registration of partially overlapping range images, IEEE Trans. Pattern Anal. Mach. Intell. 21 (11) (1999) 1229–1234.
[35] T.S. Gieng, et al., Constructing hierarchies for triangle meshes, IEEE Trans. Visualization Comput. Graph. 4 (2) (1998) 145–161.
[36] I.J. Trotts, B. Hamann, K.I. Joy, Simplification of tetrahedral meshes with error bounds, IEEE Trans. Visualization Comput. Graph. 5 (3) (1999) 224–237.
[37] G. Taubin, Estimating the tensor of curvature of a surface from a polyhedral approximation, in: Proc. IEEE Internat. Conf. on Computer Vision, 1995, pp. 902–907.
[38] B.K.P. Horn, H.M. Hilden, S. Negahdaripour, Closed-form solution of absolute orientation using orthonormal matrices, J. Opt. Soc. Am. A (Opt. Image Sci.) 5 (7) (1988) 1127–1135.
[39] S. Umeyama, Least-squares estimation of transformation parameters between two point patterns, IEEE Trans. Pattern Anal. Mach. Intell. 13 (4) (1991) 376–380.