An efficient initial guess for the ICP method


Pattern Recognition Letters 125 (2019) 721–726


Zexi Feng a,b,∗

a Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, P.R. China
b University of Chinese Academy of Sciences, Yuquan Road No. 19(A), Shijingshan District, Beijing 100049, P.R. China

Article history: Received 24 November 2018; Revised 25 June 2019; Accepted 24 July 2019; Available online 25 July 2019

Keywords: ICP, Statistics, Covariance

Abstract

The iterative closest point (ICP) method is a widely accepted solution to the point-based surface matching problem, and various improved versions have been proposed based on it. Although many variants exist, their foundation is still the point-to-point correspondence. In this paper, a pseudo closed-form solution based on the covariance matrix is proposed. For fully overlapped point clouds, the proposed method finds the rigid transformation without the point-to-point correspondence. For partially overlapped point clouds, the proposed method estimates the overlapped regions from a statistic. After the overlapped regions are found, an initial guess of the rigid transformation is computed for each pair of corresponding points. Next, the correct correspondences are selected based on the number of neighbors. Finally, the rigid transformation is found from the correct correspondences. Experiments demonstrate that the proposed method is robust and suited to partially overlapped point clouds.

© 2019 Elsevier B.V. All rights reserved.

1. Introduction

Registration of point clouds obtained from different perspectives is a challenging problem in computer vision and computer graphics. Currently, the widely accepted solution is the iterative closest point (ICP) method [1], but it requires that one point cloud be a subset of the other and that the initial pose be close enough. Many variations of the ICP method have been proposed to increase its robustness [2–5]. Let {Mi} and {Nk} be the point clouds to be registered, where Mi^T = [xi, yi, zi] and Nk^T = [xk, yk, zk] are the three-dimensional coordinates of the points. If Mi and Nk are a pair of corresponding points satisfying Mi = RNk + T, then the transformation between {Mi} and {Nk} is called a rigid transformation. Here, R is a 3 × 3 rotation matrix and T is a 3 × 1 translation vector. Let N̄k^T = [xk, yk, zk, 1] and let Rt = [R, T] be the transformation matrix; then the rigid transformation is Mi = Rt N̄k. If there are at least 3 pairs of corresponding points, then the rigid transformation can be found. However, the ordering of the three-dimensional points in {Mi} and {Nk} may be totally different, because the points have been sorted or were simply scanned from a different view. If the correspondence is unknown, then the rigid transformation is not easy to solve.

∗ Corresponding author at: Intelligence Design and Computer Vision, 1068 Shenzhen 518055, P.R. China. E-mail address: [email protected]

https://doi.org/10.1016/j.patrec.2019.07.019 0167-8655/© 2019 Elsevier B.V. All rights reserved.

The ICP method uses a heuristic strategy to find the point-to-point correspondence: each point finds its closest target in the other point cloud and treats it as the corresponding point. With this strategy, the point-to-point correspondence problem is solved and the transformation matrix is found iteratively. The problem is that this strategy works only if the initial poses of the point clouds are close enough; if the initial pose is not properly set, the ICP method fails dramatically. To find the initial position efficiently, feature-based methods [2,6,8–10,14] and 2D/3D-based methods [11–13] have been proposed. In the feature-based methods, the corresponding points are estimated from handcrafted statistics. With these statistics, a certain number of the real corresponding points are found, and the partial-overlap problem is solved too. Since not all the estimated correspondences are correct pairs, the RANSAC method [7] is employed to find the correct matches. After the correspondence problem is solved, the rigid transformation is found. In the 2D/3D-based methods, image points are employed. With this additional information, additional topology and additional features are obtained, so the corresponding points are found more easily. Although the point-to-point problem is partly solved by the 2D/3D-based methods, image information is not always available.

In this paper, a pseudo closed-form solution is proposed for the rigid point cloud registration problem. The proposed solution is based on the covariance matrix. The advantage of the proposed


solution is that the point-to-point correspondence is not needed, so the rigid transformation is found in linear time. If the point cloud has been transformed and sorted, then the proposed solution is the best choice. For partially overlapped point clouds, the proposed method finds the solution based on the covariance method and a statistic-based method. Firstly, a statistic (the local feature) is proposed to describe the local structure of each point. Secondly, the potential overlapped regions are found based on the similarity between the local features. Thirdly, the transformations between the overlapped regions are found based on the covariance matrix. Fourthly, the interior regions are found. Finally, the transformation is found according to the interior regions.

This paper is organized as follows. Section 2 defines the scope of the problem. Section 3 introduces the proposed theory. Section 4 shows the experimental results and compares the proposed method with other works. Section 5 gives the conclusion.

2. Motivation

Previous point-set registration methods treat the point-set registration problem as a combination of two intertwined sub-problems: the pose estimation problem and the point correspondence problem. However, if the point clouds are fully overlapped, then the transformation is determined by the covariance matrix, and the point-to-point correspondence is not needed. The advantage of finding the transformation from the covariance matrix is that a pseudo closed-form solution is found in linear time. For particular kinds of applications, such as matching a point cloud with a sorted copy, the covariance-based method is the best choice. If the point clouds are partially overlapped, then the overlapped regions are found using the local feature. If a pair of features is matched, then an overlapped region is found. Once the overlapped regions are found, the potential rigid transformation matrices are found, and the correct transformation matrices can be selected. After the correct transformation matrices are selected, the point-to-point correspondence problem is solved. After the matched pairs are found, the rigid transformation is found.

3. Theory and framework

3.1. Find the rigid transformation from the covariance matrix

Suppose {Mi} and {Nk} are two fully overlapped point clouds that satisfy Mi = RNk + T (a one-to-one mapping). Let CN = cov({Nk}, {Nk}) be the covariance matrix of {Nk}, and let CM = cov({RNk + T}, {RNk + T}) be the covariance matrix of {Mi}. Then CN and CM satisfy (1), where ‖CN‖2 is the 2-norm of CN:

CM/‖CM‖2 = R (CN/‖CN‖2) R^T    (1)

Because the correspondence is not involved in (1), a closed-form solution is found without the point-to-point correspondence. And because the covariance matrix is a statistic, the numbers of points in {Mi} and {Nk} need not be the same. If the points in {Mi} and {Nk} are dense enough, then (1) gives a pseudo closed-form solution.

Although (1) is equivalent to (CM/‖CM‖2)R = R(CN/‖CN‖2), (1) is not solved in this way: because the freedom of R is 4, solving (CM/‖CM‖2)R − R(CN/‖CN‖2) = 0 with the singular value decomposition results in a trivial solution. Instead, apply the Schur decomposition to CM/‖CM‖2 and CN/‖CN‖2; since a covariance matrix is symmetric, its Schur form is diagonal. Sorting the eigenvalues then yields (2):

UM AM UM^T = R UN AN (R UN)^T    (2)

UM = R UN    (3)

Theoretically, R should be found from (3). The problem is that the signs of the eigenvectors do not always match. Let UM = [uM1, uM2, uM3] and UN = [uN1, uN2, uN3]; sometimes R is solved from [uM1, uM2, uM3] and [uN1, −uN2, uN3]. The cause of this phenomenon is that although an eigenvalue is determined, the sign of its eigenvector is arbitrary: if AX = λX holds, then A(−X) = λ(−X) also holds. If the rotation between {Mi} and {Nk} is small, this phenomenon is not always observed; if the rotation is large, it is always observed. Because the sign of each eigenvector may flip, (3) is changed to (4):

UM = R[±uN1, ±uN2, ±uN3]    (4)

Solve (4) using the least-squares method and refine R by setting all of its singular values to 1; the solutions are then found. Although there are 8 possible solutions in (4), only one is correct. Let {Ri} be the 8 possible solutions and let Rs be the correct one. The correct solution should satisfy (1). Besides, because a rotation matrix is the matrix form of a rotation vector, converting the rotation matrix to a rotation vector and back should leave its elements unchanged. With these two principles, (5) is employed to evaluate Ri:

Ei = Σ abs(CM/‖CM‖2 − Ri (CN/‖CN‖2) Ri^T) + Σ abs(rodR(rodr(Ri)) − Ri) + Σ wi ‖Sk − Ri S′k‖2    (5)

Here, Σ A = Σ aij is the sum of every element of the matrix A, abs(A) replaces the elements of A with their absolute values, rodr(Ri) transforms the rotation matrix Ri into the rotation vector [15], rodR(X) transforms a rotation vector into a rotation matrix, ‖Sk − Ri S′k‖2 is the distance between the matched features (the matched features are found from the distance between the feature points, the distance between the nearest points, or the distance between statistics), and wi is the weight of the matched points. The rotation matrix Ri that corresponds to min({Ei}) is considered the correct rotation matrix. Ideally, if the point structure is the same (only rotated, translated and sorted), then Σ wi ‖Sk − Ri S′k‖2 is not necessary. But for scanned models the distances between the matched features are usually necessary, because the scanned points are not identical. In this paper, Sk and S′k are statistics found from the normals and the points in the surrounding region (explained in the following part).

Similarly, if the normals of the points are considered, then (4) is changed to (6). There are 64 possible solutions in (6), but only one is correct. After setting the singular values of [±uN1, ±uN2, ±uN3, ±u′N1, ±u′N2, ±u′N3] to 1, (6) is solved using the least-squares method. The correct solution, which should minimize (5) and the covariance matrix of the normals, is selected.

[UM, U′M] = R[±uN1, ±uN2, ±uN3, ±u′N1, ±u′N2, ±u′N3]    (6)
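As an illustrative sketch (not the author's implementation), the sign-ambiguity search of (1)–(4) can be written as follows. The candidate score here uses only the residual of (1), dropping the Rodrigues-consistency and feature terms of (5), and the function name is our own:

```python
import numpy as np
from itertools import product

def rotation_from_covariances(M, N):
    """Recover R for M_i = R N_k + T from the covariance eigenvectors,
    trying all 8 eigenvector sign combinations of Eq. (4)."""
    CM = np.cov(M.T)                      # 3x3 covariance matrices
    CN = np.cov(N.T)
    CM /= np.linalg.norm(CM, 2)           # normalize by the 2-norm
    CN /= np.linalg.norm(CN, 2)
    # A covariance matrix is symmetric, so its Schur form is the
    # eigendecomposition; sort the eigenvalues in descending order.
    lM, UM = np.linalg.eigh(CM)
    lN, UN = np.linalg.eigh(CN)
    UM, UN = UM[:, np.argsort(-lM)], UN[:, np.argsort(-lN)]
    best_R, best_err = None, np.inf
    for signs in product([1.0, -1.0], repeat=3):
        R = UM @ np.diag(signs) @ UN.T    # one candidate of Eq. (4)
        U, _, Vt = np.linalg.svd(R)       # set singular values to 1
        R = U @ Vt
        if np.linalg.det(R) < 0:          # keep proper rotations only
            continue
        err = np.abs(CM - R @ CN @ R.T).sum()   # residual of Eq. (1)
        if err < best_err:
            best_R, best_err = R, err
    return best_R
```

With noisy or near-symmetric data, several candidates can score almost equally on the residual of (1) alone; that is exactly why the paper adds the Rodrigues-consistency and matched-feature terms in (5).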

After the rotation is found, the translation is found by (7). Here, M̄ and N̄ are the geometric centers of {Mi} and {Nk}.

T = M̄ − R N̄    (7)
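Continuing the sketch, the translation of (7) is just a difference of centroids (an illustrative helper, not from the paper):

```python
import numpy as np

def translation_from_centroids(M, N, R):
    """Eq. (7): T = M_bar - R @ N_bar, with the geometric centers
    of the two clouds as M_bar and N_bar."""
    return M.mean(axis=0) - R @ N.mean(axis=0)
```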

Because finding the covariance matrices for {Mi} and {Nk} takes linear time (O(M + N) complexity) and solving Eqs. (2)–(7) takes constant time (O(C) complexity), the overall complexity is O(M + N + C). If O(M + N) ≫ O(C), then the pseudo closed-form solution is found in linear time. Note that if the covariance is


mixing true data with noise, then (6) should be employed instead of (4). If there are outliers in the point clouds, then the outliers should be removed in preprocessing [16,17], or the point clouds should be treated as partially overlapped point clouds.

3.2. Estimate the overlapped regions

If {Mi} and {Nk} are two partially overlapped point clouds, then the overlapped regions must be found before solving the rigid transformation. Let pi be a point in {Mi}, let {pj} be the neighbor points of pi within a radius H, and let hi be the normal of pi. Similar to [14], a statistic, shown in (8), is computed for pi; it is later employed to evaluate the similarity between two regions. Here, ‖pi − pj‖2 is the 2-norm of pi − pj; αj, βj and ωj are three angles based on the vertices and normals of {Mi}; N is the number of bins in the statistic; c0 is the center of the first bin; Δc is the gap between the bins; bj1 and bj2 are the indices of the two nearest bins; and wbj1 and wbj2 are the weights of the two nearest bins. The statistic divides the search space into N uniform bins according to the distance to pi, and then uses a weighted scheme to assign the three angles to the two nearest bins. The weighted average of each bin is employed as the feature of pi. Unlike [14], the proposed method computes the cross product of the normal hj and the direction lj, and the angle between the normal hi and lj × hj. Besides, the proposed method only computes the angles between pi and pj, not between every pair of points (the angles between pj and pk are not considered). For example, if there are 11 bins, then the statistic contains 33 elements: the first 11 are determined by αj, the 12th to the 22nd by βj, and the last 11 by ωj. If ‖pi − pj‖2 = H/11, then αj is assigned to the first and second elements of the statistic, βj to the 12th and 13th, and ωj to the 23rd and 24th. The reason to employ the cross product is that its direction differs for mirror-symmetric structures, which makes mirror-symmetric structures distinguishable.
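The two-bin soft assignment described above can be sketched as follows; `soft_bins` is a hypothetical helper, and dividing the boundary weight by the bin gap is our assumption, since the normalization is not spelled out in (8):

```python
def soft_bins(d, H, N=11):
    """Assign a distance d in [0, H] to the two nearest of N uniform
    bins (cf. Eq. (8)): centers at c0 + k*dc with c0 = H/(2N), dc = H/N.
    Returns ((bin1, w1), (bin2, w2)) with 0-based bin indices."""
    c0, dc = H / (2 * N), H / N
    if d <= c0:                        # below the first bin center
        return (0, 1.0), (0, 0.0)
    if d >= c0 + (N - 1) * dc:         # beyond the last bin center
        return (N - 1, 1.0), (N - 1, 0.0)
    b1 = int((d - c0) // dc)           # lower of the two nearest bins
    w2 = (d - c0 - b1 * dc) / dc       # assumed normalization by the gap
    return (b1, 1.0 - w2), (b1 + 1, w2)
```

For the example in the text, a distance of H/11 with 11 bins falls exactly between the first two bin centers, so both receive weight 0.5.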



dj = ‖pi − pj‖2
lj = (pi − pj)/‖pi − pj‖2
αj = arccos(hi · hj)
βj = arccos(hi · lj)
ωj = arccos(hi · (lj × hj))
c0 = H/(2N)
Δc = H/N
bj1 = ⌊(dj − c0)/Δc⌋ + 1
bj2 = bj1 + 1 (bj1 < N);  bj2 = bj1 (bj1 = N)
wbj1 = 1 (dj ≤ c0);  dj − (bj1 − 1)Δc − c0 (c0 < dj < cN);  1 (dj ≥ cN)
wbj2 = 1 − wbj1
Vkα = Σj wjk αj / Σj wjk
Vkβ = Σj wjk βj / Σj wjk
Vkω = Σj wjk ωj / Σj wjk    (8)

Along with (8), two additional statistics are found. The first is the mean normal of the involved region, referred to as S1. The second is the direction from the point pi to the center of the involved region, referred to as S2. These two additional statistics, along with the normal of pi, are employed as the matched features of (5).

For each point in {Mi} and {Nk}, a local feature (the statistic described by (8)) is found. Let {FiM} and {FkN} be the features corresponding to the points in {Mi} and {Nk}. The proposed method finds the best matches between {FiM} and {FkN} based on the KD-tree method. Only if FiM chooses FkN as its matched feature and FkN chooses FiM as its matched feature are FiM and FkN taken as a pair of matched features.

After the matched features are found, the flat regions should be removed. Let Mi and Nk be a pair of matched points and let CMi and CNk be the covariance matrices of Mi and Nk, found from the points within a radius H. The flatness of a matched pair is evaluated by comparing the singular values of CMi and CNk. Let the singular values of CMi be [C1Mi, C2Mi, C3Mi] (sorted in descending order) and let the singular values of CNk be [C1Nk, C2Nk, C3Nk] (sorted in descending order). If C3Mi/C2Mi + C3Nk/C2Nk < ε, then this pair of matched features is removed.

As mentioned in the previous section, the matched features found from the local features are not 100% correct, so there must be a strategy to remove the incorrect matches. Let C′Mi and C′Nk be the covariance matrices corresponding to the normals. For each pair of corresponding points, a rigid transformation is found by (6). Let {[ri, Ti]} be the rigid transformations found in the above process. Similar to the RANSAC method [7], the neighbors of each ri and Ti are counted. For the rotation vector ri, because a rotation of 181° about ri/‖ri‖ is represented as a rotation of 179° about −ri/‖ri‖, the rotation vector is converted to a rotation matrix, and the average angle between the three axes is taken into account. Let Ri = [Ri1, Ri2, Ri3] be the rotation matrix and let γ be the tolerance angle. If Ri1^T Rk1 + Ri2^T Rk2 + Ri3^T Rk3 > 3cos(γ), then rk is considered a neighbor of ri. For the translation vector Ti, if the Euclidean distance between Ti and Tk is less than τ, then Tk is considered a neighbor of Ti. Let {[ni, n′i]} be the numbers of neighbors corresponding to {[ri, Ti]}. Then a threshold ϑ for {ni + n′i} is found by the Otsu method [18]. The corresponding points for which ni + n′i is equal to or greater than this threshold are considered the correct correspondences.

3.3. Find the transformation from the overlapped regions

Let {M′i} and {N′i} be the matched points; the initial rotation and the initial translation are found by M′i = Rt N̄′i, as mentioned in the first section. The reason to employ the point-to-point correspondence here is that there is usually noise in the point cloud, and in that situation the solution found from the point-to-point correspondence is more accurate than the solution found from (1). In this paper, the rotation matrix is found from {(M′i − M̄′)/‖M′i − M̄′‖} and {(N′i − N̄′)/‖N′i − N̄′‖}, where M̄′ and N̄′ are the mean values of {M′i} and {N′i}; the points are normalized so that every point makes an equal contribution to the rotation matrix. After the initial translation is found, weights are found for every point: let {Di} be the distances from M′i to Rt N̄′i; the weight is set as {1/(Di + 1)}. After the weights are found, the rotation and the translation are re-computed. In particular, if the number of matched points is less than 3, then the covariance matrices are determined by the entire models {Mi}, {Nk}, {hMi} and {hNk}, and the rigid transformation is found by (6) and (7). Here, {hMi} and {hNk} are the normal vectors of {Mi} and {Nk}.

Optionally, the Fuzzy-ICP method [19] can be employed to improve the accuracy. Because {N′i} is a subset of {M′i} and the initial pose is found, the accurate transformation is found by the Fuzzy-ICP method. In this paper, only the three nearest points are considered in the Fuzzy-ICP method, and the weight is also set as {1/(Di + 1)}.
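Once the matched points are available, the weighted initial estimate described above can be sketched as follows; the SVD-based (Kabsch) extraction of the rotation is our stand-in, since the paper does not spell out how R is obtained from the normalized points, and the function name is our own:

```python
import numpy as np

def weighted_rigid_fit(M, N, w):
    """Weighted estimate of R and T from matched points (cf. Sec. 3.3);
    the SVD (Kabsch) step is an assumed stand-in for the paper's
    unspecified rotation solver."""
    w = np.asarray(w, float) / np.sum(w)
    mM, mN = w @ M, w @ N                  # weighted centroids
    A, B = M - mM, N - mN
    # Normalize so that every point contributes equally (cf. Sec. 3.3).
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    H = (w[:, None] * B).T @ A             # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # reject reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = mM - R @ mN                        # Eq. (7) with weighted centers
    return R, T
```

In a second pass, the weights 1/(Di + 1) computed from the first estimate would be fed back into this routine, as the paper describes.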


Fig. 2. Part of the partially overlapped models employed in the experiment. The first and second models are the Buddha model, interpolated, reversely placed and cut. The third and fourth models are the Bunny model, interpolated, reversely placed and cut.

Fig. 1. The overall framework of the proposed method.

3.4. The overall framework

As shown in Fig. 1, the overall framework of the proposed method is as follows:

(1) Choose the task (partially overlapped or fully overlapped) of the registration; if the partially overlapped task is chosen, then go to step (2), else go to step (4).
(2) Find the overlapped regions based on the similarity between the statistics of the points, and then determine the correct matches between the point clouds; the details of the algorithm are given in Section 3.2.
(3) After the matched points are found, the rigid transformation is found from the matched points; the details are given in Section 3.3.
(4) If the correspondence information is not provided (the fully overlapped task is chosen, or the statistic-based algorithm failed to find the correspondence), then the rigid transformation is found from the covariance matrices; the details are given in Section 3.1.
(5) Optionally, the Fuzzy-ICP method can be employed to improve the accuracy after the initial pose is found.

4. Experiments and comparison

In the experiments, 131 asymmetric point cloud models (found on threepark.net, referred to as the downloaded models hereafter) and some scanned models (obtained from homemade scanners) are employed to test the effectiveness of the proposed method. The downloaded models are interpolated so that the sparse point clouds become dense point clouds: if the size of a mesh is larger than 1.5 times the average size, then a new point is interpolated at the center of that mesh, and this interpolation loop is executed 5 times. After the downloaded models are interpolated, they are rotated and translated by a known r0 and T0 (the ground truth). Next, the proposed method is employed to find the rotation and the translation back. For the downloaded models, the ground truth rotation is set as r0 = [π/√3, π/√3, π/√3] radians (π/√3 ≈ 1.81), and the ground truth translation is set as T0 = [10, 20, −3.7]. For H, the radius used when determining the statistics, it is set as 7 times the average distance of the points' 33 neighbors. For γ, the tolerance angle, it is set as π/40. For τ, the tolerance

distance, it is set as 2 times the average distance between the points in each model. For ε, the least flatness, it is set as 0.002. In the experiments, the parameter N is set as 11. Assuming the normals are given, the results are shown in Tables 1–6; the file reading time is included.

To test the correctness of the proposed method, the downloaded models are rotated and translated by r0 and T0, and the proposed method is employed to find the rotation and the translation back. Assuming the normals are given, the average results are shown in Table 1. Note that because the initial model is reversely placed (|r0| = π) in all the tables hereafter, a method may find a rotation vector corresponding to −179°. Hence, if a rotation vector rk satisfies rk · r0 < 0, then rk is transformed to −rk/‖rk‖ · (2π − ‖rk‖) before determining the RMS of the rotation vector. If an algorithm fails to reverse the models, an average error of 3.14 rad may arise; correspondingly, if the rotation is not correct, there is a large error in the translation vector. Besides, because the referenced methods in Tables 1–6 can deal with partially overlapped point clouds, the proposed method is set to the partially overlapped task.

To test the robustness of the proposed method, the downloaded models are divided into the odd-numbered points and the even-numbered points. The odd-numbered points are rotated and translated by r0 and T0, and the proposed method is employed to find the rotation and the translation back. Assuming the normals are given, the average results are shown in Table 2. Additionally, Gaussian noise is added to the downloaded models, and the average results are shown in Table 3. In Table 3, the noise level added to the points is 10 × log(max(Li)/50), and the noise level added to the normals is 10 × log(0.005). Here, Li is the Euclidean distance from a point to the geometric center of the point cloud.

As shown in Tables 2 and 3, although the correctness of the proposed method is the same as that of the FPFH method [14], the proposed method is faster. It is also noticed in Tables 1 and 2 that if the search radius increases, the performance of the FPFH method drops more rapidly.

Besides the fully overlapped point clouds, the partially overlapped point clouds are also tested. In Tables 4–6, the downloaded models are divided into two groups. In the first group, 1/4 of the front part is cut; in the second group, 1/4 of the back part is cut. Here, the front direction is defined as the direction from the center of the model to the outermost point of the model, and the back direction is the opposite. Part of the partially overlapped models is shown in Fig. 2.

To test the correctness of the proposed method, the second group of the partially overlapped models is rotated and translated by r0 and T0, and the proposed method is employed to find the rotation and the translation back. Assuming the normals are given, the average results are shown in Table 4.

To test the robustness of the proposed method, the odd-sampled points of the first group and the even-sampled points of the second group are employed. The odd-sampled group of the partially overlapped models is rotated and translated by r0 and T0, and the proposed method is employed to find the rotation and the trans-
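The antipodal correction described above can be sketched as follows (illustrative only; note the leading minus sign, which flips the rotation axis along with the angle so that the two representations describe the same rotation):

```python
import numpy as np

def align_rotation_vector(rk, r0):
    """If rk points away from the ground truth r0, replace it with the
    equivalent antipodal representation -rk/|rk| * (2*pi - |rk|):
    a rotation of theta about axis u equals a rotation of
    (2*pi - theta) about -u."""
    if np.dot(rk, r0) < 0:
        n = np.linalg.norm(rk)
        rk = -rk / n * (2 * np.pi - n)
    return rk
```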


Table 1. Test results of the single model test.

Method                      | Average rotation  | RMS of rotation vector | Average translation  | RMS of translation vector | Time
----------------------------|-------------------|------------------------|----------------------|---------------------------|---------
Proposed (before Fuzzy-ICP) | [1.81,1.81,1.81]  | 0 rad                  | [10,20,−3.7]         | 0 mm                      | 46656 s
ICP [4]                     | [0.18,0.27,0.29]  | 1.32 rad               | [−2.43,25.71,−7.01]  | 41.79 mm                  | 4259 s
Super 4 [6]                 | [−0.39,0.58,0.98] | 1.15 rad               | [3.41,11.29,0.40]    | 46.77 mm                  | 1806 s
FPFH [14]                   | [1.81,1.81,1.81]  | 0 rad                  | [10,20,−3.7]         | 0 mm                      | 116235 s

Table 2. Test results of the even-odd test.

Method                      | Average rotation   | RMS of rotation vector | Average translation  | RMS of translation vector | Time
----------------------------|--------------------|------------------------|----------------------|---------------------------|---------
Proposed (before Fuzzy-ICP) | [1.81,1.81,1.81]   | 0.0059 rad             | [10.00,20.02,−3.72]  | 0.3380 mm                 | 22251 s
ICP [4]                     | [0.11,0.32,0.29]   | 1.34 rad               | [−5.96,28.97,−8.43]  | 41.80 mm                  | 2358 s
Super 4 [6]                 | [0.58,−0.61,−0.4]  | 0.8587 rad             | [4.53,−0.24,−4.8]    | 46.44 mm                  | 1607 s
FPFH [14]                   | [1.81,1.81,1.81]   | 0.0030 rad             | [9.99,20.03,−3.73]   | 0.2664 mm                 | 29571 s

Table 3. Test results of the noise test.

Method                      | Average rotation     | RMS of rotation vector | Average translation  | RMS of translation vector | Time
----------------------------|----------------------|------------------------|----------------------|---------------------------|---------
Proposed (before Fuzzy-ICP) | [1.81,1.81,1.81]     | 0.0022 rad             | [9.99,19.99,−3.71]   | 0.1193 mm                 | 46593 s
ICP [4]                     | [0.15,0.31,0.28]     | 1.33 rad               | [−2.4,25.72,−7.02]   | 41.79 mm                  | 4316 s
Super 4 [6]                 | [−0.92,−1.04,−1.18]  | 1.71 rad               | [7.02,11.04,−3.8]    | 41.74 mm                  | 1819 s
FPFH [14]                   | [1.81,1.81,1.81]     | 0.0026 rad             | [10.02,20.05,−3.72]  | 0.4578 mm                 | 116477 s

Table 4. Test results of the partial model test.

Method                      | Average rotation  | RMS of rotation vector | Average translation  | RMS of translation vector | Time
----------------------------|-------------------|------------------------|----------------------|---------------------------|---------
Proposed (before Fuzzy-ICP) | [1.81,1.81,1.81]  | 0.0005 rad             | [9.99,20.00,−3.70]   | 0.006 mm                  | 45921 s
ICP [4]                     | [0.23,0.39,0.45]  | 1.3961 rad             | [−1.51,27.22,−7.08]  | 42.3 mm                   | 4436 s
Super 4 [6]                 | [0.64,0.85,0.51]  | 1.3870 rad             | [4.54,0.59,−12.13]   | 58.52 mm                  | 1721 s
FPFH [14]                   | [1.81,1.81,1.81]  | 0.0007 rad             | [9.99,19.99,−3.69]   | 0.095 mm                  | 110201 s

Table 5. Test results of the even-odd test on partially overlapped data.

Method                      | Average rotation    | RMS of rotation vector | Average translation  | RMS of translation vector | Time
----------------------------|---------------------|------------------------|----------------------|---------------------------|---------
Proposed (before Fuzzy-ICP) | [1.81,1.81,1.81]    | 0.0088 rad             | [10.11,20.00,−3.71]  | 1.15 mm                   | 17034 s
ICP [4]                     | [0.24,0.46,0.42]    | 1.3781 rad             | [−1.46,26.59,−7.73]  | 42.12 mm                  | 4576 s
Super 4 [6]                 | [−1.04,0.018,0.51]  | 1.0144 rad             | [−6.38,16.57,−6.85]  | 56.3418 mm                | 1584 s
FPFH [14]                   | [1.81,1.81,1.81]    | 0.0085 rad             | [10.01,20.02,−3.69]  | 0.71 mm                   | 27661 s

Table 6. Test results of the noise test on partially overlapped data.

Method                      | Average rotation    | RMS of rotation vector | Average translation  | RMS of translation vector | Time
----------------------------|---------------------|------------------------|----------------------|---------------------------|---------
Proposed (before Fuzzy-ICP) | [1.81,1.81,1.81]    | 0.0030 rad             | [10.01,20.01,−3.68]  | 0.19 mm                   | 45748 s
ICP [4]                     | [0.27,0.43,0.44]    | 1.3742 rad             | [−1.20,27.20,−7.02]  | 42.63 mm                  | 4588 s
Super 4 [6]                 | [0.45,−1.14,−0.36]  | 0.9470 rad             | [−1.12,3.71,−3.48]   | 58.1732 mm                | 1761 s
FPFH [14]                   | [1.81,1.81,1.81]    | 0.0078 rad             | [9.95,20.15,−3.76]   | 1.00 mm                   | 109372 s

lations back. Assuming the normals are given, the average results are shown in Table 5. Additionally, Gaussian noise is added to the partially overlapped models, and the average results are shown in Table 6. In Table 6, the noise level added to the points is 10 × log(max(Li)/50), and the noise level added to the normals is 10 × log(0.005). Here, Li is the Euclidean distance from a point to the geometric center of the point cloud.

For the scanned models, the parameter H is set as 210 mm, the parameter γ is set as π/40, and the parameter τ is set as 40 mm. The scanned models are pre-processed by the bounding box method; part of the initial poses and the registration results are shown in Fig. 3. As shown in Fig. 3, although the initial poses are reversely placed, the models are matched correctly.

Fig. 3. Part of the initial poses and the matched results of the scanned point clouds. From left to right: the initial pose of the first model, the initial pose of the second model, the matched result of the first group, the initial pose of the third model, the initial pose of the fourth model, and the matched result of the second group.


5. Conclusion and discussion

In this paper, a covariance-based pseudo closed-form solution is proposed for the point matching problem. The rotation and the translation are found in linear time by the proposed method, and experiments demonstrate that it is robust and accurate. If the point cloud has been transformed and sorted, then the solution found by (4) and (5) solves the rigid transformation in linear time. In our experiments, the proposed statistic is faster than the FPFH method [14], probably because that method searches the neighbors twice in order to combine the SPFH into the FPFH. In our experiments, the Super 4 method [6] encounters large errors but completes the task with the fastest speed. This is probably because the 131 models employed in the experiments include desks and chairs, which are designed with multiple duplicate local structures; moreover, we have interpolated the models, which enlarges the number of duplicate structures. Because the statistic employed by the Super 4 method covers only 4 points, that method fails to reverse a certain number of models; hence the large errors shown in Tables 1–6. For the ICP method [4], because of the initial pose, most of the models remain completely upside-down during the iteration; hence it produces the largest rotation error.

The proposed method also suits the n-dimensional rigid transformation problem: by changing the three-dimensional covariance to the k-dimensional covariance, the rigid transformation is found using the proposed method.

References

[1] P.J. Besl, N.D. McKay, A method for registration of 3-D shapes, IEEE Trans. Pattern Anal. Mach. Intell. 14 (2) (1992) 239–256.
[2] G.C. Sharp, S.W. Lee, D.K. Wehe, ICP registration using invariant features, IEEE Trans. Pattern Anal. Mach. Intell. 24 (1) (2002) 90–102.
[3] D. Chetverikov, D. Svirko, D. Stepanov, P. Krsek, The trimmed iterative closest point algorithm, in: Proceedings of the 16th International Conference on Pattern Recognition, 2002, pp. 545–548.
[4] A. Segal, D. Haehnel, S. Thrun, Generalized-ICP, in: Robotics: Science and Systems (RSS), 2009.
[5] A. Myronenko, X.B. Song, Point set registration: coherent point drift, IEEE Trans. Pattern Anal. Mach. Intell. 32 (12) (2010) 2262–2275.
[6] N. Mellado, D. Aiger, N.J. Mitra, Super 4PCS: fast global pointcloud registration via smart indexing, Comput. Graph. Forum 33 (5) (2014) 205–215.
[7] R. Raguram, J.M. Frahm, M. Pollefeys, A comparative analysis of RANSAC techniques leading to adaptive real-time random sample consensus, in: Computer Vision – ECCV 2008, LNCS 5303, 2008, pp. 500–513.
[8] B.W. He, Z.M. Lin, Y.F. Li, An automatic registration algorithm for the scattered point clouds based on the curvature feature, Opt. Laser Technol. 46 (2013) 53–60.
[9] Y.H. Liu, L. De Dominicis, B.G. Wei, L. Chen, R.R. Martin, Regularization based iterative point match weighting for accurate rigid transformation estimation, IEEE Trans. Vis. Comput. Graph. 21 (9) (2015) 1058–1071.
[10] K. Ono, R. Ono, Y. Hanada, 3D surface registration using estimated local shape variation, in: 2017 IEEE Conference on Control Technology and Applications (CCTA), 2017.
[11] Y. Khoo, A. Kapoor, Non-iterative rigid 2D/3D point-set registration using semidefinite programming, IEEE Trans. Image Process. 25 (7) (2016) 2956–2970.
[12] X. Kang, M. Armand, Y. Otake, W.P. Yau, P.Y.S. Cheung, Y. Hu, R.H. Taylor, Robustness and accuracy of feature-based single image 2-D–3-D registration without correspondences for image-guided intervention, IEEE Trans. Biomed. Eng. 61 (1) (2014) 149–161.
[13] L.L. He, S. Wang, T.N. Pappas, 3D surface registration using Z-SIFT, in: 2011 18th IEEE International Conference on Image Processing (ICIP), 2011.
[14] Q.Y. Zhou, J. Park, V. Koltun, Fast global registration, in: Computer Vision – ECCV 2016, LNCS 9906, 2016, pp. 766–782.
[15] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint, MIT Press, 1993.
[16] X.D. An, X.Q. Yu, Q.T. Xu, J. Wang, Research on 3D scanning point cloud de-noising, in: 2014 International Conference on Audio, Language and Image Processing (ICALIP), 2014, pp. 842–845.
[17] Z.W. Huang, Y.J. Huang, J. Huang, A method for noise removal of LIDAR point clouds, in: 2013 Third International Conference on Intelligent System Design and Engineering Applications (ISDEA), 2013, pp. 104–107.
[18] N. Otsu, A threshold selection method from gray-level histograms, IEEE Trans. Syst. Man Cybern. 9 (1) (1979) 62–66.
[19] B. Krebs, P. Sieverding, B. Korn, A fuzzy ICP algorithm for 3D free form object recognition, in: Proc. International Conference on Pattern Recognition, 1996, pp. 539–543.

Declaration of Competing Interest

The authors have declared that no conflict of interest exists.

Acknowledgments

The authors would like to thank Professor Zhan Song for his advice.