A new extracting algorithm of k nearest neighbors searching for point clouds

Pattern Recognition Letters 49 (2014) 162–170
Zisheng Li a,b, Guofu Ding a,*, Rong Li a, Shengfeng Qin c

a Institute of Advanced Design & Manufacturing, School of Mechanical Engineering, Southwest Jiaotong University, Chengdu 610031, China
b School of Manufacturing Science and Engineering, Southwest University of Science and Technology, Mianyang 621010, China
c Department of Design, Northumbria University, City Campus East Building 2, Newcastle upon Tyne NE1 2SW, UK

Article info

Article history: Received 5 October 2013; Available online 22 July 2014.

Keywords: kNN searching algorithm; Extracting algorithm; Distance comparison using vector inner product; Point clouds

Abstract

The k nearest neighbors (kNN) searching algorithm is widely used to find the k nearest neighbors of each point in a point cloud model for noise removal and surface curvature computation. When the number of points and their density in a point cloud model increase significantly, the efficiency of the kNN searching algorithm becomes critical to various applications, so a better kNN approach is needed. In order to improve the efficiency of kNN searching, this paper develops a new strategy and the corresponding algorithm that reduce the number of target points in a given data set by extracting nearest neighbors before the search begins. The nearest neighbors of a reverse nearest neighborhood are used to extract nearest points of a query point, avoiding repetitive Euclidean distance calculations during extraction and thereby saving time and memory. For any point in the model, its initial nearest neighbors can be extracted from its reverse neighborhood using an inner product of two related vectors rather than direct Euclidean distance calculations and comparisons. The initial neighbors can be the full set or a partial set of all its nearest neighbors; if it is a partial set, the rest can be obtained by other fast searching algorithms, which can be integrated with the proposed approach. Experimental results show that integrating the extracting algorithm proposed in this paper with other established algorithms yields better performance than those algorithms achieve alone.

© 2014 Published by Elsevier B.V.

1. Introduction

A variety of kNN searching algorithms are widely used in point cloud modeling [34], spatial database retrieval [10,12], data mining [27], etc. The kNN searching problem can be described as follows: given an existing data set $S$ with $n$ points and a query point $p_0 \in S$, find a subset $S_0$ with $k$ points ($p_0$ not included), where $S_0 \subset S$ and $k < n$, such that for any points $p_1 \in S_0$ and $p_2 \in S - S_0$, $\mathrm{dist}(p_0, p_1) \le \mathrm{dist}(p_0, p_2)$, where $\mathrm{dist}(p_i, p_j)$ denotes the distance between $p_i$ and $p_j$. The distance metric can take different forms depending on the application. In point cloud modeling, the Euclidean distance is usually used in kNN search for estimating the geometric properties of a point, such as its normal and curvature.

In order to search the nearest neighbors of a query point, a kNN searching algorithm needs to (1) calculate the distances between the query point and all other points in the data set, (2) sort these points by distance in ascending order, and (3) choose the k closest points.

✩ This paper has been recommended for acceptance by D. Coeurjolly.
* Corresponding author. Tel./fax: +86 028 8760 1643. E-mail address: [email protected] (G. Ding).

http://dx.doi.org/10.1016/j.patrec.2014.07.003
0167-8655/© 2014 Published by Elsevier B.V.

When the query point is changed, the above procedure is repeated, so all distances computed before are used only once and need to be recomputed every time. This is also called the brute force approach [26].
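For reference, the following is a minimal C++ sketch of the brute-force baseline just described; the type and function names (Point3, bruteForceKnn) are ours, not from the paper.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Point3 { double x, y, z; };

// Squared Euclidean distance; comparing squared distances avoids the square root.
double squaredDist(const Point3& a, const Point3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Brute-force kNN: O(n) distance evaluations per query, repeated for every query point.
std::vector<std::size_t> bruteForceKnn(const std::vector<Point3>& cloud,
                                       std::size_t query, std::size_t k) {
    std::vector<std::size_t> idx;
    for (std::size_t i = 0; i < cloud.size(); ++i)
        if (i != query) idx.push_back(i);                       // (1) every candidate except the query
    std::size_t keep = std::min(k, idx.size());
    std::partial_sort(idx.begin(), idx.begin() + keep, idx.end(),
        [&](std::size_t a, std::size_t b) {                     // (2) order by distance to the query
            return squaredDist(cloud[a], cloud[query]) < squaredDist(cloud[b], cloud[query]);
        });
    idx.resize(keep);                                           // (3) keep the k closest
    return idx;
}
```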


Many scholars have studied various kNN algorithms in $R^d$ ($d \ge 2$) space for extensive applications and have proposed several efficient searching algorithms. These algorithms fall broadly into four categories: multi-step progressive algorithms, parallel algorithms, data reorganization algorithms (DRA) and spatial partition algorithms (SPA). The prominent searching algorithms are DRA and SPA.

DRA involves a tree-like data structure: it divides a whole data set into multi-level subspaces which are used to build tree nodes recursively according to splitting rules. The data structure used in these algorithms is a binary or multi-children tree whose nodes differ from each other because of the splitting rules; therefore the splitting rules determine the searching efficiency of these algorithms. For example, the KD-tree [6] uses tree nodes to store space ranges, partitioning the data space with clipping hyperplanes to reduce the searching scope. The Cell-tree [9] refines the distance bound between cells and searches nearest neighbors by partitioning the data space into equal-size cubic cells on which the cell tree is built. The VP-tree [32] builds a binary tree by dividing the data space with spherical shells at given distances from a selected vantage point, instead of the cubic cells employed in the Cell-tree. The BBD-tree [4] builds a tree that differs from the KD-tree and Cell-tree in that its nodes involve not only points but also set-theoretic differences. PAT [11,21] builds an efficient search tree using principal component analysis (PCA) and conducts its search using partial distance search (PDS), while OST [19] builds a tree using orthogonal base vectors and elimination inequality rules. The Quad-tree for disk accessing and the R-tree for searching [26] with locality are efficient as well, as is the C-tree [33]. Algorithms in this category partition space into small regions to build a tree, with each node holding nearly the same number ($N_c$) of points, and use bounding comparisons to remove child nodes that cannot belong to the candidate set during the search. We have found that $N_c$ is a key parameter for searching nearest neighbors: if $N_c$ is small, the algorithm can degenerate into a brute force approach, while if it is large, its answer set can be seriously disordered. However, there is no method to calculate the proper $N_c$ yet.

SPA [20,23,24,30,31,35] divides the bounding box of a data set into cells; its splitting procedure is similar to that of DRA, but it does not reorganize points into a tree and merely records which cell contains which points. The algorithm proposed in [23] has been applied to two-dimensional data sets, while those in [20,24,30,31,35] address various three-dimensional point clouds. When a search begins, SPA first locates the cell containing the query point, then calculates the distance of every point in that cell to the query point and sorts them in increasing order. If there are enough points in this cell, and if the kth shortest distance is smaller than the distance between the query point and the closest wall of the cell, the search stops. Otherwise, it continues with one or more cells, depending on the expanding rules, and repeats the search procedure. SPA is an excellent searching algorithm with satisfactory accuracy and acceptable speed because it utilizes the neighborhood of a point to speed up the search, splitting the whole data space into cells and reducing the searching scope in turn. However, the distances between a query point and the other points still need to be calculated anew every time, so in this sense it can still be regarded as a brute force algorithm.

Neighbor-finding technology performs wasteful repeated work [22], as points in proximity share neighbors [26]. Avoiding a brute force method in the neighborhood calls for novel methods that extract nearest neighbors directly. The lazy search algorithm proposed by Song [28] can extract partial nearest neighbors of the latest query point for the current query point, but it is designed for a moving point. There is a key difference between the kNN searching problem for a point cloud and the moving-point searching problem: for the latter, the query point is moving and it is not an element of the data set. We have found that the criterion proposed in [28] leads to serious inaccuracies when it is applied to extract nearest neighbors for point cloud models.
The motivation of this paper is to reduce the number of target searching points, differing from the prominent algorithms, which aim to reduce the searching scope. In our proposed approach, in order to find the kNN of a query point $P$, if $k_1$ points have already been extracted from a reverse nearest neighborhood [8] of $P$, we only need to search for $k - k_1$ further nearest neighbors of $P$ in the subsequent searching process. The new algorithm extracts nearest neighbors (EkNN) directly rather than applying a brute force method in the neighborhood. In addition, inner products of related vectors are used to sort out the nearest neighbors, avoiding direct distance comparison and saving both time and memory. The technical contribution of our work can be summarized as follows:

• An accurate criterion for extracting nearest neighbors from the reverse neighborhood of a query point is proposed. Although all nearest neighbors of a query point could be extracted from its reverse neighborhood recursively, we only do it once through each reverse neighbor, for better performance.
• An alternative method for comparing Euclidean distances is presented. This approach uses the inner product of two vectors formed from the query point and the two points being checked to determine their order.
• Finally, the proposed method can be integrated with any other searching algorithm. We tested our method with SPA and DRA, and used a linked list to manage and save memory.

The rest of the paper is organized as follows. Section 2 defines the related concepts of kNN searching and our new approach. Section 3 gives the details of our novel algorithm for extracting nearest neighbors. Section 4 presents the results of experiments, and the conclusions are finally drawn in Section 5.

2. A new approach for kNN searching

Surface reconstruction has been a central problem in reverse engineering [3,5,15]. Technological advances in laser scanning enable the creation of large 3D point cloud data sets with high density and accuracy, and also present a real application challenge: generating product models through reverse engineering approaches accurately and rapidly. There is a wide diversity of reverse engineering methods for surface modeling from point clouds [29]. The analytical functions of point clouds are unknown, so all geometric properties such as normals can only be estimated, with a variety of methods such as regression [15], Delaunay-based methods [2], statistical methods [16], one-ring neighborhoods [13] and the Hough transform [7]. For instance, estimating normals requires constructing the best local tangent plane for each point, and the kNN of each point must be found before the tangent plane can be constructed. Thus kNN searching plays an important role in point cloud applications, and in reverse engineering in turn. Although there are many kNN searching algorithms, as stated in Section 1, they are not all suited to point cloud applications such as reverse engineering from a big data set with high density, while classic research in reverse engineering focuses on surface reconstruction, smoothing, etc., and pays less attention to kNN searching problems. Nevertheless, the kNN searching algorithm is vital to the performance of reverse engineering on large-scale point cloud models with high density and accuracy, and it needs further study.

In this paper, $n$ denotes the number of points in a point cloud. $P$ is called a query point if we are going to search kNN points for $P$, and $kNN(P)$ is the data set consisting of the $k$ nearest points to $P$. If $P' \in kNN(P)$, then $P$ is called a reverse nearest neighbor of $P'$. Reverse nearest neighbors are abbreviated rkNN, and all rkNN points of $P'$ form a set $rkNN(P')$. The Euclidean distance between $P_i$ and $P_j$ is denoted by $\mathrm{dist}(P_i, P_j)$.

If $P' \in kNN(P)$ is the next query point, then for any $Q \in kNN(P)$ ($Q \ne P$) there are two cases: $Q \in kNN(P')$ or $Q \notin kNN(P')$. If we know that $Q \in kNN(P')$ before a search begins, then only $k - 1$ points need to be found in the subsequent search process (the computation time of the searching process decreases as $k$ decreases): (1) In order to judge whether $Q \in kNN(P')$ or not, we propose a fast extracting algorithm; if $Q \in kNN(P')$, the extracting algorithm can establish this before the search begins. (2) Direct distance computation and comparison are replaced with the inner product of two vectors formed from a query point and two other points, an indirect distance comparison that improves efficiency further (the proof is given in Appendix A).


Fig. 1. (a) Ray $\overrightarrow{P'P''}$ intersects sphere $S$ at $P_i$; (b) $P' \notin kNN(P'')$; (c) $P' \in kNN(P'')$; (d) $S'$ is tangent to $S$ internally; (e) $S'$ is tangent to $S$ externally; (f) $a \cdot b^T > 0$ and $P \in kNN(P'')$.

The latter method has fewer arithmetic operations than the former (an analysis is given in Section 4); thus it is faster, and a considerable effect can be obtained, especially for large point clouds. (3) To save memory, dynamic memory allocation is used throughout the implementation of the algorithm. (4) The algorithm can be integrated with any other fast searching algorithm to speed it up during the search.

3. Algorithm for extracting nearest neighbors

In this section, we present a determining rule for extracting nearest neighbors from the neighborhood of a query point. The performance of a kNN algorithm is strongly influenced by the data complexity in terms of the number of dimensions, the number of data points and the data distribution of a data set [18]. When the data complexity increases, the kNN searching performance decreases if a neighbor-finding technique is applied repeatedly to each point in the data set, because this results in wasteful repeated work, like a brute force method [26]. We therefore do not extract all the nearest neighbors from the neighborhood repeatedly and recursively; we do it only once from each reverse nearest neighbor, and the remaining nearest neighbors that cannot be extracted are found by other searching algorithms.

3.1. Determining rule for the intersection of two sets

Let $P'(x', y', z')$ be a previous query point and $P''(x'', y'', z'')$ be the current query point. $D_k$ is the maximum distance between $kNN(P')$ and $P'$, and $D'_k$ is the maximum distance between $kNN(P'')$ and $P''$. Assume that the bounding sphere $S$ centered at $P'$ with radius $D_k$ encloses $kNN(P')$ tightly, and that $P_i$ ($P_i = D \cdot D_k + P'$)¹ [14] is the intersection between the ray $\overrightarrow{P'P''}$ and $S$. $P_m$ denotes the midpoint between $P_i$ and $P$, and the vectors are $a = P_i - P$ and $b = P_m - P''$, as illustrated in Fig. 1(a).

Theorem 1. If $P'' \in kNN(P')$, then for any point $P \in kNN(P')$, a sufficient condition for $P \in kNN(P'')$ is $a \cdot b^T > 0$.²

¹ $D$ is the normalized direction $(x'' - x', y'' - y', z'' - z')$.
² $(a_1, a_2, a_3) \cdot (b_1, b_2, b_3)^T = a_1 b_1 + a_2 b_2 + a_3 b_3$.

Proof. In the general case, $kNN(P')$ intersects $kNN(P'')$ when $P'' \in kNN(P')$, as shown in Fig. 1(a) and (c). Fig. 1(b) depicts the case in which $P'' \in kNN(P')$ and $P' \notin kNN(P'')$, while Fig. 1(c) depicts the other case, in which $P'' \in kNN(P')$ and $P' \in kNN(P'')$. Meanwhile, there are two extreme cases for $kNN(P')$ and $kNN(P'')$, shown in Fig. 1(d) and (e): the kNN points of $P'$ concentrate around $P''$ (Fig. 1(d)), or $P''$ moves away from the farthest kNN point of $P'$ in the opposite direction (Fig. 1(e)). In these cases $S$ touches $S'$ at the point $P_i$, and $P_i$ is bound to be the intersection of $\overrightarrow{P'P''}$ and $S$, so $P_i \in kNN(P')$ and $P_i \in kNN(P'')$; in either case, $kNN(P')$ equals $kNN(P'')$. From Fig. 1(d), we obtain the lower limit for $D'_k$:

$$D'_k \ge \mathrm{dist}(P'', P_i) \qquad (1)$$

In the same way, we get the upper limit for $D'_k$ from Fig. 1(e):

$$D'_k \le D_k + \mathrm{dist}(P'', P') \qquad (2)$$

Joining the condition of Theorem 1 with the above two inequalities, we obtain

$$\begin{cases} P'' \in kNN(P') \\ \mathrm{dist}(P'', P_i) \le D'_k \le D_k + \mathrm{dist}(P'', P') \end{cases} \qquad (3)$$

Thus for any point $P \in kNN(P')$, if $\mathrm{dist}(P'', P) \le \mathrm{dist}(P'', P_i) \le D'_k$, then $P \in kNN(P'')$. According to the theorem on distance comparison using the inner product of vectors (see Appendix A), $\mathrm{dist}(P'', P) \le \mathrm{dist}(P'', P_i)$ is equivalent to $a \cdot b^T > 0$, which concludes the proof. Fig. 1(f) shows a general case that helps illustrate the above conclusion. □

3.2. Description of the proposed extracting algorithm

In this section, we integrate the proposed algorithm with SPA for demonstration. The extracting algorithm first picks a point $q$ from the rkNN of a query point, then takes a point $p$ from $kNN(q)$ and judges whether $p$ satisfies Theorem 1. If it does, $p$ is a kNN point of the query point. For simplicity, only the extracting procedure is given. Before the algorithm begins, a point cloud linked list pcdPointlink should be created. The following is the description of the extracting algorithm.


Step 1.  for all qhead in pcdPointlink do
Step 2.    if kNN(qhead) is empty then
Step 3.      if rkNN(qhead) is empty then
Step 4.        search kNN by SPA
Step 5.      else
Step 6.        k1 = 0; for all q in rkNN(qhead) do
Step 7.          for all p in kNN(q) do
Step 8.            if p satisfies Theorem 1 then
Step 9.              push p into kNN(qhead), push qhead into rkNN(p), k1 = k1 + 1
Step 10.           end if
Step 11.         end for
Step 12.       if k1 < k then
Step 13.         k = k − k1, search kNN by SPA
Step 14.       end if
Step 15.     end if
Step 16.   end if
Step 17. end for
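Below is a minimal C++ sketch of the extracting loop above, under assumed data structures; CloudPoint, maxKnnDist and the brute-force body of searchRemainingBySPA are our placeholders for the paper's linked-list implementation and its SPA host, not the authors' code. The Theorem 1 test uses the inner product $a \cdot b^T$ from Section 3.1.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point3 { double x, y, z; };
static Point3 sub(Point3 a, Point3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Point3 add(Point3 a, Point3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Point3 scale(Point3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Point3 a, Point3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct CloudPoint {
    Point3 pos;
    std::vector<std::size_t> knn;   // indices of nearest neighbors found so far
    std::vector<std::size_t> rknn;  // indices of points listing this point among their kNN
};

// D_k of kNN(q): the largest distance from q to any of its currently known neighbors.
static double maxKnnDist(const std::vector<CloudPoint>& pc, std::size_t q) {
    double best = 0.0;
    for (std::size_t j : pc[q].knn) {
        Point3 d = sub(pc[j].pos, pc[q].pos);
        best = std::max(best, std::sqrt(dot(d, d)));
    }
    return best;
}

// Theorem 1 test: q plays the role of the previous query point P', head is the current query
// point P'' (q is in rkNN(head)), and p is a candidate taken from kNN(q). P_i is where the ray
// from P' towards P'' pierces the bounding sphere of kNN(P'); with a = P_i - p and
// b = midpoint(P_i, p) - P'', the candidate is accepted when a . b^T > 0.
static bool satisfiesTheorem1(const std::vector<CloudPoint>& pc,
                              std::size_t q, std::size_t head, std::size_t p) {
    Point3 dir = sub(pc[head].pos, pc[q].pos);
    double len = std::sqrt(dot(dir, dir));
    if (len == 0.0) return false;
    Point3 Pi = add(pc[q].pos, scale(dir, maxKnnDist(pc, q) / len));
    Point3 a  = sub(Pi, pc[p].pos);
    Point3 b  = sub(scale(add(Pi, pc[p].pos), 0.5), pc[head].pos);
    return dot(a, b) > 0.0;
}

// Stand-in for the host search (SPA in the paper): a brute-force fallback that appends the
// `remaining` closest points not already recorded for `head`.
static void searchRemainingBySPA(std::vector<CloudPoint>& pc, std::size_t head, std::size_t remaining) {
    std::vector<std::size_t> cand;
    for (std::size_t i = 0; i < pc.size(); ++i)
        if (i != head && std::find(pc[head].knn.begin(), pc[head].knn.end(), i) == pc[head].knn.end())
            cand.push_back(i);
    std::size_t keep = std::min(remaining, cand.size());
    std::partial_sort(cand.begin(), cand.begin() + keep, cand.end(),
        [&](std::size_t a, std::size_t b) {
            Point3 da = sub(pc[a].pos, pc[head].pos), db = sub(pc[b].pos, pc[head].pos);
            return dot(da, da) < dot(db, db);
        });
    cand.resize(keep);
    for (std::size_t i : cand) { pc[head].knn.push_back(i); pc[i].rknn.push_back(head); }
}

// Steps 1-17 of the extracting algorithm (Section 3.2), in sketch form.
void extractThenSearch(std::vector<CloudPoint>& pc, std::size_t k) {
    for (std::size_t head = 0; head < pc.size(); ++head) {              // Step 1
        if (!pc[head].knn.empty()) continue;                             // Step 2
        std::size_t k1 = 0;
        for (std::size_t q : pc[head].rknn)                              // Steps 6-11
            for (std::size_t p : pc[q].knn) {
                bool dup = std::find(pc[head].knn.begin(), pc[head].knn.end(), p) != pc[head].knn.end();
                if (p != head && !dup && satisfiesTheorem1(pc, q, head, p)) {
                    pc[head].knn.push_back(p);                            // Step 9
                    pc[p].rknn.push_back(head);
                    ++k1;
                }
            }
        if (k1 < k) searchRemainingBySPA(pc, head, k - k1);               // Steps 3-4 and 12-13
    }
}
```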

4. Analysis and experiments

4.1. Analysis of the algorithm

In general, the proposed extracting algorithm is not a stand-alone algorithm; it is always integrated with other searching algorithms to speed them up, so its overall complexity is close to that of the host searching algorithms, which can be found in the corresponding cited papers. In this section we analyze only the complexity of the extracting algorithm, for brevity and simplicity.

The number of extracted nearest neighbors of a query point varies with the point distribution, the density and $k$, so there is no single number of extracted nearest neighbors for every model in the complexity analysis. Let $A$ be the number of nearest neighbors extracted over all $n$ points, $B$ ($B = A/n$) the average number of extracted points per point, and $C$ ($C = B/k$) the average fraction of extracted points relative to $k$. For a query point $P$, $k \cdot C$ nearest neighbors have on average been extracted by EkNN, so on average $(k - k \cdot C)$ points still need to be searched by the other algorithm. To record the rkNN, each point in the data set is referred to $k$ times on average, so on average $k \cdot C$ points are extracted from a candidate set consisting of $k^2$ points. However, building the rkNN is a progressive process: when the $i$th point has been searched completely, for the $(i+1)$th point the $k \cdot C$ points are extracted from a candidate set consisting of roughly $\frac{i}{n} k^2$ points.

In a three-dimensional Euclidean space, comparing the distances of two points to a fixed point traditionally involves sixteen plus/minus operations, six multiplication operations, two square root operations and one comparison operation, by calculating the two distances and comparing their magnitudes. Using the inner product of two vectors as proposed in this paper, there are only eleven plus/minus operations, three multiplication operations and one comparison operation. So the approach based on the inner product of vectors is faster than the direct distance calculation by nature. Finally, referring to the rkNN needs additional storage space. As stated above, each point is referred to $k$ times on average and each pointer occupies four bytes of memory, so the extracting algorithm needs an additional $4kn$ bytes. We will see how this additional storage brings about efficiency improvements in the next sections.

4.2. Effect analysis of the algorithm

4.2.1. Overview

The extracting algorithm has been applied to test point clouds with different values of the parameter $k$: 4, 8, 12, 16, 20, 24, 28 and 32.


In this experimental study, the point cloud data were scanned from different shapes with varied densities. The pig, sphere, frog, bunny, horse, dragon and happy models have point counts ranging from 3069 to 543,652, in either ply or sfl file format, supplied by the Computer Graphics Laboratory of Stanford University [1], the Computer Graphics Lab of ETH Zurich [25] and other web sites. We converted these point clouds into point coordinates without loss of precision, leaving the index order unchanged, and shared these data within an open source project named EkNN [17]. The models are depicted in Fig. 2, and the corresponding analysis data are listed in Table 1.

4.2.2. Scalability analysis of the algorithm

The analytical data are divided into three groups. The first group contains the kNN points extracted at run time. The rkNN used by the extracting algorithm are progressive in this group: the number of rkNN of the query points increases at run time. The total number of extracted neighbors is $A_1$ (duplicate points are not counted); the mean value of $A_1$ is $B_1$ ($B_1 = A_1/n$); $C_1$ is the percentage of $B_1$ relative to $k$ ($C_1 = B_1/k$). Similarly to $C_1$, $C'_1$ corresponds to those points that do not satisfy Theorem 1 but are still kNN points.

The second group contains the kNN points extracted after the program completes. Differing from the first group, the extracting algorithm is not used in the search process here; we extracted nearest neighbors after all the points had been given their nearest neighbors using SPA. The rkNN of the query points are therefore steady, and the number of rkNN of each query point is fixed. The total number of extracted neighbors is $A_2$; the mean value of $A_2$ is $B_2$ ($B_2 = A_2/n$); $C_2$ is the percentage of $B_2$ relative to $k$ ($C_2 = B_2/k$). Similarly to $C_2$, $C'_2$ ($= C_3 - C_2$) corresponds to those points that do not satisfy Theorem 1 but are still kNN points.

The third group contains the nearest neighbors counted by retrieval after the program completes. Differing from the above two groups, in this group we look up nearest neighbors rather than extract them. As in the second group, the rkNN of the query points are steady and their number is fixed; moreover, we counted the nearest neighbors precisely, and the extracting algorithm is not used in this group at all. The total number of counted nearest neighbors is $A_3$; the mean value of $A_3$ is $B_3$ ($B_3 = A_3/n$); $C_3$ is the percentage of $B_3$ relative to $k$ ($C_3 = B_3/k$). In addition, to compare the effect of the proposed algorithm in more detail, the ratios $D_1$ ($D_1 = A_1/A_3$) and $D_2$ ($D_2 = A_2/A_3$) are given and depicted in Fig. 3(a) and (b), respectively.

It can be seen from Table 1 that $B_1$, $B_2$ and $B_3$ become larger as $k$ increases, although the increase varies from model to model. When $k$ equals 32, $B_1$ goes up to about 10 points and $B_2$ to about 15 points (the sphere model being an exception), while $B_3$ goes up to more than 20 points. This coincides with the rule, which we had taken for granted, that nearest neighbor sets overlap more as $k$ increases. The difference between $B_1$ and $B_2$ results from the incomplete references while the program is running: the rkNN set of a point grows progressively at run time. It can also be seen from Table 1 that $C_1$ and $C_2$ are clearly smaller than $C_3$: $C_1$ falls mostly into the interval between 5% and 30%, the majority of $C_2$ falls between 20% and 50%, while $C_3$ goes up to about 80%.
There are two possible reasons for these differences. One is that the determining criterion supported by Theorem 1 is a sufficient condition, and it is somewhat tight, so not all the kNN points can be extracted. The other is that the set of rkNN grows progressively while the program is running, so we cannot use all the rkNN to extract nearest neighbors. $C_2$ and $C_3$ are not identical for different models with the same $k$ because those models have different shapes and point distributions.

$D_1$ indicates the ratio of the number of neighbors extracted at run time to the actual number of neighbors, while $D_2$ indicates the ratio of the number of neighbors extracted after the program completes to the actual number of neighbors.


Fig. 2. Pictures of seven models.

$D_1$ falls mostly into the interval between 10% and 50%, while the majority of $D_2$ falls between 30% and 70%. We also observed a curious fact: for the 'sphere' and 'bunny' models, when $k$ equals 4, $C_2$ is bigger than $C_3$. This is because, for many query points, the neighbors extracted by the proposed algorithm are different points with the same distance to the query point; constrained by $k$, some of them cannot be chosen into the final result. For example, if we have extracted six points $P_i$ ($i = 1, 2, \ldots, 6$) in increasing order of distance as the nearest neighbors of $P_0$, but $\mathrm{dist}(P_0, P_i)$ ($i = 3, \ldots, 6$) is identical, we choose only the first four points as the result when $k$ equals 4, so the last two are discarded. Meanwhile, the retrieval only involves the first four points; that is why $C_2$ is greater than $C_3$ when $k$ equals 4 for the 'sphere' and 'bunny' models.

From the observations above we can draw the following conclusions about the proposed algorithm: (1) With increasing $k$, nearest neighbor sets overlap more, independent of the model shapes and of their precision and density; what we had taken for granted has been demonstrated to be true. (2) The proposed algorithm is a feasible algorithm for extracting nearest neighbors of any point from its neighborhood, which can be inferred clearly from $D_1$ and $D_2$. (3) With increasing $k$, $C_1$ increases in turn, and the time consumed in the whole searching process becomes shorter.
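To make the quantities in Table 1 concrete, here is a worked reading of a single row (pig model, k = 16), using the definitions given above; the arithmetic is ours:

$$B_1 = \frac{A_1}{n} = \frac{14{,}045}{3069} \approx 4.58, \qquad C_1 = \frac{B_1}{k} = \frac{4.58}{16} \approx 28.6\%,$$
$$D_1 = \frac{A_1}{A_3} = \frac{14{,}045}{37{,}878} \approx 37.1\%, \qquad D_2 = \frac{A_2}{A_3} = \frac{22{,}147}{37{,}878} \approx 58.5\%.$$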

4.3. Performance analysis of the algorithm

In this section, we integrated EkNN with algorithms from the two prominent categories, SPA and DRA, to compare the speed improvement EkNN brings to them. The point cloud models used in this experiment are the same as in Section 4.2, as is the parameter k. All computing was performed on a Dell Precision T7500 with an Intel Xeon E5540 2.53 GHz CPU and 4 GB of memory. The program was implemented as a console application in Microsoft Visual C++ 6.0 and executed under 32-bit Windows 7.

4.3.1. Performance comparison between EkNN and SPA

In this paper, we chose the classical SPA algorithm [31] for the comparison study. Computation time is given in Table 1, and the percentage of speed improvement of the extracting algorithm over SPA ($S_1$) is shown in Fig. 3(c). EkNN gains speed improvements of 2.48% to 7.12% on the pig model, whose points are sampled uniformly, and of 1.51% to 14.23% on the bunny model, also sampled uniformly; the latter includes the maximum percentage (14.23%) among these experiments. For the other uniformly scanned models, EkNN obtains increasing speed improvements as the parameter k rises. We note that only for the frog model, whose points are sampled non-uniformly, do the percentages of speed improvement not always increase with k. For the sphere model with k = 4, however, the time consumed by EkNN is larger than that of SPA, because no valid points were extracted in this case (see Table 1) and a portion of time was wasted on the extraction attempts.
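The text does not state an explicit formula for the speed-improvement percentages $S_1$ and $S_2$; from the reported times they appear to be the relative time savings, e.g. for the pig model at k = 4, (484 − 472)/484 ≈ 2.48%. Under that reading:

$$S_1 = \frac{T_{\mathrm{SPA}} - T_{\mathrm{SPA+EkNN}}}{T_{\mathrm{SPA}}} \times 100\%, \qquad S_2 = \frac{T_{\mathrm{OST}} - T_{\mathrm{OST+EkNN}}}{T_{\mathrm{OST}}} \times 100\%.$$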

Table 1
Effect analysis of extracting kNN points and comparison of computation time.

Columns: k; extracted at run time: A1, B1, C1 (%), C1' (%); extracted after the program completes: A2, B2, C2 (%), C2' (%); retrieved after the program completes: A3, B3, C3 (%); computation time (ms), SPA and EkNN: SPA(a), EkNN, S1 (%); computation time (ms), OST and EkNN: OST(b), EkNN, S2 (%).

k | A1 | B1 | C1 | C1' | A2 | B2 | C2 | C2' | A3 | B3 | C3 | SPA(a) | EkNN | S1 | OST(b) | EkNN | S2

Pig (n = 3069)
4 | 603 | 0.20 | 4.91 | 38.68 | 4014 | 1.31 | 32.70 | 29.31 | 7612 | 2.48 | 62.01 | 484 | 472 | 2.48 | 125 | 109 | 12.80
8 | 3152 | 1.03 | 12.84 | 29.43 | 7671 | 2.50 | 31.24 | 53.80 | 20,880 | 6.80 | 85.04 | 765 | 734 | 4.05 | 188 | 172 | 8.51
12 | 7550 | 2.46 | 20.50 | 20.34 | 13,719 | 4.47 | 37.25 | 43.05 | 29,575 | 9.64 | 80.31 | 967 | 921 | 4.76 | 250 | 234 | 6.40
16 | 14,045 | 4.58 | 28.60 | 19.08 | 22,147 | 7.22 | 45.10 | 32.04 | 37,878 | 12.34 | 77.14 | 1155 | 1096 | 5.11 | 312 | 292 | 6.41
20 | 20,980 | 6.84 | 34.18 | 18.98 | 31,720 | 10.34 | 51.68 | 23.23 | 45,981 | 14.98 | 74.91 | 1560 | 1473 | 5.58 | 390 | 364 | 6.67
24 | 29,153 | 9.50 | 39.58 | 18.13 | 43,423 | 14.15 | 58.95 | 15.71 | 54,992 | 17.92 | 74.66 | 1701 | 1601 | 5.88 | 468 | 443 | 5.34
28 | 38,336 | 12.49 | 44.61 | 17.30 | 57,553 | 18.75 | 66.98 | 8.40 | 64,775 | 21.11 | 75.38 | 2060 | 1925 | 6.55 | 577 | 546 | 5.37
32 | 48,301 | 15.74 | 49.18 | 17.05 | 70,886 | 23.10 | 72.18 | 4.53 | 75,333 | 24.55 | 76.71 | 2247 | 2087 | 7.12 | 686 | 640 | 6.71

Sphere (n = 3203)
4 | 0 | 0.00 | 0.00 | 2.62 | 3203 | 1.00 | 25.00 | 0.00 | 650 | 0.20 | 5.07 | 187 | 188 | -0.53 | 108 | 110 | -1.85
8 | 78 | 0.02 | 0.30 | 3.36 | 3325 | 1.04 | 12.98 | 86.72 | 25,546 | 7.98 | 99.70 | 327 | 318 | 2.75 | 172 | 156 | 9.30
12 | 3049 | 0.95 | 7.93 | 7.18 | 8534 | 2.66 | 22.20 | 69.86 | 35,387 | 11.05 | 92.07 | 499 | 479 | 4.01 | 250 | 234 | 6.40
16 | 7965 | 2.49 | 15.54 | 12.00 | 15,176 | 4.74 | 29.61 | 54.85 | 43,283 | 13.51 | 84.46 | 639 | 610 | 4.54 | 312 | 297 | 4.81
20 | 11,578 | 3.67 | 18.35 | 16.30 | 19,665 | 6.14 | 30.70 | 50.95 | 52,302 | 16.33 | 81.65 | 811 | 770 | 5.06 | 390 | 363 | 6.92
24 | 16,227 | 5.07 | 21.11 | 17.70 | 23,308 | 7.28 | 30.23 | 48.57 | 60,645 | 18.93 | 78.89 | 1045 | 990 | 5.26 | 483 | 453 | 6.21
28 | 20,791 | 6.49 | 23.18 | 16.65 | 28,324 | 8.84 | 31.58 | 45.24 | 68,893 | 21.51 | 76.82 | 1248 | 1181 | 5.37 | 590 | 546 | 7.46
32 | 23,958 | 7.48 | 23.37 | 20.09 | 31,956 | 9.98 | 31.18 | 45.45 | 78,538 | 24.52 | 76.63 | 1373 | 1283 | 6.55 | 717 | 640 | 10.74

Frog (n = 8519)
4 | 1352 | 0.16 | 3.97 | 25.55 | 15,806 | 1.86 | 46.38 | 28.67 | 25,576 | 3.00 | 75.06 | 1279 | 1232 | 3.67 | 296 | 281 | 5.07
8 | 30,958 | 3.63 | 45.42 | 13.84 | 27,381 | 3.21 | 40.18 | 25.25 | 44,588 | 5.23 | 65.42 | 3432 | 3042 | 11.36 | 468 | 445 | 4.91
12 | 50,130 | 5.88 | 49.04 | 10.51 | 37,743 | 4.43 | 36.92 | 34.54 | 73,056 | 8.58 | 71.46 | 4446 | 4025 | 9.47 | 640 | 614 | 4.06
16 | 63,217 | 7.42 | 46.38 | 9.90 | 45,792 | 5.38 | 33.60 | 35.24 | 93,832 | 11.01 | 68.84 | 5476 | 5132 | 6.28 | 843 | 802 | 4.86
20 | 86,646 | 10.17 | 50.85 | 9.65 | 60,221 | 7.07 | 35.35 | 33.50 | 117,304 | 13.77 | 68.85 | 7488 | 6927 | 7.49 | 1060 | 1005 | 5.19
24 | 107,315 | 12.60 | 52.49 | 8.99 | 74,174 | 8.71 | 36.28 | 32.43 | 140,483 | 16.49 | 68.71 | 8331 | 7488 | 10.12 | 1310 | 1236 | 5.65
28 | 134,041 | 15.73 | 56.19 | 8.46 | 101,436 | 11.91 | 42.53 | 28.08 | 168,420 | 19.77 | 70.61 | 9626 | 8876 | 7.79 | 1592 | 1497 | 5.97
32 | 169,548 | 19.90 | 62.19 | 8.74 | 151,079 | 17.73 | 55.42 | 19.43 | 204,042 | 23.95 | 74.85 | 10,811 | 10,202 | 5.63 | 1919 | 1747 | 8.96

Bunny (n = 35,947)
4 | 903 | 0.03 | 0.63 | 14.24 | 37,380 | 1.04 | 26.00 | 0.00 | 34,072 | 0.95 | 23.70 | 6162 | 6069 | 1.51 | 3011 | 2848 | 5.41
8 | 8676 | 0.24 | 3.02 | 7.89 | 50,562 | 1.41 | 17.58 | 79.21 | 278,354 | 7.74 | 96.79 | 10,265 | 9688 | 5.62 | 5007 | 4800 | 4.13
12 | 55,892 | 1.55 | 12.96 | 7.18 | 121,562 | 3.38 | 28.18 | 58.86 | 375,479 | 10.45 | 87.04 | 16,349 | 15,693 | 4.01 | 7160 | 6745 | 5.80
16 | 116,344 | 3.24 | 20.23 | 6.65 | 196,238 | 5.46 | 34.12 | 45.65 | 458,822 | 12.76 | 79.77 | 23,197 | 21,637 | 6.73 | 9672 | 9156 | 5.33
20 | 175,211 | 4.87 | 24.37 | 21.01 | 277,064 | 7.71 | 38.54 | 37.10 | 543,778 | 15.13 | 75.64 | 32,027 | 27,596 | 13.84 | 12,590 | 11,976 | 4.88
24 | 222,514 | 6.19 | 25.79 | 23.71 | 344,836 | 9.59 | 39.97 | 34.25 | 640,291 | 17.81 | 74.22 | 40,248 | 35,490 | 11.82 | 15,740 | 15,013 | 4.62
28 | 268,707 | 7.48 | 26.70 | 21.98 | 393,968 | 10.96 | 39.14 | 34.17 | 737,891 | 20.53 | 73.31 | 49,873 | 43,914 | 11.95 | 19,422 | 18,607 | 4.20
32 | 322,273 | 8.97 | 28.02 | 22.45 | 441,826 | 12.29 | 38.41 | 33.58 | 828,089 | 23.04 | 71.99 | 65,130 | 55,864 | 14.23 | 23,307 | 22,152 | 4.96

Horse (n = 48,485)
4 | 1390 | 0.03 | 0.72 | 21.40 | 50,603 | 1.04 | 26.09 | 8.73 | 67,531 | 1.39 | 34.82 | 9937 | 9740 | 1.98 | 4945 | 4696 | 5.04
8 | 16,383 | 0.34 | 4.22 | 13.16 | 75,467 | 1.56 | 19.46 | 76.12 | 370,734 | 7.65 | 95.58 | 15,506 | 14,888 | 3.99 | 7285 | 6947 | 4.64
12 | 75,147 | 1.55 | 12.92 | 10.46 | 179,050 | 3.69 | 30.77 | 56.28 | 506,470 | 10.45 | 87.05 | 26,068 | 25,000 | 4.10 | 9610 | 9132 | 4.97
16 | 144,346 | 2.98 | 18.61 | 10.85 | 267,130 | 5.51 | 34.43 | 46.94 | 631,283 | 13.02 | 81.38 | 35,802 | 34,303 | 4.19 | 11,981 | 11,369 | 5.11
20 | 226,844 | 4.68 | 23.39 | 10.95 | 359,665 | 7.42 | 37.09 | 39.50 | 742,688 | 15.32 | 76.59 | 49,499 | 46,783 | 5.49 | 14,633 | 13,892 | 5.06
24 | 312,255 | 6.44 | 26.83 | 15.46 | 452,612 | 9.34 | 38.90 | 34.26 | 851,286 | 17.56 | 73.16 | 65,301 | 61,638 | 5.61 | 17,300 | 16,541 | 4.39
28 | 377,991 | 7.80 | 27.84 | 14.38 | 531,561 | 10.96 | 39.16 | 33.01 | 979,669 | 20.21 | 72.16 | 77,142 | 72,442 | 6.09 | 20,358 | 19,258 | 5.40
32 | 427,794 | 8.82 | 27.57 | 16.06 | 598,180 | 12.34 | 38.55 | 33.88 | 1,123,860 | 23.18 | 72.44 | 95,909 | 89,873 | 6.29 | 23,635 | 22,218 | 6.00

Dragon (n = 437,645)
4 | 162,268 | 0.37 | 9.27 | 7.39 | 660,381 | 1.51 | 37.72 | 24.56 | 1,090,272 | 2.49 | 62.28 | 283,811 | 278,242 | 1.96 | 345,479 | 330,073 | 4.46
8 | 585,397 | 1.34 | 16.72 | 7.91 | 1,318,803 | 3.01 | 37.67 | 43.16 | 2,830,076 | 6.47 | 80.83 | 308,709 | 297,861 | 3.51 | 530,838 | 505,478 | 4.78
12 | 1,157,420 | 2.64 | 22.04 | 7.23 | 2,177,544 | 4.98 | 41.46 | 36.53 | 4,095,823 | 9.36 | 77.99 | 446,161 | 427,815 | 4.11 | 736,634 | 690,651 | 6.24
16 | 1,793,022 | 4.10 | 25.61 | 6.93 | 3,094,361 | 7.07 | 44.19 | 30.67 | 5,241,991 | 11.98 | 74.86 | 654,234 | 621,360 | 5.02 | 918,296 | 956,023 | 6.78
20 | 2,453,152 | 5.61 | 28.03 | 6.48 | 4,007,320 | 9.16 | 45.78 | 26.80 | 6,353,486 | 14.52 | 72.59 | 916,298 | 837,347 | 8.62 | 1,149,847 | 1,068,658 | 7.06
24 | 3,126,797 | 7.14 | 29.77 | 5.06 | 4,910,649 | 11.22 | 46.75 | 24.18 | 7,449,945 | 17.02 | 70.93 | 1,234,041 | 1,112,188 | 9.87 | 1,389,120 | 1,287,548 | 7.31
28 | 3,815,385 | 8.72 | 31.14 | 8.39 | 5,830,279 | 13.32 | 47.58 | 22.08 | 8,535,464 | 19.50 | 69.65 | 1,618,892 | 1,455,577 | 10.64 | 1,599,799 | 1,457,698 | 8.88
32 | 4,518,794 | 10.33 | 32.27 | 11.12 | 6,768,531 | 15.47 | 48.33 | 20.28 | 9,609,023 | 21.96 | 68.61 | 2,106,846 | 1,859,274 | 11.75 | 1,804,908 | 1,642,000 | 9.03

Happy (n = 543,652)
4 | 196,694 | 0.36 | 9.05 | 3.34 | 816,688 | 1.50 | 37.56 | 25.44 | 1,369,891 | 2.52 | 62.99 | 382,794 | 370,500 | 3.21 | 395,492 | 377,969 | 4.43
8 | 733,860 | 1.35 | 16.87 | 2.73 | 1,654,999 | 3.04 | 38.05 | 42.38 | 3,498,193 | 6.43 | 80.43 | 388,737 | 372,884 | 4.08 | 567,648 | 537,882 | 5.24
12 | 1,459,316 | 2.68 | 22.37 | 3.75 | 2,738,296 | 5.04 | 41.97 | 35.52 | 5,055,835 | 9.30 | 77.50 | 560,306 | 531,390 | 5.16 | 850,732 | 800,832 | 5.87
16 | 2,258,716 | 4.15 | 25.97 | 6.58 | 3,886,131 | 7.15 | 44.68 | 29.77 | 6,476,073 | 11.91 | 74.45 | 816,817 | 766,487 | 6.16 | 1,096,885 | 1,021,041 | 6.91
20 | 3,090,976 | 5.69 | 28.43 | 7.01 | 5,039,840 | 9.27 | 46.35 | 25.89 | 7,854,899 | 14.45 | 72.24 | 1,135,199 | 1,061,403 | 6.50 | 1,428,463 | 1,327,183 | 7.09
24 | 3,944,220 | 7.26 | 30.23 | 8.83 | 6,195,744 | 11.40 | 47.49 | 23.14 | 9,215,276 | 16.95 | 70.63 | 1,543,310 | 1,431,021 | 7.28 | 1,501,456 | 1,389,643 | 7.45
28 | 4,821,032 | 8.87 | 31.67 | 13.60 | 7,380,575 | 13.58 | 48.49 | 20.89 | 10,561,256 | 19.43 | 69.38 | 1,996,928 | 1,831,435 | 8.29 | 1,811,584 | 1,675,842 | 7.49
32 | 5,724,412 | 10.53 | 32.90 | 16.42 | 8,610,373 | 15.84 | 49.49 | 18.87 | 11,892,648 | 21.88 | 68.36 | 2,520,995 | 2,301,254 | 8.72 | 2,435,061 | 2,226,812 | 8.55

(a) Partition parameter: b = 1.1. (b) Number of child node: Nc = 8.

4.3.2. Performance comparison between EkNN and DRA

To the best of our knowledge, OST [19] is the fastest search algorithm in the DRA category to date. We integrated the extracting algorithm with

OST and compared computation times to discuss the performance of EkNN further. Computation time is listed in Table 1, and the percentages of speed improvement ($S_2$) are shown in Fig. 3(d). It can be seen from Table 1 that the EkNN-integrated algorithm is faster than OST alone. Nearly all the speed improvement percentages are above 4.0%, and the maximum reaches 12.80% among these models. The percentages of speed improvement also fluctuate less than those of the extracting

algorithm integrated with SPA, as can also be seen from Fig. 3(c) and (d). Similarly to SPA, the percentages of speed improvement over OST generally increase as k rises. Comparing $S_1$ with $S_2$, the magnitude of the speed improvement in the former is roughly smaller than in the latter, although there are some fluctuations. This is because the OST algorithm reorders the point data and makes the nearest neighbors more centralized around the query points before a search begins; this is one major reason why OST performs so well, and why its speed-improvement percentages fluctuate less than SPA's.


Fig. 3. Chart of analysis data D and S: (a) $D_1$ (%) versus k, (b) $D_2$ (%) versus k, (c) $S_1$ (%) versus k, (d) $S_2$ (%) versus k, for the pig, sphere, frog, bunny, horse, dragon and happy models.

As in Section 4.3.1, for the sphere model EkNN takes more time than OST when k equals 4, because no valid nearest neighbors are extracted during the whole searching process.

4.3.3. Experimental summary

The experiments show that as k increases, the number of identical points lying in the nearest-neighbor sets of nearby query points becomes larger, $C_1$ in Table 1 becomes larger too, and the time consumed in the searching process becomes shorter. The searching time shrinks as $C_1$ grows in Table 1; that is, the more nearest neighbors are extracted, the less time the whole searching process consumes. This validates the extracting algorithm, which is supported by distance comparison employing the inner product of vectors. The experiments in this section verify the conclusions drawn in Section 4.2 from the performance point of view.

5. Conclusions and discussion

EkNN searches kNN points by extracting nearest neighbors directly, which is realized by a new distance comparison method using the inner product of two vectors instead of Euclidean distance calculation and comparison. Experimental analysis has been given on several public models, and comparisons have been made with SPA and OST, prominent algorithms chosen from the SPA and DRA categories, to show how the EkNN algorithm speeds them up. $C_2$ is less than $C_3$ (see Table 1); this difference, which results from the condition of Theorem 1, indicates that the criterion of EkNN is somewhat tight.

We will study the distribution characteristics of points in the models further and update Theorem 1, aiming to give a looser condition or a sufficient and necessary condition, so that more kNN points can be extracted accurately and the extraction efficiency is improved. Distance comparison using inner products of vectors is a preeminent approach, with advantages over direct distance comparison in the Euclidean metric. In the future, we will apply the proposed method to estimate normals and calculate curvature for point cloud modeling and reverse engineering.

Acknowledgments

We would like to express our sincere appreciation to the anonymous reviewers for their insightful comments, which have greatly aided us in improving the quality of the paper. This paper is partially supported by the Program for New Century Excellent Talents in University of the Ministry of Education of China (NCET-09-0665), the Scientific Research Fund of Sichuan Provincial Education Department (12ZB152, 12ZB153), and the Key Technologies R&D Program of Mianyang Bureau (12G0321). The sphere, pig, frog, bunny, horse, dragon and happy models are courtesy of Stanford University and other anonymous organizations. We would like to thank Yi-Ching Liaw for sharing the source code of the OST algorithm for the comparative study.

Appendix A. Distance comparison using the inner product of vectors

In this section, we describe the criterion for comparing distances in Euclidean space. Traditionally, comparing the distances of two points to a fixed point in Euclidean space requires calculating the two distances from the two points to the fixed point and then comparing them. However, holding the distance value between each neighbor and a query point is unnecessary, and


calculating distances repeatedly is time-consuming. We present another way to compare distances, using the inner product of two vectors generated by three points, for extracting kNN points. This method is applied to extract nearest neighbors for a query point directly from its rkNN, as described in detail in Section 3.

Theorem 2. For a given point $p_0(x_0, y_0, z_0)$ and any other two points $p_1(x_1, y_1, z_1)$ and $p_2(x_2, y_2, z_2)$, let $p_{12}\left(\frac{x_1+x_2}{2}, \frac{y_1+y_2}{2}, \frac{z_1+z_2}{2}\right)$ be the midpoint between $p_1$ and $p_2$, and let the vectors be $a = p_1 - p_2$ and $b = p_{12} - p_0$ (see Fig. A.1). The sufficient and necessary condition for $\mathrm{dist}(p_1, p_0) > \mathrm{dist}(p_2, p_0)$ is $a \cdot b^T > 0$.

Proof. (1) Sufficiency. If $a \cdot b^T > 0$, then

$$(x_1 - x_2,\; y_1 - y_2,\; z_1 - z_2) \cdot \begin{pmatrix} \frac{x_1+x_2}{2} - x_0 \\ \frac{y_1+y_2}{2} - y_0 \\ \frac{z_1+z_2}{2} - z_0 \end{pmatrix} > 0 \qquad (A.1)$$

Expanding inequality (A.1), adding $x_0^2 + y_0^2 + z_0^2$ to both sides and reorganizing, we obtain

$$(x_1 - x_0)^2 + (y_1 - y_0)^2 + (z_1 - z_0)^2 > (x_2 - x_0)^2 + (y_2 - y_0)^2 + (z_2 - z_0)^2 \qquad (A.2)$$

Inequality (A.2) is equivalent to

$$\sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2 + (z_1 - z_0)^2} > \sqrt{(x_2 - x_0)^2 + (y_2 - y_0)^2 + (z_2 - z_0)^2} \qquad (A.3)$$

The two sides of inequality (A.3) are Euclidean distances in three-dimensional space, that is, $\mathrm{dist}(p_1, p_0) > \mathrm{dist}(p_2, p_0)$. (We can see here that $a \cdot b^T$ is one half of the difference between the square of $\mathrm{dist}(p_1, p_0)$ and the square of $\mathrm{dist}(p_2, p_0)$.)

(2) Necessity. If $\mathrm{dist}(p_1, p_0) > \mathrm{dist}(p_2, p_0)$, then according to the Euclidean distance formula we get

$$\sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2 + (z_1 - z_0)^2} > \sqrt{(x_2 - x_0)^2 + (y_2 - y_0)^2 + (z_2 - z_0)^2} \qquad (A.4)$$

Squaring both sides of (A.4), expanding, merging and reorganizing, we obtain

$$(x_1 - x_2)(x_1 + x_2 - 2x_0) + (y_1 - y_2)(y_1 + y_2 - 2y_0) + (z_1 - z_2)(z_1 + z_2 - 2z_0) > 0 \qquad (A.5)$$

Inequality (A.5) is equivalent to the following:

$$(x_1 - x_2,\; y_1 - y_2,\; z_1 - z_2) \cdot \begin{pmatrix} \frac{x_1+x_2}{2} - x_0 \\ \frac{y_1+y_2}{2} - y_0 \\ \frac{z_1+z_2}{2} - z_0 \end{pmatrix} > 0 \qquad (A.6)$$

That concludes the proof. □

Fig. A.1. Vectors a and b.
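As an illustration of Theorem 2 and of the operation counts discussed in Section 4.1, the following small C++ example (the names and test values are ours) checks that the sign of $a \cdot b^T$ agrees with a direct distance comparison:

```cpp
#include <cassert>
#include <cmath>
#include <cstdio>

struct P { double x, y, z; };

// Theorem 2: dist(p1,p0) > dist(p2,p0)  <=>  (p1 - p2) . (midpoint(p1,p2) - p0) > 0.
// Eleven additions/subtractions, three multiplications and one comparison (halvings aside),
// and no square roots -- cf. the operation counts in Section 4.1.
bool fartherByInnerProduct(P p0, P p1, P p2) {
    double ax = p1.x - p2.x, ay = p1.y - p2.y, az = p1.z - p2.z;
    double bx = (p1.x + p2.x) / 2.0 - p0.x;
    double by = (p1.y + p2.y) / 2.0 - p0.y;
    double bz = (p1.z + p2.z) / 2.0 - p0.z;
    return ax * bx + ay * by + az * bz > 0.0;
}

// Direct comparison: two distance evaluations (square roots) and one comparison.
bool fartherByDistance(P p0, P p1, P p2) {
    auto d = [](P a, P b) {
        return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y) +
                         (a.z - b.z) * (a.z - b.z));
    };
    return d(p1, p0) > d(p2, p0);
}

int main() {
    P p0{0, 0, 0}, p1{3, 1, 2}, p2{1, 1, 1};   // hypothetical test values
    assert(fartherByInnerProduct(p0, p1, p2) == fartherByDistance(p0, p1, p2));
    std::printf("p1 is %s from p0 than p2\n",
                fartherByInnerProduct(p0, p1, p2) ? "farther" : "not farther");
    return 0;
}
```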

References

[1] Dscanrep, 2004. URL: .
[2] N. Amenta, M. Bern, Surface reconstruction by Voronoi filtering, Discrete Comput. Geom. 22 (1999) 481–504.
[3] N. Amenta, S. Choi, T.K. Dey, N. Leekha, A simple algorithm for homeomorphic surface reconstruction, in: Proceedings of 16th Annual Symposium on Computational Geometry, ACM, New York, NY, United States, 2000, pp. 213–222.
[4] S. Arya, D.M. Mount, N.S. Netanyahu, R. Silverman, A.Y. Wu, An optimal algorithm for approximate nearest neighbor searching in fixed dimensions, JACM 45 (1998) 891–923.
[5] P. Benko, L. Andor, G. Kos, R. Martin, T. Varady, Constrained fitting in reverse engineering, Comput. Aided Geom. D 19 (2002) 173–205.
[6] J.L. Bentley, K-d trees for semidynamic point sets, in: Sixth Annual Symposium on Computational Geometry, ACM Press, 1990, pp. 187–197.
[7] A. Boulch, R. Marlet, Fast and robust normal estimation for point clouds with sharp features, Comput. Graph. Forum 31 (2012) 1765–1774.
[8] M.A. Cheema, W. Zhang, X. Lin, Y. Zhang, X. Li, Continuous reverse k nearest neighbors queries in Euclidean space and in spatial networks, VLDB J. 21 (2012) 69–95.
[9] K.L. Clarkson, Fast algorithm for the all nearest neighbors problem, in: 24th IEEE Annual Symposium on Foundations of Computer Science, IEEE, New York, NY, USA, 1983, pp. 226–232.
[10] A. Corral, J.M. Almendros-Jimnez, A performance comparison of distance-based query algorithms using r-trees in spatial databases, Inform. Sci. 177 (2007) 2207–2237.
[11] W. Dhaes, D. van Dyck, X. Rodet, PCA-based branch and bound search algorithms for computing k nearest neighbors, Pattern Recognit. Lett. 24 (2003) 1437–1451.
[12] Y. Gao, B. Zheng, G. Chen, Q. Li, On efficient mutual nearest neighbor query processing in spatial databases, Data Knowl. Eng. 68 (2009) 705–727.
[13] C. Grimm, W. Smart, Shape classification and normal estimation for non-uniformly sampled, noisy point data, Comput. Graph. 35 (2011) 904–915.
[14] D. Hearn, M. Baker, W. Carithers, Computer Graphics with OpenGL, fourth ed., Pearson Education Ltd., New Jersey, NJ, 2010.
[15] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, W. Stuetzle, Surface reconstruction from unorganised points, ACM SIGGRAPH Comput. Graph. 26 (1992) 71–78.
[16] B. Li, R. Schnabel, R. Klein, Z. Cheng, G. Dang, S. Jin, Robust normal estimation for point clouds with sharp features, Comput. Graph. 34 (2010) 94–106.
[17] Z. Li, 2013. URL: .
[18] Y. Liaw, Evaluation of fast k-nearest neighbors search methods using real data sets, in: 2011 International Conference on Image Processing, Computer Vision, and Pattern Recognition, CSREA Press, 2011, pp. 860–864.
[19] Y. Liaw, L. Maw, M. Chien, Fast exact k nearest neighbors search using an orthogonal search tree, Pattern Recognit. 43 (2010) 2351–2358.
[20] J. Ma, Y. Fang, W. Zhao, Y. Feng, Algorithm for finding k-nearest neighbors based on spatial sub-cubes and dynamic sphere, Geomat. Inform. Sci. Wuhan Univ. (in Chinese) 36 (2011) 358–362.
[21] J. McNames, A fast nearest-neighbor algorithm based on a principal axis search tree, IEEE Trans. Pattern Anal. 23 (2001) 964–976.
[22] D. Mount, S. Arya, ANN: a library for approximate nearest neighbor searching, in: CGC Second Annual Fall Workshop on Computational Geometry, 1997.
[23] L.A. Piegl, W. Tiller, Algorithm for finding all k nearest neighbors, Comput. Aided Des. 34 (2002) 167–172.
[24] X. Ping, R. Xu, J. Kong, S. Liu, Novel algorithm for k-nearest neighbors of massive data based on spatial partition, J. South China Univ. Technol. (Nat. Sci.) (in Chinese) 35 (2007) 65–69.
[25] Pointshop3d, 2004. URL: .
[26] J. Sankaranarayanan, H. Samet, A. Varshney, A fast all nearest neighbor algorithm for applications involving large point-clouds, Comput. Graph. 31 (2007) 157–174.
[27] Y. Shi, L. Zhang, L. Zhu, An approach to nearest neighboring search for multi-dimensional data, Int. J. Future Gener. Commun. Network. 4 (2010) 23–28.
[28] Z. Song, N. Roussopoulos, K-nearest neighbor search for moving query point, in: C.e. Jensen (Ed.), Advances in Spatial and Temporal Databases, Springer, Berlin, Heidelberg, 2001, pp. 79–96.
[29] T. Varady, R. Martin, J. Cox, Reverse engineering of geometric models – an introduction, Comput. Aided Des. 29 (1997) 255–268.
[30] W. Wei, L. Zhang, L. Zhou, A spatial sphere algorithm for searching k-nearest neighbors of massive scattered points, Chin. J. Aeronaut. (in Chinese) 27 (2006) 944–948.
[31] B. Xiong, M. He, H. Yu, Algorithm for finding k-nearest neighbors of scattered points in three dimensions, J. CAD CG (in Chinese) 16 (2004) 909–912, 917.
[32] P.N. Yianilos, Data structures and algorithms for nearest neighbor search in general metric spaces, in: Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, ACM, New York, NY, USA, 1993, pp. 311–321.
[33] B. Zhang, S.N. Srihari, Fast k-nearest neighbor classification using cluster-based trees, IEEE Trans. Pattern Anal. 26 (2004) 525–528.
[34] C. Zhao, X. Meng, An improved algorithm for k-nearest-neighbor finding and surface normals estimation, Tsinghua Sci. Technol. 14 (2009) 77–81.
[35] J. Zhao, C. Long, Y. Ding, Z. Yuan, A new k-nearest neighbors search algorithm based on 3d cell grids, Geomat. Inform. Sci. Wuhan Univ. (in Chinese) 34 (2009) 615–618.