Pattern Recognition ∎ (∎∎∎∎) ∎∎∎–∎∎∎
A SIFT-based contactless palmprint verification approach using iterative RANSAC and local palmprint descriptors

Xiangqian Wu a,*, Qiushi Zhao a, Wei Bu b

a School of Computer Science and Technology, Harbin Institute of Technology, 150001 Harbin, PR China
b Department of Media Technology and Art, Harbin Institute of Technology, 150001 Harbin, PR China
Article info

Article history: Received 3 July 2013; received in revised form 26 February 2014; accepted 3 April 2014.

Abstract
Palmprint recognition is a relatively new and effective biometric technology. Most traditional palmprint recognition methods are based on contact acquisition devices, which affects their user-friendliness and limits their applications. To overcome these shortcomings, this work proposes a contactless palmprint verification approach based on SIFT, which is composed of three steps, namely, image preprocessing, SIFT feature extraction and matching, and matching refinement. Palmprint images are first preprocessed using an isotropic filter, and then the SIFT points are detected and matched. Finally, the matched points are refined by employing a two-stage strategy. In the first stage, an iterative RANSAC (I-RANSAC) algorithm is employed to remove the mis-matched points which fail to satisfy the topological relations. In the second stage, local palmprint descriptors (LPDs) are extracted for SIFT points to further remove the mis-matched points which cannot be distinguished by the original SIFT descriptors. The number of final matched SIFT points is taken as the score for decision. Experimental results show that the proposed approach is effective in contactless palmprint recognition, especially when non-linear deformations exist in palmprint images. © 2014 Elsevier Ltd. All rights reserved.
Keywords: Contactless palmprint recognition; SIFT; Iterative RANSAC; Local palmprint descriptor
1. Introduction

Palmprint recognition, as an emerging biometric technology, has received considerable interest recently, and has been extensively studied to establish biometric systems with high accuracy and high user friendliness. Traditional palmprint feature extraction methods can be roughly categorized into holistic ones and local ones [1]. In holistic palmprint feature extraction methods, the features are extracted from the entire palmprint image in either the spatial domain [2,3] or a transform domain [4,5]. Local palmprint feature extraction methods intend to extract the local structure or texture features on the palmprint, and can be further classified into line-based methods [6,7], coding-based methods [8–14], local palmprint texture descriptors [15,16], etc. Among them, coding-based methods, such as the competitive code (CompCode) [9], the orthogonal line ordinal features (OLOF) [11], and the multi-scale competitive code (MCC) [13], have achieved great success. In coding-based palmprint feature extraction, a palmprint image is convolved with a bank of filters, and then the direction information [9–13] or the phase information [8,14] of the responses is encoded into binary bits as features.
* Corresponding author. Tel.: +86 451 8641 2871; fax: +86 451 8641 3309.
E-mail addresses: [email protected] (X. Wu), [email protected] (Q. Zhao), [email protected] (W. Bu).
Most of the existing palmprint feature extraction and matching methods, especially the coding-based ones, require that palmprint images be well aligned so that the feature maps can be precisely matched pixel-wise. The most frequently used method to guarantee the alignment of palmprint images is to employ a contact device with guiding pegs [8,17,18] for image acquisition. When capturing palmprint images with these contact devices, users are asked to put their hands on a planar surface and/or have their fingers restricted by pegs. Such contact image acquisition approaches may cause hygienic concerns and reluctance to use [19], and consequently limit the applications of palmprint recognition. To improve user friendliness and broaden the applications of palmprint recognition, researchers have begun to focus on contactless palmprint recognition techniques. Doublet [20–22] used Gabor-based holistic features for contactless palmprint recognition. Michael [23] investigated contactless palmprint recognition using local binary pattern (LBP) features. Hao [24] employed the orthogonal line ordinal features (OLOF) for contactless multispectral palmprint recognition. Jia [25] evaluated the performance of various state-of-the-art palmprint feature extraction methods across several contactless palmprint databases. The feature extraction and matching methods used in these works were directly adopted from contact palmprint recognition techniques. Without the guiding mechanisms, contactless palmprint recognition suffers many more hand deformations, such as rotation, translation, scale variance and palm stretching/
http://dx.doi.org/10.1016/j.patcog.2014.04.008 0031-3203/& 2014 Elsevier Ltd. All rights reserved.
Please cite this article as: X. Wu, et al., A SIFT-based contactless palmprint verification approach using iterative RANSAC and local palmprint descriptors, Pattern Recognition (2014), http://dx.doi.org/10.1016/j.patcog.2014.04.008i
bending, etc., than in contact settings. These deformations result in large intra-class variations, and contact methods therefore often fail to achieve high accuracy in contactless systems. Local invariant features are promising for solving the hand deformation problem since they are robust to image translation, rotation, and scale variations. Morales [26,27] proposed to employ the scale invariant feature transform (SIFT) fused with OLOF for contactless palmprint recognition. Similarly, Chen [28] investigated the fusion of SIFT and 2D symbolic aggregate approximation (SAX) features for recognition. To further increase the accuracy, Zhao [29] performed SIFT-based image alignment before CompCode feature extraction and matching (called Aligned CompCode), and fused the scores of SIFT and CompCode for recognition. These works showed that local invariant features are effective in contactless palmprint recognition. However, these methods used the original SIFT directly, without considering the characteristics of palmprint images, and hence their performance was affected. There are two main problems with the original SIFT technique for palmprint recognition. Firstly, the orientation-histogram-based SIFT descriptors are not sufficient to discriminate different key points on palmprints, since the histograms ignore the positions of the orientations, which are the most powerful features for characterizing palmprints [9,11,13]. Therefore, besides the original SIFT descriptors, more distinguishable features of SIFT key points should be extracted for palmprint recognition. Secondly, to achieve translation and rotation invariance, the original SIFT technique matches each key point of one image with all of the key points of the other image, ignoring their topological relations, which are very important for distinguishing differently stretched palmprints.
To deal with these problems, this paper proposes a novel SIFT-based contactless palmprint verification approach with matching refinement. The proposed approach consists of three steps, as depicted in Fig. 1. Palmprint images are first preprocessed using an isotropic filter. SIFT features are then extracted from the preprocessed images and matched. Finally, a two-stage refinement strategy is proposed to refine the matched SIFT points, and the number of refined matched SIFT points is taken as the score for decision. The major contributions of this work can be briefly described as follows:

1. An isotropic-filter-based preprocessing method is devised to enhance palmprint textures, which effectively increases the number of detected and matched SIFT points.
2. An iterative RANSAC (I-RANSAC) algorithm is proposed to refine the matched SIFT points. I-RANSAC can remove mis-matched points which fail to satisfy the topological relations between two palmprint images.

3. Local palmprint descriptors (LPDs) are defined to further refine the matched SIFT points. The LPD-based refinement can remove mis-matched points which cannot be discriminated with the original SIFT descriptors.

The remainder of this paper is organized as follows. Section 2 presents the method for palmprint image preprocessing. Section 3 briefly reviews the extraction and matching of SIFT features. Section 4 proposes the SIFT point matching refinement method, including the I-RANSAC-based refinement and the LPD-based refinement. Afterwards, experimental results with analyses are provided in Section 5, and finally, Section 6 concludes the work.
2. Preprocessing

In general, the textures on palms are very fine; therefore, palmprint images usually have low contrast and are easily corrupted by noise, which affects the subsequent feature extraction and matching. To address this problem, Morales [27] used a Gabor filter with the orientation π/4 to enhance palmprint images. Obviously, this method can only enhance the palmprint textures in the corresponding orientation. However, palmprint textures are essentially multi-oriented, so Morales' method may not enhance palmprint images effectively. An isotropic filter is more suitable for palmprint enhancement. The circular Gabor filter [30] is a very effective isotropic filter for texture enhancement and is used to preprocess palmprint images in this work. The circular Gabor filter is expressed as

G(x, y) = exp(−(x² + y²)/(2s²)) · exp(2πiF·√(x² + y²))    (1)

where F is the central frequency of the filter, s is the standard deviation of the Gaussian envelope, and i is the imaginary unit, i.e., i = √(−1). The spatial response of the circular Gabor filter used in this work is depicted in Fig. 2(a), where s = 7.25 and F = 0.03. An example of a preprocessed palmprint image is also shown in Fig. 2, in which (b) is the original palmprint image and (c) is the result of preprocessing.
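As a rough illustration, Eq. (1) can be sampled into a discrete convolution kernel. The sketch below is a minimal version; the kernel size and the absence of any normalization are assumptions not specified in the text.

```python
import math

def circular_gabor_kernel(size, sigma=7.25, freq=0.03):
    """Sample the circular Gabor filter of Eq. (1) on a size x size grid.

    Returns a list of rows of complex values
    G(x, y) = exp(-(x^2 + y^2) / (2 sigma^2)) * exp(2*pi*i*F*sqrt(x^2 + y^2)),
    with the defaults s = 7.25, F = 0.03 taken from the text.
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            r2 = x * x + y * y
            envelope = math.exp(-r2 / (2.0 * sigma ** 2))   # Gaussian envelope
            phase = 2.0 * math.pi * freq * math.sqrt(r2)    # radial carrier
            row.append(envelope * complex(math.cos(phase), math.sin(phase)))
        kernel.append(row)
    return kernel
```

In preprocessing, the palmprint image would then be convolved with this kernel; the paper does not state whether the real part or the magnitude of the response is kept, so that choice is left open here. Note the filter value depends only on the radius √(x² + y²), which is what makes it isotropic.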
3. SIFT feature extraction and matching
Fig. 1. Framework of the proposed approach. (Pipeline: preprocessing; SIFT feature extraction; SIFT feature matching against the feature database; first-stage refinement (I-RANSAC); second-stage refinement (LPD); decision.)
3.1. SIFT feature extraction

The main stages of SIFT feature extraction proposed by Lowe [31] can be briefly described as the following four steps when applied to palmprint images:

1. Scale space construction: The Gaussian scale space is constructed by convolving the preprocessed palmprint image with a bank of Gaussian kernels of different scales:

L(s) = G(s) ∗ I    (2)

where G(s) is the Gaussian kernel with scale s, and I is the enhanced palmprint image.

2. Key point localization: Key point localization is performed in the difference of Gaussian (DoG) images, which are obtained by subtracting neighboring planes of the Gaussian scale space:

D(s) = (G(ks) − G(s)) ∗ I = L(ks) − L(s)    (3)
Fig. 2. Circular Gabor filter and the result of palmprint image preprocessing: (a) spatial response of circular Gabor filter, (b) original palmprint image and (c) result of preprocessing.
Fig. 3. Detected SIFT points: (a) detected points in Fig. 2(b) and (b) detected points in Fig. 2(c).
where k is the scale factor. The local extrema of the DoG are detected as key point candidates using non-maximum suppression in both the spatial and scale spaces. After that, a detailed model is fitted to determine the finer location of the key points using the quadratic Taylor expansion of D(s). The key point candidates are then refined by a threshold to dismiss unstable ones.

3. Orientation assignment: A local image patch centered at a key point is considered for orientation assignment. The gradient phase and magnitude of the pixels in this patch are calculated, from which a gradient orientation histogram is obtained. The angle corresponding to the maximum histogram bin is taken as the dominant orientation of that key point.

4. Descriptor computation: The local image patch centered at a key point is split equally into 4 × 4 sub-blocks. From each sub-block, the gradient orientation histogram is computed and grouped into 8 bins. The descriptor for the key point is obtained by concatenating the gradient orientation histograms of all sub-blocks into a 128-bin histogram. Before the descriptor is formed, orientation normalization is performed according to the dominant orientation so that orientation invariance is obtained.

Fig. 3 shows a result of SIFT point detection, in which (a) and (b) demonstrate the SIFT points detected from Fig. 2(b) and (c), respectively. There are 376 points detected from the original image, while 531 points are detected from the preprocessed image. The result shows that many more points are detected from the preprocessed image than from the original one.
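The descriptor computation in step 4 can be sketched as follows. This is a simplified illustration, not Lowe's full implementation: the trilinear interpolation, Gaussian weighting, and clipping of the reference algorithm are omitted, and the gradient maps are assumed to be precomputed and already rotated to the dominant orientation.

```python
import math

def sift_descriptor(grad_mag, grad_ori):
    """Build a 128-bin SIFT-style descriptor from a 16x16 patch.

    grad_mag, grad_ori: 16x16 lists of gradient magnitudes and orientations
    (radians, relative to the key point's dominant orientation).
    The patch is split into 4x4 sub-blocks of 4x4 pixels; each sub-block
    contributes an 8-bin orientation histogram, concatenated to 4*4*8 = 128.
    """
    desc = [0.0] * 128
    for y in range(16):
        for x in range(16):
            block = (y // 4) * 4 + (x // 4)               # which 4x4 sub-block
            angle = grad_ori[y][x] % (2.0 * math.pi)
            bin8 = int(angle / (2.0 * math.pi / 8)) % 8   # 8 orientation bins
            desc[block * 8 + bin8] += grad_mag[y][x]      # magnitude-weighted vote
    norm = math.sqrt(sum(v * v for v in desc)) or 1.0
    return [v / norm for v in desc]   # unit norm for illumination invariance
```

Concatenating histograms per sub-block is exactly why the descriptor keeps only coarse position information: within a sub-block, where an orientation occurs is lost — the weakness for palmprints that Section 4.2 addresses with LPDs.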
3.2. SIFT feature matching

In SIFT-based palmprint recognition, the Euclidean distance has been successfully used for SIFT feature matching [26–28]. Therefore, in this paper, for the convenience of comparison, the Euclidean distance is also employed to match the descriptors of two palmprint images. A pair of points with descriptors p_i, q_j is taken as matched if

d_ij < t · min(d_ik),  k = 1, 2, …, N,  k ≠ j    (4)

where d_ij and d_ik are the Euclidean distances between p_i, q_j and p_i, q_k, respectively. Previous work [26] showed that a threshold t in the range [0.58, 0.83] is suitable for contactless palmprint verification; therefore, in this work, the threshold t is chosen as 0.6. To achieve translation invariance, the original SIFT technique matches each point in one image against all the points in the other image, without considering the topological relations of these points. Fig. 4 shows the matched key points between two palmprint images.
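The matching rule of Eq. (4) — Lowe's ratio test — can be sketched as a brute-force matcher. This is a minimal O(N·M) version for illustration; descriptor lists and the default t = 0.6 follow the text.

```python
import math

def match_descriptors(P, Q, t=0.6):
    """Match descriptor lists P and Q with the ratio test of Eq. (4).

    A point p_i is matched to its nearest neighbour q_j only if the
    distance d_ij is below t times the distance to the second-closest
    candidate, which discards ambiguous matches.
    Returns a list of (i, j) index pairs.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    matches = []
    for i, p in enumerate(P):
        # distances from p to every candidate in Q, nearest first
        d = sorted((dist(p, q), j) for j, q in enumerate(Q))
        if len(d) >= 2 and d[0][0] < t * d[1][0]:
            matches.append((i, d[0][1]))
    return matches
```

Note that nothing here constrains the geometry: a point on the left of one palm can match a point on the right of the other, which is exactly the topological weakness that the refinement of Section 4 removes.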
4. Matching refinement After SIFT feature matching, there will exist many mis-matched points (see the ones linked by yellow lines in Fig. 4). In SIFT-based palmprint recognition, the scores for final decision are computed from the number of matched SIFT points, and therefore the mis-matched points will cause false acceptances. To remove mis-matched SIFT points, this work proposes a two-stage refinement strategy, composed of I-RANSAC-based refinement and LPD-based refinement.
Fig. 4. Matched SIFT points. The yellow lines are mis-matched ones. (For clearness, the images are not preprocessed.) (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 5. An example of a piecewise-linearly deformed palmprint image: (a) original deformed image and (b) demonstration of the separated regions (Regions 1–4).
4.1. I-RANSAC-based refinement

The RANSAC algorithm [32], which is frequently used for refining matched SIFT points, estimates a transformation model (homography) from all of the matched SIFT points, keeps the points which comply with the model as inliers, and discards the points which fail to comply with the model as outliers. In contactless palmprint images, non-linear deformations exist due to the varying degrees of hand stretching. In such cases, it is very difficult to model these non-linear deformations with only a single transformation model. By observing a stretched palm, it can be found that although the palm is flexible, large deformations mostly occur along the palm lines, which segment the palm into several regions. Each region is relatively stable and can be regarded as a linearly deformed object. Thus, the non-linearly deformed palm can be regarded as piecewise-linearly deformed, as shown in Fig. 5. In such cases, RANSAC can be applied to compute one transformation model per region, and the whole palmprint can be modeled by several transformation models. However, extracting the palm lines from low-quality contactless palmprint images for segmentation is a difficult task. To avoid this problem, an iterative RANSAC (I-RANSAC) algorithm [33] is proposed to estimate the transformation models based on the original RANSAC. In I-RANSAC, after estimating one transformation model using RANSAC, the inliers which comply with the estimated model are retained, and the remaining points, instead of being discarded as outliers, are fed into RANSAC again to compute another model. This process is executed iteratively until the maximum number of plausible models is reached or no further transformation model can be estimated from the remaining points. In this way, most of the truly matched points are retained as inliers even though they comply with different models. The I-RANSAC-based refinement is described in Algorithm 1.

Algorithm 1. I-RANSAC-based refinement

Input:
1. Original matched SIFT point set: S = {(p, q)_i | i = 1, 2, …, N}, where (p, q)_i is the i-th pair of matched SIFT points and N is the number of matched SIFT point pairs.
2. Maximum number of plausible models: M_max

Output:
1. Number of models: M
2. Refined inlier sets: S^i_I-RANSAC ⊆ S, i = 1, 2, …, M
3. Estimated models: H^i, i = 1, 2, …, M

1. Initialization: S^0_outlier = S; S^0_I-RANSAC = ∅; M = M_max (∅ represents the empty set)
2. For T = 1 to M_max
   (1) S^T_outlier = S^(T−1)_outlier − S^(T−1)_I-RANSAC (− represents set subtraction)
   (2) Apply RANSAC to S^T_outlier, obtaining the inlier set S^T_I-RANSAC and the transformation model H^T
   (3) If S^T_I-RANSAC = ∅
       (a) M = T − 1
       (b) Break
   (4) End
3. End
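The iterative loop of Algorithm 1 can be sketched as follows. To keep the sketch self-contained, a translation-only model replaces the homography of the paper, and the inner RANSAC is a plain sample-and-count loop; thresholds, iteration counts, and the minimum inlier count are illustrative assumptions.

```python
import random

def ransac_translation(pairs, thresh=2.0, iters=200, min_inliers=3, rng=None):
    """Plain RANSAC with a translation-only model (a simplification of the
    homography used in the paper): sample one pair, hypothesise a shift
    (dx, dy), and count the pairs that agree within `thresh` pixels."""
    if not pairs:
        return [], None
    rng = rng or random.Random(0)
    best, best_model = [], None
    for _ in range(iters):
        (px, py), (qx, qy) = rng.choice(pairs)
        dx, dy = qx - px, qy - py
        inliers = [((ax, ay), (bx, by)) for (ax, ay), (bx, by) in pairs
                   if abs((bx - ax) - dx) <= thresh and abs((by - ay) - dy) <= thresh]
        if len(inliers) > len(best):
            best, best_model = inliers, (dx, dy)
    return (best, best_model) if len(best) >= min_inliers else ([], None)

def iterative_ransac(pairs, m_max=3, **kw):
    """Algorithm 1: re-run RANSAC on the points left over by the previous
    model, so matches that follow a *different* local deformation are kept
    instead of being discarded as outliers."""
    remaining = list(pairs)
    inlier_sets, models = [], []
    for _ in range(m_max):
        inliers, model = ransac_translation(remaining, **kw)
        if not inliers:     # no further model can be estimated
            break
        inlier_sets.append(inliers)
        models.append(model)
        remaining = [p for p in remaining if p not in inliers]
    return inlier_sets, models
```

With a single-model RANSAC, whichever palm region has fewer matches would lose all of them as "outliers"; the iterative version recovers one model per region, matching the piecewise-linear view of Fig. 5.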
Fig. 6. Matching refinement using I-RANSAC: (a) original matchings (yellow lines mean mis-matched), (b) matchings complying with Model 1, (c) matchings complying with Model 2, and (d) final refined matchings. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 7. Mis-matched points between two inter-class palmprint images: (a) mis-matched SIFT points and (b) matchings after refining by I-RANSAC. (For clearness, the images are not preprocessed.)
In Algorithm 1, M_max is the maximum number of plausible transformation models to estimate. If M_max is set to 1, the algorithm is equivalent to the original RANSAC. Fig. 6 shows an example of refinement, in which two models are fitted using I-RANSAC. Fig. 6(a) shows the matching correspondences without refinement, and Fig. 6(b) and (c) show the matching correspondences refined using the first and the second transformation model, respectively. It can be seen from Fig. 6 that if only one model is employed, a considerable number of truly matched points are discarded as outliers, while I-RANSAC retains these points by employing multiple models (Fig. 6(d)). It should be pointed out that the result in Fig. 6(b) is also the result of the original RANSAC.
4.2. LPD-based refinement

After refinement using I-RANSAC, the mis-matched points which fail to satisfy the topological relations are removed. However, there may still exist some mis-matched ones. Fig. 7 shows an example of mis-matched points retained by I-RANSAC. Fig. 7(a) shows the original matched SIFT points, and Fig. 7(b) shows the result of the I-RANSAC-based refinement. In Fig. 7(b), 7 points are taken as inliers by I-RANSAC; however, all of them are actually mis-matched, since these two images are from different palms. The reason is that the orientation-histogram-based original SIFT descriptors cannot provide sufficient discriminability to distinguish SIFT points in palmprint images. To further remove mis-matched points retained by I-RANSAC, an additional feature with high discriminability, called the local palmprint descriptor (LPD), is extracted to characterize the SIFT points in this work.

Let p(x, y) denote a matched SIFT point at (x, y) in a palmprint image. The LPD of p(x, y) is defined as the local palmprint feature extracted from a square palmprint image patch centered at (x, y):

LPD(x, y) = Fea(i, j),  (i, j) ∈ D^d_(x,y)    (5)

where Fea(·) is a feature extractor and D^d_(x,y) is an image patch of size d centered at (x, y). LPDs can be matched using a distance metric appropriate to Fea. Two matched SIFT points are taken as outliers if the distance between their LPDs is larger than a threshold τ.

In this work, the LPD is used to further remove the mis-matched points retained by I-RANSAC. For each inlier set obtained by
I-RANSAC, the related points can be well aligned using the corresponding transformation model before extracting the LPD. Therefore, the feature extractors used for constructing the LPD do not need to be invariant to scale, rotation, or translation. Fig. 8 shows an example of image patch definition for LPD extraction. Fig. 8(a) shows a pair of palmprint images and the matched points given by I-RANSAC (for clarity, only one model is used in the example), and Fig. 8(b) shows the result of alignment using the estimated model. Around each SIFT point in the aligned images, an image patch is defined for extracting the LPD, as depicted in Fig. 8(c). The process of refinement using the LPD is described in Algorithm 2.

Algorithm 2. LPD-based refinement

Input:
1. Pair of palmprint images to be matched: I_1, I_2
2. Number of models returned by I-RANSAC: M
3. Inlier sets returned by I-RANSAC: S^i_I-RANSAC, i = 1, 2, …, M
4. Transformation models returned by I-RANSAC: H^i, i = 1, 2, …, M

Output:
1. Refined inlier sets: S^i_LPD ⊆ S^i_I-RANSAC, i = 1, 2, …, M

1. Initialization: S^i_LPD = S^i_I-RANSAC, i = 1, 2, …, M
2. For m = 1 to M
   (1) Align I_2 to I_1 using transformation model H^m
   (2) For k = 1 to N_m (N_m is the number of elements in S^m_LPD)
       (a) Compute LPD_p, LPD_q for the points p, q of (p, q)_k ∈ S^m_LPD using Eq. (5)
       (b) Match LPD_p and LPD_q and get the distance d
       (c) If d > τ
           (i) Remove (p, q)_k from S^m_LPD
       (d) End
   (3) End
3. End

After applying the transformation model to the palmprint image, the image patch centered at a SIFT point can be regarded as a well-aligned small palmprint image; hence various palmprint feature extractors can be used for LPD construction. Three of the most powerful coding-based palmprint feature extractors, namely CompCode [9], OLOF [11], and MCC [13], are investigated in this work.

1. Competitive code (CompCode) [9]: In competitive code feature extraction, a bank of modified Gabor filters with different orientations, expressed as Eq. (6), is convolved with the palmprint image:

G(x, y; s, δ, θ) = (ω/(√(2π)·κ)) · exp(−(ω²/(8κ²))·(4x′² + y′²)) · (cos(ωx′) − exp(−κ²/2))    (6)

where x′ = (x − x₀)cos θ + (y − y₀)sin θ and y′ = −(x − x₀)sin θ + (y − y₀)cos θ, and (x₀, y₀) represents the center of the filter. ω and θ denote the frequency and the orientation, respectively. κ is defined as κ = √(2 ln 2)·((2^δ + 1)/(2^δ − 1)), where δ is the half-amplitude bandwidth of the frequency response. Once κ is fixed, ω can be derived as ω = κ/s, where s is the scale of the Gaussian envelope. The feature of pixel (x, y) in image I(x, y) is defined as Eq. (7):

W(x, y) = arg min_j (I(x, y) ∗ G_R(x, y; s, δ, θ_j)),  θ_j = jπ/6,  j = 0, 1, 2, 3, 4, 5    (7)

where G_R is the real part of G, and ∗ denotes the convolution operation. The feature of each pixel is then encoded into 3 binary bits.

2. Orthogonal line ordinal feature (OLOF) [11]: OLOF applies the orthogonal line ordinal filter defined as Eq. (8) to palmprint images:

F(θ) = g(x, y, θ) − g(x, y, θ + π/2)    (8)

where g(x, y, θ) is a 2D Gaussian function oriented at θ. The filtering response of pixel (x, y) is encoded with the rule
Fig. 8. Image patch definition: (a) result of refinement using I-RANSAC, (b) result of alignment and (c) image patch definition. (For clearness, the example takes only one model when using I-RANSAC, and the images are not preprocessed.)
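The inner loop of Algorithm 2 — drop a matched pair when its two aligned patches yield dissimilar codes — can be sketched as follows. The extractor here is a toy sign-binarization stand-in, not CompCode/OLOF/MCC; the patch format, the default threshold, and the extractor are all illustrative assumptions.

```python
def sign_code(patch):
    # Toy stand-in for a coding-based extractor (Fea in Eq. (5)):
    # binarize each pixel of the patch by its sign.
    return [1 if v > 0 else 0 for row in patch for v in row]

def hamming(a, b):
    # Normalized Hamming distance between two equal-length bit lists.
    return sum(x != y for x, y in zip(a, b)) / len(a)

def lpd_refine(patch_pairs, fea=sign_code, dist=hamming, tau=0.3):
    """Second-stage refinement (Algorithm 2, steps (a)-(c)): for each
    matched pair, compute the LPDs of its two already-aligned patches and
    keep the pair only if their distance is at most tau."""
    return [(p, q) for p, q in patch_pairs if dist(fea(p), fea(q)) <= tau]
```

Because the patches are aligned beforehand with H^m, `fea` can be any non-invariant palmprint extractor; swapping in CompCode with the normalized Hamming distance of Eq. (12) recovers the variant evaluated in the paper.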
Fig. 9. An example of LPD-based refinement: (a) mis-matched points retained by I-RANSAC and (b) refined results using the LPD.
expressed as follows:

F(x, y, θ) = (1 + sgn(R_θ(x, y)))/2    (9)

where R_θ(x, y) is the filtering response of pixel (x, y), and sgn(·) returns the sign of the filtering response. Three ordinal filters, i.e., OF(0), OF(π/6), and OF(π/3), are applied to the palmprint image, and the feature of each pixel is then represented by 3 binary bits.

3. Multi-scale competitive code (MCC) [13]: The MCC uses 18 Gaussian derivative filters of 3 scales and 6 orientations:

F(x, y; s₁, s₂, θ) = (x′² − s₁²) · (1/s₁⁴) · exp(−x′²/(2s₁²) − y′²/(2s₂²))    (10)

where x′ = (x − x₀)cos θ + (y − y₀)sin θ, y′ = −(x − x₀)sin θ + (y − y₀)cos θ, and (x₀, y₀) represents the center of the filter. s₁ and s₂ are the scales of the Gaussian filter along the x′ and y′ directions, respectively. MCC takes the filter bank as a dictionary and employs the fast iterative shrinkage-thresholding algorithm (FISTA) to calculate the filter coefficients. The feature of pixel (x, y) is computed by a competitive rule:

W(x, y) = arg min_θ (ω_xy(θ))    (11)

where ω_xy(θ) is the filter coefficient of orientation θ. W(x, y) is then encoded into 3 binary bits for matching.

The features of all three coding-based methods above are represented as 3 binary bits per pixel, and hence the matching scores can be computed using the normalized Hamming distance. Let P^i_b and Q^i_b denote the i-th bit plane of two encoded feature maps P and Q, respectively. The normalized Hamming distance between P and Q is defined as

D(P, Q) = (1/(kMN)) · Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} Σ_{i=1}^{k} (P^i_b(x, y) ⊕ Q^i_b(x, y))    (12)

where ⊕ is the exclusive OR operator, k is the number of bit planes of the encoded feature map, and M and N are the dimensions of the palmprint image patches on which the LPDs are computed.

Fig. 9 demonstrates an example of refinement using the LPD (CompCode). Fig. 9(a) shows the mis-matched points retained by I-RANSAC, as demonstrated in Fig. 7(b). Most of these mis-matched points are removed by the LPD-based refinement (Fig. 9(b)). After refinement, the number of matched SIFT points is taken as the score for decision.

5. Experimental results and analysis
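Eq. (12) can be written directly as a triple sum over bit planes. A minimal sketch, with feature maps represented as nested lists of 0/1 bits:

```python
def normalized_hamming(P, Q):
    """Normalized Hamming distance of Eq. (12) between two encoded feature
    maps, each given as k bit planes of size M x N (P[i][x][y] is a 0/1 bit).

    Returns a value in [0, 1]: 0 for identical codes, 1 for complementary.
    """
    k = len(P)
    M, N = len(P[0]), len(P[0][0])
    diff = sum(P[i][x][y] ^ Q[i][x][y]           # XOR counts differing bits
               for i in range(k) for x in range(M) for y in range(N))
    return diff / (k * M * N)
```

In practice the bit planes would be packed into machine words and compared with bitwise XOR plus popcount for speed; the nested-list form above only mirrors the notation of Eq. (12).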
5.1. Databases

Two publicly available contactless palmprint databases are considered to evaluate the performance of the proposed approach in this work.¹

1. IIT Delhi (IITD) Touchless Palmprint Database: The public IITD Touchless Palmprint Database version 1.0 [34] is not exactly the same as the database used in Refs. [26,27]. The database used in Refs. [26,27] contains 235 persons, with 6 samples captured from each of the left and right hands of each person, while the public IITD database contains 2791 images captured from 235 persons, with 5–11 samples from each hand of each person. We use the public IITD database for the experiments in this paper. The images in the public IITD database were captured using a digital CMOS camera with a simple contactless imaging setup. In addition to the original images, automatically segmented region of interest (ROI) images are also available. Some typical samples from the IITD database are shown in Fig. 10.

2. CASIA palmprint database: The CASIA Palmprint Database version 1 [35] contains 5502 palmprint images captured from 312 subjects. For each subject, 8–17 samples were captured from each of the left and right hands. In the acquisition device, there are no pegs to restrict the postures and positions of the palms. Some typical palmprint images from the database are shown in the top row of Fig. 11. The CASIA database provides only the original images, so the preprocessing algorithm described in Ref. [8] is performed to obtain the ROI images, shown in the bottom row of Fig. 11.

From Figs. 10 and 11, we can see that the images from the CASIA database contain much less deformation than those from the IITD database. Both databases are separated into two subsets, i.e., left hand and right hand. In our experiments, we evaluate the performance of the algorithms on the left-hand and right-hand subsets respectively.
When evaluating on one subset, the other one is taken as the training set to optimize the parameters of the proposed method, and vice versa. (¹ All of the figures in this section can be provided by the authors in MATLAB .fig format.)

5.2. Results and analysis of preprocessing

The proposed preprocessing method is performed on all four datasets, and three values are taken as the evaluation criteria, i.e., the
Fig. 10. Typical samples from the IITD database. Top row: original images. Bottom row: corresponding ROI images.
Fig. 11. Typical samples from the CASIA database: Top row: original images. Bottom row: corresponding ROI images.
average number of detected SIFT points (ND), the average number of matched SIFT points from intra-class images (NM), and the average ratio of NM to ND (RMD). For comparison, the ND, NM, and RMD are also computed over the images without preprocessing and over the images preprocessed using Morales' method [26], with the same configuration of the SIFT algorithm. Fig. 12 demonstrates some examples of the results, and Table 1 lists the statistical results. From Table 1 we can see that both Morales' and the proposed preprocessing methods improve the ND, NM, and RMD, because the palmprints are enhanced before key point detection. The proposed preprocessing method obtains the highest ND and NM, since the circular Gabor filter can enhance palmprint textures of various orientations. Moreover, the proposed preprocessing method also achieves the highest RMD, which indicates that its detected SIFT points are the most stable.

5.3. Selection of model number

In Algorithm 1, the value of M_max controls the number of linear models used to approximate a non-linearly deformed palmprint. The value of M_max depends on the extent of the non-linear palmprint deformations; more non-linear deformation requires a larger M_max. To determine the value of M_max, we apply Algorithm 1 on each of the four datasets with no restriction on the model number, i.e., M_max is set to ∞. In this case, Algorithm 1 will terminate when no transformation model can be
estimated by RANSAC. Some of the statistical results between intraclass samples are listed in Table 2, including the average percentage of samples with no more than 3 models (No More Than 3 Models) and the average percentage of remained matched points beyond 3 models (Remaining Points). Table 2 shows that on all the datasets, Algorithm 1 stops within 3 iterations on over 80% of the occasions. We can also find that for the samples on which Algorithm 1 does not stop within 3 iterations, the points that remained after 3 iterations are very few (less than 5% on average), which have little influence on the final results. Therefore, considering both accuracy and efficiency, M max is set to 3 in this paper. 5.4. Results of verification To evaluate the verification performance of the proposed approach, verification experiments are performed over each dataset, and the verification equal error rate (EER) is taken as the evaluation criterion. Each image is matched with all the other samples in the same dataset. If the two images are from the same hand, the matching is counted as a genuine matching, otherwise, an impostor matching. This strategy is referred as Strategy 1. For comparison with the results reported in Refs. [26,27], the proposed approach is also tested by using a similar strategy with the one used in Refs. [26,27] (referred as Strategy 2). In Strategy 2, N samples are randomly selected from each hand of a dataset. Of
Fig. 12. Results of detected SIFT points: (a) original palmprint image, (b) preprocessed image using the proposed approach, and (c) preprocessed image using Morales’ method. Top row: original/preprocessed palmprint images. Bottom row: detected SIFT points.
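The initial matching that pairs the detected SIFT points is not spelled out in this section; a common choice is Lowe's nearest-neighbour ratio test [31]. The sketch below is a minimal NumPy version over precomputed 128-dimensional descriptors; the 0.8 ratio is an illustrative value, not necessarily the setting used in the paper.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match two SIFT descriptor sets with Lowe's nearest-neighbour ratio test.

    desc_a: (Na, 128) array, desc_b: (Nb, 128) array.
    Returns a list of (i, j) index pairs of putative matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distances to every descriptor in B
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        # keep the match only if the best distance is clearly smaller than the
        # second best; ambiguous matches are discarded
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

These putative matches are exactly the input that the subsequent refinement stages (I-RANSAC and LPD) prune down to the final score.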
Table 1
Statistical results of different preprocessing methods.

Methods                  ND     NM     RMD (%)
Without preprocessing    474     41     8.6
Morales [27]             553    117    21.2
Proposed                 616    177    28.7
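The isotropic enhancement whose effect Table 1 quantifies uses a circular Gabor filter in the spirit of Ref. [30], whose sinusoid is modulated along the radial direction so that lines of every orientation are enhanced equally. The sketch below is an illustrative reconstruction, not the authors' implementation; the kernel size, sigma, and frequency are assumed values.

```python
import numpy as np

def circular_gabor_kernel(size=15, sigma=3.0, freq=0.1):
    """Real part of an isotropic (circular) Gabor filter.

    Unlike a standard oriented Gabor filter, the cosine is a function of the
    radius, so the response is rotation-invariant. Parameter values here are
    illustrative only.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.sqrt(x * x + y * y)
    kernel = np.exp(-r ** 2 / (2.0 * sigma ** 2)) * np.cos(2.0 * np.pi * freq * r)
    return kernel - kernel.mean()  # zero-mean, so flat regions map to zero

def enhance(image, kernel):
    """Plain 2-D correlation followed by min-max normalisation to [0, 1]."""
    from numpy.lib.stride_tricks import sliding_window_view
    pad = kernel.shape[0] // 2
    padded = np.pad(image.astype(float), pad, mode='reflect')
    windows = sliding_window_view(padded, kernel.shape)
    out = np.einsum('ijkl,kl->ij', windows, kernel)
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo + 1e-12)
```

Because the kernel is radially symmetric, a single pass enhances principal lines and wrinkles of all orientations, which is consistent with the higher ND and NM in Table 1.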
Table 2
Statistical results for selection of model number (%).

                         IITD left   IITD right   CASIA left   CASIA right
No more than 3 models      82.4        81.6         87.3         89.1
Remaining points            4.4         3.7          3.7          4.5
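Algorithm 1 (I-RANSAC), whose stopping behaviour Table 2 summarizes, can be sketched as below: repeatedly fit a linear model to the remaining matched points with RANSAC, pool the inliers of each model, and stop when no further model can be estimated or M_max models have been extracted. This is a hedged reconstruction from the description in this paper; the affine model, 3-point sampling, threshold, and iteration counts are illustrative assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst (both (N, 2))."""
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3)
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2)
    return M

def ransac_affine(src, dst, n_iter=200, thresh=3.0, min_inliers=6, seed=0):
    """One RANSAC pass: returns a boolean inlier mask, or None if no model."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ M
        err = np.linalg.norm(pred - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() >= min_inliers and (best is None or inliers.sum() > best.sum()):
            best = inliers
    return best

def iterative_ransac(src, dst, m_max=3, **kw):
    """I-RANSAC sketch: peel off up to m_max affine models, pooling their
    inliers, so match sets under piece-wise linear deformation are retained."""
    keep = np.zeros(len(src), dtype=bool)
    remaining = np.arange(len(src))
    for _ in range(m_max):
        if len(remaining) < 6:
            break
        mask = ransac_affine(src[remaining], dst[remaining], **kw)
        if mask is None:
            break  # no further transformation model can be estimated
        keep[remaining[mask]] = True
        remaining = remaining[~mask]
    return keep  # refined matches = points accepted by any of the models
```

In contrast, a single RANSAC pass would keep only the inliers of one model, discarding truly matched points that follow a second local deformation, which is the behaviour the comparison in Table 3 illustrates.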
Table 3
Verification EERs of various methods on IITD database (%). Values in brackets are for Strategy 2.

Methods                                     Left hand            Right hand
CompCode [9]                                3.4170 (1.5235)      4.8913 (1.7463)
OLOF [11]                                   3.7234 (1.5407)      4.5389 (1.6825)
MCC [13]                                    3.2717 (1.5202)      4.4086 (1.6636)
Aligned CompCode [29]                       0.6134 (0.2266)      0.7662 (0.2601)
SIFT without refinement                     0.7391 (0.2674)      1.0821 (0.2771)
Morales' method (SIFT+OLOF) [26]            0.6697 (0.2213)      0.8122 (0.2546)
SIFT with RANSAC                            0.5720 (0.2443)      0.6486 (0.2563)
SIFT with I-RANSAC [33]                     0.5134 (0.2204)      0.5524 (0.2417)
SIFT with I-RANSAC and LPD (CompCode)       0.4252 (0.1689)      0.4869 (0.1948)
SIFT with I-RANSAC and LPD (OLOF)           0.4785 (0.1707)      0.5362 (0.2065)
SIFT with I-RANSAC and LPD (MCC)            0.4369 (0.1665)      0.4850 (0.2024)
Of the N samples, one is taken as the testing sample and the remaining ones serve as reference samples. Each testing sample is matched against all of the reference samples, and all of the matching scores are used to compute the EER. These steps are repeated N times, and the average EER is taken as the final EER. Since there are at least 5 samples for each hand in the IITD database and at least 8 in the CASIA database, N is set to 5 and 8 for the corresponding datasets (in Refs. [26,27], N is 6). Obviously, palmprint verification with Strategy 1 is much more difficult than with Strategy 2.
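The EER used as the evaluation criterion above can be estimated from the genuine and impostor score sets as in the following sketch. Here the score is the number of refined matched SIFT points (larger means more likely genuine); the threshold sweep is one common way to locate the equal-error point, not necessarily the authors' exact procedure.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER from match scores where a larger score indicates a genuine pair.

    Sweeps every observed score as a decision threshold, finds the point
    where FAR and FRR are closest, and averages the two rates there.
    """
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best = None
    for t in thresholds:
        far = np.mean(impostor >= t)  # impostor pairs accepted at threshold t
        frr = np.mean(genuine < t)    # genuine pairs rejected at threshold t
        if best is None or abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2.0
```

Under Strategy 1 the genuine set holds all intra-class scores and the impostor set all inter-class scores in a dataset; under Strategy 2 only the sampled reference/testing scores enter the computation.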
5.4.1. Results and analysis on IITD database

The results of the proposed approach on the IITD database are listed in Table 3, together with those of several state-of-the-art methods. The results outside and inside brackets are for Strategy 1 and Strategy 2, respectively, and the ROC curves for Strategy 1 are shown in Fig. 13.

From Table 3 and Fig. 13, it can be seen that the proposed approach achieves the highest accuracies on both subsets of the IITD database. Traditional palmprint recognition methods, i.e. CompCode [9], OLOF [11], and MCC [13], fail to obtain high accuracies on the IITD database, although they are very successful in traditional contact-based palmprint recognition. The reason is that the images in the IITD database contain much more translation, rotation, scaling and other non-linear deformations than contact palmprint images. SIFT-based methods, including the original SIFT and Morales' method (fusion of OLOF and SIFT) [26], obtain better results because they can partially overcome these deformations. The Aligned CompCode [29] removes linear deformations by aligning palmprint images before extracting CompCode features, and thus also improves the accuracy. However, it still cannot obtain a high accuracy because of the non-linear deformations of palmprint images.

Table 3 and Fig. 13 also show that the matching refinement can further improve the performance. Among all of the refinement strategies, the performance of the original RANSAC is the worst since
Fig. 13. ROC curves of the various methods on IITD database: (a) left hand and (b) right hand. (Each panel plots FRR (%) against FAR (%) on a logarithmic scale for all of the compared methods.)
Fig. 14. An example of a sample pair falsely non-matched by coding-based methods but truly matched by the proposed approach: (a) original images and (b) matched points using the proposed approach.
many truly matched points are removed. Because the proposed I-RANSAC retains more truly matched points than the original RANSAC while still removing outliers, the former outperforms the latter. The LPD gives key points higher discriminability than the original SIFT descriptor, and hence can further remove mis-matched points. Therefore, the accuracies of refinement using I-RANSAC and LPD are the highest.

According to Table 3 and Fig. 13, the verification results using LPDs constructed with CompCode, OLOF and MCC are comparable. The probable reason is that the image patches used for constructing LPDs are small (11 by 11 pixels in the experiments), and the discriminability of these three palmprint features on such small patches is similar.

Moreover, Table 3 shows that the EERs with Strategy 2 are lower than those with Strategy 1, because palmprint verification with Strategy 1 is much more difficult. It should be pointed out that the results with Strategy 2 differ from those reported in Ref. [26] (EERs of 0.20% and 0.21% for the left and right hands, respectively) because the public IITD database is not exactly the same as the one used in Ref. [26].

Fig. 14 shows a pair of samples which is falsely non-matched by the mentioned coding-based methods but correctly matched by the proposed approach. The palmprint images in Fig. 14(a) contain many deformations, especially translations, and as a result the pixel-wise feature matching of the coding-based methods yields a large distance. With the proposed approach, there are still sufficient matched points between the pair of palmprint images, as shown in Fig. 14(b), and hence they can be correctly matched.
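The LPD-based second refinement stage discussed above can be sketched as follows. The paper builds LPDs from CompCode/OLOF/MCC responses on 11 × 11 patches around the SIFT points; as a runnable stand-in, this sketch uses quantised gradient orientations as the per-pixel code, so the patch size matches the paper but the feature itself is a simplified assumption, as are the bin count and rejection threshold.

```python
import numpy as np

def orientation_code_patch(image, x, y, half=5, bins=6):
    """Simplified CompCode-style local palmprint descriptor (LPD):
    the quantised gradient orientation of each pixel in the 11x11 patch
    centred at (x, y). A stand-in for the paper's filter-bank codes."""
    # take a 13x13 patch so the 11x11 code is free of gradient border effects
    patch = image[y - half - 1:y + half + 2, x - half - 1:x + half + 2].astype(float)
    gy, gx = np.gradient(patch)
    angle = np.mod(np.arctan2(gy, gx), np.pi)          # orientations in [0, pi)
    code = np.floor(angle / (np.pi / bins)).astype(int) % bins
    return code[1:-1, 1:-1]                            # trim to 11x11

def lpd_distance(code_a, code_b, bins=6):
    """Fraction of patch pixels whose orientation codes disagree,
    counting adjacent bins (circularly) as agreement."""
    diff = np.abs(code_a - code_b)
    diff = np.minimum(diff, bins - diff)               # circular bin distance
    return np.mean(diff > 1)

def refine_by_lpd(img_a, img_b, pairs, thresh=0.5):
    """Second refinement stage: drop matched point pairs whose LPDs differ."""
    kept = []
    for (xa, ya), (xb, yb) in pairs:
        d = lpd_distance(orientation_code_patch(img_a, xa, ya),
                         orientation_code_patch(img_b, xb, yb))
        if d <= thresh:
            kept.append(((xa, ya), (xb, yb)))
    return kept
```

The point of this stage is that a pair surviving I-RANSAC is geometrically plausible but may still lie on unrelated palm texture; comparing local palmprint codes removes exactly those pairs.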
5.4.2. Results and analysis on CASIA database

The experimental results of the proposed approach on the CASIA database are listed in Table 4, together with those of several state-of-the-art methods. The results outside and inside brackets are for Strategy
1 and Strategy 2, respectively, and the ROC curves for Strategy 1 are shown in Fig. 15.

From Table 4 and Fig. 15, it can be seen that on the CASIA database the considered coding-based methods obtain higher accuracies than the SIFT method without refinement, because the image deformations in this database are small. The results of CompCode are improved by Aligned CompCode, since the latter removes linear deformations of palmprint images, but the improvement is not as notable as on the IITD database, because the image deformations in the IITD database are much larger than those in the CASIA database. We can also find from the results that the matching refinements using the original RANSAC, I-RANSAC, and I-RANSAC followed by LPD progressively improve the accuracies, and the proposed SIFT with I-RANSAC and LPD (MCC) achieves the highest accuracies.

It should be pointed out that since the deformations of the images in the CASIA database are not very pronounced, and the considered coding-based methods are proven to be very successful on
palmprint images with small deformations, the superiority of the proposed approach is less notable on the CASIA database.

Table 4
Verification EERs of various methods on CASIA database (%). Values in brackets are for Strategy 2.

Methods                                     Left hand            Right hand
CompCode [9]                                0.5508 (0.3805)      0.5351 (0.4034)
OLOF [11]                                   0.6134 (0.4788)      0.6173 (0.4652)
MCC [13]                                    0.5047 (0.3362)      0.5058 (0.3617)
Aligned CompCode [29]                       0.4946 (0.3551)      0.5155 (0.3987)
SIFT without refinement                     0.8173 (0.6817)      0.8821 (0.7328)
Morales' method (SIFT+OLOF) [26]            0.5313 (0.4036)      0.5450 (0.4250)
SIFT with RANSAC                            0.5161 (0.3849)      0.6065 (0.4632)
SIFT with I-RANSAC [33]                     0.4534 (0.3512)      0.5879 (0.4581)
SIFT with I-RANSAC and LPD (CompCode)       0.4451 (0.3134)      0.5233 (0.3933)
SIFT with I-RANSAC and LPD (OLOF)           0.4488 (0.3208)      0.5462 (0.4202)
SIFT with I-RANSAC and LPD (MCC)            0.3969 (0.2927)      0.4897 (0.3438)

5.4.3. Comparisons across databases

From the experimental results on the IITD and CASIA databases, it can be found that the accuracies of the considered coding-based methods on IITD are much lower than those on CASIA. The reason is that the images from the former database contain more deformations than those from the latter, and the coding-based methods are critically affected by image deformations. For the SIFT-based methods, however, the accuracies on the two databases do not vary much, which demonstrates that the SIFT-based methods are more robust to palmprint image deformations. To quantify this robustness, for each method we compute the difference of the average EERs over the two databases and plot the results in Fig. 16. As shown in Fig. 16, all of the SIFT-based methods are much more robust than the coding-based ones.

Fig. 16. Differences of the average EERs across databases. (Bar chart of the per-method EER differences for Strategy 1 and Strategy 2.)
Fig. 15. ROC curves of various methods on CASIA database: (a) left hand and (b) right hand. (Each panel plots FRR (%) against FAR (%) on a logarithmic scale for all of the compared methods.)
Fig. 17. Some falsely rejected samples by the proposed approach: (a) a pair of samples with low image quality, (b) a pair of samples corrupted by image blurring, and (c) a pair of samples corrupted by non-linear deformation. The yellow lines are correspondences of matched SIFT points which are removed by refinement. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
The robustness of the original SIFT method is improved by RANSAC-based matching refinement, and the proposed approach achieves the highest robustness.

5.4.4. Main errors of the proposed approach

In our experiments, most errors of the proposed method are falsely rejected samples caused by small scores (small numbers of matched SIFT points) between intra-class images. The reasons fall into the following aspects:

1. Low image quality: The palmprint image quality may be critically influenced by the camera quality. The gradient orientation histogram is easily affected by low image quality, and hence there will be many mis-matched SIFT points. Fig. 17(a) shows an example of low-quality palmprint images from the same palm, in which all of the matched points are mis-matched (yellow lines in Fig. 17(a)) and are removed by the proposed approach; therefore these two images are regarded as non-matched.

2. Image blurring: Since the contactless image acquisition has no guiding mechanism and the hand does not touch any part of the acquisition device, the captured images will be blurred even if small movements occur during image capturing. Fig. 17(b) demonstrates an example of blurred palmprint images caused by hand movement, in which all of the matched points are mis-matched (yellow lines in Fig. 17(b)) and are removed by the proposed approach; this pair of images is therefore also falsely decided as non-matched.

3. Severe non-linear hand deformations: The proposed I-RANSAC algorithm can retain most of the inliers even if they comply with different transformation models, under the assumption that the hand deformation is composed of several linear deformations. However, some palmprint images are so severely corrupted by non-linear deformations that they can hardly be modeled as piece-wise linear. Fig. 17(c) demonstrates an example of severely non-linearly deformed palmprint images falsely taken as non-matched by the proposed method, in which all the matched SIFT points are mis-matched (yellow lines in Fig. 17(c)) and are removed by the proposed approach.

5.5. Execution time performance evaluation

The proposed approach is implemented using Matlab 2013a on a computer with a 2.8 GHz Pentium CPU and 3 GB RAM. The execution time of each step is listed in Table 5. (For SIFT feature extraction and matching, we adopt the toolkit provided by Vedaldi [36].) The total execution time of one verification is about 1.2 s without code optimization, and can be further decreased if the system is implemented in C or C++. This indicates that the proposed approach is efficient enough for practical online biometric applications.

Table 5
Execution time.

Steps                               Average execution time (ms)
ROI extraction                       250
Preprocessing                         12
SIFT feature extraction              457
SIFT feature matching                 75
I-RANSAC-based refinement             40
LPD-based refinement (CompCode)      363
Total                               1197
6. Conclusions

This paper intensively investigated SIFT-based contactless palmprint verification with matching refinement. In this work, palmprint images are first preprocessed using an isotropic filter for enhancement, after which SIFT features are extracted and matched. For matching refinement, a two-stage approach was proposed, consisting of an I-RANSAC-based refinement and an LPD-based refinement. The proposed preprocessing method greatly increases the numbers of detected and matched SIFT points. The I-RANSAC-based refinement retains more truly matched SIFT points than the original RANSAC. The LPDs provide high discriminability for SIFT points, and hence the LPD-based refinement can remove the mis-matched points that are falsely retained by I-RANSAC. With this refinement, most of the false matches are removed while most of the true matches are retained, and hence the verification accuracy is greatly improved. The proposed approach is very robust to palmprint deformations, and its superiority is most significant on palmprint images with distinct deformations. The proposed approach can be effectively employed in practical contactless palmprint recognition systems.
Conflict of interest statement

We declare that we have no financial or personal conflicts of interest with other people or organizations.
Acknowledgements

This work was supported by the Natural Science Foundation of China (Grant Nos. 61073125 and 61350004) and the Fundamental Research Funds for the Central Universities (Grant Nos. HIT.NSRIF.2013091 and HIT.HSS.201407).
References

[1] D. Zhang, W. Zuo, F. Yue, A comparative study of palmprint recognition algorithms, ACM Comput. Surv. 44 (1) (2012) 2:1–2:37.
[2] G. Lu, D. Zhang, K. Wang, Palmprint recognition using eigenpalms features, Pattern Recognit. Lett. 24 (9) (2003) 1463–1467.
[3] X. Wu, D. Zhang, K. Wang, Fisherpalms based palmprint recognition, Pattern Recognit. Lett. 24 (15) (2003) 2829–2838.
[4] X. Jing, Y. Tang, D. Zhang, A Fourier-LDA approach for image recognition, Pattern Recognit. 38 (3) (2005) 453–457.
[5] M. Ekinci, M. Aykut, Palmprint recognition by applying wavelet-based kernel PCA, J. Comput. Sci. Technol. 23 (5) (2008) 851–861.
[6] L. Liu, D. Zhang, Palm-line detection, in: Proceedings of the International Conference on Image Processing, 2005, pp. 269–272.
[7] X. Wu, D. Zhang, K. Wang, Palm line extraction and matching for personal authentication, IEEE Trans. Syst. Man Cybern. Part A: Syst. Humans 36 (5) (2006) 978–987.
[8] D. Zhang, W. Kong, J. You, M. Wong, Online palmprint identification, IEEE Trans. Pattern Anal. Mach. Intell. 25 (9) (2003) 1041–1050.
[9] W. Kong, D. Zhang, Competitive coding scheme for palmprint verification, in: Proceedings of the 17th International Conference on Pattern Recognition, 2004, pp. 520–523.
[10] W. Jia, D. Huang, D. Zhang, Palmprint verification based on robust line orientation code, Pattern Recognit. 41 (5) (2008) 1504–1513.
[11] Z. Sun, T. Tan, Y. Wang, S. Li, Ordinal palmprint representation for personal identification, in: Proceedings of the International Conference on Computer Vision and Pattern Recognition, 2005, pp. 279–284.
[12] Z. Guo, D. Zhang, L. Zhang, W. Zuo, Palmprint verification using binary orientation co-occurrence vector, Pattern Recognit. Lett. 30 (13) (2009) 1219–1227.
[13] W. Zuo, Z. Lin, Z. Guo, D. Zhang, The multiscale competitive code via sparse representation for palmprint verification, in: Proceedings of the International Conference on Computer Vision and Pattern Recognition, 2010, pp. 2265–2272.
[14] L. Zhang, H. Li, Encoding local image patterns using Riesz transforms: with applications to palmprint and finger-knuckle-print recognition, Image Vis. Comput. 30 (12) (2012) 1043–1051.
[15] A. Kumar, D. Zhang, Personal recognition using hand shape and texture, IEEE Trans. Image Process. 15 (8) (2006) 2454–2461.
[16] Y. Han, T. Tan, Z. Sun, Palmprint recognition based on directional features and graph matching, in: S.-W. Lee, S.Z. Li (Eds.), Advances in Biometrics, Lecture Notes in Computer Science, vol. 4642, 2007, pp. 1164–1173.
[17] D. Zhang, Z. Guo, G. Lu, L. Zhang, Y. Liu, W. Zuo, Online joint palmprint and palmvein verification, Expert Syst. Appl. 38 (3) (2011) 2621–2631.
[18] D. Zhang, Z. Guo, G. Lu, L. Zhang, W. Zuo, An online system of multispectral palmprint verification, IEEE Trans. Instrum. Meas. 59 (2) (2010) 480–490.
[19] V. Kanhangad, A. Kumar, D. Zhang, A unified framework for contactless hand verification, IEEE Trans. Inf. Forensics Secur. 6 (3) (2011) 1014–1027.
[20] J. Doublet, O. Lepetit, M. Revenu, Contact less hand recognition using shape and texture features, in: Proceedings of the 8th International Conference on Signal Processing, vol. 3, 2006, pp. 1–6.
[21] J. Doublet, O. Lepetit, M. Revenu, Contactless hand recognition based on distribution estimation, in: Proceedings of the Biometrics Symposium, 2007, pp. 1–6.
[22] J. Doublet, M. Revenu, O. Lepetit, Robust grayscale distribution estimation for contactless palmprint recognition, in: Proceedings of the International Conference on Biometrics: Theory, Applications, and Systems, 2007, pp. 1–6.
[23] G. Michael, T. Connie, A. Teoh, Touch-less palm print biometrics: novel design and implementation, Image Vis. Comput. 26 (12) (2008) 1551–1560.
[24] Y. Hao, Z. Sun, T. Tan, R. Chao, Multispectral palm image fusion for accurate contact-free palmprint recognition, in: Proceedings of the 15th IEEE International Conference on Image Processing, 2008, pp. 281–284.
[25] W. Jia, R. Hu, J. Gui, Y. Zhao, X. Ren, Palmprint recognition across different devices, Sensors 12 (6) (2012) 7938–7964.
[26] A. Morales, M. Ferrer, A. Kumar, Towards contactless palmprint authentication, IET Comput. Vis. 5 (6) (2011) 407–416.
[27] A. Morales, M. Ferrer, A. Kumar, Improved palmprint authentication using contactless imaging, in: Proceedings of the Fourth International Conference on Biometrics: Theory, Applications and Systems, 2010, pp. 1–6.
[28] J. Chen, Y. Moon, Using SIFT features in palmprint authentication, in: Proceedings of the 19th International Conference on Pattern Recognition, 2008, pp. 1–4.
[29] Q. Zhao, W. Bu, X. Wu, SIFT-based image alignment for contactless palmprint verification, in: Proceedings of the International Conference on Biometrics, 2013, pp. 1–6.
[30] J. Zhang, T. Tan, L. Ma, Invariant texture segmentation via circular Gabor filters, in: Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, 2002, pp. 901–904.
[31] D. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2) (2004) 91–110.
[32] M. Fischler, R. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Commun. ACM 24 (6) (1981) 381–395.
[33] Q. Zhao, X. Wu, W. Bu, Contactless palmprint verification based on SIFT and iterative RANSAC, in: Proceedings of the International Conference on Image Processing, 2013, pp. 4186–4189.
[34] IIT Delhi Touchless Palmprint Database version 1.0, 〈http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Palm.htm〉.
[35] CASIA Palmprint Database version 1, 〈http://biometrics.idealtest.org/〉.
[36] A. Vedaldi, An Implementation of Lowe's Scale Invariant Feature Transform, 〈http://www.vlfeat.org/~vedaldi/code/sift.html〉.
Xiangqian Wu received the B.Sc., M.Sc., and Ph.D. degrees in computer science from the Harbin Institute of Technology (HIT), Harbin, China, in 1997, 1999, and 2004, respectively. Dr. Wu has worked as a lecturer (2004–2006), an associate professor (2006–2009), and a professor (2009–present) at the School of Computer Science and Technology, HIT. He has published one book and about 90 papers in international journals and conferences. His current research interests include pattern recognition, image processing, computer vision, biometrics, and medical image analysis. Dr. Wu won the Nomination of the National Excellent Ph.D. Dissertation Award of China and the China Computer Federation (CCF) Excellent Ph.D. Dissertation Award in 2006. He is a reviewer for dozens of international journals and conferences, including the IEEE Transactions on Pattern Analysis and Machine Intelligence.
Qiushi Zhao received his B.S. degree and M.S. degree in Computer Science and Technology from School of Computer Science and Information Technology, Northeast Normal University, Changchun, P.R. China. He is now a Ph.D. candidate at the School of Computer Science and Technology of Harbin Institute of Technology. His research interests include pattern recognition, image analysis, and biometrics, etc.
Wei Bu received the B.Sc., M.Sc., and Ph.D. degrees from the Harbin Institute of Technology (HIT), Harbin, China, in 2000, 2006, and 2010, respectively. Since 2010, Dr. Bu has worked as a lecturer at the Department of New Media Technologies and Arts, HIT. She has published about 20 academic papers. Her current research interests include pattern recognition, image processing, biometrics, and digital art design.