Signal Processing 89 (2009) 2676–2685


Signal Processing journal homepage: www.elsevier.com/locate/sigpro

Fast communication

Combining pores and ridges with minutiae for improved fingerprint verification

Mayank Vatsa a, Richa Singh a, Afzel Noore b,*, Sanjay K. Singh c

a Indraprastha Institute of Information Technology, New Delhi, India
b Lane Department of Computer Science and Electrical Engineering, West Virginia University, USA
c Department of Computer Engineering, Institute of Technology, Banaras Hindu University, India

Article info

Article history: Received 7 September 2007; received in revised form 11 May 2009; accepted 12 May 2009; available online 21 May 2009.

Keywords: Fingerprint verification; Active contour; Delaunay triangulation; Information fusion

Abstract

This paper presents a fast fingerprint verification algorithm using level-2 minutiae and level-3 pore and ridge features. The proposed algorithm uses a two-stage process to register fingerprint images: in the first stage, a Taylor series based image transformation performs coarse registration, while in the second stage, a thin plate spline transformation performs fine registration. A fast feature extraction algorithm is proposed that uses Mumford–Shah functional curve evolution to efficiently segment contours and extract the intricate level-3 pore and ridge features. Further, a Delaunay triangulation based fusion algorithm is proposed to combine level-2 and level-3 information, providing structural stability and robustness to small changes caused by extraneous noise or non-linear deformation during image capture. We define eight quantitative measures using level-2 and level-3 topological characteristics to form a feature supervector. A 2ν-support vector machine performs the final classification of genuine or impostor cases using the feature supervectors. Experimental results and statistical evaluation show that the feature supervector yields discriminatory information and higher accuracy compared to existing recognition and fusion algorithms.

Published by Elsevier B.V.

1. Introduction

Fingerprint features have been widely used for verifying the identity of an individual. Automatic fingerprint verification systems use ridge flow patterns and general morphological information for broad classification, and minutiae information for verification [1]. The ridge flow patterns and morphological information are referred to as level-1 features, while ridge endings and bifurcations, also known as minutiae, are referred to as level-2 features. With the availability of high resolution fingerprint

* Corresponding author.
E-mail addresses: [email protected] (M. Vatsa), [email protected] (R. Singh), [email protected] (A. Noore), [email protected] (S.K. Singh).
doi:10.1016/j.sigpro.2009.05.009

sensors, it is now feasible to capture more intricate features such as ridge path deviation, ridge edge features, ridge width and shape, local ridge quality, distance between pores, size and shape of pores, position of pores on the ridge, permanent scars, incipient ridges, and permanent flexure creases. These fine details are characterized as level-3 features [2] and play an important role in matching and in improving verification accuracy. Researchers have proposed several algorithms for level-2 fingerprint recognition [1]; however, limited research has been performed on level-3 feature extraction and matching [3–7]. The recognition performance of these algorithms is good, but they have certain limitations. The algorithm proposed in [3] requires manual intervention for fingerprint alignment, whereas the algorithms proposed in [4,5] require very high resolution (≈2000 ppi) images. Meenen et al. [6] proposed a pore extraction algorithm


that suffers due to elastic distortion and misclassification of pore features. Jain et al. [7] proposed an automated algorithm that extracts minutiae information for alignment; level-3 features are then extracted and efficient matching is performed using the iterative closest point algorithm. However, this level-3 feature extraction and matching algorithm is computationally expensive.

The objective of this research is to develop a fast and accurate automated fingerprint verification algorithm that incorporates both level-2 and level-3 features. We propose a fast level-3 feature extraction and matching algorithm using the Mumford–Shah curve evolution approach. The gallery and probe fingerprint images are registered using a two-stage registration process, and the features are extracted using an active contour model. To improve fingerprint verification performance, we further propose a fusion algorithm for level-2 and level-3 features that uses Delaunay triangulation to generate a feature supervector and a support vector machine (SVM) for classification. Experimental results and statistical tests on a database of 550 classes show the effectiveness of the proposed algorithms. In the next section, we describe the fingerprint feature extraction algorithm, and Section 3 presents the proposed fusion algorithm.

2. Proposed fingerprint feature extraction and matching

In the proposed fingerprint verification algorithm, we first extract level-2 features, which are used in the two-stage registration process. Minutiae are extracted from a fingerprint image using the ridge tracing and minutiae extraction algorithm [9]. The extracted minutiae are matched using a dynamic bounding box based matching algorithm [10]. The details of the minutiae extraction and matching algorithms are found in [9] and [10], respectively. We next register the gallery and probe images using the two-stage registration algorithm and then extract level-3


features for fusion and matching. Fig. 1 illustrates the various stages of the proposed algorithm. In this section, we describe the two-stage registration and the level-3 feature extraction and matching algorithms in detail.

2.1. Two-stage non-linear registration algorithm

Fingerprint images are subjected to non-linear deformation due to the varying pressure applied by a user when the image is captured. These deformations affect the unique spatial distribution and characteristics of fingerprint features. To address these non-linearities and accurately register the probe fingerprint image with respect to the gallery fingerprint image, a two-stage non-linear registration algorithm is proposed. The registration process uses two existing registration algorithms: Taylor series transformation [6] and thin plate spline based ridge curve correspondence [8]. The Taylor series based algorithm performs an image transformation that takes fingerprint images along with their minutiae coordinates as input and computes the registration parameters using a Taylor series. We use this algorithm to coarsely register the gallery and probe images in the first stage. In the second stage, we use the more detailed and accurate thin plate spline based ridge curve correspondence algorithm [8] for fine registration of the fingerprint images. The thin plate spline algorithm is used in the second stage because it requires coarsely registered features as input. Fingerprint images contain a large number of level-3 features, which would increase the complexity of the thin plate spline algorithm; we therefore use only level-2 features to perform fine registration of the gallery and probe images. The non-linear two-stage approach thus registers the gallery and probe fingerprint images with respect to rotation, scaling, translation, feature positions, and local deformation. Fig. 2 illustrates the results obtained with the proposed two-stage registration algorithm. Note that Fig. 2(c) shows

Fig. 1. Steps illustrating the proposed fingerprint verification algorithm.

Fig. 2. Representative results of the two-stage registration algorithm: (a) fingerprint ridge curves (level-2) and pore (level-3) samples captured from two different gallery and probe images of the same individual, (b) registration using the Taylor series [6], and (c) result of the two-stage registration algorithm. Although the registration algorithm uses only level-2 ridge and minutiae features, level-3 pores are also effectively registered.
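The second-stage thin plate spline warp can be sketched with SciPy's RBF interpolator; the minutiae correspondences below are hypothetical stand-ins for the control points that the level-2 matcher would supply after coarse Taylor series registration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical minutiae correspondences after coarse (Taylor series)
# registration; real control points come from the level-2 matcher.
probe_pts = np.array([[10.0, 12.0], [40.0, 55.0], [80.0, 30.0],
                      [120.0, 90.0], [60.0, 140.0]])
offsets = np.array([[1.2, -0.8], [0.7, 0.4], [-0.9, 1.1],
                    [0.3, -1.2], [1.0, 0.6]])
gallery_pts = probe_pts + offsets

# Thin plate spline mapping fitted on the control points; with the
# default smoothing of 0 it interpolates the correspondences exactly.
tps = RBFInterpolator(probe_pts, gallery_pts, kernel='thin_plate_spline')

# Fine-register any probe feature, e.g. a level-3 pore coordinate.
pore = np.array([[55.0, 60.0]])
registered_pore = tps(pore)
```

Because the spline interpolates the minutiae exactly while varying smoothly in between, nearby level-3 pores are carried along with the warp, which is consistent with the behavior shown in Fig. 2(c).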

Fig. 3. Images illustrating the intermediate steps of the level-3 feature extraction algorithm: (a) input fingerprint image, (b) stopping term f computed using Eq. (3), and (c) extracted fingerprint contour.

that a segment of level-2 ridge curves and level-3 pores obtained from both the gallery and probe images are perfectly registered and appear superimposed.

2.2. Level-3 pore and ridge feature extraction and matching

The proposed level-3 feature extraction algorithm uses curve evolution with the fast implementation of the Mumford–Shah functional [11,12]. Mumford–Shah curve evolution efficiently segments the contours present in the image. In this approach, feature boundaries are detected by evolving a curve and minimizing an energy based segmentation model defined as [11]:

Energy(C, c1, c2) = α ∫∫_Ω f ||C′|| dx dy + β ∫∫_in(C) |I(x, y) − c1|² dx dy + λ ∫∫_out(C) |I(x, y) − c2|² dx dy   (1)

where C is the evolving curve, represented as the zero level set C = {(x, y) : ψ(x, y) = 0} of a level set function ψ, C′ is the curve derivative, f is the weighting function or stopping term, Ω represents the image domain, I(x, y) is the fingerprint image, c1 and c2 are the average values of the pixels inside and outside C, respectively, and α, β, and λ are positive constants such that α + β + λ = 1 and α < β ≤ λ. Further, Chan and Vese [12] parameterize the energy equation (Eq. (1)) by an artificial time t ≥ 0 and deduce the associated Euler–Lagrange equation, which leads to the following active contour model¹:

ψ_t = α f (ν + κ) |∇ψ| + ∇f · ∇ψ + β δ (I − c1)² + λ δ (I − c2)²   (2)

where ν is the advection term, κ is the curvature based smoothing term, ∇ is the gradient, and δ(x) = 0.5/(π(x² + 0.25)) is the smoothed Dirac function. The stopping term f is set to

f = 1/(1 + |∇I|²)   (3)

This gradient based stopping term ensures that at the strongest gradient the speed of the curve evolution becomes zero, so the curve stops at the edges of the image.

¹ More mathematical details of the curve evolution parameterization and stopping term can be found in [11,12].
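A minimal NumPy sketch of the two-phase piecewise-constant evolution behind Eqs. (1)–(3) is shown below. The curvature and advection terms of Eq. (2) are omitted for brevity, a checkerboard level set stands in for the block-grid initialization, and all parameter values are illustrative, so this is a toy illustration of the energy descent rather than the paper's implementation:

```python
import numpy as np

def two_phase_evolution(I, iters=600, dt=0.5, eps=0.5):
    """Piecewise-constant curve evolution in the spirit of Eqs. (1)-(2);
    the curvature/advection terms of Eq. (2) are omitted for brevity."""
    h, w = I.shape
    y, x = np.mgrid[:h, :w]
    # Checkerboard initial level set (stand-in for the block grid).
    phi = np.sin(np.pi * x / 8.0) * np.sin(np.pi * y / 8.0)
    for _ in range(iters):
        inside = phi > 0
        c1 = I[inside].mean() if inside.any() else 0.0
        c2 = I[~inside].mean() if (~inside).any() else 0.0
        # Smoothed Dirac delta, cf. delta(x) = 0.5 / (pi (x^2 + 0.25)).
        delta = eps / (np.pi * (phi ** 2 + eps ** 2))
        # Region competition: pixels move to the phase whose mean they match.
        phi = phi + dt * delta * (-(I - c1) ** 2 + (I - c2) ** 2)
    return phi > 0

# Synthetic patch: dark "ridge" blob on a light background.
img = np.ones((64, 64))
img[20:44, 20:44] = 0.0

# Stopping term of Eq. (3): f = 1 / (1 + |grad I|^2).
gy, gx = np.gradient(img)
f_stop = 1.0 / (1.0 + gx ** 2 + gy ** 2)

mask = two_phase_evolution(img)
```

After a few hundred iterations the two phases segregate so that the dark blob and the light background fall into opposite sides of the zero level set, which is the behavior the full model exploits to pull contours onto ridge and pore boundaries.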

Fig. 4. Tracing of fingerprint contour and categorization of level-3 pore and ridge features (panels: contour tracing; ridge contours after tracing; pore coordinates after tracing).

Fig. 5. Level-3 pore features obtained after tracing.

Fig. 6. Level-3 ridge features obtained after tracing.

The initial contour is initialized as a grid with 750 blocks over the fingerprint image, and the boundary of each feature is computed. The grid size is chosen to balance the time required for curve evolution against correct extraction of all the features. Fig. 3 shows examples of feature extraction from fingerprint images; due to the stopping term (Fig. 3b), the noise present in the fingerprint image has very little effect on contour extraction. Once the contour extraction algorithm provides the final fingerprint contour ψ, it is scanned using the standard contour tracing technique [13] from top to bottom and left to right consecutively. Tracing and classification of level-3 pore and ridge features are performed based on the standards defined by the ANSI/NIST committee for the extended fingerprint feature set (CDEFFS) [2]. During tracing, the algorithm classifies the contour information into pores and ridges:

(1) A blob of size greater than 2 pixels and less than 40 pixels² is classified as a pore. Noisy contours, which are sometimes wrongly extracted, are therefore not included in the feature set. A pore is approximated with a circle, and its center is used as the pore feature.

(2) An edge of a ridge is defined as the ridge contour. Each row of the ridge feature represents the x, y coordinates of a pixel and the direction of the contour at that pixel.

As shown in Fig. 4, fine details and structures present in the fingerprint contour are traced and categorized into pores and ridge contours depending on their shape and attributes. Additional examples of level-3 feature extraction are shown in Figs. 5 and 6. The feature set comprises two fields separated by a unique identifier, e.g.

{(x1^P, y1^P), …, (xn^P, yn^P), (x1^R, y1^R, θ1^R), …, (xm^R, ym^R, θm^R)}

where n and m represent the number of extracted pore and ridge features, respectively. The first field contains all the pore information, and the second field contains all the ridge information. The size of the feature set depends on the size of the fingerprint image and the features present in it. For matching, the features pertaining to the gallery and probe images are represented as row vectors, fg and fp, and the Mahalanobis distance D(fg, fp) is computed³:

D(fg, fp) = √((fg − fp)ᵀ S⁻¹ (fg − fp))   (4)

where S is the positive definite covariance matrix of fg and fp.

² In our experiments, we found that the range of 2–40 pixels correctly classifies a blob as a pore. This analysis is consistent with the findings of Jain et al. [7].

³ fg = {(x_g1^P, y_g1^P), …, (x_gn^P, y_gn^P), (x_g1^R, y_g1^R, θ_g1^R), …, (x_gm^R, y_gm^R, θ_gm^R)} and fp = {(x_p1^P, y_p1^P), …, (x_pn^P, y_pn^P), (x_p1^R, y_p1^R, θ_p1^R), …, (x_pm^R, y_pm^R, θ_pm^R)}.
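The blob-size rule and the Mahalanobis match score of Eq. (4) can be sketched as follows; the binary contour image is synthetic, and estimating S from a pool of hypothetical training feature vectors is an assumption, since the text does not spell out how the covariance matrix is built:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import mahalanobis

# --- Pore vs. ridge classification by blob size ------------------------
binary = np.zeros((40, 80), dtype=bool)
binary[5:8, 5:8] = True        # 9-pixel blob  -> pore (2 < size < 40)
binary[20, 5:75] = True        # 70-pixel line -> ridge contour
binary[30, 60] = True          # 1-pixel blob  -> discarded as noise

labels, n = ndimage.label(binary)
sizes = np.bincount(labels.ravel())[1:]          # pixels per blob
pore_ids = [i + 1 for i, s in enumerate(sizes) if 2 < s < 40]
pore_centers = ndimage.center_of_mass(binary, labels, pore_ids)

# --- Mahalanobis match score, cf. Eq. (4) ------------------------------
rng = np.random.default_rng(1)
train = rng.normal(size=(200, 6))                # hypothetical training vectors
S_inv = np.linalg.inv(np.cov(train, rowvar=False))
f_gallery = rng.normal(size=6)
f_probe = f_gallery + rng.normal(scale=0.1, size=6)
score = mahalanobis(f_gallery, f_probe, S_inv)
```

The center of mass of each qualifying blob plays the role of the circle-center pore feature, while the oversized and undersized blobs are routed to the ridge-contour and noise categories, respectively.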


score of level-3 features. It ensures that features having high variance do not contribute to the distance, and hence it decreases the false reject rate for a fixed false accept rate (FAR).

3. Information fusion using Delaunay triangulation and SVM classification

Researchers have theoretically and experimentally shown that, in general, fusion of two or more biometric sources yields better performance compared to a single biometric [14]. Fusion can be performed at different levels such as raw data or image level, feature level, match score level, and decision level. In fingerprint biometrics, image fusion and feature fusion have been performed using level-2 minutiae and mosaicing techniques [15]. Further, level-2 and level-3 match score fusion algorithms have been proposed in [7,16]. In this paper, we propose an algorithm for fusing level-2 and level-3 information such that it is resilient to minor deformations in features and to missing information. Fingerprint features are extracted using the feature extraction algorithms described in Section 2. Delaunay triangulation has been used with level-2 features for fingerprint indexing and identification [17–19]. In the proposed fusion algorithm, however, Delaunay triangulation is used to generate a feature supervector that contains both level-2 and level-3 features. A Delaunay triangle is formed using minutiae information as follows:

(1) Given n minutiae points, the Voronoi diagram is computed, which decomposes the minutiae points into different regions.

(2) The Voronoi diagram is used to compute the Delaunay triangulation by joining the minutiae coordinates present in neighboring Voronoi regions.

Fig. 7(a) and (b) show an example of the Voronoi diagram and Delaunay triangulation of fingerprint minutiae. Each triangle in the Delaunay triangulation is used as a minutiae triplet [20]. Fig. 7(c) shows an example of a minutiae triplet along with level-3 features. Tuceryan and

Chorzempa [17] found that Delaunay triangulation has the best structural stability, and hence minutiae triplets computed from Delaunay triangulation are able to sustain the variations due to fingerprint deformation [18]. Further, Bebis et al. [18] and Ross and Mukherjee [19] have shown that any local variation due to noise or insertion/deletion of feature points affects the Delaunay triangulation only locally. From the minutiae triplets generated using the Delaunay triangulation, the feature supervector that includes minutiae, pore, and ridge information is computed. Jain et al. [7] have shown that pore features are less reliable compared to minutiae and ridge features. On the other hand, CDEFFS [2] has defined reliable and discriminatory level-2 and level-3 features that can be used for verification. Based on these studies, the feature supervector in the proposed fusion algorithm is composed of eight elements, described as follows:

(1) Average cosine angle in minutiae triplet (A): Let αmin and αmax be the minimum and maximum angles in a minutiae triplet. The average cosine angle of the minutiae triplet is defined as

A = cos(αavg)   (5)

where αavg = (αmin + αmax)/2. The average cosine angle vector is computed for all the minutiae triplets in a given fingerprint.

(2) Triangle orientation (O): According to Bhanu and Tan [20], triangle orientation is defined as φ = sign(z21 × z32), where z21 = z2 − z1, z32 = z3 − z2, z13 = z1 − z3, and zi = xi + jyi. zi is computed from the coordinates (xi, yi), i = 1, 2, 3, of the minutiae triplet. The triangle orientation vector O is then computed for all the minutiae triplets in a given fingerprint.

(3) Triplet density (D): If there are n′ minutiae in a Voronoi region centered at a minutia, then the minutiae density is n′. For a minutiae triplet, we define the triplet density as the average minutiae density of the three minutiae that form the triplet. If n′1, n′2, and n′3 are the minutiae densities of the three minutiae, then the triplet density is (n′1 + n′2 + n′3)/3. We compute the triplet

Fig. 7. Example of (a) Voronoi diagram of fingerprint minutiae, (b) Delaunay triangulation of fingerprint minutiae, and (c) minutiae triplet.
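Steps (1)–(2) of the triangle construction map directly onto SciPy; the minutiae coordinates below are made up for illustration:

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

# Hypothetical minutiae coordinates (x, y).
minutiae = np.array([[10.0, 10.0], [50.0, 12.0], [30.0, 40.0],
                     [70.0, 45.0], [55.0, 80.0], [15.0, 70.0]])

vor = Voronoi(minutiae)      # step (1): Voronoi decomposition
tri = Delaunay(minutiae)     # step (2): dual Delaunay triangulation

# Each simplex row is one minutiae triplet used by the fusion algorithm.
triplets = tri.simplices
```

Because the Delaunay triangulation is the dual of the Voronoi diagram, each row of `triplets` joins minutiae whose Voronoi regions are neighbors, exactly as in step (2).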

density for all the triplets in the fingerprint image to form the triplet density vector D.

(4) Edge ratio in minutiae triplet (E): The ratio of the longest and shortest edges in each minutiae triplet forms the edge ratio vector E.

(5) Min–max distance between minutia points and k-nearest neighbor pores (Pm): For every minutia point, we first compute the distances between the minutia and its k-nearest neighboring pores that lie on the same ridge. Of these k distances, the minimum and maximum are used as fusion parameters. The min–max distance vector Pm is then generated from all the minutiae in a fingerprint image.

(6) Average distance of k-nearest neighbor pores (Pavg): The average distance vector of k-nearest neighbor pores, Pavg, is formed by computing the average distance of the k-nearest neighbor pores of all the minutiae in a triplet.

(7) Average ridge width (Rw): Ridge width is computed for each minutiae triplet using the tracing technique [13]. The average ridge width vector Rw is computed by taking the average of the ridge width of each minutiae triplet.

(8) Ridge curve parameters (RC): Ross and Mukherjee [19] proposed the use of ridge curve parameters for fingerprint identification. We modify the ridge curve parameters for the proposed fusion algorithm. Each ridge can be parameterized as y = ax² + bx + c, where a, b, and c are the parameterized coefficients. The ridge curve parameter for a ridge is [a/(a + b + c), b/(a + b + c), c/(a + b + c)]. In a minutiae triplet, each minutia is associated with a ridge; the ridge curve parameters of a minutiae triplet are therefore [ai/(ai + bi + ci), bi/(ai + bi + ci), ci/(ai + bi + ci)] for i = 1, 2, 3. The ridge curve parameter vector RC is similarly computed for all the minutiae triplets.
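The geometric elements A, O, and E and the normalized ridge curve parameters can be computed per triplet as below. The coordinates are made up, 2-D minutiae points are assumed, and the orientation is computed as the sign of the 2-D cross product, which is one reading of the Bhanu and Tan definition:

```python
import numpy as np

def triplet_features(p1, p2, p3):
    """A (Eq. (5)), O, and E for one minutiae triplet."""
    pts = [np.asarray(p, dtype=float) for p in (p1, p2, p3)]
    def angle_at(a, b, c):                       # interior angle at vertex a
        u, v = b - a, c - a
        return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    angles = [angle_at(pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]) for i in range(3)]
    A = np.cos((min(angles) + max(angles)) / 2.0)        # average cosine angle
    z21, z32 = pts[1] - pts[0], pts[2] - pts[1]
    O = np.sign(z21[0] * z32[1] - z21[1] * z32[0])       # orientation via cross product
    edges = [np.linalg.norm(pts[i] - pts[(i + 1) % 3]) for i in range(3)]
    E = max(edges) / min(edges)                          # edge ratio
    return A, O, E

A, O, E = triplet_features((0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2))

# Ridge curve parameters R_C: fit y = a x^2 + b x + c and normalise.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2 * xs ** 2 + 1 * xs + 3                  # a synthetic "ridge"
a, b, c = np.polyfit(xs, ys, 2)
RC = np.array([a, b, c]) / (a + b + c)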

A feature supervector I(A, O, D, E, Pm, Pavg, Rw, RC) is formed by concatenating these eight elements (vectors) and includes information pertaining to both level-2 and level-3 fingerprint features. This feature supervector is used for verification.

3.1. Matching feature supervectors using SVM

For matching two feature supervectors, I1(A, O, D, E, Pm, Pavg, Rw, RC) and I2(A, O, D, E, Pm, Pavg, Rw, RC), pertaining to the gallery and probe fingerprints, respectively, we propose the use of a support vector machine. The first step of the matching algorithm is to compute the Mahalanobis distance between each element of the gallery and probe feature supervectors using Eq. (4). Let d(i) be the Mahalanobis distance associated with the individual elements of the feature supervector, where i = 1, 2, …, 8. The octuple Mahalanobis distance vector d is used as input to the SVM classification algorithm. The support vector machine, proposed by Vapnik [21], is a powerful methodology for solving problems in non-linear classification, function estimation, and density estimation [22]. SVM starts with the goal of separating the data with a hyperplane and extends this to non-linear decision


boundaries. SVM is thus a classifier that performs classification by constructing hyperplanes in a multidimensional space, separating the data points into different classes. To construct an optimal hyperplane, SVM uses an iterative training algorithm that maximizes the margin between two classes [22]. In our previous research, we have shown that a variant of the support vector machine known as the dual ν-SVM (2ν-SVM) is useful for classification in biometrics [22,23]. For more details of the 2ν-SVM, readers can refer to [23,24]. The 2ν-SVM decision algorithm is divided into two stages: (1) training and (2) classification.

Training: The 2ν-SVM classifier is first trained on the labeled training Mahalanobis distance vectors computed from the training fingerprint images. Using the standard training procedure [23,24] and a non-linear radial basis kernel function, the 2ν-SVM is trained to classify genuine and impostor Mahalanobis distance vectors.

Classification: The trained 2ν-SVM is used to perform classification on the probe dataset. The Mahalanobis distance vector between gallery and probe data, d, is given as input to the non-linear 2ν-SVM classifier, and the output is a signed distance from the separating hyperplane. A decision of accept or reject is made using a decision threshold T, as defined in Eq. (6):

Decision = accept if SVM output ≥ T; reject otherwise   (6)
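A scikit-learn ν-SVM with an RBF kernel can stand in for the 2ν-SVM, which is not available off the shelf; the 8-dimensional distance vectors below are synthetic, with genuine pairs drawn around small distances and impostor pairs around large ones:

```python
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.default_rng(0)
# Synthetic octuple Mahalanobis distance vectors: genuine pairs have
# small element-wise distances, impostor pairs large.
genuine = rng.normal(loc=1.0, scale=0.3, size=(100, 8))
impostor = rng.normal(loc=3.0, scale=0.5, size=(100, 8))
X = np.vstack([genuine, impostor])
y = np.array([1] * 100 + [0] * 100)          # 1 = genuine, 0 = impostor

# nu-SVM with RBF kernel as a stand-in for the paper's 2nu-SVM.
clf = NuSVC(nu=0.1, kernel='rbf', gamma='scale').fit(X, y)

# Decision rule of Eq. (6): accept when the signed distance >= T.
T = 0.0
probe = rng.normal(loc=1.0, scale=0.3, size=(1, 8))   # genuine-like probe
decision = 'accept' if clf.decision_function(probe)[0] >= T else 'reject'
```

`decision_function` returns exactly the signed distance from the separating hyperplane that Eq. (6) thresholds; sweeping T traces out the ROC curves reported in Section 5.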

4. Database and algorithms used for validation

The performance of the proposed level-3 feature extraction and fusion algorithms is evaluated and compared with existing level-3 feature based fingerprint recognition algorithms and existing fusion algorithms on a 1000 ppi fingerprint database. In this section, we briefly describe the database and the algorithms used for validation.

4.1. Fingerprint database

A fingerprint database obtained from a law enforcement agency is used to validate the proposed algorithms. This database contains 5500 fingerprint images from 550 classes (10 images per class) captured using an optical scanner. For each class, there are four rolled fingerprint images, four slap fingerprint images, and two partial fingerprint images with fewer than 10 minutiae. The resolution of the fingerprint images is 1000 ppi to facilitate the extraction of both level-2 and level-3 features. Images in the database follow all the standards defined by CDEFFS [2]. Further, all fingerprints are captured in the standard upward direction and, in general, deformation is only due to the pressure applied by the user. From each class, three fingerprints are selected for training the fusion algorithm and the 2ν-SVM classifier, and the remaining seven images per class are used as the gallery and probe dataset. For performance evaluation, we use cross validation with 20 trials. Three images are randomly selected for training and


the remaining images are used as the test data to evaluate the algorithms. This train-test partitioning is repeated 20 times, and receiver operating characteristic (ROC) curves are generated by computing the genuine accept rates (GAR) over these trials at different false accept rates. For each cross validation trial, we have 11,550 genuine matches (550 × 7 × 6/2) and 7,397,775 impostor matches (550 × 7 × 549 × 7/2) using the gallery and probe database.

4.2. Existing algorithms

To compare the performance of the proposed level-3 algorithm (Section 2.2) with existing algorithms, we have implemented two algorithms. The algorithm by Kryszczuk et al. [4,5] extracts pore information from high resolution fingerprint images by applying different techniques such as correlation based alignment, Gabor filtering, binarization, morphological filtering, and tracing. The match score obtained from this algorithm is a normalized similarity score in the range [0, 1]. The algorithm proposed by Jain et al. [7] uses Gabor filtering and the wavelet transform for pore and ridge feature extraction and iterative closest point for matching. To compare the performance of the proposed fusion algorithm, we use three existing fusion algorithms. Feature fusion using concatenation [25] compares the performance at the feature level, while the sum rule [14] and SVM match score fusion [22] compare the performance with match score level fusion algorithms. The match scores obtained by matching level-2 features and level-3 features (Section 2.2) are combined using the match score fusion algorithms.

5. Experimental evaluation

The performance of the proposed algorithms is evaluated both experimentally, by computing the average verification accuracy at 0.01% false accept rate, and statistically, by computing the half total error rate (HTER) [27]. Experimental results are obtained using the cross validation approach.

We perform two experiments: (1) evaluation of the proposed level-3 feature extraction algorithm and (2) evaluation of the proposed Delaunay triangulation and SVM classification based fusion algorithm.

Performance evaluation of level-3 feature extraction algorithm: We first compute the verification performance of the proposed level-3 feature extraction algorithm and compare it with existing level-2 [9,10] and level-3 [4,5,7] feature based verification algorithms. The ROC plots in Fig. 8 and the verification accuracies in Table 1 summarize the results of this experiment. The proposed level-3 feature extraction algorithm yields a verification accuracy of 91.41%, which is 2–7% better than existing algorithms. The existing level-2 minutiae based verification algorithm is optimized for 500 ppi fingerprint images, and its performance does not improve when 1000 ppi images are used. The performance also decreases significantly when there are fewer minutiae in the fingerprint image

Fig. 8. ROC plot for the proposed level-3 feature extraction and comparison with existing level-3 feature based algorithms [4,5,7] and minutiae based algorithm.

(<10). The level-3 pore matching algorithm [4,5] requires very high quality fingerprint images (≈2000 ppi) for feature extraction and matching, and its performance suffers when the fingerprint quality is poor or the amount of pressure applied during scanning affects the pore information. The algorithm proposed by Jain et al. [7] is efficient with rolled and slap fingerprint images; however, its verification accuracy decreases with partial fingerprints when the number of minutiae is limited. The experimental results show that the proposed level-3 feature extraction algorithm provides good recognition performance even with a limited number of minutiae. The Mumford–Shah functional based feature extraction algorithm is robust to irregularities due to noise and efficiently compensates for variations in pressure during fingerprint capture by incorporating both ridge and pore features. The Mahalanobis distance measure also reduces false rejects, thus improving the verification accuracy.

We further compare the performance of existing algorithms with and without the registration process. In this experiment, the verification accuracy of the existing algorithms is first computed without the two-stage registration technique and then compared with the accuracy obtained by applying it. Note that the existing algorithms already have built-in techniques for fingerprint registration; e.g., Jain et al. [7] use iterative closest point matching, which can address minor deformation in fingerprints. This experimental setup evaluates and compares the effect of the two-stage registration technique on existing algorithms. The two-stage registration technique minimizes the spatial differences or elastic deformation between the gallery-probe image pairs, thus reducing the intra-class variations. As evident from the results in Table 1, it significantly improves the verification accuracy for all the existing feature extraction, matching, and fusion algorithms. This result is also consistent with the results established by other researchers [1,8]. In our experiments, we observe that, in general, level-2 and level-3 algorithms provide correct results when both


Table 1
Average verification performance of fingerprint verification and fusion algorithms. Verification accuracy is computed at 0.01% false accept rate (FAR).

Recognition/fusion algorithm                     | Accuracy without registration (%) | Accuracy with registration (%) | HTER | Time (s) [Max, Min] | Time (s) Average
Level-2 minutiae [9,10]                          | 84.56 | 89.11 | 03 | [8.92, 4.57]  | 5.45
Level-3 Kryszczuk et al. [4,5]                   | 79.49 | 84.03 | 12 | [19.54, 4.89] | 7.99
Level-3 Jain et al. [7]                          | 85.62 | 88.07 | 34 | [6.33, 4.21]  | 5.97
Level-3 proposed                                 | –     | 91.41 | 09 | [4.57, 4.08]  | 4.30
Feature fusion—concatenation [25]                | 85.13 | 90.08 | 13 | [7.23, 3.92]  | 4.96
Score fusion—sum rule [14]                       | 90.23 | 91.99 | 12 | [6.79, 3.45]  | 4.01
Score fusion—SVM [22]                            | 92.07 | 95.12 | 13 | [3.12, 1.84]  | 2.44
Proposed Delaunay triangulation and SVM fusion   | –     | 96.37 | 15 | [1.88, 1.76]  | 1.82

Fig. 9. Three sample probe cases: (a) when both level-2 and level-3 features based algorithms provide correct result, (b) when level-2 feature based algorithm fails to provide correct result whereas level-3 features based algorithm provides accurate decision, and (c) when both the algorithms fail to perform. For these three probe cases, we used good quality rolled fingerprint images as gallery.


the gallery and probe are rolled fingerprint images. Similarly, when the images are good quality slap or partial prints, both level-2 and level-3 features yield correct results (Fig. 9(a)). However, there are several slap and partial samples in the database that are very difficult to recognize using level-2 features because of the inherent fingerprint structure. At 0.01% FAR, one such example is shown in Fig. 9(b), where the level-2 feature extraction algorithm does not yield the correct result but the proposed level-3 feature extraction algorithm does. Further, there are cases involving partial fingerprints in which both level-2 and level-3 algorithms fail (Fig. 9(c)) because of sensor noise and low pressure. Such cases require image quality assessment and enhancement algorithms to improve the quality so that the feature extraction algorithm can extract useful features for verification.

Performance evaluation of the proposed fusion algorithm: We next compute the verification accuracy of the proposed fusion and matching algorithm. As shown in Fig. 10 and Table 1, the Delaunay triangulation and SVM classification based fusion algorithm improves the verification accuracy by at least 4.96% over the individual level-2 and proposed level-3 feature extraction algorithms. The proposed fusion approach also outperforms the existing feature fusion algorithm [25], which performs feature normalization, feature

98 97 96 95 94 93 92 91 90 10-2

10-1 False Accept Rate (%)

100

Fig. 10. ROC plot for the proposed Delaunay triangulation and SVM classification based fusion algorithm and comparison with existing feature concatenation algorithm [25], sum rule [14], and SVM match score fusion algorithm [22].

selection, and concatenation. It yields good performance when compatible feature sets are fused. However, in our experiments, level-2 and level-3 features are incompatible feature sets and require different matching schemes. After

fusion, the concatenated feature vector is matched using the Euclidean distance measure which, in some cases, cannot discriminate between genuine and impostor fused features. We also compare the proposed fusion algorithm with the sum rule [14] and the SVM based match score fusion algorithm [22]. While the sum rule and SVM match score fusion perform better than feature fusion using concatenation [25], the proposed fusion algorithm outperforms the sum rule by 4.38% and the SVM match score fusion algorithm by 1.25%. The results of the existing and proposed fusion algorithms also accentuate that level-2 and level-3 features provide complementary information and that combining multi-classifier match scores improves the verification accuracy. However, the linear statistical sum rule does not guarantee improved performance in all cases. On the other hand, SVM match score fusion and the proposed fusion algorithm use non-linear decision functions for improved performance. As shown in Fig. 10, the Delaunay triangulation and SVM classification based fusion is better than SVM match score fusion at low FAR because the proposed fusion algorithm computes the most discriminative fingerprint features and classifies them using the non-linear 2n-SVM decision algorithm.

The algorithms are also statistically validated using the half total error rate (HTER) [26,27], defined as

HTER = (FAR + FRR) / 2    (7)

Confidence intervals (CI) are computed around the HTER as HTER ± σ · Z_{α/2}, where σ and Z_{α/2} are computed using Eqs. (8) and (9) [27]:

σ = sqrt( FAR(1 − FAR) / (4 · NI) + FRR(1 − FRR) / (4 · NG) )    (8)

Z_{α/2} = 1.645 for 90% CI, 1.960 for 95% CI, 2.576 for 99% CI    (9)
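Eqs. (7)-(9) are straightforward to implement. A minimal sketch (the function name and argument order are our own), where `n_impostor` and `n_genuine` correspond to NI and NG:

```python
import math

# z_{alpha/2} values from Eq. (9).
Z_ALPHA_2 = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def hter_with_ci(far, frr, n_impostor, n_genuine, level=0.99):
    """HTER (Eq. (7)) and the half-width of its confidence
    interval HTER +/- sigma * z_{alpha/2} (Eqs. (8)-(9)).
    Rates are fractions in [0, 1], not percentages."""
    hter = (far + frr) / 2.0
    sigma = math.sqrt(far * (1.0 - far) / (4.0 * n_impostor)
                      + frr * (1.0 - frr) / (4.0 * n_genuine))
    return hter, sigma * Z_ALPHA_2[level]
```

For example, FAR = 0.2 and FRR = 0.1 with 100 scores of each kind give an HTER of 0.15 with a 95% interval of ±0.049.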

Here, NG is the total number of genuine scores and NI is the total number of impostor scores. Table 1 summarizes the results of the statistical evaluation in terms of the minimum, maximum, and average HTER over 20 cross-validation trials. The proposed Delaunay triangulation and SVM classification based fusion algorithm yields the lowest and most stable HTER, whereas the HTER values of the other algorithms vary significantly across trials. For example, SVM match score fusion yields maximum and minimum HTERs of 3.12% and 1.84%, respectively; for the proposed fusion algorithm these values are 1.88% and 1.76%. Further, at 99% confidence, the best confidence interval of 1.82 ± 0.22% is observed for the proposed fusion algorithm. The statistical results therefore further corroborate the experimental results and the effectiveness of the proposed Delaunay triangulation and SVM classification based fusion algorithm. We also evaluate the average verification time of the different algorithms on a Pentium-IV 3.2 GHz machine with 1 GB RAM using Matlab. The proposed level-3 pore and ridge feature extraction and matching algorithm takes 9 s, Kryszczuk's algorithm [4,5] requires 12 s, and Jain's algorithm [7] requires 34 s. Further, the fusion algorithm takes only 15 s for verification, which includes feature extraction, computation of the feature supervector, and 2n-SVM classification.

6. Conclusion

With the availability of high resolution fingerprint sensors, salient features such as pores and ridge contours are prominently visible. In this paper, these conspicuous level-3 features are combined with the commonly used level-2 minutiae to improve the verification accuracy. The gallery and probe fingerprint images are registered using a two-stage registration process. A feature extraction algorithm is proposed to extract detailed level-3 pore and ridge features using the Mumford–Shah curve evolution approach. The paper further proposes fusing level-2 and level-3 features using a Delaunay triangulation algorithm that computes a feature supervector containing eight topological measures. Final classification is performed using a 2n-SVM learning algorithm. The proposed algorithm tolerates minor deformation of features and non-linearity in the fingerprint information. Experimental results and statistical evaluation on a high resolution fingerprint database show the efficacy of the proposed feature extraction and fusion algorithms, which perform better than existing recognition and fusion algorithms. We plan to extend the proposed Delaunay triangulation and SVM classification based fusion algorithm by incorporating an image quality measure and by studying the effect of including other level-3 features such as dots, incipient ridges, and scars in a structured manner. Future work will also characterize the performance of level-3 features on a comprehensive large scale database containing fingerprint images of varying size and quality.

Acknowledgment

This research is supported in part through a grant (Award no. 2003-RC-CX-K001) from the Office of Science and Technology, National Institute of Justice, Office of Justice Programs, United States Department of Justice. The authors thank the reviewers for providing useful suggestions and raising thoughtful questions.

References

[1] D. Maltoni, D. Maio, A.K. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, Berlin, 2003.
[2] Working draft of CDEFFS: the ANSI/NIST committee to define an extended fingerprint feature set, 2008. Available at <http://fingerprint.nist.gov/standard/cdeffs/index.htm>.
[3] A.R. Roddy, J.D. Stosz, Fingerprint features—statistical analysis and system performance estimates, Proceedings of the IEEE 85 (9) (1997) 1390–1421.
[4] K. Kryszczuk, A. Drygajlo, P. Morier, Extraction of level 2 and level 3 features for fragmentary fingerprints, in: Proceedings of the Second COST275 Workshop, 2004, pp. 83–88.
[5] K. Kryszczuk, P. Morier, A. Drygajlo, Study of the distinctiveness of level 2 and level 3 features in fragmentary fingerprint comparison, in: Proceedings of ECCV International Workshop on Biometric Authentication, 2004, pp. 124–133.

[6] P. Meenen, A. Ashrafi, R. Adhami, The utilization of a Taylor series-based transformation in fingerprint verification, Pattern Recognition Letters 27 (14) (2006) 1606–1618.
[7] A.K. Jain, Y. Chen, M. Demirkus, Pores and ridges: high resolution fingerprint matching using level 3 features, IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (1) (2007) 15–27.
[8] A. Ross, S. Dass, A.K. Jain, Fingerprint warping using ridge curve correspondences, IEEE Transactions on Pattern Analysis and Machine Intelligence 28 (1) (2006) 19–30.
[9] X.D. Jiang, W.Y. Yau, W. Ser, Detecting the fingerprint minutiae by adaptive tracing the gray level ridge, Pattern Recognition 34 (5) (2001) 999–1013.
[10] A. Jain, R. Bolle, L. Hong, Online fingerprint verification, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (4) (1997) 302–314.
[11] A. Tsai, A. Yezzi Jr., A. Willsky, Curve evolution implementation of the Mumford–Shah functional for image segmentation, denoising, interpolation, and magnification, IEEE Transactions on Image Processing 10 (8) (2001) 1169–1186.
[12] T. Chan, L. Vese, Active contours without edges, IEEE Transactions on Image Processing 10 (2) (2001) 266–277.
[13] T. Pavlidis, Algorithms for Graphics and Image Processing, Computer Science Press, 1982.
[14] A. Ross, K. Nandakumar, A.K. Jain, Handbook of Multibiometrics, Springer, Berlin, 2006.
[15] A. Ross, S. Shah, J. Shah, Image versus feature mosaicing: a case study in fingerprints, in: Proceedings of SPIE Conference on Biometric Technology for Human Identification III, 2006, pp. 620208-1–620208-12.
[16] M. Vatsa, R. Singh, A. Noore, M.M. Houck, Quality-augmented fusion of level-2 and level-3 fingerprint information using DSm theory, International Journal of Approximate Reasoning 50 (1) (2009) 51–61.
[17] M. Tuceryan, T. Chorzempa, Relative sensitivity of a family of closest-point graphs in computer vision applications, Pattern Recognition 24 (5) (1991) 361–373.

[18] G. Bebis, T. Deaconu, M. Georgiopoulos, Fingerprint identification using Delaunay triangulation, in: Proceedings of IEEE International Conference on Intelligence, Information and Systems, 1999, pp. 452–459.
[19] A. Ross, R. Mukherjee, Augmenting ridge curves with minutiae triplets for fingerprint indexing, in: Proceedings of SPIE Conference on Biometric Technology for Human Identification IV, vol. 6539, 2007, p. 65390C.
[20] B. Bhanu, X. Tan, Fingerprint indexing based on novel features of minutiae triplets, IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (5) (2003) 616–622.
[21] V.N. Vapnik, The Nature of Statistical Learning Theory, Springer, Berlin, 1995.
[22] R. Singh, M. Vatsa, A. Noore, Intelligent biometric information fusion using support vector machine, in: Soft Computing in Image Processing: Recent Advances, Springer, Berlin, 2006, pp. 327–350 (Chapter 12).
[23] R. Singh, M. Vatsa, A. Noore, Improving verification accuracy by synthesis of locally enhanced biometric images and deformable model, Signal Processing 87 (11) (2007) 2746–2764.
[24] H.G. Chew, C.C. Lim, R.E. Bogner, An implementation of training dual-n support vector machines, in: L. Qi, K.L. Teo, X. Yang (Eds.), Optimization and Control with Applications, Kluwer, Dordrecht, 2004, pp. 157–182.
[25] A. Ross, R. Govindarajan, Feature level fusion using hand and face biometrics, in: Proceedings of SPIE Conference on Biometric Technology for Human Identification II, 2005, pp. 196–204.
[26] E.S. Bigün, J. Bigün, B. Duc, S. Fischer, Expert conciliation for multimodal person authentication systems by Bayesian statistics, in: Proceedings of Audio- and Video-based Biometric Person Authentication, 1997, pp. 291–300.
[27] S. Bengio, J. Mariéthoz, A statistical significance test for person authentication, in: Proceedings of Odyssey: The Speaker and Language Recognition Workshop, 2004, pp. 237–244.