ARTICLE IN PRESS
Neurocomputing 68 (2005) 208–216 www.elsevier.com/locate/neucom
Letters
An efficient fingerprint verification system using integrated Gabor filters and Parzen Window Classifier

Dario Maio, Loris Nanni

DEIS, IEIIT—CNR, Università di Bologna, Viale Risorgimento 2, 40136 Bologna, Italy

Received 5 April 2005; received in revised form 2 May 2005; accepted 3 May 2005. Available online 18 July 2005. Communicated by R.W. Newcomb.
Abstract

This paper proposes a novel method of image-based fingerprint matching based on the features extracted by "FingerCode". The experiments show that our system outperforms the standard "FingerCode" recognition method and other image-based approaches. Combining the matching score generated by the proposed technique with that obtained from a minutiae-based matcher results in an overall improvement in fingerprint matching performance. © 2005 Elsevier B.V. All rights reserved.

Keywords: Fingerprint authentication; FingerCode; Image-based matcher
1. Introduction

Various approaches to automatic fingerprint matching have been proposed in the literature. Fingerprint matching techniques may be broadly classified as minutiae-based, correlation-based or image-based [2] (for a good survey see [9]).
Corresponding author. E-mail address: [email protected] (L. Nanni).

0925-2312/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.neucom.2005.05.003
Minutiae-based approaches first extract the minutiae from the fingerprint images; the matching between two fingerprints is then made using the two sets of minutiae locations. Image-based approaches usually extract the features directly from the raw image, since a grey-level fingerprint image is always available; the decision is then made using these features. Moreover, image-based approaches may be the only viable choice when, for instance, image quality is too low to allow reliable minutiae extraction. For dissimilarity matching in the image-based approaches, many papers employ a simple Euclidean distance metric instead of more complex classifiers, because the authors give more relevance to the effectiveness of the feature extraction than to the classification technique in the experiments [6,12]. In this work, we show that a careful choice of machine-learning methods can drastically improve performance, reducing the equal error rate (EER). The approach presented here combines the features extracted by "FingerCode" [6] with a Parzen Window Classifier (PWC) [4].

The outline of the paper is as follows: Sections 2 and 3 provide a short review of the proposed system and of the methodologies tested in this work, respectively. Section 4 presents the experimental results.
2. Fingerprint verification

The training stage proposed in this work, as depicted in Fig. 1, consists of six modules:
Enhancement: the image is enhanced using the technique described in [3];
Core Detection: the core is extracted by a Poincaré-based algorithm;
Orientation Computation: we correct the orientation of the image;
Extra-images Generation: creation of other images of the same fingerprint using morphological operators;
Feature Extraction: the features are extracted using "FingerCode";
Fig. 1. System proposed.
Creation of the Template by PWC: during the learning phase, a final template is created for each individual using a "one-class" PWC.
The verification stage proposed in this work consists of five modules:

Enhancement;
Core Detection;
Orientation Computation;
Feature Extraction;
Classification: a similarity between the input fingerprint and each template is calculated.
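The two stages above can be sketched as a function pipeline. All function bodies below are illustrative stubs (the real modules are described in Section 3 and in [3,6]), and the similarity shown is a placeholder (negative Euclidean distance) rather than the paper's Parzen-based score:

```python
import numpy as np

# Stub modules: each stands in for the corresponding pipeline step;
# the real implementations are described in Sections 3.1-3.4.
def enhance(img):
    return img  # Fourier-domain contextual filtering [3]

def correct_orientation(img):
    return img  # ellipse major-axis alignment (Sec. 3.2)

def extract_fingercode(img):
    return img.ravel()[:16].astype(float)  # Gabor-bank features (Sec. 3.4)

def verify(img, templates):
    """Verification stage: one similarity score per enrolled identity.
    Placeholder similarity; the paper uses the Parzen estimate of Eq. (3)."""
    f = extract_fingercode(correct_orientation(enhance(img)))
    return {c: -np.linalg.norm(f - t) for c, t in templates.items()}
```

The dictionary of per-identity scores mirrors the paper's Classification module, where the input fingerprint is compared against every stored template.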
3. Methods

3.1. Enhancement

Since the input image may be noisy, it is first enhanced before applying the feature extraction. Enhancement improves the clarity of the ridge and furrow structure in the fingerprint image. We use the technique described in [3], where the authors propose a Fourier-domain, block-wise contextual filtering approach for enhancing fingerprint images. The image is filtered in the Fourier domain by a frequency- and orientation-selective filter, whose parameters are based on the estimated local ridge orientation and frequency.

3.2. Orientation computation

The binarized shape of the finger can be approximated by an ellipse. The parameters of the best-fitting ellipse, for a given binary shape, are computed using the image moments [1]. The orientation of the binarized image is approximated by the major axis of the ellipse, and the required angle of rotation is the difference between the normal and the orientation of the image.

3.3. Extra-images generation

Morphological operators [5] are applied to the fingerprint to create extra-images. In particular, the erode and dilate operators are adopted. Moreover, the original image is rotated by angles of 22.5° and −22.5°, and other images are thus generated.

3.4. Feature extraction (FE)

The four main steps of the FingerCode [6] feature extraction algorithm are:
(1) determine a reference point and region of interest for the fingerprint image;
(2) tessellate the region of interest around the reference point;
(3) filter the region of interest in eight different directions using a bank of Gabor filters (eight directions are required to completely capture the local ridge characteristics in a fingerprint, while only four directions are required to capture the global configuration);
(4) compute the average absolute deviation from the mean of grey values in individual sectors of the filtered images to define the feature vector, or FingerCode.

3.5. FingerParzen (FP)

In this method, the features extracted from the region of interest around the reference point are used to create a template of the individual by PWC. Given a training set of an individual c, composed of a set D_c of feature vectors (average absolute deviations of grey values in individual sectors of the filtered images) of its fingerprints (the original images and the extra-images generated by module 4), a non-parametric estimate of its probability density function (pdf) is obtained using the Parzen window method [4] (Chapters 4.1–4.3). The Parzen window density estimate approximates the probability density p(x) of a vector of continuous random variables X by superposing a normalized window function centred on a set of samples. Given a set of n d-dimensional samples D = {x_1, x_2, ..., x_n}, the Parzen window estimate of the pdf is

\hat{p}(x) = \frac{1}{n} \sum_{i=1}^{n} \varphi(x - x_i, h),   (1)

where \varphi(\cdot,\cdot) is the window function and h is the window-width parameter (h = 25 in our tests). Parzen showed that \hat{p}(x) converges to the true density if \varphi(\cdot,\cdot) and h are selected properly. We use Gaussian kernels with covariance matrix \Sigma = I as the window function (det(\cdot) denotes the determinant of the matrix):

\varphi(y, h) = \frac{1}{(2\pi)^{d/2} h^{d} \det(\Sigma)^{1/2}} \exp\!\left( -\frac{y^{T} \Sigma^{-1} y}{2h^{2}} \right).   (2)

We estimate the conditional pdf \hat{p}(s \mid c) of each class c using the Parzen window method as

\hat{p}(s \mid c) = \frac{1}{\#D_c} \sum_{s_i \in D_c} \varphi(s - s_i, h),   (3)

where \#D_c denotes the cardinality of the set D_c. The verification procedure of the PWC is performed by evaluating Eq. (3). The scores \hat{p}(s \mid c) are normalized to sum to one, as in [8,13]. This classifier is implemented as in PrTools 3.1.7 [13].

3.6. FingerParzen 1 quadrant (F1Q), FingerParzen 2 quadrants (F2Q) and FingerParzen 3 quadrants (F3Q)

Image-based approaches suffer from two types of distortions: (1) noise, caused by the capturing device or, e.g., by dirty fingers, and (2) non-linear distortions.
Fig. 2. The region of interest is partitioned into four smaller sub-images.
This causes various sub-regions of the sensed image to be distorted differently, due to the non-uniform pressure applied during acquisition. In these methods (F1Q, F2Q and F3Q), the region of interest around the reference point is partitioned into four smaller sub-images (the four quadrants), as depicted in Fig. 2. Consequently, local variations in the image will only affect some sub-images, and thus the local information may be better represented.

In F1Q, the features extracted from each single sub-image are used to create a different template of the individual by means of PWC. In F2Q, the features extracted from each combination of two sub-images are used to create a different template; in F3Q, each combination of three sub-images is used. For example, in F3Q we have four combinations: 1st, 2nd and 3rd quadrants; 1st, 2nd and 4th quadrants; 1st, 3rd and 4th quadrants; 2nd, 3rd and 4th quadrants. The template created using the first combination is obtained by training a PWC with the features extracted from the related quadrants (1st, 2nd, 3rd) of the original images and of the extra-images generated by module 4. Please note that the features used in F1Q, F2Q and F3Q represent the average absolute deviations from the mean of grey values in individual sectors of the filtered images, as in FP or in FingerCode. We simply use the "max rule" [7] to combine the scores from the different templates of each individual.

3.7. Multiclassifiers

We use the "mean rule" and Dempster-Shafer [7] to combine the scores from FingerParzen and a commercial minutiae matcher [14]. Whenever training was needed for the fusion schemes (i.e., Dempster-Shafer), 2-fold cross-validation was used.
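The quadrant-combination scheme of Section 3.6 can be sketched as follows. The interface is hypothetical (the paper does not publish code): `quadrant_score_fns` maps each 3-quadrant combination to the scoring function of the PWC trained on those quadrants, and the max rule [7] fuses the four resulting scores:

```python
from itertools import combinations

def f3q_match_score(quadrant_score_fns, probe_quadrants):
    """F3Q sketch: score the probe against the template of every
    3-out-of-4 quadrant combination, then fuse with the max rule [7]."""
    scores = []
    for combo in combinations(range(4), 3):
        feats = [probe_quadrants[q] for q in combo]  # features of the chosen quadrants
        scores.append(quadrant_score_fns[combo](feats))
    return max(scores)  # max rule
```

F1Q and F2Q follow the same pattern with `combinations(range(4), 1)` and `combinations(range(4), 2)`, respectively.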
4. Experiments and discussion

In this paper, the algorithm is evaluated on images taken from FVC2002, which is available on the DVD included in [9]. FVC2002 provided four different fingerprint databases: Db1, Db2, Db3 and Db4. Three of these databases were acquired by various
sensors: low-cost and high-quality, optical and capacitive. Ninety volunteers were randomly partitioned into three groups (30 persons each); each group was associated with a database, and therefore with a different fingerprint scanner. The fourth database contains synthetically generated images.

In our verification stage, the comparison of two fingerprints must be based on the same core point. The comparison can only be done if both fingerprint images contain their respective core points, but two out of the eight impressions for each finger in FVC2002 [10] have an exaggerated displacement. In our experiments, as in [12], these two impressions were excluded; hence, there are only six impressions per finger, yielding 600 fingerprint images in total for each database.

We recall that the false acceptance rate (FAR) is the frequency of fraudulent accesses, and the false rejection rate (FRR) is the frequency of rejections of people who should be correctly verified. For the performance evaluation we adopt the EER [10], that is, the error rate at the operating point where FAR and FRR assume the same value; it can be adopted as a single measure characterizing the security level of a biometric system. The experiments were conducted by using 1–2 training images per person to form a template, while the other impressions are treated as testing images. The value 1 or 2 in the first column of Tables 1–7 is the number of the images used, for each
Table 1. EER obtained using the FingerCode matcher.

       Db1    Db2    Db3    Db4
1      12.5   11.7   29     18
2      9.2    8.7    21     12.4

Table 2. EER obtained by FP.

       Db1    Db2    Db3    Db4
1      7      6.3    27     16
2      4.5    3.4    25     11

Table 3. EER obtained by F1Q.

       Db1    Db2    Db3    Db4
1      5.5    5.1    31     9.8
2      3.8    3.4    27     5.8

Table 4. EER obtained by F2Q.

       Db1    Db2    Db3    Db4
1      5      3.99   29     10.1
2      3.5    3.3    26     7.1

Table 5. EER obtained by F3Q.

       Db1    Db2    Db3    Db4
1      4.5    4.2    27     14
2      3.1    2.7    23     8

Table 6. EER obtained by [12].

       Db1    Db2           Db3           Db4
1      5.65   5.3           8.33          Not reported
2      3.97   Not reported  Not reported  Not reported

Table 7. EER obtained combining F3Q and BioM.

       Fus1   Fus2   BioM
1      0.79   0.81   1.6
2      0.75   0.61   1
person, in the training. Tables 1–6 show that our systems obtain improvements over the state-of-the-art image-based fingerprint matchers [6,12]. Please note:

(1) For a fair comparison with FingerCode, the input image of FingerCode is also first enhanced using [3];
(2) The test protocol used in [12] is easier than the one used in this work: in [12] only one impression per individual is treated as the testing image;
(3) On Db3 the performance of our systems is similar to that of "FingerCode". In this database the binarized shape of the finger can be approximated
by a circle (and not an ellipse); hence, our "Orientation Computation" is not very reliable.

The fusion tests are reported in Table 7:

BioM: commercial minutiae matcher;
Fus1: fusion between F3Q and BioM by the mean rule;
Fus2: fusion between F3Q and BioM by Dempster-Shafer.

Please note that we have used only Db2, because the BioM matcher works only with the images of that database.
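The EER figure of merit used throughout Tables 1–7 can be computed from the genuine and impostor score distributions of a matcher. A minimal sketch, assuming higher scores mean greater similarity (the function name and the threshold sweep are ours, not the paper's):

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Equal error rate: find the threshold where FAR (impostor scores
    accepted) is closest to FRR (genuine scores rejected), and return
    the mean of the two rates at that threshold."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = None
    for t in thresholds:
        far = np.mean(impostor >= t)  # fraudulent accesses accepted
        frr = np.mean(genuine < t)    # legitimate users rejected
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]
```

Sweeping only the observed scores as thresholds is sufficient here because FAR and FRR are step functions that change value only at those points.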
5. Conclusions

An integrated framework that combines a FingerCode-based feature extractor and a PWC is presented in this paper. The creation of the template of an individual (training stage) takes 3 s on a Pentium IV 2400 MHz processor (Matlab code), while processing an individual in the testing stage takes 0.4 s on the same computer. The training stage of the PWC takes 0.1 s for a dataset of 100 individuals, while the testing stage of the PWC takes 0.05 s per individual. Please note that the computation time of the training stage of the PWC is linear in the number of individuals.

Experimental results obtained on a large fingerprint database (FVC2002) show that our approach leads to a substantial improvement in the overall performance. It must be mentioned that the performance of the proposed technique is inferior to that of a minutiae-based matcher [9] (Section 4.3). However, when used alongside a minutiae matcher, an improvement in matching performance is observed.

The main drawback of our method is that detecting the core point is not an easy problem. In some images the core point may not even be present, or may lie close to the boundary of the image. To circumvent this problem, we plan to use the extracted features themselves to align the fingerprints (as in [11]).
Acknowledgements

This work has been supported by the Italian PRIN prot. 2004098034 and by the European Commission IST-2002-507634 BioSecure NoE projects.

References

[1] M. Baskan, M. Balut, V. Atalay, Projection-based method for segmentation of human face and its evaluation, Pattern Recognition Lett. 23 (14) (2002) 1623–1629.
[2] A.M. Bazen, G.T.B. Verwaaijen, S.H. Gerez, L.P.J. Veelenturf, B. Zwaag, A correlation-based fingerprint verification system, in: Proceedings of the Program for Research on Integrated Systems and Circuits (ProRISC), Veldhoven, The Netherlands, 2000, pp. 205–213.
[3] S. Chikkerur, C. Wu, V. Govindaraju, A systematic approach for feature extraction in fingerprint images, in: International Conference on Bioinformatics and its Applications, Fort Lauderdale, FL, USA, 16–19 December 2004, pp. 344–350.
[4] R. Duda, P. Hart, D. Stork, Pattern Classification, Wiley, New York, 2000.
[5] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, 2002.
[6] A.K. Jain, S. Prabhakar, L. Hong, S. Pankanti, Filterbank-based fingerprint matching, IEEE Trans. Image Process. 9 (5) (2000) 846–859.
[7] L.I. Kuncheva, J.C. Bezdek, R.P.W. Duin, Decision templates for multiple classifier fusion: an experimental comparison, Pattern Recognition 34 (2001) 299–314.
[8] S. Macskassy, H. Hirsch, F. Provost, V. Dhar, R. Sankaranarayanan, Information triage using prospective criteria, in: Workshop on Machine Learning, Information Retrieval and User Modeling, Sonthofen, Germany, 2001, pp. 1–10.
[9] D. Maio, D. Maltoni, A.K. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, New York, 2003.
[10] D. Maio, D. Maltoni, R. Cappelli, J.L. Wayman, A.K. Jain, FVC2002: second fingerprint verification competition, in: 16th International Conference on Pattern Recognition, Quebec City, QC, Canada, 11–15 August 2002, pp. 811–814.
[11] A. Ross, J. Reisman, A.K. Jain, Fingerprint matching using feature space correlation, in: European Conference on Computer Vision, Copenhagen, Denmark, 27 May–2 June 2002, pp. 48–57.
[12] A. Teoh, D. Ngo, O.T. Song, An efficient fingerprint verification system using integrated wavelet and Fourier–Mellin invariant transform, Image Vision Comput. 22 (6) (2004) 503–513.
[13] ftp://ftp.ph.tn.tudelft.nl/pub/bob/prtools/prtools3.1.7/
[14] www.biometrika.it
Loris Nanni is a Ph.D. candidate in Computer Engineering at the University of Bologna, Italy. He received his Master's degree cum laude in 2002 from the University of Bologna. In 2002, he started his Ph.D. in Computer Engineering at DEIS, University of Bologna. His research interests include pattern recognition and biometric systems (fingerprint classification and recognition, signature verification, face recognition).
Dario Maio is a Full Professor at the University of Bologna. He is Chair of the Cesena Campus and Director of the Biometric Systems Laboratory (Cesena, Italy). He has published over 150 papers in numerous fields, including distributed computer systems, computer performance evaluation, database design, information systems, neural networks, autonomous agents and biometric systems. He is author of the book "Biometric Systems, Technology, Design and Performance Evaluation", Springer, London, 2005, and of the "Handbook of Fingerprint Recognition", Springer, New York, 2003 (which received the PSP award from the Association of American Publishers). Before joining the University of Bologna, he received a fellowship from the C.N.R. (Italian National Research Council) for working on the Air Traffic Control Project. He received a degree in Electronic Engineering from the University of Bologna in 1975. He is an IEEE member. He is with DEIS and IEIIT-C.N.R.; he teaches database and information systems.