
Neurocomputing 69 (2006) 2336–2339 www.elsevier.com/locate/neucom

Letters

A deformation-invariant image-based fingerprint verification system

Loris Nanni, Alessandra Lumini
DEIS, IEIIT-CNR, Università di Bologna, Viale Risorgimento 2, 40136 Bologna, Italy

Received 12 September 2005; received in revised form 5 May 2006; accepted 7 May 2006. Communicated by R.W. Newcomb. Available online 3 June 2006.

Abstract

We show that the problem of fingerprint deformation can be handled by partitioning the region of interest around the reference point into smaller sub-images and using each sub-image to create a different template. The experiments show that our system outperforms the standard 'FingerCode' recognition method and other image-based approaches. Combining the matching score generated by the proposed technique with that obtained from a minutiae-based matcher results in an overall improvement in the performance of fingerprint matching. © 2006 Elsevier B.V. All rights reserved.

Keywords: Fingerprint verification; FingerCode; Image-based matcher

1. Introduction

Various approaches to automatic fingerprint matching have been proposed in the literature. Fingerprint-matching techniques may be broadly classified as minutiae-based, correlation-based or image-based [1] (for a good survey see [6]). Minutiae-based approaches first extract the minutiae from the fingerprint images; the matching between two fingerprints is then made using the two sets of minutiae locations. Image-based approaches extract the features directly from the raw grey-level fingerprint image and base the decision on these features. Moreover, image-based approaches may be the only viable choice, for instance, when image quality is too low to allow reliable minutia extraction. For dissimilarity matching in image-based approaches, many papers employ a simple Euclidean distance metric instead of more complex classifiers, because those works give more relevance to the effectiveness of the feature extraction than to the classification technique [10,5]. Fingerprint matching is affected by the non-linear distortion introduced in fingerprint impressions during the image


acquisition process, due to the non-uniform finger pressure applied by the subject on the sensor and the elastic nature of the human skin. This non-linear deformation causes fingerprint features such as minutiae points and ridge curves to be distorted in a complex manner. For reliable matching, these non-linear deformations must be accounted for prior to comparing two fingerprint images. Techniques to handle such distortions have been suggested in the literature (see [9], Section 4.5 and [7]). In this work, the region of interest around the reference point is partitioned into smaller sub-images (the different bands). The features extracted from each sub-image are used for creating a different template. Consequently, variations in the image will only affect some sub-images, and thus the local information may be better represented.

2. Fingerprint verification

The Training Stage proposed in this work consists of six modules (see the sketch after these lists):
- Enhancement: the image is enhanced using the technique described in [2];
- Core Detection: the core is extracted by a Poincaré-based algorithm [9];
- Orientation Computation: we correct the orientation of the image as in [7];
- Extra-images generation: creation of other images of the same fingerprint using morphological operators;
- Feature Extraction: the features are extracted using 'FingerCode' [7];
- Creation of the Template by Parzen Window Classifier (PWC): during the learning phase a final template is created for each individual using a 'one-class' PWC [3].

The Verification Stage consists of six modules: Enhancement, Core Detection, Orientation Computation, Extra-images generation, Feature Extraction, and Classification, where a similarity between the input fingerprint and each template is calculated.
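To make the flow of the two stages concrete, the following minimal Python sketch chains the modules together. It is an illustration only: every helper passed in (enhance, detect_core, and so on) is a hypothetical placeholder standing in for the cited techniques, not the authors' implementation (which was written in Matlab).

```python
# Hedged sketch of the training and verification pipelines described above.
# All helper callables are hypothetical placeholders for the cited methods.

def build_template(images, enhance, detect_core, correct_orientation,
                   generate_extra_images, extract_fingercode, fit_parzen):
    """Training stage: one template (a one-class Parzen model) per individual."""
    features = []
    for img in images:
        img = enhance(img)                      # enhancement [2]
        core = detect_core(img)                 # Poincare-based core detection [9]
        img = correct_orientation(img, core)    # orientation correction [7]
        for variant in generate_extra_images(img, core):    # Sec. 3.1
            features.append(extract_fingercode(variant, core))  # FingerCode [7]
    return fit_parzen(features)                 # one-class PWC template [3]

def verify(img, template, enhance, detect_core, correct_orientation,
           generate_extra_images, extract_fingercode):
    """Verification stage: similarity between an input fingerprint and a template."""
    img = enhance(img)
    core = detect_core(img)
    img = correct_orientation(img, core)
    scores = [template.score(extract_fingercode(v, core))
              for v in generate_extra_images(img, core)]
    return max(scores)   # 'max rule' over the extra images, cf. Sec. 3.3
```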

3. Methods

3.1. Extra-images generation

For each image, we derive multiple samples by perturbing its core coordinates (by a number of pixels equal to r/2, where r is the average inter-ridge spacing of the fingerprints of a given database) following an eight-neighbours deviation scheme. As shown in Fig. 1, there are nine possible positions for the core. Moreover, the original image is rotated by angles of -11.25° and +11.25°, and further images are generated.

[Fig. 1. Eight-neighbours deviation; the core is detected by the Poincaré-based algorithm.]
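A minimal sketch of this extra-image generation, under stated assumptions: the core is displaced to its eight neighbours at distance r/2 (nine positions in total), and the whole image is rotated by ±11.25°. Here scipy.ndimage.rotate stands in for whatever rotation routine the authors used, and rotating about the image centre (rather than the core) is a simplification of this sketch.

```python
from scipy.ndimage import rotate

def generate_extra_images(image, core_xy, r):
    """Derive 'virtual' samples from one impression (Sec. 3.1).

    core_xy : (x, y) core coordinates found by the core detector.
    r       : average inter-ridge spacing of the database.
    Returns (image, perturbed_core) pairs: the original core plus its eight
    neighbours displaced by r/2 (cf. Fig. 1), and two rotated copies.
    """
    d = r / 2.0
    samples = []
    for dx in (-d, 0.0, d):
        for dy in (-d, 0.0, d):
            samples.append((image, (core_xy[0] + dx, core_xy[1] + dy)))
    for angle in (-11.25, 11.25):
        # reshape=False keeps the image size so the core coordinates stay usable;
        # rotation about the image centre is an assumption of this sketch.
        samples.append((rotate(image, angle, reshape=False), core_xy))
    return samples
```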

3.2. Parzen Window Classifier (PWC)

In this method, the features extracted from the region of interest around the reference point are used for creating a template of the individual by a PWC. This classifier is implemented as in PrTools 3.1.7 [4]. Given a training set of an individual c composed by a set D_c of feature vectors (means of grey values in individual sectors of the filtered images) of its fingerprints (original images and the extra-images generated at step (4) of Section 2), a non-parametric estimation of its probability density function (pdf) is obtained by using a Parzen window [3] (chapters 4.1-4.3). The Parzen window density estimate can be used to approximate the probability density p(x) of a vector of continuous random variables X. It involves the superposition of a normalized window function centred on a set of samples. Given a set of n d-dimensional samples D = {x_1, x_2, ..., x_n}, the pdf estimate by the Parzen window is given by

$$\hat{p}(x) = \frac{1}{n}\sum_{i=1}^{n} \varphi(x - x_i; h), \qquad (1)$$

where φ(·;·) is the window function and h is the window width parameter (h = 6 in our tests). Parzen showed that \(\hat{p}(x)\) converges to the true density if φ(·;·) and h are selected properly. We use Gaussian kernels with covariance matrix Σ = I as window functions (det(·) denotes the determinant of a matrix):

$$\varphi(y; h) = \frac{1}{(2\pi)^{d/2}\, h^{d}\, \det(\Sigma)^{1/2}} \exp\!\left(-\frac{y^{\mathsf{T}}\Sigma^{-1}y}{2h^{2}}\right). \qquad (2)$$

We get the estimate of the conditional pdf \(\hat{p}(s|c)\) (where s is the feature vector extracted from a fingerprint) of each class c using the Parzen window method as

$$\hat{p}(s|c) = \frac{1}{\#D_c}\sum_{s_i \in D_c} \varphi(s - s_i; h), \qquad (3)$$

where #D_c denotes the cardinality of the set D_c. The verification procedure of the PWC is performed by evaluating Eq. (3). We denote by PWC2 the classifier where the scores \(\hat{p}(s|c)\) are normalized to sum to one as in [4] and [9], while we denote by PWC1 the variant where the scores are not normalized.
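A minimal numpy sketch of the one-class Parzen estimate of Eqs. (1)-(3), assuming Σ = I as stated in the text (so det(Σ) = 1) and h = 6 as reported. This is an illustrative re-implementation, not the PrTools code the authors used.

```python
import numpy as np

def parzen_score(s, D, h=6.0):
    """Parzen window estimate p_hat(s|c) of Eq. (3).

    s : d-dimensional feature vector (FingerCode of the query).
    D : (n, d) array of training feature vectors D_c of individual c.
    h : window width (h = 6 in the paper's tests).
    Gaussian window of Eq. (2) with Sigma = I, hence det(Sigma) = 1.
    """
    D = np.asarray(D, dtype=float)
    n, d = D.shape
    diff = D - np.asarray(s, dtype=float)         # rows are s - s_i
    sq = np.sum(diff * diff, axis=1)              # y^T Sigma^{-1} y with Sigma = I
    norm = (2.0 * np.pi) ** (d / 2.0) * h ** d    # normalisation of Eq. (2)
    return np.mean(np.exp(-sq / (2.0 * h * h)) / norm)   # average over D_c, Eq. (3)
```

PWC2 would additionally normalise these scores across all enrolled classes so that they sum to one; PWC1 uses them as returned here.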

3.3. Method to partition the region of interest around the reference point

Image-based approaches suffer from two types of distortion: (1) noise, caused by the capturing device or, e.g., by dirty fingers, and (2) non-linear distortions, which cause various sub-regions of the sensed image to be distorted differently due to the non-uniform pressure applied. The region of interest around the reference point is therefore partitioned into five smaller sub-images (the different bands), as depicted in Fig. 2. Consequently, variations in the image will only affect some sub-images, and thus the local information may be better represented.

[Fig. 2. The region of interest is partitioned into smaller sub-images (bands 1-5).]

In this work, we propose two methods for using the bands: COMB1 and COMB2.

COMB1: Let R_{j,c} be a training set of an individual c composed by concatenating the feature vectors of three bands of its fingerprints (original images and the extra-images generated at step (4)). For each combination of three bands we get the estimate of the conditional pdf of each class c using the Parzen window method, and we combine these classifiers using the 'sum rule' [6].

COMB2: Let B_{i,c} be a training set of an individual c composed by a set of feature vectors of the i-th band of its fingerprints (original images and the extra-images generated at step (4)). We get the estimate of the conditional pdf \(\hat{p}_{B_{i,c}}(s|c)\) of each class c using the Parzen window

method as

$$\hat{p}_{B_{i,c}}(s|c) = \frac{1}{\#B_{i,c}}\sum_{s_i \in B_{i,c}} \varphi(s - s_i; h), \qquad (4)$$

where #B_{i,c} denotes the cardinality of the set B_{i,c}. In our experiments we have K = 5 bands, hence five different non-parametric estimates \(\hat{p}_{B_{i,c}}(s|c)\) of each individual c, and we combine the classifiers using the 'sum rule' [6]. Please note that for each input fingerprint s our method generates d extra-images; we obtain a score for each of these d+1 images and combine the scores with the 'max rule'.
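Combining the two rules of this section gives the following hedged sketch: the 'sum rule' over the K = 5 per-band Parzen estimates of Eq. (4), then the 'max rule' over the d+1 images derived from the query. It reuses the illustrative parzen_score from Section 3.2 and assumes the features have already been split per band.

```python
def band_score(query_bands, band_templates, h=6.0):
    """'Sum rule' [6] over the K per-band Parzen estimates of Eq. (4).

    query_bands    : list of K feature vectors, one per band of the query image.
    band_templates : list of K arrays B_{i,c}, the per-band training vectors of
                     individual c (original images plus extra-images).
    """
    return sum(parzen_score(s, B, h) for s, B in zip(query_bands, band_templates))

def match_score(query_variants, band_templates, h=6.0):
    """'Max rule' over the d+1 images (the query plus its d extra-images)."""
    return max(band_score(bands, band_templates, h) for bands in query_variants)
```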

3.4. Multiclassifiers

We simply use the 'mean rule' [6] and the Dempster-Shafer combination [6] to fuse the score from our image-based method with that of a commercial minutiae-based matcher [11]. Whenever training was needed for the fusion schemes (i.e., Dempster-Shafer), twofold cross-validation was used.
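A minimal sketch of the mean-rule fusion follows, under the assumption that both matchers' scores are first mapped onto a comparable range (e.g., min-max bounds estimated on a validation set); the paper does not detail its normalisation step, and the trained Dempster-Shafer combination is omitted here.

```python
def minmax_normalise(score, lo, hi):
    """Map a raw matcher score onto [0, 1]; lo/hi come from a validation set."""
    return (score - lo) / (hi - lo)

def mean_rule_fusion(image_score, minutiae_score,
                     image_range=(0.0, 1.0), minutiae_range=(0.0, 1.0)):
    """'Mean rule' [6]: average of the two normalised matching scores.

    The normalisation ranges are assumptions of this sketch, not values
    reported in the paper.
    """
    a = minmax_normalise(image_score, *image_range)
    b = minmax_normalise(minutiae_score, *minutiae_range)
    return 0.5 * (a + b)
```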

4. Experiments and discussion

In this paper, the algorithm is evaluated on images taken from FVC2002, which is available on the DVD included in [9]. FVC2002 provides four different fingerprint databases: Db1, Db2, Db3 and Db4. In our verification stage, the comparison of two fingerprints must be based on the same core point; the comparison can therefore only be done if both fingerprint images contain their respective core points, but two out of the eight impressions for each finger in FVC2002 [8] have an exaggerated displacement. In our experiments, as in [10], these two impressions were excluded; hence there are only six impressions per finger, yielding 600 fingerprint images in total for each database.

We recall that the false acceptance rate (FAR) is the frequency of fraudulent accesses, and the false rejection rate (FRR) is the frequency of rejections of people who should be correctly verified. For the performance evaluation we adopt the equal error rate (EER) [8], that is, the error rate at which FAR and FRR assume the same value (a small sketch of this computation follows below). The experiments were conducted by using 1-2 training images per person to form a template, while the other impressions are treated as testing images. In Fig. 3, we show that our systems obtain improvements in comparison with the state-of-the-art image-based fingerprint matchers [5,7]. Two further remarks are in order:

(1) To make the comparison with FingerCode fair, the input image of FingerCode is first enhanced using [2].
(2) In Db3, the binarized shape of the finger can be approximated by a circle (and not an ellipse); hence our 'Orientation Computation' is not used.

In Fig. 3, NEW1 denotes the system COMB1 with the classifier PWC1; NEW2 denotes COMB2 with PWC1; NEW3 denotes COMB1 with PWC2; NEW4 denotes COMB2 with PWC2; MAIO denotes the system proposed in [7].

The fusion tests are reported in Fig. 4: BIOM is the commercial minutiae matcher [11]; NEW_FUS1 is the fusion between NEW4 and BIOM by mean rule; NEW_FUS2 is the fusion between NEW4 and BIOM by Dempster-Shafer; MAIO_FUS1 is the fusion between MAIO and BIOM by mean rule; MAIO_FUS2 is the fusion between MAIO and BIOM by Dempster-Shafer. Please note that we have used only Db2 because the matcher BioM works only with the images of that database. In Fig. 4, '1' refers to the performance obtained using only one training image, '2' to the performance obtained using two training images.
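For reference, here is a small sketch of how the EER can be estimated from sets of genuine and impostor scores by sweeping a decision threshold; this is the standard computation, not code from the paper.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER: the operating point where FAR equals FRR.

    genuine  : match scores of attempts that should be accepted.
    impostor : match scores of attempts that should be rejected.
    Higher scores are assumed to indicate a better match.
    """
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])  # false accepts
    frr = np.array([np.mean(genuine < t) for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))        # threshold where the two rates cross
    return (far[i] + frr[i]) / 2.0
```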

Fig. 3. EER (%) obtained using the image-based matchers (Mean is the average EER over the four DBs); panel (a) reports the performance obtained using one training image, panel (b) using two training images.

(a) One training image
        FC      Maio    New1    New2    New3    New4
Db1     12.5    5       5       8       4       3.1
Db2     11.7    3.99    4.7     7.3     3.6     2.5
Db3     29      29      19      23      19      16
Db4     18      10.1    16      16      16      8
Mean    17.80   12.02   11.18   13.58   10.65   7.40

(b) Two training images
        FC      Maio    New1    New2    New3    New4
Db1     9.2     3.5     4.5     7.3     3       1.4
Db2     8.7     3.3     4.1     6.2     2.3     0.99
Db3     21      26      16      18      16      12
Db4     12.4    7.1     12      15      12      5.6
Mean    12.83   9.98    9.15    11.63   8.33    5.00


Fig. 4. EER (%) obtained on Db2 by combining the image-based matchers and the minutiae-based matcher; row '1' uses one training image, row '2' uses two.

     BioM    Maio_FUS1    Maio_FUS2    New_FUS1    New_FUS2
1    1.6     0.79         0.81         0.59        0.59
2    1       0.75         0.61         0.49        0.39

5. Conclusions

We have introduced a new image-based fingerprint matcher that dramatically outperforms the state-of-the-art image-based fingerprint matchers [5,10,7]. We show that deriving multiple 'virtual' samples from each original sample in the training and test sets, by perturbing the position of the core point, yields a more reliable system. The creation of the templates of an individual takes 4 s on a Pentium IV 2400 MHz processor (Matlab code). The training stage of the PWC takes 0.2 s for a dataset of 100 individuals, while the testing stage of the PWC takes 0.1 s for each individual.

References

[1] A.M. Bazen, G.T.B. Verwaaijen, S.H. Gerez, L.P.J. Veelenturf, B.J. van der Zwaag, A correlation-based fingerprint verification system, in: Proceedings of the Workshop on Program for Research on Integrated Systems and Circuits (ProRISC), Veldhoven, The Netherlands, 29 November-1 December 2000, pp. 205-213.


[2] S. Chikkerur, C. Wu, V. Govindaraju, A systematic approach for feature extraction in fingerprint images, in: International Conference on Bioinformatics and its Applications, Fort Lauderdale, FL, USA, 16-19 December 2004, pp. 344-350.
[3] R. Duda, P. Hart, D. Stork, Pattern Classification, Wiley, New York, 2000.
[4] ftp://ftp.ph.tn.tudelft.nl/pub/bob/prtools/prtools3.1.7/
[5] A.K. Jain, S. Prabhakar, L. Hong, S. Pankanti, Filterbank-based fingerprint matching, IEEE Trans. Image Process. 9 (5) (2000) 846-859.
[6] L.I. Kuncheva, J.C. Bezdek, R.P.W. Duin, Decision templates for multiple classifier fusion: an experimental comparison, Pattern Recognition 34 (2001) 299-314.
[7] D. Maio, L. Nanni, An efficient fingerprint verification system using integrated Gabor filters and Parzen Window Classifier, Neurocomputing 68 (2005) 208-216.
[8] D. Maio, D. Maltoni, R. Cappelli, J.L. Wayman, A.K. Jain, FVC2002: second fingerprint verification competition, in: 16th International Conference on Pattern Recognition, Quebec City, QC, Canada, 11-15 August 2002, vol. 3, pp. 811-814.
[9] D. Maio, D. Maltoni, A.K. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, New York, 2003.
[10] A. Teoh, D. Ngo, O.T. Song, An efficient fingerprint verification system using integrated wavelet and Fourier-Mellin invariant transform, Image Vision Comput. 22 (6) (2004) 503-513.
[11] www.biometrika.it

Loris Nanni received his Master's Degree cum laude in 2002 from the University of Bologna, and on April 26th, 2006 he received his Ph.D. in Computer Engineering at DEIS, University of Bologna. His research interests include pattern recognition and biometric systems (fingerprint classification and recognition, signature verification, face recognition).

Alessandra Lumini received a degree in Computer Science from the University of Bologna, Italy, on March 26th, 1996. In 1998 she started her Ph.D. studies at DEIS, University of Bologna, and in 2001 she received her Ph.D. degree for her work on "Image Databases". She is now an Associate Researcher at the University of Bologna and a member of the BIAS Research Group at the Department of Computer Science of the University of Bologna (Cesena). She is interested in biometric systems (particularly fingerprint classification), multidimensional data structures, digital image watermarking and image generation.