Partial Fingerprint Matching Using Minutiae and Ridge Shape Features for Small Fingerprint Scanners

Wonjune Lee, Sungchul Cho, Heeseung Choi, Jaihie Kim

Accepted manuscript, Expert Systems With Applications (2017)
PII: S0957-4174(17)30436-0
DOI: 10.1016/j.eswa.2017.06.019
Reference: ESWA 11390
Received: 31 January 2017; Revised: 1 June 2017; Accepted: 12 June 2017

Please cite this article as: Wonjune Lee, Sungchul Cho, Heeseung Choi, Jaihie Kim, Partial Fingerprint Matching Using Minutiae and Ridge Shape Features for Small Fingerprint Scanners, Expert Systems With Applications (2017), doi: 10.1016/j.eswa.2017.06.019


HIGHLIGHTS:

• New partial fingerprint matching for small sensors in mobile devices is proposed
• The method incorporates new ridge shape features (RSFs) in addition to minutiae
• RSFs represent small ridge segments where specific edge shapes are observed
• These edge shapes are detectable in conventional 500 dpi images of small sensors


Partial Fingerprint Matching Using Minutiae and Ridge Shape Features for Small Fingerprint Scanners

Wonjune Lee (a), Sungchul Cho (a), Heeseung Choi (b), and Jaihie Kim (a,*)

(a) School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, Republic of Korea. E-mail: {wonjune727, ikriel, jhkim}@yonsei.ac.kr
(b) Image Media Research Center, Korea Institute of Science and Technology, Seoul 136-130, Republic of Korea. E-mail: [email protected]
* Corresponding author: Jaihie Kim. E-mail: [email protected]; Tel: (82-2)2123-2869; Fax: (82-2)312-4584; Postal address: B619, 2nd Engineering Hall, 50 Yonsei-ro, Yonsei University, Seodaemun-gu, Seoul 120-749, Republic of Korea.

Abstract

Currently, most mobile devices adopt very small fingerprint sensors that only capture small partial fingerprint images. Accordingly, conventional minutiae-based fingerprint matchers are not capable of providing convincing results due to the insufficiency of minutiae. To secure diverse mobile applications such as those requiring privacy protection and mobile payments, a more accurate fingerprint matcher is demanded. This manuscript proposes a new partial fingerprint-matching method incorporating new ridge shape features (RSFs) in addition to the conventional minutia features. These new RSFs represent the small ridge segments where specific edge shapes (concave and convex) are observed, and they are detectable in conventional 500 dpi images. The RSFs are effectively utilized in the proposed matching scheme, which consists of minutiae matching and ridge-feature-matching stages. In the minutiae matching stage, corresponding minutia pairs are determined by comparing the local RSFs and minutiae adjacent to each minutia. During the subsequent ridge-feature-matching stage, the RSFs in the overlapped area of two images are further compared to enhance the matching accuracy. A final matching score is obtained by combining the resulting scores from the two matching stages. Various tests for partial matching were conducted on the FVC2002, FVC2004 and BERC (self-constructed) databases, and the proposed method shows significantly lower equal-error rates compared to other matching methods. The results show that the proposed method improves the accuracy of fingerprint recognition, especially for implementation in mobile devices where small fingerprint scanners are adopted.

Keywords— Minutiae, partial fingerprint, ridge shape feature (RSF), small fingerprint scanner

1. Introduction

Fingerprint recognition is one of the most reliable biometrics for user authentication. In recent years, its application has been significantly extended to privacy protection in mobile devices (smartphones, tablets, laptops, etc.). Furthermore, fingerprint recognition has been considered an important technique for user identification in mobile payments. Recent mobile devices adopt a miniaturized fingerprint scanner because of its compact size; i.e., the sensing area of most fingerprint readers is much smaller than 10 × 10 mm2. Thus, these readers capture only small partial fingerprint images. A partial fingerprint generally means a very small or incomplete part of a whole fingerprint, and in this paper, partial fingerprint matching means the matching between two fingerprint images captured by small fingerprint scanners. Although many approaches to fingerprint matching have been proposed, partial fingerprint matching for a small fingerprint scanner still remains a challenging task in fingerprint recognition, especially for mobile applications (Liu-Jimenez, Ros-Gomez, Sanchez-Reillo, & Fernandez-Saavedra, 2016).

Conventional fingerprint matching on normal-sized images is typically performed by using only the minutiae at the ridge endings or bifurcations (Maltoni, Maio, Jain, & Prabhakar, 2009). In Automated Fingerprint Identification Systems (AFIS), positive identification between two fingerprint images is generally made when more than 12 matched minutia pairs are observed (Maltoni et al., 2009). However, partial fingerprint images captured by a small fingerprint sensor contain far fewer minutiae than the number required for accurate identification. For example, several fingerprint images captured by a small sensor with a size of 6.9 × 6.5 mm2 contain only four minutia points (details are provided in Section 4.1.1). Jea & Govindaraju (2005) introduced a minutiae-based partial fingerprint matching method, but their matching result was still limited due to the deficiency in minutiae. On the other hand, commercial mobile fingerprint authentication systems require the registration of multiple impressions (typically up to 25 impressions) during the fingerprint enrollment process to overcome the insufficiency in fingerprint features including minutiae (Liu-Jimenez et al., 2016; Mathur, Vjay, Shah, Das, & Malla, 2016). Some commercial systems control their enrollment processes to register various portions of a fingerprint from one user. However, the difficulty of matching the small-sized fingerprint images themselves remains unsolved.

Utilization of additional non-minutia features can be considered to improve the accuracy of partial fingerprint matching. Various fingerprint matching methods using non-minutia features have been proposed. These methods can be categorized into four groups: image-based matching, ridge-feature-based matching, feature-point-based matching, and Level 3 feature-based matching (summarized in Table 1). The image-based approach evaluates the degree of similarity between two fingerprint images. The most intuitive way is to calculate the correlation between two images using gray-scale information (Maltoni et al., 2009; Hatano et al., 2002; Venkataramani, Keskinoz, & Kumar, 2005; Zanganeh, Srinivasan, & Bhattacharjee, 2015). Although this approach directly compares the ridge patterns of two fingerprint images, it is vulnerable to the alignment error caused by nonlinear deformation. Other image-based approaches employ discriminative texture features such as the Gabor response, local binary patterns (LBP), and histograms of oriented gradients (HoG) (Benhammadi, Amirouche, Hentous, Bey Beghdad, & Aissani, 2007; Jain & Prabhakar, 2001; Nanni & Lumini, 2008; Nanni & Lumini, 2009; Ouyang, Feng, Su, & Cai, 2006; Ross, Jain, & Reisman, 2002). However, they still remain sensitive to image variation caused by noise, skin condition, or nonlinear deformation. Furthermore, these image-based methods require the registration of the whole image or texture data of fingerprint templates for matching, which may not be desirable for security reasons.


Table 1 Summary of fingerprint matching methods using non-minutia features

Category: Image-based method (Maltoni et al., 2009; Hatano et al., 2002; Venkataramani et al., 2005; Zanganeh et al., 2015; Benhammadi et al., 2007; Jain & Prabhakar, 2001; Nanni & Lumini, 2008; Nanni & Lumini, 2009; Ouyang et al., 2006; Ross et al., 2002)
Feature: pixel intensities; texture features such as Gabor response, LBP, and HoG
Strength & Weakness: (+) directly compares the entire fingerprint patterns; (+) high capability in matching low-quality images; (-) performance degradation due to nonlinear distortion, finger skin condition, and image alignment error; (-) requires storing the entire template image or texture data

Category: Ridge-feature-based method (Feng, 2008; Tico & Kuosmanen, 2003; Wang et al., 2007; Fang et al., 2007; He et al., 2006; Choi et al., 2011; Feng et al., 2006)
Feature: ridge orientation and frequency; ridge sampling points; ridge count, curvature, length, and type
Strength & Weakness: (+) performance improvement by using ridge information in addition to conventional minutiae; (+) robust to non-linear distortion; (-) the orientation and frequency are not discriminative in partial images; (-) the extraction of ridge features is unstable due to errors in minutia extraction; (-) only several ridges associated with minutiae are incorporated in the matching

Category: Feature-point-based method (Mathur et al., 2016; Yamazaki, Li, Isshiki, & Kunieda, 2015)
Feature: key-points such as SIFT and A-KAZE
Strength & Weakness: (+) these features include the discriminative textural characteristics of ridges; (-) sensitive to the large textural variations caused by noise or skin condition

Category: Level 3 feature-based method (Ashbaugh, 1999; Chen & Jain, 2007; A. Jain et al., 2007; Kryszczuk et al., 2004)
Feature: pores, incipient ridges, dots, and ridge contour
Strength & Weakness: (+) performance improvement by using the micro details of ridges in addition to conventional minutiae; (-) only available in high-resolution images of 1000 dpi and over

Ridge-feature-based approaches utilize the topological information of the ridge patterns. Ridge information is capable of enhancing the individuality of a fingerprint when combined with minutia information. Several approaches used the minutiae with their surrounding ridge orientation and frequency information (Feng, 2008; Tico & Kuosmanen, 2003; Wang, Li, & Niu, 2007). These methods indicate that ridge orientation and frequency information are useful for improving the matching performance. However, the ridge orientation and frequency do not provide discriminatory information in the case of small partial fingerprint images, because they do not change distinctively within a small portion of a fingerprint area except around a singular point. In other approaches, the ridge patterns are either represented by sampling points along the ridges associated with minutiae (Fang, Srihari, Srinivasan, & Phatak, 2007; He, Tian, Li, Chen, & Yang, 2006; Wang et al., 2007) or by the topological information of the ridges between two minutia points (Choi, Choi, & Kim, 2011). Based on these ridge representations, various ridge features such as ridge curvature, ridge orientation, and ridge count (between minutiae) can be adopted for fingerprint matching. However, these ridge representation schemes are highly dependent on the existence of minutiae; namely, the ridge features can be falsely obtained or missed if the associated minutiae are falsely extracted or missed. In addition, ridges without minutiae cannot be incorporated in fingerprint matching. Accordingly, ridge information cannot be sufficiently utilized in partial fingerprint matching. Feng, Ouyang, & Cai (2006) introduced a method to incorporate the entire ridge pattern in fingerprint matching. The entire ridges are represented by a list of sampling points extracted along the thinned ridges, and direct ridge comparison is performed using the sampled points during matching. However, a high sampling rate is required to achieve high matching performance, which increases the computational complexity of the matching process.

In addition, several studies proposed partial fingerprint matching based on SIFT or accelerated KAZE (A-KAZE) features, which are typically used for object recognition and image matching (Mathur et al., 2016; Yamazaki, Li, Isshiki, & Kunieda, 2015). They validated that the SIFT and A-KAZE features could provide discriminative information for fingerprint matching. Moreover, Mathur et al. (2016) showed that the A-KAZE-based approach achieved better performance than the conventional minutiae-based approach in partial fingerprint matching. However, these approaches are relatively sensitive to the large textural variations caused by noise or skin condition.

Level 3 feature-based approaches validated the usefulness of ridge details such as pores, incipient ridges, dots, and ridge contours in fingerprint matching (Ashbaugh, 1999; Chen & Jain, 2007; A. Jain, Chen, & Demirkus, 2007; Kryszczuk, Drygajlo, & Morier, 2004). They proposed matching schemes incorporating Level 3 features with the minutia features and achieved performance improvement in terms of fingerprint matching. However, the use of Level 3 features was considered only in high-resolution fingerprint images of 1000 dpi and over, whereas most conventional fingerprint images have a resolution of 500 dpi.

This paper proposes a new partial fingerprint-matching scheme suitable for small fingerprint scanners. The scheme is based on the combination of conventional minutia features with new ridge features that represent ridge segments exhibiting specific edge shapes such as convex and concave. The use of ridge edges was previously investigated for high-resolution (≥ 1000 dpi) fingerprint matching (A. Jain et al., 2007). However, by simplifying the characterization of the ridge contour shapes, the proposed ridge shape features (RSFs) are detectable in conventional 500 dpi fingerprint images. The proposed RSFs are extracted from only specific ridge segments where convex and concave edges are observed. Compared with other approaches utilizing the entire ridge contour or the entire image (texture), the proposed method is believed to be more appropriate for securing fingerprint templates and saving memory space. In addition, the RSFs are extracted from any ridges, not only the ridges associated with minutiae. In other words, rich ridge information is employed in the matching process.

A partial matching method using RSFs, consisting of minutiae matching and subsequent ridge-feature-matching stages, is proposed. During the minutiae-matching stage, minutia points are matched by comparing both the minutiae and the RSFs in the local neighborhood. The neighborhood of a minutia becomes discriminative by additionally including the RSFs even if very few minutiae are available therein. During the subsequent ridge-feature-matching stage, the overlapped area of two images is determined by using the previously matched minutia pairs, and then the RSFs in the overlapped area are compared to enhance the accuracy of the fingerprint matching. Finally, the similarity scores, which are obtained from minutiae matching and ridge matching, are combined into a final matching score.

The remainder of this paper is organized as follows. In Section 2, the new ridge features are introduced and their extraction method is described. Section 3 proposes the fingerprint-matching method using the new ridge features in addition to the conventional minutiae. Section 4 presents the experimental results for various partial matching scenarios. Finally, Section 5 concludes the paper and suggests future work.

2. Extraction of Ridge Shape Features

2.1. Characterization of Ridge Shape Features

Friction ridge edges exhibit various shapes and are not only straight lines. Chatterjee (Ashbaugh, 1999) categorized ridge edges into eight detailed shapes (straight, convex, peak, table, pocket, concave, angle, and others) (see Fig. 1), and validated their usefulness in fingerprint identification. These shape variations are believed to appear due to the differential growth of the ridges and the existence of pores near the ridge edges (Ashbaugh, 1999; A. Jain et al., 2007).

Fig. 1. Shape characterization of friction ridge edges (adopted from Ashbaugh, 1999).

Similarly, A. Jain et al. (2007) employed entire ridge contours for fingerprint matching and proved the validity of ridge shape information. This approach is based on high-resolution images of 1000 dpi and over, because detailed ridge shapes are normally observed only in such high-resolution images. However, some specific and large shapes of ridge edges are also observed in conventional 500 dpi resolution images. Fig. 2 shows fingerprint images captured from the same finger but at different resolutions of 1000 dpi and 500 dpi.

Fig. 2. Two images of the same fingerprint at 1000 dpi and 500 dpi resolutions (CrossMatch ID1000T). The same concave segments (dotted circles) and convex segments (solid circles) are observed at both resolutions.

In 500 dpi fingerprint scans, the ridge edges are less clear than in fingerprint images of 1000 dpi. However, it is still possible to determine the shapes of concave and convex ridge segments at a resolution of 500 dpi. As shown in Fig. 2, the concave ridge segment is relatively narrower, whereas the convex ridge segment is wider in comparison with its adjacent portion. In this work, we employ the ridge shape information at a resolution of 500 dpi by simply categorizing the shapes of ridge edges as convex, concave, and other types. Moreover, the elasticity of finger skin, which causes edge shapes to vary, is another practical reason why this simple categorization is adopted. In this paper, the ridge segments exhibiting concave or convex edges are defined as ridge shape features (RSFs), which are characterized by their position, orientation, and shape (concave or convex).

2.2. Preprocessing

Several preprocessing steps are needed to obtain the conventional minutiae and the new ridge features from a given fingerprint image. The foreground of a fingerprint image is first segmented by using the mean and variance of local pixel blocks. Then, image normalization is performed to improve the image contrast (Hong, Wan, & Jain, 1998). The normalized fingerprint image is further enhanced by using Gabor filtering (Hong et al., 1998), and the ridges are finally skeletonized for minutiae extraction (F. Zhao & Tang, 2007). For ridge feature extraction, a binary image, in which ridges and valleys are clearly separated, is also obtained from the normalized image by using a constant threshold. Ridge features are extracted by using the binary image, enhanced image, thinned image, and ridge orientation field, as described in the following section.
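For illustration, a minimal sketch of the block-wise foreground segmentation and mean/variance normalization steps is given below, assuming a grayscale fingerprint image held in a NumPy array; the block size, variance threshold, and target mean/variance are illustrative values, and the Gabor enhancement and thinning steps are omitted.

```python
import numpy as np

def segment_foreground(img, block=16, var_thresh=100.0):
    """Mark a block as foreground when its gray-level variance is high enough."""
    h, w = img.shape
    mask = np.zeros_like(img, dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block]
            if patch.var() > var_thresh:        # ridge/valley contrast present
                mask[y:y + block, x:x + block] = True
    return mask

def normalize(img, mask, m0=100.0, v0=100.0):
    """Map the image to a prescribed mean m0 and variance v0 (Hong et al., 1998)."""
    img = img.astype(np.float64)
    m, v = img[mask].mean(), img[mask].var()    # statistics from the foreground only
    return np.where(img >= m,
                    m0 + np.sqrt(v0 * (img - m) ** 2 / v),
                    m0 - np.sqrt(v0 * (img - m) ** 2 / v))
```

A constant threshold on the normalized image then yields the binary ridge/valley image used for the ridge-width measurement described next.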

2.3. Ridge Shape Feature Extraction

Ridge features are extracted by determining, for each segment of the ridges, whether it can be considered a concave or convex segment. These two types of ridge segments show significant variation in ridge width compared with their neighbors: the ridge width of a concave segment is relatively narrow, whereas the ridge width of a convex segment is relatively wide. Accordingly, to determine the concave and convex segments of a ridge, the ridge width values are compared along the ridge lines.

Fig. 3. Estimation of ridge width at $(x_t, y_t)$. Detection of ridge boundaries on (a) binary and (b) enhanced images. (c) Estimation of ridge width using final ridge boundaries.

First, the ridge width is measured using the thinned image and the binary image of the same fingerprint image as follows:


1) Let $(x_t, y_t)$ be the coordinates of a ridge point in the thinned image. In the binary image, a ridge profile $\rho$ centered at $(x_t, y_t)$ is defined in the direction normal to the local ridge orientation, with a length of $2\lambda + 1$ pixels ($\lambda = 8$ in our experiments) (see Fig. 3(a)).

2) The ridge boundary points, each of which is defined as a point in the ridge region with one of its two adjacent points being in the valley region, are determined on the ridge profile $\rho$. Generally, two boundary points $(x_{b1}, y_{b1})$ and $(x_{b2}, y_{b2})$ are obtained on the ridge profile. However, fewer or more than two boundary points can be extracted in low-quality regions.

3) If fewer than two boundary points are found, the ridge point $(x_t, y_t)$ is considered a bad point and the ridge width is not measured at that point. On the other hand, if more than two boundary points are found, the same process to find the ridge boundaries is performed at the same position $(x_t, y_t)$ in the enhanced image (see Fig. 3(b)).

4) Let $(x_{e1}, y_{e1})$ and $(x_{e2}, y_{e2})$ be the two ridge boundary points found in the enhanced image. Among the boundary points of the corresponding binary image, the two points that are the closest to each of $(x_{e1}, y_{e1})$ and $(x_{e2}, y_{e2})$ are considered the true ridge boundaries $(x_{b1}, y_{b1})$ and $(x_{b2}, y_{b2})$.

5) The ridge width $W$ at $(x_t, y_t)$ is determined by the Euclidean distance between $(x_{b1}, y_{b1})$ and $(x_{b2}, y_{b2})$ (see Fig. 3(c)); thus, the ridge width is ultimately measured on the binary image, while the other images are used to locate the boundaries reliably.
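A simplified sketch of this width measurement is shown below, assuming the binary ridge image is a Boolean NumPy array and the local ridge orientation at the point is known; for brevity, the cross-check against the enhanced image (steps 3 and 4) is replaced by taking the outermost boundary pair, so this is an approximation of the procedure above rather than a faithful implementation.

```python
import numpy as np

def ridge_width(binary, orientation, xt, yt, half_len=8):
    """Estimate the ridge width at thinned-ridge point (xt, yt).

    binary      : 2D bool array, True on ridge pixels
    orientation : local ridge orientation in radians at (xt, yt)
    Returns the distance between the two ridge boundaries, or None for a bad point.
    """
    # Unit vector normal to the local ridge orientation.
    nx, ny = -np.sin(orientation), np.cos(orientation)
    # Sample the profile of length 2*half_len + 1 centered at (xt, yt).
    profile = []
    for s in range(-half_len, half_len + 1):
        px, py = int(round(xt + s * nx)), int(round(yt + s * ny))
        if 0 <= py < binary.shape[0] and 0 <= px < binary.shape[1]:
            profile.append(((px, py), bool(binary[py, px])))
    # Boundary points: ridge pixels whose profile neighbor lies in the valley region.
    boundaries = [p for i, (p, on) in enumerate(profile)
                  if on and ((i > 0 and not profile[i - 1][1]) or
                             (i < len(profile) - 1 and not profile[i + 1][1]))]
    if len(boundaries) < 2:
        return None                      # bad point, width not measured
    # The paper disambiguates extra boundaries with the enhanced image;
    # here we simply take the outermost pair.
    (x1, y1), (x2, y2) = boundaries[0], boundaries[-1]
    return float(np.hypot(x2 - x1, y2 - y1))
```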

Fig. 4. Segment test for extracting RSFs. (a) Comparison of ridge width at ridge point x with ridge width at adjacent ridge points. (b) Ridge points classified as either convex or concave by the segment test.


After measuring the ridge width at each point of the thinned ridge line, each ridge point is classified as belonging to a concave or convex segment by comparing its ridge width to those at neighboring ridge points. Let $x$ be a point on a thinned ridge line. The ridge width at $x$ is compared with the ridge width values at $N$ neighboring ridge points on both sides of $x$ ($N = 10$), as illustrated in Fig. 4(a). During the comparison process, each neighboring ridge point $i \in \{1, 2, \ldots, N\}$ is categorized into one of three states:

$$S(i) = \begin{cases} n \ (\text{narrower}), & W_i < W_x - T_w \\ s \ (\text{similar}), & W_x - T_w \le W_i \le W_x + T_w \\ w \ (\text{wider}), & W_x + T_w < W_i \end{cases} \qquad (1)$$

where $W_x$ denotes the ridge width at ridge point $x$, and $W_i$ is the ridge width at an adjacent ridge point $i$. $T_w$ is a threshold value for comparing ridge width values ($T_w = 0.6$ in our case). The state $S(i)$ of ridge point $i$ is assigned as $n$ (narrower), $s$ (similar), or $w$ (wider) by comparing its ridge width $W_i$ with $W_x$. The comparison result is used to classify a ridge point $x$ as a point in the concave or convex segment of a ridge. Ridge point $x$ is considered a concave point if more than half of the $N$ neighboring ridge points are classified as $w$. On the other hand, ridge point $x$ is considered a convex point if more than half of the $N$ neighboring ridge points are classified as $n$. $T(x)$, which denotes the type of ridge point $x$, is assigned as follows:

$$T(x) = \begin{cases} \text{concave}, & \sum_{i=1}^{N} \left[ S(i) = w \right] > \dfrac{N}{2} \\[4pt] \text{convex}, & \sum_{i=1}^{N} \left[ S(i) = n \right] > \dfrac{N}{2} \end{cases} \qquad (2)$$

This segment test is performed at each ridge point of the thinned ridge line, and then the concave and convex ridge points are determined. If a set of consecutive ridge points is classified into the same set of concave points or the same set of convex points, as shown in Fig. 4(b), the ridge segment where these points are located is considered a concave or convex segment and is used to define a ridge shape feature (RSF). On the other hand, if a single ridge point is solely classified as either concave or convex, this point is considered a falsely detected point due to noise or an unclear segment and is discarded. An RSF $r_k$ is represented by the central ridge point of the consecutive concave or convex points as follows:

$$r_k = (x_k, y_k, \theta_k, t_k)^{T}, \qquad (3)$$

where $(x_k, y_k)$ are the coordinates of the central point, $\theta_k$ is the ridge orientation at that position, and $t_k$ denotes the type of ridge shape (concave or convex). This feature representation is similar to the conventional minutia representation used in fingerprint recognition. However, the orientation of an RSF has the range $(-\pi/2, \pi/2]$, whereas the direction of a minutia typically has the range $(-\pi, \pi]$. Fig. 5 shows examples of RSFs extracted from a 500-dpi fingerprint image.

Fig. 5. Ridge shape features and minutiae extracted from a fingerprint image.
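The segment test of Eqs. (1)-(2) and the grouping of consecutive concave/convex points into RSFs (Eq. (3)) can be sketched as follows, assuming the ridge widths along one thinned ridge line are already available as a list (with None where the width could not be measured); the helper names and the handling of short neighborhoods near ridge ends are assumptions.

```python
def classify_point(widths, idx, n_neigh=10, t_w=0.6):
    """Segment test (Eqs. (1)-(2)) for the ridge point at position idx on one ridge line."""
    wx = widths[idx]
    if wx is None:
        return None
    neigh = [w for w in widths[max(0, idx - n_neigh):idx] + widths[idx + 1:idx + 1 + n_neigh]
             if w is not None]
    if not neigh:
        return None
    wider = sum(1 for w in neigh if w > wx + t_w)       # state w
    narrower = sum(1 for w in neigh if w < wx - t_w)    # state n
    if wider > len(neigh) / 2:      # majority of available neighbors are wider
        return 'concave'
    if narrower > len(neigh) / 2:   # majority of available neighbors are narrower
        return 'convex'
    return None

def extract_rsfs(ridge_points, widths, orientations):
    """Group runs of identically classified points into RSFs (Eq. (3))."""
    labels = [classify_point(widths, i) for i in range(len(widths))]
    rsfs, run = [], []
    for i, lab in enumerate(labels + [None]):           # sentinel closes the last run
        if run and lab != labels[run[0]]:
            if len(run) > 1:                            # isolated single points are discarded
                c = run[len(run) // 2]                  # central point of the run
                x, y = ridge_points[c]
                rsfs.append((x, y, orientations[c], labels[c]))
            run = []
        if lab is not None:
            run.append(i)
    return rsfs
```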

3. Fingerprint Matching Incorporating Minutiae with RSFs

Here we propose a matching scheme that compares both minutiae and RSFs in two fingerprint images to improve the accuracy of partial fingerprint matching. The minutiae and RSFs in two images could be directly compared using a brute-force matching scheme. However, the brute-force scheme significantly increases the computational complexity, because too many RSFs have to be compared even in a small partial fingerprint image. Furthermore, brute-force matching produces many false RSF matches. Therefore, the proposed fingerprint matching is performed by a sequential process of minutiae matching and RSF matching (see Fig. 6). The proposed matching scheme first finds the matched minutia pairs by comparing the local minutiae and RSFs adjacent to each minutia, and then the linear transformation (translation and rotation) between the two images is estimated using the matched minutia pairs. When the minutiae-matching rate (mmr) is higher than a pre-defined threshold, the RSF matching is subsequently performed to calculate the similarity of the RSFs in the overlapped area between the two images. In the proposed RSF matching, all RSFs in the overlapped area are compared using both the linear transformation and the matched minutia pairs, without a brute-force search. The final matching score of two fingerprint images is obtained by combining the matching scores of the minutiae and RSF matching stages.

Fig. 6. Flow chart of the overall fingerprint matching process.

3.1. Minutiae Matching Using Local Minutiae and RSFs

In the minutiae matching stage, corresponding minutia pairs between template and query images are determined by comparing the local neighborhoods of minutia points. The local neighborhood of a minutia is typically represented using a fixed number of its nearest minutiae (Jea & Govindaraju, 2005; Chikkerur, Cartwright, & Govindaraju, 2005; Jiang & Yau, 2000; Peralta et al., 2015) or all neighboring minutiae within a certain distance (Ratha, Bolle, Pandit, & Vaish, 2000; Lee, Choi, & Kim, 2002; Peralta et al., 2015). However, due to the lack of minutiae, those typical representations of the local neighborhood are not discriminative in partial fingerprint images. To make the local neighborhoods more discriminative, the local neighborhood, also referred to as the local structure in this paper, is newly described using both the adjacent minutiae and the adjacent RSFs of a central minutia.

The nearest neighbors of a certain minutia may vary between partial fingerprint images even when they are captured from the same finger, if the overlapped region of the two images is very small. Therefore, the local structure in our work contains all minutiae and RSFs within given ranges from a central minutia (see Fig. 7(a)). Note that the ranges for neighboring RSFs and minutiae are defined differently ($R_1 = 40$ and $R_2 = 80$, respectively, in this paper), because many more RSFs than minutiae are included in a small local region.


Fig. 7. Representation of the local neighborhood of a minutia: (a) local structure based on adjacent minutiae and RSFs, and (b) topological information between a central minutia and its neighbors.

As illustrated in Fig. 7(b), given a central minutia $m_c$, its neighbor $n_k$ (which can be either a minutia or an RSF point) is represented by the Euclidean distance $d_{c,k}$ between $m_c$ and $n_k$, the orientation difference $\theta_{c,k}$ between $m_c$ and $n_k$, the directional difference $\varphi_{c,k}$ between the direction of $m_c$ and the direction of the edge connecting $m_c$ to $n_k$, and the type $t_k$ of $n_k$ (ending or bifurcation for a minutia, and concave or convex for an RSF). Then, the local structure of $m_c$ is defined as $L(m_c) = \{(d_{c,k}, \theta_{c,k}, \varphi_{c,k}, t_k)\},\ k = 1, 2, \ldots, N_c$, where $N_c$ is the total number of adjacent minutiae and RSFs.

Let $L(m^T)$ and $L(m^Q)$ be the local structures of a minutia $m^T$ in the template fingerprint image and a minutia $m^Q$ in the query fingerprint image, respectively. These two local structures are matched by dynamic programming, which finds the optimal matching result that maximizes the similarity score between $L(m^T)$ and $L(m^Q)$. The similarity score $s_{lm}(m^T, m^Q)$ is computed as

$$s_{lm}(m^T, m^Q) = \frac{s_m(m^T, m^Q) + s_r(m^T, m^Q)}{2}, \qquad (4)$$

where $s_m(m^T, m^Q)$ and $s_r(m^T, m^Q)$ are the similarities computed by matching the neighboring minutiae and the neighboring RSFs, respectively. The similarities are calculated by the following equation:

$$s(m^T, m^Q) = \frac{2 \sum F_s(n_i^T, n_j^Q)}{N^T + N^Q}, \qquad (5)$$

where $N^T$ and $N^Q$ represent the total number of neighbors (minutiae or RSFs) in $L(m^T)$ and $L(m^Q)$, respectively, and $F_s(n_i^T, n_j^Q)$ represents a matching certainty score between $n_i^T$ and $n_j^Q$, which are local neighbors in $L(m^T)$ and $L(m^Q)$, respectively. The matching certainty score between $n_i^T$ and $n_j^Q$ is calculated by comparing their topological relation (relative distance, orientation difference, directional difference, and type) to the central minutiae as follows (Jiang & Yau, 2000):

$$F_s(n_i^T, n_j^Q) = \begin{cases} \left( T - \| n_i^T - n_j^Q \| \right) / T, & \text{if } \| n_i^T - n_j^Q \| < T \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$

where $T$ is a pre-defined threshold that determines whether $n_i^T$ and $n_j^Q$ are matched. The matching threshold is adaptively selected depending on the Euclidean distance between the central minutia and a neighbor (Lee et al., 2002). $F_s(n_i^T, n_j^Q) = 1$ if $n_i^T$ and $n_j^Q$ are perfectly matched, whereas $F_s(n_i^T, n_j^Q) = 0$ if they are not matched.
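A minimal sketch of the neighbor certainty of Eq. (6) and the similarity of Eq. (5) is given below; the per-component tolerances are illustrative assumptions, angle wrapping is omitted, and the dynamic-programming alignment used in the paper is replaced here by a simple greedy pairing.

```python
import numpy as np

# Per-component tolerances (pixels, radians, radians); illustrative assumptions.
TOL = np.array([10.0, np.pi / 6, np.pi / 6])

def certainty(n_t, n_q):
    """Eq. (6)-style certainty: 1 for a perfect match, decaying linearly to 0 at the threshold."""
    if n_t[3] != n_q[3]:                                   # neighbor types must agree
        return 0.0
    d = np.linalg.norm((np.asarray(n_t[:3]) - np.asarray(n_q[:3])) / TOL)
    return (1.0 - d) if d < 1.0 else 0.0                   # normalized threshold T = 1

def structure_similarity(neigh_t, neigh_q):
    """Eq. (5): 2 * (sum of certainties over matched neighbors) / (N_T + N_Q)."""
    if not neigh_t or not neigh_q:
        return 0.0
    used, total = set(), 0.0
    for nt in neigh_t:                                     # greedy stand-in for the DP alignment
        best, best_j = 0.0, None
        for j, nq in enumerate(neigh_q):
            if j not in used:
                f = certainty(nt, nq)
                if f > best:
                    best, best_j = f, j
        if best_j is not None:
            used.add(best_j)
            total += best
    return 2.0 * total / (len(neigh_t) + len(neigh_q))
```

In the paper this similarity is computed separately for the minutia neighbors and the RSF neighbors, and the two values are averaged as in Eq. (4).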

All minutiae from the template and query fingerprint images are compared using their local structures, and the top $N$ minutia pairs that yield the highest similarity scores are selected as the initial matched pairs ($N = 7$ in our experiments). From each of the initially matched pairs, other corresponding minutia pairs are incrementally found using the breadth-first search algorithm (Peralta et al., 2015; Chikkerur et al., 2005), and then $N$ different matching results are obtained. Among them, the result that shows the maximum minutiae-matching rate is selected as the final minutiae-matching result. The minutiae-matching rate represents the extent of minutiae correspondence in the overlapped area of the two fingerprint images, and is calculated as

$$mmr = \frac{2 \sum_{u=1}^{M_m} s_{lm}(m_u^T, m_u^Q)}{N_{m,o}^T + N_{m,o}^Q}, \qquad (7)$$

where $N_{m,o}^T$ and $N_{m,o}^Q$ are the total number of minutiae within the overlapped area of the template and input fingerprint images, respectively, $M_m$ is the number of matched minutiae, and $s_{lm}(m_u^T, m_u^Q)$ is the similarity score of the matched minutia pair $(m_u^T, m_u^Q)$, which is computed by Eq. (4). The overlapped area is estimated by a linear transformation (rotation and translation) using the matched minutia pairs (Q. Zhao, Zhang, Zhang, & Luo, 2010). If $mmr$ is greater than a matching threshold $T_{mmr}$, the initially matched minutia pairs between the two images are considered to be correctly found, and the proposed RSF matching follows using those matched pairs; $T_{mmr}$ is determined empirically from training data.
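Once the matched pairs and the overlap estimate are available, Eq. (7) and the threshold gate reduce to a few lines; a sketch, where the pair scores and overlap counts are assumed to be precomputed and the threshold value is only a placeholder:

```python
def minutiae_matching_rate(pair_scores, n_overlap_t, n_overlap_q):
    """Eq. (7): 2 * sum of s_lm over matched pairs / (N_m,o^T + N_m,o^Q)."""
    denom = n_overlap_t + n_overlap_q
    return 2.0 * sum(pair_scores) / denom if denom else 0.0

def proceed_to_rsf_matching(pair_scores, n_overlap_t, n_overlap_q, t_mmr=0.2):
    """Gate the second stage: RSF matching runs only when mmr exceeds T_mmr (placeholder value)."""
    return minutiae_matching_rate(pair_scores, n_overlap_t, n_overlap_q) > t_mmr
```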


3.2. Ridge Shape Feature Matching in the Overlapped Region

After matching the minutiae between two fingerprint images, the extent to which the RSFs in the overlapped region of the template and query images correspond to each other is established. During the minutiae matching procedure, the RSFs in the local structures of minutiae are compared, but this is only to establish the extent of minutiae correspondence; comparing RSFs in the local structures of minutiae cannot accurately establish a one-to-one correspondence between the RSFs of two images. In addition, the other RSFs, which are not in the local structures but lie in the overlapped area of the two fingerprint images, are not compared during the minutiae matching. Therefore, by establishing the correspondence between the RSFs in the overlapped region of the two images, the accuracy of partial fingerprint matching can be further improved.

The corresponding RSF pairs between two images are determined based on a coarse-to-fine search scheme. By using the prior knowledge of the linear transformation between the template and query images, the RSFs in the template image do not need to be compared with every RSF in the query image. By projecting the template RSFs onto the query image using the linear transformation, their candidate pairs can be coarsely found among the query RSFs. However, there exists a transformation error due to either nonlinear deformation or minutia location error caused by the imperfect minutiae extraction process. To find corresponding RSF pairs more accurately, the RSFs are locally matched by considering their topological relation (relative distance, radial angle, and orientation) to the matched minutia pairs. Let $r^T$ be an RSF point in the template fingerprint image and $r^{T\prime}$ be the projection of $r^T$ on the query fingerprint image (see Fig. 8(a)). When the transformation error is considered, there may be no RSF point of the query image that exactly matches $r^{T\prime}$. Accordingly, the corresponding pair of $r^T$ is searched for within a radius of $R_c$ pixels from $r^{T\prime}$, as illustrated in Fig. 8(b) ($R_c = 20$ in our experiments).

Fig. 8. Local matching of the RSFs in the overlapped region of two fingerprint images: (a) projection of a template RSF (yellow star) on the query domain, (b) search for query RSFs (green stars) within the radius of $R_c$ from the projected RSF, and (c) representation of the RSFs using the matched minutia pairs (red squares).

Let $r^Q$ be an RSF point within the radius of $R_c$ pixels from $r^{T\prime}$. To determine the correspondence between $r^T$ and $r^Q$, they are first represented using the set of matched minutia pairs $M_{match} = \{(m_j^T, m_j^Q)\},\ j = 1, \ldots, M_m$. The position and orientation of $r^T$ in the template image are described by the $k$ nearest minutiae chosen among the template minutiae $m_j^T \in M_{match}$ (see Fig. 8), and the position and orientation of $r^Q$ are then represented by the query minutiae $m_j^Q \in M_{match}$ that are the corresponding pairs of the selected $k$ template minutiae ($k = 5$ in our experiments). If there are fewer than 5 matched minutia pairs between the two images, all matched minutiae are employed to describe $r^T$ and $r^Q$. Each of $r^T$ and $r^Q$ is represented by the relative distance, radial angle, and orientation from the neighboring $k$ matched minutia pairs, as described in Section 3.1. Then, the similarity score $s_{lr}(r^T, r^Q)$ between $r^T$ and $r^Q$ is calculated as

$$s_{lr}(r^T, r^Q) = \frac{\sum_{j=1}^{k} F_s(m_j^T, m_j^Q)}{k}, \qquad (8)$$

where $(m_j^T, m_j^Q)$ is a neighboring matched minutia pair of $r^T$ and $r^Q$, and $F_s(m_j^T, m_j^Q)$ is the matching certainty score of $(m_j^T, m_j^Q)$, which is computed by using Eq. (6). If $s_{lr}(r^T, r^Q)$ is higher than the pre-defined threshold $T_{lr}$, the RSF $r^Q$ is considered a candidate pair of $r^T$, and the RSF pair $(r^T, r^Q)$ is stored in the candidate pair list. The threshold $T_{lr}$ is adjusted depending on the number of neighboring matched minutia pairs $k$ as follows:

$$T_{lr} = \frac{T_{\min} - T_{\max}}{5 - 1}(k - 1) + T_{\max}, \qquad (9)$$

where $T_{\max}$ and $T_{\min}$ are the maximum and minimum of $T_{lr}$, respectively ($T_{\max} = 0.6$ and $T_{\min} = 0.4$ in our experiments). The threshold $T_{lr}$ is increased as $k$ decreases in order to reduce false matches caused by an insufficient description of $r^T$ and $r^Q$.
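A sketch of the RSF-pair similarity of Eq. (8) and the adaptive threshold of Eq. (9) is given below, assuming the per-pair certainties of Eq. (6) for the k neighboring matched minutia pairs have already been computed:

```python
def rsf_pair_similarity(certainties):
    """Eq. (8): average matching certainty over the k neighboring matched minutia pairs."""
    return sum(certainties) / len(certainties) if certainties else 0.0

def adaptive_threshold(k, t_max=0.6, t_min=0.4):
    """Eq. (9): T_lr rises from T_min (k = 5) to T_max (k = 1) as fewer minutia pairs are available."""
    k = max(1, min(k, 5))
    return (t_min - t_max) / (5 - 1) * (k - 1) + t_max
```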


This matching process is performed for all RSFs in the template fingerprint image, and every possible RSF pair is stored in the candidate pair list with its similarity score. Then, from the candidate pair list, a set of final RSF pairs is determined using a greedy algorithm (Tico & Kuosmanen, 2003), which identifies the RSF pairs in descending order of their similarity scores and sets up one-to-one correspondence between the pairs. The pairing continues until no further possible pair remains. The overall procedure for matching the RSFs of two images is summarized as follows:

Pseudo code of the RSF matching algorithm

Input:
  $R^T$, $R^Q$: the RSF sets of the template and query images, respectively
  $M_{match} = \{(m_j^T, m_j^Q)\},\ j = 1, \ldots, M_m$: a set of matched minutia pairs
Output:
  $R_{match}$: a set of matched RSF pairs
Initialize:
  $R_{cand} = \{\}$: a list of candidate RSF pairs
Do:
  For $u = 1 : N_{r,t}^T$  ($N_{r,t}^T$ is the total number of RSFs in the template image)
    1. Let $r_u^{T\prime}$ be the projection of $r_u^T \in R^T$ on the query image
    2. Find $\{r_v^Q\} \subset R^Q$ which satisfies $\| r_v^Q - r_u^{T\prime} \| \le R_c$  ($R_c = 20$)
    3. If $\{r_v^Q\}$ is not empty
       a. Describe $r_u^T$ and $r_v^Q$ by using $M_{match}$
       b. Compute the similarity score $s_{lr}(r_u^T, r_v^Q)$
       c. If $s_{lr}(r_u^T, r_v^Q) \ge T_{lr}$, then $R_{cand} \leftarrow R_{cand} \cup \{(r_u^T, r_v^Q)\}$
    End
  End
  Find $R_{match}$ from $R_{cand}$ by using the greedy algorithm
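The pseudocode above can be written, under stated assumptions, as the following Python sketch; the projection of a template RSF by the estimated transform, the pair-similarity function, and the adaptive threshold are passed in as callables, and all names are hypothetical.

```python
import numpy as np

def match_rsfs(rsfs_t, rsfs_q, project, pair_similarity, threshold_for, r_c=20.0):
    """Candidate search within radius R_c followed by greedy one-to-one pairing.

    rsfs_t, rsfs_q  : lists of RSF tuples (x, y, theta, type)
    project         : maps a template RSF into query coordinates via the estimated transform
    pair_similarity : returns s_lr for a (template RSF, query RSF) pair (Eq. (8))
    threshold_for   : returns the adaptive threshold T_lr for that pair (Eq. (9))
    """
    candidates = []
    for u, rt in enumerate(rsfs_t):
        xp, yp = project(rt)
        for v, rq in enumerate(rsfs_q):
            if np.hypot(rq[0] - xp, rq[1] - yp) > r_c:
                continue                                  # outside the coarse search radius
            s = pair_similarity(rt, rq)
            if s >= threshold_for(rt, rq):
                candidates.append((s, u, v))              # store in the candidate pair list
    # Greedy selection in descending order of similarity, enforcing one-to-one correspondence.
    matched, used_t, used_q = [], set(), set()
    for s, u, v in sorted(candidates, reverse=True):
        if u in used_t or v in used_q:
            continue
        used_t.add(u)
        used_q.add(v)
        matched.append((u, v, s))
    return matched
```

The greedy pass mirrors the strategy described above: candidate pairs are accepted in descending order of similarity until no further pairing is possible.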


3.3. Matching Score Computation


The final matching score between two fingerprint images is derived by combining the score of the minutiae matching and that of the RSF matching. The minutiae-matching score is formulated as

$$S_{\min} = mmr \cdot \frac{2 M_m}{N_{m,t}^T + N_{m,t}^Q}, \qquad (10)$$

where $mmr$ is the minutiae-based similarity score computed by Eq. (7), $M_m$ is the number of matched minutiae, and $N_{m,t}^T$ and $N_{m,t}^Q$ are the total number of minutiae in the template and input images, respectively. On the right-hand side of Eq. (10), the former term is considered the minutiae-matching rate of the overlapped area of the two images, whereas the latter term represents the size of the overlapped area. The size of the overlapped area reflects the reliability of minutiae matching between the two images; as the overlapped region decreases, the similarity in imposter matching can be falsely increased. The RSF matching score is similarly computed as

$$S_{rsf} = \frac{2 \sum_{v=1}^{M_r} s_{lr}(r_v^T, r_v^Q)}{N_{r,o}^T + N_{r,o}^Q} \cdot \frac{2 M_r}{N_{r,t}^T + N_{r,t}^Q}, \qquad (11)$$

where $s_{lr}(r_v^T, r_v^Q)$ is the similarity score of the matched RSF pair $(r_v^T, r_v^Q)$, $M_r$ is the number of matched RSFs, $N_{r,o}^T$ and $N_{r,o}^Q$ are the total number of RSFs in the overlapped region of the template and query images, respectively, and $N_{r,t}^T$ and $N_{r,t}^Q$ are the total number of RSFs in the template and input images, respectively.

Then, the final matching score is represented as

$$S_{total} = \alpha \cdot S_{\min} + (1 - \alpha) \cdot S_{rsf}, \qquad (12)$$

where $\alpha\ (0 \le \alpha \le 1)$ is a weighting factor for combining the minutiae matching score and the RSF matching score. The weighting factor is adjusted depending on the number of minutiae available in the fingerprint images as follows:

$$\alpha = \frac{N_{m,t}^T + N_{m,t}^Q}{(N_{m,t}^T + N_{m,t}^Q) + w_{rsf}\,(N_{r,t}^T + N_{r,t}^Q)}, \qquad (13)$$

where $w_{rsf}$ is a weight that assigns the relative importance of the RSFs with respect to the minutia features ($w_{rsf} = 0.12$ in our experiments). $\alpha$ is calculated as the ratio of minutia features to the total (weighted) number of minutia and RSF features in both the template and input images. $\alpha$ decreases as the number of available minutiae decreases, because the accuracy of minutiae matching is degraded when only a few minutiae are available.
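The score combination of Eqs. (10)-(13) is a direct computation once the counts and pair scores are known; a sketch with w_rsf = 0.12 as in the paper (all other arguments are assumed to be supplied by the earlier stages):

```python
def final_score(mmr, m_matched, n_min_t, n_min_q,
                rsf_pair_scores, n_rsf_overlap_t, n_rsf_overlap_q,
                n_rsf_t, n_rsf_q, w_rsf=0.12):
    """Combine the minutiae score (Eq. 10) and RSF score (Eq. 11) with the adaptive weight (Eqs. 12-13)."""
    s_min = mmr * 2.0 * m_matched / (n_min_t + n_min_q)
    s_rsf = (2.0 * sum(rsf_pair_scores) / max(n_rsf_overlap_t + n_rsf_overlap_q, 1)
             * 2.0 * len(rsf_pair_scores) / max(n_rsf_t + n_rsf_q, 1))
    alpha = (n_min_t + n_min_q) / ((n_min_t + n_min_q) + w_rsf * (n_rsf_t + n_rsf_q))
    return alpha * s_min + (1.0 - alpha) * s_rsf
```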


4. Experimental Results and Discussion

4.1. Description of Partial Fingerprint Datasets

There is no publicly available partial fingerprint database that contains images captured from small fingerprint scanners of various sizes. Accordingly, the performance of the proposed method was evaluated on three different databases: (1) partial fingerprint images cropped from the FVC2002 database (DB1 and DB3) (Maio et al., 2002), (2) partial fingerprint images cropped from the FVC2004 database (DB1 and DB2) (Maio et al., 2004), and (3) a partial fingerprint database self-constructed using a small fingerprint sensor. Tests on these datasets were expected to show the applicability of the proposed ridge feature for conventional fingerprint images of 500 dpi.

4.1.1. Partial Fingerprint Images Cropped From FVC 2002 and 2004 Databases

The images in the FVC databases were captured from various optical and capacitive sensors. The image resolution of the databases is 500 dpi, and each database is composed of 800 fingerprint impressions from 100 fingers (eight impressions per finger). The images in the FVC databases were cropped to partial fingerprint images in order to simulate small fingerprint scanning. Among the eight impressions for each finger, four impressions (the first, third, fifth, and seventh) were used to generate template partial images, and the other four impressions (the second, fourth, sixth, and eighth) were employed to generate query partial images. Commercial fingerprint-authentication systems in mobile devices require users to input multiple impressions in the enrollment process and provide some user guidance to obtain the whole region of a fingerprint (Mathur et al., 2016). Similarly, to simulate this practical situation, the partial fingerprint images for the enrollment were generated as follows:

Fig. 9. Partial fingerprint images generated from FVC2002 and FVC2004 databases for (a) enrollment and (b) test.

1) The foreground region of a fingerprint image is divided into five sub-regions (see Fig. 9(a)).

2) In each sub-region, a point is randomly selected as the center of a partial image, but the partial image is generated to have an adequate foreground region (≥ 90% of the size of the partial image).

3) The partial fingerprint images are randomly generated in each of the five sub-regions. A total of 20 partial images are generated from the four impressions of a finger.

On the other hand, as illustrated in Fig. 9(b), a total of 20 query partial images were randomly generated from the foreground regions of four query images, and the partial images are generated to

ED

contain foreground region more than 90% of the size of the partial image. Three different sizes of partial images were considered, and a total of 2000 (=20×100fingers) template images and 2000 query

PT

images were generated for each pre-defined size. Fig. 10 shows samples of the generated partial

AC

CE

fingerprint images.

AN US

CR IP T

ACCEPTED MANUSCRIPT

Fig. 10. Partial fingerprint images cropped from FVC2002, FVC2004 and BERC databases
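A sketch of the random cropping used to simulate a small sensor is given below, assuming a Boolean foreground mask is available per full impression; the 90% foreground requirement follows the procedure above, while the retry limit is an arbitrary assumption.

```python
import numpy as np

def random_partial_crop(img, fg_mask, crop_h, crop_w, min_fg=0.9, rng=None, max_tries=100):
    """Randomly cut a crop_h x crop_w patch whose foreground coverage is at least min_fg."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = img.shape
    for _ in range(max_tries):
        y = int(rng.integers(0, h - crop_h + 1))
        x = int(rng.integers(0, w - crop_w + 1))
        if fg_mask[y:y + crop_h, x:x + crop_w].mean() >= min_fg:   # >= 90% foreground
            return img[y:y + crop_h, x:x + crop_w]
    return None   # no valid crop found for this impression
```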

In addition, the number of minutia points according to the size of the sensing area was investigated using the generated partial fingerprint datasets. For each image size, the minutia points were manually counted from the 100 different impressions of all fingers in FVC2002 DB1 (see Fig. 11). As the size of the sensing area decreases, the number of minutia points drastically decreases. Minutia points were still observed even in the small partial fingerprint images; the partial images of 6.9×6.5 mm2 contained at least four minutia points. However, the number of minutia points in the partial images is insufficient for accurate fingerprint matching. Accordingly, the proposed RSFs can be used as supplementary features to enhance the accuracy of partial fingerprint matching.

Fig. 11. Number of minutia points according to the size of the sensing area.

4.1.2. BERC Partial Fingerprint Database

To evaluate the performance of the proposed method more extensively, we constructed a partial fingerprint database, named the BERC database, using a small fingerprint sensor (viz., the FPC 1020 from Fingerprint Cards AB). The fingerprint sensor is a 500 dpi capacitive sensor with a 9.8 × 9.8 mm2 sensing area, which captures only a small portion of a whole fingerprint. This database was constructed from two fingers (the index fingers of both hands) of 54 participants. For each finger, 10 template images and 10 query images were separately captured; therefore, a total of 1080 template partial images and 1080 query partial images were collected. In order to simulate smaller fingerprint scanners, the acquired images were cropped into partial images of two different sizes. Fig. 10 shows some samples of the BERC partial fingerprint database.

4.2. Reliability of RSF Extraction

Two fingerprint images captured from the same finger are not exactly the same due to variations in the skin condition (wet or dry) and the elasticity of the finger skin. To maximize the accuracy of fingerprint matching, the same RSFs should be found in different impressions of the same fingerprint. This property is termed "repeatability" (Schmid, Mohr, & Bauckhage, 2000). In our experiments, the reliability of RSF extraction is estimated by measuring the repeatability. The reliability of RSF extraction could also be evaluated by finding missed RSFs and falsely found RSFs, as is typically done for evaluating a minutiae extraction algorithm. However, this evaluation method requires ground-truth RSFs subjectively marked by a human expert. Furthermore, since the determination of the concave and convex segments in ridges by a human expert is more ambiguous and error-prone than determining the ending or bifurcation points of ridges, repeatability is adopted in our experiment.

The optimal parameter for RSF extraction maximizes the repeatability of the RSFs and is determined by evaluating this repeatability. The extraction of RSFs is controlled by the value $T_w$, the threshold used to compare the ridge width values along the ridges (as described in Section 2.3). If the threshold value is too low, too many RSFs are falsely found from ridge segments whose shapes should be disregarded as being either concave or convex. On the other hand, if the threshold value is too high, too many ridge features are omitted. Therefore, an optimal threshold value is necessary to reduce both the number of false and missing RSFs.

In addition, the repeatability of RSFs was compared with that of accelerated KAZE (A-KAZE) features (Alcantarilla, Nuevo, & Bartoli, 2013). In the following sections (4.3 and 4.4), the proposed matching method is compared with the A-KAZE-based partial matching method (Mathur et al., 2016); details are described in Section 4.3. Therefore, the repeatability of A-KAZE features in the fingerprint images was also evaluated in this experiment. The parameters for the A-KAZE extraction were set as given in Mathur et al. (2016).

Two impressions from each of 20 fingers in FVC2002 DB1 were used for our experiments. A total of 40 RSFs and 40 A-KAZE features were randomly selected from one of the two impressions of each finger, and it was checked whether they were also detected in the other impression. Accordingly, a total of 800 (= 20 × 40) RSFs and 800 A-KAZE features were checked to evaluate the repeatability. The existence of a feature (RSF or A-KAZE) in both impressions is quantified by measuring the repeatability as

$$\text{Repeatability} = \frac{\text{Number of features detected in both impressions}}{\text{Total number of checked features}}. \qquad (14)$$
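Eq. (14) reduces to a simple ratio once re-detection has been decided for each checked feature; a sketch, where the tolerance used to decide whether a feature was found again is left to the caller:

```python
def repeatability(redetected_flags):
    """Eq. (14): fraction of checked features that were found again in the other impression."""
    return sum(1 for f in redetected_flags if f) / len(redetected_flags)

# Example: 40 checked RSFs per finger over 20 fingers gives 800 Boolean flags in total,
# e.g. repeatability([True, False, True, ...])
```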

During the experiment, the threshold value $T_w$ for the RSF extraction was adjusted from 0.4 to 0.8 in intervals of 0.1, and the repeatability was evaluated at each of these values. Fig. 12 shows the repeatability of RSF extraction at each threshold value. When the threshold value was set to 0.6 in the proposed RSF extraction procedure, the repeatability was maximized and about 62% of the ridge shape features were repeatedly found in both impressions. Therefore, during subsequent experiments, the threshold $T_w$ was set to 0.6 for RSF extraction. In addition, the A-KAZE features showed a repeatability of 58.5%, which was relatively lower than that of the RSFs.

Fig. 12. Repeatability of RSF extraction.

4.3. Matching Tests on Partial Fingerprint Images

In this experiment, the performance of partial fingerprint matching was evaluated while changing the size of the fingerprint images. The matching tests were performed using the partial fingerprint datasets described in Section 4.1. On the partial datasets of FVC2002 (DB1 and DB3) and FVC2004 (DB1 and DB2), it was assumed that 20 impressions were registered during the enrollment process; therefore, each input image was matched against 20 template images for one subject, and the maximum among the 20 matching scores was selected as the final matching score for the subject. The total number of genuine matching results is 2000 (= 100 fingers × 20 query images). The total number of imposter matching results is 4950 (= (100 × 99)/2); the first test sample of each finger is matched with the enrolled samples of the other fingers. On the BERC database, it was assumed that 10 impressions were registered as the template fingerprint images. Therefore, a query image was matched against 10 template images for one subject, and the final matching score for the subject was the maximum among the 10 matching scores. The total number of genuine matching results is 1080 (= 108 fingers × 10 query images), and the total number of imposter matching results is 5778 (= (108 × 107)/2). In the tests, seven different matching methods were compared as follows:

1) Conventional minutiae matcher (CMM): This method is based on only minutia features. The overall matching procedure is the same as the proposed minutiae matching stage, but the local neighborhood of a minutia is represented by only the adjacent minutiae within the given range from a central minutia.

2) Minutia cylinder-code (MCC): This approach is another representative minutiae-based matching method, where the local structure of a minutia is designed as a 3D cell-based structure known as a cylinder (Cappelli, Ferrara, & Maltoni, 2010). The MCC-based matcher is considered one of the most accurate minutiae-based matchers (Peralta et al., 2015). The parameter values given by the MCC SDK 2.0 were employed in the tests.

3) Representative ridge point (RRP): Fang et al. (2007) proposed a partial fingerprint matching method incorporating minutiae and RRPs sampled from ridges associated with minutiae. Similar to the proposed method, the RRPs are represented by their position, orientation, and index, and are utilized together with minutiae in the partial matching. They employed conventional minutiae-based matchers to match both the RRPs and the minutiae of two fingerprint images.

4) HoG-based matcher (HoG): Nanni & Lumini (2009) proposed fingerprint matching based on histogram of oriented gradients (HoG) features. Two fingerprint images are aligned using the results of minutiae matching, and HoG features are compared in the overlapped area of the images. The matching scores of minutiae matching and HoG matching are combined using a weighted sum rule.

5) A-KAZE-based matcher (A-KAZE): Mathur et al. (2016) proposed a partial fingerprint matching method based on A-KAZE features without the use of minutia points. A-KAZE keypoints are extracted from the template and query fingerprint images, and they are matched by computing feature distances and topological relations. The parameters for the A-KAZE extraction and matching were set as introduced in Mathur et al. (2016).

6) Proposed minutiae matcher (PMM): The proposed minutiae matching stage introduced in Section 3.1 is evaluated separately in order to validate its usefulness. In the proposed minutiae matching, the local neighborhood of a minutia is represented by the adjacent minutiae and RSFs within the given ranges.

7) Proposed matcher (PM): The proposed RSF matching is additionally performed after the proposed minutiae matching. The matching score is calculated by combining the scores of the PMM and the RSF matching.

Tables 2, 3, and 4 present the results (EERs) of the partial matching tests, and Figs. 13, 14, and 15 show the corresponding ROC curves. The test results are analyzed as follows:

• The proposed matcher showed the lowest EER among the seven different matching approaches. The results validate the performance gain obtained by using the proposed RSFs. First, the proposed matcher achieved better matching performance than the proposed minutiae matcher, which shows that the RSF matching stage provides additional improvement in partial matching. In addition, the proposed minutiae matcher outperformed the conventional minutiae matcher, which validates that the corresponding minutia pairs can be determined more accurately by including RSFs in the local structures.

• Because of the lack of minutiae, the MCC-based matcher as well as the conventional minutiae matcher showed relatively low matching performance, even though the MCC-based matcher is considered an accurate fingerprint matcher for normal-sized fingerprint matching (Peralta et al., 2015).

• The RRP-based matcher and the HoG-based matcher mostly showed better matching performance than the minutiae-based matchers, but the performance gain from using the RRPs and HoG features was lower than that from using the RSFs. The RRPs can be falsely extracted or missed because of errors in minutiae extraction, which limits the performance gain. In addition, the HoG features are less discriminative in a small fingerprint region, and the matching performance can be degraded by alignment error. The A-KAZE-based matcher also provided competitive performance on several partial fingerprint datasets, but it did not show the benefit of using A-KAZE features on the partial fingerprint datasets of FVC2002 DB3 and FVC2004 DB1 and DB2, which contain low-quality images. In poor-quality images, the distinctiveness of A-KAZE features decreases drastically because of noise and large textural variations.

Table 2 Performance of partial fingerprint matching on the partial fingerprint datasets simulated on DB1 and DB3 of FVC2002

Simulated DB          | FVC2002 DB1                       | FVC2002 DB3
Sensing area (mm2)    | 9.8×9.3  | 8.1×7.7  | 6.9×6.5     | 9.3×9.3  | 7.7×7.7  | 6.9×6.9
Image size (pixel2)   | 192×184  | 160×152  | 136×128     | 184×184  | 152×152  | 136×136
EER (%)  CMM          | 0.46     | 2.48     | 5.47        | 2.35     | 4.30     | 6.44
         MCC          | 0.40     | 6.69     | 25.26       | 2.80     | 9.78     | 22.61
         RRP          | 0.55     | 2.94     | 6.65        | 2.50     | 4.86     | 7.58
         HoG          | 0.29     | 1.62     | 3.87        | 1.80     | 3.25     | 5.86
         A-KAZE       | 1.15     | 2.35     | 4.95        | 10.35    | 13.72    | 16.00
         PMM          | 0.30     | 1.80     | 3.15        | 2.05     | 3.44     | 5.95
         PM           | 0.25     | 0.54     | 1.20        | 1.50     | 2.50     | 3.90

Table 3 Performance of partial fingerprint matching on the partial fingerprint datasets simulated on DB1 and DB2 of FVC2004

Simulated DB          | FVC2004 DB1                       | FVC2004 DB2
Sensing area (mm2)    | 9.3×9.3  | 7.7×7.7  | 6.9×6.9     | 9.3×9.3  | 7.7×7.7  | 6.5×6.5
Image size (pixel2)   | 184×184  | 152×152  | 136×136     | 184×184  | 152×152  | 128×128
EER (%)  CMM          | 2.48     | 4.00     | 6.02        | 4.63     | 4.86     | 7.04
         MCC          | 3.98     | 8.25     | 19.20       | 4.70     | 7.80     | 25.42
         RRP          | 3.48     | 4.97     | 8.36        | 4.41     | 5.05     | 7.73
         HoG          | 1.94     | 3.53     | 5.23        | 3.48     | 4.17     | 5.65
         A-KAZE       | 9.41     | 12.04    | 15.80       | 22.26    | 22.90    | 25.70
         PMM          | 2.35     | 3.59     | 5.44        | 4.21     | 4.70     | 6.15
         PM           | 1.80     | 2.35     | 3.35        | 3.04     | 3.34     | 3.85

Table 4 Performance of partial fingerprint matching on the BERC partial fingerprint database

Simulated DB          | BERC
Sensing area (mm2)    | 9.8×9.8  | 8.1×8.1  | 7.3×7.3
Image size (pixel2)   | 192×192  | 160×160  | 144×144
EER (%)  CMM          | 1.21     | 3.44     | 8.51
         MCC          | 1.11     | 7.86     | 19.39
         RRP          | 1.11     | 4.72     | 9.72
         HoG          | 0.52     | 2.51     | 5.85
         A-KAZE       | 3.98     | 6.11     | 8.14
         PMM          | 0.64     | 2.69     | 5.82
         PM           | 0.37     | 2.21     | 4.26


Fig. 13. ROC curves of partial fingerprint matching on partial fingerprint datasets: (a) 6.9×6.5 mm2, (b) 8.1×7.7 mm2, and (c) 9.8×9.3 mm2 of FVC2002 DB1 and (d) 6.9×6.9 mm2, (e) 7.7×7.7 mm2, and (f) 9.3×9.3 mm2 of FVC2002 DB3


Fig. 14. ROC curves of partial fingerprint matching on partial fingerprint datasets: (a) 6.9×6.9 mm2, (b) 7.7×7.7 mm2, and (c) 9.3×9.3 mm2 of FVC2004 DB1 and (d) 6.5×6.5 mm2, (e) 7.7×7.7 mm2, and (f) 9.3×9.3 mm2 of FVC2004 DB2


Fig. 15. ROC curves of partial fingerprint matching on BERC partial fingerprint database: (a) 7.3×7.3 mm2, (b) 8.1×8.1 mm2, and (c) 9.8×9.8 mm2

Although the proposed approach considerably improved partial fingerprint matching, false reject cases occurred when two partial images of the same finger barely overlapped or exhibited large textural differences (see Fig. 16(a)). False reject errors were also found when attempting to match low-quality images of the same finger. On the other hand, false accept cases were observed when two partial images of different fingers had similar ridge flows (see Fig. 16(b)). Since RSFs are extracted along the ridges, some of them can be matched if the underlying ridges have similar orientations.

Fig. 16. Examples of matching errors: (a) false reject cases due to small overlapped region, large textural difference, and low image quality, and (b) false accept cases due to similar ridge flows (red squares: matched minutia pairs, blue stars: matched RSF pairs)

4.4. Matching Performance According to the Number of Enrolled Impressions

In this experiment, the partial matching performance is evaluated according to the number of enrolled impressions. The test was performed using four different partial fingerprint datasets generated from the FVC2002 (DB1 and DB3) and FVC2004 (DB1 and DB2) databases (see Table 5 and Table 6). Among the 20 template partial images of each finger, N partial images were selected for the tests (N = 10, 15, and 20). A total of 2000 genuine matching and 4950 imposter matching tests were conducted on each partial fingerprint dataset. The proposed matcher was compared with five different matching approaches as described in Section 4.3.
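For context, evaluating a probe against the N enrolled impressions of the claimed finger requires fusing the N per-template scores. The max-rule sketch below is an assumption for illustration, since the paper does not state the fusion rule, and match_pair() is a hypothetical stand-in for any 1:1 matcher such as the proposed one:

from typing import Callable, Sequence

def verify(probe, templates: Sequence, match_pair: Callable, threshold: float) -> bool:
    # Compare the probe against every enrolled template of the claimed finger and
    # keep the best score (max-rule fusion, assumed here).
    best_score = max(match_pair(probe, template) for template in templates)
    return best_score >= threshold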

Table 5 Matching performance according to the number of enrolled images (FVC2002 DB1 & DB3)

Simulated DB                 FVC2002 DB1 (6.9 × 6.5 mm2)    FVC2002 DB3 (6.9 × 6.9 mm2)
# of enrolled impressions    10       15       20            10       15       20
EER (%)   CMM                8.29     6.58     5.47          8.06     6.86     6.44
          MCC                30.39    27.06    25.26         25.15    23.46    22.61
          RRP                9.04     7.44     6.65          9.25     8.12     7.58
          HoG                6.64     4.95     3.87          6.76     6.30     5.86
          A-KAZE             8.19     5.95     4.95          18.90    17.09    16.00
          PM                 3.90     2.60     1.20          5.34     4.25     3.90

Table 6 Matching performance according to the number of enrolled images (FVC2004 DB1 & DB2)

Simulated DB                 FVC2004 DB1 (6.9 × 6.9 mm2)    FVC2004 DB2 (6.5 × 6.5 mm2)
# of enrolled impressions    10       15       20            10       15       20
EER (%)   CMM                9.88     8.30     6.02          10.94    9.10     7.04
          MCC                24.28    21.11    19.20         30.90    27.18    25.42
          RRP                11.63    9.73     8.36          10.59    8.93     7.73
          HoG                8.35     6.82     5.23          8.23     7.21     5.65
          A-KAZE             18.88    17.24    15.80         29.64    27.60    25.70
          PM                 7.61     5.80     3.35          7.04     5.73     3.85

The matching performance of all matching methods improved as more impressions were enrolled. In practice, however, the number of enrolled impressions is limited by memory space and user convenience. On the four datasets, the proposed method with 10 or 15 enrolled impressions achieved better performance than most of the other methods with 20 enrolled impressions, which shows that the load of the enrollment process can be reduced when employing the proposed fingerprint matcher.

In addition, the processing time for feature extraction and 1:1 matching was measured on the above four partial fingerprint datasets (see Table 7). All algorithms were implemented using Microsoft Visual Studio 2013, and OpenCV 3.1 was used for the A-KAZE feature extraction and descriptor generation. All tests were conducted on a PC (Intel Core i7, 3.20 GHz, 16.0 GB RAM) running the Microsoft Windows 10 operating system. The processing time of the proposed feature extraction (extraction of both minutiae and RSFs) was 14.82 ms on average, and that of the proposed matching was approximately 4.72 ms. The processing time of the proposed method was longer than those of the minutiae-based approaches because of the additional use of RSFs. However, the proposed feature extraction was faster than A-KAZE feature extraction, and the proposed matching was faster than the RRP-based and HoG-based matching methods.
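For reference, the A-KAZE baseline relies on OpenCV's standard AKAZE interface. The Python snippet below is an illustrative sketch of such a baseline (the authors used OpenCV 3.1 from Visual Studio 2013; the file name and matcher settings here are assumptions, not their exact configuration):

import cv2

# Read a partial fingerprint image in grayscale (the file name is hypothetical).
img = cv2.imread("partial_fingerprint.png", cv2.IMREAD_GRAYSCALE)

# Detect A-KAZE keypoints and compute their binary descriptors.
akaze = cv2.AKAZE_create()
keypoints, descriptors = akaze.detectAndCompute(img, None)

# Descriptors from two images can then be compared, e.g. with a Hamming-distance
# brute-force matcher (an assumed matching setup, not necessarily the authors').
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)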

Table 7 Processing time of the proposed method and five other methods on four partial fingerprint datasets

                 FVC2002 DB1 (6.9 × 6.5 mm2)   FVC2002 DB3 (6.9 × 6.9 mm2)   FVC2004 DB1 (6.9 × 6.9 mm2)   FVC2004 DB2 (6.5 × 6.5 mm2)
Avg. time (ms)   Feature      Matching         Feature      Matching         Feature      Matching         Feature      Matching
                 extraction   (1:1)            extraction   (1:1)            extraction   (1:1)            extraction   (1:1)
CMM              0.61         0.79             0.64         0.86             0.64         1.12             0.56         1.02
MCC              0.61         1.27             0.64         1.24             0.64         1.23             0.56         1.07
RRP              1.11         4.54             1.12         4.91             1.20         6.91             1.01         7.14
HoG              0.62         15.87            0.65         17.36            0.65         16.42            0.57         13.27
A-KAZE           89.43        2.89             82.96        2.48             131.39       5.73             102.95       3.88
PM               14.73        4.15             16.51        4.80             15.04        3.88             12.99        6.06

5. Conclusion

Partial fingerprint images acquired from a small area-type fingerprint sensor commonly contain insufficient minutiae for accurate verification. This paper proposes a partial fingerprint-matching method incorporating new RSFs with minutiae. These RSFs are defined on ridge segments where concave or convex edges are observed, and they are available in conventional 500 dpi images. Because the RSFs can be extracted from any ridge in a fingerprint image, ridge information is extensively incorporated into the partial fingerprint matching without the need to store the entire fingerprint image (texture) as a template. In addition, compared to other approaches based on the entire ridge contour or image (texture), our approach is believed to be more appropriate for securing fingerprint templates and saving memory space. The proposed ridge features are represented in the same form as conventional minutia features, which facilitates a simple matching process based on the ridge features and minutiae. The RSFs are effectively utilized in the proposed matching scheme: they are used to accurately find corresponding minutia pairs, and all RSFs in the overlapped area of the fingerprint images are compared to further improve the matching accuracy.

The proposed method for partial fingerprint matching was evaluated on the FVC2002 (DB1 and DB3), FVC2004 (DB1 and DB2) and BERC databases. The experimental results showed that the proposed approach achieved better matching performance than conventional minutiae-based methods and other partial fingerprint matching approaches using non-minutia features. The results validated the discriminative ability of the RSFs and the usefulness of the proposed matching scheme. However, it is difficult to consistently extract RSFs from low-quality images. In addition, if a partial fingerprint image does not contain any minutia points due to its low quality or small size, the RSF matching stage cannot be properly performed even though the image contains RSFs. Accordingly, in future work we plan to further improve the RSF extraction in terms of consistency and accuracy, in order to maximize the performance gain from the RSFs. Partial fingerprint matching using only RSFs also needs to be considered for matching partial fingerprint images that do not contain any minutia points.

Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No. 2016R1A2B4006320).

References

Alcantarilla, P. F., Nuevo, J., & Bartoli, A. (2013). Fast explicit diffusion for accelerated features in nonlinear scale spaces. British Machine Vision Conference, 13.1–13.11.
Ashbaugh, D. R. (1999). Quantitative-qualitative friction ridge analysis: An introduction to basic and advanced ridgeology. CRC Press.
Benhammadi, F., Amirouche, M. N., Hentous, H., Bey Beghdad, K., & Aissani, M. (2007). Fingerprint matching from minutiae texture maps. Pattern Recognition, 40(1), 189–197.
Cappelli, R., Ferrara, M., & Maltoni, D. (2010). Minutia Cylinder-Code: A new representation and matching technique for fingerprint recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(12), 2128–2141.
Chen, Y., & Jain, A. K. (2007). Dots and incipients: Extended features for partial fingerprint matching. In Biometrics Symposium (pp. 1–6).
Chikkerur, S., Cartwright, A. N., & Govindaraju, V. (2005). K-plet and coupled BFS: A graph based fingerprint representation and matching algorithm. International Conference on Biometrics, LNCS 3832, 309–315.
Choi, H., Choi, K., & Kim, J. (2011). Fingerprint matching incorporating ridge features with minutiae. IEEE Transactions on Information Forensics and Security, 6(2), 338–345.
Fang, G., Srihari, S. N., Srinivasan, H., & Phatak, P. (2007). Use of ridge points in partial fingerprint matching. Proceedings of SPIE, 6539, 65390D.
Feng, J. (2008). Combining minutiae descriptors for fingerprint matching. Pattern Recognition, 41(1), 342–352.
Feng, J., Ouyang, Z., & Cai, A. (2006). Fingerprint matching using ridges. Pattern Recognition, 39(11), 2131–2140.
Hatano, T., Adachi, T., Shigematsu, S., Morimura, H., Onishi, S., Okazaki, Y., & Kyuragi, H. (2002). A fingerprint verification algorithm using the differential matching rate. In International Conference on Pattern Recognition 2002 (pp. 799–802).
He, Y., Tian, J., Li, L., Chen, H., & Yang, X. (2006). Fingerprint matching based on global comprehensive similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(6), 850–862.
Hong, L., Wan, Y., & Jain, A. (1998). Fingerprint image enhancement: Algorithm and performance evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8), 777–789.
Jain, A., Chen, Y., & Demirkus, M. (2007). Pores and ridges: High-resolution fingerprint matching using Level 3 features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(1), 15–27.
Jain, A. K., & Prabhakar, S. (2001). Fingerprint matching using minutiae and texture features. Proceedings of the International Conference on Image Processing, 282–285.
Jea, T. Y., & Govindaraju, V. (2005). A minutia-based partial fingerprint recognition system. Pattern Recognition, 38(10), 1672–1684.
Jiang, X., & Yau, W.-Y. (2000). Fingerprint minutiae matching based on the local and global structures. Proceedings of the 15th International Conference on Pattern Recognition (ICPR 2000), 2, 1038–1041.
Kryszczuk, K., Drygajlo, A., & Morier, P. (2004). Extraction of level 2 and level 3 features for fragmentary fingerprints. 2nd COST275 Workshop, 83–88.
Lee, D., Choi, K., & Kim, J. (2002). A robust fingerprint matching algorithm using local alignment. Object Recognition Supported by User Interaction for Service Robots, 3, 803–806.
Liu-Jimenez, J., Ros-Gomez, R., Sanchez-Reillo, R., & Fernandez-Saavedra, B. (2016). Small fingerprint scanners used in mobile devices: The impact on biometric performance. IET Biometrics, 5(1), 28–36.
Maio, D., Maltoni, D., Cappelli, R., Wayman, J. L., & Jain, A. K. (2002). FVC2002: Second fingerprint verification competition. Proceedings of the 16th International Conference on Pattern Recognition (ICPR 2002), 3, 811–814.
Maio, D., Maltoni, D., Cappelli, R., Wayman, J. L., & Jain, A. K. (2004). FVC2004: Third fingerprint verification competition. Biometric Authentication. Springer Berlin Heidelberg.
Maltoni, D., Maio, D., Jain, A. K., & Prabhakar, S. (2009). Handbook of Fingerprint Recognition. Springer Science & Business Media.
Mathur, S., Vjay, A., Shah, J., Das, S., & Malla, A. (2016). Methodology for partial fingerprint enrollment and authentication on mobile devices. 2016 International Conference on Biometrics (ICB 2016).
Nanni, L., & Lumini, A. (2008). Local binary patterns for a hybrid fingerprint matcher. Pattern Recognition, 41(11), 3461–3466.
Nanni, L., & Lumini, A. (2009). Descriptors for image-based fingerprint matchers. Expert Systems with Applications, 36(10), 12414–12422.
Ouyang, Z., Feng, J., Su, F., & Cai, A. (2006). Fingerprint matching with rotation-descriptor texture features. 18th International Conference on Pattern Recognition, 4, 417–420.
Peralta, D., Galar, M., Triguero, I., Paternain, D., García, S., Barrenechea, E., Benitez, J. M., Bustince, H., & Herrera, F. (2015). A survey on fingerprint minutiae-based local matching for verification and identification: Taxonomy and experimental evaluation. Information Sciences, 315, 67–87.
Ratha, N. K., Bolle, R. M., Pandit, V. D., & Vaish, V. (2000). Robust fingerprint authentication using local structural similarity. Fifth IEEE Workshop on Applications of Computer Vision, 29–34.
Ross, A., Jain, A. K., & Reisman, J. (2002). A hybrid fingerprint matcher. In International Conference on Pattern Recognition 2002 (Vol. 3, pp. 795–798).
Schmid, C., Mohr, R., & Bauckhage, C. (2000). Evaluation of interest point detectors. International Journal of Computer Vision, 37(2), 151–172.
Tico, M., & Kuosmanen, P. (2003). Fingerprint matching using an orientation-based minutia descriptor. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(8), 1009–1014.
Venkataramani, K., Keskinoz, M., & Kumar, B. V. K. V. (2005). Soft information fusion of correlation filter output planes using support vector machines for improved fingerprint verification performance. Proceedings of SPIE, 5779, 184–195.
Wang, X., Li, J., & Niu, Y. (2007). Fingerprint matching using OrientationCodes and PolyLines. Pattern Recognition, 40(11), 3164–3177.
Yamazaki, M., Li, D., Isshiki, T., & Kunieda, H. (2015). SIFT-based algorithm for fingerprint authentication on smartphone. 2015 6th International Conference on Information and Communication Technology for Embedded Systems (IC-ICTES 2015), 4–8.
Zanganeh, O., Srinivasan, B., & Bhattacharjee, N. (2015). Partial fingerprint matching through region-based similarity. 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2014).
Zhao, F., & Tang, X. (2007). Preprocessing and postprocessing for skeleton-based fingerprint minutiae extraction. Pattern Recognition, 40(4), 1270–1281.
Zhao, Q., Zhang, D., Zhang, L., & Luo, N. (2010). High resolution partial fingerprint alignment using pore-valley descriptors. Pattern Recognition, 43(3), 1050–1061.