Pattern Recognition 39 (2006) 839 – 855 www.elsevier.com/locate/patcog

Singular point detection by shape analysis of directional fields in fingerprints

Chul-Hyun Park a,∗, Joon-Jae Lee b, Mark J.T. Smith a, Kil-Houm Park c

a School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA
b Department of Computer and Information Engineering, Dongseo University, San 69-1, Jurye2-dong, Sasang-gu, Busan 617-833, Republic of Korea
c School of Electrical Engineering and Computer Science, Kyungpook National University, 1370 Sangyeok-dong, Buk-gu, Daegu 702-701, Republic of Korea

Received 29 April 2004; received in revised form 17 March 2005; accepted 12 October 2005

Abstract

This paper presents a new fingerprint singular point detection method that is type-distinguishable and applicable to fingerprint images of various resolutions. The proposed method detects singular points by analyzing the shapes of the local directional fields of a fingerprint image. Using predefined rules, all types of singular points (upper core, lower core, and delta points) can be extracted accurately and labeled by type. Arch-type fingerprints contain no singular points, yet reference points must still be detected for registration; therefore, we also propose a new reference point detection method for arch-type fingerprints. Experiments on two public databases with different resolutions (FVC2000 2a and FVC2002 2a) demonstrate that the proposed method locates each type of singular point, and the reference points of arch-type fingerprints, with high accuracy regardless of image resolution.
© 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.

Keywords: Singular point; Fingerprint; Reference point; Directional field; Classification; Alignment

1. Introduction

The use of fingerprint-based personal authentication and identification technologies is increasing. The surge of these technologies can be seen in forensics, commercial industry, and government agencies, to mention a few. Fingerprints are attractive for identification because they characterize an individual uniquely and their configurations do not change over the life of the individual, except in cases of bruises, cuts, or other such alterations of the fingertips [1,2]. New hardware technologies have also been developed recently for the acquisition of fingerprint images [1,3], which in turn have provided good alternatives for personal identification over other biometric inputs and conventional methods such as passwords, ID cards, and keys [4].

In general, personal verification or identification based on fingerprints consists of image acquisition, feature extraction, matching, and a final decision. For a verification or identification system to be robust to image rotation and translation, the input fingerprints and the fingerprints enrolled in the archive database must be aligned rotationally and translationally during the overall process. Many fingerprint alignment methods have been proposed. One of the most commonly used techniques is the alignment approach that exploits singular (core and delta) point information [5–11]. A core point can be defined as the topmost (upper core) or bottommost (lower core) point of the innermost curving ridge line, whereas a delta point is defined as the point from which the ridge branches out in three directions to form a delta shape.

∗ Corresponding author. Department of Electrical and Computer Engineering, The Ohio State University, 2015 Neil Avenue, Columbus, OH 43210, USA. Tel.: +1 614 292 3092, +1 765 494 3539; fax: +1 765 494 3544.
E-mail addresses: [email protected], [email protected] (C.-H. Park), [email protected] (J.-J. Lee), [email protected] (M.J.T. Smith), [email protected] (K.-H. Park).

0031-3203/$30.00 © 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved. doi:10.1016/j.patcog.2005.10.021


Fig. 1. Singular points of a fingerprint image (∩: upper core point, ∪: lower core point, Δ: delta point).

Both singular point types are illustrated in Fig. 1. Singular point-based fingerprint alignment methods have the advantage that, once the singular points are detected, alignment is relatively simple and classification based on these singular points can be implemented effectively [12,13]. Their disadvantage is that it is not easy to detect the singular points accurately when the quality of the acquired fingerprint is poor.

The conventional singular point detection methods based on Poincaré index analysis are robust to image rotation and relatively simple compared with other methods [12,13]. In addition, since these methods also provide the type of each singular point, they can be used conveniently as part of the fingerprint classification process. If the image quality is poor, however, their detection performance tends to degrade severely. In light of this difficulty with the Poincaré index-based method, symmetry properties have been used instead for singular point detection [11]. In this approach, two complex filters are defined that detect the rotational symmetries of core- and delta-type singularities; these are then applied to the directional field (DF) of the fingerprint image at multiple scales. This method has the advantage of extracting the position and spatial orientation of a singular point simultaneously. However, detection thresholds on the filter responses must be determined, and the performance depends on how accurately these thresholds isolate the prominent symmetry points. Its main disadvantages are (1) that singular points to be used as reference points cannot be detected in fingerprint images that lack sufficiently strong filter responses, and (2) that, unlike the Poincaré index-based approaches, it cannot be used for fingerprint classification because it takes the prominent symmetry points as the singular points. The approach that locates the topmost loop-type singularity using the sine components of two adjacent regions is robust to noise because it is based on multi-resolution analysis [8], but the predefined semicircular mask it uses makes it difficult to detect reference points near the image border and to accurately locate the reference points of rotated fingerprint images. The method in Ref. [14], which is somewhat similar to ours, detects singular points by using the directional histograms of the directional image of a fingerprint. The authors reported that their method is robust to noise, but their corpus of test fingerprints was small (only 25). When we tested this method on a large database, the results fell short of expectation.

In this paper, we propose a conceptually simple and robust singular point extraction method, which not only can be used both for detecting reference points for fingerprint alignment and for rough classification of fingerprint images, but also works well regardless of image resolution. Singular point regions embody directional anisotropy, whereas most regions in fingerprints manifest strong local directionality in a particular orientation. In addition, we have found that the directional patterns around singular points have a specific structure that characterizes the particular type. The proposed method examines the local characteristics of the DF of the fingerprint image and then checks whether the local DF meets the requirements for each type of singular point in order to extract singular points.
Since the requirements are defined according to the characteristics of the DF around singular points, the singular points are not only detected very accurately, but the type of each detected singular point is also obtained accurately. This paper additionally presents a reference point detection method for arch-type fingerprints, because arch-type fingerprints contain no singular points of any type. Since the DF of a fingerprint is calculated on a block-by-block basis, with the block size set from the average inter-ridge distance, the proposed method can be applied to fingerprints of different resolutions almost without modifying the parameters or thresholds. Building on previous work on the computation of the DF, we focus on presenting a singular point detection method based on shape analysis of DFs.

The remainder of this paper is organized as follows: in Section 2, the proposed singular point detection method is described. In Section 3, the proposed reference point extraction method for arch-type fingerprints is presented. Section 4 presents the experimental results, and conclusions are given in Section 5.

2. Singular point extraction

The proposed singular point detection algorithm consists of the following steps: the input fingerprint image is first segmented into foreground and background regions, after which the DF of the foreground region is calculated, taking the average inter-ridge distance into account. Candidate regions for singular points are then detected from the calculated DF. Finally, the singular points are extracted from the candidate regions. These steps are discussed next in detail.

2.1. Fingerprint segmentation

The boundary region surrounding the actual fingerprint in the image inherently contains discontinuities in the ridge pattern, since beyond that boundary is a background with relatively constant (but often noisy) pixel intensity. To address the border discontinuities, the fingerprint is segmented from the background, and the DF is then computed only for the foreground region. The average gradient magnitude or the variance of each image block can be used to perform the segmentation [15,16].

2.2. Directional field estimation

Since fingerprints have strong directionality in most regions, DFs have been widely used in various fingerprint image-processing tasks such as singular point detection, image enhancement, and feature extraction. In the proposed method as well, the DF plays a critical role in extracting singular points. There have been many approaches to calculating the DF of a fingerprint image. Karu et al. use predefined local windows to calculate the magnitude of a certain direction [13]. Lee et al. use Gabor filter banks to obtain the local ridge direction [17]. Ratha et al. make use of local gradients to compute the principal direction of a local block [16]; Bazen et al. have shown that this general approach is equivalent to principal component analysis (PCA) [10]. In order to estimate the local ridge directions, we adopt the gradient-based approach of Ref. [16] because it is computationally efficient and captures the principal direction of local ridges effectively. In the proposed method, the DF is calculated on a block-by-block basis, and the block size is set to the average inter-ridge distance [18] so that at least one ridge and valley pair is included. The raw DF is prone to be noisy, so it needs to be smoothed [13]. The directions cannot be averaged directly by a mean filter because of their modular arithmetic; therefore, the DF is smoothed by converting it into vector form, after which the two components of the vectors are averaged separately, as in Ref. [13]. Each direction value is then quantized into one of eight directions to keep the procedure simple.
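The sketch below illustrates this block-wise, gradient-based DF estimation, the vector-form smoothing, and the eight-level quantization. It is a minimal sketch under stated assumptions, not the authors' implementation: the function names, the use of NumPy/SciPy, and the 3 × 3 smoothing neighborhood are ours, and the block size W is assumed to have been set from the average inter-ridge distance.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def block_directional_field(img, W):
    """Block-wise ridge directions in degrees, in [0, 180)."""
    gy, gx = np.gradient(img.astype(float))            # pixel gradients (rows, cols)
    rows, cols = img.shape[0] // W, img.shape[1] // W
    theta = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            bx = gx[i*W:(i+1)*W, j*W:(j+1)*W]
            by = gy[i*W:(i+1)*W, j*W:(j+1)*W]
            # principal gradient direction of the block (equivalent to PCA [10]);
            # the exact sign convention depends on the image coordinate system
            grad_dir = 0.5 * np.degrees(np.arctan2(2.0 * np.sum(bx * by),
                                                   np.sum(bx * bx - by * by)))
            theta[i, j] = (grad_dir + 90.0) % 180.0     # ridge direction is perpendicular
    return theta

def smooth_df(theta):
    """Smooth the DF by averaging doubled-angle vector components (mod-180 safe)."""
    vx = uniform_filter(np.cos(np.radians(2 * theta)), size=3, mode="nearest")
    vy = uniform_filter(np.sin(np.radians(2 * theta)), size=3, mode="nearest")
    return (0.5 * np.degrees(np.arctan2(vy, vx))) % 180.0

def quantize_df(theta, step=22.5):
    """Quantize each block direction to one of 0, 22.5, ..., 157.5 degrees."""
    return (np.round(theta / step) * step) % 180.0
```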
2.3. Singular point detection

The proposed singular point detection scheme attempts to isolate upper core points (the topmost points of the innermost upward-curving (∩) ridges), lower core points (the bottommost points of the innermost downward-curving (∪) ridge lines), and delta points (the points from which a ridge branches out in three directions to form a delta shape (Δ)). These singular points are illustrated in Fig. 1. The idea of the proposed method comes from the observation that the directional patterns around singular points have a prescribed structure that characterizes the particular type. Specifically, the directional pattern of the two consecutive blocks around a singular point, which we will call the "candidate region", has a shape similar to a cap (∩) or a cup (∪), and the directions of the neighboring blocks around the singular point lie within certain ranges according to the type of the singular point. These observations imply that the location and the type of a singular point can be obtained by examining the directions of the candidate-region blocks and their neighboring blocks.

In the proposed method, two kinds of candidate regions are defined. The first is the cap-shaped candidate region, consisting of two consecutive blocks in which the direction of the left block is between 0° and 90°, the direction of the right block is between 90° and 180°, and the inner angle between the two blocks (see Fig. 2) is between 45° and 135°, inclusive. Cap-shaped candidate regions are used for extracting upper core and delta points. The second kind is the cup-shaped candidate region, in which the direction of the left block is between 90° and 180°, the direction of the right block is between 0° and 90°, and the inner angle is again between 45° and 135°, inclusive. Cup-shaped candidate regions are used for detecting lower core points. A singular region containing a delta point usually shows a cap shape even when the image is slightly rotated, but if the image is rotated substantially, the shape of the delta region becomes similar to a cup shape, as illustrated in Fig. 3. Therefore, cup-shaped candidate regions are also used for locating delta points in strongly rotated fingerprint images. The set of all quantized direction pairs is illustrated in Fig. 4. Note that the pairs (0°, 90°) and (90°, 0°) belong to both kinds of candidate region.

842

C.-H. Park et al. / Pattern Recognition 39 (2006) 839 – 855


Fig. 2. Inner angle of the two consecutive blocks. Inner angles of (a) cap-shaped candidate region and (b) cup-shaped candidate region.


Fig. 3. Rotation of a delta point region and its effect on the local DF: (a) sample delta region, and delta regions rotated by (b) 20◦ and (c) 45◦ , respectively.

[Fig. 4 comprises two panels that enumerate the quantized direction pairs (θ1, θ2) forming cap-shaped candidate regions and cup-shaped candidate regions, respectively; the pairs (0°, 90°) and (90°, 0°), marked with an asterisk (*), appear in both panels.]

Fig. 4. The direction pairs of the candidate regions for singular points. The pairs with an asterisk (*) belong to both candidate regions.

Let the directions of two consecutive blocks be θ1 and θ2, respectively. Then the two consecutive blocks form a cap-shaped candidate region if at least one of the following three conditions is satisfied:

−135° ≤ (θ1 − θ2) ≤ −45° and θ1 ≠ 0° and θ2 ≠ 0°,   (1)

−135° ≤ (θ1 − θ2) ≤ −90° and θ1 = 0°,   (2)

45° ≤ (θ1 − θ2) ≤ 90° and θ2 = 0°.   (3)


In a similar way, two consecutive blocks form a cup-shaped candidate region if they satisfy at least one of the following conditions:

45° ≤ (θ1 − θ2) ≤ 135° and θ1 ≠ 0° and θ2 ≠ 0°,   (4)

−90° ≤ (θ1 − θ2) ≤ −45° and θ1 = 0°,   (5)

90° ≤ (θ1 − θ2) ≤ 135° and θ2 = 0°.   (6)
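As a concrete illustration, the sketch below applies conditions (1)–(6), as reconstructed above, to a quantized block DF and marks each cap- or cup-shaped candidate pair. It is a minimal sketch, not the authors' code: the function names and the NumPy layout are assumptions, and `foreground` is assumed to be the block-level segmentation mask of Section 2.1.

```python
import numpy as np

def is_cap_candidate(th1, th2):
    """Conditions (1)-(3): cap-shaped pair of quantized directions (degrees, [0, 180))."""
    d = th1 - th2
    return ((-135 <= d <= -45 and th1 != 0 and th2 != 0) or   # condition (1)
            (-135 <= d <= -90 and th1 == 0) or                # condition (2)
            (45 <= d <= 90 and th2 == 0))                     # condition (3)

def is_cup_candidate(th1, th2):
    """Conditions (4)-(6): cup-shaped pair."""
    d = th1 - th2
    return ((45 <= d <= 135 and th1 != 0 and th2 != 0) or     # condition (4)
            (-90 <= d <= -45 and th1 == 0) or                 # condition (5)
            (90 <= d <= 135 and th2 == 0))                    # condition (6)

def candidate_images(theta, foreground):
    """Binary maps marking the left block of each cap- or cup-shaped candidate pair;
    pairs touching the background are excluded, as described in Section 2.3."""
    rows, cols = theta.shape
    cap = np.zeros((rows, cols), np.uint8)
    cup = np.zeros((rows, cols), np.uint8)
    for i in range(rows):
        for j in range(cols - 1):
            if not (foreground[i, j] and foreground[i, j + 1]):
                continue
            if is_cap_candidate(theta[i, j], theta[i, j + 1]):
                cap[i, j] = 1
            if is_cup_candidate(theta[i, j], theta[i, j + 1]):
                cup[i, j] = 1
    return cap, cup
```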

Fig. 4 is a graphical representation of the direction pairs that satisfy conditions (1)–(3) or (4)–(6). Since the directions in the border area between the foreground and the background are likely to be inexact, two consecutive blocks are excluded from the candidate regions if either of them belongs to the background, even when they satisfy the candidate-region conditions. By assigning the value 1 to the candidate regions and 0 to all other regions, a binary candidate-region image of the fingerprint is obtained.

2.3.1. Notations

The notations used in this paper refer to angles, ranges, and positions of blocks, and most are given in the form X_{i+n,j}. If X is θ, the notation denotes the angle of a block; if X is R, it denotes the range of an angle. The subscript (i + n, j) indicates the position of the block. If i is t, it refers to the topmost blocks of a connected candidate region, and if i is b, it refers to the bottommost blocks. Within the subscript, n is the vertical position relative to the current block: −1 denotes the upper neighboring block and +1 the lower neighboring block. Finally, j indicates the horizontal position within the two consecutive candidate blocks: j = 1 denotes the left block and j = 2 the right block. As exceptions, j may also be m or h, denoting, respectively, the middle angle of the two consecutive candidate blocks and the angle perpendicular to that middle angle. When j is h1 or h2, it denotes the perpendicular angle h adjusted by a tolerance angle, which we describe later.

2.3.2. Upper core and delta point detection from cap-shaped candidate regions

First, the proposed method performs connected component labeling [19] on the binary image in which the candidate regions have the value 1 and the other regions the value 0. During the labeling process, the positions of the topmost and bottommost two consecutive blocks of each connected candidate region are obtained. The bottommost blocks of each connected region are used to detect upper core points, and the topmost ones are used to extract delta points. This is because, in a connected candidate region containing an upper core point, the bottommost blocks have the smallest inner angle (the highest curvature) and the inner angle of the two consecutive blocks increases going upwards, whereas in a connected candidate region containing a delta point, the topmost blocks have the smallest inner angle (the highest curvature) and the inner angle increases going downwards.

Let the directions of the bottommost two consecutive blocks of a connected region be θ_{b,1} and θ_{b,2}, from left to right. Let the directions of the upper neighboring blocks of the bottommost blocks be θ_{b−1,1} and θ_{b−1,2}, and let the directions of the lower neighboring blocks be θ_{b+1,1} and θ_{b+1,2} (refer to Fig. 5(d) for the positions of the neighboring blocks). Then, for the bottommost blocks of the connected candidate region to be a singular region containing an upper core point, the directions of the upper and lower neighboring blocks must satisfy the following conditions:

• The directions of the lower neighboring blocks must be within the range R_{b+1}, as illustrated in Fig. 6.
• The directions of the upper neighboring blocks should be within the ranges R_{b−1,1} and R_{b−1,2}, respectively, as shown in Fig. 6.

In the second condition, since the upper neighboring blocks of bottommost blocks containing an upper core point have a smaller curvature than the bottommost blocks and their directional pattern is still a cap shape, the directional range is bounded by the line θ_{b,h} that is perpendicular to the middle direction of θ_{b,1} and θ_{b,2}, as shown in Fig. 6. The idea behind the conditions is simple, but the direction arithmetic is modulo 180°. The mathematical expressions of the conditions are as follows:

θ_{b,1} ≤ θ_{b+1,1} < θ′_{b,2} and θ_{b,1} < θ_{b+1,2} ≤ θ′_{b,2},   (7)


Fig. 5. Upper core point and delta point detection process of a fingerprint image: (a) original image, (b) DF image, (c) cap-shaped candidate regions, (d) candidate regions superimposed on the DF image and the positions of the bottommost and topmost blocks and their neighboring blocks of each region, and (e) detected upper core point and delta point (⊗: upper core point, Δ: delta point).


Fig. 6. Directional ranges for the upper and lower neighboring blocks of the bottommost blocks of a connected region that contains an upper core point (θ_{b,1}: direction of the left block of the bottommost blocks, θ_{b,2}: direction of the right block of the bottommost blocks, θ_{b,m}: middle direction of θ_{b,1} and θ_{b,2}, θ_{b,h}: direction perpendicular to θ_{b,m}, R_{b−1,1}: directional range for the left block of the upper neighboring blocks, R_{b−1,2}: directional range for the right block of the upper neighboring blocks, R_{b+1}: directional range for the lower neighboring blocks).

where

θ′_{b,2} = θ_{b,2}   if θ_{b,2} ≠ 0°,
           180°      otherwise,



Fig. 7. Directional ranges for the upper and lower neighboring blocks of the topmost blocks of a connected candidate region that contains a delta point (θ_{t,1}: direction of the left block of the topmost blocks, θ_{t,2}: direction of the right block of the topmost blocks, θ_{t,m}: middle direction of θ_{t,1} and θ_{t,2}, θ_{t,h}: direction perpendicular to θ_{t,m}, θ_{t,h1}, θ_{t,h2}: modifications of θ_{t,h} by the tolerance angle, R_{t+1,1}: directional range for the left block of the lower neighboring blocks, R_{t+1,2}: directional range for the right block of the lower neighboring blocks, R_{t−1}: directional range for the upper neighboring blocks, θ_tol: tolerance angle).

(θ_{b−1,1} ≤ θ_{b,1}) and (θ_{b−1,2} ≥ θ_{b,2} or θ_{b−1,2} = 0°)   if θ_{b,h} = 0°,
(θ_{b−1,1} ≤ θ_{b,1} or θ_{b−1,1} ≥ θ_{b,h}) and (θ_{b,2} ≤ θ_{b−1,2} ≤ θ_{b,h})   else if 90° < θ_{b,h} < 180°,
(θ_{b,h} ≤ θ_{b−1,1} ≤ θ_{b,1}) and (θ_{b−1,2} ≥ θ_{b,2} or θ_{b−1,2} ≤ θ_{b,h})   otherwise,   (8)

where

θ_{b,h} = θ_{b,m} + 90°   if θ_{b,m} < 90°,
          θ_{b,m} − 90°   otherwise,

θ_{b,m} = (θ_{b,1} + θ_{b,2}) / 2.
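The sketch below spells out this upper-core test on a single connected candidate region, using the reconstruction of Eqs. (7) and (8) given above; the inequality directions and the function and variable names are our assumptions, not the authors' code.

```python
def is_upper_core(th_b1, th_b2, th_u1, th_u2, th_l1, th_l2):
    """Test the bottommost candidate pair (th_b1, th_b2) of a connected cap-shaped
    region against Eqs. (7)-(8). th_u1/th_u2 are the upper neighboring block directions,
    th_l1/th_l2 the lower ones; all angles are quantized degrees in [0, 180)."""
    th_b2p = th_b2 if th_b2 != 0 else 180                    # theta'_{b,2}
    # Eq. (7): lower neighbors must lie in the range R_{b+1}
    cond7 = (th_b1 <= th_l1 < th_b2p) and (th_b1 < th_l2 <= th_b2p)
    th_bm = (th_b1 + th_b2) / 2.0                            # middle direction theta_{b,m}
    th_bh = th_bm + 90 if th_bm < 90 else th_bm - 90         # perpendicular direction theta_{b,h}
    # Eq. (8): upper neighbors must lie in the ranges R_{b-1,1} and R_{b-1,2}
    if th_bh == 0:
        cond8 = (th_u1 <= th_b1) and (th_u2 >= th_b2 or th_u2 == 0)
    elif 90 < th_bh < 180:
        cond8 = (th_u1 <= th_b1 or th_u1 >= th_bh) and (th_b2 <= th_u2 <= th_bh)
    else:
        cond8 = (th_bh <= th_u1 <= th_b1) and (th_u2 >= th_b2 or th_u2 <= th_bh)
    return cond7 and cond8
```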

After the bottommost (two consecutive) blocks of a connected candidate region are checked against the upper core conditions, the delta point test is performed using the topmost blocks and their neighboring blocks. The neighboring blocks of a delta point show the reversed directional pattern of those of an upper core point; hence, the condition for the topmost blocks to be a singular region containing a delta point can be expressed simply by exchanging the conditions for the upper neighboring blocks with those for the lower neighboring blocks. In the delta point detection, the directional ranges for the lower neighboring blocks of the topmost blocks are extended by the tolerance angle (θ_tol), as illustrated in Fig. 7, in order to cover cases where the directions are slightly distorted by noise. In most cases, delta points tend to be located near the border between the foreground and background areas, and generally the nearer a local area is to the image border, the poorer the local image quality becomes (typically because of finger pressure differences). Consequently, the missing rate of delta points can be slightly reduced by allowing the tolerance range. In the experiments, we set the tolerance angle to 22.5°. If the angles of the upper neighboring blocks (θ_{t−1,1}, θ_{t−1,2}) are within the range R_{t−1} in Fig. 7, and the angles of the lower neighboring blocks (θ_{t+1,1}, θ_{t+1,2}) are within the ranges R_{t+1,1} and R_{t+1,2} in Fig. 7, respectively, then the topmost blocks are determined to be the singular blocks containing a delta point. The mathematical expressions for these conditions are given in Appendix A.1.

Once the bottommost or topmost blocks are identified as singular blocks containing an upper core point or a delta point, respectively, the position of the singular point is defined as the center point of the topmost row of the blocks: the horizontal position of the upper core point or delta point is obtained by dividing the sum of the horizontal coordinates of the pixels in the topmost row by their total number.


Fig. 8. Singular point detection procedure: (a) original image, (b) candidate region superimposed on the DF image, (c) detected singular points (⊗: upper core point, ⊕: lower core point, Δ: delta point).

All the values of the vertical position and the pixel count of the topmost row of each connected candidate region are easily computed during the labeling process.

2.3.3. Lower core and delta point detection from cup-shaped candidate regions

The lower core point detection procedure can be considered a reversed form of the upper core detection process. The lower core points are extracted from cup-shaped candidate regions, as shown in Fig. 8, instead of cap-shaped candidate regions. After the binary candidate-region image in which cup-shaped candidate regions have the value 1 and the other regions the value 0 is obtained, connected component labeling is performed on the binary image. From each connected candidate region, the positions of the topmost blocks and their neighboring blocks are calculated. In the lower core point detection process, the tolerance range (22.5°) is also added to the directional ranges for the lower neighboring blocks, as in delta point detection. Generally, lower core points tend to be of lower quality than upper core points and are likely to be located near the border area; unless the tolerance range is used, the probability of missing lower core points therefore becomes high. But since allowing the tolerance range can generate false lower core points, the proposed method restricts the search to the foreground area morphologically eroded by one block size.

The two consecutive blocks of a delta region may form a cup-shaped directional pattern if the fingerprint is strongly rotated, as shown in Fig. 3(c); hence, the rules for delta point detection are also applied to the cup-shaped candidate regions. The delta point detection procedure for cup-shaped candidate regions is an exactly reversed form of that for cap-shaped candidate regions, except that the decision of whether a delta point exists in the candidate region is made from the bottommost blocks and the position of the delta point is determined from the bottommost row of the singular blocks, which maintains consistency between the positions detected from cap- and cup-shaped candidate regions. If two delta points are detected in the same singular area, the one detected from the cup-shaped candidate region is deleted. If input fingerprints can be assumed not to be strongly rotated, the delta detection procedure for cup-shaped candidate regions can be omitted. The mathematical details of the conditions for lower core and delta point detection from cup-shaped candidate regions can be found in Appendices A.2 and A.3, respectively.

A sample fingerprint image that contains all kinds of singular points, together with its detected singular points, is shown in Fig. 8. Since the proposed method has rules for every type of singular point, the singular points are not only detected accurately but their type information is also obtained. Because of these characteristics, the proposed method can be used for rough classification of fingerprint images; if the relative positions of the core and delta points are also considered, fine classification can be done as reported in Refs. [12,13]. In addition, since the DF is calculated on a block-by-block basis and the block size is automatically determined by computing the average inter-ridge distance of the fingerprint, the proposed method can be used without modifying any thresholds except those for segmentation.
2.4. Performance improvement

In order to enhance the accuracy and suppress the occurrence of false singular points caused by noise, the proposed method defines some additional rules for determining the positions of the neighboring blocks and the amount of DF smoothing.


Fig. 9. Some examples of the rules for determining the positions of the bottommost or topmost block and their neighboring blocks.

2.4.1. Rule for determining the positions of the neighboring blocks

The proposed method achieves a performance improvement by defining simple rules for determining the positions of the upper and lower neighboring blocks. Let the initial positions of the bottommost two consecutive blocks of a connected candidate region be (i, j), (i, j + 1). Then the final positions of the bottommost blocks p_{b,1}, p_{b,2} and the positions of the upper neighboring blocks p_{b−1,1}, p_{b−1,2} are given as follows:

[ p_{b−1,1}  p_{b−1,2} ]     [ (i−1, j)    (i−1, j+2) ]   if (i−1, j) ∉ R_cand and (i−1, j+1) ∈ R_cand,
[ p_{b,1}    p_{b,2}   ]  =  [ (i, j)      (i, j+2)   ]
                             [ (i−1, j−1)  (i−1, j+1) ]   else if (i−1, j) ∈ R_cand and (i−1, j+1) ∉ R_cand,
                             [ (i, j−1)    (i, j+1)   ]
                             [ (i−1, j)    (i−1, j+1) ]   otherwise,                                    (9)
                             [ (i, j)      (i, j+1)   ]

where R_cand denotes the candidate regions (refer to Fig. 9(a) and (b)). If the directions of the upper neighboring blocks obtained by the above rule are the same as those of the initial bottommost blocks, the vertical coordinate is decremented by 1 until the directions at the modified position differ from those of the initial bottommost blocks, as shown in Fig. 9(d). This rule is added in order to discriminate small vertical directional changes around the singular blocks from noise.

Let the initial positions of the topmost two consecutive blocks of a connected candidate region be (i, j), (i, j + 1). Then the final positions of the topmost blocks p_{t,1}, p_{t,2} and the positions of the lower neighboring blocks p_{t+1,1}, p_{t+1,2} are given as follows:

[ p_{t,1}    p_{t,2}   ]     [ (i, j)      (i, j+2)   ]   if (i+1, j) ∉ R_cand and (i+1, j+1) ∈ R_cand,
[ p_{t+1,1}  p_{t+1,2} ]  =  [ (i+1, j)    (i+1, j+2) ]
                             [ (i, j−1)    (i, j+1)   ]   else if (i+1, j) ∈ R_cand and (i+1, j+1) ∉ R_cand,
                             [ (i+1, j−1)  (i+1, j+1) ]
                             [ (i, j)      (i, j+1)   ]   otherwise.                                    (10)
                             [ (i+1, j)    (i+1, j+1) ]

When the proposed method decides the positions of the lower neighboring blocks of the topmost candidate blocks, the vertical coordinate of the lower neighboring blocks is incremented by 1 until the directions at the modified position differ from the directions of the initial topmost blocks. In addition to these rules, if the middle angle of the two angles constituting a candidate region deviates strongly from the vertical direction (90°), as for the shaded pairs in Fig. 4, the positions of the upper and lower neighboring blocks of the candidate region are shifted by one block in the horizontal direction, as shown in Fig. 9(c). Let the positions of the two consecutive blocks of a connected candidate region be (i, j), (i, j + 1) and let the middle angle be θ_m. Then the positions of the upper neighboring blocks p_{−1,1}, p_{−1,2} and the lower neighboring blocks p_{+1,1}, p_{+1,2} are given as follows:

[ p_{−1,1}  p_{−1,2} ]     [ (i−1, j+1)  (i−1, j+2) ]   if (θ_m − 90°) = −45°,
[ p_{+1,1}  p_{+1,2} ]  =  [ (i+1, j−1)  (i+1, j)   ]
                           [ (i−1, j−1)  (i−1, j)   ]   else if (θ_m − 90°) = 45°,
                           [ (i+1, j+1)  (i+1, j+2) ]
                           [ (i−1, j)    (i−1, j+1) ]   otherwise.                                      (11)
                           [ (i+1, j)    (i+1, j+1) ]


In the proposed method, if the difference between the middle angle and 90◦ is ±45◦ , the candidate region is considered as being significantly rotated and the pairs meeting this condition are (22.5◦ , 67.5◦ ), (112.5◦ , 157.5◦ ), (0◦ , 90◦ ), (90◦ , 0◦ ), (67.5◦ , 22.5◦ ), (157.5◦ , 112.5◦ ) (refer to Fig. 4). Some examples of these additional rules for performance improvement are shown in Fig. 9. 2.4.2. Rule for determining the number of DF smoothing Various kinds of noise, introduced in the acquisition process, can generate false singular points because the local directional pattern affected by the noise could erroneously satisfy the conditions for singular points. Similarly, true singular points can be missed because of noise. Therefore, a method is required to suppress the occurrence of such false singular points or missed singular points. To reduce the effect of noise, smoothing of the DF can be considered naturally as a basic image processing technique. Constraint must be exercised, as excessive smoothing can result in inaccuracy in locating singular points. In the proposed method, the positional relationship of the singular points that can appear on fingerprints is used to determine the proper amount of smoothing. In case that the number of the detected upper core is not 1 or the normal positional relationship is not satisfied between the detected singular points, the DF is smoothed once more. The reason that the number of upper core points only is used as a required condition is because the other singular points (lower core points or delta points) are prone to be unavailable on the acquired fingerprint images. The normal positional relationship here indicates the facts that the left loop-type fingerprints have their delta points on the right of their upper core points, the right loop-type fingerprint have their delta points on the left of their upper core points, and the whorl or twin loop-type fingerprints have their core points between the horizontal coordinates of the delta points. Though these positional relationships may be broken when input images are rotated much, they reduce the occurrence of false singular points. However the maximum number of DF smoothing operations is limited to 5, because too much smoothing brings about inaccurate results. If the number of core points is not unity and the conditions for normal positional relationship between the detected singular points are not satisfied even with maximum smoothing, the proposed method fails in detecting the singular points. If no singular points are detected, it is either the case that no singular points exist or the case that the singular point(s) have been missed. Finally, in the case that the distance between the detected core point and delta point is below a certain threshold (about one block distance), the core (upper core or lower core) and delta pair are considered as being corrupted by noise. Consequently, both of them are removed from the singular point list [13].

3. Reference point detection for arch-type fingerprints

One of the main purposes of extracting singular points in fingerprints is to use them as reference points for registration. Any singular point can serve as a reference point, but in most cases only the upper core points are used because they are more likely to be present in fingerprint images than lower core and delta points. In arch-type fingerprints, however, singular points typically do not exist, so an additional reference point detection method for arch-type fingerprints is required.

In order to extract the reference point of an arch-type fingerprint, the proposed method uses the DF of the fingerprint image, as in singular point detection. First, the method estimates the DF, finds the candidate regions for the reference point from the DF, and then detects the reference point from the candidate regions. The procedure is similar to that of singular point detection, but a different kind of candidate region is defined and a different method is applied to determine the reference point from the candidate regions. Since arch-type fingerprints typically have downward-concave ridge flows similar in shape to a cap (∩) but tend to have lower curvature than other types of fingerprints, a wider range of cap-shaped candidate regions is defined. The candidate region image for singular points is binary, but for arch-type fingerprints a labeling scheme based on the directional difference is used. Let the directions of two consecutive blocks be θ_{i,j} and θ_{i,j+1}, respectively. Then the candidate region image g(i, j) is obtained as follows:

g(i, j) = D_{i,j}     if 0° ≤ θ_{i,j} ≤ 90° and (90° ≤ θ_{i,j+1} < 180° or θ_{i,j+1} = 0°) and D_{i,j} ≥ 45° and D_{i,j} ≠ 0°,
          D_{i,j−1}   else if 0° ≤ θ_{i,j−1} ≤ 90° and (90° ≤ θ_{i,j} < 180° or θ_{i,j} = 0°) and D_{i,j−1} ≥ 45° and D_{i,j−1} ≠ 0°,
          0           otherwise,                                                                        (12)

D_{i,j} = θ_{i,j+1} − θ_{i,j},   D_{i,j−1} = θ_{i,j} − θ_{i,j−1},                                        (13)

Fig. 10. Reference point detection for an arch-type fingerprint: (a) original image, (b) candidate region image, and (c) detected reference point.

[Fig. 11 summarizes the overall procedure as a flowchart: compute the DF of the input fingerprint image (ns = 0); smooth the DF (ns = ns + 1); find the cap- and cup-shaped candidate regions; detect the upper core, lower core, and delta points; if the number of upper core points is not 1 or the normal positional relationship does not hold, smooth the DF again while ns < 5; if no singular points are found, find the arch-type candidate regions and detect the reference point; otherwise output the detected singular points.]

Fig. 11. Flowchart of the proposed singular point and reference point detection method.

where D_{i,j} is the difference between the directions of the two consecutive blocks. Once the candidate region image is obtained, the reference blocks containing the reference point are determined as the two consecutive blocks that have the smallest directional difference (D) among the candidate regions; that is, the cap-shaped candidate region with the highest curvature is selected as the reference blocks. Once the reference blocks are detected, the reference point is determined as the center point of their topmost row. An example of reference point detection for an arch-type fingerprint is shown in Fig. 10. The overall singular point and reference point detection procedure is illustrated in the flowchart in Fig. 11.
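Returning to the reference block search, the sketch below illustrates it under the reconstruction of Eq. (12) given above (in particular, the D ≥ 45° bound is our reading of the garbled original, and the 0°/180° wrap-around special cases are omitted for brevity); names and layout are assumptions, not the authors' code.

```python
def arch_reference_block(theta, foreground):
    """Return the block position (i, j) of the left block of the reference pair for an
    arch-type fingerprint: among wide cap-shaped candidate pairs, pick the pair with
    the smallest directional difference D (the highest-curvature cap)."""
    rows, cols = theta.shape
    best = None                                   # (D, i, j) of the best pair so far
    for i in range(rows):
        for j in range(cols - 1):
            if not (foreground[i, j] and foreground[i, j + 1]):
                continue
            th1, th2 = theta[i, j], theta[i, j + 1]
            d = th2 - th1                         # D_{i,j} of Eq. (13)
            if 0 <= th1 <= 90 and 90 <= th2 < 180 and d >= 45:
                if best is None or d < best[0]:
                    best = (d, i, j)
    return None if best is None else (best[1], best[2])
```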

Table 1
The characteristics of each database

Database      Sensor type    Image resolution (dpi)    Image size    No. of images
FVC2000 2a    Solid-state    500                       364 × 256     800
FVC2002 2a    Optical        569                       560 × 296     800

Table 2
Error rate of each method in the experiment using FVC2000 2a

                      Proposed                Poincaré index-based
                      False       Missed      False       Missed
Core (upper/lower)    14/8        11/21       109         35
Delta                 23          23          79          48

Table 3
Error rate of each method in the experiment using FVC2002 2a

                      Proposed                Poincaré index-based
                      False       Missed      False       Missed
Core (upper/lower)    12/4        15/19       71          42
Delta                 18          30          53          52

4. Experimental results

To evaluate the performance of the proposed method, we used the FVC2000 2a [20] and FVC2002 2a [21] databases, which were used for the worldwide fingerprint verification competitions in 2000 and 2002. These two databases have different image resolutions and were acquired with different types of sensors, as shown in Table 1, so the images of the two databases show quite different characteristics.

First, we examined the error rate of the proposed method on these databases. The error has two components: missed singular points and false singular points. To assess the relative effectiveness of the proposed algorithm, we then compared it with the Poincaré index-based method [13], which is one of the most widely used singular point detection techniques; the same segmentation algorithm used in the proposed method was also applied to the conventional method. Performance comparisons with respect to both error components are shown in Tables 2 and 3. The Poincaré index-based method does not discriminate between upper and lower core points, so for it we counted core points without distinguishing core point types.

Most false singular points were detected around the border between the foreground and background regions. This is because the local directions around that border are not computed accurately, a consequence of the border discontinuity. In some cases, the input images were not properly segmented into foreground and background regions, which also led to errors. To suppress false singular points around the border, the singular points can be extracted from a foreground area eroded by a certain amount; in this case the number of false singular points is reduced, albeit at the potential expense of missing some true singular points. All in all, most of the errors (either false or missed singular points) can be attributed to the foreground/background border. Beyond these causes, there were cases where the DFs were locally distorted by noise or by the intrinsically poor quality of the fingerprints.

The proposed method performs better than the Poincaré index-based method. The Poincaré index analysis used for comparison obtains the Poincaré index by calculating the directional differences between adjacent blocks along a counter-clockwise path and summing these differences. In this method, if even one of the blocks over which the Poincaré index is calculated is affected by noise, the method is likely to miss the singular points or to falsely detect a normal point as a singular point. Although it uses an iterative DF smoothing scheme (smoothing is repeated until the number of core points becomes less than or equal to 2), the method is not robust to noisy images because only the number of core points is used to control the smoothing. The proposed method, on the other hand, is more robust to noise because, in addition to the DF smoothing scheme, its required conditions are given in the form of directional ranges rather than a single computed value. Even with the same segmentation method applied to the Poincaré index-based method, the proposed method performs better.
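For reference, the following is a minimal sketch of the Poincaré index test described above, not the specific implementation compared against in the paper; the neighborhood path, the ±45° tolerance, and the function names are our assumptions.

```python
import numpy as np

def poincare_index(theta, i, j):
    """Sum the wrapped direction differences along the 8-neighborhood of DF block (i, j),
    visited counter-clockwise (block (i, j) is assumed not to lie on the DF border).
    For a mod-180 directional field the total is about +180 degrees at a core block,
    about -180 degrees at a delta block, and about 0 elsewhere."""
    path = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1), (0, 1)]
    total = 0.0
    for (di1, dj1), (di2, dj2) in zip(path[:-1], path[1:]):
        d = theta[i + di2, j + dj2] - theta[i + di1, j + dj1]
        if d > 90:          # wrap the difference of ridge directions into (-90, 90]
            d -= 180
        elif d <= -90:
            d += 180
        total += d
    return total

def poincare_type(theta, i, j, tol=45.0):
    p = poincare_index(theta, i, j)
    if abs(p - 180) < tol:
        return "core"
    if abs(p + 180) < tol:
        return "delta"
    return "none"
```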

Table 4
Consistency (standard deviation) for each method

              Proposed                    Poincaré index-based
              Vertical     Horizontal     Vertical     Horizontal
FVC2000 2a    5.36         3.47           7.35         4.13
FVC2002 2a    5.52         2.47           9.04         4.46

Table 5
Test of robustness to rotation (values in parentheses are for the case in which only cap-shaped candidate regions are used for delta point detection)

Rotation angle    Upper core point       Lower core point       Delta point
(degrees)         False      Missed      False      Missed      False      Missed
−30               0          0           0          1           0          3 (6)
−25               0          0           0          0           2          1 (3)
−20               0          0           0          0           0          3 (4)
−15               1          1           0          0           1          1 (2)
−10               0          0           0          0           0          0
−5                0          0           0          0           1 (0)      0
0                 0          0           0          0           1          0
5                 0          0           0          0           0 (1)      0 (1)
10                0          0           0          0           0          0
15                0          0           0          0           0          1
20                0          0           0          0           0          2
25                0          0           0          0           0          1
30                0          0           0          0           0          1 (2)

Table 6
Consistency (standard deviation) of the proposed reference point detection method for arch-type fingerprints

                      Vertical direction    Horizontal direction
Standard deviation    10.2 pixels           3.2 pixels

Moreover, the proposed method outperforms other recently reported methods that use the same database [10,11]. The proposed method can also be used for rough classification of fingerprints because the type of each singular point is obtained together with its location, as in the Poincaré index-based method.

Since accuracy and consistency are important factors in evaluating a singular point detection method, both were tested. First, the coordinates of each singular point were marked manually; the accuracy and consistency were then computed automatically by measuring how far the detected singular points deviate horizontally and vertically from the marked singular points. The result is shown in Table 4. The proposed method has a smaller absolute deviation from the locations of the marked singular points than the Poincaré index-based method, because it makes more concrete use of the geometric features of the local DF around each type of singular point. The standard deviation (S.D.) values of Table 4 show the consistency of each method. Consistency is an important property when singular points are used as reference points, and in terms of consistency the results demonstrate that the proposed method can be used effectively for detecting the reference points of fingerprint images.

In order to see how robust the proposed method is to image rotation, 20 fingerprint images sampled from the FVC2000 2a database were rotated by various angles and singular point detection was performed on the rotated images; in total, 240 fingerprint images were used. The detection result is shown in Table 5. It is difficult to quantify exactly the rotational range over which the proposed method is robust, but the singular points were detected well in the rotation range between about −30° and +30°.

In evaluating the performance of the proposed reference point detection method for arch-type fingerprints, consistency is the critical factor: it is advantageous for a reference point detection method to detect the same location as the reference point whenever a fingerprint image is obtained from the same finger. To show how consistently the proposed method locates the reference points of arch-type fingerprints in the FVC2000 2a database, Table 6 lists the standard deviations of the reference points detected by the proposed method.
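A minimal sketch of this consistency measure (the per-axis standard deviation of the detection offsets, as reported in Tables 4 and 6) is given below; the data layout and names are our assumptions, not part of the paper.

```python
import numpy as np

def axis_std(detected, reference):
    """detected, reference: (N, 2) arrays of (x, y) pixel coordinates of the same point
    across N images (reference = manually marked positions for Table 4, or the mean
    detected position per finger for Table 6). Returns (vertical_std, horizontal_std)."""
    offsets = np.asarray(detected, dtype=float) - np.asarray(reference, dtype=float)
    return offsets[:, 1].std(), offsets[:, 0].std()
```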

Table 7
Rough performance comparison between the proposed method and the conventional methods

Methods                Rotation    Noise     U Core    L Core    Delta    Arch      Complexity
Poincaré [13]          High        Low       Yes       Yes       No       No        Low
DF-Histogram [14]      Low         Middle    Yes       No        No       Partly    High
Sine map [8]           Low         High      Yes       No        No       Yes       High
Complex filter [11]    High        High      Yes       No        No       Partly    Middle
Proposed               High        High      Yes       Yes       No       Yes       High

Rotation: robustness to rotation; Noise: robustness to noise; U Core: upper core detection ability; L Core: lower core detection ability; Delta: delta point detection ability; Arch: reference point detection ability for arch-type fingerprints.

The experimental results show that the proposed method has a standard deviation of about 10 pixels (√(10.2² + 3.2²) ≈ 10.69), similar to one block size, which means that the proposed method can successfully be used as a reference point detection method for arch-type fingerprints. Finally, Table 7 gives a rough comparison between the performance of the proposed method and that of several conventional methods.

5. Conclusion

We have proposed a new rule-based fingerprint singular point detection method. The proposed method computes the block directional field and performs a rule-based analysis to determine each type of singular point. The geometric characteristics associated with singular points are effectively captured by the algorithm's detection rules, which in turn allows singular points to be extracted more accurately and more consistently than with approaches based on the Poincaré index or symmetry. In addition, since the block size of the DF is automatically determined by computing the average inter-ridge distance, the proposed method can be applied to fingerprints of various resolutions almost without modification. Besides the singular point detection method, an additional reference point detection method for arch-type fingerprints has been presented; it effectively detects the reference points of arch-type fingerprints by simply finding the two consecutive blocks with the highest curvature among the candidate blocks satisfying certain directional conditions. Since the proposed method can detect a reference point in arch-type fingerprints, whose reference points cannot be detected by Poincaré index-based methods, and can discriminate the type of the detected singular points, it can be used for rough classification of fingerprints as well as for reference point detection. The experimental results show that the proposed method can be used successfully for singular point and reference point detection. We speculate that additional improvements might be realized in future work by developing a method that can estimate the DF of foreground–background border areas well.

Acknowledgements

This work was supported by the IT postdoctoral fellowship program of the Ministry of Information and Communication (MIC), Republic of Korea.

Appendix A. Conditions for singular point detection

A.1. Conditions for delta point detection from cap-shaped candidate regions

The topmost blocks of a connected cap-shaped candidate region are determined to be the delta point blocks if they and their upper and lower neighboring blocks meet the following conditions:

θ_{t,1} ≤ θ_{t−1,1} < θ′_{t,2} and θ_{t,1} < θ_{t−1,2} ≤ θ′_{t,2},   (A.1)

where

θ′_{t,2} = θ_{t,2}   if θ_{t,2} ≠ 0°,
           180°      otherwise.

θ_{t+1,1} ≤ θ_{t,1}   if θ_{t,h1} = 0°,
θ_{t+1,1} ≤ θ_{t,1} or θ_{t+1,1} ≥ θ_{t,h1}   else if 90° < θ_{t,h1} < 180°,
θ_{t,h1} ≤ θ_{t+1,1} ≤ θ_{t,1}   otherwise,   (A.2)


θ_{t+1,2} ≥ θ_{t,2} or θ_{t+1,2} = 0°   if θ_{t,h2} = 0°,
θ_{t,2} ≤ θ_{t+1,2} ≤ θ_{t,h2}   else if 90° < θ_{t,h2} < 180°,
θ_{t+1,2} ≥ θ_{t,2} or θ_{t+1,2} ≤ θ_{t,h2}   otherwise,   (A.3)

where

θ_{t,h1} = (θ_{t,h} − θ_tol) + 180°   if (θ_{t,h} − θ_tol) < 0°,
           θ_{t,h} − θ_tol            otherwise,

θ_{t,h2} = (θ_{t,h} + θ_tol) − 180°   if (θ_{t,h} + θ_tol) ≥ 180°,
           θ_{t,h} + θ_tol            otherwise,

θ_{t,h} = θ_{t,m} + 90°   if θ_{t,m} < 90°,
          θ_{t,m} − 90°   otherwise,

θ_{t,m} = (θ_{t,1} + θ_{t,2}) / 2.

A.2. Conditions for lower core point detection from cup-shaped candidate regions

The topmost blocks of a connected cup-shaped candidate region are determined to be the lower core point blocks if they and their upper and lower neighboring blocks meet the following conditions:

θ_{t,2} < θ_{t−1,1} ≤ θ′_{t,1} and θ_{t,2} ≤ θ_{t−1,2} < θ′_{t,1},   (A.4)

where

θ′_{t,1} = θ_{t,1}   if θ_{t,1} ≠ 0°,
           180°      otherwise,

θ_{t+1,1} ≥ θ_{t,1} or θ_{t+1,1} = 0°   if θ_{t,h1} = 0°,
θ_{t,1} ≤ θ_{t+1,1} ≤ θ_{t,h1}   else if 90° < θ_{t,h1} < 180°,
θ_{t+1,1} ≥ θ_{t,1} or θ_{t+1,1} ≤ θ_{t,h1}   otherwise,   (A.5)

θ_{t+1,2} ≤ θ_{t,2}   if θ_{t,h2} = 0°,
θ_{t+1,2} ≤ θ_{t,2} or θ_{t+1,2} ≥ θ_{t,h2}   else if 90° < θ_{t,h2} < 180°,
θ_{t,h2} ≤ θ_{t+1,2} ≤ θ_{t,2}   otherwise,   (A.6)

where

θ_{t,h1} = (θ_{t,h} + θ_tol) − 180°   if (θ_{t,h} + θ_tol) ≥ 180°,
           θ_{t,h} + θ_tol            otherwise,

θ_{t,h2} = (θ_{t,h} − θ_tol) + 180°   if (θ_{t,h} − θ_tol) < 0°,
           θ_{t,h} − θ_tol            otherwise,

θ_{t,h} = θ_{t,m} + 90°   if θ_{t,m} < 90°,
          θ_{t,m} − 90°   otherwise,

θ_{t,m} = (θ_{t,1} + θ_{t,2}) / 2.

A.3. Conditions for delta point detection from cup-shaped candidate regions

The bottommost blocks of a connected cup-shaped candidate region are determined to be the delta point blocks if they and their upper and lower neighboring blocks meet the following conditions:

θ_{b,2} < θ_{b+1,1} ≤ θ′_{b,1} and θ_{b,2} ≤ θ_{b+1,2} < θ′_{b,1},   (A.7)

where

θ′_{b,1} = θ_{b,1}   if θ_{b,1} ≠ 0°,
           180°      otherwise,

θ_{b−1,1} ≥ θ_{b,1} or θ_{b−1,1} = 0°   if θ_{b,h1} = 0°,
θ_{b,1} ≤ θ_{b−1,1} ≤ θ_{b,h1}   else if 90° < θ_{b,h1} < 180°,
θ_{b−1,1} ≥ θ_{b,1} or θ_{b−1,1} ≤ θ_{b,h1}   otherwise,   (A.8)

θ_{b−1,2} ≤ θ_{b,2}   if θ_{b,h2} = 0°,
θ_{b−1,2} ≤ θ_{b,2} or θ_{b−1,2} ≥ θ_{b,h2}   else if 90° < θ_{b,h2} < 180°,
θ_{b,h2} ≤ θ_{b−1,2} ≤ θ_{b,2}   otherwise,   (A.9)

where

θ_{b,h1} = (θ_{b,h} + θ_tol) − 180°   if (θ_{b,h} + θ_tol) ≥ 180°,
           θ_{b,h} + θ_tol            otherwise,

θ_{b,h2} = (θ_{b,h} − θ_tol) + 180°   if (θ_{b,h} − θ_tol) < 0°,
           θ_{b,h} − θ_tol            otherwise,

θ_{b,h} = θ_{b,m} + 90°   if θ_{b,m} < 90°,
          θ_{b,m} − 90°   otherwise,

θ_{b,m} = (θ_{b,1} + θ_{b,2}) / 2.

To understand the notation used for the conditions in this Appendix, refer to Section 2.3.1 (Notations).

References

[1] D. Maltoni, D. Maio, A.K. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, Berlin, 2003.
[2] S. Pankanti, S. Prabhakar, A.K. Jain, On the individuality of fingerprints, IEEE Trans. Pattern Anal. Mach. Intell. 24 (8) (2002) 1010–1025.
[3] X. Xia, L. O'Gorman, Innovations in fingerprint capture devices, Pattern Recognition 36 (2003) 361–369.
[4] A.K. Jain, L. Hong, S. Pankanti, R. Bolle, An identity authentication system using fingerprints, Proc. IEEE 85 (9) (1997) 1365–1388.
[5] J.H. Wegstein, The M40 Fingerprint Matcher, Technical Note 878, National Bureau of Standards, US Government Printing Office, Washington, DC, 1972.
[6] J.H. Wegstein, J.F. Rafferty, The LX39 Latent Fingerprint Matcher, National Bureau of Standards, Institute for Computer Sciences and Technology, 1978.
[7] J.H. Wegstein, An Automated Fingerprint Identification System, US Department of Commerce, National Bureau of Standards, Washington, DC, 1982.
[8] A.K. Jain, S. Prabhakar, L. Hong, S. Pankanti, Filterbank-based fingerprint matching, IEEE Trans. Image Process. 9 (5) (2000) 846–859.
[9] C.-H. Park, J.-J. Lee, M.J.T. Smith, S.-I. Park, K.-H. Park, Directional filter bank-based fingerprint feature extraction and matching, IEEE Trans. Circuits Syst. Video Technol. 14 (1) (2004) 74–85.
[10] A.M. Bazen, S.H. Gerez, Systematic methods for the computation of the directional fields and singular points of fingerprints, IEEE Trans. Pattern Anal. Mach. Intell. 24 (7) (2002) 905–919.
[11] K. Nilsson, J. Bigun, Localization of corresponding points in fingerprints by complex filtering, Pattern Recognition Lett. 24 (2003) 2135–2144.
[12] M. Kawagoe, A. Tojo, Fingerprint pattern classification, Pattern Recognition 17 (3) (1984) 295–303.
[13] K. Karu, A.K. Jain, Fingerprint classification, Pattern Recognition 29 (3) (1996) 389–404.
[14] V.S. Srinivasan, N.N. Murthy, Detection of singular points in fingerprint images, Pattern Recognition 25 (2) (1992) 139–153.
[15] D. Maio, D. Maltoni, Direct gray-scale minutiae detection in fingerprints, IEEE Trans. Pattern Anal. Mach. Intell. 19 (1997) 27–40.
[16] N.K. Ratha, S. Chen, A.K. Jain, Adaptive flow orientation-based feature extraction in fingerprint images, Pattern Recognition 28 (11) (1995) 1657–1672.
[17] C.-J. Lee, S.-D. Wang, Fingerprint feature extraction using Gabor filters, Electron. Lett. 35 (4) (1999) 288–290.
[18] L. Hong, Y. Wan, A.K. Jain, Fingerprint image enhancement: algorithm and performance evaluation, IEEE Trans. Pattern Anal. Mach. Intell. 20 (8) (1998) 777–789.
[19] R.M. Haralick, L.G. Shapiro, Computer and Robot Vision, vol. 1, Addison-Wesley, Reading, MA, 1993.
[20] D. Maio, D. Maltoni, R. Cappelli, J.L. Wayman, A.K. Jain, FVC2000: fingerprint verification competition, IEEE Trans. Pattern Anal. Mach. Intell. 24 (3) (2002) 402–411.
[21] D. Maio, D. Maltoni, R. Cappelli, J.L. Wayman, A.K. Jain, FVC2002: second fingerprint verification competition, in: Proceedings of the International Conference on Pattern Recognition, Quebec City, August 11–15, 2002, pp. 811–814.


About the Author—CHUL-HYUN PARK received his B.E., M.E., and Ph.D. degrees in Electronic Engineering from Kyungpook National University, Daegu, South Korea, in 1995, 1999, and 2004, respectively. He was a visiting scholar in the School of Electrical and Computer Engineering at the Purdue University, West Lafayette from March 2004 to September 2005, sponsored by the Ministry of Information and Communication, Republic of Korea. Thereafter he was transferred to the Ohio State University and currently he works as a visiting scholar in the Computational Biology and Cognitive Science Laboratory. His main interests are in image processing, computer vision, and Biometrics. About the Author—JOON-JAE LEE received his B.S., M.S., and Ph.D. degrees in Electronic Engineering from the Kyungpook National University, Daegu, South Korea, in 1986, 1990, and 1994, respectively. He worked for the Kyungpook National University as a Teaching Assistant from September 1991 to July 1993. In March 1995, he joined the Computer Engineering faculty at the Dongseo University, Busan, South Korea, and is currently an Associate Professor in the Department of Computer and Information Engineering. He was a visiting scholar at the Georgia Institute of Technology, Atlanta, from 1998 to 1999, funded by the Korea Science and Engineering Foundation (KOSEF), and also worked for PARMI corporation as a research and development manager for 1 year from 2000 to 2001. His main interests are in image processing, three-dimensional computer vision, and fingerprint recognition. About the Author—MARK J. T. SMITH received his B.S. degree from the Massachusetts Institute of Technology and his M.S. and Ph.D. degrees from the Georgia Institute of Technology all in Electrical Engineering. He joined the Electrical Engineering Faculty at the Georgia Tech in 1984 and later served as the Executive Assistant to the President of the Institute from 1997 until 2001. In January, 2003, he joined the faculty at the Purdue University as Head of the School of Electrical and Computer Engineering where he holds the Michael J. & Katherine R. Birck endowed professorship. Dr. Smith is a Fellow of the IEEE, a former IEEE Distinguished Lecturer in Signal Processing, and the 2005 recipient of the SPIE Wavelet Pioneer Award. He has authored more than 200 papers in the areas of speech and image processing, filter banks, and wavelets and is the co-author of two introductory books titled: “Introduction to Digital Signal Processing” and “Digital Filtering.” He is also co-editor of the book titled “Wavelets and Subband Transforms: Design and Applications,” and the co-author of the textbook titled “A Study Guide for Digital Image Processing.” Dr. Smith is a past Chairman of the IEEE SP Digital Signal Processing Technical Committee in the IEEE Signal Processing Society and a former member of the Board of Governors. He has served as an Associate Editor for the IEEE Transactions on ASSP and as a member of the MIPS Advisory Board of the National Science Foundation. He has been active as a member of the Organizing Committees for the IEEE DSP Workshops, the SPIE VCIP Conferences, and the Defense Applications of Signal Processing Workshops. He currently serves as a Secretary of the Electrical and Computer Engineering Department Heads Association (ECEDHA). About the Author—KIL-HOUM PARK received his B.E. degree in Electronic Engineering from the Kyungpook National University, Daegu, South Korea, in 1982 and his M.E. and Ph.D. 
degrees in Electronic Engineering from the Korea Advanced Institute of Science and Technology (KAIST), Seoul, in 1984 and 1990, respectively. He has been with the Kyungpook National University since 1984 and is currently a Professor in the School of Electrical Engineering and Computer Science. His current research areas include image processing, pattern recognition, and computer vision.