Expert Systems With Applications 82 (2017) 151–161
Contents lists available at ScienceDirect
journal homepage: www.elsevier.com/locate/eswa

Finger-vein verification based on the curvature in Radon space

Huafeng Qin a,∗, Xiping He a, Xingyan Yao a, Hongbing Li b

a National Research Base of Intelligent Manufacturing Service, Chongqing Technology and Business University, Chongqing 400067, China
b Chongqing Three Gorges College, Chongqing 404000, China

Article history:
Received 12 December 2016
Revised 12 March 2017
Accepted 29 March 2017
Available online 4 April 2017

Keywords: Personal verification; Finger-vein enhancement; Radon transform; Curvature in Radon space

Abstract: Finger-vein verification has drawn increasing attention because it is a highly secure and private biometric in practical applications. However, since the imaging environment is affected by many factors, the captured image contains not only the vein pattern but also noise and irregular shadowing, which can decrease verification accuracy. To address this problem, we propose a new finger-vein extraction approach that detects valley-like structures using curvatures in Radon space. Firstly, given a pixel, we obtain eight patches centered on it by rotating a window along eight different orientations and project the resulting patches into Radon space using the Radon transform. Secondly, the vein patches create prominent valleys in Radon space, and the vein patterns are enhanced according to the curvature values of these valleys. Finally, the vein network is extracted from the enhanced image by a binarization scheme and matched for personal verification. Experimental results on both contact-based and contactless finger-vein databases show that our approach can significantly improve the accuracy of a finger-vein verification system. © 2017 Elsevier Ltd. All rights reserved.

1. Introduction

With the tremendous growth in the demand for secure systems, automatic personal verification using biometrics has drawn increasing attention and become one of the most critical and challenging tasks. The physical and behavioral characteristics of people, such as face, fingerprint, signature and gait, have been widely applied by law enforcement as tools for identifying criminals. Some researchers have explored new biometric features and traits. Currently, a number of biometric characteristics have been employed to achieve the verification task, and they can be broadly divided into two categories: (1) extrinsic biometric features, i.e. face, fingerprint, palm-print and iris; (2) intrinsic biometric features, i.e. finger-vein, hand-vein and palm-vein. Extrinsic biometric features are susceptible to spoof attacks because fake fingerprints, palm-prints and face images can successfully cheat a verification system. Therefore, the usage of extrinsic biometric features generates some concerns about privacy and security in practical applications. On the other hand, intrinsic biometric characteristics (finger-vein, hand-vein and palm-vein) are difficult to acquire without the knowledge of an individual and difficult to forge. In addition, a key point in practical applications is that the biometric trait must provide high collectability and convenience for the user. Therefore, the finger-vein

Corresponding author. E-mail addresses: [email protected] (H. Qin), [email protected] (X. He), [email protected] (X. Yao), [email protected] (H. Li). http://dx.doi.org/10.1016/j.eswa.2017.03.068 0957-4174/© 2017 Elsevier Ltd. All rights reserved.

biometric has emerged as a promising alternative for personal verification. Firstly, it is difficult for a user to leave crucial information behind when he/she interacts with the finger-vein capturing device. Secondly, finger-vein images are convenient to capture in practical applications. Therefore, finger-vein verification is a highly secure, private and convenient biometric for practical applications. The finger vein lies under the skin surface and is not easily observed in visible light, but it can be captured using an infrared light-emitting diode (LED) and a charge-coupled device (CCD) camera (Kono, Ueki, & Umemura, 2002). Finger-vein verification is still a challenging task because the finger-vein acquisition process is inherently affected by a number of factors: environmental illumination (Hashimoto, 2006; Huang, Dai, Li, Tang, & Li, 2010; Song et al., 2011; Yu, Qing, & Zhang, 2008), ambient temperature (Kumar & Zhou, 2012; Miura, Nagasaka, & Miyatake, 2007; Mulyono & Jinn, 2008; Song et al., 2011), light scattering in imaged finger tissues (Cheong, Prahl, & Welch, 1990; Lee & Park, 2011; Yang & Shi, 2014), physiological changes (Kumar & Zhou, 2012; Miura et al., 2007) and user behavior (Hashimoto, 2006; Huang et al., 2010; Mulyono & Jinn, 2008). In practical applications, these factors are not controlled and/or avoided, so noise and irregular shadowing are produced in many acquired finger-vein images. For example, the sensor and circuitry of a scanner or digital camera can produce electronic noise, and the irregular shading is mainly caused by the varying thickness of the finger and/or uneven illumination. In general, this noise and irregular shadowing will


ultimately compromise the performance of an automatic authentication system. Currently, there are several schemes, such as quality assessment (Nguyen, Park, Shin, & Park, 2013; Qin, Li, Kot, & Qin, 2012; Yang, Yang, Yin, & Xiao, 2013), image restoration (Lee & Park, 2009; 2011; Nguyen et al., 2013; Yang & Shi, 2014) and image feature extraction (Kono et al., 2002; Kumar & Zhou, 2012; Miura et al., 2007; 2004; Mulyono & Jinn, 2008; Qin, Qin, & Yu, 2011; Yu, Qin, Cui, & Hu, 2009; Yu et al., 2008; Zhang, Ma, & Han, 2006), to solve this problem. Finger-vein feature extraction, as an effective scheme, has been widely investigated by many researchers and applied for finger-vein verification. Miura, Nagasaka, and Miyatake (2004) proposed a repeated line tracking approach to enhance the vein pattern and proved its efficacy. The performance of finger-vein verification is significantly improved by detecting maximum curvature points in image profiles, and promising experimental results are shown in Miura et al. (2007). In Zhang et al. (2006), a multi-scale feature extraction method for finger-vein patterns based on curvelets and neural networks was investigated and showed high performance with respect to finger-vein feature extraction; however, the details of the experimental setup are not described in that work. To enhance the finger-vein pattern, Yu et al. (2009) developed an approach to extract concave regions by calculating the maximum convolution in eight directions around each pixel. Subsequently, they proposed a region-growth-based approach for finger-vein verification (Qin et al., 2011). To achieve robust feature extraction, in these works (Frangi, Niessen, Vincken, & Viergever, 1998; Song et al., 2011; Zhou & Kumar, 2011), the Hessian phase matrix is employed to enhance the vein pattern.
The magnitude of the corresponding eigenvalues of the Hessian matrix reflects the curvature along the principal orientation in a local image region. The vein patterns are extracted by combining the curvatures of all orientations. Compared to other methods, these schemes achieve promising performance. Gabor filters are known to achieve the maximum possible joint resolution in the spatial and spatial-frequency domains and have recently been utilized by researchers to enhance finger-vein patterns. Single-scale Gabor filters (Kumar & Zhou, 2012; Yang, Yang, & Shi, 2009a, 2009b) have been employed for finger-vein ridge enhancement. However, the accuracy of finger-vein identification is limited because false veins created by irregular shadowing are emphasized during the enhancement procedure; therefore, Gabor-filter-based approaches are prone to over-segmentation. To overcome this drawback, a multi-scale multiplication operation (Yang & Shi, 2014) is applied to further emphasize vein regions and suppress false ridges in the image enhanced by Gabor filters, and the experimental results show that this method outperforms existing methods. Multi-scale Gabor filters can extract finger-vein patterns at different scales, but irregular shadowing and noise are still emphasized in their experimental results. For the same purpose, a difference curvature (Qin et al., 2013) was proposed for finger-vein verification and showed higher performance than the conventional Gabor filter with respect to vein pattern enhancement and false vein suppression. Based on this review of prior work, most finger-vein feature extraction methods aim at detecting the ridge/valley regions generated by the vein pattern. Curvature and Gabor filters, as two effective vein detection tools, have shown high accuracy for finger-vein verification.
Curvature-based approaches (Frangi et al., 1998; Hashimoto, 2006; Miura et al., 2007; Song et al., 2011; Zhou & Kumar, 2011) extract the vein patterns by computing the curvature of the valley. However, the valleys are susceptible to corruption by noise, so false vein features are created in the vein image because the curvature is very sensitive to noise. In fact, other valley-detection approaches such as repeated line tracking (Liu, Xie, Yan, Li, & Lu, 2013; Miura et al., 2004) and region

growth (Qin et al., 2011) also suffer from a similar problem because noise can compromise the valley. Gabor-filter-based methods (Kumar & Zhou, 2012; Yang et al., 2009a, 2009b) may alleviate the problem because they are more effective at suppressing noise than curvature-based ones. However, these approaches are prone to over-segmentation because it is not easy for them to distinguish the vein pattern from irregular shadowing in a finger-vein image. Therefore, how to extract the vein pattern from a finger-vein image, especially from ambiguous regions such as irregular shadowing and noise, is still an open issue in finger-vein feature extraction. To extract robust finger-vein patterns, in this paper we propose a finger-vein feature extraction approach for verification. Firstly, for a given pixel, several patches are determined by rotating a window along different orientations. Secondly, each patch is projected into Radon space by the Radon transform. As the local Radon transform is a collection of integrals along lines, it can be treated as a low-pass filter; therefore, noise is suppressed in Radon space. At the same time, the gap between the foreground (vein pattern) and the background is enlarged by the integration. As a result, the vein patches create prominent valleys in Radon space. Thirdly, the vein patterns are extracted based on the curvature of the valleys in Radon space. Finally, the finger-vein feature is encoded from the enhanced image and verification is achieved by computing the amount of overlap between two finger-vein feature images. Experimental results on two large public databases show that the proposed approach can extract the vein patterns from raw finger-vein images and significantly improve the accuracy of finger-vein verification. Currently, few works have employed the Radon transform to extract finger-vein patterns for finger-vein identification and quality assessment.
In Wu and Ye (2009), a finger-vein image is projected into Radon space; then, the coefficients of the Radon transform image, which represent the vein features, are input into a neural network for identification. A preliminary version of this work was presented in Qin et al. (2012) for quality assessment. The present work has a different motivation and improves the initial version in significant ways. First, in previous works (Qin et al., 2012; Wu & Ye, 2009), a Radon transform was proposed for finger-vein identification and quality assessment, whereas in this work we develop a Radon-transform-based approach to segment the vein patterns for finger-vein verification. To the best of our knowledge, the Radon transform has not been applied before for finger-vein segmentation. Second, different from Qin et al. (2012), a varying rectangular window is adopted in the local Radon transform, and the difference of curvatures is then computed to enhance the vein patterns. In addition, considerable new analyses and intuitive explanations are added to the proposed approach. Third, we study the effect of cropping the testing image on matching performance and demonstrate experimentally that performance can be improved by eliminating the boundary region of the testing image. Finally, the verification performance of our approach is investigated by carrying out experiments on two public finger-vein databases. We also compare with a number of recently published methods and confirm that our model significantly outperforms existing approaches for finger-vein verification. The rest of this paper is organized as follows: Section 2 presents details of the proposed verification approach. The experimental results and discussion are presented in Section 3. Finally, the key conclusions of this paper are summarized in Section 4.

2. The proposed approach

Fig. 1 shows the system flow diagram of the proposed finger-vein verification approach.
First, a finger-vein image is projected into Radon space by the local Radon transform. Second, the curvature of the valley in Radon space is obtained for extracting the vein patterns.


[Fig. 1 pipeline: Finger-Vein Image → Image Enhancement (Local Radon Transform → Computing Curvature → Vein Pattern Enhancement) → Image Encoding → Matching against the Template Database → Decision (Verified User / Unknown).]

Fig. 1. Block diagram for personal verification using finger-vein images.


Fig. 2. Relationship between projection orientation and rotation angle: (a), (b), (c) and (d) are the patches B_i^α (α = 0°, 45°, 90°, 135°) corresponding to different projection orientations, where the dashed line denotes the projection orientation, and the angle between the solid line and the vertical orientation is the rotation angle.

Thirdly, we compute the difference of curvatures in two perpendicular projection orientations to enhance the vein patterns. Finally, the vein networks are extracted by a binarization approach and further employed for matching.

Table 1
A vein-like patch.

1 1 2 3 5
1 1 1 2 3
2 1 1 1 2
3 2 1 1 1
5 3 2 1 1

2.1. Radon transform based image enhancement

Let o be the ith pixel of a given finger-vein image F with size M × N. First of all, the intensity values in a rectangular window centered on pixel o construct a patch B_i (i = 1, 2, 3, ..., M × N) of size W × H. W and H are odd numbers to enforce symmetry, and are heuristically set to 17 and 21 in our experiments. To enhance the finger-vein feature, a set of patches B_i^α is created for pixel o by rotating the window along different orientations α, where 0 < α ≤ π. Fig. 2 shows the patches along orientations α = 0°, 45°, 90°, 135°. Secondly, each patch B_i^α is projected into Radon space using a Radon transform along the projection orientation; the dashed line in Fig. 2 denotes the projection orientation, which is perpendicular to α. Thirdly, the curvature is computed to enhance the value of pixel o. Finally, an enhanced finger-vein image is obtained after computing the curvature values of all pixels.

2.1.1. Radon transform

The Radon transform (Deans, 2007), as an effective line detection tool, has been applied to biometrics such as finger-vein (Qin et al., 2012; Wu & Ye, 2009), hand vein (Zhou & Kumar, 2011), hand print (Jia, Huang, & Zhang, 2008), and iris (Zhou & Kumar, 2010). As a vein can be treated as a short line segment in a local region, the Radon transform is employed for finger-vein image enhancement. The 2-D Radon transform of a given image along an arbitrary direction is defined as follows:

φ_i(ρ, θ) = ∬_{(x,y)∈B_i} F(x, y) δ[ρ − (x cos θ + y sin θ)] dx dy,  (1)

where δ is the Dirac delta function and θ refers to the angle of the offset ρ, as shown in Fig. 4(a). Therefore, the equation describes the integration of F(x, y) along the direction orthogonal to orientation θ, defined by

ρ = x cos θ + y sin θ.  (2)
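As a concrete illustration of Eqs. (1) and (2), a discrete local Radon projection can be sketched in a few lines of NumPy by accumulating pixel values into offset bins ρ = x cos θ + y sin θ. The function name `radon_patch` and the synthetic patch are our own illustration, not part of the paper:

```python
import numpy as np

def radon_patch(patch, theta):
    """Discrete sketch of Eq. (1): accumulate the pixel values of a patch
    into offset bins rho = x*cos(theta) + y*sin(theta)."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    rho = x * np.cos(theta) + y * np.sin(theta)
    bins = np.round(rho - rho.min()).astype(int)   # shift offsets to 0..d
    proj = np.bincount(bins.ravel(), weights=patch.ravel())
    counts = np.bincount(bins.ravel())             # pixels falling in each bin
    return proj, counts

# A dark vertical vein (value 1) on a bright background (value 10):
patch = np.full((21, 21), 10.0)
patch[:, 10] = 1.0
proj, counts = radon_patch(patch, theta=0.0)
print(int(proj.argmin()))   # the valley sits at the vein's offset bin, 10
```

The `counts` output corresponds to the per-bin pixel number N that Eq. (4) below uses for normalization.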

Based on Eq. (1), a patch is projected into Radon space along varying orientations θ and offsets ρ, where θ ranges from 0 to π and the range of variation of ρ is determined by the patch size. For example, there are two tangent lines (two solid


lines in Fig. 4(b)) for a patch along a projection orientation. Let d_k be the distance between the two tangent lines along the kth orientation, and let d = max_{k=1,2,...,K} d_k be the maximum distance along all K projection orientations, where K is fixed to 8 in our experiments. Then ρ ranges from 0 to d. The Radon transform should yield a peak for every line in the image that is brighter than its surroundings and a valley for every dark line. Fig. 4(a) shows the result of projecting the patch in Fig. 3(a) using the Radon transform, where the projection direction is the same as the main orientation (denoted θ_m, where 0 < θ_m ≤ π) of the vein pattern. The vein is extracted by detecting the valley in Radon space. The Radon transform can successfully detect the vein pattern in the patch shown in Fig. 3(a) because there is a prominent valley in φ_i(ρ, θ_m) (Fig. 4(a)). However, for other vein patches, such as the one shown in Figs. 3(b) and 4(b), the vein may not be effectively extracted by the Radon transform. The reason is that the number of vein pixels and of their neighboring background pixels differs along the projection direction, which may fail to create a prominent valley in φ_i(ρ, θ_m). For example, Table 1 shows a vein-like patch in which the pixel values in the center position are smaller than those at both sides along the diagonal (from the upper-left corner to the lower-right corner). Projecting it into Radon space along the diagonal yields the vector [5 6 6 4 5 4 6 6 5]; a prominent valley is not created in Radon space because the number of pixels differs across the projection bins. To effectively extract the vein patterns, we employ the patch obtained from a varying window instead of a fixed square window in the Radon transform, as follows.
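The Table 1 example can be checked numerically: summing the patch along its diagonals reproduces the vector above, and dividing each sum by the number of pixels in that diagonal (the normalization introduced in Eq. (3) below) restores a prominent valley. A small sketch:

```python
import numpy as np

# The vein-like patch of Table 1 (the dark vein of 1s runs down the diagonal).
B = np.array([[1, 1, 2, 3, 5],
              [1, 1, 1, 2, 3],
              [2, 1, 1, 1, 2],
              [3, 2, 1, 1, 1],
              [5, 3, 2, 1, 1]])

# Project along the diagonal: one sum per diagonal of the matrix.
proj = [int(np.trace(B, offset=k)) for k in range(-4, 5)]
print(proj)    # [5, 6, 6, 4, 5, 4, 6, 6, 5] -- no clear valley

# Normalize each bin by its pixel count, as in Eq. (3):
counts = [5 - abs(k) for k in range(-4, 5)]
norm = [p / c for p, c in zip(proj, counts)]
print(norm)    # [5.0, 3.0, 2.0, 1.0, 1.0, 1.0, 2.0, 3.0, 5.0] -- prominent valley
```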



ψ_i(ρ, θ) = (1/N) ∬_{(x,y)∈B_i^θ} F(x, y) δ[ρ − (x cos θ + y sin θ)] dx dy,  (3)

where N refers to the number of pixels in the patch along the projection orientation,


Fig. 3. (a) and (b) are two pixels and their 51 × 51 neighborhoods, and (c) is a 51 × 51 neighborhood B_i^{θ_m} obtained by rotating the square window in (b).

Fig. 4. Projecting the patches in Fig. 3 into Radon space along the main vein direction. The dashed lines refer to the projection direction.


Fig. 5. (a) A vein pixel in a thick vein and its 51 × 51 neighborhood, (b) a background pixel and its 51 × 51 neighborhood, (c) a vein pixel and its 25 × 51 neighborhood, and (d) a background pixel and its 25 × 51 neighborhood. The 25 × 51 neighborhoods in (c) and (d) are obtained by shortening one side of the 51 × 51 neighborhoods in (a) and (b).

N = ∬_{(x,y)∈B_i^θ} δ[ρ − (x cos θ + y sin θ)] dx dy,  (4)

and B_i^θ is the patch centered on pixel o along the projection orientation (as shown in Figs. 2(b) and 3(c)). In Eq. (3), the output ψ_i(ρ, θ) is normalized to a certain range by N. In addition, the patches B_i^θ vary with the projection orientation (as illustrated in Fig. 2) such that the number of pixels N is the same along each projection orientation (dashed line). For example, we rotate the window in Fig. 3(b) and obtain the patch in Fig. 3(c); the resulting patch is projected into Radon space, where a prominent valley is created, as shown in Fig. 4(c). In general, the width of the veins within the same image or across different images varies, as shown in Fig. 5. Thick vein patterns may create a prominent valley using a large square window (as shown in Figs. 5(a) and 6(a)). However, a valley is also created for non-vein pixels using the same square window (as shown in Fig. 6(b)); in other words, some false vein patterns will be created in the enhanced feature map. To overcome this drawback, a rectangular window is obtained by shortening one side of the square window in Fig. 5(a) and (b) and is then employed in Eq. (3) to separate vein patterns from the background. As shown in Fig. 6(c) and (d), a prominent valley is created by the vein pattern rather than the background when a rectangular window is used. In our experiments, the width W (long side) and height H (short side) of the window are heuristically determined. In addition, different from the fixed window in Eq. (1), the projection of

Fig. 6. Projecting the patches in Fig. 5 into Radon space along the main vein direction. The dashed lines refer to the projection direction.

a rectangular patch in Eq. (3) is statistically balanced across all projection orientations because the number of pixels N taken from each patch is the same along every projection orientation.

2.1.2. Curvature in Radon space

Curvature is very effective for characterizing valleys. Therefore, we compute the curvature of the patch B_i^θ in Radon space, i.e.,

C_i(ρ, θ) = (d²ψ_i(ρ, θ)/dρ²) / [1 + (dψ_i(ρ, θ)/dρ)²]^{3/2}.  (5)
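Eq. (5) can be evaluated numerically with finite differences; a minimal sketch (the helper name `curvature` is ours, not from the paper):

```python
import numpy as np

def curvature(psi, drho=1.0):
    """Eq. (5): C = psi'' / (1 + psi'^2)^(3/2), computed with finite
    differences. Bins with C > 0 belong to valley regions."""
    d1 = np.gradient(psi, drho)
    d2 = np.gradient(d1, drho)
    return d2 / (1.0 + d1 ** 2) ** 1.5

# A parabolic valley psi(rho) = rho^2 has curvature 2 at its bottom.
rho = np.linspace(-3.0, 3.0, 61)
C = curvature(rho ** 2, drho=rho[1] - rho[0])
print(round(float(C[30]), 6))   # 2.0 at the valley bottom (rho = 0)
```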

Fig. 7(a) illustrates the curvature in Radon space along the main vein orientation θ_m. The valley region can be determined by C_i(ρ, θ) > 0. For a vein pixel o, the patch along orientation θ_m can generate more than one valley in Radon space. To simplify the description of our approach in the following discussions, the

H. Qin et al. / Expert Systems With Applications 82 (2017) 151–161

Fig. 7. Curvature in Radon space along (a) the main vein orientation θ_m and (b) the orientation θ′_m, which is perpendicular to the main vein orientation θ_m.

region of the rth valley in Radon space is defined as R_r(ρ_1, ρ_2, θ), subject to:

• ρ_2 > ρ_1
• C_i(ρ, θ) = 0, if ρ = ρ_1 or ρ = ρ_2
• C_i(ρ, θ) > 0, if ρ_1 < ρ < ρ_2

where ρ_1 and ρ_2 are the boundary bins of the valley region C_i(ρ, θ) > 0 (as shown in Fig. 7(a)), and are determined by automatically searching for the boundary points.

2.1.3. Image enhancement

Generally speaking, if the contrast of a vein pattern is high, the corresponding valley in ψ_i(ρ, θ_m) will be sharp, with a high curvature. The value of the pixel is enhanced according to the curvature values of the valleys in ψ_i(ρ, θ_m). First of all, to find the main vein orientation θ_m of the pixel o, we quantize all possible vein orientations into a set of K values as

θ_k = kπ/K,  (6)

where k = 1, 2, ..., K. Correspondingly, there are K patches for the pixel o, and K is heuristically set to 8. An enhanced value E_i(θ_k) in orientation θ_k is calculated by

E_i(θ_k) = ∫_{ρ_1}^{ρ_2} C_i(ρ, θ_k) dρ  if o ∈ R_r(ρ_1, ρ_2, θ_k), and E_i(θ_k) = 0 otherwise,  (7)

where the condition o ∈ R_r(ρ_1, ρ_2, θ_k) determines whether the valley region is created by the values in the neighborhood centered on pixel o. Valley regions that do not include the center pixel o are ignored in Eq. (7). The main orientation θ_m for each pixel is determined as

θ_m = arg max_{θ_k} E_i(θ_k).  (8)

Once the main vein orientation is determined, its perpendicular orientation is computed by



θ′_m = θ_{m+K′} if m ≤ K′, and θ′_m = θ_{m−K′} if m > K′,  (9)

where K′ = K/2. Then, the value of the center pixel o is enhanced by the following equation:

f_i = E_i(θ_m) − E_i(θ′_m).  (10)
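Eqs. (7)–(10) can be sketched end-to-end: for each quantized orientation, integrate the positive curvature over the valley that contains the center bin to obtain E_i(θ_k), then subtract the response of the perpendicular orientation. The names (`valley_response`, `enhance_pixel`) and the toy curvature profiles are our own illustration under a unit-ρ-spacing assumption:

```python
import numpy as np

def valley_response(C):
    """Eq. (7): integrate C over the valley region (C > 0) that contains
    the center bin; return 0 if the center is not inside a valley."""
    c = len(C) // 2
    if C[c] <= 0:
        return 0.0
    lo = c
    while lo > 0 and C[lo - 1] > 0:
        lo -= 1
    hi = c
    while hi < len(C) - 1 and C[hi + 1] > 0:
        hi += 1
    return float(C[lo:hi + 1].sum())   # unit rho spacing assumed

def enhance_pixel(curv_profiles):
    """Eqs. (8)-(10): responses E over K orientations, main orientation
    by argmax, f_i = E(theta_m) - E(perpendicular of theta_m)."""
    E = [valley_response(C) for C in curv_profiles]
    K = len(E)
    m = int(np.argmax(E))
    return E[m] - E[(m + K // 2) % K]

# Toy profiles for K = 4 orientations: a valley only along one
# orientation (vein-like pixel) ...
vein = [np.array([-1.0, 2.0, 3.0, 2.0, -1.0]) if k == 0
        else np.array([-1.0, -1.0, -0.5, -1.0, -1.0]) for k in range(4)]
# ... versus valleys along every orientation (isolated noise blob).
noise = [np.array([-1.0, 2.0, 3.0, 2.0, -1.0]) for _ in range(4)]
print(enhance_pixel(vein), enhance_pixel(noise))   # 7.0 0.0
```

The vein-like pixel yields a large difference while the noise blob cancels out, which is exactly the discrimination property argued in item (3) below.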

The proposed approach has the following advantages. (1) There is a prominent valley in Radon space even if the pixel o is located in a non-connective vein region, as shown in Fig. 8(a). Therefore, the proposed approach can extract connective finger-vein patterns.


(2) Previous methods (Miura et al., 2007; Qin et al., 2013; Song et al., 2011) enhance an image by computing the curvature of the valley. As noise easily corrupts pixel values and creates sharp peak or valley regions in the cross-section profile (as shown in Fig. 8), these methods are not robust for enhancing the finger-vein pattern. For example, it is difficult to enhance the center pixels in Fig. 8(b) because there is no prominent valley in the cross-section profile created by the pixels on the red line. On the contrary, the Radon transform can not only suppress the high-frequency noise in the finger-vein image but also enlarge the gap between vein and background. This is explained by the following fact: in general, most vein pixels in a vein patch have smaller values than background pixels. A few noise pixels in the vein region have large values, but the summation of both vein and noise pixel values is still smaller than that of the background pixel values. So, a valley is still generated for a vein patch corrupted by some noise, as shown in Fig. 8(c). Therefore, the Radon transform is effective for noise suppression and vein pattern enhancement. (3) For a pixel located in the vein region, E_i(θ_m) is large while E_i(θ′_m) is equal to zero because there is no prominent valley along θ′_m, as shown in Fig. 7(b); so the difference between them is large. For a pixel in the background region, the difference is small because both E_i(θ_m) and E_i(θ′_m) are small. For a pixel in isolated noise or irregular shadowing, both E_i(θ_m) and E_i(θ′_m) are large, but the difference f_i is still small. So, the vein pattern can be distinguished from other regions such as isolated noise, irregular shadowing and background.

2.2. Image encoding and matching

2.2.1. Image encoding

To match two images effectively, the enhanced image is encoded using the following binarization scheme:



R(x, y) = 1 if f(x, y) > u, and R(x, y) = 0 if f(x, y) ≤ u,  (11)

where u is the mean of the pixel values in the P × P neighborhood of the pixel (x, y) and R(x, y) is the binarized image. The parameter P is heuristically determined based on rigorous experiments, and the threshold u is then automatically computed as u = (Σ_{x=1}^{P} Σ_{y=1}^{P} f(x, y)) / (P × P).
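Eq. (11) is a local-mean threshold; a sketch using an integral image to obtain the P × P neighborhood means efficiently (the function name `binarize` and the edge-replication choice are ours):

```python
import numpy as np

def binarize(f, P):
    """Eq. (11): R(x, y) = 1 where f(x, y) exceeds the mean u of its
    P x P neighborhood; edge pixels reuse the nearest border values."""
    pad = P // 2
    fp = np.pad(f, pad, mode='edge')
    ii = fp.cumsum(axis=0).cumsum(axis=1)          # integral image
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = f.shape
    u = (ii[P:P + h, P:P + w] - ii[:h, P:P + w]
         - ii[P:P + h, :w] + ii[:h, :w]) / (P * P)
    return (f > u).astype(np.uint8)

# A single bright pixel survives; flat regions binarize to 0.
f = np.zeros((5, 5)); f[2, 2] = 1.0
print(binarize(f, 3).sum())   # 1
```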

2.2.2. Matching

The preprocessing before matching (as shown in Fig. 9) reduces translational and rotational variations, but inaccurate parameter estimation still causes some variation in the normalized vein images. Therefore, a robust matching approach is proposed that computes the overlapping region between two images over a small range of translations in the horizontal and vertical directions, and accepts the match if the size of this overlapping region is larger than a preset threshold. Let R and T denote the registered and testing binarized feature maps, each of size M × N. The template R̄ is an expanded image of R, computed by extending its width and height to M + 2w_R and N + 2h_R; R̄ is expressed as:



R̄(x, y) = R(x − w_R, y − h_R) if 1 + w_R ≤ x ≤ M + w_R and 1 + h_R ≤ y ≤ N + h_R, and R̄(x, y) = −1 otherwise.  (12)

In many previous methods (Kumar & Zhou, 2012; Miura et al., 2004; Qin et al., 2011), matching is performed by computing the overlapped region between the whole testing image and the template for finger-vein verification. However, many factors affect the


Fig. 8. Examples for different patches: (a) projecting a discontinuous vein patch into Radon space, (b) a cross-section profile from a patch in which the center vein pixel is corrupted, and (c) projecting the corrupted patch from figure (b) into Radon space along its main orientation.

Fig. 9. Normalized results: (a) original image from database A; (b) normalized gray image from (a); (c) original image from database B; (d) normalized gray image from (c).

quality of the finger-vein image, so there is some noise and irregular shadowing in the captured image, which degrades verification accuracy. In general, compared to the pixels in the center region, the values of the pixels on the boundary of a finger-vein image are more prone to corruption by factors such as illumination and light scattering. Therefore, the boundary regions of the finger-vein image may include more noise and irregular shadowing, and some vein patterns may easily be missed in these regions; matching such regions may create many verification errors. To achieve robust matching, the patch T̄ obtained by cropping the width and height of the testing image T to M − 2w_T and N − 2h_T is used to compute the matching score:

T̄(x, y) = T(x + w_T, y + h_T),  (13)

where x ∈ [1, M − 2wT ] and y ∈ [1, N − 2hT ]. The matching score between the template image R and the testing image T is computed as

φ(T, R) = max_{0 ≤ i ≤ 2(w_R + w_T), 0 ≤ j ≤ 2(h_R + h_T)} [ Σ_{x=1}^{M−2w_T} Σ_{y=1}^{N−2h_T} Θ(R̄(x + i, y + j), T̄(x, y)) ] / [(M − 2w_T) × (N − 2h_T)],  (14)

where

Θ(X, Y) = 1 if X = Y and X ≠ −1, and Θ(X, Y) = 0 otherwise.  (15)
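Eqs. (12)–(15) amount to padding the template with −1, center-cropping the test map, and sliding one over the other. A direct, unoptimized sketch (the function name is ours; rows are indexed by the h parameters and columns by the w parameters):

```python
import numpy as np

def match_score(R, T, wR, hR, wT, hT):
    """Pad the registered map R with -1 (Eq. 12), crop the test map T
    by (wT, hT) (Eq. 13), then slide the crop over the padded template
    and keep the best overlap ratio (Eqs. 14-15)."""
    M, N = R.shape
    Rbar = np.full((M + 2 * hR, N + 2 * wR), -1, dtype=int)
    Rbar[hR:hR + M, wR:wR + N] = R
    Tbar = T[hT:M - hT, wT:N - wT]
    m, n = Tbar.shape
    best = 0.0
    for i in range(2 * (hR + hT) + 1):
        for j in range(2 * (wR + wT) + 1):
            win = Rbar[i:i + m, j:j + n]
            hits = np.sum((win == Tbar) & (win != -1))
            best = max(best, hits / (m * n))
    return best

# Identical binary maps give a perfect score of 1.0:
R = (np.arange(48).reshape(6, 8) % 3 == 0).astype(int)
print(match_score(R, R.copy(), wR=1, hR=1, wT=1, hT=1))   # 1.0
```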

φ(T, R) computes the amount of overlap between R and T. The parameters (w_R + w_T) and (h_R + h_T) control the translation distance in the horizontal and vertical directions.

3. Experimental results

To evaluate the effectiveness and robustness of the proposed method for finger-vein verification, we carried out experiments on contact-based and contactless databases. The feature extraction methods using maximum curvature (Miura et al., 2007), mean curvature (Song et al., 2011), difference curvature (Qin et al., 2013), Local Binary Patterns (LBP) (Liu, Xie, & Park, 2016), and SIFT (Peng, Wang, El-Latif, Li, & Niu, 2012) have shown promising results in the literature. The robust Gabor filter (Kumar & Zhou, 2012) and the inter-scale multiplication operation (ISMO) (Yang & Shi, 2014) have also been applied to extract finger-vein patterns and have shown good performance. Therefore, we compare the proposed method with all of these, so that more insight into the problem of finger-vein verification can be gained from our experiments.

3.1. Database

Database A. The Hong Kong Polytechnic University finger-vein image database (Kumar & Zhou, 2012) consists of 3132 images acquired from 156 subjects using an open and contactless imaging device. The first 2520 finger images were captured from 105 subjects in two separate sessions with an average interval of 66.8 days. In each session, each subject provided 6 image samples from the index finger and the middle finger, respectively; therefore, each subject provided 12 images per session. The remaining 51 subjects provided 612 images in a single session. To test our approach, we use the sub-database consisting of 2520 finger images (105 subjects × 2 fingers × 6 images × 2 sessions) from the first 105 subjects, which is closer to a practical capture environment. The remaining 612 images are employed as a validation set for parameter selection. As all images are acquired using an open and contactless imaging device, there are more variations such as translation, rotation, scale and uneven illumination. Therefore, the acquired finger-vein images are first subjected to preprocessing before feature extraction. In our experiments, the region of interest (ROI) images are extracted, and then translation and orientation alignment are carried out using the method in Kumar and Zhou (2012). As the background in an image contributes many errors during the matching stage, all images are cropped to 494 × 157.
They are then resized to 186 × 71 to reduce the computation cost. Database B. This finger-vein database was constructed by the Information Security and Parallel Processing Laboratory, National Taiwan University of Science and Technology (Mulyono & Jinn, 2008).

H. Qin et al. / Expert Systems With Applications 82 (2017) 151–161


Fig. 10. Genuine and impostor distributions.
Fig. 11. Relationship between neighborhood size and EER.

It contains 680 images captured with a closed, contact-based finger-vein imaging device from 85 subjects. Each subject provided four image samples from the left and the right index finger, so there are 8 samples per person. The size of each image is 352 × 288 pixels. The original images contain a black background, which can degrade matching accuracy, so we crop them to 221 × 83. In addition, the finger is fixed by the device during image capture, so there is little rotation and scaling, and we do not normalize the orientation or scale of each image. Fig. 9(a) and (b) illustrate the preprocessed images from database A; Fig. 9(c) and (d) show the normalized images from database B.

3.2. Determination of parameters

3.2.1. Determination of neighborhood size

The selection of the neighborhood size is important for extracting robust vein patterns. If the size is too small, more detailed vein patterns are extracted, but more noise is included, and this noise can produce verification errors. Conversely, if the neighborhood size is too large, the vein feature details are suppressed and only smooth vein features are extracted; in other words, some features will be missing from the feature image, which also results in mismatch errors. In this section, we determine an appropriate neighborhood size using the validation dataset of 612 images (51 subjects × 2 fingers × 6 images) from the remaining 51 subjects of database A. Firstly, different fingers are treated as different classes, so there are 102 classes and each class has 6 images. Based on Eq. (14), we compute the matching scores between the templates and the testing images. This yields 1530 (102 × C(6,2)) genuine scores and 185,436 (102 × 101 × 36/2) impostor scores; symmetric matches are not executed when computing the impostor scores. Secondly, the false rejection rate (FRR) is computed by matching images from the same finger, and the false acceptance rate (FAR) by matching images from different fingers. The FAR and FRR are computed as follows. Let T be a threshold set sampled from 0 to 1 at an interval of 0.0002, namely T = {0, 0.0002, 0.0004, ..., 0.9998, 1}, so there are 5001 elements in T. Ti is the ith element of T, where i = 1, 2, ..., 5001. Let G and E be the genuine score set and the impostor score set, respectively. If the score is lower than the decision threshold Ti, the system accepts the claimant; otherwise the claimant is rejected, as shown in Fig. 10.
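The genuine and impostor score counts quoted above follow directly from the all-versus-all protocol. A small helper (the function name is ours, not the paper's) reproduces them:

```python
from math import comb

def score_counts(n_classes, imgs_per_class, symmetric=False):
    """Count genuine and impostor match scores for an all-vs-all protocol.

    Genuine: every unordered image pair within a class.
    Impostor: every image pair across distinct classes; when symmetric
    matches are skipped (as in the paper), each cross-class pair is
    counted once.
    """
    genuine = n_classes * comb(imgs_per_class, 2)
    cross = n_classes * (n_classes - 1) * imgs_per_class ** 2
    impostor = cross if symmetric else cross // 2
    return genuine, impostor
```

For 102 classes of 6 images this gives 1530 genuine and 185,436 impostor scores, matching the numbers in the text.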
With each distribution normalized to unit area, the genuine scores falling above a chosen threshold form the falsely rejected score set, denoted FR, while the impostor scores falling below the threshold form the falsely accepted score set, denoted FA. The FAR and FRR for threshold Ti are represented as

FARi and FRRi, and are computed by

FRRi = (number of matching scores in FR) / (number of matching scores in G).   (16)

FARi = (number of matching scores in FA) / (number of matching scores in E).   (17)

Thus, each threshold yields a pair of FAR and FRR values. As the threshold Ti increases from 0 to 1, the FAR increases and the FRR decreases; conversely, decreasing Ti leads to a low FAR and a high FRR, as shown in Fig. 10. When FAR equals FRR, that common value is the equal error rate (EER). The genuine accept rate (GAR) is computed by

GARi = (number of genuine accepted matching scores) / (number of matching scores in G)   (18)

     = (number of matching scores in G − number of matching scores in FR) / (number of matching scores in G)   (19)

     = 1 − FRRi.   (20)

Fig. 11 shows the experimental results. From Fig. 11, we can see that the smallest equal error rate is achieved at a neighborhood size of 7 (7 × 7). Therefore, the patch size is set to 7 for database A. As the vein widths in the two databases are similar, the neighborhood size is fixed to the same value for database B.

3.2.2. Determination of patch size

The parameters wR and hR determine the maximum translation range in the horizontal and vertical directions. In general, larger values of wR and hR can accommodate larger translational variations between two images, but at a higher computation cost; wR and hR are therefore set heuristically to 15 and 20. Compared with wR and hR, the selection of the parameters wT and hT, which define the patch size of the testing image, is more critical for achieving high performance. If the patch is too small, some features are missing from the feature map, which results in low discrimination among classes. Conversely, if it is too large, more vein features are obtained but also more noise and irregular shadowing, which can produce mismatch errors that degrade verification accuracy. The parameters wT and hT are determined based on the EER on the validation dataset of 612 images from database A. Table 2 shows the relationship between the cropped size and the EER. In Table 2, wT is the number of rows removed from the top and bottom of the testing image, and hT is the number of columns removed from the left and right, respectively. The cropped images are illustrated in Fig. 12. From Table 2, we can see that the EER is 2.87% when matching the original testing image with the template.


Fig. 12. The cropped samples. (a) Original image with size of 71 × 186; (b) patch with size of 52 × 157 (wT = 10 and hT = 15); (c) patch with size of 42 × 157 (wT = 15 and hT = 15); (d) patch with size of 32 × 157 (wT = 20 and hT = 15); (e) patch with size of 22 × 157 (wT = 25 and hT = 15).

Table 2
EER (%) at various cropped sizes (rows: wT; columns: hT).

wT \ hT     0      5      10     15     20     25     30
 0         2.87   2.77   2.77   2.87   2.57   2.48   2.57
 5         2.68   2.58   2.49   2.39   2.19   2.49   2.49
10         2.29   2.19   2.19   2.19   2.19   2.29   2.19
15         2.09   1.99   1.99   1.79   1.79   1.89   1.99
20         1.79   1.89   1.49   1.29   1.49   1.79   1.99
25         1.79   1.69   1.59   1.59   1.59   1.89   2.19
30         3.18   3.18   3.18   3.28   3.48   3.68   3.78

Table 3
EER (%) of various approaches on database A, first session.

Method                      EER     GAR (FAR = 0.1%)   GAR (FAR = 0.01%)
SIFT                        14.45   30.57              16.10
LBP                          4.61   80.79              68.03
ISMO                         1.93   93.78              89.09
Maximum curvature point      7.56   66.83              53.21
Mean curvature               1.77   96.48              94.95
Gabor filter                 1.65   96.55              95.69
Difference curvature         1.78   96.57              94.16
The proposed approach        1.23   97.87              96.83

Table 4
EER (%) of various approaches on database A, second session.

The EER decreases as the number of cropped rows and columns increases. When the number of cropped rows or columns exceeds 25, the EER increases significantly. The smallest equal error rate is achieved at wT = 20 and hT = 15, which implies the best separation between the matching score distributions. Therefore, wT and hT are set to 20 and 15 for database A. As the image sizes in the two databases are similar, the numbers of cropped rows and columns are fixed to the same values for database B. From the experimental results, we can also see that using a patch instead of the whole finger-vein image achieves better verification performance. This may be explained by the fact that vein patterns in the image boundary regions, such as the fingertip region, are prone to corruption: these regions contain more noise and irregular shadowing, and some vein patterns may be missing, so matching in such regions can generate additional verification errors.
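The cropping convention above (removing wT rows from the top and bottom and hT columns from the left and right) can be written as a one-liner. Note that the patch sizes reported in Fig. 12 differ from this symmetric crop by one row/column, so the paper's exact boundary handling is an assumption here:

```python
import numpy as np

def crop_test_patch(img, w_t=20, h_t=15):
    """Remove w_t rows from the top and bottom and h_t columns from the
    left and right of a testing image, per the best setting found above
    (w_t = 20, h_t = 15 for database A)."""
    return img[w_t:img.shape[0] - w_t, h_t:img.shape[1] - h_t]
```

Applied to a 71 × 186 image with the default parameters, this yields a 31 × 156 patch under our symmetric-crop assumption.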

3.3. Visual assessment

In this section, we visually analyze the features extracted by the various approaches, to gain more insight into the proposed method. Fig. 13 illustrates the finger-vein extraction results of the existing approaches. From these results, we can see that the existing curvature-based approaches cannot extract smooth and continuous patterns from raw finger-vein images. In addition, the Gabor-filter-based approach is prone to over-segmentation because irregular shadowing is extracted into the resulting vein feature image. Compared with the existing approaches, the proposed method extracts smoother and more continuous vein patterns from raw finger-vein images. This may be explained by two facts: (1) a vein patch generates prominent valleys in Radon space while noise is suppressed, so the curvature readily captures the finger-vein patterns; (2) irregular shadowing is removed by computing the difference of curvature in two projection orientations. Therefore, the proposed approach emphasizes the finger-vein patterns rather than the noise and irregular shadowing.
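The valley mechanism described above can be illustrated with a toy sketch. This is a deliberate simplification of the paper's method: only the 0° and 90° projections of a patch are used (the paper projects along eight orientations), and curvature is approximated by a plain second difference; the function names are ours.

```python
import numpy as np

def second_difference(profile):
    """Discrete curvature proxy f(x-1) - 2 f(x) + f(x+1): large and
    positive at the bottom of a valley."""
    return np.convolve(profile, [1.0, -2.0, 1.0], mode="same")

def valley_response(patch):
    """Project a patch along two perpendicular orientations (the 0- and
    90-degree Radon projections) and return the stronger valley
    curvature found in either profile."""
    cols = patch.sum(axis=0)  # 0-degree projection
    rows = patch.sum(axis=1)  # 90-degree projection
    return max(second_difference(cols).max(),
               second_difference(rows).max())
```

A dark vein crossing the patch leaves a dip in one projection, so its response dominates that of a flat patch, which is the behavior the visual comparison in Fig. 13 reflects.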

Method                      EER     GAR (FAR = 0.1%)   GAR (FAR = 0.01%)
SIFT                        13.71   29.37              14.06
LBP                          3.33   83.05              69.11
ISMO                         1.58   94.68              90.40
Maximum curvature point      7.32   68.13              52.32
Mean curvature               1.14   97.87              96.48
Gabor filter                 0.96   98.03              97.04
Difference curvature         1.14   98.03              96.79
The proposed approach        0.48   99.11              98.41

Table 5
EER (%) of various approaches on database A, two mixed sessions.

Method                      EER     GAR (FAR = 0.1%)   GAR (FAR = 0.01%)
SIFT                        14.08   29.97              15.08
LBP                          4.05   81.92              68.57
ISMO                         1.75   94.24              89.75
Maximum curvature point      7.44   52.76              67.48
Mean curvature               1.48   97.17              95.71
Gabor filter                 1.35   97.34              96.52
Difference curvature         1.48   97.30              95.18
The proposed approach        1.03   98.49              97.62

3.4. Verification results based on image datasets from one session

In this section, we evaluate the performance of the various approaches on databases A and B, considering vein images collected in the same session. For database A, we employ the subset of 2520 images (210 fingers × 6 images × 2 sessions) captured over two sessions. Different fingers from the same subject are treated as different classes, so there are 210 classes in total for the first 105 subjects. First, the performance is evaluated in each session individually. In one session, there are 1260 images from 210 fingers, so the total numbers of genuine and impostor scores are 3150 (210 × C(6,2)) and 790,020 (210 × 209 × 36/2), respectively. Second, the performance obtained by combining the scores from the two sessions is reported, giving 6300 (3150 × 2 sessions) genuine scores and 1,580,040 (790,020 × 2 sessions) impostor scores. Tables 3–5 list the verification errors of the various approaches for each session taken separately, and then for the two


Fig. 13. Sample results from various approaches: (a) original finger-vein images (the upper-left and lower-left images are from database A and database B, respectively), (b) vein pattern extracted by the even Gabor filter with morphological processing, (c) vein pattern extracted by mean curvature, (d) vein pattern extracted by difference curvature, (e) vein pattern extracted by maximum curvature, (f) enhanced vein pattern based on our approach, and (g) vein pattern extracted from (f).

Fig. 14. Receiver operating characteristics from database A collected at (a) first session, (b) second session and (c) two mixed sessions.

sessions, mixed. The receiver operating characteristic (ROC) curves for the corresponding performances are illustrated in Fig. 14. For database B, all images were collected in one session. Different fingers from the same person are treated as different classes, so there are 170 (85 × 2) classes. Performing the matching generates 1020 (170 × 6) genuine scores and 224,930 impostor scores, respectively. In addition, the genuine and impostor scores generated by the method of Mulyono and Jinn (2008) are provided with database B, so we directly include these results in our comparative experiments. Fig. 15 shows the ROC curves, and the EERs of the various methods are summarized in Table 6.


Fig. 15. Receiver operating characteristics from finger-vein images on database B.

Table 6
EER (%) of various approaches on database B.

Method                               EER     GAR (FAR = 0.1%)   GAR (FAR = 0.01%)
Scores from Mulyono and Jinn (2008)   2.16   93.04              88.63
SIFT                                 10.98   66.57              55.88
LBP                                   4.62   87.45              81.57
ISMO                                  4.46   88.65              85.05
Maximum curvature point               7.36   70.29              61.08
Mean curvature                        2.06   90.88              83.04
Gabor filter                          1.76   93.04              88.24
Difference curvature                  1.08   94.01              89.32
The proposed approach                 0.69   97.55              93.04

It can be seen from Tables 3–6 and Figs. 14 and 15 that using the proposed method to extract vein patterns achieves the best performance among all the approaches considered in this work. These results may be attributed to the following facts. Firstly, the Radon transform, which computes an integration over a patch, can be treated as a low-pass filter, so the proposed approach suppresses noise. Secondly, irregular shadowing is removed by computing the difference of curvature in two perpendicular orientations. Finally, the proposed method can extract connective finger-vein features, because a valley still exists for some non-connected vein patterns, as shown in Fig. 8(a). Therefore, the proposed approach can ignore isolated noise and irregular shading and enhance the finger-vein patterns. The maximum curvature, mean curvature, and difference curvature approaches do not achieve robust performance in our experiments. Their poorer verification results (Tables 3–6) can be attributed to the fact that these methods have difficulty distinguishing noise from vein patterns. Unlike the proposed method, they enhance the image according to the curvature of cross-sectional profiles; however, valleys in cross-sectional profiles are easily corrupted by noise (as shown in Fig. 8(b)). Therefore, some patterns may be missing from the feature image, which results in low discrimination among classes. Moreover, noise may create valleys in non-vein patches, so these approaches extract some false vein features that generate erroneous matches. Gabor filters have been employed for vein recognition with promising results (Kumar & Zhou, 2012; Yang & Shi, 2014; Yang et al., 2009a, 2009b), but they do not achieve high verification accuracy in our experiments. This can be explained by the fact that the images in databases A and B contain more irregular shading, which may be classified as vein patterns by the Gabor-filter-based methods (as shown in Fig. 13(b)). LBP and SIFT descriptors

Fig. 16. Receiver operating characteristics from finger-vein images (database A).

Table 7
EER (%) of various approaches on database A, images collected in two different sessions.

Method                      EER     GAR (FAR = 0.1%)   GAR (FAR = 0.01%)
SIFT                        15.11   32.06              16.43
LBP                          9.29   43.51              20.61
ISMO                         5.72   75.13              63.08
Maximum curvature point     13.65   35.32              17.38
Mean curvature               5.68   84.60              77.06
Gabor filter                 4.68   87.22              77.06
Difference curvature         4.14   88.62              81.32
The proposed approach        2.86   92.46              86.51

(Kauba, Reissig, & Uhl, 2014; Liu et al., 2016; Peng et al., 2012) are invariant to scaling and rotation, and have shown good performance for feature extraction and matching. Unfortunately, compared with approaches such as mean curvature and the Gabor-filter-based method, they do not achieve better performance on the two public databases. The reason may be that there is little rotation and scaling in the finger-vein images after preprocessing.

3.5. Verification results based on image datasets from different sessions

This experiment estimates the effectiveness and robustness of the various algorithms on finger-vein image data from both sessions. We do not consider database B because all of its images were captured in a single session. For database A, the first 2520 finger images were collected from 210 fingers of 105 subjects in two separate sessions; in each session, each finger provided 6 images. The 6 images of each finger acquired in the first session are employed for training and the remaining 6 images for testing. Therefore, 1260 (210 × 6) genuine scores and 263,340 (210 × 209 × 6) impostor scores are generated by matching images from the same and different fingers, respectively. Fig. 16 and Table 7 show the experimental results of the various approaches on the image data from different sessions. From Fig. 16 and Table 7, we can see that the proposed approach achieves a 2.86% verification error, which is significantly lower than those of the other approaches. Similar to the results on dataset A from the same session, the SIFT descriptor shows the highest verification error among all methods. The experimental results on databases A and B show a consistent trend: our approach significantly outperforms the existing methods in verification accuracy. Comparing the results on database A collected in the same session (Tables 3–5 and Fig. 14) with those from different sessions (Table 7 and Fig. 16), we see that all approaches achieve significantly better verification accuracy on image datasets acquired in one session. Such good performance can be

attributed to the fact that there are small within-class variations in images captured in the same session. In other words, the capture environment and user behavior are almost identical during finger-vein image acquisition within a short duration.

4. Conclusions

In this paper, a robust finger-vein feature extraction approach is proposed for personal verification. Firstly, each patch is projected into Radon space using the Radon transform. Secondly, the vein patterns are enhanced by computing the curvature of the valleys in Radon space. Finally, the enhanced images are encoded and matched for verification. The experimental results on two public databases show that the proposed approach achieves better performance than the other promising approaches considered in this work. The Radon transform can suppress noise and enhance the valleys created by vein patterns. In future work, we will combine it with other valley detection methods to further improve the performance of the finger-vein verification system.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (grant nos. 61402063, 51605061), the Natural Science Foundation Project of Chongqing (grant nos. cstc2013kjrcqnrc40013, cstc2014jcyjA1316), the Chongqing Municipal Education Commission Research Project (KJ1400612), and the Scientific Research Foundation of Chongqing Technology and Business University (grant nos. 1352019, 2013-56-04).

References

Cheong, W. F., Prahl, S. A., & Welch, A. J. (1990). A review of the optical properties of biological tissues. IEEE Journal of Quantum Electronics, 26, 2166–2185.
Deans, S. R. (2007). The Radon transform and some of its applications. Courier Corporation.
Frangi, A. F., Niessen, W. J., Vincken, K. L., & Viergever, M. A. (1998). Multiscale vessel enhancement filtering. In International conference on medical image computing and computer-assisted intervention (pp. 130–137).
Hashimoto, J. C. (2006).
Finger vein authentication technology and its future. In 2006 symposium on VLSI circuits: Digest of technical papers (pp. 5–8).
Huang, B. N., Dai, Y. G., Li, R. F., Tang, D. R., & Li, W. X. (2010). Finger-vein authentication based on wide line detector and pattern normalization. In 20th international conference on pattern recognition (ICPR 2010): vol. 2 (pp. 1269–1272).
Jia, W., Huang, D. S., & Zhang, D. (2008). Palmprint verification based on robust line orientation code. Pattern Recognition, 41, 1504–1513.
Kauba, C., Reissig, J., & Uhl, A. (2014). Pre-processing cascades and fusion in finger vein recognition. In BIOSIG (pp. 87–98).
Kono, M., Ueki, H., & Umemura, S. (2002). Near-infrared finger vein patterns for personal identification. Applied Optics, 41, 7429–7436.
Kumar, A., & Zhou, Y. B. (2012). Human identification using finger images. IEEE Transactions on Image Processing, 21, 2228–2244.
Lee, E. C., & Park, K. R. (2009). Restoration method of skin scattering blurred vein image for finger vein recognition. Electronics Letters, 45, 1.


Lee, E. C., & Park, K. R. (2011). Image restoration of skin scattering and optical blurring for finger vein recognition. Optics and Lasers in Engineering, 49, 816–828.
Liu, B. C., Xie, S. J., & Park, D. S. (2016). Finger vein recognition using optimal partitioning uniform rotation invariant LBP descriptor. Journal of Electrical and Computer Engineering, 1–10.
Liu, T., Xie, J. B., Yan, W., Li, P. Q., & Lu, H. Z. (2013). An algorithm for finger-vein segmentation based on modified repeated line tracking. The Imaging Science Journal, 61, 491–502.
Miura, N., Nagasaka, A., & Miyatake, T. (2007). Extraction of finger-vein patterns using maximum curvature points in image profiles. IEICE Transactions on Information and Systems, 90, 1185–1194.
Miura, N., Nagasaka, A., & Miyatake, T. (2004). Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification. Machine Vision and Applications, 15, 194–203.
Mulyono, D., & Jinn, H. S. (2008). A study of finger vein biometric for personal identification. In International symposium on biometrics and security technologies (pp. 1–8).
Nguyen, D. T., Park, Y. H., Shin, K. Y., & Park, K. R. (2013). New finger-vein recognition method based on image quality assessment. KSII Transactions on Internet and Information Systems (TIIS), 7, 347–365.
Peng, J. L., Wang, N., El-Latif, A., Li, Q., & Niu, X. M. (2012). Finger vein verification using Gabor filter and SIFT feature matching. In 2012 eighth international conference on intelligent information hiding and multimedia signal processing (IIH-MSP) (pp. 45–48).
Qin, H. F., Li, S., Kot, A. C., & Qin, L. (2012). Quality assessment of finger-vein image. In Signal and information processing association annual summit and conference (APSIPA ASC), 2012 Asia-Pacific (pp. 1–4).
Qin, H. F., Qin, L., Xue, L., He, X. P., Yu, C. B., & Liang, X. Y. (2013). Finger-vein verification based on multi-features fusion. Sensors, 13, 15048–15067.
Qin, H. F., Qin, L., & Yu, C. B. (2011). Region growth-based feature extraction method for finger-vein recognition. Optical Engineering, 50, 057208.
Song, W. D., Kim, T. J., Kim, H. C., Choi, J. H., Kong, H. J., & Lee, S. R. (2011). A finger-vein verification system using mean curvature. Pattern Recognition Letters, 32, 1541–1547.
Wu, J. D., & Ye, S. H. (2009). Driver identification using finger-vein patterns with Radon transform and neural network. Expert Systems with Applications, 36(3), 5793–5799.
Yang, J. F., & Shi, Y. H. (2014). Towards finger-vein image restoration and enhancement for finger-vein recognition. Information Sciences, 268, 33–52.
Yang, J. F., Yang, J. L., & Shi, Y. H. (2009a). Finger-vein segmentation based on multi-channel even-symmetric Gabor filters. In IEEE international conference on intelligent computing and intelligent systems (ICIS 2009): vol. 4 (pp. 500–503).
Yang, J. F., Yang, J. L., & Shi, Y. H. (2009b). Combination of Gabor wavelets and circular Gabor filter for finger-vein extraction. Emerging Intelligent Computing Technology and Applications, 346–354.
Yang, L., Yang, G. P., Yin, Y. L., & Xiao, R. Y. (2013). Finger vein image quality evaluation using support vector machines. Optical Engineering, 52, 027003.
Yu, C. B., Qin, H. F., Cui, Y. Z., & Hu, X.-Q. (2009). Finger-vein image recognition combining modified Hausdorff distance with minutiae feature matching. Interdisciplinary Sciences: Computational Life Sciences, 1, 280–289.
Yu, C. B., Qing, H. F., & Zhang, L. (2008). A research on extracting low quality human finger vein pattern characteristics. In The 2nd international conference on bioinformatics and biomedical engineering (ICBBE 2008) (pp. 1876–1879).
Zhang, Z. B., Ma, S. L., & Han, X. (2006). Multiscale feature extraction of finger-vein patterns based on curvelets and local interconnection structure neural network. In 18th international conference on pattern recognition (ICPR 2006): vol. 4 (pp. 145–148).
Zhou, Y. B., & Kumar, A. (2010). Personal identification from iris images using localized Radon transform. In 2010 20th international conference on pattern recognition (ICPR) (pp. 2840–2843).
Zhou, Y. B., & Kumar, A. (2011). Human identification using palm-vein images. IEEE Transactions on Information Forensics and Security, 6, 1259–1274.