Designing an accurate hand biometric based authentication system fusing finger knuckleprint and palmprint


Aditya Nigam, Phalguni Gupta

Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, 208016, India

Neurocomputing 151 (2015) 1120–1132

Article history: Received 14 November 2013; Received in revised form 12 March 2014; Accepted 24 March 2014; Available online 30 October 2014.

Abstract

This paper proposes an accurate and efficient multi-modal authentication system that makes use of palmprint and knuckleprint samples. Biometric images are transformed using the proposed sign of local gradient (SLG) method. Corner features are extracted from the resulting vcode and hcode and are tracked using a geometrically and statistically constrained Lucas-Kanade tracking algorithm. The proposed highly uncorrelated features (HUF) measure is used to match two query images. The proposed system is tested on the publicly available PolyU and CASIA palmprint databases along with the PolyU knuckleprint database. Several sets of chimeric bi-modal as well as multimodal databases are created in order to test the proposed system. Experimental results reveal that the proposed multi-modal system achieves a CRR of 100% with an EER as low as 0.01% over all created chimeric multimodal datasets. © 2014 Elsevier B.V. All rights reserved.

Keywords: Biometrics; Palmprint; Knuckleprint; Multi-modal; Fusion; Scharr

1. Introduction

There is a need for automated, secure and accurate human access control mechanisms for the reliable identification of people in several social applications such as law enforcement, secure banking and immigration control. The best mode in which identity management can be realized is a biometric based authentication system, which uses physiological (fingerprint [9,37], face [41,40,26,27], iris [11,29], etc.) or behavioral (signature, gait, etc.) characteristics. Biometrics based solutions are better than traditional token or knowledge based identification systems because they are harder to spoof, easier to use and cannot be lost. In the past few years, hand based biometric recognition systems (e.g. palmprint [5], fingerprint [9] and finger knuckleprint [47,28,30]) have received great attention because of their low cost acquisition sensors, high performance, high user acceptance and lesser need of user cooperation. The pattern formations at the finger knuckle bending [47] as well as in the palmprint region [5] are supposed to be stable (as shown in Fig. 1) and hence can be considered as discriminative biometric traits.

1.1. Motivation

Palmprint: The inner part of the hand is called the palm, and the region of interest extracted between the fingers and the wrist is termed


as palmprint, as shown in Fig. 1(a). Pattern formations within this region are supposed to be stable as well as unique; even monozygotic twins are found to have different palmprint patterns [16]. Hence one can consider it a well-defined and discriminative biometric trait. Palmprint's prime advantages over fingerprint include its higher social acceptance, because it has never been associated with criminal investigations, and its larger ROI area. A larger ROI ensures an abundance of structural features, including principal lines, wrinkles, creases and texture patterns (as is evident in Fig. 1(a)), even in low resolution palmprint images. This enhances the system's speed and accuracy and reduces the cost. Other factors favoring palmprint are the lesser user cooperation required and its non-intrusive and cheaper acquisition sensors.

Knuckleprint: The horizontal and vertical pattern formations in finger knuckleprint images (as shown in Fig. 1(b)) are believed to be very discriminative [47]. The knuckleprint texture develops very early and lasts very long, primarily because the knuckles are on the outer side of the hand and hence safely preserved. Its failure to enroll rate (FTE) is observed to be lower than that of fingerprint, and it can be acquired easily using an inexpensive setup with little user cooperation. User acceptance also favors knuckleprint since, unlike fingerprint, it has never been associated with criminal investigations. A comparative study between palmprint and knuckleprint over the standard biometric properties is presented in Table 1.

Multimodal: The performance of any unimodal biometric system is often restricted by variable and uncontrolled environmental conditions, sensor precision and reliability. Several trait specific challenges, such as pose, expression and aging for face recognition, degrade system performance. Hence such systems can only provide low


Fig. 1. Biometric traits' anatomy. (a) Palmprint anatomy. (b) Knuckleprint anatomy.

Table 1
Biometric properties (M = medium; H = high).

Property         Meaning                                                             Palmprint   Knuckleprint
Universality     Every individual must possess it                                    M           M
Uniqueness       Features should be distinct across individuals                      H           M
Permanence       Characteristics should be constant over a long period of time      H           H
Collectability   Easily acquired                                                     M           H
Performance      Possess high performance as per performance parameters (CRR; EER)   H           M
Acceptability    Acceptable to a large percentage of the population                  M           H
Circumvention    Difficult to mask or manipulate                                     M           H

or middle level security. Fusing more than one biometric trait in pursuit of superior performance can be a very useful idea; such systems are termed multi-modal [10] systems. Any such system makes use of multiple biometric traits to enhance performance, especially when a huge number of subjects are enrolled. The false acceptance rate grows rapidly with database size [4]; hence data from multiple traits can be utilized to achieve better performance.

1.2. Contribution

In this paper palmprint and knuckleprint ROIs are extracted and transformed using the proposed sign of local gradient (SLG) method to obtain robust vcode and hcode image representations. Corner features are extracted from the vcode and hcode by performing an eigen analysis of the Hessian matrix at every pixel. Matching is performed using the proposed HUF dissimilarity measure. Finally, the scores obtained for both traits (i.e. palmprint and knuckleprint) are fused into a multi-modal fusion score using the SUM rule (a minimal sketch is given at the end of this section). The overall architecture of the proposed multi-modal biometric system is shown in Fig. 2.

This paper is organized as follows: a comprehensive literature survey is presented in Section 2. In Section 3 the extraction of the region of interest (ROI) from a biometric sample is explained. Section 4 describes the proposed algorithm. Section 5 presents detailed experimental results of the proposed system on publicly available palmprint and knuckleprint databases, along with self-created chimeric multimodal databases fusing them. The last section presents the concluding remarks.
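As a concrete illustration, the SUM rule used for the final fusion step can be sketched as below; it assumes each unimodal HUF score already lies in [0, 1] (which holds here, since HUF is a fraction of uncorrelated corners), so no extra normalization step is shown.

```python
def sum_rule_fusion(trait_scores):
    """SUM-rule fusion: average the per-trait HUF dissimilarity scores.

    trait_scores: unimodal scores, e.g. [palm_score, knuckle_score],
    each in [0, 1]; a lower fused score indicates a better match.
    """
    return sum(trait_scores) / len(trait_scores)

# Example: fusing one palmprint and one knuckleprint matching score.
fused = sum_rule_fusion([0.34, 0.41])  # -> 0.375
```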

2. Literature review

2.1. Palmprint

Palmprint recognition systems are broadly based on structural or statistical features. In [13], line-like structural features are extracted by applying morphological operations over edge-maps. In [12], structural features such as points on the principal lines and some isolated points are utilized for palmprint authentication. In [44], a single fixed orientation Gabor filter is applied over the palmprint and the resulting Gabor phase is binarized using zero crossing. In [15], a bank of elliptical Gabor filters with different orientations is employed to extract the phase information of the palmprint image, which is merged according to a fusion rule to produce a feature called the FusionCode. In [39], a recognition system fusing the phase (FusionCode) and orientation information has been proposed. In [14], the palmprint is processed using a bank of Gabor filters with different orientations; the highest filter response is preserved as a feature represented in three bits. In [39], the palmprint is processed using a bank of orthogonal Gabor filters, and their differences, coded into bits, are considered as the palmprint features. All the above-mentioned systems use hamming distance for matching and classification. Many statistical techniques such as PCA, LDA, ICA and their combinations have also been applied to palmprints in order to achieve better performance [19,35,42]. Several other techniques, such as the Stockwell [6], Zernike moment [7], Discrete Cosine [8] and Fourier [5] transforms, have also been applied.

2.2. Knuckleprint

On the other hand, finger knuckleprint is a relatively new biometric trait and only a limited amount of work has been reported. In [45], Zhang et al. extract the region of interest using convex direction coding; the correlation between two knuckleprint images, calculated using band limited phase only correlation (BLPOC), is used for identification. In [24], knuckleprints are enhanced using CLAHE to address non-uniform reflection, and SIFT key-points are used for matching. In [43], a knuckleprint based recognition system that extracts features using local Gabor binary patterns (LGBP) has been proposed. In [46], a Gabor filter bank is


Fig. 2. Overall architecture of the proposed multimodal biometric authentication system.

applied to extract features for those pixels which have varying Gabor responses. The orientation and the magnitude information are fused to achieve better results. In [47], local as well as global features are fused to achieve optimal performance.

2.3. Multimodal

Not much work has been reported in the multimodal fusion area, largely because of the unavailability of synchronized multi-modal biometric databases. Reported work is done mostly over chimerically created multi-modal biometric databases, i.e. databases built by fusing multiple trait biometric data into chimeric subjects whose different biometric samples do not necessarily belong to the same subject. In [33], scores obtained by eigenfinger and eigenpalm are fused, while in [18] hand shape and palm features are fused. In [17], finger geometry and dorsal finger surface information are fused for performance improvement over their unimodal versions. In [23], features detected by tracking are encoded using efficient directional coding, and ridgelet transforms are used for feature matching. In [22], 1D Gabor filters are used for extracting features from knuckleprint as well as palmprint. In [31], knuckleprint and palmprint information is fused at the score level: sharp edge based knuckleprint features are denoised using wavelets, corner features with their local descriptors are considered for palmprint images, and matching is finally done using a cosine similarity function and a hierarchical hand metric. In [25], radon and haar transforms are used for feature extraction and a nonlinear fisher transformation is applied for dimensionality reduction. In [21], score level fusion of palmprint and knuckleprint images is performed using the phase only correlation (POC) function.

3. ROI extraction from biometric sample

The first step in any biometric based authentication system is region of interest (ROI) extraction. In this work palmprint and knuckleprint ROIs are extracted using the algorithms proposed in [5,46].

3.1. Palmprint [5]

Hand images are thresholded to obtain a binarized image from which the hand contour is extracted. Four key-points (X1, X2, V1, V2) are obtained on the hand contour, as shown in Fig. 3(b). Then two more key-points are obtained: C1, the intersection of the hand contour with the line passing through V1 at a slope of 45°, and C2, the intersection with the line passing through V2 at a slope of 60°, as shown in Fig. 3(c). Finally the midpoints of the line segments V1C1 and V2C2 are joined, and this line is considered as one side of the required square ROI. The final extracted palmprint ROI is shown in Fig. 3(d), and the various extraction steps in Fig. 3.

3.2. Knuckleprint [46]

The bottom boundary is obtained using Canny edges and a rough ROI is selected heuristically. The convex direction code of each Canny edge pixel is computed to represent its local direction: convex leftward (+1) or convex rightward (−1), with pixels not lying on any curve assigned the value 0. The convexity direction measures the strength of the locally dominant direction. The convex direction codes of all Canny edge pixels along a vertical scan-line are added to obtain the convex direction magnitude of that scan-line. The curves in the small area around the phalangeal joint have no dominant convex direction; hence, the vertical scan-line for which the minimum convex direction magnitude is obtained is considered as the


Fig. 3. Palmprint ROI extraction (images are taken from [5]). (a) Original. (b) Contour. (c) Key points. (d) Palmprint ROI.

Fig. 4. Knuckleprint ROI extraction (images are taken from [46]). (a) Original. (b) Rough ROI. (c) Canny edges. (d) Vertical axis. (e) ROI.

Fig. 5. LGBP transformation (red: −ve gradient; green: +ve gradient; blue: zero gradient). (a) Original. (b) Transformed (kernel = 3). (c) Transformed (kernel = 9). (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

Fig. 6. Original and transformed (vcode and hcode) palmprint and knuckleprint ROIs. (a) Original palmprint. (b) Palm vcode. (c) Palm hcode. (d) Original knuckleprint. (e) Knuckle vcode. (f) Knuckle hcode.


y-axis. The various steps involved in knuckleprint ROI extraction are shown in Fig. 4.
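A minimal sketch of the scan-line selection step described above, assuming the per-pixel convex direction code has already been computed on the Canny edge map as in [46]; the function name and array layout are illustrative.

```python
import numpy as np

def knuckle_vertical_axis(convex_code):
    """Pick the knuckleprint Y-axis: the vertical scan-line (column)
    whose convex direction magnitude is minimal.

    convex_code: 2D int array over the rough ROI, +1 for convex-leftward
    edge pixels, -1 for convex-rightward ones, 0 for non-curve pixels.
    """
    # Convex direction magnitude of each column: |sum of codes| along it.
    magnitude = np.abs(convex_code.sum(axis=0))
    # Around the phalangeal joint no direction dominates, so the column
    # with the smallest magnitude is taken as the vertical axis.
    return int(np.argmin(magnitude))
```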

4. Proposed system

The details of the proposed authentication system are discussed in this section.

4.1. Image transformation

Abundant texture, which is very discriminative and hence can be used for authentication, is present in both palmprint and knuckleprint images. Initially the regions of interest (ROI) of the palmprint and knuckleprint samples are normalized to a smaller size in order to reduce the computation time: palmprint ROIs are normalized to 100 × 100 pixels, whereas knuckleprint ROIs are normalized to 100 × 50 pixels so as to maintain the original aspect ratio. The proposed transformation uses edge-maps so as to achieve robustness against illumination.

4.1.1. Sign of local gradient (SLG)

The proposed sign of local gradient (SLG) method transforms the palmprint and knuckleprint ROIs into a vcode and an hcode that are more stable than the gray-scale image and can provide robust features, as shown in Fig. 6. The gradient of an edge pixel is positive if the pixel lies on an edge created by a transition from light to dark shade (i.e. high to low gray value), as shown in green in Fig. 5(b) and (c); otherwise it is negative or zero. Hence all edge pixels can be divided into three classes: +ve, −ve or zero gradient. Sobel x-direction kernels of sizes 3 × 3 and 9 × 9 are applied to Fig. 5(a) to obtain Fig. 5(b) and (c), respectively. A bigger kernel produces coarse level features, while a smaller one produces fine level but noisy features, as shown in Fig. 5(c) and (b). The Scharr kernels [34] (shown below) are used instead of Sobel kernels to obtain the x and y derivatives because Sobel kernels

lack rotational symmetry:

$$\mathrm{Scharr}_x = \begin{bmatrix} 3 & 0 & -3 \\ 10 & 0 & -10 \\ 3 & 0 & -3 \end{bmatrix}, \qquad \mathrm{Scharr}_y = \begin{bmatrix} 3 & 10 & 3 \\ 0 & 0 & 0 \\ -3 & -10 & -3 \end{bmatrix}$$

The Scharr kernel is obtained by minimizing the angular error; hence it is more consistent and reduces artifacts. The sign of the local gradient can be augmented with the edge information to make it more discriminative and robust. The proposed SLG based transformation uses this information to calculate an 8-bit code for every pixel. The Scharr x and y derivative kernels are used to extract the x and y direction derivatives of the eight neighboring pixels to obtain the vcode and hcode, respectively. The SLG based transformation calculates a sign_code for each pixel using the derivatives of its eight neighbors. The sign_code of any pixel is an 8-bit binary number whose ith bit is defined as

$$\mathrm{sign\_code}_i = \begin{cases} 1 & \text{if } Neigh[i] > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

where Neigh[i], i = 1, 2, ..., 8, are the gradients of the eight neighboring pixels centered at pixel P_{j,k}, obtained by applying the appropriate x or y direction Scharr kernel. The vcode or hcode of an image P is obtained by evaluating sign_code for every pixel of that image; every ROI is thus transformed into its corresponding vcode and hcode (as shown in Fig. 6). The basic assumption is that the pattern of edges within the eight neighborhood of any pixel does not change abruptly; hence only the sign of the derivative in the neighborhood is considered in a pixel's sign_code. This property ensures robustness of the proposed representation in illumination varying environments, as it uses only the sign of the local gradient.
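A minimal sketch of the SLG transformation under the above definitions, using OpenCV's Scharr operator; the function and file names are illustrative, not from the authors' implementation, and image borders are handled by wrap-around for brevity.

```python
import cv2
import numpy as np

def slg_code(roi, direction='v'):
    """SLG transform sketch: for every pixel, build an 8-bit code from the
    signs of the Scharr derivatives of its eight neighbours (Eq. (1))."""
    # x-derivatives give the vertical-edge response (vcode),
    # y-derivatives the horizontal-edge response (hcode).
    if direction == 'v':
        grad = cv2.Scharr(roi, cv2.CV_32F, 1, 0)  # d/dx
    else:
        grad = cv2.Scharr(roi, cv2.CV_32F, 0, 1)  # d/dy
    sign = (grad > 0).astype(np.uint8)            # 1 where gradient > 0

    code = np.zeros_like(sign)
    # Offsets of the eight neighbours, one per bit of sign_code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(sign, dy, axis=0), dx, axis=1)
        code |= (shifted << bit)
    return code

# Usage: palm ROIs are normalized to 100 x 100 before transformation.
roi = cv2.resize(cv2.imread('palm_roi.png', cv2.IMREAD_GRAYSCALE), (100, 100))
vcode, hcode = slg_code(roi, 'v'), slg_code(roi, 'h')
```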

Fig. 7. Sample images from all databases (five distinct users). (a) Casia A. (b) Casia B. (c) Casia C. (d) Casia D. (e) Casia E. (f) Polyu A. (g) Polyu B. (h) Polyu C. (i) Polyu D. (j) Polyu E. (k) Polyu A. (l) Polyu B. (m) Polyu C. (n) Polyu D. (o) Polyu E.


4.2. Feature extraction

Corners have high derivatives in two orthogonal directions; hence they can provide robust information for accurate tracking even in varying illumination conditions. The KLT operator [36] performs an eigen analysis of the 2 × 2 Hessian matrix M at every pixel, considering a local neighborhood. The matrix M can be defined for any pixel at the ith row and jth column of image I as

$$M(i,j) = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \qquad (2)$$

such that

$$A = \sum_{-K \le a,b \le K} w(a,b)\, I_x^2(i+a,\, j+b), \qquad B = \sum_{-K \le a,b \le K} w(a,b)\, I_x(i+a,\, j+b)\, I_y(i+a,\, j+b),$$
$$C = \sum_{-K \le a,b \le K} w(a,b)\, I_y(i+a,\, j+b)\, I_x(i+a,\, j+b), \qquad D = \sum_{-K \le a,b \le K} w(a,b)\, I_y^2(i+a,\, j+b) \qquad (3)$$

where w(a,b) is the weight given to the neighborhood, and I_x(i+a, j+b) and I_y(i+a, j+b) are the partial derivatives sampled within the (2K+1) × (2K+1) window centered at I(i,j). The matrix M has at most two eigenvalues λ1 and λ2 such that λ1 ≥ λ2. Like [36], all pixels having λ2 ≥ T (i.e. whose smaller eigenvalue is greater than a threshold) are considered as corner feature points.

4.3. Feature matching

The corner features from the vcode and hcode image representations are tracked using the proposed constrained version of LK tracking, which is bounded by statistical and geometrical constraints, to match the features of two biometric sample images.

4.3.1. LK tracking [20]

Let there be some feature at location (x, y) at time instant t with intensity I(x, y, t), and let this feature move to location (x + δx, y + δy) at time instant t + δt. LK tracking [20] makes use of three assumptions to estimate the optical flow.

1. Brightness consistency: It assumes that there is very little intensity change over a small δt:

$$I(x, y, t) \approx I(x + \delta x,\, y + \delta y,\, t + \delta t) \qquad (4)$$

2. Temporal persistence: It assumes that the feature movement is small for a small δt. The value of I(x + δx, y + δy, t + δt) is estimated by applying a Taylor series expansion to Eq. (4). Neglecting the higher order terms one obtains

$$I_x V_x + I_y V_y = -I_t \qquad (5)$$

where V_x, V_y are the components of the optical flow velocity for the feature at pixel I(x, y, t), and I_x, I_y and I_t are the local image derivatives in the corresponding directions.

3. Spatial coherency: The motion of any feature can be estimated by assuming a locally constant flow within a 5 × 5 neighborhood (i.e. 25 neighboring pixels P_1, P_2, ..., P_25) around that feature point. An overdetermined linear system of 25 equations (Eq. (6)) is obtained, which can be solved using the least squares method:

$$\underbrace{\begin{pmatrix} I_x(P_1) & I_y(P_1) \\ \vdots & \vdots \\ I_x(P_{25}) & I_y(P_{25}) \end{pmatrix}}_{C}\; \underbrace{\begin{pmatrix} V_x \\ V_y \end{pmatrix}}_{V} = \underbrace{\begin{pmatrix} -I_t(P_1) \\ \vdots \\ -I_t(P_{25}) \end{pmatrix}}_{-D} \qquad (6)$$

Here the rows of matrix C hold the derivatives of image I in the x and y directions, and those of D are the corresponding temporal derivatives at the 25 neighboring pixels. The least squares solution of the above system gives the required estimate of the optical flow vector $\hat{V}$ for the feature point, defined as

$$\hat{V} = (C^T C)^{-1} C^T (-D) \qquad (7)$$

The final location $\hat{F}$ of any feature point is estimated from its initial position vector $\hat{I}$ and estimated flow vector $\hat{V}$ as

$$\hat{F} = \hat{I} + \hat{V} \qquad (8)$$

Basis of the tracking based matching algorithm: The LK tracking algorithm is based on three assumptions, namely brightness consistency, temporal persistence and spatial coherency; hence its performance depends on how well these assumptions are satisfied. These constraints can safely be assumed to hold well for genuine matchings and poorly for the corresponding imposter matchings, so tracking performance should be good for genuine matchings and poor for imposter ones. This assumption is used to decide whether a matching is genuine or imposter, as sketched below.
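A minimal sketch of the corner extraction and tracking steps using OpenCV, assuming the 11 × 11 tracking window reported in Section 5.2.3; the filenames, corner count and quality level are illustrative assumptions, not the authors' settings.

```python
import cv2

# vcode_a and vcode_b are the SLG codes of the two samples being matched
# (the hcode pair is handled identically).
vcode_a = cv2.imread('vcode_a.png', cv2.IMREAD_GRAYSCALE)
vcode_b = cv2.imread('vcode_b.png', cv2.IMREAD_GRAYSCALE)

# KLT corners: pixels whose smaller structure-tensor eigenvalue exceeds a
# threshold (useHarrisDetector=False selects the Shi-Tomasi criterion).
corners = cv2.goodFeaturesToTrack(vcode_a, maxCorners=500,
                                  qualityLevel=0.01, minDistance=3,
                                  useHarrisDetector=False)

# Pyramidal LK optical flow estimates where each corner of A lands in B.
tracked, status, err = cv2.calcOpticalFlowPyrLK(
    vcode_a, vcode_b, corners, None, winSize=(11, 11))

# Per-corner optical flow vectors of the corners LK managed to track;
# these are then filtered by the three bounds of Section 4.3.2.
flow = (tracked - corners).reshape(-1, 2)[status.ravel() == 1]
```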

Table 2
Database specifications (L = LEFT Hand, R = RIGHT Hand, ALL = L + R).

Dataset           Traits/fusion specifications   Sub   Pos   Total
PALM (CASIA)      LEFT HAND                      290   8     2320
PALM (CASIA)      RIGHT HAND                     276   8     2208
PALM (CASIA)      FULL DB (LEFT + RIGHT)         566   8     4528
PALM (POLYU)      LEFT HAND                      193   20    3860
PALM (POLYU)      RIGHT HAND                     193   20    3860
PALM (POLYU)      FULL DB (LEFT + RIGHT)         386   20    7720
KNUCKLE (POLYU)   LEFT HAND                      330   12    3960
KNUCKLE (POLYU)   RIGHT HAND                     330   12    3960
KNUCKLE (POLYU)   FULL DB (LEFT + RIGHT)         660   12    7920

Dataset   Traits fused   Traits/fusion specifications                                         Sub   Pos   Total
A1        2              ALL.PALM(CASIA), ALL.KNUCKLE(POLYU)                                  566   8     2 × 4528 = 9056
A2        2              ALL.PALM(POLYU), ALL.KNUCKLE(POLYU)                                  386   12    2 × 4632 = 9264
A3        6              L.PALM(CASIA), R.PALM(CASIA), L.INDEXKNUCKLE(POLYU),
                         L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU),
                         R.MIDDLEKNUCKLE(POLYU)                                               165   8     6 × 1320 = 7920
A4        3              L.PALM(CASIA), R.PALM(CASIA), ALL.KNUCKLE(POLYU)                     276   8     3 × 2208 = 6624
A5        3              ALL.PALM(CASIA), L.KNUCKLE(POLYU)[LI+LM], R.KNUCKLE(POLYU)[RI+RM]    330   8     3 × 2640 = 7920
A6        6              L.PALM(POLYU), R.PALM(POLYU), L.INDEXKNUCKLE(POLYU),
                         L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU),
                         R.MIDDLEKNUCKLE(POLYU)                                               165   12    6 × 1980 = 11880
A7        3              L.PALM(POLYU), R.PALM(POLYU), ALL.KNUCKLE(POLYU)                     193   12    3 × 2316 = 6948
A8        3              ALL.PALM(POLYU), L.KNUCKLE(POLYU)[LI+LM], R.KNUCKLE(POLYU)[RI+RM]    330   12    3 × 3960 = 11880


4.3.2. Matching algorithm using LK tracking

Let A and B be two ROIs of either knuckleprint or palmprint that have to be matched, and let IvA, IvB, IhA, IhB be their corresponding vcodes and hcodes, respectively. Corner features in IvA obtained through the KLT corner detector are tracked in IvB, and those of IhA are tracked in IhB. A novel dissimilarity measure, HUF (highly uncorrelated features), is proposed; it estimates the tracking performance of the LK tracker by considering geometrical and statistical quantities computed for each tracked corner feature. Any tracked corner feature is assumed to be highly uncorrelated if any one of the following geometrical and statistical constraints is not strictly satisfied (a small sketch of these checks is given after this list):

- Vicinity bound: The Euclidean distance between any feature and its estimated tracked location should be at most an empirically selected threshold Tvb.
- Patch error bound: The tracking error, defined as the pixel-wise sum of absolute differences between a local patch centered at the feature and the patch at its estimated tracked location, should be at most an empirically selected threshold Tpeb.
- Correlation bound: The phase only correlation [21] between a local patch of size 7 × 7 centered at the feature and the patch at its estimated tracked location should be at least an empirically selected threshold Tcb.

Consistent optical flow: All properly tracked corners may not be true matches, because of noise, local non-rigid distortions, or even small differences between inter-class and intra-class matchings. It can be noted that truly matching corners have an optical flow that aligns with the actual affine transformation between the two images. Hence a false matching pair pruning step based on consistent optical flow is performed to retain only true matches. The estimated optical flow direction of each matched corner is quantized into eight directions at intervals of π/4. The most consistent of these eight quantized directions is selected as the one having the maximum number of successfully tracked features. Any corner matching outside the most consistent direction is considered a false match.
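A minimal sketch of the three bounds checked for a single tracked corner; the default thresholds are the empirically selected knuckleprint values reported in Section 5.2.3, and the plain (band-unlimited) phase only correlation used here is a simplifying assumption.

```python
import numpy as np

def poc_peak(patch_a, patch_b):
    # Peak of the phase only correlation surface of two equal-size patches.
    fa, fb = np.fft.fft2(patch_a), np.fft.fft2(patch_b)
    cross = fa * np.conj(fb)
    poc = np.fft.ifft2(cross / (np.abs(cross) + 1e-9))
    return float(np.real(poc).max())

def successfully_tracked(p, q, patch_p, patch_q,
                         T_vb=23, T_peb=2000, T_cb=0.4):
    """p is a corner (x, y) in image A, q its tracked estimate in image B;
    patch_p, patch_q are the 7 x 7 local patches around them."""
    vicinity = np.hypot(q[0] - p[0], q[1] - p[1]) <= T_vb          # vicinity bound
    patch_err = np.abs(patch_p.astype(np.int32)
                       - patch_q.astype(np.int32)).sum() <= T_peb  # patch error bound
    correlation = poc_peak(patch_p, patch_q) >= T_cb               # correlation bound
    return vicinity and patch_err and correlation
```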

Fig. 9. Parameterized analysis for knuckleprint left index validation dataset using only vcode (X-axis in log scale).

4.3.3. Proposed dissimilarity measure

A dissimilarity measure termed highly uncorrelated features (HUF) has been proposed to estimate the tracking performance. It computes the aforementioned geometrical and statistical quantities, defined above as the vicinity, patch error and correlation bounds. Given two vcodes IvA, IvB and two hcodes IhA, IhB, Algorithm 1 can be used to compare ROI A with ROI B using HUF. The vcodes IvA and IvB are matched against each other, as are the hcodes IhA and IhB. Finally HUF(A,B) is obtained by sum rule fusion of the horizontal and vertical code matching scores. Such a fusion is very useful and boosts the performance of the proposed system, because some images have more discrimination in the vertical direction while others have it in the horizontal direction. Any corner is considered to be tracked successfully if the vicinity, patch error and correlation bounds defined in Section 4.3.2 are satisfied. Out of all the successfully tracked corner pairs (i.e. stc^v_AB, stc^h_AB), those having an inconsistent optical flow are considered as false matchings and hence are pruned.

Fig. 8. Graph showing the genuine and imposter score distributions (here $\int_0^t s(x \mid H_G)\, dx$ represents the genuine similarity score distribution).

Fig. 10. Vertical and horizontal information fusion based performance boost-up for palm databases (X-axis in log scale). (a) CASIA Palmprint database. (b) PolyU Palmprint database.


The proposed HUF measure is calculated by taking the average of huf(A, B) and huf(B, A), in order to make the HUF(A, B) measure symmetric.

Algorithm 1. HUF(A, B).

Require:
A. The two vcodes I^v_A, I^v_B of the corresponding ROI images A, B respectively.
B. The two hcodes I^h_A, I^h_B of the corresponding ROI images A, B respectively.
C. N_va, N_vb, N_ha and N_hb, the numbers of corners in I^v_A, I^v_B, I^h_A and I^h_B respectively.
Ensure: Return the symmetric function HUF(A, B).
1: Track the corners of vcode I^v_A in vcode I^v_B and those of hcode I^h_A in hcode I^h_B.
2: Calculate the number of successfully tracked corners in the vcode tracking (i.e. stc^v_AB) and in the hcode tracking (i.e. stc^h_AB): those whose tracked position is within T_vb, whose local patch dissimilarity is under T_peb, and whose local patch correlation is more than T_cb.
3: Similarly calculate the successfully tracked corners of vcode I^v_B in vcode I^v_A (i.e. stc^v_BA) as well as of hcode I^h_B in hcode I^h_A (i.e. stc^h_BA).
4: Quantize the optical flow direction of each successfully tracked corner into eight directions (i.e. at intervals of π/4) and obtain four quantized optical flow direction histograms H^v_AB, H^h_AB, H^v_BA and H^h_BA from stc^v_AB, stc^h_AB, stc^v_BA and stc^h_BA respectively (each histogram is an array of 8 elements).
5: For each histogram, the bin (i.e. direction) having the maximum number of corners among the 8 bins is considered as the most consistent optical flow direction.
6: The maximum values obtained from the four histograms are termed corners having consistent optical flow, represented as cof^v_AB, cof^h_AB, cof^v_BA and cof^h_BA.
7: huf^v_AB = 1 − cof^v_AB / N_va;  huf^v_BA = 1 − cof^v_BA / N_vb;  [highly uncorrelated corners (vcode, forward/backward)]
8: huf^h_AB = 1 − cof^h_AB / N_ha;  [highly uncorrelated corners (hcode, forward)]
9: huf^h_BA = 1 − cof^h_BA / N_hb;  [highly uncorrelated corners (hcode, backward)]
10: return HUF(A, B) = (huf^v_AB + huf^h_AB + huf^v_BA + huf^h_BA) / 4;  [SUM RULE FUSION]
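A minimal NumPy sketch of steps 4–10 of Algorithm 1, assuming the optical flow vectors of the successfully tracked corners (those passing the three bounds of Section 4.3.2) have already been collected; all names are illustrative.

```python
import numpy as np

def huf_oneway(flows, n_corners):
    """One directed huf term: `flows` holds the optical flow vectors of
    the successfully tracked corners, `n_corners` the total corner count
    of the source code image."""
    if len(flows) == 0:
        return 1.0                      # nothing tracked: maximal dissimilarity
    # Quantize flow directions into eight pi/4 bins and histogram them.
    angles = np.arctan2(flows[:, 1], flows[:, 0])            # in (-pi, pi]
    bins = np.floor((angles + np.pi) / (np.pi / 4)).astype(int) % 8
    hist = np.bincount(bins, minlength=8)
    cof = hist.max()                    # corners with consistent optical flow
    return 1.0 - cof / n_corners

def huf(flows_v_ab, n_va, flows_v_ba, n_vb,
        flows_h_ab, n_ha, flows_h_ba, n_hb):
    # Symmetric HUF(A, B): SUM-rule fusion of the four directed terms
    # (vcode/hcode, forward/backward), as in step 10 of Algorithm 1.
    return (huf_oneway(flows_v_ab, n_va) + huf_oneway(flows_v_ba, n_vb) +
            huf_oneway(flows_h_ab, n_ha) + huf_oneway(flows_h_ba, n_hb)) / 4.0
```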

5. Experimental results

5.1. Databases

The proposed system is tested on two publicly available benchmark palmprint databases, CASIA [2] and PolyU [3], and on the largest publicly available PolyU knuckleprint database [1].

5.1.1. Unimodal databases

The CASIA palmprint database has 5502 palmprint images taken from 312 subjects (i.e. 624 distinct palms). From each subject eight images are collected from both hands; the first four images are taken for training and the rest are kept for testing. The PolyU palmprint database has 7752 palmprint images of 193 subjects (i.e. 386 distinct palms). From each subject, 20 images are collected from both hands in two sessions (10 images per session); first session images are taken for training and second session images are kept for testing. The PolyU knuckleprint database consists of 7920 knuckleprint images taken from 165 subjects. Each subject has given six knuckleprint images of the left index (LI), left middle (LM), right index (RI) and right middle (RM) fingers in each of two sessions (i.e. 660 distinct knuckles). There are some palms in both the CASIA

Table 3
Testing strategy specifications (L = LEFT Hand, R = RIGHT Hand, ALL = L + R).

Dataset           Traits/fusion specifications   Training images   Testing images    Genuine matches   Imposter matches
PALM (CASIA)      LEFT HAND                      290 × 4 = 1160    290 × 4 = 1160    4640              1340960
PALM (CASIA)      RIGHT HAND                     276 × 4 = 1104    276 × 4 = 1104    4416              1214400
PALM (CASIA)      FULL DB (LEFT + RIGHT)         566 × 4 = 2264    566 × 4 = 2264    9056              5116640
PALM (POLYU)      LEFT HAND                      193 × 10 = 1930   193 × 10 = 1930   19300             3705600
PALM (POLYU)      RIGHT HAND                     193 × 10 = 1930   193 × 10 = 1930   19300             3705600
PALM (POLYU)      FULL DB (LEFT + RIGHT)         386 × 10 = 3860   386 × 10 = 3860   38600             14861000
KNUCKLE (POLYU)   LEFT HAND                      330 × 6 = 1980    330 × 6 = 1980    11880             3908520
KNUCKLE (POLYU)   RIGHT HAND                     330 × 6 = 1980    330 × 6 = 1980    11880             3908520
KNUCKLE (POLYU)   FULL DB (LEFT + RIGHT)         660 × 6 = 3960    660 × 6 = 3960    23760             15681600

Dataset   Traits fused   Traits fused (specification)                                         Training images   Testing images    Genuine matches   Imposter matches
A1        2              ALL.PALM(CASIA), ALL.KNUCKLE(POLYU)                                  566 × 4 = 2264    566 × 4 = 2264    9056              5116640
A2        2              ALL.PALM(POLYU), ALL.KNUCKLE(POLYU)                                  386 × 6 = 2316    386 × 6 = 2316    13896             5349960
A3        6              L.PALM(CASIA), R.PALM(CASIA), L.INDEXKNUCKLE(POLYU),
                         L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU),
                         R.MIDDLEKNUCKLE(POLYU)                                               165 × 4 = 660     165 × 4 = 660     2640              432960
A4        3              L.PALM(CASIA), R.PALM(CASIA), ALL.KNUCKLE(POLYU)                     276 × 4 = 1104    276 × 4 = 1104    4416              1214400
A5        3              ALL.PALM(CASIA), L.KNUCKLE(POLYU)[LI+LM], R.KNUCKLE(POLYU)[RI+RM]    330 × 4 = 1320    330 × 4 = 1320    5280              1737120
A6        6              L.PALM(POLYU), R.PALM(POLYU), L.INDEXKNUCKLE(POLYU),
                         L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU),
                         R.MIDDLEKNUCKLE(POLYU)                                               165 × 6 = 990     165 × 6 = 990     5940              974160
A7        3              L.PALM(POLYU), R.PALM(POLYU), ALL.KNUCKLE(POLYU)                     193 × 6 = 1158    193 × 6 = 1158    6948              1334016
A8        3              ALL.PALM(POLYU), L.KNUCKLE(POLYU)[LI+LM], R.KNUCKLE(POLYU)[RI+RM]    330 × 6 = 1980    330 × 6 = 1980    11880             3908520


and PolyU databases with incomplete or missing data; such palms are discarded in this experiment. Sample images from each database are shown in Fig. 7, and the detailed database specifications are given in Table 2.

5.1.2. Chimeric multi-modal databases

Eight multi-modal chimeric databases (namely A1 to A8), consisting of palmprint and knuckleprint images, are self-constructed as described below (detailed specifications are given in Table 2). In A1 and A2 only two modalities are fused, in A4, A5, A7 and A8 three modalities are fused, and in A3 and A6 six modalities are fused, so as to analyze the resulting performance boost. Multiple sets of chimeric multi-modal databases are created to analyze the system performance as the number of fused traits increases.

A1 and A2: The A1 dataset is created by considering the 566 palms of the CASIA database along with the first 566 knuckles (out of 660) of the PolyU knuckleprint database. The CASIA palm database has eight images per palm; hence, to make the A1 dataset consistent, the first and last four images (i.e. eight out of the 12 images per knuckle) of the PolyU knuckleprint database are considered, so as to ensure inter-session matching. The A1 dataset therefore contains 4528 palmprint and 4528 knuckleprint images collected from 566 distinct palms and knuckles, with eight palmprint as well as eight knuckleprint images per subject. The first four images per trait of every subject are taken as gallery images and the rest as probe images. In a similar way, the A2 dataset is constructed by considering all 386 palms of the PolyU palm database and the first 386 (out of 660) knuckles of the PolyU knuckle database. All 12 knuckle images are considered, along with only the first and last six palm images (i.e. 12 out of 20) per subject, as the gallery and probe images of each trait, respectively, in order to maintain consistency.

A3 and A6: The datasets A3 and A6 are constructed by considering all six distinct traits per person, namely the left and right palms along with the left index, left middle, right index and right middle knuckleprints, where the palmprint images are taken from the CASIA and PolyU palmprint databases, respectively. All four distinct knuckles of the 165 subjects of the knuckleprint database are used, along with only the first 165 subjects of the CASIA and PolyU palm databases, to obtain A3 and A6, respectively. The first and last four images are taken as training and testing images, respectively, for the A3 database, while for the A6 database the first and last six images are taken as training and testing images, respectively.

A4 and A7: Similarly, the datasets A4 and A7 are constructed by considering three distinct traits per person, namely the left and right palms from the CASIA and PolyU palmprint databases, respectively, along with the knuckleprint data of the first 276 and 193 subjects (out of 660), respectively. The first and last four images are taken as training and testing images, respectively, for the A4 database, while for the A7 database the first and last six images are taken as training and testing images, respectively.

A5 and A8: The datasets A5 and A8 also consider three distinct traits, but this time all 330 knuckles from the left and right hands are used along with the palmprint data of the first 330 palms of the CASIA and PolyU palmprint databases, respectively. The first and last four images are taken as training and testing images, respectively, for the A5 database, while for the A8 database the first and last six images are taken as training and testing images, respectively.

5.2. Performance analysis

The performance of the system is measured using the correct recognition rate (CRR) for identification and the equal error rate (EER) for verification. Apart from these, several other performance measures defined in the literature are used.

5.2.1. Performance parameters

The performance measures used to analyze the proposed multimodal personal authentication system in this work are defined below.

1. CRR: The CRR (i.e. the rank-1 accuracy) of a system is defined as the ratio of the number of correct (non-false) top matches of the query ROIs to the total number of ROIs in the query set.

2. FAR/FRR: At a given threshold, the probability of accepting an impostor is known as the false acceptance rate (FAR), and the probability of rejecting a genuine user is known as the false rejection rate (FRR), as shown in Fig. 8.

Fig. 11. Vertical and horizontal information fusion based performance boost-up for knuckle databases (X-axis in log scale). (a) PolyU Knuckleprint finger-wise. (b) PolyU Knuckleprint hand-wise and ALL.

Table 4
Performance of other systems.

Approach                      Database          CRR %    EER %
PalmCode [44]                 Palm (CASIA)      99.62    3.67
PalmCode [44]                 Palm (PolyU)      99.92    0.53
CompCode [14]                 Palm (CASIA)      99.72    2.01
CompCode [14]                 Palm (PolyU)      99.96    0.31
OrdinalCode [39]              Palm (CASIA)      99.84    1.75
OrdinalCode [39]              Palm (PolyU)      100.00   0.08
BLPOC [45]                    Knuckle (PolyU)   –        1.67
ImCompcode and MagCode [46]   Knuckle (PolyU)   –        1.48
LGIP [47]                     Knuckle (PolyU)   –        0.40


3. EER: The equal error rate (EER) is the value of the FAR at which the FAR and FRR are equal:

$$EER = \{FAR \mid FAR = FRR\} \qquad (9)$$

4. Receiver operating characteristic (ROC) curve: This is a graph of FAR against FRR used to analyze their relative behavior, as shown in Fig. 10. It quantifies the discriminative power of the system over genuine and imposter matching scores, and provides a good way to compare the performance of any two biometric systems. An ideal ROC curve would pass through the origin (i.e. FRR = 0, FAR = 0); hence the lower the ROC curve (towards both coordinate axes), the better the system.

5. Decidability index: It measures the separability between imposter and genuine matching scores and is defined as

$$d' = \frac{|\mu_G - \mu_I|}{\sqrt{\dfrac{\sigma_G^2 + \sigma_I^2}{2}}} \qquad (10)$$

where μG and μI are the means, and σG and σI the standard deviations, of the genuine and imposter scores, respectively. (A minimal code sketch of the EER and d′ computations is given at the end of this subsection.)

In all the graphs shown in this paper, "vcode" denotes results obtained using only vcode matching, and a similar notation is used for "hcode" and "fusion". In every ROC graph the x-axis, labeled FRR, is plotted on a log scale so as to show the variations in the graph effectively.

5.2.2. Testing strategy

The proposed system is tested on the aforementioned databases using an inter-session matching based testing strategy, defined as follows. For each database only inter-session matchings are performed, ensuring that the two images participating in a match are temporally distant. The detailed testing specifications for all databases are given in Table 3. A matching is termed genuine if both constituting images belong to the same subject; otherwise it is termed an imposter matching. The number of imposter matchings considered for the performance analysis of the proposed system ranges from half a million to 16 million, along with genuine matchings ranging from 3 to 40 thousand. One can observe that a huge number of matchings are considered in order to analyze and compute the performance parameters of the proposed system. This testing strategy is adopted because most of the state-of-the-art palmprint and knuckleprint systems use the same strategy for the performance analysis of their systems.
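A minimal sketch of the EER (Eq. (9)) and decidability index (Eq. (10)) computations on arrays of genuine and imposter dissimilarity scores; the threshold sweep is a straightforward assumption about how the operating points are generated.

```python
import numpy as np

def eer(genuine, imposter):
    """Estimate the EER for a dissimilarity score: sweep thresholds and
    return the FAR at the point where FAR and FRR are closest."""
    thresholds = np.unique(np.concatenate([genuine, imposter]))
    far = np.array([(imposter <= t).mean() for t in thresholds])  # accepted imposters
    frr = np.array([(genuine > t).mean() for t in thresholds])    # rejected genuines
    i = np.argmin(np.abs(far - frr))
    return far[i]

def decidability_index(genuine, imposter):
    """Decidability index d' of Eq. (10)."""
    mu_g, mu_i = genuine.mean(), imposter.mean()
    var_g, var_i = genuine.var(), imposter.var()
    return abs(mu_g - mu_i) / np.sqrt((var_g + var_i) / 2.0)
```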

5.2.3. Parameterized analysis

The proposed HUF measure quantifies the dissimilarity between any two query ROIs. It is primarily parameterized by the three thresholds Tvb, Tpeb and Tcb defined in Section 4.3.2. The Tvb bound depends on the amount of expected translation between two sample images, while Tpeb depends on the texture based variation in intra-class matchings. The Tcb bound regulates the amount of patch-wise correlation between two matched features. The proposed multimodal system is tested by estimating these parameters empirically so as to maximize the performance (in terms of CRR and EER) over a small validation set for each database. In Fig. 9, ROC curves for different parameter sets are shown for the knuckleprint left index validation set consisting of only the first 50 subjects. The parameter values for which the system performs best on the knuckleprint database are Tpeb = 2000, Tvb = 23 and Tcb = 0.4, while for the palmprint databases they are Tpeb = 750, Tvb = 18 and Tcb = 0.4, with tracking windows of size 11 × 11 and 5 × 5.

5.2.4. Performance boost by fusing vcode and hcode

Figs. 10 and 11 show the performance boost achieved by fusing the vcode and hcode information over the palmprint and knuckleprint unimodal databases, respectively.

Palmprint CASIA database: In Fig. 10(a) the ROC characteristics of all categories of the CASIA palmprint database are shown, adopting the testing strategy described in Section 5.2.2; Table 3 gives the total numbers of genuine and imposter matchings considered. In terms of EER, the proposed system performs equally well over all three subsets and shows a huge performance boost after vcode + hcode fusion, as suggested in Fig. 10(a).

Palmprint PolyU database: In Fig. 10(b) the ROC characteristics of the various PolyU palmprint subsets are shown. In terms of EER, the overall performance is not as good as on the CASIA palmprint database, because our ROI cropping algorithm fails for some subjects and the PolyU database is subject-wise much bigger than CASIA. Between the left and right hand palm data, the right palm performance is observed to be better, mainly because most of the

Table 5 Consolidated results (L ¼LEFT Hand, R¼ RIGHT Hand, ALL¼ L þR). 0

Traits/fusion specifications

d

Dataset PALM (CASIA) PALM (CASIA) PALM (CASIA) PALM (POLYU) PALM (POLYU) PALM (POLYU) KNUCKLE (POLYU) KNUCKLE (POLYU) KNUCKLE (POLYU)

Unimodal LEFT HAND RIGHT HAND FULL DB (LEFT þ RIGHT) LEFT HAND RIGHT HAND FULL DB (LEFT þ RIGHT) LEFT HAND RIGHT HAND FULL DB (LEFT þ RIGHT)

2.57 99.91 2.53 100 2.30 99.96 1.72 99.89 1.79 100 1.71 99.95 2.07 99.40 2.06 99.55 2.08 99.41

0.30 0.34 0.29 2.09 1.34 1.5 3.22 3.22 3.06

Dataset Traits fused

Multimodal

A1 A2 A3

2 2 6

2.78 100 2.43 100 4.62 100

0.02 0.12 0.0

A4 A5 A6

3 3 6

3.90 100 3.39 100 2.89 100

0.0 0.0 0.02

A7 A8

3 3

ALL.PALM(CASIA), ALL.KNUCKLE(POLYU) ALL.PALM(POLYU), ALL.KNUCKLE(POLYU) L.PALM(CASIA), R.PALM(CASIA), L.INDEXKNUCKLE(POLYU), L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU), R. MIDDLEKNUCKLE(POLYU) L.PALM(CASIA), R.PALM(CASIA), ALL.KNUCKLE(POLYU) ALL.PALM(CASIA), L.KUNCKLE(POLYU)[LI þ LM], R.KUNCKLE(POLYU)[RI þ RM] L.PALM(POLYU), R.PALM(POLYU), L.INDEXKNUCKLE(POLYU), L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU), R. MIDDLEKNUCKLE(POLYU) L.PALM(POLYU), R.PALM(POLYU), ALL.KNUCKLE(POLYU) ALL.PALM(POLYU), L.KNUCKLE(POLYU)[LIþ LM], R.KUNCKLE(POLYU)[RI þRM]

2.95 100 2.53 100

0.0 0.04

CRR

EER


subjects are right-handed and hence feel comfortable while giving that data. The huge performance boost after fusion can be seen in Fig. 10(b).

Knuckleprint PolyU database: In Fig. 11(a) and (b) the finger-wise as well as hand-wise ROC characteristics of the PolyU knuckleprint database are presented; Table 3 gives the total

Fig. 12. Genuine vs. imposter best matching score separation graph for all the multi-modal databases (A1–A8).


numbers of genuine and imposter matchings considered. From the ROC curves one can see that, among the four fingers, the right hand fingers (RI, RM) perform slightly better than the left ones after fusion (i.e. vcode + hcode), mainly because of the majority of right-handed users. The huge performance boost after fusion is evident in Fig. 11(a) and (b).

The overall experimental analysis reveals that the vcode is more discriminative than the hcode, mainly because both palm and knuckle have mostly vertical edges. But in matchings where the vcode fails to discriminate, the hcode information can be used to enhance the system performance, as it can reduce the number of falsely accepted imposters. In both Figs. 10 and 11, the ROC curves for vcode-only recognition and for the fusion of vcode and hcode are shown for each palmprint and knuckleprint database, respectively. For all three databases, namely the PolyU knuckleprint [1], PolyU palmprint [3] and CASIA palmprint [2] databases, one can observe a significant performance boost after fusing the vcode and hcode scores. The analysis clearly suggests that fusion can effectively reduce the false acceptance rate and hence significantly enhance the system performance.

5.2.5. Performance analysis over multi-modal datasets

The proposed system has been compared with state-of-the-art unimodal palmprint and knuckleprint systems [14,39,44–47], as presented in Table 4. All the results obtained for the unimodal and multi-modal systems are presented in Table 5. The proposed multimodal system performs much better than the unimodal systems, which clearly justifies the fusion proposal. The CRR of the proposed multi-modal system is 100.00%, with an EER of less than 0.01% over all eight multi-modal databases, which is much better than the corresponding unimodal systems. The performance of the proposed system is also observed to be better than other state-of-the-art multimodal biometric systems [38,32,48], mainly because they fuse face and iris, whose performance is restricted primarily by several face specific issues and challenges (such as pose, expression, aging and illumination). In this work the palmprint and knuckleprint traits are fused, as both carry a huge amount of unique and discriminative texture information and so complement each other. Also, both are hand based, which eases data acquisition, reduces the required user cooperation and increases user acceptance and data quality.

The EER of the proposed multi-modal system over all eight multi-modal databases (A1 to A8) is found to be either zero or very low, as given in Table 5. Hence the ROC curves are close to perfect and all look similar to each other. For such a highly accurate system, the decidability index d′ and the genuine vs. imposter best matching score separation graphs are generally used to draw any significant conclusion.

Decidability index (d′): The decidability index d′ measures the separability of all genuine and imposter matching scores; a higher d′ signifies better separation and hence superior performance. From Table 5 it can be observed that the d′ value of every multi-modal database is more than 2.4, and for some databases (chimeric databases fusing more than two traits) it is as high as 4.6, which is supposed to be very good.

Genuine vs. imposter best match graph: The genuine vs. imposter best matching score separation graph plots the best genuine and the best imposter score for every probe image, so that their separation can be analyzed visually. Such graphs for all the multi-modal databases (A1 to A8) are shown in Fig. 12. From Fig. 12(b) it can be observed that only for one or two probe images are the genuine and imposter scores comparable. For all the remaining chimeric multi-modal databases one can observe a clear-cut and discriminative separation between the genuine


and imposter best matchings for all probe images; hence the overall performance of the proposed system can be concluded to be very good.

6. Conclusion

This paper presents a multi-modal fusion based biometric system that uses the highly uncorrelated features (HUF) dissimilarity measure to compare structural features of palmprint and knuckleprint images. Robustness against varying illumination is achieved by working over edge-maps. The ROI images are transformed using the sign of local gradient (SLG) method, which works over the edge-map to obtain the more discriminative vcode and hcode representations. Corner features are extracted from the vcode and hcode and are tracked in the corresponding vcode or hcode using the LK tracking algorithm. The HUF measure compares two ROIs by estimating the tracking performance. The proposed system is tested on the two publicly available PolyU and CASIA palmprint databases and the largest available PolyU knuckleprint database, along with eight self-created multi-modal databases. The proposed multi-modal system achieves a CRR of 100% with an EER as low as 0.01% over all databases. It also shows robustness against illumination variation, owing to the SLG based transformation, and against rotation and translation, by virtue of LK tracking constrained by the three statistical and geometrical bounds Tvb, Tpeb and Tcb. The proposed system has been compared with well-known unimodal authentication systems [14,39,44,47]. The unimodal results over the CASIA palmprint database are much better than the state of the art, while those over the PolyU palmprint and knuckleprint databases are weaker, mainly because the focus of this work was not to optimize the individual traits but to reinforce the belief that very high performance can be achieved easily by fusing orthogonal multiple traits, instead of rigorously fine tuning and parameterizing unimodal systems. A major inference drawn from this experimental work is that it is very hard to achieve perfect ROC behavior using only one trait, but it can be achieved using two or more traits. A newer biometric trait like knuckleprint can be used to boost the discriminative power of palmprint features so that the system performs almost perfectly.

References

[1] Knuckleprint PolyU. 〈http://www4.comp.polyu.edu.hk/biometrics/FKP.htm〉.
[2] Palmprint CASIA. 〈http://www.cbsr.ia.ac.cn〉.
[3] Palmprint PolyU. 〈http://www.comp.polyu.edu.hk/biometrics〉.
[4] A.J. Mhatre, S. Palla, S. Chikkerur, V. Govindaraju, Efficient search and retrieval in biometric databases, Proc. SPIE 5779 (2005) 265–273.
[5] G.S. Badrinath, P. Gupta, Palmprint based recognition system using phase-difference information, Future Gener. Comput. Syst. (2014), in press.
[6] G.S. Badrinath, P. Gupta, Stockwell transform based palm-print recognition, Appl. Soft Comput. 11 (7) (2011) 4267–4281.
[7] G.S. Badrinath, N.K. Kachhi, P. Gupta, Verification system robust to occlusion using low-order Zernike moments of palmprint sub-images, Telecommun. Syst. 47 (3–4) (2011) 275–290.
[8] G.S. Badrinath, K. Tiwari, P. Gupta, An efficient palmprint based recognition system using 1D-DCT features, in: International Conference on Intelligent Computing, vol. 1, 2012, pp. 594–601.
[9] R. Cappelli, M. Ferrara, D. Maltoni, Minutia cylinder-code: a new representation and matching technique for fingerprint recognition, IEEE Trans. Pattern Anal. Mach. Intell. 32 (12) (2010) 2128–2141.
[10] R. Connaughton, K. Bowyer, P. Flynn, Fusion of face and iris biometrics, in: M.J. Burge, K.W. Bowyer (Eds.), Handbook of Iris Recognition, Advances in Computer Vision and Pattern Recognition, Springer, London, 2013, pp. 219–237.
[11] J. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Trans. Pattern Anal. Mach. Intell. 15 (11) (1993) 1148–1161.
[12] N. Duta, A.K. Jain, K.V. Mardia, Matching of palmprints, Pattern Recognit. Lett. 23 (4) (2002) 477–485.
[13] C.C. Han, H.L. Cheng, C.L. Lin, K.C. Fan, Personal authentication using palm-print features, Pattern Recognit. 36 (2) (2003) 371–381.
[14] A.W.K. Kong, D. Zhang, Competitive coding scheme for palmprint verification, in: International Conference on Pattern Recognition, vol. 1, 2004, pp. 520–523.


[15] A.W.K. Kong, D. Zhang, Feature-level fusion for effective palmprint authentication, in: International Conference on Biometric Authentication, 2004, pp. 761–767.
[16] A.W.K. Kong, D. Zhang, G. Lu, A study of identical twins palmprints for personal verification, Pattern Recognit. 39 (11) (2006) 2149–2156.
[17] A. Kumar, C. Ravikanth, Personal authentication using finger knuckle surface, IEEE Trans. Inf. Forensics Secur. 4 (1) (2009) 98–110.
[18] A. Kumar, D. Zhang, Personal recognition using hand shape and texture, IEEE Trans. Image Process. 15 (8) (2006) 2454–2461.
[19] G. Lu, D. Zhang, K. Wang, Palmprint recognition using eigenpalms features, Pattern Recognit. Lett. 24 (9–10) (2003) 1463–1467.
[20] B.D. Lucas, T. Kanade, An iterative image registration technique with an application to stereo vision, in: International Joint Conference on Artificial Intelligence, 1981, pp. 674–679.
[21] A. Meraoumia, S. Chitroub, A. Bouridane, Fusion of finger-knuckle-print and palmprint for an efficient multi-biometric system of person recognition, in: IEEE International Conference on Communications, 2011.
[22] A. Meraoumia, S. Chitroub, A. Bouridane, Palmprint and finger-knuckle-print for efficient person recognition based on log-Gabor filter response, Analog Integr. Circuits Signal Process. 69 (2011) 17–27.
[23] G.K.O. Michael, T. Connie, A.T.B. Jin, An innovative contactless palm print and knuckle print recognition system, Pattern Recognit. Lett. 31 (12) (2010) 1708–1719.
[24] A. Morales, C. Travieso, M. Ferrer, J. Alonso, Improved finger-knuckle-print authentication based on orientation enhancement, Electron. Lett. 47 (6) (2011) 380–381.
[25] L. Nanni, A. Lumini, A multi-matcher system based on knuckle-based features, Neural Comput. Appl. 18 (2009) 87–91.
[26] A. Nigam, P. Gupta, A new distance measure for face recognition system, in: International Conference on Image and Graphics (ICIG), 2009, pp. 696–701.
[27] A. Nigam, P. Gupta, Comparing human faces using edge weighted dissimilarity measure, in: International Conference on Control, Automation, Robotics and Vision (ICARCV), 2010, pp. 1831–1836.
[28] A. Nigam, P. Gupta, Finger knuckleprint based recognition system using feature tracking, in: Chinese Conference on Biometric Recognition, 2011, pp. 125–132.
[29] A. Nigam, P. Gupta, Iris recognition using consistent corner optical flow, in: Asian Conference on Computer Vision, vol. 1, 2012, pp. 358–369.
[30] A. Nigam, P. Gupta, Quality assessment of knuckleprint biometric images, in: 20th IEEE International Conference on Image Processing (ICIP), September 2013, pp. 4205–4209.
[31] L. Qing Zhu, S. Yuan Zhang, Multimodal biometric identification system based on finger geometry, knuckle print and palm print, Pattern Recognit. Lett. 31 (12) (2010) 1641–1649.
[32] A. Rattani, M. Tistarelli, Robust multi-modal and multi-unit feature level fusion of face and iris biometrics, in: M. Tistarelli, M. Nixon (Eds.), Advances in Biometrics, Lecture Notes in Computer Science, vol. 5558, Springer, Berlin, Heidelberg, 2009, pp. 960–969.
[33] S. Ribaric, I. Fratric, A biometric identification system based on eigenpalm and eigenfinger features, IEEE Trans. Pattern Anal. Mach. Intell. 27 (11) (2005) 1698–1709.
[34] H. Scharr, Optimal operators in digital image processing (Ph.D. thesis), 2000.
[35] L. Shang, D.-S. Huang, J.-X. Du, C.-H. Zheng, Palmprint recognition using FastICA algorithm and radial basis probabilistic neural network, Neurocomputing 69 (13–15) (2006) 1782–1786.
[36] J. Shi, C. Tomasi, Good features to track, in: Computer Vision and Pattern Recognition, 1994, pp. 593–600.
[37] N. Singh, A. Nigam, P. Gupta, P. Gupta, Four slap fingerprint segmentation, in: International Conference on Intelligent Computing (ICIC), 2012, pp. 664–671.
[38] B. Son, Y. Lee, Biometric authentication system using reduced joint feature vector of iris and face, in: T. Kanade, A. Jain, N. Ratha (Eds.), Audio- and Video-Based Biometric Person Authentication, Lecture Notes in Computer Science, vol. 3546, Springer, Berlin, Heidelberg, 2005, pp. 513–522.
[39] Z. Sun, T. Tan, Y. Wang, S.Z. Li, Ordinal palmprint representation for personal identification, in: Conference on Computer Vision and Pattern Recognition, vol. 1, 2005, pp. 279–284.
[40] M. Turk, A.P. Pentland, Eigenfaces for recognition, J. Cognit. Neurosci. 3 (1) (1991) 71–86.
[41] L. Wiskott, J.-M. Fellous, N. Krüger, C. von der Malsburg, Face recognition by elastic bunch graph matching, IEEE Trans. Pattern Anal. Mach. Intell. 19 (7) (1997) 775–779.
[42] X. Wu, D. Zhang, K. Wang, Fisherpalms based palmprint recognition, Pattern Recognit. Lett. 24 (15) (2003) 2829–2838.
[43] M. Xiong, W. Yang, C. Sun, Finger-knuckle-print recognition using LGBP, in: International Symposium on Neural Networks (ISNN), 2011, pp. 270–277.
[44] D. Zhang, W.K. Kong, J. You, M. Wong, Online palmprint identification, IEEE Trans. Pattern Anal. Mach. Intell. 25 (9) (2003) 1041–1050.
[45] L. Zhang, L. Zhang, D. Zhang, Finger-knuckle-print verification based on band-limited phase-only correlation, in: International Conference on Computer Analysis of Images and Patterns (CAIP), 2009, pp. 141–148.
[46] L. Zhang, L. Zhang, D. Zhang, H. Zhu, Online finger-knuckle-print verification for personal authentication, Pattern Recognit. 43 (7) (2010) 2560–2571.
[47] L. Zhang, L. Zhang, D. Zhang, H. Zhu, Ensemble of local and global information for finger-knuckle-print recognition, Pattern Recognit. 44 (9) (2011) 1990–1998.
[48] Z. Zhang, R. Wang, K. Pan, S. Li, P. Zhang, Fusion of near infrared face and iris biometrics, in: S.-W. Lee, S. Li (Eds.), Advances in Biometrics, Lecture Notes in Computer Science, vol. 4642, Springer, Berlin, Heidelberg, 2007, pp. 172–180.

Aditya Nigam received the M.Tech. degree in Computer Science and Engineering from the Indian Institute of Technology Kanpur, India, in 2009. He is currently a Ph.D. student in the Department of Computer Science and Engineering at the Indian Institute of Technology Kanpur, India. His research interests include biometrics, pattern recognition, computer vision and image processing.

Phalguni Gupta received his Ph.D. from IIT Kharagpur and started his career in 1983 at the Space Applications Centre (ISRO), Ahmedabad, India, as a Scientist. In 1987, he joined the Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, India, where he is currently a Professor. He works in the fields of data structures, sequential algorithms, parallel algorithms, on-line algorithms, image analysis and biometrics. He has published more than 350 papers in international journals and conferences, and has completed several sponsored and consultancy projects funded by the Government of India, in areas including biometrics, system solvers, grid computing, image processing, mobile computing and network flow. During this period he has proved himself a well-known researcher in theoretical computer science, especially in the field of biometrics.