Palmprint Recognition with Local Micro-structure Tetra Pattern

Gen Li, Jaihie Kim*

School of Electrical and Electronic Engineering, Yonsei University, Seoul 120-749, Republic of Korea
E-mail: {leegeun, jhkim}@yonsei.ac.kr

Abstract

Palmprint-based biometric solutions have been studied extensively in both controlled and uncontrolled environments. However, the majority of existing methods do not reliably handle translation, rotation, and blurring of the palm within the range of acceptable tolerance, which largely degrades performance. This paper therefore presents a new local descriptor, the Local Micro-structure Tetra Pattern (LMTrP), and its application to palmprint recognition. The proposed descriptor takes advantage of the local descriptors' direction as well as thickness. The palmprint image is first filtered with a line-shaped filter to effectively eliminate unstable features. Local region histograms of LMTrP are then extracted and concatenated into one feature vector to represent the given image. Finally, kernel linear discriminant analysis is applied to the feature vector for dimension reduction. The experimental results indicate that the proposed methods significantly outperform state-of-the-art methods without the need to align the palmprint images.
Keywords: Biometrics, Feature Extraction, Local Descriptor, Palmprint Recognition, Subspace Learning, Local Tetra Pattern
*Corresponding author. Tel.: +82 2 2123 2869; fax: +82 2 2312 4584.
E-mail addresses: [email protected] (G. Li), [email protected] (J. Kim).
Postal address: #619, Engineering Hall 2, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, Korea.
1. Introduction

Human palmprint-based personal authentication has drawn increasing attention in the field of biometrics [1]. The palmprint refers to the inner surface of the skin between the wrist and the fingers of the hand, and it contains structurally distinctive features such as principal lines, wrinkles, creases, ridges, minutiae points, and texture patterns [2]. These features possess discriminative traits and are well suited for distinguishing one individual from another. Even monozygotic twins, who share the same genetic information, have distinguishable palmprint patterns [3]. Furthermore, compared with other traditional biometric modalities (e.g., face, iris, and fingerprint), the palmprint has the prominent advantages of low computational cost, high accuracy, and user-friendliness.
Numerous studies of palmprint recognition have been reported in the biometric literature, and comprehensive surveys of recent developments can be found in [4-6]. The existing approaches broadly fall into three main categories: holistic-based, structural-based, and hybrid-based. The holistic or global feature approach uses the whole palmprint image as a feature set in conjunction with popular statistical techniques such as Principal Component Analysis (PCA) [7], Linear Discriminant Analysis (LDA) [8], Independent Component Analysis (ICA) [9-10], and Kernel Fisher Discriminant Analysis (KFDA) [11]. In addition, well-known 2D matrix-based subspace analyses have also been used for palmprint feature representation, including 2DPCA, 2DLDA [12], 2DPCA+PCA [13], and 2D locality preserving projection (2DLPP) [14]. Furthermore, in order to enhance the discriminative capability, the Fourier Transform [15-16], Discrete Cosine Transform (DCT) [17], Wavelet Transform [18], and Gabor Transform [19-20] have been applied to palmprint images along with the aforementioned statistical techniques to extract palm features. However, these approaches require well-aligned palmprint images and are very sensitive to variations in illumination, distortion, translation, and rotation.
In contrast, the structural or local feature approaches use stable palmprint features such as palm lines and texture. These approaches can be further divided into three main categories: (1) line based, (2) coding based, and (3) texture based. The first category extracts the palm lines with line detection algorithms. Han et al. extracted line-like features by applying Sobel and morphological operations to the palmprint image [21]. Wu et al. designed directional line detectors to extract palm lines along specific directions [22]. Huang et al. proposed a principal line detector based on the Modified Finite Radon Transform (MFRAT) [23]. However, these methods are computationally expensive for palmprint feature extraction and matching, and stable palm lines are occasionally very difficult to extract. The second category encodes the palmprint features into bitwise codes using the responses of a bank of phase or directional filters. Well-known representative methods include PalmCode [24], Competitive Code (Comp Code) [25], Ordinal Code [26], and Fusion Code [27]. In addition, Jia et al. proposed the Robust Line Orientation Code (RLOC), which extracts the local orientation information of the palmprint with the MFRAT [28]. Yue et al. incorporated a modified fuzzy C-means clustering technique into the Comp Code and reported better performance [29]. Guo et al. encoded the responses of real Gabor filters along six different orientations for palmprint images, namely the Binary Orientation Co-occurrence Vector (BOCV) [30]. Zhang et al. further improved the verification performance of BOCV by masking out the fragile bits, which is called E-BOCV [31]. Thanks to the bitwise codes, these methods usually have the advantages of high accuracy and low computational cost. The last category extracts the palm features in local regions of the palmprint image, using either overlapped or non-overlapped sub-blocks. Michael et al. applied the local binary pattern (LBP) [48] and the Sobel directional operator with a modified probabilistic neural network (PNN) [32]. Nanni et al. fused the matching scores of Discrete Cosine Coefficients, invariant LBP, and Gabor filters for palmprint images [53]. Tamrakar et al. utilized histograms of uniform LBP and entropy for the non-occluded sub-blocks of palmprint images; the matching scores of the non-occluded sub-blocks were then fused with the sum rule [54]. Lu et al. proposed Enhanced Gabor-based Region Covariance Matrices, which use the responses of both Gabor magnitude and Gabor phase together with an eigenvalue-based distance [33]. Guo et al. introduced a hierarchical multi-scale LBP histogram for palmprint feature representation, with the chi-square distance used for feature matching [34]. Mu et al. used the shiftable complex directional filter bank (CDFB) transform and uniform LBP to extract palmprint features with the Fisher linear discriminant (FLD) [35]. Hong et al. extracted the Block Dominant Orientation Code and the Block-based Histogram of Oriented Gradient as fine palmprint features [36]. Recently, Luo et al. successfully adapted the local line directional pattern (LLDP) descriptor to palmprint recognition [42]. Because they rely on local region descriptors, these methods achieve high performance under small misalignments.
The hybrid-based approach utilizes both holistic and structural features to represent palmprint images. Kumar et al. integrated holistic, line-based, and texture-based methods to extract palmprint features at the score and decision levels [37]. Morales et al. combined the Scale Invariant Feature Transform (SIFT) and Orthogonal Line Ordinal Features (OLOF) for contactless palmprint identification [38]. Kumar et al. also proposed a new nonlinear rank-level fusion approach for multiple palmprint representations [39]. Recently, Xu et al. proposed a framework that integrates three kinds of similarity scores, obtained from the left palmprint, the right palmprint, and the cross-matching between the left palmprint and the reversed right palmprint, with a weighted fusion scheme [40]. However, these methods generally require more computational time. To our knowledge, although various related works have been reported, only a few studies address slight misalignment without the need to align the palmprint images. More recently, Jia et al. investigated the histogram of oriented lines (HOL) and the line-shaped filter with several dimension reduction techniques [41]; HOL is stable under small changes of illumination, translation, and rotation, but its performance is still limited. To overcome these limitations, this paper proposes a palmprint recognition method using line-shaped filters (i.e., the real response of the Gabor filter or the modified finite Radon transform) in conjunction with the LMTrP descriptor, and achieves better performance than several state-of-the-art methods. The key contributions of this paper can be listed as follows.
(1) A new local micro-structure descriptor, called the Local Micro-structure Tetra Pattern (LMTrP), is proposed. By exploiting both the direction and the thickness of the local descriptor, the proposed descriptor outperforms the Local Tetra Pattern (LTrP).
(2) An effective feature representation method for palmprint images is derived. Experiments were conducted on several palmprint databases collected in both controlled and uncontrolled environments, and the method achieves better performance than related approaches (i.e., Comp Code, Ordinal Code, LLDP, HOL, etc.).
(3) In particular, the proposed method is the least sensitive to slight misalignment in almost all cases, including variations of rotation, translation, and blurring.

The remainder of this paper is organized as follows. Section 2 presents fundamental concepts of the Gabor wavelet filter, the MFRAT, and the LTrP descriptor, while the proposed descriptor and our palmprint recognition method are presented in Section 3. Experimental results are shown in Section 4 along with detailed discussions. Finally, Section 5 presents conclusions and possible topics for future studies.
2. Preliminaries
2.1. Line-shape based Filters

This section briefly reviews the most commonly used line-shape based filters. Line-shape based filters have been successfully applied to palmprint recognition to extract specific orientation and palm line features from palmprint images. Moreover, they can effectively eliminate unwanted features, such as signal noise, small scars, and unstable palm lines. Kong et al. proposed the Comp Code, which utilizes the real part of the Gabor wavelet filter [25]. More recently, inspired by the Finite Radon Transform [49], Jia et al. proposed RLOC, which utilizes the MFRAT [28]. Both line-shaped filters are introduced in this section. Then, to provide a better understanding of the proposed LMTrP descriptor, the LTrP descriptor is briefly reviewed [43-44] [56].
2.1.1. Gabor wavelet filter

In the spatial domain, the Gabor wavelet is a linear filter generated by multiplying a 2D Gaussian function with an oriented complex exponential. Its general form is usually expressed as follows:

G(x, y, \theta_m) = \frac{1}{2\pi\sigma^2} \exp\left\{-\frac{x^2 + y^2}{2\sigma^2}\right\} \times \exp\{2\pi j u (x\cos(\theta_m) + y\sin(\theta_m))\}   (1)

where (x, y) denotes the pixel position in the spatial domain, j^2 = -1, u is the frequency of the sinusoidal wave, and \sigma is the standard deviation of the 2D Gaussian envelope. When the number of orientations \kappa is fixed, the orientation of the sinusoidal wave is \theta_m = \pi(m-1)/\kappa, where m \in [1, 2, \cdots, \kappa].
As in [25] [41], this paper utilizes the real part of the Gabor wavelet filter to extract palm line features from palmprint images. Examples of the real part of the Gabor wavelet filter at six orientations are illustrated in Fig. 1. Let G_R(x, y, \theta_m) be the real part of the Gabor wavelet filter at angle \theta_m and I_{ROI}(x, y) be the palmprint region of interest (ROI) image. The magnitude Mag(x, y)_G and the orientation Orient(x, y)_G are extracted by convolving I_{ROI}(x, y) with the family of G_R(x, y, \theta_m):

Mag(x, y)_G = \min_m (I_{ROI}(x, y) \otimes G_R(x, y, \theta_m))   (2)

Orient(x, y)_G = \arg\min_m (I_{ROI}(x, y) \otimes G_R(x, y, \theta_m))   (3)

where m is the winning index and \otimes represents the convolution operation.
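To make Eqs. (1)-(3) concrete, the following is a minimal Python sketch of the winner-take-all filtering step. The filter size, sigma, and frequency u below are illustrative placeholders, not values specified by this paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_real(size, sigma, u, theta):
    """Real part of the Gabor wavelet of Eq. (1) on a size x size grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    gauss = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return gauss * np.cos(2.0 * np.pi * u * (x * np.cos(theta) + y * np.sin(theta)))

def gabor_mag_orient(roi, kappa=6, size=35, sigma=5.6, u=0.09):
    """Eqs. (2)-(3): per-pixel minimal response and its winning orientation index."""
    responses = np.stack([
        fftconvolve(roi, gabor_real(size, sigma, u, np.pi * (m - 1) / kappa), mode="same")
        for m in range(1, kappa + 1)])
    mag = responses.min(axis=0)        # Eq. (2): strongest (most negative) line response
    orient = responses.argmin(axis=0)  # Eq. (3): winning index m - 1
    return mag, orient
```

The minimum is used because a dark palm line aligned with the filter yields the most negative real Gabor response along its own orientation.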
Fig. 1. Examples of the Gabor wavelet filter at six orientations. (a) 0°, (b) 30°, (c) 60°, (d) 90°, (e) 120°, and (f) 150°.
2.1.2. Modified finite Radon transform

The modified finite Radon transform (MFRAT) assumes that line-like features can be approximated as straight lines of a specific thickness within a small local region. It is also a powerful technique for extracting orientation and magnitude features from palmprint images [28] [41]. In general, the MFRAT features are extracted by summing the image intensities along a line at a certain angle within a lattice of a given size. Hence, the magnitude of the MFRAT feature R[L_{\theta_m}] is defined as

R[L_{\theta_m}] = \frac{1}{C} \sum_{(x, y) \in L_{\theta_m}} I_{ROI}[x, y]   (4)

where I_{ROI}[x, y] denotes the intensity of the palmprint image at position [x, y], C represents the scale coefficient of R[L_{\theta_m}], and L_{\theta_m} represents the set of pixels forming a line at angle \theta_m. Examples of the MFRAT at six orientations are illustrated in Fig. 2. The magnitude Mag(x, y)_M and the orientation Orient(x, y)_M are extracted using the following two equations:

Mag(x, y)_M = |\min_m(R[L_{\theta_m}])|, \quad m \in [1, 2, \cdots, \kappa]   (5)

Orient(x, y)_M = \arg\min_m(R[L_{\theta_m}]), \quad m \in [1, 2, \cdots, \kappa]   (6)

where \kappa is the number of orientations and |\cdot| denotes the absolute value.
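Below is a small sketch, under simplifying assumptions, of Eqs. (4)-(6): each orientation is represented by a one-pixel-wide center line on a p x p lattice, and the 1/C scaling is realized by normalizing each line mask. The exact line sets used by the MFRAT in [28] may differ.

```python
import numpy as np
from scipy.ndimage import correlate

def mfrat_line_masks(p=11, kappa=6):
    """Normalized masks of center lines at kappa angles on a p x p lattice;
    a simple stand-in for the line sets L_theta of Eq. (4)."""
    half = p // 2
    masks = []
    for m in range(kappa):
        theta = np.pi * m / kappa
        mask = np.zeros((p, p))
        for t in np.linspace(-half, half, 4 * p):  # sample points along the line
            mask[int(round(half - t * np.sin(theta))),
                 int(round(half + t * np.cos(theta)))] = 1.0
        masks.append(mask / mask.sum())            # the 1/C scale coefficient
    return masks

def mfrat_mag_orient(roi, p=11, kappa=6):
    """Eqs. (5)-(6): per-pixel absolute minimal line sum and winning index."""
    responses = np.stack([correlate(roi.astype(float), mask, mode="nearest")
                          for mask in mfrat_line_masks(p, kappa)])
    return np.abs(responses.min(axis=0)), responses.argmin(axis=0)
```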
Fig. 2. Examples of the 11×11 MFRAT at six orientations. (a) 0°, (b) 30°, (c) 60°, (d) 90°, (e) 120°, and (f) 150°.

Fig. 3 illustrates the magnitude and orientation features of palmprint ROI images obtained with the Gabor wavelet filter and the MFRAT filter. Fig. 3(b) depicts the magnitude Mag(x, y)_G of the palmprint ROI image in Fig. 3(a), while the corresponding orientation Orient(x, y)_G is depicted in Fig. 3(c). In addition, Fig. 3(d) depicts the magnitude Mag(x, y)_M of the palmprint ROI image in Fig. 3(a), while the corresponding orientation Orient(x, y)_M is depicted in Fig. 3(e). Here, we assigned distinctive gray values to the different winning indexes, as illustrated in Fig. 3(c) and (e). As mentioned in [25] [28] [41], the most stable and useful palm line features, such as principal lines and texture patterns, are extracted accurately and clearly.
Fig. 3. The magnitude and orientation features are obtained by the Gabor wavelet filter and MFRAT filter. The first two rows of the palmprint images are acquired from the same individual, while the last row of the palmprint images is acquired from another individual. (a) palmprint ROI images, (b) and (c) depict the magnitude and orientation of the Gabor wavelet filter of (a), (d) and (e) depict the magnitude and orientation of the MFRAT filter of (a).
2.2. Local Tetra Pattern

The LTrP has been applied to content-based image retrieval [43] and face recognition [44]. The LTrP encodes the local descriptor using the relationship between a reference pixel and its surrounding neighbors along the horizontal and vertical directions; a more detailed discussion is given in [43] [55]. To extract the LTrP descriptor, let I_h^{n-1}(g_c) and I_v^{n-1}(g_c) be the (n-1)th-order pairwise derivatives along the horizontal and vertical directions at the reference pixel g_c in a given local region. The pairwise derivatives of the LTrP descriptor can be formulated as

I_h^{n-1}(g_c) = I_h^{n-2}(g_h) - I_h^{n-2}(g_c)   (7)

I_v^{n-1}(g_c) = I_v^{n-2}(g_v) - I_v^{n-2}(g_c)   (8)

where g_h is the neighboring pixel along the horizontal direction of g_c and g_v is the neighboring pixel along the vertical direction of g_c. Afterwards, the (n-1)th-order direction of the reference pixel, I_{Dir.}^{n-1}(g_c), is encoded into four quadrant values using Eq. (9), where Dir. is a reference direction, one of the four directions depicted in Fig. 4(b):

I_{Dir.}^{n-1}(g_c) =
\begin{cases}
1, & I_h^{n-1}(g_c) \ge 0 \text{ and } I_v^{n-1}(g_c) \ge 0 \\
2, & I_h^{n-1}(g_c) < 0 \text{ and } I_v^{n-1}(g_c) \ge 0 \\
3, & I_h^{n-1}(g_c) < 0 \text{ and } I_v^{n-1}(g_c) < 0 \\
4, & I_h^{n-1}(g_c) \ge 0 \text{ and } I_v^{n-1}(g_c) < 0
\end{cases}   (9)

Consequently, the nth-order LTrP descriptor at the reference pixel g_c, LTrP^n(g_c), is defined as

LTrP^n(g_c) = \{ s_1(I_{Dir.}^{n-1}(g_c), I_{Dir.}^{n-1}(g_1)), s_1(I_{Dir.}^{n-1}(g_c), I_{Dir.}^{n-1}(g_2)), \ldots, s_1(I_{Dir.}^{n-1}(g_c), I_{Dir.}^{n-1}(g_P)) \}\big|_{P=8}   (10)

s_1(I_{Dir.}^{n-1}(g_c), I_{Dir.}^{n-1}(g_P)) =
\begin{cases}
0, & I_{Dir.}^{n-1}(g_c) = I_{Dir.}^{n-1}(g_P) \\
I_{Dir.}^{n-1}(g_P), & \text{else}
\end{cases}   (11)

where P denotes the number of neighboring pixels of the reference pixel g_c, and the output of s_1(\cdot, \cdot) takes the four quadrant values obtained by comparing the directions of the reference pixel and its neighbors. If the direction of a neighboring pixel is the same as that of the reference pixel, the corresponding element of LTrP^n(g_c) is coded as "0"; otherwise, it is coded as the direction of the neighboring pixel. Afterwards, in order to improve the discriminative power of the feature vector, the nth-order LTrP^n is segregated into three binary patterns per reference direction:

LTrP^n\big|_{\widehat{Dir.}} = \sum_{p=1}^{P} 2^{(p-1)} \times s_2(LTrP^n(g_c))\big|_{\widehat{Dir.}}   (12)

s_2(LTrP^n(g_c))\big|_{\overrightarrow{Dir.} \in \widehat{Dir.}} =
\begin{cases}
1, & \text{if } LTrP^n(g_c) = \overrightarrow{Dir.} \\
0, & \text{else}
\end{cases}   (13)

where \widehat{Dir.} is the set of the four quadrant values excluding the value of the reference direction, \overrightarrow{Dir.} is one of the values of \widehat{Dir.}, and s_2 generates a binary pattern with respect to \overrightarrow{Dir.}. In the same way, the other three LTrP^n s for the remaining reference directions are generated based on Eq. (12). In total, the LTrP yields 12 binary patterns. In addition, as mentioned in [43] [56], to extract more discriminative features, the magnitude pattern of LBP [50] is used in conjunction with LTrP^n.
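The following is a minimal sketch of the second-order LTrP encoding (Eqs. (7)-(13)), assuming unit-distance right/down neighbors for the first-order derivatives and wrapping borders for brevity; the exact neighbor conventions of [43] may differ.

```python
import numpy as np

def quadrant_map(img):
    """First-order derivatives (Eqs. (7)-(8) with n = 2) and the Eq. (9)
    quadrant code at every pixel."""
    ih = np.roll(img, -1, axis=1).astype(float) - img  # right neighbor minus center
    iv = np.roll(img, -1, axis=0).astype(float) - img  # lower neighbor minus center
    return np.where(ih >= 0, np.where(iv >= 0, 1, 4), np.where(iv >= 0, 2, 3))

def ltrp2(code, y, x):
    """Eqs. (10)-(13) at interior pixel (y, x): the tetra pattern over 8
    neighbors and one binary pattern per non-reference direction."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = code[y, x]
    tetra = [0 if code[y + dy, x + dx] == center else code[y + dy, x + dx]
             for dy, dx in offsets]                       # Eqs. (10)-(11)
    binary = {d: sum(2**p for p, t in enumerate(tetra) if t == d)
              for d in (1, 2, 3, 4) if d != center}       # Eqs. (12)-(13)
    return tetra, binary
```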
3. Proposed method

According to the literature review in Section 2.2, the LTrP descriptor only exploits the quadrant directions of the local descriptor at the reference pixel of a given image. Hence, the LTrP cannot simultaneously capture local descriptors of various thicknesses. To overcome this drawback, the proposed descriptor considers the quadrant directions as well as the thickness through some modifications of the LTrP. In addition, we present a palmprint recognition method using the proposed descriptor and line-shaped filters.
3.1. Local Micro-structure Tetra Pattern

In general, existing local descriptor methods can be divided into non-directional and directional-derivative based approaches. As mentioned in [50], local descriptors commonly capture edges, lines, corners, spots, etc. Among the non-directional methods, the LBP [50] encodes the local descriptor based on the gray-scale values of the reference pixel and its neighbors. Among the directional-derivative methods, in contrast, the local derivative pattern (LDP) [51] encodes the local descriptor into two distinct values based on high-order derivatives of the reference pixel and its neighbors. More recently, the LTrP [43] encodes the local descriptor into four quadrant values based on high-order derivatives along the horizontal and vertical directions. Although the LTrP outperforms the LBP and LDP, it cannot capture local descriptors with different directions and thicknesses simultaneously. In order to capture the local descriptors' thickness, this paper presents the LMTrP descriptor, which compares the reference pixels with their surrounding pixels of a certain thickness along the horizontal and vertical directions. To extract the proposed LMTrP descriptor, let g_c be the neighborhood pixel (marked with blue) of the reference position (marked with red), and let g_{\beta,R} be the adjacent pixels (marked with yellow) of g_c along the \beta direction at distance R in a given local region. The adjacent pixels of g_c along a certain angle and the reference position are illustrated in Fig. 4(a). First, the (n-1)th-order pairwise derivatives I_h^{n-1}(g_c) and I_v^{n-1}(g_c) of the LMTrP descriptor along the horizontal and vertical directions are defined as

I_h^{n-1}(g_c) = \frac{1}{N_1 + 1}\left(\sum_{R=1}^{m_1} I_h^{n-2}(g_{\beta,R}) + I_h^{n-2}(g_c)\right) - \frac{1}{N_2}\sum_{R=1}^{m_2} I_h^{n-2}(g_{\theta,R})   (14)

I_v^{n-1}(g_c) = \frac{1}{N_1 + 1}\left(\sum_{R=1}^{m_1} I_v^{n-2}(g_{\beta,R}) + I_v^{n-2}(g_c)\right) - \frac{1}{N_2}\sum_{R=1}^{m_2} I_v^{n-2}(g_{\theta,R})   (15)

where m_1 is the thickness of the adjacent pixels of g_c along the \beta direction and m_2 is the thickness of the adjacent pixels of g_c along the \theta direction; \beta is 0°, 90°, 180°, or 270°; \theta is 0° or 180° for the horizontal direction and 90° or 270° for the vertical direction; N_1 is the number of adjacent pixels of g_c along the \beta direction, and N_2 is the number of adjacent pixels of g_c along the \theta direction. Consequently, the (n-1)th-order direction of the reference pixel, I_{Dir.}^{n-1}(g_c), is assigned one of the four quadrant values to generate the quadrant patterns of the LMTrP descriptor using Eq. (9), in the same way as for the LTrP descriptor. Then, the nth-order LMTrP descriptor is generated using Eq. (10). Finally, the descriptor is segregated into binary patterns using Eq. (12).
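As a minimal sketch of Eqs. (14)-(15) and the subsequent Eq. (9) encoding, the helpers below take the lower-order derivative values at the relevant positions as plain lists; how those positions are gathered from the image follows Fig. 4(a).

```python
def lmtrp_derivative(vals_beta, val_c, vals_theta, n1, n2):
    """Eq. (14)/(15): mean over the thick line of adjacent pixels along beta
    (including g_c itself) minus the mean over the adjacent pixels along
    theta. All inputs are lower-order derivative values I^{n-2}."""
    return (sum(vals_beta) + val_c) / (n1 + 1) - sum(vals_theta) / n2

def quadrant(ih, iv):
    """Eq. (9): map the horizontal/vertical derivative pair to a quadrant value."""
    if ih >= 0:
        return 1 if iv >= 0 else 4
    return 2 if iv >= 0 else 3
```

For instance, quadrant(2.73, 1.99) returns 1, which matches the first quadrant bit assigned in the worked example below.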
For example, a second-order LMTrP descriptor encodes the local descriptor in each direction for a given reference position, as illustrated in Fig. 4. Here, LMTrP_1^2, LMTrP_2^2, LMTrP_3^2, and LMTrP_4^2 represent the second-order quadrant patterns of the LMTrP descriptor in directions "1", "2", "3", and "4", respectively, and BP_1^1, BP_2^1, and BP_3^1 represent the corresponding three binary patterns of the LMTrP descriptor in direction "1". When the parameters of the LMTrP descriptor are set to m_1 = 3, m_2 = 1, N_1 = 3, and N_2 = 4, the pairwise first-order derivatives I_h^1(g_c) and I_v^1(g_c) are calculated as (8 + 1 + 2)/3 - (8 + 7 + 3 + 5 + 9)/(4 + 1) ≈ 2.73 and (7 + 3 + 7)/3 - (8 + 7 + 3 + 5 + 9)/(4 + 1) ≈ 1.99 based on Eq. (14) and Eq. (15). Because both first-order derivatives are greater than 0, the first corresponding bit of the quadrant pattern is assigned "1" through Eq. (9). The above procedure is then repeated over the adjacent pixels of g_c along each direction to generate the complete quadrant patterns. According to Eq. (10), when a quadrant value of the LMTrP equals the direction of the reference position, the corresponding bit of the quadrant pattern is encoded as "0"; otherwise, the corresponding bit is encoded as the direction of the neighborhood pixel. Finally, the quadrant pattern of the LMTrP generated for direction "1" is "0422023". Consequently, the quadrant pattern is segregated into three binary patterns, "00111010", "00000001", and "01000000", using Eq. (12). To represent the descriptor compactly, the 8-bit binary patterns are converted into uniform patterns; a uniform pattern contains at most two bitwise transitions from "0" to "1" or from "1" to "0". For more details, refer to [50].

Using similar procedures, the quadrant patterns and binary patterns of the LMTrP are calculated for the other three directions. In total, we obtain the quadrant patterns "30333440", "02022440", and "32011303" for directions "2", "3", and "4", respectively.
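As a small check of the segregation step, the sketch below splits a quadrant-pattern string into per-direction binary patterns (the string analogue of Eqs. (12)-(13), with the leftmost digit taken as p = 1, which is an assumption) and applies the uniform-pattern test of [50].

```python
def binary_patterns(tetra, ref_dir):
    """One binary string per non-reference direction: bit p is '1' where the
    quadrant code equals that direction (string analogue of Eq. (13))."""
    return {d: ''.join('1' if t == str(d) else '0' for t in tetra)
            for d in (1, 2, 3, 4) if d != ref_dir}

def is_uniform(bits):
    """Uniform pattern of [50]: at most two circular 0/1 transitions."""
    return sum(bits[i] != bits[(i + 1) % len(bits)]
               for i in range(len(bits))) <= 2

print(binary_patterns("30333440", ref_dir=2))  # patterns for directions 1, 3, 4
print(is_uniform("00111010"))                  # False: four transitions
```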
Fig. 4. An example illustrating the encoding of the quadrant patterns and binary patterns of the second-order LMTrP descriptor in directions "1", "2", "3", and "4". (a) The adjacent pixels of g_c at distances R along a certain angle \beta; (b) the four reference directions of the LMTrP; (c) the quadrant patterns of the LMTrP. P_k denotes the corresponding bit of the quadrant patterns of the LMTrP, where k \in [1, 2, \cdots, 8].
3.2. Comparison between the LMTrP and LTrP descriptors

In this section, we compare the LTrP and LMTrP descriptors. To evaluate the two descriptors, we used a pattern that contains two rectangles of different thickness: one is 3 pixels wide and the other is 1 pixel wide, and each rectangle has two diagonals. Compared with the second-order LTrP descriptor, our descriptor is able to extract more detailed textures in both direction and thickness simultaneously. The response maps show the difference in effectiveness between the LTrP and our descriptor in a visually comprehensible way. Fig. 5 illustrates the response maps obtained by applying the LTrP and our descriptor to the pattern.
Fig. 5. The response map of the LTrP and LMTrP descriptor. (a) a pattern image, (b) the response map of LTrP descriptor, (c) the response map of LMTrP descriptor. Each row of (b) and (c) depicts the response of (a) in each direction. Each column of (b) and (c) depicts the response of each corresponding binary pattern of (a).
3.3. Palmprint feature representation and matching

Palmprint recognition generally consists of palmprint feature representation and matching. This paper presents an LMTrP-based palmprint recognition method that is highly discriminative and relatively insensitive to slight misalignment. In the feature representation procedure, a line-shaped filter (e.g., Gabor or MFRAT) is first utilized to eliminate unnecessary palm features from a given palmprint image. As in [41] [42], we adopt 12 orientations for the Gabor or MFRAT filter. Local region histograms of the LMTrP are then extracted and concatenated into one feature vector to represent the palmprint; the local region is experimentally divided into 5×5 sub-blocks. This paper utilizes the quadrant directions and thicknesses of the LMTrP but ignores the magnitude, because the magnitude is not an essential feature for palmprint recognition. To enhance the discriminative capability and reduce the storage space, a dimension reduction technique is applied to the feature vector. Among dimension reduction techniques such as PCA, LDA, kernel principal component analysis (KPCA), and kernel linear discriminant analysis (KLDA) [52], KLDA was selected because it obtained the best performance in terms of both identification and verification.
Afterwards, the similarity between two palmprint feature vectors is calculated with the Euclidean distance in the feature matching procedure. Given two palmprint features F_1 = [a_1, a_2, \cdots, a_N] and F_2 = [b_1, b_2, \cdots, b_N], the similarity score S_E(F_1, F_2) is written as

S_E(F_1, F_2) = \sum_{j=1}^{N} \|a_j - b_j\|_2   (16)

where \|a_j - b_j\|_2 denotes the Euclidean distance.
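A minimal sketch of this representation and matching stage is given below, assuming the LMTrP binary-pattern maps have already been computed; the 59-bin uniform-pattern histogram size and the per-block normalization are illustrative assumptions rather than values fixed by this paper, and the KLDA projection [52] is omitted.

```python
import numpy as np

def block_histogram_feature(pattern_maps, grid=5, n_bins=59):
    """Concatenate per-sub-block histograms of each binary-pattern map into
    one feature vector (5x5 sub-blocks, as described above)."""
    feats = []
    for pm in pattern_maps:                       # one label map per binary pattern
        h, w = pm.shape
        for by in range(grid):
            for bx in range(grid):
                block = pm[by * h // grid:(by + 1) * h // grid,
                           bx * w // grid:(bx + 1) * w // grid]
                hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
                feats.append(hist / max(block.size, 1))
    return np.concatenate(feats)                  # then project with KLDA [52]

def similarity(f1, f2):
    """Eq. (16); for scalar components each ||a_j - b_j||_2 reduces to |a_j - b_j|."""
    return float(np.sum(np.abs(f1 - f2)))
```

Since Eq. (16) is a distance, smaller scores indicate more similar palmprints.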
Fig. 6 and Fig. 7 illustrate the block diagram and the proposed framework of palmprint recognition.
Fig. 6. The block diagram of palmprint recognition based on the LMTrP descriptor.
Fig. 7. An illustration of the palmprint recognition based on the LMTrP descriptor.
4. Experimental Results and Analysis
4.1. Database description

To evaluate the performance of the proposed method, we conducted several experiments on the Hong Kong Polytechnic University (PolyU) Multispectral, Indian Institute of Technology Delhi (IITD), and Biometric Engineering Research Center (BERC) palmprint databases. A brief description of the three databases is given below.
4.1.1. The PolyU Multispectral Palmprint Database

The PolyU multispectral palmprint database (PolyU_M) [45-46] is one of the most widely used benchmarks in the literature on multispectral palmprint recognition, and its palmprint ROI images are publicly available. The database consists of 6000 gray-scale palmprint images of both the left and right palms of 250 individuals, including 195 males and 55 females. The database was established in two separate sessions at different time periods under Blue, Green, Red, and NIR illumination. Each session contains 6 palmprint images for each palm of each individual. Fig. 8 gives some examples of palmprint ROI images.
Fig. 8. Palmprint images of the same individuals under the four illumination conditions in the PolyU_M database. (a) Blue, (b) Green, (c) Red, and (d) NIR. The first two columns were captured in the first session, whereas the last column was captured in the second session.
4.1.2. The IITD Palmprint Database

The IITD palmprint database [39] [47-48] is also one of the most widely used benchmarks in the literature on contactless palmprint recognition. The database consists of 3290 gray-scale palmprint images of both the left and right palms of 235 individuals, acquired without any contact or guiding surface, and contains 7 palmprint images for each palm of each individual. Apart from the original hand images, the palmprint ROI images are publicly available in the database. These images were acquired under severe variations of distortion, rotation, translation, etc. Fig. 9 shows examples of original hand images and the corresponding ROI images from two different subjects in this database. Compared with the PolyU_M database, the palmprint images in this database are closer to real-life applications under a controlled illumination environment.
Fig. 9. (a)-(d) Two pairs of left and right hand images of two different individuals in the IITD palmprint database. (e)-(h) The corresponding ROI images extracted from (a)-(d), respectively.
4.1.3. The BERC Palmprint Database

To evaluate the proposed method for practical palmprint recognition, we constructed a large collection of palmprint images captured with a mobile phone in both indoor and outdoor environments. The BERC palmprint database consists of 8967 hand images from indoor environments and 9224 hand images from outdoor environments. All participants provided images of both hands through our palmprint acquisition application with a hand-shape guide; a more detailed description of the acquisition application is given in [58]. The database was collected from 60 East Asian individuals and contains severe variations of hand pose, cluttered background, illumination, shadow, etc. Fig. 10 shows examples of hand images and the corresponding ROI images from different subjects in this database.
Fig. 10. (a) and (b) Three pairs of left and right hand images captured in indoor environments, and (d) and (e) three pairs captured in outdoor environments in the BERC palmprint database. (c) and (f) The corresponding ROI images extracted from (a), (b), (d), and (e), respectively.
4.2. Experiments on the PolyU Multispectral Database

In the first experiment, we evaluated the proposed method under different illuminations on the PolyU_M database. The first three samples from the first session were used as a training set, while the remaining samples from the first session were used as a validation set to determine the optimal parameters. All the samples from the second session were then used as a test set to evaluate the performance. Fig. 8 shows examples of palmprint ROI images under the different illuminations in this database.
Fig. 11. ROC curves of Comp Code, Ordinal Code, LLDP_G, LLDP_M, HOL_G, HOL_M, LTrP_G, LTrP_M, and the proposed methods (LMTrP_G and LMTrP_M) on the PolyU_M database: (a) Blue, (b) Green, (c) Red, and (d) NIR.

In terms of verification performance, we compared the equal error rates (EERs) of the proposed methods with those of Comp Code [25], Ordinal Code [26], LLDP [42], HOL [41], and LTrP on this database. The EER is frequently used to evaluate verification performance and is the operating point at which the false rejection rate (FRR) equals the false acceptance rate (FAR). Fig. 11 shows the receiver operating characteristic (ROC) curves under the different illuminations, and Table 1 summarizes the EERs of the representative methods, including the best verification performances reported for HOL [41] and LLDP [42]. The proposed method combining the MFRAT with the LMTrP achieves a significantly better EER than the other methods in most cases, especially under the visible illumination conditions, while the method combining Gabor with the LMTrP achieves a slightly worse EER than LMTrP_M except under the NIR illumination condition. Among these representative methods, Table 1 shows that LMTrP_G and LMTrP_M obtain the best average EERs of 0.0017% and 0.0006%, respectively. Furthermore, the confidence intervals are calculated using percentiles of the normal distribution [57]. Tables 2 to 5 list comparisons of FRRs on this database under the four illuminations at FARs of 0.01%, 0.001%, and 0.0005%.
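As a small illustrative sketch (this paper does not spell out its exact EER procedure), the EER described above can be estimated by sweeping a decision threshold over genuine and impostor distance scores and locating the crossing of FRR and FAR:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER from arrays of genuine/impostor distance scores
    (smaller = more similar, as with Eq. (16))."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine > t).mean() for t in thresholds])    # rejected genuines
    far = np.array([(impostor <= t).mean() for t in thresholds])  # accepted impostors
    i = np.argmin(np.abs(frr - far))          # threshold where the two rates cross
    return (frr[i] + far[i]) / 2.0
```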
Table 1 Verification EERs (%) on the PolyU_M database under different illuminations.

Methods            Blue     Green    Red      NIR      Average
Comp Code [25]     0.0995   0.1006   0.1082   0.1082   0.1041
Ordinal Code [26]  0.0387   0.0587   0.0662   0.0334   0.0493
LLDP_G [42]        0.1767   0.1989   0.4809   0.6515   0.3770
LLDP_M [42]        0.3358   0.4719   0.6896   0.3982   0.4739
HOL_G [41]         0.0006   0.0348   0.0012   0.0025   0.0098
HOL_M [41]         0.0017   0.0031   0.0059   0.0330   0.0109
LTrP_G             0.0335   0.0668   0.0021   0.0331   0.0339
LTrP_M             0.1035   0.0583   0.0583   0.2923   0.1281
LMTrP_G            0.0013   0.0040   0.0001   0.0012   0.0017
LMTrP_M            0.0000   0.0003   0.0000   0.0021   0.0006
Table 2 Comparison of FRRs (%) with 90% confidence intervals on the PolyU_M database under blue illumination at FAR = 0.01%, 0.001%, and 0.0005%.

Methods            FAR = 0.0100 (%)           FAR = 0.0010 (%)           FAR = 0.0005 (%)
Comp Code [25]     2.5889 [2.5861 : 2.5916]   3.1444 [3.1414 : 3.1475]   3.2889 [3.2858 : 3.2920]
Ordinal Code [26]  3.0111 [3.0081 : 3.0141]   4.6778 [4.6741 : 4.6814]   5.7444 [5.7404 : 5.7485]
LLDP_G [42]        0.7222 [0.7208 : 0.7237]   1.5111 [1.5090 : 1.5132]   1.8444 [1.8421 : 1.8468]
LLDP_M [42]        1.1111 [1.1093 : 1.1129]   2.0444 [2.0420 : 2.0469]   2.4000 [2.3973 : 2.4027]
HOL_G [41]         0.0000 [0.0000 : 0.0000]   0.0333 [0.0330 : 0.0336]   0.2000 [0.1992 : 0.2008]
HOL_M [41]         0.0000 [0.0000 : 0.0000]   0.2667 [0.2658 : 0.2676]   0.5000 [0.4988 : 0.5012]
LTrP_G             0.1000 [0.0995 : 0.1005]   0.2000 [0.1992 : 0.2008]   0.2000 [0.1992 : 0.2008]
LTrP_M             1.2000 [1.1981 : 1.2019]   3.5333 [3.5301 : 3.5365]   4.0000 [3.9966 : 4.0034]
LMTrP_G            0.0000 [0.0000 : 0.0000]   0.0333 [0.0330 : 0.0336]   0.1000 [0.0995 : 0.1005]
LMTrP_M            0.0000 [0.0000 : 0.0000]   0.0000 [0.0000 : 0.0000]   0.0000 [0.0000 : 0.0000]
Table 3 Comparison of FRRs (%) with 90% confidence intervals on the PolyU_M database under green illumination at FAR = 0.01%, 0.001%, and 0.0005%.

Methods            FAR = 0.0100 (%)           FAR = 0.0010 (%)           FAR = 0.0005 (%)
Comp Code [25]     2.6889 [2.6861 : 2.6917]   3.2000 [3.1969 : 3.2031]   3.3778 [3.3746 : 3.3809]
Ordinal Code [26]  5.2889 [5.2850 : 5.2928]   4.7000 [4.6963 : 4.7037]   5.2889 [5.2850 : 5.2928]
LLDP_G [42]        0.6556 [0.6542 : 0.6570]   1.4111 [1.4091 : 1.4132]   1.9444 [1.9421 : 1.9468]
LLDP_M [42]        1.3111 [1.3091 : 1.3131]   2.3778 [2.3751 : 2.3804]   2.9222 [2.9193 : 2.9251]
HOL_G [41]         0.6000 [0.5987 : 0.6013]   1.5000 [1.4979 : 1.5021]   1.7667 [1.7644 : 1.7690]
HOL_M [41]         0.0000 [0.0000 : 0.0000]   0.4667 [0.4655 : 0.4678]   0.5333 [0.5321 : 0.5346]
LTrP_G             0.2667 [0.2658 : 0.2676]   0.7000 [0.6986 : 0.7014]   1.0333 [1.0316 : 1.0351]
LTrP_M             0.5667 [0.5654 : 0.5680]   1.2667 [1.2647 : 1.2686]   1.4333 [1.4313 : 1.4354]
LMTrP_G            0.0000 [0.0000 : 0.0000]   0.2667 [0.2658 : 0.2676]   0.3000 [0.2991 : 0.3009]
LMTrP_M            0.0000 [0.0000 : 0.0000]   0.0000 [0.0000 : 0.0000]   0.0667 [0.0662 : 0.0671]
Table 4 Comparison of FRRs (%) with 90% confidence intervals on the PolyU_M database under red illumination at FAR = 0.01%, 0.001%, and 0.0005%.

Methods            FAR = 0.0100 (%)           FAR = 0.0010 (%)           FAR = 0.0005 (%)
Comp Code [25]     2.6333 [2.6306 : 2.6361]   3.2444 [3.2414 : 3.2475]   3.4889 [3.4857 : 3.4921]
Ordinal Code [26]  1.9778 [1.9754 : 1.9802]   3.5000 [3.4968 : 3.5032]   4.0111 [4.0077 : 4.0145]
LLDP_G [42]        2.3556 [2.3529 : 2.3582]   4.5444 [4.5408 : 4.5481]   5.3778 [5.3739 : 5.3817]
LLDP_M [42]        1.7000 [1.6978 : 1.7022]   3.2667 [3.2636 : 3.2697]   3.7333 [3.7300 : 3.7366]
HOL_G [41]         0.0000 [0.0000 : 0.0000]   0.1333 [0.1327 : 0.1340]   0.3333 [0.3323 : 0.3343]
HOL_M [41]         0.0333 [0.0330 : 0.0336]   0.2667 [0.2658 : 0.2676]   0.4667 [0.4655 : 0.4678]
LTrP_G             0.0000 [0.0000 : 0.0000]   0.0333 [0.0330 : 0.0336]   0.0333 [0.0330 : 0.0336]
LTrP_M             0.7333 [0.7319 : 0.7348]   1.9667 [1.9643 : 1.9691]   2.6000 [2.5972 : 2.6028]
LMTrP_G            0.0000 [0.0000 : 0.0000]   0.0000 [0.0000 : 0.0000]   0.0000 [0.0000 : 0.0000]
LMTrP_M            0.0000 [0.0000 : 0.0000]   0.0000 [0.0000 : 0.0000]   0.0000 [0.0000 : 0.0000]
Table 5 Comparison of FRRs (%) with 90% confidence intervals on the PolyU_M database under NIR illumination at FAR = 0.01%, 0.001%, and 0.0005%.

Methods            FAR = 0.0100 (%)           FAR = 0.0010 (%)           FAR = 0.0005 (%)
Comp Code [25]     2.8000 [2.7971 : 2.8029]   3.7778 [3.7745 : 3.7811]   3.9556 [3.9522 : 3.9589]
Ordinal Code [26]  2.1444 [2.1419 : 2.1470]   3.6000 [3.5968 : 3.6032]   4.1000 [4.0966 : 4.1034]
LLDP_G [42]        4.2333 [4.2298 : 4.2368]   8.3000 [8.2952 : 8.3048]   9.5778 [9.5727 : 9.5829]
LLDP_M [42]        3.6889 [3.6856 : 3.6922]   5.9889 [5.9848 : 5.9930]   6.6333 [6.6290 : 6.6376]
HOL_G [41]         0.0000 [0.0000 : 0.0000]   0.1667 [0.1660 : 0.1674]   0.2667 [0.2658 : 0.2676]
HOL_M [41]         0.0000 [0.0000 : 0.0000]   0.3000 [0.2991 : 0.3009]   0.3333 [0.3323 : 0.3343]
LTrP_G             0.0667 [0.0662 : 0.0671]   0.2667 [0.2658 : 0.2676]   0.4333 [0.4322 : 0.4345]
LTrP_M             0.7667 [0.7652 : 0.7682]   1.0667 [1.0649 : 1.0684]   1.3333 [1.3313 : 1.3353]
LMTrP_G            0.0000 [0.0000 : 0.0000]   0.0333 [0.0330 : 0.0336]   0.0667 [0.0662 : 0.0671]
LMTrP_M            0.0000 [0.0000 : 0.0000]   0.1000 [0.0995 : 0.1005]   0.2000 [0.1992 : 0.2008]
In terms of identification performance, this paper uses the rank-1 recognition rate on this database. Table 6 lists the rank-1 recognition rates of Comp Code [25], Ordinal Code [26], and the LLDP-based [42], HOL-based [41], LTrP-based, and LMTrP-based methods. In particular, LMTrP_M improves the recognition rate to 100%, while LMTrP_G achieves slightly higher average recognition rates than Comp Code and the HOL- and LTrP-based methods, and slightly lower average recognition rates than the LLDP-based methods and Ordinal Code. In addition, all the LTrP-based methods achieve lower performance (in both verification and identification) than the LMTrP-based methods. This is because the additional use of the LMTrP descriptor's thickness makes the palmprint features much more discriminative in most cases.
Table 6 The rank-1 recognition rate (%) on the PolyU_M database under different illuminations.

Methods            Blue     Green    Red      NIR      Average
Comp Code [25]     99.90    99.93    99.87    99.77    99.87
Ordinal Code [26]  99.97    99.97    99.97    99.93    99.96
LLDP_G [42]        100.00   100.00   99.77    99.93    99.93
LLDP_M [42]        100.00   100.00   100.00   99.77    99.94
HOL_G [41]         99.90    99.07    99.87    99.97    99.70
HOL_M [41]         99.73    99.73    99.77    99.87    99.78
LTrP_G             99.87    99.67    99.97    99.80    99.83
LTrP_M             99.30    99.77    99.57    99.23    99.47
LMTrP_G            99.93    99.90    100.00   99.73    99.89
LMTrP_M            100.00   100.00   100.00   100.00   100.00
4.3. Experiments on the IITD Palmprint Database

In the second experiment, we evaluated verification as well as identification performance under the severe variations of pose, distortion, translation, and rotation of a contactless palmprint database. Fig. 9 illustrates samples of hand images and the corresponding ROI images from different individuals in the IITD palmprint database. We divided the database into a training set and a test set: the first three samples of each palm were used for training, and the remaining samples were used for testing.
In terms of verification performance, we compared the EERs of the proposed methods with those of the LLDP-based [42] and HOL-based [41] methods on this database. Fig. 12 shows the ROC curves, and the EERs are compared in Table 7. The proposed LMTrP-based methods achieve significantly better EERs than the other methods.

In terms of identification performance, the rank-1 recognition rates on this database are listed in Table 8. The proposed LMTrP-based methods significantly improve the recognition rate without any alignment procedure. In addition, LMTrP_M achieves a slightly higher recognition rate than LMTrP_G. This demonstrates the effectiveness of the proposed method in contactless environments with severe variations of pose, distortion, rotation, and translation.
Fig. 12. ROC curves of LLDP_G, LLDP_M, HOL_G, HOL_M, LTrP_G, LTrP_M, and the proposed methods (LMTrP_G and LMTrP_M) on the IITD database.
Table 7 Verification EERs (%) of various methods on the IITD database.

Methods      Equal Error Rate (%)
LLDP_G [42]  5.79
LLDP_M [42]  9.59
HOL_G [41]   1.27
HOL_M [41]   1.67
LTrP_G       0.94
LTrP_M       1.67
LMTrP_G      0.87
LMTrP_M      0.87
Table 8 The rank-1 recognition rate (%) of various methods on the IITD database.

Methods      Recognition Rate (%)
LLDP_G [42]  93.38
LLDP_M [42]  91.95
HOL_G [41]   93.66
HOL_M [41]   89.83
LTrP_G       95.31
LTrP_M       94.81
LMTrP_G      95.03
LMTrP_M      95.60
4.4. Experiments on the BERC Palmprint Database

In the third experiment, we evaluated verification as well as identification performance under conditions more complicated than those of the PolyU_M and IITD databases, namely indoor and outdoor environments. Fig. 10 illustrates samples of hand images and the corresponding ROI images from different individuals in the BERC database. Three samples were randomly selected from each individual's palmprint ROI image set for training, and the remaining samples were used for testing.
In terms of verification performance, we compared the EERs of the proposed methods with those of the LLDP-based [42] and HOL-based [41] methods on this database. Fig. 13 shows the ROC curves for both indoor and outdoor environments, and the EERs are compared in Table 9. The LMTrP-based methods achieve significantly better EERs than the other methods.

In terms of identification performance, the rank-1 recognition rates on this database are listed in Table 10. The proposed LMTrP-based methods significantly improve the recognition rate in both indoor and outdoor environments. Furthermore, LMTrP_G achieves a slightly higher recognition rate than LMTrP_M. This demonstrates the effectiveness of the proposed method in complicated environments with severe variations of pose, cluttered background, illumination, shadow, etc.
Fig. 13. ROC curves of LLDP_G, LLDP_M, HOL_G, HOL_M, LTrP_G, LTrP_M, and the proposed methods (LMTrP_G and LMTrP_M) on the BERC database. (a) Indoor and (b) Outdoor.
Table 9 Verification EERs (%) of various methods on the BERC database.

Methods      Indoor   Outdoor
LLDP_G [42]  4.1928   5.1670
LLDP_M [42]  2.1100   2.6191
HOL_G [41]   1.1808   1.7151
HOL_M [41]   1.7110   2.1710
LTrP_G       1.4903   1.8315
LTrP_M       1.7467   2.1614
LMTrP_G      1.1119   1.6802
LMTrP_M      1.3998   1.9279
Table 10 The rank-1 recognition rate (%) of various methods on the BERC database.

Methods      Indoor    Outdoor
LLDP_G [42]  97.5800   95.7094
LLDP_M [42]  96.5082   94.7444
HOL_G [41]   95.7930   93.1794
HOL_M [41]   93.5773   91.8232
LTrP_G       96.2277   95.2791
LTrP_M       95.4284   94.8487
LMTrP_G      96.8027   95.7225
LMTrP_M      95.9473   95.0443
4.5. Measurement of execution time

In this section, we compare the execution times of the LLDP-, HOL-, and proposed LMTrP-based methods on the same databases. All experiments were performed on a desktop PC with an Intel i7-4960 CPU (3.6 GHz) and 32 GB of RAM, running Windows 7 with Visual Studio 2008 C++ and the Open Source Computer Vision library (OpenCV 2.3.1). The execution time taken by each method is given in Table 11; the MFRAT-based methods are faster than the Gabor-based methods.
Table 11 Execution time of various descriptors.

Methods      Execution time (ms)
LLDP_G [42]  145.25
LLDP_M [42]  33.50
HOL_G [41]   146.37
HOL_M [41]   80.30
LMTrP_G      154.40
LMTrP_M      82.68
Accordingly, the execution times for LLDP_G, LLDP_M, HOL_G, HOL_M, LMTrP_G, and LMTrP_M are 145.25, 33.50, 146.37, 80.30, 154.40, and 82.68 milliseconds, respectively. Once the descriptors are extracted, matching is very fast. Although the proposed LMTrP-based methods take somewhat more time than the LLDP- and HOL-based methods in the descriptor extraction procedure, the extraction time is within the acceptable range for real-time palmprint recognition applications. In fact, the execution time could be further reduced with optimization techniques.
4.6. Discussion

In the last experiment, we further simulated identification performance under different levels of translation, rotation, and blurring on an artificial palmprint database, as shown in Fig. 14.
Fig. 14. Examples of palmprint ROI images under variations of rotation, translation, and blurring based on the PolyU_M database. (a) Original palmprint ROI image; (b)-(d) rotated versions of (a) at angles of 2°, 4°, and 6°; (e)-(g) translated versions of (a) by 2, 4, and 6 pixels; (h)-(j) blurred versions of (a) with Gaussian filter standard deviations of 0.5, 1.5, and 2.5.

The second session of the PolyU_M palmprint database was chosen for this comparative evaluation because its ROI images were acquired under a controlled illumination environment and have little variance in translation, rotation, and blurring. The artificial palmprint database consists of rotated, translated, and blurred images at different levels. Fig. 14(b)-(d) show examples of rotated palmprint images of Fig. 14(a), Fig. 14(e)-(g) show examples of translated images, and Fig. 14(h)-(j) show examples of blurred images.
Fig. 15 illustrates the recognition performance of the proposed methods and the HOL-based competitors under varying levels of rotation, and the results are given in Table 12. LMTrP_G significantly outperforms the other three methods at all rotation levels on the artificial palmprint database. At a rotation of 4°, none of the HOL-based methods achieves a recognition rate higher than 96%, while the proposed LMTrP_G achieves over 99% under all four illumination conditions. Even at a rotation of 6°, the recognition rate of LMTrP_G remains at or above 95.90%. In addition, LMTrP_M performs worse than LMTrP_G but better than almost all HOL-based methods under all four illumination conditions, except for HOL_G under the NIR illumination condition. Therefore, compared with the HOL-based methods, the proposed LMTrP-based methods are less sensitive to rotation.
Fig. 15. The rank-1 recognition rate (%) under varying levels of rotation on four illumination conditions. (a) Blue, (b) Green, (c) Red, and (d) NIR.
Table 12 The rank-1 recognition rate (%) under varying levels of rotation.

Spectral  Methods      0°       2°       4°       6°       8°       10°
Blue      HOL_G [41]   99.90    99.23    94.83    72.73    27.57    6.30
          HOL_M [41]   99.93    97.07    93.00    80.90    55.67    27.70
          LMTrP_G      99.93    99.83    99.17    99.07    89.43    73.90
          LMTrP_M      100.00   99.13    97.77    92.63    79.47    56.60
Green     HOL_G [41]   99.07    94.37    74.27    26.70    3.47     0.53
          HOL_M [41]   99.77    95.37    90.67    74.80    47.27    20.23
          LMTrP_G      99.90    99.83    99.07    97.03    90.57    76.37
          LMTrP_M      100.00   99.60    97.33    88.80    67.80    45.77
Red       HOL_G [41]   99.87    98.20    91.20    70.00    37.13    12.47
          HOL_M [41]   99.77    88.10    79.90    59.17    33.60    16.27
          LMTrP_G      100.00   99.90    99.43    95.90    85.47    63.97
          LMTrP_M      100.00   98.17    92.37    76.93    54.17    35.13
NIR       HOL_G [41]   99.97    98.57    95.10    82.77    54.67    21.83
          HOL_M [41]   99.30    26.20    10.43    5.97     3.03     0.97
          LMTrP_G      99.97    99.77    99.47    98.23    94.80    84.97
          LMTrP_M      100.00   95.77    86.47    71.00    47.90    33.40
The recognition performance of the proposed methods and the HOL-based approaches under varying levels of translation is shown in Fig. 16 and Table 13. LMTrP_M significantly outperforms the other three methods at all translation levels on the artificial database. At a translation of 6 pixels, none of the HOL-based methods achieves a recognition rate higher than 92.33%, whereas LMTrP_M achieves at least 94.87% under all four illumination conditions. In addition, LMTrP_G also performs better than the HOL-based methods, though worse than LMTrP_M under the visible illumination conditions (Blue, Green, and Red) at a translation of 6 pixels. This is because the palmprint images acquired under the NIR illumination condition include palm vein information; in this case, the MFRAT may be more sensitive than the Gabor filter to varying levels of translation. Therefore, compared with the HOL-based methods, the proposed LMTrP-based methods are less sensitive to translation.
Fig. 16. The rank-1 recognition rate (%) under varying levels of translation on four illumination conditions. (a) Blue, (b) Green, (c) Red, and (d) NIR.
Table 13 The rank-1 recognition rate (%) under varying levels of translation.

Spectral  Methods      0 px     2 px     4 px     6 px     8 px     10 px
Blue      HOL_G [41]   99.90    99.47    97.87    82.80    79.83    40.70
          HOL_M [41]   99.93    98.17    92.20    73.70    47.03    20.40
          LMTrP_G      99.93    99.50    97.57    92.17    80.40    61.63
          LMTrP_M      100.00   99.77    99.40    97.43    93.77    82.83
Green     HOL_G [41]   99.07    95.30    86.83    68.40    43.83    9.63
          HOL_M [41]   99.77    97.77    91.17    72.00    44.23    18.83
          LMTrP_G      99.90    99.60    97.40    93.27    82.63    63.73
          LMTrP_M      100.00   99.57    98.33    94.87    86.87    68.03
Red       HOL_G [41]   99.87    99.43    97.63    90.97    70.07    30.70
          HOL_M [41]   99.77    95.53    86.73    64.77    39.63    15.93
          LMTrP_G      100.00   99.87    97.93    92.00    76.77    46.17
          LMTrP_M      100.00   99.67    98.33    97.00    92.43    74.17
NIR       HOL_G [41]   99.97    99.03    96.97    92.33    78.27    43.60
          HOL_M [41]   99.30    71.13    56.33    42.77    29.03    9.83
          LMTrP_G      99.97    99.77    99.20    96.70    91.00    71.70
          LMTrP_M      100.00   99.43    98.30    96.70    94.63    82.90
The recognition performance of the proposed methods and the HOL-based approaches under different levels of blurring is shown in Fig. 17 and Table 14. The experimental results show that LMTrP_G significantly outperforms the other three methods at all levels of blurring on the blurred palmprint database. Even at a standard deviation of 1.5, the recognition rate of LMTrP_G remains at or above 99.07% under the four illumination conditions. Therefore, compared with the other three methods, LMTrP_G is the least sensitive at all levels of the standard deviation of the Gaussian filter. However, the MFRAT-based methods show worse performance than the Gabor-based methods. This is because we utilized MFRAT lines that are only one pixel wide, which makes them sensitive to blurred lines; if the MFRAT lines were wider than one pixel, the performance of the MFRAT-based methods could be improved significantly.
Fig. 17. The rank-1 recognition rate (%) under varying levels of the standard deviation of the Gaussian filter on four illumination conditions. (a) Blue, (b) Green, (c) Red, and (d) NIR.
Table 14 The rank-1 recognition rate (%) under different levels of the standard deviation of the Gaussian filter.

Spectral  Methods      0.0      0.5      1.0      1.5      2.0      2.5
Blue      HOL_G [41]   99.90    99.93    98.57    93.83    80.77    63.50
          HOL_M [41]   99.93    99.43    97.90    93.60    85.90    77.13
          LMTrP_G      99.93    99.90    99.80    99.53    98.53    96.80
          LMTrP_M      100.00   100.00   98.33    87.67    66.70    50.10
Green     HOL_G [41]   99.07    99.90    99.27    94.23    83.67    62.47
          HOL_M [41]   99.77    99.03    2.23     0.20     0.20     0.20
          LMTrP_G      99.90    99.77    99.70    99.40    98.37    94.97
          LMTrP_M      100.00   99.97    98.70    95.63    86.03    71.17
Red       HOL_G [41]   99.87    99.50    88.33    65.90    44.10    22.87
          HOL_M [41]   99.77    98.33    94.07    87.63    80.80    65.33
          LMTrP_G      100.00   100.00   99.93    99.73    97.53    91.07
          LMTrP_M      100.00   99.97    98.53    90.87    77.20    64.27
NIR       HOL_G [41]   99.97    99.53    93.63    79.17    52.20    23.50
          HOL_M [41]   99.30    91.07    81.40    67.43    54.47    42.20
          LMTrP_G      99.97    99.93    99.77    99.07    98.67    97.00
          LMTrP_M      100.00   99.53    98.07    92.93    84.43    73.83
5. Conclusions

The proposed LMTrP descriptor is highly compact, discriminative, and less sensitive to slight misalignment. We have developed a palmprint recognition method that integrates the proposed descriptor with line-shape based filters. The main contribution of this paper is the use of the local descriptors' direction as well as thickness, which simultaneously improves the discriminative capability and the performance. The proposed methods were evaluated on three palmprint databases. Compared with traditional palmprint feature representation methods, the proposed methods achieved the best results in almost all cases owing to the additional use of the local descriptors' thickness. In addition, the proposed methods are more stable than other related methods under different levels of rotation, translation, and blurring, because the local region histograms change only slightly under such variations.
From our perspective, several aspects are worthy of further study: (1) this paper assumes that the palmprint textures are well extracted with line-shape based filters, but the local descriptor is difficult to extract under very low or very high contrast conditions; (2) the performance of the proposed method is expected to improve when the distortion of palms in contactless palmprint recognition is explicitly handled; (3) the proposed method could suffer from occlusion; and (4) although this paper focuses on palmprint images, the proposed descriptor may also be applicable to other related research areas [56].
References [1] A.K. Jain, A. Ross, S. Prabhakar, An Introduction to Biometric Recognition, IEEE Trans. Circuits Syst. Video Technol. 14 (1) (2004) 4-20.
[2] J.A. Unar, W.C. Seng, A. Abbasi, A review of biometric technology along with trends and prospects, Pattern Recognit. 47 (8) (2014) 2673-2688. [3] A.W. Kong, D. Zhang, G.M. Lu, A study of identical twins' palmprints for personal verification, Pattern Recognit. 39 (11) (2006) 2149-2156.
[4] A. Kong, D. Zhang, M. Kamel, A survey of palmprint recognition, Pattern Recognit. 42 (7) (2009) 1408-1418.
[5] D. Zhang, W. Zuo, F. Yue, A comparative study of palmprint recognition algorithms, ACM Comput. Surv. 44 (1) (2012) 2:1-2:37.
[6] A. Genovese, V. Piuri, F. Scotti, Touchless palmprint recognition systems, Springer, Switzerland, 2014.
[7] G. Lu, D. Zhang, K. Wang, Palmprint recognition using eigenpalms features, Pattern Recognit. Lett. 24 (9) (2003) 1463-1467.
[8] X. Wu, D. Zhang, K. Wang, Fisherpalms based palmprint recognition, Pattern Recognit. Lett. 24 (15) (2003) 2829-2838.
[9] T. Connie, A.T.B. Jin, M.G.K. Ong, D.N.C. Ling, An automatic palmprint recognition system, Image Vis. Comput. 23 (5) (2005) 501-515.
[10] L. Shang, D.S. Huang, J.X. Du, C.H. Zheng, Palmprint recognition using FastICA algorithm and radial basis probabilistic neural network, Neurocomputing 69 (13-15) (2006) 1782-1786.
[11] Y. Wang, Q. Ruan, Kernel Fisher Discriminant Analysis for Palmprint Recognition, in: Proceedings of IEEE International Conference on Pattern Recognition (ICPR 2006), Hong Kong, China, pp. 457-460, 2006.
[12] M. Wang, Q. Ruan, Palmprint recognition based on two-dimensional methods, in: Proceedings of IEEE International Conference on Signal Processing (ICSP 2006), Beijing, China, vol. 4, 2006.
[13] Z. Zhao, D. Huang, W. Jia, Palmprint recognition with 2DPCA+PCA based on modular neural networks, Neurocomputing 71 (1-3) (2007) 448-454.
[14] D. Hu, G. Feng, Z. Zhou, Two-dimensional locality preserving projecting (2DLPP) with its application to palmprint recognition, Pattern Recognit. 40 (3) (2007) 339-342.
[15] W. Li, D. Zhang, Z. Xu, Palmprint identification by fourier transform, Int. J. Pattern Recognit. Artif. Intell. 16 (4) (2002) 417-432.
[16] P.H. Hennings-Yeomans, B.V.K.V. Kumar, M. Savvides, Palmprint classification using multiple advanced correlation filters and palm-specific segmentation, IEEE Trans. Inf. Forensics. Secur. 2 (3) (2007) 613-622.
[17] X. Jing, D. Zhang, A face and palmprint recognition approach based on discriminant DCT feature extraction, IEEE Trans. Syst. Man Cybern. Part B-Cybern. 34 (6) (2004) 2405-2415.
[18] M. Ekinci, M.Aykut, Palmprint Recognition by Applying Wavelet-Based Kernel PCA, J. Comput. Sci. Technol. 23 (5) (2008) 851-861.
[19] M. Ekinci, M. Aykut, Gabor-based kernel PCA for palmprint recognition, Electron. Lett. 43 (20) (2007) 1077-1079. [20] X. Pan, Q.Q. Ruan, Palmprint recognition using Gabor feature-based (2D)2PCA, Neurocomputing 71 (13-15) (2008) 3032-3036.
[21] C. Han, H. Cheng, C. Lin, K. Fan, Personal authentication using palm-print features, Pattern Recognit. 36 (2) (2003) 371-381.
[22] X.Q. Wu, D. Zhang, K.Q. Wang, Palm line extraction and matching for personal authentication, IEEE Trans. Syst. Man Cybern. Part A - Syst. Humans 36 (5) (2006) 978-987.
[23] D. Huang, W. Jia, D. Zhang, Palmprint verification based on principal lines, Pattern Recognit. 41 (4) (2008) 1316-1327.
[24] D. Zhang, W. Kong, J. You, M. Wong, Online Palmprint Identification, IEEE Trans. Pattern Anal. Mach. Intell. 25 (9) (2003) 1041-1050.
[25] A.W. Kong, D. Zhang, Competitive coding scheme for palmprint verification, in: Proceedings of IEEE International Conference on Pattern Recognition (ICPR 2004), Cambridge, UK, pp. 520-523, 2004.
[26] Z.N. Sun, T.N. Tan, S.Z. Li, Ordinal palmprint representation for personal identification, in: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, pp. 279-284, 2005.
[27] A. Kong, D. Zhang, M. Kamel, Palmprint identification using feature-level fusion, Pattern Recognit. 39 (3) (2006) 478-487.
[28] W. Jia, D. Huang, D. Zhang, Palmprint verification based on robust line orientation code, Pattern Recognit. 41 (5) (2008) 1521-1530.
[29] F. Yue, W.M. Zuo, D. Zhang, Orientation selection using modified FCM for competitive code-based palmprint recognition, Pattern Recognit. 42 (11) (2009) 2841-2849.
[30] Z. Guo, D. Zhang, L. Zhang, W. Zuo, Palmprint verification using binary orientation co-occurrence vector, Pattern Recognit. Lett. 30 (13) (2009) 1219-1227.
[31] L. Zhang, H. Li, J. Niu, Fragile Bits in Palmprint Recognition, IEEE Signal Process. Lett. 19 (10) (2012) 663-666.
[32] G.K.O. Michael, T. Connie, A.B.J. Teoh, Touch-less palm print biometrics: Novel design and implementation, Image Vis. Comput. 26 (12) (2008) 1551-1560.
[33] J. Lu, Y. Zhao, J. Hu, Enhanced Gabor-based region covariance matrices for palmprint recognition, Electron. Lett. 45 (17) (2009) 880-881.
[34] Z. Guo, L. Zhang, D. Zhang, X. Mou, Hierarchical multiscale LBP for face and palmprint recognition, in: Proceedings of IEEE International Conference on Image Processing (ICIP 2010), Hong Kong, China, pp. 4521-4524, 2010.
[35] M. Mu, Q. Ruan, S. Guo, Shift and gray scale invariant features for palmprint identification using complex directional wavelet and local binary pattern, Neurocomputing 74 (17) (2011) 3351-3360.
[36] D. Hong, W. Liu, J. Su, Z. Pan, G. Wang, A novel hierarchical approach for multispectral palmprint recognition, Neurocomputing 151 (3) (2015) 511-521.
[37] A. Kumar, D. Zhang, Personal authentication using multiple palmprint representation, Pattern Recognit. 38 (10) (2005) 1695-1704.
[38] A. Morales, M.A. Ferrer, A. Kumar, Towards contactless palmprint authentication, IET Comput. Vis. 5 (6) (2011) 407-416.
[39] A. Kumar, S. Shekhar, Personal identification using multibiometrics rank-level fusion, IEEE Trans. Syst. Man Cybern. Part C - Appl. Rev. 41 (5) (2011) 743-752.
[40] Y. Xu, L. Fei, D. Zhang, Combining Left and Right Palmprint Images for More Accurate Personal Identification, IEEE Trans. Image Process. 24 (2) (2015) 549-559.
[41] W. Jia, R. Hu, Y. Lei, Y. Zhao, J. Gui, Histogram of Oriented Lines for Palmprint Recognition, IEEE Trans. Syst. Man Cybern. – Syst. 44 (3) (2014) 385-395.
[42] Y. Luo, L. Zhao, B. Zhang, W. Jia, F. Xue, J. Lu, Y. Zhu, B. Xu, Local line directional pattern for palmprint recognition, Pattern Recognit. 50 (2) (2016) 26-44.
[43] S. Murala, R.P. Maheshwari, R. Balasubramanian, Local Tetra Patterns: A New Feature Descriptor for Content-Based Image Retrieval, IEEE Trans. Image Process. 21 (5) (2012) 2874-2886.
[44] E. Walia, N.K. Rajput, Evaluation of Local Tetra Pattern Features for Face Recognition, in: Proceedings of IEEE International Conference on Computer Communication and Informatics (ICCCI 2014), Coimbatore, India, pp. 1-6, 2014.
[45] D. Zhang, Z. Guo, G. Lu, L. Zhang, W. Zuo, An Online System of Multi-spectral Palmprint Verification, IEEE Trans. Instrum. Meas. 59 (2) (2010) 480-490.
[46] PolyU Multispectral Palmprint Database, http://www4.comp.polyu.edu.hk/~biometrics/MultispectralPalmprint/MSP.htm.
[47] A. Kumar, Incorporating cohort information for reliable palmprint authentication, in: Proceedings of Sixth Indian Conference on Computer Vision, Graphics & Image Processing (ICVGIP 2008), Bhubaneswar, India, pp. 583-590, 2008.
[48] IIT Delhi Touchless Palmprint Database version 1.0, http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Palm.htm.
[49] F. Matus, J. Flusser, Image representations via a finite Radon transform, IEEE Trans. Pattern Anal. Mach. Intell. 15 (10) (1993) 996-1006.
[50] T. Ojala, M. Pietikainen, T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal. Mach. Intell. 24 (7) (2002) 971-987.
[51] B. Zhang, Y. Gao, S. Zhao, J. Liu, Local derivative pattern versus local binary pattern: face recognition with high-order local pattern descriptor, IEEE Trans. Image Process. 19 (2) (2010) 533-544.
[52] J. Yang, A.F. Frangi, J. Yang, D. Zhang, Z. Jin, KPCA Plus LDA: A complete kernel fisher discriminant framework for feature extraction and recognition, IEEE Trans. Pattern Anal. Mach. Intell. 27 (2) (2005) 230-244.
[53] L. Nanni, A. Lumini, Ensemble of multiple palmprint representation, Expert Syst. Appl. 36 (3) (2009) 4485-4490.
[54] D. Tamrakar, P. Khanna, Occlusion Invariant Palmprint Recognition with ULBP Histograms, in: Proceedings of the Eleventh International Multi-Conference on Information Processing (IMCIP 2015), Bangalore, India, pp. 491-500, 2015.
[55] K. Fan, T. Hung, A Novel Local Pattern Descriptor-Local Vector Pattern in High-Order Derivative Space for Face Recognition, IEEE Trans. Image Process. 23 (7) (2014) 2877-2891.
[56] L. Nanni, S. Brahnam, S. Ghidoni, E. Menegatti, T. Barrier, Different Approaches for Extracting Information from the Co-Occurrence Matrix, PLoS One 8 (12) (2013) 1-9.
[57] R.M. Bolle, N.K. Ratha, S. Pankanti, An Evaluation of Error Confidence Interval Estimation Methods, in: Proceedings of IEEE International Conference on Pattern Recognition (ICPR 2004), Cambridge, UK, pp. 103-106, 2004.
[58] J.S. Kim, G. Li, B. Son, J. Kim, An Empirical Study of Palmprint Recognition for Mobile Phones, IEEE Trans. Consum. Electron. 61 (3) (2015) 311-319.
Author Biography
Gen Li received his B.S. degree in Computer Science and Technology from Southwest University for Nationalities, Chengdu, China, in 2006, and is currently working toward his Ph.D. degree in Electrical and Electronic Engineering at Yonsei University, Seoul, Korea. His current research interests include computer vision, image analysis, and pattern recognition.
Jaihie Kim received his Ph.D. degree from Case Western Reserve University, USA. Since 1984, he has been a professor at the School of Electrical and Electronic Engineering, Yonsei University. He is currently the Director of the Biometric Engineering Research Center, and his research interests include mobile biometrics, spoof detection, cancellable biometrics, and face age estimation/synthesis. He has authored many papers in international technical journals, a list of which can be found at http://cherup.yonsei.ac.kr.
Highlights
A novel Local Micro-structure Tetra Pattern (LMTrP) is proposed.
A framework for palmprint recognition based on LMTrP is presented.
The proposed palmprint recognition method is less sensitive to minor misalignment.
The proposed palmprint recognition method outperforms state-of-the-art local-feature-based approaches.