Pattern Recognition 50 (2016) 26–44
Local line directional pattern for palmprint recognition

Yue-Tong Luo (a), Lan-Ying Zhao (a), Bob Zhang (b), Wei Jia (c,d,*), Feng Xue (a), Jing-Ting Lu (a), Yi-Hai Zhu (e), Bing-Qing Xu (a)

(a) School of Computer and Information, Hefei University of Technology, Hefei 230009, China
(b) Faculty of Science and Technology, University of Macau, Macau, China
(c) Hefei Institutes of Physical Science, Chinese Academy of Science, Hefei 230031, China
(d) Biometrics Research Centre, Department of Computing, Hong Kong Polytechnic University, Hong Kong
(e) Tableau Software, Seattle, WA 98103, USA
Article history: Received 29 November 2014; Received in revised form 8 August 2015; Accepted 27 August 2015; Available online 9 September 2015.

Abstract
Local binary patterns (LBP) are one of the most important image representations. However, LBPs have not been as successful as other methods in palmprint recognition. Originally, LBP descriptors construct feature vectors in the image intensity space, using pixel intensity differences to encode a local representation of the image. Recently, similar feature descriptors have been proposed that operate in the gradient space instead of the image intensity space, such as local directional patterns (LDP) and local directional number patterns (LDN). In this paper, we propose a new feature input space and define an LBP-like descriptor that operates in the local line-geometry space, thus proposing a new image descriptor: local line directional patterns (LLDP). Moreover, the purpose of this work is to show that different implementations of LLDP descriptors perform competitively in palmprint recognition. We evaluate variations of LLDPs in which, e.g., the modified finite Radon transform (MFRAT) and the real part of Gabor filters are exploited to extract robust directional palmprint features. As is well known, palm lines are the essential features of a palmprint, and we show that the proposed LLDP descriptors are suitable for robust palmprint recognition. Finally, we present a thorough performance comparison among different LBP-like and LLDP image descriptors. Based on the experimental results, the proposed LLDP feature encoding using directional indexing can achieve better recognition performance than that using bit strings in the Gabor-based implementation of LLDPs. We used four databases for performance comparisons: the Hong Kong Polytechnic University Palmprint Database II, the blue band of the Hong Kong Polytechnic University Multispectral Palmprint Database, the Cross-Sensor palmprint database, and the IIT Delhi touchless palmprint database. Overall, LLDP descriptors achieve performance that is competitive with or better than other LBP descriptors.
© 2015 Elsevier Ltd. All rights reserved.
Keywords: Biometrics; Palmprint recognition; LBP; Local descriptor; Line feature
* Corresponding author at: Hefei Institutes of Physical Science, Chinese Academy of Science, Hefei 230031, China. E-mail address: [email protected] (W. Jia).
http://dx.doi.org/10.1016/j.patcog.2015.08.025

1. Introduction

In recent years, as one of the emerging biometric technologies, palmprint recognition has drawn wide attention from both academia and industry. Generally speaking, palmprint recognition uses a person's palm to identify or verify who the person is [1,2]. In the early stage of palmprint recognition research, inked palmprints were used for offline recognition [3]. Later, Zhang et al. [4] proposed the first online palmprint recognition system using 2D low-resolution imaging for civilian applications, in which the palmprint images were captured in a contact manner. In a low-resolution palmprint image, i.e., about 75 dpi, palm lines including principal lines and creases can be clearly observed [1-4]. Zhang et al. [5,6] also proposed the first 3D palmprint recognition system, which has more robust recognition performance; however, such a system needs an expensive capturing device. Recently, some work has focused on high-resolution palmprint recognition [7,8]. In a high-resolution palmprint image, i.e., about 400-500 dpi or greater, ridges, minutiae and pores can be detected [7,8]. Up to the present time, palmprint recognition at high resolution has been limited to forensic applications. Due to its importance for civilian applications, research on 2D low-resolution palmprint recognition is still very active. After the work of [4], contact-free [9,10], multimodal [11,12] and multispectral [13,14] 2D low-resolution palmprint recognition systems were investigated in depth. In this paper, our work also focuses on 2D low-resolution palmprint recognition. So far, many approaches have been proposed for 2D low-resolution palmprint recognition. Existing methods can be roughly
divided into several different categories, such as texture-based methods, line-based methods, subspace-learning-based methods, correlation methods, coding methods and local-descriptor-based methods [1,2]. For texture-based methods, the Gabor wavelet [15], discrete cosine transform [16], dual-tree complex wavelets [17], region covariance matrices [18], discrete orthonormal S-transform [19], co-occurrence matrices [20], fractal dimension [21] and other statistical methods have been used for palmprint texture feature extraction. Line-based methods are also very important in the palmprint recognition field. Wu et al. [22] proposed a palm line detection method using the first- and second-order derivatives of the Gaussian function. Liu et al. [23] proposed a wide palm line detector using an isotropic nonlinear filter, which is based on the isotropic responses of circular masks. In [24], Jia et al. proposed a principal line extraction method based on the modified finite Radon transform (MFRAT). So far, coding methods have achieved very promising recognition performance in terms of accuracy and matching speed. Ordinal code [25,26], robust line orientation code (RLOC) [27], competitive code [28,29,74] and the binary orientation co-occurrence vector (BOCV) [30] are representative coding methods. Recently, correlation methods such as the optimal tradeoff synthetic discriminant function (OTSDF) filter [31] and the band-limited phase-only correlation (BLPOC) filter [32] have also been successfully adopted for palmprint recognition. In the past two decades, the study of subspace learning techniques has made great progress, and many representative subspace learning methods have been applied to palmprint recognition. As two simple and representative subspace learning methods, principal component analysis (PCA) [33] and linear discriminant analysis (LDA) [34] require that the 2-D image data be reshaped into 1-D vectors, which can be referred to as the "image-as-vector" strategy.
In recent years, kernel methods [35], manifold learning methods [36], matrix and tensor embedding methods [37,38], sparse learning [39] and low-rank-representation-based methods [40] were also applied to palmprint recognition. In the image processing field, local image descriptors play an important role in object detection, image recognition, image retrieval, image registration, scene analysis, medical image processing, video surveillance, etc. Up to now, many local image descriptors have been proposed, including the local binary pattern (LBP) [41], scale-invariant feature transform (SIFT) [42], speeded-up robust features (SURF) [43], histogram of oriented gradients (HOG) [44], DAISY [45], binary robust independent elementary features (BRIEF) [46], and local difference binary (LDB) [47]. Some of them have been used for palmprint recognition. Badrinath et al. [48] and Chen et al. [49] each proposed palmprint recognition methods based on SIFT. Recently, Wu et al. exploited SIFT and an iterative RANSAC algorithm for contactless palmprint recognition [50]. Badrinath et al. [51] also proposed a palmprint recognition method using the SURF descriptor, which has faster feature extraction than SIFT. Ghandehari et al. [52] proposed a palmprint recognition method using a pyramidal HOG feature and fast tree-based matching. Jia et al. [53] recently proposed the histogram of oriented lines (HOL) descriptor, one of the variants of HOG. In [54], the discriminative histogram of local dominant orientation (D-HLDO) is presented for palmprint recognition. Guo et al. [55] adopted hierarchical multiscale LBP, and Shen et al. [56] proposed a method integrating LBP and Gabor responses for palmprint recognition. In [57], Mu et al. proposed a palmprint recognition method combining LBP and complex directional wavelets.
Among all kinds of local image descriptors, LBP is well known to be a popular and powerful one, and it has been successfully adopted for many different applications such as face recognition, texture classification, object recognition, and scene recognition [58]. Huang et al. [58] presented a survey on LBP and its
applications to facial image analysis. Based on the original LBP, many variants have been proposed, including the local ternary pattern (LTP) [59], dominant LBP (DLBP) [60], center-symmetric LBP (CSLBP) [61], local derivative pattern (LDP) [62], and completed LBP (CLBP) [63]. In order to extract the spatial structure of an object, some researchers have proposed methods combining the Gabor wavelet representation and LBP. Zhang et al. [64] proposed an LBP descriptor in the Gabor transform domain (LGBP). Then, Zhang et al. [65] proposed to combine Gabor phase information with LBP (LGPP). Later, Xie et al. [66] proposed the local XOR pattern integrated with the Gabor transform (LGXP). Currently, a new trend in LBP research is to encode directional information instead of intensity information. Jabid et al. [67] proposed the local directional pattern (LDP), which uses the edge responses derived from the Kirsch gradient operator in eight directions around a pixel. Later, Zhong et al. [68] presented the enhanced local directional pattern (ELDP), utilizing the directions of the most prominent edge response value and the second most prominent one. Ahmed [69] proposed the gradient directional pattern (GDP), which encodes the texture information of a local region by quantizing the gradient directional angles to form a binary pattern. Rivera et al. [70] presented the local directional number pattern (LDN) by computing the edge responses of the neighborhood using a compass mask and taking the most positive and most negative directions of those edge responses. Since edge gradients are more stable than pixel intensities, these descriptors based on edge gradients can provide better recognition performance than the original LBP for face and expression recognition. As we know, palm lines, including principal lines and wrinkles, are the essential and basic features of a palmprint in a low-resolution palmprint image. At the same time, palm lines are sometimes very complicated. For example, different palm lines have different widths and directions, and some lines may be interconnected and intertwined. Thus, the edge gradient may not be a good tool to capture robust palmprint features. In our previous work [53], we showed that the HOG descriptor based on gradients cannot obtain desirable recognition performance, whereas if the gradient is replaced by line features, the new descriptor, called HOL, achieves promising performance [53]. Motivated by the HOL [53] and LDP [67] descriptors, in this paper we propose a new LBP-structure descriptor, the local line directional pattern (LLDP), for palmprint recognition. LLDP encodes the structure of a local neighborhood by analyzing line directional information: we compute the line responses in the neighborhood in 12 different directions using MFRAT or Gabor filters. The main contributions of this work are as follows: (1) a novel LBP-structure local feature descriptor, LLDP, is proposed, which is very suitable for palmprint recognition. In LLDP, we propose to use a new feature space, i.e., the line feature space, instead of the gradient space or the intensity feature space, to compute a robust code. (2) We exploit different coding schemes to produce meaningful descriptors, based on bit strings and line direction numbers, respectively. We show that the coding scheme based on line direction index numbers can achieve better recognition performance than that based on bit strings. (3) On four palmprint databases, especially the Hong Kong Polytechnic University Palmprint Database II (PolyU II) and the blue band of the Hong Kong Polytechnic University Multispectral Palmprint Database (PolyU M_B) [71], the proposed LLDP descriptor achieves rank-1 identification rates of 100% in identification experiments, and EERs of 0.0216% and 0.0264% in verification experiments, respectively, which are much better than all other existing LBP-structure descriptors.
The rest of this paper is organized as follows: Section 2 presents the coding schemes of the proposed LLDP descriptor. Then, in Section 3, we detail the use of LLDP for palmprint recognition. We evaluate the performance of the proposed descriptor in Section 4. Finally, we present the concluding remarks in Section 5.
2. Local line directional pattern
2.1 LDP

The LDP of each pixel is an eight-bit binary code calculated by comparing the edge response values of different directions in a local 3×3 neighborhood [67]. Given a central pixel in the image, the eight directional edge response values m_i (i = 0, 1, ..., 7) are computed with the Kirsch masks M_i in eight different orientations centered on the pixel's position; these masks are shown in Fig. 1. The top k absolute response values |m_i| are selected and the corresponding directional bits are set to 1; the remaining (8-k) bits are set to 0. Thus, the LDP code is derived by:

LDP_k = Σ_{i=0}^{7} b_i(|m_i| - |m_k|) · 2^i,   b_i(a) = 1 if a ≥ 0, 0 if a < 0    (1)

where m_k is the k-th most significant directional response. Fig. 2 depicts the eight directional edge response positions and the corresponding binary bit positions in LDP.

Fig. 1. Kirsch edge masks in all eight directions.

2.2 ELDP

LDP is encoded using the absolute values of the convolution results and thus ignores the sign of the original values, which may contain useful information. In order to improve the performance of the original LDP, ELDP utilizes the directions of the maximum edge response value and the second most prominent one [68]. Similar to LDP, ELDP uses Kirsch masks to calculate edge responses in different directions; however, ELDP exploits the direction index number i of m_i to calculate the code instead of a bit string, so the eight gradient directions can be represented by eight octal digits. Another main difference between LDP and ELDP is that ELDP assigns fixed positions to the indexes: the maximum edge's direction index number occupies the three most significant bits of the code, and the second most prominent edge's direction index number occupies the three least significant bits. After calculating the edge response values, ELDP sorts {m_i} (i = 0, 1, ..., 7) in descending order to obtain the sorted values {m_t1, m_t2, ..., m_t8}. Here, t1 denotes the direction index number of the maximum edge response, t2 denotes that of the second most prominent one, and, obviously, t8 is the direction index number of the minimum one. For the central pixel of the neighborhood being coded, the decimal ELDP code is defined as follows:

ELDP = t1 · 8^1 + t2 · 8^0    (2)

2.3 LDN

LDN and ELDP are similar to some extent. ELDP pays attention to the maximum direction index number and the second most prominent one, whereas LDN considers the maximum direction index number and the minimum direction index number [70]. Like ELDP, LDN also assigns fixed positions to the maximum direction index number and the minimum direction index number. For the central pixel of the neighborhood being coded, the decimal LDN code is defined as follows:

LDN = t1 · 8^1 + t8 · 8^0    (3)

where t1 is the direction index number of the maximum edge response, and t8 is the direction index number of the minimum one. In LDN, two masks are used to produce the code, the Kirsch mask and the Gaussian mask; the corresponding codes are denoted as LDNK and LDNG, respectively.

2.4 Coding schemes of LLDP

As mentioned above, unlike LDP, ELDP and LDN, the LLDP descriptor uses the line direction space, instead of the gradient space, to calculate the feature code. In LLDP, the MFRAT [24] and the real part of Gabor filters are exploited to extract line direction features; in this paper, line responses in 12 directions are detected.

The MFRAT is defined as follows. In an image, given a square local area Z_p of size p×p, MFRAT calculates the line responses {m_i} (i = 1, 2, ..., 12) across the central pixel (x_0, y_0) by the following formula:

m_i = Σ_{(x,y) ∈ L_i} f[x, y]    (4)

where f[x, y] is the pixel value located at (x, y), and L_i denotes the set of points that make up a line on Z_p:

L_i = {(x, y) : y = S_i(x - x_0) + y_0, x ∈ Z_p}    (5)

where i is the index number corresponding to the slope S_i; that is, different i denote lines L_i of different slopes. For any given i, the summation m_i is taken over the single line that passes through the center point (x_0, y_0) of Z_p; m_i is thus the line response of L_i. Fig. 3 shows a 13×13 MFRAT at 12 different directions.

The Gabor filter is a powerful tool in computer vision and pattern recognition. Generally, the 2D circular Gabor filter has the following form:

G(x, y; θ, μ, σ) = (1 / (2πσ^2)) exp{-(x^2 + y^2) / (2σ^2)} exp{2πj(μx cos θ + μy sin θ)}    (6)

where j = √(-1), μ is the frequency of the sinusoidal wave, θ controls the orientation of the function, and σ is the standard deviation of the Gaussian envelope. Based on this Gabor function, a Gabor filter bank with one scale and k directions can be created, where the direction θ_i is calculated as follows:

θ_i = π(i - 1) / 12,   i = 1, 2, ..., 12    (7)

The real parts of the Gabor filters at the 12 directions are presented in Fig. 4. Given a palmprint image I, the line responses of its pixels are extracted by convolving I with the real parts of the designed Gabor filter bank to generate filtered images; the line response m_i located at I(x, y) is obtained by the following equation:

m_i = (I ∗ G_{θ_i})(x, y)    (8)

According to the coding schemes of LDP, ELDP and LDN, we use three coding strategies to generate the LLDP code.
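As a compact reference for the gradient-space codes of Eqs. (2) and (3), the following sketch (an illustration under our own conventions, not the authors' implementation) derives both the ELDP and LDN codes from eight signed directional edge responses:

```python
import numpy as np

def eldp_ldn_codes(m):
    """Derive the ELDP (Eq. (2)) and LDN (Eq. (3)) codes from eight
    signed directional edge responses m[0..7]; each code is a pair of
    octal digits, giving 64 possible patterns."""
    m = np.asarray(m, dtype=float)
    order = np.argsort(-m)              # descending: order[0] = t1
    t1, t2, t8 = int(order[0]), int(order[1]), int(order[-1])
    eldp = t1 * 8 + t2                  # t1 * 8^1 + t2 * 8^0
    ldn = t1 * 8 + t8                   # t1 * 8^1 + t8 * 8^0
    return eldp, ldn
```

Both codes are fixed-position pairs of direction numbers, which is why their codomain has 8 × 8 = 64 values rather than the 256 bit-string patterns of LDP.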
Fig. 2. 8-directional edge response positions (left) and LDP binary bit positions (right).
Fig. 3. 13×13 MFRAT at the directions of 0°, 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, 135°, 150°, and 165°; the red point is the center of the lattice; black and red points make up lines at different directions. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
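The MFRAT line responses of Eqs. (4) and (5) can be sketched as follows. This is an illustrative implementation: the paper defines the lines L_i by slopes S_i, while here each direction is rasterized by rounding points along the angle i·180°/12 through the lattice center.

```python
import numpy as np

def mfrat_masks(p=13, n_dirs=12):
    """Build p x p binary line masks, one per direction, by rasterizing
    a line through the lattice center at angle i * 180 / n_dirs degrees
    (an approximation of the slope-based lines L_i of Eq. (5))."""
    c = p // 2
    masks = []
    for i in range(n_dirs):
        theta = np.pi * i / n_dirs
        m = np.zeros((p, p))
        for t in range(-c, c + 1):
            x = int(round(c + t * np.cos(theta)))
            y = int(round(c - t * np.sin(theta)))
            if 0 <= x < p and 0 <= y < p:
                m[y, x] = 1.0
        masks.append(m)
    return masks

def line_responses(patch, masks):
    """m_i = sum of the pixel values along line L_i (Eq. (4))."""
    return np.array([float((patch * m).sum()) for m in masks])
```

Since palm lines are darker than their surroundings, the dominant direction at a pixel is the one with the minimum line response.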
LLDP coding strategy 1: in this strategy, we exploit a coding scheme like that of LDP. Since palm lines are dark lines, the minimum k line response values among {m_i} (i = 1, 2, ..., 12) are selected and the corresponding directional bits are set to 1; the remaining (12-k) bits are set to 0. Thus, for the central pixel of the neighborhood being coded, its LLDP code is derived by:

LLDP_k = Σ_{i=1}^{12} b_i(m_i - m_k) · 2^(i-1),   b_i(a) = 0 if a ≥ 0, 1 if a < 0    (9)

where m_k is the k-th minimum directional response.

LLDP coding strategy 2: the second strategy adopts a coding scheme similar to ELDP. That is, the indexes of the first and the second minimum line responses, t_12 and t_11, are utilized for coding. Since there are 12 directions, we use a duodecimal code. The decimal LLDP code based on strategy 2 is defined as follows:

LLDP = t_12 · 12^1 + t_11 · 12^0    (10)

LLDP coding strategy 3: this strategy adopts a coding scheme similar to LDN. The index numbers of the minimum line response, t_12, and the maximum line response, t_1, are utilized for coding. The decimal LLDP code based on strategy 3 is defined as follows:

LLDP = t_12 · 12^1 + t_1 · 12^0    (11)
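The three coding strategies can be sketched as follows. This is an illustrative sketch rather than the authors' exact code: it uses 0-based direction indices and k = 3 for strategy 1.

```python
import numpy as np

def lldp_codes(m, k=3):
    """Compute the three LLDP codes from 12 line responses m[0..11]
    (0-based direction indices). Palm lines are dark, so strategy 1
    (Eq. (9)) marks the k minimum responses, while strategies 2 and 3
    (Eqs. (10) and (11)) build duodecimal codes from the extreme
    direction indices."""
    m = np.asarray(m, dtype=float)
    order = np.argsort(m)               # ascending: order[0] is the minimum
    t_min, t_second_min, t_max = int(order[0]), int(order[1]), int(order[-1])

    code1 = 0                           # strategy 1: 12-bit string
    for i in order[:k]:
        code1 |= 1 << int(i)

    code2 = t_min * 12 + t_second_min   # strategy 2: t12 * 12^1 + t11 * 12^0
    code3 = t_min * 12 + t_max          # strategy 3: t12 * 12^1 + t1 * 12^0
    return code1, code2, code3
```

With k = 3, strategy 1 admits C(12, 3) = 220 distinct bit patterns, while strategies 2 and 3 each admit 12 × 12 = 144 codes, matching the feature lengths discussed in Section 4.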
For ease of distinction, when the palm lines are extracted by MFRAT or Gabor filters, the LLDP variants based on coding strategies 1, 2, and 3 are denoted as LLDP1M, LLDP1G, LLDP2M, LLDP2G, LLDP3M, and LLDP3G, respectively.

2.5 Analysis

In order to better understand the coding strategy of LLDP, Fig. 5 illustrates examples of the coding procedures of LBP, LDP, ELDP, LDN, and LLDP3M. It can be seen that if there is a line across the center point, there will be a strong line response in LLDP3M, and many pixels around the center point are involved in LLDP3M coding; therefore, the LLDP descriptor is robust to noise and illumination changes. Fig. 6 depicts the coding images of LBP, LDP, ELDP, LDNK, LDNG, LLDP1M, LLDP2M, LLDP3M, LLDP1G, LLDP2G, and LLDP3G. Obviously, the coding images of LLDP reflect the structure of the palmprint well. In particular, LLDP2M and LLDP3M, as well as LLDP2G and LLDP3G, have similar coding images due to their similar coding strategies. On the contrary, the gradient-based coding images of LBP, LDP, ELDP, and LDNK cannot clearly capture the basic structure of the palmprint.
3. Palmprint description

Similar to the generation of the LDN descriptor [70], an example of generating the LLDP descriptor is depicted in Fig. 7. Given a palmprint image I, the main steps of generating the LLDP histogram (LH) are as follows:

Step 1: For each pixel I(x, y), calculate its LLDP code to generate the coding image.
Fig. 4. The real parts of the Gabor filters at the directions of 0°, 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, 135°, 150°, and 165°.
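A minimal sketch of the real-part Gabor bank of Eqs. (6) and (7) follows. The filter size (35×35) is our own choice for illustration, while the defaults σ = 5.6179 and μ = 0.11 follow the empirical values reported in Section 4.

```python
import numpy as np

def gabor_real(size=35, theta=0.0, mu=0.11, sigma=5.6179):
    """Real part of the circular Gabor filter of Eq. (6):
    a Gaussian envelope times cos(2*pi*mu*(x cos(theta) + y sin(theta)))."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return envelope * np.cos(2 * np.pi * mu * (x * np.cos(theta) + y * np.sin(theta)))

def gabor_bank(n_dirs=12, **kwargs):
    """One scale and n_dirs orientations, theta_i = pi*(i-1)/12 (Eq. (7))."""
    return [gabor_real(theta=np.pi * i / n_dirs, **kwargs) for i in range(n_dirs)]
```

Convolving the palmprint image with each filter of the bank yields the 12 filtered images whose per-pixel values are the line responses m_i of Eq. (8).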
Step 2: To aggregate location information into the descriptor, we divide the whole palmprint image into non-overlapping small regions {R_1, ..., R_N} and extract a histogram H_j from each region, using each code as a bin:

H_j(c) = Σ_{(x,y) ∈ R_j} [LLDP(x, y) = c] · v,   ∀c    (12)

where c is an LLDP code, (x, y) is a pixel position in region R_j, LLDP(x, y) is the LLDP code at position (x, y), [·] equals 1 when its condition holds and 0 otherwise, and v is the accumulation value (commonly v = 1).

Step 3: The LH is computed by concatenating those histograms:

LH = ∏_{j=1}^{N} H_j    (13)

where ∏ is the concatenation operator, and N is the number of regions of the divided palmprint.

In the recognition stage, we calculate the distance between two histograms by the Chi-square distance and the Manhattan distance. Given two histograms LH^A and LH^B of length N, the Chi-square distance is defined as:

d_{χ2}(LH^A, LH^B) = Σ_{j=1}^{N} (LH^A_j - LH^B_j)^2 / (LH^A_j + LH^B_j)    (14)

The Manhattan distance between two histograms LH^A and LH^B is defined as:

d_M(LH^A, LH^B) = Σ_{j=1}^{N} |LH^A_j - LH^B_j|    (15)
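Steps 2 and 3 and the two distances (Eqs. (12)-(15)) can be sketched as follows; the 8×8 grid and 144-bin code range are illustrative defaults taken from the settings reported in Section 4.

```python
import numpy as np

def lldp_histogram(code_img, n_codes=144, grid=(8, 8)):
    """Eqs. (12) and (13): split the coding image into non-overlapping
    regions, histogram the codes in each region, and concatenate."""
    h, w = code_img.shape
    gh, gw = grid
    rh, rw = h // gh, w // gw
    hists = [np.bincount(code_img[r * rh:(r + 1) * rh,
                                  c * rw:(c + 1) * rw].ravel(),
                         minlength=n_codes)
             for r in range(gh) for c in range(gw)]
    return np.concatenate(hists)

def chi_square(a, b, eps=1e-12):
    """Chi-square distance of Eq. (14); eps guards empty bins."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(((a - b) ** 2 / (a + b + eps)).sum())

def manhattan(a, b):
    """Manhattan distance of Eq. (15)."""
    return float(np.abs(np.asarray(a, float) - np.asarray(b, float)).sum())
```

For a 128×128 coding image with 144 possible codes, the concatenated histogram has 144 × 8 × 8 = 9216 bins, the histogram length quoted in Section 4 for the strategy 2 and 3 descriptors.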
4. Experiments

4.1 Databases and experimental parameters

The proposed method was tested on four palmprint databases: PolyU II [4], PolyU M_B [71], Cross-Sensor [72], and the IIT Delhi touchless palmprint database [9,73]. The PolyU II database contains 7752 grayscale palmprint images from 386 palms corresponding to 193 individuals [4]. In this database, about 20 samples from each of these palms were collected in two sessions, where about 10 samples were captured in the first session and the remaining 10 samples were captured in the second session. The total numbers of images captured in the first session and the second session are 3889 and 3863, respectively. The PolyU M_B database contains 6000 grayscale palmprint images from 500 palms corresponding to 250 individuals [71]. In this database, about 12 samples from each of these palms were collected in two sessions, where 6 samples were captured in the first session and the remaining 6 samples were captured in the second session. The Cross-Sensor touchless palmprint database contains 12,000 images captured by three devices: one digital camera and two mobile phones [72]. Each device collected 4000 palmprint images in total from 200 palms corresponding to 100 individuals, with 20 samples captured from each palm in two sessions (10 samples in each session). The IIT Delhi touchless palmprint database consists of 2601 images from 460 palms corresponding to 230 users [9,73]. However, we found that classes 54 and 97 are the same left palm, so we removed the images of class 97; as a result, the remaining database includes 2596 images from 459 palms. For each palm, 5-7 palmprint images were acquired under varying hand poses. In addition to the original images, 150×150 pixel automatically cropped and normalized palmprint images are also provided on the website. In the PolyU II, PolyU M_B, and Cross-Sensor databases, the palmprint is oriented and an ROI image of size 128×128 is cropped. For the IIT Delhi database, we resize the ROI images to 128×128. On these four databases, both verification and identification experiments are conducted. Verification is a one-to-one comparison, which answers the question "is the person who he claims to be?". In the verification experiments, the Equal Error Rate (EER) is adopted to evaluate the performance of the different methods.
In the experiments, pairs of False Reject Rate (FRR) and False Accept Rate (FAR) statistics were used to calculate the EER. In each palmprint database, two image sets were constructed, i.e., the training set and the test set. Here, we suppose that each palm provides n palmprint training images (templates) in the training set. To obtain the statistical pairs of FRR and FAR, each of the test images was matched against all of the templates in the training set. If the test image and the template are from the same palm, the matching between them is regarded as a correct matching; likewise, an incorrect matching is defined in a similar manner. Since each palm has n templates in the training set, each test image generates n correct-matching scores, and the maximum of them is regarded as the final correct matching score. Similarly,
Fig. 5. Examples of generating the codes of different methods: (a) LBP, (b) LDP, (c) ELDP, (d) LDN, (e) LLDP3M.
when a test image is matched with the templates that come from a different palm, n incorrect scores can be calculated, and the maximum of them is regarded as an incorrect verification matching score. After all test images are matched with all templates in the training set, the statistical values of FRR, FAR and EER can be calculated. Identification is a one-to-many comparison, which answers the question "who is the person?". In this paper, closed-set
Fig. 6. Coding images. (a) Original image, (b) LBP, (c) LDP, (d) ELDP, (e) LDNK, (f) LDNG, (g) LLDP1M, (h) LLDP2M, (i) LLDP3M, (j) LLDP1G, (k) LLDP2G, (l) LLDP3G.
Fig. 7. An example of generating LLDP descriptor.
identification is conducted; that is, we know that all enrolled palms exist in the training set. To obtain the identification accuracy, the rank-1 identification rate is used, in which a test image is matched with all templates in the training set and the label of the most similar template is assigned to this test image. In this paper, the nearest neighbor classifier is used for classification. Before conducting the experiments, some parameters of the LLDP descriptor should be given. In this paper, we divide the whole
palmprint image into 8×8 non-overlapping small regions; thus, each small region contains 16×16 pixels. For the LLDP descriptors based on MFRAT, an important parameter is the value of p, the size of MFRAT; in this paper, p is set to 13 empirically. For the LLDP descriptors based on Gabor filters, the values of σ and μ in the Gabor filter are set to 5.6179 and 0.11 empirically, respectively. In LLDP1M and LLDP1G, the minimum k line response values among the 12 directions are selected and the corresponding positions are set to 1, so the total number of possible patterns is C(12, k). In this paper, k is set to 3; thus, the feature length of LLDP1M and LLDP1G is 220, and the corresponding histogram length is 14,080 (220×8×8). LLDP2M, LLDP2G, LLDP3M, and LLDP3G each choose two line response values for coding, so the feature length is 144 (12×12), and their histogram length is 9216 (144×8×8). In the next subsection, we conduct experiments on the PolyU II database to examine the recognition performance of LLDP under different parameters. In order to make a performance comparison, experiments with the following descriptors are also conducted: LBP [41], LBPu2 [41], LTP [59], LTPu2 [59], LGBP_Mag [64], LGBP_Phase [64], HGPP [64], LGXP [66], CLBP [63], LDP [67], ELDP [68], LDNK [70], LDNG [70], and HOL [53]. The original LBP compares the center pixel with its eight neighbors in a 3×3 neighborhood and produces 256 patterns [41]. Tan and Triggs [59] extended the original LBP to a version with 3-valued codes named LTP; the value of the threshold t is 5 in LTP. LBPu2 and LTPu2 use uniform patterns, which keep only the binary codes with at most two 0/1 transitions, so the total number of patterns is 59. CLBP uses uniform patterns and three matrices, CLBP_S, CLBP_M and CLBP_C, to generate the feature histogram.
LDP calculates the eight directional edge response values of the center pixel using Kirsch masks in eight different orientations (M0-M7) and also has 256 patterns. ELDP and LDNK also use Kirsch masks to compute the eight directions and select two prominent edge response values, then put their indexes in the first three and the last three bit positions, so there are 64 patterns. LDNG uses the derivative of a skewed Gaussian to create an asymmetric compass mask to compute the edge responses; the value of σ is set to 1, and the rotation angle is 45°. The methods mentioned above divide the palmprint image into 8×8 blocks to generate the coding image. For the HOL descriptor, the size of MFRAT is 11, the cell size is 16×16, the number of bins is 12, and the image is divided into 13×13 blocks. Each Gabor-based method in our experiments uses a 5-scale, 8-orientation Gabor filter bank. Each local Gabor pattern matrix is divided into 2×2 sub-blocks, and uniform patterns are used for coding. The final feature length of LGBP_Mag, LGBP_Phase and LGXP is 9440 (2×2×8×5×59). As HGPP contains both GGPP and LGPP, its final length is 37,760 (2×2×8×5×59×2×2).

4.2 Experimental results on PolyU II database

Fig. 8 depicts six palmprint ROI images from the PolyU II database, captured from the same palm but in different sessions. The three images in the first row (Fig. 8a) were captured in the first session, while the images in the second row (Fig. 8b) were captured in the second session. It can be seen that there are drastic changes of illumination between the images captured in different sessions; in this regard, PolyU II is a challenging database. On the PolyU II database, we use the first three palmprint images from the first session for training (1158 images). All palmprint images (3863 images) from the second session are used as test images to evaluate the recognition performance of the proposed method.

4.2.1 Experiments under different parameters

In order to examine the performance of the LLDP descriptors under different parameters, we first conduct several experiments. For the LLDPM descriptors, one important parameter is the size of MFRAT, so it is interesting to know the recognition performance of LLDPM when this size varies. Tables 1-3 list the recognition performance of LLDPM under different sizes of MFRAT on the PolyU II database.
Fig. 8. Six palmprint ROI images from the same palm in the PolyU II database: (a) three images captured in the first session; (b) three images captured in the second session.
Table 1
The performance of LLDP1M under different sizes of MFRAT on PolyU II database.

p     Manhattan distance          Chi-square distance
      Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
11    99.95        0.3110         99.90        0.4124
13    100          0.1306         100          0.1534
15    100          0.1177         100          0.1179
17    100          0.0859         100          0.0695
21    99.92        0.1183         99.95        0.0999
25    99.95        0.1366         99.92        0.1215
27    99.95        0.1434         99.92        0.1472
33    99.77        0.2374         99.79        0.2288

Table 5
The performance of LLDP2G under different values of μ in Gabor filter on PolyU II database.

μ      Manhattan distance          Chi-square distance
       Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
0.07   100          0.1632         99.97        0.1722
0.08   99.97        0.0944         99.95        0.0935
0.09   100          0.0538         100          0.0501
0.10   100          0.0744         99.97        0.0515
0.11   100          0.0561         100          0.0605
0.12   100          0.0300         100          0.0358
0.13   100          0.0266         100          0.0302
0.14   100          0.0366         100          0.0348
0.15   99.97        0.0285         100          0.0295
Table 2
The performance of LLDP2M under different sizes of MFRAT on PolyU II database.

p     Manhattan distance          Chi-square distance
      Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
11    99.92        0.2860         99.90        0.3248
13    99.97        0.1747         99.97        0.1848
15    100          0.1039         99.95        0.1281
17    99.97        0.1190         99.97        0.1073
21    99.97        0.1735         99.97        0.1685
25    99.92        0.1680         99.90        0.1675
27    99.87        0.1630         99.87        0.1552
33    99.82        0.1796         99.82        0.2104

Table 3
The performance of LLDP3M under different sizes of MFRAT on PolyU II database.

p     Manhattan distance          Chi-square distance
      Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
11    100          0.3701         100          0.4317
13    100          0.2636         100          0.2504
15    99.97        0.2022         100          0.2187
17    99.97        0.1867         99.97        0.2042
21    99.95        0.1540         99.97        0.1710
25    99.95        0.1735         99.92        0.1879
27    99.95        0.1761         99.92        0.1641
33    99.84        0.2140         99.90        0.2224

Table 6
The performance of LLDP3G under different values of μ in Gabor filter on PolyU II database.

μ      Manhattan distance          Chi-square distance
       Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
0.07   100          0.0466         100          0.0539
0.08   100          0.0205         100          0.0233
0.09   100          0.0399         100          0.0274
0.10   100          0.0388         100          0.0236
0.11   100          0.0448         100          0.0323
0.12   100          0.0318         100          0.0267
0.13   100          0.0282         100          0.0255
0.14   100          0.0216         100          0.0243
0.15   100          0.0306         99.97        0.0324
Table 4
The performance of LLDP1G under different values of μ in Gabor filter on PolyU II database.

μ      Manhattan distance          Chi-square distance
       Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
0.07   99.92        0.2102         99.92        0.3102
0.08   99.97        0.1444         99.97        0.2229
0.09   99.92        0.1123         99.97        0.1544
0.10   99.97        0.1074         99.97        0.1435
0.11   99.97        0.0938         100          0.1256
0.12   99.95        0.0845         100          0.0927
0.13   99.97        0.0966         99.97        0.0931
0.14   99.97        0.0531         99.97        0.0810
0.15   99.97        0.0557         100          0.0650

Table 7
The performance of LLDP1M under different splits on PolyU II database.

Split     Manhattan distance          Chi-square distance
          Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1 × 1     42.53        18.0279        45.07        18.9325
2 × 2     82.53        6.4231         82.66        7.5601
4 × 4     98.76        1.1837         98.84        1.2134
8 × 8     100          0.1306         100          0.1534
16 × 16   99.59        0.3938         99.51        0.4600

Table 8
The performance of LLDP2M under different splits on PolyU II database.

Split     Manhattan distance          Chi-square distance
          Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1 × 1     35.75        17.7019        40.64        18.9243
2 × 2     78.95        6.7857         78.80        8.0038
4 × 4     98.58        1.7772         98.14        1.8816
8 × 8     99.97        0.1747         99.97        0.1848
16 × 16   99.28        0.8020         99.33        0.7652
It can be seen that when the size of MFRAT is 17, the LLDP1M and LLDP2M descriptors achieve the best verification performance, while the best size for LLDP3M is 21. For the LLDPG descriptors based on the Gabor filter, the values of σ and μ in the Gabor filter are two important parameters. Here, we fix σ to 5.6179 and vary μ, which controls the bandwidth of the Gabor filter. Tables 4–6 list the recognition performance of LLDPG on the PolyU II database, where the value of μ
changes from 0.07 to 0.15 at intervals of 0.01. It can be seen that when μ is set to 0.14, the LLDP3G descriptor achieves the best recognition performance.

Table 9
The performance of LLDP3M under different splits on PolyU II database.

Split     Manhattan distance          Chi-square distance
          Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1 × 1     52.39        12.5682        53.27        13.7758
2 × 2     90.53        3.9848         89.83        4.5832
4 × 4     99.64        1.0599         99.46        1.2874
8 × 8     100          0.2636         100          0.2504
16 × 16   99.59        0.3435         99.56        0.3244
Table 10
The performance of LLDP1G under different splits on PolyU II database.

Split     Manhattan distance          Chi-square distance
          Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1 × 1     41.26        9.3812         50.69        8.2402
2 × 2     91.46        2.8139         93.89        2.5460
4 × 4     99.72        0.4792         99.92        0.5823
8 × 8     99.97        0.0938         100          0.1256
16 × 16   99.77        0.2889         99.82        0.3064
Table 11
The performance of LLDP2G under different splits on PolyU II database.

Split     Manhattan distance          Chi-square distance
          Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1 × 1     44.52        8.0585         49.47        7.5492
2 × 2     92.98        1.9100         94.87        1.8083
4 × 4     99.90        0.2154         99.92        0.2016
8 × 8     100          0.0561         100          0.0605
16 × 16   99.90        0.1630         99.95        0.1562
Table 12
The performance of LLDP3G under different splits on PolyU II database.

Split     Manhattan distance          Chi-square distance
          Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1 × 1     49.44        8.8224         51.20        8.6352
2 × 2     94.18        1.8935         94.30        1.8732
4 × 4     99.97        0.1452         99.97        0.1518
8 × 8     100          0.0448         100          0.0323
16 × 16   99.90        0.1564         99.90        0.1535
In the above experiments, we divided the whole palmprint image into 8 × 8 non-overlapping small regions to construct the histogram. It should be noted that the idea of partitioning the palmprint image was initially proposed in [31]. Tables 7–12 list the recognition performance of the different LLDP descriptors under different splits (1 × 1, 2 × 2, 4 × 4, 8 × 8, 16 × 16) on the PolyU II database. The experimental results show that the 8 × 8 split is the most suitable for recognition.

In LLDP1M and LLDP1G, the minimum k line response values {m_i} (i = 0, 1, …, 12) are selected and the corresponding directional bits are set to 1 (we denote this coding scheme as the "minimum" strategy). Tables 13 and 14 list the recognition performance of LLDP1M and LLDP1G under different values of k. When k is 3, the LLDP1M descriptor achieves the best recognition performance, while the best value of k for verification with LLDP1G is 1. Although palm lines are dark lines, the maximum k line response values may also contain useful information. Here, we change the coding scheme of LLDP1M and LLDP1G from the "minimum" strategy to the "maximum" strategy, i.e., the maximum k line response values are selected and the corresponding directional bits are set to 1. The experimental results of the "maximum" strategy are listed in Tables 15 and 16. We find that the "maximum" strategy also obtains good recognition performance, only a little worse than that of the "minimum" strategy.

In order to examine the discriminating power of a single line response, we change formula (9) to formula (15):

    LLDP_k = Σ_{i=0}^{12} b_i(m_i − m_k) · 2^i,   b_i(a) = { 0, i ≠ k;  1, i = k }    (15)

where m_k is the k-th minimum directional response. Obviously, in formula (15), only one line response is used for LLDP1M and LLDP1G coding. The experimental results are listed in Tables 17 and 18. It can be seen that the minimum and maximum line responses have the best discriminating power.
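The "minimum"/"maximum" bit-string coding described above can be sketched as follows. This is a minimal illustration under our own naming; it assumes 12 directional line responses per pixel, so with k = 3 there are C(12, 3) = 220 possible codes, which matches the feature length of 220 reported in Tables 13 and 14.

```python
import numpy as np

def lldp1_code(responses, k=3, strategy="minimum"):
    """Bit-string coding over 12 line responses: set the directional
    bits of the k smallest responses (palm lines are dark, so minimum
    responses are the most discriminative) or, under the "maximum"
    strategy, the k largest responses."""
    order = np.argsort(np.asarray(responses, dtype=float))
    idx = order[:k] if strategy == "minimum" else order[-k:]
    code = 0
    for i in idx:
        code |= 1 << int(i)   # one bit per direction
    return code
```

A per-block histogram over the C(12, k) reachable codes then forms the feature vector; formula (15) is the degenerate case where only the bit of the k-th ranked response is set.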
For the LLDP2G and LLDP3G descriptors, the coding strategies can be viewed as different combinations of direction index numbers. Here, we denote the coding strategy of LLDP2G as (t12, t11), i.e., the combination of the index of the minimum line response and the index of the second minimum line response. Similarly, (t12, t1) denotes the coding strategy of LLDP3G. Besides (t12, t11) and (t12, t1), there are other combinations between the index of the minimum line response and the indexes of the other line responses. Tables 19 and 20 present the recognition performance of LLDPM and LLDPG under different combinations of direction index numbers. Among the different combinations, (t12, t1) and (t12, t2) obtain good recognition performance.

4.2.2 Performance comparison of different local descriptors on PolyU II database

After the experiments on parameters, we conducted experiments to compare the performance of the different descriptors. Table 21 lists the identification and verification results
Table 13
The performance of LLDP1M under different values of k using the "minimum" strategy.

k   Feature length   Histogram length   Manhattan distance          Chi-square distance
                                        Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1   12               768                99.84        0.6656         99.74        0.7928
2   66               4224               99.95        0.2223         99.97        0.2631
3   220              14,080             100          0.1306         100          0.1534
4   495              31,680             99.90        0.2726         99.90        0.5085
5   792              50,688             99.95        0.5205         99.77        0.9565
6   924              59,136             99.87        0.5254         99.64        1.2250
Table 14
The performance of LLDP1G under different values of k using the "minimum" strategy.

k   Feature length   Histogram length   Manhattan distance          Chi-square distance
                                        Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1   12               768                99.97        0.0478         99.97        0.0431
2   66               4224               100          0.0899         100          0.0631
3   220              14,080             99.97        0.0938         100          0.1256
4   495              31,680             100          0.1373         100          0.1497
5   792              50,688             99.97        0.1535         99.97        0.1505
6   924              59,136             100          0.1417         100          0.1722
Table 15
The performance of LLDP1M under different values of k using the "maximum" strategy.

k   Feature length   Histogram length   Manhattan distance          Chi-square distance
                                        Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1   12               768                99.64        0.7965         99.64        1.0331
2   66               4224               99.82        0.3508         99.87        0.4109
3   220              14,080             99.95        0.2600         99.92        0.2353
4   495              31,680             99.87        0.2992         99.84        0.5946
5   792              50,688             99.82        0.7737         99.64        1.3826
6   924              59,136             99.87        0.5133         99.64        1.1968
Table 16
The performance of LLDP1G under different values of k using the "maximum" strategy.

k   Feature length   Histogram length   Manhattan distance          Chi-square distance
                                        Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1   12               768                100          0.0628         99.97        0.0410
2   66               4224               100          0.0780         100          0.0486
3   220              14,080             99.97        0.0676         100          0.0738
4   495              31,680             99.97        0.0966         99.97        0.0816
5   792              50,688             100          0.0798         100          0.0873
6   924              59,136             100          0.1516         100          0.1722
Table 17
The performance of LLDP1M using the coding method of (15) under different values of k.

k    Feature length   Histogram length   Manhattan distance          Chi-square distance
                                         Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1    12               768                99.84        0.6656         99.74        0.7928
2    12               768                98.94        1.5699         98.94        1.7782
3    12               768                98.94        1.7147         98.71        1.8989
4    12               768                98.34        2.2925         98.52        2.4321
5    12               768                97.62        2.4471         97.83        2.6496
6    12               768                96.32        2.7299         96.79        2.8392
7    12               768                96.32        2.6240         96.53        2.7646
8    12               768                94.90        3.1386         95.47        3.2766
9    12               768                95.44        3.4791         95.34        3.8368
10   12               768                96.76        2.5515         97.05        2.7851
11   12               768                98.76        1.5926         98.65        1.7747
12   12               768                99.61        0.8296         99.64        1.0077
of the different methods on the PolyU II database. In this table, the highest rank 1 identification rate and the lowest EER are shown in bold type. It can be seen that the proposed method achieves a rank 1 identification rate of 100% and an EER of 0.0216%, which is much better than the other methods, including our previous work HOL. Since there are drastic changes of illumination between the training and test images, the rank 1 identification rates of the original LBP and LTP descriptors are only about 88.7% and 86.6%, respectively. The performance of the LBPu2 and LTPu2 descriptors using the uniform pattern is worse than that of the original LBP and LTP descriptors, which demonstrates that the uniform pattern is inappropriate for robust palmprint recognition. The performance of LGBP_Mag, LGBP_Phase, HGPP and LGXP, which combine Gabor wavelets and the uniform pattern, is also not good in this setting. The gradient-based LDP, ELDP and LDNK likewise fail to achieve the desired performance, as shown by the experimental results. It is noteworthy that LDNG clearly outperforms LDP, ELDP and LDNK. The main reason is
Table 18
The performance of LLDP1G using the coding method of (15) under different values of k.

k    Feature length   Histogram length   Manhattan distance          Chi-square distance
                                         Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
1    12               768                99.97        0.0478         99.97        0.0431
2    12               768                99.95        0.2635         99.95        0.3174
3    12               768                98.68        1.4486         98.99        1.3822
4    12               768                99.07        0.7667         99.22        0.7166
5    12               768                99.46        0.5315         99.66        0.4605
6    12               768                99.59        0.3836         99.66        0.3187
7    12               768                99.61        0.4504         99.64        0.4074
8    12               768                99.56        0.4554         99.74        0.3930
9    12               768                99.28        0.6337         98.48        0.5863
10   12               768                98.91        0.8609         99.73        0.9123
11   12               768                99.79        0.3616         99.82        0.3405
12   12               768                100          0.0628         99.97        0.0410
Table 19
The performance of LLDPM under different combinations of direction index numbers.

Combination           Manhattan distance          Chi-square distance
                      Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
LLDP2M: (t12, t11)    99.97        0.1190         99.97        0.1073
(t12, t10)            99.97        0.2252         99.97        0.2341
(t12, t9)             99.92        0.3226         99.92        0.3442
(t12, t8)             99.82        0.3998         99.84        0.4168
(t12, t7)             99.84        0.4279         99.87        0.4566
(t12, t6)             99.90        0.4317         99.82        0.4287
(t12, t5)             99.84        0.4634         99.87        0.4599
(t12, t4)             99.92        0.3785         99.92        0.3848
(t12, t3)             99.92        0.3837         99.92        0.3730
(t12, t2)             99.97        0.2692         99.97        0.2683
LLDP3M: (t12, t1)     99.97        0.1867         99.97        0.2042
Table 20
The performance of LLDPG under different combinations of direction index numbers.

Combination           Manhattan distance          Chi-square distance
                      Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
LLDP2G: (t12, t11)    100          0.0366         100          0.0348
(t12, t10)            99.97        0.0348         99.97        0.0325
(t12, t9)             100          0.0317         100          0.0236
(t12, t8)             100          0.0274         100          0.0226
(t12, t7)             100          0.0260         100          0.0218
(t12, t6)             100          0.0256         100          0.0297
(t12, t5)             100          0.0264         100          0.0281
(t12, t4)             100          0.0250         100          0.0267
(t12, t3)             100          0.0250         100          0.0264
(t12, t2)             100          0.0249         100          0.0236
LLDP3G: (t12, t1)     100          0.0216         100          0.0261
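The direction-index coding compared in Tables 19 and 20 pairs the index of the minimum line response with the index of another ranked response. A minimal sketch under our own naming: with 12 directions, the pair maps into a 12 × 12 = 144-entry code space, which matches the feature length of 144 reported for LLDP2 and LLDP3 in Table 21.

```python
def lldp_pair_code(responses, second=1):
    """Direction-index coding over 12 line responses: combine the
    index of the smallest response with the index of the response
    ranked `second` positions from the smallest.
    second=1  -> the (t12, t11) pairing (two smallest), LLDP2-style;
    second=11 -> the (t12, t1) pairing (smallest with largest), LLDP3-style."""
    order = sorted(range(len(responses)), key=lambda i: responses[i])
    i_min, i_other = order[0], order[second]
    # 12 x 12 index pairs; i_min != i_other, so 12 * 11 codes are reachable.
    return i_min * len(responses) + i_other
```

Histograms over this 144-entry code space per block then form the feature vector, in contrast to the bit-string coding of LLDP1.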
that the derivative-of-Gaussian mask used in LDNG is also robust to noise and illumination changes. However, due to the complexity of palm lines, this mask is not robust enough for producing edge responses when extracting palmprint features; therefore, the recognition performance of LDNG is worse than that of LLDP. Additionally, LLDPG performs better than LLDPM, and LLDP3G achieves the best recognition performance.

In the above experiments, we used three palmprint images per palm for training. In reality, however, there is often only one image available as a template. So, we provide the
verification results for the case of one-to-one image matching. Here, we use one palmprint image from the first session for training (386 images) and all palmprint images from the second session for testing (3863 images). For comparison, we also conducted one-training-image experiments for the other descriptors. We ran each experiment three times, using the first, second, and third image from the first session for training, respectively; the average recognition rates are shown in Table 22. It can be seen that the performance of the LLDP descriptors decreases only slightly when one image is used for training, whereas the performance of the other descriptors drops noticeably.

4.2.3 Performance comparison between the LLDP descriptor and coding methods

We also compare the proposed method with several representative orientation coding based methods: RLOC [27], Ordinal Code [25], and Competitive Code (CompC) [28]. The rank 1 identification rates and EERs of these methods are listed in Table 23. It should be noted that we use the code of "the BioSecure tool used to evaluate the performance of a biometric verification system" [75] to calculate the EER values in Table 23. This code provides a 90% confidence interval for the EER, which allows one to determine whether accuracy differences between systems are statistically significant. Using a parametric method [76], the code calculates a confidence interval on the EER value: EER ∈ [ÊER − CI_EER, ÊER + CI_EER] [76], where ÊER is the calculated EER and CI_EER is the error margin of the EER. Table 23 also lists the values of CI_EER. To better illustrate the verification performance, the Receiver Operating Characteristic (ROC) curves of several methods are shown in Fig. 9, which plots the False Accept Rate (FAR) against the Genuine Accept Rate (GAR). From Table 23 and Fig. 9, it can be seen that the performance of the proposed method is comparable to the three orientation coding based methods on the PolyU II database. To compare the ROC curves of the different methods, we list the values of GAR under different values of FAR in Table 24. The results in Table 24 show that CompC [28], Ordinal Code [25], and the LLDP descriptor have close verification performance.

4.3 Experimental results on PolyU M_B database

Fig. 10 depicts six palmprint ROI images of the PolyU M_B database, which were captured from the same palm. The three images in the first row (Fig. 10a) were captured in the first session, while the images in the second row (Fig. 10b) were captured in the second session. Compared with the PolyU II database, there are no drastic changes of illumination between the images captured in different sessions in
Table 21
The performance comparison (3 training images) of different local descriptors on PolyU II database.

Method       Manhattan distance          Chi-square distance         Feature length   Histogram length
             Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
LBP          88.77        5.9247         91.10        4.9221          256              16,384
LBPu2        79.73        11.9052        80.90        10.2474         59               3776
LTP          86.59        6.4904         86.82        5.8261          256 × 2          32,768
LTPu2        80.20        10.7185        84.49        8.4173          59 × 2           7552
LGBP_Mag     85.71        6.5552         86.05        6.5047          59               9440
LGBP_Phase   87.83        5.3325         89.46        4.8180          59               9440
HGPP         89.98        5.1696         91.10        4.8611          59 × 4           37,760
LGXP         84.44        6.6275         89.75        4.7941          59               9440
CLBP         85.92        9.4767         89.28        6.6070          59               11,328
LDP          90.55        6.1023         85.17        7.8008          256              16,384
ELDP         91.92        5.0857         90.94        5.5744          64               4096
LDNK         91.61        4.8599         89.10        5.8114          64               4096
LDNG         99.15        1.3668         99.02        1.6877          64               4096
HOL          99.92        0.3634         99.90        0.5764          48               8112
LLDP1M       100          0.0859         100          0.0695          220              14,080
LLDP2M       100          0.1039         99.97        0.1281          144              9216
LLDP3M       99.95        0.1540         99.97        0.1710          144              9216
LLDP1G       99.97        0.0557         100          0.0650          220              14,080
LLDP2G       100          0.0266         100          0.0306          144              9216
LLDP3G       100          0.0216         100          0.0272          144              9216
Table 22
The performance comparison (1 training image) of different local descriptors on PolyU II database.

Method       Manhattan distance          Chi-square distance
             Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
LBP          85.55        7.2995         87.91        6.3368
LBPu2        73.78        13.4168        75.13        11.8124
LTP          82.37        8.1030         82.30        7.3534
LTPu2        75.16        11.8907        79.63        10.1375
LGBP_Mag     79.86        8.0325         80.06        8.0486
LGBP_Phase   81.68        6.7438         83.68        6.3997
HGPP         84.48        6.2279         85.77        6.0791
LGXP         76.64        8.0467         83.24        6.4319
CLBP         57.01        20.5584        58.41        19.7254
LDP          85.45        7.1834         80.17        9.0249
ELDP         87.42        6.1613         86.54        6.6480
LDNK         86.71        6.1870         83.78        7.1572
LDNG         96.73        2.4790         96.18        2.9207
HOL          99.47        0.5499         99.28        0.8010
LLDP1M       99.49        0.5203         99.50        0.5480
LLDP2M       99.40        0.5476         99.41        0.5468
LLDP3M       99.36        0.7735         99.39        0.7527
LLDP1G       99.77        0.3384         99.84        0.3411
LLDP2G       99.85        0.2123         99.90        0.1981
LLDP3G       99.84        0.1880         99.85        0.1696
Table 23
The performance comparison between the proposed LLDP descriptor and three orientation coding based methods on PolyU II database.

Method              Rank 1 (%)   EER (%)   CI_EER (%)
RLOC [27]           100          0.0521    0.0316
CompC Code [28]     100          0.0513    0.0316
Ordinal Code [25]   100          0.0497    0.0316
LLDP                100          0.0216    0.0222
Fig. 9. The ROC curves of the methods of CompC, Ordinal Code, RLOC, HOL, and LLDP on PolyU II database.
Table 24
The ROC comparison between the proposed LLDP descriptor and other methods: values of GAR (%) under different values of FAR (%).

Method              FAR = 10⁻⁴   FAR = 10⁻³   FAR = 10⁻²   FAR = 10⁻¹   FAR = 10⁰
RLOC [27]           97.17        99.58        99.81        99.94        100
CompC Code [28]     99.74        99.92        99.94        100          100
Ordinal Code [25]   99.71        99.87        99.94        100          100
HOL [53]            97.92        98.91        99.40        99.71        99.94
LLDP                99.69        99.82        99.98        100          100
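The EER (Table 23) and the GAR-at-FAR figures (Table 24) can both be obtained by sweeping a decision threshold over the genuine and impostor score sets. The sketch below is our own illustration, not the BioSecure tool [75]: it assumes similarity scores (higher means more similar), whereas distance-based matchers simply flip the comparisons.

```python
import numpy as np

def eer_and_gar(genuine, impostor, far_target=1e-3):
    """Sweep an acceptance threshold over similarity scores and return
    (EER, GAR at the requested FAR). A probe is accepted if its score
    is >= the threshold."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer, gar_at_far = np.inf, 1.0, 0.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:  # EER: the point where FAR == FRR
            best_gap, eer = abs(far - frr), (far + frr) / 2
        if far <= far_target:          # GAR = 1 - FRR at constrained FAR
            gar_at_far = max(gar_at_far, 1.0 - frr)
    return eer, gar_at_far
```

Sweeping far_target over a range of FAR values produces a ROC curve like those in Fig. 9; the confidence interval CI_EER of Table 23 additionally requires the parametric method of [76], which is not reproduced here.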
the PolyU M_B database. In this database, the first three palmprints from the first session are used for training and the palmprints from the second session are used for testing, so the numbers of training and test images are 1500 and 3000, respectively.
Fig. 10. Six palmprint ROI images from the same palm in the PolyU M_B database: (a) three images captured in the first session; (b) three images captured in the second session.
Table 25
The performance comparison of different local descriptors on PolyU M_B database.

Method       Manhattan distance          Chi-square distance
             Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
LBPu2        98.03        1.6179         97.97        1.6076
LBP_256      99.60        0.4949         99.63        0.6354
LTPu2        87.50        4.4085         91.37        3.4300
LTP_256      91.97        5.4779         92.53        5.7985
LGBP_Mag     98.13        1.0068         98.17        0.9325
LGBP_Phase   97.97        0.9491         98.27        0.9513
HGPP         97.50        1.2743         98.27        1.1380
LGXP         92.67        2.4587         95.87        1.4608
CLBP         99.40        0.7517         99.60        0.5613
LDP          99.07        1.3910         97.43        3.4288
ELDP         99.77        0.4129         99.73        0.4639
LDNK         99.57        0.5677         99.50        0.6159
LDNG         99.43        0.5299         99.53        0.5917
HOL          99.93        0.1683         99.90        0.1461
LLDP1M       99.97        0.0881         99.97        0.0956
LLDP2M       99.97        0.0410         99.97        0.0428
LLDP3M       100          0.0670         100          0.0652
LLDP1G       100          0.0358         100          0.0323
LLDP2G       100          0.0408         100          0.0377
LLDP3G       100          0.0264         100          0.0310
Table 26
The performance comparison between the proposed methods and three orientation coding methods on PolyU M_B database.

Method              Rank 1 (%)   EER (%)   CI_EER (%)
RLOC [27]           100          0.0327    0.0289
CompC Code [28]     100          0.0040    0.0006
Ordinal Code [25]   100          0.0408    0.0286
LLDP                100          0.0264    0.0283
Fig. 11. The ROC curves of the methods of Ordinal Code, RLOC, HOL and LLDP on PolyU M_B database.
In the experiments on the PolyU M_B database, we adopted the same parameters used in the experiments on the PolyU II database. Table 25 lists the identification and verification results of the different methods on the PolyU M_B database. Since the illumination conditions of palmprint capture are stable across the first and second sessions, the recognition performance of all methods increases markedly compared with the results obtained on the PolyU II database. Again, the proposed LLDP method achieves the best recognition performance among all local descriptors listed in Table 25. In particular, the EER of the proposed method is 0.0264%, which is quite a good result. We also compare the proposed method with three orientation coding methods, i.e., RLOC [27], Ordinal Code [25], and CompC Code [28]. The rank 1 identification
Table 27
The performance comparison of different local descriptors on Cross-Sensor database.

Method       Manhattan distance          Chi-square distance
             Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
LBPu2        69.47        23.9700        73.12        19.6332
LBP          85.98        9.0878         86.75        8.1677
LTPu2        18.68        30.4129        26.60        24.9382
LTP          17.82        30.7180        23.15        25.9687
LGBP_Mag     73.52        9.5105         74.00        9.5149
LGBP_Phase   71.00        8.4004         72.05        8.2443
HGPP         59.98        13.6673        59.78        14.1921
LGXP         52.12        16.3601        60.28        13.0956
CLBP         51.65        32.8989        57.27        34.4253
LDP          73.95        24.7654        69.25        26.5982
ELDP         76.35        17.3821        75.92        18.0259
LDNK         76.82        15.8315        76.08        16.3705
LDNG         87.10        6.8321         87.18        7.1457
HOL          97.45        2.5205         97.22        2.6638
LLDP1M       97.05        2.6490         96.83        2.7311
LLDP2M       96.78        2.6709         96.50        2.8125
LLDP3M       98.10        2.4360         98.20        2.3639
LLDP1G       96.93        2.0633         97.50        2.0337
LLDP2G       97.35        1.7208         97.45        1.6715
LLDP3G       98.15        1.4840         98.45        1.4700
Table 28
The performance comparison between the proposed methods and three orientation coding methods on Cross-Sensor database.

Method              Rank 1 (%)   EER (%)   CI_EER (%)
RLOC [27]           99.00        0.967     0.1383
CompC Code [28]     99.625       0.580     0.1050
Ordinal Code [25]   99.57        0.624     0.1097
LLDP                98.45        1.470     0.1679
Fig. 12. Six palmprint ROI images captured by three devices from the same palm in the Cross-Sensor database: (a) two images captured in the first and second sessions by the digital camera; (b) two images captured in the first and second sessions by the first mobile phone; (c) two images captured in the first and second sessions by the second mobile phone.
rates, EERs and CI_EERs of these methods are listed in Table 26, and Fig. 11 presents the ROC curves of several different methods. It can be seen that the performance of the proposed LLDP method is also comparable to these three orientation coding based methods on the PolyU M_B database.
4.4 Experimental results on Cross-Sensor palmprint database

Fig. 12 depicts six palmprint ROI images of the Cross-Sensor database, which were captured from the same palm. The images captured by the digital camera in the first and second sessions are shown in the first row (Fig. 12a); correspondingly, Fig. 12b and c show the images captured by the two mobile phones. In this database, the first three palmprints from the first session captured by the digital camera are used for training, and the palmprints from the second session captured by the two mobile phones are used for testing, so the numbers of training and test images are 600 and 4000, respectively.
Fig. 13. The ROC curves of the methods of CompC, Ordinal Code, RLOC, HOL, and LLDP on Cross-Sensor database.
In the experiments on the Cross-Sensor database, we adopted the same parameters determined in the experiments on the PolyU II database. Table 27 lists the identification and verification results of the different methods on the Cross-Sensor database. It can be seen that
Fig. 14. Six palmprint ROI images from six different palms in the IIT Delhi touchless palmprint database.

Table 29
The performance comparison of different local descriptors on IIT Delhi database.

Method       Manhattan distance          Chi-square distance
             Rank 1 (%)   EER (%)        Rank 1 (%)   EER (%)
LBPu2        76.04        10.5450        77.12        10.4240
LBP          79.83        9.7196         79.22        9.8006
LTPu2        77.59        10.4888        78.99        9.3968
LTP          80.25        9.3079         79.55        9.9003
LGBP_Mag     83.76        7.7969         83.11        7.9945
LGBP_Phase   79.93        8.8227         81.28        9.1120
HGPP         83.06        8.2444         82.92        8.5569
LGXP         73.56        11.3995        76.88        10.2439
CLBP         67.85        14.6187        67.38        15.3029
LDP          78.47        10.4233        76.18        11.2956
ELDP         79.22        10.5914        78.57        11.2048
LDNK         76.98        10.9347        76.60        11.2208
LDNG         76.42        10.8939        75.43        11.7268
HOL          83.86        7.1356         83.01        7.7864
LLDP1M       86.15        6.5468         86.20        6.7723
LLDP2M       85.91        6.7956         85.63        6.9611
LLDP3M       86.99        6.9546         86.71        6.9807
LLDP1G       90.78        4.6645         91.44        4.3718
LLDP2G       91.20        4.3482         91.48        4.1396
LLDP3G       92.00        4.0954         91.95        4.0725
Table 30
The performance comparison between the proposed methods and three orientation coding methods on IIT Delhi touchless palmprint database.

Method              Rank 1 (%)   EER (%)   CI_EER (%)
RLOC [27]           84.83        7.44      0.4926
CompC Code [28]     88.16        7.72      0.4959
Ordinal Code [25]   85.58        8.47      0.5174
LLDP                92.00        4.07      0.4133
the proposed LLDP3G descriptor achieves the best recognition performance among all local descriptors. We also compare the proposed method with three orientation coding methods, i.e., RLOC [27], Ordinal Code [25], and CompC Code [28]. The rank 1 identification rates, EERs and CI_EERs of these methods are listed in Table 28, and Fig. 13 presents the ROC curves of the different methods. On the Cross-Sensor database, the performance of LLDP is slightly worse than that of the orientation coding methods.
Fig. 15. The ROC curves of the methods of Ordinal Code, RLOC, and LLDP on IIT Delhi touchless palmprint database.
4.5 Experimental results on IIT Delhi touchless palmprint database

Fig. 14 depicts six palmprint ROI images of the IIT Delhi touchless palmprint database, which come from six different palms [73]. It
Table 31
Analysis of the computational complexity of feature extraction.

Method   Operation                                          Kernel size   Operations   Feature extraction time (s)
LBP      Pixel difference                                   3 × 3         1            0.037
LDP      Convolution with Kirsch gradient operator          3 × 3         8            0.180
LDNG     Convolution with derivative-of-Gaussian bank       7 × 7         8            0.140
HGPP     Convolution with Gabor wavelet                     11 × 11       40           0.352
HOLM     Line detection using MFRAT                         11 × 11       12           0.168
HOLG     Convolution with Gabor filter bank                 30 × 30       12           0.217
LLDPM    Line detection using MFRAT                         13 × 13       12           0.266
LLDPG    Convolution with Gabor filter bank                 30 × 30       12           0.281
should be noted that many ROI images in this database contain black areas, which inevitably reduces recognition performance. In this database, the first palmprint image of each palm is used for training and the remaining images (about 4–6 per palm) are used for testing, so the numbers of training and test images are 459 and 2137, respectively. Table 29 lists the identification and verification results of different local descriptors on the IIT Delhi touchless palmprint database. In this table, the proposed LLDP3G descriptor again achieves the best recognition performance among all local descriptors. We also compare the proposed method with three orientation coding methods, i.e., RLOC [27], Ordinal Code [25], and CompC Code [28]. The rank 1 identification rates, EERs and CI_EERs of these methods are listed in Table 30, and Fig. 15 presents the ROC curves of the different methods. On the IIT Delhi touchless palmprint database, the performance of LLDP is better than that of the orientation coding methods.

4.6 Analysis of computational complexity

For real-time applications, computational complexity is important because it determines processing speed. Table 31 provides an analysis of the computational complexity of feature extraction. Suppose the image size is n1 × n2 and the convolution kernel is small; the computational complexity of a 2D convolution can then be written as O(d), where d = n1 × n2. We implemented all algorithms in Matlab 2014 and tested the different descriptors on a PC with a 3.2 GHz CPU and 8 GB of RAM. The feature extraction times are listed in Table 31. It can be seen that the feature extraction time of the proposed methods is longer than that of the LBP descriptor. Here, we make a simple comparison of computational complexity between CompC Code [28] and the proposed LLDP descriptor. In the LLDPG descriptor, Gabor filters at 12 directions are used for feature extraction.
In CompC Code [28], Gabor filters at only 6 directions are used for feature extraction. Thus, feature extraction is faster for CompC Code than for LLDP. At the same time, CompC Code uses the Hamming distance for matching, which is much faster than the histogram matching of LLDP. As a result, the proposed LLDP is more suitable for verification than for identification.
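The matching-cost gap noted above can be illustrated as follows. This is our own sketch: coding methods such as CompC compare bit-packed code maps with a Hamming distance (a single XOR plus a bit count), while LLDP compares long block histograms with the Manhattan or chi-square distance used throughout this section.

```python
import numpy as np

def hamming_distance(a_bits, b_bits):
    """Fraction of differing bits between two packed uint8 code maps,
    as used by orientation coding methods: one XOR plus a popcount."""
    return np.unpackbits(a_bits ^ b_bits).mean()

def manhattan(h1, h2):
    """L1 distance between two concatenated block histograms."""
    return np.abs(h1 - h2).sum()

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms; eps avoids 0/0 bins."""
    return ((h1 - h2) ** 2 / (h1 + h2 + eps)).sum()
```

For LLDP the histograms hold 9216–14,080 floating-point bins per palmprint (Table 21), versus a few hundred packed bytes for a bit code, which is why Hamming matching is so much cheaper per comparison.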
5. Conclusion

In this paper, we extend feature extraction in LBP-structure descriptors from the intensity and gradient spaces to the line space, and propose a novel local feature descriptor, the local line directional pattern (LLDP), for palmprint recognition, which enriches the family of LBP-structure descriptors. The proposed method was tested on four palmprint databases: PolyU II, PolyU M_B, Cross-Sensor, and IIT Delhi touchless. On these four databases, the proposed LLDP achieves rank 1 identification rates of 100%, 100%, 98.45% and 92.00% in identification experiments, and EERs of
Kernel size 33 33 77 11 11 11 11 30 30 13 13 30 30
Operation times
Time of feature extraction (s)
1 8 8
0.037 0.180 0.140
40 12 12 12 12
0.352 0.168 0.217 0.266 0.281
0.0216%, 0.0264%, 1.470%, 4.09% in verification experiments. According to the experimental results, the recognition performance of proposed LLDP descriptor is obviously better than that of other LBP-structure descriptors. Besides of promising recognition performance, we have other valuable findings in this work. (1) Gabor filter bank is better than MFRAT to extract LLDP descriptor. (2) In LBP-structure descriptors, there are some different coding schemes. In this paper, we showed that the coding scheme based on direction index number can achieve better recognition performance than that of bit strings in Gabor based LLDP descriptors. (3) In the past, the minimum line response of palmprint was used for feature extraction since palm lines are dark lines. However, our work showed that line responses in other directions also contain useful information, which has not been fully investigated. In the summary, we proposed a promising local image descriptor for palmprint recognition. In our future work, based on LLDP descriptor, we will try to propose a new descriptor with better discriminating power, less computation complexity and faster matching speed for palmprint recognition.
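The direction-index coding behind finding (2) can be sketched as follows. This is a simplified, hypothetical illustration in the spirit of LDN-style coding, not the paper's exact implementation: a pixel is described by the indices of its extreme line responses rather than by a bit string.

```python
import numpy as np

def direction_index_code(responses):
    """Encode a pixel from its line responses at N directions
    (e.g. from MFRAT or the real part of a Gabor filter bank).

    The code packs two direction indices into a single number:
    the direction of the maximum response and the direction of the
    minimum response (palm lines are dark, so the minimum is informative).
    """
    n = len(responses)
    i_max = int(np.argmax(responses))  # strongest line response
    i_min = int(np.argmin(responses))  # weakest line response
    return i_max * n + i_min           # one code in [0, n*n)

# Toy example with 12 directions, as in the Gabor-based LLDP variant.
resp = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8], dtype=float)
print(direction_index_code(resp))  # argmax at 5, argmin at 1 -> 5*12 + 1 = 61
```

A histogram of such codes over image blocks then forms the feature vector, which is what the comparison with bit-string coding refers to.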
Acknowledgments

This work was supported by grants from the National Natural Science Foundation of China (Nos. 61175022, 61305006, 61305093, 61402018, and 61472115). The authors would like to thank the anonymous reviewers for their helpful and constructive comments, which greatly improved the paper. The authors would also like to express their gratitude to those who helped during the preparation of this paper: Miss Yun-Yun Xu at Hefei University of Technology, Dr. Mei-Na Kan at the Institute of Computing Technology, Chinese Academy of Sciences, Dr. Jie Chen at the University of Oulu, Dr. Xiao-Feng Qu at the Department of Computing, the Hong Kong Polytechnic University, and Dr. Qiu-Shi Zhao at Harbin University of Science and Technology.
References

[1] A. Kong, D. Zhang, M. Kamel, A survey of palmprint recognition, Pattern Recognit. 42 (7) (2009) 1408–1418.
[2] D. Zhang, W.M. Zuo, F. Yue, A comparative study of palmprint recognition algorithms, ACM Comput. Surv. 44 (1) (2012) 1–37.
[3] D. Zhang, W. Shu, Two novel characteristics in palmprint verification: datum point invariance and line feature matching, Pattern Recognit. 32 (1999) 691–702.
[4] D. Zhang, A. Kong, J. You, M. Wong, Online palmprint identification, IEEE Trans. Pattern Anal. Mach. Intell. 25 (9) (2003) 1041–1050.
[5] D. Zhang, V. Kanhangad, N. Luo, A. Kumar, Robust palmprint verification using 2D and 3D features, Pattern Recognit. 43 (1) (2010) 358–368.
[6] D. Zhang, G. Lu, W. Li, L. Zhang, N. Luo, Palmprint recognition using 3-D information, IEEE Trans. Syst., Man Cybern., Part C 39 (5) (2009) 505–519.
[7] A.K. Jain, J.J. Feng, Latent palmprint matching, IEEE Trans. Pattern Anal. Mach. Intell. 31 (6) (2009) 1032–1047.
[8] J. Dai, J. Zhou, Multi-feature based high-resolution palmprint recognition, IEEE Trans. Pattern Anal. Mach. Intell. 33 (5) (2011) 945–957.
[9] A. Kumar, Incorporating cohort information for reliable palmprint authentication, in: Proceedings of ICVGIP, Bhubaneswar, India, 583–590, December 2008.
[10] M. Aykut, M. Ekinci, AAM-based palm segmentation in unrestricted backgrounds and various postures for palmprint recognition, Pattern Recognit. Lett. 34 (2013) 955–962.
[11] R. Raghavendra, B. Dorizzi, A. Rao, G.H. Kumar, Designing efficient fusion schemes for multimodal biometric systems using face and palmprint, Pattern Recognit. 44 (5) (2011) 1076–1088.
[12] A. Kumar, S. Shekhar, Personal identification using multibiometrics rank-level fusion, IEEE Trans. Syst., Man, Cybern., Part C: Appl. Rev. 41 (5) (2011) 743–752.
[13] Z.H. Guo, D. Zhang, L. Zhang, W.H. Liu, Feature band selection for online multispectral palmprint recognition, IEEE Trans. Inf. Forensics Secur. 7 (3) (2012) 1094–1099.
[14] R. Raghavendra, C. Busch, Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition, Pattern Recognit. 47 (6) (2014) 2205–2221.
[15] X. Wang, L. Lei, M.Z. Wang, Palmprint verification based on 2D-Gabor wavelet and pulse-coupled neural network, Knowl.-Based Syst. 27 (2012) 451–455.
[16] X.Y. Jing, D. Zhang, A face and palmprint recognition approach based on discriminant DCT feature extraction, IEEE Trans. Syst., Man, Cybern. 34 (6) (2004) 2405–2415.
[17] G.Y. Chen, W.F. Xie, Pattern recognition with SVM and dual-tree complex wavelets, Image Vis. Comput. 25 (2007) 960–966.
[18] J. Lu, Y. Zhao, J. Hu, Enhanced Gabor-based region covariance matrices for palmprint recognition, Electron. Lett. 45 (17) (2009) 1–2.
[19] S. Saedi, N.M. Charkari, Palmprint authentication based on discrete orthonormal S-Transform, Appl. Soft Comput. 21 (2014) 341–351.
[20] J. Zhou, C.C. Liu, Y. Zhang, G.F. Lu, Object recognition using Gabor co-occurrence similarity, Pattern Recognit. 46 (2013) 434–448.
[21] X.M. Guo, W.D. Zhou, Y. Wang, Palmprint recognition algorithm with horizontally expanded blanket dimension, Neurocomputing 127 (2014) 152–160.
[22] X.Q. Wu, D. Zhang, K.Q. Wang, Palm line extraction and matching for personal authentication, IEEE Trans. Syst., Man, Cybern. Part A: Syst. Hum. 36 (5) (2006) 978–987.
[23] L. Liu, D. Zhang, J. You, Detecting wide lines using isotropic nonlinear filtering, IEEE Trans. Image Process. 16 (6) (2007) 1584–1595.
[24] D.S. Huang, W. Jia, D. Zhang, Palmprint verification based on principal lines, Pattern Recognit. 41 (4) (2008) 1316–1328.
[25] Z.N. Sun, T.N. Tan, Y.H. Wang, S.Z. Li, Ordinal palmprint representation for personal identification, Proceedings of CVPR 1 (2005) 279–284.
[26] Z.N. Sun, L.B. Wang, T.N. Tan, Ordinal feature selection for iris and palmprint recognition, IEEE Trans. Image Process. 23 (9) (2014) 3922–3934.
[27] W. Jia, D.S. Huang, D. Zhang, Palmprint verification based on robust line orientation code, Pattern Recognit. 41 (5) (2008) 1504–1513.
[28] A. Kong, D. Zhang, Competitive coding scheme for palmprint verification, Proceedings of the 17th ICPR 1 (2004) 520–523.
[29] F. Yue, W.M. Zuo, D. Zhang, FCM-based orientation selection for competitive code-based palmprint recognition, Pattern Recognit. 42 (11) (2009) 2841–2849.
[30] Z.H. Guo, D. Zhang, L. Zhang, W.M. Zuo, Palmprint verification using binary orientation co-occurrence vector, Pattern Recognit. Lett. 30 (13) (2009) 1219–1227.
[31] P. Hennings-Yeomans, B. Kumar, M. Savvides, Palmprint classification using multiple advanced correlation filters and palm-specific segmentation, IEEE Trans. Inf. Forensics Secur. 2 (3) (2007) 613–622.
[32] K. Ito, T. Aoki, H. Nakajima, A palmprint recognition algorithm using phase-only correlation, IEICE Trans. Fundam. E91-A (4) (2008) 1023–1030.
[33] G.M. Lu, D. Zhang, K.Q. Wang, Palmprint recognition using eigenpalms features, Pattern Recognit. Lett. 24 (9–10) (2003) 1463–1467.
[34] X.Q. Wu, D. Zhang, K.Q. Wang, Fisherpalms based palmprint recognition, Pattern Recognit. Lett. 24 (15) (2003) 2829–2838.
[35] J. Yang, A. Frangi, J. Yang, D. Zhang, J. Zhong, KPCA plus LDA: a complete kernel Fisher discriminant framework for feature extraction and recognition, IEEE Trans. Pattern Anal. Mach. Intell. 27 (2) (2005) 230–244.
[36] S.S. Wu, M.M. Sun, J.Y. Yang, Stochastic neighbor projection on manifold for feature extraction, Neurocomputing 74 (2011) 2780–2789.
[37] D. Hu, G. Feng, Z. Zhou, Two-dimensional locality preserving projections (2DLPP) with its application to palmprint recognition, Pattern Recognit. 40 (3) (2007) 339–342.
[38] R.X. Hu, W. Jia, D.S. Huang, Y.K. Lei, Maximum margin criterion with tensor representation, Neurocomputing 73 (10–12) (2010) 1541–1549.
[39] Y. Xu, Z.Z. Fan, M.N. Qiu, D. Zhang, J.Y. Yang, A sparse representation method of bimodal biometrics and palmprint recognition experiments, Neurocomputing 103 (2013) 164–171.
[40] N. Zhang, J. Yang, Low-rank representation based discriminative projection for robust feature extraction, Neurocomputing 111 (2013) 13–20.
[41] T. Ojala, M. Pietikainen, T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal. Mach. Intell. 24 (7) (2002) 971–987.
[42] D. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2) (2004) 91–110.
[43] H. Bay, A. Ess, T. Tuytelaars, L.V. Gool, Speeded-up robust features (SURF), Comput. Vision Image Underst. 110 (2008) 346–359.
[44] N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, Proc. 9th Eur. Conf. on CV 1 (2005) 886–893, San Diego, CA.
[45] E. Tola, V. Lepetit, P. Fua, DAISY: an efficient dense descriptor applied to wide-baseline stereo, IEEE Trans. Pattern Anal. Mach. Intell. 32 (5) (2010) 815–830.
[46] M. Calonder, V. Lepetit, C. Strecha, P. Fua, BRIEF: binary robust independent elementary features, Proc. Eur. Conf. Comput. Vis. (ECCV) 6314 (2010) 778–792.
[47] X. Yang, K.T. Cheng, Local difference binary for ultrafast and distinctive feature description, IEEE Trans. Pattern Anal. Mach. Intell. 36 (1) (2014) 188–194.
[48] G.S. Badrinath, G. Phalguni, Palmprint verification using SIFT features, in: Proceedings of Image Processing Theory, Tools and Applications, 2008.
[49] J.S. Chen, Y.S. Moon, Using SIFT features in palmprint authentication, in: Proceedings of the 19th International Conference on Pattern Recognition, 2008.
[50] X.Q. Wu, Q.S. Zhao, W. Bu, A SIFT-based contactless palmprint verification approach using iterative RANSAC and local palmprint descriptors, Pattern Recognit. 47 (10) (2014) 3314–3326.
[51] G.S. Badrinath, G. Phalguni, Palmprint based verification system using SURF features, Commun. Comput. Inf. Sci. 40 (2009) 250–262.
[52] A. Ghandehari, M. Anvaripour, S. Soltanpour, Palmprint verification and identification using pyramidal HOG feature and fast tree based matching, in: Proceedings of the 5th IAPR International Conference on Biometrics (ICB), New Delhi, 2012.
[53] W. Jia, R.X. Hu, Y.K. Lei, Y. Zhao, J. Gui, Histogram of oriented lines for palmprint recognition, IEEE Trans. Syst., Man, Cybern.: Syst. 44 (3) (2014) 385–395.
[54] J.J. Qian, J. Yang, G.W. Gao, Discriminative histograms of local dominant orientation (D-HLDO) for biometric image feature extraction, Pattern Recognit. 46 (10) (2013) 2724–2739.
[55] Z.H. Guo, L. Zhang, D. Zhang, X.Q. Mou, Hierarchical multiscale LBP for face and palmprint recognition, in: Proceedings of the 2010 IEEE 17th International Conference on Image Processing, Hong Kong, 2010.
[56] L.L. Shen, Z. Ji, D. Zhang, Z.H. Guo, Applying LBP operator to Gabor response for palmprint identification, in: International Conference on Information Engineering and Computer Science, Wuhan, China, 2009.
[57] M.R. Mu, Q.Q. Ruan, S. Guo, Shift and gray scale invariant features for palmprint identification using complex directional wavelet and local binary pattern, Neurocomputing 74 (2011) 3351–3360.
[58] D. Huang, C.F. Shan, M. Ardabilian, Y.H. Wang, Local binary patterns and its application to facial image analysis: a survey, IEEE Trans. Syst., Man, Cybern., Part C: Appl. Rev. 41 (6) (2011) 765–781.
[59] X.Y. Tan, B. Triggs, Enhanced local texture feature sets for face recognition under difficult lighting conditions, IEEE Trans. Image Process. 19 (6) (2010) 1635–1650.
[60] S. Liao, M.W.K. Law, A.C.S. Chung, Dominant local binary patterns for texture classification, IEEE Trans. Image Process. 18 (5) (2009) 1107–1118.
[61] M. Heikkila, M. Pietikainen, C. Schmid, Description of interest regions with center-symmetric local binary patterns, Proc. Comput. Vis., Graph. Image Process. 4338 (2006) 58–69.
[62] B.C. Zhang, Y.S. Gao, S.Q. Zhao, J.Z. Liu, Local derivative pattern versus local binary pattern: face recognition with high-order local pattern descriptor, IEEE Trans. Image Process. 19 (2) (2010) 533–544.
[63] Z.H. Guo, L. Zhang, D. Zhang, A completed modeling of local binary pattern operator for texture classification, IEEE Trans. Image Process. 19 (6) (2010) 1657–1663.
[64] W.C. Zhang, S.G. Shan, W. Gao, et al., Local Gabor binary pattern histogram sequence (LGBPHS): a novel non-statistical model for face representation and recognition, Proc. 10th IEEE Int. Conf. Comput. Vis. 1 (2005) 786–791.
[65] B.H. Zhang, S.G. Shan, X.L. Chen, et al., Histogram of Gabor phase patterns (HGPP): a novel object representation approach for face recognition, IEEE Trans. Image Process. 16 (1) (2007) 57–68.
[66] S.F. Xie, S.G. Shan, X.L. Chen, et al., Fusing local patterns of Gabor magnitude and phase for face recognition, IEEE Trans. Image Process. 19 (5) (2010) 1349–1361.
[67] T. Jabid, M.H. Kabir, O. Chae, Robust facial expression recognition based on local directional pattern, ETRI J. 32 (5) (2010) 784–794.
[68] F.J. Zhong, J.S. Zhang, Face recognition with enhanced local directional patterns, Neurocomputing 119 (2013) 375–384.
[69] F. Ahmed, Gradient directional pattern: a robust feature descriptor for facial expression recognition, Electron. Lett. 48 (19) (2012) 1203–1204.
[70] A.R. Rivera, J.R. Castillo, O. Chae, Local directional number pattern for face analysis: face and expression recognition, IEEE Trans. Image Process. 22 (5) (2013) 1740–1752.
[71] D. Zhang, Z.H. Guo, G.M. Lu, L. Zhang, W.M. Zuo, An online system of multispectral palmprint verification, IEEE Trans. Instrum. Meas. 59 (2) (2010) 480–490.
[72] W. Jia, R.X. Hu, J. Gui, X.M. Ren, Palmprint recognition across different devices, Sensors 12 (6) (2012) 7938–7964.
[73] A. Kumar, S. Shekhar, Personal identification using rank-level fusion, IEEE Trans. Syst., Man, Cybern.: Part C 41 (5) (2011) 743–752.
[74] W.M. Zuo, F. Yue, D. Zhang, On accurate orientation extraction and appropriate distance measure for palmprint recognition, Pattern Recognit. 43 (4) (2011) 964–972.
[75] http://svnext.it-sudparis.eu/svnview2-eph/ref_syst/Tools/PerformanceEvaluation/.
[76] R.M. Bolle, N.K. Ratha, S. Pankanti, Evaluation techniques for biometrics-based authentication systems (FRR), Proc. 15th Int. Conf. Pattern Recognit. 2 (2000) 835–841.
Yue-Tong Luo received his Ph.D. degree in computer science from Hefei University of Technology, Hefei, China, in 2005. He is currently an associate professor in the Department of Computer Science at Hefei University of Technology. His current research interests are in image processing and scientific visualization.
Lan-Ying Zhao received her B.Sc. degree in computer science from Hefei University of Technology, Hefei, China, in 2013. She is currently working toward the M.Sc. degree at Hefei University of Technology. Her research interests include pattern recognition and image processing.
Bob Zhang is currently an Assistant Professor in the Department of Computer and Information Science at the University of Macau. He obtained his B.A. in Computer Science from York University in 2006, a M.A.Sc. in Information Systems Security from Concordia University in 2007, and a Ph.D. in Electrical and Computer Engineering from the University of Waterloo in 2011. After graduating from Waterloo he remained with the Center for Pattern Recognition and Machine Intelligence, and later worked as a Post Doctoral Researcher in the Department of Electrical and Computer Engineering at Carnegie Mellon University. His research interests focus on medical biometrics, pattern recognition, and image processing.
Wei Jia received the B.Sc. degree in informatics from Central China Normal University, Wuhan, China, in 1998, the M.Sc. degree in computer science from Hefei University of Technology, Hefei, China, in 2004, and the Ph.D. degree in pattern recognition and intelligent systems from the University of Science and Technology of China, Hefei, China, in 2008. He is currently an associate professor at Hefei Institutes of Physical Science, Chinese Academy of Sciences. His research interests include biometrics, pattern recognition, and image processing.
Feng Xue received his Ph.D. degree in computer science from Hefei University of Technology, Hefei, China, in 2006. He is currently an associate professor in the Department of Computer Science at Hefei University of Technology. His current research interests are in digital image processing and computer vision.
Jing-Ting Lu received the B.Sc. and M.Sc. degrees in computer science from Hefei University of Technology, Hefei, China, in 2004 and 2009, respectively. She is currently working toward the Ph.D. degree at Hefei University of Technology. Her research interests include machine learning and computer animation.
Yi-Hai Zhu received the Ph.D. degree in Computer Engineering from the University of Rhode Island (URI), Kingston, RI, USA, in 2014. He has broad research interests, including cybersecurity, smart grid security, cyber-physical systems, big data, and pattern recognition. He was a recipient of the Best Paper Award at the IEEE International Conference on Communications (2014), the Best Bachelor Thesis Award of HFUT (2007), and the Second Prize of “Soccer Simulation 3D” at the RoboCup China Open (2006).