A histogram-based dominant wavelet domain feature selection algorithm for palm-print recognition



Computers and Electrical Engineering xxx (2013) xxx–xxx


Hafiz Imtiaz and Shaikh Anowarul Fattah, Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000, Bangladesh


Article history: Received 21 July 2012; received in revised form 15 January 2013; accepted 15 January 2013; available online xxxx.

Abstract

A feature extraction algorithm for palm-print recognition based on the two-dimensional discrete wavelet transform is proposed in this paper, which efficiently exploits the local spatial variations in a palm-print image. The palm image is segmented into several spatial modules, and a palm-print recognition scheme is developed that extracts histogram-based dominant wavelet features from each of these local modules. The effect of modularization on the entropy content of the palm-print images has been analyzed. The selection of dominant features for recognition not only drastically reduces the feature dimension but also precisely captures the detail variations within the palm-print image. It is shown that the modularization of the palm-print image enhances the discriminating capabilities of the proposed features and thereby results in high within-class compactness and between-class separability of the extracted features. Different types of Daubechies wavelets (in terms of the number of vanishing moments, i.e., db1–db10) have been utilized for feature extraction, and the effect on the recognition performance has also been investigated. In order to further reduce the feature dimension, a principal component analysis is performed. It is found from our extensive experiments on different palm-print databases that the performance of the proposed method, in terms of recognition accuracy and computational complexity, is superior to that of some recent methods. © 2013 Elsevier Ltd. All rights reserved.

Reviews processed and approved for publication by Editor-in-Chief Dr. Manu Malek.
Corresponding author: Hafiz Imtiaz. Tel.: +880 1677103153. E-mail addresses: hafi[email protected] (H. Imtiaz), [email protected] (S. Anowarul Fattah).
0045-7906/$ - see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.compeleceng.2013.01.006
Please cite this article in press as: Imtiaz H, Anowarul Fattah S. A histogram-based dominant wavelet domain feature selection algorithm for palm-print recognition. Comput Electr Eng (2013), http://dx.doi.org/10.1016/j.compeleceng.2013.01.006

1. Introduction

Automatic palm-print recognition has widespread applications in security, authentication, surveillance, and criminal identification. Although very popular, conventional ID-card and password based identification methods are no longer reliable because of the use of several advanced techniques of forgery and password hacking. As an alternative, biometrics are being used for identity access management [1]. A biometric is defined as an intrinsic physical or behavioral trait of human beings; the main advantage of biometric features is that they are not prone to theft and loss, and do not rely on the memory of their users. Several biometrics, such as palm-print, finger-print, face and iris, remain almost unchanged over the lifetime of a person. Moreover, it is difficult for one to alter his/her own physiological biometric or to imitate that of others. Among the different biometrics, palm-prints have recently been receiving growing attention among researchers [2,3] in security applications where a digital identity can be collected. The inner surface of the palm normally contains three flexion creases, known as principal lines, together with secondary creases and ridges. These complex line patterns are very useful in personal identification. Nevertheless, palm-print recognition is a complicated visual task even for humans. The primary difficulty arises from the fact that different palm-print images of a particular person may vary largely, while those of different people may not necessarily vary significantly. Moreover, some aspects of palm-print images, such as the presence of noise and variations in illumination, position, and scale, make the recognition task more complicated [4].

A typical palm-print recognition system consists of some major steps, namely, input palm-print image collection, preprocessing, feature extraction, classification, and template storage or database. A balanced treatment of all the steps (or phases) of a biometric recognition system is necessary to obtain an acceptable performance rate [5]. However, the major focus of this research is to develop an efficient algorithm to extract accurate features, which is very difficult in real-world scenarios. Moreover, the quality of the acquired images is a factor in extracting accurate features in many real-life applications. Generally, either line-based or texture-based feature extraction algorithms are employed [6] in palm-print recognition systems. In the line-based schemes, different edge detection methods are used to extract different palm lines (principal lines, wrinkles, ridges, etc.) [7,8]. The extracted edges, either directly or represented in other formats or domains, are used for template matching. For example, the Canny edge detector is used for detecting palm lines in [7], whereas feature vectors are formed based on low-resolution edge maps in [8]. It is to be mentioned that if more than one person were to possess similar principal lines, line-based algorithms might result in erroneous identification. Texture-based feature extraction schemes can be used to overcome this limitation, where the variations existing in either the different blocks of the images or the features extracted from those blocks are computed [9–16].
In this regard, principal component analysis (PCA) or linear discriminant analysis (LDA) is usually employed directly on palm-print image data, or some popular transforms, such as the Fourier transform, the wavelet transform and the discrete cosine transform (DCT), are used for extracting features from the image data. For example, in [11], statistical features of different sub-images are extracted from the DCT of the spatial data. The method in [15] proposed to perform wavelet decomposition and extract the most discriminating information from the low-frequency data using 2D-PCA. The concept of texture energy is introduced in [14] to define both global and local features of a palm-print, whereas [13] proposed an online multi-spectral palm-print system which analyzes the features extracted from different bands and employs a score-level fusion scheme to integrate the multi-spectral information. The wavelet-based approach is well known to be among the most robust feature extraction schemes, even under variable illumination [17–19]. Various classifiers, such as decision-based neural networks and Euclidean distance based classifiers, are employed for palm-print recognition [7,8] using the extracted features.

Despite many relatively successful attempts to implement a robust palm-print recognition system, a single approach that combines accuracy and low computational burden is yet to be developed. In this paper, we propose to extract spatial variations from each local zone of the entire palm-print image, instead of concentrating on a single global variation pattern, in order to achieve distinguishable features for different people.
In the proposed palm-print recognition scheme, the palm-print image of a person is segmented into several small modules, and a wavelet domain feature extraction algorithm using the two-dimensional discrete wavelet transform (2D-DWT) is developed to extract histogram-based dominant wavelet coefficients corresponding to those spatial modules. The effect of modularization on the entropy content of the palm-print images has been investigated. It is also shown that the discriminating capabilities of the proposed features are enhanced because of the modularization of the palm-print image. In comparison to the Fourier transform, the DWT is preferred as it possesses better space-frequency localization. The variation of the recognition performance with the module size, with different types of Daubechies wavelets (in terms of the number of vanishing moments, i.e., db1–db10) and in the presence of different types of noise has also been investigated. Moreover, the improvement in the quality of the extracted features as a result of illumination adjustment has also been analyzed. Apart from considering only the dominant features, a further reduction of the feature dimension is obtained by employing PCA, and the recognition task is then carried out using a simple distance-based classifier.

2. Proposed method

For any type of biometric recognition, the most important task is to extract distinguishing features from the template data, which directly dictates the recognition accuracy. In comparison to person recognition based on face or voice biometrics, palm-print recognition is very challenging even for a human being. For palm-print recognition, obtaining a significant feature space with respect to the spatial variation in a palm-print image is crucial. Moreover, a direct subjective correspondence between palm-print features in the spatial domain and those in the wavelet domain is not very apparent.
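The back end of the scheme just outlined (dimensionality reduction of the extracted features by PCA, followed by a simple distance-based classifier) can be summarized in a short numpy sketch. This is a generic illustration under our own assumptions, not the authors' exact implementation; all function names (`pca_fit`, `pca_transform`, `classify`) and the toy data in the usage below are ours.

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA on a feature matrix X (one sample per row) via SVD."""
    mean = X.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    """Project samples onto the retained principal axes."""
    return (np.asarray(X, dtype=float) - mean) @ components.T

def classify(train_feats, train_labels, test_feat):
    """Minimum-Euclidean-distance classifier over training features."""
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    return train_labels[int(np.argmin(dists))]
```

In the full scheme, each row of the training matrix would hold the dominant wavelet features gathered from all modules of one palm-print image.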
In what follows, we demonstrate the proposed feature extraction algorithm for palm-print recognition, where the spatial domain local variation is extracted through a wavelet domain transform.

2.1. Proposed feature

For biometric recognition, feature extraction can be carried out using mainly two approaches, namely, the spatial domain approach and the frequency domain approach [20]. The spatial domain approach utilizes the spatial data directly from the palm-print image or employs some statistical measure of the spatial data. On the other hand, frequency domain approaches employ some kind of transform of the palm-print image for feature extraction. In the case of frequency domain feature extraction, a pixel-by-pixel comparison between palm-print images in the spatial domain is not necessary. Phenomena such as rotation, scale and illumination are more severe in the spatial domain than in the frequency domain. Recently, multi-resolution analysis, such as wavelet analysis, has also been gaining popularity among researchers. In what follows, we develop a feature extraction algorithm based on a multi-resolution transformation.

It is well known that Fourier transform based palm-print recognition algorithms involve complex computations, and the choices of spatial and frequency resolution are limited. In contrast, the DWT offers a much better space-frequency localization. This property of the DWT is helpful for analyzing images, where the information is localized in space. The wavelet transform is analogous to the Fourier transform, with the exception that it uses scaled and shifted versions of wavelets, and the decomposition of a signal involves a sum of these wavelets. The DWT kernels exhibit properties of horizontal, vertical and diagonal directionality. It should be mentioned that the continuous wavelet transform (CWT) and the wavelet series (WS) are defined for continuous-time signals, while the discrete wavelet transform (DWT) is defined for discrete-time signals. Because of the pyramidal algorithm and dyadic sampling [21], the DWT can be computed fast and efficiently. For a discrete-time sequence $x[n]$, $n \in \mathbb{Z}$, the DWT is defined by a discrete-time multi-resolution decomposition [21], and can be computed by the pyramidal algorithm,

$$m_n^0 = x[n], \qquad (1)$$
$$m_n^j = \sum_k m_k^{j-1}\,\tilde{g}[k-2n], \quad j = 1, 2, \ldots, \qquad (2)$$
$$x_n^j = \sum_k m_k^{j-1}\,\tilde{h}[k-2n], \quad j = 1, 2, \ldots, \qquad (3)$$

where $\tilde{g}$ and $\tilde{h}$ are the analysis scaling and wavelet filters, respectively [21].

The approximate wavelet coefficients are the high-scale, low-frequency components of the signal, whereas the detail wavelet coefficients are the low-scale, high-frequency components. The 2D-DWT of two-dimensional data is obtained by computing the one-dimensional DWT, first along the rows and then along the columns of the data. Thus, for 2D data, the detail wavelet coefficients can be classified as vertical, horizontal and diagonal details. In the proposed palm-print recognition scheme, the 2D-DWT is used for feature extraction.

Several palm-print images of different people have been investigated, and it is observed that there exist some correspondences between palm-print features in the spatial domain image and those in the corresponding wavelet domain transform. For example, in Fig. 1a–c, three separate palm-print images of three different people are shown, along with their corresponding approximate coefficients in Fig. 1d–f. It is observed from these three figures that only strong low-frequency parts are visualized in the approximate coefficients' plots. In Fig. 1g–i, the corresponding diagonal coefficients are shown; the diagonal parts of the creases of the palm-print images are highlighted in these plots. Similarly, in Fig. 1j–l and m–o, the horizontal and vertical coefficients of the palm-prints are shown. Only the horizontal parts of the creases of the palm-print images are highlighted in the horizontal coefficients' plots, whereas the vertical parts of the creases are highlighted in the vertical coefficients' plots. In order to demonstrate the effect of rotation on the extracted features in the frequency domain, Fig. 2a and b shows two synthetic palm-print images. The two palm-prints are basically the same, except that the second one is derived by rotating the first one by 30°.
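The separable row-then-column construction of the 2D-DWT described above can be sketched with the Haar filter pair (db1, the simplest of the Daubechies wavelets evaluated in the paper). This is a minimal single-level sketch under our own naming; which lowpass/highpass combination is called "horizontal" versus "vertical" detail varies between references.

```python
import numpy as np

def _analyze(a):
    """1D Haar analysis along the last axis: (lowpass, highpass).

    Assumes the last axis has even length. The 1/sqrt(2) scaling makes
    the transform orthonormal (energy preserving).
    """
    return ((a[..., 0::2] + a[..., 1::2]) / np.sqrt(2.0),
            (a[..., 0::2] - a[..., 1::2]) / np.sqrt(2.0))

def dwt2_haar(img):
    """Single-level 2D-DWT with Haar (db1) filters, rows then columns.

    Returns the approximate sub-band and the three detail sub-bands.
    Sub-band naming here (LH = row-lowpass/column-highpass, etc.) is a
    convention of this sketch.
    """
    img = np.asarray(img, dtype=float)
    L, H = _analyze(img)        # 1D DWT along the rows
    LL, LH = _analyze(L.T)      # then along the columns of each branch
    HL, HH = _analyze(H.T)
    return LL.T, LH.T, HL.T, HH.T
```

Applying `dwt2_haar` again to the approximate sub-band yields the deeper levels of the pyramidal decomposition.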
In order to analyze the feature quality, one common approach is to measure the normalized Euclidean distance (NED). In this case, the NED between the pixel values of the original and rotated images is computed. Next, for both the original and rotated images, the 2D-DWT is performed and the vertical and horizontal coefficients are considered. The NED between the wavelet coefficients (both vertical and horizontal) of the original and rotated images is then computed. The NEDs obtained in the spatial and DWT domains are shown in Fig. 2c. It is intuitive that the difference between the original and the rotated palm-print images in the spatial domain would be greater than the difference in the corresponding 2D-DWT coefficients, as in the latter case the vertical and horizontal DWT coefficients precisely signify the changes in the vertical and horizontal directions in the wavelet domain. It is observed from the figure that the latter provides an orders-of-magnitude lower Euclidean distance than the former, which is an indication that better feature quality can be expected in the wavelet domain in comparison to the spatial domain. However, it is to be noted that, in order to obtain a shift- and rotation-invariant wavelet transform, one may employ undecimated DWT approaches [22,23] instead of the conventional DWT.

2.2. Illumination adjustment

It is intuitive that palm images of a particular person captured under different lighting conditions may vary significantly, which can affect the palm-print recognition accuracy. In order to overcome the effect of lighting variation in the proposed method, illumination adjustment is performed prior to feature extraction. Given two palm-print images of a single person having different intensity distributions due to variation in illumination conditions, our objective is to provide similar feature vectors for these two images irrespective of the difference in illumination conditions.
Since in the proposed method feature extraction is performed in the DWT domain, it is of interest to analyze the effect of variation in illumination on the DWT-based feature extraction. In Fig. 3a, two palm-print images of the same person are shown, where the second image has a slightly lower average illumination level. The 2D-DWT operation is performed on each image, first without any illumination adjustment and then after performing illumination adjustment. Considering all the 2D-DWT approximate coefficients to form the feature vectors

Fig. 1. (a)–(c) Sample palm-print images of three different people, (d)–(f) corresponding approximate coefficients of wavelet transform, (g)–(i) corresponding diagonal coefficients of wavelet transform, (j)–(l) corresponding horizontal coefficients of wavelet transform and (m)–(o) corresponding vertical coefficients of wavelet transform.

for these two images, a measure of similarity can be obtained by using correlation. In Fig. 3b and c, the cross-correlation values of the 2D-DWT approximate coefficients obtained by using the two images without and with illumination adjustment are shown, respectively. It is evident from these two figures that the latter case exhibits more similarity between the DWT approximate coefficients, indicating that the features belong to the same person. The similarity measures in terms of the Euclidean distance between the 2D-DWT approximate coefficients of the two images for the aforementioned two cases are also calculated and shown in Fig. 3d. It is observed that there exists a huge separation in terms of Euclidean distance when no illumination adjustment is performed, whereas the distance is very small when illumination adjustment is performed, as expected, which clearly indicates a better similarity between the extracted feature vectors.
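The effect just described can be reproduced in a small numpy experiment: two synthetic stand-ins for captures of the same palm that differ only by a brightness offset are compared before and after a global mean-subtraction adjustment. The synthetic data, the fraction b = 1 of the mean that is subtracted, and the use of normalized cross-correlation as the similarity measure are illustrative choices of this sketch, not values taken from the paper.

```python
import numpy as np

def illum_adjust(img, b=1.0):
    """Global illumination adjustment: subtract a fraction b of the
    image's average intensity from every pixel (b = 1.0 is an
    illustrative choice, not a value from the paper)."""
    img = np.asarray(img, dtype=float)
    return img - b * img.mean()

def similarity(a, b):
    """Normalized cross-correlation of two flattened arrays."""
    a, b = np.ravel(a), np.ravel(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two stand-ins for captures of the same palm under different lighting:
# the same random pattern, the second uniformly brighter.
rng = np.random.default_rng(0)
base = rng.uniform(0.0, 1.0, size=(64, 64))
bright = base + 0.4

raw_sim = similarity(base, bright)                              # no adjustment
adj_sim = similarity(illum_adjust(base), illum_adjust(bright))  # adjusted
```

With b = 1 the two adjusted images become identical zero-mean patterns, so their correlation rises to 1, mirroring the improvement seen in Fig. 3c over Fig. 3b.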


Fig. 2. (a) An artificial palm-print image, (b) its rotated version and (c) comparison between the normalized Euclidean distances of the two palm-prints and of their 2D-DWT vertical and horizontal coefficients.

In the proposed method, the most conventional way of illumination adjustment, that is, subtracting a certain percentage of the average intensity level from the entire palm image, is employed. Generally, this global approach to illumination adjustment may provide better feature quality depending on the level of adjustment. In this case, from each image pixel intensity f(i, j), some percentage of the average intensity value of the whole image is subtracted to obtain an illumination-adjusted pixel intensity fadj(i, j), which, for an M × N dimensional image, can simply be defined as

$$f_{\mathrm{adj}}(i,j) = f(i,j) - \frac{b}{M N} \sum_{i=1}^{M} \sum_{j=1}^{N} f(i,j), \qquad (4)$$

where b controls the level of subtraction.

Fig. 3. (a) Two sample palm-print images of the same person under different illumination; correlation of the 2D-DWT approximate coefficients of the sample palm-print images shown in Fig. 3a: (b) no illumination adjustment, (c) illumination adjusted; and (d) Euclidean distance between the 2D-DWT approximate coefficients of the sample palm-print images.

2.3. Modularization and its effect upon information content

Palm-prints of a person possess some major and minor line structures along with some ridges and wrinkles. A person can be distinguished from another person based on the differences in these major and minor line structures. Fig. 4a and b shows sample palm-print images of two different people. The three major lines of the two people are quite similar; they differ only in their minor line structure. In this case, if we consider the line structures of the two images locally, we may distinguish the two images. For example, if we look closely within the bordered regions of the palm-print images in Fig. 4a and b, they seem different. Moreover, it is evident that a small block of the palm-print image may or may not contain the major lines but will definitely contain several minor lines. These minor lines may not be properly enhanced or captured when operating on an image as a whole and may not contribute to the feature vector. Hence, in that case, the feature vectors of the palm-prints shown in Fig. 4a and b may be similar enough to be wrongfully identified as belonging to the same person. Therefore, we propose to extract features from local zones of the palm-print images.

It is to be noted that, within a particular palm-print image, the change in information over the entire image may not be properly captured if the DWT features are selected considering the entire image as a whole. Even if this is done, it may offer features with very low between-class separation. In order to obtain high within-class compactness as well as high between-class separability, we modularize the palm-print images into smaller segments, which are capable of capturing variations in image geometry locally. For the purpose of modularization, different non-uniform blocks have also been incorporated in the feature extraction process, and it has been found that no significant change in recognition performance is achieved if non-uniform blocks are employed instead of uniform blocks. Moreover, the precision in centering the test and training images is a noteworthy aspect: a significant misalignment of the positions of the test and training images may affect the recognition accuracy. In particular, if features are extracted from the palm-print image as a whole instead of from small spatial modules, the recognition accuracy would be severely affected. On the contrary, when a small block of an image is considered, the effect of the overall misalignment of the image is much smaller and can even be ignored, depending on the degree of misalignment.

One way to capture the spatial variation pattern existing in a palm-print image is to measure the intensity variation among different pixels of the image. It is of interest to analyze the variation of such information content when, instead of the whole image, different portions of the image are considered. For estimating the amount of information in a given image or segment of an image, an entropy-based measure of intensity variation is defined as [24]


Fig. 4. (a)–(b) Sample palm-print images of two people; the square blocks contain portions of the images without any minor line and with a minor line; (c)–(e) sample palm-print images of three different people, (f)–(h) corresponding histograms, (i)–(k) entropy distribution of different segments of the palm-print images corresponding to Fig. 4c–e and (l) variation of entropy with segment size.


$$H = -\sum_{k=1}^{m} p_k \log_2 p_k, \qquad (5)$$

where the probabilities $\{p_k\}_1^m$ are obtained from the intensity distribution of the pixels of that particular image or segment of the image. Three different palm-print images are shown in Fig. 4c–e along with their corresponding histograms, calculated from the intensity distributions of those images, in Fig. 4f–h. Based on these histograms, a general trend of the intensity distribution of palm-print images can be observed: the distribution follows an almost similar pattern for the three different palm-print images. One can compute the information content, in terms of entropy, of the entire palm image using (5). For the purpose of comparison, from the entropy of the entire image, the average entropy per segment $\tilde{H}$ is computed by taking into account the total number of segments to be used for modularization. In Fig. 4i, the average entropy per segment $\tilde{H}$ computed for the image shown in Fig. 4c, considering 100 segments each having a size of 15 × 15, is shown.

Next, we consider different small segments of the palm-print images and compute the entropy $H_i$ of each segment based on the histogram of the corresponding segment. In Fig. 4i, the entropy values computed from different segments of the palm-print image shown in Fig. 4c are also plotted along with the average entropy per segment calculated using the entire palm-print image. Fig. 4j and k shows similar entropy measures for the other two palm images shown in Fig. 4d and e. It can be clearly observed that the entropy measures vary significantly among the different palm-print image segments. Moreover, the average value of the segmental entropies (shown in Fig. 4i–k as dot-dash lines) is much higher than the average entropy per segment $\tilde{H}$ computed from the entire palm-print image. This clearly gives an indication that, for feature extraction, instead of considering the entire image as a whole, modularization would be a better choice. However, the size of the module is also an important factor. In Fig.
4l, the variation of the average entropy of a sample palm-print image with the segment size is shown. It is clear that decreasing the size of the modules offers greater entropy values, i.e., more variation in information, which is obviously desirable. However, if the modules were extremely small in size, it is quite natural that the small segments would not be capable of exhibiting significant differences between different images.

2.4. Proposed wavelet domain dominant feature extraction

Instead of considering the DWT coefficients of the entire image, the coefficients obtained from each module of the palm-print image are considered to form the feature vector of that image. However, if all of these coefficients were used, it would definitely result in a feature vector with a very large dimension. With a view to reducing the feature dimension, we propose to utilize the wavelet coefficients that play the dominant role in the representation of the image. In order to select the dominant wavelet coefficients, we propose to consider the frequency of occurrence of the wavelet coefficients as the determining characteristic. It is expected that coefficients with a higher frequency of occurrence would dominate over all the coefficients in image reconstruction, and it would be sufficient to consider only those coefficients as the desired features. One way to visualize the frequency of occurrence of wavelet coefficients is to compute the histogram of the coefficients of a segment of a palm image. In order to select the dominant features from a given histogram, the coefficients having a frequency of occurrence greater than a certain threshold value are considered. It is intuitive that, within a palm-print image, the image intensity distribution may change drastically at different localities.
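This histogram-based selection for a single module can be sketched in a few lines of numpy. The bin count, the threshold expressed as a percentage of the most-populated bin's occupancy, and the function name `dominant_coeffs` are illustrative assumptions of this sketch, not values specified by the paper.

```python
import numpy as np

def dominant_coeffs(coeffs, theta=50.0, bins=32):
    """Select the dominant wavelet coefficients of one module.

    A histogram of the module's coefficients is computed, and the
    coefficients falling in bins whose occupancy exceeds theta per cent
    of the most-populated bin's occupancy are kept as dominant features.
    """
    c = np.ravel(np.asarray(coeffs, dtype=float))
    counts, edges = np.histogram(c, bins=bins)
    thresh = (theta / 100.0) * counts.max()
    # Map every coefficient back to its histogram bin, then keep only
    # the coefficients that fall in a dominant (frequently occupied) bin.
    idx = np.clip(np.digitize(c, edges[1:-1]), 0, bins - 1)
    return c[counts[idx] > thresh]
```

In the full scheme, this selection would be applied per module to the approximate and horizontal detail coefficients, and the surviving coefficients concatenated into the module's feature vector.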
In order to select the dominant wavelet coefficients, if the thresholding operation were performed over the wavelet coefficients of the entire image, it would be difficult to obtain a global threshold value suitable for every local zone. The use of a global threshold in a palm-print image may offer features with very low between-class separation. In order to obtain high within-class compactness as well as high between-class separability, we have considered wavelet coefficients corresponding to the smaller spatial modules residing within a palm-print image, which are capable of capturing variations in image geometry locally. In this case, for each module, a different threshold value may have to be chosen depending on the wavelet coefficient values of that segment. The coefficients (approximate and horizontal detail) with a frequency of occurrence greater than θ% of the maximum frequency of occurrence for the particular module of the palm-print image are considered dominant wavelet coefficients and are selected as the features for that particular segment of the image. This operation is repeated for all the modules of a palm-print image.

Next, in order to demonstrate the advantage of extracting dominant wavelet coefficients corresponding to some smaller modules residing in a palm-print image, we conduct an experiment considering two different cases: (i) when the entire palm-print image is used as a whole and (ii) when all the modules of that image are used separately for feature extraction. For these two cases, the centroids of the dominant approximate wavelet coefficients obtained from several poses of two different people (shown in Fig. 5a) are computed and shown in Fig. 5b and c, respectively. It is observed from Fig. 5b that the feature centroids of the two people for different sample palm-print images are not well separated, and for some images they even overlap with each other, which clearly indicates poor between-class separability.
In Fig. 5c, in contrast, the feature centroids of the two people maintain a significant separation irrespective of the sample images, indicating high between-class separability, which strongly supports the proposed local feature selection algorithm. We have also examined the dominant feature values obtained for various sample images of these two people in order to demonstrate the within-class compactness of the features. The feature values, along with their centroids, obtained for the two cases, i.e., extracting the features from the palm-print image without and with modularization, are shown in Fig. 5d and e, respectively. It is observed from Fig. 5d that the feature values of several sample palm-print images of

Please cite this article in press as: Imtiaz H, Anowarul Fattah S. A histogram-based dominant wavelet domain feature selection algorithm for palm-print recognition. Comput Electr Eng (2013), http://dx.doi.org/10.1016/j.compeleceng.2013.01.006

H. Imtiaz, S. Anowarul Fattah / Computers and Electrical Engineering xxx (2013) xxx–xxx

[Fig. 5 appears here.]

Fig. 5. (a) Sample palm-print images of two people; feature centroids of different images for: (b) un-modularized palm-print image, (c) modularized palm-print image; feature values for: (d) un-modularized palm-print image, (e) modularized palm-print image.
the two different people are significantly scattered around their respective centroids, resulting in poor within-class compactness. On the other hand, it is evident from Fig. 5e that the centroids of the dominant features of the two different people are well separated, with a low degree of scattering of the features around their corresponding centroids. Thus, the proposed dominant features extracted locally within a palm-print image offer not only a high degree of between-class separability but also a satisfactory within-class compactness.

2.5. Reduction of the feature dimension

Principal component analysis (PCA) is a well-known and efficient orthogonal linear transformation [25]. It reduces both the dimension of the feature space and the correlation among the feature vectors by projecting the original feature space onto a smaller subspace. The PCA transforms the original p-dimensional feature vector into an L-dimensional linear subspace (L < p) spanned by the leading eigenvectors of the covariance matrix of the feature vectors in each cluster. PCA is theoretically the optimum transform for given data in the least-squares sense. For a data matrix $X^T$ with zero empirical mean, where each row represents a different repetition of the experiment and each column gives the results from a particular probe, the PCA transformation is given by

$Y^T = X^T W = V \Sigma^T$,    (6)

where the matrix $\Sigma$ is an $m \times n$ diagonal matrix with nonnegative real numbers on the diagonal and $W \Sigma V^T$ is the singular value decomposition of X. If q sample palm-print images of each person are considered and a total of M dominant DWT coefficients (approximate and horizontal detail) are selected per image, the feature space per person would have a dimension of


q × M. For the proposed dominant features, applying PCA to this feature space efficiently reduces the feature dimension without losing much information. Hence, PCA is employed to reduce the dimension of the proposed feature space.

2.6. Distance-based palm-print recognition

In the proposed method, a distance-based similarity measure is utilized for recognition using the extracted dominant features. The recognition task is carried out based on the distances of the feature vectors of the training palm-images from the feature vector of the test palm-image. Let $\{c_{jk}(1), c_{jk}(2), \ldots, c_{jk}(m)\}$ be the m-dimensional feature vector of the kth sample image of the jth person, and let a test sample image f have the feature vector $\{v_f(1), v_f(2), \ldots, v_f(m)\}$. A similarity measure between the test image f of the unknown person and the sample images of the jth person, namely the average sum-squares distance D, is defined as

$D_{fj} = \frac{1}{q} \sum_{k=1}^{q} \sum_{i=1}^{m} |c_{jk}(i) - v_f(i)|^2$,    (7)

where a particular class represents a person with q sample palm-print images. Therefore, according to (7), given the test sample image f, the unknown person is classified as person j among the p classes when

$D_{fj} \le D_{fg}, \quad \forall\, j \ne g \text{ and } \forall\, g \in \{1, 2, \ldots, p\}$.    (8)

3. Experimental results

Extensive simulations are carried out in order to demonstrate the effectiveness of the proposed method of palm-print recognition using the palm-print images of several well-known databases. Different analyses showing the effectiveness of the proposed feature extraction algorithm are presented. The performance of the proposed method in terms of recognition accuracy is obtained and compared with those of some recent methods [11,13,26–28].

3.1. Palm-print databases used in simulation

In this section, the palm-print recognition performance obtained by different methods is presented using two standard databases, namely, the PolyU palm-print database (version 2) (available at http://www4.comp.polyu.edu.hk/biometrics/) and the IITD palm-print database (available at http://www4.comp.polyu.edu.hk/csajaykr/IITD/Database_Palm.htm). In Fig. 6a and b, sample palm-print images from the PolyU database and the IITD database are shown, respectively. The PolyU database (version 2) contains a total of 7752 palm-print images of 386 people, each person having 18–20 different sample palm-print images taken in two different sessions. The IITD database, on the other hand, consists of a total of 2791 images of 235 people, each person having 5–6 different sample palm-print images for both the left hand and the right hand. It can be observed

[Fig. 6 appears here.]

Fig. 6. Sample palm-print images from: (a) the IITD database and (b) the PolyU database; sample palm-print images after cropping: (c) from the IITD database and (d) from the PolyU database.

from Fig. 6a and b that not all portions of the palm-print images need to be considered for feature extraction [2]. The portions of the images containing the fingers and the black background regions are discarded from the original images to form the regions of interest (ROI), as shown in Fig. 6c and d.

3.2. Performance comparison

In the proposed method, the dominant features (approximate and horizontal detail 2D-DWT coefficients) obtained from all the modules of a palm-print image are used to form the feature vector of that image, and feature dimension reduction is performed using PCA. The recognition task is carried out using a simple Euclidean distance based classifier, as described in Section 2.6. The experiments were performed following the leave-one-out cross-validation rule. For simulation purposes, the module size has been chosen as 32 × 32 pixels for the PolyU database and 16 × 16 pixels for the IITD database. The dominant wavelet coefficients corresponding to all the local segments residing in the palm-print images are then obtained using h = 20. For the purpose of comparison, the recognition accuracies obtained using the proposed method along with those reported in [11,13,26–28] are listed in Table 1. It is found from the table that the recognition accuracy obtained by the proposed method is very competitive with that obtained by the other methods. The performance of the proposed method is also very satisfactory for the IITD database (for both left-hand and right-hand palm-print images): an overall recognition accuracy of 99.82% is achieved, which is higher than that of the method presented in [26]. The simulations were performed using MATLAB R2009a on an Intel Core 2 Duo 2.00 GHz machine with Windows XP SP3 and 4 GB of RAM. The training phase takes on average 120 s for the PolyU database and 100 s for the IITD database. The testing phase takes, on average, 1.07 ms per image for both databases.
One possible source of error in the palm-print recognition process using the proposed method is the introduction of discontinuities among major lines as a result of modularization. For example, in Fig. 7a, a sample palm-print image is shown with two consecutive segments highlighted; enlarged versions of these segments are shown in Fig. 7b and c. Note that a major line falls on the border of the two segments, where it can be clearly seen that the line is split between them. As mentioned earlier, dominant features are extracted from the small modules of the palm-print images. Next, we demonstrate the effect of varying the module size on the recognition accuracy obtained by the proposed method. In Fig. 8a, the recognition accuracies obtained for different module sizes are shown. It is observed from the figure that, unless the module size is too small or too large, the recognition accuracy does not drop below an acceptable level. The best recognition accuracy is achieved for a module size of 32 × 32 pixels, which indicates that variations in image geometry and intensity, i.e., variations in local information, are captured more successfully with moderately sized segments. However, the choice of module size depends on the size of the acquired palm-print image. Note that, when the entire image is considered as a whole without any modularization, the recognition accuracy drastically falls to less than 10% for both databases, as expected. In view of reducing the computational complexity, dimension reduction of the feature space plays an important role. In the proposed method, feature dimension reduction is performed using PCA. In Fig. 8b, the effect of dimension reduction on recognition accuracy is shown. It is found from this figure that, even for a very low feature dimension, the recognition accuracy remains very high for both databases.
In the experiments performed, the 20 most significant PCA coefficients are retained for both databases; that is, the feature vector for each palm-print image has size 1 × 20. It should be noted that, if the entire palm-print image were used as the feature, the feature vector for each image would have size 1 × 40,000 for the PolyU database. The proposed dominant feature extraction algorithm thus utilizes a feature vector that is 0.05% of the size of its spatial-domain counterpart. It is to be mentioned that, for practical implementation, one can consider the lifting wavelet transform, which provides faster in-place calculation [29]. For the thresholding criterion used to choose the dominant spectral coefficients in the proposed method, the effect of changing the threshold value, i.e., incorporating different amounts of the top approximate and horizontal detail coefficients, has been investigated. In Fig. 8c, the variation of recognition accuracy with different threshold values is shown. It can be observed from the figure that, as the number of retained top coefficients decreases, the recognition accuracy also decreases, although the accuracy remains sufficiently high even when very few coefficients are utilized. Among the large number of available wavelets, the most commonly used orthogonal wavelet, namely the Daubechies wavelet, is used in this paper [30]. The effect of

Table 1
Comparison of recognition accuracies.

Method             PolyU database (%)
Proposed method    99.95
Method [11]        97.50
Method [28]        98.00
Method [13]        99.87
Method [26]        99.04
Method [27]        99.95


[Fig. 7 appears here.]

Fig. 7. (a) Sample palm-print image with two consecutive segments highlighted; (b) and (c) zoomed versions of the segments.

[Fig. 8 appears here.]

Fig. 8. Variation of recognition accuracy with: (a) module size, (b) reduced feature dimension, (c) threshold values, (d) different types of Daubechies wavelets, (e) the number of training samples used for the PolyU database.

choosing different types of Daubechies wavelets (in terms of the number of vanishing moments, i.e., db1–db10) on the recognition accuracy has been investigated. It is to be noted that the db1 wavelet is also known as the Haar wavelet. In Fig. 8d, the variation of recognition accuracy with different mother wavelets is shown. It can be observed from the figure that db1, db4, db9, and db10 provide almost identical recognition accuracies. In our experiments, we have used the db4 wavelet. In order to obtain a shift- and rotation-invariant wavelet transform, one may employ undecimated DWT approaches [22,23]. The variation of recognition accuracy with the number of training samples used in the recognition process is investigated for both databases. The recognition performance for different numbers of training samples for the PolyU database is shown in Fig. 8e. From the figure, it can be observed that the recognition accuracy does not vary significantly as the number of training samples decreases. Similar results are found for the IITD database. To maintain acceptable recognition accuracy while reducing the computational burden, we have used nine samples per person for training for the PolyU database and five samples per person for the IITD database. Illumination adjustment is generally carried out by subtracting a certain percentage of the average intensity level from the palm image, which may provide better feature quality depending on the level of adjustment. The variation of recognition accuracy with different levels of illumination adjustment is investigated, and it is observed that the recognition accuracy increases with the level of subtraction up to a certain limit. From our extensive experimentation on several images, it is found that 50% of the average intensity can be chosen as an optimum level of illumination adjustment, which provides satisfactory performance.
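The global illumination adjustment described above, i.e., subtracting a percentage of the average intensity from the palm image, can be sketched as follows; the function name is illustrative, not from the paper.

```python
import numpy as np

def adjust_illumination(image, level=0.5):
    """Subtract `level` times the average intensity from the whole palm
    image (level=0.5 corresponds to the 50% setting discussed above)."""
    img = image.astype(float)
    return img - level * img.mean()
```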
Although, in the proposed method, no additional blocks are used to tackle the effect of noise, the variation of the recognition performance in the presence of different amounts of additive Gaussian noise is also investigated. As expected, it is found that the recognition accuracy obtained by the proposed method decreases with increasing noise strength, i.e., with decreasing signal-to-noise ratio (SNR).
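The noise-robustness experiment above requires corrupting an image with additive white Gaussian noise at a prescribed SNR; a minimal sketch (helper name is illustrative, not from the paper):

```python
import numpy as np

def add_awgn(image, snr_db, rng=None):
    """Add zero-mean white Gaussian noise whose power is scaled so that
    the signal-to-noise ratio equals snr_db decibels."""
    rng = np.random.default_rng() if rng is None else rng
    img = image.astype(float)
    noise_power = np.mean(img ** 2) / (10.0 ** (snr_db / 10.0))
    return img + rng.normal(0.0, np.sqrt(noise_power), img.shape)
```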

[Fig. 9 appears here.]
Fig. 9. Performance evaluation of the proposed method: variation of (a) FPR, (b) FRR, (c) TER and (d) GAR with reduced feature dimension; (e) sensitivity vs (1-specificity), i.e., false positive rate, curve.

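The verification indices plotted in Fig. 9 (FPR, FRR, TER and GAR) can be computed from genuine and impostor distance scores at a given acceptance threshold. A minimal sketch, with an illustrative function name:

```python
import numpy as np

def verification_rates(genuine, impostor, threshold):
    """FPR, FRR, TER and GAR for a distance threshold: a claim is
    accepted when its distance score is <= threshold."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    frr = float(np.mean(genuine > threshold))    # genuine claims rejected
    fpr = float(np.mean(impostor <= threshold))  # impostor claims accepted
    return fpr, frr, fpr + frr, 1.0 - frr        # TER = FPR + FRR; GAR = 1 - FRR
```

Sweeping `threshold` over the observed score range traces out the ROC curve of Fig. 9e.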

In the proposed method, the most conventional way of illumination adjustment, namely subtracting a certain percentage of the average intensity level from the entire palm image, is employed. In general, this global approach of illumination adjustment may provide better feature quality depending on the level of adjustment. The variation of recognition accuracy with different levels of illumination adjustment is investigated, and it is found that the recognition accuracy increases with the level of subtraction up to a certain limit. It is experimentally found that 60% of the average intensity can be chosen as an optimum level of illumination adjustment, which provides satisfactory performance. In order to evaluate the recognition performance, apart from the recognition accuracy reported in Table 1, a few more performance indices commonly used in biometric applications are taken into consideration, such as the False Positive Rate (FPR), False Rejection Rate (FRR), Total Error Rate (TER), Genuine Accept Rate (GAR) and the Receiver Operating Characteristic (ROC), i.e., the variation of sensitivity with respect to (1-specificity) or FPR [6,31,32]. The variation of FPR, FRR, TER and GAR with the reduced feature dimension is provided in Fig. 9a–d. From these curves, it is evident that both the FPR and the FRR reduce to very small values as the feature dimension exceeds 14, whereas the GAR increases. With such reduced feature dimensions, as seen from Fig. 9e, a high sensitivity is obtained even at low false positive rates. Apart from the high recognition accuracy, it is therefore found, based on these performance measures, that the proposed method offers very good recognition performance even for a very low feature dimension.

4. Conclusions

In the proposed DWT-based palm-print recognition scheme, dominant features are extracted separately from each of the modules obtained by image segmentation, instead of operating on the entire palm-print image at a time.
An entropy-based measure showing the effect of modularization of the palm-print images on their information content has been presented. It has been shown that the proposed dominant features, extracted from the sub-images, attain better discriminating capability because of the modularization of the palm-print image. The effect of varying the module size on recognition performance has been investigated, and it has been found that the recognition accuracy does not depend strongly on the module size unless it is extremely large or small. The effect of using different types of Daubechies wavelets (in terms of the number of vanishing moments, i.e., db1–db10) for feature extraction has also been investigated. The variation of recognition accuracy with different illumination adjustments, different noise levels and different numbers of training samples has been discussed. The proposed feature extraction scheme is shown to precisely capture the local variations that exist in the major and minor lines of palm-print images, which play an important role in discriminating different people. Moreover, it utilizes a very low-dimensional feature space for the recognition task, which ensures a lower computational burden. For the task of classification, a simple Euclidean distance based classifier has been employed, and it is found that, because of the quality of the extracted features, such a simple classifier provides very good recognition performance, so there is no need to employ any complicated classifier. It has been observed from our extensive simulations on different standard palm-print databases that the proposed method, in comparison to some recent methods, provides excellent recognition performance, both in terms of recognition accuracy and several ROC-based measures.
Acknowledgments

The authors would like to express their sincere gratitude to the authorities of the Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), for providing constant support throughout this research work.

References

[1] Jain A, Ross A, Prabhakar S. An introduction to biometric recognition. IEEE Trans Circ Syst Video Technol 2004;14:4–20.
[2] Kong A, Zhang D, Kamel M. A survey of palmprint recognition. Pattern Recognit 2009;42:1408–18.
[3] Kong A, Zhang D, Lu G. A study of identical twins' palmprints for personal verification. Pattern Recognit 2006;39:2149–56.
[4] Han C, Cheng H, Lin C, Fan K. Personal authentication using palm-print features. Pattern Recognit 2003;36:371–81.
[5] Fairhurst M, Abreu M. Balancing performance factors in multisource biometric processing platforms. IET Signal Process 2009;3:342–51.
[6] Wu X, Zhang D, Wang K. Palm line extraction and matching for personal authentication. IEEE Trans Syst Man Cybernet Part A: Syst Humans 2006;36:978–87.
[7] Wu X, Wang K, Zhang D. Fuzzy direction element energy feature (FDEEF) based palmprint identification. In: Proc int conf pattern recognition, vol. 1. p. 95–8.
[8] Kung S, Lin S, Fang M. A neural network approach to face/palm recognition. In: Proc IEEE workshop neural networks for signal processing. p. 323–32.
[9] Connie T, Jin A, Ong M, Ling D. An automated palmprint recognition system. Image Vis Comput 2005;23:501–15.
[10] Jing XY, Zhang D. A face and palmprint recognition approach based on discriminant DCT feature extraction. IEEE Trans Syst Man Cybernet 2004;34:167–88.
[11] Dale MP, Joshi MA, Gilda N. Texture based palmprint identification using DCT features. In: Proc int conf advances in pattern recognition, vol. 7. p. 221–4.
[12] Imtiaz H, Fattah S. A face recognition scheme using wavelet-based local features. In: IEEE symp computers informatics (ISCI). p. 313–6.
[13] Zhang D, Guo Z, Lu G, Zhang L, Zuo W. An online system of multispectral palmprint verification. IEEE Trans Instrum Measure 2010;59:480–90.
[14] Li W, You J, Zhang D. Texture-based palmprint retrieval using a layered search scheme for personal identification. IEEE Trans Multimedia 2005;7:891–8.
[15] Lu J, Zhang E, Kang X, Xue Y, Chen Y. Palmprint recognition using wavelet decomposition and 2D principal component analysis. In: Proc int conf communications, circuits and systems, vol. 3. p. 2133–6.
[16] Guo Z. Feature band selection for online multispectral palmprint recognition. IEEE Trans Inform Forensics Sec 2012;7:1094–9.
[17] Ekinci M, Aykut M. Palmprint recognition by applying wavelet-based kernel PCA. J Comput Sci Technol 2008;23:851–61.
[18] Kekre H, Sarode TK, Tirodkar A. A study of the efficacy of using wavelet transforms for palm print recognition. In: Proc int conf computing, communication and applications. p. 1–6.
[19] Zhang Y, Zhao D, Sun G, Guo Q, Fu B. Palm print recognition based on sub-block energy feature extracted by real 2D Gabor transform. In: Proc int conf artificial intelligence and computational intelligence, vol. 1. p. 124–8.
[20] Li W, Zhang D, Zhang L, Lu G, Yan J. 3-D palmprint recognition with joint line and orientation features. IEEE Trans Syst Man Cybernet Part C 2011;41:274–9.
[21] Zhang X-P, Tian L-S, Peng Y-N. From the wavelet series to the discrete wavelet transform: the initialization. IEEE Trans Signal Process 1996;44:129–33.
[22] Percival DB, Walden AT. Wavelet methods for time series analysis. Cambridge Series in Statistical and Probabilistic Mathematics; 2000.
[23] Harang R, Bonnet G, Petzold LR. WAVOS: a MATLAB toolkit for wavelet analysis and visualization of oscillatory systems. BMC Res Notes 2012;5.
[24] Loutas E, Pitas I, Nikou C. Probabilistic multiple face detection and tracking using entropy measures. IEEE Trans Circ Syst Video Technol 2004;14:128–35.
[25] Jolliffe I. Principal component analysis. Berlin: Springer-Verlag; 1986.
[26] Kumar A, Wong D, Shen H, Jain A. Personal verification using palmprint and hand geometry biometric. Lecture notes in computer science. Springer; 2003.
[27] Yue F, Zuo W, Zhang D, Wang K. Competitive code-based fast palmprint identification using a set of cover trees. Opt Eng 2009;48:1–7.
[28] Lu J, Zhao Y, Hu J. Enhanced Gabor-based region covariance matrices for palmprint recognition. Electron Lett 2009;45:880–1.
[29] Sweldens W. The lifting scheme: a construction of second generation wavelets. SIAM J Math Anal 1998;29:511–46.
[30] Daubechies I. Ten lectures on wavelets. CBMS-NSF regional conference series in applied mathematics. SIAM; 1992.
[31] Chang C-I. Multiparameter receiver operating characteristic analysis for signal detection and classification. IEEE Sens J 2010;10:423–42.
[32] Toh K-A, Kim J, Lee S. Biometric scores fusion based on total error rate minimization. Pattern Recognit 2008;41:1066–82.

Hafiz Imtiaz was born in Rajshahi, Bangladesh on January 16, 1986. He received his M.Sc. and B.Sc. degrees in Electrical and Electronic Engineering (EEE) from Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh in July 2011 and March 2009, respectively. He is presently working as an Assistant Professor in the Department of EEE, BUET.

Dr. Shaikh Anowarul Fattah received B.Sc. and M.Sc. degrees from BUET, Bangladesh, and a Ph.D. degree from Concordia University, Canada. He was a Postdoctoral Fellow at Princeton University, USA. He received the Dr. Rashid Gold Medal, the 2009 Distinguished Doctoral Dissertation Prize, the 2007 URSI Canadian Young Scientist Award and first prize in SYTACOM 2008. He is currently an Associate Professor in the EEE Department, BUET.