Signal Processing 90 (2010) 1176–1187
Contents lists available at ScienceDirect
Signal Processing journal homepage: www.elsevier.com/locate/sigpro
Feature correlation evaluation approach for iris feature quality measure
Yingzi Du a, Craig Belcher a, Zhi Zhou a, Robert Ives b
a Department of Electrical and Computer Engineering, Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202, USA
b Department of Electrical Engineering, U.S. Naval Academy, USA
Article info
Abstract
Article history: Received 21 May 2009 Received in revised form 30 September 2009 Accepted 1 October 2009 Available online 27 October 2009
It is challenging to develop an iris image quality measure to determine compressed iris image quality. The compression process introduces new artificial patterns while suppressing existing iris patterns. This paper proposes a feature correlation evaluation approach for iris image quality measure, which can discriminate the artificial patterns from the natural iris patterns and can also measure iris image quality for uncompressed images. The experimental results show that the proposed method can objectively measure the quality of both non-compressed and compressed images. © 2009 Elsevier B.V. All rights reserved.
Keywords: Biometrics; Iris recognition; Feature correlation evaluation; Iris quality measure; Compressed iris image quality measure
Corresponding author. E-mail address: [email protected] (Y. Du).
0165-1684/$ - see front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.sigpro.2009.10.001

1. Introduction

Iris recognition has been found to be one of the most accurate biometrics to date [1–3]. Poor quality iris images have been shown to reduce iris recognition accuracy [1–13], and iris image quality, when considered, can be used to improve accuracy [2,4,6]. When an iris image is compressed, the image quality is affected. Many image quality measures have been designed to evaluate human perception of images subjectively or objectively [14–24]. However, these methods are not designed to take iris pattern characteristics into account and are not applicable to iris image quality measurement. There are two important topics in studying compressed iris images: how to compress the image to achieve a high compression rate without compromising recognition accuracy, and how to design an image quality measure to evaluate the effect of compression on recognition accuracy. In Ref. [25], Daugman and Downing did an excellent job of addressing the first issue. They
found that the JPEG 2000 region of interest (ROI) based compression method could achieve reasonable recognition accuracy at a 150:1 compression rate, because the JPEG 2000 ROI method can achieve a high compression rate with little compromise of the iris patterns in the iris region by setting the rest of the iris image to a uniform gray. However, few if any have worked on the second issue. With low-quality cameras, a system may not be able to perform iris segmentation first and then select the non-iris areas for compression to generate the ideal JPEG 2000 ROI based compression that Daugman proposed; rather, the entire image is compressed. There is much interest in the iris recognition community in understanding how this kind of compression affects iris image quality. In this paper, we investigate this issue by analyzing the effect of JPEG 2000 compression on images without an ROI. JPEG 2000 compression without an ROI has a similar compression rate in all regions. In other words, the compression ratio of the image reflects the compression ratio in the iris patterns, in which case the iris patterns are considerably more affected. We propose a new quality measure method that is applicable to compressed images. The proposed method can discriminate the artificial patterns from the
natural iris patterns and can also measure iris image quality for uncompressed images. The experimental results show that the proposed method is highly correlated with the recognition accuracy of both non-compressed and compressed images. As a result, it can be used to predict the performance of iris recognition systems. This paper is organized as follows: In Section 2, we review related literature on iris image quality measures and show that the existing methods would not work for compressed iris image quality measurement. In Section 3, we introduce our proposed quality measure. In Section 4, we compare the proposed method with the selective feature information based method. In Section 5, we draw conclusions on the work.

2. Literature review

In the past decade, researchers in the iris recognition area have worked on designing automatic image quality methods for iris recognition. Daugman [2] analyzed the optical defocus model and proposed an 8×8 convolution kernel to quickly assess the focus of an iris image using the percentage of high-frequency energy. Ma et al. [6] used the frequency energy distribution of two iris subregions in the horizontal direction to measure the image quality. A support vector machine (SVM) was used to classify the images into two categories (good and
bad) based on low, middle, and high frequency energy levels. Chen et al. [4] divided the entire iris region into eight concentric bands and measured the frequency content using a Mexican hat wavelet. The quality score of the entire iris image is the weighted average of the quality scores of the individual bands. Kalka et al. [5] proposed using Dempster–Shafer theory to combine several quality measures including defocus, motion blur, occlusion, specular reflection, lighting, off-angle, and pixel counts. This method improves over other methods by using multiple parameters, and its defocus quality measure uses Daugman's 8×8 convolution matrix in the iris region. Other quality measures include Zhang and Salganicoff's sharpness measure for focus [9] and Zhu et al.'s iris texture analysis [10]. In Ref. [13], we proposed a selective feature information based quality measure that combined feature information, dilation, and occlusion measures to form a quality score. Feature information was measured by the information distance between selected features of the iris region and a uniform distribution, where the uniform distribution modeled an iris region with no patterns. Using the ICE 2005 [26], WVU [27], and CASIA 2.0 [28] databases, Fig. 1 shows the comparison results of four iris image quality methods: the selective feature information based method [13], the 8×8 convolution matrix [2], the spectrum energy method [6], and the 2-D Mexican hat wavelet [4]. Two recognition methods were used: the Gabor wavelet method and the log-Gabor wavelet method [29–30]. Fig. 1 shows that only the selective feature information based method is well correlated with both recognition methods for all databases.

Fig. 1. Recognition accuracy vs. quality scores using different methods (figure from [13]). (a) The EER vs. quality group for the selective feature information based method on the CASIA 2.0, ICE, and WVU databases with the Gabor wavelet and log-Gabor wavelet methods. (b) The EER vs. quality group for the convolution matrix method on the CASIA 2.0, ICE, and WVU databases with the Gabor wavelet and log-Gabor wavelet methods. (c) The EER vs. quality group for the spectrum energy method on the CASIA 2.0 database with the Gabor wavelet and log-Gabor wavelet methods. Note: only two groups were used; in this figure, we also include the two-group results for the selective feature information based method. (d) The EER vs. quality group for the 2-D Mexican hat wavelet method on the CASIA 2.0, ICE, and WVU databases using the 1-D log-Gabor wavelet method.

In the iris challenge evaluation (ICE) phase 2006, organized by the U.S. National Institute of Standards and Technology (NIST), three iris recognition groups submitted iris recognition executable code with an iris image quality measure function: Sagem-Iridian, Cambridge, and Iritech [31]. All three have outstanding recognition accuracy and represent the state of the art in industrial iris recognition. However, none of their quality measures correlated with their recognition accuracy [31]. Fig. 2(a) and (b) shows example plots of the quality measure vs. the FRR and the quality measure vs. the FAR, respectively. Note that the "fraction of samples discarded" means the percentage of images that were excluded from recognition based on the quality measure. Ideally, the recognition accuracy should increase (the FRR and FAR should decrease) as more bad-quality samples are discarded. However, the curves in the chart fluctuate. The above analyses and research results show that it is very challenging to design an iris image quality measure that is well correlated with the recognition accuracy. The iris image quality measure for compressed images is even more challenging. When an iris image is compressed, the compression process introduces new artificial patterns while suppressing existing iris patterns. The artificial patterns could be confused with feature
patterns, which makes it challenging to measure compressed image quality. Also, selecting only a small percentage of the patterns for evaluation could underestimate the overall effect of compression or of other introduced artifacts.
3. The proposed method

In this paper, we propose the feature correlation evaluation approach for iris image quality measure. Unlike existing iris image quality measure methods, it uses the correlation between neighboring features to measure the quality of the iris images. It assumes that the true iris patterns in a high-quality image will be distinctive (higher information distance between adjacent rows), while the compression process will diminish the distinctiveness of the iris patterns (lower information distance between adjacent rows). Therefore, we can use the information distance between neighboring rows to evaluate compressed iris image quality. To make the image quality measure work efficiently in an iris recognition system, we believe that the designed image quality measure should be part of the iris recognition process and take advantage of its interim outputs. Fig. 3(a) shows the traditional iris recognition system and Fig. 3(b) shows the iris recognition system with the proposed feature quality measure. In this way, we do not need additional iris segmentation or feature extraction procedures; instead, we use the features extracted by the existing iris recognition system.
Fig. 2. Quality vs. performance in the ICE 2006 Evaluation result, from Ref. [31, p. 21]. Here A, B, and C represent three different methods. (a) FRR vs. fraction of samples discarded, and (b) FAR vs. fraction of samples discarded.
Fig. 3. Diagram of iris recognition systems: (a) the traditional iris recognition system (iris image acquisition, iris preprocessing, feature extraction, template generation, and matching); (b) the iris recognition system with the proposed feature quality measure added after feature extraction.
Fig. 4. A typical iris segmentation procedure, which includes pupil edge detection, iris edge detection, pupil and iris boundary detection, and eyelid/eyelash detection and removal.
Because the proposed image quality measure uses the outputs of the preprocessing and feature extraction steps in the iris recognition system, we first give a brief description of the segmentation and feature extraction process in a typical iris recognition system. We then introduce our proposed feature correlation method for image quality measure.
3.1. Preprocessing

Fig. 4 shows a typical step-by-step iris image segmentation process [2,4,6–8,12–13,28–29,32–42], which includes identification of the pupil and limbic boundaries and removal of eyelids and eyelashes. Image segmentation is an important step in iris recognition. Some image quality measure values, such as the occlusion and dilation ratios, are obtained from the segmentation step. In [43], Zhou et al. found that the accuracy of the segmentation can affect the recognition dramatically, and proposed an iris image segmentation measure. Some iris images may have good feature quality but cannot be segmented accurately. Therefore, it is important to combine the segmentation measure and the feature quality measure to predict the recognition accuracy. In this research, our goal is to design the feature quality measure, and therefore we assume the segmentation is accurate. The segmented iris is then transferred to log-polar coordinates. This is based on Daugman's homogeneous rubber sheet model [2] and results in a constant-sized log-polar image despite differences in resolution, scale, or dilation. The mask is also transferred to log-polar coordinates. Fig. 5 shows an example of transferring the iris image in Fig. 4 into log-polar coordinates.
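The unwrapping step itself is not given in code in this paper; the following is a minimal sketch of a rubber-sheet style mapping, assuming concentric pupil and iris circles, linear radial sampling (the paper uses a log-polar mapping), and nearest-neighbor pixel lookup. The function and parameter names (unwrap_iris, cx, cy, etc.) are illustrative, not from the original system.

```python
import numpy as np

def unwrap_iris(image, cx, cy, r_pupil, r_iris, n_rows=64, n_cols=360):
    """Map the annular iris region to a fixed-size polar (rubber sheet) image.

    Simplified sketch: assumes concentric pupil/iris circles centered at (cx, cy).
    Real systems handle non-concentric boundaries, interpolation, and the mask.
    """
    polar = np.zeros((n_rows, n_cols), dtype=image.dtype)
    for i in range(n_rows):                      # radial position, pupil -> limbus
        r = r_pupil + (r_iris - r_pupil) * (i + 0.5) / n_rows
        for j in range(n_cols):                  # angular position, 0 -> 2*pi
            theta = 2.0 * np.pi * j / n_cols
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                polar[i, j] = image[y, x]
    return polar
```

The same remapping, applied to the segmentation mask, yields the log-polar mask shown in Fig. 5(c).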
3.2. Feature extraction

The first step is to extract the features from the segmented iris image. The Gabor wavelet method with log-polar transformation was designed by Daugman in 1993 and is widely used in commercial iris recognition systems [44]. The log-Gabor wavelet method with polar transformation was designed by Masek and Kovesi and implemented in Matlab [29]. This method has been commonly used in academia due to the availability of the source code [29]. We have improved this method dramatically to increase its recognition accuracy to be similar to that of the 2-D Gabor wavelet method [33]. Our proposed method can work with both methods. In this paper, we use the 1-D log-Gabor band pass filter as an
example feature extraction method. The 1-D log-Gabor band pass filter is used to extract the features in an iris [29,33]:

G(\omega) = e^{-\log(\omega/\omega_0)^2 / (2\,\log(\sigma)^2)}.    (3.1)

Here, σ is used to control the filter bandwidth and ω₀ is the filter's center frequency, which is derived from the filter's wavelength λ. The 1-D log-Gabor filter does not have a spatial-domain form. Each row of the iris image in log-polar coordinates is first transformed to the frequency domain using the fast Fourier transform (FFT). The frequency-domain row signal is then filtered with the 1-D log-Gabor filter (i.e., multiplied by the filter in the frequency domain). The filtered row signal is transferred back to the spatial domain via the inverse fast Fourier transform (IFFT). Fig. 6(a) shows the step-by-step process for a row signal from Fig. 5(b), and Fig. 6(b) shows the magnitude of the filtered log-polar image of Fig. 5(b). The segmented iris image area contains the iris area and noise areas (such as eyelids, eyelashes, and glare). Therefore, the mask generated from the segmentation step is used to remove the noise areas in the filtered image.

Fig. 5. Transformation of the iris and mask images into polar coordinates: (a) segmented iris region of Fig. 4; (b) log-polar image of (a); and (c) log-polar mask of (a).

Fig. 6. Feature extraction process in iris recognition: (a) steps of automatic feature extraction: each row of the polar image is first transferred to the frequency domain using the FFT, multiplied by the log-Gabor filter, and transformed back to the spatial domain with the inverse FFT; (b) the filtered log-polar image of Fig. 5(b).
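As a concrete illustration of Eq. (3.1) and the FFT → log-Gabor → IFFT pipeline of Fig. 6(a), the sketch below filters every row of the log-polar image. The wavelength and σ/ω₀ ratio are illustrative defaults in the spirit of Masek's implementation [29], not the exact values used in this paper, and the helper names are hypothetical.

```python
import numpy as np

def log_gabor_filter(n, wavelength=18.0, sigma_ratio=0.5):
    """Frequency-domain 1-D log-Gabor filter, Eq. (3.1): G(w) = exp(-log(w/w0)^2 / (2 log(s)^2))."""
    freqs = np.fft.fftfreq(n)                 # normalized frequencies for an n-point FFT
    w0 = 1.0 / wavelength                     # center frequency derived from the wavelength
    G = np.zeros(n)
    pos = freqs > 0                           # the log-Gabor filter is defined for w > 0 only
    G[pos] = np.exp(-(np.log(freqs[pos] / w0) ** 2) / (2.0 * np.log(sigma_ratio) ** 2))
    return G

def filter_rows(polar_image):
    """Filter every row of the log-polar iris image; the magnitude of the result is the feature."""
    n = polar_image.shape[1]
    G = log_gabor_filter(n)
    rows_f = np.fft.fft(polar_image.astype(float), axis=1)   # row-wise FFT
    filtered = np.fft.ifft(rows_f * G, axis=1)                # multiply by the filter, then IFFT
    return filtered
```

In an actual system, the log-polar mask from the segmentation step would then be applied to the magnitude of `filtered` to exclude the noise areas, as described above.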
3.3. The proposed measure

3.3.1. Feature information distance

The goal in this research is to measure the distinctiveness of iris patterns. Consecutive rows of a filtered image exhibit a certain amount of correlation, since iris patterns span multiple rows of the filtered image. However, it can be observed that as an iris image is compressed, the correlation between consecutive rows increases dramatically, because less significant feature differences are demolished and only the most distinguishing features remain. Measuring the correlation of the features therefore provides information about the clarity of the image. We propose using feature-correlation-based information distance to measure the similarity of consecutive rows. By using information as a measure, we can not only describe the randomness of the features, but also generate
high-order statistics of the iris image based on its features [45]. For compressed images, the artificial patterns introduced by the compression generally would be more correlated than the natural iris patterns. That means there would be less difference between adjacent artificial patterns. Using information distance as a measure, we can measure the feature quality. The combination of occlusion and dilation determines the amount of iris patterns available for matching, and is also considered in the proposed quality measure.

Let the magnitude of the filtered row i be the vector r. The probability mass function p can be calculated by

\vec{p} = \vec{r} / \|\vec{r}\|_2.    (3.2)

Here \|\vec{r}\|_2 is the 2-norm. Let the magnitude of the neighboring filtered row i+1 be the vector s. The probability mass function q can be calculated by

\vec{q} = \vec{s} / \|\vec{s}\|_2.    (3.3)

The information distance is calculated by

J(\vec{p}, \vec{q}) = D(\vec{p}\,\|\,\vec{q}) + D(\vec{q}\,\|\,\vec{p}),    (3.4)

where

D(\vec{p}\,\|\,\vec{q}) = \sum_i p_i \log_2 (p_i / q_i)    (3.5)

and

D(\vec{q}\,\|\,\vec{p}) = \sum_i q_i \log_2 (q_i / p_i)    (3.6)

are relative entropies [46]. Note that in general D(\vec{p}\,\|\,\vec{q}) \neq D(\vec{q}\,\|\,\vec{p}), but J(\vec{p}, \vec{q}) = J(\vec{q}, \vec{p}). This is very different from the selective feature information approach, which assumed that the preservation of changing patterns is the way to measure the clarity of an iris image, and that smooth patterns will be more homogeneous than changing patterns. It uses the information distance between the feature distribution and a uniform distribution as a measure. However, in the case of compressed images, the introduced artificial patterns would be confused with the changing patterns, which could lead to mistakes. In addition, smooth patterns could be part of the iris pattern and also contribute to the matching results, but they are ignored in the selective feature information approach.

3.3.2. Feature correlation measure

The feature correlation measure (FCM) of an iris image is calculated by

\mathrm{FCM} = \frac{1}{N} \sum_i J_{i,i+1}.    (3.7)

Here i and i+1 are consecutive log-Gabor filtered rows, and N is equal to the total number of rows minus one used for the feature correlation calculation. To eliminate the effects of noise, the masked pixels of each row are assigned the mean value of the valid pixels in the same row. Compared to the selective feature information approach, the proposed method uses the features in the entire iris image (after excluding the noisy regions using segmentation). This helps evaluate the quality of the entire image rather than just a small portion, and is therefore more robust and reliable.

3.3.3. Feature region measures

The amount of information available for measure is a combination of the distinctiveness of the iris region and the amount of iris region available. The amount of iris region available is directly affected by occlusion from eyelids, eyelashes, and glare. The amount of iris pattern available is also affected by dilation, since the contraction of the iris muscles can hide iris patterns in cases of severe dilation. After applying the FCM to measure the distinctiveness of the iris region, we need to measure the amount of valid feature region.

3.3.4. Occlusion measure

The total amount of available iris patterns can affect the recognition accuracy. In this paper, we used the occlusion measure. The occlusion measure (O) quantifies the percentage of the iris area that is invalid due to eyelids, eyelashes, and other noise:

O = \frac{\text{Invalid area in the segmentation mask}}{\text{Segmentation mask size}} \times 100\%.    (3.8)

3.3.5. Dilation measure

In addition to occlusion, the dilation of the pupil can affect the recognition accuracy. If the iris is too dilated, there would not be enough information for recognition. In this paper, the dilation measure (D) is calculated by

D = \frac{\text{Pupil radius}}{\text{Iris radius}} \times 100\%,    (3.9)

where the pupil radius and iris radius are obtained from the segmentation results.
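Before these measures are fused (Section 3.3.6), the component computations in Eqs. (3.2)–(3.9) can be written down directly. The following is a minimal sketch, assuming that the filtered rows are the magnitudes of the log-Gabor responses with masked pixels already replaced by the row mean, that a zero value in the mask marks an invalid pixel, and that occlusion and dilation are returned as fractions in [0, 1] rather than percentages; the small epsilon guard before normalization is an implementation detail not specified in the paper.

```python
import numpy as np

def info_distance(r, s, eps=1e-12):
    """Symmetric information distance J between two filtered rows, Eqs. (3.2)-(3.6)."""
    p = (r + eps) / np.linalg.norm(r + eps, 2)   # Eq. (3.2): normalize row i by its 2-norm
    q = (s + eps) / np.linalg.norm(s + eps, 2)   # Eq. (3.3): normalize row i+1
    d_pq = np.sum(p * np.log2(p / q))            # Eq. (3.5): relative entropy D(p||q)
    d_qp = np.sum(q * np.log2(q / p))            # Eq. (3.6): relative entropy D(q||p)
    return d_pq + d_qp                           # Eq. (3.4): J(p, q) = D(p||q) + D(q||p)

def feature_correlation_measure(filtered_mag):
    """FCM: average information distance between consecutive rows, Eq. (3.7)."""
    J = [info_distance(filtered_mag[i], filtered_mag[i + 1])
         for i in range(filtered_mag.shape[0] - 1)]
    return float(np.mean(J))

def occlusion_measure(mask):
    """Eq. (3.8): fraction of the segmentation mask that is invalid (mask == 0 assumed invalid)."""
    return float(np.count_nonzero(mask == 0)) / mask.size

def dilation_measure(pupil_radius, iris_radius):
    """Eq. (3.9): pupil radius over iris radius, both taken from the segmentation step."""
    return pupil_radius / iris_radius
```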
3.3.6. Score fusion

So far, we have obtained two kinds of measures: the feature correlation measure, and the feature region measures, which include the occlusion measure and the dilation measure. In this step, these measures are combined to output one quality score for the iris image with a simple multiplication:

Q = f(\mathrm{FCM}) \cdot g(O) \cdot h(D).    (3.10)

Here f(·), g(·), and h(·) are normalization functions. In this way, any component can affect the quality score. Simple multiplication of the normalized functions was empirically found to be representative of all the components considered, and if any one component is smaller, the entire score will be smaller. The f function normalizes the FCM score from 0 to 1:

f(\mathrm{FCM}) = \begin{cases} a \cdot \mathrm{FCM}, & 0 \le \mathrm{FCM} \le b, \\ 1, & \mathrm{FCM} > b. \end{cases}    (3.11)

Here b = 0.005 and a = 1/b. b was chosen experimentally: we found that for original images most J_{i,i+1} scores are above 0.005, while for compressed images most J_{i,i+1} scores are lower than 0.005. a is the normalization parameter that ensures that when FCM = b, f(FCM) = 1.

In previous work [34–35,43], it was found that the relationship between occlusion and iris region quality is nonlinear and can be represented by an exponential relationship. Extending this idea to dilation, it was determined by experiment that dilation also has an exponential relationship to iris region quality. These observations follow from the results showing that slight occlusion or dilation has little or no effect on iris recognition, but severe cases of either greatly reduce performance. Hence, the g function is calculated as

g(O) = (1 - e^{-\lambda (1 - O)}) / \kappa.    (3.12)

Here, κ = 0.9179 and λ = 2.5. For occlusion, if λ is too large, occlusion carries too little weight; if λ is too small, occlusion is penalized too heavily. Based on our observations, a normal open eye averages about 10–20% occlusion. More occlusion will reduce the recognition accuracy. We also observed that with 50% occlusion it is still possible to perform iris recognition, but the accuracy would be much lower (around a 25% penalty). κ is used to normalize g(0) between zero and one and varies as λ varies. Similar to occlusion, dilation has a nonlinear relationship to the recognition accuracy. The h function is calculated as

h(D) = \begin{cases} 1, & D \le 0.6, \\ e^{-\gamma (D - \xi)}, & 0.6 < D \le 1. \end{cases}    (3.13)

Here, ξ = 0.6 and γ = 40. For dilation, ξ is selected based on the dilation behavior of a normal eye. In general, a healthy eye has about 45% dilation under regular illumination, but some eyes may have up to 60% dilation in dark conditions. Even in such a situation, iris recognition can be performed accurately. Therefore, dilation below 60% has little effect on the quality of the iris region. Usually only under the effect of medication or a special medical condition does the eye dilate above 60%, which affects the iris patterns. γ is selected to reduce the quality score quickly as dilation increases above 60% and to assign a score of zero for dilation above 75%. This appropriately reflects the fact that, as the iris muscle contracts in dilation, much of the iris pattern is completely eliminated and cannot be recovered through any image processing.

Compared to the selective feature information approach, the g(·) parameters have been adjusted in the proposed method for the occlusion measure. Fig. 9(a) compares the changes of the occlusion measure. It shows that the new function decreases more than that of the selective feature information approach. While only a selected portion is used as the measure in the selective feature information approach, the proposed method uses the entire iris region for the measurement, so the feature information is more affected by occlusion.
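The fusion of Eqs. (3.10)–(3.13) can then be written directly, using the parameter values given in the text (b = 0.005, λ = 2.5, κ = 0.9179, ξ = 0.6, γ = 40). This is only a restatement of the formulas, not the authors' released implementation.

```python
import numpy as np

def f_fcm(fcm, b=0.005):
    """Eq. (3.11): linear ramp with a = 1/b, so f(b) = 1 and larger FCM saturates at 1."""
    return min(fcm / b, 1.0)

def g_occlusion(occlusion, lam=2.5, kappa=0.9179):
    """Eq. (3.12): exponential penalty for the occlusion fraction in [0, 1]; kappa makes g(0) = 1."""
    return (1.0 - np.exp(-lam * (1.0 - occlusion))) / kappa

def h_dilation(dilation, xi=0.6, gamma=40.0):
    """Eq. (3.13): no penalty up to 60% dilation, sharp exponential drop-off beyond."""
    return 1.0 if dilation <= xi else float(np.exp(-gamma * (dilation - xi)))

def quality_score(fcm, occlusion, dilation):
    """Eq. (3.10): fused quality score Q = f(FCM) * g(O) * h(D)."""
    return f_fcm(fcm) * g_occlusion(occlusion) * h_dilation(dilation)
```

As an illustrative check with made-up inputs, an image with FCM = 0.008, 15% occlusion, and 45% dilation would receive Q ≈ 1.0 × 0.96 × 1.0 ≈ 0.96.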
4. Experimental results

In our experiments, we used images from the iris challenge evaluation (ICE) 2005 [26] database, which has 2953 iris images from 132 individuals. Within this database, 1426 images are right eye images from 124 individuals, and 1527 images are left eye images from 120 individuals. Example images are shown in Fig. 7. The left eye database has more images than the right eye database. In addition, there are some off-angle iris images in the left eye database, which makes the iris recognition more challenging.
Fig. 7. Example of images from ICE 2005 database.
4.1. The proposed feature correlation measure vs. compression rate

In this research, our focus is to measure the feature quality after segmentation and feature extraction. To better compare how the feature quality changes, we use the same segmentation mask for the image at different compression rates. In this way, the occlusion and dilation scores are the same, and only the feature correlation score changes due to compression. Fig. 8 shows an example of how compression affects the feature correlation score. We plot J_{i,i+1} from Eq. (3.7). It shows that as the compression rate increases, the J_{i,i+1} curve becomes smoother and closer to 0. This shows that the proposed feature correlation score can be used to measure the effect of image compression.

Fig. 8. Feature correlation score vs. compression rate. The first column shows the same iris image at different compression rates (original, 25:1, 50:1, 75:1, and 100:1); the second column shows the corresponding row J_{i,i+1} scores plotted against the row number.

Fig. 9. GAR vs. FAR with quality measure using the selective feature information based method: (a) ICE right eyes and (b) ICE left eyes.
4.2. Performance prediction verification for non-compressed images

To evaluate the effectiveness of the quality measure, we divide the right eye and left eye datasets equally into five groups based on calculated quality, from best quality scores to worst. It has been shown in Fig. 1 that the selective feature information approach performs better than classical approaches such as the convolution matrix method [2], the spectrum energy method [6], and the 2-D Mexican hat wavelet method [4]. Of the methods tested, only the selective feature information approach was highly correlated with recognition accuracy. Therefore, in this experiment, we compare the proposed method only with the selective feature information approach. We generate the receiver operating characteristic (ROC) curve [30] of each quality group using the similarity matrix. Fig. 9(a) and (b) shows the ROC curves using the selective feature information approach for ICE right eyes and left eyes, respectively. They include the five ROC curves from the five quality groups and the ROC curve from the entire dataset. Fig. 10(a) and (b) shows the ROC curves using the proposed method for ICE right eyes and left eyes, respectively. Both Figs. 9 and 10 show that as iris image quality decreases, recognition performance decreases, where good performance means a high genuine accept rate at a low false accept rate. Fig. 11(a) and (b) compares the equal error rates (EER) of the five groups for the right and left eyes, respectively. These figures show that both the selective feature information approach and the proposed method are well correlated with recognition accuracy, since the groups of images with higher quality scores perform better than the groups with lower quality scores. For the ICE left eyes, the proposed method separates quality groups 2 and 3 better than the selective feature information based method. This shows that the proposed method works very well with non-compressed images.
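As an illustration of the grouping protocol described above, the sketch below sorts images by quality score and splits them into five roughly equal-sized groups; quality_scores and image_ids are hypothetical arrays, and the EER for each group would then be computed from the corresponding sub-matrix of the similarity matrix.

```python
import numpy as np

def split_into_quality_groups(image_ids, quality_scores, n_groups=5):
    """Sort images from best to worst quality and split into roughly equal-sized groups."""
    order = np.argsort(quality_scores)[::-1]              # descending: best quality first
    groups = np.array_split(np.asarray(image_ids)[order], n_groups)
    return groups                                          # groups[0] = best, groups[-1] = worst
```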
Fig. 10. GAR vs. FAR for the different quality groups using the proposed method (ICE right eyes and ICE left eyes).

Fig. 11. The EER with quality measure using non-compressed images: comparison of the selective feature information based method with the proposed method (ICE right eyes and ICE left eyes). The X axis represents the quality group (1 = best to 5 = worst), and the Y axis represents the EER. The blue line represents the selective feature information method, and the red line represents the proposed method.

4.3. Quality measure for compressed images

The compressed databases [47] were formed using the JPEG2000 standard with compression ratios of 25:1, 50:1, 75:1, and 100:1. To ensure that segmentation would not be a factor, we used the segmentation mask from the original image to mask its compressed versions. Fig. 12 plots the false acceptance rate (FAR) and false rejection rate (FRR). Each pair of curves (FAR and FRR) represents the comparison of one compressed database against the original database. We find that as the compression ratio increases, the FAR curve remains virtually unchanged while the FRR curve moves further to the right. This shows that the compression process would
reduce the recognition accuracy by increasing the FRR. This is due to the reduction of image quality. It is very challenging to measure image quality under compression. The compression process suppresses the true patterns while introducing artificial patterns; in particular, areas with smooth patterns are more vulnerable to compression, and it is very challenging to automatically discriminate the true patterns from the artificial ones. Table 1 presents the mean quality scores for the ICE left and right eyes with no compression and various levels of JPEG2000 compression. The mean quality did not change for the selective feature information approach. This is due to the way feature information is compared to a uniform distribution: the selective feature information approach compares the feature patterns with the uniform distribution and selects only the portion with the most difference as a measure. This is vulnerable to the compression effects (the introduced artificial patterns), and the approach would confuse the artificial patterns with the feature patterns, since both generate a higher information distance when measured against a uniform distribution. The proposed method shows a decrease in quality score as the compression ratio is increased, which is consistent with observation. This means that the proposed method is sensitive to the compression process.
Table 1
Mean quality scores for the selective feature information based method and the proposed method.

                      Selective feature       Proposed
                      information method      method
Original              0.9677                  0.9255
Compression 25:1      0.9677                  0.8564
Compression 50:1      0.9677                  0.7916
Compression 75:1      0.9677                  0.7576
Compression 100:1     0.9677                  0.7306

Table 2
Correlation between the selective feature information based method and the proposed method.

                      Correlation score
Original              0.8764
Compression 25:1      0.5569
Compression 50:1      0.4219
Compression 75:1      0.3797
Compression 100:1     0.3552

Fig. 12. FARs vs. FRRs for each compression ratio vs. the original images.
Example Image 1
  Compression rate:           Original   25:1     50:1     75:1     100:1
  Selective feature method:   0.9994     0.9994   0.9994   0.9994   0.9994
  Proposed method:            0.9975     0.3874   0.2660   0.2539   0.2008

Example Image 2
  Compression rate:           Original   25:1     50:1     75:1     100:1
  Selective feature method:   0.9985     0.9985   0.9985   0.9985   0.9985
  Proposed method:            0.9935     0.9935   0.9935   0.8929   0.7574

Fig. 13. Example quality scores comparing the proposed method and the selective feature information based method. The first row shows the compression rate ("original" means no compression), the second row shows the quality score using the selective feature information based method, and the third row shows the quality score using the proposed method.
The reason the proposed method responds more consistently to compressed images is that it measures the correlation between nearby features. The artificial patterns introduced by the compression process are generally more correlated than the natural iris patterns, and the suppression of the natural patterns in the compressed images also reduces the quality score. As a result, the compressed images receive lower quality scores from the proposed method, even though artificial patterns were introduced. Table 2 presents the correlation between the selective feature information approach and the proposed quality measure for the original and compressed ICE images. It shows that as the compression ratio is increased, the results become less correlated. Combining Tables 1 and 2, we can see that the proposed method is, in fact, necessary to achieve a more reasonable result for the compressed images. Fig. 13 shows some example images and their quality measures using the selective feature information approach and the proposed method. Fig. 13(a) shows an image with smooth patterns; it can be seen that the artifacts introduced by compression can be mistaken for iris patterns that were not previously present. Fig. 13(b) shows that even an eye with very discriminating patterns loses quality at higher levels of image compression. The experimental results show that the proposed method is highly correlated with the recognition accuracy of both non-compressed and compressed images.
5. Conclusions

In this paper, we proposed the feature correlation evaluation approach for iris image quality measure. For non-compressed images, the proposed method works well because using feature information as a measure can not only describe the linear relationship of neighboring features but also capture higher-order statistics of the correlations of iris patterns. For compressed images, the proposed method can discriminate the artificial patterns from the intrinsic iris patterns by taking compression effects into account. As a result, the proposed feature correlation measure can better discriminate between natural iris patterns and artificial patterns. In addition, the proposed method uses the segmentation and feature extraction outputs of the iris recognition system. In this way, it can easily be incorporated into the iris recognition system and work efficiently. The experimental results show that the proposed method can objectively determine an image quality measure for both non-compressed and compressed images.

Acknowledgments

The authors would like to thank the associate editor and the anonymous reviewers for their constructive comments. The authors would also like to thank N. Luke Thomas for his help. The research in this paper uses the ICE database provided by NIST [26]. This project is sponsored by the ONR Young Investigator Program (award number: N00014-07-1-0788) and the National Institute of Justice (award number: 2007-DE-BX-K182).

References
[1] T. Mansfield, G. Kelly, D. Chandler, J. Kane, Biometric Product Testing Final Report, issue 1.0, National Physical Laboratory of UK, 2001.
[2] J. Daugman, How iris recognition works, IEEE Transactions on Circuits and Systems for Video Technology 14 (1) (2004) 21–30.
[3] International Biometric Group, Independent Testing of Iris Recognition Technology: Final Report, May 2005.
[4] Y. Chen, S.C. Dass, A.K. Jain, Localized iris image quality using 2-D wavelets, in: IEEE International Conference on Biometrics, 2006.
[5] N.D. Kalka, J. Zuo, N.A. Schmid, B. Cukic, Image quality assessment for iris biometric, Proceedings of the SPIE 6202 (2006).
[6] L. Ma, T. Tan, Y. Wang, D. Zhang, Personal identification based on iris texture analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (12) (2004) 1519–1533.
[7] K.W. Bowyer, K. Hollingsworth, P.J. Flynn, Image understanding for iris biometrics: a survey, Journal of Machine Vision and Applications 110 (2) (2008) 281–307.
[8] J.R. Matey, K. Hanna, R. Kolczynki, D. LoIacono, S. Mangru, O. Narodisky, M. Tinker, T. Zappia, W.Y. Zhao, Iris on the Move: acquisition of images for iris recognition in less constrained environments, Proceedings of the IEEE 94 (11) (2007) 1936–1947.
[9] G. Zhang, M. Salganicoff, Method of measuring the focus of close-up image of eyes, Technical Report 5953440, U.S. Patent, 1999.
[10] X. Zhu, Y. Liu, X. Ming, Q. Cui, A quality evaluation method of iris images sequence based on wavelet coefficients in region of interest, in: Proceedings of the Fourth ICCIT, 2004, pp. 24–27.
[11] C. Belcher, Y. Du, Information distance based selective feature clarity measure for iris recognition, Proceedings of SPIE 6494 (2007) 64940E1–E12.
[12] C. Belcher, Y. Du, Feature information based quality measure for iris recognition, in: IEEE International Conference on System, Man, and Cybernetics, 2007.
[13] C. Belcher, Y. Du, A selective feature information approach for iris image-quality measure, IEEE Transactions on Information Forensics and Security 3 (3) (September 2008) 572–577.
[14] A. Shnayderman, A. Gusev, A.M. Eskicioglu, An SVD-based grayscale image quality measure for local and global assessment, IEEE Transactions on Image Processing 15 (2) (2006) 422–429.
[15] H.R. Sheikh, A.C. Bovik, Image information and visual quality, IEEE Transactions on Image Processing 15 (2) (2006) 430–444.
[16] H.R. Sheikh, A.C. Bovik, G. de Veciana, An information fidelity criterion for image quality assessment using natural scene statistics, IEEE Transactions on Image Processing 14 (12) (December 2005) 2117–2128.
[17] C. Chang, Y. Du, J. Wang, P. Thouin, C. Yang, A survey and comparative analysis of entropy and relative entropic thresholding techniques, IEE Proceedings of Vision, Image, and Signal Processing 153 (6) (2006) 837–850.
[18] S. Winkler, Issues in vision modeling for perceptual video quality assessment, Signal Processing 78 (1999) 231–252.
[19] A.P. Bradley, A wavelet visible difference predictor, IEEE Transactions on Image Processing 5 (5) (May 1999) 717–730.
[20] Y. Du, C.-I. Chang, P.D. Thouin, Automated system for text detection in individual video images, Journal of Electronic Imaging 12 (3) (2003) 410–422.
[21] Y. Du, C.-I. Chang, P.D. Thouin, Unsupervised approach to color video thresholding, Optical Engineering 43 (2) (2004) 282–289.
[22] Z. Wang, A. Bovik, A universal image quality index, IEEE Signal Processing Letters 9 (3) (March 2002) 81–84.
[23] X. Li, D. Tao, X. Gao, W. Lu, A natural image quality evaluation metric, Signal Processing 89 (4) (2009) 548–555.
[24] X. Gao, W. Lu, D. Tao, X. Li, Image quality assessment based on multiscale geometric analysis, IEEE Transactions on Image Processing 18 (7) (2009) 1409–1423.
[25] J. Daugman, C. Downing, Effect of severe image compression on iris recognition performance, IEEE Transactions on Information Forensics and Security 3 (1) (March 2008).
[26] P.J. Phillips, K.W. Bowyer, P.J. Flynn, X. Liu, W.T. Scruggs, The iris challenge evaluation 2005, in: Proceedings of the Second IEEE International Conference on Biometrics: Theory, Applications, and Systems, 2008.
[27] S. Crihalmeanu, A. Ross, R. Govindarajan, L. Hornak, S. Schuckers, A centralized web-enabled multimodal biometric database, in: Biometric Consortium Conference (BCC), Crystal City, VA, September 2004.
[28] CASIA Iris Image Database: http://www.sinobiometrics.com.
[29] L. Masek, P. Kovesi, MATLAB Source Code for a Biometric Identification System Based on Iris Patterns, The University of Western Australia, 2003.
[30] Y. Du, C.-I. Chang, 3D combinational curves for accuracy and performance analysis of positive biometrics identification, Optics and Lasers in Engineering 46 (6) (2008) 477–490.
[31] ICE 2006 Results: http://iris.nist.gov/ice/ICE_Mining_PJF_080417.pdf, accessed March 2009.
[32] Y. Du, R.W. Ives, D.M. Etter, T.B. Welch, Use of one-dimensional iris signatures to rank iris pattern similarities, Optical Engineering 45 (3) (2006) 037201-1–037201-10.
[33] V.A. Pozdin, Y. Du, Performance analysis and parameter optimization for iris recognition using log-Gabor wavelet, SPIE Electronic Imaging 6491 (2007) 649112-1–649112-11.
[34] Y. Du, B. Bonney, R.W. Ives, D.M. Etter, Analysis of partial iris recognition, Proceedings of the SPIE Defense 5779 (March 2005) 31–40.
[35] Y. Du, B. Bonney, R.W. Ives, D.M. Etter, R. Schultz, Analysis of partial iris recognition using a 1D approach, in: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. II, March 2005, pp. 961–964.
[36] Y. Du, R.W. Ives, D.M. Etter, Iris Recognition, the Electrical Engineering Handbook, third ed., CRC Press, Boca Raton, FL, 2005.
[37] C. Belcher, Y. Du, Region-based SIFT approach to iris recognition, Optics and Lasers in Engineering 47 (1) (2009) 139–147.
[38] Y. Du, Review of iris recognition: cameras, systems, and their applications, Sensor Review 26 (1) (2006) 66–69.
[39] K. Miyazawa, K. Ito, T. Aoki, K. Kobayashi, H. Nakajima, An effective approach for iris recognition using phase-based image matching, IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (10) (2008) 1741–1756.
[40] C. Tisse, L. Martin, L. Torres, M. Robert, Personal identification technique using human iris recognition, in: Proceedings of the Vision Interface, 2002, pp. 294–299.
[41] H.P. Proenca, Towards non-cooperative biometric iris recognition, Ph.D. Dissertation, Department of Computer Science, University of Beira Interior, 2006.
[42] M. Vatsa, R. Singh, A. Noore, Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing, IEEE Transactions on Systems, Man, and Cybernetics, Part B 38 (4) (2008) 1021–1035.
[43] Z. Zhou, Y. Du, C. Belcher, Transforming traditional iris recognition systems to work on non-ideal situations, IEEE Transactions on Industrial Electronics 56 (8) (2009) 3203–3213.
[44] J. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (11) (November 1993) 1148–1161.
[45] Y. Du, C.-I. Chang, H. Ren, F.M. D'Amico, J. Jensen, A new hyperspectral discrimination measure for spectral similarity, Optical Engineering 43 (8) (2004) 1777–1786.
[46] T. Cover, J. Thomas, Elements of Information Theory, Wiley, New York, 1991.
[47] R.W. Ives, D.A.D. Bishop, Y. Du, C. Belcher, Effects of image compression on iris recognition performance and image quality, in: IEEE Workshop on Computational Intelligence in Biometrics: Theory, Algorithms, and Applications, 2009.