QDF: A Face Database with Varying Quality

Shubhobrata Bhattacharya (a), Suparna Rooj (a), Aurobinda Routray (b)

(a) Advanced Technology Development Center, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
(b) Department of Electrical Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India

Email addresses: [email protected] (Shubhobrata Bhattacharya), [email protected] (Suparna Rooj), [email protected] (Aurobinda Routray)
Abstract

Face recognition is one of the well-researched areas of biometrics. Although many researchers have shown considerable interest, problems persist because of unpredictable environmental factors affecting the acquisition of real-life face images. One of the major factors behind the poor recognition performance of most face recognition algorithms is the unavailability of a proper training dataset that reflects real-life scenarios. In this paper, we propose a face dataset of about 100 subjects with varying degrees of quality in terms of distance from the camera, ambient illumination, pose variations, and natural occlusions. This database can be used to train systems with real-life face images. The face quality of this dataset has been quantified with popular Face Quality Assessment (FQA) algorithms. We have also tested the database with standard face recognition, super-resolution image processing, and fiducial point estimation algorithms. The database is available to the research community through https://sites.google.com/view/quality-based-distance-face-da/.

Keywords: Face Quality Assessment, Face Recognition, Super-resolution, Fiducial point estimation, Face image database.
1. Introduction

Face recognition is a popular field of research in image processing. The human face is accepted as a biometric because of its universality, non-intrusiveness, and easy availability [1], [2]. At the present state of the art, face detection [3] and recognition can be achieved with reasonable accuracy and speed under variations in illumination, pose, and partial occlusion [4], [1], [5], [6], [7], [8]. Despite the efficacy of automatic face recognition algorithms, their performance is strongly affected by variations in illumination, pose, occlusion, and expression. This situation becomes even more critical in scenarios where the subject is unaware of or unwilling for image acquisition and the environment
is uncontrolled. Several methods have been proposed in the literature to overcome the quality issues of the face image. However, the recognition performance of any classifier improves markedly with image quality. For example, when a person approaches a surveillance camera, a sequence of frames is captured. Face recognition algorithms, however, may not work well on all of these frames; only a few of them are enough to convey the information necessary for identifying the individual. The quality of the face in each frame therefore has to be evaluated to ensure a better chance of recognition.
Apart from a few good-quality frames, most of the other frames may be discarded, as they may contain face images with challenging pose, partial occlusion, motion blur, poor illumination, or low resolution. Therefore, there is a need for a database that aptly replicates the real-life surveillance situation, which will further help in the development and assessment of face recognition algorithms for such scenarios.
In this work, we present a face dataset with varying degrees of face quality. The facial images of the proposed dataset have been validated with the state-of-the-art Face Quality Assessment (FQA) algorithms in [17], [18], [19], [20]. Further, for validation, the dataset has been tested against a wide spectrum of algorithms that work on facial images. These algorithms belong to three categories: (i) face recognition [21], [22], [15], [13], (ii) super-resolution-based
reconstruction [23], [24], and (iii) fiducial point estimation [25], [26].
The paper is organized as follows. Section 2 presents a review of earlier published databases. Section 3 discusses the database content and its characteristics. Section 4 describes the formation procedure of the QDF. Section 5 reviews state-of-the-art algorithms for face quality assessment. Section 6 presents the protocol followed during the experiments and the formation of the dataset. The evaluation of the database based on face quality metrics is carried out in Section 7.1. In Section 7.2 we compare the face recognition accuracy with the average face quality score (AFQS) for each subset of QDF. The results of the super-resolution and fiducial point estimation experiments are detailed in Sections 7.3 and 7.4 respectively. Section 8 describes how to access the database, and Section 9 concludes the paper.
2. Existing Datasets
There are a large number of face databases available to researchers for testing algorithms in the area of face recognition. Table 1 gives a non-exhaustive list.(1) These databases range in size, scope, and purpose. The photographs in many of these databases were captured by teams of researchers working in the area of face recognition. Most of these databases were constructed using one-time images without considering many variations in pose, illumination, resolution, and partial occlusion.

(1) Most of the information is obtained from the Face Recognition Homepage, maintained by Mislav Grgic and Kresimir Delac (http://www.face-rec.org/).
Such an acquisition gives the experimenter direct control over the parameters of variability in the database. The high accuracies of the state-of-the-art face recognition methods reported in the recent literature are mostly obtained on the databases mentioned above [27]. However, real-life surveillance is subject to challenges of varying degrees of pose, occlusion, illumination, and resolution, and these factors naturally affect the quality of the face image. The standard algorithms do
not work as expected on such images. Therefore, we have attempted to replicate the real-life surveillance situation by systematically acquiring images with natural degradation in image quality. The SCface dataset made a similar attempt and is widely accepted among the research community. However, SCface has some shortcomings: it does not encompass faces captured at varying distances or resolutions to a great extent, and the effects of natural occlusion and motion blur also remain unaddressed. In the proposed dataset, we have included these cases. In the recent past, an extensive dataset named QMUL-SurvFace [46] appeared in the literature. It covers large-scale face quality variations in the surveillance environment; however, unlike our proposed dataset, it does not capture faces at systematic distances, and the number of sample variations per subject in QMUL-SurvFace is much smaller than in the QDF dataset (which averages 520 images per subject).
Table 1: Face databases. This table shows some of the face databases available at the time of writing. The list is not meant to be exhaustive, nor to describe the databases in detail, but merely to provide a sampling of the types of databases that are available. Wherever possible, a peer-reviewed paper or technical report has been cited; otherwise, a citation referring to the web page for the database is given when available.

| Database Name | Subjects | Total Images | Gender | Pose (Rotation) | Natural Occlusion | Systematic Distance from Camera | Eye Glasses | Natural Illumination | Natural Background | Uncontrolled Environment |
|---|---|---|---|---|---|---|---|---|---|---|
| AR Face Database [28] | 116 | 4000 | Y | N | Y | N | Y | Y | N | N |
| AT&T Database [29] | 40 | 400 | Y | N | N | N | Y | N | N | N |
| BioID Face Database [30] | 23 | 1521 | Y | Y | N | N | Y | N | Y | Y |
| Caltech 10000 Web Faces [31] | 27 | 450 | Y | Y | N | N | Y | Y | Y | Y |
| CAS-PEAL Face Database [32] | 1040 | 99,594 | Y | Y | N | N | Y | N | N | N |
| FERET Databases [33] | 1199 | 14,126 | Y | Y | N | N | Y | N | N | N |
| Georgia Tech Face Database [34] | 50 | 750 | Y | Y | N | N | Y | N | Y | N |
| Indian Face Database [35] | 40 | >440 | Y | Y | N | N | Y | N | N | N |
| PIE Database, CMU [36] | 68 | 41,368 | Y | Y | N | N | Y | Y | N | N |
| UMIST Face Database [37] | 20 | 564 | Y | Y | N | N | Y | N | N | N |
| NLPR Face Database [38] | 22 | 450 | Y | N | N | N | Y | N | N | N |
| University of Essex, UK [39] | 395 | 7900 | Y | N | N | N | Y | N | N | N |
| Yale Face Database [40] | 15 | 165 | Y | N | N | N | Y | Y | N | N |
| Yale Face Database B [41] | 10 | 5760 | Y | N | N | N | Y | Y | N | N |
| LFW [42] | 5749 | 13,233 | Y | Y | Y | N | Y | N | Y | Y |
| Celeb Faces [43] | 5346 | 87,628 | Y | Y | N | N | Y | N | Y | Y |
| FEI [44] | 200 | 2800 | Y | Y | N | N | Y | N | N | N |
| SCface [45] | 130 | 4,160 | Y | Y | N | Y | Y | Y | N | Y |
| QMUL-SurvFace [46] | 15,573 | 463,507 | Y | Y | N | N | Y | Y | Y | Y |
| QDF (Proposed) | 100 | 52,000 | Y | Y | Y | Y | Y | Y | Y | Y |
3. Characteristics of QDF Dataset

The proposed QDF database consists of faces with variations in pose, resolution, illumination, and partial occlusion. The unique feature of QDF is that all images have been evaluated with respect to standard quality measures. This is perhaps the first time a face database with state-of-the-art quality indices has been made available to the research community. Researchers need not resort to artificial means of degrading face quality while testing algorithms against quality variations due to distance, pose, illumination, occlusion, etc. The salient features of our QDF database can be outlined as follows:

• Images are taken under natural, uncontrolled illumination conditions.
• A single camera has been used to capture the images from various distances. The subjects have been asked not to look into the camera or at any fixed point, to replicate typical real-life surveillance scenarios.

• The database has been constructed from facial image frames of natural video sequences recorded while the subjects walk towards the camera.
• The database contains images of 100 subjects, with 32 variants with respect to distance, pose, partial occlusion, and illumination.

• Every face has been marked with a Face Quality Score (FQS). The computation of this score is described in the following sections.
4. Creation of QDF Database

The experiment was performed with different backgrounds and arenas. The camera was set up at a fixed location. The subjects were asked to stand at a set of specific distances and perform head movements so that variations in the facial images could be captured.

4.1. Camera Specification

A Nikon D-5200 camera has been used for recording. It has a CMOS sensor with a resolution of 24.1 megapixels and an AF-S 18-55 mm lens. It can record video at 50 frames per second at resolutions up to 1920 x 1080. The camera was set to fixed focus and placed on a 1.5 m stand.
4.2. Illumination and Background

The experiment has been conducted outdoors, with different background variations and different lighting ambiance. It was carried out in daylight at different hours of the day to cover a uniform spread of illumination conditions. The choice of natural backgrounds is motivated by the goal of achieving realistic surveillance situations. Background variations, in theory, should not influence the performance of face recognition algorithms provided that the face region is correctly segmented from the background. However, in real-world situations, cameras work in automatic mode, which
may generate noise grains and white-balance problems. These factors change the face appearance under different imaging conditions, particularly with general-purpose video cameras. Therefore, it is necessary to mimic this situation in the database.

4.3. Subjects

One hundred and ten subjects in the age group of 18 to 30 years, from different regions
of India, participated in the experiment. Verbal instructions were given to the participants to change their physical positions and then to perform the head pose variations and occlusion acts.

4.4. Distance from the Camera

The variations due to distance have been incorporated by asking the subjects to stand at certain positions. This ensures natural blurriness in the captured facial images. These types of images
are different from the artificial blurriness generally added by researchers to clean images from standard databases for experimentation. Every subject is asked to stand at 8 different distances, i.e., 1 m, 3 m, 5 m, 7 m, 9 m, 11 m, 13 m, and 15 m from the camera position (Fig. 1). At every specified distance, each subject is asked to perform some predefined head movements, as described in the following section.
Figure 1: Multiple positions of the subject for image acquisition.
4.5. Pose Variation

Different head positions cause hindrances in capturing the frontal face in real-life situations [47]. The human head is assumed to be limited to three degrees of freedom in pose, which can be characterized by pitch, roll, and yaw, as shown in Fig. 2. In terms of facial orientation, yaw and pitch are "off-plane" rotations while roll is an "in-plane" rotation. For our experiments, we
captured yaw and pitch. We did not consider the roll of the face because, in such cases, the face can be re-oriented using state-of-the-art algorithms, as illustrated in the sketch below. Subsequently, we have grouped the head pose images by angles of rotation of approximately 15°, 30°, 45°, and more than 45° for yaw. Similarly, for pitch, we grouped the images at approximately 15° and more than 15°, up and down. The naming of these subsets is given in Section 6.1 of this paper. Sample images for different head positions in the QDF dataset are shown in Fig. 3.
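Since roll is an in-plane rotation, it can be undone without any 3D reasoning, which is why we exclude it. A minimal re-orientation sketch in Python is given below; the roll-angle estimate (e.g., from the angle of the eye line) is assumed to be available and is not part of the QDF tooling.

```python
import cv2


def correct_roll(face_img, roll_deg):
    """Undo in-plane (roll) rotation by rotating the image back.

    face_img : BGR image as a NumPy array.
    roll_deg : estimated roll angle in degrees, e.g. the angle of the
               line joining the two eye centers (estimation not shown);
               the sign convention depends on how the angle was measured.
    """
    h, w = face_img.shape[:2]
    center = (w / 2.0, h / 2.0)
    # 2x3 affine matrix rotating by roll_deg about the image center.
    rot = cv2.getRotationMatrix2D(center, roll_deg, 1.0)
    return cv2.warpAffine(face_img, rot, (w, h), flags=cv2.INTER_LINEAR)
```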
Figure 2: The three degrees of freedom of a human head can be described by the egocentric rotation angles (a) Roll (b) Yaw (c) Pitch.
Figure 3: Sample images of pose variation of faces at different distances from the camera. The first and second rows show the yaw for left and right movement at different distances from the camera. The third and fourth rows show the pitch movement for up and down rotation of the head at different camera distances.
4.6. Occlusion

Facial occlusions are a challenge for face detection and recognition algorithms. Therefore, we have included natural occlusions such as partial covering of the face by a hand or other obstacles. The subjects were asked to cover their faces as in Fig. 4. Subjects who wear spectacles did so without removing them.
Figure 4: Sample images of occluded faces at different distances from the camera.
4.7. Face Cropping and Face-Log Generation

Face detection is the primary step in face recognition. The popular Viola-Jones [3] face detection method has been found to have an overall success rate of 75.39% in detecting the faces in our recorded videos. Off-plane rotation of the head and occlusion are primarily responsible for the poor performance of this detector. For faces distant from the camera, detection is erroneous due to the low quality of the face images; with occlusion or pose variations, the detection performance of this classifier is similarly poor. In such cases, for the sake of the database, we manually cropped the images. Likewise, where the state-of-the-art techniques fail to localize the ground-truth positions of the eyes, nose, and lip corners, the face parts have been annotated manually.
Face detection followed by cropping creates a complete collection of a sequence of face images called a face-log. The term was first coined in [48] and was further developed in [49] and [19]. Usually, face-logs result from face cropping over a video sequence, where the relatively best-quality face(s) are selected for face recognition and poor-quality images are dismissed. This screening process enhances the efficiency of the recognition algorithm and reduces the computational burden. For applications like super-resolution, the poor-quality face images of the face-log are utilized to match the ground truth. In our database, face-logs are formed from the video sequences taken for each subject at different positions from the camera. For the sake of our dataset evaluation, we divided the dataset into three groups of face-logs: (i) frontal, (ii) non-frontal, and (iii) occluded. Figs. 1, 3, and 4 show the respective face-logs.
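A basic version of this detection-and-cropping step can be reproduced with the OpenCV Haar-cascade implementation of the Viola-Jones detector [3]. The sketch below is illustrative only: the detection parameters are typical defaults, and the manual correction described above is omitted.

```python
import cv2

# Haar-cascade (Viola-Jones style) frontal face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def build_face_log(video_path):
    """Return a list of cropped face images (a face-log) from one video."""
    face_log = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # scaleFactor=1.1 and minNeighbors=5 are common defaults, not tuned for QDF.
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            face_log.append(frame[y:y + h, x:x + w])
    cap.release()
    return face_log
```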
5. Face Quality Assessment

The term Face Quality Assessment (FQA) was coined in [50]. Over time, several techniques have been proposed in the literature that discuss the benefits of using image quality factors for
various face recognition related problems, which are often affected by multiple image quality factors like sharpness, illumination, focus, and occlusion. Subasic et al. [51] present a system to validate face images and to check whether an image is appropriate for use in document identification. The standards for maintaining a certain quality in facial images have been defined by the International Civil Aviation Organization (ICAO) [52], which defines thresholds and ranges for the parameters of the features of a face image. Fronthaler et al. [53] proposed an orientation tensor with a set of symmetry descriptors to assess the quality of face images. Gao et al. [54] present an approach
for the standardization of facial image quality and propose facial-symmetry-based methods for its assessment, by which facial asymmetries caused by non-frontal lighting and improper facial pose can be measured. Zamani et al. [55] worked on the issues of shadows, hot-spots, video artifacts, salt-and-pepper noise, and movement blurring while improving image quality for face detection. Sellahewa et al. [56] directly used the universal image quality index for measuring the
face image quality under illumination distortion, compared to a reference face image of high quality. In [57], Bhatt et al. proposed a framework for quality-based classifier selection and then compared the recognition performance to regular fusion cases. Wong et al. proposed a patch-based probabilistic model for quality assessment; this model has been standardized by training it on good-quality reference face images, each having a frontal pose, uniform illumination, and neutral expression [58]. In [59], a face quality prediction method has been proposed using human assessment, where a similarity score is computed to assign a quality to the face. A regression model has been built on features from a CNN, as proposed in [60].

5.1. Evaluation of QDF with Contemporary Methods

We have chosen two of the recent and popular algorithms to assess our dataset. The selected
algorithms are discussed briefly as follows. In the work of Nasrollahi and Moeslund [19], four facial features are used to estimate face quality: head pose, sharpness, brightness, and resolution. The head pose is categorized as frontal (between -15° and +15°), right view (beyond +15°), and left view (beyond -15°), and relative scores are assigned to the head poses with respect to the frontal position. Blur is obtained as the degree of low-pass filtering required on the original image to match the query image. Brightness is defined by means of the illumination component of the face in the YCbCr color space, and a weighted empirical relation gives the brightness score. Resolution is strongly affected by the distance between the camera and the subject; to include this feature, the resolution is taken as a relative score with respect to the best face in the face-log. The final quality score is obtained as the weighted mean of the individual scores obtained from pose, sharpness, brightness, and resolution. The empirical weights have been chosen as in [19]. We trained the model using the CAS-PEAL dataset [32].

In [20], face quality is assessed using rank-based learning. Features such as Local Binary Patterns (LBP), Gabor wavelets (GW), Histograms of Oriented Gradients (HOG), Gist, and CNN-based deep features are extracted from the images, and a sequence of rankings is performed to obtain the final quality score. Let f(·) be the function that transforms a face image into a feature vector, and define a linear quality assessment function S(I) = w^T f(I), where the goal is to learn the rank weight w. Let S_v = [S_1(I), S_2(I), ..., S_m(I)] be the level-1 rank-based scores for the different features; in this case m = 5. Define the level-2 quality assessment function as S_K(I) = w_k^T f_Φ(S_v), in which f_Φ(·) is the mapping function of a polynomial kernel. The value of S_K(I) is then normalized to [0, 100] and used as the final rank-based quality score (RQS) of I. The training of this algorithm is based on three types of data. Good faces consist of face images selected from databases collected in controlled environments, such as FERET and FRGC. The second training set consists of face images selected from two real-world face databases: LFW and AFLW. The last training set consists of non-face natural images on which the face detector generates false-positive detections. We used the pretrained model of [20] available on the Internet.(2)

(2) https://jschenthu.weebly.com/projects.html
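For concreteness, the two-level scoring of [20] can be restated in a few lines of NumPy. This is a schematic re-statement of the equations above, not the authors' released code: the explicit degree-2 polynomial map and the logistic normalization are illustrative choices on our part, and the learned weights and the five feature extractors are assumed to be available.

```python
import numpy as np


def rqs(image, extractors, level1_weights, level2_weight):
    """Two-level rank-based quality score (RQS), after Chen et al. [20].

    extractors     : list of m=5 functions f_i(image) -> feature vector
                     (LBP, Gabor, HOG, Gist, CNN features in [20]).
    level1_weights : list of m learned rank weights w_i.
    level2_weight  : learned weight vector w_k for the kernel-mapped scores.
    """
    # Level 1: S_i(I) = w_i^T f_i(I) for each feature type.
    s_v = np.array([w @ f(image) for f, w in zip(extractors, level1_weights)])
    # Level 2: explicit polynomial-kernel map of S_v, then S_K(I) = w_k^T phi(S_v).
    # A degree-2 map (scores plus all pairwise products) is used for illustration.
    phi = np.concatenate([s_v, np.outer(s_v, s_v)[np.triu_indices(len(s_v))]])
    s_k = level2_weight @ phi
    # Squash to [0, 100]; the exact normalization in [20] may differ.
    return 100.0 / (1.0 + np.exp(-s_k))
```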
6. Framework for Dataset Evaluation

For a given face database, there are many possible methods to evaluate a specific recognition algorithm. To facilitate comparisons among the results of different methods, we have specified a standard evaluation framework for the database. We request potential users of this database to evaluate their methods according to this framework. Users can also validate their methods, and queries and suggestions for improving the database can be posted at [61]. The following paragraphs discuss the methods and algorithms associated with the proposed database.

6.1. Annotations and Abbreviations

The dataset contains images for the different distances from the camera. Each of these distances has several variations of pose and occlusion. We have used certain abbreviations to label these varieties of classes. We have named the sets of frontal faces at the different distances F_D, where D runs over the 1st to 8th positions. For every position, we also have occluded faces, which we name O_D.
Table 2: Here Y_D^1: yaw of approximately 0° to 15°; Y_D^2: yaw of approximately 15° to 30°; Y_D^3: yaw of approximately 30° to 45°; Y_D^{3+}: yaw greater than 45°; P_D^1: pitch (up/down) of up to 15°; P_D^{1+}: pitch (up/down) greater than 15°.

| Pose | Set Annotation | Sub Pose | Samples | Subset Annotation |
|---|---|---|---|---|
| Frontal | F_D | approx. 0° | 1-15 | - |
| Occlusion | O_D | approx. 0° | 1-10 | - |
| Yaw | Y_D | between 0° and 15° | 1-10 | Y_D^1 |
| | | between 15° and 30° | 1-10 | Y_D^2 |
| | | between 30° and 45° | 1-10 | Y_D^3 |
| | | greater than 45° | 1-5 | Y_D^{3+} |
| Pitch | P_D | between 0° and 15° | 1-10 | P_D^1 |
| | | greater than 15° | 1-5 | P_D^{1+} |
The head pose can vary from side to side and from top to bottom (off-plane rotation). As in Fig. 3, the profile faces are captured with yaw angles of 0° to 15°, 15° to 30°, and 30° to 45°; these are named Y_D^1, Y_D^2, and Y_D^3 respectively. Similarly, for pitch, the face-image groups are named P_D^1 for head poses between 0° and 15°. For more than 45° side-to-side and more than 15° up-or-down, we name the sets Y_D^{3+} and P_D^{1+} respectively, where the "+" suffix stands for angles greater than the stated degree. The small helper below makes this naming concrete.
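The helper composes a subset label from a pose category, a distance index D in 1-8, and an angle group; it mirrors our reading of Table 2 and is an illustrative utility, not part of the released dataset tools.

```python
def subset_label(pose, distance, group=None, plus=False):
    """Compose a QDF subset label, e.g. F3, O5, Y2^1 or P4^{1+}.

    pose     : 'F' (frontal), 'O' (occluded), 'Y' (yaw) or 'P' (pitch).
    distance : camera-position index D in 1..8.
    group    : angle group (1-3 for yaw, 1 for pitch); None for F/O sets.
    plus     : True for the 'greater than' sets (Y^{3+}, P^{1+}).
    """
    assert pose in "FOYP" and 1 <= distance <= 8
    if pose in "FO":
        return f"{pose}{distance}"
    return f"{pose}{distance}^{group}{'+' if plus else ''}"
```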
6.2. Gallery for Face Recognition

A gallery set is a collection of images used to build a recognition model, to tune the parameters of the model, or both. We construct a gallery (training) set containing 600 images of 100 subjects, selected from the F1 subset of the database, with each subject contributing six randomly selected images.

6.3. Probe for Face Recognition

A probe set, on the other hand, is a collection of images of the individuals that need to be recognized. Eight probe sets comprising images of varying distance and pose are picked from F_D, O_D, Y_D, and P_D for evaluating the quality and the recognition performance of algorithms in the subsequent sections. The selection of gallery and probe sets can be randomized for testing the worst-case and best-case performance of the algorithms. Thus we have constructed 8 sets for testing 3 different algorithms for face quality assessment, which makes about 24 cases for each pose and occlusion.
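A minimal sketch of this gallery/probe construction follows. The in-memory index `subject_images` and its layout are assumptions; the six-images-per-subject gallery drawn from F1 follows the text above.

```python
import random


def make_gallery_and_probe(subject_images, probe_subset, per_subject=6, seed=0):
    """Split QDF images into a gallery (from F1) and one probe set.

    subject_images : dict mapping subject id -> {subset name -> list of
                     image paths}, an assumed in-memory index of the dataset.
    probe_subset   : e.g. 'F3', 'O5', 'Y2^1', 'P4^1'.
    """
    rng = random.Random(seed)
    gallery, probe = {}, {}
    for sid, subsets in subject_images.items():
        # Gallery: six images per subject drawn at random from the F1 subset.
        gallery[sid] = rng.sample(subsets["F1"], per_subject)
        # Probe: all images of this subject in the requested subset.
        probe[sid] = list(subsets.get(probe_subset, []))
    return gallery, probe
```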
7. Evaluation of the Dataset

7.1. Average Face Quality Score (AFQS) for Different Subsets of QDF

We have considered three applications for validating the QDF database: (a) face recognition (three different algorithms), (b) super-resolution reconstruction (two algorithms), and (c) fiducial point detection (two algorithms). The effect of face quality on these algorithms has been demonstrated. There have been studies on the relation between face quality score and face verification rate following different protocols ([17], [18], [20], and [59]). We evaluated the QDF dataset with several methods and mapped each to a quality score in the range 0 to 100 to bring the comparisons to a common platform. We obtain the average face quality score (AFQS) of each set by randomly selecting images from each subject for each set. We then perform face recognition on each of these sets and report the average recognition accuracy. The average face quality scores are given in Table 3. The quality has been evaluated for each pose at each distance. The evaluation is carried out by randomly choosing a subject from the dataset at a time and computing the quality scores with each method; the average score is computed after several such runs for each subset, and a sketch of this procedure is given below.
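The sketch assumes a `quality_score` callable that wraps one of the FQA methods of [19] or [20]; the number of runs is illustrative.

```python
import random
import statistics


def afqs(subset_images, quality_score, runs=100, seed=0):
    """Average Face Quality Score of one QDF subset.

    subset_images : dict subject id -> list of images in this subset.
    quality_score : callable image -> score in [0, 100] (an FQA method).
    """
    rng = random.Random(seed)
    subjects = list(subset_images)
    # Each run picks a random subject, then a random image of that subject.
    scores = [quality_score(rng.choice(subset_images[rng.choice(subjects)]))
              for _ in range(runs)]
    # Report mean and standard deviation, as in Table 3.
    return statistics.mean(scores), statistics.stdev(scores)
```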
Table 3: Average Face Quality Score (AFQS) for different subsets of the QDF dataset.

| | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 |
|---|---|---|---|---|---|---|---|---|
| Nasrollahi et al. [19] | 82.68 ± 6.22 | 74.14 ± 7.86 | 65.46 ± 6.27 | 54.5 ± 5.83 | 47.18 ± 4.24 | 39.47 ± 5.36 | 31.17 ± 4.12 | 26.69 ± 4.59 |
| Chen et al. [20] | 85.84 ± 6.20 | 74.27 ± 5.08 | 59.45 ± 5.66 | 51.18 ± 4.34 | 47.18 ± 6.61 | 42.19 ± 5.37 | 34.64 ± 3.58 | 27.68 ± 3.06 |

(a) AFQS at different distances for frontal faces (F_D).

| | Y_1^1 | Y_2^1 | Y_3^1 | Y_4^1 | Y_5^1 | Y_6^1 | Y_7^1 | Y_8^1 |
|---|---|---|---|---|---|---|---|---|
| Nasrollahi et al. [19] | 45.68 ± 5.50 | 39.43 ± 5.12 | 37.65 ± 4.73 | 32.50 ± 6.89 | 25.18 ± 4.10 | 17.47 ± 5.84 | 10.17 ± 3.06 | 6.68 ± 2.63 |
| Chen et al. [20] | 43.42 ± 6.71 | 40.69 ± 6.54 | 36.47 ± 4.80 | 30.18 ± 5.81 | 25.18 ± 5.58 | 19.19 ± 4.16 | 13.64 ± 3.23 | 7.48 ± 2.20 |

(b) AFQS at different distances for yaw between 0° and 15° (Y_D^1).

| | Y_1^2 | Y_2^2 | Y_3^2 | Y_4^2 | Y_5^2 | Y_6^2 | Y_7^2 | Y_8^2 |
|---|---|---|---|---|---|---|---|---|
| Nasrollahi et al. [19] | 32.58 ± 6.51 | 27.35 ± 5.81 | 20.89 ± 7.45 | 11.24 ± 4.05 | 7.37 ± 3.91 | 4.71 ± 3.64 | 4.66 ± 2.48 | 4.45 ± 2.45 |
| Chen et al. [20] | 35.37 ± 5.86 | 32.52 ± 6.11 | 27.43 ± 5.83 | 24.37 ± 6.84 | 15.62 ± 5.74 | 7.88 ± 5.39 | 4.89 ± 3.75 | 3.31 ± 2.56 |

(c) AFQS at different distances for yaw between 15° and 30° (Y_D^2).

| | Y_1^3 | Y_2^3 | Y_3^3 | Y_4^3 | Y_5^3 | Y_6^3 | Y_7^3 | Y_8^3 |
|---|---|---|---|---|---|---|---|---|
| Nasrollahi et al. [19] | 30.84 ± 5.80 | 25.47 ± 7.71 | 24.45 ± 4.43 | 18.58 ± 3.26 | 12.49 ± 2.40 | 7.37 ± 2.38 | 4.53 ± 2.65 | 2.17 ± 1.31 |
| Chen et al. [20] | 32.64 ± 6.52 | 27.73 ± 6.14 | 25.49 ± 2.74 | 19.79 ± 4.48 | 10.87 ± 3.39 | 9.62 ± 1.23 | 3.38 ± 1.04 | 1.33 ± 0.34 |

(d) AFQS at different distances for yaw between 30° and 45° (Y_D^3).

| | Y_1^{3+} | Y_2^{3+} | Y_3^{3+} | Y_4^{3+} | Y_5^{3+} | Y_6^{3+} | Y_7^{3+} | Y_8^{3+} |
|---|---|---|---|---|---|---|---|---|
| Nasrollahi et al. [19] | 11.89 ± 2.64 | 11.37 ± 2.66 | 9.28 ± 2.56 | 8.85 ± 3.47 | 8.21 ± 2.22 | 6.45 ± 1.14 | 5.73 ± 1.29 | 2.62 ± 1.64 |
| Chen et al. [20] | 8.48 ± 1.17 | 8.32 ± 1.16 | 6.26 ± 2.15 | 3.13 ± 1.43 | 1.16 ± 0.14 | 1.26 ± 0.37 | 0.38 ± 0.16 | 0.43 ± 0.09 |

(e) AFQS at different distances for yaw of more than 45° (Y_D^{3+}).

| | P_1^1 | P_2^1 | P_3^1 | P_4^1 | P_5^1 | P_6^1 | P_7^1 | P_8^1 |
|---|---|---|---|---|---|---|---|---|
| Nasrollahi et al. [19] | 47.68 ± 7.49 | 38.43 ± 6.39 | 36.65 ± 4.56 | 30.5 ± 5.68 | 26.18 ± 5.54 | 18.47 ± 4.51 | 9.16 ± 3.54 | 7.68 ± 2.48 |
| Chen et al. [20] | 44.42 ± 6.43 | 38.69 ± 5.36 | 35.47 ± 5.34 | 29.18 ± 6.56 | 24.18 ± 4.35 | 18.19 ± 3.32 | 14.64 ± 4.16 | 7.23 ± 2.12 |

(f) AFQS at different distances for pitch between 0° and 15° (P_D^1).

| | P_1^{1+} | P_2^{1+} | P_3^{1+} | P_4^{1+} | P_5^{1+} | P_6^{1+} | P_7^{1+} | P_8^{1+} |
|---|---|---|---|---|---|---|---|---|
| Nasrollahi et al. [19] | 28.19 ± 5.05 | 25.15 ± 6.67 | 23.26 ± 3.88 | 19.39 ± 3.85 | 17.26 ± 3.60 | 13.07 ± 2.36 | 3.28 ± 2.40 | 1.13 ± 0.57 |
| Chen et al. [20] | 16.87 ± 4.83 | 16.22 ± 2.66 | 14.29 ± 3.42 | 12.42 ± 4.21 | 7.29 ± 3.07 | 4.28 ± 2.83 | 1.15 ± 0.52 | 0.33 ± 0.35 |

(g) AFQS at different distances for pitch of more than 15° (P_D^{1+}).

| | O1 | O2 | O3 | O4 | O5 | O6 | O7 | O8 |
|---|---|---|---|---|---|---|---|---|
| Nasrollahi et al. [19] | 51.19 ± 6.62 | 43.63 ± 5.45 | 38.28 ± 5.58 | 32.33 ± 4.25 | 24.32 ± 5.25 | 19.47 ± 4.75 | 12.53 ± 4.71 | 8.26 ± 3.27 |
| Chen et al. [20] | 50.55 ± 6.84 | 42.5 ± 5.52 | 37.49 ± 6.81 | 31.37 ± 5.21 | 26.21 ± 4.46 | 20.87 ± 4.13 | 16.27 ± 2.24 | 10.27 ± 3.14 |

(h) AFQS at different distances for occluded faces (O_D).
7.2. Face Recognition (FR) for relevant FQA in the QDF database

The proposed database has been designed primarily for testing face recognition (FR) methods. We have chosen three FR algorithms for testing on the database: (a) OpenBR [21], (b) OpenFace [62], and (c) Discriminant Correlation Analysis (DCA) [13]. In recent years, with the advent of collaborative software platforms like GitHub and Bitbucket, hosting services for open-source projects have become quite popular; to evaluate our dataset, we used three such algorithms that are available as open source, viz. OpenBR [21], OpenFace [62], and DCA [13]. OpenBR is a set of facial recognition tools developed by the team of researchers in [21]; at its best, OpenBR obtained recognition rates of 94 ± 1% and 89 ± 2% at a False Accept Rate (FAR) of 0.001% on the FERET and FRGC datasets respectively [63]. OpenFace [62] is a popular open-source software kit for face recognition; experimental results on the LFW database show that the rank-1 identification accuracy of OpenFace is 0.9292 ± 0.0134 [22]. In [13], the authors used Discriminant Correlation Analysis (DCA) for face recognition. DCA finds the correlation between the low- and high-resolution images by a coupled subspace projection that maximizes the pairwise correlation between the two sets of images; the projection brings intra-class faces closer and increases the inter-class distance in the coupled subspace. We performed face recognition on the F_D, Y_D^1, P_D^1, and O_D sets for each of the face quality algorithms separately. For these experiments, we performed five repeated trials and report the average recognition performance. We have taken F1 as the gallery set and the rest as probes in all cases. The recognition performance vis-à-vis the quality scores obtained as in Section 7.1 is listed in Table 4. It is apparent that the performance and the quality scores comply with each other, which further reaffirms the utility of the proposed database for research and development.
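The accuracies in Table 4 are rank-1 identification rates averaged over the five trials described above. The sketch below shows the matching step for embedding-based recognizers such as OpenFace; the embedding extraction is left abstract, and the cosine-similarity matcher is our illustrative choice, not the exact pipeline of [21], [62], or [13].

```python
import numpy as np


def rank1_accuracy(gallery_embs, probe_embs):
    """Rank-1 identification by nearest gallery neighbor (cosine similarity).

    gallery_embs : dict subject id -> list of embedding vectors (from F1).
    probe_embs   : dict subject id -> list of embedding vectors (probe set).
    """
    ids, mat = [], []
    for sid, embs in gallery_embs.items():
        for e in embs:
            ids.append(sid)
            mat.append(e / np.linalg.norm(e))
    mat = np.stack(mat)                      # gallery matrix, unit-normalized
    correct = total = 0
    for sid, embs in probe_embs.items():
        for e in embs:
            sims = mat @ (e / np.linalg.norm(e))
            correct += ids[int(np.argmax(sims))] == sid
            total += 1
    return correct / total
```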
Table 4: Face recognition accuracy for different ranges of AFQS, using the face quality algorithms of Nasrollahi et al. [19] and Chen et al. [20].

| | F2 | F3 | F4 | F5 | F6 | F7 | F8 |
|---|---|---|---|---|---|---|---|
| AFQS of [19] | 74.14 ± 7.86 | 65.46 ± 6.27 | 54.50 ± 5.83 | 47.18 ± 4.24 | 39.47 ± 5.36 | 31.17 ± 4.12 | 26.69 ± 4.59 |
| AFQS of [20] | 74.27 ± 5.08 | 59.45 ± 5.66 | 51.18 ± 4.34 | 44.53 ± 6.61 | 42.19 ± 5.37 | 34.64 ± 3.58 | 27.68 ± 3.06 |
| OpenBR [21] | 96.43 ± 1.33 | 79.21 ± 1.76 | 39.38 ± 1.39 | 7.14 ± 1.56 | 0 | 0 | 0 |
| OpenFace [62] | 98.64 ± 1.42 | 83.53 ± 1.25 | 41.59 ± 1.42 | 12.72 ± 2.56 | 1.01 ± 0.51 | 0 | 0 |
| DCA [13] | 97.76 ± 1.44 | 86.39 ± 1.17 | 43.59 ± 2.73 | 38.87 ± 1.68 | 7.87 ± 1.68 | 1.34 ± 0.24 | 0 |

(a) Face recognition accuracy vs. average face quality score (AFQS) for frontal faces (F_D).

| | Y_2^1 | Y_3^1 | Y_4^1 | Y_5^1 | Y_6^1 | Y_7^1 | Y_8^1 |
|---|---|---|---|---|---|---|---|
| AFQS of [19] | 39.43 ± 5.12 | 37.65 ± 4.73 | 32.50 ± 6.89 | 25.18 ± 4.10 | 17.47 ± 5.84 | 10.17 ± 3.06 | 6.68 ± 2.63 |
| AFQS of [20] | 40.69 ± 6.54 | 36.47 ± 4.80 | 30.18 ± 5.81 | 25.18 ± 5.58 | 19.19 ± 4.16 | 13.64 ± 3.23 | 7.48 ± 2.20 |
| OpenBR [21] | 76.23 ± 4.52 | 65.62 ± 3.62 | 20.43 ± 3.48 | 0 | 0 | 0 | 0 |
| OpenFace [62] | 78.82 ± 4.93 | 67.92 ± 2.29 | 24.80 ± 3.66 | 14.69 ± 2.04 | 0 | 0 | 0 |
| DCA [13] | 70.72 ± 4.61 | 42.56 ± 3.46 | 23.22 ± 2.67 | 12.33 ± 2.05 | 4.39 ± 1.26 | 0 | 0 |

(b) Face recognition accuracy vs. average face quality score (AFQS) for the yaw pose Y_D^1.

| | P_2^1 | P_3^1 | P_4^1 | P_5^1 | P_6^1 | P_7^1 | P_8^1 |
|---|---|---|---|---|---|---|---|
| AFQS of [19] | 38.43 ± 6.39 | 36.65 ± 4.56 | 30.53 ± 5.68 | 26.18 ± 5.54 | 18.47 ± 4.51 | 9.16 ± 3.54 | 7.68 ± 2.48 |
| AFQS of [20] | 38.69 ± 5.36 | 35.47 ± 5.34 | 29.18 ± 6.56 | 24.18 ± 4.35 | 18.19 ± 3.32 | 14.64 ± 4.16 | 7.23 ± 2.12 |
| OpenBR [21] | 68.37 ± 3.42 | 41.26 ± 2.29 | 17.43 ± 2.04 | 0 | 0 | 0 | 0 |
| OpenFace [62] | 73.44 ± 4.76 | 47.92 ± 3.62 | 20.80 ± 2.31 | 0 | 0 | 0 | 0 |
| DCA [13] | 70.51 ± 3.59 | 43.56 ± 3.46 | 16.26 ± 2.67 | 2.33 ± 1.26 | 0 | 0 | 0 |

(c) Face recognition accuracy vs. average face quality score (AFQS) for the pitch pose P_D^1.

| | O2 | O3 | O4 | O5 | O6 | O7 | O8 |
|---|---|---|---|---|---|---|---|
| AFQS of [19] | 43.63 ± 5.45 | 38.28 ± 5.58 | 32.33 ± 4.25 | 24.32 ± 5.25 | 19.47 ± 4.75 | 12.53 ± 4.71 | 8.26 ± 3.27 |
| AFQS of [20] | 42.53 ± 5.52 | 37.49 ± 6.81 | 31.37 ± 5.21 | 26.21 ± 4.46 | 20.87 ± 4.13 | 16.27 ± 2.24 | 10.27 ± 3.14 |
| OpenBR [21] | 73.76 ± 2.54 | 52.87 ± 5.05 | 20.34 ± 2.63 | 0 | 0 | 0 | 0 |
| OpenFace [62] | 75.84 ± 4.29 | 50.34 ± 3.55 | 24.38 ± 3.23 | 1.53 ± 0.53 | 0 | 0 | 0 |
| DCA [13] | 68.49 ± 3.56 | 47.82 ± 4.28 | 23.43 ± 2.33 | 2.35 ± 0.53 | 0 | 0 | 0 |

(d) Face recognition accuracy vs. average face quality score (AFQS) for occluded faces (O_D).
7.3. Face Image Super-resolution (SR) for relevant FQA in the QDF database

In this paper, distance is one major factor in deteriorating face quality; it is reflected as lower resolution in the acquired images. Super-resolution (SR) is a technique to increase the resolution of such low-resolution images algorithmically, without additional sensors [64], [65], [66], [67], by estimating a high-resolution image from multiple lower-resolution observations of the scene or by interpolation. In the method of [23], an initial estimate of the high-resolution (HR) image is obtained by bicubic interpolation of the low-resolution (LR) image; LR and HR patches of face images are then updated iteratively to mitigate the inconsistency between the LR and HR manifolds while preserving the geometry of the original HR space. Noise can be a reason for the poor performance of various super-resolution techniques; in [24], a sparse-representation-based face image super-resolution method has been proposed to improve the performance under noise. We have performed super-resolution reconstruction for frontal faces and report the structural similarity index (SSIM) of the newly generated faces against the good-quality gallery images. The average SSIM is listed in Table 5, and a sketch of the evaluation follows the table.

Table 5: Structural similarity index for different subsets of the QDF dataset with varying quality.

| | F2 | F3 | F4 | F5 | F6 | F7 | F8 |
|---|---|---|---|---|---|---|---|
| AFQS of [19] | 74.14 ± 7.863 | 65.46 ± 6.275 | 54.5 ± 5.835 | 47.18 ± 4.240 | 39.47 ± 5.367 | 31.17 ± 4.124 | 26.69 ± 4.594 |
| AFQS of [20] | 74.27 ± 5.088 | 59.45 ± 5.664 | 51.18 ± 4.345 | 47.18 ± 6.611 | 42.19 ± 5.376 | 34.64 ± 3.583 | 27.68 ± 3.068 |
| LINE [23] | 75.35 ± 0.742 | 74.34 ± 0.819 | 68.77 ± 0.826 | 62.59 ± 0.587 | 57.94 ± 0.352 | 49.83 ± 0.514 | 43.94 ± 0.157 |
| SSR [24] | 81.35 ± 0.870 | 78.93 ± 0.875 | 74.37 ± 0.863 | 66.96 ± 0.462 | 64.47 ± 0.730 | 55.72 ± 0.723 | 48.75 ± 0.607 |
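The SSIM numbers in Table 5 compare each super-resolved frontal face against its good-quality gallery image. A minimal sketch with scikit-image follows; the gray-scale conversion and the resizing of the SR output to the gallery crop size are our assumptions for comparability, not steps the paper specifies.

```python
import cv2
from skimage.metrics import structural_similarity


def sr_ssim(sr_face, gallery_face):
    """SSIM between a super-resolved face and its good-quality gallery image."""
    gal = cv2.cvtColor(gallery_face, cv2.COLOR_BGR2GRAY)
    sr = cv2.cvtColor(sr_face, cv2.COLOR_BGR2GRAY)
    # Bring both crops to a common size before comparison (assumption).
    sr = cv2.resize(sr, (gal.shape[1], gal.shape[0]),
                    interpolation=cv2.INTER_CUBIC)
    return structural_similarity(gal, sr)
```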
7.4. Fiducial Point Estimation (FPE) for relevant FQA in the QDF database

Face recognition starts with the critical modules of face detection and face alignment (FA). The FA process estimates the locations of a set of facial landmarks on a face image, such as the corners of the eyes and mouth, the nose, and the contour of the face. Like super-resolution, this is also affected by low image quality; moreover, pose and occlusion also matter in FA. We attempted to estimate the fiducial points of selected subsets of the QDF dataset using state-of-the-art methods. In [25], the authors use a cascade of linear regressors to train a discriminative model and also present multiple ways of updating a cascade of regression functions. In [26], on the other hand, the authors carry out FA by fitting Active Appearance Models (AAMs) based on non-linear least squares and, for fast and exact AAM fitting, propose two methods (Fast-SIC and Fast-Forward). To judge the goodness of the fiducial point estimates, we take the pixel-to-pixel positioning error (the ground-truth fiducial points were selected manually) and report the normalized mean error (NME) as the FPE score; a sketch of this computation is given after the table. Table 6 shows that the average NME is quite low for better-quality face images, and as the face quality score decreases, the NME grows. We repeat the experiment for each FQA method separately on the selected head-pose sets of QDF. The fiducial points are not conspicuous beyond a certain resolution and pose; therefore, we have limited our experiments to the cases shown in Table 6.

Table 6: Fiducial point estimation for different subsets of QDF at different face qualities.

| | F1 | Y_1^1 | Y_1^2 | Y_1^3 | P_1^1 | F2 | Y_2^1 | Y_2^2 | Y_2^3 | P_2^1 |
|---|---|---|---|---|---|---|---|---|---|---|
| AFQS of [19] | 82.68 ± 6.22 | 45.68 ± 5.50 | 32.58 ± 6.51 | 30.84 ± 5.80 | 47.68 ± 7.49 | 74.14 ± 7.86 | 39.43 ± 5.12 | 27.35 ± 5.81 | 25.47 ± 7.71 | 38.43 ± 6.39 |
| AFQS of [20] | 85.84 ± 6.20 | 43.42 ± 6.71 | 35.37 ± 5.86 | 32.64 ± 6.52 | 44.42 ± 6.43 | 74.27 ± 5.08 | 40.69 ± 6.54 | 32.52 ± 6.11 | 27.73 ± 6.14 | 38.69 ± 5.36 |
| CLR [25] | 7.35 ± 2.16 | 12.84 ± 4.27 | 17.71 ± 5.69 | 21.83 ± 4.95 | 14.33 ± 4.38 | 12.53 ± 3.42 | 13.26 ± 4.57 | 19.63 ± 5.43 | 17.69 ± 4.43 | 14.38 ± 4.73 |
| AAMs [26] | 10.15 ± 2.69 | 14.74 ± 3.87 | 19.43 ± 4.29 | 22.35 ± 5.34 | 13.49 ± 3.89 | 11.38 ± 4.73 | 14.93 ± 5.11 | 20.74 ± 4.55 | 21.46 ± 5.64 | 14.24 ± 4.27 |
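The NME in Table 6 is a pixel-positioning error normalized by a face-scale term. The paper does not state the normalizer, so the sketch below uses the inter-ocular distance, a common convention; treat it, along with the percentage scaling, as an assumption.

```python
import numpy as np


def nme_percent(pred, gt, left_eye=0, right_eye=1):
    """Normalized mean error between predicted and ground-truth landmarks.

    pred, gt : (N, 2) arrays of fiducial point coordinates in pixels.
    left_eye, right_eye : row indices of the two eye-center landmarks.
    Normalizing by the inter-ocular distance and reporting a percentage
    are assumptions; the paper only specifies pixel-to-pixel error.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    per_point = np.linalg.norm(pred - gt, axis=1)       # error per landmark
    iod = np.linalg.norm(gt[left_eye] - gt[right_eye])  # inter-ocular distance
    return 100.0 * per_point.mean() / iod
```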
8. Availability

A pointer to the description of the database is available at https://sites.google.com/view/quality-based-distance-face-da/. An End User License Agreement (EULA) must be completed to access the database.
9. Conclusion

In this paper, we describe the design, characteristics, setup, and contents of the new QDF database, whose quality varies with pose, resolution, and occlusion. To the best of our knowledge, no such database has previously been accessible to researchers working in the area of real-world face recognition. We have validated the images of QDF with three types of algorithms, i.e., face recognition, super-resolution, and fiducial point estimation. We also present detailed descriptions of the database, including its contents, the image naming convention, and the framework for database evaluation. The main characteristics of the QDF database lie in three aspects: firstly, real-life face representation, unlike datasets acquired in controlled lab environments; secondly, the diversity of the variations, including pose, occlusion, lighting, and their combinations; and lastly, detailed ground-truth information in a well-organized structure. From these results, the difficulty of the database and the strengths and weaknesses of commonly used algorithms can be inferred. The work presented here suggests future avenues of research on face image quality for tasks such as face detection, gender recognition, age recognition, and eye state classification. In the future, we will release an extended version of the dataset that encompasses these real-life challenges.
References

[1] T. Ahonen, A. Hadid, M. Pietikainen, Face description with local binary patterns: Application to face recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence 28 (12) (2006) 2037-2041.
[2] J. Jiang, R. Hu, Z. Wang, Z. Han, J. Ma, Facial image hallucination through coupled-layer neighbor embedding, IEEE Transactions on Circuits and Systems for Video Technology 26 (9) (2016) 1674-1684.
[3] P. Viola, M. J. Jones, Robust real-time face detection, International Journal of Computer Vision 57 (2) (2004) 137-154.
[4] M. Turk, A. Pentland, Eigenfaces for recognition, Journal of Cognitive Neuroscience 3 (1) (1991) 71-86.
[5] L. Wiskott, N. Krüger, N. Kuiger, C. Von Der Malsburg, Face recognition by elastic bunch graph matching, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (7) (1997) 775-779.
[6] C. Liu, H. Wechsler, Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition, IEEE Transactions on Image Processing 11 (4) (2002) 467-476.
[7] B. Zhang, Y. Gao, S. Zhao, J. Liu, Local derivative pattern versus local binary pattern: face recognition with high-order local pattern descriptor, IEEE Transactions on Image Processing 19 (2) (2010) 533-544.
[8] W. Zhang, S. Shan, W. Gao, X. Chen, H. Zhang, Local Gabor binary pattern histogram sequence (LGBPHS): A novel non-statistical model for face representation and recognition, in: Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, Vol. 1, IEEE, 2005, pp. 786-791.
[9] W. Zhao, R. Chellappa, P. J. Phillips, A. Rosenfeld, Face recognition: A literature survey, ACM Computing Surveys (CSUR) 35 (4) (2003) 399-458.
[10] J. Jiang, R. Hu, Z. Han, Z. Wang, J. Chen, Two-step superresolution approach for surveillance face image through radial basis function-partial least squares regression and locality-induced sparse representation, Journal of Electronic Imaging 22 (4) (2013) 041120.
[11] W. W. Zou, P. C. Yuen, Very low resolution face recognition problem, IEEE Transactions on Image Processing 21 (1) (2012) 327-340.
[12] B. Li, H. Chang, S. Shan, X. Chen, Low-resolution face recognition via coupled locality preserving mappings, IEEE Signal Processing Letters 17 (1) (2010) 20-23.
[13] M. Haghighat, M. Abdel-Mottaleb, Lower resolution face recognition in surveillance systems using discriminant correlation analysis, in: Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, IEEE, 2017, pp. 912-917.
[14] J. Jiang, R. Hu, Z. Wang, Z. Cai, CDMMA: Coupled discriminant multi-manifold analysis for matching low-resolution face images, Signal Processing 124 (2016) 162-172.
[15] S. P. Mudunuri, S. Biswas, Low resolution face recognition across variations in pose and illumination, IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (5) (2016) 1034-1040.
[16] C.-X. Ren, D.-Q. Dai, H. Yan, Coupled kernel embedding for low-resolution face image recognition, IEEE Transactions on Image Processing 21 (8) (2012) 3770-3783.
[17] R.-L. V. Hsu, J. Shah, B. Martin, Quality assessment of facial images, in: Biometric Consortium Conference, 2006 Biometrics Symposium: Special Session on Research at the, IEEE, 2006, pp. 1-6.
[18] H.-I. Kim, S. H. Lee, Y. M. Ro, Face image assessment learned with objective and relative face image qualities for improved face recognition, in: Image Processing (ICIP), 2015 IEEE International Conference on, IEEE, 2015, pp. 4027-4031.
[19] K. Nasrollahi, T. B. Moeslund, Extracting a good quality frontal face image from a low-resolution video sequence, IEEE Transactions on Circuits and Systems for Video Technology 21 (10) (2011) 1353-1362.
[20] J. Chen, Y. Deng, G. Bai, G. Su, Face image quality assessment based on learning to rank, IEEE Signal Processing Letters 22 (1) (2015) 90-94.
[21] www.openbiometrics.org.
[22] B. Amos, B. Ludwiczuk, M. Satyanarayanan, et al., OpenFace: A general-purpose face recognition library with mobile applications, CMU School of Computer Science.
[23] J. Jiang, R. Hu, Z. Wang, Z. Han, Face super-resolution via multilayer locality-constrained iterative neighbor embedding and intermediate dictionary learning, IEEE Transactions on Image Processing 23 (10) (2014) 4220-4231.
[24] J. Jiang, J. Ma, C. Chen, X. Jiang, Z. Wang, Noise robust face image super-resolution through smooth sparse representation, IEEE Transactions on Cybernetics 47 (11) (2017) 3991-4002.
[25] A. Asthana, S. Zafeiriou, S. Cheng, M. Pantic, Incremental face alignment in the wild, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1859-1866.
[26] G. Tzimiropoulos, M. Pantic, Optimization problems for fast AAM fitting in-the-wild, in: Computer Vision (ICCV), 2013 IEEE International Conference on, IEEE, 2013, pp. 593-600.
[27] J. Jiang, R. Hu, Z. Wang, Z. Han, Noise robust face hallucination via locality-constrained representation, IEEE Transactions on Multimedia 16 (5) (2014) 1268-1281.
[28] A. Martinez, R. Benavente, AR face database, 2000.
[29] F. S. Samaria, A. C. Harter, Parameterisation of a stochastic model for human face identification, in: Applications of Computer Vision, 1994, Proceedings of the Second IEEE Workshop on, IEEE, 1994, pp. 138-142.
[30] O. Jesorsky, K. J. Kirchberg, R. W. Frischholz, Robust face detection using the Hausdorff distance, in: International Conference on Audio- and Video-Based Biometric Person Authentication, Springer, 2001, pp. 90-95.
[31] M. Fink, R. Fergus, A. Angelova, Caltech 10,000 web faces, URL http://www.vision.caltech.edu/Image_Datasets/Caltech_10K_WebFaces.
[32] W. Gao, B. Cao, S. Shan, X. Chen, D. Zhou, X. Zhang, D. Zhao, The CAS-PEAL large-scale Chinese face database and baseline evaluations, IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 38 (1) (2008) 149-161.
[33] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, W. Worek, Overview of the face recognition grand challenge, in: Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, Vol. 1, IEEE, 2005, pp. 947-954.
[34] A. V. Nefian, Georgia Tech face database. URL http://www.anefian.com/research/face_reco.htm
[35] V. Jain, A. Mukherjee, The Indian face database (2002).
[36] T. Sim, S. Baker, M. Bsat, The CMU pose, illumination, and expression (PIE) database, in: Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on, IEEE, 2002, pp. 53-58.
[37] D. B. Graham, N. M. Allinson, Characterising virtual eigensignatures for general purpose face recognition, in: Face Recognition, Springer, 1998, pp. 446-456.
[38] Institute of Automation, Chinese Academy of Sciences, National Laboratory of Pattern Recognition, NLPR face database. URL http://nlprweb.ia.ac.cn/english/irds/facedatabase.htm
[39] L. Spacek, University of Essex collection of facial images (1996).
[40] P. N. Belhumeur, J. P. Hespanha, D. J. Kriegman, Eigenfaces vs. fisherfaces: Recognition using class specific linear projection, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (7) (1997) 711-720.
[41] A. S. Georghiades, P. N. Belhumeur, D. J. Kriegman, From few to many: Illumination cone models for face recognition under variable lighting and pose, IEEE Transactions on Pattern Analysis and Machine Intelligence 23 (6) (2001) 643-660.
[42] G. B. Huang, M. Ramesh, T. Berg, E. Learned-Miller, Labeled faces in the wild: A database for studying face recognition in unconstrained environments, Tech. rep., Technical Report 07-49, University of Massachusetts, Amherst (2007).
[43] Y. Sun, X. Wang, X. Tang, Hybrid deep learning for face verification, in: Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 1489-1496.
[44] C. E. Thomaz, G. A. Giraldi, A new ranking method for principal components analysis and its application to face image analysis, Image and Vision Computing 28 (6) (2010) 902-913.
[45] M. Grgic, K. Delac, S. Grgic, SCface - surveillance cameras face database, Multimedia Tools and Applications 51 (3) (2011) 863-879.
[46] Z. Cheng, X. Zhu, S. Gong, Surveillance face recognition challenge, arXiv preprint arXiv:1804.09691.
[47] E. Murphy-Chutorian, M. M. Trivedi, Head pose estimation in computer vision: A survey, IEEE Transactions on Pattern Analysis and Machine Intelligence 31 (4) (2009) 607-626.
[48] A. Fourney, R. Laganiere, Constructing face image logs that are both complete and concise, in: Computer and Robot Vision, 2007. CRV'07. Fourth Canadian Conference on, IEEE, 2007, pp. 488-494.
[49] K. Nasrollahi, T. B. Moeslund, Complete face logs for video sequences using face quality measures, IET Signal Processing 3 (4) (2009) 289-300.
[50] P. Griffin, Understanding the face image format standards, in: ANSI/NIST Workshop, 2005.
[51] M. Subasic, S. Loncaric, T. Petkovic, H. Bogunovic, V. Krivec, Face image validation system, in: Image and Signal Processing and Analysis, 2005. ISPA 2005. Proceedings of the 4th International Symposium on, IEEE, 2005, pp. 30-33.
[52] J. Monnerat, S. Vaudenay, M. Vuagnoux, About machine-readable travel documents, in: RFID Security 2007, no. LASEC-CONF-2007-051, Springer, 2007.
[53] H. Fronthaler, K. Kollreider, J. Bigun, Automatic image quality assessment with application in biometrics, in: Computer Vision and Pattern Recognition Workshop, 2006. CVPRW'06. Conference on, IEEE, 2006, pp. 30-30.
[54] X. Gao, S. Z. Li, R. Liu, P. Zhang, Standardization of face image sample quality, in: International Conference on Biometrics, Springer, 2007, pp. 242-251.
[55] A. N. Zamani, M. K. Awang, N. Omar, S. A. Nazeer, Image quality assessments and restoration for face detection and recognition system images, in: Modeling & Simulation, 2008. AICMS 08. Second Asia International Conference on, IEEE, 2008, pp. 505-510.
[56] H. Sellahewa, S. A. Jassim, Image-quality-based adaptive face recognition, IEEE Transactions on Instrumentation and Measurement 59 (4) (2010) 805-813.
[57] H. S. Bhatt, S. Bharadwaj, M. Vatsa, R. Singh, A. Ross, A. Noore, A framework for quality-based biometric classifier selection, in: Biometrics (IJCB), 2011 International Joint Conference on, IEEE, 2011, pp. 1-7.
[58] Y. Wong, S. Chen, S. Mau, C. Sanderson, B. C. Lovell, Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition, in: Computer Vision and Pattern Recognition Workshops (CVPRW), 2011 IEEE Computer Society Conference on, IEEE, 2011, pp. 74-81.
[59] L. Best-Rowden, A. K. Jain, Automatic face image quality prediction, arXiv preprint arXiv:1706.09887.
[60] D. Wang, C. Otto, A. K. Jain, Face search at scale: 80 million gallery, arXiv preprint arXiv:1507.07242.
[61] https://sbatdciitkgp.wixsite.com/sbkgp/quality-based-distance-face-databas.
[62] T. Baltrušaitis, P. Robinson, L.-P. Morency, OpenFace: an open source facial behavior analysis toolkit, in: Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on, IEEE, 2016, pp. 1-10.
[63] J. C. Klontz, B. F. Klare, S. Klum, A. K. Jain, M. J. Burge, Open source biometric recognition, in: Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on, IEEE, 2013, pp. 1-8.
[64] S. S. Rajput, A. Singh, K. Arya, J. Jiang, Noise robust face hallucination algorithm using local content prior based error shrunk nearest neighbors representation, Signal Processing 147 (2018) 233-246.
[65] L. Liu, S. Li, C. P. Chen, Quaternion locality-constrained coding for color face hallucination, IEEE Transactions on Cybernetics.
[66] J. Jiang, C. Chen, J. Ma, Z. Wang, Z. Wang, R. Hu, SRLSP: A face image super-resolution algorithm using smooth regression with local structure prior, IEEE Transactions on Multimedia 19 (1) (2017) 27-40.
[67] L. Liu, C. P. Chen, S. Li, Y. Y. Tang, L. Chen, Robust face hallucination via locality-constrained bi-layer representation, IEEE Transactions on Cybernetics 48 (4) (2018) 1189-1201.