Iris Image Classification Using SIFT Features

Procedia Computer Science 159 (2019) 241–250

23rd International Conference on Knowledge-Based and Intelligent Information & Engineering Systems

Ioan Păvăloi a, Anca Ignat b,*

a Institute of Computer Science, Romanian Academy, Iaşi Branch, Romania
b Faculty of Computer Science, University "Alexandru Ioan Cuza" of Iași, Romania

Abstract

The object of interest of this paper is automatic iris classification when dealing with missing information. Our approach uses and extends a method for face recognition based on the Scale Invariant Feature Transform (SIFT). We adapted this method for iris classification and tested it on occluded iris images. We add to the keypoint matching procedure new conditions that improve the classification rate. We tested different parameters involved in the SIFT extraction process and the keypoint matching scheme on eleven image datasets with different levels of occlusion. For testing, a standardized segmented UPOL iris database was employed. We experimentally prove that the proposed approach has better results when compared with both the original method and the Daugman procedure on all datasets.

© 2019 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of KES International.

Keywords: Scale Invariant Feature Transform, automatic iris classification, keypoint matching, UPOL

1. Introduction

In security and access control applications, specific physical characteristics of a person can be checked automatically. Because iris recognition is a noninvasive technology, it is a solution which provides a highly reliable

* Corresponding author. Tel.: +40 232 201529; fax: +40 232 201490. E-mail address: [email protected]



person authentication. The biomedical literature suggests that, for each person, an iris has a complex pattern with many distinctive features and a unique texture. Irises are as distinct as fingerprints or patterns of retinal blood vessels. Although person identification that uses the biometric information of the iris allows a very high accuracy in authentication, higher than other biometric methods, one of its major limitations is related to the constraints imposed by the image acquisition conditions. During this process, problems such as occlusions or bad illumination can occur. Classical methods for iris recognition require special care when dealing with these problems.

The SIFT descriptors developed in [1], [2], when applied to iris recognition, have the advantage that they do not require good segmentation or polar coordinate normalization preprocessing steps. The qualities of the SIFT method come from its scaling, rotation, and translation invariance. SIFT was successfully used in many problems such as object recognition, scene analysis, Content Based Image Retrieval, and the list may continue. The SIFT-based methods for iris recognition are suitable for situations of non-cooperative iris image acquisition, for example when a person is recognized while walking through an airport access control point. Zhu et al. [3] were the first to propose using SIFT descriptors for situations such as inaccurate localization, missing information or elastic deformation of the iris. An improvement was obtained in [4] by applying SIFT on three iris regions (left, right, bottom). Alonso-Fernandez et al. [5] use texture features around the keypoints provided by SIFT in order to address the aforementioned problems in iris recognition. In [6] the authors use an extended SIFT method based on the Fourier transform and a Phase-Only Correlation matching procedure. In [7] they also proposed a fast segmentation method and the use of SURF descriptors for the iris. Yang et al. [8] study the importance of the normalization and enhancement preprocessing methods when using SIFT. They tested histogram equalization as an enhancement method and the polar coordinate transform for normalization, and conclude that both operations improve SIFT-based iris recognition. A very good overview of SIFT methods employed in iris recognition is given in [9]. The authors also derive binary SIFT-based features in order to obtain better recognition accuracy. In all these papers, testing was carried out on well-known iris datasets such as CASIA, BioSecure, ICE 2005, WVU, BATH and UBIRIS.

The SIFT approach used in our experiments is an original extension of a method developed by Aly [10] for face recognition with SIFT. In the Aly approach, two distance measures were used for matching SIFT features, the cosine distance and the angle distance. Our approach uses other distances and also extends the matching procedure. In our experiments we used the UPOL dataset [11]-[13] and processed it in order to simulate the occlusion generated by eyelids and eyelashes. Other ways of dealing with occlusion can be found in [14], [15]. We tested the influence of two threshold parameters involved in the matching procedure and the SIFT generation process. We compare the classification results with the original Aly method and obtain better results. We also compare them with the Masek version of the Daugman iris processing method [16]-[20] and obtain better results except in an extreme case of occlusion (90% of the information missing).
2. Dataset

The UPOL dataset [11]-[13] contains iris images in .PNG format. It was created at the Palacky University of Olomouc and contains iris images of the same size, 576 × 768 pixels (see Fig. 1 (a)). There are 6 iris images for each person, three for the left eye and three for the right eye. The images have a black homogeneous background. We experimented with three versions of the UPOL dataset. The first is the original unsegmented version (Fig. 1 (a)), the second one is a manually segmented version [20] (Fig. 1 (b)) and the third is a standardized segmented version [21] (Fig. 1 (c)).


Fig. 1. UPOL iris images – (a) original; (b) manually segmented; (c) standardized segmented.




The experiments conducted on the original unsegmented dataset show that the present approach gives results inferior to those obtained on the other two datasets. The results obtained for the standardized segmented dataset are quite similar to those obtained for the manually segmented dataset; the difference between the best results is below 1%. In this paper we use the standardized segmented dataset. This dataset has images of size 404×404 that contain an annular region of the iris of the same size for all irises in the database. We processed the standardized iris images in order to simulate the eyelids-eyelashes type of occlusion (see Fig. 2). We cut less information from the lower part of the iris than from the upper part. The cuts were adapted in order to have approximately 5%, 10%, 15%, 20%, 25%, 30%, 40%, 50%, 60%, 75% and 90% missing information from the annular region (a sketch of one possible way to generate such occlusions is given after Fig. 2).


Fig. 2. UPOL occluded images – (a) 10% missing; (b) 20%; (c) 30%; (d) 40%; (e) 50%; (f) 60%; (g) 75%; (h) 90%.
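The paper does not detail the exact cutting procedure. The following is a minimal sketch of one plausible way to produce such occluded images, assuming straight horizontal cuts applied to the top and bottom of the segmented 404×404 image until the requested fraction of iris pixels is removed; the function name occlude_iris and the upper_share parameter (how much of the removed information is taken from the upper part) are our own illustrative choices, not the authors'.

```python
import numpy as np

def occlude_iris(img, missing_fraction, upper_share=0.7):
    """Simulate eyelid/eyelash occlusion on a segmented iris image.

    img              : HxW (or HxWx3) array, black background, annular iris region.
    missing_fraction : fraction of iris pixels to remove (e.g. 0.25 for 25%).
    upper_share      : fraction of the removed pixels taken from the top of the
                       iris (the paper cuts more from the upper part than the lower).
    """
    out = img.copy()
    # Iris pixels are the non-black ones on the homogeneous black background.
    iris_mask = out.reshape(out.shape[0], out.shape[1], -1).any(axis=2)
    total = iris_mask.sum()
    target_top = int(missing_fraction * upper_share * total)
    target_bottom = int(missing_fraction * (1.0 - upper_share) * total)

    # Cut whole rows from the top of the annulus until enough pixels are removed.
    removed, row = 0, 0
    while removed < target_top and row < out.shape[0]:
        removed += iris_mask[row].sum()
        out[row] = 0
        row += 1
    # Same from the bottom, removing less information than from the top.
    removed, row = 0, out.shape[0] - 1
    while removed < target_bottom and row >= 0:
        removed += iris_mask[row].sum()
        out[row] = 0
        row -= 1
    return out
```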

3. SIFT Features Extraction

The Scale Invariant Feature Transform (SIFT) algorithm was introduced by Lowe in 1999 [1], [2] in order to address the problem of invariance to scaling and rotation in the feature extraction procedure. SIFT descriptors remain invariant to translation, rotation, scaling and variation in illumination conditions. Moreover, they are partially invariant to affine distortion. SIFT features can be used in many different computer vision applications, including image classification: the features of the image to be analyzed are extracted and then matched against the features stored in a database of known objects. SIFT features can also be used in 3D mapping and localization, 3D scene modeling, recognition and tracking [22]-[25].

There are four steps in SIFT feature detection (see Fig. 3):

1. Scale-space extrema detection: The first step searches over all scales and image locations, using a difference-of-Gaussian (DoG) function to identify potential interest points that are invariant to orientation and scale.
2. Keypoint localization: The locations detected in the first step are refined by using two types of thresholds, one for contrast and one for edges, discarding points of low contrast and points lying along edges.
3. Orientation assignment: One or more orientations are assigned to each keypoint based on local image gradient directions. All future operations are performed on image data that has been transformed relative to the assigned orientation, scale, and location of each feature, thereby providing invariance to these transformations.
4. Keypoint descriptor: Finally, in the region around each keypoint, a local feature descriptor is computed. Every feature descriptor is a vector of dimension 128 distinctively identifying the neighborhood around the keypoint. To provide orientation invariance, this descriptor is based on the local image gradient, transformed according to the orientation of the keypoint.

For images of the real world, the number of features generated by SIFT is generally large, therefore it is necessary to apply a selection of these descriptors.


Fig. 3. Steps followed for computing SIFT descriptor
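The paper does not state which SIFT implementation was used. As an illustration only, the extraction step with an explicit contrast threshold (the Tc parameter studied in Section 5) could look as follows with OpenCV, whose SIFT implementation exposes contrastThreshold (default 0.04) and edgeThreshold parameters corresponding to the two thresholds of step 2:

```python
import cv2

def extract_sift_features(image_path, contrast_threshold=0.03):
    """Return SIFT keypoints and their 128-dimensional descriptors for one iris image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # contrastThreshold discards low-contrast extrema (step 2 of the algorithm);
    # edgeThreshold discards edge-like responses; both act at keypoint localization.
    sift = cv2.SIFT_create(contrastThreshold=contrast_threshold, edgeThreshold=10)
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors  # descriptors: (num_keypoints, 128) float32 array
```

Lowering contrast_threshold from 0.04 towards 0.02 keeps more low-contrast extrema and therefore yields more keypoints per image, which is the effect reported in Table 1.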

4. Classification method

Using Lowe's SIFT algorithm, a set of SIFT feature vectors is extracted for each image from the UPOL dataset. Consider a new iris image, let us call it the test image. SIFT features are also computed for the test image. These features are matched against those of all the other dataset images in order to label / classify the test iris image. When comparing the test image with an image from the dataset, all the distances between each test feature vector and the feature vectors of the dataset image are computed. A test feature vector is considered to match a feature vector of the dataset image when the distance to that feature is less than a specific fraction of the distance to the next nearest feature. This idea was used by Aly [10] when applying SIFT to face recognition. In the SIFT-based method proposed by Aly, the face image in the dataset with the largest number of matching points with the test image is considered the nearest face image and is used to classify the test image. In this paper an extension of Aly's matching algorithm is used in occluded iris recognition.

We first present Aly's matching procedure. Let I be a test image with m feature vectors t_1, t_2, ..., t_m (SIFT provided m keypoints, and for each keypoint a 128-dimensional feature vector t_i is computed). We find all matching points between the test image I and each image J from the training dataset D. Each iris image J from the dataset is represented by n feature vectors d_1, d_2, ..., d_n (n depends on each image J). The matching points between I and J are found by computing the distances between each test feature vector t_i and all feature vectors d_k of image J. The distances used in these experiments to compare two feature vectors are the Euclidean and Manhattan distances. A test feature vector t_i is considered to match a feature vector d_k of image J if the distance from t_i to d_k is less than the distance from t_i to the next nearest feature vector of image J multiplied by a parameter denoted by TA:

dist(t_i, d_k) < TA · dist(t_i, d_j), where dist(t_i, d_j) ≤ dist(t_i, d_p), ∀ p ∈ {1, ..., n}, p ≠ k,    (1)

i.e., d_j is the nearest feature vector of image J to t_i among the remaining ones.
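A minimal sketch of the matching rule (1), assuming the Euclidean distance and descriptor matrices as produced by the SIFT extraction step (the function name count_matches is ours):

```python
import numpy as np

def count_matches(test_desc, train_desc, t_a=0.6):
    """Count test descriptors of image I that match some descriptor of image J.

    test_desc  : (m, 128) descriptors of the test image I.
    train_desc : (n, 128) descriptors of a dataset image J.
    t_a        : the TA threshold from equation (1).
    """
    matches = 0
    for t in test_desc:
        # Euclidean distances from this test descriptor to all descriptors of J.
        d = np.linalg.norm(train_desc - t, axis=1)
        if d.shape[0] < 2:
            continue
        nearest, second = np.partition(d, 1)[:2]
        # Equation (1): the nearest descriptor matches if it is closer than
        # TA times the distance to the next nearest descriptor of J.
        if nearest < t_a * second:
            matches += 1
    return matches
```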

We extended Aly's method in the following way:

Step 1. Compute a distance between the test image I and each image J of the dataset. The distance between the two images is the average of the distances between the matching feature vectors. To compute the distance between two feature vectors, the Manhattan, Euclidean and Canberra distances were tested in this step.

Step 2. Let I be a test image and m_i the number of matching points between I and each image J_i from the dataset; compute the maximum number of matching points, p = max { m_i ; i = 1, ..., n }. Select three subsets from the dataset D:




a) the subset that contains the images with the maximum number of matches, p, with the test image I (this subset is denoted N in the tables with results);
b) the subset that contains the images with at least p-1 matches with the test image I, coded N1 in the results section;
c) the subset that contains the images with at least p-2 matches, coded N2.

Subset a) is included in subset b), which is included in subset c). The three subsets, combined with the Canberra, Euclidean and Manhattan distances in Step 1, provide nine distinct methods, denoted NC, NE, and NM for subset a), N1C, N1E, and N1M for subset b), and N2C, N2E, and N2M for subset c), respectively.

Step 3. Choose, in each of the nine variants, the image from the subset at minimum distance (computed in Step 1) from the test image. The corresponding image is used for the classification of the test iris image. A compact sketch of the whole three-step procedure is given at the end of this section.

We also considered the Kepenekci matching method [26]. This approach was adapted and integrated with the original SIFT method presented above. When the test image is matched against each image of the dataset, only the relevant vectors are taken into consideration. So, for the test image I and an image J from the dataset, for each feature vector t_i of the iris image I we first look for a set of relevant vectors of the iris image J. A vector d_k of the iris image J is considered relevant if the Euclidean distance between the positions of the corresponding keypoints is less than a certain threshold; vectors that are not relevant are excluded from the comparison procedure. The results obtained using this approach were worse than those obtained using the method described above.
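The three-step extension could be organized as in the sketch below. This is our own illustration: the helper names, the level parameter (0, 1 or 2 for the N, N1 and N2 subsets) and the reading of Step 1 as an average distance over the matched descriptor pairs are assumptions, not code from the paper.

```python
import numpy as np
from scipy.spatial.distance import canberra, cityblock, euclidean

def match_pairs(test_desc, train_desc, t_a=0.6):
    """Pairs (test descriptor, matched train descriptor) satisfying equation (1)."""
    pairs = []
    for t in test_desc:
        d = np.linalg.norm(train_desc - t, axis=1)   # Euclidean distances to all d_k
        if d.shape[0] < 2:
            continue
        order = np.argsort(d)
        if d[order[0]] < t_a * d[order[1]]:          # nearest vs. next nearest (eq. 1)
            pairs.append((t, train_desc[order[0]]))
    return pairs

def classify(test_desc, dataset, metric=cityblock, level=1, t_a=0.6):
    """Label a test image with the extended matching scheme.

    dataset : list of (label, descriptor matrix) for every training image
    metric  : cityblock (Manhattan), euclidean or canberra -> the M / E / C variants
    level   : 0, 1 or 2 -> subsets N, N1, N2 (images with at least p, p-1, p-2 matches)
    """
    results = []
    for label, train_desc in dataset:
        pairs = match_pairs(test_desc, train_desc, t_a)
        # Step 1: image-to-image distance = average distance over matched pairs.
        avg = np.mean([metric(a, b) for a, b in pairs]) if pairs else np.inf
        results.append((label, len(pairs), avg))
    # Step 2: p = maximum number of matches; keep images with at least p - level matches.
    p = max(n for _, n, _ in results)
    candidates = [r for r in results if r[1] >= p - level]
    # Step 3: among the candidates, pick the image at minimum average distance.
    return min(candidates, key=lambda r: r[2])[0]
```

With metric=euclidean and level=1, for example, this sketch corresponds to the N1E variant reported in the tables below.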

5. Results

In this work we tested the influence of different SIFT parameters on the classification results. The parameter which gave the most significant insights was the contrast threshold used in the keypoint localization step of the SIFT method; we denote this parameter by Tc. The other parameter that we varied is the threshold introduced by Aly in the keypoint matching process, denoted TA in equation (1). Different sets of feature vectors are obtained by varying these parameters. For the contrast threshold we considered the following values: 0.02, 0.025, 0.03, 0.035 and 0.04. Table 1 shows some statistical values computed for each set of feature vectors obtained for the different Tc values. The statistics include the minimum, maximum, average and median number of keypoints per image, as well as the number of images that have between 1 and 50 keypoints, 51-100, 101-200, and so on. Obviously, as the contrast threshold decreases the number of feature vectors increases. The number of feature vectors for the original unsegmented UPOL iris images is greater than that obtained for the standardized segmented dataset. For example, for Tc=0.04, the original dataset has an average of 211.99 features compared with only 96.92 for the standardized segmented dataset. However, the classification results are better for the standardized segmented dataset than for the unsegmented one.

Table 1. Statistic values for SIFT feature vectors using different Tc values

Tc           0.02      0.025     0.03      0.035     0.04
Minimum      56        43        31        22        14
Maximum      1486      1143      796       540       326
Average      583.09    340.99    205.96    133.8     96.92
Median       522       259.5     148       98        82.5
1-50         0         1         12        44        67
51-100       15        59        112       149       177
101-200      69        109       97        109       112
200-350      66        53        87        70        28
350-500      39        57        52        11        0
>500         195       105       22        1         0


We observe from Table 1 that for smaller values of the contrast threshold parameter the number of feature vectors generated for each image increases: it starts from an average of 96.92 for Tc=0.04, and increases to 205.96 for Tc=0.03 and 583.09 for Tc=0.02. Considering Tc=0.03, we computed the above-mentioned statistics for the eleven datasets with different levels of iris information. The results are in Table 2.

Table 2. Statistic values for SIFT feature vectors for different datasets (percentage of missing information)

Missing (%)   05       10       15       20       25       30       40       50       60       75      90
Minimum       31       26       28       44       43       44       42       43       45       29      20
Maximum       608      562      537      624      587      537      468      373      309      192     83
Average       165.51   156.72   149.44   176.35   166.24   158.96   141.58   120.68   107.14   77.11   42.72
Median        118      113      107.5    130      126      119      108      95       89       65      40
1-50          25       26       37       4        5        5        6        5        9        56      310
51-100        138      149      148      139      144      155      168      198      224      251     74
101-200       95       97       96       108      117      120      125      126      125      77      0
200-350       94       82       74       96       85       77       71       54       26       0       0
350-500       28       28       28       31       31       26       14       1        0        0       0

From Table 2 we notice that the number of feature vectors decreases as the iris information decreases. For the dataset with 75% missing information the images have an average of 77 feature vectors, 251 of them having between 51 and 100 keypoints.

For Tc=0.03, we compared the classification results obtained with Aly's method and the nine extended versions proposed in this paper from the point of view of the TA parameter in relation (1). In (1) we used the Euclidean distance. We tested values of TA from 0.1 to 0.9. The results are in Table 3, where the values are the numbers of correctly classified images out of a total of 384.

Table 3. Number of correctly classified images – parameter TA variations

TA     0.1     0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9
Aly    276     364     374     379     377     374     372     305     92
NM     277     361     375     380     379     378     373     313     99
NE     277     362     374     379     380     378     373     313     99
NC     277     359     374     379     380     378     373     313     99
N1M    183     336     363     375     379     381     376     323     101
N1E    183     333     364     375     379     380     376     323     101
N1C    183     334     364     375     378     381     376     323     101
N2M    116     282     345     361     377     381     377     333     104
N2E    116     283     346     361     375     380     377     333     104
N2C    116     283     345     361     375     380     377     333     104
AVG    209.8   334.9   364.3   372.5   377.9   379.1   375     321.2   100.4

On average, the best results in Table 3 are obtained for TA = 0.6, where we get 381 correctly classified images with the N1M, N2M and N1C methods. Good results are also obtained for TA = 0.5, and acceptable results for TA = 0.4 and 0.7. The results for TA = 0.1 and 0.9 show that these values cannot be used in practice.
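The experiments behind Tables 3 and 4 amount to a grid over TA and Tc. A compact way to organize such a sweep, reusing the hypothetical extract_sift_features and classify sketches from Sections 3 and 4 and assuming that each of the 384 images is classified against the remaining ones, might be:

```python
# Illustrative sweep; image_paths and labels (person and eye identity per image)
# are assumed to be available, and the helpers come from the earlier sketches.
def evaluate(image_paths, labels, tc, ta, level=1):
    dataset = [(lab, extract_sift_features(p, contrast_threshold=tc)[1])
               for p, lab in zip(image_paths, labels)]
    correct = 0
    for i, (true_label, test_desc) in enumerate(dataset):
        others = dataset[:i] + dataset[i + 1:]             # all remaining images
        if classify(test_desc, others, level=level, t_a=ta) == true_label:
            correct += 1
    return correct                                          # out of len(image_paths)

for tc in (0.02, 0.025, 0.03, 0.035, 0.04):
    for ta in (0.4, 0.5, 0.6, 0.7):
        print(tc, ta, evaluate(image_paths, labels, tc, ta))
```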




We also studied the influence of the contrast threshold variation on the classification results using the same methods as in Table 3. For these comparisons we used TA = 0.6 and the Euclidean distance in (1). The results are in Table 4.

Table 4. Results in image classification – Tc parameter variations

Tc     0.02    0.025   0.03    0.035   0.04
Aly    381     380     374     370     366
NM     382     381     378     374     366
NE     382     381     378     374     366
NC     382     381     378     374     366
N1M    383     383     381     380     371
N1E    383     383     380     379     370
N1C    383     383     381     379     372
N2M    382     383     381     378     373
N2E    384     383     380     378     373
N2C    384     384     380     378     373

The maximum number of correctly classified images, 384 (i.e., 100%), is obtained for two values of the contrast threshold parameter, Tc = 0.025 and Tc = 0.02, using the N2C method and the N2E and N2C methods, respectively. Very good results, namely 383 correctly classified images, are also obtained for Tc = 0.02 and 0.025. On average, the N2 class methods perform very well, regardless of the value of the contrast parameter.

Considering Tc = 0.03 and TA = 0.6, the numbers of correctly classified images depending on the degree of iris occlusion are given in Table 5.

Table 5. Results in image classification (number of correctly classified iris images) – occluded irises, percentage of missing information

Missing (%)   05     10     15     20     25     30     40     50     60     75     90
Aly           379    380    376    374    373    374    370    362    345    337    240
NM            379    380    381    376    374    375    372    366    352    342    245
NE            379    380    381    376    375    375    375    366    352    344    250
NC            379    380    381    376    375    374    372    366    352    343    249
N1M           380    380    380    378    377    378    372    368    356    343    252
N1E           380    381    380    378    377    377    373    369    355    345    259
N1C           380    381    379    378    377    378    373    368    356    344    258
N2M           379    381    380    379    379    382    368    363    353    340    242
N2E           379    381    379    379    379    381    368    364    353    343    248
N2C           380    382    379    379    379    381    368    363    353    343    249

We observe from the data in Table 5 that the best average results are obtained for the N1E and N1C methods, and very close to them are the results provided by the N1M method. The N2 methods gave better results than the N methods. Aly's method gave the lowest average results in image classification for the eleven datasets that were tested.

We also performed the computations separately for each eye. The classification results for the left eye are in Table 6 and for the right eye in Table 7. The total number of images tested in these computations is 192.


Table 6. Results in image classification (number of correctly classified iris images, out of 192) – left eye

Missing (%)   05     10     15     20     25     30     40     50     60     75     90
Aly           190    191    190    189    185    185    187    183    176    174    120
NM            190    191    191    189    187    187    189    186    177    177    121
NE            190    191    191    189    187    187    189    186    177    177    123
NC            190    191    191    189    187    187    189    186    177    177    122
N1M           190    191    191    190    189    189    189    186    180    180    125
N1E           190    191    191    190    189    189    189    186    179    181    128
N1C           190    191    191    190    189    189    189    185    180    181    127
N2M           192    191    190    189    190    191    188    186    176    178    126
N2E           192    191    190    189    189    191    188    185    176    179    131
N2C           192    191    190    189    189    191    188    185    176    178    130

From the results in Table 6 and Table 7 we see that the best average classification results are obtained with the N1M, N1E and N1C methods. Taking into consideration the amount of computation involved, N1M is a good choice. The results for the left eye are on average higher than for the right eye.

Table 7. Results in image classification (number of correctly classified iris images, out of 192) – right eye

Missing (%)   05     10     15     20     25     30     40     50     60     75     90
Aly           189    189    187    185    188    188    186    182    174    167    134
NM            189    189    191    187    187    188    185    184    176    172    137
NE            189    189    191    187    188    188    185    184    176    172    137
NC            189    189    191    187    188    188    185    184    176    172    137
N1M           190    190    190    187    189    190    186    183    179    172    141
N1E           190    190    190    187    189    189    186    184    179    173    141
N1C           190    190    190    187    189    190    186    184    179    172    142
N2M           188    191    189    190    190    191    182    182    180    171    139
N2E           187    191    187    191    190    190    183    183    180    172    141
N2C           188    192    187    191    190    190    183    181    180    172    141

We also compared our methods with the results of the classification provided by the Daugman procedure (in the Masek implementation [18], [19]). We used the Euclidean distance in (1), TA = 0.6 and different values for the contrast parameter. The results are in Table 8. We observe that the Daugman-Masek classification gave better results only for the dataset with 90% missing information. This is a natural result, because in this case there is very little information from which to extract keypoints.

Table 8. Best results in image classification – comparisons with the Daugman-Masek method

Missing (%)     05     10     15     20     25     30     40     50     60     75     90
N1E Tc=0.03     380    381    380    378    377    377    373    369    355    345    259
N2E Tc=0.03     379    381    379    379    379    381    368    364    353    343    248
N1E Tc=0.025    382    382    383    381    380    381    377    372    366    355    276
N2E Tc=0.025    382    381    381    382    381    382    375    373    367    357    266
N1E Tc=0.02     383    383    382    382    381    382    378    381    377    365    305
N2E Tc=0.02     383    382    383    383    382    382    380    380    375    369    293
Daugman         381    380    379    378    378    377    377    375    370    355    316
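For context, the Daugman-Masek baseline in Table 8 rests on a different principle from keypoint matching: the iris texture is unwrapped, encoded as a binary iris code with an accompanying noise mask, and two irises are compared by a normalized Hamming distance computed only over bits valid in both masks. The fragment below is a generic illustration of that comparison and not the Masek MATLAB code actually used for the table:

```python
import numpy as np

def hamming_distance(code_a, mask_a, code_b, mask_b):
    """Normalized Hamming distance between two binary iris codes.

    code_* : boolean arrays with the encoded iris texture.
    mask_* : boolean arrays, True where the corresponding bits are not occluded.
    """
    valid = mask_a & mask_b                  # compare only mutually unoccluded bits
    n = valid.sum()
    if n == 0:
        return 1.0                           # nothing to compare; treat as maximal distance
    return np.count_nonzero((code_a ^ code_b) & valid) / n
```

Because the distance is normalized by the number of mutually valid bits, this kind of matcher degrades gracefully under occlusion, which may explain why it overtakes the SIFT-based methods only on the 90% missing-information dataset.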


Applying the Kepenekci matching procedure and considering different values of the threshold parameter TA, the results are worse than those obtained using the Aly matching scheme.

6. Conclusion, Future work

This paper presents the experiments made using a new approach for iris image classification, based on matching SIFT features in occluded iris images. For testing the methods, the UPOL iris dataset was employed. We compared the newly introduced methods with Aly's matching procedure and obtained equal or superior results. A maximum of 100% correct classification was reached for two values of the contrast threshold parameter, i.e. 0.02 and 0.025, when using the N2E and N2C methods and the N2C method, respectively. From our experiments we deduce that the accuracy increases rapidly as the number of SIFT features increases and then, from a certain level, it starts to saturate. A second remark is that the values of the parameters used in generating SIFT features must be established after some experiments for each dataset. Experiments made on the standardized segmented UPOL dataset show that some of the proposed distance measures which use the SIFT characteristics yield very good results. From our experiments we found that a good value for the threshold parameter TA is 0.6, when using the Euclidean distance in relation (1). The N1E and N2E methods may be considered appropriate given the distance measure used to find a similar image.

Our future work involves finding a method in which color and texture features [27] will also be considered. The proposed method will be tested on occluded image classification using other datasets [14] (UBIRIS, for example). Other experiments will be performed using the extended Aly method with color-SIFT features [28] and the SVM classifier [29].

References

[1] Lowe, David G. (1999) "Object recognition from local scale-invariant features", International Conference on Computer Vision, Corfu, Greece, pp. 1150-1157.
[2] Lowe, David G. (2004) "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, 60(2), pp. 91-110.
[3] Zhu, Ruihui, Jinfeng Yang, and Renbiao Wu. (2006) "Iris recognition based on local feature point matching." 2006 International Symposium on Communications and Information Technologies. IEEE.

[4] Belcher, Craig, and Yingzi Du. (2009) "Region-based SIFT approach to iris recognition." Optics and Lasers in Engineering 47.1: 139-147.

[5] Alonso-Fernandez, Fernando, Pedro Tome-Gonzalez, Virginia Ruiz-Albacete, and Javier Ortega-Garcia (2009) "Iris recognition based on SIFT features." 2009 First IEEE International Conference on Biometrics, Identity and Security (BIdS).

[6] Mehrotra, Hunny, Banshidhar Majhi, and Pankaj Kumar Sa. (2011) "Unconstrained iris recognition using F-SIFT." 2011 8th International Conference on Information, Communications & Signal Processing. IEEE.
[7] Mehrotra, Hunny, Pankaj K. Sa, and Banshidhar Majhi. (2013) "Fast segmentation and adaptive SURF descriptor for iris recognition." Mathematical and Computer Modelling 58.1-2: 132-146.

[8] Yang, Gongping, Shaohua Pang, Yilong Yin, Yanan Li, and Xuzhou Li. (2013) "SIFT based iris recognition with normalization and enhancement." International Journal of Machine Learning and Cybernetics 4.4: 401-407.

[9] Rathgeb, Christian, Jörg Wagner, and Christoph Busch. (2018) "SIFT-based iris recognition revisited: prerequisites, advantages and improvements." Pattern Analysis and Applications: 1-18.

[10] Aly, Mohamed. (2006) "Face recognition using SIFT features." CNS/Bi/EE report 186.
[11] Dobeš, Mihal, Martinek, Jan, Skoupil, Dalibor, Dobešová, Zdena, and Pospíšil, Jaroslav (2006) "Human eye localization using the modified Hough transform", Optik, 117 (10): 468-473.
[12] Dobeš, Mihal, Machala, Libor, Tichavský, Petr, and J. Pospíšil (2004) "Human Eye Iris Recognition Using the Mutual Information", Optik, 115 (9): 399-405.
[13] Dobeš, Mihal, and Machala, Libor, Iris Database, http://www.inf.upol.cz/iris/.
[14] Păvăloi, Ioan, and Anca Ignat. (2018) "Experiments on Iris Recognition using Partially Occluded Images", 8th International Workshop on Soft Computing, SOFA 2018, Arad, Romania, 13-15 September.
[15] Ignat, Anca, and Alexandru Vasiliu. (2018) "A study of some fast inpainting methods with application to iris reconstruction." Procedia Computer Science 126: 616-625.


[16] Daugman, John (1993) "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. Pattern Anal. Mach. Intell., 15 (11): 1148-1161.
[17] Daugman, John (2016) "Information Theory and the IrisCode", IEEE Transactions on Information Forensics and Security, 11 (2): 400-409.
[18] Masek, Libor (2003) "Recognition of human iris patterns for biometric identification", Master's thesis, University of Western Australia.
[19] Masek, Libor, and Kovesi, Peter (2003) "MATLAB Source Code for a Biometric Identification System Based on Iris Patterns", The School of Computer Science and Software Engineering, University of Western Australia.
[20] Păvăloi, Ioan, Adrian Ciobanu, and Mihaela Luca. (2013) "Iris Classification Using WinICC and LAB Color Features", 4th IEEE International Conference on e-Health and Bioengineering, EHB 2013, Iaşi, Romania, 21-23 November 2013.
[21] Ignat, Anca, Mihaela Luca, and Adrian Ciobanu (2016) "New Method of Iris Recognition Using Dual Tree Complex Wavelet Transform." Soft Computing Applications. Springer International Publishing: 851-862.
[22] Wang, Yu-Yao, Zheng-Ming Li, Long Wang, and Min Wang. (2013) "A scale invariant feature transform based method", Journal of Information Hiding and Multimedia Signal Processing, 4(2), pp. 73-89.
[23] Yang, Wei-Jong, Wei-Hau Du, Pau-Choo Chang, Jar-Ferr Yang, and Pi-Hsia Hung. (2017) "Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information", World Academy of Science, Engineering and Technology International Journal of Computer and Information Engineering, Vol. 11, No. 6.
[24] Bejinariu, Silviu-Ioan, and Ramona Luca. (2018) "Analysis of Abnormal Crowd Movements based on Features Tracking", Romanian Journal of Information Science and Technology, ROMJIST, Vol. 21(2), pp. 193-205.
[25] Bejinariu, Silviu-Ioan, Hariton Costin, Florin Rotaru, Ramona Luca, Cristina Diana Niţă, and Camelia Lazăr (2017) "Fireworks Algorithm based Image Registration", in Balaş V.E., Jain L., Balaş M. (eds.), Soft Computing Applications, Proceedings of the 7th International Workshop Soft Computing Applications (SOFA 2016), Volume 1, Advances in Intelligent Systems and Computing, 633, pp. 509-523.
[26] Kepenekci, Burcu. (2001) "Face Recognition Using Gabor Wavelet Transform", PhD thesis, The Middle East Technical University.
[27] Păvăloi, Ioan, and Anca Ignat (2017) "Iris Recognition Using Color and Texture Features", in Balas V.E., Jain L., Balas M. (eds.), Soft Computing Applications, Proceedings of the 7th International Workshop Soft Computing Applications (SOFA 2016), Volume 2, Advances in Intelligent Systems and Computing, 634, Springer, Cham, pp. 498-514.
[28] Bo, Lu, and Taeg Keun Whangbo. (2014) "A SIFT-Color Moments Descriptor for Object Recognition", Proc. of International Conference on IT Convergence and Security (ICITCS), pp. 1-3.
[29] Anand, Bhaskar, and Prashant K. Shah. (2016) "Face Recognition using SURF Features and SVM Classifier", International Journal of Electronics Engineering Research, Volume 8, Number 1, pp. 1-8.