Resolution Enhancement of Images for Further Pattern Recognition Applications
Maha Awad1, Fatma G. Hashad2, Mustafa M. Abd Elnaby1, Said E. El Khamy3, Osama S. Faragallah4, Alaa M. Abbas2, Heba A. El-Khobby1, El-Sayed M. El-Rabaie2, Salah Diab, Bassiouny Sallam, Saleh A. Alshebeili, and Fathi E. Abd El-Samie2

1 Department of Electronics and Electrical Communications, Faculty of Engineering, Tanta University, Tanta, Egypt.
2 Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, 32952, Menouf, Egypt.
3 Department of Electrical Engineering, Faculty of Engineering, Alexandria University, Alexandria, 21544, Egypt.
4 Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952, Egypt.
[email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected]
ABSTRACT
In storing large databases of images such as fingerprint and medical databases, the required memory size becomes a great challenge. This paper presents a framework for reducing the size of large image databases used in pattern recognition applications with decimation, and reconstructing the images with their original sizes using interpolation for feature extraction. For pattern recognition applications, a new trend based on Mel-Frequency Cepstral Coefficients (MFCCs) is presented in the paper. To reconstruct the images to their original sizes, interpolation methods like bilinear, bicubic, warped-distance, and neural methods are investigated and compared. The sensitivity of the features extracted from the images to the interpolation method used is studied. For the feature extraction process, the interpolated images are converted to 1-D signals by lexicographic ordering and used in the time domain or transformed to the Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), or Discrete Wavelet Transform (DWT) domain. The MFCCs and polynomial shape coefficients are then extracted to generate the database of features, which can be used for pattern identification using neural networks. The pattern recognition is performed by extracting features from the pattern image under test in the same manner used in the training step. Experimental results show that feature extraction from an interpolated image that retains the original image dimensions can be used robustly for pattern recognition. In addition, the results show that the DCT is the most appropriate domain for feature extraction.
Keywords: Image interpolation, Pattern recognition, Cepstral analysis.

I. INTRODUCTION
Pattern recognition applications depend mainly on large stored databases that are used for feature extraction and training of a classifier for possible recognition of new incoming data samples. The main problem encountered in these applications is the tremendous storage size required. This problem has motivated the search for storage reduction solutions that have no or little effect on the pattern recognition process. In this paper, we try to find a solution for this problem. We use decimation as a tool to reduce the size of the data, and then image interpolation to reconstruct the image with its original size before feature extraction. The effect of this decimation-interpolation process on the pattern recognition is investigated in the paper.

Image interpolation is the process by which a high resolution image is obtained from a low resolution one. It allows the user to increase the size of images interactively, to concentrate on some details or to get a better overview of a certain part of the image. Image interpolation has a wide range of applications in numerous fields such as medical image processing, military applications, space imagery and remote sensing. The image interpolation problem has been intensively treated in the literature [1-5]. The optimal approach for image interpolation is based on the popular sinc function as the best interpolating basis function. However, the sinc function decays too slowly at infinity, and it is difficult to realize this function physically. Hence, different approximations such as the bilinear, bicubic and cubic spline approximations have been proposed [3,4,5,6] to solve this problem. These conventional techniques are space-invariant algorithms based on certain basis functions. They do not take into consideration the spatial activity of the image to be interpolated.
Recently, a linear space-variant approach was proposed for image interpolation [7]. This approach is based on the evaluation of a "warped distance" between the pixel to be estimated and each of its neighbors. The warping process is performed by moving the estimate of the pixel towards the more homogeneous neighboring side. This algorithm has led to better results, especially for edge interpolation. Neural networks have been successfully applied in the area of signal and image processing, including image interpolation. For instance, Plaziac [8] proposed a neural-based image interpolation method for noise-free and noisy line doubling and image expansion problems. He reported promising results, particularly under noisy conditions.

In this paper, we investigate several image interpolation methods for pattern recognition applications. The rest of the paper is organized as follows. Section II describes the image interpolation methods used in the paper. Section III introduces some characteristics of the pattern images used in recognition applications. Section IV demonstrates the proposed framework for image storage and pattern recognition. In Section V, the experimental results are given. Finally, Section VI summarizes the concluding remarks.

II. IMAGE INTERPOLATION METHODS

A. LINEAR SPACE-INVARIANT IMAGE INTERPOLATION
The process of image interpolation aims at estimating intermediate pixels between the known pixel values. This process is performed on a 1-D basis, row-by-row, and then column by column. If we have a discrete sequence f(xk) of length N as shown in Fig. (1-a), and this sequence is filtered and down-sampled by 2, we get another sequence g(xn) of length N/2 as shown in Fig. (1-b). The interpolation process aims at estimating a sequence l(xk) of length N as shown in Fig.(1-c), which is as close as possible to the original discrete sequence f(xk).
For equally-spaced 1-D sampled data, g(x_n), many interpolation functions can be used. The value of the sample to be estimated, l(x_{k+1}), can, in general, be written in the form [1]:

l(x_{k+1}) = \sum_{n} c_n \, \varphi(x_{k+1} - x_n)        (1)

where \varphi(x) is the interpolation basis function.
From the classical sampling theory [3, 7], if g(x_n) is band-limited to [-\pi, \pi], then:

l(x_{k+1}) = \sum_{n} g(x_n) \, \mathrm{sinc}(x_{k+1} - x_n)        (2)

This is known as ideal interpolation. From the numerical computations perspective, the ideal interpolation formula is not practical due to the slow rate of decay of the interpolation kernel sinc(x). So, approximations such as the bilinear, bicubic and cubic spline are used as alternatives [3, 7].

As shown in Fig. 1, we define the distance of x_{k+1} from x_n and x_{n+1} as [6, 7]:

s = x_{k+1} - x_n ,    1 - s = x_{n+1} - x_{k+1}
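As a hedged, minimal illustration of Eq. (2) (not taken from the paper; the function and variable names are ours), the following Python sketch reconstructs intermediate samples with a finite sinc kernel. The need to truncate the slowly decaying kernel is exactly the practical difficulty noted above.

```python
import numpy as np

def sinc_interpolate(g, upsample=2):
    """Approximate the ideal interpolation of Eq. (2) for a uniformly
    sampled sequence g; returns len(g)*upsample estimated samples."""
    n = np.arange(len(g))                         # known sample positions x_n
    x = np.arange(len(g) * upsample) / upsample   # positions to be estimated
    # np.sinc(t) = sin(pi t) / (pi t): the ideal low-pass kernel for unit spacing
    weights = np.sinc(x[:, None] - n[None, :])    # matrix of sinc(x - x_n) values
    return weights @ g                            # l(x) = sum_n g(x_n) sinc(x - x_n)

if __name__ == "__main__":
    g = np.cos(2 * np.pi * 0.05 * np.arange(32))  # a slowly varying test sequence
    print(sinc_interpolate(g)[:6])
```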
For the bilinear, bicubic and cubic spline image interpolation, we have [3,4]:

i- Bilinear

l(x_{k+1}) = (1 - s) g(x_n) + s g(x_{n+1})        (3)

ii- Bicubic

l(x_{k+1}) = g(x_{n-1})(-s^3 + 2s^2 - s)/2 + g(x_n)(3s^3 - 5s^2 + 2)/2 + g(x_{n+1})(-3s^3 + 4s^2 + s)/2 + g(x_{n+2})(s^3 - s^2)/2        (4)

iii- Cubic spline

l(x_{k+1}) = g(x_{n-1})[(3+s)^3 - 4(2+s)^3 + 6(1+s)^3 - 4s^3]/6 + g(x_n)[(2+s)^3 - 4(1+s)^3 + 6s^3]/6 + g(x_{n+1})[(1+s)^3 - 4s^3]/6 + g(x_{n+2}) s^3/6        (5)

For cubic spline interpolation, a pre-filtering step is used to calculate the spline coefficients, and then Eq. (5) is implemented [3, 7].
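The following minimal sketch implements Eqs. (3) and (4) for doubling a 1-D sequence (s = 1/2); the names are ours, and the cubic-spline case of Eq. (5) is omitted because it also needs the pre-filtering step mentioned above.

```python
import numpy as np

def bilinear_1d(g, s=0.5):
    """Eq. (3): l(x_{k+1}) = (1 - s) g(x_n) + s g(x_{n+1})."""
    return (1 - s) * g[:-1] + s * g[1:]

def bicubic_1d(g, s=0.5):
    """Eq. (4), evaluated wherever a full 4-sample region of support exists."""
    gm1, g0, gp1, gp2 = g[:-3], g[1:-2], g[2:-1], g[3:]
    return (gm1 * (-s**3 + 2*s**2 - s) / 2 + g0 * (3*s**3 - 5*s**2 + 2) / 2
            + gp1 * (-3*s**3 + 4*s**2 + s) / 2 + gp2 * (s**3 - s**2) / 2)

def rows_then_columns(image, interp_1d):
    """2-D use: apply the 1-D rule along rows and then along columns."""
    return np.apply_along_axis(interp_1d, 0, np.apply_along_axis(interp_1d, 1, image))

img = np.arange(36, dtype=float).reshape(6, 6)
print(rows_then_columns(img, bilinear_1d).shape)   # new midpoint values only
```

The sketch produces only the new midpoint values; for image doubling, these estimates would be interleaved with the original samples.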
Fig. (1) Signal down-sampling and interpolation. (a) Original data sequence. (b) Down-sampled version of the original data sequence. (c) Interpolated data sequence. (d) Down-sampled version of the interpolated data sequence.
For 2-D image interpolation, all of these techniques are applied along rows and then along columns.

B. WARPED-DISTANCE IMAGE INTERPOLATION

The idea of the warped distance can be used with any of the three above-mentioned methods to improve their performance. This idea is based on modifying the distance s and using a new distance s' based on the homogeneity in the neighborhood of each estimated pixel. The warped distance s' can be estimated using the following relation [6]:

s' = s - k A_n s (s - 1)        (6)

where A_n refers to the asymmetry of the data in the neighborhood of the estimated pixel, and it is defined as [7]:

A_n = ( |g(x_{n+1}) - g(x_{n-1})| - |g(x_{n+2}) - g(x_n)| ) / (L - 1)        (7)

where L = 256 for 8-bit pixels. The scaling factor L - 1 keeps A_n in the range of -1 to 1. The parameter k controls the intensity of warping. It has a positive value and may be equal to 1 or 2. The desired effect of this warping is to avoid blurring of the edges in the interpolation process.
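A minimal sketch of Eqs. (6) and (7) for a single estimated pixel, assuming the four neighbors are passed in explicitly; the symbol k for the warping strength and the final clipping of s' to [0, 1] are our own choices.

```python
def warped_distance(s, g_nm1, g_n, g_np1, g_np2, k=1.0, L=256):
    """Return the warped distance s' of Eqs. (6)-(7); the g_* arguments are
    g(x_{n-1}), g(x_n), g(x_{n+1}) and g(x_{n+2})."""
    A_n = (abs(g_np1 - g_nm1) - abs(g_np2 - g_n)) / (L - 1)   # Eq. (7), in [-1, 1]
    s_warped = s - k * A_n * s * (s - 1)                      # Eq. (6)
    return min(max(s_warped, 0.0), 1.0)                       # keep the distance valid

# A homogeneous left side and a strong right-side variation pull the estimate
# towards x_n: the warped distance drops below the plain distance s = 0.5.
print(warped_distance(0.5, 40, 42, 50, 220))
```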
C. NEURAL IMAGE INTERPOLATION

In order to implement the interpolator using a neural network, it should first be trained. We can use a feed-forward neural network as shown in Fig. 2. A typical feed-forward neural network has an input layer, a number of hidden layers, and an output layer. Training a neural network is accomplished by adjusting its weights using a training algorithm. The training algorithm adapts the weights by attempting to minimize the sum of the squared errors between the desired and actual outputs of the output neurons, given by [9, 10]:

E = \frac{1}{2} \sum_{o=1}^{O} (D_o - Y_o)^2        (8)

where D_o and Y_o are the desired and actual outputs of the oth output neuron, respectively, and O is the number of output neurons. Each weight in the neural network is adjusted by adding an increment to reduce E as rapidly as possible. The adjustment is carried out over several training iterations, until a satisfactorily small value of E is obtained or a given number of epochs is reached. The error back-propagation algorithm can be used for this task [9, 10].

Fig. (2) A feed-forward neural network.
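As a small, hedged illustration of the squared-error criterion of Eq. (8) and of the gradient-descent idea behind back-propagation, the sketch below repeats one delta-rule update for a single linear output layer; this is a deliberately reduced stand-in for a full multi-layer network, with names of our own choosing.

```python
import numpy as np

def squared_error(D, Y):
    """Eq. (8): E = 0.5 * sum_o (D_o - Y_o)**2."""
    return 0.5 * np.sum((D - Y) ** 2)

def gradient_step(W, x, D, lr=0.05):
    """One update of a linear output layer Y = W x; dE/dW = -(D - Y) x^T."""
    Y = W @ x
    return W + lr * np.outer(D - Y, x)   # move W in the direction that reduces E

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))              # 4 inputs feeding 2 output neurons
x, D = rng.normal(size=4), np.array([1.0, -1.0])
for _ in range(200):                     # several training iterations
    W = gradient_step(W, x, D)
print(squared_error(D, W @ x))           # approaches a satisfactorily small value
```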
The steps of the proposed neural implementation of polynomial interpolation can be summarized as follows (a sketch of this procedure is given after the steps):

In the training phase:
1- A set of images is interpolated using a certain polynomial interpolation technique.
2- The input points of the region of support used for each pixel estimation are sorted in a vector form (2 points for bilinear and four points for bicubic and cubic spline interpolation).
3- All vectors are used as inputs to the neural network, with the interpolation results as outputs for training.

In the testing phase:
1- In the image to be interpolated, the input points of the region of support used for each pixel estimation are sorted in a vector form.
2- This vector is used as an input to the neural network to estimate the required pixel value.

A neural network of any fixed size can be used with all types of interpolation.
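The steps above amount to a regression problem: region-of-support vectors in, interpolated values out. A hedged sketch is given below, assuming scikit-learn is available and using random 4-sample neighborhoods in place of vectors gathered from real training images.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def bicubic_midpoint(v):
    """Eq. (4) at s = 1/2 for one 4-sample region of support."""
    s = 0.5
    w = np.array([(-s**3 + 2*s**2 - s) / 2, (3*s**3 - 5*s**2 + 2) / 2,
                  (-3*s**3 + 4*s**2 + s) / 2, (s**3 - s**2) / 2])
    return float(w @ v)

# Training phase: region-of-support vectors as inputs, interpolation results as outputs.
rng = np.random.default_rng(1)
X_train = rng.random((5000, 4))
y_train = np.array([bicubic_midpoint(v) for v in X_train])
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
net.fit(X_train, y_train)

# Testing phase: a region of support from the image to be interpolated is fed to
# the trained network to estimate the required pixel value.
v = np.array([0.2, 0.4, 0.9, 0.8])
print(net.predict(v.reshape(1, -1))[0], bicubic_midpoint(v))
```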
III. Pattern Characteristics and Recognition
The concept of image pattern recognition is based on storing a large amount of images in a database to compare them to any new sample to decide whether it belongs to this database or not. The main problem with databases is the huge storage size. We can solve this problem by storing down-sampled versions of the original images in the database. The original image can be retrieved, when needed, by image interpolation. The proposed idea of saving down-sampled images rather than the original ones can be used with different types of databases such as fingerprint and landmine databases. We will investigate both of them. We will also present a robust feature extraction method suitable for the proposed application. This method is based on cepstral features.

Fingerprints are biometric signs that can be utilized for identification and authentication purposes in biometric systems. Among all the biometric indicators, fingerprints have one of the highest levels of reliability [12]. The main reasons for the popularity of fingerprint-based identification are the uniqueness and permanence of fingerprints. It has been claimed that no two individuals, including identical twins, have exactly the same fingerprints. It has also been claimed that the fingerprint of an individual does not change throughout his lifetime, with the exception of a significant injury to the finger that creates a permanent scar [13].

Fingerprints are graphical patterns of locally parallel ridges and valleys with well-defined orientations on the surface of fingertips. Ridges are the lines on the tip of one's finger. The unique pattern of lines can be a loop, whorl, or arch pattern. Valleys are the spaces or gaps on either side of a ridge. The most important features in fingerprints are called the minutiae, which are usually defined as the ridge endings and the ridge bifurcations. A ridge bifurcation is the point where a ridge forks into a branch ridge [14]. Examples of minutiae are shown in Fig. (3). A full fingerprint normally contains between 50 and 80 minutiae. A partial fingerprint may contain fewer than 20 minutiae. According to the Federal Bureau of Investigation, it suffices to identify a fingerprint by matching 12 minutiae, but it has been reported that in most cases, 8 matched minutiae are enough.
Fig. (3) Examples of minutiae points.

Landmines are small explosive objects that are buried under the earth's surface. They are classified as Anti-Personnel (AP) landmines, which are used to kill persons, and Anti-Tank (AT) landmines, which are used to attack vehicles and their occupants. There are about 100 million buried landmines covering more than 200,000 square kilometers of the world surface and affecting about 70 countries [15, 16]. Many obstacles are faced in removing these buried landmines, such as the loss or absence of maps or information about these mines or even the areas where they were laid, the change of mine locations due to climatic and physical factors, the large variety of types of AP and AT landmines, and the high costs needed to remove mines. It is known that the production cost of landmines is very low (maybe $3 per mine), but the detection and removal cost is high (more than $1000 per mine). Several techniques have been proposed for demining (detecting and clearing) these buried mines.

One of the promising landmine detection techniques is the acoustic-to-seismic (A/S) technique, which performs the detection of landmines by vibrating them with acoustic or seismic waves that are generated and received by acoustic or seismic transducers, respectively [17, 18]. An acoustic or a seismic wave is excited by a source at a known position. It travels through the soil to interact with underground objects. This technique consists of a transmission system, which generates the acoustic or the seismic wave into the area under test, and a receiving system, which senses the changes in the mechanical properties of this area. These mechanical changes are used to produce acoustic images for the areas under test.

Detection of landmines from acoustic images can be accomplished by traditional shape-based techniques. These techniques begin by intensity thresholding of images to reject the dark background and then detection of landmines based on their dimensions [15, 16]. The drawbacks of these techniques lie in their inability to detect landmines with small dimensions and their inability to reject background noise in the intensity thresholding approach. In most of these techniques, pre-processing steps like morphological operations are required. In spite of the ability of morphological operations to smooth objects in images, they may close small clutter shapes to give false alarms.
The detection of a landmine can be achieved by the intensity thresholding of the landmine image with a certain threshold to remove the dark background. This process helps in eliminating the background and leaves objects only. Then, an area thresholding process is performed based on the areas of the expected objects to remove the unwanted small-area clutters. A pre-processing step, such as the use of morphological operations, may be required prior to thresholding. Fig. (4) shows a block diagram of a traditional landmine detection technique.

Traditional landmine detection based on geometrical information has several limitations. The intensity thresholding may not remove all unwanted noise or clutter in images. The area thresholding process requires certain thresholds, which may differ from image to image, leading to either false alarms or missed landmines. Without preprocessing, the detection probability of landmines is about 90%. Morphological preprocessing can increase this detection probability to about 97% with a small false alarm probability, but all these probabilities are in the absence of noise [15, 16]. The effect of noise is an issue that is rarely studied by researchers in this area.

A new cepstral approach valid for fingerprint and landmine pattern recognition is presented in this paper. To reduce the size of large databases, decimated or down-sampled images are saved, and different interpolation techniques are investigated to obtain the images with the original size. The sensitivity of the proposed cepstral feature extraction method to the synthetic pixels obtained through interpolation is studied.
Fig. (4) Steps of the traditional landmine detection (landmine image, histogram estimation, histogram first trough estimation, preprocessing, intensity thresholding, area thresholding, decision making: landmine or no landmine).
IV. Proposed Pattern Recognition Method
The proposed pattern recognition method has two phases; a training phase and a testing phase. In the training phase, features are extracted from the training images. These features are used to train a neural network. In the testing phase, features are extracted from every incoming image and a feature matching process is performed to decide whether these features belong to a previously known pattern or not. The block diagram of the proposed recognition system is shown in Fig. (5). The steps of the feature extraction process from a fingerprint image can be summarized as follows (a sketch of these steps is given after the list):

1- The image is lexicographically ordered into a 1-D signal.
2- The obtained 1-D signal can be used in the time domain or in another discrete transform domain. The DCT, DST and DWT can be used for this purpose.
3- MFCCs and polynomial shape coefficients are extracted from either the 1-D signal, the discrete transform of the signal, or both of them.
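A minimal sketch of steps 1 and 2, assuming NumPy and SciPy are available; the DWT branch is omitted because it would need an additional wavelet package such as PyWavelets.

```python
import numpy as np
from scipy.fft import dct, dst

def lexicographic_order(image):
    """Step 1: scan the 2-D image row by row into a 1-D signal."""
    return np.asarray(image, dtype=float).reshape(-1)

def transform_signal(signal, domain="dct"):
    """Step 2: keep the time-domain signal or move it to a transform domain."""
    if domain == "time":
        return signal
    if domain == "dct":
        return dct(signal, type=2, norm="ortho")
    if domain == "dst":
        return dst(signal, type=2, norm="ortho")
    raise ValueError("unsupported domain")

image = np.random.default_rng(2).integers(0, 256, size=(64, 64))   # stand-in pattern image
x = lexicographic_order(image)
print(x.shape, transform_signal(x, "dct")[:3])
```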
(a) Training phase: training image, lexicographic ordering, discrete transform (DCT, DST or DWT), feature extraction (MFCCs + polynomial coefficients), training of a neural network, to database.
(b) Testing phase: test image, lexicographic ordering, discrete transform (DCT, DST or DWT), feature extraction (MFCCs + polynomial coefficients), feature matching with the trained neural network, decision making, identified pattern or not.
Fig. (5) Block diagram of the proposed pattern recognition method.
The concept of feature extraction using the MFCCs is widely known in speaker identification [19-28]. It contributes to the goal of identifying speakers based on low-level signal properties. Pattern images, after lexicographic ordering, are treated in this paper like speech signals. In speaker identification, MFCC extraction provides sufficient information for good speaker discrimination. Experimental results will show that the ideas of feature extraction from speech signals can be applied with great success to 1-D pattern signals.
The MFCCs are commonly extracted from signals through cepstral analysis. The input signal is first framed and windowed, the Fourier transform is then taken, and the magnitude of the resulting spectrum is warped on the Mel-scale. The log of this spectrum is then taken and the DCT is applied [19, 20]. Fig. (6) shows the steps of extraction of MFCCs from an image.

Fig. (6) Cepstral transformation of a 1-D pattern signal (pattern image, lexicographic ordering, windowing, DFT, Mel-frequency warping, log, IDFT, Mel cepstrum).
The 1-D signal must first be broken up into small segments, each of N samples. These segments are called frames, and the motivation for this framing process is the quasi-stationary nature of the 1-D signals. If we examine the signal over discrete segments which are sufficiently short in duration, then these segments can be considered stationary and exhibit stable characteristics [19, 20]. To avoid loss of information, frame overlap is used. Each frame begins at some offset of L samples with respect to the previous frame, where L ≤ N.
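A compact, hedged sketch of the framing and of the cepstral pipeline of Fig. (6) (window, DFT magnitude, Mel warping, log, DCT). The triangular Mel filterbank and the nominal sampling rate are our own simplifications, not the paper's exact settings; for an image-derived 1-D signal the sampling rate only fixes the frequency scale.

```python
import numpy as np
from scipy.fft import dct

def frame_signal(x, N=256, L=128):
    """Split x into frames of N samples, a new frame starting every L samples (L <= N)."""
    starts = np.arange(0, len(x) - N + 1, L)
    return np.stack([x[s:s + N] for s in starts])

def mel_filterbank(n_filters, n_fft, fs):
    """Triangular filters equally spaced on the Mel scale (simplified)."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = np.floor((n_fft + 1) * inv_mel(np.linspace(0.0, mel(fs / 2), n_filters + 2)) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = edges[i - 1], edges[i], edges[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def mfcc(frames, n_filters=26, n_coeffs=13, fs=8000):
    """Window -> |DFT| -> Mel warping -> log -> DCT, applied frame by frame."""
    spectrum = np.abs(np.fft.rfft(frames * np.hamming(frames.shape[1]), axis=1))
    mel_energy = np.maximum(spectrum @ mel_filterbank(n_filters, frames.shape[1], fs).T, 1e-10)
    return dct(np.log(mel_energy), type=2, norm="ortho", axis=1)[:, :n_coeffs]

signal = np.random.default_rng(3).random(64 * 64)    # a lexicographically ordered image
print(mfcc(frame_signal(signal)).shape)               # (number of frames, 13)
```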
The MFCCs are sensitive to mismatches or time shifts between the training and testing data. As a result, other coefficients need to be added to the MFCCs to reduce this sensitivity. Polynomial coefficients can be used for this purpose. These coefficients can help in increasing the similarity between the training and the testing signals. If each MFCC is modeled as a time waveform over adjacent frames, polynomial coefficients can be used to model the slope and curvature of this time waveform. Adding these polynomial coefficients to the MFCC vector helps in reducing the sensitivity to any mismatches between the training and testing data [25-28]. A sketch of this polynomial fit is given below.

The classification step in the proposed recognition method is in fact a feature matching process between the features of a new image and the features saved in the database. Neural networks are widely used for feature matching. Multi-Layer Perceptrons (MLPs) consisting of an input layer, one or more hidden layers, and an output layer can be used for this purpose [29, 30].
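A hedged sketch of the polynomial shape coefficients described above: each cepstral coefficient is treated as a short time waveform over a window of adjacent frames, and a second-order least-squares fit supplies its slope and curvature; the window length and the edge padding are our own choices.

```python
import numpy as np

def polynomial_shape_coeffs(mfccs, half_window=2):
    """For every frame and every cepstral coefficient, fit c(t) over
    2*half_window + 1 adjacent frames and keep the slope and curvature."""
    n_frames, _ = mfccs.shape
    t = np.arange(-half_window, half_window + 1)
    padded = np.pad(mfccs, ((half_window, half_window), (0, 0)), mode="edge")
    slope, curvature = np.zeros_like(mfccs), np.zeros_like(mfccs)
    for i in range(n_frames):
        window = padded[i:i + 2 * half_window + 1]          # (2W + 1, n_coeffs)
        a2, a1, _ = np.polyfit(t, window, deg=2)            # a2*t^2 + a1*t + a0, per column
        slope[i], curvature[i] = a1, a2
    return slope, curvature

mfccs = np.random.default_rng(4).random((40, 13))           # stand-in MFCC matrix
slope, curvature = polynomial_shape_coeffs(mfccs)
features = np.hstack([mfccs, slope, curvature])             # MFCCs + polynomial coefficients
print(features.shape)                                        # (40, 39)
```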
V. Experimental Results
Several experiments have been carried out to test the performance of the proposed cepstral pattern recognition method after several types of image interpolation used to retain the original image size. Spatial and transform domains are used for feature extraction after interpolation. The degradations considered are AWGN, impulsive noise, and speckle noise, with and without blurring; a sketch of how such degradations can be simulated is given below. In the training phase of the proposed recognition method, a database is first composed. Twenty images are used to generate the database. The MFCCs and polynomial coefficients are estimated to form the feature vectors of the database. In the testing phase, similar features to those used in the training are extracted from the degraded fingerprint images after interpolation and used for matching.
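The degradations used in the tests can be simulated along the following lines (a sketch with our own parameter conventions, not the paper's exact settings): AWGN at a given SNR, salt-and-pepper impulses at a given error percentage, multiplicative speckle of a given variance, and an optional Gaussian blur applied before the noise.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)

def add_awgn(img, snr_db):
    """Additive white Gaussian noise at the requested SNR in dB."""
    noise_power = np.mean(img ** 2) / (10.0 ** (snr_db / 10.0))
    return img + rng.normal(0.0, np.sqrt(noise_power), img.shape)

def add_impulsive(img, error_percentage):
    """Salt-and-pepper impulses on the given percentage of pixels."""
    out = img.copy()
    mask = rng.random(img.shape) < error_percentage / 100.0
    out[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
    return out

def add_speckle(img, variance):
    """Multiplicative speckle: img * (1 + n) with n ~ N(0, variance)."""
    return img * (1.0 + rng.normal(0.0, np.sqrt(variance), img.shape))

def blur(img, sigma=1.0):
    """Gaussian blur used in the 'blurring with noise' test cases."""
    return gaussian_filter(img, sigma)

img = rng.integers(0, 256, size=(128, 128)).astype(float)
print(add_awgn(blur(img), snr_db=25).shape)
```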
Seven methods are used for feature extraction (summarized in the sketch below). In the first method, the MFCCs and the polynomial coefficients are extracted from the spatial-domain signals only. In the second method, the features are extracted from the DWT of these signals. In the third method, the features are extracted from both the original signals and the DWT of these signals and concatenated. In the fourth method, the features are extracted from the DCT of the spatial-domain signals. In the fifth method, the features are extracted from both the original signals and the DCT of these signals and concatenated. In the sixth method, the features are extracted from the DST of the spatial-domain signals. In the last method, the features are extracted from both the original signals and the DST of these signals and concatenated.

Samples of the fingerprint and landmine images used in the database are shown in Fig. (7). The results of the experiments on fingerprint images are given in Figs. (8) to (14). Also, the results of the experiments on interpolated landmine images are given in Figs. (15) to (21). From these figures, it is clear that feature extraction from transform domains like the DCT and DST is not sensitive to the synthetic pixels obtained through all types of interpolation. This is attributed to the averaging effect of the transformation equations, which cancels the effect of pixel synthesis errors. The results also reveal that neural interpolation is feasible with the proposed pattern recognition method.
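The seven configurations can be written down compactly as below; this is a hedged sketch in which cepstral_features is a reduced stand-in for the full MFCC-plus-polynomial extractor of Section IV, and the two DWT-based sets are only indicated because they would require a wavelet package such as PyWavelets.

```python
import numpy as np
from scipy.fft import dct, dst

def cepstral_features(signal, frame_len=256, n_coeffs=13):
    """Reduced stand-in for the MFCC + polynomial coefficient extractor."""
    frames = signal[: len(signal) // frame_len * frame_len].reshape(-1, frame_len)
    log_spec = np.log(np.abs(np.fft.rfft(frames * np.hamming(frame_len), axis=1)) + 1e-10)
    return dct(log_spec, type=2, norm="ortho", axis=1)[:, :n_coeffs].ravel()

def feature_sets(signal):
    """The seven feature-extraction configurations compared in the experiments;
    sets 2 and 3 (DWT of the signal, and signal plus its DWT) are omitted here."""
    sig_dct = dct(signal, type=2, norm="ortho")
    sig_dst = dst(signal, type=2, norm="ortho")
    return {
        "signal": cepstral_features(signal),
        "dct": cepstral_features(sig_dct),
        "signal+dct": np.hstack([cepstral_features(signal), cepstral_features(sig_dct)]),
        "dst": cepstral_features(sig_dst),
        "signal+dst": np.hstack([cepstral_features(signal), cepstral_features(sig_dst)]),
    }

signal = np.random.default_rng(6).random(64 * 64)
print({name: f.shape for name, f in feature_sets(signal).items()})
```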
Fig. (7) Samples of the images used in the training phase: (a) fingerprint images, (b) landmine images.
Fig. (8) Recognition rate variation with degradation for the different feature extraction methods from degraded fingerprint images: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise. Each panel plots the recognition rate of the seven feature extraction methods against the SNR (dB), the impulsive-noise error percentage, or the speckle-noise variance.
Fig. (9) Recognition rate variation with degradation for the different feature extraction methods from degraded fingerprint images with bilinear interpolation: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
Fig. (10) Recognition rate variation with degradation for the different feature extraction methods from degraded fingerprint images with bicubic interpolation: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
Fig. (11) Recognition rate variation with degradation for the different feature extraction methods from degraded fingerprint images with warped-distance bilinear interpolation: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
Fig. (12) Recognition rate variation with degradation for the different feature extraction methods from degraded fingerprint images with warped-distance bicubic interpolation: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
Fig. (13) Recognition rate variation with degradation for the different feature extraction methods from degraded fingerprint images with neural implementation of bilinear interpolation: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
Fig. (14) Recognition rate variation with degradation for the different feature extraction methods from degraded fingerprint images with neural implementation of bicubic interpolation: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
Fig. (15) Recognition rate variation with degradation for the different feature extraction methods from degraded landmine images: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
Fig. (16) Recognition rate variation with degradation for the different feature extraction methods from degraded landmine images with bilinear interpolation: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
Fig. (17) Recognition rate variation with degradation for the different feature extraction methods from degraded landmine images with bicubic interpolation: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
Fig. (18) Recognition rate variation with degradation for the different feature extraction methods from degraded landmine images with warped-distance bilinear interpolation: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
Fig. (19) Recognition rate variation with degradation for the different feature extraction methods from degraded landmine images with warped-distance bicubic interpolation: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
Fig. (20) Recognition rate variation with degradation for the different feature extraction methods from degraded landmine images with neural implementation of bilinear interpolation: (a) AWGN, (b) impulsive noise, (c) blurring with AWGN, (d) blurring with impulsive noise, (e) speckle noise, (f) blurring with speckle noise.
[Figure 21, panels (a)-(f): recognition rate (%) versus SNR (dB), error percentage, or noise variance for features extracted from the 1-D signal, its DWT, DCT, and DST, and their combinations. (a) AWGN. (b) Impulsive noise. (c) Blurring with AWGN. (d) Blurring with impulsive noise. (e) Speckle noise. (f) Blurring with speckle noise.]
Fig. (21) Recognition rate variation with degradation for the different feature extraction methods from degraded landmine images with neural implementation of bicubic interpolation.
VI. Conclusions
This paper presented a new cepstral method of feature extraction for pattern recognition applications. In the proposed method, images are transformed to 1-D signals by lexicographic ordering, and the MFCCs and polynomial shape coefficients are extracted from these signals and/or their transforms. A database of the cepstral features of the pattern images is generated in the training phase and used for feature matching in the testing phase, after the images have been reconstructed with the different interpolation methods. Cepstral analysis of this kind is traditionally used in speaker identification, but the experimental results show that it can also be used for feature extraction from images.
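To make this pipeline concrete, the sketch below illustrates one possible Python implementation under stated assumptions: lexicographic (row-by-row) ordering of a grayscale image, Hamming-windowed frames, a 20-filter mel filterbank with a nominal sampling frequency, 13 cepstral coefficients, and a fifth-order polynomial fit. The frame length, filter count, nominal sampling frequency, and polynomial order are illustrative choices, not the parameters used in the experiments reported here.

```python
import numpy as np
from scipy.fftpack import dct


def lexicographic_1d(image):
    """Convert a 2-D grayscale image to a 1-D signal by row-by-row ordering."""
    return np.asarray(image, dtype=float).ravel()


def mel_filterbank(n_filters, n_fft, fs):
    """Triangular mel-spaced filterbank (illustrative parameters)."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(mel(0.0), mel(fs / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb


def cepstral_features(signal, frame_len=256, n_filters=20, n_ceps=13, fs=8000):
    """MFCC-style coefficients: framing, power spectrum, mel filterbank, log, DCT.
    The sampling frequency fs is nominal, since the signal comes from an image."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    frames = frames * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / frame_len
    fbank = power @ mel_filterbank(n_filters, frame_len, fs).T
    fbank = np.maximum(fbank, np.finfo(float).eps)  # avoid log(0)
    return dct(np.log(fbank), type=2, axis=1, norm='ortho')[:, :n_ceps]


def feature_vector(image, poly_order=5):
    """Mean cepstral coefficients plus polynomial shape coefficients fitted to them."""
    ceps = cepstral_features(lexicographic_1d(image))
    mean_ceps = ceps.mean(axis=0)
    poly = np.polyfit(np.arange(len(mean_ceps)), mean_ceps, poly_order)
    return np.concatenate([mean_ceps, poly])
```

The transform-domain variants are obtained in the same way after applying the DCT, DST, or DWT to the 1-D signal, either instead of or in addition to the time-domain signal.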
Feature extraction from the different transform domains has been tested, and it was found that the features extracted from the DCTs of the 1-D pattern signals are the most powerful. This is attributed to the energy compaction property of the DCT, which concentrates most of the signal energy in the first few coefficients, so that features extracted from these coefficients are sufficient to characterize the signals. The results have also shown that recognition rates of up to 100% are achievable in the absence of degradations.
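The energy compaction claim can be checked numerically. The following sketch, which uses a synthetic smooth signal as a stand-in for a lexicographically ordered pattern signal, computes the fraction of the total energy captured by the first k DCT coefficients; for smooth, highly correlated signals this fraction approaches one even for small k.

```python
import numpy as np
from scipy.fftpack import dct


def dct_energy_fraction(signal, k):
    """Fraction of total energy captured by the first k DCT-II coefficients.
    With norm='ortho' the transform is orthonormal, so the coefficient energy
    equals the signal energy."""
    c = dct(np.asarray(signal, dtype=float), type=2, norm='ortho')
    return np.sum(c[:k] ** 2) / np.sum(c ** 2)


# Synthetic smooth signal standing in for a 1-D pattern signal.
t = np.linspace(0.0, 1.0, 1024)
sig = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)
print(dct_energy_fraction(sig, k=16))  # close to 1.0 for smooth signals
```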
For Salah Diab and Bassiouny Sallam: Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt.
For Saleh A. Alshebeili: KACST-TIC in Radio Frequency and Photonics for the e-Society (RFTONICS), King Saud University.