A wavelet based multiresolution algorithm for rotation invariant feature extraction


Pattern Recognition Letters 25 (2004) 1845–1855 www.elsevier.com/locate/patrec

A wavelet based multiresolution algorithm for rotation invariant feature extraction
Ch.S. Sastry, Arun K. Pujari *, B.L. Deekshatulu, C. Bhagvati
Artificial Intelligence Lab, Department of Computer and Information Sciences, University of Hyderabad, Hyderabad 500046, India
Received 11 December 2003; received in revised form 25 May 2004; available online 16 September 2004

Abstract

The present work proposes a new wavelet representation formula for rotation invariant feature extraction. The algorithm is a multilevel representation formula involving no wavelet decomposition in the standard sense. Using the radial symmetry property, which is inherent in the new representation formula, we generate feature vectors that are shown to be rotation invariant. We show that, using a hybrid data mining technique, the algorithm can be used for rotation invariant content based image retrieval (CBIR). The proposed rotation invariant retrieval algorithm, suitable for both texture and nontexture images, avoids missing relevant images but may retrieve some images that are not very relevant. We show that higher precision can, however, be achieved by pruning out the irrelevant images.

© 2004 Elsevier B.V. All rights reserved.

Keywords: Content based retrieval; Radial symmetry; Rotation invariance and wavelets

1. Introduction

In the past decade, with the rapid increase in the use of the internet and the steady growth of computer power, the demand for storing multimedia information has grown tremendously. Therefore, efficient

* Corresponding author. Tel.: +91 40 23010500; fax: +91 40 23010780. E-mail addresses: [email protected] (Ch.S. Sastry), [email protected] (A.K. Pujari), [email protected] (B.L. Deekshatulu), [email protected] (C. Bhagvati).

and fast methods are needed to retrieve desired images from large databases. Traditionally, text annotations such as file names, captions and key words have been used to describe the contents of images. These textual annotations become the basis for indexing and searching using mature text search algorithms. When these algorithms are applied to large databases, the use of annotations becomes not only cumbersome but also inadequate for representing image content. Many content based image retrieval (CBIR) algorithms have been proposed in the current

0167-8655/$ - see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.patrec.2004.07.011


literature (Chen, 2003; Manian and Vasquez, 1998; Manthalkar et al., 2003; Shen and Ip, 1999; Pujari et al., 2003) to overcome the aforementioned problems by indexing the images based on coefficients in the transform domain. Wavelet analysis and its applications have become one of the fastest growing research areas in recent years. The most fascinating application of wavelets in image analysis is pattern recognition: making computers see and recognize objects as humans do has captured the attention of many researchers. Broadly speaking, a wavelet is a localized, oscillatory function with desirable properties such as compact support, sufficient smoothness and good approximation properties such as vanishing moments. The decomposition of a function into a wavelet basis, generated by dyadic scaling and integer translates of the so-called mother wavelet, results in a low frequency block containing the "identity" of the function and several high frequency blocks containing visually important features or "flavours" (such as edges or lines). Although this paper primarily focuses on developing a procedure for rotation independent feature extraction, it also addresses the application of this procedure to CBIR. We draw our motivation from CBIR applications and hence present our work from a CBIR perspective. The recent literature on wavelet based CBIR shows that the wavelet technique has potential applications in CBIR (Manian and Vasquez, 1998; Manthalkar et al., 2003; Pujari et al., 2003; Shen and Ip, 1999). However, most of these applications make the implicit assumption that all images are captured under the same orientation. Rotation invariant feature extraction has been achieved in some earlier works (Manthalkar et al., 2003; Shen and Ip, 1999). In (Shen and Ip, 1999), a wavelet moment invariant method using data in the polar coordinate domain has been proposed.
Another method (Manthalkar et al., 2003), using the discrete wavelet packet transform, has been proposed for rotation invariant texture classification. It is based on the observation that most of the texture energy lies in the mid frequency range, and hence the algorithm proposed therein uses a suitable subband decomposition to capture textural information with minimum increase in computation. The method of (Tang et al., 2002), using both fractals and wavelets and based on a ring-projection approach, also generates rotation invariant feature vectors. Its apparent drawback is that the recognition rate may not be very high, because less information from the original pattern is retained. In the present work, we first observe that, with radially symmetric wavelet bases, we can generate feature vectors that are invariant with respect to rotation. In view of the nonavailability of radially symmetric, compactly supported MRA wavelets (Chui et al., 1996; Daubechies, 1992), and motivated by the requirement of radial symmetry, we propose a novel multilevel wavelet representation formula using the characterization property of orthonormal Daubechies wavelets (Daubechies, 1992). The formula does not involve wavelet decomposition in the standard sense, yet it is a multilevel representation (Mallat, 1989). The construction of our formula ensures a form of radial symmetry in the representation. We make use of the multiresolution and radial symmetry features of the representation to capture the image signature in the form of a combined feature vector for rotation invariant CBIR. Signature vectors are sequences of numbers, so it is natural to employ well known techniques of similarity search in sequence databases. However, since in the present context the database consists of a large number of images, we make use of clustering techniques from data mining. The algorithm captures not only the content (signature) but also how different an image is from other images, by dividing the entire database into distinct clusters. Search time, which depends linearly on the number of similar images, is further improved by a uniform distribution of images among clusters. The k-means clustering algorithm (Pujari, 2001) is used to cluster the images based on their feature vectors. The main aim is to retrieve all relevant images while minimizing the number of irrelevant images during pruning. This paper is organized as follows. In Section 2, we present the basic notation and the motivation for proposing a new wavelet based representation formula. In Section 3, a new wavelet decomposition formula based on the characterization property of orthonormal wavelets is presented. In the later part


of the same section, we propose a procedure for generating rotation invariant feature vectors. In Section 4, we present the simulation results.

2. Notations and motivation for the present work

The Fourier transform of a sufficiently regular function f defined on R² is defined by

$$\hat f(\omega)=\int_{\mathbb{R}^{2}}f(x)\,e^{-2\pi i\langle x,\omega\rangle}\,dx.\tag{2.1}$$

The inner product of two sufficiently good, real valued functions f and g is denoted by

$$\langle f,g\rangle=\int_{\mathbb{R}^{2}}f(x)\,g(x)\,dx.\tag{2.2}$$

In all physical situations, an image function f : R² → R is finitely supported. The wavelet reconstruction of a finitely supported function f is done in the following way (Daubechies, 1988, 1992):

$$f=\sum_{n\in\mathbb{Z}^{2}}\langle f,\Phi_{J,n}\rangle\,\Phi_{J,n}+\sum_{i=1}^{|\det D_{L}|-1}\sum_{j\ge J}\sum_{n\in\mathbb{Z}^{2}}\langle f,\Psi^{i}_{j,n}\rangle\,\Psi^{i}_{j,n}.\tag{2.3}$$
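As a concrete instance of a decomposition of the form (2.3) in the separable dyadic case, a single level of the 2-D Haar transform splits an image into one approximation block and three detail blocks and reconstructs it exactly. This is an illustrative sketch in plain NumPy with Haar filters (chosen only because they are trivial to write down), not the Daubechies filters used later in the paper:

```python
import numpy as np

# One level of the separable 2-D Haar transform: one approximation block (LL)
# and three detail blocks (LH, HL, HH), with exact reconstruction.
def haar2_level(f):
    a = (f[::2, :] + f[1::2, :]) / np.sqrt(2)   # rows: low-pass
    d = (f[::2, :] - f[1::2, :]) / np.sqrt(2)   # rows: high-pass
    LL = (a[:, ::2] + a[:, 1::2]) / np.sqrt(2)
    LH = (a[:, ::2] - a[:, 1::2]) / np.sqrt(2)
    HL = (d[:, ::2] + d[:, 1::2]) / np.sqrt(2)
    HH = (d[:, ::2] - d[:, 1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

def haar2_inverse(LL, LH, HL, HH):
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    d = np.empty_like(a)
    a[:, ::2], a[:, 1::2] = (LL + LH) / np.sqrt(2), (LL - LH) / np.sqrt(2)
    d[:, ::2], d[:, 1::2] = (HL + HH) / np.sqrt(2), (HL - HH) / np.sqrt(2)
    f = np.empty((2 * a.shape[0], a.shape[1]))
    f[::2, :], f[1::2, :] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return f

f = np.random.default_rng(0).random((8, 8))
rec = haar2_inverse(*haar2_level(f))        # perfect reconstruction
```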

Here, D_L is the dilation matrix (Daubechies, 1992), the pair Φ, Ψ generates a 2D separable (nonseparable) MRA for L²(R²) (Daubechies, 1992), and the functions Ψ_{j,n} are defined by

$$\Psi_{j,n}(x)=|\det D_{L}|^{j/2}\,\Psi(D_{L}^{j}x-n).$$

Let f and g be two images such that g is the rotation of f by an angle θ, i.e., g(x) = f(xe^{iθ}). Here, x stands for the pixel and the multiplication xe^{iθ} is interpreted as the product of two complex numbers. In the following, we verify the relation between the wavelet coefficients of f and g. Let η be Φ_{J,n} or Ψ^i_{j,n}. We have

$$\langle g,\eta\rangle=\int_{\mathbb{R}^{2}}g(x)\,\eta(x)\,dx=\int_{\mathbb{R}^{2}}f(xe^{i\theta})\,\eta(x)\,dx=\int_{\mathbb{R}^{2}}f(x)\,\eta(xe^{-i\theta})\,dx=\int_{\mathbb{R}^{2}}f(x)\,\eta(x)\,dx=\langle f,\eta\rangle.\tag{2.4}$$


In the above, we have assumed η to be radially symmetric, i.e., η(x) = η₁(‖x‖) for some univariable function η₁. Finally, we have

$$\sum_{n\in\mathbb{Z}^{2}}|\langle g,\Phi_{J,n}\rangle|=\sum_{n\in\mathbb{Z}^{2}}|\langle f,\Phi_{J,n}\rangle|,\qquad \sum_{n\in\mathbb{Z}^{2}}|\langle g,\Psi^{i}_{j,n}\rangle|=\sum_{n\in\mathbb{Z}^{2}}|\langle f,\Psi^{i}_{j,n}\rangle|.\tag{2.5}$$

From the above relations, it may be concluded that the means of the wavelet coefficients of an image and its rotated copy are exactly the same when the wavelet basis is radially symmetric. Through an analogous derivation, we can observe that the corresponding variances are also the same. Hence the feature vectors, whose elements are the means and variances of the approximation and detail coefficients of the wavelet representation, are rotation invariant when the wavelet basis in use is radially symmetric. Consequently, using radially symmetric wavelets, the CBIR algorithm proposed in (Pujari et al., 2003) could be used directly for rotation invariant CBIR. Since there are no known compactly supported, radially symmetric MRA wavelets (Bonnet et al., 2002; Manthalkar et al., 2003), the above explanation has only theoretical relevance. Motivated by this, in the present work we propose a new wavelet based decomposition formula in which we achieve a form of radial symmetry. Using the proposed formula, we generate feature vectors that are rotation independent, i.e., a function and its rotated copy have the same feature vectors and hence belong to the same class after classification.
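The invariance argument behind (2.5) can be checked numerically: filtering with a radially symmetric kernel commutes with rotation, so statistics computed over rotation-invariant regions match. The sketch below uses a sampled Gaussian as the radially symmetric filter and a 90° rotation, which is exact on the pixel grid; the helper names are our own, not from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
f = rng.random((65, 65))                     # toy image, odd size

# A radially symmetric filter: a Gaussian sampled on a centred grid.
r = np.arange(-8, 9)
X, Y = np.meshgrid(r, r)
g = np.exp(-(X**2 + Y**2) / (2 * 2.0**2))
g /= g.sum()

def annulus_mean(img, r_in, r_out):
    """Mean of |img| over the annulus r_in <= ||x|| < r_out about the centre."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    rad = np.hypot(yy - h // 2, xx - w // 2)
    mask = (rad >= r_in) & (rad < r_out)
    return np.abs(img[mask]).mean()

# Filtering with a rotation-invariant kernel commutes with a 90-degree
# rotation, so the annulus statistics agree up to floating-point error.
ff = convolve(f, g, mode="wrap")
fr = convolve(np.rot90(f), g, mode="wrap")
m1 = annulus_mean(ff, 5, 15)
m2 = annulus_mean(fr, 5, 15)
```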

3. A new wavelet representation

In this section, we present a new wavelet based decomposition formula using the characterization property of orthonormal Daubechies wavelets (Daubechies, 1988, 1992). Let ψ be a compactly supported, sufficiently smooth, real and orthonormal wavelet associated with a multiresolution framework (Daubechies, 1992). Let φ and m₀ be the associated scaling function and low pass filter respectively. Then we have (Daubechies, 1992)


$$|m_{0}(\xi)|^{2}+|m_{0}(\xi+\pi)|^{2}=1,\tag{3.1}$$

$$\hat\phi(\xi)=\prod_{j=1}^{\infty}m_{0}(2^{-j}\xi),\tag{3.2}$$

$$\hat\psi(\xi)=e^{i\xi/2}\,\overline{m_{0}\!\left(\tfrac{\xi}{2}+\pi\right)}\,\hat\phi\!\left(\tfrac{\xi}{2}\right).\tag{3.3}$$
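Property (3.1) is the quadrature-mirror condition on the low pass filter and can be verified numerically for any orthonormal Daubechies filter. A sketch using the db2 ("Daub4") coefficients, hard-coded from their closed form rather than taken from a wavelet library:

```python
import numpy as np

# db2 low-pass filter coefficients, normalised so that sum(h_k^2) = 1.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))

def m0(xi):
    """Low-pass symbol m0(xi) = (1/sqrt(2)) * sum_k h_k exp(-i k xi)."""
    k = np.arange(len(h))
    return (h * np.exp(-1j * k * xi)).sum() / np.sqrt(2)

# Check |m0(xi)|^2 + |m0(xi + pi)|^2 = 1 on a grid of test points.
xi = np.linspace(0.0, np.pi, 7)
vals = [abs(m0(x))**2 + abs(m0(x + np.pi))**2 for x in xi]
```

For an orthonormal filter the identity holds for every ξ, so the check succeeds to machine precision.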

We start with the following characterization, which is basic to the method proposed in this work. Since ψ is an orthonormal wavelet, the characterization conditions of orthonormal wavelets (Hernandez and Weiss, 1996) give

$$\sum_{j\in\mathbb{Z}}|\hat\psi(2^{j}x)|^{2}=1,\quad\text{or}\quad \sum_{j=1}^{\infty}|\hat\psi(2^{j}x)|^{2}+\sum_{j=0}^{\infty}|\hat\psi(2^{-j}x)|^{2}=1\quad\text{a.e. } x\in\mathbb{R}.\tag{3.4}$$

From the definitions of φ and ψ given in (3.2), (3.3) and the property of m₀ given in (3.1), we have

$$|\hat\phi(x)|^{2}=|\hat\phi(2x)|^{2}+|\hat\psi(2x)|^{2}=|\hat\phi(4x)|^{2}+|\hat\psi(2x)|^{2}+|\hat\psi(4x)|^{2}=\cdots=\sum_{j=1}^{\infty}|\hat\psi(2^{j}x)|^{2}\quad\text{a.e. } x\in\mathbb{R}.\tag{3.5}$$

In the above, we have used the property: for x ∈ R, |φ̂(2^j x)| → 0 as j → ∞ a.e. (Daubechies, 1992). From now on, we stick to the following notations: φ̃(x) = φ(−x), φ_j(x) = 2^j φ(2^j x) and Φ = φ * φ̃. Then we have

$$\Phi_{j}(x):=(\phi_{j}*\tilde\phi_{j})(x)=\int_{\mathbb{R}}\phi_{j}(x+y)\,\phi_{j}(y)\,dy,$$

and hence, a.e. for x ∈ R,

$$|\hat\phi(2^{-j}x)|^{2}=\hat\phi(2^{-j}x)\,\overline{\hat\phi(2^{-j}x)}=\widehat{(\phi_{j}*\tilde\phi_{j})}(x)=\hat\Phi_{j}(x).$$

Note that the function Φ_j is the autocorrelation function of φ_j. Let Ψ_j denote the autocorrelation function of ψ_j, with analogous notations. Hence, using (3.5), the characterization (3.4) can be restated in terms of Φ, Ψ as follows:

$$\hat\Phi(x)+\sum_{j=0}^{\infty}\hat\Psi(2^{-j}x)=1\quad\text{a.e. } x\in\mathbb{R}.$$

Equivalently, for any integer J, replacing x by 2^{−J}x in the above equation and using the definition of Φ_j, Ψ_j, we have

$$\hat\Phi_{J}(x)+\sum_{j=J}^{\infty}\hat\Psi_{j}(x)=1\quad\text{a.e. } x\in\mathbb{R}.\tag{3.6}$$

Let Φ^r be the radial extension of Φ onto R² via

$$\hat\Phi^{r}(x)=\hat\Phi(\|x\|)\quad\text{for } x\in\mathbb{R}^{2},\tag{3.7}$$

and assume that Ψ^r has the analogous definition. We can, therefore, write (3.6) as

$$\hat\Phi^{r}_{J}(x)+\sum_{j=J}^{\infty}\hat\Psi^{r}_{j}(x)=1\quad\text{for a.e. } x\in\mathbb{R}^{2}.\tag{3.8}$$

Multiplying both sides of (3.8) by f̂ and taking the inverse Fourier transform, we get

$$f=f*\Phi^{r}_{J}+\sum_{j\ge J}f*\Psi^{r}_{j}.\tag{3.9}$$

Remark 1. The above formula (3.9) is not a wavelet based method in the standard sense, as it does not involve any usual wavelet decomposition. Still, the formula retains the progressive reconstruction feature of the multiresolution framework (Mallat, 1989), by involving the addition of detail images to a coarser image for better reconstruction.

Remark 2. From the definitions of Φ^r and Ψ^r, it may be observed that ∫_{R²} Φ^r_j(x) dx = 1 and ∫_{R²} Ψ^r_j(x) dx = 0. Hence the functions Φ^r and Ψ^r act as low pass and high pass functions respectively. Consequently, f * Φ^r_J returns low resolution information, while the functions f * Ψ^r_j, for j ≥ J, return insignificant values in the portions where f has mild variations and significant values in the portions where f has rapid changes, as shown in Fig. 1.
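The structure of (3.8) and (3.9) — a radially symmetric partition of unity in the frequency domain applied as Fourier multipliers — can be imitated with simple stand-in functions. The sketch below uses telescoping Gaussian bumps instead of the wavelet autocorrelations Φ̂^r, Ψ̂^r (an assumption made purely for illustration), so the partition sums to ≈1 and the approximation-plus-details sum reconstructs the image up to a tiny high-frequency residual:

```python
import numpy as np

# Stand-in radially symmetric partition of unity: by construction
#   Phi_hat + sum_j Psi_hats[j] = G(J + levels) ~= 1.
N = 64
w = np.fft.fftfreq(N) * 2 * np.pi
WX, WY = np.meshgrid(w, w)
R = np.hypot(WX, WY)                        # radial frequency ||omega||

def G(j):
    """Radially symmetric low-pass bump of dyadic 'scale' j."""
    return np.exp(-R**2 / (2 * (0.5 * 2.0**j)**2))

J, levels = 0, 6
Phi_hat = G(J)                              # plays the role of Phi^r_J
Psi_hats = [G(j + 1) - G(j) for j in range(J, J + levels)]  # play Psi^r_j

rng = np.random.default_rng(1)
f = rng.random((N, N))                      # toy image
F = np.fft.fft2(f)

approx = np.fft.ifft2(F * Phi_hat).real               # f * Phi^r_J
details = [np.fft.ifft2(F * P).real for P in Psi_hats]  # f * Psi^r_j
recon = approx + sum(details)               # progressive reconstruction
```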

Remark 3. The function g^r (with g^r = Φ^r, Ψ^r and g = Φ, Ψ) can be computed using the Fourier inversion formula and (3.7) as follows:

$$g^{r}(x)=\int_{\mathbb{R}^{2}}\hat g^{r}(\omega)\,e^{2\pi i\langle\omega,x\rangle}\,d\omega=\int_{\mathbb{R}^{2}}\hat g(\|\omega\|)\,e^{2\pi i\langle\omega,x\rangle}\,d\omega=\frac{1}{2}\int_{0}^{2\pi}\int_{\mathbb{R}}\hat g(t)\,e^{2\pi it\langle x,u_{\theta}\rangle}\,|t|\,dt\,d\theta=\frac{1}{2}\int_{0}^{2\pi}\mathrm{IFT}(|\cdot|\,\hat g)(\langle x,u_{\theta}\rangle)\,d\theta.\tag{3.10}$$
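In a discrete setting, an alternative to evaluating the polar integral (3.10) is to sample a 1-D Fourier profile ĝ at ‖ω‖ on the 2-D FFT grid and invert with `ifft2`. The Gaussian profile below is a toy stand-in for the sym6 autocorrelation used in the paper:

```python
import numpy as np

# Radial extension by direct sampling: \hat{g}^r(omega) = \hat{g}(||omega||).
N = 128
w = np.fft.fftfreq(N) * 2 * np.pi
WX, WY = np.meshgrid(w, w)
R = np.hypot(WX, WY)                 # ||omega|| on the FFT grid

def g_hat_1d(t):
    """Assumed 1-D Fourier profile; a Gaussian for illustration."""
    return np.exp(-t**2)

Gr_hat = g_hat_1d(R)                 # the radially extended spectrum
gr = np.fft.ifft2(Gr_hat).real       # the radially symmetric filter g^r(x)
```

Because the spectrum depends only on ‖ω‖, the resulting spatial filter inherits the symmetries of the sampling grid.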


Fig. 1. A 3-level wavelet decomposition using (3.9): (a) approximation, (b)–(d) detail images which are amplified by 5, (e) sum of approximation and detail images, (f) test image.

In the above, we have taken the polar coordinate substitution x = t u_θ, where u_θ = (cos θ, sin θ) and IFT stands for the inverse Fourier transform operation. As an example, using the nearly symmetric orthonormal wavelet 'sym6' (Daubechies, 1992), we have computed the radially symmetric functions Φ^r, Ψ^r. The functions Φ^r, Ψ^r and the corresponding φ, ψ are shown in Figs. 2–4 (the functions Φ^r, Ψ^r in Fig. 3 are shown to illustrate their nature below the XY-plane). In the following, we define means over concentric circular regions in the region of support of the

Fig. 2. The radially extended functions: (a) Φ^r and (b) Ψ^r.


Fig. 3. The radially extended functions: (a) Φ^r and (b) Ψ^r (viewed from below the XY-plane).


Fig. 4. Nearly symmetric orthonormal wavelet 'sym6': (a) scaling function (φ) and (b) wavelet function (ψ).

image. We show that the means are invariant with respect to rotation. Let M_{η,f,i} be the mean of f * η over the concentric circular region bounded by the circles of radii R_i and R_{i+1}, as shown in Fig. 6. With η = Φ^r, Ψ^r and using the fact that η is radially symmetric, we have


$$M_{\eta,g,i}=\int_{R_{i}\le\|x\|<R_{i+1}}|(g*\eta)(x)|\,dx=\int_{R_{i}\le\|x\|<R_{i+1}}\Big|\int_{\mathbb{R}^{2}}g(y)\,\eta(x-y)\,dy\Big|\,dx=\int_{R_{i}\le\|x\|<R_{i+1}}\Big|\int_{\mathbb{R}^{2}}f(ye^{i\theta})\,\eta(x-y)\,dy\Big|\,dx=\int_{R_{i}\le\|x\|<R_{i+1}}\Big|\int_{\mathbb{R}^{2}}f(y)\,\eta(x-e^{-i\theta}y)\,dy\Big|\,dx=\int_{R_{i}\le\|x\|<R_{i+1}}|(f*\eta)(xe^{i\theta})|\,dx=M_{\eta,f,i}.\tag{3.11}$$

In the above chain of equations, we have explicitly used the property of radial symmetry of η (η = Φ^r, Ψ^r) in the following way:

$$\eta(x-e^{-i\theta}y)=\eta\big(e^{-i\theta}(xe^{i\theta}-y)\big)=\eta(xe^{i\theta}-y),$$

together with the rotation invariance of the annulus R_i ≤ ‖x‖ < R_{i+1}. Proceeding the way the manipulations are carried out in (3.11), we can show that the corresponding variances are also the same, i.e.,

$$V_{\eta,g,i}:=\int_{R_{i}\le\|x\|<R_{i+1}}\big(|(g*\eta)(x)|-M_{\eta,g,i}\big)^{2}\,dx=\int_{R_{i}\le\|x\|<R_{i+1}}\big(|(f*\eta)(x)|-M_{\eta,f,i}\big)^{2}\,dx=:V_{\eta,f,i}.\tag{3.12}$$

From (3.11) and (3.12), it may be concluded that the signature vectors, generated by considering the means and variances of the approximation and detail images over concentric circular regions, are exactly the same for f and g. Hence the algorithm proposed herein is invariant with respect to rotation. The methodology presented above has the following important features:

• The presence of rotation in the images does not change the feature vectors.
• The method works equally well for textural and nontextural images.
• Each summand in the representation is, in a way, a wavelet/scaling moment using data directly in the cartesian domain.

It is interesting to observe that, without taking the wavelet decomposition (3.9), one can generate signature vectors by taking the means and variances of the image function itself over concentric circles. With such feature vectors, we may achieve the desired rotation invariance, but the corresponding image classification will be very poor, as it is always possible to construct images with the same feature vectors but vastly different appearance. In our wavelet approach, however, it is the detail images that matter for overcoming this problem. To avoid the dependence of the feature vectors on the size of the image, we take L concentric circular regions (Fig. 6), each involving a nearly equal number of pixels. Consequently, an N-level wavelet decomposition of an image, irrespective of its size, results in a feature vector of size 2L(N + 1): the N + 1 images (1 approximation image + N detail images) each contribute L means and L variances.
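A minimal sketch of the signature construction just described: pixels are sorted by distance from the image centre and split into L rings of nearly equal pixel count, and the mean and variance of each ring are collected. Function and parameter names are ours, not the paper's:

```python
import numpy as np

def annular_features(img, L=4):
    """Means and variances of |img| over L concentric rings about the image
    centre, each ring containing a nearly equal number of pixels."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    rad = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2).ravel()
    vals = np.abs(img).ravel()
    order = np.argsort(rad)                  # pixels sorted by radius
    feats = []
    for ring in np.array_split(order, L):    # L rings, ~equal pixel counts
        feats.append(vals[ring].mean())
        feats.append(vals[ring].var())
    return np.array(feats)

img = np.arange(64.0).reshape(8, 8)
fv = annular_features(img, L=4)              # 2L = 8 features per image
```

Applied to each of the N + 1 images of the decomposition, this yields the 2L(N + 1)-dimensional signature vector.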
ð3:12Þ From (3.11) and (3.12), it may be concluded that the signature vectors, generated by considering the means and variances of approximation and detail images over concentric circles, are exactly same for f and g. Hence the algorithm proposed herein is invariant with respect to rotation. The methodology presented above has the following important features: • The presence of rotation in the images does not change the feature vectors. • The method works equally well for textural and nontextural images. • Each summand in the representation is in a way a wavelet/scaling moment using data directly in cartesian domain.

3.1. Clustering and pruning

Clustering is a useful technique for discovering the distribution and patterns of the underlying data. The goal of clustering is to discover dense and sparse regions in a data set. Data clustering has been studied in the statistics, machine learning and database communities with diverse emphases (Pujari, 2001). In the clustering algorithms (Pujari, 2001), the Euclidean distance between two feature vectors is taken as the metric for the similarity or dissimilarity of two images. The representative of a cluster is a signature that symbolizes the whole cluster: it shares the maximum number of properties with the rest of the signatures of the cluster, and it may or may not itself be a member of the cluster. The signature of the cluster representative is the one that is compared with the signature of the query image to decide whether the images of that cluster are relevant to the query image. Desirable properties of the cluster representative are:

(1) All the members of the cluster should be used while computing the representative.


Fig. 5. The test image 'bricks' at (a) 0°, (b) 30°, (c) 60° and (d) 90° orientations. Another test image 'wood' at (e) 0°, (f) 120°, (g) 150° and (h) 200° orientations.

Table 1. The first four columns are the feature vectors of the four brick images at different orientations (Fig. 5(a)–(d)); the remaining four columns are the feature vectors of the rotated wood images (Fig. 5(e)–(h)). Each row below lists the 32 feature values of one image.

bricks, Fig. 5(a): 132.87 136.39 137.36 132.69 0.10 0.09 0.10 0.10 8.43 7.94 8.39 8.15 0.03 0.03 0.03 0.03 4.99 5.15 5.53 5.60 0.02 0.02 0.02 0.02 2.59 2.87 3.20 3.61 0.01 0.01 0.01 0.01
bricks, Fig. 5(b): 134.15 134.97 134.68 128.37 0.10 0.10 0.09 0.10 7.65 8.17 8.27 8.61 0.03 0.03 0.03 0.03 4.93 5.18 5.66 5.88 0.02 0.02 0.02 0.02 2.55 2.93 3.34 3.75 0.01 0.01 0.01 0.01
bricks, Fig. 5(c): 134.00 133.66 136.25 129.19 0.10 0.10 0.09 0.10 7.83 8.25 8.02 8.67 0.03 0.03 0.03 0.03 5.12 5.52 5.52 6.02 0.02 0.02 0.02 0.02 2.66 3.05 3.34 3.82 0.01 0.01 0.01 0.01
bricks, Fig. 5(d): 131.51 132.63 135.65 128.86 0.10 0.09 0.09 0.09 8.78 7.97 8.01 8.46 0.03 0.03 0.03 0.03 5.37 5.62 5.67 5.92 0.02 0.02 0.02 0.02 2.82 3.14 3.38 3.83 0.01 0.01 0.01 0.01
wood, Fig. 5(e): 170.77 172.60 173.01 175.18 0.06 0.05 0.05 0.06 7.85 8.20 8.31 9.10 0.02 0.02 0.02 0.03 6.31 5.86 6.36 6.63 0.02 0.02 0.02 0.02 2.88 2.04 2.28 2.35 0.01 0.01 0.01 0.01
wood, Fig. 5(f): 169.02 169.49 171.89 172.03 0.06 0.05 0.06 0.05 8.66 8.72 8.63 9.08 0.03 0.03 0.02 0.03 6.54 7.11 6.93 7.33 0.02 0.02 0.02 0.02 2.25 2.47 2.64 2.86 0.01 0.01 0.01 0.01
wood, Fig. 5(g): 167.76 166.01 172.24 170.72 0.05 0.06 0.05 0.05 8.38 8.85 8.72 8.99 0.02 0.03 0.02 0.03 6.56 6.60 6.64 7.05 0.02 0.02 0.02 0.02 2.08 2.29 2.37 2.55 0.01 0.01 0.01 0.01
wood, Fig. 5(h): 167.31 166.39 171.01 170.70 0.06 0.06 0.06 0.05 8.49 9.12 8.62 8.87 0.03 0.03 0.02 0.03 6.22 6.73 6.49 6.84 0.02 0.02 0.02 1.96 2.20 2.36 2.51 2.51 0.01 0.01 0.01 0.01


(2) The representative should be the closest element to the maximum number of members of the cluster. (3) The representative should be a member of the cluster that it represents.
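The k-means step used on the signature vectors can be sketched in a self-contained way (plain NumPy rather than the implementation referenced in (Pujari, 2001)):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means on the rows of X: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # squared distances
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):                          # skip empty clusters
                C[j] = X[labels == j].mean(axis=0)
    return C, labels

# Two well separated blobs of "signature vectors".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 4)), rng.normal(5.0, 0.1, (20, 4))])
C, labels = kmeans(X, 2)
```

Here the centroids play the role of cluster representatives; note they need not coincide with any member of the cluster, which is why property (3) above is listed only as desirable.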


In the present work, we use the k-means clustering algorithm (Pujari, 2001) for clustering the images. Pruning is a process in which irrelevant and dissimilar images are removed from the result set. When a user inputs a query image, the system may retrieve some images that are not relevant to the user's query image. A pruning step helps in discarding these irrelevant images. In the present work, we start with a small number of circular regions over which the means and variances are computed. Once the clustering is done with feature vectors of this smaller size, we use finer circular regions and consequently obtain larger feature vectors. Using these larger feature vectors, we refine the relevant cluster and remove the irrelevant images. Although taking means and variances over thin circular regions could be helpful in pruning the cluster, it has a possible effect of altering the shift invariance property; consequently, it is preferable to take circular regions as thick as possible in the pruning step. The signature vectors of the query image and of the database images are computed using the methodology described in the previous sections. We then cluster the images using the k-means clustering algorithm. The cluster whose representative is closest to the query image provides the images similar to the query image. The implementation of the algorithm proposed in this work is shown in Fig. 8.

Fig. 6. The concentric circular regions (R1 < R2 < R3).
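The coarse-to-fine pruning step can be sketched as: recompute finer ring signatures only within the retrieved cluster and keep the members closest to the query. The helper names and the keep fraction are assumptions for illustration:

```python
import numpy as np

def ring_means(img, L):
    """Per-ring means of img over L equal-count concentric rings."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    rad = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2).ravel()
    vals = img.ravel()[np.argsort(rad)]      # pixel values sorted by radius
    return np.array([ring.mean() for ring in np.array_split(vals, L)])

def prune(query, members, L_fine=8, keep_frac=0.5):
    """Keep the cluster members whose finer signatures are nearest the query."""
    q = ring_means(query, L_fine)
    dists = [np.linalg.norm(ring_means(m, L_fine) - q) for m in members]
    keep = np.argsort(dists)[: max(1, int(len(members) * keep_frac))]
    return [members[i] for i in keep]

query = np.ones((8, 8))
members = [np.ones((8, 8)), np.full((8, 8), 10.0)]
kept = prune(query, members)                 # the matching image survives
```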

Fig. 7. (a) Query image, (b)–(f) some of the members of the cluster.


Fig. 8. Implementation of the algorithm proposed in this work. [Flowchart: the database images and the query image undergo the wavelet decomposition (3.9); signature vectors of size 2L(N + 1) are computed using (3.11) and (3.12); the database signatures are clustered and cluster representatives are formed; the cluster with the representative nearest to the query signature is retrieved and pruned; the images similar to the query image are output.]

4. Simulation results

As we have seen in Section 3, the wavelet to be used is an orthonormal, finitely supported MRA wavelet. We have carried out the simulations using 'Daub6'. Fig. 1 shows the nature of the contributions of the scaling and wavelet parts of (3.9). We have considered two sets of images of size 512 × 512 at different orientations (Fig. 5) and have computed feature vectors using a 3-level wavelet decomposition in (3.9) starting from J = 2. The feature vectors, shown in Table 1, confirm that they are invariant with respect to rotation. In the computations, we have taken L, the number of concentric circular regions, to be 4; hence the feature vectors are of size 32. In Table 1, the feature vectors are shown after normalizing them by the number of pixels involved in the computation of the means and variances over the concentric circles. For the CBIR application, an image collection consisting of more than 600 images has been downloaded from the internet (http://sipi.usc.edu/services/database/Database.html, www.bluefly.com, www.llbean.com). The query image considered is shown in Fig. 7(a). Some of the images of the cluster which are similar to the query image are shown in Fig. 7(b)–(f). We have observed that most of the irrelevant images can be eliminated by generating the feature vectors using more circular regions.

5. Conclusion

In this paper, we propose a new wavelet based representation formula. It is verified analytically, and confirmed through computations, that the formula can generate feature vectors that are invariant with respect to rotation. It has also been experimentally observed that the algorithm works well for CBIR applications.

Acknowledgments

The authors thank the referees for their suggestions. The first author is thankful to the National Board for Higher Mathematics (NBHM), India, for its financial support (Grant No. FNO: 40/7/2002-R&D II/1124).


References

Bonnet, S., Peyrin, F., Turjman, F., Prost, R., 2002. Multiresolution reconstruction in fan beam geometry. IEEE Trans. Image Proc. 11 (3), 169–176.
Chen, S.-C., 2003. Content-based image retrieval using moment preserving edge detection. Image Vision Comput. 21, 809–826.
Chui, C.K., Stockler, J., Ward, J.D., 1996. Analytic wavelets generated by radial functions. Adv. Comp. Math. 5, 95–123.
Daubechies, I., 1988. Orthonormal bases of compactly supported wavelets. Comm. Pure Appl. Math. 41, 909–996.
Daubechies, I., 1992. Ten Lectures on Wavelets. CBMS-NSF Series in Applied Math No. 61, SIAM, Philadelphia.
Hernandez, E., Weiss, G., 1996. A First Course on Wavelets. CRC Press, New York.
Mallat, S., 1989. Multiresolution approximations and wavelet orthonormal bases of L²(R). Trans. Amer. Math. Soc. 315, 69–87.


Manian, V., Vasquez, R., 1998. Scaled and rotated texture classification using a class of basis functions. Pattern Recognition 31 (12), 1937–1948.
Manthalkar, R., Biswas, P.K., Chatterji, B.N., 2003. Rotation and scale invariant texture features using discrete wavelet packet transform. Pattern Recognition Lett. 24 (14), 2455–2462.
Pujari, A.K., 2001. Data Mining Techniques. Universities Press.
Pujari, A.K., et al., 2003. Data mining and wavelet based content-based image retrieval systems. In: Chakrabarty, G., Sarkar, S. (Eds.), Proceedings of 6th International Conference on IT, CIT '03, December 22–24, 2003, Bhubaneshwar, India. Tata McGraw-Hill, India.
Shen, D., Ip, H.H.S., 1999. Discriminative wavelet shape descriptors for recognition of 2-D patterns. Pattern Recognition 32, 151–165.
Tang, Y.Y., Tao, Y., Lam, E.C.M., 2002. New method for feature extraction based on fractal behaviour. Pattern Recognition 35, 1071–1081.