Upper and lower volumetric fractal descriptors for texture classification


Accepted manuscript

André Ricardo Backes
To appear in: Pattern Recognition Letters
PII: S0167-8655(17)30096-X
DOI: 10.1016/j.patrec.2017.03.020
Reference: PATREC 6776
Received: 28 July 2016; Revised: 30 December 2016; Accepted: 20 March 2017

Please cite this article as: André Ricardo Backes, Upper and lower volumetric fractal descriptors for texture classification, Pattern Recognition Letters (2017), doi: 10.1016/j.patrec.2017.03.020


Research Highlights


• In this study we propose a novel approach based on the Bouligand-Minkowski method.

• We convert the texture into a surface in R³ and use a sphere of radius r to dilate it.

• We consider that the arrangement of the pixels in the texture interferes with the dilation process.

• We create two different sets of descriptors: upper and lower volumetric fractal descriptors.


• These two sets of descriptors are used separately for texture classification purposes.


André Ricardo Backes∗∗

ABSTRACT


In this work, we propose a novel set of fractal descriptors extracted from gray-level texture images using the Bouligand-Minkowski method. To accomplish that, we first convert the texture image into a surface, where the z-axis is the intensity value associated with each pixel in the image. Then, we use a sphere of radius r to dilate this surface. However, instead of using only the influence volume information to extract the descriptors, we consider that the arrangement of the pixels in the texture interferes with the dilation process differently in regions of concavity and convexity of the texture. Thus, we separate these descriptors into two sets: upper and lower volumetric fractal descriptors. Classification results show that the descriptors present great discrimination power, outperforming traditional methods from the literature on three different benchmark databases and proving to be an efficient tool for texture analysis.

© 2017 Elsevier Ltd. All rights reserved.

1. Introduction


Texture is one of the richest sources of information in digital images. In fact, effective and efficient description and classification of texture patterns is an important problem in image analysis, and it has motivated intensive research in recent years Xie et al. (2015); Jin et al. (2011). While humans have a visual system highly capable of distinguishing different texture patterns, emulating this ability on computers is still a challenge. Many researchers have focused on understanding how the visual system extracts textural information Tkacik et al. (2010); Barbosa et al. (2013). Another problem is that texture is a characteristic that lacks a formal definition. On the one hand, an artificial texture usually appears as the repetition of a model (in its exact form or with small variations in illumination, scale, rotation, etc.) over a region of the image Backes et al. (2009). On the other hand, natural textures (such as a leaf surface, a mammographic image, cloud images, etc.) are not composed of a repeated model. Instead, they present random and persistent stochastic patterns, which result in a cloud-like appearance Kaplan (1999). Despite this lack of definition, research in the field of texture analysis has greatly advanced.

∗∗ Corresponding author: Tel.: +55-34-3239-4393; fax: +55-34-3239-4392; e-mail: [email protected] (André Ricardo Backes)

The literature presents a vast number of techniques to extract descriptors able to describe a texture pattern. These techniques exploit the image texture information using second-order statistics Murino et al. (1998); Haralick (1979), spectral analysis (e.g., Fourier and Gabor filters) Manjunath and Ma (1996); Casanova et al. (2009); Azencott et al. (1997), the wavelet transform Sengür et al. (2007); Lu et al. (1997), deterministic walks Backes et al. (2010), fractal dimension Backes et al. (2009); Chen and Bi (1999), complex network theory da F. Costa et al. (2007), shortest paths in graphs de Mesquita Sá Junior et al. (2013b, 2014), and gravitational systems Sá Junior and Backes (2012); de Mesquita Sá Junior et al. (2013a). Among these techniques is the fractal dimension, which has received great attention in the last few decades and has been applied in a wide range of areas that make use of digital images Backes et al. (2009); Xu et al. (2009). This technique aims to describe a texture in terms of its homogeneity/heterogeneity, a measurement that is related to the complexity of the pattern. Fractal dimension has also been successfully used in time series analysis Maragos and Sun (1993) and, in combination with lacunarity, to describe and segment natural texture images Keller et al. (1989). In this paper, we propose a novel approach for static textures based on the Bouligand-Minkowski fractal dimension method Backes et al. (2009); Tricot (1995). This method describes a texture pattern using its influence volume, which is computed by the dilation of the pattern with a sphere of radius r.


However, regions of concavity and convexity of the texture may affect how the dilation process occurs, thus interfering with the resulting influence volume. Therefore, we propose to separate the influence volume into two sets, resulting in two different sets of descriptors: upper and lower volumetric fractal descriptors.

The remainder of the paper is organized as follows. Section 2 describes the Bouligand-Minkowski fractal dimension and how descriptors can be extracted from the influence volume of a texture image. We describe the idea of upper and lower influence volumes in Section 3. In that section, we consider that the arrangement of the pixels in the texture interferes with the dilation process differently in regions of concavity and convexity of the texture, and we separate the descriptors into two sets: upper and lower volumetric fractal descriptors. In Section 4, we describe experiments in which our descriptors are compared against traditional texture analysis methods using three different benchmark texture databases. Section 5 presents the results obtained by each evaluated method, together with a detailed discussion. Finally, Section 6 concludes this paper.

2. Complexity analysis of Textures


2.1. Bouligand-Minkowski fractal dimension

An important approach to estimate the complexity of an image is the fractal dimension, i.e., the non-integer dimension associated with fractals as stated by Benoit Mandelbrot in the 1970s. The fractal dimension is an important attribute of fractals and for image analysis, as it is related to the complexity and space occupation of objects Neil and Curtis (1997); Mandelbrot (2000). Although there exist many approaches to estimate the fractal dimension, an important one for texture analysis is the Bouligand-Minkowski method Backes et al. (2009). This method estimates the complexity of an object through its influence region (an area for shapes and a volume for textures), which is obtained by the dilation of the object with a structuring element. This influence region is very sensitive to structural changes of the object, which makes this method one of the most accurate ways to estimate the fractal dimension Tricot (1995).

To estimate the fractal dimension of a texture image using the Bouligand-Minkowski method, first we must map the entire texture image onto a 3D surface S ⊂ R³. Let I(x, y) be a texture image, where 0 ≤ I(x, y) ≤ 255 is the intensity of the pixel (x, y) for all (x, y) ∈ I. We convert each pixel of the image I into a point s = (x, y, z), s ∈ S, with z = I(x, y). By dilating the surface S with a sphere of radius r, we are able to estimate the Bouligand-Minkowski fractal dimension D as

\[
D = 3 - \lim_{r \to 0} \frac{\log V(r)}{\log r}
\tag{1}
\]

with

\[
V(r) = \left|\left\{\, s \in \mathbb{R}^3 \;\middle|\; \exists\, s' \in S : \|s - s'\| \le r \,\right\}\right|,
\tag{2}
\]

where r is the dilation radius and s' = (x', y', z') is a point of the surface S whose distance from s = (x, y, z) is smaller than or equal to r. V(r) is the influence volume computed through the dilation of each point of the surface S. Basically, V(r) represents the number of points in R³ whose distance from a point of the object is smaller than r. Figure 1 shows an example of a texture image mapped into a surface and its posterior dilation.

Dilating the surface is a time-consuming task and an important issue in the method. This task is usually performed using the Euclidean Distance Transform (EDT). The EDT enables us to compute the distance map of a 3D volume. Before the transform, each voxel in this volume represents either the surface object or empty space. After the transform, each voxel stores the minimum distance from that voxel to the surface (Bruno and da Fontoura Costa, 2004; Saito and Toriwaki, 1994). In the sequence, the voxels at the same distance from the surface are counted, thus resulting in the influence volume V(r). Figure 2 shows an example of the EDT and the influence volume computation for an image object. Finally, we estimate the fractal dimension D through a simple linear regression: the fractal dimension is estimated as D = 3 − α, where α is the slope of the line which approximates the log-log curve r × V(r).
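The EDT-based computation of V(r) can be illustrated with the minimal sketch below. This is an illustration only, not the author's original code: the function name `influence_volume`, the padding strategy and the use of `scipy.ndimage.distance_transform_edt` are implementation choices made for this example.

```python
# Minimal sketch: Bouligand-Minkowski influence volume V(r) of a gray-level
# texture via the Euclidean Distance Transform (EDT).  Illustrative only.
import numpy as np
from scipy.ndimage import distance_transform_edt

def influence_volume(texture, r_max=7):
    """Return the exact radii r <= r_max and the influence volume V(r)."""
    texture = np.asarray(texture, dtype=int)
    pad = int(np.ceil(r_max))                   # room for the dilation in every direction
    depth = 256 + 2 * pad                       # 8-bit gray levels plus padding in z
    shape = (texture.shape[0] + 2 * pad, texture.shape[1] + 2 * pad, depth)
    volume = np.ones(shape, dtype=bool)         # True = empty space, False = surface voxel
    rows, cols = np.indices(texture.shape)
    volume[rows + pad, cols + pad, texture + pad] = False
    dist = distance_transform_edt(volume)       # distance of every voxel to the surface
    # Exact radii of the form sqrt(i^2 + j^2 + k^2) up to r_max.
    radii = np.unique([np.sqrt(i * i + j * j + k * k)
                       for i in range(pad + 1)
                       for j in range(pad + 1)
                       for k in range(pad + 1)])
    radii = radii[radii <= r_max]
    v = np.array([np.count_nonzero(dist <= r + 1e-9) for r in radii])
    return radii, v
```

Counting the voxels with distance at most r for each exact radius yields the cumulative influence volume; in practice memory, not time, tends to be the bottleneck, since the padded volume has (N + 2R)²(L + 2R) voxels.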

2.2. Volumetric fractal descriptors

Real fractals are mathematical objects and do not exist in nature. In contrast, texture images and other real objects are limited by their size and by the resolution of the space in which they are embedded. This limits their self-similarity to a range of scales, and it is reflected as different alterations in the complexity of the object. This variation in complexity is easily perceived when studying the behavior of the log-log curve. As investigated in Peleg et al. (1984), all these small variations in the curve are not fully represented by a single value of fractal dimension, such as the slope of a simple linear regression. In view of this, we propose to perform a more complete complexity analysis by using the fractal descriptors proposed in Backes et al. (2009). Basically, instead of using the estimated fractal dimension, we propose to use the influence volume V(r), computed for different radius values r within a specific range 0 ≤ r ≤ r_max, as a feature vector ψ(r_max):

\[
\psi(r_{max}) = \left[\log V(0),\ \log V(1),\ \log V(\sqrt{2}),\ \ldots,\ \log V(r_{max})\right]
\tag{3}
\]

with

\[
r \in \left\{0,\ 1,\ \sqrt{2},\ \sqrt{3},\ 2,\ \ldots,\ r_{max}\right\}
\tag{4}
\]

where each radius is an exact Euclidean distance of the form

\[
r = \sqrt{i^2 + j^2 + k^2}, \qquad i, j, k \in \mathbb{N}.
\tag{5}
\]

By using different r values, we are able to represent small particularities of the pixel organization of the texture at different scales. This provides better discrimination and classification of the texture, as it combines both micro (interference in the dilation of nearby pixels) and macro (influence volume at different scales) texture information.
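As a complement, the sketch below shows how the descriptor vector ψ(r_max) of Eq. (3) and the single fractal dimension of Eq. (1) would be obtained from the same r × V(r) data. It is again only an illustration and reuses the hypothetical `influence_volume` helper from the previous sketch.

```python
import numpy as np

def fractal_descriptors(texture, r_max=7):
    """psi(r_max): one log-volume per exact radius r <= r_max, as in Eq. (3)."""
    _, v = influence_volume(texture, r_max=r_max)   # hypothetical helper from the sketch above
    return np.log(v.astype(float))

def fractal_dimension(texture, r_max=7):
    """D = 3 - slope of the log-log regression of V(r) against r, as in Eq. (1)."""
    radii, v = influence_volume(texture, r_max=r_max)
    slope, _ = np.polyfit(np.log(radii[1:]), np.log(v[1:].astype(float)), 1)  # skip r = 0
    return 3.0 - slope
```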


Fig. 1. Example of the dilation of a texture pattern: (a) Sample image; (b) Texture image mapped into a surface; (c) Surface dilated using r = 5.


Fig. 2. Example of the EDT and the influence volume computation for an image object. Black points represent the image object. Values represent the squared distance from each point to the object. Gray points represent the influence volume (in this case, an area) for a specific radius.


Fig. 3. Intersection of the influence volume of two points above (dark gray) and below (light gray) the texture for different dilation radii: (a) r = 1; (b) r = 2; (c) r = 3.

3. Upper and lower influence volumes

The accuracy of the Bouligand-Minkowski method is due to the great ability of the influence volume to detect small changes in the texture pattern during the dilation process. At small radii, the spheres dilate freely, without interacting with each other. In this situation, the influence volume is close to the sum of the volumes of the individual spheres. As the dilation radius increases, one sphere may intersect others, so that the volume of the spheres is shared among different spheres. As a consequence, the influence volume is no longer the sum of the individual spheres.

The possibility of two or more spheres intersecting each other depends not only on the dilation radius, but also on the arrangement of the pixels in a region of the texture. Figure 3 shows two points representing pixels in a region of concavity in a texture. As the dilation radius increases, the influence volumes of these points start to intersect each other. However, this phenomenon is not equal above and below the texture line. We notice, in this case, that the intersection between the two influence volumes starts first above the texture (thus resulting in a large intersection volume) than below. Moreover, in Figure 3(c) most of the points of the spheres are located under the texture line. As a consequence, the volumes of the regions of the spheres under and over the texture are different and may be considered separately as different sources of information about the texture pattern.

To corroborate this idea, we propose to compute the descriptors defined in Section 2.2 considering only the influence volume which is above or below the texture pattern. We name these descriptors upper and lower volumes, respectively. In order to compute them, we also limit the direction of the dilation process. In the traditional Bouligand-Minkowski method, the dilation of the texture is performed in all directions, i.e., the points to the right and left of the texture are also occupied by its influence volume. For the upper and lower volumes, we allow the dilation only above and below the texture pattern, as shown in Figure 4, where the transitions of gray levels indicate the dilation of the texture for an increasing radius value.
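A minimal sketch of this splitting is given below. It is one possible reading of the description above, not the author's implementation: the dilation is kept inside the N × N footprint (no padding in x and y), and each voxel is assigned to the upper or lower volume by comparing its height with the surface height of its own column, which is an assumption made for illustration.

```python
# Minimal sketch: upper and lower influence volumes (one possible reading of Sec. 3).
import numpy as np
from scipy.ndimage import distance_transform_edt

def upper_lower_volumes(texture, r_max=7):
    texture = np.asarray(texture, dtype=int)
    pad = int(np.ceil(r_max))
    depth = 256 + 2 * pad
    volume = np.ones(texture.shape + (depth,), dtype=bool)   # no padding in x and y
    rows, cols = np.indices(texture.shape)
    volume[rows, cols, texture + pad] = False                 # surface voxels
    dist = distance_transform_edt(volume)

    z = np.arange(depth)[None, None, :]
    surface = (texture + pad)[:, :, None]
    above = z > surface            # voxels over the surface of their own column (assumption)
    below = z < surface            # voxels under the surface of their own column (assumption)

    radii = np.unique([np.sqrt(i * i + j * j + k * k)
                       for i in range(pad + 1)
                       for j in range(pad + 1)
                       for k in range(pad + 1)])
    radii = radii[(radii > 0) & (radii <= r_max)]              # skip r = 0 to avoid log(0)
    upper = [np.count_nonzero((dist <= r + 1e-9) & above) for r in radii]
    lower = [np.count_nonzero((dist <= r + 1e-9) & below) for r in radii]
    # Upper and lower descriptors, one log-volume per radius, as in Eq. (3).
    return np.log(np.array(upper, float)), np.log(np.array(lower, float))
```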


4. Experiments

To evaluate the proposed approach, we used three texture datasets:


• Brodatz Brodatz (1966): these textures are extracted from the Brodatz album Brodatz (1966) and constitute a well-established benchmark for texture analysis research. This database contains 1110 texture samples grouped into 111 classes with 10 images per class. Each image is 200 × 200 pixels with 256 gray levels;


• UIUC Lazebnik et al. (2005): this is a more challenging texture dataset. It contains images obtained from different viewpoints, with perspective distortions and non-rigid transformations. In this work, we used the original collection of 1,000 grayscale texture images (25 classes with 40 samples each). Each image is 640 × 480 pixels with 256 gray levels. As a texture sample, we extracted a 200 × 200 pixel window from the upper-left side of each image to build a new database consistent with the Brodatz database;


• Outex TC 00013 Ojala et al. (2002a): this texture dataset consists of a collection of 68 natural scenes acquired under strictly controlled conditions. For our experiments, we cropped 20 non-overlapping texture windows of 128 × 128 pixels to create a dataset containing 1360 samples. As these are color texture samples, we used luminance to convert them to gray level.

We also evaluated our approach on the classification of natural scenes and objects. For that, we used two datasets:

• Flowers102 Nilsback and Zisserman (2008): this dataset contains 8189 images of flowers acquired by searching the web and taking pictures. The images are grouped into 102 different categories. The number of samples per class varies, but each category contains at least 40 images.

• PFID61 Chen et al. (2009): this dataset contains 61 categories of food items and 1098 images. Each category contains 3 instances of a food item, and each food item has 6 images shot from different angles, totaling 18 images per category.

Both Flowers102 and PFID61 are color image datasets, so we used luminance to convert them to gray level. Classification was carried out using Linear Discriminant Analysis (LDA). LDA is a statistical classifier that assumes all classes share the same covariance matrix. It estimates a linear subspace where the projection of the original data presents a larger inter-class variance than intra-class variance Fukunaga (1990); Everitt and Dunn (2001). We also used a leave-one-out cross-validation scheme to estimate the classifier error. This scheme uses each sample once as validation data, while the remaining samples are used as training data. The process is repeated until all samples have been used as validation data, in order to compute the total number of errors and the classification error probability.

To provide a better evaluation of our approach, we compare it with some traditional texture analysis methods. The considered methods are:

Fourier descriptors: we apply the bi-dimensional Fourier transform to an input image, followed by the shifting operator over the resulting spectrum. Then, we compute 99 descriptors from the shifted spectrum. Each descriptor contains the sum of the absolute values of the coefficients located at the same radial distance from the image center in the Fourier spectrum Azencott et al. (1997).

Co-occurrence matrices: each matrix represents the joint probability of a pair of pixels being separated by a given distance d and direction θ. For our experiments, we computed co-occurrence matrices for d = {1, 2} with angles θ = {0°, 45°, 90°, 135°}, in a non-symmetric version, for each sample image. Then, we computed energy and entropy descriptors from each co-occurrence matrix to compose the image feature vector Haralick (1979).

Gabor filters: this is a family of filters built from a Gaussian function modulated by a sinusoid oriented by frequency and direction. From the convolution of these filters with an input image, we compute the energy as a descriptor. In this paper, we used a total of 24 filters (6 rotations and 4 scales), with frequency ranging from 0.05 to 0.4 Manjunath and Ma (1996); Casanova et al. (2009).

Wavelet descriptors: the wavelet transform is a mathematical tool that enables us to describe a function using a basis function, named wavelet. In our experiments, we used the multilevel 2D wavelet decomposition to compute three dyadic decompositions with the Daubechies 4 wavelet. Then, we computed energy and entropy for the horizontal, diagonal and vertical details of each decomposition, resulting in a total of 18 descriptors Laine and Fan (1993); Lu et al. (1997); Jin et al. (2011).

Tourist walk: this approach considers each pixel as a tourist wishing to visit cities (other pixels) according to the rule of going to the nearest (or farthest) city that has not been visited in the last µ time steps. For this experiment, we computed a total of 48 descriptors according to the specification in Backes et al. (2010).



Local binary patterns (LBP): this method is based on recognizing certain local binary patterns which characterize the spatial configuration of the local image texture. The occurrence histogram of these patterns is used as the texture descriptor. For this experiment, we used the concatenation of the histograms computed for (P, R) = {(8, 1), (16, 2), (24, 3)} to characterize a texture pattern, resulting in a total of 54 descriptors Ojala et al. (2002b).

Discrete Cosine Transform (DCT): we generate a set of eight 3 × 3 DCT masks from three 1D DCT basis vectors U1 = [1, 1, 1]ᵀ, U2 = [1, 0, −1]ᵀ, and U3 = [1, −2, 1]ᵀ. Nine masks can be produced from these vectors; we exclude the one with low-pass property. Then, we compute the local variance of each filter output Ng et al. (1992).

Complete Local Binary Pattern (CLBP): based on the traditional LBP Ojala et al. (2002c); Zhou et al. (2008), this method uses the local difference sign-magnitude transform to build three operators: CLBP_C, CLBP_S and CLBP_M. Although these operators can be combined in different ways, we computed the joint 3D histogram CLBP_S/M/C. We used radius = 2 and neighborhood = 16 with rotation-invariant uniform patterns, U ≤ 2 (riu2). This results in a feature vector containing 648 features. A nearest-neighbor classifier with the chi-square distance was used Guo et al. (2010).
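The classification protocol described above (LDA with leave-one-out cross-validation) can be illustrated with the following sketch; scikit-learn is used here as a stand-in for the original implementation, and the array names and sizes are only placeholders.

```python
# Minimal sketch of the evaluation protocol: LDA + leave-one-out cross-validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

def leave_one_out_success_rate(features, labels):
    """features: (n_samples, n_descriptors) array; labels: (n_samples,) class ids."""
    clf = LinearDiscriminantAnalysis()            # all classes share one covariance matrix
    scores = cross_val_score(clf, features, labels, cv=LeaveOneOut())
    return 100.0 * scores.mean()                  # success rate in percent

# Example with random data standing in for texture descriptors:
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 24))    # 24 descriptors per sample (illustrative)
y = np.repeat(np.arange(6), 10)  # 6 classes, 10 samples each
print(f"success rate: {leave_one_out_success_rate(X, y):.2f}%")
```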


Fig. 4. (a) Variation of gray levels extracted from a texture row; (b) Dilation process performed above and below the texture; (c) Lower volume; (d) Upper volume.


5. Results

Before applying our approach to a texture image, we have to set the dilation radius used to compute the texture descriptors. The number of descriptors in the proposed feature vector is strictly dependent on the dilation radius used. Figure 5 shows the success rate achieved as we increase the number of selected descriptors and, consequently, the dilation radius. This graph illustrates important aspects of each volume descriptor. We notice that the success rate increases rapidly and becomes nearly constant after a few descriptors. A larger radius enables us to add information from different scales to the feature vector. However, after a specific radius size, the dilated spheres are so big in relation to the texture pattern that most of the relevant information is already incorporated into the influence volume. In addition, we must keep in mind that the dilation process is expensive, so it is not justified to proceed with the dilation if it does not aggregate relevant information.

We also notice that lower and upper volume descriptors present similar performance and, depending on the characteristics of the texture dataset, the performance of one is slightly superior to the other (UIUC), and vice-versa (Outex). Moreover, both approaches have a success rate equal or superior to the traditional Bouligand-Minkowski method. The differences between lower and upper volume descriptors indicate that each approach emphasizes different characteristics of the textures under analysis and that their combination could provide a more discriminative feature vector. Thus, "lower and upper volume" represents the concatenation of these two sets of volume descriptors. We must emphasize that, in this case, the proposed approach has twice the number of descriptors of the other approaches. However, the combination of both descriptors increases the discrimination ability of the method, thus resulting in a higher success rate.

Still according to Figure 5, the success rate keeps increasing as we add new descriptors to the feature vector until we reach a total of 40 descriptors. Therefore, we propose to select 42 descriptors from each of the lower and upper volumes to characterize texture patterns. This number corresponds to the number of descriptors obtained when the texture is dilated using r_max = 7. After this radius size, the success rate remains constant, so the additional information in the influence volume is no longer relevant.

Once we defined the number of features suitable for our approach, we compared it with other texture analysis methods. At first, we compared the upper and lower volume descriptors with the traditional Bouligand-Minkowski (Table 1). We notice that the three approaches present similar performance, each one performing better on a specific texture dataset. Although small, this difference between lower and upper volumes indicates that the arrangement of pixels in different areas of the texture contributes differently to each descriptor. Therefore, we also evaluated the combination of lower and upper descriptors into a single feature vector, named "Lower and Upper volume". This combination doubles the number of descriptors but also achieves the best success rate. The improvement in the results is not significant in Brodatz (an improvement of 1.17% in comparison to the upper volume).


Fig. 5. Success rate against the number of descriptors in UIUC (left), Brodatz (middle) and Outex (right) datasets.

However, this combination represents a significant improvement in Outex (5.14% in comparison to the upper volume) and in UIUC (8.30% in comparison to the lower volume), the latter being a much harder classification dataset.

Table 1. Comparison results for different texture methods.

Method                               Success rate (%)
                                     UIUC     Brodatz   Outex
Fourier descriptors                  35.10    80.00     81.91
Co-occurrence matrices               41.10    59.91     80.73
Gabor filters                        56.10    89.37     81.91
Wavelet descriptors                  38.80    87.47     68.82
Tourist walk                         48.70    89.37     69.85
LBP                                  72.80    97.84     82.87
DCT                                  31.40    80.45     71.25
CLBP                                 87.80    97.21     85.80
Traditional Bouligand-Minkowski      67.30    98.37     78.75
Lower volume                         68.00    97.66     79.78
Upper volume                         65.40    97.84     80.37
Lower and Upper volume               76.30    99.01     85.51

In order to strengthen our evaluation, we also compared the combination of upper and lower descriptors with the other texture analysis methods. Table 1 shows the results achieved by each method on three different texture datasets. Results show that our combined approach presents a higher success rate for all evaluated datasets. However, to yield this result, our approach uses more descriptors than most of the compared methods. The exceptions are the Fourier (99 descriptors) and CLBP (648 descriptors) methods. CLBP is a sophisticated extension of the traditional LBP. In this approach, three different binary maps are obtained from the input image: CLBP_C (central pixel information), CLBP_S (traditional LBP) and CLBP_M (sign-magnitude). Next, it combines CLBP_M and CLBP_C into a single map, CLBP_MCSum. Then, it concatenates the CLBP_MCSum and CLBP_S maps and computes a two-dimensional histogram, which is used as the resulting descriptor. By contrast, our approach considers that the dilation information from the traditional Bouligand-Minkowski can be split into different interpretations and recombined to improve the success rate. Regardless of this, it is important to keep in mind that both lower and upper volumes present success rates higher than most methods when used in isolation, which halves the number of descriptors.

Table 2. Comparison results for different texture methods.

Method                               Success rate (%)
                                     PFID61   Flowers102
Fourier descriptors                  18.12    10.89
Co-occurrence matrices               33.06    12.91
Gabor filters                        37.80    23.09
Wavelet descriptors                  35.34    14.54
Tourist walk                         36.70    25.38
LBP                                  48.63    33.92
DCT                                  25.32     8.33
CLBP                                 70.77    30.28
Traditional Bouligand-Minkowski      53.46    31.19
Lower volume                         50.55    29.78
Upper volume                         53.73    30.15
Lower and Upper volume               58.56    39.27

Figure 6 shows the confusion matrices for the proposed descriptors compared to the traditional Bouligand-Minkowski. In these diagrams, the darker the point, the larger the number of samples of a row class classified as a column class. The best method is the one that concentrates the dark points on the matrix diagonal with a minimum of off-diagonal dark points. Although all compared approaches present similar behavior in the misclassification of some classes (e.g., classes 30 to 35 and above 60 in Outex) due to the high sample similarity, the difference in the confusion matrices is more noticeable for the combination of lower and upper volumes. This confusion matrix presents fewer, and also lighter, off-diagonal points than the other matrices, corroborating its ability to discriminate texture samples.

Table 2 shows the results for the Flowers102 and PFID61 datasets. These datasets contain images of natural scenes and objects and represent a more challenging classification problem. Results show that most methods perform poorly on both datasets. That is because most of the methods were not developed for scene recognition (Flowers102) or to deal with images shot from different angles (PFID61), but to discriminate texture samples (our approach included).


Fig. 6. Confusion matrices comparing the proposed method to the traditional Bouligand-Minkowski.


Although its success rate is also low, the "Lower and Upper volume" approach was able to surpass all compared approaches in the Flowers102 dataset and achieved the second best result in the PFID61 dataset, behind only the CLBP method. However, we must emphasize that our approach contains 84 descriptors against 648 from CLBP, i.e., almost 8 times fewer descriptors.


An important difference between computing the traditional Bouligand-Minkowski and the lower and upper volume approach is the computational time. Let I be an image of N × N pixels with L gray levels. Both approaches require the computation of the influence volume of the image I for a specific dilation radius R. To achieve that, the traditional Bouligand-Minkowski requires processing a matrix of (N + 2R) × (N + 2R) × (L + 2R) voxels, which results in a time complexity of T(N, R) = LN² + 2RN² + 4NRL + 8NR² + 4LR² + 8R³. In contrast, our proposed approach only considers the dilation above and below the image I. Thus, it requires processing a matrix of N × N × (L + 2R) voxels. In terms of the image size, both approaches present a time complexity of O(N²). However, for the dilation radius, the time complexity of the traditional Bouligand-Minkowski is O(R³), while our approach has a time complexity of O(R).
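A small worked check of these voxel counts, under illustrative values N = 200 and L = 256 chosen here, shows how the two approaches diverge as R grows:

```python
# Voxel counts implied by the complexity analysis above (sizes only; N, L, R are illustrative).
def traditional_voxels(N, L, R):
    # (N + 2R)^2 (L + 2R) expands to LN^2 + 2RN^2 + 4NRL + 8NR^2 + 4LR^2 + 8R^3.
    return (N + 2 * R) ** 2 * (L + 2 * R)

def upper_lower_voxels(N, L, R):
    # N x N x (L + 2R): only the z direction grows with the dilation radius.
    return N ** 2 * (L + 2 * R)

N, L = 200, 256
for R in (5, 10, 20):
    t, u = traditional_voxels(N, L, R), upper_lower_voxels(N, L, R)
    print(f"R={R:2d}: traditional={t:,}  upper/lower={u:,}  ratio={t / u:.2f}")
```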

To provide a better evaluation of our proposed approach, Figure 7 shows the running time for different values of N and R for both methods. The values represent the average running time over 10 executions. We implemented all methods on a PC with an Intel(R) Core(TM) i5-4460 CPU @ 3.2 GHz, 8 GB RAM, a 64-bit operating system, and MATLAB R2014a 64-bit with C-MEX. The comparison shows that our proposed approach presents a slightly smaller running time. This is mostly due to the difference in the order of magnitude of the parameters N and R. For the same image size N, our approach is less affected by the parameter R than the traditional one. Despite this improvement in computing time, the CLBP method presents a better running time than ours. This is because CLBP is based on the analysis of neighboring pixels, while Bouligand-Minkowski uses the Euclidean Distance Transform (EDT) to compute the influence volume. For comparison, CLBP takes 0.4345 seconds to compute the descriptors for a 200 × 200 image, while our approach takes 0.7461 seconds when using radius r = 5.

Fig. 7. Running time for the traditional Bouligand-Minkowski and the proposed approach for different image sizes N and dilation radii R.

6. Conclusion

This work presented a novel approach for extracting texture descriptors using the Bouligand-Minkowski fractal dimension method. Instead of using only the influence volume information, we considered that the arrangement of the pixels in the texture interferes with the dilation process differently in regions of concavity and convexity of the texture. This leads to different influence volumes above and below the texture pattern, which can be used separately for texture analysis. We evaluated the proposed approach on three different texture benchmarks. Results show that the approach produces highly discriminant descriptors, which are superior to traditional texture methods (including the traditional Bouligand-Minkowski fractal dimension method), especially when tested on the UIUC database. As future work, we aim to investigate how this simple modification affects the description of color textures, since the traditional Bouligand-Minkowski fractal dimension method has already been efficiently used for color texture characterization Backes et al. (2012), and so this modification could also improve color texture characterization.


Acknowledgment


André R. Backes gratefully acknowledges the financial support of CNPq (National Council for Scientific and Technological Development, Brazil) (Grant #302416/2015-3) and FAPEMIG (Foundation for the Support of Research in Minas Gerais) (Grant #APQ-03437-15).



References

Azencott, R., Wang, J.P., Younes, L., 1997. Texture classification using windowed Fourier filters. IEEE Trans. Pattern Anal. Mach. Intell. 19, 148–153.
Backes, A.R., Casanova, D., Bruno, O.M., 2009. Plant leaf identification based on volumetric fractal dimension. IJPRAI 23, 1145–1160.
Backes, A.R., Casanova, D., Bruno, O.M., 2012. Color texture analysis based on fractal descriptors. Pattern Recognition 45, 1984–1992.
Backes, A.R., Gonçalves, W.N., Martinez, A.S., Bruno, O.M., 2010. Texture analysis and classification using deterministic tourist walk. Pattern Recognition 43, 685–694.
Barbosa, M.S., Bubna-Litic, A., Maddess, T., 2013. Locally countable properties and the perceptual salience of textures. Journal of the Optical Society of America A 30, 1687–1697.
Brodatz, P., 1966. Textures: A Photographic Album for Artists and Designers. Dover Publications, New York.
Bruno, O.M., da Fontoura Costa, L., 2004. A parallel implementation of exact Euclidean distance transform based on exact dilations. Microprocessors and Microsystems 28, 107–113.
Casanova, D., de Mesquita Sá Junior, J.J., Bruno, O.M., 2009. Plant leaf identification using Gabor wavelets. International Journal of Imaging Systems and Technology 19, 236–243.
Chen, M., Dhingra, K., Wu, W., Yang, L., Sukthankar, R., Yang, J., 2009. PFID: Pittsburgh fast-food image dataset, in: ICIP, IEEE, pp. 289–292.
Chen, Y.Q., Bi, G., 1999. On texture classification using fractal dimension. IJPRAI 13, 929–943.
Everitt, B.S., Dunn, G., 2001. Applied Multivariate Analysis. 2nd ed., Arnold.
da F. Costa, L., Rodrigues, F.A., Travieso, G., Boas, P.R.V., 2007. Characterization of complex networks: A survey of measurements. Advances in Physics 56, 167–242.
Fukunaga, K., 1990. Introduction to Statistical Pattern Recognition. 2nd ed., Academic Press.
Guo, Z., Zhang, L., Zhang, D., 2010. A completed modeling of local binary pattern operator for texture classification. IEEE Transactions on Image Processing 19, 1657–1663.
Haralick, R.M., 1979. Statistical and structural approaches to texture. Proc. IEEE 67, 786–804.
Jin, X., Gupta, S., Mukherjee, K., Ray, A., 2011. Wavelet-based feature extraction using probabilistic finite state automata for pattern classification. Pattern Recognition 44, 1343–1356.
Kaplan, L.M., 1999. Extended fractal analysis for texture classification and segmentation. IEEE Transactions on Image Processing 8, 1572–1585.
Keller, J.M., Chen, S., Crownover, R.M., 1989. Texture description and segmentation through fractal geometry. Computer Vision, Graphics and Image Processing 45, 150–166.
Laine, A., Fan, J., 1993. Texture classification by wavelet packet signatures. IEEE Trans. Pattern Anal. Machine Intell. 15, 1186–1190.
Lazebnik, S., Schmid, C., Ponce, J., 2005. A sparse texture representation using local affine regions. IEEE Trans. Pattern Anal. Mach. Intell. 27, 1265–1278.
Lu, C.S., Chung, P.C., Chen, C.F., 1997. Unsupervised texture segmentation via wavelet transform. Pattern Recognition 30, 729–742.
Mandelbrot, B., 2000. The Fractal Geometry of Nature. Freeman & Co.
Manjunath, B.S., Ma, W.Y., 1996. Texture features for browsing and retrieval of image data. IEEE Trans. Pattern Anal. Mach. Intell. 18, 837–842.
Maragos, P., Sun, F.K., 1993. Measuring the fractal dimension of signals: morphological covers and iterative optimization. IEEE Transactions on Signal Processing 41, 108–121.
de Mesquita Sá Junior, J.J., Backes, A.R., Cortez, P.C., 2013a. Color texture classification based on gravitational collapse. Pattern Recognition 46, 1628–1637.
de Mesquita Sá Junior, J.J., Backes, A.R., Cortez, P.C., 2013b. Texture analysis and classification using shortest paths in graphs. Pattern Recognition Letters 34, 1314–1319.
de Mesquita Sá Junior, J.J., Cortez, P.C., Backes, A.R., 2014. Color texture classification using shortest paths in graphs. IEEE Transactions on Image Processing 23, 3751–3761.
Murino, V., Ottonello, C., Pagnan, S., 1998. Noisy texture classification: A higher order statistics approach. Pattern Recognition 31, 383–393.
Neil, G., Curtis, K.M., 1997. Shape recognition using fractal geometry. Pattern Recognition 30, 1957–1969.
Ng, I., Tan, T., Kittler, J., 1992. On local linear transform and Gabor filter representation of texture, in: International Conference on Pattern Recognition, pp. 627–631.
Nilsback, M.E., Zisserman, A., 2008. Automated flower classification over a large number of classes, in: Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing.
Ojala, T., Maenpaa, T., Pietikainen, M., Viertola, J., Kyllonen, J., Huovinen, S., 2002a. Outex: New framework for empirical evaluation of texture analysis algorithms, in: ICPR, pp. I: 701–706.
Ojala, T., Pietikainen, M., Maenpaa, T., 2002b. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Analysis and Machine Intelligence 24, 971–987.
Ojala, T., Pietikäinen, M., Mäenpää, T., 2002c. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24, 971–987.
Peleg, S., Naor, J., Hartley, R., Avnir, D., 1984. Multiple resolution texture analysis and classification. IEEE Transactions on Pattern Analysis and Machine Intelligence 6, 518–523.
Sá Junior, J.J.M., Backes, A.R., 2012. A simplified gravitational model to analyze texture roughness. Pattern Recognition 45, 732–741.
Saito, T., Toriwaki, J.I., 1994. New algorithms for Euclidean distance transformation of an n-dimensional digitized picture with applications. Pattern Recognition 27, 1551–1565.
Sengür, A., Türkoglu, I., Ince, M.C., 2007. Wavelet packet neural networks for texture classification. Expert Syst. Appl. 32, 527–533.
Tkacik, G., Prentice, J.S., Victor, J.D., Balasubramanian, V., 2010. Local statistics in natural scenes predict the saliency of synthetic textures. Proceedings of the National Academy of Sciences 107, 18149–18154.
Tricot, C., 1995. Curves and Fractal Dimension. Springer-Verlag.
Xie, J., Zhang, L., You, J., Shiu, S.C.K., 2015. Effective texture classification by texton encoding induced statistical features. Pattern Recognition 48, 447–457.
Xu, Y., Ji, H., Fermuller, C., 2009. Viewpoint invariant texture description using fractal analysis. International Journal of Computer Vision 83, 85–100.
Zhou, H., Wang, R., Wang, C., 2008. A novel extended local-binary-pattern operator for texture analysis. Information Sciences 178, 4314–4325.

Biography

ANDRÉ RICARDO BACKES is a professor at the College of Computing at the Federal University of Uberlândia in Brazil. He received his B.Sc. (2003), M.Sc. (2006), and Ph.D. (2010) in Computer Science from the University of São Paulo. His fields of interest include Computer Vision, Image Analysis and Pattern Recognition.