Computing fractal descriptors of texture images using sliding boxes: An application to the identification of Brazilian plant species

Giovanni Taraschi, Joao B. Florindo

PII: S0378-4371(19)32037-0
DOI: https://doi.org/10.1016/j.physa.2019.123651
Reference: PHYSA 123651

To appear in: Physica A

Received date: 15 August 2019
Revised date: 9 November 2019

Please cite this article as: G. Taraschi and J.B. Florindo, Computing fractal descriptors of texture images using sliding boxes: An application to the identification of Brazilian plant species, Physica A (2019), doi: https://doi.org/10.1016/j.physa.2019.123651.

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

© 2019 Published by Elsevier B.V.

Research Highlights

• Texture descriptors using a sliding box counting approach.
• Theory based on fractal geometry and sliding detection probabilities.
• Employed in the classification of gray level texture images.
• Application to the identification of Brazilian plant species.
• Better classification accuracy both on benchmark and practical tasks.

Computing fractal descriptors of texture images using sliding boxes: an application to the identification of Brazilian plant species

Giovanni Taraschi, Joao B. Florindo∗

Institute of Mathematics, Statistics and Scientific Computing - University of Campinas, Rua Sérgio Buarque de Holanda, 651, Cidade Universitária “Zeferino Vaz” - Distr. Barão Geraldo, CEP 13083-859, Campinas, SP, Brasil

Abstract

This work proposes a new model based on fractal descriptors for the classification of grayscale texture images. The method consists of scanning the image with a sliding box and collecting statistical information about the pixel distribution. By varying the box size, an estimate of the fractality of the image can be obtained at different scales, providing a more complete description of how this parameter changes in each image. The same strategy is also applied to a special encoding of the image based on local binary patterns. Descriptors from both the original image and the local encoding are combined to provide even more precise and robust results in image classification. A statistical model based on the theory of sliding-window detection probabilities and Markov transition processes is formulated to explain the effectiveness of the method. The descriptors were tested on the identification of Brazilian plant species using scanned images of the leaf surface. The classification accuracy was also verified on three benchmark databases (KTH-TIPS2-b, UIUC and UMD). The results obtained demonstrate the power of the proposed approach in texture classification and, in particular, in the practical problem of plant species identification.

Keywords: Box-counting, Fractal descriptors, Texture classification, Automatic plant taxonomy.

∗ Corresponding author. Email addresses: [email protected] (Giovanni Taraschi), [email protected] (Joao B. Florindo)

Preprint submitted to Elsevier, November 9, 2019

1. Introduction

Fractals have shown to be a powerful tool in the modeling and analysis of natural objects and images in general, being used in different areas such as Biology, Physics, Engineering, Medicine, and many others [1, 2, 3, 4, 5]. The high flexibility of fractal geometry and its ability to describe objects with a high degree of complexity at different scales make it an appropriate tool for describing nature. Therefore, for many real-world applications, fractal geometry is more adequate than Euclidean geometry [6].

Most fractal-based methods use the concept of fractal dimension. This can be interpreted as a measure of the complexity or, equivalently, of the way that an object occupies the space [6]. In the analysis of materials and texture images in general, the fractal dimension is related to physical parameters such as roughness and luminance [7].

However, in most practical situations the fractal dimension alone is not sufficient to describe the object satisfactorily [8]. More elaborate approaches are necessary; examples are the multifractal spectrum [9], the multiscale fractal dimension [10] and fractal descriptors [11]. In this work we focus on the last approach, given the relevant results achieved in practical situations, especially in the analysis of biological images [8, 12, 13, 14, 15].

Most methods employed to numerically estimate the fractal dimension rely on some sort of measure computed over a range of scales (in some sense this approximates the Hausdorff measure in the analytical definition). Fractal descriptors are vectors of numerical (real) values obtained from the values of such measures at each observed scale. Despite the success previously demonstrated, fractal and multifractal descriptors explore complex relations within the spatial structure of the object, and direct approaches like the box-counting dimension have not been sufficiently investigated for that purpose. A possible explanation for this is the sensitivity of the classical box-counting method to spatial translation, stemming from its use of a fixed grid of boxes.

In this context, this work proposes new fractal descriptors for grayscale textures based on a translation-invariant version of the box-counting dimension. The gray-level image is mapped onto a three-dimensional cloud of points by associating each pixel at coordinates (x, y), with normalized gray value z, to a point with coordinates (x, y, z). The invariance is achieved by using a sliding box with side length r that sweeps the cloud structure, counting the number of points inside each cube. The descriptors are provided by the cumulated distribution of these counts over a range of r values. In this way both global and local descriptions are obtained. To make the classification performance even better, we also compute these descriptors from an alternative local encoding of the image based on local binary patterns [16].

This approach is directly related to a typical characteristic of fractals: fine structure. An object has fine structure if it presents the same level of detail at any scale of observation [17]. Here we present a statistical model, based on Markov transition processes [18], to explain how the distribution of points can express such fine structure and, in particular, the influence of attributes such as homogeneity on the proposed distribution.

Finally, the accuracy of the method was tested on the identification of Brazilian plant species, using images scanned from the leaf surface, as well as on the classification of three well-known benchmark databases of texture images, namely, UIUC [19], UMD [20], and KTH-TIPS2-b [21]. The performance in these tasks is compared to that of other state-of-the-art image descriptors, namely, LBP [16], VZ-Joint [22], SIFT + BoVW [23], WMFS [24], PLS [25], FC-CNN VGGVD [26], and others. All of the compared approaches were outperformed by our proposal in terms of classification accuracy. Such promising performance can be explained to a large extent by the flexibility of fractal models in expressing the intrinsic richness of natural structures such as those represented here by real-world texture images. Attributes of these structures, generally called “fractality” measures, are known to be tightly related to the complexity of the underlying materials, which, in a biological context such as that of the plant species, corresponds to a mapping of the evolutionary process of the specimen.

2. Bibliographical Review

A texture image (or visual texture) is a grayscale image that presents particular patterns in the pixel distribution at different scales. Texture images are capable of expressing a high amount of information about a real-world object, such as luminosity, roughness, regularity and density. Approaches for texture analysis are classically categorized into four groups: structural, statistical, transform-based and model-based [27].

Structural methods usually work on well-defined geometric primitives; mathematical morphology [28] is a typical example of this approach. Transform-based approaches derive from the image representation in other spaces (mostly in the frequency domain); in this way they can faithfully describe the periodicity present in many images. Fourier [29] and wavelet [30] techniques are examples in this category. Statistical methods work on the relationships between pixels. Although they have obtained interesting results in practical situations, they do not have a meaningful mathematical modeling, which makes their results difficult to interpret. LBP [31], bag-of-visual-words [32], the Scale-Invariant Feature Transform (SIFT) [33] and LPQ [34] are important examples of this approach. More recently, an approach that may in some sense be considered a statistical solution corresponds to methods based on the popular deep convolutional networks adapted for texture classification; examples are the methods proposed in [35, 26]. Despite the well-known success of deep networks on general images, texture classification is still a big challenge, due to the specificity of many problems in texture analysis as well as to the difficulty of acquiring the sufficiently large number of samples necessary for training these networks.

Finally, we have model-based methods, which seek to combine the precision and robustness of methods that explore the relationship among pixels or among regions of the image with an already well-established mathematical and physical model. In this category, fractal-based methods such as the multifractal spectrum [9], the multiscale fractal dimension [10] and fractal descriptors [11] are prominent. Various natural structures are related to fractal geometry, mainly with respect to the idea of self-similarity [6]. Fractal-based methods thus aim at exploring and quantifying this relation to obtain a more faithful representation of these objects. In the next sections we explain more about fractal theory and fractal descriptors.

In fact, several works applying fractal geometry (especially the fractal dimension) to the most diverse problems in nature have been published recently. For example, in [36] the authors employ the three-dimensional fractal dimension of magnetic resonance images of cortical surfaces in a system for computer-aided diagnosis of Alzheimer's disease. Other related applications can be found in Parkinson's disease [37], epileptic seizure [38], and other medical areas. Another interesting application is presented in [39], where the fractal dimension is used in the analysis of the adsorption pore structure of coals. In [40] the fractal dimension is used to detect nonlinearity and a chaos signature in a binary star system. Coming specifically to image analysis, in [41] the fractal dimension of microscopy images is used to assess the digestibility of two types of pretreated biomasses. In [42] the authors establish relations between the fractal dimension of tissue images of pork loin and salmon and their water and fat fractions.

3. Fractal theory

In general, a fractal can be understood as a mathematical object that has remarkable properties such as self-similarity, fine structure and complexity, which make it unsuitable for a purely Euclidean representation [17]. In fact, there is no unique definition for the concept of a fractal object. The most classical one is related to the Hausdorff dimension.

3.1. Hausdorff dimension

Given a set U, the diameter of this set is given by |U| = sup{d(x, y) : x, y ∈ U}, where d(x, y) is a distance defined over a metric space. We say that a family of sets {U_i} is a σ-cover of a set F if:

i) 0 < |U_i| < σ;

ii) F ⊂ ∪_{i=1}^∞ U_i.

Thus the s-dimensional Hausdorff measure of a set F is defined as

H^s(F) = lim_{σ→0} H^s_σ(F),   where   H^s_σ(F) = inf { Σ_{i=1}^∞ |U_i|^s : {U_i} is a σ-cover of F }.   (1)

For any fractal structure, H^s shows a rather peculiar behavior, namely, H^s(F) = ∞ for s < D and H^s(F) = 0 for s > D, for some real and non-negative value D. Then D is defined as the Hausdorff dimension of F. Formally:

D = inf{s : H^s(F) = 0} = sup{s : H^s(F) = ∞}.   (2)

Therefore, in the classical and most accepted definition, due to Mandelbrot [6], a fractal is a mathematical object whose Hausdorff dimension strictly exceeds its Euclidean (topological) dimension.

3.2. Estimates of the fractal dimension

Real-world objects do not have infinite self-similarity and, in general, the construction rules of the object are not known, as they are in the generation of geometric fractals. This makes the analytical calculation of the Hausdorff dimension quite complicated and often impossible [6]. There is thus a need for methods to estimate the fractal dimension in these situations.

The basic definition of fractal dimension by Hausdorff measures involves an infinite covering by elements with diameter smaller than σ, and this diameter is raised to an exponent s. When this idea is transferred to the discrete domain, it can be understood as an exponential relation

Mσ ∝ σ^s,   (3)

where Mσ is a measure of the object at the scale σ, i.e., a measure in which any detail larger than σ is disregarded. This allows one to formulate alternative definitions to the Hausdorff dimension and therefore to obtain estimates for the fractal dimension.

It is important to note that alternative definitions may assume values other than the Hausdorff dimension, but they retain the idea of measuring the complexity and spatial occupation of the object. Among the alternative definitions, the box-counting dimension is one of the most popular [17].

3.3. Fractal descriptors

In practical situations, a scalar measure such as the fractal dimension (or its corresponding estimate) is not sufficient to describe all the details commonly found at different scales of a real object. In the analysis of textures, for example, there are images with visibly different aspects that nevertheless present the same fractal dimension [8]. In this context, the idea of a more complete set of fractal measures arises. Among the techniques that explore this gap, the most popular ones are the multifractal spectrum, the multiscale fractal dimension and the fractal descriptors. Here we are interested in the last approach.

Fractal descriptors are based on the exponential law obeyed by fractals. However, unlike the estimation methods for the fractal dimension, where an analytical curve is fitted to the data points, a function u : log σ → log Mσ is defined and the whole log × log curve is used to describe the object of interest. In this way the descriptors provide information on all the scales of the texture, giving a multiscale representation of the image. These descriptors can provide information that is significantly richer than that obtained from a single scalar measure and are advantageous when compared with other approaches [43].
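The two subsections above can be made concrete with a short sketch: the code below counts the non-empty cells of a fixed grid at several scales, estimates the box-counting dimension as the slope of the log × log curve, and also keeps the whole curve as a descriptor vector. This is an independent illustration, not the authors' code; the test image, the grid sizes and the function names are our own assumptions.

```python
import numpy as np

def box_counting_curve(img, sizes):
    """For each grid size s, count the cells of a fixed s x s grid that
    contain at least one foreground pixel of a binary image."""
    counts = []
    for s in sizes:
        h, w = img.shape
        t = img[: h - h % s, : w - w % s]   # trim so the grid tiles exactly
        blocks = t.reshape(t.shape[0] // s, s, t.shape[1] // s, s)
        counts.append(int(blocks.any(axis=(1, 3)).sum()))
    return np.array(counts)

def box_counting_dimension(img, sizes):
    """Slope of log N(s) against log(1/s): a box-counting dimension estimate."""
    counts = box_counting_curve(img, sizes)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

def fractal_descriptors_curve(img, sizes):
    """Instead of reducing the curve to its slope, keep the whole log-log
    curve u : log s -> log N(s) as a multiscale feature vector."""
    return np.log(box_counting_curve(img, sizes).astype(float))
```

On a Sierpinski-triangle-like pattern (pixels where i AND j = 0), the fitted slope recovers log 3 / log 2 ≈ 1.585, while the full curve provides one feature per scale.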

4. Proposed method

This work proposes a new type of fractal descriptors, named Sliding Box Descriptors (SBD), based on a different strategy for box counting, together with the corresponding statistical analysis of the distribution of pixels in the boxes. Whereas classical box counting partitions the analyzed image into a fixed grid, here we “slide” a window over the image with different side lengths, each length corresponding to a particular scale of analysis. Although sliding (gliding) boxes have been adopted for the calculation of lacunarity [44], to the best of our knowledge this is the first attempt to use them for fractal descriptors.

The method begins by mapping the grayscale image I, with dimensions m × n, into a three-dimensional set of points B:

Z^{m×n} → Z^{m×n×Imax}
(i, j) ↦ (i, j, I(i, j)),   (4)

where 0 ≤ I(i, j) < Imax is the intensity of the pixel with coordinates (i, j).

We define {l_n}_{n∈N} as a decreasing sequence such that l_0 = min{m, n, Imax} and l_{n+1} = ⌈l_n/2⌉. For each value of the sequence, we scan the set B using a sliding cubic box with side l_n. As the box passes through the set, it counts the number of points inside it at the current position. To execute this scan, a three-dimensional discrete convolution is performed:

D(j_1, j_2, j_3) = Σ_{k_1} Σ_{k_2} Σ_{k_3} B(k_1, k_2, k_3) · C(j_1 − k_1, j_2 − k_2, j_3 − k_3),   (5)

with each k_i running over all valid indexes of B and C.

To reproduce the sliding box effect, we take C ∈ Z^{l_n×l_n×l_n} with C(k_1, k_2, k_3) = 1 (∀ k_1, k_2, k_3), and only the valid part of the convolution is taken. That is, the indices j_1, j_2, j_3 belong to the ranges:

1 + ⌈l_n/2⌉ ≤ j_1 ≤ m − ⌈l_n/2⌉ − 1
1 + ⌈l_n/2⌉ ≤ j_2 ≤ n − ⌈l_n/2⌉ − 1
1 + ⌈l_n/2⌉ ≤ j_3 ≤ Imax − ⌈l_n/2⌉ − 1.   (6)

Therefore the texture SBD descriptors are obtained from D by taking the cumulative distribution:

C^r(k) = Σ_{j_1} Σ_{j_2} Σ_{j_3} Σ_{k′=0}^{k} δ(D(j_1, j_2, j_3), k′),   (7)

where δ(x, y) is the Kronecker delta (1 if x = y, 0 otherwise). Finally, the descriptors are provided by the cumulative distributions for r within a pre-specified interval:

𝒟 = ∪_r [C^r(k)]^α,   r = 2, 4, 8, …, min(m, n),   (8)

where α is a constant determined empirically and specific to each database. Its main role is to give the appropriate weight to each possible number of points. Here we tested only integer α to keep the process simple, although there is no constraint against using real values or even combining more than one value. We found empirically that α = 15 yields optimal results for the analyzed databases.
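The pipeline of Eqs. (4)–(8) can be sketched numerically as below. This is a hedged illustration, not the authors' implementation: it assumes Imax = 256 gray levels, floor halving of the box side, all counts k from 0 to l², and scipy's fftconvolve standing in for the valid 3-D convolution; memory use is naive (a full m × n × 256 volume), so it is only practical for small images.

```python
import numpy as np
from scipy.signal import fftconvolve

def sliding_box_descriptors(img, alpha=15):
    """Sketch of the SBD pipeline: map the grayscale image to a 3-D binary
    point cloud (Eq. 4), count points in every position of a sliding cubic
    box via a valid 3-D convolution with a box of ones (Eqs. 5-6), and
    collect the cumulative count distribution per box side (Eqs. 7-8)."""
    m, n = img.shape
    imax = 256
    # Eq. (4): one point (i, j, I(i, j)) per pixel.
    B = np.zeros((m, n, imax), dtype=float)
    B[np.arange(m)[:, None], np.arange(n)[None, :], img] = 1.0

    descriptors = []
    l = min(m, n, imax)
    while l >= 2:
        # Eqs. (5)-(6): valid convolution with an l x l x l box of ones.
        D = fftconvolve(B, np.ones((l, l, l)), mode="valid")
        counts = np.rint(D).astype(int).ravel()
        # Eq. (7): cumulative distribution of the number of points per box
        # (a box of side l holds at most l*l points, one per pixel column).
        C = np.cumsum(np.bincount(counts, minlength=l * l + 1))
        # Eq. (8): weight by the exponent alpha and concatenate over scales.
        descriptors.append(C.astype(float) ** alpha)
        l //= 2
    return np.concatenate(descriptors)
```

For an 8 × 8 image the box sides are 8, 4 and 2, giving 65 + 17 + 5 = 87 descriptor values; the last cumulative value at each scale equals the total number of valid box positions.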

We also apply this methodology to maps of local binary patterns (LBP) of the original images. Here we adopt the LBP^riu2 maps [16], which are determined, for P interpolated points over a neighborhood with radius R, by

LBP^riu2_{P,R} = Σ_{p=0}^{P−1} s(g_p − g_c)   if U(LBP_{P,R}) ≤ 2,   and   P + 1 otherwise,

where g_c is the value of the reference (central) pixel, g_p are the values of the neighbor pixels, s(x) is the unit step function (s(x) = 1 if x ≥ 0 and 0 otherwise), and

U(LBP_{P,R}) = |s(g_{P−1} − g_c) − s(g_0 − g_c)| + Σ_{p=1}^{P−1} |s(g_p − g_c) − s(g_{p−1} − g_c)|.
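A compact sketch of the LBP^riu2 map above. It is our own illustration, with two simplifying assumptions: nearest-neighbor sampling of the P circular neighbors instead of interpolation, and codes computed only for interior pixels.

```python
import numpy as np

def lbp_riu2(img, P=8, R=1):
    """Rotation-invariant uniform LBP codes for the interior pixels of img."""
    img = img.astype(float)
    rows, cols = img.shape
    angles = 2 * np.pi * np.arange(P) / P
    # Integer offsets of the P neighbors (nearest-neighbor rounding).
    dy = np.rint(-R * np.sin(angles)).astype(int)
    dx = np.rint(R * np.cos(angles)).astype(int)
    center = img[R:rows - R, R:cols - R]
    # s(g_p - g_c) for every neighbor p: shape (P, H, W).
    s = np.stack([(img[R + oy:rows - R + oy, R + ox:cols - R + ox] >= center)
                  for oy, ox in zip(dy, dx)]).astype(int)
    # U: number of 0/1 transitions around the circle, wrap-around included.
    U = np.abs(np.diff(np.concatenate([s, s[:1]], axis=0), axis=0)).sum(axis=0)
    code = s.sum(axis=0)          # sum of s(g_p - g_c) for uniform patterns
    return np.where(U <= 2, code, P + 1)
```

On a constant image every neighbor satisfies g_p ≥ g_c, so U = 0 and every interior pixel receives the uniform code P.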

Generally speaking, each scan provides statistical data about the distribution of the pixels and their intensities. As the size of the cube is different for each scan, the image is analyzed at different scales: the cubes with larger sides provide data about the overall structure of the image, while the small cubes capture information about local details. Here we see clearly the combination of the fractal approach, given by the analysis at different scales, with the statistical distribution. The details of the whole process can be seen in Algorithm 1, and Figure 1 exhibits the curves produced in each step of the method.

Algorithm 1 Sliding-box descriptor algorithm.
Input: A
Output: 𝒟
1: (m, n) ← size(A); Imax ← max(m, n)
2: for i = 1 to m do
3:   for j = 1 to n do
4:     aux ← ⌈A(i, j) · Imax / 256⌉
5:     B(i, j, aux) ← 1
6:   end for
7: end for
8: l ← min(m, n)
9: while l ≥ 2 do
10:   C ← ones3(l)
11:   D ← conv3(B, C)
12:   for k = 0 to l² do
13:     c ← sum(D ≤ k)^α
14:     𝒟 ← 𝒟 ∪ {c}
15:   end for
16:   l ← ⌈l/2⌉
17: end while

Table 1: Variables used in Algorithm 1.

A: Matrix m × n containing the texture image in gray scales
B: Three-dimensional array resulting from mapping A
C: Three-dimensional array used as a mask for convolution
D: Valid convolution result between B and C
l: Box size in each scan
𝒟: Vector of texture descriptors

Table 2: Pre-programmed routines assumed to be available to Algorithm 1.

size(A): Return the size (number of rows and columns) of the matrix A
max(m, n): Return the maximum value between m and n
ones3(l): Return a three-dimensional array l × l × l whose elements are all 1
sum(A ≤ k): Return the number of elements of A smaller than or equal to k
conv3(B, C): Return the valid part of the three-dimensional discrete convolution between B and C

Figure 1: Steps involved in the proposed method. From left to right: the original texture, the cloud of points and 3D sliding boxes, the individual cumulated distribution, and the aggregated distribution.

4.1. Motivation

The strategy adopted here of inspecting the distribution of points inside each box, rather than simply counting the number of boxes covering the object of interest, leads to a more complete description of how the object occupies the space enclosed by the image domain. Furthermore, the use of sliding boxes instead of a fixed grid also makes this representation more precise and fine-tuned.

To see how the distribution of the number of points impacts the image descriptor, we first need to understand how the classical three-dimensional box-counting dimension works. It is obtained by counting the number of cubes covering the image cloud at each scale. In practice, we would be counting the number of cubes containing at least one point. This is simply the cumulated distribution over each non-null possible number of points.

To simplify the idea, we illustrate the case of n points randomly placed within a line segment of length L partitioned into subintervals (boxes), each one of length r. The probability of k boxes being non-empty is equivalent to the probability of the n points being distributed over s_r = L/r boxes with s_r − k boxes left empty. This is a classical problem in combinatorics, and the solution is provided by

P(k) = (s_r choose s_r − k) S(n, k) k! / s_r^n,   (9)

where S(n, k) are Stirling numbers of the second kind, defined as

S(n, k) = (1/k!) Σ_{j=0}^{k} (−1)^{k−j} (k choose j) j^n.   (10)
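Eqs. (9) and (10) translate directly into code; the function names below are ours, and math.comb requires Python 3.8 or later.

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind, via the explicit sum of Eq. (10)."""
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

def p_nonempty_boxes(n, s_r, k):
    """Eq. (9): probability that exactly k of s_r equal fixed boxes are
    non-empty after placing n points uniformly and independently."""
    return comb(s_r, s_r - k) * stirling2(n, k) * factorial(k) / s_r ** n
```

As a sanity check, with n = 2 points and s_r = 4 boxes, the chance that both points land in one box is 1/4, and the probabilities over all k sum to 1.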

Figure 2 shows a couple of simulated examples illustrating how this distribution is accurately confirmed in practical tests.

On the other hand, when we have sliding windows and wish to count the distribution of the number of points in each window position, we should resort to the theory of sliding-window detection probabilities [18]. The underlying theory was originally developed to analyze radar/sonar signals in naval surveillance; it models the sliding window as an automaton, associating the current state with the number of points within that window.

In particular, our line segment can be represented by a binary vector with L components (points correspond to 1's and empty spaces to 0's). Hence the probability of k points falling within the sliding window is obtained with the help of a transition matrix for the underlying finite automaton. The states are binary numbers whose decimal representation ranges between 0 and 2^r − 1. As the points are randomly placed following a uniform distribution, the probability of a '1' arising in the binary vector is a constant p = 1/L. The transition table also has an accepting state, “binary number containing k 1's”. There are n_a = (r choose k) states that already are accepting states, and these are removed from the table.

[Figure 2 about here.]

Figure 2: Simulations of the distribution of the number of boxes intersecting the object (classical box counting) using different values for the number of points n and box size r. The simulation considers L = 120. In the first row we fixed r = 8 and in the second one we used n = 64.

We illustrate with r = 3 and k = 2. The transition probabilities corresponding to going from state abc to state bcd are given in Table 3. The table also has an accepting state k = 2; states 011, 101 and 110 already are accepting states. The automaton probability also depends on the initial probability of each state, collected in a vector S_0 over the states of the transition matrix (including the accepting state). In general:

S_0 = [ 1/L   1/L   1/L   ⋯   (L − s_a)/L ].   (11)

The final probabilities (for all states) are determined by repeated multiplication by T:

P_f = S_0 T^{L−r} = S_0 T^{L−3}.   (12)

Finally, the probability of entering an accepting state, which implies the existence of k points within the sliding window, is provided by the last component of P_f, i.e.,

P(k) = P_f(L − n_a + 1).   (13)

Table 3: Transition matrix T for sliding probabilities in a simple case: r = 3 and k = 2. Rows are current states and columns are next states; omitted entries are zero.

         000    001    010    100    111    k=2
000      1−p    p
001                    1−p                  p
010                           1−p           p
100      1−p    p
111                                  p      1−p
k=2                                         1
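The automaton construction can be sketched as below. Two details of this sketch are our own assumptions and differ from the text: the cells are modeled as i.i.d. Bernoulli(p) draws (the p = 1/L approximation mentioned above), and the initial vector is taken as the exact law of the first window rather than the approximate S_0 of Eq. (11). The chain is exact for that model, so it can be checked against brute-force enumeration.

```python
import numpy as np
from itertools import product

def sliding_window_hit_probability(L, r, k, p):
    """Probability that at least one length-r window of an i.i.d. Bernoulli(p)
    binary sequence of length L contains exactly k ones, computed with an
    absorbing Markov chain over the window states."""
    nonacc = [s for s in range(2 ** r) if bin(s).count("1") != k]
    idx = {s: i for i, s in enumerate(nonacc)}
    n = len(nonacc)
    T = np.zeros((n + 1, n + 1))
    T[n, n] = 1.0                                   # absorbing accepting state
    for s in nonacc:
        for bit, q in ((0, 1.0 - p), (1, p)):
            nxt = ((s << 1) | bit) & (2 ** r - 1)   # slide the window one cell
            j = n if bin(nxt).count("1") == k else idx[nxt]
            T[idx[s], j] += q
    S0 = np.zeros(n + 1)                            # law of the first window
    for bits in product((0, 1), repeat=r):
        w = int("".join(map(str, bits)), 2)
        q = np.prod([p if b else 1.0 - p for b in bits])
        j = n if bin(w).count("1") == k else idx[w]
        S0[j] += q
    return (S0 @ np.linalg.matrix_power(T, L - r))[n]   # Eqs. (12)-(13)
```

For r = 3 and k = 2 the non-accepting states are exactly 000, 001, 010, 100 and 111, matching Table 3.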

Figure 3 compares the hypothesized distribution with some simulated situations. In comparison with Figure 2, one can notice that the sliding distribution captures more variance in the data than the distribution of fixed boxes. Moreover, the influence of the box size r is also more evident in the sliding distribution: while a smaller value of r yields a nearly flat, uniform-like distribution, larger values result in more normal-like curves. Similar behavior is observed with respect to the number of points n. This is an immediate consequence of the central limit theorem in statistics and the natural trend toward normal distributions.

We notice that the curves in Figure 3 encompass a larger part of the variance in the Gaussian-like distribution than is exhibited by the fixed-box distribution in Figure 2. The distribution variance is directly related to physical properties, like homogeneity, that are well known to be fundamental in texture discrimination. The ability to express a larger part of the variance attests that the descriptors provided by the proposed method are more complete than those possibly provided by the classical box-counting dimension. The use of sliding boxes allows a statistical description that could not be carried out in the original context, while at the same time preserving the straightforwardness of the analysis of boxes covering objects of interest at different scales, as usual in any fractal analysis.

[Figure 3 about here.]

Figure 3: Simulations of the distribution of points within a sliding window (proposed descriptors) using different values for the number of points n and box size r. The simulation considers L = 120. In the first row we fixed r = 8 and in the second one we used n = 64.

5. Experiments

The classification accuracy of the proposed method was assessed on three texture databases frequently used in the literature for benchmarking purposes. The proposed descriptors were also employed for the identification of species of Brazilian plants, based on scanned images of their leaf surfaces.

The first database in our comparison is KTH-TIPS2-b [21], a collection of 4752 images evenly divided into 11 categories (materials). The classification problem in this data set should focus on the material represented in the image rather than on the instance of the photographed object. Each material is divided into 4 samples, each one possessing particular settings of illumination, scale and pose. The experimental protocol adopted here is the one most frequently found in the literature [21, 50, 23, 26, 48], i.e., 1 sample used for training and the remaining 3 samples used for testing. The rationale behind this protocol is the focus on categories rather than on exemplars; in this way the algorithm should be capable of recognizing an image without seeing any exemplar of that particular sample. In general, this poses more of a challenge to the classification process, as different categories share similarities, in their material composition for instance, but those similarities are not necessarily expressed in the visual aspect. The accuracy (percentage of images assigned to the correct class) and standard deviation are obtained by averaging the results over the 4 possible training/testing combinations.

The second data set is UIUC [19]. It contains 1000 images equally divided into 25 texture categories (classes). The images were photographed under non-controlled conditions, which makes them susceptible to variation in scale, perspective, illumination, and albedo. The training/testing division follows the usual protocol in the literature, i.e., 20 images of each texture are randomly selected for training and the remaining 20 images are employed for testing. This procedure is repeated 10 times to provide the average accuracy and the respective standard deviation.
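This repeated random-split protocol can be sketched as follows; the nearest-centroid classifier is again an illustrative stand-in, not one of the classifiers used in the experiments.

```python
import numpy as np

def centroid_classify(X_train, y_train, X_test):
    # stand-in nearest-centroid classifier, for illustration only
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

def repeated_split_accuracy(X, y, n_train, n_repeats, seed=0):
    """Randomly pick n_train training images per class, test on the
    rest, and repeat n_repeats times to get mean and std accuracy."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_repeats):
        train = np.zeros(len(y), dtype=bool)
        for c in np.unique(y):
            idx = rng.permutation(np.flatnonzero(y == c))
            train[idx[:n_train]] = True
        pred = centroid_classify(X[train], y[train], X[~train])
        accs.append(np.mean(pred == y[~train]))
    return float(np.mean(accs)), float(np.std(accs))
```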


The third texture database is UMD [20]. It is composed of 1000 high-resolution images, also collected under uncontrolled conditions. The images are categorized into 25 classes, each one with 40 images, and each image has a resolution of 1280 × 960. The main particularity of this database is the high variation in viewpoint and scale, which turns the classification process into a challenging task.


With the aim of reducing the number of features and attenuating the effects of the "curse of dimensionality", the proposed descriptors are processed by principal component analysis [45]. Following that, the descriptors are used as the input of the classifier. Here we verified the use of two classifiers: support vector machines (SVM), with the same settings as those employed in [26], i.e., linear kernel, C = 1, and L2 normalization, and linear discriminant analysis (LDA) [46].
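A sketch of this classification pipeline using scikit-learn follows (an assumed implementation; the paper does not name one). The number of retained principal components is a hypothetical parameter, since the text does not state the dimensionality kept after PCA.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
from sklearn.svm import SVC

def build_classifiers(n_components):
    """PCA for feature reduction, followed by either a linear SVM
    (linear kernel, C = 1, L2-normalized features) or LDA."""
    svm = make_pipeline(PCA(n_components=n_components),
                        Normalizer(norm="l2"),
                        SVC(kernel="linear", C=1.0))
    lda = make_pipeline(PCA(n_components=n_components),
                        LinearDiscriminantAnalysis())
    return svm, lda
```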


6. Results and Discussion


The first test was carried out to assess the classification accuracy (percentage of images correctly classified) of the proposed method when two different classifiers (LDA and SVM) are employed. Table 4 shows the accuracy on each texture database, together with the corresponding standard deviation, for the most complete version of the descriptors, i.e., for feature vectors combining the descriptors D in Equation (8) both for the gray level image and for the LBP mapping. LDA yielded the highest accuracies on all the data sets. Based on this, all the remaining tests, presented in Tables 5, 6 (Proposed), and 7 (Proposed), and in Figures 4 and 5, were also carried out using this classifier.

Table 4: Classification accuracy using the SVM and LDA classifiers.

Database      SVM        LDA
KTHTIPS-2b    40.1±4.6   61.9±3.1
UIUC          59.7±2.8   88.9±0.9
UMD           78.9±1.9   99.2±0.4
1200Tex       61.2±1.9   86.6±1.1


In Table 5 we exhibit the accuracy of the proposed fractal descriptors computed over the original gray-valued image, over the LBP encoding, and combining both feature vectors. Notice that we used the LDA classifier for the same complete feature vector as in Table 4, which is why the second column of Table 4 and the third column of Table 5 are identical. We can observe that each approach can be more or less advantageous depending on the specific database being analyzed. In most cases, however, the sliding box method applied to the LBP encoding provides higher accuracy than the direct application to the original image. We also notice that combining fractal features over the gray values and the LBP codes can provide even better classification results.


Table 5: Classification accuracy of the sliding fractal descriptors computed over the original gray level image, the LBP encoding, and combining both approaches.

Database      Gray level   LBP        Gray level + LBP
KTHTIPS-2b    45.7±3.1     61.2±3.0   61.9±3.1
UIUC          81.1±1.5     69.1±2.2   88.9±0.9
UMD           92.4±1.1     99.0±0.4   99.2±0.4
1200Tex       77.1±1.4     84.4±1.2   86.6±1.1

Table 6 lists the accuracy of the proposed descriptors on the benchmark texture databases, compared with results published in the literature under similar protocols. The proposed method outperforms state-of-the-art approaches like SIFT + KCB or SIFT + BoVW on KTHTIPS-2b. Our proposal also presented better results than classical texture descriptors like LBP/VAR and VZ-Joint on UIUC, and even methods based on automatic learning, like FC-CNN, are outperformed on UMD. UMD and UIUC are classical examples of "textures in their strict sense", and such results confirm the suitability of fractal-based methods for analyzing such types of images.


Figure 4 depicts the confusion matrices for the benchmark textures. Generally speaking, these diagrams confirm the results in Table 6, but they also provide a rather important piece of information, the accuracy per class, allowing in this way a more complete analysis of the classification outcomes. In Figure 4 (a), KTHTIPS-2b presents lower accuracy in classes 3 ("corduroy"), 5 ("cotton"), and 10 ("wool"). These are materials highly susceptible to confusion, as they all correspond to types of clothing fabrics and are composed of similar texture patterns. In UIUC, the method achieves much higher accuracy, with some relevant misclassification only in class 19 (carpet, confused with class 8, granite). These are materials characterized by a similar granular appearance, which poses some difficulties even for visual discrimination.
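The per-class accuracies read off such diagrams can be computed directly from the confusion matrix; a minimal sketch, using the predicted-by-expected axis convention of the figures:

```python
import numpy as np

def confusion_matrix(y_expected, y_predicted, n_classes):
    """Entry (p, e) counts images of expected class e predicted as
    class p, matching the predicted-by-expected axes of Figure 4."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for e, p in zip(y_expected, y_predicted):
        m[p, e] += 1
    return m

def per_class_accuracy(m):
    """Fraction of each expected class (column) predicted correctly."""
    return np.diag(m) / m.sum(axis=0)
```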


Table 6: Accuracy of the proposed method compared with other texture descriptors in the literature.

KTHTIPS-2b
Method               Accuracy (%)
VZ-MR8 [47]          46.3
LBP [16]             50.5
VZ-Joint [22]        53.3
LBP-FH [48]          54.6
CLBP [49]            57.3
ELBP [50]            58.1
SIFT + KCB [23]      58.3
SIFT + BoVW [23]     58.4
Proposed             61.9

UIUC
Method               Accuracy (%)
RandNet (NNC) [51]   56.6
PCANet (NNC) [51]    57.7
BSIF [52]            73.4
VZ-Joint [22]        78.4
LBPriu2/VAR [16]     84.4
ScatNet (NNC) [53]   88.6
Proposed             88.9

UMD
Method                 Accuracy (%)
FC-CNN AlexNet [26]    95.9
DeCAF [23]             96.4
Scattering [54]        96.6
(H+L)(S+R) [19]        97.0
FC-CNN VGGM [26]       97.2
FC-CNN VGGVD [26]      97.7
SIFT+BoVW [23]         98.1
SIFT+LLC [26]          98.4
WMFS [24]              98.7
OTF [55]               98.8
PLS [25]               99.0
Proposed               99.2

Figure 4: Confusion matrices. (a) KTHTIPS-2b. (b) UIUC. (c) UMD.

Another important aspect to be pointed out here is the computational complexity of the compared approaches. As can be inferred from Algorithm 1, the complexity bottleneck of the proposed method is the convolution. However, efficient algorithms for that operation run in O(n log n), or even O(n) when overlapping is considered [56], where n basically corresponds to the number of pixels in the image. Other "handcrafted" compared features, like LBP and VZ-Joint, have similar performance, as they essentially rely on local comparisons. On the other hand, estimating the complexity of methods involving some sort of learning-based process, such as FC-CNN or PCANet, is a hard task, as it depends on the number of layers and operators. What can be empirically determined is that, in general, those approaches usually consume much more computational resources than traditional descriptors.
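The O(n log n) behaviour mentioned above comes from evaluating the convolution in the frequency domain. A minimal sketch using NumPy's FFT follows; it performs circular convolution, whereas the boundary handling in Algorithm 1 may differ.

```python
import numpy as np

def fft_convolve2d(image, kernel):
    """Circular 2-D convolution via the FFT: O(n log n) in the number
    of pixels n, versus O(n k) for direct sliding-window summation
    with a kernel of k taps."""
    h, w = image.shape
    # zero-pad the kernel to the image size, multiply spectra, invert
    return np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(kernel, s=(h, w))))
```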


In general, the proposed method presents important advantages: good performance without requiring large amounts of training data, as is usual in modern learning-based approaches; an algorithm that runs in reasonable computational time even on not-so-advanced hardware; and the interpretability associated with a fractal-based model, as fractals have long been known to be a natural mathematical tool to describe nature. In terms of disadvantages, the most significant one is the unsuitability of this approach for the analysis of general-purpose images, for example for object recognition. For this particular task, neural networks are more adequate and recommended as a more generalizable model.


6.1. Identification of Plant Species


Table 7 shows the accuracy of the proposed descriptors on the 1200Tex database [57], compared with other results recently published in the literature on this same data set. 1200Tex is a collection of images from the leaf surfaces of 20 Brazilian species photographed in vivo. For each species, 20 samples were collected, cleaned, registered (aligned with respect to the vertical axis), and photographed by a commercial scanner. The original image of each photographed sample was split into 3 non-overlapping windows, each one with resolution 128 × 128. Such windows were extracted from regions of the leaf less affected by texture variance caused by spurious elements. Before classification, all the images were converted into gray values, resulting in a database with 1200 images. In terms of validation protocol, we randomly selected 30 images per species for training, and the remaining images were employed for testing. This procedure was repeated 10 times, allowing in this way the computation of the average accuracy and standard deviation.
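The preparation step can be sketched as below. The channel-mean grayscale conversion and the vertically stacked window positions are assumptions for illustration; the original windows were hand-picked from regions least affected by spurious elements.

```python
import numpy as np

def extract_windows(scan, size=128, n_windows=3):
    """Convert an RGB leaf scan to gray levels and cut n_windows
    non-overlapping size x size windows from it. Windows are simply
    stacked vertically here, for illustration only."""
    gray = scan.mean(axis=2) if scan.ndim == 3 else scan
    return [gray[i * size:(i + 1) * size, :size] for i in range(n_windows)]
```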


Table 7: Accuracy of the sliding fractal descriptors compared with other literature results in the identification of plant species.

Method                   Accuracy (%)
LBPV [49]                70.8
Network diffusion [58]   75.8
FC-CNN VGGM [26]         78.0
Gabor [57]               84.0
FC-CNN VGGVD [26]        84.2
Schroedinger [59]        85.3
SIFT + BoVW [23]         86.0
Proposed                 86.6

Figure 5: Confusion matrix for 1200Tex.

Figure 5 provides a more detailed picture than Table 7 by showing the confusion matrix of the proposed method on 1200Tex. The accuracies in most classes are good, and the most critical case is class 8, which is mostly confused with class 6. In fact, these correspond to species whose leaf textures look rather similar, especially in terms of the arrangement of nervures and microtextures, which are known to be prominent elements for the discrimination among samples from different species.


In overall terms, the results corroborate that fractal descriptors can still be considered competitive in texture classification. This is in fact expected, given the way that many materials are usually formed in nature and the well-known adequacy of fractal geometry for modeling such processes. As also expected, the effectiveness of fractal modeling is even more evident in "pure" textures (like UIUC and UMD), or in practical problems, like those involving the classification of biological images, here illustrated with the identification of plant species.

The obtained results not only suggest more in-depth research on this topic, but they also present fractal descriptors as a useful alternative that should be verified in practice. Such practical interest is justified by a competitive performance, associated with the fact that "hand-crafted" approaches like the one proposed here require neither large amounts of training data nor high computational power. Fractal descriptors also provide a more straightforward interpretation of the model, given that fractal geometry has long been associated with a suitable model of nature.


7. Conclusions


This work proposed and studied the applicability of an image descriptor based on fractal geometry, with a focus on grayscale texture images. The method relies on classical techniques to numerically estimate the box-counting fractal dimension. As usual, the grayscale image is mapped onto a cloud of points in three-dimensional space. However, instead of using boxes with fixed positions, we adopted a scheme where boxes with different sizes slide over the image. The proposed descriptors combine in this way the multiscale analysis given by different box sizes with local features obtained by quantifying the distribution of pixels within each box at each position.
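The sliding-box idea can be illustrated with the simplified sketch below. It uses a differential box-counting style count (boxes of side r needed to cover the gray-level range inside each window) and keeps only the mean and variance of the counts per scale, as a stand-in for the full distribution-based descriptor D of Equation (8), not the authors' exact formulation.

```python
import numpy as np

def sliding_box_descriptors(image, box_sizes):
    """View the image as a surface z = gray(x, y). For each scale r,
    slide an r x r window over every position, count the boxes of
    height r needed to cover the gray levels inside it, and keep the
    mean and variance of those counts as features for that scale."""
    feats = []
    h, w = image.shape
    for r in box_sizes:
        counts = []
        for i in range(h - r + 1):
            for j in range(w - r + 1):
                patch = image[i:i + r, j:j + r]
                counts.append((int(patch.max()) - int(patch.min())) // r + 1)
        counts = np.asarray(counts, dtype=float)
        feats += [counts.mean(), counts.var()]
    return np.asarray(feats)
```

For a flat image every window needs a single box at every scale, so the feature vector degenerates to alternating ones and zeros.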


The performance of the proposed descriptors was assessed in the classification of three benchmark data sets of grayscale images (KTHTIPS-2b, UIUC, and UMD) and in a real-world problem: the identification of species of Brazilian plants. In both situations, the proposal achieved the highest ratios of images correctly classified in comparison with other classical and state-of-the-art texture descriptors. We also developed a theoretical statistical model to explain how the distribution of covering boxes differs from the distribution of the number of points within the sliding window. Such a model confirmed that the proposed strategy yields a more complete description of the texture image.


Summarizing, the obtained results confirmed that straightforward approaches to generating fractal descriptors can be competitive with more complex alternatives in which the statistics of the measure expressing the concept of "fractality" in fractal geometry are not clearly stated. Moreover, they also validate that different strategies potentially derived from the same idea in fractal geometry can lead to different results when such strategies are adapted into algorithms for digital images. This is the case of box counting, a technique that is theoretically known to be equivalent to the Bouligand-Minkowski dimension, whereas the respective descriptors present significantly different results.


Acknowledgements


G. T. gratefully acknowledges the financial support of the University of Campinas Fund for Research and Extension Studies (FAEPEX) (Proc. 2300/17). J. B. F. gratefully acknowledges the financial support of the State of São Paulo Research Foundation (FAPESP) (Proc. 2016/16060-0) and of the National Council for Scientific and Technological Development, Brazil (CNPq) (Grant #301480/2016-8).


References


[1] C. Nkono, O. Féménias, A. Lesne, J.-C. Mercier, F. Y. Ngounouno, D. Demaiffe, Relationship between the fractal dimension of orthopyroxene distribution and the temperature in mantle xenoliths, Geological Journal 51 (5) (2016) 748–759.

[2] J. Contreras-Ruiz, M. Martínez-Gallegos, E. Ordoñez-Regil, Surface fractal dimension of composites TiO2-hydrotalcite, Materials Characterization 121 (2016) 17–22.

[3] M. N. Starodubtseva, I. E. Starodubtsev, E. G. Starodubtsev, Novel fractal characteristic of atomic force microscopy images, Micron 96 (2017) 96–102.

[4] Ş. Ţălu, S. Stach, V. Sueiras, N. M. Ziebarth, Fractal analysis of AFM images of the surface of Bowman's membrane of the human cornea, Annals of Biomedical Engineering 43 (4) (2015) 906–916.

[5] F. Wang, D.-w. Liao, J.-w. Li, G.-p. Liao, Two-dimensional multifractal detrended fluctuation analysis for plant identification, Plant Methods 11 (1) (2015) 12.

[6] B. B. Mandelbrot, R. Pignoni, The Fractal Geometry of Nature, Vol. 173, WH Freeman, New York, 1983.

[7] D. Chappard, I. Degasne, G. Hure, E. Legrand, M. Audran, M. Basle, Image analysis measurements of roughness by texture and fractal analysis correlate with contact profilometry, Biomaterials 24 (8) (2003) 1399–1407.

[8] O. M. Bruno, R. de Oliveira Plotze, M. Falvo, M. de Castro, Fractal dimension applied to plant identification, Information Sciences 178 (12) (2008) 2722–2733.

[9] L. Liu, X. Yang, X. Jing, Fourier transform infrared spectroscopy microscopic imaging classification based on multifractal methods, Applied Optics 56 (6) (2017) 1689–1700.

[10] Y. Liu, Y. Liu, L. Sun, J. Liu, Multiscale fractal characterization of hierarchical heterogeneity in sandstone reservoirs, Fractals 24 (03) (2016) 1650032.

[11] J. B. Florindo, G. Landini, O. M. Bruno, Texture descriptors by a fractal analysis of three-dimensional local coarseness, Digital Signal Processing 42 (2015) 70–79.

[12] A. R. Backes, D. Casanova, O. M. Bruno, Plant leaf identification based on volumetric fractal dimension, International Journal of Pattern Recognition and Artificial Intelligence 23 (6) (2009) 1145–1160.

[13] J. B. Florindo, N. R. da Silva, L. M. Romualdo, F. de Fatima da Silva, P. H. de Cerqueira Luz, V. R. Herling, O. M. Bruno, Brachiaria species identification using imaging techniques based on fractal descriptors, Computers and Electronics in Agriculture 103 (2014) 48–54.

[14] N. R. da Silva, J. B. Florindo, M. C. Gómez, D. R. Rossatto, R. M. Kolb, O. M. Bruno, Plant identification based on leaf midrib cross-section images using fractal descriptors, PLoS ONE 10 (6) (2015) 1–14.

[15] J. B. Florindo, O. M. Bruno, G. Landini, Morphological classification of odontogenic keratocysts using Bouligand-Minkowski fractal descriptors, Computers in Biology and Medicine 81 (2017) 1–10.

[16] T. Ojala, M. Pietikäinen, T. Mäenpää, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (7) (2002) 971–987.

[17] K. Falconer, Fractal Geometry: Mathematical Foundations and Applications, John Wiley & Sons, 2004.

[18] F. R. Castella, Sliding window detection probabilities, IEEE Transactions on Aerospace and Electronic Systems AES-12 (6) (1976) 815–819.

[19] S. Lazebnik, C. Schmid, J. Ponce, A sparse texture representation using local affine regions, IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (8) (2005) 1265–1278.

[20] Y. Xu, H. Ji, C. Fermueller, Viewpoint invariant texture description using fractal analysis, International Journal of Computer Vision 83 (1) (2009) 85–100.

[21] E. Hayman, B. Caputo, M. Fritz, J.-O. Eklundh, On the significance of real-world conditions for material classification, in: T. Pajdla, J. Matas (Eds.), Computer Vision - ECCV 2004, Springer Berlin Heidelberg, Berlin, Heidelberg, 2004, pp. 253–266.

[22] M. Varma, A. Zisserman, A statistical approach to material classification using image patch exemplars, IEEE Transactions on Pattern Analysis and Machine Intelligence 31 (11) (2009) 2032–2047.

[23] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, A. Vedaldi, Describing textures in the wild, in: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR '14, IEEE Computer Society, Washington, DC, USA, 2014, pp. 3606–3613.

[24] Y. Xu, X. Yang, H. Ling, H. Ji, A new texture descriptor using multifractal analysis in multi-orientation wavelet pyramid, in: CVPR, IEEE Computer Society, 2010, pp. 161–168.

[25] Y. Quan, Y. Xu, Y. Sun, Y. Luo, Lacunarity analysis on image patterns for texture classification, in: 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 160–167.

[26] M. Cimpoi, S. Maji, I. Kokkinos, A. Vedaldi, Deep filter banks for texture recognition, description, and segmentation, International Journal of Computer Vision 118 (1) (2016) 65–94.

[27] A. Materka, M. Strzelecki, et al., Texture analysis methods - a review, Technical University of Lodz, Institute of Electronics, COST B11 report, Brussels (1998) 9–11.

[28] A. Serna, B. Marcotegui, Detection, segmentation and classification of 3D urban objects using mathematical morphology and supervised learning, ISPRS Journal of Photogrammetry and Remote Sensing 93 (2014) 243–255.

[29] R. Azencott, J.-P. Wang, L. Younes, Texture classification using windowed Fourier filters, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (2) (1997) 148–153.

[30] Y. Qian, M. Ye, J. Zhou, Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features, IEEE Transactions on Geoscience and Remote Sensing 51 (4) (2013) 2276–2291.

[31] X. Wu, J. Sun, Joint-scale LBP: a new feature descriptor for texture classification, The Visual Computer 33 (3) (2017) 317–329.

[32] S. Chen, Y. Tian, Pyramid of spatial relatons for scene-level land use classification, IEEE Transactions on Geoscience and Remote Sensing 53 (4) (2015) 1947–1957.

[33] X. Zhang, B. Zhu, L. Li, W. Li, X. Li, W. Wang, P. Lu, W. Zhang, SIFT-based local spectrogram image descriptor: a novel feature for robust music identification, EURASIP Journal on Audio, Speech, and Music Processing 2015 (1) (2015) 6.

[34] V. Ojansivu, J. Heikkilä, Blur insensitive texture classification using local phase quantization, in: International Conference on Image and Signal Processing, Springer, 2008, pp. 236–243.

[35] V. Andrearczyk, P. F. Whelan, Using filter banks in convolutional neural networks for texture classification, Pattern Recognition Letters 84 (C) (2016) 63–69.

[36] S. Lahmiri, A. Shmuel, Performance of machine learning methods applied to structural MRI and ADAS cognitive scores in diagnosing Alzheimer's disease, Biomedical Signal Processing and Control 52 (2019) 414–419.

[37] S. Lahmiri, Gait nonlinear patterns related to Parkinson's disease and age, IEEE Transactions on Instrumentation and Measurement 68 (7) (2019) 2545–2551.

[38] S. Lahmiri, A. Shmuel, Accurate classification of seizure and seizure-free intervals of intracranial EEG signals from epileptic patients, IEEE Transactions on Instrumentation and Measurement 68 (3) (2019) 791–796.

[39] Z. Li, D. Liu, Y. Cai, Y. Wang, J. Teng, Adsorption pore structure and its fractal characteristics of coals by N2 adsorption/desorption and FESEM image analyses, Fuel 257.

[40] S. V. George, R. Misra, G. Ambika, Fractal measures and nonlinear dynamics of overcontact binaries, Communications in Nonlinear Science and Numerical Simulation 80.

[41] V. Rigual, J. C. Dominguez, S. Rivas, A. Ovejero-Perez, M. Virginia Alonso, M. Oliet, F. Rodriguez, Application of microscopy techniques for a better understanding of biomass pretreatment, Industrial Crops and Products 138.

[42] S. Verdu, J. M. Barat, R. Grau, Fresh-sliced tissue inspection: Characterization of pork and salmon composition based on fractal analytics, Food and Bioproducts Processing 116 (2019) 20–29.

[43] J. B. Florindo, A. R. Backes, M. de Castro, O. M. Bruno, A comparative study on multiscale fractal dimension descriptors, Pattern Recognition Letters 33 (6) (2012) 798–806.

[44] A. Saa, G. Gascó, J. B. Grau, J. M. Antón, A. M. Tarquis, Comparison of gliding box and box-counting methods in river network analysis, Nonlinear Processes in Geophysics 14 (5) (2007) 603–613.

[45] I. Jolliffe, Principal Component Analysis, Springer Series in Statistics, Springer, 2002.

[46] W. J. Krzanowski (Ed.), Principles of Multivariate Analysis: A User's Perspective, Oxford University Press, Inc., New York, NY, USA, 1988.

[47] M. Varma, A. Zisserman, A statistical approach to texture classification from single images, International Journal of Computer Vision 62 (1-2) (2005) 61–81.

[48] T. Ahonen, J. Matas, C. He, M. Pietikäinen, Rotation invariant image description with local binary pattern histogram Fourier features, in: A.-B. Salberg, J. Y. Hardeberg, R. Jenssen (Eds.), Image Analysis, Springer Berlin Heidelberg, Berlin, Heidelberg, 2009, pp. 61–70.

[49] Z. Guo, L. Zhang, D. Zhang, A completed modeling of local binary pattern operator for texture classification, IEEE Transactions on Image Processing 19 (6) (2010) 1657–1663.

[50] L. Liu, L. Zhao, Y. Long, G. Kuang, P. Fieguth, Extended local binary patterns for texture classification, Image and Vision Computing 30 (2) (2012) 86–99.

[51] T. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, Y. Ma, PCANet: A simple deep learning baseline for image classification?, IEEE Transactions on Image Processing 24 (12) (2015) 5017–5032.

[52] J. Kannala, E. Rahtu, BSIF: Binarized statistical image features, in: ICPR, IEEE Computer Society, 2012, pp. 1363–1366.

[53] J. Bruna, S. Mallat, Invariant scattering convolution networks, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8) (2013) 1872–1886.

[54] L. Sifre, S. Mallat, Rotation, scaling and deformation invariant scattering for texture discrimination, in: CVPR, IEEE Computer Society, 2013, pp. 1233–1240.

[55] Y. Xu, S. Huang, H. Ji, C. Fermüller, Scale-space texture description on SIFT-like textons, Computer Vision and Image Understanding 116 (9) (2012) 999–1013.

[56] K. Pavel, S. David, Algorithms for efficient computation of convolution, in: G. Ruiz, J. A. Michell (Eds.), Design and Architectures for Digital Signal Processing, IntechOpen, Rijeka, 2013, Ch. 8.

[57] D. Casanova, J. J. de Mesquita Sá Junior, O. M. Bruno, Plant leaf identification using Gabor wavelets, International Journal of Imaging Systems and Technology 19 (3) (2009) 236–243.

[58] W. N. Gonçalves, N. R. da Silva, L. da Fontoura Costa, O. M. Bruno, Texture recognition based on diffusion in networks, Information Sciences 364 (C) (2016) 51–71.

[59] J. B. Florindo, O. M. Bruno, Discrete Schroedinger transform for texture recognition, Information Sciences 415 (2017) 142–155.


Declaration of Interest Statement

Declarations of interest: none