Local multifractal detrended fluctuation analysis for non-stationary image's texture segmentation


Applied Surface Science 322 (2014) 116–125


Fang Wang a,∗, Zong-shou Li b, Jin-wei Li c

a College of Science, Hunan Agricultural University, Changsha 410128, China
b College of Information Science and Engineering, Jishou University, Jishou 416000, China
c Agricultural Information Institute, Hunan Agricultural University, Changsha 410128, China

Article history: Received 12 July 2014; Received in revised form 12 October 2014; Accepted 13 October 2014; Available online 23 October 2014.

Keywords: Image stationarity; Local multifractal detrended fluctuation analysis; Local generalized Hurst exponent; Image segmentation; Noise

Abstract

Feature extraction plays an important role in image processing and pattern recognition, and multifractal theory has recently been employed as a powerful tool for this task. However, traditional multifractal methods are designed for objects with stationary measures and cannot handle non-stationary measures. The work of this paper is twofold. First, a definition of image stationarity and two detection methods for 2D gray images are proposed. Second, a novel feature extraction scheme for non-stationary images, local multifractal detrended fluctuation analysis (local MF-DFA), is proposed on the basis of the 2D MF-DFA. A set of new multifractal descriptors, called the local generalized Hurst exponent (Lhq), is defined to characterize the local scaling properties of textures. To test the proposed method, the novel texture descriptor and two other multifractal indicators, namely local Hölder coefficients based on a capacity measure and the multifractal dimension Dq based on the multifractal differential box-counting (MDBC) method, are compared in segmentation experiments. The first experiment indicates that the segmentation results obtained by the proposed Lhq are slightly better than those of the MDBC-based Dq and significantly superior to those of the local Hölder coefficients. The results of the second experiment demonstrate that Lhq distinguishes the texture images more effectively and provides significantly more robust segmentations than the MDBC-based Dq. © 2014 Elsevier B.V. All rights reserved.

1. Introduction

Since most natural surfaces are spatially isotropic fractals and the intensity images of these surfaces are fractal and multifractal [1], fractal and multifractal theory, developed by Mandelbrot [5,7], has attracted extensive attention and is widely used in image processing. Multifractal analysis (MFA, mainly using the multifractal dimension and the Hölder exponent or multifractal spectrum) is a useful tool for describing global and local irregularity as well as overall self-similar structure [19], properties that correspond to the structural complexity and textural regularity of images. Hence, MFA offers an effective way to handle various texture recognition problems [2–4,22,23,27]. For example, Xu et al. [22] proposed a robust texture descriptor combining MFA and Gabor filters, and Ouahabi [4] proposed a new MFA approach for texture characterization based on the discrete wavelet transform.


However, many existing MFA methods can only deal with stationary measures of an object and fail to handle non-stationary measures. As an effective way to deal with non-stationarity, multifractal detrended fluctuation analysis (MF-DFA) was proposed by Kantelhardt et al. [18] in 2002. Because MF-DFA does not require the modulus maxima procedure, in contrast to the wavelet transform modulus maxima (WTMM) method [17], MF-DFA and its various modified versions have been applied to non-stationary series in many fields [11,12,16,20,29], and the method has matured through these applications. It is therefore natural to generalize the one-dimensional (1D) MF-DFA to two or higher dimensions. This generalization was carried out by Gu and Zhou [15] in 2006, who showed that the two-dimensional (2D) MF-DFA works well on synthetic surfaces, including fractional Brownian surfaces and multifractal surfaces. Since then, meaningful image-processing work has been conducted with the 2D MF-DFA [8,13,14,24]. For example, Yadav et al. [24] used the 2D MF-DFA to analyze AFM images of the surface morphologies of LiF thin films. In our previous studies, we used modified MF-DFA to analyze different natural textures [14] and images of corn leaves with various diseases [13].


Many empirical results from the above studies show that the 2D MF-DFA can successfully portray the global characteristics of an image. However, since there is no formal definition of image stationarity, the question arises why the method gives better results than other multifractal parameters when processing natural textures, which may contain non-stationary measures. Motivated by this problem, we first tentatively give a definition of stationarity for 2D gray images and propose two methods for detecting image stationarity; two groups of surfaces generated by 2D signals of known stationarity are used to test the proposed definition and detection methods. Then, to study non-stationary image segmentation, an important task in image processing, we propose a method for describing the local multifractal properties of an image based on MF-DFA, which we call local multifractal detrended fluctuation analysis (local MF-DFA). A set of new local multifractal feature indicators is defined to characterize local texture. Eight similar texture images selected from the Brodatz album [21] are used to validate the proposed method. To demonstrate its advantage in distinguishing different textures, the proposed descriptor and two other multifractal indicators are used in comparative segmentation and anti-noise experiments. We find that the descriptor obtained by the local MF-DFA outperforms the other two indicators.

The plan of the paper is as follows. Section 2 gives an overview of image stationarity, containing a definition of image stationarity, two detection methods and two groups of image stationarity detection experiments. Section 3 introduces the local MF-DFA method based on the two-dimensional MF-DFA. Section 4 describes the main results of the comparative segmentation experiments between our method and the two other multifractal methods. Section 5 provides a brief summary of our study.

2. Overview of image stationarity

Stationarity detection of 1D sequences is of great significance for avoiding spurious regression and for the effective use of traditional models. Likewise, stationarity detection of 2D images plays a crucial role in problems such as image feature extraction and segmentation. In this section, we propose a definition of stationarity and two detection methods for gray images, and use two groups of images to test them.

2.1. Definition of image stationarity

Similar to the definition of stationarity for a 1D sequence [25], we define the stationarity of a 2D gray image as follows. For an image I of size M × N with 2D gray sequence G(i, j) (i = 1, 2, ..., M; j = 1, 2, ..., N), if it satisfies

(1) E(G(i, j)) = E(G(i + k, j + k)) = μ (k = ±1, ±2, ...);
(2) Var(G(i, j)) = Var(G(i + k, j + k)) = σ² (k = ±1, ±2, ...);
(3) γ_{i+k, j+k} = γ_k (k = ±1, ±2, ...),

then the image I is called stationary. From this definition, one notes that for a stationary image the mean and variance of the gray values in each sub-region are constant, and the autocovariance γ depends only on the interval k and is independent of the location (i, j).

2.2. Detection methods of image stationarity

Borrowing from the stationarity detection methods for 1D sequences, we can detect the stationarity of a 2D gray image I as follows:


Fig. 1. Schematic diagram of the Zigzag scan in a window of size 7 × 7.

Method I: Using a moving window of size k × k sliding over the image, the standard deviation of the gray values in each window is calculated, and the average over all windows is computed as σ(k). A series of σ(k) is then obtained by varying k. Finally, we determine whether the image is stationary by investigating the behavior of σ(k): if σ(k) stabilizes and converges to the gray-value standard deviation of the whole image, the image I can be considered stationary; if not, the image is non-stationary.

Method II: First, transform the 2D gray sequence G(i, j) into a 1D gray sequence G_x (x = 1, 2, ..., n, where n = M × N) by an arrangement that preserves local gray-level information; in this paper we use the popular Zigzag scan [30], as shown in Fig. 1. Then the Daniel test [25], based on Spearman's rank correlation coefficient, is applied to test the stationarity of the 1D sequence G_x. The detection procedure is as follows. First, denote the rank of G_x as R_x = R(G_x). The Spearman correlation coefficient Q_s between x and R_x is written as

Q_s = \frac{\sum_{x=1}^{n} (x - \bar{x})(R_x - \bar{R})}{\sqrt{\sum_{x=1}^{n} (x - \bar{x})^2}\,\sqrt{\sum_{x=1}^{n} (R_x - \bar{R})^2}},    (1)

where \bar{x} = \frac{1}{n}\sum_{x=1}^{n} x and \bar{R} = \frac{1}{n}\sum_{x=1}^{n} R_x. By simplifying, Eq. (1) can be rewritten as

Q_s = 1 - \frac{6}{n(n^2 - 1)} \sum_{x=1}^{n} (x - R_x)^2.    (2)

Second, construct the test statistic based on Q_s:

T = \frac{Q_s \sqrt{n - 2}}{\sqrt{1 - Q_s^2}}.    (3)

Third, establish the hypothesis test between the null hypothesis (H0: the 1D gray sequence G_x is stationary) and the alternative hypothesis (H1: the 1D gray sequence G_x is non-stationary). Finally, compare the statistic T with the threshold t_{α/2}(n − 2) for a given significance level α. If |T| ≤ t_{α/2}(n − 2), we accept the null hypothesis and regard G_x as stationary; if |T| > t_{α/2}(n − 2), we reject the null hypothesis and regard G_x as non-stationary.

It is worth mentioning that, because an image is large, the calculated |T| can become very large, which increases the probability of rejecting the null hypothesis and yields inaccurate conclusions about the stationarity of the image. Moreover, for a natural texture our concern is the singularity of local areas. In that spirit, we use a moving window of size r × r as the local region: after re-arranging the 2D gray sequence of the window into a 1D sequence by the Zigzag scan shown in Fig. 1, we calculate |T| and conduct the stationarity test for that window. When the moving window has slid over the whole image, we can determine the stationarity of the whole image. A minimal sketch of both detection methods is given at the end of this subsection.
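To make the two detection methods concrete, the following Python sketch is our own illustration, not the authors' code: the function names, the simplified zigzag ordering, the non-overlapping tiling in Method I and the use of SciPy's t distribution for t_{α/2}(n − 2) are all assumptions on our part.

```python
import numpy as np
from scipy import stats

def method1_sigma_k(image, ks=range(2, 51)):
    """Method I: average window standard deviation sigma(k) for k x k windows.
    Non-overlapping tiling is used here for speed; the paper slides the window."""
    img = image.astype(float)
    sigma = {}
    for k in ks:
        stds = [img[i:i + k, j:j + k].std()
                for i in range(0, img.shape[0] - k + 1, k)
                for j in range(0, img.shape[1] - k + 1, k)]
        sigma[k] = float(np.mean(stds))
    return sigma  # compare against img.std(): convergence suggests stationarity

def zigzag(block):
    """Flatten a 2D block along anti-diagonals with alternating direction
    (a simple stand-in for the Zigzag scan of Ref. [30])."""
    h, w = block.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: (p[0] + p[1],
                                  p[1] if (p[0] + p[1]) % 2 else -p[1]))
    return np.array([block[i, j] for i, j in order], dtype=float)

def daniel_test(gray_1d, alpha=0.05):
    """Method II: Daniel (Spearman rank) trend test, Eqs. (1)-(3).
    Returns True if the sequence is judged stationary."""
    n = len(gray_1d)
    rx = stats.rankdata(gray_1d)
    x = np.arange(1, n + 1)
    qs = 1.0 - 6.0 * np.sum((x - rx) ** 2) / (n * (n ** 2 - 1))  # Eq. (2)
    if abs(qs) >= 1:                     # perfect monotone trend -> non-stationary
        return False
    t_stat = qs * np.sqrt(n - 2) / np.sqrt(1.0 - qs ** 2)        # Eq. (3)
    return abs(t_stat) <= stats.t.ppf(1 - alpha / 2, n - 2)

def nonstationary_proportion(image, r=9, alpha=0.05):
    """P(r): fraction of r x r windows whose zigzag sequence fails the Daniel test.
    Brute-force sliding window; slow, but direct."""
    img = image.astype(float)
    fails, total = 0, 0
    for i in range(img.shape[0] - r + 1):
        for j in range(img.shape[1] - r + 1):
            total += 1
            if not daniel_test(zigzag(img[i:i + r, j:j + r]), alpha):
                fails += 1
    return fails / total
```

With the 10% threshold used below, an image would be flagged non-stationary when nonstationary_proportion(image, r) exceeds 0.1.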



Fig. 2. Ten synthetic images generated by fractional Gaussian noise and fractional Brownian motion.

2.3. Image stationarity detection experiments

In this subsection, we use two image databases, named A and B, each consisting of five synthetic texture surfaces of size 128 × 128, to test the effectiveness of the proposed definition and detection methods of image stationarity. The two groups of surfaces are generated by fractional Gaussian noise (fGn) [28] and fractional Brownian motion (fBm) [15] with Hurst exponents H of 0.1, 0.3, 0.5, 0.7 and 0.9, respectively, as shown in Fig. 2. We use the free MATLAB toolbox FracLab 2.04,

developed by INRIA, to synthesize the fBm surfaces, which are typical non-stationary signals. The surfaces of the classical stationary signal fGn can be obtained as the mixed second partial derivative ∂²F(x, y)/∂x∂y of the 2D fBm, computed by the following approximation [9]:

\frac{\partial^2 B_H(x, y)}{\partial x\,\partial y} = [B_H(x, y) - B_H(x - 1, y)] - [B_H(x, y - 1) - B_H(x - 1, y - 1)].    (4)
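As a small illustration of Eq. (4), the mixed second difference of an fBm surface can be computed as below (our own sketch; the fBm surface itself would be synthesized, e.g., with FracLab as described above, and the function name is ours):

```python
import numpy as np

def fgn_from_fbm(bh):
    """Approximate 2D fGn as the mixed second difference of a 2D fBm
    surface bh (Eq. (4)); output is one row and one column smaller."""
    bh = np.asarray(bh, dtype=float)
    return (bh[1:, 1:] - bh[:-1, 1:]) - (bh[1:, :-1] - bh[:-1, :-1])
```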

On the one hand, we use Method I to test the stationarity of the 10 synthetic texture images. A series of σ(k) values for different k is obtained, as illustrated in Fig. 3.

[Fig. 3 panels (whole-image standard deviations): A1: σ = 0.823, A2: σ = 0.290, A3: σ = 0.096, A4: σ = 0.029, A5: σ = 0.0065; B1: σ = 0.633, B2: σ = 0.477, B3: σ = 0.577, B4: σ = 0.194, B5: σ = 0.349. Axes: σ(k) versus k.]
Fig. 3. Behaviors of the standard deviation σ(k) of the synthetic images' gray values in databases A and B as a function of the interval k.



Table 1
The proportion P(r) of non-stationary windows (%) for the ten synthetic gray images in databases A and B.

           Database A (%)                          Database B (%)
           H=0.1  H=0.3  H=0.5  H=0.7  H=0.9       H=0.1  H=0.3  H=0.5  H=0.7  H=0.9
r = 9      0      0      0      0.26   1.51        45.11  62.88  74.22  78.89  63.99
r = 11     0      0      0      0.10   1.80        48.63  67.42  77.06  81.47  92.78
r = 13     0      0      0      0.05   0.91        58.11  74.32  88.54  84.56  92.19
r = 15     0      0      0      0      1.43        63.32  75.54  79.29  89.76  74.25
r = 17     0      0      0      0      0.32        65.09  78.54  86.42  84.73  98.88

Fig. 4. Eight natural textures from the Brodatz album.

One can note that the σ(k) of the images in database A converges to the gray-value standard deviation of the whole image, which is shown on each sub-graph; this indicates that those surfaces are stationary. On the contrary, the σ(k) of the images in database B does not converge to the whole-image standard deviation, which indicates that those surfaces are non-stationary.

On the other hand, we use a moving window of size r × r to slide over each image and determine whether the image is stationary according to the proportion P(r) of windows judged non-stationary by Method II. Due to unavoidable random error, not every gray sequence in every moving window will pass the Daniel test, so a threshold is set (10% in this paper): if P(r) < 10%, the image is considered stationary; otherwise, it is non-stationary. Table 1 lists the P(r) of the ten synthetic images for r = 9, 11, 13, 15 and 17. From the table, all the P(r) of the five images in database A are below 10%, while for the five images in database B, P(r) > 10%; this again demonstrates that the former are stationary and the latter non-stationary, and shows that the proposed definition and detection methods are workable.

Next, we use the proposed definition and the two detection methods of image stationarity to test similar texture images in a newly created texture database named C. It consists of eight natural textures of size 128 × 128 selected from the Brodatz album, denoted C1, C2, ..., C8, as shown in Fig. 4. The series of σ(k) with increasing k are plotted in Fig. 5, and P(r) for different r is plotted in Fig. 6. As seen from Figs. 5 and 6, there are large differences in the stationarity of the gray values among the eight similar texture images. As in Fig. 3, the gray-value standard deviation of the whole image is marked on each sub-graph of Fig. 5. One can note that the four textures C1, C2, C5 and C7 are non-stationary, while the other four images, whose σ(k) converges to the whole-image standard deviation, are stationary; this is basically consistent with the results obtained by the Daniel test.

[Fig. 5 panels (whole-image standard deviations): C1: σ = 23.11, C2: σ = 60.37, C3: σ = 65.63, C4: σ = 54.46, C5: σ = 76.90, C6: σ = 48.19, C7: σ = 89.88, C8: σ = 76.19. Axes: σ(k) versus k.]
Fig. 5. Behaviors of the standard deviation σ(k) of the eight texture images' gray values as a function of the interval k.

Fig. 6. The non-stationary window proportion P(r) of the eight texture gray images for r = 9, 11, 13, 15 and 17.

3. Methods

In this section, we first review the algorithm of the 2D MF-DFA proposed in Ref. [15] and then introduce the local MF-DFA.

3.1. Two-dimensional MF-DFA

Consider a 2D surface denoted by a matrix X with (i, j)th entry X(i, j), where i = 1, 2, ..., M and j = 1, 2, ..., N. The 2D MF-DFA algorithm in our setting is outlined as follows:

Step 1: Partition the surface into M_s × N_s (M_s ≡ [M/s], N_s ≡ [N/s]) non-overlapping square sub-domains of equal length s. Each sub-domain can be denoted by X_{m,n} such that X_{m,n}(i, j) = X(r + i, t + j) for 1 ≤ i, j ≤ s, where r = (m − 1)s and t = (n − 1)s.

Step 2: For each sub-domain X_{m,n}, construct the cumulative sum

G_{m,n}(i, j) = \sum_{k_1=1}^{i} \sum_{k_2=1}^{j} X_{m,n}(k_1, k_2),    (5)

where 1 ≤ i, j ≤ s. Note that G_{m,n}(i, j) is itself a surface.

Step 3: For each surface G_{m,n}, a local trend \tilde{G}_{m,n} is obtained by fitting it with a pre-chosen bivariate polynomial function. The simplest choice is a plane, which we adopt in this study:

\tilde{G}_{m,n}(i, j) = a i + b j + c,    (6)

where 1 ≤ i, j ≤ s, and a, b and c are free parameters estimated by the least-squares method through simple matrix operations. The residual matrix is

y_{m,n}(i, j) = G_{m,n}(i, j) − \tilde{G}_{m,n}(i, j).    (7)

Step 4: The detrended fluctuation function F(m, n, s) of the segment X_{m,n} is defined by

F^2(m, n, s) = \frac{1}{s^2} \sum_{i=1}^{s} \sum_{j=1}^{s} y_{m,n}^2(i, j).    (8)

The qth-order fluctuation function is calculated by averaging over all segments:

F_q(s) = \left\{ \frac{1}{M_s N_s} \sum_{m=1}^{M_s} \sum_{n=1}^{N_s} [F(m, n, s)]^q \right\}^{1/q},   q ≠ 0,    (9)

F_q(s) = \exp\left\{ \frac{1}{M_s N_s} \sum_{m=1}^{M_s} \sum_{n=1}^{N_s} \ln[F(m, n, s)] \right\},   q = 0.    (10)

Step 5: Vary the scale s from 6 to min(M, N)/4. The scaling behavior of the fluctuation functions is obtained by analyzing ln F_q(s) versus ln s for each value of q. If the surfaces G_{m,n} are long-range power-law correlated, F_q(s) increases, for large values of s, as a power law:

F_q(s) ∝ s^{h(q)},    (11)

where h(q = 2) is the Hurst exponent of the surface; in this regard, the index h(q) is the so-called generalized Hurst exponent. For each q, the corresponding classical multifractal scaling exponent τ(q) is given by

τ(q) = q h(q) − D_f = q h(q) − 2,    (12)

where D_f is the fractal dimension of the geometric support of the multifractal measure, which takes the value D_f = 2 in our work.
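For reference, the following Python sketch is our own illustration of Steps 1–5, not the authors' code; the function names and the use of NumPy's least-squares routine for the plane fit are our choices.

```python
import numpy as np

def fluctuations(X, s):
    """F(m, n, s) for all s x s sub-domains of X (Eqs. (5)-(8))."""
    M, N = X.shape
    Ms, Ns = M // s, N // s
    # design matrix for the plane fit  G ~ a*i + b*j + c  (Eq. (6))
    i, j = np.meshgrid(np.arange(1, s + 1), np.arange(1, s + 1), indexing="ij")
    A = np.column_stack([i.ravel(), j.ravel(), np.ones(s * s)])
    F = np.empty((Ms, Ns))
    for m in range(Ms):
        for n in range(Ns):
            sub = X[m * s:(m + 1) * s, n * s:(n + 1) * s].astype(float)
            G = sub.cumsum(axis=0).cumsum(axis=1)            # Eq. (5)
            coef, *_ = np.linalg.lstsq(A, G.ravel(), rcond=None)
            resid = G.ravel() - A @ coef                     # Eq. (7)
            F[m, n] = np.sqrt(np.mean(resid ** 2))           # Eq. (8)
    return F

def generalized_hurst(X, qs, scales):
    """h(q) from the slope of ln Fq(s) versus ln s (Eqs. (9)-(11)).
    Very small F values may need guarding for strongly negative q."""
    logFq = np.empty((len(qs), len(scales)))
    for k, s in enumerate(scales):
        F = fluctuations(X, s)
        for a, q in enumerate(qs):
            if q == 0:
                logFq[a, k] = np.mean(np.log(F))             # Eq. (10)
            else:
                logFq[a, k] = np.log(np.mean(F ** q)) / q    # Eq. (9)
    logs = np.log(np.asarray(scales, dtype=float))
    return np.array([np.polyfit(logs, logFq[a], 1)[0] for a in range(len(qs))])
```

For a 256 × 256 surface, scales = np.arange(6, 65) and qs = np.arange(-10, 11) would follow Step 5 and the range of q shown in Figs. 7 and 8.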

3.2. Local MF-DFA

Rather than the generalized Hurst exponent h(q) defined for the whole surface, we introduce a method, based on the 2D MF-DFA, to extract multifractal characteristics for every pixel; we call it the local MF-DFA. For an image of size M × N, the MF-DFA estimation is performed at each pixel (i, j) using a moving window of size w × w centered on (i, j), where w = 2k + 1, k + 1 ≤ i ≤ M − k and k + 1 ≤ j ≤ N − k. The exponent h(q) calculated in this window is taken as the characteristic of the pixel (i, j) and is denoted Lhq. Obviously, Lhq expresses the non-stationary characteristics of the local window; in this sense, we call Lhq the local generalized Hurst exponent. When the window slides over every pixel, from left to right and top to bottom, the exponent Lhq is obtained for every pixel. In the following experiments, we use Lhq as the feature descriptor of the image.

It should be noted that the recognition and segmentation results obtained with the Lhq of different textures depend strongly on the window size. Too large a window causes different textures to cross-alias and produces a 'sawtooth effect' that seriously affects the accuracy of the boundary segmentation. On the contrary, with too small a window it is difficult to extract the average information of an identical texture: singular spots are easily produced inside the texture, so the correct statistical properties are not reflected. In this paper, after repeated trials and with reference to other studies, we set the window size to 23 × 23.
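A minimal sketch of this per-pixel estimation, reusing the generalized_hurst helper from the previous sketch, might look as follows. The per-window scale range is an assumption on our part, since the paper does not list the scales used inside the 23 × 23 window.

```python
import numpy as np

def local_hurst_map(image, q, k=11, scales=tuple(range(4, 12))):
    """Lh_q for every interior pixel using a (2k+1) x (2k+1) window;
    k = 11 gives the 23 x 23 window used in the paper.
    Border pixels are left as NaN.  Brute-force and slow: a real
    implementation would reuse overlapping computations."""
    M, N = image.shape
    Lh = np.full((M, N), np.nan)
    for i in range(k, M - k):
        for j in range(k, N - k):
            win = image[i - k:i + k + 1, j - k:j + k + 1].astype(float)
            Lh[i, j] = generalized_hurst(win, [q], scales)[0]
    return Lh
```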

4. Test and analysis

In this section, we first inspect the multifractal nature of the eight studied textures in database C. Then the proposed local MF-DFA exponent estimation is evaluated through its estimation error. Finally, we arrange two groups of segmentation experiments, on mosaics spliced from the eight textures, to test the proposed Lhq.

4.1. Multifractal nature of the textures

Each image is stored as a two-dimensional matrix with 256 gray levels. This allows us to follow the procedure introduced in Section 3.1 to calculate the associated h(q) and τ(q). If τ(q) is nonlinear in q, that is, if h(q) is not independent of q, then the surface possesses multifractal nature. We find that all eight studied textures in database C possess multifractal nature.

Figs. 7 and 8 demonstrate the multifractal nature of two randomly chosen textures, namely C3 (Fig. 7) and C7 (Fig. 8). In each figure, the left panel illustrates the dependence of the detrended fluctuation function Fq(s) on the scale s for different q; the well-fitted straight lines indicate an evident power-law scaling of Fq(s) versus s.

[Fig. 7 panel (a) fitted slopes: h(−10) = 2.2091, h(−5) = 2.1116, h(1) = 1.9931, h(5) = 1.9254, h(10) = 1.9004.]
Fig. 7. Multifractal nature in the power law of the texture C3. (a) Plots of the detrended fluctuation function Fq(s) for different values of q; the straight lines are the best fitted lines, whose gradients are shown beside the lines. (b) Dependence of τ(q) and h(q) on q.

The right panels show that τ(q) is nonlinear in q, as indicated by the fact that h(q) depends on q.

4.2. Estimation evaluation

We first test the goodness of fit of ln Fq(s) versus ln s using the distance error [6,28], which evaluates the quality of the proposed estimation. We plot Fq(s) against s in a double-logarithmic plot; if the fit is perfect, we obtain a straight line. Let y = kx + c be the fitted line, where y denotes ln Fq(s) and x denotes ln s. The quality of the estimation can then be evaluated by the distance error (DE), expressed as the root-mean-square distance of the points from the fitted line:

DE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \frac{(k x_i + c - y_i)^2}{1 + k^2}},    (13)

where n denotes the number of points. For each texture, Lhq is calculated in a moving window of size 23 × 23, and the results are averaged over all pixels to decrease the impact of randomness. Several components of the average Lhq, namely Lh−5, Lh−3, Lh−1, Lh1, Lh3 and Lh5, together with the standard deviation of Lhq and the average estimation fitting errors, are illustrated in Fig. 9.
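As a quick reference, the distance error of Eq. (13) for a set of (ln s, ln Fq(s)) points can be computed as follows (a minimal sketch; the function name is ours):

```python
import numpy as np

def distance_error(log_s, log_Fq):
    """Root-mean-square point-to-line distance of Eq. (13)."""
    k, c = np.polyfit(log_s, log_Fq, 1)                 # fitted line y = k*x + c
    d2 = (k * log_s + c - log_Fq) ** 2 / (1 + k ** 2)   # squared point-line distances
    return np.sqrt(d2.mean())
```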

As seen from Fig. 9(a), Lhq can successfully capture the differences among the textures. However, the performance of each component differs notably with the order q. Compared with Lh−1, Lh1 and Lh3, the broken lines of Lh−5, Lh−3 and Lh5 occupy a wider dynamic range, which indicates a better ability to distinguish textures. Yet, as shown in Fig. 9(b), there is also a relatively large difference in the standard deviation of each texture's Lhq. A multifractal parameter with a larger dynamic range and a smaller standard deviation serves better as an indicator for distinguishing textures; considering both factors, we select three optimal components of Lhq, {Lh−5, Lh1, Lh5}, as the feature indicators for the following segmentation experiments.

As introduced in the previous section, the qth-order fluctuation function Fq(s) used in the derivation of the local MF-DFA exponents is defined similarly to the local DBC-based algorithm. If Fq(s) effectively describes the multifractal characteristics of a surface, it must follow a power law with the scale s; hence Fq(s) and s should exhibit a linear relationship in a double-logarithmic plot, with a low fitting error. As shown in Fig. 9(c), the average distance errors of the proposed estimation calculated by Eq. (13) are as low as those of the classical DBC-based and morphology-based multifractal dimension estimations reported in Refs. [6,28]. The satisfying distance error illustrates that the proposed Lhq indeed follows the desired power law and that the proposed estimation successfully characterizes the local scaling features of texture images.

Fig. 8. Multifractal nature in the power law of the texture C7. (a) Plots of the detrended fluctuation function Fq(s) for different values of q; the straight lines are the best fitted lines, whose gradients are shown beside the lines. (b) Dependence of τ(q) and h(q) on q.

[Fig. 9 panels: (a) average Lhq, (b) standard deviation of Lhq and (c) average distance error (×10⁻³), each plotted against the texture sequence number for q = −5, −3, −1, 1, 3, 5.]
Fig. 9. Local MF-DFA estimation on the eight Brodatz textures. (a) The average Lhq over all pixels; (b) standard deviation of Lhq; (c) average distance error over all pixels.

4.3. Segmentation experiments

In this subsection, we arrange two groups of segmentation experiments to test the proposed Lhq. As described in the previous subsection, three components of the local generalized Hurst exponent, {Lh−5, Lh1, Lh5}, are used as characteristic indicators. First, for every pixel, the characteristic indicators are calculated in the moving window centered on that pixel. Next, based on the calculated indicators, pixels with similar features are assigned to the corresponding regions by a classifier. The calculation process is repeated 10 times to eliminate the impact of randomness.

For comparison, two other common multifractal methods are used to extract texture features in the segmentation experiments. One is the local singularity exponent, namely the Hölder coefficient based on a capacity measure [3,4], denoted αc; the other is the multifractal dimension Dq based on differential box-counting (MDBC). The local singularity coefficient, defined as

αc ≡ \lim_{\varepsilon \to 0} \frac{\ln \mu(D_r)}{\ln \varepsilon},

describes the local characteristics of a texture image well and has been used in many fields [13,23,27]. In Ref. [23], three kinds of capacity measures of the gray-level intensities were considered and used to segment synthetic images, namely the maximum, minimum and sum measures, defined as μ_max(D_r) ≡ max(X(i, j)), μ_min(D_r) ≡ 1/max(X(i, j)) and μ_sum(D_r) ≡ \sum_{i,j} X(i, j), respectively. In this paper, the three capacity measures are likewise used to calculate αmax, αmin and αsum, computed within a sub-image of size 11 × 11 around every pixel; their components are taken as features in our comparison experiments.

The MDBC characteristics were initially developed by Chaudhuri and Sarkar [6]; the multifractal dimension is defined as

D_q ≡ \frac{1}{1 - q} \lim_{\varepsilon \to 0} \frac{\ln \sum_{i,j} \mu(i, j)^q}{\ln(M/\varepsilon)},   q ≠ 1,

where the measure μ(i, j) is the proportion of the number of boxes needed to cover the gray levels of the (i, j)th cell in the total number of boxes. In the case q = 1, D1 is computed as the gradient of a linear regression of \sum_{i,j} \mu(i, j) \ln \mu(i, j) against \ln \varepsilon. In Ref. [28], Xia et al. used the three components {D−5, D1, D5} to segment textures from the Brodatz database and suggested that they be computed in a moving window of size 17 × 17 for every pixel; we adopt the same scheme in our comparison experiments (a rough sketch of this MDBC feature is given at the end of this setup).

In the following experiments, two classifiers are employed: the K-nearest neighbor algorithm (KNN) [10] and random forests (RF) [26]. We point out that the RF method is much more time-consuming than the KNN method, but it gives better classification results. For the sake of fairness, the three segmentation methods share the same scheme and the same KNN-based and RF-based classifiers.

The first comparative experiment is performed on an image database named CA, generated from the eight natural textures in database C shown in Fig. 4. Each image in CA is a mosaic of size 256 × 256 with 256 gray levels, spliced from four natural textures, so there are \binom{8}{4} = 70 test mosaics in total, indexed CA1, CA2, ..., CA70. In Fig. 10, three example test cases (CA2, CA7 and CA55) are shown in the left column. The following two columns show the results obtained with the αc exponent using the KNN and RF classifiers; the fourth and fifth columns show the segmentations obtained with the Dq exponent using the two classifiers; and the right two columns illustrate the results of the proposed Lhq exponent with the two classifiers. The segmentation results are displayed with different selected gray levels to highlight the different regions. Table 2 lists the segmentation errors for the cases shown in Fig. 10, and Table 3 reports the errors over the whole database. From Fig. 10 and Tables 2 and 3, the proposed Lhq is markedly preferable to αc and slightly better than Dq. One also notes large differences in the average error of incorrectly classified pixels between the two classifiers. In any case, the proposed Lhq is the best descriptor for differentiating the natural textures in the mosaics of database CA.
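For completeness, a rough sketch of the MDBC comparison feature is given below. It follows the Dq definition above, but the box-height scaling, the handling of leftover pixels and the scale choices are assumptions on our part (see Ref. [6] for the original differential box-counting scheme); the function name is ours.

```python
import numpy as np

def mdbc_dimension(window, qs=(-5, 1, 5), sizes=(2, 4, 8)):
    """Multifractal dimension D_q of a gray-level window via a simple
    differential box-counting measure (sketch of the comparison method)."""
    win = np.asarray(window, dtype=float)
    M = win.shape[0]
    G = 256.0                                   # number of gray levels
    log_ratio, log_sum_q = [], {q: [] for q in qs}
    for eps in sizes:
        h = eps * G / M                         # box height for this cell size
        n = []
        for i in range(0, M - eps + 1, eps):
            for j in range(0, M - eps + 1, eps):
                cell = win[i:i + eps, j:j + eps]
                # boxes needed to cover the gray levels of this cell
                n.append(np.floor(cell.max() / h) - np.floor(cell.min() / h) + 1)
        mu = np.array(n) / np.sum(n)            # measure mu(i, j)
        log_ratio.append(np.log(M / eps))
        for q in qs:
            if q == 1:
                log_sum_q[q].append(np.sum(mu * np.log(mu)))
            else:
                log_sum_q[q].append(np.log(np.sum(mu ** q)))
    log_ratio = np.array(log_ratio)
    Dq = []
    for q in qs:
        slope = np.polyfit(log_ratio, np.array(log_sum_q[q]), 1)[0]
        # D1 uses the slope against ln(eps), i.e. the negative slope here
        Dq.append(-slope if q == 1 else slope / (1 - q))
    return np.array(Dq)
```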



Fig. 10. Three test cases of mosaics and their segmentation by applying the three texture features with the two classifiers.

Table 2
Error percentage of incorrectly classified pixels for three test cases of mosaics by the two classifiers.

Image index  Mosaics      Methods  {αmax, αmin, αsum}  {D−5, D1, D5}  {Lh−5, Lh1, Lh5}
CA2          C1-C2-C3-C5  KNN      23.99%              10.97%         5.26%
                          RF       6.41%               1.85%          1.01%
CA7          C1-C2-C4-C6  KNN      22.03%              19.09%         11.04%
                          RF       6.89%               3.70%          1.72%
CA55         C2-C6-C7-C8  KNN      23.08%              15.17%         10.02%
                          RF       7.17%               2.41%          1.58%

Table 3
Average error percentage of incorrectly classified pixels for the mosaics in database CA by the two classifiers.

Methods                 {αmax, αmin, αsum}  {D−5, D1, D5}   {Lh−5, Lh1, Lh5}
KNN                     23.44% ± 0.3%       10.23% ± 0.3%   9.65% ± 0.3%
No. of minimum error    0                   34              36
RF                      7.25% ± 0.3%        1.66% ± 0.3%    1.49% ± 0.3%
No. of minimum error    0                   32              38

Fig. 11. Two test cases of mosaics of four textures with the four added noises and their segmentations.



Table 4
Error percentage of incorrectly classified pixels for two test cases of mosaics by the KNN classifier.

Mosaics      Image index  Noises         {αmax, αmin, αsum}  {D−5, D1, D5}  {Lh−5, Lh1, Lh5}
C1-C2-C3-C5  CA2          Original       23.99%              10.97%         5.26%
             CBI2         Gaussian       24.22%              16.95%         5.29%
             CBII2        Salt & pepper  21.30%              16.65%         5.36%
             CBIII2       Poisson        23.75%              12.15%         5.47%
             CBIV2        Speckle        24.12%              13.97%         5.35%
C3-C5-C6-C8  CA63         Original       22.94%              2.34%          7.14%
             CBI63        Gaussian       23.29%              11.63%         7.70%
             CBII63       Salt & pepper  24.48%              11.12%         7.73%
             CBIII63      Poisson        23.74%              5.94%          7.54%
             CBIV63       Speckle        23.81%              13.63%         7.93%

Table 5
Average error percentage of incorrectly classified pixels for the mosaics in databases CBI–CBIV by the KNN classifier.

Noises                  {αmax, αmin, αsum}  {D−5, D1, D5}   {Lh−5, Lh1, Lh5}
Gaussian                24.09% ± 0.3%       16.22% ± 0.3%   9.85% ± 0.3%
No. of minimum error    0                   6               64
Salt & pepper           22.54% ± 0.3%       14.88% ± 0.3%   9.99% ± 0.3%
No. of minimum error    0                   10              60
Poisson                 23.83% ± 0.3%       12.48% ± 0.3%   9.97% ± 0.3%
No. of minimum error    0                   22              48
Speckle                 24.04% ± 0.3%       12.66% ± 0.3%   10.11% ± 0.3%
No. of minimum error    0                   22              48

To demonstrate the robustness of the proposed texture descriptor in describing an image and identifying textures, a second comparative experiment is carried out. In this experiment, four common noises are added to each mosaic in database CA: Gaussian noise with zero mean and variance 0.01, salt & pepper noise with 2% density, Poisson noise, and speckle noise with zero mean and variance 0.04 (a brief sketch of this noise generation is given below). Accordingly, four new databases are set up, named CBI, CBII, CBIII and CBIV, respectively. Each database again contains 70 spliced images of size 256 × 256, denoted CBI1, CBI2, ..., CBI70; CBII1, CBII2, ..., CBII70; CBIII1, CBIII2, ..., CBIII70; and CBIV1, CBIV2, ..., CBIV70, respectively. For time-saving considerations, only the KNN classifier is used in this experiment. Two example test cases (CBI2–CBIV2 and CBI63–CBIV63), together with their corresponding results, appear in Fig. 11: the top row shows the segmentations obtained with αc, the middle row those obtained with Dq, and the bottom row those obtained with the proposed Lhq; the left group shows the segmentation results of CBI2–CBIV2 and the right group those of CBI63–CBIV63. Table 4 lists the segmentation errors of the two samples, and Table 5 reports the errors over the whole databases.

From Table 5, it is easy to see that when Dq is used as the feature indicator, the segmentation results are strongly influenced by the noises, especially the Gaussian and salt & pepper noises: the numbers of minimum error among the 70 images are only 6 and 10, whereas the corresponding number without noise is 34 (Table 3). In contrast, the αc exponent and the proposed Lhq give robust segmentations. Generally speaking, after adding noise the mosaics are degraded and not easily recognized by human vision.
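For reference, noisy copies of a mosaic with the parameters listed above could be generated as follows (a minimal sketch; scikit-image is our tool choice, not the authors', and the function name is ours):

```python
from skimage.util import random_noise

def noisy_versions(mosaic):
    """Four noisy copies of a mosaic (gray values scaled to [0, 1]),
    using the noise parameters stated in the paper."""
    return {
        "gaussian": random_noise(mosaic, mode="gaussian", var=0.01),
        "salt & pepper": random_noise(mosaic, mode="s&p", amount=0.02),
        "poisson": random_noise(mosaic, mode="poisson"),
        "speckle": random_noise(mosaic, mode="speckle", var=0.04),
    }
```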

Fig. 12. Average identification accuracy of each texture in the mosaics by the KNN classifier. The left panel is before adding noise and the right panel is after adding the four noises.


To compare the recognition rate of each texture under noise across the three methods, we calculate the average identification accuracy of each texture before and after the noise is added, as shown in Fig. 12. The right panel of Fig. 12 shows that the segmentation results of Dq are strongly influenced by the noises, especially the Gaussian and salt & pepper noises, which is consistent with Table 5. One notes that only one natural texture (C7: loose burlap) is recognized more easily by the MDBC method than by the other two methods; by contrast, before adding noise, five textures obtain their highest identification accuracy with the MDBC method, as shown in the left panel of Fig. 12. The discrimination obtained by the local MF-DFA method, however, is barely affected by the four noises, remaining high both before and after the noise is added.

5. Conclusion

In pattern recognition and artificial intelligence, multifractal methods that can successfully portray different texture images are used to mimic human visual experience. Most past work on texture image recognition has been done with standard MFA. However, MF-DFA is able to process non-stationary measures, which may make it more suitable for complex texture images and provide a more explicit and accurate tool for this task. The components of the MF-DFA used to distinguish different textures can be considered statistically significant.

In this paper, our work is twofold. First, a definition of stationarity for 2D gray images and two detection methods have been proposed tentatively. Tests on two groups of synthetic surfaces generated by the classical stationary and non-stationary signals, fractional Gaussian noise and fractional Brownian motion, indicate that the proposed definition and detection methods are effective. Applying them to eight natural textures chosen from the open Brodatz album, we found that four of them possess non-stationary measures while the other four are stationary. Second, the local MF-DFA has been proposed to calculate a local multifractal feature for every pixel of an image; the corresponding local generalized Hurst exponent Lhq has been defined and used as a texture feature for non-stationary measures.

To assess the ability of the new texture descriptor to differentiate various natural textures, two segmentation experiments have been carried out. As comparisons, two classical multifractal features, i.e. the Hölder coefficient αc based on a capacity measure and the multifractal dimension Dq based on MDBC, were used. The results obtained by the local MF-DFA method are in general better than those obtained by the two latter methods. In particular, we first verified that the eight studied textures possess multifractal nature. Then two groups of experiments were conducted. In the first experiment, 70 mosaics were generated by combining every four patterns from the eight studied natural textures; the results show that the errors of incorrectly classified pixels obtained with the proposed Lhq are significantly lower than those of αc and slightly lower than those of Dq. In the second experiment, the original 70 mosaics were corrupted with four noises, namely Gaussian, salt & pepper, Poisson and speckle noise, generating another 70 × 4 = 280 mosaics. The results show that the segmentations obtained by the MDBC method are greatly affected by the noises, whereas the results obtained with the proposed Lhq are robust to noise, which shows that our approach is workable.


Acknowledgments

The authors wish to thank the reviewers and the handling editor Dr. Robert L. Opila for their comments and suggestions, which led to a great improvement of the presentation of this work. This work was supported by the Young Scholar Project Funds of the Education Department of Hunan Province, P.R. China (14B087).

References

[1] A.P. Pentland, Fractal based description of natural scenes, IEEE Trans. Pattern Anal. Mach. Intell. 6 (1984) 661–674.
[2] A. Ouahabi, Multifractal analysis for texture characterization: a new approach based on DWT, in: ISSPA, 2010, pp. 698–703.
[3] A. Ouahabi, S. Femmam, Wavelet-based multifractal analysis of 1-D and 2-D signals: new results, Analog Integr. Circuits Signal Process. 69 (1) (2011) 3–15.
[4] A. Ouahabi, Discrete wavelet transform-based multifractal analysis, Signal Image Multiresolut. Anal. (2013) 135–224.
[5] B.B. Mandelbrot, J.W. Van Ness, Fractional Brownian motions, fractional noises and applications, SIAM Rev. 10 (4) (1968) 422–437.
[6] B.B. Chaudhuri, N. Sarkar, Texture segmentation using fractal dimension, IEEE Trans. Pattern Anal. Mach. Intell. 17 (1) (1995) 72–77.
[7] B.B. Mandelbrot, A multifractal walk down Wall Street, Sci. Am. 298 (1999) 70–73.
[8] C. Liu, X.L. Jiang, T. Liu, et al., Multifractal analysis of the fracture surfaces of foamed polypropylene/polyethylene blends, Appl. Surf. Sci. 255 (7) (2009) 4239–4245.
[9] D.R. McGaughey, G.J.M. Aitken, Generating two-dimensional fractional Brownian motion using the fractional Gaussian process (FGp) algorithm, Physica A: Stat. Mech. Appl. 311 (3) (2002) 369–380.
[10] D. Bremner, E. Demaine, J. Erickson, et al., Output-sensitive algorithms for computing nearest-neighbor decision boundaries, Discret. Comput. Geom. 33 (4) (2005) 593–604, http://dx.doi.org/10.1007/s00454-004-1152-0.
[11] F. Wang, G.-P. Liao, X.-Y. Zhou, et al., Multifractal detrended cross-correlation analysis for power markets, Nonlinear Dyn. 72 (1–2) (2013) 353–363.
[12] F. Wang, G.P. Liao, J.H. Li, et al., Multifractal detrended fluctuation analysis for clustering structures of electricity price periods, Physica A: Stat. Mech. Appl. 392 (22) (2013) 5723–5734.
[13] F. Wang, J.W. Li, W. Shi, et al., Leaf image segmentation method based on multifractal detrended fluctuation analysis, J. Appl. Phys. 114 (21) (2013) 214905.
[14] F. Wang, Z.S. Li, G.P. Liao, Multifractal detrended fluctuation analysis for image texture feature representation, Int. J. Pattern Recognit. Artif. Intell. 28 (3) (2014) 1455005.
[15] G.F. Gu, W.X. Zhou, Detrended fluctuation analysis for fractals and multifractals in higher dimensions, Phys. Rev. E 74 (2006) 061104.
[16] G.X. Cao, J. Cao, L.B. Xu, Asymmetric multifractal scaling behavior in the Chinese stock market: based on asymmetric MF-DFA, Physica A 392 (4) (2013) 797–807.
[17] J.F. Muzy, E. Bacry, A. Arneodo, Wavelets and multifractal formalism for singular signals: application to turbulence data, Phys. Rev. Lett. 67 (1991) 3515.
[18] J.W. Kantelhardt, S.A. Zschiegner, E.K. Bunde, et al., Multifractal detrended fluctuation analysis of non-stationary time series, Physica A: Stat. Mech. Appl. 316 (2002) 87–114.
[19] J.W. Kantelhardt, Fractal and multifractal time series, Encycl. Complex. Syst. Sci. (2009) 3754–3779.
[20] M.S. Movahed, G.R. Jafari, F. Ghasemi, et al., Multifractal detrended fluctuation analysis of sunspot time series, J. Stat. Mech.: Theory Exp. (2006) P02003.
[21] P. Brodatz, Texture: A Photographic Album for Artists and Designers, Dover, New York, 1966.
[22] P.F. Xu, H.X. Yao, R.R. Ji, et al., A robust texture descriptor using multifractal analysis with Gabor filter, in: Proceedings of the Second International Conference on Internet Multimedia Computing and Service (ICIMCS'10), 2010.
[23] R. Lopes, P. Dubois, I. Bhouri, et al., Local fractal and multifractal features for volumic texture characterization, Pattern Recognit. 44 (8) (2011) 1690–1697.
[24] R.P. Yadav, S. Dwivedi, A.K. Mittal, et al., Fractal and multifractal analysis of LiF thin film surface, Appl. Surf. Sci. 261 (2012) 547–553.
[25] S. Ruey, Y.L. Wang, J.Z. Pan, Finance Time Series Analysis, Posts & Telecom Press, Beijing, 2012.
[26] T.K. Ho, Random decision forests, in: Proc. of the 3rd Int'l Conf. on Document Analysis and Recognition, Montreal, Canada, August 14–18, 1995, pp. 278–282.
[27] T. Stojic, I. Reljin, B. Reljin, Adaptation of multifractal analysis to segmentation of microcalcifications in digital mammograms, Physica A 367 (15) (2006) 494–508.
[28] Y. Xia, D.G. Feng, R.C. Zhao, Morphology-based multifractal estimation for texture segmentation, IEEE Trans. Image Process. 15 (3) (2006) 614–622.
[29] Y. Zhou, Y. Leung, Multifractal temporally weighted detrended fluctuation analysis and its application in the analysis of scaling behavior in temperature series, J. Stat. Mech.: Theory Exp. (2010) P06021.
[30] http://www.mathworks.com/matlabcentral/fileexchange/27078-zig-zag-scan/content/zigzag.m