Local tri-directional patterns: A new texture feature descriptor for image retrieval


Digital Signal Processing


www.elsevier.com/locate/dsp



Manisha Verma a,∗, Balasubramanian Raman b

a Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, India
b Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, Roorkee, India

Article history: Available online xxxx

Keywords: Local tri-directional pattern; Local binary pattern; Feature extraction; Texture feature


Abstract

Texture is a prominent feature of an image and is very useful in feature extraction for image retrieval applications. Statistical and structural patterns have been proposed for image retrieval and browsing. In the proposed work, a new texture feature descriptor is developed. The proposed method uses the local intensity of pixels based on three directions in the neighborhood and is named the local tri-directional pattern (LTriDP). In addition, a magnitude pattern is merged for better feature extraction. The proposed method is tested on three databases: the first two, the Brodatz texture image database and the MIT VisTex database, are texture image databases, and the third is the AT&T face database. Further, the effectiveness of the proposed method is proven by comparing it with existing algorithms for the image retrieval application. © 2016 Elsevier Inc. All rights reserved.


1. Introduction


Many techniques for image retrieval have been developed in the past few years due to the extreme growth in digital images. Large image databases and the retrieval of similar images from them have become a direct, real-world problem. Images contain many kinds of features, and texture is one of them. Texture is a powerful feature of an image that can be recognized in the form of small repeated patterns. There are many types of texture, e.g., artificial texture, original texture, rough texture, silky texture, etc. Texture is mostly contained in images of rocks, leaves, grass, wood, walls, etc.; even natural images exhibit different types of texture. Many local features have been proposed by researchers in the past few years. Local features extract information about local objects in the image or the local intensity of pixels. Local patterns consider the neighboring pixels to extract local information from the image. Most local patterns proposed so far treat all neighboring pixels uniformly; very few utilize pixel information based on direction. This work concentrates on a direction based local pattern that can provide better features than uniform local patterns. Extensive surveys of content based image retrieval have been presented in recent years [1,2].


* Corresponding author. E-mail address: [email protected] (M. Verma).

http://dx.doi.org/10.1016/j.dsp.2016.02.002 1051-2004/© 2016 Elsevier Inc. All rights reserved.

1.1. Related work

The gray level co-occurrence matrix (GLCM) was proposed for image classification by Haralick [3]. This matrix extracts features based on the co-occurrence of pixel pairs. GLCM was used for rock texture retrieval in [4]. Zhang et al. proposed a texture feature method that computes edge images using the Prewitt edge detector and extracts the co-occurrence matrix from those edge images instead of the original images [5]; feature extraction is then performed on the co-occurrence matrix using statistical features. Transform domains have also been utilized for feature extraction. Wavelet packets were used for feature extraction and applied to image classification in [6]. Gabor filters were proposed for image retrieval and browsing [7] and for texture classification [8]. A rotation invariant feature vector using Gabor filters has been proposed for content based image retrieval [9]. To overcome the computational complexity of Gabor wavelets, de Rivaz and Kingsbury proposed a new feature vector based on the complex wavelet transform and applied it to texture image retrieval [10]. A modified curvelet transform has been proposed and used for image retrieval [11]. Two novel features, the 'composite sub-band gradient vector' and 'the energy distribution pattern string', have been proposed for an efficient and accurate texture image retrieval system [12]; these features were extracted from wavelet sub-band images. A robust texture feature called the local binary pattern (LBP) [13] was proposed by Ojala, and it uses the local intensity of


each pixel for feature vector extraction. Further, uniform and non-uniform patterns were discriminated based on appearance, and the patterns were made rotation invariant [14]. LBP thresholds the difference between the center pixel and each boundary pixel to form a binary pattern. Instead of a single threshold value, an interval was used to obtain the local ternary pattern (LTP), which was further converted into two binary patterns [15]. The local binary pattern was considered a first order derivative, and second and higher order derivative patterns were proposed as the local derivative pattern (LDP) [16], originally for face image recognition. LBP variance (LBPV) was proposed for texture classification [17]. A pyramid transformation of the local binary pattern using Gaussian and wavelet based low pass filters has been proposed as the pyramid local binary pattern (PLBP) [18]. Another pyramid based algorithm using LBP and LBPV with a Gaussian low pass filter was used for smoke detection in videos [19]. Murala et al. proposed the local ternary co-occurrence pattern based on the local ternary pattern and the local derivative pattern [20]. After binary and ternary patterns, local tetra patterns (LTrP) were obtained using horizontal and vertical directional pixels [21]; the local tetra pattern was further divided into binary patterns for the feature vector histogram. In LBP, all neighboring pixels are treated the same for every center pixel, and the pattern map is created from the differences between the neighboring pixels and the center pixel. In center symmetric local binary patterns (CSLBP), a pattern is created using only interest region pixels instead of all neighborhood pixels [22]. Papakostas et al. proposed a moment based local binary pattern for image classification [23].
Feature extraction in the transform domain using Gabor filters and the local binary pattern has been performed in [24], where the proposed method is applied to synthetic and multi-textured images for segmentation. Binary patterns based on directional features have been proposed as directional local extrema patterns [25]. Yuan proposed a rotation and scale invariant local pattern based on high order directional derivatives [26]. A local edge pattern based on the Sobel edge image has been proposed in [27]: the Sobel edge image is created first, then LEPSEG is computed for image segmentation and LEPINV for image retrieval. Local extrema patterns were proposed for object tracking [28], and extended local extrema patterns with a multi-resolution property were proposed in [29] for image retrieval. Murala et al. proposed the local maximum edge binary pattern (LMEBP) [30], which considers the maximum edge from the local differences and extracts information from the eight neighborhood pixels; it was combined with the Gabor transform and evaluated on image retrieval and object tracking. Block division and primitive block based methods using the local binary pattern have been applied to image retrieval [31]; images are divided into blocks, and the comparison is conducted on the blocks. Local mesh patterns (LMeP) have been proposed for medical image retrieval; they create the pattern from the surrounding pixels of a particular center pixel [32], and the Gabor wavelet has been used for multi-scale analysis. The local mesh patterns were further improved in [33], where the first order derivative is also included, giving the local mesh peak valley edge pattern (LMePVEP).


1.2. Main contribution


Local binary patterns create a local pattern based on the center and surrounding pixels: the relationship between them is measured and a pattern is formed. In the proposed method, the center and neighboring pixel relationship is considered in a more instructive way, so that the directional information of the image can be utilized. The mutual relationships of neighboring pixels in the three most significant directions are examined. A magnitude pattern is also computed using the same three directional pixels, and the histograms of both patterns are combined into the feature vector. The proposed method is tested on two texture databases and one face image database.

The presented work is organized as follows: Section 1 introduces the problem, including motivation and related work. Section 2 explains local patterns and the proposed method. The framework of the proposed method, the algorithm and the similarity measure are demonstrated in Section 3. Experimental results are given in Section 4. Finally, the whole work is concluded in Section 5.

Fig. 1. Local binary pattern example. (a) Center and neighboring pixel notations. (b) Sample window. (c) Local binary pattern using threshold with center pixel. (d) Weights. (e) Pattern value.


2. Local patterns


2.1. Local binary patterns


Ojala et al. invented the local binary pattern for texture images. Owing to its performance and speed, the LBP operator is used in image classification [14], facial expression recognition [34], medical imaging [35], object tracking [36], etc. The LBP operator for $p$ neighborhood pixels at radius $r$ is defined as:


$$\mathrm{LBP}_{p,r} = \sum_{l=0}^{p-1} 2^{l} \times S_{1}(I_{l} - I_{c}), \qquad S_{1}(x) = \begin{cases} 1, & x \ge 0 \\ 0, & \text{else} \end{cases} \tag{1}$$

where $I_c$ and $I_l$ are the center and neighborhood pixel intensities respectively. The histogram of the LBP map is calculated using eqn. (2), where Pattern is LBP and the image size is $m \times n$. A sample window example of the LBP pattern is shown in Fig. 1.

$$\mathrm{His}(L)\big|_{\mathrm{Pattern}} = \sum_{a=1}^{m} \sum_{b=1}^{n} S_{2}(\mathrm{Pattern}(a,b), L), \quad L \in [0, 2^{p} - 1], \qquad S_{2}(i,j) = \begin{cases} 1, & i = j \\ 0, & \text{else} \end{cases} \tag{2}$$
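As a concrete illustration, eqns. (1) and (2) can be sketched in a few lines of Python. This is a minimal sketch for $p = 8$, $r = 1$ on a 2D grayscale array; the function names and the exact circular ordering of the neighbors are our own assumptions, not fixed by the paper.

```python
import numpy as np

# Neighbors I1..I8 of a pixel in circular order (p = 8, r = 1).
# The starting corner is a convention; the paper fixes one in Fig. 1(a).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_value(img, r, c):
    """LBP code of eqn. (1) for the pixel at (r, c)."""
    center = img[r, c]
    code = 0
    for l, (dr, dc) in enumerate(OFFSETS):
        if img[r + dr, c + dc] >= center:  # S1(Il - Ic)
            code += 2 ** l
    return code

def lbp_histogram(img):
    """Histogram of the LBP map over all interior pixels, eqn. (2)."""
    hist = np.zeros(256, dtype=int)
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            hist[lbp_value(img, r, c)] += 1
    return hist
```

On a constant image every comparison $I_l \ge I_c$ holds, so every interior pixel maps to code 255, which is a quick sanity check for the threshold convention.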


2.2. Local tri-directional patterns


The local tri-directional pattern is an extension of LBP. Instead of a uniform relationship with all neighboring pixels, LTriDP considers relationships based on different directions. Each center pixel has some neighboring pixels within a particular radius: the closest ring consists of 8 pixels around the center pixel, the next radius holds 16 pixels, and so on. The closest neighboring pixels are fewer in number and give the most related information, as they are nearest to the center pixel. Hence, we consider the 8-neighborhood pixels for pattern creation. Each neighborhood pixel is considered in turn and compared with the center pixel and with its two most adjacent neighborhood pixels. These two neighborhood pixels are either vertical or horizontal neighbors, as they are closest to the considered neighboring pixel. The pattern formation is demonstrated in Fig. 2 and explained mathematically as follows. Consider a center pixel $I_c$ and the 8 neighborhood pixels $I_1, I_2, \ldots, I_8$. Firstly, we calculate the difference of each neighborhood pixel with its two most adjacent pixels and with the center pixel.


$$D_{1} = I_{i} - I_{i-1}, \quad D_{2} = I_{i} - I_{i+1}, \quad D_{3} = I_{i} - I_{c}, \qquad \forall\, i = 2, 3, \ldots, 7 \tag{3}$$

$$D_{1} = I_{i} - I_{8}, \quad D_{2} = I_{i} - I_{i+1}, \quad D_{3} = I_{i} - I_{c}, \qquad \text{for } i = 1 \tag{4}$$

$$D_{1} = I_{i} - I_{i-1}, \quad D_{2} = I_{i} - I_{1}, \quad D_{3} = I_{i} - I_{c}, \qquad \text{for } i = 8 \tag{5}$$

We have three differences $D_1$, $D_2$ and $D_3$ for each neighborhood pixel. Now, a pattern number is assigned based on all three differences.

$$f(D_{1}, D_{2}, D_{3}) = \{\#(D_{k} < 0)\} \bmod 3, \qquad k = 1, 2, 3 \tag{6}$$

where $\#(D_k < 0)$ denotes the number of the differences $D_k$, $k = 1, 2, 3$, that are less than 0; it takes values from 0 to 3. To calculate each pattern value, $\#(D_k < 0)$ is taken modulo 3. For example, when all $D_k < 0$, $\#(D_k < 0)$ is 3 and $\#(D_k < 0) \bmod 3$ is 0; similarly, if no $D_k < 0$, the value of $\#(D_k < 0) \bmod 3$ is also 0. In this way, $\#(D_k < 0) \bmod 3$ takes the values 0, 1 and 2. A worked example of the pattern value calculation is given at the end of this section in Fig. 2. For each neighborhood pixel $i = 1, 2, \ldots, 8$, the pattern value $f_i(D_1, D_2, D_3)$ is calculated using eqn. (6), and the tri-directional pattern is obtained.

$$\mathrm{LTriDP}(I_{c}) = \{f_{1}, f_{2}, \ldots, f_{8}\} \tag{7}$$
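The mapping of eqns. (3)-(7) can be sketched for a single 3x3 window. This is a minimal sketch in Python; the function name and the circular neighbor ordering (wrapping at $I_1$ and $I_8$ as in eqns. (4)-(5)) are our assumptions.

```python
import numpy as np

# Neighbors I1..I8 of the 3x3 window in circular order; indices i-1 and
# (i+1) mod 8 give the two most adjacent neighbors, wrapping around as
# in eqns. (4)-(5). The starting corner follows Fig. 2(a) by assumption.
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def tridirectional_pattern(win):
    """Pattern values f_1..f_8 of eqns. (3)-(7) for a 3x3 window `win`."""
    ic = win[1, 1]
    vals = [win[1 + dr, 1 + dc] for dr, dc in NEIGHBORS]
    pattern = []
    for i in range(8):
        d1 = vals[i] - vals[i - 1]        # previous neighbor (wraps for i = 0)
        d2 = vals[i] - vals[(i + 1) % 8]  # next neighbor (wraps for i = 7)
        d3 = vals[i] - ic                 # center pixel
        neg = sum(int(d < 0) for d in (d1, d2, d3))  # #(Dk < 0)
        pattern.append(neg % 3)           # eqn. (6)
    return pattern
```

For a constant window all differences are zero, so every $f_i$ is 0, matching the observation that zero and three negative differences both map to pattern value 0.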


By this, we get a ternary pattern for each center pixel, which is converted into two binary patterns.

$$\mathrm{LTriDP}_{1}(I_{c}) = \{S_{3}(f_{1}), S_{3}(f_{2}), \ldots, S_{3}(f_{8})\}, \qquad S_{3}(x) = \begin{cases} 1, & x = 1 \\ 0, & \text{else} \end{cases} \tag{8}$$

$$\mathrm{LTriDP}_{2}(I_{c}) = \{S_{4}(f_{1}), S_{4}(f_{2}), \ldots, S_{4}(f_{8})\}, \qquad S_{4}(x) = \begin{cases} 1, & x = 2 \\ 0, & \text{else} \end{cases} \tag{9}$$

$$\mathrm{LTriDP}_{i}(I_{c})\big|_{i=1,2} = \sum_{l=0}^{7} 2^{l} \times \mathrm{LTriDP}_{i}(I_{c})(l+1) \tag{10}$$

After getting the pattern map, the histogram is calculated for both binary patterns using eqn. (2). The tri-directional pattern extracts most of the local information, but it has been shown that a magnitude pattern also helps in creating a more informative feature vector [37,21]. We have therefore also employed a magnitude pattern based on the center pixel, the neighborhood pixel and its two most adjacent pixels. The magnitude pattern is created as follows:

$$M_{1} = \sqrt{(I_{i-1} - I_{c})^{2} + (I_{i+1} - I_{c})^{2}}, \quad M_{2} = \sqrt{(I_{i-1} - I_{i})^{2} + (I_{i+1} - I_{i})^{2}}, \qquad \forall\, i = 2, 3, \ldots, 7 \tag{11}$$

$$M_{1} = \sqrt{(I_{8} - I_{c})^{2} + (I_{i+1} - I_{c})^{2}}, \quad M_{2} = \sqrt{(I_{8} - I_{i})^{2} + (I_{i+1} - I_{i})^{2}}, \qquad \text{for } i = 1 \tag{12}$$

$$M_{1} = \sqrt{(I_{i-1} - I_{c})^{2} + (I_{1} - I_{c})^{2}}, \quad M_{2} = \sqrt{(I_{i-1} - I_{i})^{2} + (I_{1} - I_{i})^{2}}, \qquad \text{for } i = 8 \tag{13}$$

The values of $M_1$ and $M_2$ are calculated for each neighborhood pixel, and according to these values a magnitude pattern value is assigned to each neighborhood pixel.

$$\mathrm{Mag}_{i}(M_{1}, M_{2}) = \begin{cases} 1, & M_{1} \ge M_{2} \\ 0, & \text{else} \end{cases} \tag{14}$$

$$\mathrm{LTriDP}_{mag}(I_{c}) = \{\mathrm{Mag}_{1}, \mathrm{Mag}_{2}, \ldots, \mathrm{Mag}_{8}\} \tag{15}$$

$$\mathrm{LTriDP}_{mag}(I_{c}) = \sum_{l=0}^{7} 2^{l} \times \mathrm{LTriDP}_{mag}(I_{c})(l+1) \tag{16}$$

Similarly, the histogram of the magnitude pattern is created by eqn. (2), and the three histograms are concatenated into a joint histogram.

$$\mathrm{Hist} = [\mathrm{His}_{\mathrm{LTriDP}_{1}}, \mathrm{His}_{\mathrm{LTriDP}_{2}}, \mathrm{His}_{\mathrm{LTriDP}_{mag}}] \tag{17}$$

An example of the pattern calculation is shown in Fig. 2 through windows (a)-(j). In window (a), the center pixel $I_c$ and the neighborhood pixels $I_1, I_2, \ldots, I_8$ are shown. The center pixel is marked in red in windows (b)-(j). In window (c), the first neighborhood pixel $I_1$ is marked in blue, and its two most adjacent pixels in yellow. First, we compare the blue pixel with the yellow pixels and the red pixel and assign a '0' or '1' for each of the three comparisons. For example, in window (c) $I_1$ is compared with $I_8$, $I_2$ and $I_c$. Since $I_1 > I_8$, $I_1 < I_2$ and $I_1 > I_c$, the pattern for $I_1$ is 101, and according to eqn. (6) its pattern value is 1. In the same way, pattern values are obtained for the other neighboring pixels in windows (d)-(j). Finally, the local tri-directional pattern for the center pixel is obtained by merging all the neighborhood pixel pattern values. For the magnitude pattern, the magnitudes of the center pixel and the neighborhood pixel are obtained and compared. In the presented example, '6' is the center pixel and $I_1$ is '8'. In window (c), the magnitude of the center pixel '6' is 5.8 and the magnitude of '8' is 7.1, both computed with respect to '1' and '9'. Since the magnitude of the center pixel is less than that of the neighborhood pixel, we assign pattern value '0' here. The magnitude pattern values for the remaining neighborhood pixels are calculated in windows (d)-(j), and the values of all neighborhood pixels are merged into one pattern, the magnitude pattern of the center pixel.
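Eqns. (11)-(14) reduce to two Euclidean norms per neighbor. A minimal sketch under our own naming; the neighbors are assumed to be supplied in circular order $I_1, \ldots, I_8$, with the indices wrapping exactly as in eqns. (12)-(13).

```python
import math

def magnitude_pattern(vals, ic):
    """Mag_1..Mag_8 of eqns. (11)-(14). `vals` holds the neighbors
    I1..I8 in circular order; `ic` is the center pixel intensity."""
    mags = []
    for i in range(8):
        prev_v = vals[i - 1]        # wraps to I8 for i = 0, as in eqn. (12)
        next_v = vals[(i + 1) % 8]  # wraps to I1 for i = 7, as in eqn. (13)
        m1 = math.hypot(prev_v - ic, next_v - ic)            # magnitude w.r.t. center
        m2 = math.hypot(prev_v - vals[i], next_v - vals[i])  # magnitude w.r.t. neighbor
        mags.append(1 if m1 >= m2 else 0)                    # eqn. (14)
    return mags
```

Replaying the worked example of Fig. 2(c), with center 6, neighbor $I_1 = 8$ and adjacent values 1 and 9, gives $M_1 \approx 5.8$ and $M_2 \approx 7.1$, so $\mathrm{Mag}_1 = 0$ as in the text.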


2.2.1. Advantage over other methods

Local patterns use the local intensity of pixels to grab information and create a pattern from it. Local binary patterns compare each neighborhood pixel with the center pixel and assign a pattern to the center pixel. In the proposed work, additional relationships among local pixels are observed: along with the center-neighborhood relationship, the mutual relationships of adjacent neighboring pixels are obtained, and local information based on three directional pixels is examined. This method gives more information than LBP and other local patterns, as it captures center-neighboring pixel information along with mutual neighboring pixel information. The nearest neighbors give most of the information; hence the pattern is calculated using the most adjacent neighboring pixels for each pattern value. Also, a magnitude pattern is introduced, which provides information regarding the intensity weight of each pixel. LTriDP and the magnitude pattern give different information, and their concatenation provides a better feature descriptor.


Fig. 2. Proposed method sample window example. (a) Center and neighboring pixel notations. (b) Sample window. (c)–(j) Local tri-directional pattern and magnitude pattern calculation. (For interpretation of the references to color in this figure, the reader is referred to the web version of this article.)


Fig. 3. Proposed method block diagram.


3. Proposed system framework

A block diagram of the presented method is shown in Fig. 3, and the algorithm is demonstrated below in two parts: part 1 explains the feature vector construction, and part 2 presents the image retrieval system.

3.1. Algorithm

Part 1: Feature vector construction
Input: Image. Output: Feature vector.
1. Upload the image and convert it into gray scale if it is a color image.
2. Compute the tri-directional patterns and construct the histogram.
3. Evaluate the magnitude patterns and make the histogram.
4. Concatenate the histograms calculated in step 2 and step 3.

Part 2: Image retrieval
Input: Query image. Output: Retrieved similar images.
1. Enter the query image.
2. Calculate the feature vector as shown in part 1.
3. Compute the similarity index of the query image feature vector with every database image feature vector.
4. Sort the similarity indices and return the images corresponding to the minimum similarity indices as results.
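Part 2 of the algorithm reduces to a sort over distances. A minimal sketch, assuming feature vectors are plain Python sequences and the distance function is supplied by the caller; `retrieve` and its parameter names are our own, not the paper's.

```python
def retrieve(query_feat, db_feats, dist, top_k=10):
    """Part 2, steps 3-4: score every database feature vector against the
    query with the supplied distance function and return the indices of
    the top_k closest database images (smallest distance first)."""
    order = sorted(range(len(db_feats)), key=lambda n: dist(db_feats[n], query_feat))
    return order[:top_k]
```

Any metric can be plugged in for `dist`; the paper uses the d1 distance of eqn. (18) in the next subsection.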


3.2. Similarity measure


The similarity measure is a major module of the content based image retrieval system. It computes the distance between feature vectors, i.e., the dissimilarity between images, using a distance metric. In the proposed work, the d1 distance has been used, as it has proved its excellence for similarity matching in local patterns [21,25,38,39]. For a query image $Q$ and the $n$th database image $\mathrm{DB}^{n}$, the following distance metric is used:

$$\mathrm{Dis}(\mathrm{DB}^{n}, Q) = \sum_{s=1}^{L} \left| \frac{F_{db}^{n}(s) - F_{q}(s)}{1 + F_{db}^{n}(s) + F_{q}(s)} \right| \tag{18}$$

where $\mathrm{Dis}()$ is the distance function, $L$ is the length of the feature vector, and $F_{db}^{n}$ and $F_{q}$ are the feature vectors of the $n$th database image and the query image respectively.
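The d1 metric of eqn. (18) translates directly to code. A minimal sketch; the function name is ours.

```python
def d1_distance(f_db, f_q):
    """d1 distance of eqn. (18) between two feature vectors of equal length."""
    assert len(f_db) == len(f_q)
    return sum(abs((a - b) / (1 + a + b)) for a, b in zip(f_db, f_q))
```

Since the feature vectors here are histogram bin counts, both entries are non-negative and the denominator $1 + F_{db}^{n}(s) + F_q(s)$ never vanishes.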


4. Experimental results and discussion


The proposed method is tested on two texture databases and one face image database for validation. The capability of the presented method for image retrieval is evaluated on the basis of precision, recall [40] and the average normalized modified retrieval rank (ANMRR) [41]. In the retrieval process, many images are returned for a given query; some are relevant to the query image and some are non-relevant results that do not match it. The images of user interest are called relevant images, and all images returned for a query are called retrieved images. Precision measures retrieval performance as the ratio of relevant to retrieved images. Precision and recall can be computed as:

$$P_{r}^{N} = \frac{|(\text{Relevant images}) \cap (\text{Retrieved images})|}{\text{Total number of images retrieved}} \tag{19}$$

$$R_{r}^{N} = \frac{|(\text{Relevant images}) \cap (\text{Retrieved images})|}{\text{Total number of relevant images in the database}} \tag{20}$$

where the relevant images are all images in the database relevant to the query image, and the retrieved images are the images returned from the database for the query. The variables $N$ and $r$ denote the number of images retrieved and the id of the database image. Precision and recall for each category are calculated as:

$$P_{avg}^{N}(s) = \frac{1}{n_1} \sum_{r=1}^{n_1} P_{r}^{N} \tag{21}$$

$$R_{avg}^{N}(s) = \frac{1}{n_1} \sum_{r=1}^{n_1} R_{r}^{N} \tag{22}$$

where $s$ stands for the category number and $n_1$ is the number of images in each category. Further, the total precision and recall for the whole database are evaluated as:

$$P_{Total}^{N} = \frac{1}{n_2} \sum_{s=1}^{n_2} P_{avg}^{N}(s) \tag{23}$$

$$R_{Total}^{N} = \frac{1}{n_2} \sum_{s=1}^{n_2} R_{avg}^{N}(s) \tag{24}$$

where $n_2$ is the total number of categories in the database. Total recall is also called the average retrieval rate (ARR).

For a given query image $Q$, the number of relevant images in the database (ground-truth values) is $N_g(Q)$. The rank of ground-truth image $i$, $\mathrm{Rank}(i)$, is its position in the retrieved images. Moreover, a variable $K(Q) > N_g(Q)$ is defined as a rank limit: a ground-truth image with rank greater than $K(Q)$ is counted as a miss. $\mathrm{Rank}(i)$ and $K(Q)$ are defined as follows [41]:

$$\mathrm{Rank}(i) = \begin{cases} \mathrm{Rank}(i), & \text{if } \mathrm{Rank}(i) \le K(Q) \\ 1.25 \times K(Q), & \text{if } \mathrm{Rank}(i) > K(Q) \end{cases} \tag{25}$$

$$K(Q) = \min(4 \times N_g(Q),\ 2 \times \max(N_g(Q), \forall Q)) \tag{26}$$

The average rank (AVR) can be defined as:

$$\mathrm{AVR}(Q) = \frac{1}{N_g(Q)} \sum_{i=1}^{N_g(Q)} \mathrm{Rank}(i) \tag{27}$$

The modified retrieval rank (MRR) and normalized modified retrieval rank (NMRR) are defined as:

$$\mathrm{MRR}(Q) = \mathrm{AVR}(Q) - 0.5 \times [1 + N_g(Q)] \tag{28}$$

$$\mathrm{NMRR}(Q) = \frac{\mathrm{MRR}(Q)}{1.25 \times K(Q) - 0.5 \times (1 + N_g(Q))} \tag{29}$$

The average normalized modified retrieval rank (ANMRR) is the average of NMRR over the queries:

$$\mathrm{ANMRR} = \frac{1}{N_Q} \sum_{Q=1}^{N_Q} \mathrm{NMRR}(Q) \tag{30}$$

where $N_Q$ is the number of query images. The ANMRR value lies between 0 and 1, and a value closer to 0 indicates that more ground-truth results were found in the retrieval. Further explanation of ANMRR can be found in [41].

Every image of the database is treated as a query image, and for each image precision, recall and NMRR are calculated. For every category, precision and recall are obtained using eqns. (21)-(24), and over all images ANMRR is calculated using eqn. (30). The presented method is compared with the previous methods listed in Table 1. The proposed method, the combination of LTriDP and LTriDPmag, is abbreviated as PM in the following sections and figures.

Table 1
Previous methods abbreviations.

CS_LBP: Center symmetric local binary pattern [22]
LEPINV: Local edge pattern for image retrieval [27]
LEPSEG: Local edge pattern for segmentation [27]
LBP: Local binary pattern [13]
Nanni et al.: Local binary pattern variants as texture descriptors [35]
LDGP: Local directional gradient pattern [42]
PVEP: Peak valley edge patterns [43]
LMEBP: Local maximum edge binary pattern [30]
DLEP: Directional local extrema pattern [25]
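The evaluation measures can be sketched as follows. This is our own minimal implementation of eqns. (19)-(20) and (25)-(29), assuming 1-based ranks and caller-supplied $N_g(Q)$ and $K(Q)$; the names are not from the paper.

```python
def precision_recall(retrieved, relevant, n_retrieved):
    """Precision and recall of eqns. (19)-(20) after n_retrieved results."""
    hits = len(set(retrieved[:n_retrieved]) & set(relevant))
    return hits / n_retrieved, hits / len(relevant)

def nmrr(ranks, n_g, k):
    """NMRR of eqns. (25)-(29). `ranks` are the 1-based retrieval ranks of
    the ground-truth images; n_g = N_g(Q) and k = K(Q)."""
    penalized = [r if r <= k else 1.25 * k for r in ranks]  # eqn. (25)
    avr = sum(penalized) / n_g                              # eqn. (27)
    mrr = avr - 0.5 * (1 + n_g)                             # eqn. (28)
    return mrr / (1.25 * k - 0.5 * (1 + n_g))               # eqn. (29)
```

ANMRR of eqn. (30) is then simply the mean of `nmrr` over all queries; a perfect retrieval (ranks $1, \ldots, N_g$) yields NMRR = 0.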

4.1. Database 1


In the first experiment, the MIT VisTex database [44] of gray scale images is used. It contains 40 texture images, each of size 512 × 512. For the retrieval experiment, each image is divided into 16 sub-images of size 128 × 128; consequently, 40 categories are obtained and each category holds 16 images. Sample images from the database are shown in Fig. 4. Precision and recall for the presented method and the other methods are calculated and demonstrated through graphs. In Fig. 5, plots of precision and recall are shown: Fig. 5(a) presents the variation in precision with the number of images retrieved, and Fig. 5(b) shows recall against the number of images retrieved. Both graphs clearly show that the presented method is better than the others in terms of precision and recall. The average retrieval rate of the proposed method is higher than that of CS_LBP, LEPINV, LEPSEG, LBP, Nanni et al., LDGP, PVEP, LMEBP and DLEP by 19.13%, 22.91%, 10.43%, 7.72%, 8.47%, 10.90%, 12.05%, 0.97% and 10.13% respectively. ANMRR for every method is calculated and presented in Table 3.


Fig. 4. MIT VisTex database sample images.


Fig. 5. Precision and recall with number of images retrieved for database 1.


The ANMRR for the proposed method is closer to zero than for the other methods, which indicates that most ground-truth results are retrieved by the proposed method. In addition, we compare the two proposed patterns with each other: Fig. 6 shows precision and recall of LTriDP and LTriDPmag against the number of images retrieved, and LTriDP is clearly more precise than LTriDPmag.

4.2. Database 2


In the second experiment, the Brodatz textures [45] have been used for testing. The Brodatz database contains 112 texture images, each of size 640 × 640. Every image is divided into sub-images of size 128 × 128, giving 25 sub-images per original image. Hence the database holds 112 × 25 images in 112 categories of 25 images each. The results of the proposed algorithm are presented as precision and recall graphs. In this experiment, 25 images are initially retrieved for each query, and the number is then increased in steps of 5, up to 70 retrieved images. Plots of precision and recall against the number of retrieved images are shown in Fig. 7. The proposed method outperforms the other methods, as is clearly visible in the graphs. In terms of average retrieval rate, the proposed method improves over CS_LBP, LEPINV, LEPSEG,


Fig. 6. Precision and recall of proposed methods for database 1.


Fig. 7. Precision and recall with number of images retrieved for database 2.


LBP, Nanni et al., LDGP, PVEP, LMEBP and DLEP by 39.66%, 34.52%, 18.05%, 9.12%, 14.34%, 19.06%, 11.79%, 2.99% and 7.09% respectively. Moreover, the ANMRR results shown in Table 3 imply that more relevant images are retrieved by the proposed method than by the other methods. Fig. 8 shows the plots for LTriDP and LTriDPmag: on the basis of precision and recall, LTriDP is more effective than LTriDPmag. Although LTriDP alone performs well, LTriDPmag enhances the image information, as shown in Fig. 7.


4.3. Database 3


In the third experiment, a face database has been taken for face image retrieval. The AT&T database of faces [46] contains 400 images of 40 subjects. Each subject has 10 images with different facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). The images were taken at different times and, for some subjects, under varying lighting conditions. Each image in this database is of size 92 × 112. One image from each subject is presented in Fig. 9.

Results for database 3 are presented in Figs. 10 and 11. In the image retrieval experiment, 1, 2, ..., 10 images are retrieved, and precision and recall are calculated at each step and shown in Fig. 10. The performance measures clearly depict that the proposed method outperforms the others. The average retrieval rate of the proposed method improves over CS_LBP, LEPINV, LEPSEG, LBP, Nanni et al., LDGP, PVEP, LMEBP and DLEP by 24.24%, 106.91%, 48.40%, 30.79%, 4.69%, 20.97%, 39.49%, 18.80% and 7.30% respectively. The average normalized modified retrieval rank for this database is shown in Table 3; for the proposed method it is closer to zero than for the other methods, hence the proposed method is more promising in terms of accurate retrieval. Further, a comparison of LTriDP and LTriDPmag is shown in Fig. 11. A demonstration of the proposed method is given in Fig. 12, where similar face images are retrieved for five query images: the first image in each row is the query image and the next three are images retrieved by the proposed method. The feature vector length of each method is shown in Table 4. The feature vector length of the proposed method is comparatively very

112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132

JID:YDSPR

8

AID:1910 /FLA

[m5G; v1.172; Prn:4/02/2016; 14:35] P.8 (1-11)

M. Verma, B. Raman / Digital Signal Processing ••• (••••) •••–•••

1

67

2

68

3

69

4

70

5

71

6

72

7

73

8

74

9

75

10

76

11

77

12

78

13

79

14

80

15

81

16

82

17

83

18

84

19

85

20 21

86

Fig. 8. Precision and recall of the proposed methods for database 2.

87

22

88

23

89

24

90

25

91

26

92

27

93

28

94

29

95

30

96

31

97

32

98

33

99

34

100

35

101

36

102

37

103

38

104

39

105

40

106

41

107

42

108

43

109

44

110

45

111

46

112

47

113

48

114

49

115

50

116

51

117

52

118

53

119

54

120

55

121

56

122

57

123

58

124

59 60 61 62

125

Fig. 9. AT&T database sample images.

126 127 128

less from PVEP, LMEBP and DLEP, and performance is better. Fea-

Table 2 explains the average retrieval rate (ARR) of two texture

64

ture vector length of proposed method is more than CS_LBP, LEP-

databases and one face image database for all compared methods

130

65

INV, LEPSEG, Nanni et al. method, LDGP and LBP, but performance

with the proposed method. Final ARR results clearly verify that the

131

66

is considerably better as shown in different database results.

proposed algorithm outperforms others.

132

63

129
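The ANMRR values reported in Table 3 follow the normalized modified retrieval rank of the MPEG-7 evaluation framework [40,41]. A minimal sketch of that computation (the variable names are ours; the penalty threshold k is conventionally min(4*NG, 2*GTM), with GTM the largest ground-truth set over all queries):

```python
def nmrr(ranks, ng, k):
    """Normalized modified retrieval rank for one query (MPEG-7 style).

    ranks: 1-based ranks at which the ng ground-truth images were found
    ng:    number of ground-truth images for this query
    k:     penalty threshold, typically min(4*ng, 2*max_ng_over_queries)
    """
    # Ranks beyond k are replaced by the fixed penalty 1.25 * k.
    counted = [r if r <= k else 1.25 * k for r in ranks]
    avr = sum(counted) / ng                      # average rank
    mrr = avr - 0.5 - ng / 2.0                   # modified retrieval rank
    return mrr / (1.25 * k - 0.5 - ng / 2.0)     # normalize to [0, 1]

def anmrr(per_query):
    """Average NMRR over all queries; per_query holds (ranks, ng, k) tuples."""
    return sum(nmrr(*q) for q in per_query) / len(per_query)
```

Perfect retrieval (all ground-truth images at the top ranks) yields NMRR 0 and total failure yields 1, which is why values closer to zero in Table 3 indicate more accurate retrieval.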

Fig. 10. Precision and recall with number of images retrieved for database 3.

Fig. 11. Precision and recall of proposed methods for database 3.

Table 2
Average retrieval rate (%) of all databases.

Method         Brodatz database   MIT VisTex database   AT&T face database
CS_LBP         54.74              74.39                 44.35
LEPINV         56.83              72.10                 26.63
LEPSEG         64.76              80.25                 37.13
LBP            70.06              82.27                 42.13
Nanni et al.   66.86              81.70                 52.63
LDGP           64.21              79.91                 45.55
PVEP           68.39              79.09                 39.50
LMEBP          74.23              87.77                 46.38
DLEP           71.39              80.47                 51.35
PM             76.45              88.62                 55.10

Table 3
Average normalized modified retrieval rank of different methods and databases.

Method         Brodatz database   MIT VisTex database   AT&T face database
CS_LBP         0.3664             0.1696                0.4607
LEPINV         0.3437             0.1876                0.6638
LEPSEG         0.2704             0.1198                0.5398
LBP            0.2278             0.0817                0.4833
Nanni et al.   0.2626             0.1132                0.3799
LDGP           0.2807             0.1301                0.4551
PVEP           0.2465             0.1074                0.5179
LMEBP          0.1944             0.0738                0.4422
DLEP           0.2117             0.1278                0.3918
PM             0.1742             0.0679                0.3401
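The improvement percentages quoted in the text are the relative ARR gains of the proposed method (PM) over each competitor, computed from the Table 2 values. For instance, on the AT&T face database, PM's 55.10 against CS_LBP's 44.35 gives roughly 24.24%. A one-line sketch:

```python
def improvement(arr_pm, arr_other):
    """Relative ARR gain of the proposed method over a competitor, in percent."""
    return 100.0 * (arr_pm - arr_other) / arr_other

# AT&T face database, values from Table 2:
gain = improvement(55.10, 44.35)   # about 24.24, matching the text
```

The same formula on the Brodatz column (e.g., PM 76.45 vs. DLEP 71.39) reproduces the 7.09% figure quoted earlier.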

5. Conclusion


A novel method, named the local tri-directional pattern (LTriDP), has been proposed in this paper. Each pixel in the neighborhood is compared with its most adjacent pixels and with the center pixel to extract local information. Most previous local patterns consider only the center pixel for pattern formation, whereas the proposed method extracts information related to each pixel of the neighborhood and therefore gives more discriminative features. A magnitude pattern, based on the same pixels used in LTriDP, is also incorporated. The proposed method is compared with CS_LBP, LEPINV, LEPSEG, the Nanni et al. method [35], LDGP, PVEP, LBP, LMEBP and DLEP with reference to precision and recall. All methods are tested on the MIT VisTex texture database, the Brodatz texture database and the AT&T face image database. Precision and recall show that the proposed system is more proficient and appropriate than the others in terms of accuracy. Also, the feature vector length of the proposed algorithm is more acceptable than those of PVEP, LMEBP and DLEP.

Fig. 12. AT&T database query example.

Table 4
Feature vector length of different methods.

Method         Feature vector length
CS_LBP         16
LEPINV         72
LEPSEG         512
LBP            256
Nanni et al.   93
LDGP           64
PVEP           4096
LMEBP          4096
DLEP           2048
PM             768

Acknowledgments

This work was supported by the Ministry of Human Resource Development (MHRD), India, under grant MHRD-02-23200-304. We sincerely thank Dr. Subrahmanyam Murala (Assistant Professor, IIT Ropar) for providing the results of the other algorithms on the MIT VisTex database.

References

[1] A.W. Smeulders, M. Worring, S. Santini, A. Gupta, R. Jain, Content-based image retrieval at the end of the early years, IEEE Trans. Pattern Anal. Mach. Intell. 22 (12) (2000) 1349–1380.
[2] H. Müller, N. Michoux, D. Bandon, A. Geissbuhler, A review of content-based image retrieval systems in medical applications—clinical benefits and future directions, Int. J. Med. Inform. 73 (1) (2004) 1–23.
[3] R.M. Haralick, K. Shanmugam, I.H. Dinstein, Textural features for image classification, IEEE Trans. Syst. Man Cybern. 6 (1973) 610–621.
[4] M. Partio, B. Cramariuc, M. Gabbouj, A. Visa, Rock texture retrieval using gray level co-occurrence matrix, in: Proc. of 5th Nordic Signal Processing Symposium, vol. 75, Citeseer, 2002.
[5] J. Zhang, G.l. Li, S.W. He, Texture-based image retrieval by edge detection matching GLCM, in: 10th IEEE International Conference on High Performance Computing and Communications, 2008, pp. 782–786.
[6] A. Laine, J. Fan, Texture classification by wavelet packet signatures, IEEE Trans. Pattern Anal. Mach. Intell. 15 (11) (1993) 1186–1191.
[7] B.S. Manjunath, W.Y. Ma, Texture features for browsing and retrieval of image data, IEEE Trans. Pattern Anal. Mach. Intell. 18 (8) (1996) 837–842.
[8] A. Ahmadian, A. Mostafa, An efficient texture classification algorithm using Gabor wavelet, in: Proceedings of the 25th Annual International Conference of the IEEE on Engineering in Medicine and Biology Society, vol. 1, 2003, pp. 930–933.
[9] D. Zhang, A. Wong, M. Indrawan, G. Lu, Content-based image retrieval using Gabor texture features, in: IEEE Pacific-Rim Conference on Multimedia, University of Sydney, Australia, 2000, pp. 392–395.
[10] P.D. Rivaz, N. Kingsbury, Complex wavelet features for fast texture image retrieval, in: International Conference on Image Processing, vol. 1, 1999, pp. 109–113.
[11] A.B. Gonde, R.P. Maheshwari, R. Balasubramanian, Modified curvelet transform with vocabulary tree for content based image retrieval, Digit. Signal Process. 23 (1) (2013) 142–150.
[12] P.W. Huang, S.K. Dai, Image retrieval by texture similarity, Pattern Recognit. 36 (3) (2003) 665–679.
[13] T. Ojala, M. Pietikäinen, D. Harwood, A comparative study of texture measures with classification based on featured distributions, Pattern Recognit. 29 (1) (1996) 51–59.
[14] T. Ojala, M. Pietikäinen, T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal. Mach. Intell. 24 (7) (2002) 971–987.
[15] X. Tan, B. Triggs, Enhanced local texture feature sets for face recognition under difficult lighting conditions, in: Analysis and Modeling of Faces and Gestures, 2007, pp. 168–182.
[16] B. Zhang, Y. Gao, S. Zhao, J. Liu, Local derivative pattern versus local binary pattern: face recognition with high-order local pattern descriptor, IEEE Trans. Image Process. 19 (2) (2010) 533–544.
[17] Z. Guo, L. Zhang, D. Zhang, Rotation invariant texture classification using LBP variance (LBPV) with global matching, Pattern Recognit. 43 (3) (2010) 706–719.
[18] X. Qian, X.S. Hua, P. Chen, L. Ke, PLBP: an effective local binary patterns texture descriptor with pyramid representation, Pattern Recognit. 44 (10) (2011) 2502–2515.
[19] F. Yuan, Video-based smoke detection with histogram sequence of LBP and LBPV pyramids, Fire Saf. J. 46 (3) (2011) 132–139.
[20] S. Murala, Q.J. Wu, Local ternary co-occurrence patterns: a new feature descriptor for MRI and CT image retrieval, Neurocomputing 119 (6) (2013) 399–412.
[21] S. Murala, R.P. Maheshwari, R. Balasubramanian, Local tetra patterns: a new feature descriptor for content-based image retrieval, IEEE Trans. Image Process. 21 (5) (2012) 2874–2886.
[22] M. Heikkilä, M. Pietikäinen, C. Schmid, Description of interest regions with center-symmetric local binary patterns, in: Computer Vision, Graphics and Image Processing, 2006, pp. 58–69.
[23] G.A. Papakostas, D.E. Koulouriotis, E.G. Karakasis, V.D. Tourassis, Moment-based local binary patterns: a novel descriptor for invariant pattern recognition applications, Neurocomputing 99 (2013) 358–371.
[24] L. Tlig, M. Sayadi, F. Fnaiech, A new fuzzy segmentation approach based on S-FCM type 2 using LBP-GCO features, Signal Process. Image Commun. 27 (6) (2012) 694–708.
[25] S. Murala, R.P. Maheshwari, R. Balasubramanian, Directional local extrema patterns: a new descriptor for content based image retrieval, Int. J. Multimed. Inf. Retr. 1 (3) (2012) 191–203.
[26] F. Yuan, Rotation and scale invariant local binary pattern based on high order directional derivatives for texture classification, Digit. Signal Process. 26 (2014) 142–152.
[27] C.H. Yao, S.Y. Chen, Retrieval of translated, rotated and scaled color textures, Pattern Recognit. 36 (4) (2003) 913–929.
[28] S. Murala, Q.J. Wu, R. Balasubramanian, R.P. Maheshwari, Joint histogram between color and local extrema patterns for object tracking, in: IS&T/SPIE Electronic Imaging, International Society for Optics and Photonics, 2013, 86630T.


[29] M. Verma, R. Balasubramanian, S. Murala, Multi-resolution local extrema patterns using discrete wavelet transform, in: Seventh International Conference on Contemporary Computing, IC3, IEEE, 2014, pp. 577–582.
[30] S. Murala, R.P. Maheshwari, R. Balasubramanian, Local maximum edge binary patterns: a new descriptor for image retrieval and object tracking, Signal Process. 92 (6) (2012) 1467–1479.
[31] V. Takala, T. Ahonen, M. Pietikäinen, Block-based methods for image retrieval using local binary patterns, in: Image Analysis, 2005, pp. 882–891.
[32] S. Murala, Q.J. Wu, Local mesh patterns versus local binary patterns: biomedical image indexing and retrieval, IEEE J. Biomed. Health Inform. 18 (3) (2014) 929–938.
[33] S. Murala, Q.J. Wu, MRI and CT image indexing and retrieval using local mesh peak valley edge patterns, Signal Process. Image Commun. 29 (3) (2014) 400–409.
[34] S. Moore, R. Bowden, Local binary patterns for multi-view facial expression recognition, Comput. Vis. Image Underst. 115 (4) (2011) 541–558.
[35] L. Nanni, A. Lumini, S. Brahnam, Local binary patterns variants as texture descriptors for medical image analysis, Artif. Intell. Med. 49 (2) (2010) 117–125.
[36] J. Ning, L. Zhang, D. Zhang, C. Wu, Robust object tracking using joint color-texture histogram, Int. J. Pattern Recognit. Artif. Intell. 23 (07) (2009) 1245–1263.
[37] Z. Guo, D. Zhang, A completed modeling of local binary pattern operator for texture classification, IEEE Trans. Image Process. 19 (6) (2010) 1657–1663.
[38] M. Verma, B. Raman, S. Murala, Local extrema co-occurrence pattern for color and texture image retrieval, Neurocomputing 165 (2015) 255–269.
[39] M. Verma, B. Raman, Center symmetric local binary co-occurrence pattern for texture, face and bio-medical image retrieval, J. Vis. Commun. Image Represent. 32 (2015) 224–236.
[40] H. Müller, W. Müller, D.M. Squire, S. Marchand-Maillet, T. Pun, Performance evaluation in content-based image retrieval: overview and proposals, Pattern Recognit. Lett. 22 (5) (2001) 593–601.
[41] B.S. Manjunath, P. Salembier, T. Sikora, Introduction to MPEG-7: Multimedia Content Description Interface, vol. 1, John Wiley & Sons, 2002.
[42] S. Chakraborty, S.K. Singh, P. Chakraborty, Local directional gradient pattern: a local descriptor for face recognition, Multimed. Tools Appl. (2015) 1–6.
[43] S. Murala, Q.M. Wu, Peak valley edge patterns: a new descriptor for biomedical image indexing and retrieval, in: IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, 2013, pp. 444–449.


[44] MIT Vision and Modeling Group, Cambridge, Vision texture, available online: http://vismod.media.mit.edu/pub/.
[45] A. Safia, D. He, New Brodatz-based image databases for grayscale color and multiband texture analysis, ISRN Machine Vision, available online: http://multibandtexture.recherche.usherbrooke.ca/original_brodatz.html, 2013.
[46] AT&T Laboratories Cambridge, The AT&T database of faces, available online: http://www.uk.research.att.com/facedatabase.html, 2002.


Manisha Verma is a Ph.D. research scholar in the Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, India. She received the B.Sc. degree from Maharani's College, Rajasthan University, Jaipur, India, in 2009, and the M.Sc. degree in Industrial Mathematics and Informatics from the Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, India, in 2012. Her major areas of interest are content-based image retrieval, face and palmprint recognition, object tracking, shot boundary detection and video retrieval.


Balasubramanian Raman has been an associate professor in the Department of Computer Science and Engineering at Indian Institute of Technology Roorkee since 2013. He obtained the M.Sc. degree in mathematics from Madras Christian College (University of Madras) in 1996 and the Ph.D. degree from Indian Institute of Technology Madras in 2001. He was a postdoctoral fellow at the University of Missouri, Columbia, USA, in 2001–2002 and a postdoctoral associate at Rutgers, The State University of New Jersey, USA, in 2002–2003. He joined the Department of Mathematics at Indian Institute of Technology Roorkee as a lecturer in 2004 and became an assistant professor in 2006 and an associate professor in 2012. He was a visiting professor and a member of the Computer Vision and Sensing Systems Laboratory in the Department of Electrical and Computer Engineering at the University of Windsor, Canada, from May to August 2009. His areas of research include vision geometry, digital watermarking using mathematical transformations, image fusion, biometrics, secure image transmission over wireless channels, content-based image retrieval and hyperspectral imaging.

