Pattern Recognition Letters 12 (1991) 497-502 North-Holland

August 1991

Image characterizations based on joint gray level-run length distributions

Belur V. Dasarathy and Edwin B. Holder
Dynetics, Inc., P.O. Drawer B, Huntsville, AL 35814-5050, USA

Received 8 August 1990
Revised 9 April 1991

Abstract

Dasarathy, B.V. and E.B. Holder, Image characterizations based on joint gray level-run length distributions, Pattern Recognition Letters 12 (1991) 497-502.

Previous run length based texture analysis studies have mostly relied upon the use of run length or gray level distributions of the number of runs for characterizing the textures of images. In this study, some new joint run length-gray level distributions are proposed which offer additional insight into the image characterization problem and serve as effective features for a texture-based classification of images. Experimental evidence is offered to demonstrate the utilitarian value of these new concepts.

Keywords. Image characterization, textural features, run length distributions, gray level distributions.

1. Introduction

Image characterization based on run lengths of gray levels has been a popular approach in many areas of image processing such as texture analysis (Galloway, 1975; Weszka, Dyer, and Rosenfeld, 1976; Conners and Harlow, 1980; Chu, Sehgal, and Greenleaf, 1990). However, comparative studies such as those reported by Weszka, Dyer, and Rosenfeld (1976), as well as Conners and Harlow (1980), have found that these run length related features have not performed as well as other features such as spatial gray level dependence and gray level difference (Conners and Harlow, 1980). This points to a clear need for more explicit gray level distribution related information within the texture descriptive features. Presumably with a view to addressing this need, Chu, Sehgal and Greenleaf (1990) proposed a pair of new features that reflect the gray level distribution in the context of the run length matrix. They showed that the new features complement the earlier feature set and, when used along with the earlier features, form an effective set of features for classification. A detailed examination of the numerical example provided by them to illustrate the usefulness of their features showed that a counterexample (wherein the old features are better than the new feature pair) is easy to conceive, as indeed admitted by the authors themselves in their paper. This is primarily because these features are defined in terms of the row or column sums of the run length matrix rather than its individual entries. Whenever such row or column sums are equal for a pair of images, the corresponding types of textural features lose their discrimination potential. Indeed, it is easy to visualize cases wherein both the row and column sums are equal for a pair of images without the individual elements of the two matrices being equal. In such an event, all the previously defined features lose their discrimination capabilities at the same time. Such a column or row sum based feature definition corresponds to using the distributions of the number of runs as a function of either run length sizes or gray level values alone, and not a joint distribution thereof. Accordingly, in this study a new set of features is proposed which in essence reflects the joint distribution of the run lengths and gray levels in the run length matrix. These features depend not just on the sums of the rows and columns of the run length matrices, but on their individual elements. Hence, they exhibit more robustness in their discrimination potential. Details of these developments, along with examples illustrating the benefits thereof, are presented in the sequel.

2. Analysis

Typical of the previously proposed image characterization parameters (features) based on run length statistics (Galloway, 1975; Weszka, Dyer, and Rosenfeld, 1976; Conners and Harlow, 1980) are Short Run Emphasis (SRE), Long Run Emphasis (LRE), Gray Level Distribution (GLD), Run Length Distribution (RLD) and Run Percentage (RPC), which can be defined as follows in equations (1) through (5):

$\mathrm{SRE} = \frac{1}{n_r} \sum_{i=1}^{M} \sum_{j=1}^{N} p(i,j)/j^2$,   (1)

$\mathrm{LRE} = \frac{1}{n_r} \sum_{i=1}^{M} \sum_{j=1}^{N} j^2\, p(i,j)$,   (2)

$\mathrm{GLD} = \frac{1}{n_r} \sum_{i=1}^{M} \Big( \sum_{j=1}^{N} p(i,j) \Big)^2$,   (3)

$\mathrm{RLD} = \frac{1}{n_r} \sum_{j=1}^{N} \Big( \sum_{i=1}^{M} p(i,j) \Big)^2$,   (4)

$\mathrm{RPC} = n_r / n_p$.   (5)

Here,

$n_p = \sum_{i=1}^{M} \sum_{j=1}^{N} j\, p(i,j)$,   (6)

$n_r = \sum_{i=1}^{M} \sum_{j=1}^{N} p(i,j)$,   (7)

where p(i,j) represents the number of runs of pixels of gray level i and run length j, M is the number of gray levels in the image, N is the number of different run lengths in the image data set, n_r is the total number of runs in the image, and n_p is the number of pixels in the image.

In a very recent study (Chu, Sehgal, and Greenleaf, 1990) two additional features characterizing the gray level distributions more explicitly, namely Low Gray-level Run Emphasis (LGRE) and High Gray-level Run Emphasis (HGRE), were defined as shown in equations (8) and (9):

$\mathrm{LGRE} = \frac{1}{n_r} \sum_{i=1}^{M} \sum_{j=1}^{N} p(i,j)/i^2$,   (8)

$\mathrm{HGRE} = \frac{1}{n_r} \sum_{i=1}^{M} \sum_{j=1}^{N} i^2\, p(i,j)$.   (9)

It is clear that Chu et al. (1990) in effect imitated the earlier run-length descriptive features in defining their new gray level based descriptors, merely substituting the gray level parameters in place of the run length parameters. It was essential to have the squared value of j in the definition of LRE (and, for symmetry, SRE also), as otherwise it would become independent of the distribution (as can be seen from equation (6)). However, this was not really necessary in the definitions of HGRE and LGRE, except to preserve the similarity between the definitions of the two feature pairs based on run lengths and gray levels. The results presented in the previous study (Chu et al., 1990), especially Figure 4 therein showing good separation of image classes in the two-dimensional space defined by LGRE and LRE, bring out the desirability of exploring the joint distributions of run lengths and gray levels directly. Accordingly, in the present study we propose four additional features, which emphasize the joint distribution properties of the run lengths and gray levels instead of the individual distributions separately, namely Short Run Low Gray-level Emphasis (SRLGE), Short Run High Gray-level Emphasis (SRHGE), Long Run High Gray-level Emphasis (LRHGE), and Long Run Low Gray-level Emphasis (LRLGE).
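To make these definitions concrete, the following sketch (our own illustration, not code from the paper) builds a gray level-run length matrix p(i, j) from horizontal runs and evaluates equations (1) through (9) with NumPy. The scan direction, helper names and the toy image are assumptions made purely for illustration.

```python
import numpy as np

def run_length_matrix(img, num_levels):
    """Count horizontal runs: p[i, j] = number of runs of gray level i+1 with length j+1."""
    rows, cols = img.shape
    p = np.zeros((num_levels, cols), dtype=int)   # a run cannot be longer than a row
    for r in range(rows):
        c = 0
        while c < cols:
            level = img[r, c]
            length = 1
            while c + length < cols and img[r, c + length] == level:
                length += 1
            p[level - 1, length - 1] += 1         # gray levels assumed to start at 1
            c += length
    return p

def classic_features(p):
    """Equations (1)-(9): SRE, LRE, GLD, RLD, RPC, LGRE and HGRE from p(i, j)."""
    M, N = p.shape
    i = np.arange(1, M + 1).reshape(-1, 1)        # gray level index
    j = np.arange(1, N + 1).reshape(1, -1)        # run length index
    n_r = p.sum()                                 # total number of runs, eq. (7)
    n_p = (j * p).sum()                           # total number of pixels, eq. (6)
    return {
        "SRE":  (p / j**2).sum() / n_r,           # eq. (1)
        "LRE":  (j**2 * p).sum() / n_r,           # eq. (2)
        "GLD":  (p.sum(axis=1) ** 2).sum() / n_r, # eq. (3)
        "RLD":  (p.sum(axis=0) ** 2).sum() / n_r, # eq. (4)
        "RPC":  n_r / n_p,                        # eq. (5)
        "LGRE": (p / i**2).sum() / n_r,           # eq. (8)
        "HGRE": (i**2 * p).sum() / n_r,           # eq. (9)
    }

if __name__ == "__main__":
    # A small 4 x 4 toy image with gray levels 1-3 (not one of the paper's test images).
    toy = np.array([[1, 1, 2, 2],
                    [3, 3, 3, 1],
                    [1, 2, 2, 2],
                    [3, 1, 1, 1]])
    p = run_length_matrix(toy, num_levels=3)
    print(p)
    print(classic_features(p))
```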

Figure 1. A set of four test images: Images A, B, C and D, each a 6 × 6 array of gray levels 1 to 3.

These four new features are defined by equations (10) through (13):

$\mathrm{SRLGE} = \frac{1}{n_r} \sum_{i=1}^{M} \sum_{j=1}^{N} p(i,j)/(i^2 j^2)$,   (10)

$\mathrm{SRHGE} = \frac{1}{n_r} \sum_{i=1}^{M} \sum_{j=1}^{N} i^2\, p(i,j)/j^2$,   (11)

$\mathrm{LRHGE} = \frac{1}{n_r} \sum_{i=1}^{M} \sum_{j=1}^{N} i^2 j^2\, p(i,j)$,   (12)

$\mathrm{LRLGE} = \frac{1}{n_r} \sum_{i=1}^{M} \sum_{j=1}^{N} j^2\, p(i,j)/i^2$.   (13)
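A minimal sketch of equations (10) through (13), assuming a run length matrix p whose rows start at gray level 1 and whose columns start at run length 1 (the same conventions as the illustrative helper shown earlier):

```python
import numpy as np

def joint_features(p):
    """Equations (10)-(13): the proposed joint run length-gray level features."""
    M, N = p.shape
    i = np.arange(1, M + 1).reshape(-1, 1)      # gray level index
    j = np.arange(1, N + 1).reshape(1, -1)      # run length index
    n_r = p.sum()                               # total number of runs
    return {
        "SRLGE": (p / (i**2 * j**2)).sum() / n_r,   # eq. (10)
        "SRHGE": (i**2 * p / j**2).sum() / n_r,     # eq. (11)
        "LRHGE": (i**2 * j**2 * p).sum() / n_r,     # eq. (12)
        "LRLGE": (j**2 * p / i**2).sum() / n_r,     # eq. (13)
    }
```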

We shall now apply these to a small numerical example to illustrate the benefit of defining the new set of features. Consider a set of four (6 × 6) test images A, B, C and D as shown in Figure 1. These are essentially independent data sets, as they have not been derived from one another through gray level substitution, as was the case in the example considered in the previous study by Chu et al. (1990). The run length matrices corresponding to these test images are shown in Figure 2. With these matrices, the entire set of features, including the new ones proposed here, has been computed and tabulated in Table 1. A careful analysis of the tabulated results shows that the three features GLD, RLD and RPC have identical values for all four test images and thus cannot distinguish any of these images from one another. The first two features, SRE and LRE, have the same value for test images A, C, and D but a different value for image B only. On the other hand, the features introduced by Chu et al. (1990), HGRE and LGRE, have the same value for images A, B, and D with a different value for image C only. Thus none of these features distinguishes among all four test images. But the new features defined in this study, SRLGE, SRHGE, LRHGE, and LRLGE, have unique values for each of the four test images.

Figure 2. Run length-gray level matrices p(i,j) of the four test images (rows: gray levels i; columns: run lengths j).

Indeed, it is difficult, even if not impossible, to construct independent test images which have the same values for all these features. As such, these new features are considered robust. In view of the fact that they reflect the joint distribution of run lengths and gray levels (in terms of the number of runs), they implicitly include all of the information contained in the previously defined features (SRE, LRE, LGRE, and HGRE). Hence, these features can be used in place of the earlier features without any loss in information content, but with the added robustness demonstrated in the test case.

Table 1
Complete set of feature values for the four test images

Test image   SRE     LRE     GLD     RLD      RPC     LGRE    HGRE    SRLGE   SRHGE   LRHGE    LRLGE
A            0.493   3.238   7.381   10.429   0.583   0.538   3.905   0.202   2.557   10.143   1.935
B            0.668   3.810   7.381   10.429   0.583   0.538   3.905   0.414   2.176   19.286   1.475
C            0.493   3.238   7.381   10.429   0.583   0.440   5.143   0.241   2.295   17.381   1.290
D            0.493   3.238   7.381   10.429   0.583   0.538   3.905   0.277   1.595   15.048   1.608
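As a quick consistency check of Table 1 (our own back-calculation, not part of the original paper): each $6 \times 6$ test image has $n_p = 36$ pixels, so $\mathrm{RPC} = n_r/n_p = 0.583$ implies $n_r = 21$ runs per image, and the common value $\mathrm{GLD} = 7.381$ corresponds to gray level row sums of $p(i,j)$ whose squares total $7.381 \times 21 \approx 155$ (for example, row sums of 9, 7 and 5).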

These features (both old and new) can be viewed as specific cases of a general run length-gray level feature (GRLGLF), as shown in equation (14):

$\mathrm{GRLGLF} = \frac{1}{n_r} \sum_{i=1}^{M} \sum_{j=1}^{N} i^{k_i} j^{k_j}\, p(i,j)$.   (14)

Here, $k_j \neq 1$ (for the summation to be not independent of j), while $k_i \neq 0$ and $k_j \neq 0$ (to take into account joint run length and gray level distributions).

The earlier features are specific cases of equation (14) with

SRE  = GRLGLF | k_i = 0,  k_j = -2,
LRE  = GRLGLF | k_i = 0,  k_j = 2,
LGRE = GRLGLF | k_i = -2, k_j = 0,
HGRE = GRLGLF | k_i = 2,  k_j = 0.

The new feature set proposed in this study comprises further specific cases of equation (14), with

SRLGE = GRLGLF | k_i = -2, k_j = -2,
SRHGE = GRLGLF | k_i = 2,  k_j = -2,
LRHGE = GRLGLF | k_i = 2,  k_j = 2,
LRLGE = GRLGLF | k_i = -2, k_j = 2.
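Equation (14) collapses all of the weighted features above into a single parameterized form. The sketch below (again Python/NumPy, with our own illustrative naming) evaluates GRLGLF for an arbitrary exponent pair and simply restates the special cases listed above as a lookup table.

```python
import numpy as np

def grlglf(p, k_i, k_j):
    """Equation (14): generalized run length-gray level feature with exponents (k_i, k_j)."""
    M, N = p.shape
    i = np.arange(1, M + 1, dtype=float).reshape(-1, 1)
    j = np.arange(1, N + 1, dtype=float).reshape(1, -1)
    return (i**k_i * j**k_j * p).sum() / p.sum()

# Each named feature is GRLGLF evaluated at a particular exponent pair.
EXPONENTS = {
    "SRE":   (0, -2),  "LRE":   (0, 2),
    "LGRE":  (-2, 0),  "HGRE":  (2, 0),
    "SRLGE": (-2, -2), "SRHGE": (2, -2),
    "LRHGE": (2, 2),   "LRLGE": (-2, 2),
}

def all_weighted_features(p):
    return {name: grlglf(p, ki, kj) for name, (ki, kj) in EXPONENTS.items()}
```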

3. Experimental results

Data from the same cell image set referred to in the earlier study (see Chu, Sehgal and Greenleaf (1990) for details regarding the cell imagery) was also employed here to facilitate relative comparisons of the different features. All eleven features, namely the five original ones (SRE, LRE, GLD, RLD, and RPC), the two proposed in the previous study by Chu, Sehgal and Greenleaf (1990) (LGRE and HGRE), and the four new features (SRLGE, SRHGE, LRLGE, and LRHGE), were extracted for the eighty image segments of the four image set and analyzed using previously developed unsupervised learning/clustering techniques (Dasarathy, 1974, 1975). The results of cluster analysis using each of these three sets of features are shown in Tables 2, 3 and 4 respectively. As can be seen therein, the newly developed four-feature set resulted in close to perfect separation of the classes under unsupervised learning and performed much better than either of the two previous sets of features. If, instead of unsupervised learning, traditional supervised training methods were to be employed, one could conceivably obtain even better results. However, this was not attempted as the sample set (twenty samples per class) was too small to divide into training and test sets. Thus the results clearly demonstrate the superiority of the newly defined feature set over the previously defined features. A test of the effectiveness of combining the four new features with the five original features was also conducted, and the results thereof are shown in Table 5. There is a slight improvement over the results obtained using only the four new features (Table 4), but it is not statistically significant enough to establish any reliable enhancement in performance. This is probably due to the close to perfect results obtained with the four new features alone, which leaves little room for improvement. Other experiments were also conducted, such as combining the four new features with the two recently proposed ones and using only some of the four new features, and in all these experiments the results showed less promise than those obtained for the four new features (Table 4).
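The clustering technique actually used in the paper (Dasarathy, 1974, 1975) is not reproduced here. As a rough stand-in, the sketch below groups the eleven-dimensional feature vectors with ordinary k-means (scikit-learn) and scores the grouping by the best one-to-one assignment of clusters to classes; it assumes the illustrative helpers from the earlier sketches, that `subimages` is a list of 2-D gray level arrays, and that `labels` holds integer class indices. It should be read only as an approximation of the experimental procedure, not as the authors' method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

FEATURE_ORDER = ["SRE", "LRE", "GLD", "RLD", "RPC",
                 "LGRE", "HGRE", "SRLGE", "SRHGE", "LRHGE", "LRLGE"]

def feature_vector(img, num_levels):
    """Stack the five classic, two gray-level and four joint features (11 values)."""
    p = run_length_matrix(img, num_levels)            # illustrative helper from the earlier sketch
    feats = {**classic_features(p), **joint_features(p)}
    return np.array([feats[name] for name in FEATURE_ORDER])

def cluster_and_score(subimages, labels, num_levels=3, n_clusters=4, seed=0):
    """Cluster the feature vectors and report a one-to-one cluster-to-class recognition rate."""
    X = np.vstack([feature_vector(im, num_levels) for im in subimages])
    assign = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    classes = np.asarray(labels)
    contingency = np.zeros((n_clusters, classes.max() + 1), dtype=int)
    for c, k in zip(assign, classes):
        contingency[c, k] += 1                        # subimages of class k that fell in cluster c
    rows, cols = linear_sum_assignment(-contingency)  # maximize the matched counts
    return contingency[rows, cols].sum() / len(classes)
```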

Table 2
Results of clustering the four image class (eighty subimages) data using the original five textural features

               Number of subimages of image class no.
Cluster no.      1             2             3              4
1            19 (95.0%)     1 (5.0%)      0 (0.0%)       6 (30.0%)
2             0 (0.0%)     11 (55.0%)     0 (0.0%)       0 (0.0%)
3             0 (0.0%)      0 (0.0%)     20 (100.0%)     3 (15.0%)
4             1 (5.0%)      8 (40.0%)     0 (0.0%)      11 (55.0%)

Average recognition rate: 76.25%

Table 3
Results of clustering the four image class (eighty subimages) data using the two recently proposed features

               Number of subimages of image class no.
Cluster no.      1             2             3              4
1            18 (90.0%)     7 (35.0%)     1 (5.0%)       0 (0.0%)
2             0 (0.0%)      0 (0.0%)      0 (0.0%)       9 (45.0%)
3             2 (10.0%)    13 (65.0%)    19 (95.0%)      0 (0.0%)
4             0 (0.0%)      0 (0.0%)      1 (5.0%)      11 (55.0%)

Average recognition rate: 60.00%

Table 4
Results of clustering the four image class (eighty subimages) data using the four new textural features

               Number of subimages of image class no.
Cluster no.      1              2             3              4
1            20 (100.0%)     0 (0.0%)      0 (0.0%)       0 (0.0%)
2             0 (0.0%)      19 (95.0%)     0 (0.0%)       0 (0.0%)
3             0 (0.0%)       1 (5.0%)     19 (95.0%)      0 (0.0%)
4             0 (0.0%)       0 (0.0%)      1 (5.0%)      20 (100.0%)

Average recognition rate: 97.50%

Table 5
Results of clustering the four image class (eighty subimages) data using both the five old and four new features

               Number of subimages of image class no.
Cluster no.      1              2             3              4
1            20 (100.0%)     0 (0.0%)      0 (0.0%)       0 (0.0%)
2             0 (0.0%)      19 (95.0%)     0 (0.0%)       0 (0.0%)
3             0 (0.0%)       1 (5.0%)     20 (100.0%)     0 (0.0%)
4             0 (0.0%)       0 (0.0%)      0 (0.0%)      20 (100.0%)

Average recognition rate: 98.75%

4. Conclusions

Both the test example and the extensive experimental results (shown in Tables 2 through 5, as well as others) clearly demonstrate the effectiveness of the new features relative to the previously defined features reported in the literature. The new features encompass the information contained in the earlier features but are more robust. Thus these new features can be used in place of the earlier ones with improved reliability of the results of classification based on these features.

Acknowledgement

The authors would like to thank Dr. A. Chu for providing the image data used in this study, which was of invaluable help in demonstrating the effectiveness of the new features relative to those presented in the earlier studies.

References

Chu, A., C.M. Sehgal and J.F. Greenleaf (1990). Use of gray value distribution of run lengths for texture analysis. Pattern Recognition Letters 11, 415-420. Erratum, ibid. 12, 65.
Conners, R.W. and C.A. Harlow (1980). A theoretical comparison of textural algorithms. IEEE Trans. Pattern Anal. Machine Intell. 2 (3), 204-220.
Dasarathy, B.V. (1974). A new clustering approach for pattern recognition in unsupervised environment. J. Indian Inst. of Science 56, 202-208.
Dasarathy, B.V. (1975). An innovative clustering technique for unsupervised learning in the context of remotely sensed earth resources data analysis. Int. J. Systems Sci. 6, 23-32.
Galloway, M.M. (1975). Texture analysis using gray level run lengths. Computer Graphics and Image Processing 4, 172-179.
Weszka, J.S., C.R. Dyer and A. Rosenfeld (1976). A comparative study of texture measures for terrain classification. IEEE Trans. Syst. Man Cybernet. 6 (4), 269-285.