Chromatic correlation features for texture recognition


Pattern Recognition Letters 19 (1998) 643–650

George Paschos

Department of Computer Sciences, Florida Memorial College, Miami, FL 33054, USA

Received 17 December 1996; revised 20 December 1997

Abstract

The importance of chromatic correlations in representing chromato-structural images is investigated in this paper. A vision system consisting of two information channels, i.e., luminance and chrominance, is formed for the description of an image in both the xyY and HIS color spaces. Features are extracted from the correlation direction histograms. Chromatic correlations show particularly robust behavior in statistical redundancy tests. Classification results on a set of 25 color textures in both the xyY and HIS color spaces demonstrate the discriminatory power of such features. © 1998 Elsevier Science B.V. All rights reserved.

Keywords: Texture analysis; Color spaces; Classification; Autocorrelation

1. Introduction

Color and texture are two of the most important attributes used in image analysis and pattern recognition. Although a variety of methods for texture analysis have been presented in recent years (for overviews, see Haralick, 1979; Van Gool et al., 1983; Reed and Du Buf, 1993), most of the work has focused on gray-level representation, while considerably less work has addressed how to combine chromatic (color) with structural (texture) information (Caelli and Reye, 1993; Scharcanski et al., 1994; Kondepudy and Healey, 1993). One particularly important aspect of the combined problem is

Corresponding author. E-mail: [email protected]. Electronic Annexes available. See http://www.elsevier.nl/locate/patrec.

how chromatic information is involved in the formation and description of a textural pattern. It has already been demonstrated that 1st-order image properties (e.g., edge detection) can be successfully determined using color information (e.g., Nevatia, 1977). Texture is generally described as a 2nd- (possibly 3rd-) order property of scenes or surfaces, measured over image intensities (luminance). A natural extension of this description would include the spectral part of an image, i.e., color, thus defining color textures. The question then arises as to whether such 2nd/3rd-order relationships exist over the chromatic content of an image and whether they could be useful in recognizing color textures. This work investigates the importance of 2nd- and 3rd-order chromatic information in recognizing color textures. A color texture is defined as a visual pattern characterized by its chromatic and/or structural variation. For instance, two patterns with

0167-8655/98/$19.00 © 1998 Elsevier Science B.V. All rights reserved. PII: S0167-8655(98)00038-5


similar structure but different colors are two different color texture patterns.

The proposed methodology uses the autocorrelation function of an image as its basic processing mechanism. A two-channel color vision system is formed by separating the pure intensity from chromatic data, thus explicitly identifying the two input sources, luminance and chrominance. The xyY color space has been selected for that purpose because it provides luminance directly through Y; the remaining two coordinates (x, y) are combined to form chrominance. The proposed methodology has also been implemented in the HIS space, and analogous results have been obtained.

Given the two visual channels (luminance, chrominance), feature selection proceeds in three stages: (a) the autocorrelation of each channel as well as their cross-correlations are found and the corresponding direction histograms are created; (b) the most prominent histogram peaks, along with additional 3rd-order features from the correlation matrices, are extracted, thus forming a set of candidate features; and (c) the statistical redundancy of each candidate feature is computed over a test image set, and the most robust features (lowest redundancy) are selected for classification.

As shown by experimental results on a color texture set of 25 images under noisy conditions, the chromatic autocorrelation as well as cross-correlation features exhibit a particularly high degree of robustness, and they are the majority among the features selected for classification at the final stage. This is tested further with a neural-network classifier, which is applied in a variety of configurations. A success rate near or over 90% is achieved. It should be noted that within this framework it is not yet clear which of the selected features discriminate each aspect (structural or color) of a given pattern. The aim of this work is to show that features such as the proposed ones are in general important. More specific analysis will be needed to identify which features are more relevant to the structural or color aspect.

The rest of the paper is organized as follows. Section 2 describes the xyY and HIS color spaces. Section 3 presents the algorithms for feature extraction/selection. In Section 4 we present the classification results, and we conclude with observations in Section 5.

2. Color spaces: xyY and HIS

The xyY color space is a derivative of the CIE XYZ space and has the advantage of separating luminance (Y) from chrominance (x, y). Given that an image is initially represented in the RGB space, the transformation to xyY is performed as follows (Pratt, 1991):

Y = 0.299 R + 0.587 G + 0.114 B,
x = X / (X + Y + Z),
y = Y / (X + Y + Z).    (1)
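As a concrete illustration, the transformation of Eq. (1) can be sketched as below. The middle (Y) row of the matrix is exactly Eq. (1); the X and Z rows are the standard NTSC-primary RGB-to-XYZ coefficients, an assumption on our part since the text spells out only Y, and the function name is ours.

```python
import numpy as np

def rgb_to_xyY(rgb):
    """Convert an RGB image (floats in [0, 1], shape H x W x 3) to xyY."""
    # RGB -> XYZ; the middle (Y) row matches Eq. (1). The X and Z rows are
    # the standard NTSC-primary coefficients (assumed; Eq. (1) gives only Y).
    M = np.array([[0.607, 0.174, 0.200],
                  [0.299, 0.587, 0.114],
                  [0.000, 0.066, 1.116]])
    XYZ = rgb @ M.T
    s = XYZ.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                      # avoid dividing by zero on black pixels
    x = XYZ[..., 0:1] / s                # chromaticity x = X / (X + Y + Z)
    y = XYZ[..., 1:2] / s                # chromaticity y = Y / (X + Y + Z)
    Y = XYZ[..., 1:2]                    # luminance Y
    return np.concatenate([x, y, Y], axis=-1)
```

Luminance Y then feeds the luminance channel directly, while (x, y) are combined into the chrominance channel as described next.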

Since the two chromaticity coordinates (x, y) are normalized quantities, they can be combined into a single chrominance component by quantizing the continuous [0, 1] interval. For instance, in a 16-level quantization, the continuous-valued pair (x, y) = (0.23, 0.55) is mapped to the discrete-valued pair (x, y) = (2, 5), which is then converted to a single-valued discrete quantity in [0..255]: 2 + 16 · 5 = 82. Such a conversion loses some information; the advantage, however, is that it compresses the chromatic information into one number. Furthermore, the loss is practically negligible, as has been shown in related work (see Paschos and Valavanis, 1996a,b).

The HIS color space is defined by the following formulas (Pratt, 1991):

H = arctan( √3 (G − B) / (2R − G − B) ),
I = (R + G + B) / 3,
S = 1 − min(R, G, B) / I.    (2)
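The (x, y) compression described above can be sketched as follows. The packing rule reproduces the text's example, 2 + 16 · 5 = 82; the uniform bin edges are our assumption, since the text does not specify the quantizer that maps (0.23, 0.55) to (2, 5).

```python
def pack_chromaticity(x, y, levels=16):
    """Combine a chromaticity pair (x, y) in [0, 1]^2 into one value in [0, 255]."""
    # Quantize each coordinate into `levels` bins (bin edges assumed uniform).
    qx = min(int(x * levels), levels - 1)
    qy = min(int(y * levels), levels - 1)
    # Pack the discrete pair into a single value: qx + levels * qy.
    return qx + levels * qy
```

With levels = 16 the result always fits in one byte, which is what allows the chrominance channel to be processed like a gray-scale image.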

In order to form a two-channel vision system in HIS, intensity and hue have been selected as the most important components (see Perez, 1995). The singularity at R = G = B can be removed by using the following modification:

H = arctan( √3 (G − 2B) / (2R − G − B) )

(H defaults to zero if R = G = B). Since arctan ∈ [0, π], the following transformations are performed to map H to an integer value in [0, ..., 255]:

H = 180 · arctan( √3 (G − 2B) / (2R − G − B) ) / π + 180,
H = H / 1.406.    (3)
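A sketch of the modified hue and the [0, 255] mapping of Eq. (3) follows; atan2 is used here so the signs of numerator and denominator are respected over the full angular range, and the final clamp is our addition to guard the upper boundary.

```python
import math

def hue_byte(r, g, b):
    """Map hue to an integer in [0, 255] following the modified H and Eq. (3)."""
    if r == g == b:
        return 0                                    # H defaults to zero on the gray axis
    num = math.sqrt(3.0) * (g - 2.0 * b)            # modified numerator (G - 2B)
    den = 2.0 * r - g - b
    h = math.degrees(math.atan2(num, den)) + 180.0  # angle in [0, 360] degrees
    return min(int(h / 1.406), 255)                 # 360 / 1.406 ~ 256, so H lands in [0, 255]
```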

The same set of algorithms is subsequently used in both xyY and HIS.

3. Direction histograms and correlation features

The autocorrelation of an image provides useful statistics about its textural characteristics (Haralick, 1979; Caelli and Reye, 1993). For an M × N matrix A(i, j), the corresponding normalized autocorrelation matrix is defined as follows:

F(m, n) = [ MN / ((M − m)(N − n)) ] · [ Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} A(i, j) A(i + m, j + n) ] / [ Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} A²(i, j) ],
0 ≤ m, n ≤ M − 1, N − 1.    (4)
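Eq. (4) can be implemented directly as follows; this is an O(M²N²) sketch for clarity, and an FFT-based correlation would be preferable for 512 × 512 images.

```python
import numpy as np

def normalized_autocorrelation(A):
    """Normalized autocorrelation F(m, n) of an M x N matrix A, per Eq. (4)."""
    A = np.asarray(A, dtype=float)
    M, N = A.shape
    energy = (A * A).sum()                        # denominator: sum of A^2(i, j)
    F = np.zeros((M, N))
    for m in range(M):
        for n in range(N):
            # Sum of A(i, j) * A(i + m, j + n) over the overlapping region.
            s = (A[:M - m, :N - n] * A[m:, n:]).sum()
            F[m, n] = (M * N) / ((M - m) * (N - n)) * s / energy
    return F
```

By construction F(0, 0) = 1, and a constant image yields F ≡ 1, which is a convenient sanity check.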

Based on this definition, the corresponding direction histogram is defined as

DH(θ) = Σ_{(m,n): arctan(n/m) = θ} F(m, n),  0 ≤ m, n ≤ M − 1, N − 1.    (5)

The direction histogram can be calculated at different levels of angle (θ) resolution, where at the highest resolution information for each possible angle in the image is included. For instance, in a 4 × 4 image there are six possible angles with respect to the horizontal axis (i.e., pixels (1,2), (1,3), (2,1), (2,3), (3,1), (3,2)) occurring once and three angles (i.e., 0°, 45°, 90°) occurring three times each. In general, there are (N − 1)(N − 2) + 3 possible angles for an N × N image. Alternatively, one may work at a reduced level of angle resolution by accumulating the autocorrelation directional information into angle ranges (e.g., every 5°), covering the [0°–90°] spectrum.

Three direction histograms are calculated from each image, namely, the luminance autocorrelation, the chrominance autocorrelation, and the luminance-chrominance cross-correlation histogram. The basic algorithm is as follows:

Algorithm 1.
1. Extract edges
2. Smooth spatially over a window
3. Calculate the auto(cross)-correlation
4. Determine the direction histogram

Edge extraction in gray-scale as well as color is performed using a simple Sobel-type operator. The importance of color edges is that they provide a reduced version of the x–y chrominance image component: color edges summarize the variation of color content in the image. Only the highly relevant edge information (gray-scale and color) need be used, and this is achieved by the compression scheme described in Section 2. Subsequently, this information is passed on to the correlation processing stage.

From each of the three histograms, a number of different features can be extracted in order to characterize the corresponding distribution. In particular, (position, height, score) triplets for each of the most prominent histogram peaks are used as initial features. The prominence scoring function proposed by Ohta (1985) has been used:

score = ((2h − u₁ − u₂) / 2h) · ((w − n_p / h) / w),    (6)

where h is the peak height, u₁ and u₂ are the heights of its left and right valleys, respectively, w is the peak width from left to right valley, and n_p is the area under the peak.

In addition, 3rd-order image features are included. These features measure the rate of change along different zones of a correlation matrix. A zone is formed by imagining a viewer at the upper-left corner of the matrix looking down at a specific viewing angle. For instance, using increments of 5° for the viewing angle results in 18 triangular zones for the upper-right half of the correlation matrix (90°).

As a further step, this initial set of features is tested for redundancy over the sample image set. The particular measure of redundancy used is the within-


to-between class variance (scatter) ratio (Fukunaga, 1990). The within variance of a given feature i is defined as follows:

s_w²(k, i) = Σ_{l=1}^{n_k} x_{kl}²(i) − (1 / n_k) ( Σ_{l=1}^{n_k} x_{kl}(i) )²,    (7)

while its between variance is given by

s_b²(i) = Σ_{k=1}^{C} x̄_k²(i) − (1 / C) ( Σ_{k=1}^{C} x̄_k(i) )²,    (8)

where x_{kl}(i) is sample l of feature i in class k, x̄_k(i) is the mean of feature i in class k, n_k is the number of samples in class k, and C is the number of classes. The discriminatory power of feature i is then determined by the following Redundancy Factor:

RF(i) = Σ_{k=1}^{C} s_w²(k, i) / s_b²(i).    (9)

RF, as defined, is the inverse of a feature's statistical robustness (stability). Thus, a low RF value indicates high robustness and vice versa. Therefore, features with the lowest RF values are included in the final feature set. The processing steps of the feature extraction/selection method are summarized as follows:

Algorithm 2.
1. Determine the luminance direction histogram (Algorithm 1)
   (a) Find the most prominent histogram peaks and include (position, height, score) as features
   (b) Calculate 3rd-order features
2. Determine the chrominance direction histogram (Algorithm 1)
   (a) Find the most prominent histogram peaks and include (position, height, score) as features
   (b) Calculate 3rd-order features
3. Determine the luminance-chrominance direction histogram (Algorithm 1)
   (a) Find the most prominent histogram peaks and include (position, height, score) as features
   (b) Calculate 3rd-order features
4. Calculate RF for each of the extracted features
5. Sort the feature set according to RF
6. Select the features with the lowest RF for classification
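The direction-histogram step shared by Algorithms 1 and 2 can be sketched at reduced angular resolution as follows; the bin size and names are ours.

```python
import numpy as np

def direction_histogram(F, bin_deg=5):
    """Accumulate a correlation matrix F into angle bins over [0, 90] degrees (Eq. (5))."""
    M, N = F.shape
    nbins = 90 // bin_deg
    hist = np.zeros(nbins + 1)                    # last bin holds theta = 90 exactly
    for m in range(M):
        for n in range(N):
            if m == 0 and n == 0:
                continue                          # the zero displacement has no direction
            theta = np.degrees(np.arctan2(n, m))  # arctan(n / m), in [0, 90]
            hist[min(int(theta // bin_deg), nbins)] += F[m, n]
    return hist
```

Peak picking and the Ohta score of Eq. (6) then operate on this histogram.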

It should be noted that, in practice, features extracted through the above process cannot be guaranteed to be uncorrelated. Therefore, Principal Component Analysis may be applied in order to minimize feature correlation, which could lead to a better characterization of the feature space. However, the information available in such features is still useful, even if not completely uncorrelated, in the sense that the relevance of such information for color texture recognition can be demonstrated. This is accomplished in the following section.
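For completeness, the redundancy ranking of Eqs. (7)–(9) can be sketched as follows; this is a minimal illustration with our own names, not the paper's implementation.

```python
import numpy as np

def redundancy_factor(samples):
    """RF of one feature, Eqs. (7)-(9); `samples` holds one 1-D array per class."""
    C = len(samples)
    means = np.array([x.mean() for x in samples])
    # Within-class variance, Eq. (7): sum of squares minus squared sum over n_k.
    sw = [(x ** 2).sum() - (x.sum() ** 2) / len(x) for x in samples]
    # Between-class variance, Eq. (8), computed over the class means.
    sb = (means ** 2).sum() - (means.sum() ** 2) / C
    # Eq. (9): a low RF marks a statistically robust (stable) feature.
    return sum(sw) / sb
```

Sorting all candidate features by this value and keeping the smallest implements steps 4–6 of Algorithm 2.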

4. Color texture classification

A set of 25 color textures is used (Fig. 1), covering a wide range of chromato-structural patterns. Each 512 × 512 image is contaminated with additive Gaussian noise at twenty different levels (from 0% to 40% in increments of 2%), thus providing twenty different images which represent class samples. The three most prominent peaks of the autocorrelation direction histograms (luminance, chrominance) and the four most prominent peaks of the cross-correlation direction histogram are identified along with their position, height, and score. Also, 18 3rd-order features are extracted from each of the three correlation matrices (see Section 3). Thus, the total number of candidate features is 84 (= 3·3 + 3·3 + 4·3 + 3·18) for each color texture. These features are then ranked according to their RF values (Eq. (9)), and the first few of them are selected to form the input to the neural network classifier. A three-layer perceptron with the back-propagation learning algorithm has been used.

Table 1 shows the redundancy factors in xyY for the luminance (L-RF), chrominance (C-RF), and luminance-chrominance (LC-RF) features (the last 18 measurements in each of the three columns correspond to the 3rd-order features, while the rest are the histogram features). The features selected are those with RF < 1. Twelve features satisfy this condition (i.e., L-RF(9), C-RF(2–3), C-RF(5–6), C-RF(9), C-RF(13–18)). In addition, classification results have also been obtained with the six most stable features. The significant observation one can make is that the


Fig. 1. The color texture images. The color version is available as an Electronic Annex (see http://www.elsevier.nl/locate/patrec).

majority of the features selected by this process, which exhibit maximum relative robustness, have been extracted from the chrominance autocorrelation histogram. In fact, only one of the luminance autocorrelation histogram features has a redundancy value of less than one. This provides strong supporting evidence for the importance of chromatic correlation information in describing color textures. On the other hand, the observation that only one luminance-based feature has a ratio under the set threshold seems to indicate that luminance features are less important than chrominance ones. However, several luminance features lie within a reasonable margin of the given RF threshold (< 1); they would have been selected had the threshold been raised.

Thus, it is not clear whether chrominance features have an advantage over luminance (or vice versa) in discriminating structure/color. A color texture is considered as a whole pattern, and no claim can be made at this point as to which aspect of a color texture (structural or chromatic) is discriminated by each of the selected features. A feature decorrelation stage (e.g., Principal Component Analysis) may be added to provide further and more specific evidence toward this end.

The classification results for different network configurations are summarized in Table 2. Shown are the number of inputs (features) used, the number of hidden-layer nodes, the gain factor of the back-propagation learning rule (the momentum term was

Table 1
Redundancy Factors in xyY

F    L-RF(f)  C-RF(f)  LC-RF(f)
1    1.49     3.72     8.07
2    1.5      0.99     18.23
3    1.43     0.41     82.23
4    2.02     4.27     9.97
5    5.18     0.9      13.92
6    2.98     0.05     99.97
7    4.33     3.26     7.27
8    4.28     2.72     15.39
9    0.54     0.06     80.33
10   –        –        7.74
11   –        –        21.28
12   –        –        64.61
13   4.34     0.91     80.78
14   4.37     0.92     25.22
15   4.2      0.94     66.58
16   3.96     0.94     81.42
17   3.84     0.96     17.76
18   3.67     0.98     67.89
19   3.61     1.01     90.44
20   3.69     1.02     2.65
21   3.29     1.08     70.36
22   2.81     1.12     69.28
23   2.99     1.09     1.58
24   3.32     1.04     72.64
25   3.7      1.04     97.97
26   4.03     1.07     1.11
27   4.47     1.04     71.48
28   4.66     1.1      52.89
29   4.91     1.08     3.48
30   4.86     1.08     84.2

Table 3
Redundancy Factors in HIS

F    L-RF(f)  C-RF(f)   LC-RF(f)
1    20.92    527.72    11.43
2    24.57    3066.31   13.04
3    19.6     2431.87   14.29
4    15.82    1101.45   15.05
5    33.56    724.19    14.91
6    36.46    888.06    13.86
7    29.81    595.01    14.26
8    57.91    1006.12   17.86
9    24.38    947.72    15.38
10   –        –         19.62
11   –        –         14.21
12   –        –         9.47
13   35.91    1455.72   8.41
14   28.6     1454.83   7.08
15   23.92    1318.63   5.95
16   20.2     1132.62   5.36
17   18.61    1235.71   5.1
18   20.04    1164.71   5.33
19   23.51    1395.17   3.04
20   25.58    1358.17   6.44
21   23.32    1163.39   13.09
22   25.36    984.47    5.28
23   22.84    895.46    3.31
24   22.5     1367.0    19.89
25   23.28    1341.53   16.07
26   24.9     680.64    10.14
27   28.42    822.58    4.93
28   31.5     814.65    10.96
29   39.45    824.24    10.59
30   46.48    734.21    8.1

set at 0.8), the error margin used for terminating the training stage, the number of trials required to achieve that error, and the percent classification error over the entire data set of 25 images. Classification performance in this context provides a test of the power of the selected features for discriminating color textures and thus serves as an additional

supporting factor for the relevance of chromatic correlations as observed in the selection stage. It can be argued that a simpler classifier could be used to demonstrate whether the selected features are powerful enough. Preliminary experimental results with simpler networks (e.g., no hidden nodes) showed that classification (network equilibrium) was more

Table 2
Classification results in xyY

Features  Hidden nodes  Gain factor  Error margin  Trials  Classification error
12        30            0.6          0.001         200     28.0
12        35            0.3          0.001         278     6.8
6         30            0.3          0.001         553     8.8
6         35            0.3          0.001         729     8.2


Table 4
Classification results in HIS

Features  Hidden nodes  Gain factor  Trials  Classification error
6         35            0.4          1000    16.4
6         35            0.4          2000    12.4
3         35            0.4          1000    36.6

difficult to achieve. A possible explanation is that the given features have not been decorrelated; thus, information overlap may exist within the selected feature set.

The same methodology has been employed in the HIS color space (Tables 3 and 4). The six features with the lowest RF values have been selected. Comparing Tables 1 and 3, one may observe that the xyY-based features exhibit greater stability. Nevertheless, the most robust features from HIS (RF < 10) once again represent chromatic correlations. This time, however, the best features come from the cross-correlation matrix (H–I). This re-emphasizes the high relevance of color correlations in the analysis of chromato-structural patterns. Table 4 shows the HIS classification results. The correct classification rates are very near the 90% margin. When the three best features were used instead, the performance achieved was not as high. Considering, however, that back-propagation usually requires several thousand trials to converge, additional training is expected to provide improved hit rates.

5. Conclusions

This work has investigated the role played by chromatic correlation information in the representation and recognition of chromato-structural patterns. Features of 2nd and 3rd order have been examined in two different color spaces, xyY and HIS. Statistical stability tests with a set of 25 color textures under noisy conditions have shown that chromatic correlations, either as autocorrelation or in cross-correlation with luminance, provide features that are statistically robust for the representation of color textures. Classification experiments have provided additional supporting evidence for the importance of such features in terms of their discriminating power. The main conclusion is that chromatic correlations provide highly robust features for color texture classification. The presented study has provided strong evidence in support of the hypothesis that color information of 2nd and/or 3rd order can be important in color texture. Further experimental analysis will be needed in order to clarify what aspect of a color texture can be discriminated by chrominance and luminance features.

Acknowledgements The author would like to thank the anonymous reviewer for the very positive comments and suggestions.

References

Caelli, T., Reye, D., 1993. On the classification of image regions by colour texture and shape. Pattern Recognition 26 (4), 461–470.
Fukunaga, K., 1990. Introduction to Statistical Pattern Recognition. Academic Press, San Diego.
Haralick, R.M., 1979. Statistical and structural approaches to texture. Proc. IEEE 67 (5), 786–804.
Kondepudy, R., Healey, G., 1993. Modeling and identifying 3-D color textures. In: Proc. Internat. Conf. on Computer Vision and Pattern Recognition, pp. 577–582.
Nevatia, R., 1977. A color edge detector and its use in scene segmentation. IEEE Trans. SMC 7 (11), 820–826.
Ohta, Y., 1985. Knowledge-based Interpretation of Outdoor Natural Scenes. Pitman, Boston.
Paschos, G., Valavanis, K.P., 1996a. Single-valued representation of color textures for fault detection. In: 4th Internat. Conf. on Automation, Robotics and Vision.
Paschos, G., Valavanis, K.P., 1996b. A color texture based visual monitoring system for automated surveillance. In: IEEE Internat. Symposium on Autonomous Underwater Vehicles.
Perez, F., 1995. Hue Segmentation, VLSI Circuits and the Mantis Shrimp. Ph.D. Thesis, California Institute of Technology.
Pratt, W.K., 1991. Digital Image Processing. Wiley, New York.
Reed, T.R., Hans Du Buf, J.M.H., 1993. A review of recent texture segmentation and feature extraction techniques. Comput. Vision Graphics Image Process.: Image Understanding 57 (3), 359–372.
Scharcanski, J., Hovis, J.H., Shen, H.C., 1994. Representing the color aspects of texture images. Pattern Recognition Letters 15, 191–197.
Van Gool, L., Dewaele, P., Oosterlinck, A., 1983. Texture analysis anno 1983. Comput. Vision Graphics Image Process. 29 (12), 336–357.