Pattern Recognition, Vol. 25, No. 2, pp. 173-188, 1992
0031-3203/92 $5.00 + .00 Pergamon Press plc
© 1992 Pattern Recognition Society
Printed in Great Britain
TEXTURE CLASSIFICATION AND SEGMENTATION USING MULTIRESOLUTION SIMULTANEOUS AUTOREGRESSIVE MODELS*

JIANCHANG MAO and ANIL K. JAIN†

Department of Computer Science, Michigan State University, East Lansing, MI 48824, U.S.A.
(Received 9 January 1991; in revised form 10 June 1991; received for publication 20 June 1991)

Abstract—We present a multiresolution simultaneous autoregressive (MR-SAR) model for texture classification and segmentation. First, a multivariate rotation-invariant SAR (RISAR) model is introduced which is based on the circular autoregressive (CAR) model. Experiments show that the multivariate RISAR model outperforms the CAR model in texture classification. Then, we demonstrate that integrating the information extracted from multiresolution SAR models gives much better performance than single resolution methods in both texture classification and texture segmentation. A quality measure to evaluate individual features for the purpose of segmentation is also presented. We employ the spatial coordinates of the pixels as two additional features to remove small speckles in the segmented image, and carefully examine the role that the spatial features play in texture segmentation. Two internal indices are introduced to evaluate the unsupervised segmentation and to find the "true" number of segments or clusters existing in the textured image.

Texture classification   Segmentation   Clustering   Multiresolution   Simultaneous autoregressive model   Rotation-invariance

1. INTRODUCTION

Texture is one of the important characteristics present in many natural images. It plays an important role in human visual perception and provides information for recognition and interpretation. As a result, research on texture analysis has received considerable attention in recent years. A large number of approaches for texture classification and segmentation have been suggested; references (1-4) provide good reviews of the literature on this subject. Various texture analysis methods can be divided loosely into two categories: statistical methods and structural methods. Statistical methods characterize an image in terms of some numerical attributes or features. For example, features have been derived from the Fourier power spectrum, gray level run lengths, and co-occurrence matrices computed from the input image. Other statistical methods include fitting simultaneous autoregressive models, Markov random field (MRF) models,(5-10) and fractal models(11,12) to the textured image, and using the model parameters as features in texture classification or segmentation. Spatial-frequency filtering methods for texture analysis have also attracted considerable attention.(13-16) Again, "energies" in the filtered images are treated as features. These methods are regarded as consistent with plausible models of the human visual system.

* Research supported by NSF grant IRI89-01513.
† Author to whom correspondence should be addressed.
Structural methods, on the other hand, describe a texture by emphasizing its structural primitives and their placement rules. Pure structural methods can deal with only those images which are too regular to be of any practical interest, so combinations of structural and statistical methods can result in better performance.(7)

The simultaneous autoregressive (SAR) model, which is an instance of MRF models, has been successfully used in texture classification, segmentation, and synthesis.(5,18-27) Like similar model-based methods, there are two major difficulties associated with the utilization of the SAR model. One is choosing an appropriate neighborhood size within which pixels are regarded to be dependent. The other is to select an appropriate window size over which local textural characteristics are extracted. Most approaches in the literature use fixed-size neighborhoods and windows which are usually determined empirically.

Several texture analysis approaches have adopted the multiresolution paradigm.(8,9,28-31) Typically, a fixed-size neighborhood and window is used to derive features at varying scales corresponding to the input image at different resolutions. However, these multiresolution approaches seldom integrate the information extracted from different image resolutions. Some approaches(9,28,31) use the coarse level information only as an initial segmentation for the higher resolution images in order to speed up the segmentation process and to avoid getting trapped in a local minimum. However, there is no guarantee that
using such an initial segmentation will lead to an "optimal" segmentation. In fact, we will demonstrate later that often such coarse level segmentations are far from optimal. Typically, only one of the multiresolution images, which is regarded to be the best for the segmentation, is used.(8,27) Other examples of the multiresolution approach include using features based on fractal models extracted from different resolutions,(30) and using a set of features derived from co-occurrence matrices computed for six different values of the displacement vector.(29)

In this paper, we first introduce a multivariate rotation-invariant SAR model. We demonstrate that integrating the information derived from multiresolution SAR models results in much better performance in both texture classification and texture segmentation. A feature evaluation measure which ranks individual features for the purpose of texture segmentation is also presented. We employ the spatial coordinates of the pixels as two additional features to remove small speckles in the segmented image, and carefully examine the role that the spatial coordinate features play in texture segmentation. Two internal clustering indices are introduced to evaluate the segmentations and to find the "true" number of clusters or segments present in the textured image.

2. SAR TEXTURE MODELS
Texture is a neighborhood property; therefore, it is logical to utilize the spatial interactions among neighboring pixels to characterize it. There are two classes of commonly used models(5,24) for specifying the underlying interaction among the given observations: the simultaneous models, such as SAR models, and the conditional Markov (CM) models. The class of SAR models is a subset of the class of CM models, i.e. for every SAR model there exists a unique CM model with an equivalent spectral density function.(5,24) Still, the class of SAR models deserves detailed study for the following reasons.(24) First, SAR models are parsimonious, i.e. the CM model, in general, is characterized by more parameters than the equivalent SAR model (if one exists). Secondly, the study of SAR models can be extended to include simultaneous moving average (SMA) models and simultaneous autoregressive and moving average (SARMA) models, which are not subsets of CM models. In this section, we give a general description of the basic SAR model for textured images, and introduce a multivariate rotation-invariant SAR model.
2.1. Basic SAR model for textured images

Let g(s) be the gray level value of the pixel at site s = (s1, s2) in an M × M textured image, s1, s2 = 1, 2, …, M. The SAR model can be expressed as
g(s) = μ + Σ_{r∈N} θ(r) g(s + r) + ε(s),   (1)

where N is the set of neighbors of the pixel at site s. It is common to choose the second-order neighborhood shown in Fig. 1. In Equation (1), ε(s) is an independent Gaussian random variable with zero mean and variance σ²; θ(r), r ∈ N, are the model parameters characterizing the dependence of a pixel on its neighbors; and μ is the bias, which depends on the mean gray value of the image. The standard deviation, σ, has a direct relationship to the visually perceived granularity or "busyness" of the texture.(26) For a symmetric model, θ(r) = θ(−r). All the model parameters, μ, σ and θ(r), r ∈ N, can be estimated from a given window (subimage) by using the least squares error (LSE) technique or the maximum likelihood estimation (MLE) method. These model parameters, excluding μ, are often used as features for texture classification and segmentation.(25-27) Experiments have shown that both LSE and MLE estimates yield very similar segmentation results,(27) but the LSE technique is less time-consuming; so, in this paper, we have used the LSE technique.

(−1, 1)  (0, 1)  (1, 1)
(−1, 0)  (0, 0)  (1, 0)
(−1, −1) (0, −1) (1, −1)

Fig. 1. The second-order neighborhood for the pixel at site (0, 0).

The basic SAR model is rotation-variant: when the textured image rotates, the model parameters also change. This requires that the training samples and test samples from a texture class have the same orientation. Although sometimes such an orientation dependency is desired, e.g. for discriminating between similar textures which are oriented differently, in general it reduces the flexibility of the model. There have been many attempts to derive rotation-invariant features for texture analysis.(25,32) Kashyap and Khotanzad(25) have suggested a rotation-invariant model named the circular autoregressive (CAR) model. In the CAR model, the weighted average of the gray values of the pixel of interest and its eight neighbors defines a new random variable at each pixel. In the next section, the CAR model will be extended to construct a multivariate rotation-invariant model.(33)
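The LSE fit described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes NumPy, a symmetric second-order model (so the eight neighbors collapse into four paired regressors g(s + r) + g(s − r)), and a single 2-D gray-level window as input; the function name and interface are hypothetical.

```python
import numpy as np

# One representative offset per symmetric pair of the second-order neighborhood.
PAIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def fit_sar_lse(window):
    """Least-squares fit of a symmetric second-order SAR model to a 2-D
    gray-level window; returns (mu, theta[4], sigma)."""
    w = np.asarray(window, dtype=float)
    H, W = w.shape
    core = w[1:-1, 1:-1]                       # interior pixels: valid neighborhoods
    cols = [np.ones(core.size)]                # column for the bias mu
    for (di, dj) in PAIRS:
        a = w[1 + di:H - 1 + di, 1 + dj:W - 1 + dj]   # g(s + r)
        b = w[1 - di:H - 1 - di, 1 - dj:W - 1 - dj]   # g(s - r)
        cols.append((a + b).ravel())           # symmetric pair shares one theta(r)
    X = np.stack(cols, axis=1)
    y = core.ravel()
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma = np.sqrt(np.mean(resid ** 2))       # noise std: itself a texture feature
    return coef[0], coef[1:], sigma
```

The returned θ values and σ (but not μ) would then serve as the texture features described in the text.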
2.2. Rotation-invariant SAR (RISAR) model

This model is defined in order to obtain rotation-invariant features for texture classification. Define the weighted averages of n_i points on a set of circles, C_i, i = 1, 2, …, p, respectively, as shown in Fig. 2, as follows:
x_i(t) = (1/n_i) Σ_{k=0}^{n_i−1} g(t + i exp(√−1 kπ/(4i))),  i = 1, 2, …, p,   (2)

where n_i = 8i and g(·) denotes the gray level. In Equation (2), we have changed the site notation s to a complex variable, t = t_1 + √−1 t_2, t_1, t_2 = 1, 2, …, M, in order to represent the equation more concisely. The variables x_i(t), i = 1, 2, …, p, are weighted gray values of the resampled points on the circles, where p is the order of the model (the number of variables in the model). When the textured image rotates around pixel t, the values of x_i(t) remain approximately the same; a small error occurs due to the digitization of the image and the resampling interval. Therefore, the x_i(t) can be used as rotation-invariant variables in the SAR model, resulting in a rotation-invariant SAR (RISAR) model. Choosing n_i = 8i makes the resampling interval between neighboring points on every circle equal to π/4 radian. A denser sampling of points on the circles is unnecessary, because the gray values at the additional points would be estimated from the same neighboring pixels.

Fig. 2. Formation of rotation-invariant variables, where "x" denotes a pixel (grid point) and "O" denotes a resampled point.

From Fig. 2, it can be seen that most of the resampled points on the circles do not correspond to pixels (grid points). Therefore, the gray values at these points must be interpolated. We use the bilinear interpolation technique, in which the gray value at a point is estimated by taking the weighted average of its four nearest neighbor pixels; the weight assigned to a pixel is inversely related to its distance from the point under consideration. Substituting the interpolated values into Equation (2) and changing the site notation back to the original one used in Section 2.1, Equation (2) can be rewritten as

x_i(s) = (1/(8i)) Σ_{r∈N_i} w_i(r) g(s + r),   (3)

where N_i is the neighbor set containing the pixels which are used for interpolating the points on the ith circle and the pixels which happen to lie on the ith circle, and w_i(r), r ∈ N_i, are the corresponding weights which indicate the contribution of the pixel r to the ith circle, i = 1, 2, …, p. Note that the intersection of two successive neighbor sets is not empty. It is easy to prove that w_i(r) is symmetric with respect to the origin, that is,

w_i(r) = w_i(−r).   (4)

All these weights are independent of the underlying textured image, so they can be calculated and stored a priori. Table 1 lists a subset of these weights, w_i(r), r ∈ N_i, i = 1, 2, 3, in one quadrant (r ≥ 0).

Table 1. Values of N_i, w_i(r), i = 1, 2, 3 (r ≥ 0)

       i = 1                i = 2                i = 3
  r∈N_1   w_1(r)       r∈N_2   w_2(r)       r∈N_3   w_3(r)
  (0, 0)  0.6636       (0, 1)  0.2549       (0, 2)  0.2318
  (0, 1)  1.4335       (0, 2)  1.3730       (0, 3)  1.3513
  (1, 0)  1.4335       (1, 0)  0.2549       (1, 2)  0.3744
  (1, 1)  0.4005       (1, 1)  0.6304       (1, 3)  0.8340
                       (1, 2)  0.7649       (2, 0)  0.2318
                       (2, 0)  1.3730       (2, 1)  0.3744
                       (2, 1)  0.7649       (2, 2)  1.1072
                       (2, 2)  0.2117       (2, 3)  0.4011
                                            (3, 0)  1.3513
                                            (3, 1)  0.8340
                                            (3, 2)  0.4011
                                            (3, 3)  0.0905

Thus, the RISAR model can be established as follows:

g(s) = μ + Σ_{i=1}^{p} θ_i x_i(s) + ε(s),   (5)
where p is the number of variables in the RISAR model. When p = 1, the RISAR model reduces to the CAR model. The parameters θ_1, θ_2, …, θ_p, σ, which are estimated using the LSE technique, can be used as rotation-invariant features for texture classification.

The power of the multivariate RISAR model in performing rotation-invariant classification of textural images is demonstrated on the following problem. Seven different types of natural textures (wave-like sand, scale-like sand, branch-like sand, beehive-like sand, mountain, grass, and brick) in reference (33) are used in this experiment. In order to test the rotation-invariance of the RISAR model, each 512 × 512 image is rotated through relative angles of 30°, 45°, 60°, 90°, 120°, 135° and 150°. The original images together with the 30°, 45° and 60° rotated images are used as training samples, and the remaining images are used as test samples. Each image is divided into several 128 × 128 subimages, resulting in 32 training samples per class and 32 test samples per class (we do not use all the subimages because the information contained in the four corners of the rotated images is lost). The parameters of all the SAR models with different values of p are estimated within 64 × 64 windows. The minimum Mahalanobis distance classifier is used for classification. This assumes that the features from each class have a multivariate Gaussian distribution with a common covariance matrix. Table 2 lists the classification accuracies of the multivariate RISAR models.

Table 2. Classification accuracies (Pcr) using the RISAR model

            CAR        Multivariate RISARs
p            1      2     3     4     5     6     7     8
Pcr (%)    60.1   83.4  85.3  80.5  79.6  81.6  88.5  87.9

Table 2 shows that the multivariate RISAR models (p ≥ 2) achieve higher classification accuracies than the CAR model on the given textures. We also notice that the classification accuracy does not necessarily increase with an increase in the number of variates in the model. This is because the extent of dependence between neighboring pixels in a textured image generally decreases as the distance between them increases. In other words, the dominant terms in Equation (5) involve those variables whose corresponding circles are small.

3. MULTIRESOLUTION SAR MODEL
There are two major difficulties associated with the utilization of the SAR model. One is choosing a proper neighborhood size in which pixels are regarded as being dependent. The other is to select an appropriate window size in which the texture is regarded as being homogeneous and the parameters of the SAR model are estimated. Most approaches in the literature use a fixed-size neighborhood and a fixed-size window which are usually empirically determined. The major problem of the fixed-size neighborhood and window is that it is "nonadaptive". For some images with fine texture, a small neighborhood and a small window are adequate, but for others, both small and large neighborhoods and windows may be necessary in order to extract information at different scales or resolutions. Why don't we use a SAR model with a sufficiently large neighborhood which will be suitable for all kinds of textures? Experimental results reported in Table 2 have already answered this question in the case of texture classification. We have observed similar behavior in texture segmentation experiments. Although it is true that a model with a large neighborhood will fit the texture better than a model with a small neighborhood if the window is chosen sufficiently large to estimate the model parameters reliably, it does not provide more discriminatory information. In fact, the severe averaging effect caused by a large number of parameters in the model often degrades the performance of those parameters that have strong discriminatory power. This behavior
is similar to the "curse of dimensionality" problem reported in the statistical pattern recognition literature.(34) Figure 3 shows the degradation caused by a large number of parameters in the SAR model. Figures 3(b) and (c) are the feature images (whose value at a pixel is the model parameter θ(r), scaled to 0-255 for the purpose of display) corresponding to the parameter θ(1, 1) of two SAR models defined by Equation (1) with a small neighborhood N_0 and a larger neighborhood N_0 ∪ N_1 (as shown in Fig. 5), respectively. The parameter θ is estimated using 25 × 25 overlapping windows in Fig. 3(a). We see that the strong discriminatory power of the feature θ(1, 1) is degraded by using the large neighborhood model.

It seems, therefore, that it is unnecessary to use a large neighborhood in fitting SAR models. But this does not mean that pixels that are far apart are necessarily independent. If an image with a coarse texture is subsampled, a SAR model with a small neighborhood will fit the subsampled image well; two neighboring pixels in the subsampled image are several pixels apart in the original image. Therefore, establishing SAR models at different resolutions of the input image can provide useful discriminatory information for many texture types.

The most commonly used multiresolution image representation is the Gaussian pyramid image model.(35) One constructs a low-pass filtered and subsampled image sequence, G_l, l = 0, 1, …, L − 1, in which the successive image sizes decrease as the represented texture scale changes from fine to coarse. Note that G_0 denotes the input image and, as l increases, the image resolution goes from high to low. If a small fixed-size window is used for {G_l}, then in high resolution images (small l) this window is more likely to emphasize information about the texture "primitives", while in low resolution images (large l) the same window will cover many texture "primitives", so that the placement rule information can be obtained.
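The filter-and-subsample sequence just described can be sketched as below. This is a hedged illustration rather than the authors' construction: it assumes NumPy and a separable 5-tap binomial kernel as the low-pass filter, which is one common choice for Gaussian pyramids.

```python
import numpy as np

KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # binomial low-pass filter

def _smooth(img):
    # Separable filtering: convolve rows, then columns ('same' keeps the size).
    rows = np.apply_along_axis(lambda v: np.convolve(v, KERNEL, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, KERNEL, mode="same"), 0, rows)

def gaussian_pyramid(image, levels):
    """Return [G_0, ..., G_{L-1}]: G_0 is the input image; each successive
    level is low-pass filtered and subsampled by 2 in both directions."""
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        pyramid.append(_smooth(pyramid[-1])[::2, ::2])
    return pyramid
```

A small fixed-size SAR window applied to each G_l then corresponds to progressively larger regions of the original image, which is the effect exploited in the text.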
The fixed neighborhood in different resolution images also covers different region sizes in the original image. Therefore, we expect that image classification and segmentation results will improve if multiresolution images are used. The multiresolution SAR model is drawn schematically in Fig. 4. At each image resolution, a SAR model or a RISAR model is fitted to the image, forming the MR-SAR model or MR-RISAR model, respectively. The collection of all the model parameters can be used as features for both texture
Fig. 3. Feature images, θ(1, 1), of SAR models with different neighborhood sizes: (a) a 256 × 256 image containing four natural textures (D68, D55, D84, D77) from the Brodatz album; (b) small neighborhood size N_0; (c) large neighborhood size N_0 ∪ N_1.
Fig. 4. Multiresolution SAR/RISAR model.
classification and texture segmentation. This is the simplest way to integrate the multiresolution information. The classification accuracies using the multiresolution rotation-invariant SAR (MR-RISAR) model are listed in Table 3 (the texture categories and the numbers of training and test samples are the same as those used to obtain the results in Table 2 in Section 2.2).

Fig. 5. Neighbor sets N_l and the corresponding window sizes W_l.

Table 3. Classification accuracies of single resolution and multiresolution RISAR models

          Single resolution               Multiresolution
p      l=0     l=1     l=2     l=3      L=2     L=3     L=4
1     60.1    73.5    63.9    47.6     94.8    96.5     99.7
2     83.4    73.5    61.5    55.3     94.6    98.7    100.0
3     85.3    65.7    77.2    67.9     94.8    99.2    100.0
4     80.5    89.4    80.8    66.2     97.3    99.7    100.0

From Table 3, we can see that the classification accuracies using single resolution images are not encouraging, while the multiresolution rotation-invariant SAR model provides very high classification accuracies. By using the 4-resolution (L = 4) and 2-variate (p = 2) MR-RISAR model for the given textured images, a classification accuracy of 100% is achieved. In the next section, we will demonstrate that the multiresolution SAR model can also improve texture segmentation performance.

4. TEXTURE SEGMENTATION
In this section, we will use the SAR model instead of the RISAR model, because two image regions with the same texture but different orientations can form a texture boundary. Images at different resolutions have different sizes, so it is difficult to integrate SAR models at different resolutions in order to label every pixel of the original image. To circumvent this problem, we adopt the following approach. We keep the size of the given image unchanged, but change the neighbor set and the window size as shown in Fig. 5, where N_l = {d_l} denotes the set of neighboring pixels and W_l denotes the window size for the lth resolution image, W_l = (l + 1)W_0, l = 0, 1, …, L − 1. Note that the model used here differs from the conventional Gaussian pyramid model(35) in image size and low-pass filtering. Preliminary texture segmentation results showed that if the window sizes are chosen according to this scheme, then the ambiguity near the texture boundary in the low resolution images is too severe (the window sizes for the low resolution images are too large). Because of this specific problem of texture segmentation, we choose W_l = W_0 = 25 for all l in all our segmentation experiments. The multiresolution SAR model can be written as
g(s) = μ_l + Σ_{r∈N_l} θ_l(r) g(s + r) + ε_l(s),   (6)

where l = 0, 1, …, L − 1. Using different neighbor sets, we can fit SAR models at different resolutions to the given image. In all our experiments, we use 25 × 25 overlapping windows, moved every two pixels in both the horizontal and vertical directions, to estimate the model parameters. Symmetric models are used, so each model has six parameters, five of which are used as features (we do not use μ_l because of its dependence on the mean gray value of the image). The parameters (features) from all the models, {θ_l(r), σ_l | r ∈ N_l, l = 0, 1, …, L − 1}, are appended to form the feature vector at pixel s, {f_1(s), f_2(s), …, f_d(s)}, where d = 5L. The k-means clustering algorithm is used for all the segmentation experiments reported in this paper.

Because of inhomogeneities in the input texture, the features tend to be noisy; in other words, the value of a model parameter changes from pixel to pixel within the same texture. If these features are used directly in segmentation, the resulting segments or image regions will contain many small speckles. To eliminate this problem, we modify the features as follows.

(a) Local averaging:
f′(s) = (1/K²) Σ_{r∈M} f(s + r),   (7)

where M is a K × K window centered at pixel s.

(b) Nonlinear transformation:

f″(s) = tanh(α f′(s)) = (1 − exp{−2α f′(s)}) / (1 + exp{−2α f′(s)}),   (8)

where α is a parameter which controls the speed of "saturation". In our segmentation experiments, M is a 3 × 3 window and α = 0.5.

4.1. Feature evaluation
A feature is considered to be "good" for segmentation if its within-class variance is small and its between-class variance is large. Unfortunately, for unsupervised texture segmentation, it is difficult to evaluate features using this criterion, because we do not have any training samples with known category information or labels. In image segmentation problems, the segmentation quality is often represented as a subjective score determined by human judges. Such feature evaluation methods are very time-consuming, since one has to perform a large number of segmentations corresponding to different features or feature subsets. One can also evaluate texture features by texture synthesis.(26) A set of features is considered to be good if the synthesized texture derived from these features and the original texture are visually similar. But the best features for texture synthesis are not always the best features for segmentation tasks, which require features with strong discriminatory information.
We now present a simple yet efficient feature evaluation measure for texture segmentation. We assume that the homogeneous areas in the image are relatively large in comparison with the texture boundaries; this is true for most segmentation tasks. By a good feature, we mean a feature which has a small variance within homogeneous regions and a large variance over the entire image. Let f_k be the kth feature image. Our feature evaluation measure, w_k, is given as follows:

w_k = var_global^k / var_local^k,   (9)

where

var_global^k = (1/M²) Σ_s (f_k(s) − f̄_k)²,   (10)

var_local^k = (1/N) Σ_{t=1}^{N} ((1/P²) Σ_{s∈A_t} (f_k(s) − f̄_k^t)²),   (11)

where f̄_k is the mean value of the feature over the given M × M image, and f̄_k^t is the local mean value within a P × P window, A_t, centered at site s_t. All N sites, s_t, t = 1, …, N, are chosen to be randomly scattered over the M × M image. Thus var_local^k is the average local variance and var_global^k is the total variance. In our experiments, P = 16 and N = 64. It is easy to prove that w_k ≥ 1 as long as N is sufficiently large. The value w_k can be viewed as an estimate of the ratio of between-class variance to within-class variance; therefore, w_k indicates how good the segmentation would be if only the kth feature were used. The larger w_k is, the better the segmentation result using the feature f_k. For the two typical feature images shown in Figs 3(b) and (c), the feature evaluation measures are w_b = 3.126 and w_c = 1.534, respectively; it is also easy to see visually that the feature in Fig. 3(b) is much better than the feature in Fig. 3(c).

The above feature evaluation measure can be used in feature selection. The simplest method is to select the p best features out of the given d features. However, we know from the literature on feature selection that the combination of the individually best features does not always form the best feature subset.(36) In the next section, we will use this evaluation measure to assign a weight to the individual features in segmentation.

4.2. Feature weighting

In order to avoid the problem of features with large variance dominating features with small variance, it is common to normalize the features so that they have zero mean and unit variance (z-score normalization). However, if the features do not have the same discriminatory power, this normalization scheme is not acceptable, because all features then make the same contribution to the similarity measure. Using the feature evaluation measure
Fig. 6. Texture segmentation using the z-score normalized features and the weighted features computed from the single resolution SAR model (l = 0): (a) a 256 × 256 image containing four natural textures (D68, D55, D84, D77) from the Brodatz album; (b) four-cluster solution using the z-score normalized features; (c) four-cluster solution using the weighted features.
developed in the previous section, we now define a feature weighting scheme as follows:

f′_k(s) = (f_k(s) − f̄_k) w_k,  k = 1, 2, …, d,   (12)

where f′_k(s) is the weighted feature, f̄_k is the mean of f_k, and w_k is the weight assigned to f_k in Equation (9). Figure 6 shows the segmentation results using the z-score normalized features and the weighted features derived from the single resolution SAR model (l = 0, five features). Using the z-score normalized features, the four-cluster solution is not able to separate the upper two textures, D68 and D55, as shown in Fig. 6(b), and the bottom-left texture, D84, is also divided into many subregions. Using the weighted features, however, the texture image is reasonably divided into four regions, although the segmentation is still noisy, as shown in Fig. 6(c).
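Equations (9)-(12) can be sketched as follows. This is an illustrative reading, not the authors' code: it assumes NumPy, and the random placement of the N windows of size P × P follows the description in the text; function names are hypothetical.

```python
import numpy as np

def feature_weight(f, P=16, N=64, rng=None):
    """Estimate w_k = var_global / var_local for one feature image f
    (Equations (9)-(11)): var_local is the average variance inside N
    randomly placed P x P windows; var_global is the variance over f."""
    rng = np.random.default_rng() if rng is None else rng
    f = np.asarray(f, dtype=float)
    var_global = f.var()
    local_vars = []
    for _ in range(N):
        i = rng.integers(0, f.shape[0] - P + 1)   # random window position
        j = rng.integers(0, f.shape[1] - P + 1)
        local_vars.append(f[i:i + P, j:j + P].var())
    return var_global / np.mean(local_vars)

def weight_feature(f, w):
    """Equation (12): centre the feature image and scale it by its weight."""
    f = np.asarray(f, dtype=float)
    return (f - f.mean()) * w
```

A feature image that is flat noise yields w close to 1, while one with large homogeneous regions of different levels yields w well above 1, matching the interpretation of w as a between-class to within-class variance ratio.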
4.3. Segmentation experiments

All the segmentation experiments in this section assume that the number of textures present in the input image is known. In the next section, we will introduce two indices to evaluate the "validity" of segmentations in order to estimate the number of textures present in the input image.

Figure 7 shows the segmentation results using different neighborhood sizes. Figures 7(b) and (c) are the segmentation results using the SAR models with a small neighborhood, N_0, and a large neighborhood, N_0 ∪ N_1, respectively. Both experiments use the weighted features. We see that the SAR model with the larger neighborhood performs worse than the SAR model with the smaller neighborhood: three of the four textured regions are mixed together in Fig. 7(c). This result is consistent with the classification results reported in Section 2.2. The explanation for this phenomenon is that when the SAR model contains a large number of parameters, the contribution of each parameter to the prediction of g(s) becomes
less significant. The averaging effect gives the model parameters poor discriminatory characteristics. The multiresolution model does not suffer from this drawback.

We now demonstrate the performance of the multiresolution SAR model. Figure 8 shows the segmentation results for an image containing four different Gaussian Markov random field textures generated using non-causal finite lattice models.(20) The result achieved by using the 3-resolution model is significantly better than that of the single resolution model. Figure 9 shows the segmentation results for the composite image containing four textures (D68, D55, D84, D77) from the Brodatz album.(37) The multiresolution SAR models with different numbers of resolutions achieve very similar results, as shown in Figs 9(f)-(h), while the single resolution SAR model performs quite differently at each resolution, as shown in Figs 9(b)-(e). We note that the single resolution segmentation with l = 2 is much better than the segmentation at the given resolution of the input image (l = 0). How to determine the most appropriate resolution level at which the segmentation algorithm should be applied is a major problem in all single resolution methods. In reference (28), an entropy-based criterion was defined to determine which resolution is best for segmentation. However, we believe that the same criterion cannot be successfully applied to all textures and to all models used to describe them, because different models and features capture quite different information, some suitable for coarse textures, others suitable for fine textures. Multiresolution methods do not need to make such a decision. As Figs 9(f)-(h) demonstrate, multiresolution methods are also less sensitive to the choice of the number of resolution levels.

Many approaches to texture segmentation in the literature have been demonstrated on images containing only a small number of categories, typically two or four. When the input image contains a large
Fig. 7. Texture segmentation using SAR models with different neighborhood sizes: (a) a 256 × 256 image containing four natural textures (D68, D55, D84, D77) from the Brodatz album; (b) small neighborhood, N_0; (c) large neighborhood, N_0 ∪ N_1.

Fig. 8. Segmentations of GMRF synthesized textures: (a) a 256 × 256 image containing four different Gaussian Markov random field textures; (b)-(d) single resolution segmentations: (b) l = 0; (c) l = 1; (d) l = 2; (e), (f) multiresolution segmentations: (e) L = 2; (f) L = 3.
Texture classification and segmentation
Fig. 9. Segmentations of natural textures: (a) a 256 × 256 image containing four natural textures (D68, D55, D84, D77) from the Brodatz album; (b)-(e) single resolution segmentations: (b) l = 0; (c) l = 1; (d) l = 2; (e) l = 3; (f)-(h) multiresolution segmentations: (f) 2 resolutions (L = 2); (g) 3 resolutions (L = 3); (h) 4 resolutions (L = 4).
number of textures, the boundary regions between textures are more likely to form new classes during the clustering procedure. Furthermore, many texture categories together with the boundary classes may overlap in the feature space. In the next experiment, we segment the composite image in Fig. 10(a), which contains eight natural textures (D77, D24, D09, D04, D03, D33, D51, D54) from the Brodatz album, using the multiresolution SAR models. Figures 10(b)-(d) show the segmentation results using the single resolution SAR model at three individual resolutions. Note that the segmentation results are noisy and several classes are mixed together. Figures 11(b) and (c) show the segmentations of the image in Fig. 10(a) using the multiresolution SAR model. These segmentations are less noisy than in Figs 10(b)-
(d). Using only the first two resolution levels together (L = 2) is not sufficient to distinguish two of the eight textures, D24 and D09, as shown in Fig. 11(b). However, using three resolution levels, all eight textured regions are separated, as shown in Fig. 11(c). There are still many speckles in Fig. 11(c), and the boundary surrounding each textured region is quite thick. These segmentation problems can be removed by using the spatial coordinates of the pixels as two additional features in the clustering algorithm. The spatial coordinates are scaled to the range [-1, 1]. The improved segmentation results are shown in Fig. 12. By tracing the region boundaries in Fig. 12(c) and comparing these regions with the textures present in the original image, we find that this segmentation is very reasonable and consistent with visual segmentation. Special attention must be paid to the use of spatial
J. MAO and A. K. JAIN
Fig. 10. Single resolution segmentations of composite textures: (a) a 256 x 512 image containing eight natural textures (D77, D24, D09, D04, D03, D33, D51, D54) from the Brodatz album; (b) l = 0; (c) l = 1; (d) l = 2.
coordinates in clustering. The unreasonable partition within texture D04 (upper right corner) in Fig. 12(b) is caused by including spatial coordinate features in clustering. Table 4 lists the evaluation measures (weights for feature weighting) of 15 MR-SAR features and two spatial coordinate features, (r, c). Note that the spatial coordinate features are better than the MR-SAR features in the sense of this feature evaluation scheme! Indeed, using only the spatial coordinate features can yield perfect segmentations for the given composite image if the initial cluster centers are properly chosen. Figure 13 shows two of the 8-category segmentations using only two spatial coordinate features. Therefore, it is clear that incorporating spatial coordinate features in clustering algorithms can bias the segmentation results. On the other hand, when the same texture is distributed over several disjoint regions in the image as shown in Fig. 14(a), or one texture surrounds another tex-
Fig. 11. Multiresolution segmentations of composite textures: (a) a 256 x 512 image containing eight natural textures (D77, D24, D09, D04, D03, D33, D51, D54) from the Brodatz album; (b) L = 2; (c) L = 3.
ture as shown in Fig. 15(a), the use of spatial coordinate features may not help in segmentation, as shown in Figs 14(d) and (g) and 15(d). In fact, sometimes the use of spatial coordinate features may mislead the segmentation, as shown in Figs 14(c) and (f) and 15(c), (f) and (g). The 3-resolution MR-SAR models have been used in these segmentation experiments. So, there is a tradeoff between removing small speckles or noisy regions and introducing bias in the region boundaries. Figure 16 shows the 8-category segmentation of the image in Fig. 10(a) using the 3-resolution MR-SAR model, when we replace the weights for (r, c) in Table 4 by the weights (2.0, 2.0). Note that not all small regions are completely removed.

4.4. Evaluating the segmentations

In the previous section, we assumed that the number of textured regions in the input image was known, which made the task of the clustering procedure easy. In this section, we introduce two internal indices to evaluate the segmentations and to estimate the true number of clusters. Jain and Dubes(38,39) suggested an index, called the MH index, which is a modification of Hubert's
Γ statistic. It is a measure of correlation between the matrix of inter-pattern distances and the distances recovered from the clustering solution. Let {C_1, ..., C_k} be a partition and let L denote the label function established by the clustering procedure that maps the set of patterns to the set of cluster labels:

L(i) = k \quad \text{if } i \in C_k, \quad i = 1, \ldots, n,   (13)

where n is the total number of patterns (in our case, n is the total number of pixels in the feature image). We define the distance between two vectors, x and y, as follows:

\delta(x, y) = \sqrt{(x - y)^T (x - y)}.   (14)

The center of the kth cluster is denoted by a vector,

m_k = (1/n_k) \sum_{i \in C_k} f(i),   (15)

where f(i) is the d-dimensional feature vector at the ith pixel in cluster C_k, and n_k is the number of pixels in the kth cluster. The MH measure for the clustering {C_1, ..., C_k} is:

MH(k) = (\Gamma - M_p M_c) / (\sigma_p \sigma_c),   (16)

where

\Gamma = (1/M) \sum \sum \delta(f(i), f(j)) \, \delta(m_{L(i)}, m_{L(j)}),   (17)

M_p = (1/M) \sum \sum \delta(f(i), f(j)),   (18)

M_c = (1/M) \sum \sum \delta(m_{L(i)}, m_{L(j)}),   (19)

\sigma_p^2 = (1/M) \sum \sum \delta^2(f(i), f(j)) - M_p^2,   (20)

\sigma_c^2 = (1/M) \sum \sum \delta^2(m_{L(i)}, m_{L(j)}) - M_c^2,   (21)

where M = n(n - 1)/2. All of the above sums are over the set:

{(i, j) | 1 ≤ i ≤ n - 1, i + 1 ≤ j ≤ n}.

Fig. 12. Multiresolution segmentations of composite textures with additional (x, y) features: (a) a 256 x 512 image containing eight natural textures (D77, D24, D09, D04, D03, D33, D51, D54) from the Brodatz album; (b) L = 2; (c) L = 3.

Fig. 13. Two of the 8-category segmentations using only two spatial coordinate features.

Table 4. Evaluation measures (weights for feature weighting) of 15 MR-SAR features and two spatial coordinate features (r, c)

MR-SAR feature:  1    2    3    4    5    6    7    8    9    10   11   12   13   14   15
Weight:          2.0  3.0  3.6  2.5  3.9  1.9  2.0  2.3  2.2  4.0  2.6  1.8  3.0  2.7  4.0
(r, c):          r = 7.5, c = 15.0

Theoretically, the MH statistic is a monotonically increasing function of the number of clusters. Therefore, the decision rule for estimating the true number of clusters is to search for a "significant" knee in the curve of MH(k) as k varies from 2 to k_max.
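The MH computation can be sketched in a few lines of Python. This is our illustrative reading of equations (13)-(21), not the authors' implementation, and the function and variable names (`mh_index`, `features`, `labels`) are our own:

```python
import numpy as np

def mh_index(features, labels):
    """Modified Hubert (MH) index: correlation between inter-pattern
    distances and the distances recovered from the cluster centers.
    features: (n, d) array, one feature vector f(i) per pattern.
    labels:   (n,) array of cluster labels L(i)."""
    n = features.shape[0]
    ks = np.unique(labels)
    # Cluster centers m_k, equation (15).
    centers = np.stack([features[labels == k].mean(axis=0) for k in ks])
    row = {k: r for r, k in enumerate(ks)}
    m = centers[[row[k] for k in labels]]  # center of each pattern's cluster
    # Euclidean distances (equation (14)) over all M = n(n-1)/2 pairs i < j.
    i, j = np.triu_indices(n, k=1)
    d_p = np.linalg.norm(features[i] - features[j], axis=1)
    d_c = np.linalg.norm(m[i] - m[j], axis=1)
    # Equations (16)-(21): normalized correlation of the two distance sets.
    mp, mc = d_p.mean(), d_c.mean()
    sp = np.sqrt((d_p ** 2).mean() - mp ** 2)
    sc = np.sqrt((d_c ** 2).mean() - mc ** 2)
    gamma = (d_p * d_c).mean()
    return (gamma - mp * mc) / (sp * sc)
```

Note that the pairwise distance arrays are O(n²) in memory, so for a full 256 × 256 feature image the sums would have to be accumulated in a streaming fashion or evaluated over a subsample of pixels.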
Fig. 14. Two- and 3-category segmentations using features without and with weighted or normalized spatial coordinate features: (a) a 256 x 256 image containing two natural textures, one of which is separated by the other textures; (b)-(d) 2-category segmentations; (e)-(g) 3-category segmentations; (b) and (e) without spatial coordinate features; (c) and (f) with weighted spatial coordinate features; (d) and (g) with normalized spatial coordinate features.
Here k_max is the upper bound on the number of clusters present in the data. A "significant" knee at k* has the following property: a large change in MH(k) occurs near k = k* and only small changes take place for k > k*. If the image contains q distinct textures, then MH(k) should be close to 1 for k > q, since this image can be partitioned into k > q clusters by dividing the true clusters into smaller regions. Dubes(39) developed an algorithm to detect such a "significant" knee. In this paper, we only present the curves of MH(k) for three textured images (Figs 8(a), 9(a) and 10(a)), as shown in Fig. 17. Fifteen features from the 3-resolution SAR model (without spatial coordinate features) are used for all three images. Because the k-means clustering algorithm depends on the choice of initial cluster centers, and is not guaranteed
to converge to the optimal solution, the monotonic behavior of the MH index is not observed. Therefore, we repeat the k-means clustering algorithm ten times, each time choosing the initial cluster centers randomly. Each value of MH(k) plotted in Fig. 17 is the maximum value from ten k-means solutions with different initial cluster centers. For the image in Fig. 8(a), the values of MH(k) for k ≥ 4 change slowly and are close to 1.0, so the MH index suggests four clusters for this image, which equals the true number of texture categories present in Fig. 8(a). However, for the images in Figs 9(a) and 10(a), trying to find a significant knee is frustrating. Both curves seem to have two "significant" knees, k = 3 and 5 for the image in Fig. 9(a), and k = 4 and 8 for the image in Fig. 10(a). It is sometimes difficult to determine
Fig. 15. Two- and 3-category segmentations using features without and with weighted or normalized spatial coordinate features: (a) a 256 x 256 image containing two natural textures, one of which surrounds the other texture; (b)-(d) 2-category segmentations; (e)-(g) 3-category segmentations; (b) and (e) without spatial coordinate features; (c) and (f) with weighted spatial coordinate features; (d) and (g) with normalized spatial coordinate features.
exactly how many clusters are present in the data. In these situations, it is preferable for an algorithm to suggest several plausible solutions.

Fig. 16. The 8-category segmentation of the image in Fig. 10 using the 3-resolution MR-SAR model when we replace the weights for coordinates in Table 4 by the weights (2.0, 2.0).

Another index used for evaluating the segmentations and estimating the true number of clusters in the image is the ratio of the within-class (cluster) variance to the total variance:

WV(k) = \sum_{i=1}^{k} \sum_{j=1}^{d} var_i(j) \Big/ \sum_{j=1}^{d} var_T(j),   (22)

where var_i(j) is the variance of feature j in class i, and var_T(j) is the total variance of feature j. This
Fig. 17. Two indices for the segmentations of three textured images.
index is similar to the one used by Coggins and Jain.(14) The index WV(k) is a monotonically decreasing function of the number of clusters, k. Again, the decision rule for estimating the true number of clusters is to search for "significant" knees in the plot of WV(k) as k varies from 2 to k_max. Figure 17 also shows the WV curves for the three textured images mentioned above. Again, we plot the minimum value of WV(k) from ten different runs of the k-means algorithm with different initial cluster centers. The WV index suggests three clusters for the image in Fig. 8(a), but the true number of textured regions is 4. A significant point at k = 4 can be detected on the curve of WV for the image in Fig.
9(a). This is consistent with the true number of clusters. It is difficult to determine a significant point on the WV curve for the image in Fig. 10(a). However, if we use both the MH and WV indices, it is easy to make a decision. At the two solutions (k = 4 and k = 8) suggested by the MH index, the WV index has very different values, i.e. WV(8) is much smaller than WV(4). So the 8-cluster solution for the image in Fig. 10(a) is more appropriate than the 4-cluster solution. This example indicates that different indices may not be consistent, but sometimes using more than one cluster validity index is helpful. Figure 18 shows the changes of the two cluster validity indices for the segmentations using differently weighted spatial coordinates, as shown in Figs 14 and 15. As the spatial coordinate features play an increasingly important role in the segmentation, in the order of without (r, c), with normalized (r, c), and with weighted (r, c) (weights for r, c = 7.5), the "significant" knees on both the MH(k) and WV(k) curves move farther and farther away from the true number of clusters (= 2). This is because when the number of clusters is small, the area of each segment is relatively large; the large variance of the spatial coordinate features within a large region makes it necessary to subdivide the region if the spatial coordinate features dominate all others.
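The repeated-runs protocol and the WV index can be sketched as follows. This is our own hedged illustration, with equation (22) implemented as we read it and a plain Lloyd's k-means standing in for the clustering step; all function and parameter names (`wv_index`, `best_wv_curve`, `runs`) are hypothetical:

```python
import numpy as np

def wv_index(features, labels):
    """WV(k): ratio of within-class variance to total variance, eq. (22)."""
    within = sum(features[labels == k].var(axis=0).sum()  # sum_i sum_j var_i(j)
                 for k in np.unique(labels))
    total = features.var(axis=0).sum()                    # sum_j var_T(j)
    return within / total

def kmeans(features, k, rng, iters=50):
    """Plain Lloyd's k-means with randomly chosen initial centers."""
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep the old center if a cluster empties
                centers[j] = features[labels == j].mean(axis=0)
    return labels

def best_wv_curve(features, k_max, runs=10, seed=0):
    """WV(k) for k = 2..k_max, keeping the minimum over repeated k-means
    runs with different random initial centers, as described in the text."""
    rng = np.random.default_rng(seed)
    return [min(wv_index(features, kmeans(features, k, rng))
                for _ in range(runs))
            for k in range(2, k_max + 1)]
```

A "significant" knee would then be read off the returned curve (e.g. the largest drop followed by only small changes), mirroring the visual inspection of Fig. 17.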
4.5. A special case

We now consider a special case in which an image contains textured regions as well as regions with constant gray levels. In this case, the parameter estimation procedure (the LSE technique) for the SAR model does not work, since it requires the calculation of the inverse of local covariance matrices. In regions
Fig. 18. MH and WV indices for the segmentations using differently weighted coordinates, as shown: (a) in Fig. 14; (b) in Fig. 15.
Fig. 19. Segmentation of the image containing regions with constant gray levels: (a) original; (b) 4-category segmentation results.
with almost constant gray levels, the covariance matrices are almost singular. However, the parameter estimation procedure can easily be modified to adapt to such a situation. Before estimating the parameters, we first detect the regions with very small variances, where the parameter estimation procedure may fail. For each pixel, we calculate the variance of gray levels in a small window centered at the pixel. If the variance is smaller than a prespecified threshold, we label this pixel. The labeled pixels can be further grouped into several homogeneous (in gray level) regions. The parameter estimation procedure and the k-means clustering algorithm which are used for texture segmentation are modified to operate only on the unlabeled pixels. Figure 19(a) shows an image containing two textured regions and two regions with constant gray level. The 4-cluster segmentation result using the 3-resolution SAR model is shown in Fig. 19(b). As we can see from Fig. 19(b), the textured regions as well as the regions with constant gray levels have been identified.

5. DISCUSSION AND CONCLUSION
We have presented a multiresolution simultaneous autoregressive (MR-SAR) model for texture classification and texture segmentation. This model was motivated by the analysis and experiments which indicate that a SAR model with a large neighborhood does not perform as well as a SAR model with a small neighborhood. Experiments have shown that the MR-SAR model can achieve a substantial improvement over the single resolution SAR model. The feature weighting scheme for texture segmentation has also been demonstrated to be effective. It can suppress the effect of features which do not provide sufficient discriminatory information. However, it does not take into consideration the dependence between individual features. The two cluster validity indices, MH and WV, which we have introduced, can suggest one or several plausible solutions to the problem of "how many clusters". Sometimes using more than one validity index is helpful in determining the "true" number of clusters present
in the textured image. Special attention must be paid when spatial coordinate features are employed in segmentation. There is a trade-off between removing small speckles or noisy regions and introducing bias in the region boundaries. One problem associated with the MR-SAR model is determining the number of resolution levels used for texture classification and texture segmentation. We believe that there exists an optimal number for a given image. However, our experiments have shown that no more than four resolution levels are necessary.

Acknowledgement--We thank Professor Richard Dubes for his helpful suggestions regarding cluster validity and the role of spatial coordinate features.
REFERENCES
1. R. M. Haralick, Statistical and structural approaches to texture, Proc. IEEE 67, 786-804 (May 1979).
2. H. Wechsler, Texture analysis--a survey, Signal Process. 2, 271-282 (1980).
3. L. S. Davis, Image texture analysis: recent developments, Proc. IEEE Conf. Pattern Recognition Image Process., pp. 214-217 (1982).
4. L. V. Gool, P. Dewaele and A. Oosterlinck, Texture analysis anno 1983, Comput. Vision Graphics Image Process. 29, 336-357 (1985).
5. J. Besag, Spatial interaction and the statistical analysis of lattice systems, J. R. Statist. Soc. B (Methodological) 36, 192-236 (1974).
6. G. R. Cross and A. K. Jain, Markov random field texture models, IEEE Trans. Pattern Anal. Mach. Intell. 5, 25-39 (1983).
7. H. Derin and W. S. Cole, Segmentation of textured images using Gibbs random fields, Comput. Vision Graphics Image Process. 35, 72-98 (1986).
8. D. Geman, S. Geman, C. Graffigne and P. Dong, Boundary detection by constrained optimization, IEEE Trans. Pattern Anal. Mach. Intell. 12, 609-628 (1990).
9. C. Bouman and B. Liu, Multiple resolution segmentation of textured images, IEEE Trans. Pattern Anal. Mach. Intell. (forthcoming).
10. R. C. Dubes and A. K. Jain, Random field models in image analysis, J. Appl. Statist. 16, 131-164 (1989).
11. J. M. Keller and R. M. Crownover, Texture description and segmentation through fractal geometry, Comput. Vision Graphics Image Process. 45, 150-166 (1989).
12. A. P. Pentland, Shading into texture, Artif. Intell. 29, 147-170 (1986).
13. T. R. Reed and H. Wechsler, Segmentation of textured images and gestalt organization using spatial/spatial-frequency representations, IEEE Trans. Pattern Anal. Mach. Intell. 12, 1-12 (1990).
14. J. M. Coggins and A. K. Jain, A spatial filtering approach to texture analysis, Pattern Recognition Lett. 3, 195-203 (1985).
15. S. G. Mallat, Multifrequency channel decompositions of images and wavelet models, IEEE Trans. Acoust. Speech Signal Process. 37, 2091-2110 (1989).
16. A. K. Jain and F.
Farrokhnia, Unsupervised texture segmentation using Gabor filters, Proc. 1990 Int. Conf. Syst. Man Cybern., Los Angeles, CA, pp. 14-19 (1990).
17. J. Bala, Combining structural and statistical features in a machine learning technique for texture classification, Proc. 3rd Int. Conf. Ind. Engng Applic. Artif. Intell. Expert Syst., Charleston, SC, Vol. 1, pp. 175-183, July (1990).
18. N. Ahuja and B. J. Schachter, Image models, Comput. Surv. 13, 373-397 (1981).
19. R. Chellappa, Two-dimensional discrete Gaussian Markov random field models for image processing, Progress in Pattern Recognition, L. N. Kanal and A. Rosenfeld, eds, Vol. 2, pp. 79-112. Elsevier Science Publishers B.V., North Holland (1985).
20. R. Chellappa, S. Chatterjee and R. Bagdazian, Texture synthesis and compression using Gaussian-Markov random field models, IEEE Trans. Syst. Man Cybern. 15, 298-303 (1985).
21. K. B. Eom and R. L. Kashyap, Robust image models for image restoration and texture edge detection, IEEE Trans. Syst. Man Cybern. 20, 81-93 (1990).
22. R. L. Kashyap, Analysis and synthesis of image patterns by spatial interaction models, Progress in Pattern Recognition, L. N. Kanal and A. Rosenfeld, eds, pp. 149-186. Elsevier Science Publishers B.V., North Holland (1981).
23. R. L. Kashyap, R. Chellappa and A. Khotanzad, Texture classification using features derived from random field models, Pattern Recognition Lett. 1, 43-50 (1982).
24. R. L. Kashyap and R. Chellappa, Estimation and choice of neighbors in spatial-interaction models of images, IEEE Trans. Inf. Theory 29, 60-72 (1983).
25. R. L. Kashyap and A. Khotanzad, A model-based method for rotation invariant texture classification, IEEE Trans. Pattern Anal. Mach. Intell. 8, 472-480 (1986).
26. A. Khotanzad and R. L. Kashyap, Feature selection for texture recognition based on image synthesis, IEEE Trans. Syst. Man Cybern. 17, 1087-1095 (1987).
27. A. Khotanzad and J. Y. Chen, Unsupervised segmentation of textured images by edge detection in multidimensional features, IEEE Trans. Pattern Anal. Mach. Intell. 11, 414-421 (1989).
28. S. R. Yhann and T. Y. Young, A multiresolution approach to texture segmentation using neural networks, Proc. 10th Int. Conf. Pattern Recognition, New Jersey, Vol. 1, pp. 513-517, June (1990).
29. A. Visa, Identification of stochastic textures with multiresolution features and self-organizing maps, Proc. 10th Int. Conf. Pattern Recognition, New Jersey, Vol. 1, pp. 518-522, June (1990).
30. S. Peleg, J. Naor, R. Hartley and D. Avnir, Multiple resolution texture analysis and classification, IEEE Trans. Pattern Anal. Mach. Intell. 6, 518-523 (1984).
31. E. J. Eijlers, E. Backer and J. J. Gerbrands, An improved linked pyramid for texture segmentation using the fractal Brownian model, Proc. 10th Int. Conf. Pattern Recognition, New Jersey, Vol. 1, pp. 687-689, June (1990).
32. G. Eichmann and T. Kasparis, Topologically invariant texture descriptors, Comput. Vision Graphics Image Process. 41, 267-281 (1988).
33. Jiaruo Wan, Jianchang Mao and Cheng dao Wang, Multiresolution rotation-invariant simultaneous autoregressive model for texture analysis, Proc. 9th Int. Conf. Pattern Recognition, Rome, Italy, pp. 845-847, October (1988).
34. A. K. Jain and B. Chandrasekaran, Dimensionality and sample size considerations in pattern recognition practice, Handbook of Statistics, P. R. Krishnaiah and L. N. Kanal, eds, Vol. 2, pp. 835-855. North-Holland, Amsterdam (1982).
35. P. J. Burt, The pyramid as a structure for efficient computation, Multiresolution Image Processing and Analysis, A. Rosenfeld, ed., pp. 6-35. Springer-Verlag, Berlin (1984).
36. T. M. Cover, The best two independent measurements are not the two best, IEEE Trans. Syst. Man Cybern. 4, 116-117 (1974).
37. P. Brodatz, Textures--a Photographic Album for Artists and Designers. Dover, New York (1966).
38. A. K. Jain and R. C. Dubes, Algorithms for Clustering Data. Prentice-Hall, NJ (1988).
39. R. C. Dubes, How many clusters are best?--an experiment, Pattern Recognition 20, 645-663 (1987).
About the Author--JIANCHANG MAO received the B.S. degree in Physics in 1983 and the M.S. degree in Electronic Engineering in 1986, both from East China Normal University, Shanghai, P. R. China. Currently, he is working on his Ph.D. degree in Computer Science at Michigan State University. His research interests include pattern recognition, neural networks and image processing. He is a student member of the IEEE.

About the Author--ANIL JAIN is a Professor in the Department of Computer Science at Michigan State University. He received a B.Tech. degree from the Indian Institute of Technology, Kanpur, and M.S. and Ph.D. degrees in Electrical Engineering from the Ohio State University. His research interests are pattern recognition and computer vision. Dr Jain served as Program Director of the Intelligent Systems Program at the National Science Foundation, and has held visiting appointments at Delft University of Technology, The Netherlands, the Norwegian Computing Center, Oslo, and the Tata Research Development and Design Centre, Pune, India. He has been a consultant to a number of industrial organizations. He serves as Editor-in-Chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence and is on the Editorial Boards of Pattern Recognition, Pattern Recognition Letters, Journal of Mathematical Imaging and Vision, and Journal of Applied Intelligence. Dr Jain is the co-author of Algorithms for Clustering Data (Prentice-Hall, 1988), has edited the book Real-Time Object Measurement and Classification (Springer-Verlag, 1988), and co-edited the book Analysis and Interpretation of Range Images (Springer-Verlag, 1990). He is a Fellow of the IEEE.