
Pattern Recognition 32 (1999) 357—364

3-D analysis of projective textures using structural approaches

Hyun-Ki Hong, Yun-Chan Myung, Jong-Soo Choi*

Department of Electronic Engineering, Chung-Ang University, 221 Huksuk-dong, Dongjak-ku, Seoul 156-756, South Korea

Received 22 August 1997; received in revised form 18 May 1998

Abstract

This paper presents a new algorithm that obtains the surface orientation of a texture image using structural approaches. The proposed method shows that structural information can be used effectively in three-dimensional (3-D) analysis of textures as well as in their description and segmentation. By examining the Fourier power spectrum of the texture image, we determine the tilt of the textured surface. Then, 1-D projection information of the texture along the obtained tilt direction is used to compute the slant. Using the obtained information, we can compute a vanishing point and rearrange the textured surface with the lines converging to the vanishing point and the lines perpendicular to the tilt. The experimental results confirm that the proposed method can make a precise 3-D analysis of structural textures. © 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

Keywords: Shape from texture; Texture image; Texel; Structural approach; Surface orientation; Vanishing point; Rearrangement

1. Introduction

Texture is a fundamental characteristic for the analysis of many types of images. Since texture usually reflects the composition and structure of an object, it is useful in image segmentation and classification, and it also gives an important clue for estimating the three-dimensional (3-D) orientation of an object surface. It is well known that if a surface is textured, the apparent variations in the size, shape, and density of texture elements (texels) yield information about the surface orientation [1]. In other words, a slanted textured surface reminds a human observer of a uniform arrangement of texels with equal size and spacing. The human visual system then uses the apparent variations of the texel arrangement to derive the surface orientation of the texture image.

* Corresponding author. Tel.: 082 02 820 5295; fax: 082 02 825 1584. E-mail: jschoi@candy.ee.cau.ac.kr

One of the important tasks that arise in many computer vision systems is to obtain 3-D depth information from a monocular texture image. This is the shape-from-texture problem, and many studies have been presented [2-7]. However, no previous study on computing the 3-D orientation of a textured surface uses the spatial arrangement information of the texture directly. This paper presents a new method that obtains 3-D depth information of the textured surface using structural approaches. Many textures can be described structurally, in terms of the individual texels and their spatial relationships. We show that structural approaches can be used effectively in shape from texture as well as in texture description and segmentation. When the textured surface is slanted, the use of structural information is similar to human perception. In addition, the proposed method presents a new solution that can compute the slant and the actual 3-D depth. Fig. 1 shows the use of structural approaches for description, segmentation, and 3-D analysis of textures.



On the premise that the texture image is a planar surface, the proposed method is applicable to structural textures having a certain regular pattern. In structural textures, texel shapes are diverse: circular (D102 Cane), rectangular (the tile image), netlike (D03 Reptile skin and D34 Netting), etc. [7,8]. These textures reflect the regular pattern of an artificial structure such as a high building or a brick wall. Therefore, the proposed algorithm may be used for applications such as robotics, autonomous vehicle navigation, and photogrammetry. In this paper, we have compared the experimental results of our algorithm with those of two previous approaches. One is the aggregation transform method using the edges converging to a vanishing point and the parallel edges [2]. The other is the morphological method using the centroids of texels and region segmentation [7]. The experimental results show that the proposed method can make a more precise 3-D analysis of structural textures than the two previous approaches.

The remainder of this paper is organized as follows. In Section 2, our algorithm is explained, and the analysis and comparison of the simulation results are shown in Section 3. Finally, the conclusion is given in Section 4.

2. Extraction of structural information for 3-D analysis

A texture image is described by the number and types of its texels and by the spatial organization or layout of its primitives. In particular, structural approaches deal with the arrangement of texels, such as the description of a texture based on regularly spaced parallel lines. Therefore, their major purpose is to give a compact structural description of the texture [9]. In order to make a 3-D analysis of structural textures, we use two structural approaches: one is an analysis by Fourier transformation [10] and the other is a method using projection information [11]. By examining the energy distribution in the Fourier power spectrum, we determine the tilt line of the slanted texture. Then, to compute the slant of the textured surface, we use one-directional (1-D) projection information, which reflects the apparent variations caused by the 3-D orientation, along the obtained tilt direction. In addition, from the obtained slant and tilt, we can compute a vanishing point of the textured surface. Finally, to visualize the obtained surface orientation with ease, we rearrange the texture image with the lines perpendicular to the tilt and the lines converging to the vanishing point. The texels are located at the intersections between the perpendicular lines and the converging lines.

Fig. 1. The use of structural approaches for description, segmentation, and 3-D analysis of the textures.

2.1. Determination of the tilt of the textured surface

Since we assume that the texture is periodic, its power spectrum also becomes periodic. It is well known that the Fourier transform of a picture represents the frequency components of the periodic repetition of the original picture. By examining the energy distribution in the Fourier power spectrum, we extract two spatial vectors representing the placement rule of the texels [10]. In the first step, by transforming the M x M texture image, we obtain its Fourier power spectrum P(u, v) (-M/2 <= u, v <= M/2 - 1). In the P(u, v) space, energy is concentrated at those frequencies corresponding to the periodicity of the texture. Therefore, the spatial frequencies of the local maxima of the power spectrum are extracted as candidates for the periodic frequencies. Eq. (1) is used to select a pair of spatial frequencies ($\bar{f}_1$, $\bar{f}_2$) representing the periodicity of the texel arrangement:

$$E(\bar{f}_i, \bar{f}_j) = P'(\bar{f}_i) + P'(\bar{f}_j) + \sum_{m,n} P'(m\bar{f}_i + n\bar{f}_j), \quad (1)$$

$$P'(u, v) = P(u, v) - \frac{1}{49} \sum_{k=-3}^{3} \sum_{l=-3}^{3} P(u+k, v+l),$$

where k, l, m, n are integers, and $m\bar{f}_i + n\bar{f}_j$ denotes a lattice point in the P'(u, v) space generated by $\bar{f}_i$ and $\bar{f}_j$. The first two terms in Eq. (1) denote the energy levels of $\bar{f}_i$ and $\bar{f}_j$ themselves, and the last term denotes the evaluated value of the pair. Using the obtained two basic spatial frequencies ($\bar{f}_1$ and $\bar{f}_2$), we can compute the corresponding basic 2-D periodicity vectors ($\bar{v}_1$ and $\bar{v}_2$) in the picture space dominating the arrangement of the texels (Fig. 2):

M "vN "" ,  " fM "sin h  n n tvN "tfM # , tvN "tfM # , (2)   2 (Mod n)   2 (Mod n) where h denotes the angle between fM and fM , and tv   represents the angle between vector vN and the horizontal axis. Subsequently, we determine the maximum vector having the more power between the extracted vectors. Because the tilt of the textured surface is perpendicular to the direction in which the texels are most uniformly

H-K. Hong et al. / Pattern Recognition 32 (1999) 357—364

359

Fig. 2. Relation between spatial frequencies and vectors in the picture space. Fig. 3. 1-D projection of the texture image.
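As a rough illustration of this step, the following Python sketch estimates the tilt from the power spectrum. It is a simplification of Eqs. (1) and (2): instead of scoring pairs of candidate frequencies, it keeps only the single strongest local-mean-corrected peak and returns the direction perpendicular to it as the tilt; the image `img`, the function name, and the 7 x 7 neighbourhood are our own assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def estimate_tilt(img):
    """Simplified Fourier step: dominant periodic frequency -> tilt (degrees)."""
    M = img.shape[0]                                     # assumes a square grayscale image
    P = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2   # power spectrum P(u, v)
    P[M // 2, M // 2] = 0.0                              # suppress the DC term
    P_local = P - uniform_filter(P, size=7)              # local-mean correction, cf. P'(u, v)
    peaks = (P == maximum_filter(P, size=7)) & (P_local > 0)
    us, vs = np.nonzero(peaks)                           # candidate periodic frequencies
    k = np.argmax(P_local[us, vs])                       # keep only the strongest candidate
    fu, fv = us[k] - M // 2, vs[k] - M // 2              # frequency vector (centred coordinates)
    freq_angle = np.degrees(np.arctan2(fu, fv))          # its direction from the horizontal axis
    return (freq_angle + 90.0) % 180.0                   # tilt taken perpendicular to it
```

A full implementation would evaluate Eq. (1) for every candidate pair and keep the pair with the highest score before choosing the maximum periodicity vector.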

distributed [12], we can obtain the tilt direction, which is perpendicular to the obtained maximum vector. 2.2. Obtaining of 1-D projection information

2.3. 3-D analysis of textures and rearrangement

In order to analyze the arrangement and position variations of the texels caused by projective distortion, we use 1-D projection information. This method extracts the spatial arrangement information of structural textures [11]. For a 2-D distribution F(x, y) of the texture, the projection value at distance R from the origin is the integral of F along the corresponding line, as shown in Fig. 3. Eq. (3) computes the 1-D projection information:



F(R)"

F(x, y)d(x cos h#y sin h!R) dx dy,

where h is the obtained tilt direction. The obtained local maximum positions of F(x, y) are used to compute the slant of the textured surface.

(3)
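A minimal sketch of this projection, assuming `img` is a grayscale NumPy array and `tilt_deg` the tilt direction in degrees (the discretisation into unit-width bins is our own choice):

```python
import numpy as np

def project_1d(img, tilt_deg):
    """Discrete version of Eq. (3): sum image intensity over lines
    perpendicular to the tilt direction, giving a profile F(R)."""
    h, w = img.shape
    tau = np.radians(tilt_deg)
    y, x = np.mgrid[0:h, 0:w]
    R = x * np.cos(tau) + y * np.sin(tau)    # coordinate of each pixel along the tilt direction
    bins = np.arange(R.min(), R.max() + 1.0)
    F, _ = np.histogram(R.ravel(), bins=bins, weights=img.ravel())
    return bins[:-1], F                      # texel positions = local maxima of F
```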

2.3. 3-D analysis of textures and rearrangement

Using the obtained tilt direction and projection information, we can compute the slant of the textured surface [13]. Fig. 4 shows the model of perspective projection. As shown in Fig. 4, the $V_n$ are points on the image plane, located on the determined tilt line, obtained by perspective projection of the texels:

Fig. 4. A model of perspective projection.

$$V_1 = f\,\frac{-l_1 \cos\sigma}{f + d - l_1 \sin\sigma}, \qquad V_2 = f\,\frac{l_2 \cos\sigma}{f + d + l_2 \sin\sigma},$$

$$V_3 = f\,\frac{(l + l_2) \cos\sigma}{f + d + (l + l_2) \sin\sigma}, \qquad V_4 = f\,\frac{(2l + l_2) \cos\sigma}{f + d + (2l + l_2) \sin\sigma}, \quad (4)$$

$$\tan\sigma = f\,\frac{3V_2 - 2V_1 - V_4}{V_1 V_2 + 2 V_2 V_4 - 3 V_1 V_4}, \quad (5)$$

$$\sigma = \tan^{-1}\!\left[\frac{f}{2}\left(\frac{1}{V_1} + \frac{1}{V_2}\right)\right], \quad (6)$$

where f is the focal length, d is the distance from the image plane to the textured surface, l (= l_1 + l_2) is the distance between texels, and $\sigma$ is the slant of the textured surface. Solving Eq. (4) for $\sigma$, we can derive Eq. (5). In Fig. 4, the origin of the coordinate system is the lens center and the optical axis is perpendicular to the image plane. By using only four points located in the positive and negative directions on the image plane, the proposed algorithm can compute the slant, the actual 3-D depth, and the distance between the texels. In case $l_1 = l_2$, Eq. (6) can compute the slant from only one point in each of the positive and negative directions. In this paper, the obtained slant ($\sigma$) and tilt ($\tau$) are used to visualize the surface orientation more intuitively.
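The slant recovery itself is a one-line computation once the texel positions along the tilt line are known. The sketch below assumes the point configuration of Eq. (4), with V1 on the negative side of the image centre and V2, V4 on the positive side (function names are ours):

```python
import numpy as np

def slant_from_texels(V1, V2, V4, f):
    """Eq. (5): slant (degrees) from three texel image coordinates whose
    surface positions are -l1, l2 and 2l + l2 (l = l1 + l2), cf. Fig. 4."""
    tan_sigma = f * (3*V2 - 2*V1 - V4) / (V1*V2 + 2*V2*V4 - 3*V1*V4)
    return np.degrees(np.arctan(tan_sigma))

def slant_symmetric(V1, V2, f):
    """Eq. (6), valid when l1 = l2: one texel on each side of the centre."""
    return np.degrees(np.arctan(0.5 * f * (1.0 / V1 + 1.0 / V2)))
```

In our reading of the model of Fig. 4, both formulas follow from the perspective relation V = f s cos(sigma) / (f + d + s sin(sigma)) for a texel at signed surface distance s from the point where the optical axis meets the surface.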





Therefore, Eq. (7) computes a unique gradient (p, q) from the obtained slant and tilt of the surface; in case $\tau$ is 90 degrees, p is zero:

$$p = \pm\frac{\tan\sigma}{\sqrt{1 + \tan^2\tau}}, \qquad q = \pm\tan\tau\,\frac{\tan\sigma}{\sqrt{1 + \tan^2\tau}}. \quad (7)$$

At the final step, to visualize the obtained surface orientation with ease, we rearrange the textured surface. For this purpose, we compute the vanishing point, which is the intersection between the tilt line and the vanishing line. Eq. (8) defines the vanishing line, the locus of vanishing points $(u_0, v_0)$ for all vectors on the surface, where f is the focal length:

$$p u_0 + q v_0 = -f. \quad (8)$$
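A small sketch of this step, assuming the tilt line passes through the image centre and taking the positive branch of Eq. (7) (written in the equivalent form p = tan(sigma) cos(tau), q = tan(sigma) sin(tau)):

```python
import numpy as np

def gradient_and_vanishing_point(slant_deg, tilt_deg, f):
    """Eqs. (7)-(8): surface gradient (p, q) and the vanishing point as the
    intersection of the tilt line through the image centre with the
    vanishing line p*u + q*v = -f."""
    sigma, tau = np.radians(slant_deg), np.radians(tilt_deg)
    p = np.tan(sigma) * np.cos(tau)          # equals +-tan(sigma)/sqrt(1 + tan^2(tau))
    q = np.tan(sigma) * np.sin(tau)          # equals +-tan(tau)*tan(sigma)/sqrt(1 + tan^2(tau))
    # Tilt line: (u, v) = t * (cos(tau), sin(tau)); substituting into Eq. (8)
    # gives t * (p*cos(tau) + q*sin(tau)) = -f, i.e. t = -f / tan(sigma).
    t = -f / (p * np.cos(tau) + q * np.sin(tau))
    return (p, q), (t * np.cos(tau), t * np.sin(tau))
```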

In order to obtain the lines converging to the vanishing point, we use the fan-beam projection method. By rotating a point light source, this projection obtains the perspective-projected information [14]. The proposed algorithm locates the point source at the computed vanishing point and determines the local maximum positions of the projection information. Then, we determine the converging lines that link the vanishing point with the obtained maximum points.

Fig. 5. The real tile image and the experimental results of D102 Cane. (a) the tile image; (b) the perspective transformed D102 Cane; (c) the tilt line perpendicular to the vector having the maximum power; (d) 1-D projection information; (e) the texel arrangement obtained by the tilt line and the projection information; and (f) the textured surface rearranged with the perpendicular and converging lines.


Finally, we rearrange the textured surface with the following lines: the first set, obtained from the 1-D projection information, are perpendicular to the tilt line; the second set, obtained by fan-beam projection, converge to the vanishing point. The texels are located at the intersections between the perpendicular lines and the converging lines.
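One possible reading of the fan-beam step is sketched below: rays are cast from the vanishing point, the image intensity is summed along each ray, and the local maxima of the resulting angular profile select the converging lines. The sampling scheme and parameter names are our own assumptions; `vp` is the vanishing point in pixel coordinates of `img`.

```python
import numpy as np

def fan_beam_profile(img, vp, n_angles=360, n_samples=400):
    """Sum image intensity along rays cast from the vanishing point vp;
    local maxima of the returned profile indicate converging texel lines."""
    h, w = img.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    reach = np.hypot(h, w) + np.hypot(vp[0], vp[1])         # long enough to cross the image
    ts = np.linspace(0.0, reach, n_samples)
    profile = np.zeros(n_angles)
    for i, a in enumerate(angles):
        xi = np.round(vp[0] + ts * np.cos(a)).astype(int)   # sample points along the ray
        yi = np.round(vp[1] + ts * np.sin(a)).astype(int)
        inside = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
        profile[i] = img[yi[inside], xi[inside]].sum()      # no path-length normalisation
    return angles, profile
```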

3. Comparison of previous methods

By demonstrating the experimental results on structural textures, we have ascertained that the proposed algorithm can compute the 3-D surface orientation with a small error. In addition, we have compared the results of our algorithm with those of the morphological method and the aggregation transform. Fig. 5 shows the tile image and the experimental results of the perspective transformed D102 Cane [8]. Fig. 5c and d indicate the tilt line perpendicular to the vector having the maximum power and the 1-D projection information, respectively.


Fig. 5e shows the texel arrangement obtained by the tilt and the projection information, and Fig. 5f displays the textured surface rearranged with the perpendicular and converging lines. Figs. 6 and 7 show the results for the perspective transformed D20 French canvas and for the artificial image.

3.1. The morphological method

The morphological method identifies the centroids and sizes of projectively distorted texels by recursive erosions, according to the relative distances from the viewer to the textured surface. Then, considering the sizes of the structuring elements and the obtained sequence of centroids, we can segment the textured surface into several sub-regions. Using the major axes and the sizes of the structuring elements in the segmented regions, we can compute a vanishing point and obtain the 3-D surface orientation [7]. Fig. 8a and b show the tile image of a wall and the centroids obtained by the recursive erosions; +, □, ×, and ○ indicate centroids detected by 2-5 erosions, respectively. The texture image can be segmented into several sub-regions.

Fig. 6. The experimental results of D20 French canvas. (a) the perspective transformed D20 French canvas; (b) the texel arrangement obtained by the tilt line and the projection information; and (c) the textured surface rearranged with the perpendicular and converging lines.

Fig. 7. The experimental results of the artificial image. (a) the perspective transformed artificial image; (b) the texel arrangement obtained by the tilt line and the projection information; and (c) the textured surface rearranged with the perpendicular and converging lines.


Fig. 8. The experimental results of the tile image of wall. (a) the tile image; (b) determined centroids; (c) major axes of sub-regions; (d) perpendicular line and intersections; (e) major axes corrected with a mean slope; and (f) rearranged lines.

Fig. 9. Edges of texels obtained using a morphological operator (erosion): (a) the tile image; (b) the artificial image; and (c) the perspective transformed D102 Cane.

We then obtain the major axis of each segmented region using the centroids, in order to account for irregular texture distribution. Fig. 8c shows the major axes of the segmented regions, and Fig. 8d displays the line perpendicular to the mean slope of the major axes together with the intersections between this perpendicular line and the major axes. The major axes corrected with the mean slope and the rearranged lines are shown in Fig. 8e and f, respectively. As shown in Table 1, the morphological method has some errors, which are caused by the following reasons. First, the procedures of region segmentation and major-axis computation give rise to some errors. Furthermore, in a scene image composed of complex texels, it is not easy to find the centroids of the texels exactly.
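The recursive-erosion step can be sketched as below; the paper chooses structuring elements similar in shape and size to the texel, whereas this sketch assumes a generic 3 x 3 element and a binarised texel image `binary` (names and parameters are ours):

```python
import numpy as np
from scipy import ndimage

def erosion_centroids(binary, max_iters=5):
    """Erode the binary texel image repeatedly and record, for each erosion
    count, the centroids of the blobs that survive (cf. Fig. 8b)."""
    current = binary.astype(bool)
    element = ndimage.generate_binary_structure(2, 1)   # assumed 3x3 cross element
    results = []
    for n in range(1, max_iters + 1):
        current = ndimage.binary_erosion(current, structure=element)
        labels, num = ndimage.label(current)
        if num == 0:
            break                                        # nothing survives further erosion
        centroids = ndimage.center_of_mass(current, labels, list(range(1, num + 1)))
        results.append((n, centroids))
    return results
```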

3.2. The aggregation transform method

The aggregation method transforms local edge elements into the gradient space, and thus directly indicates surface orientation [2]. This Hough-like transform can determine the location of local or global vanishing points or lines efficiently. Fig. 9 shows the edges of texels obtained using a morphological operator (erosion).


Table 1. The experimental results of the proposed algorithm and the previous methods (unit: degree). (A) aggregation transform; (B) morphological method; (C) proposed method.

Texture              Measure   Real value   (A)      (B)      (C)      Error (A)   Error (B)   Error (C)
D102 Cane            slant     40.00        14.30    43.68    39.00    25.70       3.68        1.00
D102 Cane            tilt      90.00        51.48    92.80    90.02    38.52       2.80        0.02
D20 French canvas    slant     40.00        49.52    51.74    42.04    9.52        11.74       2.04
D20 French canvas    tilt      60.00        44.84    81.43    59.01    15.16       21.43       0.99
Artificial image     slant     50.00        51.14    48.67    49.30    1.14        1.33        0.70
Artificial image     tilt      150.00       150.95   146.72   150.32   0.95        3.28        0.32
Tile image           slant     40.00        35.45    42.48    40.19    4.55        2.48        0.19
Tile image           tilt      90.00        89.93    90.63    90.10    0.07        0.63        0.10

When the obtained edges of the texture image consist only of lines converging to the vanishing point and parallel lines (Fig. 9a and b), this method makes a 3-D analysis with small errors. However, for an image containing few straight-line components or complex edges, it is difficult to determine a precise intersection point in the gradient space. Therefore, the aggregation transform method could not provide good performance in this case (Fig. 9c).

3.3. Comparison of previous methods

In Table 1, we have compared the experimental results of the proposed method with those of the two previous approaches. When the texel edges contain few or complex linear components (Fig. 5b), it is difficult to segment the obtained edges into the two components accurately. Therefore, Table 1 shows that the proposed algorithm (C) is more effective than the aggregation transform (A). The morphological method (B), which uses the centroids of the texels and the segmented regions, showed better results on Fig. 5b than method A. By selecting a structuring element similar to the texel in shape and size, method B can obtain more precise 3-D information; however, prior knowledge of the size and shape of the texel is necessary in this method. Method B can obtain the 3-D orientation of textures in which the texels are distributed somewhat irregularly. However, when the texel sizes are very different (Fig. 6a), the apparent variations of the texels are not directly dependent on the relative 3-D depth of the textured surface, and it is hard for the morphological method to make a precise region segmentation of the textured surface. Furthermore, when the shapes and sizes of the texels (Fig. 6a) are complex, we cannot identify precise centroids.


Therefore, the results of the B method show large errors in this case. From the experimental results, we have ascertained that the proposed algorithm can make a precise 3-D analysis of structural textures.

4. Conclusions

In this paper, we propose a new algorithm that estimates the 3-D orientation of a structural textured surface. By using structural approaches, which extract and describe the spatial arrangement information of the texels, the presented method can make a precise 3-D analysis of the texture variations due to projective distortion. Our integrated approach to structural textures is similar to human perception. In the experimental results on structural textures, we have ascertained that the proposed method is more effective than the morphological method and the aggregation transform. In order to make a 3-D analysis of more diverse textures, we will combine the structural approaches with statistical ones, which describe textures by statistical rules governing the distribution and relation of gray levels. In addition, when the texture image contains a curved surface or more than two planar surfaces, further study will include the computation of the orientation of each surface.

References

[1] J.J. Gibson, The Perception of the Visual World, Houghton Mifflin, Boston, MA, 1950.
[2] J.R. Kender, Shape from texture: an aggregation transform that maps a class of textures into surface orientation, Proc. 6th IJCAI, Tokyo, August 1979, pp. 475-480.
[3] A.P. Witkin, Recovering surface shape and orientation from texture, Artificial Intell. 17 (1981) 17-45.


[4] D. Blostein, N. Ahuja, Shape from texture: integrating texture-element extraction and surface estimation, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-11 (1989) 1233-1251.
[5] L.G. Brown, H. Shvayster, Surface orientation from projective foreshortening of isotropic texture autocorrelation, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-12 (1990) 584-588.
[6] B.J. Super, A.C. Bovik, Planar surface orientation from texture spatial frequencies, Pattern Recognition 28 (1995) 729-743.
[7] J.S. Kwon, H.K. Hong, J.S. Choi, Obtaining a 3-D orientation of projective textures using a morphological method, Pattern Recognition 29 (1996) 725-732.
[8] P. Brodatz, Textures: A Photographic Album for Artists and Designers, Dover, Mineola, NY, 1966.
[9] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Addison-Wesley, Reading, MA, 1992.
[10] T. Matsuyama, S. Miura, M. Nagao, Structural analysis of textures by Fourier transformation, Comput. Graphics Image Process. 24 (1983) 347-362.
[11] H.B. Kim, R.H. Park, Extracting spatial arrangement of structural textures using projection information, Pattern Recognition 25 (1992) 237-245.
[12] D. Marr, Vision, W.H. Freeman and Company, San Francisco, CA, 1980.
[13] H.K. Hong, Y.C. Myung, J.S. Choi, 3-D analysis of textures using structural information, Proc. 3rd ICIP, Lausanne, September 1996, pp. 161-164.
[14] T.Y. Young (Ed.), Handbook of Pattern Recognition and Image Processing: Computer Vision, Academic Press, San Diego, CA, 1994.

About the Author—HYUN-KI HONG received the B.S. and M.S. degrees from Chung-Ang University, Seoul, Korea, both in electronic engineering, in 1993 and 1995, respectively. He is currently working toward the Ph.D. degree in the Department of Electronic Engineering, Chung-Ang University. His research interests include computer vision, image processing, mathematical morphology, and electro-optical systems.

About the Author—YUN-CHAN MYUNG received the B.S. degree from Chung-Ang University, Seoul, Korea, in electronic engineering, in 1996. He is currently working toward the M.S. degree in the Department of Electronic Engineering, Chung-Ang University. His research interests include computer vision and image processing.

About the Author—JONG-SOO CHOI received the B.S. degree from Inha University, Inchon, Korea, the M.S. degree from Seoul National University, Seoul, Korea, and the Ph.D. degree from Keio University, Yokohama, Japan, all in electrical engineering, in 1975, 1977, and 1981, respectively. He joined the faculty at Chung-Ang University in 1981, where he is now a Professor in the Department of Electronic Engineering. His current research interests are in computer vision, image coding, and electro-optical systems.