
International Congress Series 1301 (2007) 97 – 101

www.ics-elsevier.com

Contour integration based on the characteristics of edge elements

Yu Ma, Xiaodong Gu, Yuanyuan Wang ⁎

Department of Electronic Engineering, Fudan University, China

Abstract. The human brain can recognize objects well even when their contours are discontinuous. To mimic this ability, a hierarchical contour integration method based on the characteristics of edge elements is proposed. The method consists of anisotropic edge extension and statistics-based contour line connection. Experimental results show that the method can effectively connect discontinuous edge elements into relatively complete contours. © 2007 Elsevier B.V. All rights reserved.

Keywords: Contour integration; Anisotropic extension

1. Introduction

Contour integration plays an important role in perception and recognition. To interpret images correctly, local contour elements belonging to the same physical contour should be grouped together. In complex or noisy backgrounds, the detection of edge elements may be disturbed, so that parts of the edge elements become discontinuous or are even lost. Furthermore, an object's contour sometimes contains points that are not among the detected edge elements, e.g. "illusory contours" (also known as subjective contours). As a result, the whole contour of an object is difficult to obtain under many conditions in computer vision. In the visual system of the brain, however, there are mechanisms that group ambiguous contours very well, so that objects can be recognized even when a large part of their contours is missing. Although this remarkable mechanism of the brain is not exactly known, it has inspired many models of contour integration.

⁎ Corresponding author. Postal address: Department of Electronic Engineering, Fudan University, Shanghai 200433, China. E-mail address: [email protected] (Y. Wang).

0531-5131/ © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.ics.2006.11.008


Fig. 1. (a) A simple simulated contour, (b) discontinuous edges, (c) edge orientations, (d) edge extension, (e) results of edge extension, (f) results of contour line connection.

Recent models of contour integration [1–4] have been based primarily on the Gestalt principles, such as "good continuation" and "proximity". Most of these models contain three parts: extraction of edge elements, connection of edge elements into contour lines, and grouping of contour lines. These models can accomplish contour integration in some specific circumstances, but their performance declines quickly when the contour lines become more fragmented. Recently, statistical methods have been used to investigate contour integration and perception [1,3]. Experimental results have shown that the statistical characteristics of edge elements are very useful in contour detection. However, such methods require a large amount of computation to obtain statistical information for each pair of edge elements. In fact, several types of discontinuous edge elements can be grouped more simply to form long contour lines.

In this paper, a hierarchical contour integration method that considers both the local characteristics and the statistical co-occurrence information [3] of edge elements is proposed. First, the orientations of discontinuous edge elements are detected by oriented filtering. Second, edge elements are extended according to their orientations to form more continuous contour lines. Third, co-occurrence information of the contour lines is calculated, and the contour lines are selectively connected based on this information. The method is consistent with the hierarchical processing mechanism of the visual system and performs well in noisy backgrounds, without high complexity or heavy computation.

2. Method

In this method, simple contour lines as shown in Fig. 1(a) are used first to observe the effect. Many of the edge elements are randomly removed to simulate discontinuity; the remaining edge elements are shown in Fig. 1(b). The goal is to connect the edge elements into a more complete contour while adding as few redundant or false edge elements as possible. In addition, natural images are also used to test the method.
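As a minimal Python sketch of the simulation setup just described (the binary edge-map representation, the removal fraction, and the function name are illustrative assumptions, not part of the paper):

    import numpy as np

    def make_discontinuous(edge_map, keep_fraction=0.5, seed=0):
        # Randomly delete edge pixels from a binary edge map (1 = edge pixel)
        # to simulate discontinuous contours such as Fig. 1(b).
        # keep_fraction is a hypothetical choice; the paper removes "many parts"
        # of the edge elements at random.
        rng = np.random.default_rng(seed)
        out = edge_map.copy()
        ys, xs = np.nonzero(out)                           # coordinates of edge pixels
        n_remove = int((1.0 - keep_fraction) * len(ys))
        drop = rng.choice(len(ys), size=n_remove, replace=False)
        out[ys[drop], xs[drop]] = 0                        # remove the chosen pixels
        return out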


Fig. 2. Three parameters for co-occurrence information.

2.1. Extension of edge elements according to their orientation information

For natural images, edge elements are first extracted using a common edge detection algorithm (this step can be skipped for the simulated contours). More than half of the edge elements are then removed randomly, so that the contour becomes more fragmented. The image consisting of the remaining points is filtered with oriented log-Gabor functions [3]. The filter parameters are set to the average values measured in the primate visual cortex [3,5], which mimics the visual system well. Here, filters in eight orientations (at intervals of 22.5°) are used to filter the edge image, and the most salient orientation is recorded for each edge element. Experimental results are shown in Fig. 1(c), in which different orientations are displayed with different gray levels for direct observation.

The edge elements are divided into eight classes according to their orientations, so that the edge image can be split into eight layers, each containing only one class of edge element. The extension is performed in each layer using extension filters with different preferred orientations. For example, the 45° layer is convolved with a 45° filter, which mainly increases the values of the pixels adjacent to an edge element along the 45° orientation. The amount of the increase is determined by the coefficients of the corresponding oriented filter. When the extension has been finished in all layers, the values of all eight layers are summed, and pixels with values larger than a threshold are labeled as "extended edge elements" (such as the added pixels shown in Fig. 1(d)). Clearly, a pixel is more likely to be labeled as an extended edge element if it lies along the orientations of two or more edge elements. This process makes the edge elements more continuous and decreases the number of contour lines.
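A minimal Python sketch of one extension pass follows. The short oriented line kernels, the threshold value, and the use of plain convolution in place of the paper's log-Gabor filtering are simplifying assumptions made here for illustration; the per-pixel orientation indices are taken to come from the oriented filtering step described above.

    import numpy as np
    from scipy.ndimage import convolve

    N_ORI = 8                                   # eight orientations, 22.5 deg apart

    def oriented_kernel(angle_deg, size=7):
        # Small kernel containing a short line segment through the centre at the
        # given orientation (a stand-in for the paper's oriented extension filters).
        k = np.zeros((size, size))
        c = size // 2
        for t in np.linspace(-c, c, 4 * size):
            y = int(round(c - t * np.sin(np.deg2rad(angle_deg))))
            x = int(round(c + t * np.cos(np.deg2rad(angle_deg))))
            if 0 <= y < size and 0 <= x < size:
                k[y, x] = 1.0
        return k / k.sum()

    def extend_once(edges, orientations, threshold=0.15):
        # One anisotropic extension step.
        # edges        : binary edge map (1 = edge pixel)
        # orientations : per-pixel orientation index 0..7 from oriented filtering
        # threshold    : assumed value; pixels whose summed response exceeds it
        #                become "extended edge elements"
        total = np.zeros(edges.shape, dtype=float)
        for i in range(N_ORI):
            layer = (edges == 1) & (orientations == i)     # one orientation layer
            k = oriented_kernel(i * 180.0 / N_ORI)
            total += convolve(layer.astype(float), k)      # spread along preferred orientation
        extended = (total > threshold) & (edges == 0)      # newly added pixels only
        return edges | extended.astype(edges.dtype)

Summing the layer responses before thresholding means that a pixel lying along the preferred orientations of several nearby edge elements accumulates a larger value, which is the behaviour described above.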

Fig. 3. (a), (e) Original images, (b), (f) edge elements, (c), (g) discontinuous edge elements, (d), (h) final integrated contours.


After one extension pass, the extended pixels and the original edge elements are combined and a new extension pass is performed. This procedure is repeated several times until a more complete contour, or set of contour lines, is obtained. The convergence condition is that the number of contour lines no longer decreases after an extension step. Fig. 1(e) shows the result after the algorithm has converged; the discontinuous edge elements are effectively integrated by the extension.

2.2. Connection of contour lines based on the edge co-occurrence information

After the extension, the discontinuous edge elements have formed more complete contour lines. The remaining task is to selectively connect these unclosed contour lines. To reduce the amount of statistical computation, only the end points of the contour lines are extracted and the other points are ignored. The positions of the end points are recorded, and their orientations are calculated using the method described above. Three parameters are then calculated for each pair of end points: the distance d, the azimuth φ from one element to the other, and the orientation difference θ, as shown in Fig. 2. These parameters reflect several aspects of the Gestalt principles, including proximity and collinearity. From these three parameters, the co-occurrence probability of each pair is obtained according to the statistical results in [3], and this probability decides whether the two end points should be connected (a short illustrative sketch of this pairing step is given before the reference list). A detailed description of the co-occurrence probability can be found in [3]. With all possible connections completed, a more closed contour is obtained, as shown in Fig. 1(f).

3. Results and discussion

Several natural gray-level images (Fig. 3(a), (e)) are tested with this method. Fig. 3(b), (f) shows the edge elements extracted by edge detection, and Fig. 3(c), (g) shows the results of partially eliminating these edge elements. In the brain, discontinuous edge elements such as those in Fig. 3(c), (g) can be interpreted as continuous ones, such as those shown in Fig. 3(b), (f) respectively. With the proposed method, similar results are obtained, as shown in Fig. 3(d), (h).

In this method, contour integration is performed by combining two procedures: edge extension and contour line connection. Experimental results show that the method can group discontinuous edge elements into integrated contours without heavy computation. The method also has some disadvantages. For example, false connections may increase if the edge elements are either too dense or too sparse. Furthermore, whether a similar mechanism exists in the brain is still under research; it will be an interesting problem to investigate.

Acknowledgement

This work is supported by the National Basic Research Program of China (2005CB724303).
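The following short Python sketch illustrates the end-point pairing step of Section 2.2. The angle conventions, the threshold decision rule, and the cooccurrence_prob lookup function are assumptions standing in for the co-occurrence statistics of [3].

    import numpy as np

    def pair_parameters(p1, o1, p2, o2):
        # Geometric parameters for one pair of contour-line end points, as in Fig. 2.
        # p1, p2 : (y, x) end-point positions; o1, o2 : orientations in degrees.
        # The angle conventions used here are assumptions, not taken from the paper.
        dy, dx = p2[0] - p1[0], p2[1] - p1[1]
        d = np.hypot(dy, dx)                                # distance d
        phi = (np.degrees(np.arctan2(-dy, dx)) - o1) % 360  # azimuth of p2 seen from p1
        theta = abs(o1 - o2) % 180                          # orientation difference
        return d, phi, theta

    def connect_end_points(end_points, cooccurrence_prob, prob_threshold=0.5):
        # end_points        : list of ((y, x), orientation) tuples
        # cooccurrence_prob : function (d, phi, theta) -> probability, a stand-in
        #                     for the statistics tabulated in [3]
        # prob_threshold    : assumed decision rule for whether to connect a pair
        links = []
        for i in range(len(end_points)):
            for j in range(i + 1, len(end_points)):
                (p1, o1), (p2, o2) = end_points[i], end_points[j]
                d, phi, theta = pair_parameters(p1, o1, p2, o2)
                if cooccurrence_prob(d, phi, theta) > prob_threshold:
                    links.append((i, j))                    # connect these two end points
        return links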


References

[1] J.H. Elder, R.M. Goldberg, Ecological statistics for the Gestalt laws of perceptual organization of contours, Journal of Vision 2 (2002) 324–353.
[2] D.J. Field, A. Hayes, R.F. Hess, Contour integration by the human visual system: evidence for a local association field, Vision Research 33 (1993) 173–193.
[3] W.S. Geisler, J.S. Perry, B.J. Super, D.P. Gallogly, Edge co-occurrence in natural images predicts contour grouping performance, Vision Research 41 (2001) 711–724.
[4] Z.P. Li, A neural model of contour integration in the primary visual cortex, Neural Computation 10 (1998) 903–940.
[5] W.S. Geisler, D.G. Albrecht, Visual cortex neurons in monkeys and cats: detection, discrimination, and identification, Visual Neuroscience 14 (1997) 897–919.