Computer Vision and Image Understanding, Vol. 76, No. 3, December 1999, pp. 259–266. Article ID cviu.1999.0806, available online at http://www.idealibrary.com

General Ribbons: A Model for Stylus-Generated Images

Elyse H. Milun and Deborah K. W. Walters¹
Department of Computer Science and Engineering, University at Buffalo, Buffalo, New York 14260

Yiming Li
Technology Group, Pershing, 19 Vreeland, Florham Park, New Jersey 07932

and Bemina Atanacio
School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213-3891

Received September 7, 1999; accepted September 27, 1999

General ribbons are presented as a mathematical model of stylus-generated images which is based on the image formation process. One purpose of the model is to provide a formal basis for the development of thinning algorithms for the stroke components of stylus-generated images. Before the stroke components can be thinned they must be segmented from the blob components of the image, which do not require thinning. The second purpose of the model is to provide a formal basis for the development of blob/stroke segmentation algorithms. Two blob/stroke segmentation algorithms based on the model are presented. © 1999 Academic Press

1. INTRODUCTION

Stylus-generated images have long been used to communicate information between humans. More recently, humans have desired to use stylus-generated images to communicate information to computers [7, 8, 11, 12, 17–19, 21]. There are two primary methods of drawing objects with a stylus: (a) moving the stylus along a path to generate stroke objects, where the information being transmitted lies primarily in the path taken by the stylus (Fig. 1a); and (b) using the stylus to fill in a region to form blob objects, where the information being transmitted lies primarily in the blob boundaries. In a given image both stroke and blob objects may be present. In the preliminary processing of such images it is often desirable to recover the stylus path information from the stroke objects and to recover the boundary information from the blob objects; thus a means of segmenting the strokes and blobs is necessary.

1 To whom correspondence should be addressed at Department of Computer Science and Engineering, 226 Bell Hall, University at Buffalo, Buffalo, New York 14260. Fax: (716) 645-3464. E-mail: [email protected].

In theory, thinning algorithms can be used to recover the stylus path information; however, existing thinning algorithms have two basic problems which make this difficult [16]. First, there is no formal definition of the operation of thinning, and thus no formal specification of the correct output of a thinning algorithm. This means there is no objective means of comparing the performance of thinning algorithms. Researchers often resort to demonstrating the performance of their algorithms by showing their results on a few images and arguing that the results “look good.”

The second problem with existing thinning algorithms is that they do not preserve geometric information. Many algorithms work well except in three situations: regions of high curvature, unattached ends of stroke objects, and regions where strokes or portions of strokes intersect (Fig. 1b). These problems are especially unfortunate since the more perceptually relevant information for human perception of stylus-generated images is found in regions of high curvature, at the unattached ends of stroke objects, and at intersections [26]. Current thinning algorithms do not preserve the information that has been found to be most important for human perception. This is not surprising, since the preservation of perceptually significant information has not been among the criteria for thinning algorithms. However, the performance of many computer vision algorithms which require thinning as an early stage of processing could be improved through the use of a thinning algorithm which preserved the perceptually relevant features [25].

This paper presents a mathematical model of stylus-generated images which satisfies the following goals: to enable the creation of formal definitions of stroke and blob objects, to segment blobs from strokes in images, to enable the specification of formal definitions for the desired output of thinning algorithms [20], and to enable the development of thinning algorithms [20] which preserve perceptually relevant geometric information and thus can be used to recover the stylus path information from stroke objects. A preliminary version of the General Ribbon model appears in [25].


FIG. 1. (a) Stroke object versus blob object. (b) Three areas where problems occur with existing thinning algorithms. The black lines are the medial axes of the figures. (c) For a given generator and ribbon, there can be many possible spines which could have generated the ribbon.

2. GENERAL RIBBONS AS MODELS OF STYLUS-GENERATED IMAGES

Stylus-generated images can be modeled by considering the geometry of the image formation process. Rosenfeld gives a comprehensive review of Blum and Nagel [3], Brooks [5], and Brady and Asada [4] ribbons, where a disk or a line-shaped generator is swept along a spine. If stylus cross sections were limited to just disks and lines, then these existing ribbons could be used to model stylus-generated images. A more complete solution is to define a new class of ribbons, general ribbons, where the generator can be any subset of the plane which is swept along a spine.

2.1. General Ribbon Defined

A generator, G, is defined as an open subset of the Euclidean plane ℝ². A spine, S, is defined as a closed subset of the Euclidean plane ℝ². A general ribbon, R, is defined as a set such that there exist an S and a G such that R = S ⊕ G, where A ⊕ G = ⋃_{g∈G} (A + g) and “+” is vector addition. The first part of Fig. 2a shows a generator, a spine, and the resulting ribbon.

General ribbons may be used to model stylus-generated images by assuming that the generator is the cross section of the stylus and that the spine is the path of the stylus. In some cases the model is exact, as in images generated using a computer drawing program. In other cases the model will only be an approximation, as in images generated using a pencil on paper. In general, the cross section of a stylus does not rotate as an image is drawn. For example, in calligraphy, where the stylus cross section is not rotationally symmetric, the stylus cross section remains fixed relative to the axes of the drawing plane.
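To make the construction concrete, the following is a minimal sketch of the dilation R = S ⊕ G on a discrete grid, with sets of (x, y) integer pairs standing in for subsets of the plane; the generator, spine, and function names here are illustrative choices of ours, not part of the model.

def dilate(A, G):
    """Minkowski sum A (+) G: the union over g in G of the translates A + g."""
    return {(ax + gx, ay + gy) for (ax, ay) in A for (gx, gy) in G}

# Example: a 3 x 3 square generator swept along a short horizontal spine.
generator = {(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
spine = {(x, 0) for x in range(10)}
ribbon = dilate(spine, generator)
print(len(ribbon))  # 36 pixels: the ribbon is a 12 x 3 bar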


FIG. 2. Example ribbon: (a) Dilation using generator G with spine S yields the general ribbon R. (b) The eroded ribbon, the potential spine, P ⊕ G, and D are all functions of R and G. (c) Minimal blob/stroke segmentation of the ribbon. (d) Generic blob/stroke segmentation of the ribbon.

3. STROKE VERSUS BLOB OBJECTS

The General Ribbon model applies equally well to both stroke and blob stylus-generated objects, but only stroke objects should be thinned by a thinning algorithm. Therefore a means of distinguishing between the two is necessary. The analysis of stroke and blob objects is based on the fact that general ribbons are created via dilation of a spine with a generator. For some simple stroke objects, the spine can be recovered from the ribbon by using dilation's opposite operation, erosion. Erosion, denoted ⊖, is defined as A ⊖ G = ⋂_{g∈G} (A − g), where “−” is vector subtraction. It is important to note that in general, erosion is not the inverse of dilation. The effect of applying erosion to a ribbon will, in general, not result in the original spine. For a given ribbon and generator there may be a number of spines which could have been used to generate the ribbon. Figure 1c shows a generator, a ribbon, and three possible spines which could have generated the ribbon. Thus, for a given R and G, it is in general not possible to find a unique S. However, it is possible to find the set which is the union of all possible spines of R by using erosion. This set is referred to as the eroded ribbon, E, and is defined as

E = R ⊖ G,

where R = S ⊕ G. Figure 2b shows the eroded ribbon, E, for the given ribbon and spine.

The set A is defined to be adjacent to the set B if and only if A ∩ B = ∅ and there exists a point p ∈ A s.t. all neighborhoods of p contain a point in B. ∂A is defined to be {a ∈ A | a is adjacent to (ℝ² − A)}. ∂A is therefore the boundary of set A. P is defined to be the potential spine of R if P = ∂E. Figure 2b shows the potential spine, P, for the given ribbon and spine.

3.1. Stroke Definition

Stroke and blob objects are formally defined based on a comparison of a ribbon with the dilation of its potential spine: if the ribbon can be reconstructed from the dilation of its potential spine, then the ribbon is a stroke. To aid in this comparison it is useful to define a difference, D = R − (P ⊕ G), where “−” is set subtraction. Figure 2b shows the difference, D, for the given ribbon and spine. R is defined as a stroke object if D = ∅. A stroke object is thus a ribbon which can be completely reconstructed by dilating its potential spine. This definition appears to agree with human perception as to what constitutes a stroke object.
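The following sketch expresses these definitions in the discrete setting of Section 5 (pixels as (x, y) pairs, 8-adjacency for the boundary): it computes the eroded ribbon E, the potential spine P, the difference D, and the stroke test D = ∅. The helper names and test shapes are our own.

def dilate(A, G):
    """Minkowski sum A (+) G."""
    return {(ax + gx, ay + gy) for (ax, ay) in A for (gx, gy) in G}

def erode(A, G):
    """A (-) G: keep p exactly when the translate p (+) G fits inside A."""
    return {p for p in A if all((p[0] + gx, p[1] + gy) in A for (gx, gy) in G)}

N8 = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def boundary(A):
    """Points of A that are 8-adjacent to the complement of A."""
    return {p for p in A if any((p[0] + dx, p[1] + dy) not in A for dx, dy in N8)}

def stroke_test(R, G):
    """Return (E, P, D, is_stroke): R is a stroke exactly when D is empty."""
    E = erode(R, G)           # eroded ribbon: union of all possible spines
    P = boundary(E)           # potential spine
    D = R - dilate(P, G)      # difference set
    return E, P, D, not D

# A thin bar is fully reconstructed from its potential spine (D empty, a stroke);
# a filled square is not (D nonempty, blob-like).
G = {(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
bar = dilate({(x, 0) for x in range(10)}, G)
square = {(x, y) for x in range(12) for y in range(12)}
print(stroke_test(bar, G)[3], stroke_test(square, G)[3])  # True False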


3.2. Blob Definitions

A blob object cannot simply be defined as a ribbon where D ≠ ∅, because humans may perceive a single ribbon to have both blob and stroke components, as in Fig. 2. The solution is to allow a ribbon to be subdivided into blob and stroke objects as necessary. Thus a blob will be defined in such a way that a given ribbon with D ≠ ∅ may or may not be a blob. The following are two possible definitions of a blob which satisfy this constraint.

3.2.1. Minimal Blob

The following definition takes a blob to be the smallest possible ribbon that produces the difference set D(R). R is a minimal blob object if and only if D ≠ ∅ and there does not exist a spine, S_i, with ribbon R_i = S_i ⊕ G and difference D_i, such that (1) R_i ⊂ R and (2) D_i = D. Though this definition is mathematically satisfying, it does not always agree with human perception. For example, while humans interpret the ribbon in Fig. 3a as being a single blob, by the minimal blob definition it is a blob and several small strokes.

3.2.2. Generic Blob

The lack of agreement with human perception, along with a problem of segmenting ribbons that were created with disjoint generators (see Section 4.1), led to the creation of a second blob definition. R is a generic blob object if and only if: (1) D ≠ ∅; (2) P is a set of embedded circles; and (3) for each connected component, C, of E − P, there exists an e ∈ C such that (e ⊕ G) ∩ D ≠ ∅. By this definition, the class of ribbons exemplified by the ribbon in Fig. 3a are interpreted as single blobs. Therefore the definition is in closer agreement with human perception. However, given a ribbon perceived as a blob by humans, where P consists of both embedded circles and short attached embedded lines, the defined blob may not agree with human perception, as shown in Fig. 3b. So while the generic blob definition is in closer agreement with human perception, perfect agreement may require postprocessing.

4. SEPARATING STROKE OBJECTS FROM BLOB OBJECTS

The General Ribbon model and the above definitions of stroke and blob objects make it possible to create algorithms for separating the stroke and blob objects in a stylus-generated image. The algorithms take as input a ribbon, R, and a generator, G, and then compute two ribbons: R_b, the blob segment, which contains the blob objects of R; and R_s, the stroke segment, which contains the stroke objects of R.

FIG. 3. Perceptual blobs: (a) segmented into a blob and strokes using the minimal and generic blob definitions; (b) segmented into a blob and small strokes using the minimal and generic blob definitions; (c) an image which is a union of blobs and strokes segmented by using the minimal and generic blob definitions. (d) Minimal segmentation fails for disjoint generators and generic segmentation succeeds.


FIG. 4. R − R_b is not necessarily a ribbon.

The criteria that the blob and stroke segments must satisfy are: (1) the blob segment is a blob object; (2) the stroke segment is a stroke object; (3) the image can be reconstructed from the union of the blob and stroke segments (R = R_b ∪ R_s); and (4) the erosion of the intersection of the blob and stroke segments is empty (E(R_b ∩ R_s) = ∅). (Criterion 4 expresses the idea that the area of the intersection of blob and stroke segments is not a valid general ribbon.)

The following definition of a blob segment satisfies these criteria and applies to both minimal and generic blobs. For a given R, R_b is defined as a blob segment if and only if: (1) if R_b ≠ ∅, then R_b is a blob object; and (2) D(R_b) = D(R), where D(R) is the difference set for the ribbon R.

One approach to extracting the stroke objects from an image would be to just subtract R_b from R. However, the resulting image may not be a general ribbon, as illustrated in Fig. 4. The following definition does satisfy the criteria. For a given R, R_s is defined as the stroke segment if and only if: (1) R_s is a stroke; (2) ∀ e ∈ E(R_s), e ⊕ G ⊄ R_b(R); and (3) R = R_s ∪ R_b.

Although these definitions deal with local occlusion (intersection) of stroke and blob objects, the stroke segments may not agree with human perception in the cases where a perceived stroke object appears to pass behind a blob object, as in Fig. 4. These are cases where the human percept may involve the higher level processing of the Gestalt principle of good continuation [15]. To achieve better agreement with human perception, algorithms based on the above definitions would also require additional processing based on Gestalt principles.

4.1. Minimal Blob/Stroke Segmentation

While a general algorithm for minimal blob segmentation has not been found, a segmentation algorithm has been developed by restricting the generator to be connected. A connected generator, G_c, is defined as an open, connected subset of the Euclidean plane ℝ². The development of the algorithm is based on dividing the eroded ribbon, E, into four subsets. (The following definitions apply to both G and G_c.) E_a is defined as {e ∈ E | (e ⊕ G) ∩ D ≠ ∅}. E_b is defined as {e ∈ E | (e ⊕ G) is adjacent to D}. E_c is defined as {e ∈ E | ((e ⊕ G) ∩ D) = ∅, (e ⊕ G) is not adjacent to D, and (e ⊕ G) ⊂ E_b ⊕ G}. E_d is defined as {e ∈ E | ((e ⊕ G) ∩ D) = ∅, (e ⊕ G) is not adjacent to D, and (e ⊕ G) ⊄ E_b ⊕ G}.
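The sketch below computes this four-way division of E in the discrete (8-adjacency) setting; the set representation and helper names are ours, and the inclusion in the definition of E_c is read as ordinary subset inclusion.

def dilate(A, G):
    return {(ax + gx, ay + gy) for (ax, ay) in A for (gx, gy) in G}

def erode(A, G):
    return {p for p in A if all((p[0] + gx, p[1] + gy) in A for (gx, gy) in G)}

N8 = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def boundary(A):
    return {p for p in A if any((p[0] + dx, p[1] + dy) not in A for dx, dy in N8)}

def adjacent8(A, B):
    """A is 8-adjacent to B: disjoint, and some point of A has an 8-neighbour in B."""
    return A.isdisjoint(B) and any((p[0] + dx, p[1] + dy) in B for p in A for dx, dy in N8)

def partition_eroded_ribbon(R, G):
    """Split E into E_a, E_b, E_c, E_d as defined above; the four sets partition E."""
    E = erode(R, G)
    P = boundary(E)                 # potential spine
    D = R - dilate(P, G)            # difference set
    Ea = {e for e in E if dilate({e}, G) & D}
    Eb = {e for e in E if adjacent8(dilate({e}, G), D)}
    EbG = dilate(Eb, G)
    rest = E - Ea - Eb              # (e (+) G) misses D and is not adjacent to it
    Ec = {e for e in rest if dilate({e}, G) <= EbG}
    Ed = rest - Ec
    return Ea, Eb, Ec, Ed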

There are two ways to compute R_b, as shown in Conjectures 1 and 2:

Conjecture 1. (E_a ∪ E_b) ⊕ G_c is a blob segment, R_b.

Conjecture 2. (E_b ⊕ G_c) ∪ D is a blob segment, R_b.

From Conjecture 1 and the criterion that R = R_b ∪ R_s one might imagine that the stroke segment could be computed as (E − (E_a ∪ E_b)) ⊕ G_c. However, while R_b = (E_a(R) ∪ E_b(R)) ⊕ G_c, the most that can be said about the erosion of each side of the equation is that E(R_b) ⊇ E_a(R) ∪ E_b(R). This is true because the dilation of E_c is a subset of R_b, yet if E_c is nonempty, then E_c ⊄ (E_a ∪ E_b). When is E_c nonempty? When the boundary of a blob segment includes a concavity it is possible that E_c(R) ≠ ∅. So to satisfy the criterion that E(R_b ∩ R_s) = ∅, the dilation of E_c cannot be in the stroke segment. This leads to the following conjecture.

Conjecture 3. E_d ⊕ G_c is a stroke segment, R_s.

An alternative computation for R_s is as follows.

Conjecture 4. {p ∈ P | p ⊕ G_c ⊄ R_b} ⊕ G_c is a stroke segment, R_s.

These conjectures can be used as the basis for minimal blob/stroke segmentation algorithms. The conjectures have been proved correct for disjoint generators by Cai and Meyer [6]. For images not containing regions of high curvature formed from connected generators the algorithm appears to work well (Fig. 3). While these algorithms have the advantage of being provable (they are mathematically correct), their results fail to agree with human perception in two important ways. First, since the algorithm finds the smallest possible blob, figures which have regions of high curvature yet are perceived by humans as single blobs will be divided into stroke and blob components (Fig. 3). Second, the algorithm can fail for disjoint generators when there is accidental alignment (Fig. 3d).

4.2. Generic Blob/Stroke Segmentation

A second segmentation algorithm was developed to overcome the problems found from using the minimal algorithm. As it applies to both disjoint and connected generators it is referred to as the generic blob/stroke segmentation algorithm.


To develop the segmentation algorithm, additional definitions are necessary. E_a′ is defined as {e ∈ (E − P) | ∃ a connected path within (E − P) from e to a point in E_a}. P′ is defined as {p ∈ P | p is adjacent to E_a′}. The following conjectures specify the generic blob/stroke segmentation algorithm.

Conjecture 5. (P′ ⊕ G) ∪ D is a blob segment, R_b.

Conjecture 6. (P − P′) ⊕ G is a stroke segment, R_s.

While these conjectures have not yet been proven, we have been unable to find counterexamples. Figures 3 and 5 show examples of applying the generic blob/stroke segmentation algorithm. The results of the generic algorithm are more in agreement with human perception than the results of the minimal algorithm. However, in regions of extremely high curvature (i.e., when the stroke has line components), the results of the algorithm may not agree completely with human perception. Some postprocessing may be necessary.

5. DISCRETE RIBBONS

While the basic concepts of general ribbons are applicable to both the continuous and the discrete space, the following definitions must be modified slightly for the discrete space. A generator, G, is defined as a finite subset of a rectangular image array, I. A connected generator, G_c, is defined as a finite, connected subset of a rectangular image array, I. A spine, S, is defined as a finite subset of a rectangular image array, I. The set A is defined to be 8-adjacent to the set B if and only if A ∩ B = ∅ and there exists a point p ∈ A s.t. the 8-neighborhood of p contains a point in B.

∂A is defined as {a ∈ A | a is 8-adjacent to (I − A)}. E_a′ is defined as {e ∈ (E − P) | ∃ an 8-connected path within (E − P) from e to a point in E_a}. An embedded circle is defined as a set such that there exists a path from any pixel back to itself, passing through each other pixel in the set exactly once.

The following new definitions are required. P_A is defined as {p ∈ P | p is 8-adjacent to E_a′}. P_B is defined as {p ∈ (P − P_A) | p is 8-adjacent to P_A}. The reduced set, P_D, is defined in two steps. P_C is defined as {p ∈ P_B | ∄ q ∈ (P − (P_A ∪ P_B)) s.t. q is 8-adjacent to p}. P_D is defined as {p ∈ P_C | p has at least two 8-neighbors in (P_A ∪ P_B)}. P′ is defined as P_A ∪ P_D.

5.1. The Discrete Blob/Stroke Segmentation Algorithms

The following are the discrete versions of the two continuous blob/stroke segmentation algorithms.

1. Discrete Minimal Blob/Stroke Algorithm. SB_M(R, G_c) = {R_Mb, R_Ms}

(a) E = R ⊖ G_c
(b) P = ∂E = {e ∈ E | e is 8-adjacent to the background}
(c) D = R − (P ⊕ G_c)
(d) E_b = {e ∈ E | (e ⊕ G_c) ⊂ (P ⊕ G_c) and (e ⊕ G_c) is 8-adjacent to D}
(e) R_Mb = (E_b ⊕ G_c) ∪ D
(f) R_Ms = ({p ∈ P | p ∉ E_b, (p ⊕ G_c) ⊄ R_Mb} ⊕ G_c)

Figures 2c, 3, and 5 show the results of applying the discrete minimal blob/stroke segmentation algorithm to various images.
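The sketch below transcribes these six steps directly, again representing R, G_c, and the computed sets as Python sets of (x, y) pixels; the function names are our own, "the background" in step (b) is read as the complement of E (i.e., ∂E), and the example shapes at the end are illustrative.

def dilate(A, G):
    return {(ax + gx, ay + gy) for (ax, ay) in A for (gx, gy) in G}

def erode(A, G):
    return {p for p in A if all((p[0] + gx, p[1] + gy) in A for (gx, gy) in G)}

N8 = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def adjacent8(A, B):
    return A.isdisjoint(B) and any((p[0] + dx, p[1] + dy) in B for p in A for dx, dy in N8)

def minimal_segmentation(R, Gc):
    E = erode(R, Gc)                                                 # (a)
    P = {e for e in E
         if any((e[0] + dx, e[1] + dy) not in E for dx, dy in N8)}   # (b) potential spine
    PG = dilate(P, Gc)
    D = R - PG                                                       # (c) difference set
    Eb = {e for e in E
          if dilate({e}, Gc) <= PG and adjacent8(dilate({e}, Gc), D)}  # (d)
    R_Mb = dilate(Eb, Gc) | D                                        # (e) blob segment
    R_Ms = dilate({p for p in P
                   if p not in Eb and not dilate({p}, Gc) <= R_Mb}, Gc)  # (f) stroke segment
    return R_Mb, R_Ms

# With a 3 x 3 generator, a thin bar is returned as a pure stroke segment and a
# filled square as a pure blob segment.
Gc = {(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
bar = dilate({(x, 0) for x in range(10)}, Gc)
square = {(x, y) for x in range(12) for y in range(12)}
print(minimal_segmentation(bar, Gc)[0] == set())     # True: no blob part
print(minimal_segmentation(square, Gc)[1] == set())  # True: no stroke part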

FIG. 5. Minimal and generic blob/stroke segmentation of one of the test images.


2. Discrete Generic Blob/Stroke Algorithm. SB_G(R, G_c) = {R_Gb, R_Gs}

(a) Steps (a) and (b) from the discrete minimal algorithm
(b) P = ∂E = {e ∈ E | e is 8-adjacent to the background}
(c) D = R − (P ⊕ G)
(d) E_a = {e ∈ E | (e ⊕ G) ∩ D ≠ ∅}
(e) E_a′ = E_a filled to P within (E − P) using 8-connectivity
(f) P_A = {p ∈ P | p is 8-adjacent to E_a′}
(g) P_B = {p ∈ (P − P_A) | p is 8-adjacent to P_A}
(h) P_C = {p ∈ P_B | ∄ q ∈ (P − (P_A ∪ P_B)) s.t. q is 8-adjacent to p}
(i) P_D = {p ∈ P_C | p has at least two 8-neighbors in (P_A ∪ P_B)}
(j) P′ = P_A ∪ P_D
(k) R_Gb = (P′ ⊕ G) ∪ D
(l) R_Gs = (P − P′) ⊕ G

Figures 2d and 3–5 show the results of applying the discrete generic blob/stroke segmentation algorithm to various images.

6. EXPERIMENTAL TESTS OF THE DISCRETE BLOB/STROKE SEGMENTATION ALGORITHMS

Tests of the algorithms were performed on the UB Stylus Image Test Bank. As no test banks of such images exist for comparing the results of different segmentation algorithms, we commissioned the creation of this test bank. It is part of the Web-Based Image Database for Benchmarking Image Retrieval Systems [14] being developed at UB for the test of image retrieval algorithms. The ribbon images were generated by computer science (CS) undergraduates using a computer drawing program. The students generating the images did not have information about the General Ribbon model, the algorithm, or even the nature of the research. We thank Michael Lee Brzezniak, a CS undergraduate student, for supervising the collection of images.

First, the results of the algorithms were viewed to look for unidentified problems in the algorithms, and none were found. Figure 5 shows one original test image, its minimal segmentation, and its generic segmentation.

Second, tests were performed on the discrete versions of both the minimal and generic algorithms to determine if they were idempotent, that is, to test if SB(R, G) = SB(SB(R, G)). The results were tested in two ways. First, each image's stroke segment was tested to determine if it was a stroke. This was done by applying the blob/stroke segmentation algorithm to the image, then applying the algorithm again to the stroke segment and finding the difference set between the initial stroke segment and the resulting stroke segment. In all cases the difference set for both algorithms was empty: the stroke segments were found to consist only of stroke segments. Second, the blob segment was tested in the same manner, with the result that all blob segment difference sets were also empty: the blob segments were found to consist only of blob segments.
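The following sketch implements the discrete generic algorithm above and then runs an idempotence check in the spirit of Section 6 on a small synthetic blob-with-tail image. The flood-fill reading of step (e) ("E_a filled to P within (E − P)"), the reading of "the background" as the complement of E, and all helper names are our own assumptions.

from collections import deque

def dilate(A, G):
    return {(ax + gx, ay + gy) for (ax, ay) in A for (gx, gy) in G}

def erode(A, G):
    return {p for p in A if all((p[0] + gx, p[1] + gy) in A for (gx, gy) in G)}

N8 = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def neighbours(p):
    return {(p[0] + dx, p[1] + dy) for dx, dy in N8}

def generic_segmentation(R, G):
    E = erode(R, G)                                         # (a)
    P = {e for e in E if neighbours(e) - E}                 # (b) potential spine
    D = R - dilate(P, G)                                    # (c)
    Ea = {e for e in E if dilate({e}, G) & D}               # (d)
    interior = E - P
    seeds = {e for e in interior if e in Ea or neighbours(e) & Ea}
    Ea_fill, frontier = set(seeds), deque(seeds)            # (e) fill within E - P
    while frontier:
        e = frontier.popleft()
        for q in (neighbours(e) & interior) - Ea_fill:
            Ea_fill.add(q)
            frontier.append(q)
    PA = {p for p in P if neighbours(p) & Ea_fill}               # (f)
    PB = {p for p in P - PA if neighbours(p) & PA}               # (g)
    PC = {p for p in PB if not (neighbours(p) & (P - PA - PB))}  # (h)
    PD = {p for p in PC if len(neighbours(p) & (PA | PB)) >= 2}  # (i)
    P_prime = PA | PD                                            # (j)
    R_Gb = dilate(P_prime, G) | D                                # (k) blob segment
    R_Gs = dilate(P - P_prime, G)                                # (l) stroke segment
    return R_Gb, R_Gs

# Idempotence check: re-segmenting the stroke segment yields no blob part, and
# re-segmenting the blob segment yields no stroke part.
if __name__ == "__main__":
    G = {(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    blob = {(x, y) for x in range(12) for y in range(12)}
    tail = dilate({(x, 5) for x in range(12, 30)}, G)
    Rb, Rs = generic_segmentation(blob | tail, G)
    print(generic_segmentation(Rs, G)[0] == set())  # True
    print(generic_segmentation(Rb, G)[1] == set())  # True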

7. CONCLUSION

Segmenting a ribbon image, R, into its blob component, R_b, and its stroke component, R_s, makes it possible for subsequent algorithms to extract perceptually relevant information in the image. In a blob component, it is the outer contour which contains the information; thus edge detection, when applied to the blob segment, will yield the desired information. For a stroke component, ribbon-based thinning should be applied to recover information about the path of the stylus. The General Ribbon model provides the basis for the minimal and generic segmentation algorithms as well as for ribbon-based thinning algorithms [20].

REFERENCES

1. C. Arcelli, L. Cordella, and S. Levialdi, Parallel thinning of binary pictures, Electron. Lett. 11, 1975, 148–149.
2. O. Baruch, Line thinning by line following, Pattern Recognition Lett. 8, 1988, 271–276.
3. H. Blum and R. N. Nagel, Shape description using weighted symmetric axis features, Pattern Recognition 10, 1978, 167–180.
4. J. M. Brady and H. Asada, Smoothed Local Symmetries and Their Implementation, MIT Artificial Intelligence Laboratory Memo 757, Massachusetts Institute of Technology, Cambridge, MA, 1984.
5. R. A. Brooks, Symbolic reasoning among 3-D models and 2-D images, Artif. Intell. 17, 1981, 285–348.
6. J. Cai and G. Meyer, A mathematical problem arising in distinguishing stroke and blob objects in stylus generated images, in preparation.
7. I. Chakravarty, A generalized line and junction labeling scheme with applications to scene analysis, IEEE Trans. Pattern Anal. Mach. Intell. 1, 1979, 202–205.
8. H. I. Choi, S. W. Choi, H. P. Moon, and N. S. Wee, New algorithm for medial axis transform of plane domain, Graph. Models Image Process. 59, 1997, 463–483.
9. N. Chuei, T. Y. Zhang, and C. Y. Suen, New algorithms for thinning binary images and Chinese characters, Comput. Process. Chinese Oriental Lang. 2, 1986, 169–179.
10. E. R. Davis and A. P. N. Plummer, A new method for the compression of binary picture data, in Proceedings 5th Int. Conf. Pattern Recognition, 1980, pp. 1150–1152.
11. Y. Ding and T. Y. Young, Complete shape from imperfect contour: A rule-based approach, Comput. Vision Image Understand. 70, 1998, 197–211.
12. M. Ejiri, T. Miyatake, S. Kakumoto, and H. Matsushima, Automatic recognition of design drawings and maps, in Proceedings 7th ICPR, Vol. 2, 1984, pp. 1296–1305.
13. C. J. Hilditch, Comparison of thinning algorithms on a parallel processor, Image Vision Comput. 1, 1983, 115–132.
14. C. Jorgensen, D. K. W. Walters, A. Zhang, and R. K. Srihari, Creating a Web-based image database for benchmarking image retrieval systems, in Human Vision and Electronic Imaging IV (B. E. Rogowitz and T. N. Pappas, Eds.), SPIE Vol. 3644, pp. 534–541, Int. Soc. Opt. Eng., Bellingham, WA, 1999.
15. K. Koffka, Principles of Gestalt Psychology, Harcourt Brace, New York, 1935.
16. L. Lam, C. Y. Suen, and S. W. Lee, Thinning methodologies—a comprehensive survey, IEEE Trans. Pattern Anal. Mach. Intell. 14, 1992, 869–885.
17. S. H. Lee, R. M. Haralick, and M. C. Zhang, Understanding objects with curved surfaces from a single perspective view of boundaries, Artif. Intell. 26, 1985, 145–169.
18. J. Malik, Interpreting line drawings of curved objects, Int. J. Comput. Vision 1, 1987, 73–103.
19. Mansouri, A. Malowany, and M. Levine, Line detection in digital pictures: A hypothesis prediction/verification paradigm, Comput. Vision Graphics Image Process. 40, 1987, 95–114.


20. E. H. Milun, D. K. W. Walters, and Y. Li, General ribbon-based thinning algorithms for stylus generated images, Comput. Vision Image Understand. 76, 1999, 267–277.
21. R. Nevatia and K. R. Babu, Linear feature extraction and description, Comput. Graphics Image Process. 13, 1980, 257–269.
22. T. Pavlidis, A vectorizer and feature extractor for document recognition, Comput. Vision Graphics Image Process. 35, 1986, 111–127.
23. A. Rosenfeld and L. S. Davis, A note on thinning, IEEE Trans. Systems Man Cybernet. 25, 1976, 226–228.

24. R. W. Smith, Computer processing of line images: A survey, Pattern Recognition 20, 1987, 7–15.
25. D. Walters, K. Ganapathy, and F. van Huet, An orientation-based representation for contour analysis, in Spatial Vision in Humans and Machines (L. Harris and M. Jenkin, Eds.), Cambridge Univ. Press, Cambridge, UK, 1992.
26. D. K. W. Walters, Selection and use of image primitives for general-purpose computer vision algorithms, Comput. Vision Graphics Image Process. 37, 1987, 261–298.