Topological Algorithms for Digital Image Processing T.Y. Kong and A. Rosenfeld (Editors) © 1996 Elsevier Science B.V. All rights reserved.
145
Parallel Connectivity-Preserving Thinning Algorithms Richard W. H a l P ^Department of Electrical Engineering, University of Pittsburgh, Pittsburgh, PA 15261 Abstract A variety of approaches to parallel thinning using operators with small supports are reviewed, with emphasis on how one may preserve, and prove one has preserved, connectivity. Tests are demonstrated for verifying connectivity preservation; and for fundamental classes of parallel thinning algorithms, including fully parallel, two-subiteration, and twosubfield, conditions are identified using these tests which are sufficient for preservation of connectivity. Thus "design spaces" for connectivity preserving algorithms belonging to these classes are identified. Some fundamental limitations on parallel thinning operators for images with 8-4 connectivity are also reviewed, including constraints on support size and shape. Parallel computation time issues are addressed and it is shown that existing fully parallel thinning algorithms are nearly optimally fast. 1.
INTRODUCTION
In digital image processing it is usually desirable to reduce the complexity of an image by determining some simpler representation for parts of the image. For example an image might be preprocessed into a binary image containing a set of distinct connected regions, perhaps representing objects. Each of the regions might then be reduced to a simpler representation. This representation might be a potentially unconnected digital version of the medial axis skeleton as defined originally by Blum [5] or a connected approximation of the medial axis for each region when these are composed of elongated parts [57]. These connected approximations will be called medial curves. (These are not necessarily digital arcs or curves as defined in [57]; they may branch (see, e.g.. Figure 4), and if the original object has holes (i.e., is not simply connected) its medial curve will also not be simply connected.) A process which constructs these medial curves is usually referred to as thinning. Although thinning algorithms can be defined for grey level images [16, 24, 35], this chapter will address only the thinning of binary images. Thinning processes typically reduce an object by successively removing border pixels of the object while maintaining connectivity. An example of this process is illustrated in Figure 1 (using the thinning algorithm in [36]). Processes or operators which transform images only by removing object pixels will be called reduction operators; such operators are typically used to perform thinning. Although it is sometimes of interest to be able to reconstruct the original image from its thinned representation, investigators have frequently focused solely on the reductive aspect of thinning since image reconstruction is often not necessary. This chapter will focus on the reduction processes used to construct medial curves.
146
1
1
1
1
1
•
•
1
1
1
1
1
•
1
1
•
1
1
1 1 1 ] 1 1 1 ] [ 1 1
• •
1 1
1
][
1
]L
1
1
]L
1
L
1
L
1
0
• • • • • • • • • •
• • • • •
1
Figure 1. Example of parallel thinning. The -'s represent deleted I's of the original image and t h e • 's represent the medial curve. The iterations are numbered from 0 to 2 with 0 representing the original image.
For sequential reduction operators, which change only a single pixel at any one time, there are well known necessary and sufficient conditions for preserving connectivity properties [55, 64]. Parallel reduction operators are more difficult to analyze since large numbers of pixels may change simultaneously, which complicates the proof of connectivity preservation. In t h e image processing community parallel approaches to thinning using reduction operators have received much attention, partly because parallel computers are becoming more available and larger [12, 15, 23, 38, 42, 52]; but the care taken with connectivity preservation has been mixed [6, 10, 13, 14, 30, 36, 44, 58, 61, 65]. If connectivity properties are not preserved in a low level vision operation like thinning, then higher level processes may have difficulty performing correctly. For example, in processing printed characters a key problem is to identify distinct characters. If the raw images are tractable enough to provide distinct connected image regions for distinct characters, then in performing a thinning operation it is particularly desirable that a thinned connected component also be connected. This assumption can ease the burden on further processing to identify the individual characters. Taking a contrary view, total connectivity preservation may not always be a critical factor and in some cases it might be desirable to violate certain connectivity properties. For example, if single-pixel holes are assumed to arise only because of "noise" in image acquisition, then it might be desirable to "fill in" such holes before thinning. Similarly, snaall regions in an image may not be of interest if one chooses to focus solely on larger elongated regions. In such a case one might choose to completely remove small objects (e.g., 2 x 2 squares) from the image at early stages of the algorithm. In both of these cases t h e "topology" of the image is not preserved, but this may be irrelevant to further processing. Regardless, in the design of algorithms one must be concerned with identifying t h e connectivity properties which are preserved and with properly characterizing situations in which connectivity is not preserved. In this chapter a variety of approaches to parallel thinning will be reviewed with a view
147
towards showing how one may preserve and prove one has preserved connectivity. We will also exhibit local conditions which are required for preservation of connectivity in various classes of parallel algorithms, and based on these conditions we will characterize "design spaces" for connectivity preserving operators within these classes. 2. P R E L I M I N A R Y N O T A T I O N 2.1. Images Pixel values are assigned from the set {0,1}; 1-valued pixels are called I's and 0-valued pixels are called O's. T h e non-zero region of the image is assumed to be finite in extent. T h e set 5 of I's is referred to as the foreground, and its complement, the set S' of O's, as the background. Terms like i-path, i-adjacent, i-neighbor, i-connected, and i-component are used in the same sense as in [39] for i = i and 8. To avoid connectivity "paradoxes", S and S' are understood to have 8-connectivity and 4-connectivity, respectively; this is referred to as 8-4 connectivity. T h e dual 4-8 definition could also be used, but the 8-4 definition seems to be used more often by the image processing community and this presentation will focus on that case. Unless otherwise indicated the term object or component will refer to an 8-component of S. Variables m and n will be used exclusively to refer to the foreground and background adjacency relations, respectively; e.g., in the 8-4 case m = 8 and n = i. Lower case letters will be used to denote pixels or integer variables and upper case letters will be used to denote sets of pixels and paths. Ni{p) refers to the set consisting of p and its iadjacent neighbors; N*{p) = Ni{p) — {p}] Ni{P) is the union of Ni{pj) for all pj in P ; and A^*(P) = 7Vi(P) — P , all for z — 4 or 8. In illustrations of regions of an image < 5 > refers to the set of all pixels labeled s in the illustration. In such illustrations, if I's are shown but O's are not (see, e.g.. Figure 1), then pixels at blank positions are O's unless otherwise indicated. If certain pixels are explicitly labeled as O's (see, e.g., the illustrations in the statement of Proposition 4.2), then the values of pixels at blank positions are unspecified and may be 0 or 1, unless otherwise indicated. In certain cases we specify 90° rotations of given patterns; such rotations are taken clockwise. 2.2. N e i g h b o r h o o d F u n c t i o n s W h e n we refer to specific pixels in Ns{p) we will use the notation given in Figure 2. We say t h a t p = 1 is a border 1 if N^{p) contains a 0; p is called an interior 1 otherwise. We define C{p) as the number of distinct 8-components of I's in Ng{p). We say a 1, p, is 8-simple if C[p) — 1 and p is a border 1. We define A[p) as the number of distinct 4-components of I's in N^{p) and B{p) as the number of I's in N^{p)'
Pi P8 PI
P2 Ps P
PA
Pe, Pb
Figure 2. 8-neighborhood notation.
148 2 . 3 . P a r a l l e l R e d u c t i o n O p e r a t o r s and A l g o r i t h m s We will consider thinning algorithms which use operators that transform a binary image only by changing some I's to O's (this is referred to as deletion of I's); we call these reduction operators. Algorithms are defined by a sequence of operator applications; each such application is termed an iteration. The support of an operator O applied at a pixel p is t h e minimal set of pixels (defined with respect to p) whose values determine whether p's value is changed by O. We assume that O's support at any other pixel q is just the translation to q of the support at p. For example, the support of the operator which deletes a 1, p, iff p is 8-simple is Ns{p)] this is called a 3 x 3 support. W h e n the support has small diameter it is referred to as a local support. Thinning algorithms typically delete only border I's and we need to determine which border I's can be deleted without disrupting connectivity. Two conditions have found substantial application in the design of thinning operators: A{p)=l and C{p)=l. There are straightforward ways to compute A[p) (e.g., A{p) is equivalent to the crossing number CN of T a m u r a [63]) and efficient methods are known for testing the condition C{p)=l for border I's p [1, 11, 27^ 34, 64]. We will be concerned primarily with operators t h a t have local support and in particular with operators that have very small supports, e.g., 3 x 3 . Operators t h a t require A{p) = 1 {C{p) = 1) for deletion, and algorithms t h a t use only such operators, are said to be A{p) = 1-based {C{p) = 1-based). Algorithms apply operators over parts or all of the image in a sequence of iterations. W h e n an operator is applied to only one pixel at each iteration, it is called a sequential operator; otherwise, it is called parallel. The term completely parallel operator is used to denote an operator which is applied to the entire image at each iteration where it is applied. Such operators can be particularly desirable when algorithms will be implemented on parallel 2D mesh computers [12,15, 23, 38, 42, 52]. Operators with local support are highly desirable in such implementations since larger supports require either higher time cost or higher interconnection complexity for obtaining the values of the pixels in the support. In particular thinning operators with 3 x 3 (or smaller) supports are especially desirable; b u t , unfortunately, a completely parallel reduction operator with 3 x 3 support cannot provide adequate thinning if it is used exclusively [27, 32, 57] (this point is discussed in Section 3.2). Investigators have worked around this problem using two basic approaches. Subiteration algorithms apply an operator to the entire image at each iteration, but rather t h a n using the same operator at each iteration, they cycle through a small set of operators; t h e iterations of the cycle are usually called subcycles or subiterations [3, 27, 44, 56, 57, 61, 65]. Subfield algorithms partition the image into subsets in some manner and a parallel operator is appHed to one of the subsets (a subfield) at each iteration, cycHng through the subsets [26, 27, 52]. An algorithm is called fully parallel if it applies the same completely parallel operator at every iteration. We measure the parallel speed of a parallel thinning algorithm on a given image by counting the number of iterations required for the algorithm to t e r m i n a t e ; this number is called the iteration count. 
We say a parallel algorithm is fast (i.e., has high parallel speed) if it has low iteration count.
149
3.
F U N D A M E N T A L S OF PARALLEL T H I N N I N G
3.1. Thinning Goals A thinning algorithm typically makes use of reduction operators which are designed to iteratively delete border I's until thin curves (medial curves) are obtained which lie approximately along the midhnes of elongated objects. A good thinning algorithm should satisfy the following conditions: T l . Medial curve results must be thin; T 2 . Medial curve results must approximate the medial axis; T 3 . Thin curves and endpoints must be preserved; T 4 . Connectivity of foreground and background must be preserved. For parallel thinning algorithms, a further condition is T 5 . Parallel speed should be substantially higher than that achievable with a sequential algorithm. As we shall now see, it is difficult to precisely define most of these goals, especially the geometric goals T 1 - T 3 . Consider condition T l . A thin curve G of I's (in the 8-4 case) would ideally be expected to be composed of all border I's, i.e., I's 4-adjacent to a 0. Further, most of the pixels of G should have exactly two 8-neighbors in G and a few pixels in G could be endpoints (e.g., only one 8-neighbor in G) or branch points (more than two 8-neighbors in G) [57]. A medial curve might be defined as ideally thin if no set of non-endpoint pixels on the curve can be removed without violating connectivity properties. However, consider the image whose I's are as shown below 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Here all I's are either endpoints or are not 8-simple and the deletion of any set of nonendpoint I's will violate connectivity properties; thus the image is ideally thin by our definition, but it contains an interior 1. An alternative to T l might be that the medial curve should be a smallest set of pixels satisfying T 2 - T 4 . However, this condition cannot in general be satisfied by a local operator. For example, in the following two images, where e = p = q = 1
150
e 1 1
e l 1 1 1 1 p q 1 1
1 1 p q 1 1 1 1 1
1 1 e
e
if we want to preserve the endpoints labeled e and obtain the shortest 8-path between t h e m as the medial curve, then in the example on the left we must delete p while preserving q and in the example on the right we must delete q and preserve p. We can create such examples with diagonal lines of width two and any length. Thus, there is in general no reduction operator with local support which can determine whether to delete p or ^; but clearly p or q must be deleted if we are to achieve a smallest medial curve. Heuristic approaches are usually taken to produce medial curves which satisfy T l . A typical approach [36, 60] applies post-processing to do a final thinning using reduction operators which delete I's, p, with 3 x 3 neighborhoods like the following: 0 0 1 p 0 0 1 (1)
0 1 I p 0 0 0 (2)
1 0 Q p I 0 0 (3)
0 0 0 p I 1 0 (4)
T h e pixels t h a t are left blank may be I's or O's. (Similar deletion conditions were earlier used in [3].) In order to preserve connectivity only certain subsets of these four deletion conditions can be used in parallel. Using the Ronse techniques presented in Section 4, it can be shown t h a t a connectivity preserving parallel operator can delete p only for any single one of the conditions or for the pairs of conditions (1) and (2), (1) and (4), (2) and (3), and (3) and (4). It is also possible to add these conditions to the main thinning operator and avoid post-processing [28]. An example of such an algorithm is given in Section 7.1. Other issues relating to achieving thin medial curves are discussed in [60, 63]. Medialness (T2) can also be difficult to define precisely. For example, it is unclear what t h e ideal medial curve should be for a u; x /-pixel upright rectangle when the width w is even. Figure 3 illustrates several alternatives for the part of the medial curve near the center of t h e rectangle. Example (a) would usually be preferred by practitioners, although t h e medial curve's vertical position is biased. Example (b) has a less biased vertical position, but the medial curve is not straight. Example (c) has an unbiased vertical position, but t h e medial curve is not ideally thin. For an odd-width rectangle, the midline of t h e rectangle is thin, straight, and centered, but at the end of the rectangle t h e medial curve m a y take several forms, as illustrated in Figure 4. In practice thinning algorithm designers a t t e m p t to balance deletions from the four compass point directions in order to achieve an approximation to isotropic erosion (while preserving connectivity). Davies and P l u m m e r [11] illustrate a methodology for evaluating the quality of a medial curve by taking t h e union of the maximal disks contained in the object and centered at t h e pixels of t h e medial curve. Analysis of the differences between this union and the original image provides quantitative measures of how good the medial curve is geometrically. Plamondon et al. [50] evaluate medial curves by comparing t h e m to medial curves constructed by
151
• • • • • • • • • • (a)
(b)
(c)
Figure 3. Examples of central regions of possible medial curves for a rectangle of even width.
•
• • •
• •
• • •
• (a)
(b)
• • • • (c)
Figure 4. Examples of possible medial curves for a rectangle of odd width.
h u m a n subjects for selected test images. As regards T 3 , we first observe that non-endpoints of ideally thin curves are not 8simple pixels; hence T4 guarantees that they are not deleted. T h e key T3 issue is to preserve endpoints. There are three traditional definitions of an endpoint pixel p (in the 8-4 case): E l . B{p) = 1; E 2 . B{p) - 1 or 2; E 3 . B{p) == 1, or B{p) = 2 and A{p) = 1. E l is widely used and any point which satisfies E l is obviously an endpoint. However, E l is too restrictive for certain kinds of objects, e.g., p
I I I I I 1 1 1 p
p
11 1 1 1 p
where p = 1. In such objects the pixels p may need to be considered as endpoints to avoid excessive erosion. (An example will be given later in this section.) E2 treats these p's as endpoints, but unfortunately all points in a diagonally oriented rectangle of width 2 are endpoints according to E2:
152
1
1 1
1 1 1 1 1
Nevertheless, E2 is useful in certain A(p)=l-based thinning algorithms to avoid erosion of certain diagonal lines [29, 44]. E3 properly handles the width-2 diagonally oriented rectangle case; it is used in [27, 45, 62]. The difficult problem of distinguishing between "true endpoints" and "noise spurs" leads some investigators to allow partial erosion of endpoints [11, 62]. Typically the thinning operator initially allows endpoint deletions and after a certain number of iterations (related to the investigator's notion of how long noise spurs might be) the operator is changed to preserve endpoints at subsequent iterations
[62]. Connectivity preservation (T4) can be precisely defined and will be treated in some detail in Section 4. One way to quantitatively express goal T5 is to argue that for w x I rectangles of I's where w <^ I a. parallel thinning algorithm should require only 0{w) iterations. Such a definition is used in [32] and in the following subsection to simplify arguments about support requirements for fully parallel thinning algorithms. A variety of aspects of thinning algorithm performance and design are treated in [11, 22, 41, 48, 57, 59, 63]. There has been some work on operators with fairly large (i.e., 5 x 5 or larger) support [7, 43, 46]. For example, Li and Basu [43] use up to 9 x 9 supports in order to better preserve vertical strokes, so that characters like ' B ' may be more easily distinguished from characters like ' 8 ' . Nevertheless, thinning operators with small support are usually preferred for reasons of efficiency. This chapter will focus on the goal of preserving connectivity using parallel operators of small (i.e., 4 x 4 or smaller) support. 3.2. Support Limitations Small operator support sizes are desirable but, unfortunately, as already mentioned, a completely parallel reduction operator with 3 x 3 support cannot provide adequate thinning if it is used exclusively [27, 56, 57]. For example, a completely parallel thinning operator which yields 90° rotated results for each 90° rotation of an object will either completely delete a 2 X 2 square or will completely preserve it [56]. Furthermore, we know t h a t a long 3 x / horizontally oriented rectangle can be thinned in 0 ( 1 ) iterations using, say, the four-subiteration thinning algorithm of Rosenfeld [56]. But if we are using a fully parallel algorithm, in order to avoid requiring 0(1) iterations and still satisfy T2 we must delete the north and south border I's of this long rectangle, except possibly near t h e corners. T h e same algorithm applied to a long horizontal rectangle of width 2 will then disconnect it, completely delete it, or delete all but at most two I's at one end or the other of t h e rectangle. Thus, fully parallel thinning algorithms which use 3 x 3 reduction operators are unable to meet our thinning goals. Note that if we omitted T2, we could define a satisfactory fully parallel "thinning" algorithm with 3 x 3 support, e.g., using an operator which deletes all north border 8-simple I's which are not endpoints [56]. This subject is addressed more formally and completely in Section 7 and [32].
153 3.3. A C ( p ) = l - B a s e d Thinning Algorithm Some investigators have circumvented these support Hmitations by using subiterations. We illustrate this using the well known four-subiteration thinning algorithm of Rosenfeld [56, 57] which is based on a 3 x 3 operator that deletes certain 8-simple I's: Algorithm ROS T h e following four reduction operators are applied at successive iterations to all pixels in the image. A pixel p = 1 is deleted if
a. C{p) = 1, b . p is not an endpoint (see below) and c. Pi = 0; where i takes the values 2, 6, 4, 8, 2, 6, 4 , . . . at successive iterations. T h e algorithm terminates when no deletions occur during four successive iterations. Examples of ROS's performance are given in Figure 5 for two endpoint definitions. ROS deletes 8-simple north, south, east, west, n o r t h , . . . border I's that are not endpoints at successive iterations. Rosenfeld has shown that these four operators preserve connectivity. ROS does quite well on goal T l , producing ideally thin results. ROS also tends to produce rather good medialness (T2) since deletions are performed from the four compass directions. But the definition of an endpoint can impact on both the T2 and T 3 performances of ROS. Thus, if we use the E l endpoint definition, ROS performs badly on the image illustrated in Figure 5a, as the triangular shaped "endpoints" are successively deleted. A more robust definition of an endpoint, such as E3, is needed to preserve such pixels. T h e n ROS produces a more acceptable medial curve result, as illustrated in Figure 5b. T h e definition of an endpoint does not typically affect connectivity preservation (T4) except for certain very small objects. 3.4. In A{p) have
An A ( p ) = l - B a s e d Thinning Algorithm some thinning algorithms the C{p) = 1 condition is replaced by the more restrictive = 1 condition. Border I's satisfying A{p) = 1 are 8-simple since for such I's we also C{p) = 1. But there are 8-simple pixels at which A{p) ^ 1, e.g., p in 1
p 1
Most A ( p ) = l - b a s e d thinning algorithms find their roots in the early work of Rutovitz [58]. Operators which require A{p) = 1 for deletion of a pixel p tend to produce thicker medial curves t h a n operators which allow deletion in C{p) = 1 cases; but this may be acceptable in some appHcations. T h e following i4(p)=l-based thinning algorithm, HSCPN, is derived from the fastest approach in [36] as modified in [29].
154
1 3 4 4 4 1 1 4 • • • 2
1 1 1 1 3 1 4 • 3 5 3 1 4 • 3 7 5 3 1 1 1 1 1 1 1 1 » « « 7 3 » 7 5 5 5 4 # 3 6 * « « * 3 4*3 4 6 7 6 6 3 4 « 3 2 4 6 3 2 2 4 » 3 2 4 3 2 2 2 2 3 2 (a)
•
• 1 • 4 4 1 1 4 • • • 2
111
3 1 4 • 3 • 3 1 4 • 3 « 5 3 1 1 1 1 1 1 1 1 # » « 7 3 » 7 5 5 5 « » « « « » 4 « 3 6 « 9 « « 3 4 « 3 4 6 * 6 6 3 4 » 3 2 4 » 3 2 2 4 « 3 2 • 3 2 2 2 2 • (b)
Figure 5. Examples of thinning by ROS: (a) Using El; (b) using E3. The nunabers and • 's indicate I's of the original image. The numbers indicate the iteration at which the 1 at that position is deleted and the • 's denote pixels in the medial curves. Algorithm H S C P N At successive iterations do both of the following: a. Find those I's, p, for which A{p) = 1 and 3 < B{p) < 6. b. Delete the I's that were found in (a), except those I's, p, that satisfy one of the following conditions: 1. P2 = Pe = ^ a-nd p4 is a 1 that was found in (a); 2. p4 = PQ = I and pe is a. 1 that was found in (a); or 3. P4,P5, and pe are I's that were found in (a). The algorithm terminates when no deletions occur during an iteration. Connectivity preservation for HSCPN is proved in Section 4. Figures 1 and 6 show examples of the operation of HSCPN. Although expressed differ-
155
ently, it is very close to the original Rutovitz algorithm [58] but preserves all connectivity properties. (The original Rutovitz algorithm completely deletes 2 x 2 components of I's.) Step (a) identifies potentially deletable pixels, and the conditions in step (b) are preservation conditions which prevent the deletion of certain pixels in order to preserve connectivity. Conditions ( b l ) and (b2) preserve p if its neighborhood looks like 6 1 1 c 0 p P4 0 a I I d
or
a 0 I p 1 pe d 0
b 1 1 c
respectively, where p = p4 — pe = I and {a, 6} and {c, c?} each contain at most one 1. Condition (b3) preserves one 1 in a 2 x 2 component of I's. In Figure 6, H S C P N is viewed as being fully parallel with support < s > U p: s
s
s
s
s
p
s
s
s
s
s
s
s
s
s
s
This is t h e support required to determine whether or not p will be deleted as a result of steps (a) and (b). From this standpoint the operator is unchanged from iteration to iteration. This algorithm can also be regarded as a two-subiteration thinning algorithm when operators are restricted to 3 x 3 supports. In this view step (a) is computed in parallel in one iteration; but since step (b) uses intermediate results from step (a), it requires a second parallel iteration. Step (a) is not a traditional subiteration operator since no image pixels are transformed; rather, a flag is set for each 1 of the image t h a t satisfies (a).
• • 1 1 • 1 1 • 2 1 1 2 « 2 1 1 1 « « . « 3 2 » 1 1 2 3 « « « 2 12 3 * 3 2 1 2 * 2 1 1 • 1 1 • •
1 « « * « « 1 1 1 1 1 1 1 1
1 1 1 1 • 1 1 • 1 1 1 » 1 1 » 1 1 • 1 1 • 1 1 1 1
Figure 6. Example of thinning by HSCPN. Same notation as in Figure 5.
Note t h a t in H S C P N , deletions can occur from all four compass directions. As can be seen by comparing Figures 1 and 6, HSCPN is substantially faster t h a n ROS.
156 4. C O N N E C T I V I T Y P R E S E R V I N G R E D U C T I O N
OPERATORS
Connectivity preservation is a key design goal for parallel reduction processes like thinning. There is a need for straightforward and efficient techniques for proving connectivity preservation. W h e n these proof techniques are stated as algorithms, they are referred to as connectivity preservation tests. Using such proof techniques, algorithm designers can more easily prove the correctness of their algorithms based on reduction operators. If one has connectivity preservation tests which can be efficiently realized (i.e., with fast execution times) in a computer program, algorithm designers can improve the efficiency of their design processes by automating the proofs of connectivity preservation for their algorithms or operators. To keep the complexity of manual or automatic proofs reasonable we wish to have proof techniques which use local support for their computations. Such approaches have been presented over the past two decades by Rosenfeld [56, 57], Ronse [53, 54] and others [17-20, 27, 29, 31, 37] to prove preservation of connectivity for various classes of thinning algorithms. T h e proof techniques of Rosenfeld [56] provide a method for proving certain key connectivity properties in thinning based on reduction operators. Kameswara Rao et al. [37] gave a connectivity preservation test for a very restricted subset of such operators. Hall [29] has determined simple local sufficient conditions for connectivity preservation for a large class of parallel thinning algorithms which use A(p)=l-based reduction operators. Eckhardt [17-20] has reported related results using his notion of perfect points. Ronse [54] has presented a set of sufficient conditions which constitute a particularly simple set of connectivity preservation tests for arbitrary parallel thinning algorithms based on reduction operators. Hall [31] has related the work of Ronse and Rosenfeld and has extended t h e work of Ronse, deriving Ronse-like connectivity preservation tests for hexagonal image spaces and giving conditions under which the Ronse tests are necessary as well as sufficient. Some of this work will be presented in some detail in Sections 4.2 and 4.3. 4.1. Connectivity Properties to Preserve Connectivity preservation can be characterized in many equivalent ways. In this section a characterization is used which formed the basis for early connectivity preservation proofs for parallel thinning algorithms [56, 61]. Recall that S refers to the set of I's and S' to t h e set of O's in a binary image and that ( m , n ) = (8,4) or (4,8). A reduction operator, O , is said to preserve ( m , n ) connectivity if all of the following properties hold: F C l . O must not split an m-component of S into two or more m-components of I's; F C 2 . O must not completely delete an m-component of S; B C 3 . O must not merge two or more n-components of S' into one n-component of O's; and B C 4 . O must not create a new n-component of O's. T h e analogous (unstated) foreground conditions FC3, FC4 and background conditions B C l , BC2 are always satisfied for reduction operators since no O's may be changed to I's. We will focus on operators which delete only border I's. For such operators BC4 holds trivially and it will not be considered further. This classical definition of connectivity
157 preservation is applied to algorithms by requiring the conditions to hold for each operator application. This strong condition is relaxed in the chapter on shrinking in this volume. There is a fundamental class of reduction operators of substantial interest when connectivity preservation is a concern; this class is the subject of most of the results in this section. D e f i n i t i o n 4.1 A reduction operator, O, belongs to Class R when every 1 that is deleted by O is 8-simple. It has been shown t h a t an (8,4) connectivity preserving reduction operator with 3 x 3 support must belong to Class R and that a sequential Class R operator always preserves connectivity [55, 64]. 4.2. Ronse Connectivity Preservation Tests Ronse [54] has reported a rather simple, local set of sufficient conditions for a reduction operator, O , to preserve (8,4) or (4,8) connectivity. We will focus on the (8,4) conditions. Ronse has shown t h a t an operator is connectivity preserving if it does not completely delete certain small sets of I's. These critical sets are: single I's which are not 8-simple; pairs of 4-adjacent I's, p and g, with special conditions on N^{{p^q})] and 8-components consisting of two, three, or four mutually 8-adjacent I's. Ronse defines an 8-deletable set as a set which can be deleted while preserving connectivity in an 8-4 image. He shows t h a t a pair of 8-simple I's, {p, ^ } , is 8-deletable iff q is 8-simple after p is deleted. T h e following set of sufficient conditions for connectivity preservation [31] can be derived from Rouse's results [53, 54]: R l . If a 1 is deleted by O then it must be 8-simple; R 2 . If two 4-adjacent I's are both deleted by O , then they must constitute an 8-deletable set; R 3 . No 8-component composed of two, three, or four mutually 8-adjacent I's is completely deleted by O. R l is a test to determine if O belongs to Class R. T h e test set for R3 is shown in Figure 7. T h e bulk of the complexity in verifying the Ronse conditions arises in the R2 test. T h e Ronse tests were originally formulated as sufficient conditions for proving connectivity preservation [54] but they have also been shown to be necessary (i.e., connectivity preservation implies satisfaction of the conditions) for completely parallel operators t h a t satisfy certain support restrictions, including 3 x 3 operators [31]. Pavlidis [49] addresses similar issues when using his definitions of multiple and tentatively multiple I's to identify deletable I's in connectivity preserving thinning algorithms. For a discussion of multiple I's see the chapter by Arcelli and Sanniti di Baja in this volume. We can measure the complexity of the Ronse tests in terms of the number of test patterns required. For the R l test one considers the 2^ = 256 possible patterns of I's in Ng[p) for a given 1, p, and determines those patterns for which p is not 8-simple. These are the test patterns for which the reduction operator must not delete p; there are 140 such. For the R2 test (where it is assumed that R l has been satisfied) one must consider test patterns containing two 4-adjacent I's, p and g, where p and q are each 8-simple;
158 1
1 1
1
1 1
1
1
1 1 1
1
1 1 1
1 1 1 1
Figure 7. Test patterns for Ronse's condition R3.
{p^q} is not 8-deletable; and N^{{p^q}) fl 6" is non-empty, i.e., {p^q} is not a two-pixel component of 5*—such components are part of the R3 test. For each of these test patterns, either p oi q must not be deleted by O. Of the 2^° possible patterns of I's in Ng{{p,q}) for each orientation of p^q (vertical or horizontal), 192 are of this sort. Examples of such test p a t t e r n s , for the case where p and q are horizontally adjacent, are:
0 1 0 0
1100
O p ^ O
I
0 0 1 0
0011
p
q 0
0 1 1 0 0 p
q 0
0 1 1 0
T h e R3 test (where it is assumed that R l is satisfied) is performed by determining that t h e nine test patterns illustrated in Figure 7 are not completely deleted by O . A computer implementation of the Ronse tests is reported in [31]. We do not have to consider all of the Ronse test patterns in detail in order to prove that an operator satisfies the Ronse tests. The following results can be derived from Ronse's work [53, 54] and help in applying the tests in proofs. P r o p o s i t i o n 4.2 A set {p^q} Q S, where p is ^-adjacent to q, is 8-deletable iff Ng{{p,q}) n S is an 8-connected nonempty subset of Ng{{p,q}) and either p or q is 4-adjacent to a 0. This is probably the easiest condition to use for 8-deletability when doing proofs manually, since it is an easily perceived property. P r o p o s i t i o n 4 . 3 Ifp andq are both 8-simple and N^{{p,q})nS is non-empty, then {p,q} is 8-deletable if Ns{{p^q}) matches either of the following patterns or their rotations by multiples of 90°. 0 0 p q
0 P q I 0
Note the relative simplicity of proofs using these notions as compared to the proofs in [27, 29, 56]. For example, consider a proof that the ROS algorithm preserves connectivity. Here R l follows directly from the definition of the operator. For R2 we consider all possible cases where two 4-adjacent I's, p and q^ are deleted by the ROS operator (say for subiterations where north border I's are deleted), giving 0 0 p q
159
Since p and q must be 8-simple and neither p nor q is an endpoint (and hence N^{{p^q})r\S is non-empty), Proposition 4.3 gives R2 immediately. Finally, it is easy to show t h a t no 8-components in the R3 test set are completely deleted by the ROS operator. 4 . 3 . C o n n e c t i v i t y P r e s e r v a t i o n T e s t s for A ( p ) = l - B a s e d O p e r a t o r s H S C P N , which uses an A{p)=l deletion condition, can be proven to preserve connectivity using Ronse tests; but particularly simple connectivity preservation tests [29] are available for the following class of reduction operators, which includes HSCPN: Operator Class F P A reduction operator is in the F P class if the deletion of a 1, p, requires all of the following conditions: a. B(p) > 1; b . p is 4-adjacent to a 0; and c. A{p) = 1. If p satisfies these conditions it is 8-simple and deletion of p alone cannot affect connectivity properties in p's 8-neighborhood. Parallel deletion of all I's satisfying F P will not in general preserve all connectivity properties; thus additional conditions are required. It can be shown [28, 29] t h a t F P class reduction operators preserve connectivity if for the following three patterns
0 p 1 q 1
1 1 0 p q 0 1 1
0 Hla
Hlb
0
0
0
0
Zi
Z2
0
Zs Z4
0
0
0
0 0 0 0
H2
(where p, g, and the ^'s are I's and unspecified pixels' values are irrelevant), either p or q is preserved (not deleted) in H l a and H l b and at least one of the ^'s is preserved in H2. (Similar results are reported by Eckhardt in [17-20] using his notion of perfect points.) An operator which preserves p or ^ in H l a ( H l b ) is said to satisfy Hla (Hlb), and an operator which preserves one or more of the z^s in H2 is said to satisfy H2. We will refer to these connectivity preservation tests as the FP tests. Condition (a) in the definition of H S C P N guarantees that it is an F P class reduction operator. Condition ( b l ) ((b2)) guarantees satisfaction of H l b ( H l a ) by preserving p in each case. For H2, all the z^s satisfy condition (a). Thus condition (b3) is needed; it implies that Zi is preserved. H l a - b and H2 are satisfied by a variety of other parallel operators [28, 44, 65]. Satisfying either these F P tests or the Ronse tests is sufficient for preserving connectivity. Further, it is easy to show that an F P class operator satisfies H l a - b and H2 iff it also satisfies t h e Ronse tests. This is a useful observation when proving connectivity preservation properties for algorithms which use a mix of F P class and non-FP class conditions for deletion. An example of such an algorithm is given in Section 7.
160 5. S U B I T E R A T I O N - B A S E D T H I N N I N G
ALGORITHMS
Since (as indicated in Section 3.2) fully parallel 3 x 3 reduction algorithms cannot do successful thinning, many investigators—striving to restrict themselves to a 3 x 3 support—have used a subiteration approach. In this approach, the operator is changed from iteration to iteration with a period of typically two [8, 27, 44, 60, 61, 65], four [4, 11, 35, 56, 57, 61], or eight [3]; each iteration of a period is then called a subiteration. (Suzuki and Abe's [62] two-subiteration algorithm uses an operator with support larger t h a n 3 x 3 . ) We presented a four-subiteration algorithm in Section 3. To reduce the total number of (sub)iterations required for thinning it is desirable to reduce the period to a m i n i m u m , i.e., two. We present examples of two-subiteration algorithms in the following. 5 . 1 . E x a m p l e s of T w o - S u b i t e r a t i o n T h i n n i n g A l g o r i t h m s T h e well known A(p)=l-based algorithm of Zhang and Suen [65] as modified by Lii and Wang [44] is presented first. Algorithm ZSLW T h e following pair of reduction operators is applied repeatedly. A pixel p — 1 is deleted if a. A(p) = 1; b . 3 < B{p) < 6; and: c. At odd subiterations 1. p4 = 0 or pe = 0 or p2 = ps = 0 At even subiterations 2. p2 = 0 ov ps = 0 OT p4 = pe = 0 T h e algorithm terminates when no deletions occur at two successive subiterations. Figure 8a illustrates the performance of this algorithm. Note the improvement over ROS (see Figure 5) in iteration counts. Condition (cl) allows deletion of border I's on an east or south boundary, or of northwest "corner" I's. Condition (c2) allows deletion of border T s on a north or west boundary, or of southeast "corner" I's. (A similar condition was earlier used by Deutsch in [13, 14].) The original presentation of this algorithm [65] used 2 < B{p) < 6 for condition (b), which reduces to a single 1 diagonally oriented rectangles like the following: 1
1 1 1 1
1 1 1 T h e ZSLW operators [44, 65] for each subiteration are F P class operators and it is simple to show t h a t the HI conditions are satisfied. Unfortunately, the H2 condition is not satisfied since the 2 x 2 component of I's is completely deleted. H2 would be satisfied if t h e "corner" I's were not deleted in condition (c). A similar observation was m a d e in [19] to repair an analogous flaw in the original Rutovitz operator [58]. This deletion of 2 x 2 components can be serious since there is a large (in fact unbounded) class of components
161
which ZSLW eventually reduces to the 2 x 2 component which it then completely deletes. Figure 8b shows an example. Next we present a C ( p ) = l - b a s e d two-subiteration algorithm [27] which preserves all connectivity properties and produces thinner results than ZSLW.
• • 1 • 2 1 2 • 2 1 2 • 3 2 2 2 1 2 3 • • 4 • • • 1 1 1 1 1 1 1 2 3 4 • 4 3 1 1 2 3 • 1 2 • 1 • 1
3 2 1 2 1 1 1 • • 1 1 1 2 2 2 3 2 3 1 2 2 1
2 3 4 4 3 1
1 2 4 4 3 1
1 • 1 2 • 1 2 • 1 2 2 2 2 1
2 3 3 3 2
1 1 1 2 1 2 1
1
1 1
• 1 t
(a)
1
• 1 2 1 1 1
(b)
Figure 8. Examples of thinning by ZSLW. Same notation as in Figure 5. Note t h a t in (b) the component is completely deleted.
Algorithm GH89-A1 T h e following pair of reduction operators is applied repeatedly. A pixel p = l is deleted if a. C{p) = 1; b . p does not satisfy t h e E3 endpoint condition; and: c. At odd subiterations 1. p4 = 0, or p2 = P3 = 0 and ps = 1. At even subiterations 2. ps = 0^ or pe = P7 = 0 and pi = 1. T h e algorithm terminates when no deletions occur at two successive subiterations. Figure 9 illustrates the performance of this algorithm. Condition (cl) is satisfied when Ns{p) takes either of the following forms:
162
p 0
0 0 p 1
or
This allows deletion of certain east and north border I's. Condition (c2) is satisfied for 180° rotations of these two conditions, allowing deletion of certain south and west border I's. Although this algorithm does not use FP class operators, connectivity preservation is easily shown for this algorithm using the Ronse tests. Rl follows easily from the definition of the operators. To show R2 we consider any two 4-adjacent 8-simple I's, p and q, at (say) odd subiterations, and we find that one of the following conditions must hold: 0 0 p q 0 1
0 0 0 p q 1
1
p 0 ^ 0
In each case {p^q} is 8-deletable by Proposition 4.3; and an analogous result follows for even subiterations. Finally, it is straightforward to show that no member of the R3 test set (Figure 7) is completely deleted.
• • 1 2 • 1 2 • 3 1 2 4 • 3 1 1 « 3 4 « 5 3 « • 2 « » « 7 « 2 4 6 • 5 2 4 • 5 2 • 1 • 1
1 1 1 2 • 1 2 • 1 • • 3 1 • • 1 1 1 1 1 1 2 • 1 1 2 • 1 3 1 2 • 1 3 1 2 • 1 3 1 2 2 1
• Figure 9. Example of thinning by GH89-A1. Same notation as in Figure 5.
5.2. Two-Subiteration Thinning Algorithm Design Space It is of some interest to characterize the class of all connectivity preserving twosubiteration thinning algorithms based on 3 x 3 reduction operators. We pointed out in Section 3.2 that north and south border I's, such as p and g in a long 2 x /-pixel horizontal rectangle, e.g., ... ... . . ...
0 1 . 1 0
0 1 1 0
0 p ^ 0
0 0 1 1 1 1 0 0
... ... ... ...
must not both be deleted if connectivity is to be preserved. Consideration of such examples leads to the well known restriction that two-subiteration algorithms that use 3 x 3
163 operators should only delete north and west, north and east, south and west or south and east border I's at any one subiteration [56, 57]. Thus, a typical two-subiteration operator would have the following deletion conditions: T S I D e l e t i o n C o n d i t i o n s ( n o r t h a n d east d e l e t i o n s ) a. P2 = 0 or p4 = 0; b . C{p) = 1; and c. p does not satisfy the E3 endpoint condition. In addition there will be certain cases in which p must be preserved from deletion in order to preserve connectivity. We can use the Ronse tests to identify these cases. It can be verified t h a t conditions R l and R3 are implied by TSI. To insure R2 consider two 4-adjacent I's, p and q, which both satisfy TSI, but do not constitute an 8-deletable set, so t h a t if both were deleted, R2 would be violated. For example, suppose the I's p,q are horizontally adjacent: e a p q b c d By TSI condition (a), e = 0. Since {p^q} is not 8-deletable, by Proposition 4.3 a = 1 and c and d cannot both be O's; and since a = 1, we must have 6 — 0 in order that q satisfy condition (a). Enumerating the allowed values of c and d for horizontally or vertically adjacent p and q we find that the following six cases include all possible cases where {p, q) is not 8-deletable: y z
0 0 1 0 p ^ 0 0 0 1
0 1 p q ^ 1 1 (1)
y 0 1 z p q 0 1 0 0
(2)
(3)
0 1 q 1 1 p 0 z y
0 1 q 1 0 p 0 0 0 0
0 0 0 q 1 1 p 0 z y
(4)
(5)
(6)
where {y^z} are such that C{p) = 1 (i.e., y = I and z = 0 is not allowed) and the blank pixels' values are irrelevant. Cases (4), (5) and (6) can be obtained by reflection in a 45° line through q from cases (1), (2) and (3). To insure that R2 holds, preservation conditions must be added to the TSI deletion conditions to guarantee t h a t at least one of p and q is not deleted in each of these cases. At the subiteration when south and west border I's may be deleted, preservation conditions based on the 180° rotations of (1-6) must be used. Similar preservation conditions hold for other border choices (e.g., south and east) for the I's deleted at one subiteration. These requirements on the preservation conditions reveal the design space for connectivity preserving two-subiteration thinning operators which satisfy the TSI deletion conditions. Note t h a t if TSI did not include the endpoint condition, then we would also have to preserve from complete deletion the following small 8-components: 1
1
1
1 1
1 1
1
1
1 1
1
1 1
1
1 1
164 Another discussion of two-subiteration design spaces can be found in [8]. The TSI preservation conditions might be chosen to maximize the number of 3 x 3 neighborhoods of a 1, p, for which p is deleted. Alternatively, they can be chosen for their simplicity, as in the following example: Algorithm TSIN The following pair of reduction operators is applied repeatedly. A pixel ^ == 1 is deleted if: 1. At odd subiterations, ^'s neighborhood satisfies the TSI north and east deletion conditions, but does not match either of the following patterns: 0 1 1 ^ 0
0 q I 1 0 (The first pattern preserves q in TSI preservation cases (1-3), and the second in cases (4-6).) 2. At even subiterations, ^'s neighborhood satisfies the TSI south and west deletion conditions, but does not match either of the following patterns: 0 ^ 1 10
01 1 q 0
The algorithm terminates when no deletions occur at two successive subiterations. The performance of the algorithm is illustrated in Figure 10.
1 1 2 • 2 • 1 1 1 1 1 1 1 3 •
1 1 2 4 2
Figure 10. Example of thinning by TSIN. Same notation as in Figure 5.
There are four-subiteration algorithms which give iteration counts comparable to those of two-subiteration algorithms. In [4] a four-subiteration algorithm is defined which deletes from the south and west, north and east, north and west, and south and east directions at
165 successive subiterations. Since deletions occur from two directions at each subiteration, its iteration counts are comparable to those of fast two-subiteration algorithms such as ZSLW. Further, this approach tends to produce more symmetrical medial curve results t h a n two-subiteration approaches. 6. S U B F I E L D - B A S E D T H I N N I N G A L G O R I T H M S Subfield approaches are useful in thinning, shrinking and more general image processing tasks [25-27, 51, 52]. In these approaches the image is partitioned and at each iteration a parallel operator is applied only over one member (subfield) of the partition. Golay and Preston introduced this notion for images on a hexagonal grid [26, 51, 52]. Preston in Chapter 6 of [52] proposed the use of the following partition into four subfields for images on a rectangular grid: Vi
V2
Vi
V2
Vi
...
Vs
V4
Vs
V4
V3
...
Vi
V2
Vi
V2
Vi
...
Here no two 8-adjacent pixels belong to the same subfield. This property implies t h a t if a sequential operator preserves connectivity, so does the parallel operator, with the same deletion conditions, defined on each subfield. For example, any operator which deletes only 8-simple I's preserves connectivity if appHed in parallel to any one of the subfields of this partition. Preston presented thinning algorithms using four subfields for the rectangular case [52] and three subfields for the hexagonal case [51, 52]. Since a smaller number of subfields tends to produce faster thinning, it is of interest to consider two-subfield partitions; in particular, we shall consider the "checkerboard" partition ^1
V2
Vi
V2
Vi
...
V2
Vi
V2
Vi
V2
...
Vi
V2
Vi
V2
Vi
...
Since only the I's in one subfield can be deleted at any iteration, a pixel js's 4-neighbors do not change at the same iteration as p. Hence, it is obvious that for any "checkerboard" reduction operator the Ronse R2 condition automatically holds. Thus, for such operators only R l and R3 need to be checked. Also, the only 8-components in the R3 test set which could possibly be completely deleted are the two-pixel 8-components 1
1 1
1
where t h e I's are both in the same subfield. These I's are endpoints for any of the three endpoint definitions given in Section 3.1. Thus, proving connectivity preservation for checkerboard reduction operators is particularly easy. In fact any such operator which deletes only 8-simple I's (thus satisfying R l ) which are not endpoints (thus satisfying R3) is connectivity preserving. A thinning algorithm has been defined which is an adaptation of the ROS thinning
166 algorithm [56, 57] to the checkerboard approach, and which appears to provide lower iteration counts than typical two-subiteration algorithms [27]: Algorithm GH89-A2 A 1, p, is deleted iff it is 8-simple and B{p) > 1; this is done in parallel, at alternate iterations, for each of the two checkerboard subfields. T h e algorithm terminates when no deletions occur at two successive iterations. Examples of the operation of this algorithm on simple images are given in Figure 11. T h e algorithm has particularly low iteration counts and produces ideally thin results [27]. T h e speed of Algorithm GH89-A2 can be understood by considering its ability to erode borders in various orientations. For example, consider an upright rectangle such as t h a t shown in Figure l i b . We note that at the second and subsequent iterations GH89-A2 deletes all border I's which are not on the medial curve. Thus, for upright (horizontal or vertical) w x / rectangles (it? < /) GH89-A2 requires [w/2\ + 1 iterations when w > 1. This is very good since even a fully parallel algorithm which deletes all border I's, without concern for connectivity preservation, would require [w/2\ iterations to reduce t h e rectangle to unit width. Since algorithms with 3 x 3 support cannot delete pixels from t h e north and south or the east and west borders of the rectangle at the same (sub)iteration, the iteration counts achievable by such algorithms on these w x / rectangles are at least w — 1 (sub)iterations. Next consider diagonally oriented borders as illustrated in Figure 12. Such a border is composed of I's t h a t are all in the same subfield and is entirely deletable by GH89-A2 in one iteration when that subfield is used. At the iteration after these border I's are deleted, t h e new border I's are in the other subfield and are in turn all deletable. As a result, GH89-A2 is able to delete all of these border I's at successive iterations until t h e vicinity of the medial curve is reached. Thus, the iteration counts of GH89-A2 on these diagonal borders again comes close to that achievable by fully parallel deletion of all border I's. Note t h a t there are four classes of diagonal border I's, having 4-adjacent O's on t h e north and west, north and east, south and west, and south and east, respectively. After t h e first iteration GH89-A2 is able to delete all four classes of border I's at each iteration. Typical two-subiteration algorithms will only delete border I's from three of these four classes at one subiteration. Thus, the example of GH89-A2 suggests t h a t two-subfield ("checkerboard") algorithms appear to have fundamental speed advantages over twosubiteration algorithms.
167 1 • 1 2 1 1 2 • • 2
1 2 1 2 1
2 ^ 2 ^ 2
1 2 ^ 2 1
1 • • 2 3 • 3 2
2 3 ^ 3 2
1 2 • • • 4 3 2
1 2 ^ 2 1
• 1 • 1 • 1 1 2 • 2 2 1 2 1 • 1 • 1 • • • • 2 1 2 ^ 2 3 2 • • 1 • 1 • 1 1 • 1 4 • 2 1 2 ^ 2 • 4 3 2 1 • 1 • 3 2 1 • 1 • • 2 • 1 • 2 3 ^ 3 2
1 2 1 2 3 2 ^ ^ ^ 2 3 2 1 2 1
2 1 2 1 2 1 ^ 3 2 3 2 3 ^ 1 32 3 2 3 2 3 ^ 1 2 1 2 1 2 1 ^
(a)
(b)
Figure 11. Examples of operation for GH89-A2. Same notation as in Figure 5; note that odd numbers are in the subfield operated on first and even numbers are in the other subfield.
• •
•
• • 3 2 1 • • • 3 2 3 ^ • • • • •
1 2 1 3 2 1 • 3 2 1
1 1 2 1 2 3 1 2 3 ^ 1 2 3 ^ ^
2 3 • • • 3 • • • • • • • • • ^ ^ ^ ^ ^
Figure 12. GH89-A2 performance on borders with diagonal orientations (only fragments of the borders are shown). The numbers indicate iterations at which I's of the original image are deleted. Odd numbers are in the first subfield used and I's which are undeleted just after iteration 3 are indicated by • 's.
168
GH89-A2 tends to preserve medial curve branches emanating from corners of objects (see Figure 11). For example, if the corner c= 1 shown below c d 1 1 ... d i l l . . . 1 1 1 1 ...
is not in the subfield used at the first iteration, then the two I's labeled d are deleted (since they are in t h a t subfield) and on subsequent iterations c is preserved as an endpoint. Conversely, if c is in the subfield used at the first iteration then after that iteration we have t h e following configuration at the corner: 0 a 0
a 0 a b a 1 a 1 1
... ... ...
where a = b = 1. At the second iteration the I's labeled a are deleted and b is subsequently preserved as an endpoint. Thus, GH89-A2 tends to produce a medial curve which looks like a medial axis skeleton [5, 57]. T h e medial curves produced by the corners of upright rectangles are symmetrical for rectangles with odd lengths and widths, since all corners are in the same subfield. When the corners are not all in the same subfield, the resulting medial curves are not precisely symmetrical. In general the performance of two-subfield algorithms will vary somewhat depending on which subfield is used first. GH89-A2 produces medial curves with a "zigzag" pattern for certain object orientations and widths (for example upright rectangular regions of even width). This p a t t e r n can be seen in one of t h e examples in Figure 11. Although visually disconcerting, these patterns are of similar complexity to straight medial curves in a chain code representation. But t h e p a t t e r n s are a disadvantage if an interpixel distance of 2^/^ between two diagonally adjacent pixels is used when estimating curve length, and they also partially conceal t h e essential linearity of the chain code. However, this form of medial curve can provide the least biased estimate of the position of the medial axis of, for example, a long even-width upright rectangle. 7. F U L L Y P A R A L L E L T H I N N I N G
ALGORITHMS
Ideally, we would like to use fully parallel thinning algorithms, but we have observed t h a t fully parallel 3 x 3 algorithms cannot perform correct thinning. We will now relax the 3 x 3 support restriction. In Section 3 we gave an example (HSCPN) of a fully parallel thinning algorithm which uses an operator with a sixteen-pixel support. We will now look at some algorithms which have smaller supports. T h e earliest fully parallel thinning efforts are found in [13, 14, 58], although connectivity is not entirely preserved in this early work.
169 7 . 1 . E x a m p l e s of Fully Parallel T h i n n i n g A l g o r i t h m s We consider first a thinning algorithm, GH92-AFP2 [28], which is of F P class (Section 4.3), has an eleven-pixel support, and has preservation conditions which provide a simple way to satisfy H l a - b and H2 (Section 4.3). Algorithm GH92-AFP2 A 1, p, is deleted whenever all of the following conditions are satisfied: a. Aip) = 1; b . p is 4-adjacent to a 0; c. B{p) > 2; and d. T h e neighborhood of p does not match any of the following patterns: 0 1 p 1 1 1 1 0
0
1 1 1 p 1 1
0
0
0 0 0 1 p 0 1 1 0 0
The algorithm terminates when no deletions occur at an iteration. Connectivity preservation follows obviously from conditions H1 and H2. This algorithm is quite similar to HSCPN, with identical thinning performance on the image in Figure 6, but uses a much smaller support. GH92-AFP2 produces medial curve results which are not particularly thin [28]. A variation of GH92-AFP2 has been identified [28] which achieves substantially thinner results:

Algorithm GH92-AFP3
A 1, p, is deleted whenever either of the following conditions is satisfied:
a. Conditions (a), (b), (c) and (d) of GH92-AFP2 are satisfied, or
b. The neighborhood of p matches either of the following patterns:

  0 0        0 0
1 p 0        0 p 1
0 1            1 0
The algorithm terminates when no deletions occur at an iteration. Figure 13 contrasts the performance of GH92-AFP2 and GH92-AFP3, illustrating that thinner results can be achieved with the same iteration counts. Condition (b) of GH92-AFP3 allows deletion of 1's along diagonal curves which are thicker than necessary. Similar conditions are used as a separate post-processing step in [36], while here the conditions are imbedded in the parallel operator. The support for both of these thinning operators when applied at p is the eleven-pixel set <S> ∪ p shown below:

s s s
s p s s
s s s
  s
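The pattern tests in condition (d) of GH92-AFP2 and condition (b) of GH92-AFP3 are plain mask matches over this support. A minimal sketch of such a match (Python; cells not listed in a mask are don't-cares, and the example mask encodes the first condition-(b) pattern as drawn above; the encoding is my own, not one used in [28]):

# A mask is a list of (dr, dc, value) triples relative to p, with value 0 or 1.
def matches(p, ones, mask):
    r, c = p
    for dr, dc, v in mask:
        if ((r + dr, c + dc) in ones) != bool(v):
            return False
    return True

# First pattern of GH92-AFP3 condition (b), as drawn above:
#   . 0 0
#   1 p 0
#   0 1 .
AFP3_B1 = [(-1, 0, 0), (-1, 1, 0),
           (0, -1, 1), (0, 1, 0),
           (1, -1, 0), (1, 0, 1)]

A pixel then satisfies condition (b) if matches(p, ones, AFP3_B1) holds or the analogous mask for the second pattern does.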
Figure 13. Examples of thinning for GH92-AFP2 (left) and GH92-AFP3 (right).
Algorithm GH92-AFP3 is not entirely an FP class algorithm, but the connectivity preservation proof is simplified since condition (a) is an FP condition. Deletion of the set of all pixels satisfying condition (a) must satisfy H1a-b and H2; thus, as stated in Section 4.3, it also satisfies the Ronse tests. Since p is 8-simple for any p satisfying condition (b), Ronse condition R1 holds. Further, Ronse condition R2 could only fail if for two 4-adjacent 1's, p and q, which are both deleted, at least one is deleted by condition (b). But if p satisfies (b) the 1's 4-adjacent to p are not deletable, since none of these 1's can satisfy both A(p) = 1 and B(p) ≥ 2, and they also cannot satisfy (b). Thus, the R2 condition is satisfied. Finally, it is straightforward to show that no component in the R3 test set is completely deleted.

7.2. Optimally Small Supports for Fully Parallel Thinning Algorithms
The two algorithms just discussed have an eleven-pixel support. (A similar eleven-pixel support is used in [10].) It has been shown that eleven-pixel supports are the smallest possible in the 8-4 case [32]; thus, the previous algorithms have optimally small supports. Further, the possible locations for the eleven pixels are tightly constrained by the following result from [32]:

Theorem 7.1 The support of a fully parallel thinning algorithm at p must contain at least eleven pixels. If it contains just eleven, these must consist of N8(p) and two additional pixels chosen in one of the following two ways (see the illustration below):
a. Exactly one of the x's and one of the y's, or
b. One of the z's and one of its 4-neighbors in the 5 x 5 square centered at p, e.g., {z1, x1}, {z1, y6}, ...

z1 x1 x2 x3 z2
y6 w  w  w  y1
y5 w  p  w  y2
y4 w  w  w  y3
z4 x6 x5 x4 z3

(The pixels marked w are the eight pixels of N8(p).)
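Under this labeling (as reconstructed here, with rows increasing downward), the two-pixel extensions of N8(p) allowed by Theorem 7.1 can be enumerated mechanically; the coordinates below are my reading of the illustration, not values given in [32].

# Offsets (dr, dc) relative to p for the labeled pixels of the 5 x 5 square.
X = {"x1": (-2, -1), "x2": (-2, 0), "x3": (-2, 1),
     "x4": (2, 1), "x5": (2, 0), "x6": (2, -1)}
Y = {"y1": (-1, 2), "y2": (0, 2), "y3": (1, 2),
     "y4": (1, -2), "y5": (0, -2), "y6": (-1, -2)}
Z = {"z1": (-2, -2), "z2": (-2, 2), "z3": (2, 2), "z4": (2, -2)}

def four_neighbors(q):
    r, c = q
    return {(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)}

in_5x5 = set(X.values()) | set(Y.values()) | set(Z.values()) | \
         {(r, c) for r in (-1, 0, 1) for c in (-1, 0, 1)}

# Rule (a): one of the x's together with one of the y's.
pairs_a = [(x, y) for x in X.values() for y in Y.values()]

# Rule (b): one of the z's together with one of its 4-neighbors in the 5 x 5 square.
pairs_b = [(z, q) for z in Z.values() for q in four_neighbors(z) if q in in_5x5]

print(len(pairs_a), len(pairs_b))   # 36 candidate pairs under (a), 8 under (b)

This lists only the candidate minimal supports permitted by the theorem; it does not claim that every such choice yields a working algorithm.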
Figure 14. An infinite set of pixels, p ∪ <s>, no subset of which can be a support for the operator of a connectivity preserving fully parallel thinning algorithm.
The results in [32] can also be used to demonstrate that algorithms whose supports are contained in the (infinite) set p ∪ <s> illustrated in Figure 14 (and its 90° rotation) cannot provide adequate fully parallel thinning. Thus, an infinite class of inadequate supports has been identified.

7.3. Fully Parallel Thinning Algorithm Design Space
In this section we will characterize a large class of connectivity preserving fully parallel thinning algorithms. A typical fully parallel thinning algorithm would use an operator with the following deletion conditions:

FPT Deletion Conditions
a. p is 8-simple
b. p is not an E3 endpoint

There will be certain cases where p must be preserved from deletion in order to preserve connectivity. We can use the Ronse tests to identify these cases. Consider two 4-adjacent 1's, p and q, both of which satisfy the FPT deletion conditions, but which do not constitute an 8-deletable set (so that if both were deleted, R2 would be violated). We find, using Propositions 4.2 and 4.3, that the only possible cases are those shown in Figure 15 and their 90° rotations, where the pairs of adjacent pixels, {a, b}, are chosen so that C(p) = 1 and C(q) = 1 (i.e., a = 1 and b = 0 is not allowed). Preservation conditions must be added to the FPT deletion conditions to guarantee that at least one of p and q is not deleted in each case. Finally, we must avoid the complete deletion of a 2 x 2 square component of 1's. These requirements reveal the design space for connectivity preserving fully parallel thinning algorithms which satisfy the FPT deletion conditions. We now give an example of an algorithm in which the preservation conditions are relatively simple.
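Before turning to the specific cases shown in Figure 15 and to the example algorithm, here is a minimal sketch of the FPT deletion conditions themselves (Python). The small neighborhood helpers repeat those sketched in Section 7.1 so that the block stands alone; the E3 endpoint test is my inference from condition D2 of Section 8 (B(p) at most 1, or B(p) = 2 with A(p) = 1), not a definition quoted from the chapter; and in the 8-4 case p is taken to be 8-simple exactly when C(p) = 1 and p is 4-adjacent to a 0.

N8 = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def nbrs(p, ones):
    r, c = p
    return [1 if (r + dr, c + dc) in ones else 0 for dr, dc in N8]

def A(p, ones):                       # 0 -> 1 transitions around N8(p)
    n = nbrs(p, ones)
    return sum(n[i] == 0 and n[(i + 1) % 8] == 1 for i in range(8))

def B(p, ones):                       # number of 1's in N8(p)
    return sum(nbrs(p, ones))

def C(p, ones):                       # 8-connected components of 1's in N8(p)
    r, c = p
    cells = [(r + dr, c + dc) for dr, dc in N8 if (r + dr, c + dc) in ones]
    comps, seen = 0, set()
    for q in cells:
        if q in seen:
            continue
        comps, stack = comps + 1, [q]
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            stack += [y for y in cells
                      if max(abs(y[0] - x[0]), abs(y[1] - x[1])) == 1]
    return comps

def fpt_deletable(p, ones):
    border = any((p[0] + dr, p[1] + dc) not in ones
                 for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)))
    simple = C(p, ones) == 1 and border                       # 8-simple, 8-4 case
    endpoint = B(p, ones) <= 1 or (B(p, ones) == 2 and A(p, ones) == 1)  # assumed E3
    return simple and not endpoint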
(1)
a 0 1
b p q b
  1 0 a

(2)
a 0 1
b p q 0
  1 1

(3)
0 0 1
0 p q 0
0 0 1

(4)
  1 0 a
b p q b
a 0 1

(5)
  1 0 a
0 p q b
  1 1

(6)
  1 0 0
0 p q 0
  1 0 0

(7)
  1 1
b p q 0
a 0 1

(8)
  1 1
0 p q b
  1 0 a

(9)
  1 1
0 p q 0
  1 1

Figure 15. Cases where the 1's p and q are both FPT-deletable but {p, q} is not 8-deletable. The values of pixels that are left blank are irrelevant; and either a = 0 or b = 1 in each adjacent pair {a, b}, which ensures that C(p) = C(q) = 1.
Algorithm FPTN
The following completely parallel reduction operator is applied repeatedly. A pixel p = 1 is deleted if it satisfies the FPT deletion conditions, and its neighborhood does not match any of the following patterns, or the 90° rotations of (a)-(e):

(a)          (b)
  1 z        y 1
0 p 1        1 p 0
  1 y        z 1

(c)          (d)
0 1          1 0
p 1          p 1
1 0          0 1

(e)
  1 1
0 p 1 0
  1 1

(f)
0 0 0
0 p 1 0
0 1 1
  0
where {y, z} contains at least one 0. The algorithm terminates when no deletions occur at an iteration. Condition (a) preserves p from deletion in cases (5), (6), and (8) of Figure 15; condition (b) preserves q in cases (2), (3), and (7); conditions (c), (d) and (e) preserve p in cases (1), (4) and (9), respectively; and condition (f) prevents deletion of the 2 x 2 square. This operator has an optimally small eleven-pixel support. Typical performance of FPTN is illustrated in Figure 16. The iteration counts for FPTN are not better than those of HSCPN and GH92-AFP2. We will see in the next section that, in fact, these algorithms are already nearly optimally fast.
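A fully parallel driver differs from the two-subfield one only in that every 1 is examined at every iteration and all deletions are applied at once; the 90° rotations of masks (a)-(e) can be generated rather than listed. A sketch (Python; the deletable argument stands for the complete FPTN test, i.e., the FPT conditions plus the preservation masks, and is an assumption of this illustration rather than code from the literature):

def rot90(mask):
    # rotate a mask of (dr, dc, value) triples by 90 degrees about p
    return [(dc, -dr, v) for dr, dc, v in mask]

def rotations(mask):
    out, m = [], mask
    for _ in range(4):
        out.append(m)
        m = rot90(m)
    return out

def thin_fully_parallel(ones, deletable):
    # ones: set of (row, col) coordinates of the 1-pixels
    while True:
        deleted = {p for p in ones if deletable(p, ones)}
        if not deleted:
            return ones
        ones -= deleted                # all deletions of the iteration happen together

The iteration count discussed in the next section is simply the number of passes of this loop that delete at least one pixel.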
Figure 16. Example of thinning by FPTN. Same notation as in Figure 5.
8. ITERATION COUNTS
Designers of parallel thinning algorithms strive to reduce the iteration count (i.e., the total number of iterations required). Specific iteration counts can be measured by applying an algorithm to any given test image. It would be nice to have an estimate of the best achievable iteration count for any such image. With this we would be able to measure how close an algorithm's iteration count is to optimal for a chosen test image. We now develop such estimates, which we call lower bound estimates. In order to avoid the creation of holes, thinning algorithms usually require that a deletable 1 be a border pixel. (This requirement is necessary for small supports such as those considered in this chapter, but parallel operators with sufficiently large supports may be able to delete interior 1's while preserving connectivity properties [46].) For any algorithm which deletes only border 1's, there would appear to be a lower bound on the iteration count imposed by the "thickness" of the objects in the image. For example, an algorithm which deletes only border 1's requires at least ⌊w/2⌋ iterations to produce a medial curve of width one from an upright rectangle of size w x l, w ≤ l. For general patterns a lower bound might be estimated by deleting border 1's at successive parallel iterations (without regard to connectivity preservation) and counting the number of iterations required to reach a "final result". However, determining an appropriate final result is problematical. We could allow deletion to proceed until all remaining pixels are border pixels; let us denote the corresponding lower bound by β₀. But we would then fail to delete any pixels of two-pixel wide upright rectangles. To address such cases we define a more refined lower bound estimate by performing deletion until all remaining pixels are border pixels, at which point we allow the estimate to include one more iteration if at least one "deletable" pixel remains. For this purpose we define two alternative deletability conditions:
D1. A(p) = 1 and B(p) ≥ 2
D2. [C(p) = 1 and B(p) > 2] or [C(p) = 1 and B(p) = 2 and A(p) > 1].
Condition D1 is appropriate for A(p)=1-based thinning algorithms in which the E2 endpoint definition is used. Condition D2 is appropriate for C(p)=1-based algorithms in which E3 is used. We define two refined lower bound estimates, β₁ and β₂, where β₁ uses D1 and β₂ uses D2. We can give explicit definitions of the βᵢ in the following manner. For any set of 1's, S, let

t(S) = max{ d₄(p, S') | p ∈ S }

where d₄(p, S') denotes the length of the shortest 4-path between p and a 0. S will be empty after exactly t(S) iterations; hence β₀ = t(S) - 1. Next let

R(S) = { p ∈ S | d₄(p, S') = t(S) }

which is the set of 1's of S removed at the t(S)-th iteration. We then have

β₁ = t(S)       if R(S) contains a pixel p satisfying D1
     t(S) - 1   otherwise.
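These estimates are easy to compute from a 4-distance transform of the image. The sketch below (Python) does so; the predicates d1 and d2 implementing D1 and D2 are supplied by the caller, they can be built from the A, B and C helpers sketched in Section 7, and, as noted above, they are evaluated on R(S).

from collections import deque

def four_distance(ones):
    # d4(p, S') for every 1, by breadth-first search inward from the border 1's.
    dist, queue = {}, deque()
    for r, c in ones:
        if any((r + dr, c + dc) not in ones
               for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0))):
            dist[(r, c)] = 1           # border 1's are at 4-distance 1 from a 0
            queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            q = (r + dr, c + dc)
            if q in ones and q not in dist:
                dist[q] = dist[(r, c)] + 1
                queue.append(q)
    return dist

def lower_bound_estimates(ones, d1, d2):
    dist = four_distance(ones)
    t = max(dist.values())
    R = {p for p, d in dist.items() if d == t}     # 1's removed at the t(S)-th iteration
    beta0 = t - 1
    beta1 = t if any(d1(p, R) for p in R) else t - 1
    beta2 = t if any(d2(p, R) for p in R) else t - 1
    return beta0, beta1, beta2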
β₂ is defined analogously, but using D2 instead of D1. (Note that the D1 and D2 conditions are computed for R(S), not for S.) Consider two small example images of 1's, (a) and (b).
β₀ = 1 for both examples; β₁ = 2 for (a) and β₁ = 1 for (b); and β₂ = 2 for both examples. β₁ estimates the time performance achievable by an A(p)=1-based reduction operator which uses the E2 endpoint definition, whereas β₂ is more appropriate for C(p)=1-based operators which use the E3 definition. For any image we have β₀ ≤ β₁ ≤ β₂.
The βᵢ are not correct lower bounds for all images. Indeed, using an observation made originally by Arcelli [2], we can construct images like the following, whose 1's (including p = 1) are shown.
In this image no 1 can be deleted without violating connectivity preservation and any correct thinning algorithm will terminate in 0 iterations; but β₀ = β₁ = β₂ = 2. (Eckhardt et al. [20, 21] consider classes of similar irreducible sets with interior 1's and with holes.) Thus the β's are only estimates of the best results achievable.
Table 1 gives typical results for HSCPN, GH92-AFP2 and GH92-AFP3 for artificial and natural test images. Details of the test sets are given in [28]. The artificially created rectangle test set included rectangles of various orientations and widths. HSCPN and GH92-AFP2 are FP class algorithms and their iteration counts are compared to β₁. GH92-AFP3, which has a more liberal deletion condition (similar to that of a C(p)=1-based algorithm), is compared to β₂. The reported iteration counts do not include the final iteration, at which no changes occur. The three algorithms closely approach, or even do better than, their corresponding β estimates, and for this reason we believe that they are nearly optimally fast. GH92-AFP3 produces the thinnest medial curves yet reported for a connectivity preserving fully parallel thinning algorithm. Since GH92-AFP3 produces thin results with near-optimal iteration counts, it may be an ideal choice for practitioners looking for fast, effective fully parallel thinning algorithms with an optimally small operator support.
Table 1
Lower bound estimates for four sets of test patterns, and iteration counts for three algorithms applied to these patterns. Entries are average numbers of iterations; percentages are relative to β₁ for the first two algorithms and relative to β₂ for the third.

Test Pattern Sets      β₀     β₁     β₂     HSCPN          GH92-AFP2      GH92-AFP3
English Letters        4.25   4.75   4.75   4.58 (-3.5%)   4.58 (-3.5%)   4.67 (-1.8%)
Chinese Characters     5.67   5.83   6.17   5.75 (-1.4%)   5.75 (-1.4%)   6.17 ( 0.0%)
Arabic Words           4.92   5.08   5.33   5.17 ( 1.6%)   5.00 (-1.6%)   5.25 (-1.5%)
Rectangles             3.71   3.88   4.15   3.90 ( 0.5%)   3.90 ( 0.5%)   4.18 ( 0.7%)
The actual computing time for a given thinning algorithm implemented on a specific machine will depend on the time complexity of the implementation of the operator(s) as well as the iteration counts. But if we assume that sufficient hardware is available to enable the computation of any given local operator in some fixed time, which would be reasonable if, for example, operators are stored in lookup tables, then iteration counts are an appropriate measure of computing time. Further, when the iteration counts for an algorithm are given, it is possible to predict the performance of an implementation of the algorithm on any particular machine by evaluating the time complexities of the required operators. Chen and Hsu [9] use the number of 3 x 3 neighborhoods of a 1 for which the 1 is deleted as another measure of parallel thinning operator efficiency. A variety of issues relevant to the realization of parallel thinning algorithms on parallel architectures are addressed in [10, 33, 40, 47, 49].
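The lookup-table idea is straightforward for a 3 x 3 operator: the eight neighbor values are packed into an 8-bit index and the operator's decision is read from a precomputed table, so each application costs constant time. A minimal sketch (Python; decide stands for whatever deletion test the operator embodies and is not specific to any one algorithm above); for the eleven-pixel supports discussed earlier the same idea needs a 10-bit index and a table of 1024 entries.

def build_table(decide):
    # decide(neighbors) -> True/False, where neighbors is a tuple of the eight
    # neighbor values listed in a fixed circuit; one table entry per configuration.
    return [decide(tuple((code >> i) & 1 for i in range(8))) for code in range(256)]

def neighborhood_code(p, ones):
    r, c = p
    offsets = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]
    return sum(((r + dr, c + dc) in ones) << i
               for i, (dr, dc) in enumerate(offsets))

# At run time, each application of the operator is a single lookup:
# delete p iff table[neighborhood_code(p, ones)].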
9. SUMMARY
A variety of approaches to parallel thinning using operators with small supports have been reviewed, with some emphasis on how one may preserve, and prove one has preserved, connectivity. Simple connectivity preservation tests have been presented for A(p)=1-based and C(p)=1-based parallel thinning operators and examples have been given of how to use these tests to prove connectivity preservation for various thinning algorithms. For fundamental classes of parallel thinning algorithms, including fully parallel, two-subiteration, and two-subfield, conditions have been identified using these tests which are sufficient for preservation of connectivity. Thus design spaces for connectivity preserving operators belonging to these classes have been described. Some fundamental limitations on fully parallel thinning algorithms have been reviewed, including constraints on support size and shape. The outputs of the algorithms considered in this chapter tend to be fairly similar, usually differing mainly in how well the endpoint condition prevents excessive erosion and in the thinness of the medial curve. The algorithms are more clearly distinguished by their iteration counts. Most existing fully parallel thinning algorithms seem to be nearly optimally fast. This suggests that little further progress is attainable in improving their time performance. There is some irony in the fact that the early thinning algorithms of Rutovitz [58] and Deutsch [13, 14] were very similar to recently defined fully parallel algorithms (e.g., GH92-AFP3 [28]) which appear to be near-optimal.
Acknowledgment
The author enjoyed the incisive comments of the editors, T.Y. Kong and A. Rosenfeld, regarding many facets of the material in this chapter.

REFERENCES
1. C. Arcelli. A condition for digital points removal. Signal Processing, 1:283-285, 1979.
2. C. Arcelli. Pattern thinning by contour tracing. Comput. Graphics Image Process., 17:130-144, 1981.
3. C. Arcelli, L. Cordella, and S. Levialdi. Parallel thinning of binary pictures. Electronics Letters, 11:148-149, 1975.
4. C. Arcelli, P.C.K. Kwok, and G. Sanniti di Baja. Parallel pattern compression by octagonal propagation. Int. Journal of Pattern Recognition and Artificial Intell., 7:1077-1102, 1993.
5. H. Blum. A transformation for extracting new descriptors of shape. In W. Wathen-Dunn, editor, Models for the Perception of Speech and Visual Form, pages 362-380. MIT Press, Cambridge, MA, 1967.
6. N.G. Bourbakis. A parallel-symmetric thinning algorithm. Pattern Recognition, 22:387-396, 1989.
7. Y.S. Chen and W.H. Hsu. A 1-subcycle parallel thinning algorithm for producing perfect 8-curves and obtaining isotropic skeleton of an L-shape pattern. In Proceedings IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 208-215, San Diego, CA, June 4-8, 1989.
8. Y.S. Chen and W.H. Hsu. A systematic approach for designing 2-subcycle and pseudo 1-subcycle parallel thinning algorithms. Pattern Recognition, 22:267-282, 1989.
9. Y.S. Chen and W.H. Hsu. A comparison of some one-pass parallel thinnings. Pattern Recognition Letters, 11:35-41, 1990.
10. R.T. Chin, H.K. Wan, D.L. Stover, and R.D. Iverson. A one-pass thinning algorithm and its parallel implementation. Comput. Vision Graphics Image Process., 40:30-40, 1987.
11. E.R. Davies and A.P.N. Plummer. Thinning algorithms: A critique and a new methodology. Pattern Recognition, 14:53-63, 1981.
12. A.L. DeCegama. The Technology of Parallel Processing: Parallel Processing Architectures and VLSI Hardware, volume 1. Prentice-Hall, Englewood Cliffs, NJ, 1989.
13. E.S. Deutsch. Towards isotropic image reduction. In Proceedings IFIP Congress 1971, pages 161-172, Ljubljana, Yugoslavia, 1971. North-Holland.
14. E.S. Deutsch. Thinning algorithms on rectangular, hexagonal, and triangular arrays. Comm. ACM, 15:827-837, 1972.
15. M.J.B. Duff and T.J. Fountain, editors. Cellular Logic Image Processing. Academic Press, New York, 1986.
16. C.R. Dyer and A. Rosenfeld. Thinning algorithms for grayscale pictures. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-1:88-89, 1979.
17. U. Eckhardt. Digital topology I. A classification of 3 x 3 neighborhoods with application to parallel thinning in digital pictures. Technical Report Reihe A, Preprint 8, Hamburger Beitrage zur Angewandten Mathematik, Hamburg, Germany, August 1987.
18. U. Eckhardt. Digital topology II. Perfect points on the inner boundary. Technical Report Reihe A, Preprint 11, Hamburger Beitrage zur Angewandten Mathematik, Hamburg, Germany, November 1987.
19. U. Eckhardt. A note on Rutovitz' method for parallel thinning. Pattern Recognition Letters, 8:35-38, 1988.
20. U. Eckhardt and G. Maderlechner. Parallel reduction of digital sets. Siemens Forsch.- u. Entwickl.-Ber., 17:184-189, 1988.
21. U. Eckhardt and G. Maderlechner. The structure of irreducible digital sets obtained by thinning algorithms. In Proceedings Ninth IAPR International Conference on Pattern Recognition, pages 727-729, Rome, Italy, November 14-17, 1988.
22. U. Eckhardt and G. Maderlechner. Thinning of binary images. Technical Report Reihe B, Bericht 11, Hamburger Beitrage zur Angewandten Mathematik, Hamburg, Germany, April 1989.
23. T.J. Fountain and M.J. Shute, editors. Multiprocessor Computer Architectures. North-Holland, Amsterdam, 1990.
24. V. Goetcherian. From binary to grey tone image processing using fuzzy logic concepts. Pattern Recognition, 12:7-15, 1980.
25. M. Gokmen and R.W. Hall. Parallel shrinking algorithms using 2-subfields approaches. Comput. Vision Graphics Image Process., 52:191-209, 1990.
26. M.J.E. Golay. Hexagonal parallel pattern transformations. IEEE Trans. Computers, C-18:733-740, 1969.
27. Z. Guo and R.W. Hall. Parallel thinning with two-subiteration algorithms. Comm. ACM, 32:359-373, 1989.
28. Z. Guo and R.W. Hall. Fast fully parallel thinning algorithms. CVGIP: Image Understanding, 55:317-328, 1992.
29. R.W. Hall. Fast parallel thinning algorithms: Parallel speed and connectivity preservation. Comm. ACM, 32:124-131, 1989.
30. R.W. Hall. Comments on 'A parallel-symmetric thinning algorithm' by Bourbakis. Pattern Recognition, 25:439-441, 1992.
31. R.W. Hall. Tests for connectivity preservation for parallel reduction operators. Topology and Its Applications, 46:199-217, 1992.
32. R.W. Hall. Optimally small operator supports for fully parallel thinning algorithms. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-15:828-833, 1993.
33. S. Heydorn and P. Weidner. Optimization and performance analysis of thinning algorithms on parallel computers. Parallel Computing, 17:17-27, 1991.
34. C.J. Hilditch. Linear skeletons from square cupboards. In B. Meltzer and D. Michie, editors, Machine Intelligence 4, pages 403-420. American Elsevier, New York, 1969.
35. C.J. Hilditch. Comparison of thinning algorithms on a parallel processor. Image and Vision Computing, 1:115-132, 1983.
36. C.M. Holt, A. Stewart, M. Clint, and R.H. Perrott. An improved parallel thinning algorithm. Comm. ACM, 30:156-160, 1987.
37. C.V. Kameswara Rao, D.E. Danielsson, and B. Kruse. Checking connectivity preservation properties of some types of picture processing operations. Comput. Graphics Image Process., 8:299-309, 1978.
38. J. Kittler and M.J.B. Duff, editors. Image Processing System Architectures. Wiley, New York, 1985.
39. T.Y. Kong and A. Rosenfeld. Digital topology: Introduction and survey. Comput. Vision Graphics Image Process., 48:357-393, 1989.
40. J.T. Kuehn, J.A. Fessler, and H.J. Siegel. Parallel image thinning and vectorization on PASM. In Proceedings IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 368-374, San Francisco, CA, June 10-13, 1985.
41. L. Lam, S-W. Lee, and C.Y. Suen. Thinning methodologies - A comprehensive survey. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-14:869-885, 1992.
42. H. Li and Q.F. Stout, editors. Reconfigurable Massively Parallel Computers. Prentice Hall, Englewood Cliffs, NJ, 1991.
43. X. Li and A. Basu. Variable-resolution character thinning. Pattern Recognition Letters, 12:241-248, 1991.
44. H.E. Lü and P.S.P. Wang. A comment on "A fast parallel algorithm for thinning digital patterns". Comm. ACM, 29:239-242, 1986.
45. N.J. Naccache and R. Shinghal. An investigation into the skeletonization approach of Hilditch. Pattern Recognition, 17:279-284, 1984.
46. L. O'Gorman. k x k thinning. Comput. Vision Graphics Image Process., 51:195-215, 1990.
47. J. Olszewski. A flexible thinning algorithm allowing parallel, sequential and distributed application. ACM Trans. Math. Software, 18:35-45, 1992.
48. T. Pavlidis. Algorithms for Graphics and Image Processing. Springer-Verlag, Berlin, 1982. Chap. 9.
49. T. Pavlidis. An asynchronous thinning algorithm. Comput. Graphics Image Process., 20:133-157, 1982.
50. R. Plamondon and C.Y. Suen. Thinning of digitized characters from subjective experiments: A proposal for a systematic evaluation protocol of algorithms. In A. Krzyzak, T. Kasvand, and C.Y. Suen, editors, Computer Vision and Shape Recognition. World Scientific, Singapore, 1989.
51. K. Preston. Feature extraction by Golay hexagonal pattern transforms. IEEE Trans. Computers, C-20:1007-1014, 1971.
52. K. Preston and M.J.B. Duff. Modern Cellular Automata - Theory and Applications. Plenum Press, New York, 1984.
53. C. Ronse. A topological characterization of thinning. Theoret. Comput. Sci., 43:31-41, 1986.
54. C. Ronse. Minimal test patterns for connectivity preservation in parallel thinning algorithms for binary digital images. Discrete Applied Math., 21:67-79, 1988.
55. A. Rosenfeld. Connectivity in digital pictures. J. ACM, 17:146-160, 1970.
56. A. Rosenfeld. A characterization of parallel thinning algorithms. Information and Control, 29:286-291, 1975.
57. A. Rosenfeld and A.C. Kak. Digital Picture Processing, volume 2. Academic Press, New York, second edition, 1982.
58. D. Rutovitz. Pattern recognition. J. Royal Statist. Soc., 129:504-530, 1966.
59. R.W. Smith. Computer processing of line images: A survey. Pattern Recognition, 20:7-15, 1987.
60. J.H. Sossa. An improved parallel algorithm for thinning digital patterns. Pattern Recognition Letters, 10:77-80, 1989.
61. R. Stefanelli and A. Rosenfeld. Some parallel thinning algorithms for digital pictures. J. ACM, 18:255-264, 1971.
62. S. Suzuki and K. Abe. Binary picture thinning by an iterative parallel two-subcycle operation. Pattern Recognition, 20:297-307, 1987.
63. H. Tamura. A comparison of line thinning algorithms from digital geometry viewpoint. In Proceedings Fourth IAPR International Conference on Pattern Recognition, pages 715-719, Tokyo, Japan, 1978.
64. S. Yokoi, J. Toriwaki, and T. Fukumura. An analysis of topological properties of digitized binary pictures using local features. Comput. Graphics Image Process., 4:63-73, 1975.
65. T.Y. Zhang and C.Y. Suen. A fast parallel algorithm for thinning digital patterns. Comm. ACM, 27:236-239, 1984.