Pattern Recognition, Vol. 29, No. 7, pp. 1105-1115, 1996. Elsevier Science Ltd. Copyright © 1996 Pattern Recognition Society. Printed in Great Britain. All rights reserved. 0031-3203/96 $15.00+.00
0031-3203(95)00153-0
EQUILATERAL POLYGON APPROXIMATION OF CLOSED CONTOURS

FERNANDO RANNOU and JENS GREGOR*

Department of Computer Science, University of Tennessee, Knoxville, TN 37996-1301, U.S.A.

(Received 2 December 1994; in revised form 24 August 1995; received for publication 13 November 1995)
Abstract--Numerous algorithms have been suggested for computing polygonal approximations of closed contours. We present an algorithm which as a unique feature creates polygons which are equilateral, that is, whose edges are all of the same length. This allows one-dimensional shape descriptions to be derived using the interior polygon angles. Conceptually, the algorithm is an optimization framework for finding the minimum energy configuration of a mechanical system of particles and springs that represent the contour points, the polygon vertices and their interaction. In practice, the dominant points are detected on the Gaussian smoothed contour and used to seed an initial polygon. Nonlinear programming is then used to minimize the system energy subject to the constraint that adjacent vertices must be equidistant. The objective function is the sum of a curvature weighted distance between each vertex and a set of contour points associated therewith. Experimental results are given for closed contours obtained from grey-scale images. Copyright © 1996 Pattern Recognition Society. Published by Elsevier Science Ltd.

Closed contour    Shape representation    Polygonal approximation
1. INTRODUCTION

Polygonal approximation is the piecewise linear representation of a closed contour. An important area of application is computer vision, where it is used to represent object boundaries, thereby facilitating shape recognition. Numerous algorithms have been suggested for computing polygonal approximations. There are, for example, algorithms that decide where to place polygon vertices by minimizing, respectively, the polygon perimeter,(1) the maximum distance from the contour points to the polygon edges,(2-4) the number of polygon edges together with an error norm like the maximum distance(5-8) or the deviation in area between the contour and the polygon.(9,10) Contour points with high curvature are known to convey important shape information, and a large family of algorithms places vertices at the location of these so-called dominant points.(11-20) Some of the algorithms make one-time-only decisions and some iteratively correct an initial estimate, but a common characteristic is that the resulting edge lengths are by-products of where the vertices get placed. As will become apparent shortly, none of the existing polygon approximation algorithms are therefore directly comparable with ours.

To distinguish one closed contour from another, characteristic features of their polygonal approximations such as vertex and edge length statistics may be extracted and compared. An alternative is to represent the polygons as cyclic sequences of edge length and angle pairs and compare those. String matching is a powerful technique for determining the similarity

* Author for correspondence.
between two sequences of symbols,(21,22) even when they are cyclic.(23-27) However, a polygon description is inherently two-dimensional and the method requires that the sequences be one-dimensional. Modified string matching algorithms have been reported which overcome this problem by incorporating a heuristic mapping that combines edge lengths and angles into a single number.(28-30)

We present an algorithm for creating a polygonal approximation which immediately lends itself to string encoding. We accomplish this by constraining the polygon to be equilateral, that is, by making all edges be of the same prespecified length, which allows a one-dimensional shape description to be derived using the interior polygon angles. Our algorithm is, to the best of our knowledge, the first one to consider such a constraint.

While the equilateral polygon approximation problem is easily stated, it is difficult to solve because of the inherent connectedness of the vertices. Basically, the impediment is that the local placement decision made for a vertex has global consequences, since it most likely will affect the decisions made for some of the other vertices. We therefore propose to create an initial polygon and then iteratively manipulate it until it becomes equilateral. The resulting polygon must, of course, resemble the original contour, which calls for the iteration scheme to also minimize an appropriate error function. In Section 2, we consequently formulate the equilateral polygon approximation computation as a nonlinear equality-constrained optimization problem. Due to the immense computational requirements involved with finding solutions to such problems, the initial polygon should be close to the optimal
solution, as this in general will speed up the computation and for some applications perhaps even render it obsolete. In Sections 3 and 4, respectively, we review a number of algorithms for dominant point detection, choose one, and describe how to create a suitable initial polygon from the set of dominant points detected thereby. In Section 5, we argue why the nature of the optimization problem necessitates the use of nonlinear programming and describe the basic steps of the computation with emphasis on the role of the edge length constraints. In Section 6, we provide an experimental evaluation to illustrate the performance of the algorithm.

2. OUTLINE OF ALGORITHM
Let C = (c_0, c_1, ..., c_{n-1}) be a discrete, closed contour, where c_j = (x_j, y_j) denotes the jth contour point, and let S = (s_0, s_1, ..., s_{m-1}) be a polygonal approximation thereof, where s_i = (x_i, y_i) denotes the ith polygon vertex. Given contour C and edge length L, the problem is to find a polygon S that resembles C well and has the property that ||s_i - s_{i-1}|| = L for i = 0, 1, ..., m - 1. Throughout the paper, the indices of C and S are taken modulo n and m, respectively, and ||·|| denotes Euclidean distance.

With reference to Fig. 1, consider the following mechanical system of particles and springs. Let the contour points be fixed particles that cannot be moved from their location. Let the polygon vertices be particles that can move freely under the constraint that all adjacent vertex particles lie the same fixed distance apart. Connect each vertex particle to a number of contour particles through ideal springs whose natural length is zero, such that the potential energy stored in a spring is zero if and only if the corresponding particles coincide. The stiffness constant, which represents the amount of force needed to stretch a spring one length unit, may be different for the individual springs. From any given initial configuration, the vertex particles will move back and forth under the forces exerted by the springs until the overall potential energy stored in the system reaches its minimum. The equilibrium state configuration of the mechanical system is well-defined and can be determined
numerically by solving the following nonlinear equality-constrained optimization problem:

    Minimize    E(S) = sum_{i=0}^{m-1} sum_{c_j in R_i} k_{i,j} ||s_i - c_j||^2
    subject to  ||s_i - s_{i-1}|| = L    for i = 0, 1, ..., m - 1,

where c_j is a contour particle to which vertex particle s_i has a spring attached, R_i denotes the set of all such contour particles and k_{i,j} is the stiffness constant for the spring connecting s_i and c_j.

The number of vertex particles remains fixed throughout the computation and must be chosen with care, as must the system parameters. For example, a large value of k_{i,j} will tend to keep s_i close to c_j, whereas a small value will allow, but not enforce, displacement. The effect of choosing a specific R_i is more subtle but, in general, a large number of contour particles located close together will have a larger pull on s_i than a smaller number of dispersed contour particles. Note that the objective function is convex, which ensures that the solution found will be optimal with respect to the edge length constraint and the setting of the system parameters.

With these considerations in mind, we propose the following algorithm for equilateral polygon approximation:

(i) Create an initial polygon whose edges are close to being of the desired length by assigning vertices first to dominant points and then to other contour points as necessary.

(ii) Solve the above nonlinear equality-constrained optimization problem using system parameter settings that discourage the relocation of vertices associated with dominant points.

The polygon initialization serves to determine the number of vertices and their approximate location, and the dominant points are used to emphasize important shape characteristics. The system parameters describe the interaction between the contour points and the vertices and can therefore not be prespecified, but must be derived on the basis of the initial polygon.
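To make the system energy and the constraint residuals concrete, they can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code; the data layout (lists of (x, y) tuples, a `regions` map from vertex index to contour-point indices for R_i, and a `stiffness` dict for k_{i,j}) is our own assumption:

```python
import math

def spring_energy(vertices, contour, regions, stiffness):
    """Potential energy of the particle-spring system: each polygon
    vertex s_i is pulled toward the contour points c_j in its region R_i
    by zero-natural-length springs with stiffness k_{i,j}."""
    energy = 0.0
    for i, (sx, sy) in enumerate(vertices):
        for j in regions[i]:             # indices of the contour points in R_i
            cx, cy = contour[j]
            energy += stiffness[(i, j)] * ((sx - cx) ** 2 + (sy - cy) ** 2)
    return energy

def edge_length_residuals(vertices, L):
    """Residuals ||s_i - s_{i-1}|| - L; all of them must vanish for an
    equilateral polygon (indices taken modulo m, as in the paper)."""
    m = len(vertices)
    return [math.dist(vertices[i], vertices[i - 1]) - L for i in range(m)]
```

For a unit square whose vertices coincide with their associated contour points, the energy is zero and every residual vanishes for L = 1; displacing a vertex raises the energy by the stiffness times the squared displacement.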
3. DOMINANT POINT DETECTION
Although the term dominant point is not well-defined in the literature, there is general agreement that it refers to a contour point with high curvature as found at corners and bends. Given a contour of the parametric form x = f(t) and y = g(t), curvature measures the instantaneous rate of change of the slope angle θ with respect to arc length s,(31) that is:

    κ = dθ/ds = (dθ/dt)/(ds/dt),

where

    tan θ = (dy/dt)/(dx/dt)

Fig. 1. Mechanical system of particles and springs.
and

    ds/dt = sqrt((dx/dt)^2 + (dy/dt)^2).

Using the short-hand notation x' = dx/dt, y' = dy/dt, x'' = d^2x/dt^2 and y'' = d^2y/dt^2, it follows that:

    κ = (x'y'' - x''y') / (x'^2 + y'^2)^(3/2).

Generally speaking, dominant point detection is a matter of identifying the contour points that correspond to local maxima of the absolute value of κ. However, when the contour is discrete, as it will be when extracted from a digital image, it is not clear how to compute κ. Consequently, numerous algorithms have been suggested for discrete dominant point detection.

Several authors, such as Mokhtarian and Mackworth,(13) Ansari and Delp,(15) Pei and Lin,(16) Rattarangsi and Chin,(17) and Melen and Ozanian,(19) compute κ by approximating the differentiation operators by finite differences. To achieve stability of the numerical differentiation step, the contour must first be regularized, e.g. by means of Gaussian smoothing.(32) The cyclic sequences of x and y coordinates of the contour points are independently convolved with a symmetric one-dimensional filter of the form [a_w ... a_1 a_0 a_1 ... a_w], where

    a_k = 1/(sqrt(2π)σ) · exp(-k^2/(2σ^2)).

The width w determines the accuracy of the filter and could, for example, be chosen so that 0.99 < a_0 + 2·sum_{k=1}^{w} a_k < 1.0.(15) The standard deviation σ controls the level of smoothness and is used by some authors to obtain a scale-space representation.(13,16,17) The differentiation operators are typically approximated by(15) x' = x_{i+1} - x_{i-1} and x'' = x_{i-1} - 2x_i + x_{i+1}, and likewise for y' and y''. More complex approximations have been suggested(17) that ensure x'^2 + y'^2 = 1, which in turn leads to κ = x'y'' - x''y'. This is in compliance with the continuous case, for which the same simplified expression can be obtained by substituting x' = cos φ and y' = sin φ. The Gaussian smoothed contour is not discrete, but there is a one-to-one correspondence between its contour points and those of the original one. Having detected the curvature extrema, it is thus a simple matter to identify the corresponding discrete contour points.

Other authors carry out the dominant point detection by identifying "significant angles". For each contour point c_i, the k-cosine method by Rosenfeld and Johnston(11) computes {cos θ_{ik}, k = 1, ..., m}, where θ_{ik} is the angle spanned by the chords c_{i-k}c_i and c_i c_{i+k} and m is an input parameter. The largest h is found for which cos θ_{i,m} < cos θ_{i,m-1} < ... < cos θ_{i,h} ≥ cos θ_{i,h-1}, and c_i is labeled as a dominant point if cos θ_{ih} is a local maximum. Rosenfeld and Weszka(12) improve the performance using local averaging to smooth the k-cosines. Teh and Chin(14) determine parameter m by evaluating the ratio d_{ik}/l_{ik} for increasing values of k, where d_{ik} is the perpendicular distance from contour point c_i to the chord c_{i-k}c_{i+k}, whose length is l_{ik}. The asymmetric kl-cosine method by Ray and Ray(18) computes only one cosine per contour point using the angle spanned by the chords c_{i-k}c_i and c_i c_{i+l}, where k and l are determined on the basis of collinearity properties of the left and right regions of support, respectively. The dominant points are those for which the kl-cosine is a local maximum with respect to the immediate neighbors.

For our purposes, the dominant points will be used to seed rather than completely establish the polygon initialization. We are therefore only interested in detecting the more significant dominant points. Since the angular methods all tend to produce a large number of dominant points, we choose to use a curvature approximation method. In particular, we follow the approach of Ansari and Delp,(15) who use a threshold to avoid near-zero curvature maxima since they mainly reflect noise. Our threshold will be set to pick only the more significant dominant points.
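The smoothing-plus-differencing pipeline can be sketched as follows. This is our own illustrative rendering of the cited recipe (function names are ours); the curvature values are unnormalized, which is harmless since only their relative magnitudes and extrema matter for dominant point detection:

```python
import math

def gaussian_kernel(sigma, coverage=0.99):
    """Symmetric filter [a_w ... a_1 a_0 a_1 ... a_w] with
    a_k = exp(-k^2 / (2 sigma^2)) / (sqrt(2 pi) sigma); the width w is
    grown until the coefficients sum to at least `coverage`."""
    norm = math.sqrt(2 * math.pi) * sigma
    a = [1.0 / norm]
    while a[0] + 2 * sum(a[1:]) < coverage:
        k = len(a)
        a.append(math.exp(-k * k / (2 * sigma * sigma)) / norm)
    return a

def smooth_cyclic(values, a):
    """Circular convolution of a cyclic coordinate sequence with the filter."""
    n, w = len(values), len(a) - 1
    return [sum(a[abs(k)] * values[(i + k) % n] for k in range(-w, w + 1))
            for i in range(n)]

def curvature(xs, ys):
    """kappa_i proportional to (x'y'' - x''y') / (x'^2 + y'^2)^(3/2),
    using central finite differences on the (smoothed) cyclic contour."""
    n = len(xs)
    kappas = []
    for i in range(n):
        xd = xs[(i + 1) % n] - xs[i - 1]          # x'
        yd = ys[(i + 1) % n] - ys[i - 1]          # y'
        xdd = xs[i - 1] - 2 * xs[i] + xs[(i + 1) % n]   # x''
        ydd = ys[i - 1] - 2 * ys[i] + ys[(i + 1) % n]   # y''
        denom = (xd * xd + yd * yd) ** 1.5 or 1e-12
        kappas.append((xd * ydd - xdd * yd) / denom)
    return kappas
```

For σ = 1.3 the kernel comes out with w = 3 and a_0 ≈ 0.307, in line with the coefficients reported in Section 6; dominant point candidates are then the local maxima of the absolute curvature values above the chosen threshold.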
4. POLYGON INITIALIZATION
The polygon initialization serves to determine the number of vertices as well as their approximate location. The number of vertices must be chosen carefully in order not to jeopardize the optimization process; since the length will eventually be the same for all edges, poor polygon approximations may result from not having enough vertices as well as from having too many. Being able to determine the exact vertex locations is less crucial, but the closer the initial estimate is to the optimal solution, the easier the optimization problem is to solve.

Each pair of adjacent dominant points defines a contour segment. Before placing any vertices, we ensure that all these segments are large enough to accommodate two edges of almost the right length. That is, for the pair of adjacent dominant points c_i and c_j, we check to see if there exists a contour point c_k located between them for which αL ≤ ||c_i - c_k|| + ||c_k - c_j||, where α is a prespecified constant close to 2; if such a c_k does not exist, we discard the dominant point associated with lower curvature. Using the prevailing dominant points as vertex locations, a first crude polygon approximation is obtained. The above rule compensates for the fact that contour points are labeled as being dominant without considering their location relative to one another. Were we to use all the dominant points, too many vertices might result from the polygon initialization. We evaluate the effect of choosing different values for α in the experiments section.

The next step is to fill in more vertices in order to make the polygon almost equilateral. For that purpose, we carry out an iterative forward-backward
Fig. 2. Vertex s_r is assigned to the contour point that minimizes the accumulated edge length error, where (a) ε is the error introduced by c_{k-1} and (b) ε is the error introduced by c_k.
Fig. 3. (a) The adjacent vertices s_f and s_b may lie too close to one another, in which case (b) they are replaced by a single vertex s_r.
search within each contour segment. Consider, for example, the pair of adjacent dominant point vertices s_i = c_i and s_j = c_j. Furthermore, let s_f and s_b, respectively, denote the last vertices found when searching forward and backward; initially, s_f = s_i and s_b = s_j. The forward-backward search then takes place as follows:

• Scan forward along the contour going from s_f toward s_b until finding a c_k for which ||s_f - c_{k-1}|| < L ≤ ||s_f - c_k||. Assign a new vertex s_r to c_{k-1} or c_k, whichever minimizes the accumulated edge length error (cf. Fig. 2), and update s_f accordingly.

• Scan backward along the contour going from s_b toward s_f in the same manner, assigning a new vertex s_r to c_{k-1} or c_k, as described above, and update s_b accordingly.

The forward-backward search is repeated until a c_k can no longer be found, at which point we check whether s_f and s_b lie too close to each other. With reference to Fig. 3, if ||s_f - s_b|| < βL, where β is a prespecified constant less than 1, then we replace s_f and s_b by a single vertex s_r which is equidistant to both of them; otherwise, we accept the vertex configuration and proceed with the next contour segment. If β is set too low then we may produce a polygon with too many vertices. Conversely, if β is set too high then we may give up a perfectly good fit. We evaluate the effect of choosing different values for β in the experiments section. When all contour segments have been processed in the above manner, the polygon initialization terminates.
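The fill-in step can be sketched as follows. This is a simplified, forward-only rendering of the idea (a hypothetical helper, not the paper's exact forward-backward procedure): it walks a contour segment between two dominant points, drops a vertex whenever the distance from the previous one first reaches L, and applies the β merge test at the end:

```python
import math

def fill_segment(contour, i, j, L, beta=0.8):
    """Greedy fill-in of vertices between dominant points c_i and c_j:
    walk forward from c_i toward c_j, placing a vertex at each contour
    point whose distance from the last vertex first reaches L, and merge
    away the final fill-in vertex if it ends up closer than beta * L to
    the terminating dominant point."""
    n = len(contour)
    vertices = [contour[i]]
    k = (i + 1) % n
    while k != j:
        if math.dist(vertices[-1], contour[k]) >= L:
            vertices.append(contour[k])
        k = (k + 1) % n
    # beta test: last fill-in vertex may sit too close to c_j
    if len(vertices) > 1 and math.dist(vertices[-1], contour[j]) < beta * L:
        vertices.pop()
    return vertices + [contour[j]]
```

On evenly spaced collinear contour points with L = 3 and β = 0.8, for instance, a trailing vertex that lands only one unit before the terminating dominant point is merged away, leaving one slightly long edge for the optimization stage to even out.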
5. NONLINEAR PROGRAMMING OPTIMIZATION
The initial polygon will most likely not be equilateral, but its vertices will lie neither too close together nor too far apart, so they should not have to be moved much to obtain one that is. With respect to the specification of the system parameters, we thus want to penalize moving vertices away from their initial location. In particular, we want vertices located at or near dominant points to stay where they are, which calls for high curvature regions of the contour to have a larger pull on nearby vertices than low curvature regions. Both goals are accomplished by setting k_{i,j} = |κ_j| and defining R_i as the set of contour points located between the midpoints of the contour segments defined by vertex s_i and its neighbors s_{i-1} and s_{i+1} on either side.

Regarding the optimization, recall that the problem to be solved is given as:

    Minimize    E(S) = sum_{i=0}^{m-1} sum_{c_j in R_i} k_{i,j} ||s_i - c_j||^2
    subject to  ||s_i - s_{i-1}|| = L    for i = 0, 1, ..., m - 1.
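As a self-contained illustration, this constrained problem can also be attacked with a quadratic-penalty scheme, folding the edge-length constraints into the objective as mu * sum_i (||s_i - s_{i-1}|| - L)^2 and running plain gradient descent. This is our own crude stand-in, not the NLP machinery the paper uses, but it shows the interplay between the spring energy and the constraints:

```python
import math

def penalty_grad_step(S, contour, regions, k, L, mu, lr):
    """One gradient-descent step on the penalized energy
    E(S) + mu * sum_i (||s_i - s_{i-1}|| - L)^2."""
    m = len(S)
    grad = [[0.0, 0.0] for _ in range(m)]
    # spring term: d/ds_i of sum_j k_ij ||s_i - c_j||^2
    for i in range(m):
        for j in regions[i]:
            for ax in range(2):
                grad[i][ax] += 2 * k[(i, j)] * (S[i][ax] - contour[j][ax])
    # penalty term: each edge (s_{i-1}, s_i) pushes on both endpoints
    for i in range(m):
        d = math.dist(S[i], S[i - 1]) or 1e-12
        g = d - L                      # edge-length residual
        for ax in range(2):
            t = 2 * mu * g * (S[i][ax] - S[i - 1][ax]) / d
            grad[i][ax] += t
            grad[i - 1][ax] -= t
    return [(S[i][0] - lr * grad[i][0], S[i][1] - lr * grad[i][1])
            for i in range(m)]

def penalty_solve(S, contour, regions, k, L, mu=5.0, lr=0.01, steps=500):
    for _ in range(steps):
        S = penalty_grad_step(S, contour, regions, k, L, mu, lr)
    return S
```

With a fixed, modest mu this only drives the residuals toward zero approximately; a proper method would increase mu (or introduce multipliers) until the constraints hold to tolerance, which is precisely what the SQP machinery described next does more efficiently.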
The energy function E(S) itself is smooth and twice differentiable and could easily be minimized using standard methods. The nonlinear edge length constraints, however, complicate matters considerably and necessitate the use of nonlinear programming (NLP). In fact, this type of problem is considered to be one of the most difficult of the smooth optimization problems.(33) The most powerful methods for solving NLP problems with equality constraints are based on finding a point satisfying conditions that hold at the solution.(34) Sequential quadratic programming (SQP) is claimed to be the most efficient such method, followed by the generalized reduced gradient, multiplier and penalty methods.(35) Below we give a brief introduction to SQP. For more details, the reader is referred to the literature.

As the name implies, SQP is iterative. Let S_0 denote the initial polygon and let S_k denote the polygon approximation at the kth step of the algorithm. The iteration scheme can be stated as:

    S_{k+1} = S_k + σ_k D_k,

where σ_k and D_k denote step length and search direction, respectively. To find the search direction, the original problem with its nonlinear equality-constraints is replaced by a simpler problem which has linear equality-constraints. Specifically, search direction D_k is found by solving the quadratic programming problem:

    Minimize    (1/2) D_k^T H_k D_k + D_k^T ∇E(S_k)
    subject to  g_i(S_k) + D_k^T ∇g_i(S_k) = 0    for i = 0, 1, ..., m - 1,

where matrix H_k is an approximation to the Hessian of the Lagrangian function L(S) = E(S) + sum_{i=0}^{m-1} λ_i g_i(S) evaluated for S = S_k, g_i(S_k) denotes the equality-constraint ||s_i - s_{i-1}|| - L evaluated for S = S_k, and ∇ is the gradient operator. Step length σ_k is computed by a line search, which is designed to produce a decrease in a penalty function.

There are several nonlinear programming codes that implement this general framework.(35-39) They mainly differ in the update algorithm for H_k, the definition of the penalty function for finding σ_k and its line minimization, and other aspects that affect convergence rates and stability. The authors have found that the algorithm by Gill et al.(39) provides flexibility and robustness for minimizing the energy function. This algorithm is included in the NAG library.(40)

6. EXPERIMENTAL EVALUATION

We evaluate the algorithm using the contours shown in Fig. 4. Each contour is obtained by thresholding the gray-scale image of a leaf and then applying a contour follower; no special care is taken to avoid artifacts like single-pixel wide protrusions. For the Gaussian smoothing, we use σ = 1.3, which results in the filter coefficients a_0 = 0.307, a_1 = 0.228, a_2 = 0.094 and a_3 = 0.024. Dominant points are detected using the threshold |κ| > μ_κ + 2σ_κ, where μ_κ and σ_κ denote the mean and standard deviation of the absolute curvature values.
Fig. 4. Contours of four leaves used in the experiments.
Fig. 5. The effect of using (a) α = 1.5 and (b) α = 2.0.

Figures 5 and 6 illustrate the effect of choosing different settings for α and β, the two parameters that control the polygon initialization. A low setting of either parameter may cause too many vertices to be selected, which typically produces an accordion effect; see Fig. 5(a) top left and right and Fig. 6(a) bottom left. A high setting of α has no such impact even if it causes dominant points to be rejected, since the forward-backward search will compensate and place vertices at
nearby contour points. Setting β too high, however, may result in vertices being discarded and in turn produce a shrinkage effect; see Fig. 6(c) bottom and bottom right. Of all the settings tested, we have found α = 2.0 and β = 0.8 to be good compromises; see Figs 5(b) and 6(b).

With the original contours indicated for reference, Figs 7-9 show the equilateral polygon approximations obtained for edge lengths 20, 30 and 50. High quality approximations that resemble the original contours well are obtained for edge length 20. Increasing the edge length to 30 results in slight deterioration of the approximations. However, when the edge length is increased to 50, very coarse approximations are obtained that deviate significantly from the original contours. This emphasizes the fact that high-quality equilateral polygon approximations can only be produced when the edge length is set small enough to capture all the significant contour variations. We also note that as the edge length is increased, contour convexities tend to be emphasized more so than concavities.
Fig. 6. The effect of using (a) β = 0.5, (b) β = 0.8 and (c) β = 1.0.
Fig. 7. Equilateral polygon approximations obtained for edge length 20.
Fig. 8. Equilateral polygon approximations obtained for edge length 30.
Fig. 9. Equilateral polygon approximations obtained for edge length 50.
This is a direct result of the dominant points here being located at convex contour points only and is not a general property of the algorithm.

In addition to the number of polygon vertices, two measures are normally reported in the literature for quantifying the quality of polygon approximations, namely, integrated-square-error and maximum-error, where the term error refers to the perpendicular distance from a contour point to a polygon edge. However, in our case, both error measures are ill-defined because the polygon vertices are not guaranteed to coincide with the contour points. As an alternative, we introduce area deviation. Let two consecutive intersections of the contour and its equilateral polygon approximation define a gap; one polygon edge may create many gaps and, likewise, many polygon edges may create just a single gap. The term area deviation then refers to the area of a gap.
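Once the bounding loop of a gap has been assembled from two consecutive contour/polygon intersection points, its area follows from the shoelace formula. A small sketch (the loop construction itself is assumed to be done elsewhere):

```python
def gap_area(cycle):
    """Area enclosed by one gap: the closed loop formed by the run of
    contour points between two consecutive intersections followed by the
    polygon-edge path back, evaluated with the shoelace formula."""
    n = len(cycle)
    acc = 0.0
    for i in range(n):
        x0, y0 = cycle[i]
        x1, y1 = cycle[(i + 1) % n]
        acc += x0 * y1 - x1 * y0
    return abs(acc) / 2.0
```

The total area deviation is then simply the sum of gap_area over all gaps.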
Tables 1 and 2 list statistics for the original contours and the equilateral polygon approximations, respectively. The number of polygon vertices is almost equal to the number of contour points divided by the edge length in all cases. The area deviation quantifies the polygon quality very well. Not only does the total area deviation, i.e. the sum of all area deviations,
Table 1. Statistics for the original contours

    Object reference    No. of contour points    Object area
    (a)                 1136                     23976
    (b)                 396                      14170
    (c)                 843                      22617
    (d)                 928                      18098
Table 2. Statistics for the equilateral polygon approximations

    Object              No. of polygon     Area deviation statistics
    reference           vertices           Mean    Standard deviation    Total
    (a)  L = 20         58                  19             61            2185
         L = 30         38                  45            110            3131
         L = 50         21                 262            328            6304
    (b)  L = 20         22                   2              7             256
         L = 30         14                  99            333            1191
         L = 50          8                 686           1329            2743
    (c)  L = 20         41                  19             28            1469
         L = 30         23                  76            100            2923
         L = 50         12                 287            357            5169
    (d)  L = 20         47                  24             67            1558
         L = 30         31                  52            119            2253
         L = 50         18                 139            312            4194
Table 3. NLP statistics

                        No. of iterations
    Object reference    L = 20    L = 30    L = 50
    (a)                 66        45        27
    (b)                 29        22        15
    (c)                 45        31        19
    (d)                 89        42        28
increase as the polygon quality decreases, but the standard deviation of the measure also increases, indicating that the deterioration is not uniformly distributed over the polygon. In percentage of the original contour area, the total area deviation ranges from 2 to 9% for edge length 20, 8 to 13% for edge length 30 and 19 to 26% for edge length 50.
The number of iterations required by the NLP optimization is proportional to the number of polygon vertices, as indicated by Table 3. However, so is the computational cost of each iteration. When choosing an edge length, it may consequently be necessary to trade off the desired polygon quality (short edges, many vertices) for faster computation (long edges, few vertices). If the initial polygon is sufficiently close to its equilateral counterpart, an alternative would be to not carry out the computationally costly NLP optimization. Figures 10-12 show the initial polygons with the final equilateral polygons indicated for reference. The two appear very similar for edge lengths 20 and 30, but show significant differences for edge length 50. Associated statistics are listed in Table 4. The average deviation of the initial polygon's edges from the desired edge length seems to indicate the quality of the resulting equilateral polygon; small values are
Fig. 10. Initial polygons obtained for edge length 20.
Fig. 11. Initial polygons obtained for edge length 30.
Fig. 12. Initial polygons obtained for edge length 50.
Table 4. Statistics for the initial polygon approximations

    Object       Average edge length deviation    Interior angle correlation*
    reference    L = 20    L = 30    L = 50       L = 20    L = 30    L = 50
    (a)          1.4       2.0       4.4          0.97      0.98      0.99
    (b)          0.3       1.3       3.7          0.94      0.98      0.97
    (c)          1.1       4.6       14.3         0.99      0.96      0.96
    (d)          0.9       1.6       3.1          0.98      0.97      0.99

    * The correlation is computed for angles of the initial versus the final polygons.
found for all the good approximations and large values are found for the less satisfying ones. With respect to the interior polygon angles of the initial and final polygons, we find a very high correlation for all edge lengths. Thus, for short edge polygons, the initial approximation is very close to the equilateral one produced by the NLP optimization. For long-edge polygons, the initial approximation is further from
being equilateral, but its interior angles are relatively unaffected by the relocation of the vertices that results from the NLP optimization.

7. CONCLUSION

The equilateral polygon approximation is a difficult problem, mainly due to the global effect of local placement decisions made for the individual vertices. We have presented a solution analogous to minimizing the energy of a mechanical system of particles and springs. An almost equilateral polygon is created by first assigning vertices to a contour's dominant points and then to other contour points as necessary. To make this initial polygon equilateral, an NLP solver is used to minimize curvature weighted distances between the vertices and the contour points subject to the resulting edges being of the same prespecified length. The experiments show that good approximations are obtained, especially when the edges are relatively short. To reduce the computational requirements of the algorithm, however, longer edges should be used. There are indications that the average deviation of the initial polygon's edges from the desired edge length can be used to determine this very important parameter. There is very high correlation between the interior angles of the initial and the final polygons. This suggests that time critical applications could use the initial polygon and not carry out the NLP optimization; in this case, short edges would be the better choice. The algorithm can be modified to work also for open curves in a straightforward manner.

REFERENCES
1. J. Sklansky, R. L. Chazin and B. J. Hansen, Minimum-perimeter polygons of digitized silhouettes, IEEE Trans. Comput. 21, 260-268 (1972).
2. U. Ramer, An iterative procedure for the polygonal approximation of plane curves, Comput. Graphics Image Process. 1, 244-256 (1972).
3. C. M. Williams, An efficient algorithm for the piecewise linear approximation of planar curves, Comput. Graphics Image Process. 8, 286-293 (1978).
4. J. Sklansky and V. Gonzalez, Fast polygonal approximation of digitized curves, Pattern Recognition 12, 327-331 (1980).
5. T. Pavlidis and S. L. Horowitz, Segmentation of plane curves, IEEE Trans. Comput. 23, 860-870 (1974).
6. Y. Kurozumi and W. A. Davis, Polygonal approximation by the minimax method, Comput. Graphics Image Process. 19, 248-264 (1982).
7. J. G. Dunham, Optimum uniform piecewise linear approximation of planar curves, IEEE Trans. Pattern Anal. Mach. Intell. 8, 67-75 (1986).
8. B. K. Ray and K. S. Ray, Determination of optimal polygon from digital curve using L1 norm, Pattern Recognition 26, 505-509 (1993).
9. K. Wall and P.-E. Danielsson, A fast sequential method for polygonal approximation of digitized curves, Comput. Vis. Graphics Image Process. 28, 220-227 (1984).
10. J.-S. Wu and J.-J. Leou, New polygonal approximation schemes for object shape representation, Pattern Recognition 26, 471-484 (1993).
11. A. Rosenfeld and E. Johnston, Angle detection on digital curves, IEEE Trans. Comput. 22, 875-878 (1973).
12. A. Rosenfeld and J. S. Weszka, An improved method of angle detection on digital curves, IEEE Trans. Comput. 24, 940-941 (1975).
13. F. Mokhtarian and A. Mackworth, Scale-based description and recognition of planar curves and two-dimensional shapes, IEEE Trans. Pattern Anal. Mach. Intell. 8, 34-43 (1986).
14. C. Teh and R. T. Chin, On the detection of dominant points on digital curves, IEEE Trans. Pattern Anal. Mach. Intell. 11, 859-872 (1989).
15. N. Ansari and E. J. Delp, On detecting dominant points, Pattern Recognition 24, 441-451 (1991).
16. S.-C. Pei and C.-N. Lin, The detection of dominant points on digital curves by scale-space filtering, Pattern Recognition 25, 1307-1314 (1992).
17. A. Rattarangsi and R. T. Chin, Scale-based detection of corners of planar curves, IEEE Trans. Pattern Anal. Mach. Intell. 14, 430-449 (1992).
18. B. K. Ray and K. S. Ray, An algorithm for detection of dominant points and polygonal approximation of digitized curves, Pattern Recognition Lett. 13, 849-856 (1992).
19. T. Melen and T. Ozanian, A fast algorithm for dominant point detection on chain-coded contours, Proc. 5th Intl Conf. Comput. Anal. Images Patterns, 245-253 (1993).
20. D. Sarkar, A simple algorithm for detection of significant vertices for polygonal approximation of chain coded curves, Pattern Recognition Lett. 14, 965-974 (1993).
21. R. A. Wagner and M. J. Fischer, The string-to-string correction problem, J. Assoc. Comput. Mach. 21, 168-173 (1974).
22. P. H. Sellers, The theory and computation of evolutionary distances: pattern recognition, J. Algorithms 1, 359-373 (1980).
23. H. Fuchs, Z. M. Kedem and S. P. Uselton, Optimal surface reconstruction from planar contours, Commun. ACM 20, 693-702 (1977).
24. M. Maes, On a cyclic string-to-string correction problem, Inf. Process. Lett. 35, 73-78 (1990).
25. H. Bunke and U. Bühler, Applications of approximate string matching to 2D shape recognition, Pattern Recognition 26, 1797-1812 (1993).
26. J. Gregor and M. G. Thomason, Dynamic programming alignment of sequences representing cyclic patterns, IEEE Trans. Pattern Anal. Mach. Intell. 15, 129-135 (1993).
27. J. Gregor and M. G. Thomason, Efficient dynamic programming alignment of cyclic strings by shift elimination, Pattern Recognition 29, 1179-1185 (1996).
28. W. H. Tsai and S. S. Yu, Attributed string matching with merging for shape recognition, IEEE Trans. Pattern Anal. Mach. Intell. 7, 453-462 (1985).
29. Y. T. Tsay and W. H. Tsai, Model-guided attributed string matching by split-and-merge for shape recognition, Intl J. Pattern Recognition Artif. Intell. 3, 159-179 (1989).
30. M. Maes, Polygonal shape recognition using string matching techniques, Pattern Recognition 24, 433-440 (1991).
31. G. B. Thomas, Calculus and Analytic Geometry. Addison-Wesley, New York (1972).
32. V. Torre and T. A. Poggio, On edge detection, IEEE Trans. Pattern Anal. Mach. Intell. 8, 147-163 (1986).
33. R. Fletcher, Practical Methods of Optimization. John Wiley & Sons, New York (1987).
34. P. E. Gill, W. Murray, M. A. Saunders and M. H. Wright, Constrained nonlinear programming, Handbooks in OR & MS, G. L. Nemhauser, A. H. G. R. Kan and M. J. Todd, eds, Vol. 1. Elsevier, Amsterdam (1989).
35. K. Schittkowski, NLPQL: A FORTRAN subroutine solving constrained nonlinear programming problems, Ann. Operat. Res. 5, 485-500 (1985).
36. M. J. D. Powell, A fast algorithm for nonlinearly constrained optimization calculations, Numerical Analysis, G. A. Watson, ed. Springer-Verlag, Berlin (1978).
37. R. L. Crane, B. S. Garbow, K. E. Hillstrom and M. Minkoff, Solution of the general nonlinear programming problem with subroutine VMCON, Technical Report ANL-80-64, Argonne National Laboratory, Argonne, Illinois (1980).
38. M. J. D. Powell, VMCWD: A FORTRAN subroutine for constrained optimization, Technical Report DAMTP 1982/NA4, University of Cambridge, Cambridge, England (1982).
39. P. E. Gill, W. Murray, M. A. Saunders and M. H. Wright, User's guide for SOL/QPSOL: A FORTRAN package for nonlinear programming, Technical Report SOL 83-12, Department of Operations Research, Stanford University (1983).
40. The NAG (Numerical Algorithms Group) Fortran Library.
About the Author--FERNANDO R. RANNOU received an engineering degree from Universidad de Santiago de Chile in 1988 and an M.S. degree from the University of Tennessee, Knoxville, in 1993, where he is working toward his Ph.D. in computer science. His current research interests include image processing and analysis with applications to medical imaging.
About the Author--JENS GREGOR received an M.S. degree and a Ph.D. degree from Aalborg University, Denmark, in 1988 and 1991, respectively. He has been with the University of Tennessee, Knoxville, as Assistant Professor of Computer Science since 1991. His research interests include pattern recognition and computed imaging.