COMPUTER VISION AND IMAGE UNDERSTANDING
Vol. 69, No. 1, January, pp. 106–116, 1998
ARTICLE NO. IV960596

NOTE

Two Fast Euclidean Distance Transformations in Z² Based on Sufficient Propagation

Hinnik Eggers

Institut für Angewandte Mathematik, Universität Hamburg, 20146 Hamburg, Germany
E-mail:
[email protected]
Received March 6, 1996; accepted December 17, 1996
Two new error-free sequential Euclidean distance transformations (EDT) for binary images in Z² are introduced: sufficient d1-propagation and sufficient d∞-propagation. Both methods use ordered propagation, i.e. iterative propagation via contour pixels. However, we restrict the propagation to unique shortest Euclidean paths, the sufficient propagation paths. Moreover, we ensure error-free direct pixel update by adding a distance suggestion to each propagation pixel. Using these ideas, we avoid many unnecessary calculations. The computational tests show that our algorithms, used as signed and as unsigned methods, are significantly faster than other well-known signed and unsigned EDTs. Comparing both methods, sufficient d∞-propagation yields the better average performance. © 1998 Academic Press
1. INTRODUCTION

The Euclidean distance transformation (EDT) is a basic operation in computer vision. It is applicable to many image processing problems, e.g. thinning, data compression, and digital Voronoi diagrams. In 1984 Yamada [9] proposed a parallel iterative propagation method and proved that this method is an error-free EDT in Z². In 1990 Ragnemalm [7] suggested a sequential ordered-propagation signed EDT, which was based on Yamada's method. In 1994 Huang and Mitchell [5] introduced an unsigned d∞-propagation method. Based on these results, we introduce two new ordered-propagation EDTs using a new d1-propagation technique and d∞-propagation, respectively. The main idea in both methods is to restrict the distance propagation to sufficient propagation paths and to ensure error-free direct update.

2. BASIC DEFINITIONS

The digital plane Z² is the set of all points in the Euclidean plane having integer coordinates. The points z ∈ Z² are called pixels. In order to measure the distance between pixels, we usually use metrics. A map d : Z² × Z² → R which satisfies for all x, y, z ∈ Z²

(M1) d(x, y) = 0 ⇔ x = y (definiteness),
(M2) d(x, y) = d(y, x) (symmetry),
(M3) d(x, y) ≤ d(x, z) + d(z, y) (triangle inequality)

is called a metric on Z². The Euclidean distance de(x, y) := sqrt((x1 − y1)² + (x2 − y2)²), the chessboard distance d∞(x, y) := max_{i=1,2} |xi − yi|, and the cityblock distance d1(x, y) := |x1 − y1| + |x2 − y2| are commonly used metrics on Z². For each pixel z in the digital plane there are eight pixels z̃ ∈ Z² with d∞(z, z̃) = 1. These pixels are called neighbors of z and are denoted by
    N3(z)  N4(z)  N5(z)
    N2(z)    z    N6(z)
    N1(z)  N0(z)  N7(z).

Neighbors with even indices satisfy d1(z, z̃) = 1. They are called direct neighbors. Neighbors with odd indices satisfy d1(z, z̃) = 2. We call them indirect neighbors.

A path W(z1, zm) = (z1, . . . , zm) from z1 to zm in X ⊂ Z² is a sequence of pixels zi ∈ X, where zi and zi+1 are neighbors for all i = 1, . . . , m − 1. Its Euclidean length is |W|e := Σ_{i=1}^{m−1} de(zi, zi+1). If |W̄(z, z̃)|e = min{|W|e | W is a path from z to z̃ in X}, then W̄(z, z̃) is a shortest Euclidean path from z to z̃ in X. The set X ⊂ Z² is convex if for every two pixels z, z̃ ∈ X all shortest Euclidean paths W̄(z, z̃) in Z² are also paths in X.

In Z² all shortest Euclidean paths from z to z̃ consist of d1(z, z̃) − d∞(z, z̃) steps (zi, zi+1) via indirect neighbors and 2 · d∞(z, z̃) − d1(z, z̃) steps via direct neighbors.
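For readers who prefer code to notation, the three metrics, the neighbor indexing N0, . . . , N7 used above, and the direct/indirect step counts of a shortest Euclidean path can be written down directly. The following is only an illustrative Python sketch, not part of the algorithms introduced below.

```python
import math

# Offsets of the neighbors N_0 ... N_7 of a pixel z = (x, y), matching the
# diagram above: even indices are direct (d1 = 1), odd indices are indirect
# (diagonal, d1 = 2) neighbors.
N = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1)]

def d_e(x, y):    # Euclidean distance
    return math.hypot(x[0] - y[0], x[1] - y[1])

def d_inf(x, y):  # chessboard distance
    return max(abs(x[0] - y[0]), abs(x[1] - y[1]))

def d_1(x, y):    # cityblock distance
    return abs(x[0] - y[0]) + abs(x[1] - y[1])

# A shortest Euclidean path from z to zt consists of
#   d_1 - d_inf    indirect (diagonal) steps and
#   2*d_inf - d_1  direct steps,
# so it has d_inf steps in total and Euclidean length
# (d_1 - d_inf)*sqrt(2) + (2*d_inf - d_1).
z, zt = (0, 0), (4, 2)
indirect = d_1(z, zt) - d_inf(z, zt)      # 2 diagonal steps
direct = 2 * d_inf(z, zt) - d_1(z, zt)    # 2 axis-parallel steps
assert indirect >= 0 and direct >= 0 and indirect + direct == d_inf(z, zt)
```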
Let X ⊂ Z² be a finite set. Let S ⊂ X be the set of feature pixels, let S̄ = X \ S ≠ ∅ be the set of nonfeature pixels, and let d be a metric on Z². Then, the (unsigned) distance map of (X, S),

    D : X → R;    z ↦ min_{z̃ ∈ S̄} d(z, z̃) =: d(z, S̄),

maps to each pixel the distance to the set of nonfeature pixels. The signed distance map of (X, S),

    SD : X → S̄;    z ↦ z*,    where d(z, z*) = d(z, S̄),
maps to each pixel z ∈ X a nearest nonfeature pixel z*. Methods for computing the (signed) distance map are called (signed) distance transformations. Using the Euclidean metric de, we get the (signed) Euclidean distance map De (SDe) and the (signed) Euclidean distance transformation EDT (SEDT), respectively.

In most applications X is the set of pixels of a binary image, and we always assume that X is convex. This limitation is not restrictive for practical purposes, since the usually scanned sets X ⊂ Z², i.e. digital squares and digital rectangles, are convex and finite. The set S of feature pixels is either the set of black pixels or the set of white pixels. In the first case, we call the resulting distance transformation a foreground distance transformation (FDT); in the second case, we call it a background distance transformation (BDT).

3. ORDERED PROPAGATION TECHNIQUES

Most Euclidean distance transformations consist of two phases: in a main processing phase, the squared Euclidean distance map is computed using fast integer operations only. After that, in a postprocessing phase the square root is either computed or read from a lookup table.

The idea of propagation distance transformations is to transmit distance information iteratively via neighbor pixels. For example, the first published EDTs, the raster-scanning 8SSED of Danielsson [2] in 1980 and the parallel EDT of Yamada [9] in 1984, belong to this class. Ragnemalm [7] pointed out that in these parallel or raster-scanning EDTs much processing power is wasted on processing pixels unnecessarily. He noticed that processing a pixel is only really useful when the pixel gets its final value and that, ideally, each pixel should be processed only once. He suggested an iterative ordered propagation technique: in iteration l, distances are propagated only from so-called contour pixels z ∈ M(l) ⊂ X, to special neighbors which are likely to change their distance information. Then, these updated neighbors build the contour set M(l + 1) of the next iteration, until M(l + 1) is empty.
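The distance-map definitions above can be made concrete by a brute-force reference implementation — a sketch for illustration and testing only, since its exhaustive search over S̄ is exactly the cost that the propagation methods below avoid. The function name and the toy example are ours.

```python
import math

def brute_force_maps(X, S):
    """Unsigned map D and signed map SD of (X, S) by exhaustive search.
    X: iterable of pixels (x, y); S: set of feature pixels, a subset of X."""
    X = set(X)
    S_bar = X - set(S)                          # nonfeature pixels
    D, SD = {}, {}
    for z in X:
        # nearest nonfeature pixel z* (ties broken arbitrarily)
        z_star = min(S_bar, key=lambda q: (q[0] - z[0])**2 + (q[1] - z[1])**2)
        SD[z] = z_star
        D[z] = math.hypot(z[0] - z_star[0], z[1] - z_star[1])
    return D, SD

# Tiny example: a 5 x 5 image whose black (feature) pixels form a 2 x 2 block.
X = [(x, y) for x in range(5) for y in range(5)]
S = {(2, 2), (2, 3), (3, 2), (3, 3)}
D, SD = brute_force_maps(X, S)
assert D[(0, 0)] == 0.0 and D[(2, 2)] == 1.0 and D[(3, 3)] == 1.0
```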
3.1. Main and Minor Contour Pixels

Ragnemalm noticed that it is not necessary to propagate distance information from a contour pixel to all its neighbors. We suggest avoiding unnecessary updates by restricting the propagation to sufficient paths. In order to do that, we partition the contour pixels into two lists: "LIST" contains the so-called main contour pixels and "list" contains the minor contour pixels. Besides, we add to every contour pixel z a direction index k, pointing to that neighbor Nk(z) which will receive the distance information from z. Hence, every entry has the structure:

    LIST[i].pixel   list[i].pixel   coordinates of z
    LIST[i].k       list[i].k       index of the propagation direction.

Instead of propagating the distance information of a contour pixel to all eight neighbors, a main contour pixel z, which got its distance information from a pixel z̃, i.e. z = Nk(z̃), transmits its distance information only to its three neighbors Nk(z), Nk+1(z), and Nk−1(z). A minor contour pixel z propagates its distance information to its neighbor Nk(z) only. The criterion for pixels to be main or minor contour pixels depends on the propagation technique.

Based on this structure, we introduce two ordered propagation techniques: sufficient d1-propagation and sufficient d∞-propagation. Using these methods, we calculate for all pixels z of a convex and finite set X ⊂ Z² the squared Euclidean distances de²(z, z*) to an arbitrary pixel z* ∈ X. We show that both methods calculate every pixel only once.

3.2. Sufficient d1-Propagation

Here we use five lists of pixels: LIST1, LIST2, list1, list2, and list3. LIST1 and list1 contain the currently marked pixels, i.e. (LIST1 ∪ list1) is the current contour set. (LIST2 ∪ list2) is a subset of the contour set of the next iteration, and list3 is a subset of the contour set of the next iteration but one. Main contour pixels are exactly those contour pixels with an even direction index k; minor contour pixels are those with an odd direction index. If X ⊂ Z² is finite and convex and z* ∈ X, we can calculate the squared Euclidean distances Q(z) := de²(z*, z) for all z ∈ X as follows:

SUFFICIENT d1-PROPAGATION

iter = 1; Q(z*) = 0;
for k = 0, 2, 4, 6 {if Nk(z*) ∈ X {Q(Nk(z*)) = 1; (Nk(z*), k) ↦ LIST1;}}
for k = 1, 3, 5, 7 {if Nk(z*) ∈ X {Q(Nk(z*)) = 2; (Nk(z*), k) ↦ list2;}}
while (LIST1 ∪ list1 ∪ list2) is not empty:
    iter = iter + 1; adddir = 2 · iter − 1; addind = 2 · iter;
    for all (z, k) in LIST1:
        {if Nk(z) ∈ X {(Nk(z), k) ↦ LIST2; Q(Nk(z)) = Q(z) + adddir;}
         if Nk+1(z) ∈ X {(Nk+1(z), k + 1) ↦ list3; Q(Nk+1(z)) = Q(z) + addind;}
         if Nk−1(z) ∈ X {(Nk−1(z), k − 1) ↦ list3; Q(Nk−1(z)) = Q(z) + addind;}}
    for all (z, k) in list1:
        {if Nk(z) ∈ X {(Nk(z), k) ↦ list3; Q(Nk(z)) = Q(z) + addind;}}
    LIST1 = LIST2; LIST2 = ∅; list1 = list2; list2 = list3; list3 = ∅;

FIG. 1. The first three iterations of sufficient d1-propagation. Horizontal and vertical arrows to a pixel indicate pixel update and adding the pixel to LIST2; diagonal arrows indicate pixel update and adding the pixel to list3. The figures give the present distance information. Boldface figures belong to the present contour pixels.

Figure 1 shows the first three iterations. The distance information from z* to z is propagated exactly via the shortest Euclidean path W1(z*, z) from z* to z, which is the sequence of 2 · d∞(z*, z) − d1(z*, z) direct-neighbor steps followed by d1(z*, z) − d∞(z*, z) indirect-neighbor steps. Figure 2a shows such unique d1-propagation paths W1. These paths are sufficient propagation paths, since it turns out that propagating via other paths is not necessary.

This propagation technique is called d1-propagation, since, starting with a single pixel S̄ = {z*}, the contour set in iteration m + 1 is K_m^1(z*) = {z ∈ Z² | d1(z, z*) = m}, i.e. the d1-"circle line" with radius m. The iteration in which a pixel z receives its distance information from pixel z* is determined by the propagation distance

    dc(z*, z) = { d1(z*, z),      if z1* = z1 ∨ z2* = z2,
                { d1(z*, z) − 1,  otherwise.
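A direct transcription of this single-source pseudocode into Python might look as follows. This is a sketch written for clarity, not speed; the membership test in_X, the helper add, and the small rectangle used for the test are our own choices. The final assertion checks that Q(z) = de²(z, z*) on that rectangle.

```python
# Sketch of single-source sufficient d1-propagation (after the pseudocode above).
N = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1)]

def sufficient_d1_propagation(z_star, in_X):
    """Return Q with Q[z] = squared Euclidean distance from z to z_star,
    for all pixels z of the convex finite set X (membership test in_X)."""
    add = lambda z, k: (z[0] + N[k][0], z[1] + N[k][1])
    Q = {z_star: 0}
    LIST1, list1, list2 = [], [], []
    it = 1
    for k in (0, 2, 4, 6):                       # direct neighbors of z*
        q = add(z_star, k)
        if in_X(q): Q[q] = 1; LIST1.append((q, k))
    for k in (1, 3, 5, 7):                       # indirect neighbors of z*
        q = add(z_star, k)
        if in_X(q): Q[q] = 2; list2.append((q, k))
    while LIST1 or list1 or list2:
        it += 1
        adddir, addind = 2 * it - 1, 2 * it
        LIST2, list3 = [], []
        for z, k in LIST1:                       # main contour pixels (k even)
            q = add(z, k)
            if in_X(q): Q[q] = Q[z] + adddir; LIST2.append((q, k))
            for dk in (1, -1):                   # the two diagonal successors
                q = add(z, (k + dk) % 8)
                if in_X(q): Q[q] = Q[z] + addind; list3.append((q, (k + dk) % 8))
        for z, k in list1:                       # minor contour pixels (k odd)
            q = add(z, k)
            if in_X(q): Q[q] = Q[z] + addind; list3.append((q, k))
        LIST1, list1, list2 = LIST2, list2, list3
    return Q

# Check on a 9 x 7 rectangle with an interior source.
in_X = lambda z: 0 <= z[0] < 9 and 0 <= z[1] < 7
zs = (3, 2)
Q = sufficient_d1_propagation(zs, in_X)
assert all(Q[(x, y)] == (x - zs[0])**2 + (y - zs[1])**2
           for x in range(9) for y in range(7))
```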
LEMMA 1. Let X ⊂ Z² be convex and finite. Starting from an arbitrary pixel z* ∈ X, sufficient d1-propagation yields the squared Euclidean distances de²(z, z*) for all pixels z ∈ X. Every z ∈ X is calculated just once and the iteration stops after exactly max_{z∈X} d1(z, z*) + 1 steps.

Proof. Without loss of generality let z* = (0, 0)^T ∈ X be the nonfeature pixel and let z = (x, y)^T ∈ X, x ≥ y ≥ 0, be an arbitrary feature pixel. Since X is convex, the propagation path W1(z*, z) is a path in X. Since z* is the only propagation source, the pixel z gets distance information in step dc(z*, z) only. We examine two cases.

Case I. y > 0. In this case the propagation yields

    Q(z) = Σ_{i=1}^{x−y} (2i − 1)  +  Σ_{i=0}^{y−1} 2(x − y + 1 + 2i)
          (direct-neighbor updates)   (indirect-neighbor updates)
         = x² + y² = de²(z*, z).

Case II. y = 0. In this case we also get

    Q(z) = Σ_{i=1}^{x} (2i − 1) = x² = de²(z*, z)

(all updates via direct neighbors).
Hence, the distance propagation yields exactly the squared Euclidean distances. Since X is finite, max_{z∈X} d1(z, z*) + 1 exists and is the iteration step in which the pixels z̃ satisfying d1(z̃, z*) = max_{z∈X} d1(z, z*) are in the contour set. After that, the contour set is empty and the iteration stops.

In the known "propagation distance transformations," a pixel propagates its distance information to all its neighbors. The neighbor pixel accepts this suggestion if the suggestion is strictly lower than the pixel's own distance information. However, this means that the distance information of nonfeature pixels is propagated via arbitrary paths. We generalize the propagation value P1(W1(z*, z)) := Q(z) to arbitrary paths W(z1, . . . , zn) in Z², i.e.

    P1(W(z1, . . . , zn)) := Σ_{i=1}^{n−1} upd(zi, zi+1, iteri),

where

    iter1 := 0;   iteri := iteri−1 + d1(zi−1, zi),   i = 2, . . . , n − 1;
    upd(zi, zi+1, iteri) := 2 · iteri + d1(zi, zi+1).

FIG. 2. (a) Three sufficient propagation paths W1 of d1-propagation. (b) Three sufficient propagation paths W∞ of d∞-propagation.
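A quick numeric illustration of this generalized propagation value, using the definitions just given (iter1 = 0 and upd(zi, zi+1, iteri) = 2 · iteri + d1(zi, zi+1)) and the target z = (4, 2): along the sufficient path W1 — two direct steps followed by two indirect steps — P1 reproduces de²(z*, z) = 20, whereas the same steps in reversed order already give a larger value. The Python snippet below is only a check, not part of the method.

```python
def P1(path):
    """Generalized d1-propagation value of a path (z_1, ..., z_n)."""
    d1 = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    total, it = 0, 0                  # it plays the role of iter_i, iter_1 = 0
    for a, b in zip(path, path[1:]):
        total += 2 * it + d1(a, b)    # upd(z_i, z_{i+1}, iter_i)
        it += d1(a, b)
    return total

W1    = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 2)]   # direct first, then indirect
other = [(0, 0), (1, 1), (2, 2), (3, 2), (4, 2)]   # indirect first, then direct
assert P1(W1) == 4**2 + 2**2 == 20
assert P1(other) == 28 > P1(W1)
```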
The following corollary shows that all these additional calculations always yield wrong distance suggestions.

COROLLARY 2. Let z*, z ∈ Z². For all paths W(z*, z) ≠ W1(z*, z) from z* to z in Z²,

    P1(W(z*, z)) > P1(W1(z*, z)).

Proof. One update step via direct neighbors increases the propagation value less than one update step via indirect neighbors. Besides, one update step via indirect neighbors increases the propagation value less than two update steps via direct neighbors. Hence, only a shortest Euclidean path can minimize the propagation value; i.e., there are always d1(z*, z) − d∞(z*, z) update steps via indirect neighbors and 2 · d∞(z*, z) − d1(z*, z) update steps via direct neighbors.

A contour pixel which was updated in step iter by a direct neighbor propagates its distance value in step iter + 1. A contour pixel which was updated in step iter by an indirect neighbor propagates its distance value in step iter + 2. Since the update function upd is strictly increasing in iter, the propagation value is minimized by the shortest Euclidean path that minimizes Σ_{i=1}^{n−1} iteri. This path is exactly W1(z*, z), the sequence of 2 · d∞(z*, z) − d1(z*, z) direct-neighbor steps followed by d1(z*, z) − d∞(z*, z) indirect-neighbor steps. Hence, P1(W1(z*, z)) is strictly minimal.

3.3. Sufficient d∞-Propagation

In this section, we restrict the d∞-propagation of Huang and Mitchell [5] to sufficient propagation paths. Here we use four lists: LIST1, LIST2, list1, list2. These lists have the same structure and the same meaning as the contour sets in Section 3.2. Contrary to sufficient d1-propagation, here odd direction indices determine main contour pixels and even direction indices determine minor contour pixels. If X ⊂ Z² is finite and convex and z* ∈ X, we can calculate the squared Euclidean distances Q(z) := de²(z*, z) for all z ∈ X as follows.

SUFFICIENT d∞-PROPAGATION

iter = 1; Q(z*) = 0;
for k = 0, 2, 4, 6 {if Nk(z*) ∈ X {Q(Nk(z*)) = 1; (Nk(z*), k) ↦ list1;}}
for k = 1, 3, 5, 7 {if Nk(z*) ∈ X {Q(Nk(z*)) = 2; (Nk(z*), k) ↦ LIST1;}}
while (LIST1 ∪ list1) is not empty:
    iter = iter + 1; adddir = 2 · iter − 1; addind = 2 · adddir;
    for all (z, k) in LIST1:
        {if Nk(z) ∈ X {(Nk(z), k) ↦ LIST2; Q(Nk(z)) = Q(z) + addind;}
         if Nk+1(z) ∈ X {(Nk+1(z), k + 1) ↦ list2; Q(Nk+1(z)) = Q(z) + adddir;}
         if Nk−1(z) ∈ X {(Nk−1(z), k − 1) ↦ list2; Q(Nk−1(z)) = Q(z) + adddir;}}
    for all (z, k) in list1:
        {if Nk(z) ∈ X {(Nk(z), k) ↦ list2; Q(Nk(z)) = Q(z) + adddir;}}
    LIST1 = LIST2; LIST2 = ∅; list1 = list2; list2 = ∅;

FIG. 3. The first three iterations of sufficient d∞-propagation. Horizontal or vertical arrows to a pixel indicate pixel update and adding the pixel to list2; diagonal arrows indicate pixel update and adding the pixel to LIST2. The figures give the present distance information. Boldface figures belong to the present contour pixels.

Figure 3 shows the first three iterations. Here, the distance information from z* to z is propagated exactly via the shortest Euclidean path W∞(z*, z) from z* to z, which is the sequence of d1(z*, z) − d∞(z*, z) indirect-neighbor steps followed by 2 · d∞(z*, z) − d1(z*, z) direct-neighbor steps. Figure 2b shows such sufficient d∞-propagation paths W∞. This propagation technique is called d∞-propagation, since, starting with a single pixel S̄ = {z*}, the contour set in iteration m + 1 is K_m^∞(z*) = {z ∈ Z² | d∞(z, z*) = m}, i.e. the d∞-"circle line" with radius m.

The following lemma can be proven analogously to the proof of Lemma 1.

LEMMA 3. Let X ⊂ Z² be convex and finite. Starting from an arbitrary pixel z* ∈ X, sufficient d∞-propagation yields the squared Euclidean distances de²(z, z*) for all pixels z ∈ X. Every z ∈ X is calculated just once and the iteration stops after exactly max_{z∈X} d∞(z, z*) + 1 steps.

In order to compare this method with other d∞-propagation methods, we again generalize the propagation value P∞(W∞(z*, z)) := Q(z) to arbitrary paths W(z1, . . . , zn) in Z², i.e.

    P∞(W(z1, . . . , zn)) := Σ_{i=1}^{n−1} upd(zi, zi+1, i) = Σ_{i=1}^{n−1} d1(zi, zi+1) · (2i − 1).
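The d∞ variant can be transcribed in the same way as the d1 sketch given after Section 3.2 — only the roles of the two list pairs and of the direct/indirect increments are exchanged. Again this is a compact illustration under the same assumptions (membership predicate, test rectangle chosen by us); the assertion checks Lemma 3's claim on that rectangle.

```python
N = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1)]

def sufficient_dinf_propagation(z_star, in_X):
    """Single-source sufficient d_inf-propagation: Q[z] = de^2(z, z_star)."""
    add = lambda z, k: (z[0] + N[k][0], z[1] + N[k][1])
    Q = {z_star: 0}
    LIST1, list1 = [], []                        # main: k odd, minor: k even
    it = 1
    for k in (0, 2, 4, 6):
        q = add(z_star, k)
        if in_X(q): Q[q] = 1; list1.append((q, k))
    for k in (1, 3, 5, 7):
        q = add(z_star, k)
        if in_X(q): Q[q] = 2; LIST1.append((q, k))
    while LIST1 or list1:
        it += 1
        adddir = 2 * it - 1
        addind = 2 * adddir
        LIST2, list2 = [], []
        for z, k in LIST1:                       # diagonal directions propagate 3 ways
            q = add(z, k)
            if in_X(q): Q[q] = Q[z] + addind; LIST2.append((q, k))
            for dk in (1, -1):
                q = add(z, (k + dk) % 8)
                if in_X(q): Q[q] = Q[z] + adddir; list2.append((q, (k + dk) % 8))
        for z, k in list1:                       # axis directions propagate 1 way
            q = add(z, k)
            if in_X(q): Q[q] = Q[z] + adddir; list2.append((q, k))
        LIST1, list1 = LIST2, list2
    return Q

in_X = lambda z: 0 <= z[0] < 9 and 0 <= z[1] < 7
zs = (3, 2)
Q = sufficient_dinf_propagation(zs, in_X)
assert all(Q[(x, y)] == (x - zs[0])**2 + (y - zs[1])**2
           for x in range(9) for y in range(7))
```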
Analogously to the proof of Corollary 2, we can show the following.

COROLLARY 4. Let z*, z ∈ Z². For all paths W(z*, z) ≠ W∞(z*, z) from z* to z in Z²,

    P∞(W(z*, z)) > P∞(W∞(z*, z)).                                          (1)
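A small numeric illustration of (1), analogous to the P1 check above, again with target z = (4, 2): the path W∞ (two indirect steps, then two direct steps) gives P∞ = de²(z*, z) = 20, while the reversed step order gives 28. The snippet is only a check.

```python
def P_inf(path):
    """Generalized d_inf-propagation value of a path (z_1, ..., z_n)."""
    d1 = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    return sum(d1(a, b) * (2 * i - 1)
               for i, (a, b) in enumerate(zip(path, path[1:]), start=1))

W_inf = [(0, 0), (1, 1), (2, 2), (3, 2), (4, 2)]   # indirect first, then direct
other = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 2)]   # direct first, then indirect
assert P_inf(W_inf) == 20 and P_inf(other) == 28 > P_inf(W_inf)
```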
4. SUFFICIENT PROPAGATION EDTs Introducing both propagation methods, we assumed we have just one propagation source. If we calculate a Euclidean distance map, there are many propagation sources: Every nonfeature pixel which has a direct feature neighbor is a propagation source, since it transmits the correct distance information to at least one feature pixel. In parallel EDTs, e.g. the methods of Yamada and Huang/ Mitchell, all sources propagate simultaneously; i.e., the distance information of all contour pixels are stored simultaneously to the new lists. In every iteration only those propagation paths are accepted which yield a strict decrease of the present distance information Q(z). Since we sequentially go through the lists, it is possible that a pixel z gets its distance information Q new (z) in an iteration step in which it should propagate its previous distance information Q old (z). This might yield errors. To avoid such errors, Ragnemalm suggested delaying the pixel update by one iteration step. This delayed update ensured that his signed CSED could be led back to Yamada’s exact parallel EDT.
4.1. Direct Distance Update

In general, delayed update yields many unnecessary pixel calculations. However, our aim is to decrease the number of operations efficiently. Hence, we suggest adding distance information d to every entry of list, i.e. to each minor contour pixel. If we use d instead of the present value Q(z) to calculate the propagation value, we can do error-free direct update and avoid many unnecessary operations. This additional distance information d is not necessary for main contour pixels, since their distance information depends just on the iteration step:

    d = { (iter − 1)²,      for d1-propagation,
        { 2 · (iter − 1)²,  for d∞-propagation.

Besides, for signed EDTs we add a vector v ∈ Z² to every entry of list. If pixel z has received its present distance information from the nonfeature pixel z*, then v = z − z*. Again, we do not need v for main contour pixels, since v is uniquely determined by the iteration step and the direction index:

    v = (iter − 1) · Nk(0).

Hence, our lists have the structure:

    LIST[i].pixel   list[i].pixel   coordinates of z
    LIST[i].k       list[i].k       index of the propagation direction
                    list[i].v       v := (z − z*)
                    list[i].d       distance information.

For unsigned EDTs v is not necessary. Since there is more than one propagation source, not all sufficient propagation paths are really necessary. Hence, restricting to sufficient paths cannot ensure that every pixel is calculated just once. However, our contour list structure efficiently reduces the number of pixel calculations and the cpu time for usual images, because

• if a pixel receives the same distance information from several pixels, then with direct update only the first direction index is added to the next contour set;
• hence, we should sort the entries of list by increasing distance suggestions before going through it — however, this sorting is done automatically while generating list;
• all main contour pixels have the same distance information. Hence, they all propagate the same distance information to their neighbors: the distance suggestion DIRECT is propagated to their direct neighbors and the distance suggestion INDIRECT to their indirect neighbors. DIRECT and INDIRECT must be calculated just once for every LIST.

4.2. SP1—Sufficient d1-Propagation EDT
In the following, pseudocode is given for the signed sufficient d1-propagation EDT, SSP1. For every z ∈ X, V(z) is a vector needed for the signed distance map SD.

SSP1—SIGNED SUFFICIENT d1-PROPAGATION

Initialization. Let Z ∈ N be an integer upper bound for max_{z∈X} De²(z) + 1.

iter = 0; DIRECT = 0;
for all z ∈ S̄ {Q(z) = 0; V(z) = (0, 0);
    for k = 0, 2, 4, 6: if Nk(z) ∈ S, (z, k) ↦ LIST1;}
for all z ∈ S: Q(z) = Z;

Iterations. While (LIST1 ∪ list1 ∪ list2) is not empty:

iter = iter + 1; adddir = 2 · iter − 1; addind = adddir + 1;
INDIRECT = DIRECT + addind; DIRECT = DIRECT + adddir;
for all (z, k) in LIST1 {PROPAGATE((z, k), DIRECT);}
for all (z, k, v, d) in list1 {propagate((z, k, v), d + addind);}
for all (z, k) in LIST1 {v = (iter − 1) · Nk(0);
    propagate((z, k + 1, v), INDIRECT); propagate((z, k − 1, v), INDIRECT);}
LIST1 = LIST2; LIST2 = ∅; list1 = list2; list2 = list3; list3 = ∅;

Analysis. The signed distance map and the unsigned distance map of (X, S) are

    SD : X → S̄;  z ↦ z − V(z)    and    D : X → R;  z ↦ sqrt(Q(z)).

Propagation.

function PROPAGATE((z, k), DIST)
    q = Nk(z);
    if (q ∈ X and Q(q) > DIST) {Q(q) = DIST; V(q) = iter · Nk(0); (q, k) ↦ LIST2;}

function propagate((z, k, v), DIST)
    q = Nk(z);
    if (q ∈ X and Q(q) > DIST) {Q(q) = DIST; V(q) = v + Nk(0); (q, k, V(q), DIST) ↦ list3;}

Leaving out all operations with v and V, we get the unsigned method USP1. The main processing phase consists of the initialization and the iterations; in this phase the squared EDT is generated. The analysis step is the postprocessing phase. This distance propagation always works.

THEOREM 5. Let X ⊂ Z² be finite and convex and let ∅ ≠ S̄ ⊂ X. Then, SP1 yields the Euclidean distance map De after max{T1(z) | z ∈ X} + 2 iterations, where T1(z) := min{d1(z, z̃) | z̃ ∈ S̄ ∧ de(z, z̃) = De(z)}.

Proof. If S̄ = {z*}, i.e. S̄ consists of only one pixel, the theorem follows immediately from Lemma 1. Now let S̄ contain more than one pixel. The distances are propagated exactly via the shortest Euclidean paths W1, which are paths in X, since X is convex. Since X is finite, max{T1(z) | z ∈ X} exists.

An error occurs at z ∈ S only if for all z* ∈ M1(z) := {z̃ ∈ S̄ | de(z, z̃) = De(z) ∧ d1(z, z̃) = T1(z)} the propagation via W1(z*, z) is disturbed. Now assume z* ∈ M1(z). The propagation can only be disturbed if there is a pixel w ∈ W1(z*, z) and a pixel z̃ ∈ S̄ where

1. de(z*, z) < de(z̃, z),
2. de(z̃, w) ≤ de(z*, w), and
3. w has already received its distance information de²(z̃, w) when it receives the distance suggestion de²(z*, w) from z*.

In this case the distance propagation via W1(z*, z) stops in w, and we may get the final value Q(z) = de²(z̃, z) > de²(z*, z). From item 3 we get dc(z̃, w) ≤ dc(z*, w). In every iteration the updates via direct neighbors from LIST1 are executed first:
if iter = dc(z*, w) = d1(z*, w), the update is done before all updates where iter = dc(z̃, w) = d1(z̃, w) − 1. Because of this update order, we get from item 3

    d1(z̃, w) ≤ d1(z*, w).                                                  (2)
However, (2) means that if w has received its distance information via W1(z̃, w), this value is not propagated later than in the case that w has received its distance information via W1(z*, w). Hence, we get, in contradiction to 1,

    de²(z*, z) = de²(z*, w) + (de²(z*, z) − de²(z*, w))
               ≥ de²(z̃, w) + (de²(z*, z) − de²(z*, w))          (2.)
               ≥ P1(W1(z̃, w) ∪ W1(w, z))                        (3.)
               ≥ P1(W1(z̃, z)) = de²(z̃, z)                       (Corollary 2).
Hence, the path W1(z*, z) cannot be disturbed and every z ∈ X receives its distance information at the latest after T1(z) iterations from z* ∈ M1(z). Therefore, after max{T1(z) | z ∈ X} + 2 steps the contour set is empty and the method stops.

4.3. SP∞—Sufficient d∞-Propagation EDT

In the following, pseudocode is given for the signed sufficient d∞-propagation EDT.

SSP∞—SIGNED SUFFICIENT d∞-PROPAGATION

Initialization. Let Z ∈ N be an integer upper bound for max_{z∈X} De²(z) + 1.

iter = 0; INDIRECT = 0;
for all z ∈ S̄ {Q(z) = 0; V(z) = (0, 0);
    for k = 0, 2, 4, 6: if Nk(z) ∈ S, (z, k + 1) ↦ LIST1;}
for all z ∈ S: Q(z) = Z;

Iterations. While (LIST1 ∪ list1) is not empty:

iter = iter + 1; adddir = 2 · iter − 1; addind = 2 · adddir;
DIRECT = INDIRECT + adddir; INDIRECT = INDIRECT + addind;
for all (z, k, v, d) in list1 {propagate((z, k, v), d + adddir);}
for all (z, k) in LIST1 {v = (iter − 1) · Nk(0);
    propagate((z, k + 1, v), DIRECT); propagate((z, k − 1, v), DIRECT);}
for all (z, k) in LIST1 {PROPAGATE((z, k), INDIRECT);}
LIST1 = LIST2; LIST2 = ∅; list1 = list2; list2 = ∅;

Analysis. The signed distance map and the unsigned distance map of (X, S) are

    SD : X → S̄;  z ↦ z − V(z)    and    D : X → R;  z ↦ sqrt(Q(z)).
Propagation.

function PROPAGATE((z, k), DIST)
    q = Nk(z);
    if (q ∈ X and Q(q) > DIST) {Q(q) = DIST; V(q) = iter · Nk(0); (q, k) ↦ LIST2;}

function propagate((z, k, v), DIST)
    q = Nk(z);
    if (q ∈ X and Q(q) > DIST) {Q(q) = DIST; V(q) = v + Nk(0); (q, k, V(q), DIST) ↦ list2;}
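In Python, the two update routines and the list-entry layout of Section 4.1 might be sketched as follows. This is illustrative only: the shared state (Q, V, the current iteration counter, and the output lists) is bundled in a plain dict named st, and the names X, LIST_out, and list_out are ours; the surrounding initialization/iteration driver from the pseudocode is omitted.

```python
N = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1)]

def PROPAGATE(entry, DIST, st):
    """Main-contour update: entry = (z, k). v need not be stored, since it is
    determined by the iteration step and the direction index."""
    z, k = entry
    q = (z[0] + N[k][0], z[1] + N[k][1])
    if q in st["X"] and st["Q"][q] > DIST:
        st["Q"][q] = DIST
        st["V"][q] = (st["iter"] * N[k][0], st["iter"] * N[k][1])
        st["LIST_out"].append((q, k))

def propagate(entry, DIST, st):
    """Minor-contour update: entry = (z, k, v); v and the distance suggestion
    DIST travel with the new list entry (direct update, no delayed pass)."""
    z, k, v = entry
    q = (z[0] + N[k][0], z[1] + N[k][1])
    if q in st["X"] and st["Q"][q] > DIST:
        st["Q"][q] = DIST
        st["V"][q] = (v[0] + N[k][0], v[1] + N[k][1])
        st["list_out"].append((q, k, st["V"][q], DIST))

# One isolated update step, for illustration:
st = {"X": {(0, 0), (1, 0), (1, 1)}, "Q": {(0, 0): 0, (1, 0): 9, (1, 1): 9},
      "V": {}, "iter": 1, "LIST_out": [], "list_out": []}
PROPAGATE(((0, 0), 6), 1, st)   # source (0,0) suggests 1 to its neighbor N_6 = (1,0)
assert st["Q"][(1, 0)] == 1 and st["V"][(1, 0)] == (1, 0)
```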
FIG. 4. Test images: (a) random circle image; (b) random square image.
Again, we get the unsigned EDT, USP∞, by leaving out all operations with v and V.

Some words about the starting strategy: we know that the nonfeature pixels with at least one direct feature neighbor are the relevant propagation sources. If z ∈ S̄ and Nk(z) ∈ S, k even, then we only add (z, k + 1) to LIST1. That this is sufficient can be proven as follows: without loss of generality let (0, 0)^T ∈ S̄, (1, 0)^T ∈ S, and (0, −1)^T ∈ S̄; then for all z ∈ Q := {0, 1, . . .} × {−1, −2, . . .} it holds that

    de(z, (0, −1)^T) < de(z, (0, 0)^T).                                     (3)

In particular, we do not have to propagate the distance from (0, 0)^T to (1, −1)^T.

Again, the main processing phase consists of the initialization and the iteration step. The analysis step is the postprocessing phase. This distance propagation always works. The following theorem can be proven analogously to the proof of Theorem 5, using Lemma 3 and Corollary 4.

THEOREM 6. Let X ⊂ Z² be finite and convex and let ∅ ≠ S̄ ⊂ X. Then, SP∞ yields the Euclidean distance map De after exactly max{T∞(z) | z ∈ X} + 1 iterations, where T∞(z) := min{d∞(z, z̃) | z̃ ∈ S̄ ∧ de(z, z̃) = De(z)}.

Moreover, using our results about the method of Huang and Mitchell [3], we are able to generalize SP∞ to higher dimensional spaces Z^n [4].

5. COMPUTATIONAL RESULTS

We know from Ragnemalm [7] that ordered propagation EDTs are not O(n²) methods for n × n binary images and, hence, not optimal regarding complexity: every n × n binary image which consists of exactly one digital straight line of black pixels that is not horizontal, vertical, or diagonal yields a counterexample. If n increases, the d1- and the d∞-Voronoi diagrams, which represent the propagation paths, differ increasingly from the de-Voronoi diagram. Hence, the average number of calculations per pixel for our BEDTs is not bounded for increasing n. For worst-case images of that kind, the complexity is O(n³).

However, in this section we show the very good speed performance of our methods, compared with other well-known signed and unsigned EDTs, when applied to some "realistic" test images. Many usual binary images can be generated by suitably combining rotated squares and circles of different radii. Hence, we applied our methods to 512 × 512 binary test images which were randomly generated from rotated squares and from circles of different radii, respectively. To get these test images, we generated about 15% black pixels by randomly choosing midpoints of

• black-pixel squares with random radii 4 ≤ r ≤ 40, rotated by ϕ ∈ [0°, 45°], and
• black-pixel de-circles with random radii 4 ≤ r ≤ 40,
respectively, until the number of black pixels exceeded 15%. Figure 4 shows two test images. In these test images, objects generated by rotated squares represent objects with straight boundaries and corners; objects generated by circles represent objects with rounded boundaries and without corners.

We applied every method twice to every test image: as FEDT, i.e. the black pixels are the feature pixels, and as BEDT, i.e. the white pixels are the feature pixels. In order to compare the performance of the methods we used two characteristics:

• the number of calculations per pixel and
• the cpu time on an RS6000 workstation.

We used the number of calculations per pixel to get an impression of the average complexity of the methods. We calculated the number of pixel calculations for USP1 and USP∞ as

    PC(SP∞/1) = 3 · #S̄ + 3 · Σ #LIST + 1 · Σ #list,                        (4)

where #S̄ is the number of nonfeature pixels, Σ #LIST is the number of main contour pixels, and Σ #list is the number of minor contour pixels. However, since a single pixel calculation of different methods is not exactly comparable, we also considered the cpu times of our C implementation to measure the performance.
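The test-image construction described above might be reproduced roughly as follows. This is only a sketch: the exact rotation, rasterization, and radius conventions of the original experiments are not specified in the text, so the details here (the function name, the parameters n, r_min, r_max, target_black, seed, and the use of filled de-disks) are assumptions; rotated squares could be rasterized analogously.

```python
import math, random

def random_circle_image(n=512, r_min=4, r_max=40, target_black=0.15, seed=0):
    """Binary n x n image: black de-circles (filled disks) with random radii are
    added at random midpoints until the black fraction exceeds target_black."""
    rng = random.Random(seed)
    black = [[False] * n for _ in range(n)]
    count, target = 0, target_black * n * n
    while count < target:
        r = rng.randint(r_min, r_max)
        cx, cy = rng.randrange(n), rng.randrange(n)
        for x in range(max(0, cx - r), min(n, cx + r + 1)):
            dy = int(math.isqrt(r * r - (x - cx) ** 2))
            for y in range(max(0, cy - dy), min(n, cy + dy + 1)):
                if not black[x][y]:
                    black[x][y] = True
                    count += 1
    return black

img = random_circle_image(n=128)   # smaller image for a quick test
assert sum(map(sum, img)) / (128 * 128) >= 0.15
```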
TABLE 1
Mean of Calculations per Pixel (PC) and CPU Time (CPU) in 1/10 s for FEDTs (F) and BEDTs (B) Applied to Samples of One Hundred Binary 512 × 512 Test Images of Random Circles with Random Radii 4 ≤ r ≤ 40

Circle             SEDTs                                         UEDTs
images        SSP∞    SSP1    8SSED   CSED    CC94      USP∞    USP1    ST94    C(3,4)
F    PC       2.97    3.11    2.37    7.56    21.80     2.97    3.11    4.36    2.06
     CPU      3.56    3.72    4.44    13.77   19.25     1.34    1.53    2.14    0.87
B    PC       2.43    2.89    8.60    9.00    164.04    2.43    2.89    8.98    6.91
     CPU      5.74    6.76    11.72   31.99   44.32     3.12    3.78    5.35    2.07
5.1. Performance as a Signed EDT

In the first test we compared our signed methods, SSP1 and SSP∞, with three well-known SEDTs: the not completely error-free O(n²) method 8SSED of Danielsson [2], the CSED of Ragnemalm [7], and the error-free O(n²) method CC94 of Chen and Chuang [1]. Table 1 shows the results for the random circle images: 8SSED, SSP∞, and SSP1 were at least 2.5 times faster than CSED and at least 3.5 times faster than CC94. The complexity of CC94 is O(n²) for an arbitrary n × n binary image; however, the complexity constant is very large. Applied to images with rotated squares, CC94 and CSED were also significantly slower than the other three methods. Hence, in Fig. 5 we only compare the number of calculations per pixel and the cpu times of 8SSED with those of SSP∞ and SSP1.

8SSED is an O(n²) method. Hence, its number of calculations per pixel and its cpu time were almost independent of the rotation angle ϕ. 8SSED was seldom error-free; however, it never yielded more than 0.1% errors. SSP∞ and SSP1 were always faster than 8SSED. For instance, SSP∞ was at least 15% faster than 8SSED as FEDT and at least 25% faster as BEDT.

The number of calculations per pixel and the cpu time of SSP1 and SSP∞ depend heavily on the rotation angle. For both methods we got local maxima of the PC function and the CPU function at a rotation angle between 25° and 27°. SSP∞ was fastest at the rotation angle 0°, while SSP1 was fastest at the rotation angle 45°. These angles obviously fit the circles of the propagation metrics. However, the PC function and the CPU function of SSP∞ were flatter and almost always lower than the functions of SSP1. Hence, we suggest using SSP∞ as a signed EDT.

5.2. Performance as an Unsigned EDT

Then, we compared both unsigned methods with the fastest sequential EDT that we had known so far, the method ST94 of Saito and Toriwaki [8]. Besides, we compared them with the O(n²) raster-scanning chamfer(3,4)-DT, C(3,4), which is often used to approximate the Euclidean distance map [6]. The results of the circle-image tests are given in Table 1. The results of the square-image tests are given in Fig. 6.
The PC function and the CPU function of ST94 also depended on the rotation angle and had nearly the same shape as the functions of USP∞ and USP1, with a local maximum at a rotation angle between 25° and 27°. However, USP∞ and USP1 were always faster than ST94. For instance, USP∞ was at least 25% faster as FEDT and at least 20% faster as BEDT. Moreover, USP∞ and USP1 never needed more than five calculations per pixel, which is low, since every pixel has eight neighbors.

Comparing USP∞ and USP1, we saw again that on average the PC function and the CPU function of USP∞ were flatter and mostly lower than the functions of USP1. The PC and CPU functions of the linear raster-scanning chamfer(3,4)-DT were independent of the rotation angle. C(3,4) was almost always faster than USP∞, and up to twice as fast at the rotation angle 27°. This loss in speed is the price for an error-free Euclidean distance map. If one needs the exact Euclidean distance map, we suggest using USP∞.

6. CONCLUSIONS

SP1 and SP∞ are exact signed and unsigned EDTs in Z². Restricting the propagation to sufficient paths and using direct pixel update avoids many unnecessary calculations. In the case of just one nonfeature pixel, both methods need just one calculation per pixel, which is optimal.

The complexity of both methods is O(n³) for worst-case n × n images. However, on our test images — random images based on circles and on rotated squares — the signed versions of these methods, SSP1 and SSP∞, were significantly faster than the well-known methods CSED, 8SSED, and CC94. In particular, SSP∞ was at least 15% faster than all of these methods. The same speed result held for the unsigned methods, USP1 and USP∞; they were always faster than the method of Saito and Toriwaki. In particular, USP∞ was at least 20% faster.

As signed and as unsigned methods, SP1 and SP∞ yielded their best speed results when applied to random square images in which the object edges were approximately horizontal, vertical, or diagonal. Both methods always needed less than five calculations per pixel, which is low, considering that every pixel has eight neighbors.
FIG. 5. Mean of the calculations per pixel (PC) and of the cpu times (CPU) of (a) foreground SEDTs and (b) background SEDTs applied to samples of one hundred random 512 × 512 binary test images generated by squares of random radii 4 ≤ r ≤ 40 rotated by ϕ ∈ [0°, 45°]. The line with "×" means SSP1, "◦" means SSP∞, and "+" means 8SSED: (a) FEDTs (15% feature pixels and 85% nonfeature pixels); (b) BEDTs (85% feature pixels and 15% nonfeature pixels).
FIG. 6. Mean of the calculations per pixel and of the cpu times of (a) foreground EDTs and (b) background EDTs applied to samples of one hundred random 512 × 512 binary test images generated by squares of random radii 4 ≤ r ≤ 40 rotated by ϕ ∈ [0°, 45°]. The line with "×" means USP1, "◦" means USP∞, "+" means ST94, and the unmarked line means C(3,4): (a) FEDTs (15% feature pixels and 85% nonfeature pixels); (b) BEDTs (85% feature pixels and 15% nonfeature pixels).
Comparing both methods, SP∞ yielded the better average performance. Besides, SP∞ can be generalized to higher dimensions. Since many binary images can be generated by combining rotated squares and circles of different radii, we expect that SP1 and SP∞ will yield similar speed results when applied to the usual binary images of practical applications.
REFERENCES

1. L. Chen and H. Y. H. Chuang, A fast algorithm for Euclidean distance maps of a 2-D binary image, Inform. Process. Lett. 51, 1994, 25–29.
2. P. E. Danielsson, Euclidean distance mapping, Comput. Vision Graphics Image Process. 14, 1980, 227–248.
3. H. Eggers, Parallel Euclidean distance transformations in Z_g^n, Pattern Recognition Lett. 17, 1996, 751–757.
4. H. Eggers, Fast Euclidean distance transformation in Z^n based on ordered propagation via sufficient paths, in Vision Geometry V (R. A. Melter, A. Y. Wu, and L. Latecki, Eds.), pp. 238–245, Proc. SPIE 2826, SPIE, Bellingham, WA, 1996.
5. C. T. Huang and O. R. Mitchell, A Euclidean distance transform using grayscale morphology decomposition, IEEE Trans. Pattern Anal. Mach. Intell. 16, 1994, 443–448.
6. F. Leymarie and M. D. Levine, Fast raster scan distance propagation on the discrete rectangular lattice, CVGIP: Image Understanding 55, 1992, 84–94.
7. I. Ragnemalm, Neighborhoods for distance transformation using ordered propagation, CVGIP: Image Understanding 56, 1992, 399–409.
8. T. Saito and J.-I. Toriwaki, New algorithms for Euclidean distance transformations of an n-dimensional digitized picture with applications, Pattern Recognition 27, 1994, 1551–1565.
9. H. Yamada, Complete Euclidean distance transformation by parallel operation, in Proc. 7th Internat. Conf. on Pattern Recognition, Montreal, Canada, 1984, pp. 69–71.
10. H. Eggers, Fast parallel Euclidean distance transformation in Z^n, in Vision Geometry VI (R. A. Melter, A. Y. Wu, and L. Latecki, Eds.), pp. 33–40, Proc. SPIE 3168, SPIE, Bellingham, WA, 1997.
11. H. Eggers, Euklidische Distanztransformationen mit hinreichender Fortschreibung, thesis, University of Hamburg, Institute of Applied Mathematics, 1997.