Journal of Visual Communication and Image Representation 13, 94 –102 (2002) doi:10.1006/jvci.2001.0494, available online at http://www.idealibrary.com on

Shape Recovery by Diffusion Generated Motion

Björn Jawerth and Peng Lin¹
Department of Mathematics, University of South Carolina, Columbia, South Carolina 29208

Received January 31, 2000; accepted October 22, 2001

Diffusion generated motion has been used to generate a variety of interface motions. In this paper, we present a new shape recovery model with diffusion generated motion. The approach is based on alternately diffusing and sharpening the initial region to move the sharp interface toward the boundaries of the desired objects. The shapes are recovered by an anisotropic interface motion with a local image property dependent speed. Our algorithm is simple and easy to implement. It automatically captures topological changes and works for both 2D and 3D image data. Experimental results for synthetic and real images are presented. © 2002 Elsevier Science (USA)

Key Words: shape recovery; deformable models; mean curvature motion; diffusion generated motion.

1. INTRODUCTION

Recovering shapes of objects in two- and three-dimensional image data is an important problem in image analysis. Deformable models [18] are a promising tool for shape recovery. The deformable model approaches are based on deforming initial contours or surfaces toward the boundaries of the objects to be recovered. Generally speaking, there are two types of deformable models: parametric models [8] and geometric models [2, 3, 9]. The parametric models depend heavily on the parameterization of curves or surfaces. The deformation is obtained by minimizing an energy functional that is designed to attract the contour to the boundary while controlling its smoothness. The parametric models require the initialization to be close to the desired object. Generally they are not capable of changing topology, unless special procedures are used for detecting possible splitting and merging [11, 17]. In the geometric models, the curve (surface) propagates by an implicit velocity. The deformation is driven by mean curvature motion. The geometric models can automatically handle changes in topology; therefore multiple objects can be recovered simultaneously. Another advantage of the geometric models is that the approach is d-dimensional.

¹ Both authors were supported in part by ONR Grant N00014-97-11059, ARO Grant 36197RTDPS, and DoD Grant N00014-97-10806.

In this paper, we present a simple shape recovery model with a more general type of front propagation: diffusion generated motion. In the early 1990s, Merriman et al. [12, 13]



developed the surprisingly simple "diffusion generated motion by mean curvature" algorithm, which gave mean curvature motion without computing curvature. The basic idea of this approach is to move the boundary of a set by alternately "diffusing" and "sharpening" the set. Recently this algorithm was extended to generate more general interface motions [14, 15]. The term "diffusion generated motion," or more generally "convolution generated motion," is used for this type of motion. In our work, we adapt the diffusion generated motion technique to the problem of shape recovery. To recover a shape, a solid set is initially placed in the image to completely cover the desired object. By incorporating local image properties into the sharpening steps, and alternately diffusing and sharpening this solid set, the boundary of the set moves toward and stops at the object boundary. The front evolution is an anisotropic motion with a local image property dependent speed. Our algorithm is simple and very easy to implement. It also automatically captures topological changes.

The paper is organized as follows. In Section 2 we briefly explain the diffusion generated motion techniques for front propagation. In Section 3 we present the main result of the paper, shape recovery by diffusion generated motion. Experimental results of applying our method to 2D and 3D images are given in Section 4, followed by concluding remarks in Section 5.

2. DIFFUSION GENERATED MOTION

In this section we briefly review the main results of diffusion generated motion, including diffusion generated mean curvature motion [12, 13], general diffusion generated motion [14], and convolution generated motion [15].

2.1. Diffusion Generated Mean Curvature Motion

In the early 1990s, a surprisingly simple algorithm for motion by mean curvature was introduced by Merriman, Bence, and Osher [12, 13]. Suppose we want to follow an interface propagating with a normal velocity that is proportional to its mean curvature. The Merriman–Bence–Osher algorithm is as follows:

DIFFUSION GENERATED MEAN CURVATURE MOTION ALGORITHM.
1. Initialize: u(x, 0) = χ_D(x), where χ_D is the characteristic function of the initial region D ⊂ R^d.
2. Apply diffusion to u for some time Δt, i.e., solve

    ∂u/∂t = Δu,    u(x, 0) = χ_D(x),

for time t = Δt.
3. "Sharpen" the diffused region by setting

    u = 1 if u > 1/2, and u = 0 otherwise.

Then begin again.

For any time t, the level set {x : u(x, t) = 1/2} gives the location of the interface.
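As a concrete illustration, the scheme above can be sketched in a few lines of Python (our own hypothetical sketch, not the authors' code): diffusing the characteristic function for time Δt is a Gaussian blur with standard deviation √(2Δt), matching the heat kernel, and sharpening is a threshold at 1/2. Grid size, radius, and time step below are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mbo_step(u, dt):
    """One diffusion generated mean curvature motion step:
    diffuse the characteristic function for time dt (heat kernel =
    Gaussian blur with std sqrt(2*dt)), then threshold at 1/2."""
    v = gaussian_filter(u.astype(float), sigma=np.sqrt(2.0 * dt))
    return (v > 0.5).astype(float)

# Under mean curvature motion a disc shrinks (dR/dt = -1/R),
# so its pixel area should decrease over the iterations.
n = 128
y, x = np.mgrid[:n, :n]
u = ((x - 64) ** 2 + (y - 64) ** 2 < 30 ** 2).astype(float)
areas = [u.sum()]
for _ in range(15):
    u = mbo_step(u, dt=10.0)
    areas.append(u.sum())
```

A practical caveat: on a discrete grid Δt must not be taken too small either, or the front moves less than one pixel per step and the thresholding pins the interface in place.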


In order to generate mean curvature motion, it is essential to apply the diffusion for only a short time, since only the initial sharp interface motion generated by the diffusion is proportional to the mean curvature. In fact, several rigorous proofs have been given to show that this simple algorithm converges to mean curvature motion as the time step size goes to zero [1, 4, 6].

2.2. General Diffusion Generated Motion

The diffusion generated mean curvature motion method has been extended to the case of more general interface motion. One way to generalize this method is to allow a general threshold, λ, in the "sharpening" step [14].

DIFFUSION GENERATED MOTION ALGORITHM.
1. Initialize: u(x, 0) = χ_D(x), where χ_D is the characteristic function of the initial region D ⊂ R^d.
2. Apply diffusion to u for some time Δt, i.e., solve

    ∂u/∂t = Δu,    u(x, 0) = χ_D(x),

for time t = Δt.
3. "Sharpen" the diffused region by setting

    u = 1 if u > λ, and u = 0 otherwise,

where λ ∈ [0, 1]. Then begin again.

The location of the interface is given by the boundary of the new region. By choosing different λ, various interface motions can be generated by this algorithm. In particular, λ = 1/2 corresponds to mean curvature motion as Δt tends to zero. In general, λ can even be allowed to depend on other quantities such as the time step Δt. A variety of interface motions with velocity v_n = a + bκ have been obtained by diffusion generated motion with λ = 1/2 + C√Δt [6, 10, 16].
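To illustrate the effect of the threshold, the hypothetical sketch below (parameter values are our own illustrative choices) compares λ = 1/2, which gives pure mean curvature motion, with a fixed λ = 0.6 > 1/2, which adds a constant inward component to the normal speed, so a disc shrinks noticeably faster.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def threshold_step(u, dt, lam):
    """One diffusion generated motion step with general threshold lam:
    diffuse for time dt, then sharpen at lam instead of 1/2."""
    v = gaussian_filter(u.astype(float), sigma=np.sqrt(2.0 * dt))
    return (v > lam).astype(float)

# lam = 1/2: mean curvature motion; lam = 0.6: extra constant
# inward normal speed, so the same disc shrinks faster.
n = 128
y, x = np.mgrid[:n, :n]
disc = ((x - 64) ** 2 + (y - 64) ** 2 < 30 ** 2).astype(float)
u_mc, u_fast = disc.copy(), disc.copy()
for _ in range(10):
    u_mc = threshold_step(u_mc, dt=10.0, lam=0.5)
    u_fast = threshold_step(u_fast, dt=10.0, lam=0.6)
```

For a straight edge, thresholding the blurred profile at λ > 1/2 shifts the recovered boundary inward by roughly σ·Φ⁻¹(λ) per step, which is the constant-speed term a in v_n = a + bκ.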

2.3. Convolution Generated Motion

Since the diffusive evolution is equivalent to convolution with a Gaussian kernel, it is natural to extend diffusion generated motion to convolution generated motion. It was pointed out in [12] and proven rigorously in [5] that any positive radially symmetric kernel can be used in place of the Gaussian kernel to generate mean curvature motion. For example, in two dimensions, one can take the kernel K to be the normalized characteristic function of a disc of radius r, centered at the origin,

    K(x) = 1/(πr²) if |x| < r, and K(x) = 0 otherwise,

where r ∼ √Δt.
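A sketch of this disc-kernel variant (grid and radius are our own illustrative choices): replacing the Gaussian with the normalized indicator of a disc still shrinks a circular set, consistent with the claim that any positive radially symmetric kernel generates mean curvature motion.

```python
import numpy as np
from scipy.signal import fftconvolve

def disc_kernel(r):
    """Normalized characteristic function of a disc of radius r."""
    s = int(np.ceil(r))
    y, x = np.mgrid[-s:s + 1, -s:s + 1]
    K = (x ** 2 + y ** 2 < r ** 2).astype(float)
    return K / K.sum()

def convolution_step(chi, K, lam=0.5):
    """Convolve the characteristic function with K, threshold at lam."""
    return (fftconvolve(chi, K, mode="same") > lam).astype(float)

# A disc-shaped set still shrinks under the disc-kernel dynamics.
n = 128
y, x = np.mgrid[:n, :n]
chi = ((x - 64) ** 2 + (y - 64) ** 2 < 30 ** 2).astype(float)
areas = [chi.sum()]
for _ in range(10):
    chi = convolution_step(chi, disc_kernel(8.0))
    areas.append(chi.sum())
```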


Convolution generated motion was further studied in [14, 15]:

CONVOLUTION GENERATED MOTION ALGORITHM.
1. Initialize:

    χ_Ω(x) = 1 if x ∈ Ω, and χ_Ω(x) = 0 otherwise,

where Ω ⊂ R^d is the initial region.
2. Update the new region Ω_new by letting Ω_new = {x : K ∗ χ_Ω(x) > λ} for some threshold value λ ∈ [0, 1) and some convolution kernel function K, or equivalently, by setting

    χ_Ω_new(x) = 1 if K ∗ χ_Ω(x) > λ, and χ_Ω_new(x) = 0 otherwise.

Then begin again. The location of the interface is given by the boundary of the new region Ω_new.

To be consistent with diffusion generated motion, a time step must be introduced for convolution generated motion. The effective time step Δt is determined by the size of the support of the convolution kernel K, since the larger the support of K is, the further its convolution will move the set boundary. For further details about the relationship between the time step and the size of the support of a general kernel, see [14]. For a Gaussian kernel with support size r, the relation is Δt ∼ r². This can also be motivated by the simple fact that diffusion for a time t smears the set boundary over a distance r ∼ √t. Formally, the convolution generated motion method allows the use of any kind of kernel function. A variety of interesting interface motions, including anisotropic motions, have been obtained by using different kernel functions, especially asymmetric kernels [14, 15].

3. SHAPE RECOVERY BY DIFFUSION GENERATED MOTION

In this section we describe how diffusion generated motion can be used for shape recovery. The basic idea is to use a local image property dependent threshold in the sharpening step of the diffusion generated motion procedure, so that the front propagates in smooth regions and stops at the object boundaries. Our first shape recovery algorithm is as follows:

ALGORITHM 1.
1. Initialize: u(x, 0) = χ_Ω(x), where Ω ⊂ R^d is an initial region that covers the desired objects in the d-dimensional image I(x).
2. Apply diffusion to u for one time step, i.e., solve

    ∂u/∂t = Δu,    u(x, 0) = χ_Ω(x),

for time t = 1.
3. "Sharpen" the diffused region by setting u(x) = χ_Ω̃(x) with

    Ω̃ = {x ∈ Ω : u(x) ≥ 1 − |∇(G_σ ∗ I)|²/T²},


where G_σ is the d-dimensional normalized Gaussian kernel defined by

    G_σ(x) = (4πσ²)^(−d/2) exp(−|x|²/(4σ²)),

and T > 0 is a constant. Then begin again.

The location of the front is given by the boundaries of the set Ω̃. It eventually converges to the boundaries of the desired objects in the image. The diffusion equation in Algorithm 1 can be discretized by a standard finite difference method with the time step equal to the grid spacing. Since solving the diffusion equation for a time t with initial data χ_Ω is equivalent to convolving χ_Ω with the Gaussian kernel G_{√t}, we also have the following algorithm:

ALGORITHM 2.
1. Initialize:

    χ_Ω(x) = 1 if x ∈ Ω, and χ_Ω(x) = 0 otherwise,

where Ω ⊂ R^d is an initial region that covers the desired objects in the d-dimensional image I(x).
2. Update the new region Ω̃ by letting

    Ω̃ = {x ∈ Ω : G_1 ∗ χ_Ω(x) ≥ 1 − |∇(G_σ ∗ I)|²/T²},

where σ and T are some positive constants. Then begin again.

Note that in both Algorithm 1 and Algorithm 2, the way the original region Ω is initialized is for the case of inward motion. For the case of outward motion, Ω should be initialized such that Ω^c is in the interior of the object. The parameter σ determines the scale at which the image gradient is evaluated, and the parameter T determines whether a gradient is considered significant. The advantage of Algorithm 2 is that it allows the use of different smoothing kernels, as in the general convolution generated motion algorithm.

We now explain how Algorithm 2 (or, equivalently, Algorithm 1) works. Denote the threshold 1 − |∇(G_σ ∗ I)|²/T² by λ. In a smooth region of the image I(x) we have |∇(G_σ ∗ I)|²/T² ≈ 0, thus λ = 1 − |∇(G_σ ∗ I)|²/T² ≈ 1. Since the convolution G_1 ∗ χ_Ω only smears within a region of radius 1, the updated region Ω̃ = {x ∈ Ω : G_1 ∗ χ_Ω(x) ≥ λ ≈ 1} guarantees that the front moves only a distance of one pixel in the smooth region. When the front meets an object boundary point P, since |∇(G_σ ∗ I)(P)|²/T² > 1, we always have G_1 ∗ χ_Ω(P) ≥ 1 − |∇(G_σ ∗ I)(P)|²/T². Thus P ∈ Ω̃. During the next iteration, the front is supposed to move one pixel inward (with respect to the region Ω) from the point P to a point P′, but by the same argument P will again be in the newly updated region, and since P and P′ are only one pixel apart, the front remains at the position P. Therefore, after the front meets the object boundary, it stops there. The algorithm allows the propagating front to change topology, which means that multiple objects can be detected simultaneously. Note that the front evolution obtained by the proposed algorithm is an anisotropic motion.
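A minimal 2D sketch of the scheme (our own illustration on a synthetic image, with illustrative parameters σ = 1 and T = 9.5): the diffusion is one explicit finite-difference step, as in Algorithm 1, and the sharpening keeps a pixel of Ω only where the diffused value exceeds the local threshold λ = 1 − |∇(G_σ ∗ I)|²/T².

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def recover_shape(I, chi, sigma=1.0, T=9.5, n_iter=60):
    """Alternate one explicit diffusion step with image-dependent
    sharpening: the front moves about one pixel per iteration in
    smooth regions and stops where the smoothed image gradient
    makes the local threshold small."""
    gy, gx = np.gradient(gaussian_filter(I, sigma))
    lam = 1.0 - (gx ** 2 + gy ** 2) / T ** 2   # threshold 1 - |grad(G_sigma * I)|^2 / T^2
    for _ in range(n_iter):
        u = chi + 0.25 * laplace(chi)          # explicit diffusion step (dt = 0.25 is stable)
        chi = np.where((u >= lam) & (chi > 0), 1.0, 0.0)  # sharpen; inward motion only
    return chi

# Synthetic test: a bright 40x40 square on a dark background; the
# initial region is a large square that should shrink onto it.
n = 100
I = np.zeros((n, n))
I[30:70, 30:70] = 255.0
chi0 = np.zeros((n, n))
chi0[5:95, 5:95] = 1.0
chi = recover_shape(I, chi0)
```

The intersection with the current region in the sharpening step enforces inward motion, and the equality case u ≥ λ keeps the exact interior (u = 1, λ = 1) intact.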
It has an image gradient dependent speed rather than a curvature dependent speed. This


distinguishes our method from previous work in which mean curvature motion is used for shape recovery.

4. EXPERIMENTAL RESULTS

In this section we present some examples of the proposed shape recovery model. The numerical implementation is based on Algorithm 2. The algorithm is very easy to implement, and it works for both two- and three-dimensional images. The images in Figs. 1 and 2 were taken from the book [7].

Figure 1 presents a binary image (256 × 256) with inward front propagation. The upper left image shows the original image together with the boundary of the initial region. The initial region Ω covers the four objects in the original image, so the initial function has value 1 inside the region and 0 outside. Algorithm 2 is applied with σ = 1.0 and T = 9.5. The propagating front splits and converges to the boundaries of the objects after 120 iterations.

Figure 2 shows a real image (256 × 256) with inward front propagation. The initial region Ω is a square that covers the tools in the original image, with its boundary very close to the frame of the image. Algorithm 2 is applied with σ = 1.0 and T = 9.5. The initial contour converges to the boundaries of the tools after 130 iterations.

Figure 3 presents a medical image (256 × 256) with outward front propagation. Initially a small disk D is placed inside the black hole. Since this is an outward propagation, the small disk should be considered as the complement of the initial region Ω. So the initial

FIG. 1. Recovery of multiple shapes from a binary image (256 × 256) with inward diffusion generated motion. σ = 1.0 and T = 9.5.


FIG. 2. Recovery of shape from a real image (256 × 256) with inward diffusion generated motion. σ = 1.0 and T = 9.5.

FIG. 3. Recovery of the shape of a hole from a medical image (256 × 256) with outward diffusion generated motion. σ = 3.0 and T = 9.0.


FIG. 4. Recovery of 3D shape from a 3D image (128 × 128 × 128) with diffusion generated motion. σ = 1.0 and T = 20.0.

region is Ω = D^c. Therefore the initial function has value 1 outside the small disk D and 0 inside D. Algorithm 2 is applied with σ = 3.0 and T = 9.0. The initial contour converges to the boundary of the black hole after 70 iterations.

Figure 4 shows a 3D image (128 × 128 × 128) with inward front propagation. The initial region is a ball that covers the ring in the original image. Algorithm 2 is applied with σ = 1.0 and T = 20.0. The initial region converges to the ring after 35 iterations. Note how the model handles the change of topology. The 2D views in Fig. 4 are generated using the Stanford VolPack volume renderer.

5. CONCLUDING REMARKS

In this paper we presented a simple shape recovery model based on diffusion generated motion. By alternately diffusing and sharpening the initial region, and incorporating local image properties into the sharpening steps, the sharp interface is able to move toward and stop at the desired object boundary. The shapes are recovered by an anisotropic interface motion with a local image property dependent speed. Our algorithm works for both 2D and 3D image data. It automatically captures topological changes; therefore multiple objects can be recovered simultaneously.

ACKNOWLEDGMENT

We thank the reviewers for valuable comments and suggestions. We also thank the authors and publisher of the book Machine Vision for allowing us to use the two test images.


REFERENCES

1. G. Barles and C. Georgelin, A simple proof of convergence for an approximation scheme for computing motions by mean curvature, SIAM J. Numer. Anal. 32(2), 1995, 484–500.
2. V. Caselles, F. Catté, T. Coll, and F. Dibos, A geometric model for active contours in image processing, Numer. Math. 66, 1993, 1–31.
3. V. Caselles, R. Kimmel, and G. Sapiro, Geodesic active contours, Int. J. Comput. Vision 22, 1997, 61–79.
4. L. C. Evans, Convergence of an algorithm for mean curvature motion, Indiana Univ. Math. J. 42, 1993, 553–557.
5. H. Ishii, A generalization of the Bence, Merriman, and Osher algorithm for motion by mean curvature, in Curvature Flows and Related Topics (A. Damlamian, J. Spruck, and A. Visintin, Eds.), pp. 111–127, Gakkôtosho, Tokyo, 1995.
6. H. Ishii, G. E. Pires, and P. E. Souganidis, Threshold dynamics type schemes for propagating fronts, TMU Mathematics Preprint Series 4, 1996.
7. R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision, McGraw-Hill, New York, 1995.
8. M. Kass, A. Witkin, and D. Terzopoulos, Snakes: Active contour models, Int. J. Comput. Vision 1, 1988, 321–331.
9. R. Malladi, J. Sethian, and B. Vemuri, Shape modeling with front propagation: A level set approach, IEEE Trans. Pattern Anal. Machine Intell. 17, 1995, 158–174.
10. P. Mascarenhas, Diffusion Generated Motion by Mean Curvature, CAM Report 92-33, Univ. of California, Dept. of Math., Los Angeles, 1992.
11. T. McInerney and D. Terzopoulos, Topologically adaptable snakes, in Proc. ICCV, Cambridge, June 1995.
12. B. Merriman, J. Bence, and S. Osher, Diffusion generated motion by mean curvature, in Computational Crystal Growers Workshop (J. E. Taylor, Ed.), pp. 73–83, Amer. Math. Soc., Providence, RI, 1992.
13. B. Merriman, J. Bence, and S. Osher, Motion of multiple junctions: A level set approach, J. Comput. Phys. 112, 1994, 334–363.
14. S. J. Ruuth, B. Merriman, and S. Osher, Convolution generated motion as a link between cellular automata and continuum pattern dynamics, J. Comput. Phys. 151, 1999, 836–861.
15. S. J. Ruuth and B. Merriman, Convolution Generated Motion and Generalized Huygens' Principles for Interface Motion, CAM Report 98-04, Univ. of California, Dept. of Math., Los Angeles, 1998.
16. S. J. Ruuth, A diffusion-generated approach to multiphase motion, J. Comput. Phys. 145, 1998, 166–192.
17. R. Szeliski, D. Tonnesen, and D. Terzopoulos, Modeling surfaces of arbitrary topology with dynamic particles, in Proc. CVPR, pp. 82–87, 1993.
18. D. Terzopoulos, A. Witkin, and M. Kass, Constraints on deformable models: Recovering 3D shape and nonrigid motion, Artif. Intell. 36, 1988, 91–123.

BJÖRN JAWERTH received the M.Sc. in mathematics and statistics in 1974, the M.Sc. in physics in 1974, and the Ph.D. in mathematics in 1977, all from the Lund Institute of Technology, Lund, Sweden. He is the David M. Robinson Professor in the Department of Mathematics at the University of South Carolina, where he is the director of the Industrial Mathematics Initiative. Dr. Jawerth is also the CEO of Summus, Ltd., a company specializing in image and video compression and related wavelet technologies. His research interests include harmonic analysis and partial differential equations with computational applications.

PENG LIN received his B.S. and M.S. in mathematics in 1985 and 1988, respectively, both from Peking University, China. From 1988 to 1990 he was an assistant professor of mathematics at this university. He received his Ph.D. in mathematics from Washington University in 1996. From 1997 to 1999 he worked as a postdoctoral research associate at the University of South Carolina. Currently he is a research scientist with Summus, Ltd. His current research interests include image processing, video processing, and computer vision.