Detection of critical points of multivariate piecewise polynomial systems

Computer Aided Geometric Design 40 (2015) 76–87

Jonathan Mizrahi, Department of Applied Mathematics, Technion, Israel
Gershon Elber, Department of Computer Science, Technion, Israel

Article history: Received 5 July 2015; Received in revised form 10 October 2015; Accepted 12 October 2015; Available online 23 October 2015.

Keywords: Critical points; Subdivision solvers; B-spline basis functions; Singular points

Abstract

We propose a general scheme for detecting critical locations (of dimension zero) of piecewise polynomial multivariate equation systems. Our approach generalizes previously known methods for locating tangency events or self-intersections, in contexts such as surface–surface intersection (SSI) problems and the problem of tracing implicit plane curves. Given the algebraic constraints of the original problem, we formulate additional constraints, seeking locations where the differential matrix of the original problem has a non-maximal rank. This makes the method independent of a specific geometric application, as well as of dimensionality. Within the framework of subdivision based solvers, test results are demonstrated for non-linear systems with three and four unknowns.

1. Introduction and related work

The general problem of finding all solutions of algebraic equation systems in a given domain arises in various contexts in Computer Aided Design (CAD), engineering, robotics, or whenever the geometry governing the problem can be mapped to (or represented by) a set of algebraic (in general non-linear) equations. The subdivision approach, exploiting properties of the Bernstein/B-spline basis functions, has been extensively investigated for several decades, introducing algorithms for finding roots of a univariate polynomial, as in Lane and Riesenfeld (1981), solutions of fully constrained multivariate systems (Sherbrooke and Patrikalakis, 1993; Elber and Kim, 2001; and more), as well as of under-constrained systems (Hanniel and Elber, 2007; Liang et al., 2008; and more). Typically, the generic methods used in advanced solvers to guarantee the topology of the solution set (number of roots, number of connected components, loops, closed surfaces, etc.) rely on some regularity (or transversality) assumptions, which may vary slightly according to the application. Topological guarantees near singular locations, however, are treated separately and are usually much more difficult to achieve (if achievable at all).

Results and algorithms related to critical point detection are known in various geometric contexts. In Grandine and Klein IV (1997) and Hass et al. (2007), implicit planar curves that may admit self-intersections are traced. The self-intersections are first identified as critical points of the underlying implicit function f(x, y) = 0, namely the solutions of the fully determined system ∇f(x, y) = 0̄ ∈ R^2. A numerical method with a topological guarantee for implicit planar curves is given in Burr et al. (2008), which also detects isolated singularities and computes their degrees, using the number of connected components of a certain topological structure in the neighborhood of the singularity.





Critical points in the context of curve–surface intersection and surface–surface intersection are identified as tangency events of the two parametric geometries involved. For example, in Hu et al. (1997), tangency events of a parametric curve r(t) and a parametric surface R(u, v) are found by adding an orthogonality condition, ⟨r′(t), R_u(u, v) × R_v(u, v)⟩ = 0, to the original intersection requirement r(t) = R(u, v). In a similar manner, the additional conditions for tangency events of two parametric surfaces R(u, v) and P(s, t) are formulated to require that the two normal vectors at the intersection point are collinear. Further, the dynamic version of the surface–surface intersection problem (Chen, 2008; Chen et al., 2007) is another application where the detection of critical points is essential: as the parametric surfaces evolve continuously with respect to a third parameter (time/control variable), the critical points are the events where intersection curve components may appear, disappear, or merge and split. These are the topological events, where the solution curves may undergo topological changes, and they are characterized by tangency events of the evolving surfaces. In Chen (2008) and Chen et al. (2007), they are recognized as the locations where a specific projection mapping is degenerate. However, no general method to locate the critical points of a smooth function F : D ⊂ R^n → R^k, k ≤ n, is provided.

Critical points are also used to determine the topology of an iso-surface of the form f(x, y, z) = const, using established results on the classification of critical points from Morse theory. In Stander and Hart (2005), the critical points are found as the solutions of ∇f(x, y, z) = 0̄ ∈ R^3. Each such point corresponds to a critical value of f. The topological change in the iso-surfaces, as the function values vary through the critical one, is then identified using the Hessian at the singular location. Consequently, a topologically correct triangular mesh is interactively updated. Such concepts are also used in Ni et al. (2004), where an appropriate Morse function is constructed on a polygonal mesh, such that its critical points can be optimized (their number can be controlled by the user). The critical points of the chosen function are then used to interrogate the topology of the mesh and to separate it into disc-topology patches. Other applications of critical point analysis arise in the context of bisector curves, offset curves and medial axis computation (Seong et al., 2010; Muthuganapathy et al., 2011; Johnson and Cohen, 2009; Musuvathy, 2011). These methods use critical point analysis of certain distance functions to locate transition events in the required solution manifolds.

To the best of our knowledge, there is no (subdivision based) general method for finding the critical points of a smooth function F : D ⊂ R^n → R^k, k ≤ n. Although critical point analysis is widely used, it is usually applied to a specific function, intimately related to the problem or application domain. Our approach assumes no knowledge of the underlying motivation that gave rise to the equation solving problem, or of its dimensions. This generality, however, also has its drawbacks, as we discuss in further detail in Section 4.

The rest of this paper is organized as follows: In Section 2, the method for detecting critical points is detailed. Section 3 provides test results for several types of problems and dimensions. Finally, Section 4 concludes and discusses future work.
2. Critical points detection

Let F : D ⊂ R^n → R^k (k ≤ n) be a (piecewise) polynomial, at least C^1 smooth, and given in tensor product B-spline form. The vector valued function F is defined on the axis parallel, n dimensional compact box:

D = [a_1, b_1] × ... × [a_n, b_n],

which may formally be considered as a subset of an open set U ⊂ R^n on which F is defined and smooth. The scalar components of F are denoted F = (f_1, ..., f_k). In this section, we describe a method for finding the critical points of:

F(x̄) = 0̄,                                    (1)

using a (subdivision based) multivariate constraint solver. First, recall that the differential of F at p ∈ D, denoted dF_p, is the linear map whose matrix representation has the partial derivatives of F, evaluated at p, as its elements:

[dF_p]_{ij} = \frac{\partial f_i}{\partial x_j}(p).
The critical points of F are generally defined as follows (Do Carmo, 1976):

Definition 1. Given a differentiable map F : U ⊂ R^n → R^k, defined in an open set U of R^n, we say that p ∈ U is a critical point of F if the differential dF_p : R^n → R^k is not a surjective (onto) mapping. The image F(p) ∈ R^k of a critical point is called a critical value of F. A point of R^k which is not a critical value is called a regular value of F.

Remark 2. We are typically not interested in all the critical points of F, but only in those that are solution points as well. However, as will be evident shortly, the proposed method can easily be adapted to find all the critical points, rather than only those that are solution points (i.e., those belonging to the critical value 0̄ ∈ R^k).

Prior to proceeding to solution details, the following clarification is in order. The singularities (or critical points) we seek are those of the solution manifold, implicitly represented by the underlying equation system. This is not to be confused with the input equations, which may admit discontinuities of their own, but these are trivial to handle: since we assume a B-spline


represented input (i.e., piecewise polynomials/rationals), discontinuities of the input representation (if they exist) occur only at the B-spline knots. This is commonly handled in subdivision solvers by first subdividing the problem into a small set of smoothly represented problems. It is stressed that each of these, in turn, may admit singular solutions. Both situations will be demonstrated in Section 3.

2.1. Critical points as solutions of algebraic constraints

For general positive integers k ≤ n, saying that a k × n matrix A is a surjective linear operator can be equivalently stated as rank(A) = k, or simply that A has maximal possible rank. Hence, p ∈ D is a critical point of F if and only if:

rank(dF_p) < k.                                    (2)
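To make the rank test of Equation (2) concrete, the following small sketch (not part of the paper's solver; the two-sphere system used here is a hypothetical example) evaluates a finite-difference Jacobian at a tangency point and checks for rank deficiency:

```python
import numpy as np

def jacobian(F, p, k, eps=1e-6):
    """Central-difference approximation of the k x n matrix dF_p."""
    p = np.asarray(p, dtype=float)
    n = p.size
    J = np.zeros((k, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (np.asarray(F(p + e)) - np.asarray(F(p - e))) / (2 * eps)
    return J

# Hypothetical example: two unit spheres centered at the origin and at
# (0, 0, 2) are tangent at p = (0, 0, 1), where their gradients are collinear.
F = lambda x: (x[0]**2 + x[1]**2 + x[2]**2 - 1.0,
               x[0]**2 + x[1]**2 + (x[2] - 2.0)**2 - 1.0)
p = np.array([0.0, 0.0, 1.0])
print(np.linalg.matrix_rank(jacobian(F, p, k=2), tol=1e-4) < 2)  # True: critical point
```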

To formulate the non-maximal rank requirement such that it can be handled by an equation solver, observe that rank(dF_p) = k if and only if there is at least one subset of k linearly independent columns. Denote by S_k^n the set of all possible k-tuples of multi-indices of the form I = (i_1, ..., i_k), which correspond to all possible selections of k columns out of the n columns of dF_p. Each such selection determines a k × k sub-matrix. For a specific selection I of k columns, the corresponding minor (the determinant of the sub-matrix), denoted M_I(p), must vanish if p is a critical point of F. Hence, the points we are interested in are exactly all p ∈ D which simultaneously satisfy:

F(p) = 0̄ ∈ R^k,    M_I(p) = 0,  ∀ I ∈ S_k^n.                                    (3)

From now on, the vector valued function defined on D ⊂ R^n with values in R^{k + \binom{n}{k}}, which is given by Equation (3), is denoted by G. Hence, we seek solutions of:

G(x̄) = (F(x̄), M_1(x̄), ..., M_{\binom{n}{k}}(x̄)) = 0̄ ∈ R^{k + \binom{n}{k}}.                                    (4)
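A minimal sketch of how the constraints of Equations (3) and (4) can be assembled numerically (using numpy and itertools; this stands in for the symbolic B-spline construction used by the actual solver):

```python
import numpy as np
from itertools import combinations

def minor_values(J):
    """All (n choose k) k x k minors of a k x n matrix J, one per column
    selection I in S_k^n, in lexicographic order of the selections."""
    k, n = J.shape
    return [np.linalg.det(J[:, list(I)]) for I in combinations(range(n), k)]

def G(F, dF, x):
    """Over-constrained system of Equation (4): the original constraints
    followed by all minors of the differential.  F and dF are user-supplied
    callables returning F(x) and the k x n Jacobian at x, respectively."""
    return np.concatenate([np.atleast_1d(F(x)), minor_values(dF(x))])
```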

Since efficient and robust algorithms for symbolic operations on B-spline functions are known (Elber and Cohen, 1993; Chen et al., 2009; and more), the multivariate determinant functions can be computed and, conceptually, the system G(x̄) = 0̄ in Equation (4) can be sent to an algebraic constraint solver. However, observe that Equation (4) poses two difficulties. First, G inherits the singularity from the original equation solving problem: if p ∈ D is a critical point of the original system F(x̄) = 0̄, then the gradients of F are linearly dependent when evaluated at p. Since these gradients are a subset of the gradients of G, the linear dependence persists. The second difficulty is that Equation (4) has k + \binom{n}{k} constraints and n unknowns. This over-constrained (or redundant) structure requires some reformulation prior to passing it to an algebraic equation solver: geometric root isolation techniques, such as the tangent cone overlapping criteria (Hanniel and Elber, 2007), assume exactly n gradients of functions and n unknowns. Finally, the Newton–Raphson numeric iterative procedure for the solution refinement step is also designed for n × n systems. In what follows, we describe our proposed method and how these difficulties are addressed.

Remark 3. If all critical points of F are needed, rather than only those corresponding to the critical value zero, the proposed method can be modified by ignoring the first k constraints in Equation (4) and solving only for simultaneous zeros of all minors.

2.2. Reformulation of the critical points detection problem

Recall that subdivision based solvers (such as Sherbrooke and Patrikalakis, 1993; Hanniel and Elber, 2007; Bartoň et al., 2011; Sederberg and Nishita, 1990; and more) essentially consist of two steps. The recursive subdivision and domain exclusion step tests whether the currently handled domain can be purged, due to positivity/negativity of all B-spline coefficients of the current representation in at least one of the constraints (or some other purging criterion using the geometry of the control polygons, etc.). If the domain cannot be purged, a solution may exist, and some root isolation/uniqueness condition is tested. If uniqueness cannot be guaranteed, subdivision proceeds. Otherwise, subdivision is terminated and the second step is executed: iterative numeric improvement in the current domain, for example via Newton–Raphson iterations, using the center as the initial candidate.

Within this framework, our motivation is to find an alternative formulation for Equation (4) that is not over-constrained/redundant (in terms of dimensionality) and, if possible, does not suffer from the same singularity at the locations we seek, as does the original problem. Three reasonable options are the following:

1. Formulate the requirement that all minors vanish simultaneously at p using a single scalar constraint, denoted L(p), given by:

L(p) = (M_{I_1}(p))^2 + (M_{I_2}(p))^2 + ... + (M_{I_{\binom{n}{k}}}(p))^2 = 0.                                    (5)


This formulation is ill-conditioned in many senses and is hardly useful. Not only does it inherit the singularity of the original problem, it also adds another non-transversal zero requirement, since any zero of the non-negative function L is a tangential contact. Further, there are k + 1 constraints in this formulation (k being the number of constraints in the original problem), which is not always equal to n (the dimension of the parameter space). Apart from being useless in the (easier) case of k = n (the problem is still over-constrained/redundant), in the k ≤ n − 2 case it causes a situation where isolated critical points are searched for by a solver designed to trace a higher dimensional solution (curves, surfaces, etc.). Due to these drawbacks, and as verified in practice, this approach is highly unlikely to efficiently detect the critical locations.

2. Solve a subsystem of size n × n, and filter the candidate solutions by evaluating the other constraints: a critical point must be a solution of the chosen subsystem which is also a zero of all other (unchosen) components of the redundant system of Equation (4). The selection method we have experimented with uses the first k constraints, completing them to a system of n constraints with any choice of n − k minors. In some cases, we have also tried using minors exclusively. This approach does prove useful in detecting the critical points, but it is not our chosen method, as it suffers from the following inherent problems. First, a proper choice of a subset of constraints must be made. Such a selection must take place with no knowledge of the regularity of the chosen subsystem, of how to recognize better/worse sub-systems, or even of the anticipated dimension of the variety of critical points of the original system. In that sense, this is a biased decision: some constraints participate in the solution process, while others participate as post-filters only. The first actual effect of this asymmetry is closely related to scaling and numeric tolerances: experiments show that a solution candidate close to the exact critical point of interest may satisfy the chosen sub-system (and hence numeric iterations are terminated), while failing the post-filter constraints due to inadequate tolerance adjustments and different constraint scaling. Further, different choices of sub-systems yield different behavior with respect to this technical issue. Another undesired phenomenon due to the asymmetry is the following: it can happen that, for a problem with isolated critical points only, a very dense set of candidate solutions is produced, due to a poor choice of a rank-deficient sub-system. Root isolation techniques are of course useless in such an unfortunate case, since the roots of the poorly chosen sub-system are indeed non-isolated. This results in high and unnecessary computational cost, for the equation solver as well as for the evaluation based post-filter effort.

3. The third option is our choice and is the one used to generate all the examples in this paper. We replace the over-constrained/redundant formulation of Equation (4) with a problem of size n × n that always accounts for all minors in the same manner: we view the over-constrained system G(x̄) = 0̄ in the minimal norm sense, and hence attempt to find minimizers of ½‖G(x̄)‖² that are also zeros of G. This approach is detailed in Section 2.3. A priori, it need not inherit the singularities from the original problem, unlike the previous two alternatives.
One drawback, however, is that this approach produces higher order constraints than the previous alternative, due to the optimization formulation (taking the square of the constraints before differentiation, as will soon be described). Consequently, run times are significantly slower (one or two orders of magnitude), but it is still our preferred method due to its robustness. This framework is also relevant to any over-constrained system (not restricted to the context of finding critical points) and is described next.

2.3. Over-constrained systems using a minimal norm

As in the above notations, let G denote the underlying vector valued function of some over-constrained equation solving problem. In our context, G represents Equation (4), but the question that gave rise to the over-constrained system may be disregarded for most of this section. Since G(p) = 0̄ if and only if ½‖G(p)‖² = 0, and since the Euclidean norm function is non-negative, the required solutions (if they exist) of G(x̄) = 0̄ are necessarily minimizers of ½‖G(x̄)‖². This approach is well known, and classically resides in the field of (constrained and unconstrained) least squares optimization, with established theory and algorithms, such as the powerful Gauss–Newton iterative numeric method (Nocedal and Wright, 2006). The Gauss–Newton method essentially uses the pseudo-inverse of the Jacobian in the role played by the inverse of the Jacobian in the Newton–Raphson analog. It is a numeric iterative method, applied to over-constrained systems, aiming to converge to a location of minimal norm. Now, recall that numeric iterative methods are local: they cannot guarantee the isolation of all solutions in a domain, and highly depend on the initial candidate. Advanced subdivision solvers are global, and address this issue via root isolation techniques: numeric iterations are applied locally, but in all sub-domains for which solution uniqueness could be verified. Put differently, root isolation enables the termination of the subdivision process, proceeding to numerical improvement. The difficulty we are now facing, however, is that common root isolation techniques (e.g., the tangent cone non-overlapping criteria (Hanniel and Elber, 2007)) assume a system of size n × n, while our system is over-constrained. This technicality prevents us from adopting the optimization approach in a straightforward manner at the numeric improvement level: we are lacking an appropriate subdivision termination criterion. To resolve this issue, we provide the solver with the following alternative problem: the first n constraints have the semantics of the necessary condition for a local minimum of ½‖G(x̄)‖², and are given by the simultaneous zeros of the vector valued function H : D ⊂ R^n → R^n, given by:

H(x̄) = ∇(½‖G(x̄)‖²) = 0̄.                                    (6)
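Since ∇(½‖G‖²) = J_G^T G, the n constraints of Equation (6) can be evaluated directly from G and its Jacobian; a small numerical sketch (a finite-difference Jacobian replaces the paper's symbolic B-spline differentiation):

```python
import numpy as np

def H(G, x, eps=1e-6):
    """H(x) = grad(0.5 * ||G(x)||^2) = J_G(x)^T G(x); the Jacobian of G is
    approximated here by central differences."""
    x = np.asarray(x, dtype=float)
    g = np.asarray(G(x), dtype=float)
    JG = np.zeros((g.size, x.size))
    for j in range(x.size):
        e = np.zeros(x.size)
        e[j] = eps
        JG[:, j] = (np.asarray(G(x + e)) - np.asarray(G(x - e))) / (2 * eps)
    return JG.T @ g
```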


Since we are not interested in minimizers that are not zeros of G, we pass G(x̄) = 0̄ as additional constraints, but tag them as subdivision step constraints: they are only considered for the domain purging by coefficient signs, not for root isolation or numeric iterative steps. Consequently, the solver shall avoid searching for zeros of H that are not near zeros of G, to the extent of the subdivision tolerance. Any local optimizers that are not zeros of G but survive this domain purging filter shall be filtered out by evaluation of G in a post process (a rare event). The above described scheme accounts for all scalar components of G in the same manner, hence no asymmetric selection is made, while the preferred structure of a system of size n × n exploits the existing machinery of root isolation tests for subdivision termination and of Newton–Raphson iterations. The additional constraints (tagged as subdivision step only) act as a filter, also accounting for all components of G equally. Another advantage is that there is no necessary inheritance of degeneracy: as verified by the test results, critical points can successfully be detected as isolated roots of the optimization formulation, implying that a problem with singularities may be mapped to an alternative formulation that enjoys regularity. There is no guarantee that this is always the case, but in order for a solution p ∈ D of H(x̄) = 0̄ to be a critical point of H as well as of F, it must be a location of linear dependence of the gradients of H and of the gradients of F, which are essentially unrelated events. Examples of such an unfortunate case can obviously be constructed, with a high enough level of degeneracy, and yet we are in a preferred setting relative to the guaranteed inheritance of singularity inherent in the other alternative formulations.

Remark 4. Observe that although it could not be adopted directly, the Gauss–Newton numeric procedure (Nocedal and Wright, 2006) has effectively been used: it is easily verified by differentiation that the Newton–Raphson iterations of the formulation H(x̄) = 0̄ coincide with the Gauss–Newton iterations of the over-constrained formulation G(x̄) = 0̄. Hence, the numeric iterations that our n × n formulation yields are exactly the Gauss–Newton iterations of the over-constrained original problem, with the distinction that now they are applied locally, on all sub-domains for which solution uniqueness could be guaranteed. It is the need to test for subdivision termination that required switching to seeking zeros of H at an earlier stage of the subdivision solver paradigm.
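For reference, a single Gauss–Newton update for the over-constrained system G(x̄) = 0̄ can be written as the solution of a linear least-squares problem (a generic sketch; G and JG are user-supplied callables, not part of the paper's solver):

```python
import numpy as np

def gauss_newton_step(G, JG, x):
    """One Gauss-Newton update for min 0.5*||G(x)||^2: solve the linearized
    least-squares problem J_G(x) d = -G(x) and step along d."""
    d, *_ = np.linalg.lstsq(np.asarray(JG(x)), -np.asarray(G(x)), rcond=None)
    return np.asarray(x, dtype=float) + d
```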

2.4. Discussion: dimensionality of the critical locations

This work is concerned with the detection of critical points of dimension zero, i.e., isolated singularities of the system F(x̄) = 0̄. The singular locations can, clearly, form a manifold of higher dimension or, more generally, a higher dimensional variety (not necessarily a regular manifold). Since the critical points of F = (f_1, ..., f_k) have been characterized as a subset of yet another equation solving problem, the dimensionality (and regularity) of the critical locations variety of F is now governed by the regularity of H = (h_1, ..., h_n). Assume a critical point p ∈ D has been detected, i.e., the set:

{∇f_i(p)}_{i=1,...,k}

is linearly dependent. The linear dependence, or not, of the set:

{∇h_i(p)}_{i=1,...,n}

is, as mentioned earlier, conceptually unrelated: it involves the first and second order derivatives of F, and is again, roughly speaking, an event of “zero probability” (a term made precise by Sard's theorem (Lee, 2012), in the sense of the sparsity of the critical values of a smooth function). Hence, it is not surprising that, in practice, the chosen alternative formulation can yield solutions (critical points of F) that have been successfully isolated using regularity based methods. When the critical point p of F is also a critical point of H, the question of the dimension of the set of critical points of F near p arises: p need not be isolated. This is a completely general question, not restricted to the context of critical point detection. To the best of our knowledge, there is no known method to compute the dimension of the variety of real solutions of algebraic equations in the non-regular case. In the case of algebraically closed fields (e.g., the variety of complex solutions), this topic is systematically treated: theory and computational methods are known, relying on algebraic geometry concepts (Gröbner bases and the Hilbert polynomial (Cox et al., 2007)). We summarize what can be concluded regarding the dimensionality of the critical locations near the detected critical point p of F. Denote:

rank(dH_p) = m.

If m = n, then p is an isolated critical point of F. This is a sufficient but not necessary condition. When m < n, the dimension cannot be concluded, since the linear dependence of {∇h_i(p)}_{i=1,...,n} may occur only at p (in which case p may be isolated), or it may hold globally in some neighborhood of p (in which case p is a member of some higher dimensional variety of solutions). In Section 3, we show how the method designed to detect isolated critical points can be used to approximate a one dimensional variety of critical points, using a dense point set.

2.5. The algorithm

The above described method is summarized in Algorithm 1. Note that the output may be a set of critical points, but it may also be deduced that the problem is globally degenerate in D: if all the constructed minor functions are identically zero, then rank(dF) < k, independent of x̄.


Algorithm 1: FindCriticalPts(D, F)

input : A compact hyper-box D ⊂ R^n; F = (f_1, ..., f_k) : D → R^k; numeric tolerance ε.
output: C, a (perhaps empty) set of critical points of F: either the zero dimensional variety of isolated points, or a point-cloud approximation of the higher dimensional variety; or D, the entire domain, if the problem is degenerate globally.

1.1   for all i = 1, ..., k do
1.2       for all j = 1, ..., n do
1.3           Compute ∂f_i/∂x_j, ordered as in dF_x̄;
1.4   for all I ∈ S_k^n do
1.5       Compute the scalar valued multivariate function M_I(x̄);
1.6   if all coefficients of M_I are smaller than ε, for all I ∈ S_k^n, then
1.7       return D;
1.8   G ← (F, M_{I_1}, ..., M_{I_{\binom{n}{k}}});
1.9   H ← ∇(½‖G‖²);
1.10  Tag G as subdivision step constraints only;
1.11  C ← Solutions of (H(x̄), G(x̄)) = 0̄ ∈ R^{n+k+\binom{n}{k}};
1.12  for all p ∈ C do
1.13      if ‖G(p)‖ > ε then
1.14          Purge p;
1.15  return C;
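The control flow of Algorithm 1 may be sketched as follows (a Python outline only: solve_constrained and the domain sampling input are hypothetical placeholders for the subdivision solver and for the symbolic test of step 1.6, which operate on B-spline coefficients rather than on point samples):

```python
import numpy as np
from itertools import combinations

def find_critical_pts(D_samples, F, dF, k, n, eps, solve_constrained):
    """Outline of Algorithm 1.  `dF(x)` returns the k x n differential at x;
    `D_samples` is a set of sample points standing in for the symbolic test of
    steps 1.4-1.7; `solve_constrained(H, G)` is a placeholder for the
    subdivision solver of steps 1.10-1.11, returning candidate zeros of H
    while using G for sign-based domain purging only."""
    # Steps 1.4-1.7: globally degenerate problem (all minors identically zero).
    if all(abs(np.linalg.det(dF(p)[:, list(I)])) < eps
           for p in D_samples for I in combinations(range(n), k)):
        return "degenerate"

    # Step 1.8: over-constrained system G = (F, all minors).
    def G(x):
        J = dF(x)
        minors = [np.linalg.det(J[:, list(I)]) for I in combinations(range(n), k)]
        return np.concatenate([np.atleast_1d(F(x)), minors])

    # Step 1.9: H = grad(0.5*||G||^2), here via central differences.
    def H(x, h=1e-6):
        def phi(y):
            g = G(y)
            return 0.5 * float(g @ g)
        return np.array([(phi(x + h * e) - phi(x - h * e)) / (2 * h)
                         for e in np.eye(n)])

    # Steps 1.12-1.14: keep only candidates that are also zeros of G.
    return [p for p in solve_constrained(H, G) if np.linalg.norm(G(p)) <= eps]
```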

Fig. 1. The problem of finding the tangency contact of two planar curves (a), an ellipse (red) and a circle (green), given in implicit form as the zero set of two quadratic functions f 1 (x, y ) and f 2 (x, y ), graphs of which are the respective green and red surfaces in (b). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

3. Test results

We demonstrate the critical point detection method for several equation solving problems, with various situations regarding the set of critical points of the solution. The first example is a simple step-by-step presentation of how the algorithm operates, while demonstrating how a critical point of the original problem can be detected via a regular equation system: the suggested formulation yields a regular solution. The entire sequence is depicted in Figs. 1 to 3. Fig. 1 shows the general, original problem of finding the intersection of two planar curves that are singular (i.e., tangent, Fig. 1(a)), given as the zero sets of two scalar bivariate functions in Fig. 1(b). Fig. 2(a) shows the additional constraint, originating from the non-maximal rank differential requirement, yielding a single minor function. At this stage, the problem is formulated as three equations in two unknowns, hence over-constrained, and corresponds to G(x, y) = 0̄ in our notation. The solution we seek is a minimizer of ½‖G(x, y)‖², represented by the surface in Fig. 2(b). Finally, the minimization problem is approached by searching for zeros of H = ∇(½‖G‖²). As shown in Fig. 3, this is realized as a regular (or transversal) zero of a 2 × 2 system.

In the next example, the situation is of an isolated critical point as the only solution point of an anticipated solution curve in R^4. Consider the following system of three equations and four unknowns:

y^2 + z^2 + w^2 − 0.25 = 0,
y^2 + z^2 + (w − 1)^2 − 0.25 = 0,
x − 0.5 = 0.                                    (7)
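A quick numerical check of this system (a numpy sketch, independent of the solver) confirms that the point (0.5, 0, 0, 0.5) discussed below is a solution at which the 3 × 4 differential drops rank:

```python
import numpy as np

F = lambda x, y, z, w: np.array([y**2 + z**2 + w**2 - 0.25,
                                 y**2 + z**2 + (w - 1.0)**2 - 0.25,
                                 x - 0.5])
dF = lambda x, y, z, w: np.array([[0.0, 2*y, 2*z, 2*w],
                                  [0.0, 2*y, 2*z, 2*(w - 1.0)],
                                  [1.0, 0.0, 0.0, 0.0]])
p = (0.5, 0.0, 0.0, 0.5)
print(F(*p))                           # [0. 0. 0.]  -- a solution point
print(np.linalg.matrix_rank(dF(*p)))   # 2 < 3       -- rank deficient: critical
```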

When approached manually, it is easily verified that the three dimensional hyper-cylinders of the first two equations are tangent along their common contact line (x, 0, 0, 0.5). The intersection of this straight line with the hyper-plane x = 0.5 of the last equation yields the unique solution point (0.5, 0, 0, 0.5). When sent to the equation solver (Bartoň et al., 2011), which relies on regularity for its topological considerations, the algorithm anticipates a univariate solution curve and fails


Fig. 2. A new constraint, M(x, y) = 0, is introduced, with the semantics of a non-maximal rank differential. In (a), the zero sets of M (cyan) and of the original problem (green and red) are shown. Together these are the scalar components of the function G = (f_1, f_2, M). In (b), the graph of ½‖G‖² (blue) is shown. The solution, at this stage, is realized as the minimizer of ½‖G‖². (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 3. The minimal norm of G(x, y) is detected as a (transversal) simultaneous zero of the partial derivatives of ½‖G‖². The zero sets of the partial derivatives H = (H_1, H_2) with respect to x (cyan) and y (magenta) are shown in (a). The graphs of the respective functions are the surfaces in (b). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

to detect this isolated (critical) solution point. Using the presented critical point detection method, the point (0.5, 0, 0, 0.5) is detected. This problem does not originate from a parameterized representation of curves/surfaces, and hence the critical point cannot be found using the methods reviewed in Section 1.

Next, we show how Algorithm 1 can be applied to detect singular points of surface–surface intersection (SSI) problems, as a special instance of the method. Indeed, this can also be solved by the specific methods using the normal vectors of the parameterized surfaces, as described in Section 1, but that formulation is not used in the examples herein. In the first SSI example, the solution curve is almost entirely regular, except for a single problematic location. Consider the “monkey saddle” surface (Do Carmo, 1976):

S_1(u, v) = (u, v, u^3 − 3v^2 u),

which admits a complicated intersection curve with the paraboloid:

S_2(s, t) = (s, t, s^2 + t^2).

The parametrized version of this surface–surface intersection problem can be formulated as follows:

u − s = 0,
v − t = 0,
u^3 − 3v^2 u − s^2 − t^2 = 0.                                    (8)

This formulation is of course artificially higher dimensional than required, since by trivial substitution it can be reduced to a single equation in two unknowns. However, for instructive reasons, we would like to demonstrate the detection of the critical point at the origin in this four-variable case (and, of course, the subdivision solver does not perform such a symbolic substitution/reduction of the problem). Three regular branches of the intersection curve are traced, missing the high order contact at the origin (Fig. 4). The critical point detection method returns the origin as an isolated critical point.

Another version of the above example is the zero level contour of the monkey saddle surface, formulated similarly. The critical point at the origin is again missed, but this time it is a complicated self-intersection of the contour curve, leaving the curve disconnected (Fig. 5). Using the presented method, the tangency point at the origin is detected.

The last SSI example provided is of two rational B-spline surfaces. Again, the regular SSI solver reports that no solution was found, and the critical point is detected, as depicted in Fig. 6.


Fig. 4. The intersection of the “monkey saddle” surface (green) and a paraboloid (red). Only the three components of the solution curve (black) are detected by SSI methods based on regularity, and the isolated point at the origin (blue) is missed (a). This critical point is detected by the presented method, as also shown in the enlarged image (b), with a zoom factor of approximately 6. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 5. The intersection curve of the monkey saddle surface (green) with the x, y plane (red), in approximately a unit size cube. The self-intersecting solution curve (black) is shown in (a), but the solution curve is incomplete near the critical point, as depicted in the enlarged image (b), with a zoom factor of approximately ×100, showing a top view of a small neighborhood of the origin. The critical point (blue) at the origin is the output of the presented detection method. The subdivision tolerance used in this example is 0.001. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 6. The intersection of two rational B-spline surfaces: the wine glass (red) and the cylinder (green). The tangency point is missed by the regular SSI solver, and is detected as a critical point (blue). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)


Fig. 7. The critical points of the Steiner surface (different views are shown in (a) and (b), of the same object). The solution of the original problem (Equation (9)) is shown in red, as provided by the solver (Mizrahi and Elber, 2015), which assumes regularity, reaching subdivision tolerance near the singular areas and returning them empty. The solutions of G (¯x) = 0¯ are the non-isolated critical points (in blue): the self-intersections of the surface along the coordinate axes. They are detected as a large set of solution points, approximating the actual solution, which is a self-intersecting curve on its own. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Next, an example with non-isolated singular points is given. Consider the implicit equation of the Steiner surface:

x^2 y^2 + x^2 z^2 + y^2 z^2 − xyz = 0.                                    (9)
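For a single implicit equation (k = 1), the minors of Equation (3) are just the partial derivatives, and the non-isolated critical locus can be sampled directly; a small sketch checking points on the x-axis:

```python
import numpy as np

f  = lambda x, y, z: x*x*y*y + x*x*z*z + y*y*z*z - x*y*z
df = lambda x, y, z: np.array([2*x*y*y + 2*x*z*z - y*z,    # df/dx
                               2*x*x*y + 2*y*z*z - x*z,    # df/dy
                               2*x*x*z + 2*y*y*z - x*y])   # df/dz
for t in np.linspace(-1.0, 1.0, 5):
    p = (t, 0.0, 0.0)          # points on the x-axis
    print(f(*p), df(*p))       # both vanish: a whole curve of critical points
```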

The Steiner surface admits complicated self-intersections, with a variety of critical points in the form of self-intersecting curves along the coordinate axes. As mentioned in Section 2, in such a case (not known to the user in advance), the critical point detection algorithm returns a large set of points, approximating the higher dimensional self-intersection variety. Indeed, critical points are detected along the coordinate axes (Fig. 7). As discussed in Section 2.4, the presented method does not provide a way to recognize that the critical points compose a one dimensional variety.

We proceed to demonstrate how the presented method can be applied to the problem of detecting topological events in the dynamic intersection of deforming and/or moving geometries. In this context, the term “deforming/displacing” geometry shall refer to any regular manifold whose representation is endowed, beyond its dependence on the parameters x̄ ∈ D ⊂ R^n, with additional deformation/displacement degrees of freedom via control variables t̄ ∈ T ⊂ R^m. We assume that the representation depends smoothly on x̄ as well as on t̄, and that the manifold remains regular for all t̄ ∈ T. This framework is well defined whether the representation is implicit or parametric, and regardless of dimensionality. Consequently, the intersection of such objects can be written as the solution of an equation of the form:

F(x̄, t̄) = 0̄ ∈ R^k,                                    (10)

where F is defined on D × T ⊂ R^{n+m}, D is the product of the parameter spaces of the involved geometries, and T is the product of their deformation/displacement control spaces. The values of n, k depend on the representation and on the number of geometries in the problem. For each t̄ value, the solution in the parameter space is the result of projecting the solution from D × T to D. A detailed treatment of this problem can be found in Chen (2008), Chen et al. (2007). The approach taken in Chen (2008), Chen et al. (2007) is, roughly speaking, propagation in the control space. The intersection of the geometries is advanced, either preserving the topological structure or (in the case of a singularity in the control/time propagation) undergoing one of several generic transition events. These topological events are classified based on results from Morse theory, and their locations are characterized as the critical points of a certain projection mapping of the solution manifold to the control space (although no general method for detecting non-maximal rank is proposed). We now apply a version of Algorithm 1 to the detection of these transition events. Observe that for a fixed t̄ ∈ T, if the parameter space gradients of F (i.e., gradients with respect to x̄ ∈ D only) are linearly independent, then the resulting manifold is regular in D. A topological event must occur at t̄ values for which the parameter-space intersection is degenerate. For example, the intersection curve of two deforming surfaces changes its topological structure only when the deforming surfaces are tangent. Hence, adopting the notation d_x̄F(x̄, t̄) for the differential of the mapping F with respect to x̄ only, we are interested in the (x̄, t̄) ∈ R^{n+m} values for which this k × n matrix, whose entries are scalar functions of (n + m) variables, has non-maximal rank. Therefore, we may apply Algorithm 1, with the exception that the matrix providing the minor functions at step 1.4 is d_x̄F(x̄, t̄).
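A sketch of that modification (numerical; F is any user-supplied map of the combined variables (x̄, t̄), and only its first n columns of partial derivatives enter the minor construction; for the torus/ellipsoid example below, k = 2 and n = 3):

```python
import numpy as np
from itertools import combinations

def spatial_minors(F, x, t, k, eps=1e-6):
    """Minors of d_x F(x, t): differentiate only with respect to the geometric
    variables x, holding the control variables t fixed (step 1.4, modified)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    J = np.zeros((k, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (np.asarray(F(x + e, t)) - np.asarray(F(x - e, t))) / (2 * eps)
    return [np.linalg.det(J[:, list(I)]) for I in combinations(range(n), k)]
```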


Fig. 8. The topological events in the dynamic intersection of an ellipsoid (red) and a torus (green) displaced along the curve t → (t, t^2, t^3) (magenta). The t = 0.0 and t = 1.0 frames present an empty solution curve. The other t values, and their corresponding locations (blue), are found as the non-maximal rank points, in (x, y, z, t) space, of the (x, y, z) differential obtained from Equations (11) and (12). These events are where a new component of the solution curve appears, two components merge/split, or a component disappears. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Table 1
Running times for the critical point detection examples presented in this work. The times are for the solution of the system in Equation (6), with G(x̄) = 0̄ as subdivision/filter constraints. The maximal orders of the constructed function H(x̄) in Equation (6), and of the original problem F(x̄) = 0̄ in Equation (1), are shown.

Example                          Solution time (sec) of H(x̄) = 0̄ s.t. G(x̄) = 0̄    Maximal order of H(x̄)    Maximal order of input problem F(x̄)
Equation (7)                     0.03                                              5 × 9 × 9 × 8            2 × 3 × 3 × 3
Fig. 4 (Equation (8))            4.1                                               19 × 13 × 13 × 12        4 × 3 × 3 × 3
Fig. 5                           0.82                                              19 × 13 × 7 × 6          4 × 3 × 2 × 2
Fig. 6                           4.9                                               13 × 19 × 13 × 6         3 × 4 × 3 × 2
Fig. 7 (Equation (9))            4.52                                              4 × 5 × 5                3 × 3 × 3
Fig. 8 (Equations (11), (12))    15.05                                             8 × 9 × 9 × 25           5 × 5 × 5 × 13

For example, consider the implicit equation of a torus, obtained by revolving a circle of radius r along a circle of radius R, about the z axis:

(x^2 + y^2 + z^2 + R^2 − r^2)^2 − 4R^2(x^2 + y^2) = 0.

Now, for t ∈ [0, 1], the implicit torus can be smoothly displaced along the curve parameterized by γ(t) = (t, t^2, t^3) (this space curve is referred to as the “twisted cubic” in the literature (Cox et al., 2007)), resulting in the following implicit equation:

((x − t)^2 + (y − t^2)^2 + (z − t^3)^2 + R^2 − r^2)^2 − 4R^2((x − t)^2 + (y − t^2)^2) = 0.                                    (11)

The intersection of the smoothly displaced torus of Equation (11) with the ellipsoid, implicitly given by:

(x − 0.5)^2/0.4^2 + (y − 0.5)^2/0.3^2 + (z − 0.5)^2/0.2^2 − 1 = 0,                                    (12)

depends smoothly on t. It is empty for t = 0.0, 1.0, and changes its topological structure at six different t values. These events correspond to tangency locations in the (x, y, z) space (and need not be tangency critical points in the (x, y, z, t) space). The six events are detected, and are shown according to their t values in Fig. 8.

The running times for solving the critical points equation (H(x̄) = 0̄ subject to G(x̄) = 0̄, not including the construction of the minor functions) are presented in Table 1, tested on an Intel Core i5-2300 CPU, 2.8 GHz, with 4 GB memory. For each example, the maximal orders of the constructed problem are provided, as well as the orders of the input problem. Since (squares of) determinants of function valued matrices are involved, the critical points problem is typically of high order, as



can be seen from the third column of Table 1. As mentioned in Section 2.2, solving the problem using the minimal norm form is typically slower than the less robust (and unbalanced) option of selecting an n × n subsystem from Equation (4). For example, the computation time of the example in Fig. 5, using the faster unbalanced method, was 0.09 seconds, while using the balanced, minimal norm approach (shown in Table 1), the time is 0.82 seconds. We suspect that the computation time differences are mostly due to the fact that our balanced approach has to deal with higher order constraints (see also Table 1).

4. Conclusion and future work

In this paper, we have presented a general method for detecting the (zero dimensional set of) critical points in the parameter space of a (non-linear) algebraic equation system F(x̄) = 0̄, where F : D ⊂ R^n → R^k, k ≤ n. The method assumes no knowledge of the geometry that gave rise to the equation system, and addresses only the geometry of the zero set of F: the intersection of k hyper-surfaces, given in implicit form by each of the scalar components of F = (f_1, ..., f_k). One potential drawback of the method is that in many applications the parameter space of F is different from the underlying problem's objective (or model) space: for example, the equations represented by F may have a parameter space that consists of parameters of several geometric objects, as well as other auxiliary variables that measure distances/time/control variables, etc. This means that solution points belonging to the zero set of F are mapped (in a post process), by some application specific function, to the actual solution points of interest to the user. It is not uncommon that this post process mapping has degeneracies of its own, and the presented method cannot possibly detect them.

Nevertheless, the presented method proved useful in detecting points where a general k × n matrix with multivariate function elements has non-maximal rank. The proper exploitation of these singularity detections in subdivision solvers is yet to be investigated. First, consider locating all critical points as a pre-process step of any general equation solving problem. As a result, it is possible to subdivide the problem in a manner that guarantees two types of sub-domains: sub-domains that contain the critical locations, and a set of sub-problems that are strictly regular, admitting no critical points, not even at domain boundaries, bounded away from the singularity. Next, when k < n, correct tessellation of the solution manifold (in general dimension) at the above mentioned singular sub-domains is a very challenging problem, the solution of which shall result in topologically correct “stitching” of the regular areas. Such results are known for the cases of f(x, y) = 0 and f(x, y, z) = 0, as in Bloomenthal and Ferguson (1995), Burr et al. (2008), Hass et al. (2007), Stander and Hart (2005). Further, the ability to detect non-maximal rank is in fact useful and can be applied whenever linear dependence is to be found. For example, if all k × k minors happen to be the identically-zero function, then the problem is degenerate globally in a domain, and the anticipated solution set dimension is found to be wrong by such an observation (which should affect the design of any solver that is meant to support solution sets of several possible dimensions).
Acknowledgements

This work was supported in part by the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA grant agreement PIAP-GA-2011-286426, and was supported in part by the Israel Science Foundation (grant No. 278/13).

References

Bartoň, M., Elber, G., Hanniel, I., 2011. Topologically guaranteed univariate solutions of underconstrained polynomial systems via no-loop and single-component tests. Comput. Aided Des. 43 (8), 1035–1044.
Bloomenthal, J., Ferguson, K., 1995. Polygonization of non-manifold implicit surfaces. In: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. ACM, pp. 309–316.
Burr, M., Choi, S.W., Galehouse, B., Yap, C.K., 2008. Complete subdivision algorithms, II: isotopic meshing of singular algebraic curves. In: Proceedings of the Twenty-First International Symposium on Symbolic and Algebraic Computation. ACM, pp. 87–94.
Chen, X., 2008. An Application of Singularity Theory to Robust Geometric Calculation of Interactions Among Dynamically Deforming Geometric Objects. ProQuest.
Chen, X., Riesenfeld, R.F., Cohen, E., 2009. An algorithm for direct multiplication of B-splines. IEEE Trans. Autom. Sci. Eng. 6 (3), 433–442.
Chen, X., Riesenfeld, R.F., Cohen, E., Damon, J., 2007. Theoretically-based algorithms for robustly tracking intersection curves of deforming surfaces. Comput. Aided Des. 39 (5), 389–397.
Cox, D.A., Little, J., O'Shea, D., 2007. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Undergrad. Texts Math., vol. 10. Springer Verlag.
Do Carmo, M.P., 1976. Differential Geometry of Curves and Surfaces, vol. 2. Prentice-Hall, Englewood Cliffs.
Elber, G., Cohen, E., 1993. Second-order surface analysis using hybrid symbolic and numeric operators. ACM Trans. Graph. 12 (2), 160–178.
Elber, G., Kim, M.-S., 2001. Geometric constraint solver using multivariate rational spline functions. In: Proceedings of the Sixth ACM Symposium on Solid Modeling and Applications. ACM, pp. 1–10.
Grandine, T.A., Klein IV, F.W., 1997. A new approach to the surface intersection problem. Comput. Aided Geom. Des. 14 (2), 111–134.
Hanniel, I., Elber, G., 2007. Subdivision termination criteria in subdivision multivariate solvers using dual hyperplanes representations. Comput. Aided Des. 39 (5), 369–378.
Hass, J., Farouki, R.T., Han, C.Y., Song, X., Sederberg, T.W., 2007. Guaranteed consistency of surface intersections and trimmed surfaces using a coupled topology resolution and domain decomposition scheme. Adv. Comput. Math. 27 (1), 1–26.
Hu, C.-Y., Maekawa, T., Patrikalakis, N.M., Ye, X., 1997. Robust interval algorithm for surface intersections. Comput. Aided Des. 29 (9), 617–627.
Johnson, D.E., Cohen, E., 2009. Computing surface offsets and bisectors using a sampled constraint solver. In: Proceedings of Graphics Interface 2009. Canadian Information Processing Society, pp. 31–37.


Lane, J.M., Riesenfeld, R.F., 1981. Bounds on a polynomial. BIT Numer. Math. 21 (1), 112–117.
Lee, J.M., 2012. Introduction to Smooth Manifolds. Grad. Texts Math., vol. 218. Springer.
Liang, C., Mourrain, B., Pavone, J.-P., 2008. Subdivision methods for the topology of 2D and 3D implicit curves. In: Geometric Modeling and Algebraic Geometry. Springer, pp. 199–214.
Mizrahi, J., Elber, G., 2015. Topologically guaranteed bivariate solutions of under-constrained multivariate piecewise polynomial systems. Comput. Aided Des. 58, 210–219.
Musuvathy, S.R., 2011. Medial axis of regions bounded by B-spline curves and surfaces. PhD thesis. The University of Utah.
Muthuganapathy, R., Elber, G., Barequet, G., Kim, M.-S., 2011. Computing the minimum enclosing sphere of free-form hypersurfaces in arbitrary dimensions. Comput. Aided Des. 43 (3), 247–257.
Ni, X., Garland, M., Hart, J.C., 2004. Fair Morse functions for extracting the topological structure of a surface mesh. ACM Trans. Graph. 23 (3), 613–622.
Nocedal, J., Wright, S.J., 2006. Numerical Optimization. Springer Science+Business Media.
Sederberg, T.W., Nishita, T., 1990. Curve intersection using Bézier clipping. Comput. Aided Des. 22 (9), 538–549.
Seong, J.-K., Johnson, D.E., Elber, G., Cohen, E., 2010. Critical point analysis using domain lifting for fast geometry queries. Comput. Aided Des. 42 (7), 613–624.
Sherbrooke, E.C., Patrikalakis, N.M., 1993. Computation of the solutions of nonlinear polynomial systems. Comput. Aided Geom. Des. 10 (5), 379–405.
Stander, B.T., Hart, J.C., 2005. Guaranteeing the topology of an implicit surface polygonization for interactive modeling. In: ACM SIGGRAPH 2005 Courses. ACM, p. 44.