Sparse representation of step functions on manifolds




Computers & Graphics 82 (2019) 117–128



Special Section on STAG 2018

Simone Melzi, University of Verona, Strada le Grazie 15, Verona 37134, Italy

Article history: Received 10 December 2018; Revised 18 May 2019; Accepted 19 May 2019; Available online 25 May 2019.

Keywords: Computer graphics, Computational geometry

Abstract

Step functions are non-smooth and piecewise constant functions with a finite number of pieces. Each of these pieces indicates a local region contained in the entire domain. Several geometry processing applications involve step functions defined on non-Euclidean domains, such as shape segmentation, partial matching and self-similarity detection. Standard signal processing cannot handle this class of functions. The classical Fourier series, for instance, does not give a good representation of these non-smooth functions. In this paper, we define a new frame for the sparse approximation and transfer of the step functions defined on manifolds. The definition of our frame is completely spectral and provides a concise representation through an efficient computation. We exploit the sparse representation that takes full advantage of the proposed frame. Furthermore, our frame is built specifically to enhance its use in combination with the functional maps, a powerful tool for transferring signals between manifolds. This functional approach makes the proposed framework stable to isometric and non-isometric deformations. A large set of experiments confirms that the proposed frame improves the sparse approximation and transfer of step functions.

© 2019 Elsevier Ltd. All rights reserved.

1. Introduction

In the last decades, the interest in the digital representation of real objects has grown significantly. The wide availability of devices for the acquisition of geometric data from the real world is the main reason for this growth. Several applications involve these digital geometric data: acquisition, reconstruction, analysis, manipulation, simulation and transmission of complex 3D models. Since the real world is not rigid, most of these shapes undergo non-rigid deformations. Working with digital representations of real objects, we still have to face many issues related to their deformations. One of the possible approaches to deal with non-rigid deformations is the functional maps framework proposed by Ovsjanikov et al. in [1]. The main idea of functional maps is to represent correspondences between shapes as functional correspondences instead of point correspondences. In the functional spaces, in fact, the problems related to the non-rigid deformations of the surfaces are solved or at least simplified. These functional spaces are usually represented as the spaces spanned by a finite set of basis functions. The Fourier basis is a standard choice in this context. Recently, several different bases and representations for the functional space have been proposed. Each of these bases is designed to suit a given task or a fixed subset of the functional space.


In this paper we focus on a particular class of functions: the step functions. Given a manifold M, a step function on M is a finite linear combination of indicator functions. Given a region R ⊂ M, an indicator function is defined as f : M → {0, 1} such that f(x) = 1 if x ∈ R, while f(x) = 0 otherwise. These functions are directly involved in several applications in geometry processing. In shape segmentation, for instance, the shape is subdivided into labelled regions that can be represented by a step function. Another application is partial matching, where only a region of the first shape has to be correctly matched with a second shape. The function that indicates this region is a step function. In this paper, we propose the Binary Sparse Frame (BSF), a new frame for the approximation and transfer of step functions. The BSF is designed to be exploited especially in conjunction with the functional maps framework, enabling direct transfer of functions between shapes. The BSF has a spectral definition that provides a concise and efficient formulation. The main contributions of this paper are:

1. We propose the Binary Sparse Frame for the sparse synthesis and analysis of step functions.
2. We exploit the sparse representation provided by the BSF.
3. We compare the efficiency and precision of the proposed frame with state-of-the-art methods.
4. We prove that our spectral definition is well suited to transferring step functions between shapes, in comparison with possible alternatives.



The BSF proposed in this paper could improve several applications in shape analysis. This paper is an extended version of the conference paper "Indicators Basis for Functional Shape Analysis" [2]. The basis proposed in [2], namely the "Indicator Basis", coincides with the BSF, but the name has been modified to improve its descriptiveness. With Binary Sparse Frame we highlight three properties of this set of functions: Binary, because the functions that compose it are binary; Sparse, because of the representation they provide; Frame, because these functions are not linearly independent and thus they form a frame and not a basis. In comparison with [2], we focus on the sparse representation provided by the BSF and we add a more comprehensive experimental section:

• We select two new solvers for the sparse optimization.
• We compare different parameters in the basis definition.
• We evaluate the timing for all the competitors.
• We test the proposed method on estimated functional maps (while in [2] only ground truth maps were considered).
• We perform experiments on the TOSCA dataset.
• We consider different classes of shapes.

The rest of the paper is organized as follows. In Section 2 we summarize the existing methods for functional representation and signal processing on manifolds. Section 3 provides motivations for the proposed framework. The definition and implementation of the Binary Sparse Frame is given in Section 4. Experiments and evaluation of the method are collected in Section 5. Finally, Section 6 concludes the paper.

2. Related work

A key question in signal processing is how we should represent a signal. The desired representation must be precise, concise and must simplify the operations. One of the most widely adopted representations is the Fourier transform, proposed by Joseph Fourier [3] in 1807. The Fourier transform is defined as the linear projection onto the Fourier basis, a set of orthonormal functions that can be computed as the eigenvectors of the Laplace operator. Given a signal defined on a Euclidean domain, such as time, the Fourier transform represents the signal as a linear combination of the Fourier basis. In geometry processing, we analyze signals defined on non-Euclidean domains such as 2-dimensional surfaces embedded in 3D space. This kind of domain is modelled as a smooth, compact, connected Riemannian surface M (possibly with a boundary ∂M) embedded into R3. We refer the reader to do Carmo [4] for more details about Riemannian manifolds. To extend signal processing to a non-Euclidean domain, we should define several tools. The theory needed in these definitions is out of the scope of this paper; for this reason, we just briefly recap the main characters involved. The Laplace–Beltrami operator (LBO) is a positive semi-definite operator that generalizes the Laplace operator from Euclidean spaces to manifolds. Geometrically, the Laplace–Beltrami operator can be interpreted as the (normalized) difference between the average of a function on an infinitesimal sphere around a point and the value of the function at the point itself. By analogy with the Euclidean Fourier transform, the eigenfunctions of the LBO form an orthonormal basis for signals defined on surfaces [5,6]. The LBO has been involved in several applications, as shown in [7], giving rise to spectral approaches in computer graphics. The global point signatures [8] and the shape DNA [9] are directly based on the LBO. Diffusion processes defined on manifolds are also strictly related to the LBO and spectral geometry processing: the diffusion distances [10], the heat kernel signature [11,12] and the wave kernel signature [13].

In 2012, Ovsjanikov et al. [1] introduced the functional maps paradigm to represent correspondences between functional spaces instead of correspondences between points on the surfaces. Recently, many improvements have been proposed for the functional maps framework. In the case of shape collections, a possible strategy to improve the functional maps is consistency. A consistency constraint promotes maps for which the composition along a closed cycle of shapes is approximately the identity map. Several works based on this consistency constraint have been proposed in the last years, such as [14,15]. In [16] the authors show how to improve the functional map estimation by representing descriptors as linear operators acting on functions through point-wise multiplication. Nogneng et al. [17] show that a given functional map can provide more accurate correspondences by representing the functional spaces not only as linear combinations of the LBO eigenfunctions but also involving their (possibly high-order) point-wise products. The functional maps framework is strictly related to the LBO: the LBO eigenfunctions are indeed the basis adopted for the functional spaces. In [18] Aflalo et al. show that the choice of the LBO eigenfunctions is optimal for the representation of continuous functions with bounded variation. Usually, only a small subset of the LBO eigenfunctions is used as a truncated basis for the functional space (the subset of eigenfunctions with smaller eigenvalues). This choice constitutes the main limitation of the LBO eigenfunctions: the truncated set of eigenfunctions represents only the low frequencies and is not able to represent functions that vary quickly. Furthermore, the LBO eigenfunctions of two different shapes suffer from switches in sign and order, as investigated in [19]. To solve these issues, Kovnatsky et al. proposed the construction of compatible bases on collections of shapes using simultaneous diagonalization of Laplacians [20]. Similarly, in [21], the authors proposed to estimate the functional maps between two shapes in both directions, improving the quality of the maps with respect to the standard one-sided alternatives. Another drawback of the LBO eigenfunctions is that they are globally defined: using the LBO eigenfunctions it is not possible to localize the analysis on the shape. Tools for local spectral analysis on non-Euclidean manifolds have been proposed: the windowed Fourier transform (WFT) for graphs [22] and shapes [23], also in an anisotropic version [24]. These methods localize the analysis of signals in the spatial domain, but the LBO eigenfunctions are still adopted as a basis for the spectral representation. In [25] the authors proposed the compressed manifold modes, which provide a new, locally supported basis. As suggested in [26], the compressed modes are the solutions of an optimization problem that combines the Dirichlet energy with a sparsity-inducing regularization based on the L1-norm. Kovnatsky et al. [27] provided an alternative and more efficient computation of the compressed modes. The main limitation of this approach is that it does not permit selecting the region of localization of the modes. A recent approach proposed by Choukroun et al. [28] considers a Hamiltonian operator, obtained as a modification of the LBO, and its eigendecomposition as a local basis. In [29], a similar solution is proposed: the provided basis is simultaneously localized and orthogonal to a set of global LBO eigenfunctions.

As for classical signal processing, each of the proposed bases faces different issues and suits specific applications. The solutions proposed in [17,20,26,28,29] are all alternatives to the standard Fourier basis [6]. The key idea of this paper is to propose the Binary Sparse Frame (BSF), a new alternative for signal processing on manifolds. This frame is specifically defined for step functions, which, being discontinuous, cannot be precisely represented by the truncated Fourier basis. The BSF naturally provides a sparse representation. Sparse representation is a wide field already exploited in computer graphics [30–32].


Recently, in [2], the author proposed the Indicator Basis, a new basis for the sparse representation of step functions. To the best of our knowledge, [2] is the first sparse representation method proposed for signals defined on manifolds. In this paper, we deeply analyze the sparse representation provided by the basis proposed in [2]. The sparsity induced by the BSF improves the representation while preserving efficiency. Exploiting the functional maps framework, the BSF can transfer step functions between different shapes. Our experiments highlight that the proposed basis handles shape deformations and brings several advantages in various applications.


3. Overview

In this Section we outline the general idea of this paper, starting from the 1D Euclidean domain and then generalizing it to 2-dimensional surfaces.

3.1. 1D Euclidean domain

The Fourier basis in the 1D domain corresponds to the eigenfunctions of the Laplace operator, namely the harmonic functions. The Laplace operator coincides with the sum of second partial derivatives. This basis is optimal for the analysis and the synthesis of smooth functions with limited variations. In this paper, we focus on a specific class of functions: the step functions. A standard example is the Heaviside step function, which is 0 for negative values and 1 for positive ones. Oliver Heaviside defined the Heaviside step function at the beginning of the 20th century in the context of differential calculus. Heaviside and step functions have several applications, for instance in statistics and economics. For a wider overview of step functions and their applications we refer to Abramowitz [33] and Davies [34]. In practice, step functions correspond to the set of all possible finite linear combinations of indicator functions, where the definition of an indicator function $f_R$ is given by:

$$f_R : D \longrightarrow \mathbb{R}, \qquad f_R(x) = \begin{cases} 1, & \text{if } x \in R\\ 0, & \text{otherwise} \end{cases}$$  (1)

R ⊆ D is an interval in the domain D. Given a set of intervals $\{R_1, \dots, R_q\}$ and the corresponding indicator functions $\{f_{R_1}, \dots, f_{R_q}\}$, a step function s is any linear combination of them:

$$s : D \longrightarrow \mathbb{R}, \qquad s = \alpha_1 f_{R_1} + \dots + \alpha_q f_{R_q},$$  (2)

where $\alpha_1, \dots, \alpha_q \in \mathbb{R}$. For simplicity, we consider only connected intervals as support of s, since we can always assign an indicator function to each of the connected components of the support of s. The step functions are not smooth and therefore the Fourier basis is not an optimal choice to represent them. This motivates the definition of the Binary Sparse Frame (BSF) for the approximation and the transfer of indicator functions. Some examples of Fourier basis and BSF functions are shown in Fig. 1.

Fig. 1. Three Fourier basis functions (top row) and three BSF functions (bottom row) in 1D under Neumann boundary conditions.

3.2. 2-dimensional surfaces

A 2-dimensional surface embedded in R3 is modelled as a connected, compact, smooth 2-dimensional Riemannian manifold M ⊂ R3, possibly with a boundary ∂M. In the discrete setting, we represent M as a triangular mesh with a set of nM vertices V and edges E.
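To make the discrete setting concrete, the following minimal sketch (in Python, not the MATLAB code used for the paper) fixes the data structures assumed by the later snippets: a vertex array V and a triangle array F. The tiny two-triangle mesh is purely illustrative.

```python
import numpy as np

# Illustrative discrete representation of M: an (n_M x 3) array of vertex
# coordinates and an (m x 3) array of triangles indexing into it.
# The edge set E is implied by the triangles.
V = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])
F = np.array([[0, 1, 2],
              [1, 3, 2]], dtype=int)

n_M = V.shape[0]  # number of vertices
```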

The surface M can be equipped with the Laplace–Beltrami operator (LBO), the second-order differential operator that generalizes the Laplace operator to manifolds. Due to lack of space, we refer to do Carmo [4] for more details on the differential geometry involved in the definition of the LBO. In the discrete setting the LBO is represented as a matrix of dimension $n_M \times n_M$. This matrix is obtained as the product $\Delta_M = (A_M)^{-1} W_M$, where $A_M$ is the mass matrix, a diagonal matrix whose entries are the area elements associated with each vertex, and $W_M$ is the stiffness matrix, defined according to the local geometry of the surface. One possible computation of the stiffness matrix $W_M$ is based on the cotangent weights $w_{ij}$ defined as:

$$w_{ij} = \begin{cases} (\cot\alpha_{ij} + \cot\beta_{ij})/2 & ij \in E \setminus E_{\partial M};\\ (\cot\alpha_{ij})/2 & ij \in E_{\partial M};\\ -\sum_{k \neq i} w_{ik} & i = j;\\ 0 & \text{otherwise;} \end{cases}$$  (3)

where $\alpha_{ij}, \beta_{ij}$ denote the angles $i\hat{k}j$, $j\hat{h}i$ of the triangles sharing the edge $ij$, and $E_{\partial M}$ are the edges on the boundary $\partial M$. The inset figure clarifies the adopted notation. We refer to Pinkall and Polthier [35] for additional details. The LBO is a positive semi-definite operator $\Delta_M : \mathcal{F}(M,\mathbb{R}) \to \mathcal{F}(M,\mathbb{R})$, thus it admits an eigendecomposition $\Delta_M \phi_l = \lambda_l \phi_l$, where $\lambda_1 \leq \lambda_2 \leq \dots$ are real eigenvalues and $\phi_1, \phi_2, \dots$ are the corresponding eigenfunctions. These eigenfunctions correspond to the Fourier basis on manifolds. Even in this case, the Fourier basis is composed of smooth functions and thus it is optimal for the representation of smooth functions, as shown in [18]. A step function on M is again any finite linear combination of indicator functions. On M, we define an indicator function $f_R$ as done in Eq. (1), where M substitutes D. Given a set of regions $\{R_1, \dots, R_q\}$ contained in M and the corresponding indicator functions $\{f_{R_1}, \dots, f_{R_q}\}$, then $s = \alpha_1 f_{R_1} + \dots + \alpha_q f_{R_q}$ ($\alpha_1, \dots, \alpha_q \in \mathbb{R}$) is a step function on M. As in the 1D case, we only consider connected regions. Once again, the Fourier basis does not represent step functions particularly well. In this paper, we propose the BSF, which provides an efficient and concise representation of step functions. Fig. 2 shows some examples of Fourier basis and BSF functions on a 2-dimensional surface. We refer to the proposed set of functions as a frame to highlight that these functions may be linearly dependent, as can be seen in Fig. 2. The lack of linear independence could give rise to a loss of representation capacity. At the same time, we can use only a small number of BSF elements to represent a given step function.

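As a reference for the construction above, here is a minimal Python/SciPy sketch of the discrete LBO: the lumped mass matrix A_M, the cotangent stiffness matrix W_M of Eq. (3), and the first eigenpairs. It illustrates the standard cotangent scheme [35] under our own naming; it is not the implementation used for the experiments in this paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def cotangent_laplacian(V, F):
    """Lumped mass matrix A (vertex areas) and cotangent stiffness matrix W, Eq. (3)."""
    n = V.shape[0]
    I, J, S = [], [], []
    areas = np.zeros(n)
    for tri in F:
        for a in range(3):
            i, j, k = tri[a], tri[(a + 1) % 3], tri[(a + 2) % 3]
            # cotangent of the angle at k, opposite to edge (i, j);
            # boundary edges receive a single contribution, as in Eq. (3)
            u, v = V[i] - V[k], V[j] - V[k]
            cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            I += [i, j, i, j]
            J += [j, i, i, j]
            S += [cot / 2.0, cot / 2.0, -cot / 2.0, -cot / 2.0]
        # one third of the triangle area goes to each of its vertices
        area = 0.5 * np.linalg.norm(np.cross(V[tri[1]] - V[tri[0]], V[tri[2]] - V[tri[0]]))
        areas[tri] += area / 3.0
    return sp.diags(areas), sp.csr_matrix((S, (I, J)), shape=(n, n))

# first k_M eigenpairs of the LBO: with the sign convention of Eq. (3),
# solve the generalized problem  -W phi = lambda A phi
A_M, W_M = cotangent_laplacian(V, F)
k_M = 3  # must be smaller than the number of vertices
evals, Phi = eigsh(-W_M.tocsc(), k=k_M, M=A_M.tocsc(), sigma=-1e-6)
```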


Fig. 3. Three spectral Gaussians $\hat{g}^{\tau_1}, \hat{g}^{\tau_2}, \hat{g}^{\tau_3}$ for different values of τ.

Fig. 2. Examples of Fourier basis and BSF functions on the 2-dimensional surface of a dog. On the left, three Fourier basis functions. Each of the other columns contains three functions of the alternative frame constructions, namely geod, heat and BSF. More details on these alternatives will be given in Section 5. White corresponds to 0, while blue is −1 and red is 1. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article).

Thanks to this fact, the BSF allows us to exploit the sparse representation of step functions on manifolds. Potentially, if the BSF functions span the entire functional space they will represent perfectly all the functions defined on the manifold, as done by the full Fourier basis. This extreme case is not considered in this paper, which focuses on the sparse representation of step functions provided by the BSF.

4. Method

In this Section, we define the BSF and the associated pipeline for approximation and transfer of step functions on triangular meshes. We divide the Section into three parts. In the first one, we define the BSF on a single shape. The second part describes how to approximate a step function with the BSF. At the end of the Section, we focus on the transfer of the BSF representation between two given shapes.

4.1. BSF on a single shape

Let M be a shape with its mass matrix $A_M$ and its LBO eigendecomposition: $\Lambda_M = \{\lambda^M_1, \dots, \lambda^M_{k_M}\}$ the smallest $k_M$ eigenvalues, and $\Phi_M = \{\phi^M_1, \dots, \phi^M_{k_M}\}$ the corresponding eigenfunctions.

To compute the BSF on M we define the following pipeline.

Compute the subset of vertices. Given a manifold M, we consider the set of vertices V that defines the mesh on M. We select a subset $V_{BSF}$ of vertices of V that are uniformly distributed on the surface, guaranteeing to cover the entire shape. We adopt the fast marching farthest point sampling algorithm [36,37], based on the geodesic distances. In Section 5, we also provide a comparison of the stability of the fast marching farthest point sampling algorithm based on different distances. We denote the number of selected vertices as $n_{BSF} < n_M$. An example of the subset $V_{BSF}$ on a human shape is depicted on the left of Fig. 4. In this case, we select 5% of the vertices.

Spectral Gaussian. In order to build our BSF we need to define a set of functions on M. This definition should be general and specifically designed for the approximation and transfer of step functions. For these reasons, we build the proposed functions on a spectral definition of Gaussian functions. We define a Gaussian function through its coefficients in the Fourier (spectral) domain. To determine this spectral Gaussian, two parameters should be selected: the center and the spread of the spectral Gaussian. We fix $\lambda_1 = 0$ as the center and $\tau \in \mathbb{R}^+$ for the spread. For each frequency l the coefficient of the spectral Gaussian is defined as:

$$\hat{g}_l = e^{-\tau(\lambda_l - \lambda_1)},$$  (4)

varying l in the set of adopted frequencies: $l \in \{1, \dots, k_M\}$. Three examples corresponding to three different choices of τ are depicted in Fig. 3, where the x-axis represents the index of the frequencies and the value of the Gaussians is represented on the y-axis. The spatial image of this spectral Gaussian is a Gaussian. This spatial Gaussian depends on the choice of a point $x \in M$ that is the center of the Gaussian $g_x$, and can be defined as follows:

$$g_x = \sum_{l=1}^{k_M} (\hat{g}_l \odot \phi_l(x))\, \phi_l,$$  (5)

where $\odot$ is the element-wise product. The values $(\hat{g}_l \odot \phi_l(x))$, $\forall l \in \{1, \dots, k_M\}$, are the coefficients of $g_x$ in the Fourier basis $\Phi_M$. We compute a spatial Gaussian centered at each vertex in the set $V_{BSF} = \{p_1, \dots, p_{n_{BSF}}\}$. The width of the spectral Gaussian depends on the parameter τ: the larger τ is, the narrower the spectral Gaussian. We fix a set $T = \{\tau_1, \dots, \tau_t\}$ of t different values of τ. In Section 5, we evaluate the dependence of the BSF on this set of parameters. With these parameters, we obtain $q = t \times n_{BSF}$ spatial Gaussians, collected side by side in a matrix $G = [g^{\tau_1}_{p_1}, \dots, g^{\tau_t}_{p_1}, \dots, g^{\tau_1}_{p_{n_{BSF}}}, \dots, g^{\tau_t}_{p_{n_{BSF}}}]$.
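The spectral Gaussians of Eqs. (4) and (5) reduce to a few lines once the truncated eigenbasis is available. The sketch below assumes that Phi (an n × k_M matrix of eigenfunctions) and evals come from an eigendecomposition such as the one sketched in Section 3.2, and that centers contains the vertex indices selected by farthest point sampling; the helper names are ours, and the column ordering mirrors the matrix G.

```python
import numpy as np

def spectral_gaussian(Phi, evals, x, tau):
    """Spatial Gaussian g_x^tau of Eq. (5), with spectral coefficients from Eq. (4).

    Phi   : (n, k_M) LBO eigenfunctions, one per column.
    evals : (k_M,) eigenvalues sorted increasingly (evals[0] plays the role of lambda_1).
    x     : index of the center vertex.
    tau   : spread of the spectral Gaussian.
    """
    ghat = np.exp(-tau * (evals - evals[0]))   # Eq. (4)
    coeff = ghat * Phi[x, :]                   # Fourier coefficients of g_x^tau
    return Phi @ coeff                         # Eq. (5): values on all vertices

def gaussian_dictionary(Phi, evals, centers, taus):
    """Matrix G: for each sampled center, one column per value of tau."""
    cols = [spectral_gaussian(Phi, evals, p, tau) for p in centers for tau in taus]
    return np.stack(cols, axis=1)
```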

Our definition of the BSF functions is strictly related to the LBO eigenfunctions. This spectral definition is general because it does not depend on specific properties of the mesh. Furthermore, the LBO is completely intrinsic, being fully defined by the metric tensor. The space of functions spanned by the first eigenfunctions of the LBO is then stable under near-isometries, as shown in [38]. Many works from graph theory and geometry processing already exploited this spectral definition of the Gaussians. Shuman et al. in [22] proposed the spectral Gaussian definition as a part of the framework that defines the windowed Fourier transform on graphs. In [23], the spectral Gaussians define window functions on the manifold; these windows are a key component of the convolutional neural network proposed in [23]. The same definition is adopted in [24], where an anisotropic version of the windowed Fourier transform for manifolds is proposed. Some spatial Gaussians obtained from this definition, varying the value of τ and with two different centers, are shown in the middle of Fig. 4.

Fig. 4. An overview of the steps that define the Binary Sparse Frame. From left to right: the points selected on M, the spatial Gaussians defined for 3 different τ values (with two different centres on the two rows), the BSF functions obtained from the Gaussian functions. Near to each shape, we add a zoom on the region where the function is localized.

From Gaussian to indicator function. Finally, in order to obtain the indicator functions that compose the proposed BSF, we perform the following post-processing on each Gaussian function $g^\tau_x$:

Normalization: $g^\tau_x = |g^\tau_x| / \max(|g^\tau_x|)$.
Saturation: $b^\tau_x(p) = 1$ if $g^\tau_x(p) > \gamma$.
Remove noise: $b^\tau_x(p) = 0$ if $g^\tau_x(p) < \gamma$.

In our experiments, we compare different choices for the parameter γ. We refer to the last two steps as the binarization of the basis functions. In fact, after these steps, each of the spatial Gaussians becomes a binary indicator function. The BSF consists of all these indicator functions collected side by side in a matrix B. If necessary, we will refer to this matrix as $B_M$, making explicit the surface on which it is defined. In Algorithm 1, we summarize the pipeline for the construction of the Binary Sparse Frame.

Algorithm 1: Binary Sparse Frame computation on M.
Input: the mesh M and its Fourier basis $\Phi_M$; $n_{BSF} \in \mathbb{N}$; $T = \{\tau_1, \dots, \tau_t\}$ s.t. $\tau_j \in \mathbb{R}_{>0}$, $\forall j \in \{1, \dots, t\}$; $\Gamma = \{\gamma_1, \dots, \gamma_t\}$ s.t. $\gamma_j \in (0,1)$, $\forall j \in \{1, \dots, t\}$.
Output: $B = [b_1, \dots, b_q]$, the BSF on M.
  Select a subset of $n_{BSF}$ vertices $V_{BSF} \subset V$.
  for j = 1 : t do
    Compute $\hat{g}^{\tau_j}$, the spectral Gaussian.
    for i = 1 : $n_{BSF}$ do
      Compute $g^{\tau_j}_{p_i}$, the spatial Gaussian translated in $p_i$.
      Normalize $g^{\tau_j}_{p_i}$: $g^{\tau_j}_{p_i} = |g^{\tau_j}_{p_i}| / \max(|g^{\tau_j}_{p_i}|)$.
      Compute $b^{\tau_j}_{p_i}$ as the binarization of $g^{\tau_j}_{p_i}$ w.r.t. $\gamma_j$.
    end for
  end for
  return $B = [b_1 = b^{\tau_1}_{p_1}, \dots, b_q = b^{\tau_t}_{p_{n_{BSF}}}]$
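Algorithm 1 translates almost directly into code. The sketch below reuses spectral_gaussian from the previous snippet and applies the normalization and binarization steps; it is a simplified illustration (one γ per τ, the same thresholding as above), not the author's MATLAB implementation, and the free variables Phi, evals and centers are assumed to come from the earlier sketches.

```python
import numpy as np

def binary_sparse_frame(Phi, evals, centers, taus, gammas):
    """Assemble B following Algorithm 1: one binary indicator per (center, tau) pair."""
    cols = []
    for p in centers:                                   # sampled vertices V_BSF
        for tau, gamma in zip(taus, gammas):
            g = spectral_gaussian(Phi, evals, p, tau)   # spatial Gaussian, Eq. (5)
            g = np.abs(g) / np.max(np.abs(g))           # normalization
            cols.append((g > gamma).astype(float))      # saturation + noise removal
    return np.stack(cols, axis=1)                       # B = [b_1, ..., b_q]

# parameter choices reported in Section 5: tau in {0.001, 0.005, 0.01}, gamma = 3/4
B_M = binary_sparse_frame(Phi, evals, centers, [0.001, 0.005, 0.01], [0.75, 0.75, 0.75])
```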

4.2. Sparse approximation through B

Given a function f, we look for a linear combination of the columns of B that approximates f. More explicitly, we compute a set of coefficients $\alpha_f \in \mathbb{R}^q$ such that $\tilde{f} = B\alpha_f$ and $\tilde{f} \approx f$, where $\alpha_f$ are the coefficients of f w.r.t. B. A first definition of $\alpha_f$ is given by the least-squares solution of the following linear system:

$$B \alpha_f = f.$$  (6)

Since the columns of B are not linearly independent, the linear system (6) is usually underdetermined. Due to this lack of linear independence of the BSF, the definition of $\alpha_f$ is not unique, as infinitely many solutions are possible. The construction of the BSF suggests we look for solutions where the coefficients are sparse, and use this as a regularization prior. In fact, only a few functions of the BSF should be involved in the approximation of a single indicator function, as the BSF naturally provides a sparse representation of step functions. In Section 5 we will show that promoting sparsity of the representation improves the quality of the approximation and transfer. We propose two different alternatives to compute a sparse solution of Eq. (6). The optimization problem to compute the coefficients $\alpha_f$ is formulated as:

$$\alpha_f = \arg\min_{\alpha \in \mathbb{R}^q} \| f - B\alpha \|_F^2 + \|\alpha\|_0,$$  (7)

where $\|\cdot\|_F$ is the Frobenius norm and $\|\cdot\|_0$ is the $\ell_0$ pseudo-norm. The minimization of the $\ell_0$ pseudo-norm promotes the sparsity of the solution or, in other words, it promotes solutions with a larger number of zeros. Many zeros in $\alpha_f$ correspond to only a few functions of B involved in the approximation of the signal. To minimize the $\ell_0$ pseudo-norm problem in Eq. (7), we exploit two different formulations, proposed in [39]. The first one, already adopted in [2], aims to minimize the error-constrained sparse coding problem described by the following equation:

$$\alpha_f = \arg\min_{\alpha \in \mathbb{R}^q} \|\alpha\|_0 \quad \text{s.t.} \quad \| f - B\alpha \|_F^2 \leq \epsilon,$$  (8)

where $\epsilon \in \mathbb{R}$ is an upper bound on the approximation error. The second solution, not considered in [2], is to solve the sparsity-constrained sparse coding problem, which is equivalent to:

$$\alpha_f = \arg\min_{\alpha \in \mathbb{R}^q} \| f - B\alpha \|_F^2 \quad \text{s.t.} \quad \|\alpha\|_0 \leq S,$$  (9)

where $S \in \mathbb{N}$ is the maximum number of non-zero coefficients in α.
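Both Eq. (8) and Eq. (9) are solved in the paper with the Batch-OMP implementation of [39]. The following compact, unoptimized orthogonal matching pursuit loop illustrates the same greedy idea (select the most correlated column, re-fit on the support, repeat); it is a sketch for exposition only, not the reference solver, and it treats ε as a bound on the residual norm, matching the choice ε = ||f||/10 made in Section 5.

```python
import numpy as np

def omp(B, f, S=10, eps=None):
    """Greedy sparse coding: stop after S atoms (Eq. (9)) or as soon as the
    residual norm drops below eps (Eq. (8)), whichever comes first."""
    residual = f.astype(float).copy()
    support, alpha = [], np.zeros(B.shape[1])
    sol = np.zeros(0)
    for _ in range(S):
        if eps is not None and np.linalg.norm(residual) <= eps:
            break
        j = int(np.argmax(np.abs(B.T @ residual)))                # most correlated column of B
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(B[:, support], f, rcond=None)   # least squares on the support
        residual = f - B[:, support] @ sol
    alpha[support] = sol
    return alpha

# usage: alpha_f = omp(B_M, f, S=10); f_tilde = B_M @ alpha_f
```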

In Fig. 5 we show two examples of approximation in the 1D Euclidean case: on the left, the approximation of a binary ({0, 1}) step function; on the right, results on a similar step function, but with values different from 0 and 1. In comparison to the Fourier basis (in blue), the proposed BSF (in red) improves the approximation of the original step function (in black). An example of the approximation of a function between two 2-dimensional surfaces using the BSF is depicted in the first row of Fig. 6, where we compare the BSF with the standard approach based on the LBO eigenbasis.

4.3. Sparse transfer through B

We consider now two shapes M and N with their LBO eigenbases $\Phi_M = \{\phi^M_0, \dots, \phi^M_{k_M-1}\}$ and $\Phi_N = \{\phi^N_0, \dots, \phi^N_{k_N-1}\}$, truncated to dimensions $k_M$ and $k_N \in \mathbb{N}$ respectively. The transfer of functions between two different domains is one of the main applications in signal processing. The transfer of step functions is the main application that we address with our BSF B.



Fig. 5. We compare the approximation of two 1D step functions (in black) using the standard Fourier basis (in blue) and the proposed frame (in red). On the left, we consider a function with values in {0, 1}. On the right, a similar step function with non-integer values. In the legend, we report the relative errors. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article).

Functional maps. To transfer functions between shapes, we make use of the functional maps framework [1]. We give a short introduction to the original framework, and we refer to [1] for further details. Let $\pi : N \to M$ be a pointwise map between N and M, and let $\mathcal{F}(M,\mathbb{R})$ and $\mathcal{F}(N,\mathbb{R})$ be the sets of real-valued functions defined respectively on M and N. The functional maps framework [1] considers the linear operator $T : \mathcal{F}(M,\mathbb{R}) \to \mathcal{F}(N,\mathbb{R})$ that maps functions from M to N, defined as $T(f) = f \circ \pi$. Through the LBO eigenbases, the operator T, namely the functional map, admits a matrix representation $C = (c_{ij})$, the entries of which are given by:

$$T(f) = T\Big(\sum_i \langle f, \phi^M_i \rangle_M\, \phi^M_i\Big) = \sum_{ij} \langle f, \phi^M_i \rangle_M \underbrace{\langle T\phi^M_i, \phi^N_j \rangle_N}_{c_{ji}}\, \phi^N_j.$$  (10)

According to Ovsjanikov et al. [1], we use only a truncated subset of $\Phi_M$ and $\Phi_N$, thus we also truncate the series in Eq. (10). This choice reduces the functional map to a matrix $C \in \mathbb{R}^{k_N \times k_M}$. This matrix C is usually small and can be efficiently estimated through the several methods listed in Section 2.

Gaussian transfer. A Gaussian function $g^\tau_x$ on M is defined through its spectral coefficients $\{\hat{g}^\tau_l \odot \phi_l(x)\}_{l=1}^{k_M}$. These coefficients are the Fourier representation of $g^\tau_x$, and can be transferred by the functional map C. Indeed, thanks to C we have the spectral coefficients for the transfer of $g^\tau_x$ as:

$$C\, [\hat{g}^\tau_1 \odot \phi_1(x), \dots, \hat{g}^\tau_{k_M} \odot \phi_{k_M}(x)]^\top.$$  (11)

Thus for every τ and every $x \in M$ we compute the image of $g^\tau_x$ on N as:

$$h^\tau_x = \Phi_N\, C\, [\hat{g}^\tau_1 \odot \phi_1(x), \dots, \hat{g}^\tau_{k_M} \odot \phi_{k_M}(x)]^\top,$$  (12)

where the multiplication by $\Phi_N$ corresponds to the inverse Fourier transform. With a slight abuse of notation, we denote the Gaussian by $h^\tau_x$, while it should be $h^\tau_y$, where $y = \pi^{-1}(x) \in N$ is the center of the Gaussian defined on N; however, we maintain the notation $h^\tau_x$ to make explicit that this is the Gaussian which corresponds to $g^\tau_x$. Varying the value of $\tau \in T$ and the point $x \in V_{BSF}$ we obtain $H = [h^{\tau_1}_{p_1}, \dots, h^{\tau_t}_{p_1}, \dots, h^{\tau_1}_{p_{n_{BSF}}}, \dots, h^{\tau_t}_{p_{n_{BSF}}}]$, the image of G, the set of spatial Gaussians defined on M. With this construction, we have an ordered correspondence between each column of G and each column of H. Applying to H the same binarization step performed on G, we obtain the BSF $B_N$ on N. Given $B_M$ and $B_N$ we can transfer a function $f \in \mathcal{F}(M,\mathbb{R})$ to N. If $\alpha_f$ denotes the coefficients that represent f in the BSF $B_M$, then g, the transfer of f, can be computed as:

$$g = B_N \alpha_f.$$  (13)

This computation is justified by the ordered correspondence between the columns of $B_M$ and $B_N$, and holds for all the $\alpha_f$ defined in one of the Eqs. (6), (7), (8) or (9). Note that $B_M$ and the functional map C completely determine $B_N$: we perform the sampling of the subset of points and the spectral definition of the Gaussians only on M, and the correspondence between the columns of $B_M$ and $B_N$ comes directly from C. Thanks to its formulation, the BSF is specifically built to exploit the transfer of functions with the functional map C. In Fig. 6, we depict an example of a transfer between two different shapes.

Summing up. The proposed pipeline can be summarized as follows:

1. Fix a subset $V_{BSF} = \{p_1, \dots, p_{n_{BSF}}\}$ of points on M covering the shape homogeneously.
2. Define the Gaussian functions centred at each of the points in $V_{BSF}$ for different widths.
3. Compute the functional map C between M and N.
4. Map the Gaussians to N through C.
5. Binarize the Gaussian functions on M and N.
6. Compute the sparse coefficients to approximate and transfer functions between M and N.
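The sketch below summarizes steps 4 to 6 of the pipeline under the same assumptions as the previous snippets: a functional map C of size k_N × k_M, the truncated eigenbases Phi_M and Phi_N, and the omp routine above. It is illustrative only; in particular, it recomputes the spectral coefficients of each Gaussian instead of reusing the matrix G.

```python
import numpy as np

def transfer_frame(C, Phi_M, Phi_N, evals_M, centers, taus, gammas):
    """Image B_N of the frame on N: map each Gaussian through C (Eqs. (11)-(12)),
    then apply the same binarization used on M."""
    cols = []
    for p in centers:
        for tau, gamma in zip(taus, gammas):
            ghat = np.exp(-tau * (evals_M - evals_M[0]))   # Eq. (4) on M
            coeff_M = ghat * Phi_M[p, :]                   # Fourier coefficients of g_p^tau
            h = Phi_N @ (C @ coeff_M)                      # Eq. (12): image on N
            h = np.abs(h) / np.max(np.abs(h))
            cols.append((h > gamma).astype(float))
    return np.stack(cols, axis=1)

# transfer of a step function f from M to N, Eq. (13)
B_N = transfer_frame(C, Phi_M, Phi_N, evals_M, centers, taus, gammas)
alpha_f = omp(B_M, f, S=10)   # sparse coefficients computed on M
g = B_N @ alpha_f             # transferred function on N
```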

Fig. 6. A comparison in the approximation and transfer of a step function between standard LBO eigenfunctions (std) and the BSF (ours). From left to right: the original function on the source shape (top) and its ground truth transfer on the target shape (bottom). Approximation (top) and Transfer (bottom) using the standard Fourier basis, and using the proposed BSF. The functions of the BSF involved in the approximation (top) and in the transfer (bottom). The colormap represents the values of the step function from 0 to max. The value 0 is white to highlight the errors.


5. Experiments

In this section, we collect all the experiments performed to evaluate the proposed method.

5.1. Implementation details

We run all the experiments in MATLAB 2017b. We select the uniformly distributed subset of points on the surfaces through the fast marching farthest point sampling algorithm [36,37], and we use the Dijkstra algorithm [40] to approximate the geodesic distances. To estimate the functional maps, we adopt the pipeline proposed in [16]. The sparse optimization is performed according to [39]. All of this code is available online. All the experiments were computed on a machine with 32GB of RAM and an Intel 3.6 GHz Core i7.

Selection of the parameters. The BSF definition depends on several parameters. In this paragraph, we list the parameters and explicitly refer to the choices mainly adopted in our experiments.
$n_{BSF}$: we adopt 5% of the number of vertices of M.
$T = \{\tau_1, \dots, \tau_t\}$: we select t = 3 different values of τ, $T = \{\tau_1 = 0.001, \tau_2 = 0.005, \tau_3 = 0.01\}$.
$\Gamma = \{\gamma_1, \dots, \gamma_t\}$: we use γ = 3/4 for all the values of τ.
We should also select the parameters ε and S involved in Eqs. (8) and (9). In our experiments we use $\epsilon = \frac{1}{10}\|f\|_F$ and S = 10. If not specified, we adopt this set of parameters. We briefly investigate the dependency on these parameters in Fig. 10. We also compare two different selections of parameters in Table 5. We leave a complete analysis as future work.

5.2. Datasets

The FAUST dataset [41] contains 100 human shapes. All these shapes come from real scans of 10 different people acquired in 10 fixed poses. The obtained 100 shapes are registered to a common template, a triangular mesh with 6890 vertices and 13,776 triangles. This provides a complete ground-truth point-to-point correspondence between them. FAUST is one of the more recent and challenging datasets, thanks to the non-isometric deformations generated by the different poses and subjects. TOSCA [42] is composed of synthetic models represented as triangular meshes. The TOSCA dataset comprises different shape classes (we use centaur, cat, dog and wolf). Each of these classes contains different pose deformations. All the elements in the same class are near-isometric. The number of vertices of these shapes is different for each class and goes from around 15K to around 30K. A ground-truth point-wise correspondence between the shapes in the same class is available.

5.3. Competitors and alternatives

In the following experiments, we compare the proposed BSF with existing methods. We also add a comparison with two alternative constructions of the BSF.
Standard (std). The classical approach proposed in [43]. The first eigenfunctions of the LBO are the basis of this method.
Standard double size (std2). The same as the previous one, but with double the number of eigenfunctions.


Point-to-point mapping (p2p). Given a point-to-point map, we transfer a function from M to N by assigning to each vertex of N the value of the function at the corresponding vertex on M. The point-to-point map is the conversion of the functional map proposed in [43].
Products (prod). The method proposed in [17]. It exploits products between pairs of eigenfunctions, and not only linear combinations of them, providing a second-order polynomial representation of the functional space.
Geodesic frame (geod). The first of the two alternatives proposed in [2]. For each point in $V_{BSF}$ we collect a set of indicator functions obtained from 3 different sizes of the geodesic neighbourhood. We perform the transfer on the target shape using the functional map on the Fourier representation of these geodesic indicators. The analysis and synthesis of a given function are done using the approaches that we proposed for the BSF. The three geod functions (corresponding to 3 different levels of locality around the same point) are shown in the second column of Fig. 2.
Heat indicators (heat). The heat diffusion on the surface provides the second alternative. We take a saturated version of the heat kernel around each of the points in $V_{BSF}$ for 3 different time-scales as indicator functions. The obtained frame is then used as done for the geodesic indicators. Three heat functions (corresponding to 3 different time-scales around the same point) are shown in the third column of Fig. 2.

Fair comparisons. We emphasize that we add std2 and p2p in order to provide a more extensive comparison of the proposed method. They are not direct competitors of our method for the following reasons. std2 involves a larger set of eigenfunctions and the estimation of a new functional map; thus it may be considered as a different choice of parameters for the std method. For a fair comparison, we should compute our method and prod with these new parameters. p2p is not a functional representation, and cannot be seen as a standard tool for signal processing. Indeed, with p2p it is not possible to perform analysis and synthesis of the involved signals, which is desirable for many standard signal processing applications such as compression and modification of a signal. All the other competitors and alternatives can be directly compared with our results because all of them are functional representations and they all adopt the same number of eigenfunctions and the same functional map, without requiring any additional functional map estimation.

5.4. Different computations of the coefficients

In this paper, we propose three different estimations of the BSF coefficients. These three are also applied to geod and heat. In the following, we refer to the results obtained from coefficients computed through Eq. (6) as geod_l, heat_l and ours_l. We refer to the ones obtained from the error-constrained sparse coding problem in Eq. (8) as geod_e, heat_e and ours_e. Finally, we refer to the ones obtained from the sparsity-constrained sparse coding problem in Eq. (9) as geod_s, heat_s and ours_s.

5.5. Evaluation details

To give a quantitative evaluation of the results, we compute the relative errors for the approximation and the transfer, denoted respectively by $E_{app}$ and $E_{tran}$. These relative errors are defined as:

$$E_{app} = \sqrt{\frac{\int_M (f(x) - \tilde{f}(x))^2\, d\mu(M)}{\int_M f(x)^2\, d\mu(M)}},$$  (14)

$$E_{tran} = \sqrt{\frac{\int_N (f_{tran}(y) - \tilde{f}_{tran}(y))^2\, d\mu(N)}{\int_N f_{tran}(y)^2\, d\mu(N)}},$$  (15)

where $f_{tran}$ is the ground-truth transfer of f from M to N, computed through the point-to-point correspondence provided by the dataset involved in the evaluation, and $\tilde{f}$ and $\tilde{f}_{tran}$ are respectively the approximation and the transfer of the function f. $d\mu(M)$ and $d\mu(N)$ are the infinitesimal area elements on M and N respectively. With respect to Melzi [2], we adopt a different discretization of the area elements that better approximates the continuous integral. For this reason, some of the values in the experiments are slightly different from those in [2].

Table 1. Comparison in the approximation of indicator functions. The relative errors are computed according to Eq. (14). These are average results on 100 randomly generated indicator functions on 10 pairs from FAUST. We also report ± the standard deviation on the right of each of these results. Each column in the Table corresponds to a different dimension of the basis used. For each dimension, we highlight the best result in red, the second one in green and the third one in blue.

Table 2. Comparison in the transfer of indicator functions. The relative errors are computed as in Eq. (15). These are average results on the same 100 functions on the same 10 pairs from the FAUST dataset used in Table 1. The functional maps used for the transfer are ground-truth functional maps. Each column in the Table corresponds to a different size of the map C. For each size, we highlight the best result in red, the second one in green and the third one in blue.
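The paper does not spell out the discretization of Eqs. (14) and (15) beyond the area-element remark above; the following sketch shows one natural choice, with the vertex areas taken from the diagonal of the mass matrix computed earlier. The names are ours and the weighting is an assumption.

```python
import numpy as np

def relative_error(f_ref, f_est, vertex_areas):
    """Area-weighted relative L2 error, a discretization of Eqs. (14)-(15)."""
    num = np.sum(vertex_areas * (f_ref - f_est) ** 2)
    den = np.sum(vertex_areas * f_ref ** 2)
    return np.sqrt(num / den)

# E_app  = relative_error(f, B_M @ alpha_f, A_M.diagonal())
# E_tran = relative_error(f_tran, B_N @ alpha_f, A_N.diagonal())
```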

5.6. Results

In the following experiments, the dimensions of the functional map are equal to the number of basis functions used for the std method. For all the methods the functional map is the same, with its extension for the method prod as explained in [17]. std2 is the only exception: the dimensions of the basis for std2 are double those of std, and thus the map also has dimensions equal to two times the original ones. The BSF and heat depend on the dimensions of the basis; in fact, the definitions of these frames rely directly on the basis. All the transfer results depend on these dimensions, as they are the dimensions of the map exploited for the transfer.

Approximation on FAUST. In the first experiment, we compare the approximation errors varying the number of basis functions adopted. The results in Table 1 are the average of 100 random indicator functions on 10 random pairs from the FAUST dataset. For each of these errors, we also report the standard deviation. Being suitable only for transferring, p2p is not present in this Table. The geod approximation, being defined extrinsically on the surfaces and not in the spectral domain, does not depend on the number of eigenfunctions. On the other side, the heat method is highly dependent on the number of eigenfunctions, as it corresponds to the quality of the spectral representation of the heat diffusion. For small dimensions of the functional maps (10 and 30) geod is the best method. Once more basis functions are used, ours overcomes all the others. ours_s performs slightly worse than ours_l and ours_e, as they directly minimize the error in the approximation of the signal. The same is true for geod_s. geod and ours outperform the state-of-the-art competitors in the approximation. For step functions, we should thus prefer the BSF methods.

Transfer on FAUST. In Table 2, we report the transfer results for the FAUST dataset. The p2p (which is not a functional representation method) outperforms all the other methods, except for the dimension 10 × 10. These results show the limits of heat and geod. In all the cases our performance is comparable to or better than that of all the other methods. Excluding p2p and std2, we provide the best results for dimensions 30, 50, 70 and 90.

Dependency on the number of eigenfunctions on the target shape. In Fig. 7, we plot the curves produced by the transfer error variation obtained by increasing the number of eigenfunctions on the target shape. On the y-axis, we report the average relative error computed on 100 randomly generated indicator functions on 10 random pairs from the FAUST dataset. On the x-axis the increasing value of $k_N$ is shown. The last value at the end of the x-axis corresponds to the approximation error. We perform this experiment for all the competitors with three different dimensions of the source basis, $k_M$ = 10, 30 and 50, depicted respectively on the left, in the middle and on the right of Fig. 7. We outperform p2p only for $k_M$ = 10, while p2p achieves the best results for every value of $k_N$ when larger $k_M$ are considered. The other methods are all far from the p2p results. When the number of eigenfunctions on the target shape increases our method improves, especially ours_e. If we use a larger number of eigenfunctions on the target shape, our method can reach the approximation error, shown as the last value of each curve in Fig. 7. These results show that our frame is the best one in the transfer application with respect to the other functional representation methods. The strong transfer results obtained by p2p are limited by the fact that p2p does not constitute a functional representation. In Fig. 8, we show two qualitative examples for the approximation and the transfer of an indicator function on the FAUST dataset. We compare all the methods on two different pairs and for two different indicator functions. For the Binary Sparse Frame, we consider only the coefficients obtained from Eq. (9). The white colour corresponds to the value 0, while red is the value of the considered segment. The rest of the colormap gradually denotes the level of the error. In these experiments, the ground-truth functional maps are encoded by matrices of dimension 100 × 50. The BSF approximates and transfers the indicator function well in all the pairs considered.

Efficiency and timing. In Table 3, we compare the efficiency in the computation of the frame for the three alternatives, namely geod, heat and ours. ours is definitely faster than the competitors. The efficiency and the quantitative comparison of the transfer error confirm that we should definitely prefer ours to both geod and heat. In Table 4, we compare the time for transferring one indicator function between two FAUST shapes. In this experiment, we adopt the same set of parameters, but we estimate the functional maps using the approach proposed in [16].



Fig. 7. Comparison of the transfer, increasing the number of eigenfunctions on the Target shape. The comparison is performed from left to right respectively with kM = 10, 30 and 50. All the functional maps involved are ground-truth maps. The relative error is computed as in Eq. (15). These are average results on 100 randomly generated indicator functions and on 10 random pairs from FAUST. The last value on the x-axis corresponds to the approximation error.

Fig. 8. Two qualitative comparisons on the FAUST dataset. For each indicator function, we show the approximation in the first row and the transfer in the second one. We report the original function and its ground-truth transfer on the left of the figure. On the right, we collect the results for all the competitors. The p2p method does not provide results in the approximation.



Table 3. Comparison of the timing of the three different definitions of the Binary Sparse Frame: geod, heat and ours. The times (in seconds) are averages over 10 pairs from FAUST. We highlight the best result in red, the second one in green and the third one in blue.

Table 4. Time comparison for the transfer of a step function. The times (in seconds) are averages over 100 pairs from FAUST. We highlight the best result in red, the second one in green and the third one in blue.

Table 5. Quantitative comparison in the transfer of indicator functions between two shapes from TOSCA. The transfer errors are computed as in Eq. (15). These results are the average on 20 randomly generated indicator functions on 2 random pairs. The functional maps used for the transfer are estimated according to Nogneng and Ovsjanikov [16]. The two columns on the left are results on the class dog; the three on the right are the ones on the class centaur. The last column on the right contains the average time for the transfer for the two map sizes considered. For each column, we highlight the best result in red, the second one in green and the third one in blue.

Fig. 9. Qualitative comparison on different classes from TOSCA (1 wolf pair, 2 cat pairs and 1 dog pair). On the left of the first row of each pair, we depict in red the indicator function denoted as f. The error is encoded by the colormap. White represents small error while large error is encoded by dark colours. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article).

As probe functions, we adopt the WKS descriptor [13] and one landmark. std is definitely the fastest method. Excluding std, ours_s is more efficient than the other methods. This evaluation also clarifies that the sparse optimization defined by Eq. (9) is more efficient than the others. This gap is amplified if we consider shapes with more vertices; we refer to the last column of Table 5, where we compare the efficiency of the transfer on the centaur class from TOSCA (around 16K vertices). For this reason, we select ours_s for the experiments on the TOSCA dataset. 90 × 90 is one of the largest dimensionalities adopted in the functional maps framework. With 90 eigenfunctions the efficiency and scalability of ours_s is completely comparable to that of std and p2p. Furthermore, as shown on the TOSCA dataset, ours_s is also efficient in the case of a large number of vertices. We additionally performed a preliminary experiment on the Michael class from TOSCA (more than 50K vertices) and the timing of ours_s remains comparable to std and p2p.

Transfer on TOSCA. We perform a quantitative and qualitative evaluation on two classes from the TOSCA dataset. In Table 5, we collect the quantitative results on the dog and centaur classes. In the last column on the right, we report the average time for the transfer on the centaur class with the two map sizes considered. We perform these experiments with two different dimensionalities of the functional map: 10 × 10 and 30 × 30. We estimate these functional maps with the method proposed in [16]. As probe functions, we adopt the WKS descriptor [13] and one landmark.



Fig. 10. Analysis of the dependence of the proposed BSF on its parameters. The results are the average on 20 indicator functions selected on two pairs of shapes from the centaur class of the TOSCA dataset. The functional maps are all estimated with size 10 × 10. In the top row, we collect the results in approximation, while the bottom row contains the transfer ones. From left to right we evaluate: the dependence on the cardinality of the subset $V_{BSF}$ of vertices, the distance adopted in the farthest point sampling selection, the value of the parameter γ involved in the binarization step, and the values of ε and S involved in two of the sparse optimization alternatives.

Differently from Melzi [2], we evaluate the dependency of the proposed method on its parameters. The methods denoted as ours~_l, ours~_s and ours~_e (the tilde marks the alternative parameter set in Table 5) are obtained with different values of τ and γ: T = {0.00001, 0.0001, 0.001, 0.01} and Γ = {0.96, 0.86, 0.76, 0.66}. As can be seen, the new parameters improve the transfer results. Excluding the p2p method, which is not a functional representation, the method ours~_s provides the best performance in this experiment and it is also efficient. In Fig. 9, we depict a qualitative evaluation of these transfer results. For each pair, we show the original indicator function f and its approximation error for std and ours_s in the first row. In the second row, we show the transfer error for p2p, std and ours_s. The errors are encoded by the colormap: white corresponds to 0 error, while red and black colours are the larger errors. We show results on different classes (wolf, cat and dog) and for different indicator functions. In the last example on the dog class, we can see how, in the case of noise in the functional map, p2p produces a larger error than ours_s.

Dependency on the parameters. Finally, we propose an analysis of the dependence of the proposed method on the parameters that are involved in its definition. On the centaur class from the TOSCA dataset we compare the approximation and the transfer of ours_l, ours_e and ours_s with the standard approach (std). In the transfer, we add a comparison with p2p to provide a more competitive comparison for the achieved results. The results are the average on 20 indicator functions defined on two pairs of shapes from the centaur class of the TOSCA dataset. As in the previous experiments, we estimate the functional maps of size 10 × 10 through the method proposed in [16]. In the top row of Fig. 10, we collect the results in approximation; the bottom row contains the transfer results. From left to right, we analyze different parameters, varying their values in a neighbourhood of the value selected in the previous experiments. On the left, we analyze the dependence on the cardinality $n_{BSF}$ of the subset $V_{BSF}$ of vertices of M. We vary this value from 5% to 15% of the total number of vertices $n_M$. Increasing the value of $n_{BSF}$, all our methods improve the approximation results. In the transfer, ours_s is the only one that improves its performance with respect to this parameter; at least for ours_l and ours_e, it is not true that a larger value of $n_{BSF}$ provides a better transfer. In the middle-left, we consider different distances adopted in the farthest point sampling selection: the Euclidean distance (Euc), the Dijkstra approximation of the geodesic distance (Dij) and the biharmonic distance (Bih). Again, ours_s is more stable than the others. All the methods perform similarly with the Euclidean distance or the geodesic approximation; the Euclidean choice is preferable for its efficiency. In the middle-right, we compare three different choices of the parameter γ, which defines the binarization step in the definition of the BSF functions. The value 0.75, adopted in the main part of our experiments, gives the best transfer results for all our three methods. Finally, on the right, we consider different values of the parameters ε and S involved respectively in Eqs. (8) and (9). Only ours_e depends on ε and only ours_s depends on S; for this reason, we analyze them together. S must be an integer; we consider S = 5, 10 and 15. Increasing the value of S, the approximation and the transfer provided by ours_s are slightly improved. ε must be selected according to the norm of the analyzed function; on the x-axis, we report the value that we multiply with the norm of the function. We test 0.005, 0.1 and 0.15. It is interesting to note that for ours_e the best approximation corresponds to the worst transfer. As can be seen, our approximations are more accurate than the standard representation. ours_s is more stable for all the parameters considered. Furthermore, ours_s is the method for which the approximation and the transfer results are the most similar. These experiments do not provide a complete analysis of the parameters involved; with these results, we clarify that the parameters selected in our experiments are reasonable. We leave a deeper analysis of these choices as future work.



6. Conclusion

In this work, we propose the Binary Sparse Frame (BSF), a new tool for the sparse representation of step functions defined on different domains. This tool relies on the construction of a set of functions for which we provide a spectral formulation that makes their computation very efficient. The usefulness and the strength of the proposed frame are evaluated and compared with the state-of-the-art. The BSF is specifically designed to transfer step functions, and the performance in the transfer confirms this claim. The quality of the obtained results and the efficiency of its computation on different datasets make the BSF a desirable alternative for the analysis of step functions.

Limitations and future work. The applicability of the proposed method is limited to the set of step functions. The BSF is indeed specifically defined for the approximation and the transfer of any step function; it does not directly generalize to other functions. The proposed frame is highly dependent on its parameters. In this paper, we do not extensively analyze this dependency, and we leave a complete investigation as future work. Possible applications in signal processing tasks, such as localization of signals, can also be explored in future work. In this paper, we only start to exploit the sparse representation of signals provided by the proposed frame. For the sparse optimization, we completely rely on the implementation proposed in [39]. As a future direction, we would like to investigate a task-specific framework for sparse optimization on manifolds. Furthermore, we believe that the sparsity of the matrix B could give rise to future extensions of the proposed framework. For instance, we would like to investigate the scalability of the method to data with a larger number of vertices.

Acknowledgments

We thankfully acknowledge Emanuele Rodolà, Diego Carrera, Giacomo Boracchi, Riccardo Marin and Umberto Castellani for their valuable comments and support, and the University of Verona that funded my research.

References

[1] Ovsjanikov M, Ben-Chen M, Solomon J, Butscher A, Guibas L. Functional maps: a flexible representation of maps between shapes. ACM Trans Gr 2012a;31(4):30:1–30:11.
[2] Melzi S. Indicators basis for functional shape analysis. In: Livesu M, Pintore G, Signoroni A, editors. Proceedings of the smart tools and apps for graphics – Eurographics Italian chapter conference. The Eurographics Association; 2018. p. 75–85.
[3] Fourier J. Mémoire sur la propagation de la chaleur dans les corps solides [Memoir on the propagation of heat in solid bodies]. Nouv Bull Sci Soc Philom Paris 1807;6:215–21.
[4] do Carmo MP. Riemannian geometry. Boston: Birkhäuser; 1992.
[5] Taubin G. A signal processing approach to fair surface design. In: Proceedings of the CGIT. New York, NY: ACM; 1995. p. 351–8.
[6] Vallet B, Lévy B. Spectral geometry processing with manifold harmonics. Comput Gr Forum 2008;27(2):251–60.
[7] Lévy B. Laplace–Beltrami eigenfunctions towards an algorithm that understands geometry. In: Proceedings of the SMI. Washington, DC: IEEE; 2006. p. 13–25.
[8] Rustamov RM. Laplace–Beltrami eigenfunctions for deformation invariant shape representation. In: Proceedings of the SGP. Aire-la-Ville, Switzerland: Eurographics Association; 2007. p. 225–33.
[9] Reuter M, Wolter F-E, Peinecke N. Laplace–Beltrami spectra as 'Shape-DNA' of surfaces and solids. Comput-Aided Des 2006;38(4):342–66.
[10] Coifman RR, Lafon S. Diffusion maps. Appl Comput Harm Anal 2006;21(1):5–30.

[11] Sun J, Ovsjanikov M, Guibas LJ. A concise and provably informative multi-scale signature based on heat diffusion. Comput Gr Forum 2009;28(5):1383–92.
[12] Gebal K, Bærentzen JA, Aanæs H, Larsen R. Shape analysis using the auto diffusion function. Comput Gr Forum 2009;28(5):1405–13.
[13] Aubry M, Schlickewei U, Cremers D. The wave kernel signature: a quantum mechanical approach to shape analysis. In: Proceedings of the ICCV workshops. IEEE; 2011. p. 1626–33.
[14] Huang Q, Wang F, Guibas L. Functional map networks for analyzing and exploring large shape collections. ACM Trans Gr 2014;33(4):36:1–36:11.
[15] Ganapathi-Subramanian V, Diamanti O, Guibas LJ. Modular latent spaces for shape correspondences. Comput Gr Forum 2018;37(5):199–210.
[16] Nogneng D, Ovsjanikov M. Informative descriptor preservation via commutativity for shape matching. Comput Gr Forum 2017;36(2):259–67.
[17] Nogneng D, Melzi S, Rodolà E, Castellani U, Bronstein MM, Ovsjanikov M. Improved functional mappings via product preservation. Comput Gr Forum 2018;37(2):179–90.
[18] Aflalo Y, Brezis H, Kimmel R. On the optimality of shape and data representation in the spectral domain. SIAM J Imaging Sci 2015;8(2):1141–60.
[19] Shtern A, Kimmel R. Matching the LBO eigenspace of non-rigid shapes via high order statistics. Axioms 2014;3(3):300–19.
[20] Kovnatsky A, Bronstein MM, Bronstein AM, Glashoff K, Kimmel R. Coupled quasi-harmonic bases. Comput Gr Forum 2013;32(2pt4):439–48.
[21] Eynard D, Rodolà E, Glashoff K, Bronstein MM. Coupled functional maps. In: Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV); 2016. p. 399–407.
[22] Shuman DI, Ricaud B, Vandergheynst P. A windowed graph Fourier transform. In: Proceedings of the 2012 IEEE Statistical Signal Processing Workshop (SSP); 2012. p. 133–6.
[23] Boscaini D, Masci J, Melzi S, Bronstein MM, Castellani U, Vandergheynst P. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. Comput Gr Forum 2015;34(5):13–23.
[24] Melzi S, Rodolà E, Castellani U, Bronstein MM. Shape analysis with anisotropic windowed Fourier transform. In: Proceedings of the 3DV. Stanford, CA: IEEE; 2016. p. 470–8.
[25] Neumann T, Varanasi K, Theobalt C, Magnor M, Wacker M. Compressed manifold modes for mesh processing. Comput Gr Forum 2014;33(5):35–44.
[26] Ozoliņš V, Lai R, Caflisch R, Osher S. Compressed modes for variational problems in mathematics and physics. Proc Natl Acad Sci 2013;110(46):18368–73.
[27] Kovnatsky A, Glashoff K, Bronstein MM. MADMM: a generic algorithm for non-smooth optimization on manifolds. In: Proceedings of the ECCV. Cham, Switzerland: Springer International Publishing; 2016. p. 680–96.
[28] Choukroun Y, Shtern A, Bronstein A, Kimmel R. Hamiltonian operator for spectral shape analysis. arXiv:1611.01990; 2017.
[29] Melzi S, Rodolà E, Castellani U, Bronstein MM. Localized manifold harmonics for spectral shape analysis. Comput Gr Forum 2018;37(6):20–34.
[30] Litman R, Bronstein A, Bronstein M, Castellani U. Supervised learning of bag-of-features shape descriptors using sparse coding. In: Proceedings of the Symposium on Geometry Processing. Aire-la-Ville, Switzerland: Eurographics Association; 2014. p. 127–36.
[31] Wang R, Liu L, Yang Z, Wang K, Shan W, Deng J, et al. Construction of manifolds via compatible sparse representations. ACM Trans Gr 2016;35(2):14:1–14:10.
[32] Digne J, Valette S, Chaine R. Sparse geometric representation through local shape probing. IEEE Trans Vis Comput Gr 2018;24(7):2238–50.
[33] Abramowitz M. Handbook of mathematical functions, with formulas, graphs, and mathematical tables. New York, NY, USA: Dover Publications, Inc.; 1974. ISBN 0486612724.
[34] Davies B. Integral transforms and their applications; 1978.
[35] Pinkall U, Polthier K. Computing discrete minimal surfaces and their conjugates. Exp Math 1993;2(1):15–36.
[36] Moenning C, Dodgson NA. Fast marching farthest point sampling. Technical Report. University of Cambridge, Computer Laboratory; 2003.
[37] Eldar Y, Lindenbaum M, Porat M, Zeevi YY. The farthest point strategy for progressive image sampling. IEEE Trans Image Process 1997;6(9):1305–15.
[38] Kato T. Perturbation theory for linear operators. Berlin, Heidelberg: Springer-Verlag; 1995.
[39] Rubinstein R, Zibulevsky M, Elad M. Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit; 2008.
[40] Mitchell JSB, Mount DM, Papadimitriou CH. The discrete geodesic problem. SIAM J Comput 1987;16(4):647–68. doi:10.1137/0216045.
[41] Bogo F, Romero J, Loper M, Black MJ. FAUST: dataset and evaluation for 3D mesh registration. In: Proceedings of the CVPR; 2014. p. 3794–801.
[42] Bronstein AM, Bronstein MM, Kimmel R. Numerical geometry of non-rigid shapes. Springer Science & Business Media; 2008.
[43] Ovsjanikov M, Ben-Chen M, Solomon J, Butscher A, Guibas L. Functional maps: a flexible representation of maps between shapes. ACM Trans Gr (TOG) 2012b;31(4):30.