Deep Functional Maps: Structured Prediction for Dense Shape Correspondence
Or Litany, Tal Remez, Emanuele Rodolà, Alexander M. Bronstein, Michael M. Bronstein
pp. 5660–5668
TLDR
In this paper, a deep residual network is proposed to learn dense correspondence between deformable 3D shapes: it takes dense descriptor fields defined on two shapes as input and outputs a soft map between the two given objects.

Abstract
We introduce a new framework for learning dense correspondence between deformable 3D shapes. Existing learning based approaches model shape correspondence as a labelling problem, where each point of a query shape receives a label identifying a point on some reference domain; the correspondence is then constructed a posteriori by composing the label predictions of two input shapes. We propose a paradigm shift and design a structured prediction model in the space of functional maps, linear operators that provide a compact representation of the correspondence. We model the learning process via a deep residual network which takes dense descriptor fields defined on two shapes as input, and outputs a soft map between the two given objects. The resulting correspondence is shown to be accurate on several challenging benchmarks comprising multiple categories, synthetic models, real scans with acquisition artifacts, topological noise, and partiality.
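A functional map represents a correspondence as a small matrix acting between truncated spectral bases of the two shapes. The following is a minimal NumPy sketch of the underlying idea only (illustrative names and random data, not the authors' code or network): given corresponding descriptor functions expressed as spectral coefficients on each shape, the map can be recovered by least squares.

```python
import numpy as np

# Hypothetical setup: k spectral basis functions per shape, d descriptor
# functions. A holds descriptor coefficients on shape X, B the
# corresponding coefficients on shape Y; the functional map C satisfies
# C @ A = B.
rng = np.random.default_rng(0)
k, d = 20, 40
A = rng.standard_normal((k, d))
C_true = rng.standard_normal((k, k))   # ground-truth map (for this demo)
B = C_true @ A

# Least-squares estimate: solve A.T @ C.T = B.T for C.T.
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T
# With enough consistent descriptors (d >= k), C recovers C_true.
```

The paper's contribution is to predict such maps with a deep residual network rather than solve this system from handcrafted descriptors alone; the sketch only illustrates why the representation is compact (a k × k matrix instead of a dense point-to-point map).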
Citations
Journal Article
Geometric Deep Learning: Going beyond Euclidean data
TL;DR: This paper surveys geometric deep learning, arguing that in many applications such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques.
Posted Content
Dynamic Graph CNN for Learning on Point Clouds
TL;DR: In this paper, a new neural network module called EdgeConv, which is differentiable and can be plugged into existing architectures, is proposed for CNN-based high-level tasks on point clouds, including classification and segmentation.
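The EdgeConv operation summarized above can be sketched in NumPy. This is a hedged illustration with random placeholder weights; a single linear layer stands in for the shared MLP of the actual module.

```python
import numpy as np

def edge_conv(X, W, k=4):
    """EdgeConv sketch: for each point, gather its k nearest neighbours,
    form edge features (x_i, x_j - x_i), apply a shared linear layer W,
    and max-pool over the neighbourhood."""
    n, d = X.shape
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    nbrs = np.argsort(dist, axis=1)[:, 1:k + 1]        # exclude the point itself
    xi = np.repeat(X[:, None], k, axis=1)              # (n, k, d)
    xj = X[nbrs]                                       # (n, k, d)
    edge = np.concatenate([xi, xj - xi], axis=-1)      # (n, k, 2d)
    return np.max(edge @ W, axis=1)                    # (n, out_dim)

rng = np.random.default_rng(1)
X = rng.standard_normal((16, 3))     # toy point cloud
W = rng.standard_normal((6, 8))      # placeholder shared weights
out = edge_conv(X, W)
```

Because the neighbourhood graph is recomputed from the current features, stacking such layers yields the "dynamic graph" behaviour the paper's title refers to.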
Journal Article
Image Matching from Handcrafted to Deep Features: A Survey
TL;DR: This survey introduces feature detection, description, and matching techniques from handcrafted methods to trainable ones, analyzes the development of these methods in theory and practice, and briefly introduces several typical image-matching-based applications.
Proceedings Article
SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels
TL;DR: This work presents Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregularly structured and geometric input, e.g., graphs or meshes, whose convolution operator generalizes the traditional CNN convolution by using continuous kernel functions parametrized by a fixed number of trainable weights.
Book Chapter
3D-CODED: 3D Correspondences by Deep Deformation
TL;DR: This work presents a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks which jointly encode 3D shapes and correspondences, and shows that this method is robust to many types of perturbations, and generalizes to non-human shapes.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; their deep residual nets won 1st place in the ILSVRC 2015 classification task.
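The residual idea referenced above can be sketched in a few lines of NumPy: a block computes y = x + F(x), so when F is zero the block is the identity, which is the property credited with easing optimization of very deep stacks. Weights here are random placeholders, not the paper's architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    # y = x + F(x), with F a small two-layer transform
    return x + W2 @ relu(W1 @ x)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W1 = rng.standard_normal((8, 8))
W2 = np.zeros((8, 8))              # F == 0: the block reduces to the identity
y = residual_block(x, W1, W2)
```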
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
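The update rule summarized above can be sketched directly: Adam keeps exponential moving averages of the gradient (first moment) and its square (second moment), corrects their initialization bias, and scales the step accordingly. This is a minimal single-parameter sketch, not a full optimizer implementation.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    # Moving averages of the gradient and its elementwise square
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    # Bias correction for the zero initialization of m and v
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(x) = x^2 starting from x = 1 (gradient is 2x).
x, m, v = np.array(1.0), 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t)
```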
Proceedings Article
Dimensionality Reduction by Learning an Invariant Mapping
TL;DR: This work presents a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold.
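The contrastive loss at the core of DrLIM can be sketched as follows: similar pairs (label 1) are pulled together via their squared embedding distance, while dissimilar pairs (label 0) are pushed apart up to a margin. The function below is an illustrative sketch of that loss, not the paper's full training setup.

```python
import numpy as np

def contrastive_loss(d, y, margin=1.0):
    """d: Euclidean distance between the two embeddings; y: 1 if the
    pair is similar, 0 if dissimilar."""
    pull = y * 0.5 * d**2
    push = (1 - y) * 0.5 * np.maximum(0.0, margin - d)**2
    return pull + push

# Coincident similar pair: zero loss. Dissimilar pair beyond the
# margin: also zero loss. Dissimilar pair inside the margin: penalised.
l_sim = contrastive_loss(0.0, 1)
l_far = contrastive_loss(2.0, 0)
l_near = contrastive_loss(0.5, 0)
```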
Proceedings Article
Surface simplification using quadric error metrics
Michael Garland, Paul S. Heckbert
TL;DR: This work has developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models, and which also supports non-manifold surface models.
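The quadric error metric underlying this simplification algorithm can be sketched compactly: each plane n·v + d = 0 contributes a 4×4 quadric Q = p pᵀ with p = (n, d), and the cost of placing a vertex at v is [v, 1]ᵀ Q [v, 1], the sum of squared point-to-plane distances for unit normals. A minimal NumPy sketch:

```python
import numpy as np

def plane_quadric(n, d):
    # Quadric for the plane n . v + d = 0 (n assumed unit length)
    p = np.append(n, d)
    return np.outer(p, p)

def quadric_error(Q, v):
    # Sum of squared distances from v to all planes accumulated in Q
    h = np.append(v, 1.0)
    return h @ Q @ h

# Two planes, z = 0 and z = 1; a point at z = 0.5 is 0.5 from each,
# so the accumulated error is 0.25 + 0.25 = 0.5.
Q = (plane_quadric(np.array([0.0, 0.0, 1.0]), 0.0)
     + plane_quadric(np.array([0.0, 0.0, 1.0]), -1.0))
err = quadric_error(Q, np.array([0.0, 0.0, 0.5]))
```

Because quadrics add, the error of a contracted vertex pair is evaluated against the sum of the two vertices' quadrics, which is what makes the algorithm fast.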
Posted Content
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
TL;DR: This paper proposes the Exponential Linear Unit (ELU), which alleviates the vanishing gradient problem via the identity for positive values and has improved learning characteristics compared to units with other activation functions.
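The ELU itself is a one-line function: identity for positive inputs (so gradients pass through unchanged), and a saturating exponential for negative inputs that pushes mean activations toward zero. A minimal sketch:

```python
import numpy as np

def elu(x, alpha=1.0):
    # Identity for x > 0; alpha * (exp(x) - 1) saturates to -alpha
    # for large negative x.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

pos = float(elu(2.0))          # passes through unchanged
neg = float(elu(-1.0))         # exp(-1) - 1, approaching -alpha
```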