Open Access · Posted Content

Deep Implicit Templates for 3D Shape Representation.

TLDR
Spatial Warping LSTM is proposed, a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations and can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
Abstract
Deep implicit functions (DIFs), as a kind of 3D shape representation, are becoming increasingly popular in the 3D vision community due to their compactness and strong representation power. However, unlike polygon mesh-based templates, it remains a challenge to reason about dense correspondences or other semantic relationships across shapes represented by DIFs, which limits their applications in texture transfer, shape analysis, and so on. To overcome this limitation and also make DIFs more interpretable, we propose Deep Implicit Templates, a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations. Our key idea is to formulate DIFs as conditional deformations of a template implicit function. To this end, we propose Spatial Warping LSTM, which decomposes the conditional spatial transformation into multiple affine transformations and guarantees generalization capability. Moreover, the training loss is carefully designed to achieve high reconstruction accuracy while learning a plausible template with accurate correspondences in an unsupervised manner. Experiments show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
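As a rough illustration of the key idea, the sketch below warps query points through a sequence of affine transformations before evaluating a shared template SDF. This is plain NumPy with a closed-form sphere standing in for the learned template and hand-written affine steps standing in for the network's per-shape predictions; all function names are hypothetical, not from the paper's code.

```python
import numpy as np

def template_sdf(points):
    # Stand-in template implicit function: signed distance to a unit sphere
    # (negative inside, positive outside). The paper learns the template as a
    # neural network; this closed form is only for illustration.
    return np.linalg.norm(points, axis=-1) - 1.0

def warp(points, affines):
    # Apply a sequence of affine steps p <- A @ p + b. In the paper each
    # step's (A, b) is predicted conditionally per shape; here they are given.
    p = np.asarray(points, dtype=float)
    for A, b in affines:
        p = p @ A.T + b
    return p

def deformed_sdf(points, affines):
    # A shape expressed as a conditional deformation of the template:
    # warp the query point, then evaluate the shared template SDF.
    return template_sdf(warp(points, affines))

# One affine step that translates queries by (-0.5, 0, 0): the represented
# surface is the unit sphere centered at (0.5, 0, 0).
step = [(np.eye(3), np.array([-0.5, 0.0, 0.0]))]
```

Because every shape is evaluated through the same template function, points that warp to the same template location are in dense correspondence across shapes.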


Citations
Posted Content

Convolutional Mesh Regression for Single-Image Human Shape Reconstruction

TL;DR: This paper addresses the problem of 3D human pose and shape estimation from a single image by proposing a graph-based mesh regression, which outperforms comparable baselines relying on model parameter regression and achieves state-of-the-art results among model-based pose estimation approaches.
Proceedings ArticleDOI

i3DMM: Deep Implicit 3D Morphable Model of Human Heads

TL;DR: The first deep implicit 3D morphable model (i3DMM) of full heads is presented, which not only captures identity-specific geometry, texture, and expressions of the frontal face, but also models the entire head, including hair.
Posted Content

Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence

TL;DR: This work proposes a novel Deformed Implicit Field (DIF) representation for modeling 3D shapes of a category and generating dense correspondences among shapes, and demonstrates several applications such as texture transfer and shape editing, where the method achieves compelling results that cannot be achieved by previous methods.
Proceedings ArticleDOI

Structured Local Radiance Fields for Human Avatar Modeling

TL;DR: This work introduces a novel representation built on recent neural scene rendering techniques that enables automatic construction of animatable human avatars for various types of clothing without scanning subject-specific templates, and can generate realistic images with dynamic details for novel poses.
Proceedings ArticleDOI

Towards Implicit Text-Guided 3D Shape Generation

TL;DR: This work proposes a new approach for text-guided 3D shape generation, capable of producing high-fidelity shapes with colors that match the given text description, and decouples the shape and color predictions to learn features in both texts and shapes.
References
Proceedings ArticleDOI

Marching cubes: A high resolution 3D surface construction algorithm

TL;DR: In this paper, a divide-and-conquer approach is used to generate inter-slice connectivity, and then a case table is created to define triangle topology using linear interpolation.
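The linear interpolation this TL;DR mentions places each surface vertex on a cube edge whose endpoint values straddle the isovalue. A minimal sketch (the function name is made up, not from the original algorithm's code):

```python
def interp_vertex(p1, p2, v1, v2, iso=0.0):
    # Place a surface vertex on a cube edge by linear interpolation:
    # t locates the isovalue crossing between endpoint scalar values
    # v1 (at p1) and v2 (at p2), which must straddle the isovalue.
    t = (iso - v1) / (v2 - v1)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

The case table then decides which edges of each cube carry such vertices and how they connect into triangles.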
Proceedings ArticleDOI

3D ShapeNets: A deep representation for volumetric shapes

TL;DR: This work proposes to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network, and shows that this 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
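The binary voxel grid such a network operates on can be sketched as a simple point-to-occupancy conversion (illustrative only; the probabilistic Convolutional Deep Belief Network itself is not reproduced here, and the function name is hypothetical):

```python
import numpy as np

def voxelize(points, res=4, lo=-1.0, hi=1.0):
    # Map points in the cube [lo, hi]^3 to a binary occupancy grid,
    # the volumetric representation that 3D ShapeNets models as a
    # probability distribution over binary variables.
    grid = np.zeros((res, res, res), dtype=bool)
    idx = np.floor((np.asarray(points) - lo) / (hi - lo) * res).astype(int)
    idx = np.clip(idx, 0, res - 1)  # clamp points on the upper boundary
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```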
Posted Content

ShapeNet: An Information-Rich 3D Model Repository

TL;DR: ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy; it is a collection of datasets providing many semantic annotations for each 3D model, such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, and keywords, as well as other planned annotations.
Journal ArticleDOI

SMPL: a skinned multi-person linear model

TL;DR: The Skinned Multi-Person Linear model (SMPL) is a skinned vertex-based model that accurately represents a wide variety of body shapes in natural human poses and is compatible with existing graphics pipelines and rendering engines.
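The identity-dependent part of such a vertex-based linear model can be sketched as follows. This is a simplified illustration with assumed array shapes; SMPL's pose-dependent blend shapes and linear blend skinning are omitted, and the names are hypothetical:

```python
import numpy as np

def shaped_vertices(v_template, shape_dirs, betas):
    # SMPL-style linear shaping: displace each template vertex by a
    # linear combination of learned shape displacement directions,
    # weighted by identity coefficients betas.
    # Shapes: v_template (V, 3), shape_dirs (V, 3, B), betas (B,).
    return v_template + shape_dirs @ betas
```

Because the shaping is linear in the coefficients, the result is an ordinary mesh that downstream skinning and standard rendering engines can consume directly, which is why the model fits existing graphics pipelines.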
Proceedings ArticleDOI

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

TL;DR: DeepSDF represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape.
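The sign convention described in this TL;DR can be illustrated with an analytic SDF; a sphere stands in for the learned network, and the function names are hypothetical:

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    # Closed-form signed distance field with DeepSDF's convention:
    # magnitude = distance to the surface, sign negative inside (-),
    # positive outside (+), zero exactly on the surface. DeepSDF learns
    # such a field with a network; this sphere is an analytic stand-in.
    return np.linalg.norm(points, axis=-1) - radius

def classify(points, radius=1.0):
    # Label each query point by the sign of its signed distance value.
    s = np.atleast_1d(sphere_sdf(points, radius))
    return ["inside" if v < 0 else "outside" if v > 0 else "surface" for v in s]
```

The shape's surface is the zero level set of this field, which is what isosurface extractors such as marching cubes recover.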