SurfNet: Generating 3D Shape Surfaces Using Deep Residual Networks
Ayan Sinha, Asim Unmesh, Qixing Huang, Karthik Ramani
pp. 791–800
TLDR
This work develops a procedure to create consistent shape surfaces for a category of 3D objects, and uses this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation.
Abstract
3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent ‘geometry images’ representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces allowing it to interpolate between shape orientations and poses, invent new shape surfaces and reconstruct 3D shape surfaces from previously unseen images. Our code is available at https://github.com/sinhayan/surfnet.
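The ‘geometry image’ encoding can be sketched in a few lines of NumPy: a surface is resampled onto a regular 2D grid whose three channels store the (x, y, z) coordinates of surface points, the format a 2D CNN can consume or emit. The spherical parameterization below is an illustrative stand-in; the paper constructs a consistent, category-specific parameterization rather than this simple analytic map.

```python
import numpy as np

def sphere_geometry_image(n=64):
    """Sample a unit sphere into an n x n 'geometry image': a regular 2D
    grid whose three channels store surface (x, y, z) coordinates. The
    spherical map here is purely illustrative; SurfNet relies on a
    consistent parameterization computed per shape category."""
    u = np.linspace(0.0, np.pi, n)          # polar angle
    v = np.linspace(0.0, 2.0 * np.pi, n)    # azimuthal angle
    uu, vv = np.meshgrid(u, v, indexing="ij")
    gim = np.stack([np.sin(uu) * np.cos(vv),
                    np.sin(uu) * np.sin(vv),
                    np.cos(uu)], axis=-1)   # shape (n, n, 3)
    return gim

gim = sphere_geometry_image(64)
print(gim.shape)                                        # (64, 64, 3)
print(np.allclose(np.linalg.norm(gim, axis=-1), 1.0))   # True: every pixel lies on the surface
```

Because the surface now lives on a flat grid, ordinary 2D convolutions (and the residual blocks the paper extends) apply directly, avoiding the cost of 3D voxel convolutions mentioned in the abstract.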
Citations
Proceedings Article
Learning Implicit Fields for Generative Shape Modeling
Zhiqin Chen, Hao Zhang
TL;DR: In this paper, an implicit field is used to assign a value to each point in 3D space, so that a shape can be extracted as an iso-surface, and a binary classifier is trained to perform this assignment.
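As a sketch of the implicit-field idea: a function assigns each 3D point an inside/outside value, and the shape is the iso-surface of that field. In the cited work this function is a trained binary classifier; a hand-written occupancy function for a unit sphere stands in for the network here.

```python
import numpy as np

# In the implicit-field formulation, a learned classifier f(p) maps a 3D
# point p to an inside/outside value and the shape is extracted as an
# iso-surface of f. A hand-written occupancy function stands in for the
# trained network in this sketch.
def occupancy(points):
    return (np.linalg.norm(points, axis=-1) <= 1.0).astype(np.float32)

# Evaluate the field on a dense grid, as done before iso-surface
# extraction (e.g., marching cubes).
t = np.linspace(-1.5, 1.5, 32)
grid = np.stack(np.meshgrid(t, t, t, indexing="ij"), axis=-1)  # (32, 32, 32, 3)
field = occupancy(grid)

# Fraction of grid points inside: volume of the unit ball over the
# volume of the [-1.5, 1.5]^3 cube is (4*pi/3) / 27, roughly 0.155.
inside_fraction = field.mean()
print(field.shape)
```

The appeal over voxel grids is that the field can be queried at arbitrary resolution, so the extracted surface is not tied to a fixed grid.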
Book Chapter
Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images
TL;DR: In this paper, the authors propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image.
Book Chapter
Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network
TL;DR: This paper proposes a simple convolutional neural network to regress the 3D shape of a complete face from a single 2D image, reconstructing full facial geometry along with its semantic meaning.
Proceedings Article
A Papier-Mâché Approach to Learning 3D Surface Generation
TL;DR: This work introduces a method for learning to generate the surface of 3D shapes as a collection of parametric surface elements; in contrast to methods generating voxel grids or point clouds, it naturally infers a surface representation of the shape.
Book Chapter
Learning Category-Specific Mesh Reconstruction from Image Collections
TL;DR: In this paper, shapes are represented as deformable 3D mesh models of an object category, where each shape is parameterized by a learned mean shape and a per-instance predicted deformation.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
Proceedings Article
On Spectral Clustering: Analysis and an algorithm
TL;DR: A simple spectral clustering algorithm that can be implemented in a few lines of Matlab is presented; tools from matrix perturbation theory are used to analyze the algorithm and give conditions under which it can be expected to do well.
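The "few lines of Matlab" claim carries over directly to NumPy. Below is a minimal sketch of the Ng–Jordan–Weiss procedure for k = 2 on two synthetic blobs: Gaussian affinity matrix, symmetrically normalized, top-k eigenvectors, row-normalized. A sign threshold on the second eigenvector stands in for the final k-means step, which suffices for two well-separated clusters.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated 2D clusters of 20 points each.
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(3.0, 0.1, (20, 2))])

sigma = 1.0
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-D2 / (2 * sigma ** 2))
np.fill_diagonal(A, 0.0)                        # affinity matrix, zero diagonal
d = A.sum(1)
L = A / np.sqrt(d[:, None] * d[None, :])        # normalized affinity D^-1/2 A D^-1/2
vals, vecs = np.linalg.eigh(L)                  # eigenvalues in ascending order
V = vecs[:, -2:]                                # top-2 eigenvectors as columns
V /= np.linalg.norm(V, axis=1, keepdims=True)   # row-normalize
# For k = 2, the second eigenvector takes opposite signs on the two
# clusters, so a sign threshold replaces k-means on the rows of V.
labels = (V[:, 0] > 0).astype(int)
print(labels)
```

Each blob ends up with a single, distinct label; the full algorithm runs k-means on the rows of V to handle arbitrary k.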
Proceedings Article
3D ShapeNets: A deep representation for volumetric shapes
TL;DR: This work proposes to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network, and shows that this 3D deep representation enables significant performance improvements over the state of the art in a variety of tasks.
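The volumetric representation in question can be sketched as a binary occupancy grid over a voxel lattice (3D ShapeNets works with a 30×30×30 grid of binary variables); the grid bounds and sample points below are illustrative.

```python
import numpy as np

def voxelize(points, res=30, lo=-1.0, hi=1.0):
    """Binary occupancy grid on a res^3 voxel lattice: the volumetric
    shape representation that voxel-based networks consume. Each point
    is mapped to the voxel containing it."""
    idx = np.floor((points - lo) / (hi - lo) * res).astype(int)
    idx = np.clip(idx, 0, res - 1)
    grid = np.zeros((res, res, res), dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5], [-0.99, 0.0, 0.99]])
g = voxelize(pts)
print(g.shape, int(g.sum()))   # (30, 30, 30) 3
```

The cubic memory cost of this grid is exactly the overhead the SurfNet abstract argues against, since most geometric information lives on the surface boundary rather than in the volume.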
Proceedings Article
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TL;DR: Deep convolutional generative adversarial networks (DCGANs), as discussed by the authors, learn a hierarchy of representations from object parts to scenes in both the generator and the discriminator for unsupervised learning.
Journal Article
Shape distributions
TL;DR: The dissimilarities between sampled distributions of simple shape functions provide a robust method for discriminating between classes of objects in a moderately sized database, despite the presence of arbitrary translations, rotations, scales, mirrors, tessellations, simplifications, and model degeneracies.
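A minimal sketch of one such shape function, the D2 distribution (distances between random point pairs): because rigid transforms and mirroring preserve pairwise distances, the resulting histogram is a signature invariant to them. The sample counts, bin settings, and seeds below are illustrative choices, not values from the paper.

```python
import numpy as np

def d2_distribution(points, n_pairs=50000, bins=32, max_dist=2.0, seed=7):
    """D2 shape function: a histogram of distances between randomly
    sampled point pairs, a shape signature robust to translations,
    rotations, and mirroring (all of which preserve distances)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, max_dist), density=True)
    return hist

# Points on a unit sphere; rotating them leaves the D2 signature intact.
rng = np.random.default_rng(1)
p = rng.normal(size=(2000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
h1 = d2_distribution(p)
h2 = d2_distribution(p @ R.T)
print(np.abs(h1 - h2).max() < 0.01)  # rotation leaves the signature (nearly) unchanged
```

Comparing two shapes then reduces to comparing two histograms, which is what makes the signature useful for discriminating object classes in a database.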