Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction
Rohan Chabra, Jan Eric Lenssen, Eddy Ilg, Tanner Schmidt, Julian Straub, Steven Lovegrove, Richard Newcombe
- pp. 608–625
TLDR
Deep Local Shapes (DeepLS) replaces the dense volumetric signed distance function (SDF) representation used in traditional surface reconstruction systems with a set of locally learned continuous SDFs defined by a neural network.
Abstract:
Efficiently reconstructing complex and intricate surfaces at scale is a long-standing goal in machine perception. To address this problem we introduce Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements. DeepLS replaces the dense volumetric signed distance function (SDF) representation used in traditional surface reconstruction systems with a set of locally learned continuous SDFs defined by a neural network, inspired by recent work such as DeepSDF. Unlike DeepSDF, which represents an object-level SDF with a neural network and a single latent code, we store a grid of independent latent codes, each responsible for storing information about surfaces in a small local neighborhood. This decomposition of scenes into local shapes simplifies the prior distribution that the network must learn, and also enables efficient inference. We demonstrate the effectiveness and generalization power of DeepLS by showing object shape encoding and reconstructions of full scenes, where DeepLS delivers high compression, accuracy, and local shape completion.
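The decomposition described in the abstract can be sketched in a few lines: a shared decoder MLP conditioned on a per-voxel latent code, queried at coordinates expressed relative to each voxel's center. This is a minimal illustration under assumed conventions, not the authors' implementation; the network sizes, code dimension, and the `query_sdf` helper are assumptions.

```python
import torch
import torch.nn as nn

class LocalSDFDecoder(nn.Module):
    """Shared MLP mapping (latent code, local coordinate) -> signed distance."""
    def __init__(self, code_dim=125, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, codes, local_xyz):
        return self.net(torch.cat([codes, local_xyz], dim=-1)).squeeze(-1)

def query_sdf(decoder, code_grid, voxel_size, points):
    """Fetch each query point's voxel code and decode the SDF from
    coordinates expressed relative to that voxel's center."""
    idx = torch.floor(points / voxel_size).long()       # (N, 3) voxel indices
    centers = (idx.float() + 0.5) * voxel_size          # voxel centers
    local = (points - centers) / voxel_size             # normalized local coords
    codes = code_grid[idx[:, 0], idx[:, 1], idx[:, 2]]  # (N, code_dim)
    return decoder(codes, local)

decoder = LocalSDFDecoder()
code_grid = torch.zeros(8, 8, 8, 125)  # one independent latent code per voxel
points = torch.rand(16, 3) * 0.8       # queries inside the 8x8x8 grid (voxel size 0.1)
sdf = query_sdf(decoder, code_grid, 0.1, points)
```

At training time, the latent codes in `code_grid` and the decoder weights would be optimized jointly against observed SDF samples, in the auto-decoder style that DeepSDF popularized.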
Citations
Proceedings Article
Implicit Neural Representations with Periodic Activation Functions
TL;DR: In this paper, the authors propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks (SIRENs), are ideally suited for representing complex natural signals and their derivatives.
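As a concrete illustration of the idea, here is a minimal sine-activated layer with the initialization scheme the SIREN paper proposes; the layer sizes and the default `omega_0 = 30` are the paper's conventional choices, and the small network below is an illustrative assumption, not a reproduction of any experiment.

```python
import numpy as np
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by sin(omega_0 * x), with SIREN-style init."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features          # first layer: wider init
            else:
                bound = np.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

siren = nn.Sequential(
    SineLayer(2, 64, is_first=True),  # e.g. map 2D pixel coords -> features
    SineLayer(64, 64),
    nn.Linear(64, 1),                 # predict one signal value per coordinate
)
coords = torch.rand(100, 2) * 2 - 1   # coordinates in [-1, 1]^2
out = siren(coords)
```

Because sine is smooth, the derivatives of such a network are themselves SIREN-like, which is what makes the representation suitable for signals and their derivatives.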
Journal ArticleDOI
Instant neural graphics primitives with a multiresolution hash encoding
TL;DR: A versatile new input encoding is introduced that permits the use of a smaller network without sacrificing quality, significantly reducing the number of floating-point and memory-access operations. This enables training high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920×1080.
Posted Content
GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields
Michael Niemeyer, Andreas Geiger
TL;DR: The key hypothesis is that incorporating a compositional 3D scene representation into the generative model leads to more controllable image synthesis; on this basis, a fast and realistic image synthesis model is proposed.
Posted Content
pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis
TL;DR: In this article, a novel generative model named Periodic Implicit Generative Adversarial Networks (pi-GAN) is proposed for 3D-aware image synthesis, leveraging neural representations with periodic activation functions and volumetric rendering.
Proceedings ArticleDOI
Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans
TL;DR: In this paper, the authors propose Neural Body, a new human body representation which assumes that learned neural representations at different frames share the same set of latent codes anchored to a deformable mesh, so that the observations across frames can be naturally integrated.
References
Journal ArticleDOI
Multilayer feedforward networks are universal approximators
K. Hornik, M. Stinchcombe, H. White
TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite-dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.
Proceedings ArticleDOI
3D ShapeNets: A deep representation for volumetric shapes
TL;DR: This work proposes to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network, and shows that this 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.
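The binary voxel-grid input such a network consumes can be produced in a few lines. This is an illustrative helper under assumed conventions (the resolution and the `voxelize` name are arbitrary), not code from the cited work.

```python
import torch

def voxelize(points, res=32):
    """Map a point cloud in [0, 1]^3 to a binary occupancy grid, the kind
    of volumetric shape representation 3D ShapeNets-style models consume."""
    idx = (points * res).long().clamp_(0, res - 1)  # (N, 3) voxel indices
    grid = torch.zeros(res, res, res, dtype=torch.bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True    # mark occupied voxels
    return grid

pts = torch.rand(500, 3)   # a synthetic point cloud
grid = voxelize(pts)
```

A probabilistic model over such grids (3D ShapeNets uses a Convolutional Deep Belief Network) then treats each voxel as a binary random variable.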
Proceedings ArticleDOI
KinectFusion: Real-time dense surface mapping and tracking
Richard Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Andrew Fitzgibbon
TL;DR: A system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware, which fuses all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real time.
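The fusion at the heart of that pipeline reduces to a running weighted average of truncated signed distances per voxel. A minimal sketch follows (the grid shape, truncation distance, and `integrate` name are assumptions; the full system also performs ray casting and ICP-based camera tracking, which are omitted here).

```python
import torch

def integrate(tsdf, weights, new_sdf, new_weight=1.0, trunc=0.05):
    """One KinectFusion-style TSDF integration step: clamp the incoming
    signed distances to the truncation band, then fold them into the
    running weighted average stored in each voxel."""
    d = new_sdf.clamp(-trunc, trunc)
    w_new = weights + new_weight
    tsdf_new = (tsdf * weights + d * new_weight) / w_new
    return tsdf_new, w_new

tsdf = torch.zeros(4, 4, 4)             # fused distances (empty map)
weights = torch.zeros(4, 4, 4)          # per-voxel observation weights
obs = torch.full((4, 4, 4), 0.02)       # one frame's measured distances
tsdf, weights = integrate(tsdf, weights, obs)
```

Because the update is a simple per-voxel average, every frame's depth map can be integrated independently and in parallel on the GPU, which is what makes real-time operation possible.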
Proceedings ArticleDOI
Parallel Tracking and Mapping for Small AR Workspaces
Georg Klein, David W. Murray
TL;DR: A system specifically designed to track a hand-held camera in a small AR workspace, processed in parallel threads on a dual-core computer, that produces detailed maps with thousands of landmarks which can be tracked at frame-rate with accuracy and robustness rivalling that of state-of-the-art model-based systems.