Michael Niemeyer

Researcher at Max Planck Society

Publications -  25
Citations -  5563

Michael Niemeyer is an academic researcher at the Max Planck Society. His work focuses on 3D reconstruction and computer science. He has an h-index of 13 and has co-authored 20 publications receiving 2,059 citations. His previous affiliations include the University of Tübingen.

Papers
Posted Content

Occupancy Networks: Learning 3D Reconstruction in Function Space

TL;DR: This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction that encodes the 3D output at effectively infinite resolution without an excessive memory footprint, and validates that the representation can efficiently encode 3D structure and be inferred from various kinds of input.
Proceedings ArticleDOI

Occupancy Networks: Learning 3D Reconstruction in Function Space

TL;DR: In this paper, the authors propose Occupancy Networks, which implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier, providing a new representation for learning-based 3D reconstruction methods.
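As an illustration of this decision-boundary idea (a minimal sketch, not the authors' implementation; the network size, latent code, and conditioning scheme below are assumptions), an occupancy-network-style model can be written as a small MLP that maps a 3D point and a latent shape code to an occupancy probability:

```python
import torch
import torch.nn as nn

class OccupancyNetwork(nn.Module):
    """Minimal sketch: maps a 3D point plus a latent shape code to an
    occupancy probability; the surface is the 0.5 decision boundary."""
    def __init__(self, latent_dim=128, hidden_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, points, z):
        # points: (B, N, 3), z: (B, latent_dim) -> occupancy logits (B, N)
        z = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.mlp(torch.cat([points, z], dim=-1)).squeeze(-1)

# Usage: query occupancy probabilities on a grid of points, then extract
# the 0.5 iso-surface (e.g. with marching cubes) at any desired resolution.
net = OccupancyNetwork()
pts = torch.rand(1, 1024, 3)            # query points
code = torch.randn(1, 128)              # latent code from some encoder
probs = torch.sigmoid(net(pts, code))   # occupancy probabilities in [0, 1]
```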
Proceedings ArticleDOI

Differentiable Volumetric Rendering: Learning Implicit 3D Representations Without 3D Supervision

TL;DR: This work proposes a differentiable rendering formulation for implicit shape and texture representations, shows that depth gradients can be derived analytically using implicit differentiation, and finds that the method can be used for multi-view 3D reconstruction, directly producing watertight meshes.
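A sketch of the implicit-differentiation step described above (reconstructed from the summary; the threshold τ and the notation are assumptions, not quoted from the paper): for a camera ray r(d) = o + d·w that first hits the surface {p : f_θ(p) = τ} at depth d̂, differentiating the surface condition with respect to the network parameters θ yields the depth gradient in closed form:

```latex
% Surface condition at the ray-surface intersection \hat p = o + \hat d\, w:
%   f_\theta(o + \hat d\, w) = \tau
% Differentiating w.r.t. \theta (implicit function theorem):
\frac{\partial \hat d}{\partial \theta}
  = -\left( \nabla_{p} f_\theta(\hat p) \cdot w \right)^{-1}
    \frac{\partial f_\theta(\hat p)}{\partial \theta},
\qquad \hat p = o + \hat d\, w
```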
Posted Content

GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields

TL;DR: The key hypothesis is that incorporating a compositional 3D scene representation into the generative model leads to more controllable image synthesis; on this basis, a fast and realistic image synthesis model is proposed.
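As a rough sketch of the compositional idea (the density-weighted composition operator and all names and shapes below are assumptions for illustration, not the authors' code), per-object feature fields can be combined into a single scene field before rendering:

```python
import torch

def compose_fields(sigmas, feats):
    """Density-weighted composition of per-object feature fields (sketch).
    sigmas: (K, N) per-object densities at N sample points,
    feats:  (K, N, C) per-object feature vectors at the same points."""
    sigma = sigmas.sum(dim=0)                          # (N,) scene density
    feat = (sigmas.unsqueeze(-1) * feats).sum(dim=0) \
           / (sigma.unsqueeze(-1) + 1e-8)              # (N, C) scene feature
    return sigma, feat
```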
Posted Content

Convolutional Occupancy Networks

TL;DR: Convolutional Occupancy Networks are proposed as a more flexible implicit representation for detailed reconstruction of objects and 3D scenes; the representation enables fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.