Michael Zollhöfer
Researcher at Facebook
Publications - 108
Citations - 14736
Michael Zollhöfer is an academic researcher at Facebook. His research focuses on rendering (computer graphics) and 3D reconstruction. He has an h-index of 46 and has co-authored 106 publications receiving 9,686 citations. His previous affiliations include the University of Pittsburgh and Stanford University.
Papers
Posted Content
Neural Animation and Reenactment of Human Actor Videos
Lingjie Liu, Weipeng Xu, Michael Zollhöfer, Hyeongwoo Kim, Florian Bernard, Marc Habermann, Wenping Wang, Christian Theobalt +7 more
Posted Content
DEMEA: Deep Mesh Autoencoders for Non-Rigidly Deforming Objects
TL;DR: A general-purpose DEep MEsh Autoencoder (DEMEA) is proposed, which adds a novel embedded deformation layer (EDL) to a graph-convolutional mesh autoencoder. Reasoning about the local rigidity of meshes via the EDL yields higher-quality results for highly deformable objects than directly regressing vertex positions.
Posted Content
VolumeDeform: Real-time Volumetric Non-rigid Reconstruction
TL;DR: This work presents a novel approach for the reconstruction of dynamic geometric shapes using a single hand-held consumer-grade RGB-D sensor at real-time rates, and casts finding the optimal deformation of space as a non-linear regularized variational optimization problem by enforcing local smoothness and proximity to the input constraints.
Posted Content
Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
TL;DR: This article proposes Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance and can be trained end-to-end from only 2D images and their camera poses.
Journal ArticleDOI
Neural Style-Preserving Visual Dubbing
Hyeongwoo Kim, Mohamed Elgharib, Michael Zollhöfer, Hans-Peter Seidel, Thabo Beeler, Christian Richardt, Christian Theobalt +6 more
TL;DR: In this article, a recurrent generative adversarial network (GAN) captures the spatio-temporal co-activation of facial expressions, enabling the generation and modification of the target actor's facial expressions while preserving their style.