
Michael Zollhöfer

Researcher at Facebook

Publications: 108
Citations: 14,736

Michael Zollhöfer is an academic researcher at Facebook. He has contributed to research in the topics of Rendering (computer graphics) and 3D reconstruction. He has an h-index of 46 and has co-authored 106 publications receiving 9,686 citations. His previous affiliations include the University of Pittsburgh and Stanford University.

Papers
Journal ArticleDOI

Opt: A Domain Specific Language for Non-Linear Least Squares Optimization in Graphics and Imaging

TL;DR: Opt is a language for writing objective functions over image- or graph-structured unknowns concisely and at a high level; it automatically transforms these specifications into state-of-the-art GPU solvers based on Gauss-Newton or Levenberg-Marquardt methods.
Journal ArticleDOI

HeadOn: Real-time Reenactment of Human Portrait Videos

TL;DR: HeadOn is proposed as the first real-time source-to-target reenactment approach for complete human portrait videos, enabling transfer of torso and head motion, facial expression, and eye gaze.
Posted Content

IGNOR: Image-guided Neural Object Rendering

TL;DR: A learned image-guided rendering technique that combines the benefits of image-based rendering and GAN-based image synthesis to generate photo-realistic re-renderings of reconstructed objects for virtual- and augmented-reality applications.
Proceedings ArticleDOI

Demo of FaceVR: real-time facial reenactment and eye gaze control in virtual reality

TL;DR: FaceVR is introduced, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context that combines a robust algorithm for real-time facial motion capture of an actor wearing a head-mounted display (HMD) with a new data-driven approach for eye tracking from monocular videos.
Proceedings Article

Image-guided Neural Object Rendering

TL;DR: This work presents a novel method for photo-realistic re-rendering of reconstructed objects that combines the benefits of image-based rendering and GAN-based image synthesis, and proposes EffectsNet, a deep neural network that predicts view-dependent effects.