
Michael Zollhöfer

Researcher at Facebook

Publications: 108
Citations: 14736

Michael Zollhöfer is an academic researcher from Facebook. The author has contributed to research on the topics of Rendering (computer graphics) and 3D reconstruction. The author has an h-index of 46 and has co-authored 106 publications receiving 9686 citations. Previous affiliations of Michael Zollhöfer include the University of Pittsburgh and Stanford University.

Papers
Proceedings Article

Face2Face: Real-Time Face Capture and Reenactment of RGB Videos

TL;DR: A novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video) that addresses the under-constrained problem of recovering facial identity from monocular video via non-rigid model-based bundling and re-renders the manipulated output video in a photo-realistic fashion.
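To make the summary more concrete, below is a minimal sketch of the expression-transfer step that blendshape-based reenactment of this kind builds on, assuming a linear parametric face model; the array shapes, function names, and coefficients are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (not the authors' code) of expression transfer in a linear
# blendshape face model: the target's neutral geometry is re-posed with
# expression coefficients estimated from the source actor.

def reenact(target_identity, expression_basis, source_expression_coeffs):
    """Re-pose the target's neutral face geometry with the source's expression.

    target_identity:          (N*3,) neutral target face vertices (flattened)
    expression_basis:         (N*3, K) linear expression blendshape directions
    source_expression_coeffs: (K,) expression coefficients from the source actor
    """
    return target_identity + expression_basis @ source_expression_coeffs

# Toy example: 4 vertices, 2 expression blendshapes (all values random stand-ins).
rng = np.random.default_rng(0)
neutral = rng.standard_normal(12)
basis = rng.standard_normal((12, 2))
coeffs = np.array([0.7, -0.2])        # e.g. "open mouth", "raise brows"
reenacted = reenact(neutral, basis, coeffs)
print(reenacted.shape)  # (12,)
```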
Journal Article

Real-time 3D reconstruction at scale using voxel hashing

TL;DR: An online system for large- and fine-scale volumetric reconstruction based on a memory- and speed-efficient data structure that compresses space and allows real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure.
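A minimal sketch of the voxel-hashing idea described in the summary follows, assuming a TSDF representation with fixed block and voxel sizes; the constants, class, and function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Minimal sketch (not the paper's implementation) of voxel hashing: only voxel
# blocks near observed surfaces are allocated, and they are found via a hash of
# their integer block coordinates instead of a dense or hierarchical grid.

BLOCK_SIZE = 8          # assumed 8x8x8 voxels per block
VOXEL_SIZE = 0.005      # assumed 5 mm voxels

def block_coord(point):
    """Integer coordinates of the block containing a 3D point (metres)."""
    return tuple(np.floor(point / (BLOCK_SIZE * VOXEL_SIZE)).astype(int))

class TSDFVolume:
    def __init__(self):
        # Python dict as the hash table: block coordinate -> (tsdf, weight) arrays.
        self.blocks = {}

    def get_or_allocate(self, coord):
        if coord not in self.blocks:
            tsdf = np.ones((BLOCK_SIZE,) * 3, dtype=np.float32)    # truncated distances
            weight = np.zeros((BLOCK_SIZE,) * 3, dtype=np.float32)
            self.blocks[coord] = (tsdf, weight)
        return self.blocks[coord]

    def integrate_point(self, point, sdf, w=1.0):
        """Fuse one truncated signed-distance observation at a 3D point."""
        tsdf, weight = self.get_or_allocate(block_coord(point))
        voxel = tuple(np.floor(point / VOXEL_SIZE).astype(int) % BLOCK_SIZE)
        tsdf[voxel] = (tsdf[voxel] * weight[voxel] + sdf * w) / (weight[voxel] + w)
        weight[voxel] += w

vol = TSDFVolume()
vol.integrate_point(np.array([0.10, 0.02, 0.73]), sdf=0.003)
print(len(vol.blocks))  # only blocks near observed surface points are allocated
```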
Journal Article

Deferred neural rendering: image synthesis using neural textures

TL;DR: This work proposes Neural Textures, learned feature maps that are trained as part of the scene-capture process and can be used to coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates.
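The following is a minimal sketch of the deferred-neural-rendering idea the summary describes: a learned multi-channel neural texture is sampled at rasterized UV coordinates, and the sampled features are decoded into RGB. The resolutions, channel count, nearest-neighbour lookup, and the linear stand-in for the decoder network are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

# Minimal sketch (not the paper's pipeline) of deferred neural rendering with a
# neural texture: per-pixel UVs come from rasterising the scene geometry, a
# feature texture is sampled at those UVs, and a decoder maps features to RGB.

TEX_RES, CHANNELS = 256, 16

# In the actual method these are trainable; here they are random stand-ins.
neural_texture = np.random.randn(TEX_RES, TEX_RES, CHANNELS).astype(np.float32)
decoder_weights = np.random.randn(CHANNELS, 3).astype(np.float32)

def sample_texture(uv):
    """Nearest-neighbour lookup of the neural texture at UV coords in [0, 1)."""
    ij = np.clip((uv * TEX_RES).astype(int), 0, TEX_RES - 1)
    return neural_texture[ij[..., 1], ij[..., 0]]            # (H, W, CHANNELS)

def render(uv_map):
    """'Deferred' pass: sample features per pixel, then decode them to RGB."""
    features = sample_texture(uv_map)                         # (H, W, CHANNELS)
    return features @ decoder_weights                         # (H, W, 3)

uv_map = np.random.rand(120, 160, 2).astype(np.float32)      # rasterised UVs per pixel
image = render(uv_map)
print(image.shape)  # (120, 160, 3)
```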
Journal Article

BundleFusion: real-time globally consistent 3D reconstruction using on-the-fly surface re-integration

TL;DR: This paper proposes a robust pose estimation strategy for real-time, high-quality 3D scanning of large-scale scenes using RGB-D input, based on an efficient hierarchical approach that removes the heavy reliance on temporal tracking and instead continually localizes to the globally optimized frames.
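Below is a minimal sketch of the on-the-fly surface re-integration idea referenced in the title: because TSDF fusion is a running weighted average, a frame fused under an outdated pose can be de-integrated by applying the same update with a negative weight and then re-integrated under its newly optimized pose. The single-voxel "volume" and the specific numbers are assumptions for illustration, not the paper's system.

```python
# Minimal sketch (not the paper's system) of symmetric TSDF (de-)integration,
# which keeps the reconstruction consistent with a globally optimised trajectory.

class TinyTSDF:
    """A single voxel fused as a running weighted average of SDF observations."""
    def __init__(self):
        self.sum_wd = 0.0   # sum of weight * sdf
        self.sum_w = 0.0    # sum of weights

    def integrate(self, sdf, weight):
        # weight > 0 integrates; weight < 0 de-integrates a previous observation
        self.sum_wd += weight * sdf
        self.sum_w += weight

    @property
    def value(self):
        return self.sum_wd / self.sum_w if self.sum_w > 0 else None

voxel = TinyTSDF()
voxel.integrate(sdf=0.02, weight=1.0)    # frame fused under its initial pose
voxel.integrate(sdf=0.02, weight=-1.0)   # pose was re-optimised: de-integrate
voxel.integrate(sdf=0.005, weight=1.0)   # re-integrate with the corrected pose
print(voxel.value)                        # 0.005
```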
Proceedings Article

Demo of Face2Face: real-time face capture and reenactment of RGB videos

TL;DR: A novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video) that addresses the under-constrained problem of recovering facial identity from monocular video via non-rigid model-based bundling and re-renders the manipulated output video in a photo-realistic fashion.