scispace - formally typeset

Christian Theobalt

Researcher at Max Planck Society

Publications: 508
Citations: 34,680

Christian Theobalt is an academic researcher at the Max Planck Society. He has contributed to research on topics including motion capture and computer science, has an h-index of 89, and has co-authored 450 publications receiving 25,487 citations. His previous affiliations include Stanford University and Facebook.

Papers
Proceedings ArticleDOI

GANerated Hands for Real-Time 3D Hand Tracking from Monocular RGB

TL;DR: This work proposes a novel approach to the synthetic generation of training data based on a geometrically consistent image-to-image translation network: a neural network translates synthetic images into "real" images so that the generated images follow the same statistical distribution as real-world hand images.
Posted Content

Face2Face: Real-time Face Capture and Reenactment of RGB Videos

TL;DR: Face2Face addresses the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling, and convincingly re-renders the synthesized target face on top of the corresponding video stream so that it seamlessly blends with the real-world illumination.
Journal ArticleDOI

Real-time non-rigid reconstruction using an RGB-D camera

TL;DR: This work presents a combined hardware and software solution for markerless reconstruction of non-rigidly deforming physical objects of arbitrary shape in real time; it is an order of magnitude faster than state-of-the-art methods while matching the quality and robustness of many offline algorithms.
Posted Content

Neural Sparse Voxel Fields

TL;DR: This work introduces Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast, high-quality free-viewpoint rendering that is over 10 times faster than the state of the art (namely, NeRF) at inference time while achieving higher-quality results.
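The speed-up behind sparse voxel scene representations comes from skipping empty space during ray marching, so the expensive neural field is only queried inside occupied voxels. A minimal sketch of that idea (a hypothetical simplification for illustration, not the authors' implementation) might look like:

```python
import numpy as np

def march_ray_sparse(origin, direction, occupancy, voxel_size=1.0, n_steps=64):
    """Step a ray through a sparse occupancy grid, keeping only samples
    that fall inside occupied voxels.

    `occupancy` is a boolean 3D array; in a real renderer, only the
    returned sample points would be passed to the (expensive) neural field.
    """
    direction = direction / np.linalg.norm(direction)
    samples = []
    for i in range(n_steps):
        p = origin + i * voxel_size * direction
        idx = np.floor(p / voxel_size).astype(int)
        # Discard sample points outside the grid bounds
        if np.any(idx < 0) or np.any(idx >= np.array(occupancy.shape)):
            continue
        # Skip empty voxels entirely -- this is the core speed-up
        if occupancy[tuple(idx)]:
            samples.append(p)
    return samples

# Toy grid with a single occupied voxel at index (2, 0, 0)
grid = np.zeros((4, 4, 4), dtype=bool)
grid[2, 0, 0] = True
pts = march_ray_sparse(np.zeros(3), np.array([1.0, 0.0, 0.0]), grid, n_steps=4)
# Only the one sample inside the occupied voxel survives
```

Here the dense loop still visits every step; NSVF-style methods go further by organizing occupied voxels in an octree so empty regions are never stepped through at all.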
Journal ArticleDOI

Real-time expression transfer for facial reenactment

TL;DR: The novelty of the approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video such that the newly synthesized expressions are virtually indistinguishable from a real video.