
Christian Theobalt

Researcher at Max Planck Society

Publications: 508
Citations: 34,680

Christian Theobalt is an academic researcher at the Max Planck Society. His research focuses on motion capture and computer science. He has an h-index of 89 and has co-authored 450 publications receiving 25,487 citations. His previous affiliations include Stanford University and Facebook.

Papers
Journal Article

PhysCap: physically plausible monocular 3D motion capture in real time

TL;DR: In this article, a CNN infers 2D and 3D joint positions, and subsequently, an inverse kinematics step finds space-time coherent joint angles and global 3D pose.
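Below is a minimal, illustrative sketch of the kind of two-stage pipeline the summary describes: a (stubbed) CNN stage predicts a target joint position, and an inverse-kinematics step then solves for joint angles that reach it. The toy two-joint chain, bone lengths, and function names are assumptions for illustration, not PhysCap's actual skeleton model or solver.

```python
# Toy pipeline sketch: a stubbed "CNN" predicts a target position, then a
# least-squares inverse-kinematics step recovers joint angles for a simple
# planar 2-joint chain. All names and the chain model are illustrative
# assumptions, not the paper's implementation.
import numpy as np
from scipy.optimize import least_squares

BONE_LENGTHS = (0.35, 0.30)  # assumed segment lengths (metres)

def cnn_predict_joints(image):
    """Stand-in for the CNN stage: returns a dummy target end-effector position."""
    return np.array([0.4, 0.3])

def forward_kinematics(angles):
    """End-effector position of a planar 2-joint chain."""
    a1, a2 = angles
    l1, l2 = BONE_LENGTHS
    x = l1 * np.cos(a1) + l2 * np.cos(a1 + a2)
    y = l1 * np.sin(a1) + l2 * np.sin(a1 + a2)
    return np.array([x, y])

def inverse_kinematics(target, init=(0.1, 0.1)):
    """Least-squares IK: find joint angles whose forward kinematics match the target."""
    res = least_squares(lambda a: forward_kinematics(a) - target, x0=init)
    return res.x

target = cnn_predict_joints(image=None)
angles = inverse_kinematics(target)
print("joint angles (rad):", angles, "reached:", forward_kinematics(angles))
```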
Book Chapter

Full body performance capture under uncontrolled and varying illumination: a shading-based approach

TL;DR: A marker-less method for full-body human performance capture that analyzes shading information from a sequence of multi-view images recorded under uncontrolled and changing lighting conditions; it is applicable in cases where background segmentation cannot be performed or a set of training poses is unavailable.
Proceedings Article

Combining 2D feature tracking and volume reconstruction for online video-based human motion capture

TL;DR: A system to capture human motion at interactive frame rates without the use of markers or scene-intruding devices is described, and a multilayer hierarchical kinematic skeleton is fitted to each frame in a two-stage process (see the sketch below).
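The following toy sketch illustrates the two-stage idea at a very coarse level: a first stage places the skeleton root from a reconstructed volume, and a second stage refines a limb orientation against a tracked 2D feature. The occupancy grid, single-limb model, and projection are invented stand-ins, not the paper's multilayer hierarchical skeleton or fitting procedure.

```python
# Toy two-stage fit: stage 1 sets the root from the centroid of a voxel volume,
# stage 2 orients one limb toward a tracked 2D feature point. All inputs are
# synthetic stand-ins for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Fake inputs: a binary voxel volume and one tracked image feature.
volume = (rng.random((32, 32, 32)) > 0.8).astype(float)   # occupancy grid
tracked_feature = np.array([0.6, 0.2])                    # 2D point in the image plane

# Stage 1: coarse pose -- root position from the volume's centre of mass.
occupied = np.argwhere(volume > 0)
root = occupied.mean(axis=0) / np.array(volume.shape)     # normalised [0,1]^3 coords

# Stage 2: refine a single limb angle so its projected tip matches the feature.
def project(p3d):
    return p3d[:2]                                         # orthographic projection

limb_length = 0.3
angles = np.linspace(-np.pi, np.pi, 360)
tips = np.stack([root + limb_length * np.array([np.cos(a), np.sin(a), 0.0])
                 for a in angles])
errors = np.linalg.norm(np.array([project(t) for t in tips]) - tracked_feature, axis=1)
best_angle = angles[np.argmin(errors)]

print("root (coarse stage):", root)
print("limb angle (refinement stage):", best_angle)
```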
Journal Article

Capturing Relightable Human Performances under General Uncontrolled Illumination

TL;DR: The method enables plausible reconstruction of relightable dynamic scene models without a complex controlled lighting apparatus, and opens up a path towards relightable performance capture in less constrained environments and with less complex acquisition setups.
Posted Content

Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation

TL;DR: A novel human video synthesis method that addresses limiting factors of prior work by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space, and shows significant improvement over the state of the art both qualitatively and quantitatively.
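A minimal sketch of the disentangled two-network idea described above, assuming a PyTorch setting: one small network predicts fine-scale texture detail from a pose encoding, and a second network translates an intermediate render into the final screen-space frame. The module names, layer choices, and tensor shapes are illustrative assumptions, not the paper's actual architecture.

```python
# Two-stage sketch: dynamic-texture prediction followed by rendering-to-video
# translation. Both networks and all shapes are toy assumptions.
import torch
import torch.nn as nn

class DynamicTextureNet(nn.Module):
    """Maps a per-frame pose map to fine-scale texture detail (toy resolution)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, pose_map):
        return self.net(pose_map)

class RenderToVideoNet(nn.Module):
    """Translates a coarse textured render into the final screen-space frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, coarse_render):
        return self.net(coarse_render)

texture_net, render_net = DynamicTextureNet(), RenderToVideoNet()
pose_map = torch.rand(1, 3, 64, 64)      # dummy per-frame pose encoding
texture = texture_net(pose_map)          # stage 1: time-coherent fine-scale details
coarse_render = texture                  # stand-in for texturing + rasterisation
frame = render_net(coarse_render)        # stage 2: rendering-to-video translation
print(frame.shape)                       # torch.Size([1, 3, 64, 64])
```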