Christian Theobalt
Researcher at Max Planck Society
Publications - 508
Citations - 34680
Christian Theobalt is an academic researcher at the Max Planck Society. His research focuses on topics including motion capture and computer science. He has an h-index of 89 and has co-authored 450 publications receiving 25487 citations. His previous affiliations include Stanford University and Facebook.
Papers
Journal Article
On-set performance capture of multiple actors with a stereo camera
TL;DR: A new algorithm is described which is able to track skeletal motion and detailed surface geometry of one or more actors from footage recorded with a stereo rig that is allowed to move, and is one of the first performance capture methods to exploit detailed BRDF information and scene illumination for accurate pose tracking and surface refinement in general scenes.
Proceedings Article
Dense correspondence finding for parametrization-free animation reconstruction from video
TL;DR: A dense 3D correspondence finding method that enables spatio-temporally coherent reconstruction of surface animations from multi-view video data, allowing the surface to be reconstructed as a sequence of meshes with constant connectivity and small tangential distortion.
Book Chapter
Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction
TL;DR: In this paper, an Implicit Part Network (IP-Net) is used to jointly predict the outer 3D surface of the dressed person, the inner body surface, and the semantic correspondences to a parametric body model.
Posted Content
Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video.
Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, Christian Theobalt +5 more
TL;DR: In this article, a non-rigid neural radiance field (NR-NeRF) is proposed that uses a neural ray-bending network to disentangle the dynamic scene into a canonical volume and its deformation.
Journal Article
Live intrinsic video
TL;DR: This work proposes a novel combination of sophisticated local spatial and global spatio-temporal priors, yielding temporally coherent decompositions at real-time frame rates without explicit correspondence search, which enables online processing of live video footage.