
Zerong Zheng

Researcher at Tsinghua University

Publications -  42
Citations -  1402

Zerong Zheng is an academic researcher at Tsinghua University. His work focuses on computer science and motion capture, particularly 3D human reconstruction and performance capture. He has an h-index of 10 and has co-authored 28 publications receiving 569 citations.

Papers
Proceedings ArticleDOI

DoubleFusion: Real-Time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor

TL;DR: DoubleFusion combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion, and the inner human body shape from a single depth camera.
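
The volumetric reconstruction side of systems like this typically accumulates per-frame depth observations into a truncated signed distance field (TSDF) by weighted averaging. Below is a minimal sketch of that generic fusion step, assuming a flat array of voxels; the function name, toy volume, and weight cap are illustrative and not taken from the paper.

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_tsdf, new_weight, max_weight=64.0):
    """Weighted running-average fusion of a new TSDF observation into an
    accumulated volume (the classic volumetric fusion update)."""
    w_sum = weight + new_weight
    fused = (tsdf * weight + new_tsdf * new_weight) / np.maximum(w_sum, 1e-8)
    # cap weights so the volume stays responsive to newer observations
    return fused, np.minimum(w_sum, max_weight)

# toy example: a 2-voxel volume observed twice with equal weight
tsdf = np.array([0.5, -0.2])
weight = np.ones(2)
obs = np.array([0.3, -0.4])
fused, w = fuse_tsdf(tsdf, weight, obs, np.ones(2))
# each voxel ends up halfway between old value and new observation
```

In a real pipeline this update runs per voxel near the observed surface every frame, with `new_weight` derived from viewing angle or sensor confidence.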
Proceedings ArticleDOI

DeepHuman: 3D Human Reconstruction From a Single Image

TL;DR: DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image, leverages a dense semantic representation generated from the SMPL model as an additional input to reduce the ambiguities associated with reconstructing invisible areas.
Posted Content

PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction

TL;DR: In the PaMIR reconstruction framework, a deep neural network regularizes a free-form deep implicit function using semantic features of the parametric body model, improving generalization to challenging poses and varied clothing topologies.
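
At their core, deep implicit representations of this kind map a 3D query point, together with a conditioning feature (here derived from the parametric model and the image), to an occupancy value. The toy numpy sketch below shows that interface only; the tiny MLP, random weights, and feature sizes are illustrative assumptions, not PaMIR's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def implicit_occupancy(points, cond_feat, W1, b1, W2, b2):
    """Toy conditioned implicit function: each 3D query point is concatenated
    with a conditioning feature vector and mapped by a small MLP to an
    occupancy value in (0, 1)."""
    n = points.shape[0]
    x = np.concatenate([points, np.tile(cond_feat, (n, 1))], axis=1)
    h = np.tanh(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)  # shape (n, 1)

pts = rng.standard_normal((5, 3))   # 5 query points in 3D
feat = rng.standard_normal(8)       # conditioning feature (illustrative)
W1 = rng.standard_normal((11, 16)); b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1));  b2 = np.zeros(1)
occ = implicit_occupancy(pts, feat, W1, b1, W2, b2)
```

The surface is then recovered as the level set where occupancy crosses 0.5, e.g. via marching cubes over a grid of query points.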
Posted Content

DeepHuman: 3D Human Reconstruction from a Single Image

TL;DR: DeepHuman proposes an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image; it fuses multiple scales of image features into 3D space through a volumetric feature transformation, which helps recover accurate surface geometry.
Proceedings ArticleDOI

Function4D: Real-time Human Volumetric Capture from Very Sparse Consumer RGBD Sensors

TL;DR: The authors propose a human volumetric capture method that combines temporal fusion and deep implicit functions, which not only preserves the geometric details of the depth inputs but also generates plausible texturing results.