Xu Chen
Researcher at ETH Zurich
Publications - 15
Citations - 323
Xu Chen is an academic researcher at ETH Zurich. The author has contributed to research topics including pose estimation and computer science, has an h-index of 6, and has co-authored 15 publications receiving 162 citations. Previous affiliations of Xu Chen include Disney Research.
Papers
Book Chapter
Category Level Object Pose Estimation via Neural Analysis-by-Synthesis
TL;DR: This paper combines a gradient-based fitting procedure with a parametric neural image synthesis module that implicitly represents the appearance, shape, and pose of entire object categories, removing the need for an explicit CAD model per object instance.
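The analysis-by-synthesis idea above can be sketched with a toy example: treat pose estimation as an optimization that repeatedly renders a candidate pose and descends on the image discrepancy. The `render` function below is a stand-in for the paper's learned image-synthesis module (a 2D point in place of an image), and the finite-difference gradient is an assumption for illustration; the actual method backpropagates through a neural renderer.

```python
import numpy as np

# Toy "renderer": maps a pose parameter (a rotation angle) to a 2D point,
# standing in for the parametric neural image-synthesis module.
def render(theta):
    return np.array([np.cos(theta), np.sin(theta)])

def fit_pose(observed, theta_init=0.0, lr=0.5, steps=200, eps=1e-4):
    """Gradient-based fitting: minimize ||render(theta) - observed||^2.
    Uses a finite-difference gradient so the sketch stays self-contained."""
    loss = lambda t: np.sum((render(t) - observed) ** 2)
    theta = theta_init
    for _ in range(steps):
        grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

target = render(1.2)        # "observed image" of an object at pose 1.2 rad
theta_hat = fit_pose(target)
```

The recovered `theta_hat` converges to the true pose because the renderer is differentiable, which is the property the neural synthesis module provides for real images.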
Proceedings Article
Monocular Neural Image Based Rendering With Continuous View Control
Jie Song,Xu Chen,Otmar Hilliges +2 more
TL;DR: In this paper, a learning pipeline determines the output pixels directly from the source color, which leads to more accurate view synthesis under continuous 6-DoF camera control and outperforms state-of-the-art baseline methods on public datasets.
Posted Content
Monocular Neural Image Based Rendering with Continuous View Control
Xu Chen,Jie Song,Otmar Hilliges +2 more
TL;DR: The experiments show that both proposed components, the transforming encoder-decoder and depth-guided appearance mapping, lead to significantly improved generalization beyond the training views and in consequence to more accurate view synthesis under continuous 6-DoF camera control.
Book Chapter
Human Body Model Fitting by Learned Gradient Descent
Jie Song,Xu Chen,Otmar Hilliges +2 more
TL;DR: In this article, a neural network predicts the parameter update rule at each iteration, guiding the optimizer towards a good solution and typically converging in very few steps.
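The learned-gradient-descent loop described above can be sketched as follows. The objective and the `learned_update` function are hypothetical stand-ins: in the paper the update predictor is a trained network conditioned on the fitting state, whereas here a fixed scaled-gradient rule plays that role so the loop is runnable.

```python
import numpy as np

def loss_and_grad(params, target):
    # Stand-in for the body-model fitting objective
    # (e.g., a joint reprojection error in the actual method).
    diff = params - target
    return np.sum(diff ** 2), 2 * diff

def learned_update(params, grad):
    # Placeholder for the neural network that predicts the parameter
    # update; a fixed scaled-gradient rule stands in here.
    return -0.4 * grad

def fit(target, iters=10):
    """Iterative fitting where each step's update comes from a
    (here: mocked) learned predictor instead of a hand-tuned optimizer."""
    params = np.zeros_like(target)
    for _ in range(iters):
        _, grad = loss_and_grad(params, target)
        params = params + learned_update(params, grad)
    return params

target = np.array([0.3, -1.1, 2.0])
fitted = fit(target)
```

The point of learning the update rule is that a network can take much more aggressive, problem-aware steps than generic gradient descent, which is what yields convergence in very few iterations.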
Proceedings Article
SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes
TL;DR: This paper proposes a forward skinning model that finds all canonical correspondences of any deformed point via iterative root finding, enabling end-to-end training from 3D meshes with bone transformations.
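The correspondence search above amounts to inverting a forward warp: given a deformed point, solve `forward_skin(x_c) = x_deformed` for the canonical point `x_c` by root finding. This is a minimal Newton-iteration sketch on a toy 2D warp (one rigid bone plus a smooth nonlinear term standing in for a learned skinning field); SNARF itself operates on a learned skinning-weight network and tracks multiple roots, which this sketch does not.

```python
import numpy as np

def forward_skin(x, transform):
    # Toy forward skinning: a rigid bone transform plus a smooth
    # nonlinear deformation (stand-in for a learned weight field).
    R, t = transform
    return R @ x + t + 0.1 * np.sin(x)

def find_canonical(x_deformed, transform, iters=30):
    """Iterative root finding: solve forward_skin(x_c) = x_deformed
    for the canonical point x_c via Newton's method."""
    R, t = transform
    x = np.linalg.solve(R, x_deformed - t)    # rigid-only initialization
    for _ in range(iters):
        residual = forward_skin(x, transform) - x_deformed
        J = R + 0.1 * np.diag(np.cos(x))      # Jacobian of the toy warp
        x = x - np.linalg.solve(J, residual)  # Newton update
    return x

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([0.5, -0.2])
x_canonical = np.array([0.7, 1.4])
x_deformed = forward_skin(x_canonical, (R, t))
x_rec = find_canonical(x_deformed, (R, t))
```

Because the canonical point is defined implicitly as a root, gradients can still flow through the solve (via the implicit function theorem), which is what makes the skinning differentiable end-to-end.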