
Chen Qian

Researcher at SenseTime

Publications: 207
Citations: 10,106

Chen Qian is an academic researcher from SenseTime. The author has contributed to research in topics including Computer science and Pose. The author has an h-index of 30 and has co-authored 125 publications receiving 5,669 citations. Previous affiliations of Chen Qian include Shanghai Jiao Tong University and The Chinese University of Hong Kong.

Papers
Posted Content

GreedyNAS: Towards Fast One-Shot NAS with Greedy Supernet

TL;DR: GreedyNAS, as presented in this paper, proposes a multi-path sampling strategy with rejection that greedily filters out weak paths, improving the performance of a single supernet on a huge-scale search space.
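As a rough illustration of the sampling-with-rejection idea described in this TL;DR, here is a minimal sketch; the path encoding, thresholds, and evaluate_path() proxy are placeholders, not the paper's actual implementation.

```python
# Sketch: sample candidate paths and greedily reject weak ones before
# using them for supernet training. All names and numbers are assumptions.
import random

NUM_LAYERS = 10          # assumed depth of the search space
NUM_CHOICES = 4          # assumed candidate ops per layer
REJECT_THRESHOLD = 0.5   # assumed proxy-score cutoff for "weak" paths


def sample_path():
    """Sample one path: an operation index per layer."""
    return [random.randrange(NUM_CHOICES) for _ in range(NUM_LAYERS)]


def evaluate_path(path):
    """Placeholder proxy evaluation (e.g. accuracy on a small val subset)."""
    return random.random()


def greedy_sample(num_paths):
    """Keep only paths whose proxy score clears the rejection threshold,
    so training effort concentrates on potentially good paths."""
    kept = []
    while len(kept) < num_paths:
        path = sample_path()
        if evaluate_path(path) >= REJECT_THRESHOLD:
            kept.append(path)
    return kept


if __name__ == "__main__":
    print(greedy_sample(3))
```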
Proceedings Article

Towards Improving the Consistency, Efficiency, and Flexibility of Differentiable Neural Architecture Search

TL;DR: EnTranNAS, as discussed by the authors, is composed of Engine-cells and Transit-cells: the Engine-cell is differentiable for architecture search, while the Transit-cell only transits a sub-graph obtained by architecture derivation.
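To make the Engine-cell versus Transit-cell split concrete, a minimal sketch follows, assuming a PyTorch-style mixed operation; the candidate operations and wiring are hypothetical and only illustrate "differentiable mixture vs. derived single op".

```python
# Sketch: one cell that can run either as a differentiable mixture of ops
# (Engine-cell style) or as the single derived op (Transit-cell style).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOp(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Assumed candidate operations; the real search space differs.
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 1),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x, derived=False):
        if derived:
            # Transit-cell style: only the best op of the derived sub-graph runs.
            return self.ops[int(self.alpha.argmax())](x)
        # Engine-cell style: differentiable softmax mixture over all ops.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))


if __name__ == "__main__":
    x = torch.randn(1, 8, 16, 16)
    cell = MixedOp(8)
    print(cell(x, derived=False).shape)  # Engine-cell forward
    print(cell(x, derived=True).shape)   # Transit-cell forward
```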
Posted Content

Weakly-Supervised Discovery of Geometry-Aware Representation for 3D Human Pose Estimation

TL;DR: A geometry-aware 3D representation of the human pose is learned by using multiple views in a simple auto-encoder model at the training stage, with only 2D keypoint information as supervision; the learned representation is then injected as a robust 3D prior.
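A minimal sketch of the multi-view auto-encoder idea in this TL;DR is given below: 2D keypoints from one view are encoded into a latent code, rotated toward another view, and decoded back to 2D keypoints. The joint count, layer sizes, and use of a known rotation are assumptions, not the paper's design.

```python
# Sketch: cross-view keypoint auto-encoder supervised only by 2D keypoints.
import torch
import torch.nn as nn

NUM_JOINTS = 17               # assumed keypoint count
LATENT_DIM = 3 * NUM_JOINTS   # latent treated as a canonical 3D-like code


class ViewAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2 * NUM_JOINTS, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 2 * NUM_JOINTS),
        )

    def forward(self, kpts_view_a, rotation_ab):
        latent = self.encoder(kpts_view_a).view(-1, NUM_JOINTS, 3)
        # Rotate the latent "3D" code into the target view before decoding.
        rotated = latent @ rotation_ab.transpose(-1, -2)
        return self.decoder(rotated.flatten(1))


if __name__ == "__main__":
    model = ViewAutoEncoder()
    kpts_a = torch.randn(4, 2 * NUM_JOINTS)
    rot = torch.eye(3).expand(4, 3, 3)
    kpts_b_pred = model(kpts_a, rot)
    # Training would minimize e.g. MSE against keypoints observed in view B.
    print(kpts_b_pred.shape)
```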
Posted Content

HMOR: Hierarchical Multi-Person Ordinal Relations for Monocular Multi-Person 3D Pose Estimation

TL;DR: This paper addresses the lack of a global perspective in top-down approaches to monocular multi-person 3D pose estimation by introducing a novel form of supervision: Hierarchical Multi-person Ordinal Relations (HMOR).
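To illustrate what ordinal-relation supervision between people could look like, here is a minimal sketch of a pairwise depth-ordering loss; the margin and the person-level granularity are assumptions, and the paper's hierarchy is richer than this.

```python
# Sketch: penalize predicted per-person depths whose ordering disagrees
# with the ground-truth ordering.
import torch


def ordinal_relation_loss(pred_depths, gt_depths, margin=0.1):
    """pred_depths, gt_depths: (N,) root depths of N people in one image."""
    loss = pred_depths.new_zeros(())
    n = pred_depths.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if gt_depths[i] < gt_depths[j]:
                # Person i should be predicted closer than person j.
                loss = loss + torch.relu(pred_depths[i] - pred_depths[j] + margin)
            elif gt_depths[i] > gt_depths[j]:
                loss = loss + torch.relu(pred_depths[j] - pred_depths[i] + margin)
    return loss


if __name__ == "__main__":
    pred = torch.tensor([2.0, 1.5, 3.0], requires_grad=True)
    gt = torch.tensor([1.8, 2.2, 3.1])
    print(ordinal_relation_loss(pred, gt))
```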
Posted Content

TransGaGa: Geometry-Aware Unsupervised Image-to-Image Translation

TL;DR: A novel disentangle-and-translate framework tackles image-to-image translation for objects with complex structures by disentangling the image space into a Cartesian product of appearance and geometry latent spaces, and it supports multimodal translation.
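As a rough sketch of the disentangle-and-translate idea in this TL;DR, the snippet below uses separate appearance and geometry encoders and a decoder that recombines them; the network shapes and the translate() interface are placeholders, not the paper's architecture.

```python
# Sketch: keep the geometry of a source image, borrow appearance from a
# reference image, and decode the combination. Sampling different
# references gives multimodal outputs.
import torch
import torch.nn as nn


class DisentangleTranslator(nn.Module):
    def __init__(self, app_dim=64, geo_dim=32):
        super().__init__()
        self.app_enc = nn.Sequential(nn.Conv2d(3, app_dim, 4, 2, 1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1))
        self.geo_enc = nn.Sequential(nn.Conv2d(3, geo_dim, 3, 1, 1), nn.ReLU())
        self.decoder = nn.Conv2d(geo_dim + app_dim, 3, 3, 1, 1)

    def translate(self, x_src, x_app):
        geo = self.geo_enc(x_src)    # spatial geometry code
        app = self.app_enc(x_app)    # global appearance code
        app_map = app.expand(-1, -1, geo.shape[2], geo.shape[3])
        return self.decoder(torch.cat([geo, app_map], dim=1))


if __name__ == "__main__":
    model = DisentangleTranslator()
    src, ref = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
    print(model.translate(src, ref).shape)
```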