Chen Qian

Researcher at SenseTime

Publications: 207
Citations: 10,106

Chen Qian is an academic researcher at SenseTime. The author has contributed to research in topics including computer science and pose estimation, has an h-index of 30, and has co-authored 125 publications receiving 5,669 citations. Previous affiliations of Chen Qian include Shanghai Jiao Tong University and The Chinese University of Hong Kong.

Papers
Proceedings Article

AOT: Appearance Optimal Transport Based Identity Swapping for Forgery Detection

TL;DR: This work introduces an identity-swapping algorithm that handles large appearance differences, to benefit face forgery detection, and proposes an Appearance Optimal Transport (AOT) model that formulates appearance transfer in both latent and pixel space.
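
To make the optimal-transport idea above concrete, here is a minimal, hypothetical sketch of entropic optimal transport via Sinkhorn iterations between two sets of appearance features. Everything here (the uniform weights, the feature dimensions, and the sinkhorn/transferred names) is an illustrative assumption, not the paper's actual AOT formulation.

import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=200):
    """Entropic-regularized optimal transport plan via Sinkhorn iterations."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)             # uniform weights on source features
    b = np.full(m, 1.0 / m)             # uniform weights on target features
    K = np.exp(-cost / reg)             # Gibbs kernel from the cost matrix
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)                 # alternating scaling updates
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # transport plan

# Toy usage: push source appearance statistics toward the target's.
src = np.random.rand(64, 3)             # 64 source feature vectors (e.g., color stats)
tgt = np.random.rand(64, 3)             # 64 target feature vectors
cost = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)     # squared L2 costs
plan = sinkhorn(cost)
transferred = (plan / plan.sum(axis=1, keepdims=True)) @ tgt  # barycentric mapping
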
Posted Content

Deep Comprehensive Correlation Mining for Image Clustering

TL;DR: This paper proposes a novel clustering framework, Deep Comprehensive Correlation Mining (DCCM), for exploring and taking full advantage of various kinds of correlations behind unlabeled data from three aspects: first, instead of using only pairwise information, pseudo-label supervision is proposed to investigate category information and learn discriminative features.
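
As a rough illustration of the pseudo-label idea mentioned above, the sketch below mines confident pairwise pseudo-labels from cosine similarities. The thresholds and the mining rule are assumptions for illustration, not DCCM's actual procedure.

import numpy as np

def confident_pairs(features, pos_thresh=0.95, neg_thresh=0.3):
    """Mine pseudo-positive/negative pairs from pairwise cosine similarity."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                       # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)          # ignore trivial self-pairs
    pos = sim > pos_thresh              # confident "same cluster" pairs
    neg = sim < neg_thresh              # confident "different cluster" pairs
    return pos, neg

feats = np.random.randn(128, 32)        # 128 unlabeled embeddings of dim 32
pos_mask, neg_mask = confident_pairs(feats)
# The masks could then supervise a discriminative feature-learning loss.
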
Proceedings Article (DOI)

TRB: A Novel Triplet Representation for Understanding 2D Human Body

TL;DR: This paper proposes the Triplet Representation for Body (TRB), a compact 2D human body representation in which skeleton keypoints capture human pose information and contour keypoints capture human shape information, and further proposes a two-branch network (TRB-net) with three novel techniques: X-structure, Directional Convolution and Pairwise Mapping.
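
As a data-structure sketch only: one plausible way to store the triplet representation is to group each skeleton keypoint with two associated contour keypoints. The field names and grouping below are assumptions; the paper's exact layout may differ.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class TripletKeypoint:
    """One body landmark stored as a pose/shape triplet (illustrative)."""
    skeleton: Tuple[float, float]    # (x, y) skeleton keypoint: pose information
    contour_a: Tuple[float, float]   # (x, y) nearby contour keypoint: shape
    contour_b: Tuple[float, float]   # (x, y) second contour keypoint: shape

# Hypothetical usage for a single joint:
elbow = TripletKeypoint(skeleton=(120.0, 240.0),
                        contour_a=(112.0, 236.0),
                        contour_b=(129.0, 245.0))
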
Proceedings Article (DOI)

DMVOS: Discriminative Matching for Real-time Video Object Segmentation

TL;DR: This work proposes Discriminative Matching for real-time Video Object Segmentation (DMVOS), a real-time VOS framework with high accuracy, filling the gap between inference speed and segmentation accuracy.
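
The matching idea can be illustrated with a toy nearest-neighbor label transfer between reference-frame and query-frame pixel embeddings. This is a generic sketch under assumed shapes and names, not DMVOS's actual discriminative matching module.

import numpy as np

def transfer_labels(ref_feats, ref_labels, query_feats):
    """Label each query pixel by its most similar reference pixel."""
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sim = q @ r.T                        # (num_query, num_ref) cosine similarities
    nearest = sim.argmax(axis=1)         # best-matching reference pixel per query
    return ref_labels[nearest]

ref = np.random.randn(500, 64)           # reference-frame pixel embeddings
labels = np.random.randint(0, 2, 500)    # 0 = background, 1 = object
query = np.random.randn(500, 64)         # current-frame pixel embeddings
pred = transfer_labels(ref, labels, query)
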
Proceedings Article (DOI)

Green Hierarchical Vision Transformer for Masked Image Modeling

TL;DR: This paper presents an efficient approach for Masked Image Modeling (MIM) with hierarchical Vision Transformers (ViTs), e.g., Swin Transformer, allowing the hierarchical ViTs to discard masked patches and operate only on the visible ones.
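
The "discard masked patches" step can be sketched as MAE-style visible-token gathering, shown below under assumed shapes and names. Note that the paper's actual contribution is making this compatible with hierarchical/window attention, which this toy sketch does not implement.

import numpy as np

def keep_visible(tokens, mask_ratio=0.75, seed=0):
    """Randomly mask patch tokens and return only the visible ones."""
    rng = np.random.default_rng(seed)
    n = tokens.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    keep_idx = np.sort(rng.permutation(n)[:n_keep])  # indices of visible tokens
    return tokens[keep_idx], keep_idx

tokens = np.random.randn(196, 768)   # 14x14 patch embeddings at ViT-Base width
visible, idx = keep_visible(tokens)  # the encoder now runs on ~25% of the tokens
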