Chen Qian

Researcher at SenseTime

Publications: 207
Citations: 10,106

Chen Qian is an academic researcher at SenseTime whose work focuses on computer science, particularly pose estimation. He has an h-index of 30 and has co-authored 125 publications receiving 5,669 citations. His previous affiliations include Shanghai Jiao Tong University and The Chinese University of Hong Kong.

Papers
Proceedings Article

DeeperForensics-1.0: A Large-Scale Dataset for Real-World Face Forgery Detection

TL;DR: DeeperForensics-1.0 is a large-scale benchmark for real-world face forgery detection, containing 60,000 videos with a total of 17.6 million frames.
Book Chapter

VisDrone-DET2018: The Vision Meets Drone Object Detection in Image Challenge Results

Pengfei Zhu and 104 other authors
TL;DR: Releases a large-scale drone-based dataset of 8,599 images with rich annotations (object bounding boxes, object categories, occlusion, truncation ratios, etc.) to narrow the gap between current object detection performance and real-world requirements.
Proceedings Article

Deep Comprehensive Correlation Mining for Image Clustering

TL;DR: Proposes a novel clustering framework, deep comprehensive correlation mining (DCCM), that explores and takes full advantage of various kinds of correlations behind unlabeled data; rather than relying only on pair-wise information, pseudo-label supervision is introduced to investigate category information and learn discriminative features.
Posted Content

ReenactGAN: Learning to Reenact Faces via Boundary Transfer

TL;DR: The proposed method, ReenactGAN, transfers facial movements and expressions from an arbitrary person's monocular video input to a target person's video, achieving photo-realistic face reenactment.
Book Chapter

ReenactGAN: Learning to Reenact Faces via Boundary Transfer

TL;DR: Wu et al. present a learning-based framework for face reenactment, capable of transferring facial movements and expressions from an arbitrary person's monocular video input to a target person's video by mapping the source face onto a boundary latent space.