scispace - formally typeset

Chen Qian

Researcher at SenseTime

Publications -  207
Citations -  10106

Chen Qian is an academic researcher at SenseTime. The author has contributed to research on topics including computer science and pose. The author has an h-index of 30 and has co-authored 125 publications receiving 5,669 citations. Previous affiliations of Chen Qian include Shanghai Jiao Tong University and The Chinese University of Hong Kong.

Papers
Proceedings ArticleDOI

Residual Attention Network for Image Classification

TL;DR: The Residual Attention Network is a convolutional neural network built on an attention mechanism that can be incorporated into state-of-the-art feed-forward network architectures and trained end-to-end.
Proceedings ArticleDOI

Realtime and Robust Hand Tracking from Depth

TL;DR: A hybrid method that combines gradient-based and stochastic optimization to achieve fast convergence and good accuracy is proposed, making it the first hand-tracking system to achieve such robustness, accuracy, and speed simultaneously.
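The hybrid idea — take a deterministic gradient step, also draw a stochastic candidate, and keep whichever scores better — can be sketched on a toy 1D objective. This is only an illustrative sketch of the general gradient-plus-stochastic pattern; the paper's actual optimizer fits a 3D hand model to depth data, not this toy function:

```python
import random

def hybrid_minimize(f, grad, x0, iters=200, lr=0.1, sigma=0.5, seed=0):
    """Toy hybrid optimizer: at each step, propose a gradient-descent
    candidate and a random perturbation candidate, then keep the best
    of {current, gradient, random} under the objective f."""
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        x_grad = x - lr * grad(x)         # deterministic, fast local descent
        x_rand = x + rng.gauss(0, sigma)  # stochastic, escapes poor basins
        x = min((x, x_grad, x_rand), key=f)  # objective never increases
    return x

# Minimize f(x) = (x - 3)^2, whose minimum is at x = 3.
best = hybrid_minimize(lambda x: (x - 3.0) ** 2,
                       lambda x: 2.0 * (x - 3.0),
                       x0=10.0)
```

Keeping the best of both proposals makes the objective monotonically non-increasing: the gradient branch supplies fast convergence near a basin, while the stochastic branch adds robustness to bad local regions.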
Proceedings ArticleDOI

The Seventh Visual Object Tracking VOT2019 Challenge Results

Matej Kristan, +179 more
TL;DR: The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative; results for 81 trackers are presented, many of them state-of-the-art trackers published at major computer vision conferences or in journals in recent years.
Proceedings ArticleDOI

Look at Boundary: A Boundary-Aware Face Alignment Algorithm

TL;DR: A boundary-aware face alignment algorithm that uses boundary lines as the geometric structure of the human face to aid facial landmark localisation; it achieves 3.49% mean error on the 300-W Fullset, outperforming state-of-the-art methods by a large margin.