Sheng Tang

Researcher at Chinese Academy of Sciences

Publications: 143
Citations: 3507

Sheng Tang is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research in topics including Visual Word and TRECVID, has an h-index of 25, and has co-authored 131 publications receiving 2431 citations. Previous affiliations of Sheng Tang include the National University of Singapore and the Dalian University of Technology.

Papers
Proceedings ArticleDOI

Joint Learning of Binary Classifiers and Pairwise Label Correlations for Multi-label Image Classification

TL;DR: This paper jointly learns binary classifiers and pairwise label correlations (JBP) in an end-to-end manner, and introduces an online hard sample mining strategy to focus on distinguishing confusing label pairs.
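
A minimal sketch of the JBP idea, assuming a PyTorch-style setup (the class and function names below are hypothetical, not from the paper): one binary logit per label, one co-occurrence logit per label pair, trained jointly, with online hard sample mining keeping only the hardest pairwise losses in each batch.

```python
import torch
import torch.nn as nn

class JBPHead(nn.Module):
    """Joint binary + pairwise head (illustrative only)."""
    def __init__(self, feat_dim: int, num_labels: int):
        super().__init__()
        self.binary = nn.Linear(feat_dim, num_labels)      # one logit per label
        num_pairs = num_labels * (num_labels - 1) // 2
        self.pairwise = nn.Linear(feat_dim, num_pairs)     # one logit per label pair

    def forward(self, feats):
        return self.binary(feats), self.pairwise(feats)

def jbp_loss(bin_logits, pair_logits, bin_targets, pair_targets, hard_ratio=0.3):
    bce = nn.BCEWithLogitsLoss(reduction="none")
    loss_bin = bce(bin_logits, bin_targets).mean()
    # Online hard sample mining: keep only the hardest label pairs.
    pair_losses = bce(pair_logits, pair_targets).flatten()
    k = max(1, int(hard_ratio * pair_losses.numel()))
    hard, _ = pair_losses.topk(k)
    return loss_bin + hard.mean()
```

Both heads share the image features and are optimized through one loss, which is what makes the binary classifiers and the pairwise correlations end-to-end trainable together.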
Proceedings ArticleDOI

A hierarchical framework for movie content analysis: Let computers watch films like humans

TL;DR: The promising results of users' subjective assessment indicate that the proposed hierarchical framework is applicable for automatic analysis of movie content by computers.
Journal ArticleDOI

FSpH: Fitted spectral hashing for efficient similarity search

TL;DR: Fitted spectral hashing (FSpH) is proposed, based on the fact that one-dimensional data of any distribution can be mapped to a uniform distribution without changing the local neighbor relations among data items, and that this mapping can be fitted well by S-curve and Fourier functions.
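
As an illustration of the observation behind FSpH (hypothetical code, not the authors' implementation): the empirical CDF maps one-dimensional data of any distribution onto a uniform distribution while preserving neighbor order, and that monotone mapping can then be fitted with a logistic S-curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def s_curve(x, a, b):
    # Logistic S-curve used to fit the empirical CDF.
    return 1.0 / (1.0 + np.exp(-a * (x - b)))

x = np.sort(np.random.randn(1000))            # 1-D data of any distribution
ecdf = np.arange(1, len(x) + 1) / len(x)      # rank map -> uniform on (0, 1]

params, _ = curve_fit(s_curve, x, ecdf, p0=[1.0, 0.0])
u = s_curve(x, *params)                       # smooth, order-preserving mapping
# u is approximately uniform, and because the mapping is monotone the
# local neighbor relations among data items are unchanged.
```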
Journal ArticleDOI

Representative selection based on sparse modeling

TL;DR: A two-step iterative representative selection algorithm is proposed, based on the assumption that the dataset can be approximately reconstructed by linear combinations of dictionary items, and it selects representatives by minimizing the resulting Kullback-Leibler (KL) divergence.
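
One plausible reading of such a two-step iteration, sketched below with hypothetical code (an L1-penalized least-squares coder stands in here for the paper's KL-divergence objective): alternately (1) sparsely code the whole dataset over the current candidate dictionary and (2) keep the items whose coefficients are used most.

```python
import numpy as np
from sklearn.linear_model import Lasso

def select_representatives(X, k, n_iters=3, alpha=0.1):
    """X: (dim, n_samples); returns indices of k representative columns."""
    idx = np.arange(X.shape[1])               # start with all samples as candidates
    for _ in range(n_iters):
        D = X[:, idx]                         # current dictionary
        # Step 1: reconstruct every column of X as a sparse combination of D.
        coder = Lasso(alpha=alpha, fit_intercept=False)
        coder.fit(D, X)                       # coef_: (n_samples, len(idx))
        usage = np.abs(coder.coef_).sum(axis=0)
        # Step 2: keep the k most-used dictionary items.
        idx = idx[np.argsort(usage)[::-1][:k]]
    return idx
```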
Proceedings ArticleDOI

Scalable logo recognition based on compact sparse dictionary for mobile devices

TL;DR: A novel scalable logo recognition system is presented that can recognize a large number of logo categories locally on mobile devices, requires no supervised training procedure, and is highly time-efficient with a low memory cost.
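
A rough sketch of recognition with per-class compact sparse dictionaries (hypothetical code; the sklearn-based dictionary learner is an assumption, not the paper's construction): learn a small dictionary per logo class from its local descriptors, with no supervised training loop, then classify a query by the class whose dictionary yields the lowest sparse reconstruction residual.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_dictionaries(descriptors_by_class, n_atoms=64):
    """descriptors_by_class: {class_name: (n, d) array of local descriptors}."""
    dicts = {}
    for cls, desc in descriptors_by_class.items():
        model = MiniBatchDictionaryLearning(n_components=n_atoms,
                                            transform_algorithm="omp")
        model.fit(desc)                        # unsupervised dictionary learning
        dicts[cls] = model
    return dicts

def recognize(query_desc, dicts):
    # The class with the smallest mean reconstruction residual wins.
    def residual(model):
        codes = model.transform(query_desc)    # sparse codes via OMP
        recon = codes @ model.components_
        return np.mean(np.sum((query_desc - recon) ** 2, axis=1))
    return min(dicts, key=lambda c: residual(dicts[c]))
```

Keeping each per-class dictionary compact (a few dozen atoms) is what keeps memory and lookup cost low enough for on-device use.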