
Sheng Tang

Researcher at Chinese Academy of Sciences

Publications: 143
Citations: 3,507

Sheng Tang is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research in the topics Visual Word and TRECVID. The author has an h-index of 25 and has co-authored 131 publications receiving 2431 citations. Previous affiliations of Sheng Tang include the National University of Singapore and the Dalian University of Technology.

Papers
Book Chapter DOI

Ensemble Learning with LDA Topic Models for Visual Concept Detection

TL;DR: Video concept detection is a classification task, in which a binary classifier is usually learned to predict the presence of a certain concept in a video shot or keyframe (image).
Journal Article DOI

Detection and tracking based tubelet generation for video object detection

TL;DR: This work proposes a novel tubelet fusion method that combines multi-modal information (appearance information in independent images and contextual information in videos) and shows that the proposed method achieves state-of-the-art performance.

TRECVID 2006 Rushes Exploitation By CAS MCG

TL;DR: This work proposes a novel, interactive rushes video selection and editing method based on hierarchical browsing of key frames. High-level features of each key frame (such as face, interview, person, crowd, building, outdoor, and waterbody), together with information about redundancy and repetition, are displayed simultaneously to help editors select what they really want.
Proceedings Article DOI

Human Attention Model for Action Movie Analysis

TL;DR: Based on video structuralization and human attention analysis, action-concept annotation for interest-oriented viewer navigation is realized, and the effectiveness and generality of the human attention model for action movie analysis are demonstrated.