Sheng Tang
Researcher at Chinese Academy of Sciences
Publications - 143
Citations - 3507
Sheng Tang is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research on topics including Visual Word and TRECVID. The author has an h-index of 25, having co-authored 131 publications that have received 2431 citations. Previous affiliations of Sheng Tang include the National University of Singapore and the Dalian University of Technology.
Papers
Journal ArticleDOI
A density-based method for adaptive LDA model selection
TL;DR: A density-based method for adaptively selecting the best LDA model is proposed, and experiments show that the proposed method achieves performance comparable to the best LDA model without manually tuning the number of topics.
Proceedings ArticleDOI
Scale-Adaptive Convolutions for Scene Parsing
TL;DR: The proposed scale-adaptive convolutions are not only differentiable, allowing the convolutional parameters and scale coefficients to be learned end-to-end, but also highly parallelizable, facilitating efficient GPU implementation.
Journal ArticleDOI
CGNet: A Light-Weight Context Guided Network for Semantic Segmentation
TL;DR: This paper proposes a Context Guided Network (CGNet), a light-weight and efficient network for semantic segmentation that learns the joint features of local content and surrounding context effectively and efficiently.
Posted Content
CGNet: A Light-weight Context Guided Network for Semantic Segmentation
TL;DR: This work proposes a novel Context Guided Network (CGNet), which is a light-weight and efficient network for semantic segmentation, and develops CGNet which captures contextual information in all stages of the network.
Proceedings Article
Image caption with global-local attention
TL;DR: This paper proposes a global-local attention (GLA) method that integrates object-level local representations with image-level global representations through an attention mechanism, enabling the model to predict salient objects more precisely with high recall while concurrently preserving image-level context.