Gang Yang
Researcher at Northeastern University (China)
Publications - 11
Citations - 612
Gang Yang is an academic researcher at Northeastern University (China). His research focuses on eye tracking and video tracking. He has an h-index of 5 and has co-authored 10 publications receiving 390 citations.
Papers
Proceedings Article
Detect Globally, Refine Locally: A Novel Approach to Saliency Detection
TL;DR: A global Recurrent Localization Network (RLN) is proposed that exploits contextual information via a weighted response map to localize salient objects more accurately; it performs favorably against existing methods on popular evaluation metrics.
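The core idea of weighting features by a response map can be illustrated with a minimal numpy sketch. This is a hypothetical illustration of the general mechanism, not the paper's RLN implementation; the function name and the sigmoid squashing are assumptions.

```python
import numpy as np

def weighted_response(features, response_map):
    """Weight each channel of a (C, H, W) feature map by a spatial
    response map, emphasizing likely salient locations.
    Hypothetical sketch of response-map weighting, not the paper's RLN."""
    w = 1.0 / (1.0 + np.exp(-response_map))   # squash scores to (0, 1)
    return features * w[None, :, :]           # broadcast over channels
```

Locations with high response scores keep their feature activations, while low-scoring locations are suppressed before the next refinement step.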
Journal Article
Multi attention module for visual tracking
TL;DR: This work proposes a new visual tracking algorithm that leverages multi-level visual attention to make full use of the information available during tracking. The attention network is implemented with long short-term memory (LSTM) units, which capture historical context to perform more reliable inference at the current time step.
Journal Article
An Unsupervised Game-Theoretic Approach to Saliency Detection
TL;DR: A novel unsupervised game-theoretic salient object detection algorithm that requires no labeled training data is proposed, together with an iterative random-walk algorithm that combines the saliency maps produced by the Saliency Game using different features.
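The idea of fusing several saliency maps with an iterative random walk can be sketched as repeated diffusion over a graph. This is a generic random-walk-style fusion under assumed inputs (a stack of flattened maps and a symmetric affinity matrix), not the paper's exact formulation.

```python
import numpy as np

def fuse_saliency_maps(maps, affinity, n_iters=20, alpha=0.85):
    """Iteratively diffuse the averaged saliency over a node graph.
    maps: (K, N) stack of K flattened saliency maps over N nodes.
    affinity: (N, N) nonnegative pairwise similarities.
    Hypothetical sketch of random-walk fusion, not the paper's algorithm."""
    prior = np.mean(maps, axis=0)                       # initial fused map
    P = affinity / affinity.sum(axis=1, keepdims=True)  # transition matrix
    s = prior.copy()
    for _ in range(n_iters):
        # walk one step over the graph, with restart to the averaged prior
        s = alpha * P.T @ s + (1 - alpha) * prior
    return s / s.max()
```

The restart term keeps the fused result anchored to the individual maps, while diffusion lets similar nodes reinforce each other's saliency.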
Journal Article
Boundary-Guided Feature Aggregation Network for Salient Object Detection
TL;DR: A novel FCN framework is proposed that recurrently integrates multilevel convolutional features under the guidance of object boundary information, and fuses boundary information into salient regions to achieve accurate boundary inference and semantic enhancement.
Journal Article
Deep mutual learning for visual object tracking
TL;DR: Extensive experiments on the OTB2013, OTB2015, VOT2017 and LaSOT benchmarks demonstrate that the tracking performance can be improved effectively by using the proposed mutual-learning-based training methodology.
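Deep mutual learning trains two networks jointly, each minimizing its own cross-entropy plus a KL term that pulls it toward the peer's predictions. The following numpy sketch shows that loss structure in the abstract; the function names and combination weights are assumptions, not the paper's training code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """KL(p || q) per sample for probability rows p, q."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def mutual_learning_losses(logits_a, logits_b, labels):
    """Each network's loss = its cross-entropy + KL toward the peer.
    Hypothetical sketch of the mutual-learning objective."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    idx = np.arange(len(labels))
    ce_a = -np.log(pa[idx, labels] + 1e-12)
    ce_b = -np.log(pb[idx, labels] + 1e-12)
    loss_a = ce_a + kl_div(pb, pa)   # network A mimics B
    loss_b = ce_b + kl_div(pa, pb)   # network B mimics A
    return loss_a.mean(), loss_b.mean()
```

When the two networks agree, the KL terms vanish and each loss reduces to plain cross-entropy; disagreement adds a mimicry penalty in both directions.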