Jin-Gang Yu

Researcher at South China University of Technology

Publications: 43
Citations: 688

Jin-Gang Yu is an academic researcher from South China University of Technology. The author has contributed to research in the topics of object detection and matching (graph theory). The author has an h-index of 12 and has co-authored 42 publications receiving 434 citations. Previous affiliations of Jin-Gang Yu include the University of Nebraska–Lincoln and the Huazhong University of Science and Technology.

Papers
Journal Article

Maximal entropy random walk for region-based visual saliency.

TL;DR: This paper adopts the maximal entropy random walk (MERW) as the mathematical model for measuring saliency and, building on a graph representation, establishes a generic framework for region-based saliency detection.
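
To give a concrete flavour of the MERW model, the following minimal Python sketch computes MERW visiting probabilities on a region-similarity graph. The graph construction (Gaussian similarities over toy region features) and the use of the stationary distribution as a region score are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch of maximal entropy random walk (MERW) scoring of regions.
# Assumptions (not from the paper): regions are described by simple feature
# vectors and graph weights are Gaussian similarities between them.
import numpy as np

def merw_stationary(A):
    """Stationary distribution of the MERW on a graph with adjacency A.

    For MERW, P_ij = A_ij * psi_j / (lam * psi_i), where (lam, psi) is the
    principal eigenpair of A; the stationary distribution is psi_i ** 2.
    """
    eigvals, eigvecs = np.linalg.eigh(A)          # A is symmetric
    psi = np.abs(eigvecs[:, np.argmax(eigvals)])  # principal eigenvector
    pi = psi ** 2
    return pi / pi.sum()

def region_scores(features, sigma=0.2):
    """Score regions by MERW visiting probability on a similarity graph."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    A = np.exp(-(d ** 2) / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)                      # no self-loops
    return merw_stationary(A)

# Toy example: 5 regions, one clearly different from the rest.
feats = np.array([[0.1, 0.1], [0.12, 0.11], [0.09, 0.1], [0.11, 0.12], [0.9, 0.8]])
print(region_scores(feats))

How visiting probabilities are mapped to final saliency values (and how the graph is built from image regions) follows the paper and is not reproduced here.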
Journal Article

Computer-aided diagnosis of laryngeal cancer via deep learning based on laryngoscopic images

TL;DR: The DCNN achieves high sensitivity and specificity in automatically distinguishing LCA and PRELCA from BLT and NORM in laryngoscopic images, facilitating earlier diagnosis of LCA, which can improve clinical outcomes and reduce the burden on endoscopists.
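
As an illustration only, the sketch below builds a generic 4-class image classifier for the categories named in the abstract (LCA, PRELCA, BLT, NORM) on a standard torchvision ResNet-50 backbone; the backbone choice, input size and training details are assumptions and do not reproduce the paper's DCNN.

# Illustrative sketch: a 4-class laryngoscopic image classifier.
# The backbone (ResNet-50) and 224x224 input are assumptions, not the paper's network.
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["LCA", "PRELCA", "BLT", "NORM"]  # class labels taken from the abstract

def build_model(num_classes=len(CLASSES)):
    model = models.resnet50(weights=None)               # assumed backbone
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_model()
dummy = torch.randn(1, 3, 224, 224)                     # one laryngoscopic frame
probs = torch.softmax(model(dummy), dim=1)
print(dict(zip(CLASSES, probs[0].tolist())))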
Proceedings Article

FGN: Fully Guided Network for Few-Shot Instance Segmentation

TL;DR: A Fully Guided Network (FGN) for few-shot instance segmentation is presented, which introduces different guidance mechanisms into the key components of Mask R-CNN to make full use of the guidance from the support set and to generalize better across classes.
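
The sketch below illustrates one common form of support-set guidance in few-shot models: channel-wise re-weighting of query features by pooled support features. It is a generic, hypothetical mechanism shown for illustration and is not the paper's actual FGN design.

# Generic illustration of support-set "guidance": pooled support features
# re-weight the query feature map channel-wise before the downstream head.
# This is NOT the paper's exact mechanism, just one common pattern.
import torch
import torch.nn as nn

class ChannelGuidance(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, query_feat, support_feat):
        # query_feat: (B, C, H, W); support_feat: (B, C, Hs, Ws)
        g = support_feat.mean(dim=(2, 3))          # global-average-pool the support
        w = torch.sigmoid(self.fc(g))              # per-channel guidance weights
        return query_feat * w[:, :, None, None]    # modulate the query features

guide = ChannelGuidance(256)
q = torch.randn(2, 256, 64, 64)
s = torch.randn(2, 256, 32, 32)
print(guide(q, s).shape)   # torch.Size([2, 256, 64, 64])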
Journal Article

A novel spatio-temporal saliency approach for robust dim moving target detection from airborne infrared image sequences

TL;DR: The proposed spatio-temporal saliency model achieves markedly better detection performance than state-of-the-art approaches.
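
A rough sketch of the general idea behind spatio-temporal saliency for small, dim targets: a spatial local-contrast map multiplied by a temporal frame-difference map. The specific filters and fusion used here are assumptions; the paper's model is more elaborate.

# Rough numpy sketch: combine spatial and temporal cues for dim target detection.
# Local contrast (spatial) times frame difference (temporal) -- an assumption,
# not the paper's actual saliency model.
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_saliency(frame, size=9):
    local_mean = uniform_filter(frame, size=size)
    return np.abs(frame - local_mean)              # simple local-contrast map

def temporal_saliency(frame, prev_frame):
    return np.abs(frame - prev_frame)              # frame-difference map

def spatio_temporal_saliency(frame, prev_frame):
    s = spatial_saliency(frame) * temporal_saliency(frame, prev_frame)
    return s / (s.max() + 1e-8)                    # normalise to [0, 1]

prev = np.random.rand(128, 128).astype(np.float32)
curr = prev.copy()
curr[60:63, 60:63] += 0.5                          # a small "moving" target
print(spatio_temporal_saliency(curr, prev).max())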
Proceedings Article

Temporally aligned pooling representation for video-based person re-identification

TL;DR: This paper proposes an effective Temporally Aligned Pooling Representation (TAPR) for video-based person re-identification, which selects the "best" walking cycle from noisy motion information according to the intrinsic periodicity of human walking.
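
The following sketch illustrates the underlying idea of temporally aligned pooling under simplified assumptions: estimate the walking period from a 1-D motion signal via FFT, then average-pool per-frame features over one cycle. The signal choice, the period estimator and the cycle selection here are hypothetical stand-ins, not the paper's method.

# Hedged sketch of temporally aligned pooling: estimate the walking period,
# take one cycle, average-pool per-frame features over it. The "best"-cycle
# selection used in the paper is not reproduced.
import numpy as np

def estimate_period(motion_signal):
    """Dominant period (in frames) of a zero-mean motion signal via FFT."""
    sig = motion_signal - motion_signal.mean()
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig))
    k = np.argmax(spectrum[1:]) + 1                # skip the DC component
    return int(round(1.0 / freqs[k]))

def tapr_pooling(frame_features, motion_signal):
    """Average-pool frame features over one estimated walking cycle."""
    period = estimate_period(motion_signal)
    period = max(1, min(period, len(frame_features)))
    return frame_features[:period].mean(axis=0)

T, D = 64, 128
feats = np.random.rand(T, D)                       # per-frame appearance features
signal = np.sin(2 * np.pi * np.arange(T) / 16) + 0.1 * np.random.rand(T)
print(tapr_pooling(feats, signal).shape)           # (128,)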