
Yue Ming

Researcher at Beijing University of Posts and Telecommunications

Publications -  85
Citations -  792

Yue Ming is an academic researcher from Beijing University of Posts and Telecommunications. The author has contributed to research on topics including facial recognition systems and feature extraction. The author has an h-index of 15, having co-authored 70 publications that have received 575 citations. Previous affiliations of Yue Ming include Tencent and Beijing Jiaotong University.

Papers
Patent

Hand motion identifying method and apparatus

TL;DR: In this paper, a hand motion detection method and apparatus is proposed that acquires feature points from paired RGB and depth video and compares them using three-dimensional grid motion SIFT feature descriptors.
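
The descriptor-comparison step lends itself to a small illustration. The sketch below is only a simplified 2D analogue using OpenCV's standard SIFT on the RGB channel of an RGB-D pair, not the patent's three-dimensional grid motion SIFT descriptor; the function name and ratio-test threshold are illustrative assumptions.

```python
# Simplified 2D analogue (not the patented 3D grid-motion descriptor):
# detect SIFT keypoints in two RGB frames of an RGB-D sequence and match
# their descriptors with a ratio test. Requires opencv-python (cv2).
import cv2

def match_sift_keypoints(prev_gray, curr_gray, ratio=0.75):
    """Detect SIFT keypoints in two grayscale frames and return good matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des1, des2, k=2)
    good = []
    for pair in candidates:
        # Lowe's ratio test filters ambiguous correspondences.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good
```
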
Journal ArticleDOI

MP-LN: motion state prediction and localization network for visual object tracking

TL;DR: A novel motion state prediction and localization network, named MP-LN, is proposed for visual object tracking; it predicts and translates a reasonable search area according to the target's continuous motion state and incorporates rewards to enhance the back-propagation of errors for more accurate motion state estimation.
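
The idea of translating the search area from the recent motion state can be illustrated with a simple constant-velocity sketch; this is a hypothetical stand-in for MP-LN's learned predictor, and the function and parameter names are assumptions.

```python
# Hypothetical illustration of predicting and translating a search area from
# the target's recent motion state, using a constant-velocity assumption
# rather than MP-LN's learned prediction. Boxes are (cx, cy, w, h).

def predict_search_area(prev_box, curr_box, scale=2.0):
    """Shift the search window along the estimated velocity and enlarge it."""
    vx = curr_box[0] - prev_box[0]
    vy = curr_box[1] - prev_box[1]
    cx = curr_box[0] + vx          # translate center by one velocity step
    cy = curr_box[1] + vy
    w = curr_box[2] * scale        # enlarge window to tolerate prediction error
    h = curr_box[3] * scale
    return (cx, cy, w, h)

# Example: target moved 5 px right and 2 px down between frames.
print(predict_search_area((100, 80, 32, 32), (105, 82, 32, 32)))
# -> (110, 84, 64.0, 64.0)
```
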
Journal ArticleDOI

Efficient scalable spatiotemporal visual tracking based on recurrent neural networks

TL;DR: A novel tracking framework, the scalable spatiotemporal visual tracking algorithm (SSVT), is proposed; it effectively reduces computational redundancy, improves tracking accuracy, and performs favorably against state-of-the-art trackers.
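
As a rough sketch of the recurrent spatiotemporal idea (propagating per-frame appearance features through an RNN so the hidden state carries temporal context for localization), the PyTorch snippet below is illustrative only; the module layout, dimensions, and names are assumptions, not SSVT's actual architecture.

```python
# Illustrative only: a tiny recurrent box-regression head, not SSVT itself.
# Per-frame appearance features flow through an LSTM whose hidden state
# carries temporal context, and each time step regresses a box update.
import torch
import torch.nn as nn

class RecurrentBoxHead(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=128):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.box = nn.Linear(hidden_dim, 4)   # (dx, dy, dw, dh) per frame

    def forward(self, frame_feats):
        # frame_feats: (batch, time, feat_dim) per-frame appearance features
        hidden, _ = self.rnn(frame_feats)
        return self.box(hidden)               # (batch, time, 4) box offsets

# Example: 2 sequences of 8 frames with 256-d features.
offsets = RecurrentBoxHead()(torch.randn(2, 8, 256))
print(offsets.shape)  # torch.Size([2, 8, 4])
```
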
Journal ArticleDOI

Visuals to Text: A Comprehensive Review on Automatic Image Captioning

TL;DR: This work presents a comprehensive review of image captioning, covering both traditional methods and recent deep learning-based techniques, and compares state-of-the-art methods on the MS COCO dataset.
Proceedings ArticleDOI

A new scheme for 3D face recognition

TL;DR: A novel system for 3D face recognition is presented that outperforms other popular approaches reported in the literature and achieves a much higher recognition accuracy.