Mengyuan Liu
Researcher at Sun Yat-sen University
Publications - 67
Citations - 1982
Mengyuan Liu is an academic researcher at Sun Yat-sen University. The author has contributed to research on topics including computer science and convolutional neural networks. The author has an h-index of 17 and has co-authored 47 publications receiving 1,354 citations. Previous affiliations of Mengyuan Liu include Tencent and Nanyang Technological University.
Papers
Journal ArticleDOI
Enhanced skeleton visualization for view invariant human action recognition
Mengyuan Liu, Hong Liu, Chen Chen +2 more
TL;DR: The enhanced skeleton visualization method encodes spatio-temporal skeletons as visual- and motion-enhanced color images in a compact yet distinctive manner, and consistently achieves the highest accuracies on four datasets, including the largest and most challenging NTU RGB+D dataset for skeleton-based action recognition.
Proceedings ArticleDOI
Recognizing Human Actions as the Evolution of Pose Estimation Maps
Mengyuan Liu, Junsong Yuan +1 more
TL;DR: This work presents a novel method that recognizes human actions as the evolution of pose estimation maps and outperforms most state-of-the-art methods.
Posted Content
Two-Stream 3D Convolutional Neural Network for Skeleton-Based Action Recognition.
Hong Liu, Juanhui Tu, Mengyuan Liu +2 more
TL;DR: This paper proposes a novel two-stream model using 3D CNNs for skeleton-based action recognition that outperforms most RNN-based methods, verifying the complementary property between spatial and temporal information and the robustness to noise.
Posted Content
A Survey on 3D Skeleton-Based Action Recognition Using Learning Method.
TL;DR: This survey highlights the necessity of action recognition and the significance of 3D skeleton data, and gives an overall discussion of deep learning-based action recognition using 3D skeleton data.
Proceedings Article
3D action recognition using multi-temporal depth motion maps and Fisher vector
TL;DR: Extensive experiments on the public MSRAction3D, MSRGesture3D and DHA datasets show that the proposed method outperforms state-of-the-art approaches for depth-based action recognition.