
Zicheng Liu

Researcher at Huazhong University of Science and Technology

Publications: 416
Citations: 19,843

Zicheng Liu is an academic researcher at Huazhong University of Science and Technology. He has contributed to research in topics including Computer science and Microphone, has an h-index of 60, and has co-authored 343 publications receiving 15,879 citations. His previous affiliations include Microsoft and the University of Illinois at Urbana–Champaign.

Papers
Proceedings ArticleDOI

Mining actionlet ensemble for action recognition with depth cameras

TL;DR: An actionlet ensemble model is learned to represent each action and to capture intra-class variance, and novel features suitable for depth data are proposed.
Proceedings ArticleDOI

Action recognition based on a bag of 3D points

TL;DR: To recognize human actions from sequences of depth maps, an action graph explicitly models the dynamics of the actions, while a bag of 3D points characterizes a set of salient postures corresponding to the nodes of the action graph.
Proceedings ArticleDOI

HON4D: Histogram of Oriented 4D Normals for Activity Recognition from Depth Sequences

TL;DR: A new descriptor for activity recognition from videos acquired by a depth sensor is presented that better captures the joint shape-motion cues in the depth sequence, and thus outperforms the state-of-the-art on all relevant benchmarks.
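The core of the descriptor can be sketched as follows: each depth sequence is treated as a surface z = d(x, y, t), its 4D surface normals are computed from spatial and temporal gradients, and the normals are binned into an orientation histogram. This is a minimal illustration, not the paper's implementation; in particular, the paper quantizes with the vertices of a regular 4D polychoron, for which random unit 4-vectors stand in here, and `num_bins` and `seed` are illustrative parameters.

```python
import numpy as np

def hon4d_descriptor(depth_seq, num_bins=120, seed=0):
    """Sketch of a histogram of oriented 4D normals for a depth sequence.

    depth_seq: array of shape (T, H, W) holding T depth frames.
    Returns an L1-normalized histogram of length num_bins.
    """
    # Gradients along time, rows, and columns of the depth surface.
    dt, dy, dx = np.gradient(depth_seq.astype(float))
    # 4D surface normal of z = d(x, y, t) is proportional to (-dx, -dy, -dt, 1).
    normals = np.stack([-dx, -dy, -dt, np.ones_like(dx)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    # Projectors: random unit 4-vectors stand in for the paper's
    # polychoron vertices (an assumption for this sketch).
    rng = np.random.default_rng(seed)
    proj = rng.normal(size=(num_bins, 4))
    proj /= np.linalg.norm(proj, axis=1, keepdims=True)
    # Each normal votes for the projectors it aligns with (rectified dot product).
    votes = np.maximum(normals.reshape(-1, 4) @ proj.T, 0.0)
    hist = votes.sum(axis=0)
    return hist / (hist.sum() + 1e-12)
```

In the actual method the histogram is computed per spatio-temporal cell and the cell histograms are concatenated, which is what lets the descriptor capture joint shape-motion cues locally.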
Proceedings ArticleDOI

Large Scale Incremental Learning

TL;DR: This work finds that the last fully connected layer has a strong bias towards the new classes and that this bias can be corrected by a linear model; with only two bias parameters, the method performs remarkably well on two large datasets.
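The bias-correction step can be sketched as a linear rescaling applied only to the new-class logits, leaving old-class logits untouched. This is a minimal illustration under that reading of the summary, not the paper's implementation; `alpha` and `beta` here stand for the two learned bias parameters.

```python
import numpy as np

def bias_correct(logits, new_class_mask, alpha, beta):
    """Apply the two-parameter linear correction o' = alpha * o + beta
    to the logits of the new classes only.

    logits: array of shape (..., num_classes)
    new_class_mask: boolean array of length num_classes marking new classes.
    """
    out = logits.copy()
    out[..., new_class_mask] = alpha * logits[..., new_class_mask] + beta
    return out
```

In the paper's setting, alpha and beta are fit on a small held-out validation set after each incremental training stage, which is what keeps the correction cheap relative to retraining the classifier.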
Journal ArticleDOI

Learning Actionlet Ensemble for 3D Human Action Recognition

TL;DR: This paper proposes to characterize human actions with a novel actionlet ensemble model, which represents the interaction of a subset of human joints; the model is robust to noise, invariant to translational and temporal misalignment, and capable of characterizing both human motion and human-object interactions.