
Qiuhong Ke

Researcher at University of Melbourne

Publications: 55
Citations: 2120

Qiuhong Ke is an academic researcher from the University of Melbourne. The author has contributed to research in the topics of Computer Science and Convolutional Neural Networks. The author has an h-index of 11, co-authored 35 publications receiving 1304 citations. Previous affiliations of Qiuhong Ke include University of Western Australia & Beijing Forestry University.

Papers
Proceedings ArticleDOI

A New Representation of Skeleton Sequences for 3D Action Recognition

TL;DR: This paper proposes to use deep convolutional neural networks to learn long-term temporal information of the skeleton sequence from the frames of the generated clips, and a Multi-Task Learning Network (MTLN) to jointly process all frames of the clips in parallel to incorporate spatial structural information for action recognition.
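A minimal, illustrative sketch of the idea summarized above, not the authors' published code: a skeleton sequence is rendered into image-like clip frames, a shared CNN extracts features from each frame, and a small multi-task head (a simplified stand-in for the MTLN) processes the clip frames jointly. The joint count, clip length, number of clips, class count, and all layer sizes below are assumptions.

```python
import torch
import torch.nn as nn

NUM_JOINTS = 25   # e.g. NTU RGB+D skeletons (assumption)
NUM_FRAMES = 64   # temporal length of one generated clip image (assumption)
NUM_CLIPS = 4     # clip frames processed jointly by the multi-task head
NUM_CLASSES = 60  # action classes (assumption)

class ClipCNN(nn.Module):
    """Shared CNN mapping one clip image (3 x T x J) to a feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):                  # x: (B, 3, T, J)
        return self.fc(self.conv(x).flatten(1))

class MultiTaskHead(nn.Module):
    """One classifier per clip frame, applied in parallel; the per-clip
    logits are averaged (a simplification of the MTLN's joint processing)."""
    def __init__(self, feat_dim=128, num_clips=NUM_CLIPS, num_classes=NUM_CLASSES):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_clips)]
        )

    def forward(self, feats):               # feats: (B, num_clips, feat_dim)
        logits = [head(feats[:, i]) for i, head in enumerate(self.heads)]
        return torch.stack(logits, dim=1).mean(dim=1)   # (B, num_classes)

if __name__ == "__main__":
    cnn, head = ClipCNN(), MultiTaskHead()
    # A batch of 2 skeleton clips: (batch, clip, xyz-channels, frames, joints)
    clips = torch.randn(2, NUM_CLIPS, 3, NUM_FRAMES, NUM_JOINTS)
    feats = torch.stack([cnn(clips[:, i]) for i in range(NUM_CLIPS)], dim=1)
    print(head(feats).shape)                # torch.Size([2, 60])
```
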
Journal ArticleDOI

Learning Clip Representations for Skeleton-Based 3D Action Recognition

TL;DR: Experimental results consistently demonstrate the superiority of the proposed clip representation and feature-learning method for 3D action recognition over existing techniques.
Journal ArticleDOI

SkeletonNet: Mining Deep Part Features for 3-D Action Recognition

TL;DR: This letter presents SkeletonNet, a deep learning framework for skeleton-based 3-D action recognition, which contains two parts: one extracts general features from the input images, while the other generates a discriminative and compact representation for action recognition.
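A small sketch of the part-based idea suggested by the title and summary, under stated assumptions rather than the published SkeletonNet architecture: skeleton joints are grouped into hypothetical body parts, a small encoder extracts a feature per part, and the part features are fused into a compact representation used by a linear classifier.

```python
import torch
import torch.nn as nn

# Hypothetical grouping of 25 joints into 5 body parts (assumption).
PARTS = {
    "torso":     [0, 1, 2, 3, 20],
    "left_arm":  [4, 5, 6, 7, 21, 22],
    "right_arm": [8, 9, 10, 11, 23, 24],
    "left_leg":  [12, 13, 14, 15],
    "right_leg": [16, 17, 18, 19],
}

class PartFeatureNet(nn.Module):
    def __init__(self, frames=64, feat_dim=64, num_classes=60):
        super().__init__()
        # One small encoder per part; input is the flattened (frames x joints x 3) block.
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(frames * len(idx) * 3, 256), nn.ReLU(),
                nn.Linear(256, feat_dim),
            )
            for name, idx in PARTS.items()
        })
        self.classifier = nn.Linear(feat_dim * len(PARTS), num_classes)

    def forward(self, seq):                      # seq: (B, frames, 25, 3)
        feats = []
        for name, idx in PARTS.items():
            part = seq[:, :, idx, :].flatten(1)  # (B, frames * |part| * 3)
            feats.append(self.encoders[name](part))
        compact = torch.cat(feats, dim=1)        # fused compact representation
        return self.classifier(compact)

if __name__ == "__main__":
    model = PartFeatureNet()
    print(model(torch.randn(2, 64, 25, 3)).shape)   # torch.Size([2, 60])
```
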
Proceedings ArticleDOI

Time-Conditioned Action Anticipation in One Shot

TL;DR: Experimental results show that the proposed time-conditioned method is capable of anticipating future actions over both short-term and long-term horizons, and achieves state-of-the-art performance.
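A hedged sketch of the time-conditioned idea described above (all names and sizes are assumptions, not the paper's architecture): the network receives pooled features of the observed video together with the requested anticipation horizon, and predicts the future action in a single forward pass, so one model serves both short-term and long-term queries.

```python
import torch
import torch.nn as nn

class TimeConditionedAnticipator(nn.Module):
    def __init__(self, obs_dim=512, time_dim=32, hidden=256, num_classes=48):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, time_dim), nn.ReLU())
        self.net = nn.Sequential(
            nn.Linear(obs_dim + time_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, obs_feat, horizon_seconds):
        # obs_feat: (B, obs_dim) pooled features of the observed frames
        # horizon_seconds: (B, 1) how far into the future to anticipate
        t = self.time_embed(horizon_seconds)
        return self.net(torch.cat([obs_feat, t], dim=1))

if __name__ == "__main__":
    model = TimeConditionedAnticipator()
    obs = torch.randn(4, 512)
    # The same model answers a short-term and a long-term query in one shot.
    for horizon in (1.0, 30.0):
        t = torch.full((4, 1), horizon)
        print(horizon, model(obs, t).shape)      # (4, 48) logits each
```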