AdaScan: Adaptive Scan Pooling in Deep Convolutional Neural Networks for Human Action Recognition in Videos
Amlan Kar, Nishant Rai, Karan Sikka, Gaurav Sharma
pp. 5699–5708
TLDR
The proposed pooling method is shown to consistently improve on baseline pooling methods, with both RGB and optical flow based convolutional networks, and, in combination with complementary video representations, to yield competitive results.
Abstract
We propose a novel method for temporally pooling frames in a video for the task of human action recognition. The method is motivated by the observation that there are only a small number of frames which, together, contain sufficient information to discriminate an action class present in a video, from the rest. The proposed method learns to pool such discriminative and informative frames, while discarding a majority of the non-informative frames in a single temporal scan of the video. Our algorithm does so by continuously predicting the discriminative importance of each video frame and subsequently pooling them in a deep learning framework. We show the effectiveness of our proposed pooling method on standard benchmarks where it consistently improves on baseline pooling methods, with both RGB and optical flow based Convolutional networks. Further, in combination with complementary video representations, we show results that are competitive with respect to the state-of-the-art results on two challenging and publicly available benchmark datasets.
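The single-scan pooling described in the abstract can be sketched as a running importance-weighted average: at each frame, a predictor scores the frame's discriminative importance given the current pooled feature, and the pooled feature is updated accordingly. The sketch below is a minimal, hypothetical illustration of this idea in NumPy; `importance_fn` stands in for the learned importance predictor, and `toy_importance` is an invented placeholder, not the paper's model.

```python
import numpy as np

def adaptive_scan_pool(frame_features, importance_fn):
    """One temporal scan over frame features: each frame is weighted by a
    predicted importance score and folded into a running weighted average.
    (Hypothetical sketch of the adaptive pooling idea, not the paper's
    exact formulation.)"""
    pooled = np.zeros_like(frame_features[0])
    total_weight = 1e-8  # avoids division by zero before any frame is pooled
    for x in frame_features:
        # discriminative importance of this frame given the current pool
        alpha = importance_fn(pooled, x)
        # fold the frame into the running weighted average
        pooled = (total_weight * pooled + alpha * x) / (total_weight + alpha)
        total_weight += alpha
    return pooled

def toy_importance(pooled, frame):
    """Invented stand-in predictor: frames that differ more from the current
    pool get a higher score, squashed to (0, 1) with a sigmoid."""
    return float(1.0 / (1.0 + np.exp(-np.linalg.norm(frame - pooled))))

# 5 dummy frame features of dimension 4
feats = [np.ones(4) * t for t in range(5)]
video_descriptor = adaptive_scan_pool(feats, toy_importance)
```

Because non-informative frames receive low scores, they contribute little to `video_descriptor`, which is the intuition behind discarding the majority of frames while keeping the discriminative ones.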
Citations
Proceedings ArticleDOI
Flow-Guided Feature Aggregation for Video Object Detection
TL;DR: This work presents flow-guided feature aggregation, an accurate, end-to-end learning framework for video object detection that improves per-frame features by aggregating nearby features along motion paths, thereby improving video recognition accuracy.
Posted Content
Human Action Recognition and Prediction: A Survey.
TL;DR: This survey covers state-of-the-art techniques in action recognition and prediction, including existing models, popular algorithms, technical difficulties, popular action databases, and evaluation protocols, and outlines promising future directions.
Book ChapterDOI
Hidden Two-Stream Convolutional Networks for Action Recognition
TL;DR: In this paper, a hidden two-stream CNN architecture is proposed, which takes raw video frames as input and directly predicts action classes without explicitly computing optical flow, which is 10x faster than its two-stage baseline.
Journal ArticleDOI
A review of Convolutional-Neural-Network-based action recognition
TL;DR: This paper presents a comprehensive review of the CNN-based action recognition methods according to three strategies and provides a guide for future research.
Journal ArticleDOI
Recurrent Spatial-Temporal Attention Network for Action Recognition in Videos
Wenbin Du, Yali Wang, Yu Qiao +2 more
TL;DR: Experimental results show that the proposed RSTAN outperforms other recent RNN-based approaches on UCF101 and HMDB51, and achieves state-of-the-art performance on JHMDB.
References
Proceedings ArticleDOI
A Key Volume Mining Deep Framework for Action Recognition
TL;DR: Proposes a key volume mining deep framework that identifies key volumes and conducts classification simultaneously, together with an effective yet simple "unsupervised key volume proposal" method for high-quality volume sampling.
Proceedings ArticleDOI
Poselet Key-Framing: A Model for Human Activity Recognition
Michalis Raptis, Leonid Sigal +1 more
TL;DR: A new model for recognizing human actions is developed that supports spatio-temporal localization and is insensitive to dropped frames or partial observations; it shows classification performance competitive with the state of the art on the benchmark UT-Interaction dataset.
Proceedings Article
Actions ~ Transformations
TL;DR: A novel representation for actions is proposed by modeling an action as a transformation which changes the state of the environment before the action happens (precondition) to the state after the action (effect).
Journal ArticleDOI
Temporal Localization of Actions with Actoms
TL;DR: This work proposes a model based on a sequence of atomic action units, termed "actoms," that are semantically meaningful and characteristic of the action; the model outperforms the current state of the art in temporal action localization, as well as baselines that localize actions with a sliding-window method.
Book ChapterDOI
Trajectory-Based modeling of human actions with motion reference points
TL;DR: This paper proposes a simple representation, aimed specifically at modeling human actions in videos, that operates on top of visual codewords derived from local patch trajectories and therefore does not require accurate foreground-background separation, typically a necessary step for modeling object relationships.