Open Access Proceedings Article (DOI)

AdaScan: Adaptive Scan Pooling in Deep Convolutional Neural Networks for Human Action Recognition in Videos

TL;DR: The proposed pooling method consistently improves over baseline pooling methods with both RGB and optical-flow-based convolutional networks, and is also effective in combination with complementary video representations.
Abstract: 
We propose a novel method for temporally pooling frames in a video for the task of human action recognition. The method is motivated by the observation that only a small number of frames, together, contain sufficient information to discriminate an action class present in a video from the rest. The proposed method learns to pool such discriminative and informative frames while discarding a majority of the non-informative frames in a single temporal scan of the video. Our algorithm does so by continuously predicting the discriminative importance of each video frame and subsequently pooling the frames in a deep learning framework. We show the effectiveness of the proposed pooling method on standard benchmarks, where it consistently improves on baseline pooling methods with both RGB and optical-flow-based convolutional networks. Further, in combination with complementary video representations, we show results competitive with the state of the art on two challenging, publicly available benchmark datasets.
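To make the mechanism concrete, below is a minimal PyTorch sketch of single-scan adaptive pooling. The importance predictor (a small MLP over the concatenation of the running pooled feature and the current frame feature) and all dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AdaptiveScanPool(nn.Module):
    """Sketch: pool per-frame CNN features in one temporal scan, weighting
    each frame by a predicted discriminative importance. Illustrative only."""

    def __init__(self, feat_dim):
        super().__init__()
        # assumed importance predictor: MLP on [pooled so far, current frame]
        self.score = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, 1),
            nn.Sigmoid(),  # importance in [0, 1]
        )

    def forward(self, frames):
        # frames: (batch, time, feat_dim) per-frame CNN features
        batch, time, dim = frames.shape
        pooled = frames.new_zeros(batch, dim)
        total = frames.new_full((batch, 1), 1e-6)  # running sum of importances
        for t in range(time):
            f_t = frames[:, t]
            alpha = self.score(torch.cat([pooled, f_t], dim=1))
            # weighted running average: frames with alpha near 0 barely
            # change the pooled representation, i.e. they are discarded
            pooled = (total * pooled + alpha * f_t) / (total + alpha)
            total = total + alpha
        return pooled
```

Because low-importance frames leave the running average essentially unchanged, the module discards non-informative frames while still visiting each frame exactly once.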


Citations
Proceedings Article (DOI)

Flow-Guided Feature Aggregation for Video Object Detection

TL;DR: This work presents flow-guided feature aggregation (FGFA), an accurate, end-to-end learning framework for video object detection that improves per-frame features by aggregating nearby features along motion paths, and thus improves video recognition accuracy.
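A rough sketch of the aggregation step, assuming per-frame feature maps and flow fields to the reference frame are given; FGFA also learns an embedding network for the similarity weights, which is omitted here.

```python
import torch
import torch.nn.functional as F

def flow_warp(feat, flow):
    """Backward-warp features (B, C, H, W) by a flow field (B, 2, H, W)."""
    b, c, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).to(feat)       # (2, H, W) pixel grid
    coords = base.unsqueeze(0) + flow                  # displaced sample points
    gx = 2 * coords[:, 0] / (w - 1) - 1                # normalize to [-1, 1]
    gy = 2 * coords[:, 1] / (h - 1) - 1
    return F.grid_sample(feat, torch.stack((gx, gy), dim=-1), align_corners=True)

def aggregate(ref_feat, nearby_feats, flows):
    """Warp nearby features to the reference frame, then average them with
    weights from their cosine similarity to the reference feature."""
    warped = [flow_warp(f, fl) for f, fl in zip(nearby_feats, flows)]
    sims = [F.cosine_similarity(ref_feat, w, dim=1).unsqueeze(1) for w in warped]
    weights = torch.softmax(torch.stack(sims), dim=0)  # adaptive per-position
    return (torch.stack(warped) * weights).sum(dim=0)
```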
Posted Content

Human Action Recognition and Prediction: A Survey

TL;DR: This survey covers the state of the art in action recognition and prediction, including existing models, popular algorithms, technical difficulties, popular action databases, and evaluation protocols, and outlines promising future directions.
Book Chapter (DOI)

Hidden Two-Stream Convolutional Networks for Action Recognition

TL;DR: In this paper, a hidden two-stream CNN architecture is proposed that takes raw video frames as input and directly predicts action classes without explicitly computing optical flow; it is about 10x faster than its two-stage baseline.
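A compact concept sketch with made-up layer sizes: a motion subnetwork turns stacked raw frames into flow-like channels that feed a temporal-stream classifier, so no optical flow is computed externally.

```python
import torch
import torch.nn as nn

class HiddenTwoStream(nn.Module):
    """Sketch of the hidden-two-stream idea; all layer sizes are illustrative."""

    def __init__(self, num_frames=11, num_classes=101):
        super().__init__()
        # MotionNet-style encoder: RGB stack -> 2*(num_frames-1) flow-like maps
        self.motion_net = nn.Sequential(
            nn.Conv2d(3 * num_frames, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2 * (num_frames - 1), 3, padding=1),
        )
        # temporal stream classifies the implicit motion maps
        self.temporal_stream = nn.Sequential(
            nn.Conv2d(2 * (num_frames - 1), 64, 7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, frames):
        # frames: (batch, num_frames, 3, H, W) raw video clip
        b, t, c, h, w = frames.shape
        implicit_flow = self.motion_net(frames.reshape(b, t * c, h, w))
        return self.temporal_stream(implicit_flow)
```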
Journal Article (DOI)

A review of Convolutional-Neural-Network-based action recognition

TL;DR: This paper presents a comprehensive review of CNN-based action recognition methods, organized according to three strategies, and provides a guide for future research.
Journal Article (DOI)

Recurrent Spatial-Temporal Attention Network for Action Recognition in Videos

TL;DR: The experimental results show that the proposed RSTAN outperforms other recent RNN-based approaches on UCF101 and HMDB51, and achieves state-of-the-art results on JHMDB.
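An illustrative sketch of the recurrent spatial attention pattern, with assumed dimensions: at each step the LSTM state scores the spatial locations of the current frame's feature map, and the attended feature drives the next LSTM update.

```python
import torch
import torch.nn as nn

class RecurrentSpatialAttention(nn.Module):
    """Sketch of recurrent spatial-temporal attention; sizes are assumptions."""

    def __init__(self, feat_dim=512, hidden=256, num_classes=51):
        super().__init__()
        self.hidden = hidden
        self.attn = nn.Linear(feat_dim + hidden, 1)  # location scorer
        self.cell = nn.LSTMCell(feat_dim, hidden)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, feat_maps):
        # feat_maps: (batch, time, feat_dim, H, W) per-frame CNN feature maps
        b, t, d, h, w = feat_maps.shape
        hx = feat_maps.new_zeros(b, self.hidden)
        cx = feat_maps.new_zeros(b, self.hidden)
        for step in range(t):
            loc = feat_maps[:, step].flatten(2).transpose(1, 2)  # (b, H*W, d)
            query = hx.unsqueeze(1).expand(-1, loc.size(1), -1)
            scores = torch.softmax(self.attn(torch.cat([loc, query], -1)), dim=1)
            context = (scores * loc).sum(dim=1)  # attention-weighted feature
            hx, cx = self.cell(context, (hx, cx))
        return self.fc(hx)
```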
References
Proceedings Article

Unsupervised Learning of Video Representations using LSTMs

TL;DR: In this paper, an encoder LSTM maps an input video sequence into a fixed-length representation, which is then decoded by one or more decoder Long Short-Term Memory (LSTM) networks to perform different tasks.
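A minimal sketch of the encoder-decoder idea on precomputed per-frame features; the sizes and the unconditioned decoder inputs are assumptions.

```python
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    """Sketch: an encoder LSTM compresses a frame sequence into its final
    state; a decoder LSTM unrolls from that state to reconstruct it."""

    def __init__(self, feat_dim=1024, hidden=512):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, feat_dim)

    def forward(self, frames):
        # frames: (batch, time, feat_dim) per-frame features
        _, state = self.encoder(frames)      # fixed-length code = final state
        b, t, d = frames.shape
        out, _ = self.decoder(frames.new_zeros(b, t, d), state)
        return self.readout(out)             # e.g. train against frames.flip(1)
```

A future-prediction decoder, one of the "different tasks" mentioned above, can share the same encoder state and be trained on the subsequent frames instead.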
Proceedings Article (DOI)

ActivityNet: A large-scale video benchmark for human activity understanding

TL;DR: This paper introduces ActivityNet, a new large-scale video benchmark for human activity understanding that aims at covering a wide range of complex human activities that are of interest to people in their daily living.
Proceedings Article (DOI)

Beyond short snippets: Deep networks for video classification

TL;DR: In this paper, a recurrent neural network using Long Short-Term Memory (LSTM) cells connected to the output of the underlying CNN is proposed to model the video as an ordered sequence of frames.
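A short sketch of this CNN-plus-LSTM pattern; the ResNet-18 backbone is a stand-in assumption (the paper predates it), and all sizes are illustrative.

```python
import torch.nn as nn
from torchvision import models

class CnnLstmClassifier(nn.Module):
    """Sketch: per-frame features from a shared 2D CNN feed an LSTM, whose
    final state classifies the whole clip."""

    def __init__(self, hidden=512, num_classes=101):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop head
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clip):
        # clip: (batch, time, 3, H, W) ordered video frames
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).flatten(1)  # (b*t, 512)
        out, _ = self.lstm(feats.view(b, t, -1))
        return self.fc(out[:, -1])  # prediction from the last time step
```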
Proceedings Article (DOI)

A 3-dimensional sift descriptor and its application to action recognition

TL;DR: This paper uses a bag of words approach to represent videos, and presents a method to discover relationships between spatio-temporal words in order to better describe the video data.
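The 3D SIFT extraction itself is involved, so this sketch shows only the bag-of-words step the summary refers to: cluster training descriptors into a vocabulary, then represent each video as a normalized word histogram. The scikit-learn KMeans and the vocabulary size are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_vocabulary(train_descs, vocab_size=200):
    # train_descs: (N, D) local spatio-temporal descriptors (e.g. 3D SIFT)
    return KMeans(n_clusters=vocab_size, n_init=10).fit(train_descs)

def bag_of_words(vocab, video_descs):
    # video_descs: (M, D) descriptors from one video
    words = vocab.predict(video_descs)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)  # normalized histogram representation
```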
Journal Article (DOI)

Dense Trajectories and Motion Boundary Descriptors for Action Recognition

TL;DR: The MBH descriptor shows to consistently outperform other state-of-the-art descriptors, in particular on real-world videos that contain a significant amount of camera motion.
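A toy illustration of the Motion Boundary Histogram idea on a single flow field: histogram the orientations of the flow's spatial derivatives, which cancels constant (camera) motion. A real MBH descriptor aggregates such histograms over cells along dense trajectories, which is omitted here.

```python
import numpy as np

def mbh_histogram(flow, num_bins=8):
    # flow: (H, W, 2) optical flow; returns a 2*num_bins orientation histogram
    hists = []
    for comp in range(2):                      # horizontal and vertical flow
        gy, gx = np.gradient(flow[..., comp])  # motion boundaries
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
        bins = (ang / (2 * np.pi) * num_bins).astype(int) % num_bins
        h = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=num_bins)
        hists.append(h / max(h.sum(), 1e-9))
    return np.concatenate(hists)
```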