Open Access Proceedings Article

AdaScan: Adaptive Scan Pooling in Deep Convolutional Neural Networks for Human Action Recognition in Videos

TL;DR: The proposed pooling method consistently improves on baseline pooling methods with both RGB- and optical-flow-based convolutional networks, and, in combination with complementary video representations, achieves competitive results.
Abstract: 
We propose a novel method for temporally pooling frames in a video for the task of human action recognition. The method is motivated by the observation that only a small number of frames, together, contain sufficient information to discriminate an action class present in a video from the rest. The proposed method learns to pool such discriminative and informative frames, while discarding a majority of the non-informative frames, in a single temporal scan of the video. Our algorithm does so by continuously predicting the discriminative importance of each video frame and subsequently pooling them in a deep learning framework. We show the effectiveness of our proposed pooling method on standard benchmarks, where it consistently improves on baseline pooling methods with both RGB and optical flow based Convolutional networks. Further, in combination with complementary video representations, we show results that are competitive with the state-of-the-art on two challenging and publicly available benchmark datasets.
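The single-scan idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's exact formulation: `importance_fn` here is a hypothetical stand-in for the learned sub-network that predicts each frame's discriminative importance, and the pooled vector is maintained as a running importance-weighted mean of the frames seen so far.

```python
import numpy as np

def adaptive_scan_pool(frame_features, importance_fn):
    """Single temporal scan: pool frames weighted by predicted importance.

    frame_features: (T, D) array of per-frame CNN features.
    importance_fn: callable (pooled, frame) -> score in [0, 1]; in the
    paper this role is played by a learned importance-prediction network.
    """
    pooled = np.zeros_like(frame_features[0])
    weight_sum = 0.0
    scores = []
    for f in frame_features:
        alpha = importance_fn(pooled, f)  # predicted discriminative importance
        scores.append(alpha)
        weight_sum += alpha
        if weight_sum > 0:
            # incremental update of the importance-weighted mean:
            # frames with alpha ~ 0 leave the pooled feature unchanged
            pooled += (alpha / weight_sum) * (f - pooled)
    return pooled, scores

# toy check with a hand-crafted scorer that discards all-zero frames
feats = np.array([[1.0, 0.0], [0.0, 0.0], [3.0, 0.0]])
score = lambda pooled, f: 1.0 if np.linalg.norm(f) > 0 else 0.0
pooled, scores = adaptive_scan_pool(feats, score)
# pooled is the mean of the two non-zero frames: [2.0, 0.0]
```

Because the update is incremental, the scan needs only one pass over the video and constant memory in the number of frames, which is what lets non-informative frames be discarded on the fly.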


Citations
Proceedings Article

Flow-Guided Feature Aggregation for Video Object Detection

TL;DR: This work presents flow-guided feature aggregation, an accurate, end-to-end learning framework for video object detection that improves per-frame features by aggregating nearby features along motion paths, thereby improving video recognition accuracy.
Posted Content

Human Action Recognition and Prediction: A Survey.

TL;DR: This survey covers state-of-the-art techniques in action recognition and prediction, including existing models, popular algorithms, technical difficulties, popular action databases, evaluation protocols, and promising future directions.
Book Chapter

Hidden Two-Stream Convolutional Networks for Action Recognition

TL;DR: In this paper, a hidden two-stream CNN architecture is proposed that takes raw video frames as input and directly predicts action classes without explicitly computing optical flow, making it 10x faster than its two-stage baseline.
Journal Article

A review of Convolutional-Neural-Network-based action recognition

TL;DR: This paper presents a comprehensive review of CNN-based action recognition methods, organized according to three strategies, and provides a guide for future research.
Journal Article

Recurrent Spatial-Temporal Attention Network for Action Recognition in Videos

TL;DR: Experimental results show that the proposed RSTAN outperforms other recent RNN-based approaches on UCF101 and HMDB51, and achieves state-of-the-art results on JHMDB.
References
Proceedings Article

Dynamic Image Networks for Action Recognition

TL;DR: The new approximate rank pooling CNN layer allows existing CNN models to be used directly on video data with fine-tuning, generalizes dynamic images to dynamic feature maps, and demonstrates the power of the new representations on standard action recognition benchmarks, achieving state-of-the-art performance.
Proceedings Article

Action Recognition using Visual Attention

TL;DR: In this article, a soft-attention-based model is proposed for action recognition in videos, using multi-layered RNNs with Long Short-Term Memory (LSTM) units that are deep both spatially and temporally.
Posted Content

Towards Good Practices for Very Deep Two-Stream ConvNets

TL;DR: This report presents very deep two-stream ConvNets for action recognition by adapting recent very deep architectures to the video domain, and extends the Caffe toolbox to a multi-GPU implementation with high computational efficiency and low memory consumption.
Proceedings Article

Learning latent temporal structure for complex event detection

TL;DR: This work utilizes a conditional model, trained in a max-margin framework, that automatically discovers discriminative and interesting segments of video while simultaneously achieving competitive accuracies on difficult detection and recognition tasks.
Book Chapter

Action Recognition with Stacked Fisher Vectors

TL;DR: Experimental results demonstrate the effectiveness of SFV, and the combination of the traditional FV and SFV outperforms state-of-the-art methods on these datasets by a large margin.