Open Access Proceedings ArticleDOI

Spatiotemporal Pyramid Network for Video Action Recognition

TL;DR
This work proposes a novel spatiotemporal pyramid network to fuse the spatial and temporal features in a pyramid structure such that they can reinforce each other and achieves state-of-the-art results on standard video datasets.
Abstract
Two-stream convolutional networks have shown strong performance in video action recognition tasks. The key idea is to learn spatiotemporal features by fusing convolutional networks spatially and temporally. However, it remains unclear how to model the correlations between the spatial and temporal structures at multiple abstraction levels. First, the spatial stream tends to fail if two videos share similar backgrounds. Second, the temporal stream may be fooled if two actions resemble in short snippets, though appear to be distinct in the long term. We propose a novel spatiotemporal pyramid network to fuse the spatial and temporal features in a pyramid structure such that they can reinforce each other. From the architecture perspective, our network constitutes hierarchical fusion strategies which can be trained as a whole using a unified spatiotemporal loss. A series of ablation experiments support the importance of each fusion strategy. From the technical perspective, we introduce the spatiotemporal compact bilinear operator into video analysis tasks. This operator enables efficient training of bilinear fusion operations which can capture full interactions between the spatial and temporal features. Our final network achieves state-of-the-art results on standard video datasets.
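The compact bilinear operator the abstract refers to is commonly realized with Count Sketch projections combined by FFT convolution, which approximates the full outer product of two feature vectors in a much lower dimension. A minimal NumPy sketch under that assumption (function names and dimensions are illustrative, not the authors' implementation):

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Project vector x into d dims via Count Sketch: each input
    coordinate is hashed to one output bucket with a random sign."""
    y = np.zeros(d)
    for i, v in enumerate(x):
        y[h[i]] += s[i] * v
    return y

def compact_bilinear(x, y, d, seed=0):
    """Approximate the flattened outer product x ⊗ y in d dims.
    Convolution of the two sketches (done via FFT) equals the
    Count Sketch of the outer product, giving an O(d log d) fusion."""
    rng = np.random.default_rng(seed)
    hx = rng.integers(0, d, size=x.shape[0])   # hash for x
    sx = rng.choice([-1.0, 1.0], size=x.shape[0])
    hy = rng.integers(0, d, size=y.shape[0])   # hash for y
    sy = rng.choice([-1.0, 1.0], size=y.shape[0])
    fx = np.fft.fft(count_sketch(x, hx, sx, d))
    fy = np.fft.fft(count_sketch(y, hy, sy, d))
    return np.real(np.fft.ifft(fx * fy))

# Fuse hypothetical spatial and temporal feature vectors.
spatial = np.random.default_rng(1).normal(size=64)
temporal = np.random.default_rng(2).normal(size=64)
fused = compact_bilinear(spatial, temporal, d=512)
```

The appeal over explicit bilinear fusion is memory: a full outer product of two 1024-d streams has ~1M entries per location, while the sketched version is a fixed d-dimensional vector.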



Citations
Proceedings ArticleDOI

Weakly Supervised Action Localization by Sparse Temporal Pooling Network

TL;DR: In this article, a weakly supervised temporal action localization algorithm is proposed, which learns from video-level class labels and predicts temporal intervals of human actions with no requirement of temporal localization annotations.
Proceedings ArticleDOI

STM: SpatioTemporal and Motion Encoding for Action Recognition

TL;DR: Wang et al. as discussed by the authors proposed an STM block, which contains a Channel-wise SpatioTemporal Module (CSTM) to represent spatio-temporal features and a Channel-wise Motion Module (CMM) to efficiently encode motion features, and then replaced the original residual blocks in the ResNet architecture with STM blocks to form a simple yet effective STM network with very limited extra computation cost.
Proceedings ArticleDOI

Recognizing Human Actions as the Evolution of Pose Estimation Maps

TL;DR: This work presents a novel method to recognize human actions as the evolution of pose estimation maps, which outperforms most state-of-the-art methods.
Proceedings ArticleDOI

Compressed Video Action Recognition

TL;DR: In this article, the authors proposed to train a deep network directly on the compressed video, which has a higher information density, and found the training to be easier than learning deep image representations.
Proceedings ArticleDOI

Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition

TL;DR: In this article, the optical flow guided feature (OFF) is proposed to extract spatio-temporal information, especially the temporal information between frames, simultaneously, which enables the network to distill temporal information in a fast and robust manner.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art image classification performance, as discussed by the authors.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; gradient-based learning is used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.