Open Access · Posted Content

Convolutional Two-Stream Network Fusion for Video Action Recognition

TLDR
A spatial and a temporal network can be fused at the last convolutional layer without loss of performance but with a substantial saving in parameters, and pooling abstract convolutional features over spatiotemporal neighbourhoods further boosts performance.
Abstract
Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters; (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy; finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.
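
As a rough illustration of findings (i)-(iii), a minimal PyTorch-style sketch of conv-layer fusion follows; the module name, channel count and pooling window are assumptions chosen for illustration, not the authors' released implementation. The two towers' last-layer feature maps are concatenated along the channel axis, combined by a learned 1x1x1 convolution, and then pooled over a spatiotemporal neighbourhood.

```python
import torch
import torch.nn as nn

class ConvFusion(nn.Module):
    """Sketch of fusing spatial and temporal towers at the last conv layer.

    Assumes both towers output feature maps of shape (N, C, T, H, W) after
    stacking per-frame conv features over T time steps; names and sizes are
    illustrative placeholders.
    """
    def __init__(self, channels=512):
        super().__init__()
        # A 1x1x1 convolution learns how to combine corresponding channels
        # from the two towers (concatenated along the channel axis).
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)
        # Pool the fused abstract conv features over a spatiotemporal window.
        self.pool = nn.MaxPool3d(kernel_size=(3, 3, 3), stride=(2, 2, 2))

    def forward(self, spatial_feat, temporal_feat):
        x = torch.cat([spatial_feat, temporal_feat], dim=1)  # (N, 2C, T, H, W)
        x = self.fuse(x)                                      # (N, C, T, H, W)
        return self.pool(x)
```

Fusing at a convolutional layer means only a single set of subsequent layers needs to be kept on top of the fused maps, which is where the parameter saving over softmax-level fusion comes from.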


Citations
Posted Content

Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization

TL;DR: Wang et al. propose an Actor-Context-Actor Relation Network (ACAR-Net), which builds upon a novel high-order relation reasoning operator to model indirect relations for spatio-temporal action localization.
Book Chapter (DOI)

Online Detection of Action Start in Untrimmed, Streaming Videos

TL;DR: Zhang et al. propose three novel methods to address the challenges in training online detection of action start (ODAS) models: generating hard negative samples with a Generative Adversarial Network (GAN) to distinguish ambiguous background, explicitly modeling the temporal consistency between data around the action start and data succeeding it, and an adaptive sampling strategy to handle the scarcity of training data.
Journal Article (DOI)

HGR-Net: a fusion network for hand gesture segmentation and recognition

TL;DR: In this article, a two-stage CNN architecture is proposed for robust recognition of hand gestures, where the first stage performs accurate semantic segmentation to determine hand regions, and the second stage identifies the gesture.
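
As a purely hypothetical sketch of the two-stage idea (segment first, then classify), the wrapper below weights the input image by a predicted hand mask before classification; the class name and both sub-networks are placeholders, not the published HGR-Net layers.

```python
import torch
import torch.nn as nn

class TwoStageGestureNet(nn.Module):
    """Hypothetical two-stage pipeline: a segmentation stage predicts a hand
    mask, and a classification stage operates on the mask-weighted image."""
    def __init__(self, seg_net, cls_net):
        super().__init__()
        self.seg_net = seg_net  # any dense-prediction net: (N,3,H,W) -> (N,1,H,W) logits
        self.cls_net = cls_net  # any image classifier: (N,3,H,W) -> (N,num_gestures)

    def forward(self, image):
        mask = torch.sigmoid(self.seg_net(image))  # soft hand-region mask
        focused = image * mask                     # suppress non-hand pixels
        return self.cls_net(focused)               # gesture class scores
```
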
Proceedings Article (DOI)

Classifying Pedestrian Actions In Advance Using Predicted Video Of Urban Driving Scenes

TL;DR: This work explores predicting urban pedestrian actions by generating future video of the traffic scene, and uses a binary action classifier network to determine a pedestrian's crossing intent from the predicted video.
Proceedings Article (DOI)

One-Shot Action Localization by Learning Sequence Matching Network

TL;DR: This work formulates a new example-based action detection problem in which only a few examples are provided and the goal is to find their occurrences in an untrimmed video sequence, and introduces a novel one-shot action localization method that alleviates the need for large amounts of training samples.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network, consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax, achieves state-of-the-art performance on ImageNet classification.
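
A sketch of that layout in PyTorch, using the commonly cited filter counts (96-256-384-384-256 convolutions, two 4096-unit fully-connected layers, a 1000-way output); spatial dimensions assume a 227x227 input, and this is an illustrative approximation rather than the authors' reference implementation.

```python
import torch.nn as nn

# Five conv layers, interleaved max-pooling, three fully-connected layers.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),  # 1000-way class scores (softmax applied in the loss)
)
```
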
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
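
The "very small convolution filters" design can be sketched as repeated blocks of 3x3 convolutions followed by 2x2 max-pooling; the helper below illustrates a 16-layer-style stem (channel widths 64-128-256-512-512) as an assumption-laden sketch, not the released model.

```python
import torch.nn as nn

def vgg_block(in_ch, out_ch, num_convs):
    """Stack of 3x3 convolutions followed by 2x2 max-pooling, the building
    block behind the 16-19 weight-layer configurations summarized above."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                             kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# A VGG-16-style convolutional stem: 2 + 2 + 3 + 3 + 3 = 13 conv layers,
# to be followed by three fully-connected layers.
features = nn.Sequential(
    vgg_block(3, 64, 2), vgg_block(64, 128, 2), vgg_block(128, 256, 3),
    vgg_block(256, 512, 3), vgg_block(512, 512, 3),
)
```
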
Proceedings Article (DOI)

Going deeper with convolutions

TL;DR: Inception is a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
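
The core building block can be sketched as a module with parallel 1x1, 3x3 and 5x5 convolution branches (plus a pooled branch) whose outputs are concatenated along the channel axis; the channel counts below are illustrative placeholders rather than the published configuration.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Sketch of a multi-branch Inception-style module: parallel 1x1, 3x3 and
    5x5 convolutions (with 1x1 reductions) plus a pooled branch, concatenated
    along channels."""
    def __init__(self, in_ch, c1, c3r, c3, c5r, c5, cp):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, c1, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3r, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c3r, c3, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5r, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c5r, c5, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, cp, 1))

    def forward(self, x):
        # All branches preserve spatial size, so their outputs can be stacked.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)
```
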
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
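
The underlying transform is simple to write down: standardize each feature over the mini-batch, then scale and shift with learned parameters. Below is a minimal training-mode sketch (no running statistics, function name assumed for illustration).

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """Per-feature batch normalization: standardize over the mini-batch,
    then apply the learned scale (gamma) and shift (beta)."""
    mean = x.mean(dim=0, keepdim=True)
    var = x.var(dim=0, unbiased=False, keepdim=True)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma * x_hat + beta
```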