Open Access · Posted Content

Convolutional Two-Stream Network Fusion for Video Action Recognition

TLDR
This paper shows that a spatial and a temporal network can be fused at the last convolutional layer without loss of performance but with a substantial saving in parameters, and that pooling abstract convolutional features over spatiotemporal neighbourhoods boosts performance further.
Abstract
Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters; (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy; finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.
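To make the fusion idea concrete, here is a minimal sketch (PyTorch; the channel count, map size, and the stack-then-1x1-conv choice are illustrative assumptions, not the paper's exact configuration) of fusing the two streams at the last convolutional layer:

import torch
import torch.nn as nn

class ConvFusion(nn.Module):
    """Fuse spatial- and temporal-stream feature maps at a conv layer:
    stack the two maps along the channel axis, then let a learned 1x1
    convolution model correspondences between appearance and motion
    channels."""
    def __init__(self, channels=512):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, spatial_feat, temporal_feat):
        # both inputs: (N, C, H, W) maps from the two network towers
        stacked = torch.cat([spatial_feat, temporal_feat], dim=1)
        return self.fuse(stacked)

# Hypothetical usage with VGG-sized conv5 maps:
x_s = torch.randn(2, 512, 14, 14)   # appearance stream
x_t = torch.randn(2, 512, 14, 14)   # motion stream
fused = ConvFusion()(x_s, x_t)      # -> (2, 512, 14, 14)

Fusing here rather than at the softmax layer means only one set of fully-connected layers needs to be kept, which is where the parameter saving comes from.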


Citations
Proceedings ArticleDOI

Dense Multimodal Fusion for Hierarchically Joint Representation

TL;DR: This paper proposes to densely integrate the representations by greedily stacking multiple shared layers between the modality-specific networks, a scheme named Dense Multimodal Fusion (DMF).
Proceedings ArticleDOI

Channel Attention Networks

TL;DR: This paper proposes Channel Attention Networks (CAN), a deep learning model that applies soft attention to individual channels; it outperforms previous models and is significantly more robust to noise in individual bands.
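The CAN design itself isn't detailed in this summary; as a rough illustration of soft attention over individual channels (a squeeze-and-excitation-style gate, which is an assumption here, not necessarily CAN's formulation):

import torch.nn as nn

class ChannelAttention(nn.Module):
    """Soft channel attention: a small gating network outputs a weight
    in (0, 1) per channel, which rescales the feature map so noisy
    bands can be down-weighted."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # per-channel summary
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # soft weights
        )

    def forward(self, x):
        return x * self.gate(x)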
Journal ArticleDOI

Fine-grained action recognition using dynamic kernels

TL;DR: In this paper, an action-independent Gaussian mixture model (AIGMM) is trained on features extracted from all fine-grained actions to analyze spatio-temporal information and preserve the local similarities among fine-grained actions.
Journal ArticleDOI

Deep Hashing for Secure Multimodal Biometrics

TL;DR: This paper presents a deep learning framework for feature-level fusion that generates a secure multimodal template from each user's face and iris biometrics, providing cancelability and unlinkability of the templates along with improved privacy of the biometric data.
Journal ArticleDOI

A Deeper Look at Image Salient Object Detection: Bi-stream Network with a Small Training Dataset

TL;DR: This article provides a feasible way to construct a novel small-scale training set of only 4K images, and proposes a novel bi-stream network built on two different feature backbones whose sub-branches fuse complementary features.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: This paper achieves state-of-the-art ImageNet classification with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
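Assuming a recent torchvision, the architecture described above ships with the library and can be sanity-checked in a few lines:

import torch
from torchvision.models import alexnet

# Five conv layers (some followed by max-pooling) and three
# fully-connected layers ending in a 1000-way classifier.
model = alexnet(weights=None)                 # random init
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)                           # torch.Size([1, 1000])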
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3x3) convolution filters, and shows that a significant improvement over the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
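The key ingredient is stacking small 3x3 convolutions in place of larger filters; a minimal sketch of one VGG-style stage (channel counts are illustrative):

import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    """n_convs stacked 3x3 convolutions (stride 1, padding 1) followed
    by 2x2 max-pooling; stacked small filters match the receptive field
    of one larger filter with fewer parameters and more non-linearities."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

block = vgg_block(64, 128, 2)   # e.g. the second stage of a 16-layer net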
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: This paper proposes Inception, a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
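Its basic building block is the Inception module, which runs 1x1, 3x3, and 5x5 convolutions plus pooling in parallel and concatenates the results; the branch widths below are illustrative, not GoogLeNet's exact numbers throughout:

import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel conv branches with 1x1 bottlenecks to keep computation
    manageable; outputs are concatenated along the channel axis."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 64, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 96, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(96, 128, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 16, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(16, 32, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 32, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)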
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
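In practice batch normalization is a standard layer placed between a convolution and its non-linearity, e.g.:

import torch.nn as nn

# Activations are normalized per channel over the mini-batch, then
# rescaled by a learned gain and bias, stabilizing the layer's input
# distribution during training.
block = nn.Sequential(
    nn.Conv2d(64, 128, 3, padding=1, bias=False),  # BN makes the conv bias redundant
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
)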