Open Access · Posted Content
Convolutional Two-Stream Network Fusion for Video Action Recognition
TL;DR: In this paper, the authors show that a spatial and temporal network can be fused at the last convolution layer without loss of performance but with a substantial saving in parameters, and that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance.
Abstract:
Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters; (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy; finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.
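Finding (i) can be sketched in miniature: concatenate the two streams' feature maps along the channel axis and mix them with a 1x1 convolution. The sketch below (NumPy, with illustrative names and shapes; it is not the authors' implementation) writes the 1x1 convolution as a per-pixel linear map over channels:

```python
import numpy as np

def conv_fusion(x_spatial, x_temporal, weights):
    """Fuse two feature maps by channel concatenation followed by a
    1x1 convolution, one of the fusion variants studied in the paper.
    x_spatial, x_temporal: arrays of shape (C, H, W)
    weights: filter bank of shape (C_out, 2*C) for the 1x1 convolution."""
    stacked = np.concatenate([x_spatial, x_temporal], axis=0)  # (2C, H, W)
    # A 1x1 convolution is a linear map over channels at every pixel.
    return np.einsum('oc,chw->ohw', weights, stacked)

# Tiny example: two 2-channel maps fused into a 2-channel output.
xs = np.ones((2, 4, 4))
xt = np.ones((2, 4, 4))
w = np.eye(2, 4)  # hypothetical filter bank, shape (2, 4)
fused = conv_fusion(xs, xt, w)
print(fused.shape)  # (2, 4, 4)
```

Fusing this way halves the number of towers that must be carried forward, which is where the parameter saving in finding (i) comes from.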
Citations
Journal Article
Failure Prognosis of Complex Equipment With Multistream Deep Recurrent Neural Network
TL;DR: A deep learning-based approach is proposed to predict failures of complex equipment by building a multistream deep recurrent neural network (MS-DRNN) comprising recurrent, fusion, fully connected, and linear layers.
Posted Content
VIENA2: A Driving Anticipation Dataset
Mohammad Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Basura Fernando, Lars Petersson, Lars Andersson
TL;DR: This paper introduces a new, large-scale dataset, called VIENA2, covering 5 generic driving scenarios, with a total of 25 distinct action classes, and benchmark state-of-the-art action anticipation techniques, including a new multi-modal LSTM architecture with an effective loss function for action anticipation in driving scenarios.
Journal Article
A component-based video content representation for action recognition
TL;DR: Experimental results demonstrate that the proposed Component-based Multi-stream CNN model (CM-CNN), trained in a weakly-supervised learning (WSL) setting, outperforms the state of the art in action recognition, even fully-supervised approaches.
Proceedings Article
Two-Stream Gated Fusion ConvNets for Action Recognition
Jiagang Zhu, Wei Zou, Zheng Zhu +2 more
TL;DR: An end-to-end trainable gated fusion method, namely gating ConvNet, is proposed in this paper based on the MoE (Mixture of Experts) theory to enhance the adaptability of two-stream ConvNets and competitive performance is achieved on the video action dataset UCF101.
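The gating idea in this entry can be sketched as a minimal mixture-of-experts weighting of the two stream outputs: a gate produces weights for the streams, and the fused prediction is their weighted sum. Function and variable names below are illustrative, not the paper's code:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    e = np.exp(z - z.max())
    return e / e.sum()

def gated_fusion(spatial_scores, temporal_scores, gate_logits):
    """Mixture-of-experts style fusion: a (learned) gate turns its
    logits into weights for the two streams, and the fused prediction
    is the weighted sum of the per-stream class scores."""
    g = softmax(gate_logits)  # g[0] weighs spatial, g[1] temporal
    return g[0] * spatial_scores + g[1] * temporal_scores

# With an indifferent gate (equal logits), fusion is a plain average.
fused = gated_fusion(np.array([1.0, 0.0]),
                     np.array([0.0, 1.0]),
                     gate_logits=np.array([0.0, 0.0]))
print(fused)  # [0.5 0.5]
```

In the full model the gate logits would themselves be predicted from the input, letting the network adapt the stream weighting per example rather than using a fixed average.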
Proceedings Article
Unsupervised Human Action Detection by Action Matching
TL;DR: This article proposes the task of unsupervised action detection by action matching, in which a pair of video segments are matched if they share the same human action; no supervision is used to discover such segments.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: State-of-the-art ImageNet classification is achieved with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting. Using an architecture with very small convolution filters, it shows that a significant improvement on prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings Article
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
TL;DR: Inception is a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
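The transform behind this result is simple to state: normalize each activation over the mini-batch, then rescale and shift with learned parameters gamma and beta. A minimal NumPy sketch (feature-wise over a 2D batch; a simplification of the paper's method, which also tracks running statistics for inference):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch dimension, then apply a
    learned scale (gamma) and shift (beta), as in Ioffe & Szegedy.
    x: array of shape (batch, features)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
y = batch_norm(x, gamma=1.0, beta=0.0)
print(y)  # each column now has mean 0 and roughly unit variance
```

Keeping activations in this normalized range is what reduces internal covariate shift and permits the much higher learning rates behind the 14x speedup quoted above.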