UntrimmedNets for Weakly Supervised Action Recognition and Detection
Limin Wang, Yuanjun Xiong, Dahua Lin, Luc Van Gool
- pp. 6402-6411
TLDR
This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without the requirement of temporal annotations of action instances.
Abstract:
Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without the requirement of temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of strongly supervised approaches on these two datasets.
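The coupling of the two modules described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the linear maps `w_cls` and `w_sel` stand in for the paper's two-stream CNN classification and selection modules, and the soft attention shown here corresponds to the paper's "soft selection" variant, where the video-level prediction is an attention-weighted average of clip-level class scores trained only with video-level labels.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def untrimmednet_forward(clip_features, w_cls, w_sel):
    """Illustrative forward pass over N clips from one untrimmed video.

    clip_features: (N, D) features of N clips sampled from the video.
    w_cls: (D, C) weights of a toy classification module (clip -> class scores).
    w_sel: (D,)  weights of a toy selection module (clip -> attention score).
    Returns video-level class probabilities (C,) and per-clip attention (N,).
    """
    clip_scores = clip_features @ w_cls          # (N, C) raw class scores per clip
    attention = softmax(clip_features @ w_sel)   # (N,) soft selection over clips
    # Video-level prediction: attention-weighted average of clip scores, then
    # softmax. A cross-entropy loss on the video label trains both modules
    # end-to-end, with no temporal annotation of action instances.
    video_probs = softmax(attention @ clip_scores)
    return video_probs, attention
```

At detection time, the same per-clip quantities give a localization signal: clips whose attention weight and class score are both high indicate where the action instance occurs in the untrimmed video.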
Citations
Posted Content
Exploring Relations in Untrimmed Videos for Self-Supervised Learning.
TL;DR: Experimental results show that ERUV is able to learn richer representations with untrimmed videos, and it outperforms state-of-the-art self-supervised methods with significant margins.
Proceedings ArticleDOI
Localizing Visual Sounds the Easy Way
Shentong Mo, Pedro Morgado
TL;DR: EZ-VSL, a simple yet effective approach to visual sound localization, avoids constructing positive and/or negative regions during training and achieves state-of-the-art performance on two popular benchmarks, Flickr SoundNet and VGG-Sound Source.
Posted Content
Anchor-Constrained Viterbi for Set-Supervised Action Segmentation
Jun Li, Sinisa Todorovic
TL;DR: This paper specifies an HMM that accounts for co-occurrences of action classes and their temporal lengths, explicitly trains the HMM with a Viterbi-based loss, and introduces a new regularization of feature affinities between training videos that share the same action classes.
Proceedings ArticleDOI
RCL: Recurrent Continuous Localization for Temporal Action Detection
TL;DR: Recurrent Continuous Localization (RCL) is introduced, which learns a fully differentiable continuous anchoring representation, allowing it to be seamlessly integrated into existing detectors such as BMN and G-TAD.
Posted Content
A flexible model for training action localization with varying levels of supervision
TL;DR: This work proposes a unifying framework, based on discriminative clustering, that can handle and combine varying types of less demanding weak supervision, integrating the different forms of supervision as constraints on the optimization.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network, consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art image classification performance.
Proceedings ArticleDOI
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal ArticleDOI
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: Graph transformer networks (GTNs) are proposed for handwritten character recognition, showing that multilayer networks trained with gradient-based learning can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Journal ArticleDOI
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark for object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually since 2010, attracting participation from more than fifty institutions.
Related Papers (5)
Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset
Joao Carreira, Andrew Zisserman