UntrimmedNets for Weakly Supervised Action Recognition and Detection
Limin Wang, Yuanjun Xiong, Dahua Lin, Luc Van Gool
- pp. 6402-6411
TLDR
This paper presents a new weakly supervised architecture, called UntrimmedNet, which can learn action recognition models directly from untrimmed videos without requiring temporal annotations of action instances.
Abstract
Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. This paper presents a new weakly supervised architecture, called UntrimmedNet, which can learn action recognition models directly from untrimmed videos without requiring temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of strongly supervised approaches on these two datasets.
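As a rough illustration of the two-module design described in the abstract, the sketch below combines per-clip classification scores with a soft selection over clips to produce a video-level prediction that can be trained from video-level labels alone. All shapes are hypothetical, and plain linear maps stand in for the paper's actual feed-forward classification and selection networks:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical setup: T clip-level feature vectors of dimension D, C action classes.
rng = np.random.default_rng(0)
T, D, C = 8, 16, 5
feats = rng.normal(size=(T, D))          # clip features from some backbone network
W_cls = rng.normal(size=(D, C))          # classification module (linear stand-in)
w_sel = rng.normal(size=(D, 1))          # selection module: one relevance score per clip

clip_scores = feats @ W_cls              # (T, C) per-clip class scores
attn = softmax(feats @ w_sel, axis=0)    # (T, 1) soft selection weights over clips
# Video-level prediction: selection-weighted average of per-clip class probabilities.
video_score = (attn * softmax(clip_scores, axis=1)).sum(axis=0)  # (C,)
```

Because the attention weights sum to one over clips and each clip's class probabilities sum to one, `video_score` is itself a valid distribution over classes, so a standard classification loss on video-level labels can drive both modules end to end; the learned `attn` is what supports temporal reasoning for detection.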
Citations
Book Chapter
Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing
TL;DR: A hybrid attention network is proposed to explore unimodal and cross-modal temporal contexts simultaneously, and an attentive MMIL pooling method is developed to adaptively extract useful audio and visual content from different temporal extents and modalities.
Book Chapter
Weakly-Supervised Action Localization with Expectation-Maximization Multi-Instance Learning
TL;DR: The key-instance assignment is modeled as a hidden variable within an Expectation-Maximization (EM) framework, achieving state-of-the-art performance on two standard benchmarks.
Posted Content
Multi-shot Temporal Event Localization: a Benchmark.
TL;DR: A large-scale dataset called MUlti-shot EventS (MUSES) is proposed for multi-shot temporal event localization; it consists of 31,477 event instances totaling 716 video hours.
Posted Content
SF-Net: Single-Frame Supervision for Temporal Action Localization
TL;DR: SF-Net significantly improves upon state-of-the-art weakly-supervised methods in both segment localization and single-frame localization; notably, it achieves results comparable to its fully-supervised counterpart, which requires far more resource-intensive annotations.
Proceedings Article
WSLLN: Weakly Supervised Natural Language Localization Networks.
TL;DR: Weakly Supervised Language Localization Networks (WSLLN) are proposed to detect events in long, untrimmed videos given language queries, relieving the annotation burden by training with only video-sentence pairs, without access to the temporal locations of events.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieves state-of-the-art image classification performance.
Proceedings Article
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: A graph transformer network (GTN) is proposed for handwritten character recognition; it can be used to synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
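The mechanism behind the speedup summarized above is simple to sketch. Below is a minimal training-time forward pass of batch normalization in NumPy: each feature is normalized by its batch statistics, then scaled and shifted by the learnable parameters gamma and beta from the paper (array shapes here are illustrative):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch dimension (axis 0),
    # then apply the learnable scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Illustrative batch of 64 examples with 10 features, offset and scaled.
x = np.random.default_rng(1).normal(loc=3.0, scale=2.0, size=(64, 10))
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
```

With gamma = 1 and beta = 0, each output feature has approximately zero mean and unit variance over the batch, which is what reduces the internal covariate shift the paper targets; at inference time, running averages of the batch statistics would replace the per-batch mean and variance.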
Journal Article
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually from 2010 to the present, attracting participation from more than fifty institutions.
Related Papers (5)
Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset
Joao Carreira, Andrew Zisserman