Open Access · Proceedings Article · DOI

UntrimmedNets for Weakly Supervised Action Recognition and Detection

TLDR
This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without the requirement of temporal annotations of action instances.
Abstract
Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without requiring temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of strongly supervised approaches on these two datasets.
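The two modules described in the abstract can be made concrete with a small sketch. The following PyTorch snippet is a minimal illustration, not the authors' implementation: it assumes clip-level features have already been extracted, and the names (UntrimmedNetSketch, feature_dim, num_classes) are invented for this example. The classification module scores each clip, the selection module produces soft attention weights over clips, and their weighted combination yields a video-level prediction that can be trained with only video-level labels.

import torch
import torch.nn as nn

class UntrimmedNetSketch(nn.Module):
    def __init__(self, feature_dim=1024, num_classes=101):
        super().__init__()
        # Classification module: per-clip action scores.
        self.classifier = nn.Linear(feature_dim, num_classes)
        # Selection module: per-clip importance (soft attention) score.
        self.selector = nn.Linear(feature_dim, 1)

    def forward(self, clip_features):
        # clip_features: (num_clips, feature_dim) for one untrimmed video.
        clip_scores = self.classifier(clip_features)                    # (num_clips, num_classes)
        attention = torch.softmax(self.selector(clip_features), dim=0)  # (num_clips, 1)
        # Attention-weighted average over clips gives a video-level score
        # that can be trained with only the video-level (weak) label.
        return (attention * clip_scores).sum(dim=0)                     # (num_classes,)

In this reading, the attention weights learned by the selection module also indicate which clips are likely to contain the action, which is what the detection (WSD) setting exploits.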



Citations
Journal Article · DOI

Weakly supervised action segmentation with effective use of attention and self-attention

TL;DR: In this paper, a hybrid sequence-to-sequence model is proposed that outputs the actions occurring in a longer activity in the chronological order in which they are performed in a given video.
Journal Article · DOI

Weakly Supervised Temporal Action Detection With Temporal Dependency Learning

TL;DR: Zhang et al. propose a two-branch weakly supervised temporal action detection framework that learns long-range temporal dependencies to obtain more accurate detection results while requiring only action class labels.
Journal Article · DOI

Progressive enhancement network with pseudo labels for weakly supervised temporal action localization

TL;DR: Wang et al. propose a weakly supervised framework, the progressive enhancement network (PEN), which takes full advantage of the predictions generated by preceding models to enhance the subsequent models.
Journal Article · DOI

Weakly supervised moment localization with natural language based on semantic reconstruction

TL;DR: In this article, a proposal generation module uses a two-dimensional temporal feature map to model cross-modal video representations and encode the temporal relationships among moment candidates.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieves state-of-the-art performance on ImageNet classification.
Proceedings Article · DOI
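The layer layout this summary describes can be sketched as follows. This is a rough PyTorch approximation for illustration only, not the authors' code: channel and kernel sizes follow the original AlexNet paper, while local response normalization and dropout are omitted for brevity.

import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),   # expects 227x227 RGB input
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                     # 1000-way class scores; softmax applied in the loss
)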

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article · DOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for document recognition; it synthesizes a complex decision surface that can classify high-dimensional patterns such as handwritten characters.
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
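For reference, the per-activation transform that batch normalization applies within a mini-batch $B$, in its standard formulation with learnable scale $\gamma$ and shift $\beta$, is

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta,$$

where $\mu_B$ and $\sigma_B^2$ are the mini-batch mean and variance and $\varepsilon$ is a small constant for numerical stability.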
Journal Article · DOI

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually since 2010, attracting participation from more than fifty institutions.