Open Access · Proceedings ArticleDOI

The “Something Something” Video Database for Learning and Evaluating Visual Common Sense

TLDR
This work describes the ongoing collection of the “something-something” database of video prediction tasks whose solutions require a common-sense understanding of the depicted situation, as well as the challenges of crowd-sourcing this data at scale.
Abstract
Neural networks trained on datasets such as ImageNet have led to major advances in visual object classification. One obstacle that prevents networks from reasoning more deeply about complex scenes and situations, and from integrating visual knowledge with natural language, like humans do, is their lack of common sense knowledge about the physical world. Videos, unlike still images, contain a wealth of detailed information about the physical world. However, most labelled video datasets represent high-level concepts rather than detailed physical aspects about actions and scenes. In this work, we describe our ongoing collection of the “something-something” database of video prediction tasks whose solutions require a common sense understanding of the depicted situation. The database currently contains more than 100,000 videos across 174 classes, which are defined as caption-templates. We also describe the challenges in crowd-sourcing this data at scale.
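
The distinctive part of the design is that the 174 classes are caption templates rather than plain labels: a template such as “Putting [something] onto [something]” only becomes a caption once a crowd worker fills the object slots. The snippet below is a minimal sketch of that idea, assuming an illustrative JSON layout; the field names are hypothetical, not the official annotation schema.

```python
import json

# Illustrative (not official) annotation layout: each record pairs a video id
# with a class-defining caption template and the objects a crowd worker used.
records = json.loads("""
[
  {"id": "74225",
   "template": "Putting [something] onto [something]",
   "placeholders": ["a cup", "a book"]}
]
""")

def instantiate(template, placeholders):
    """Fill the [something] slots of a caption template with concrete objects."""
    caption = template
    for obj in placeholders:
        caption = caption.replace("[something]", obj, 1)
    return caption

for rec in records:
    print(rec["id"], "->", instantiate(rec["template"], rec["placeholders"]))
    # 74225 -> Putting a cup onto a book
```
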


Citations
Posted Content

Only Time Can Tell: Discovering Temporal Data for Temporal Modeling

TL;DR: This paper identifies action classes where temporal information is actually necessary for recognition, calling these "temporal classes", and proposes a methodology based on a simple and effective human annotation experiment; training on such data leads to better generalization to unseen classes, demonstrating the need for more temporal data.
Posted Content

V4D: 4D Convolutional Neural Networks for Video-level Representation Learning

TL;DR: This paper designs a new 4D residual block able to capture inter-clip interactions, which could enhance the representation power of the original clip-level 3D CNNs, and introduces the training and inference methods for the proposed V4D.
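
As a rough illustration of what “4D” means here, the sketch below treats a video as a stack of clips (a 6-D tensor B × U × C × T × H × W) and emulates a convolution along the extra clip axis by summing shifted per-offset 3D convolutions inside a residual branch. It follows the general idea of inter-clip interactions, not V4D's exact block design.

```python
import torch
import torch.nn as nn

class PseudoResidual4DBlock(nn.Module):
    """Hedged sketch of a residual block with inter-clip (4D) interactions.

    Input: (B, U, C, T, H, W), i.e. U short clips sampled from one video.
    A 4D convolution spanning k_u neighbouring clips is emulated by one
    Conv3d per clip offset whose shifted outputs are summed.
    """

    def __init__(self, channels, k_u=3, k_t=3):
        super().__init__()
        self.k_u = k_u
        self.convs = nn.ModuleList([
            nn.Conv3d(channels, channels, kernel_size=(k_t, 1, 1),
                      padding=(k_t // 2, 0, 0), bias=False)
            for _ in range(k_u)
        ])
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        b, u, c, t, h, w = x.shape
        flat = x.reshape(b * u, c, t, h, w)        # share the 3D convs across clips
        out = x.new_zeros(x.shape)
        half = self.k_u // 2
        for i, conv in enumerate(self.convs):      # each conv handles one clip offset
            shift = i - half
            y = conv(flat).reshape(b, u, c, t, h, w)
            if shift > 0:
                out[:, shift:] += y[:, :u - shift]
            elif shift < 0:
                out[:, :u + shift] += y[:, -shift:]
            else:
                out += y
        return self.relu(x + out)                  # residual connection

# Example: 2 videos, 4 clips each, 8 channels, 8 frames, 14x14 feature maps
x = torch.randn(2, 4, 8, 8, 14, 14)
print(PseudoResidual4DBlock(8)(x).shape)           # torch.Size([2, 4, 8, 8, 14, 14])
```
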
Proceedings ArticleDOI

Video Action Recognition Via Neural Architecture Searching

TL;DR: This paper makes the first attempt to let an algorithm automatically design neural networks for video action recognition tasks, using a spatio-temporal network developed in a differentiable search space modeled by a directed acyclic graph.
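
The differentiable search space can be pictured as a DAG whose edges mix candidate operations with learnable weights. The sketch below shows one such DARTS-style mixed edge over a few illustrative spatio-temporal operations; the actual candidate set and search procedure of the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One DAG edge: a softmax-weighted sum of candidate operations.

    The architecture weights `alpha` are learnable, so the discrete choice
    of operation becomes differentiable and can be optimized by gradient
    descent alongside the network weights.
    """

    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                                # skip connection
            nn.Conv3d(channels, channels, (3, 1, 1), padding=(1, 0, 0)),  # temporal conv
            nn.Conv3d(channels, channels, (1, 3, 3), padding=(0, 1, 1)),  # spatial conv
            nn.MaxPool3d(3, stride=1, padding=1),                         # pooling
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))             # architecture weights

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# One spatio-temporal feature map: batch 2, 8 channels, 4 frames, 16x16
x = torch.randn(2, 8, 4, 16, 16)
print(MixedOp(8)(x).shape)  # torch.Size([2, 8, 4, 16, 16])
# After search, each edge keeps only its highest-alpha operation.
```
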
Proceedings ArticleDOI

Dynamic Motion Representation for Human Action Recognition

TL;DR: Experimental results show that training a convolutional neural network with the dynamic motion representation outperforms state-of-the-art action recognition models on the HMDB, JHMDB, UCF-101, and AVA datasets.
Proceedings ArticleDOI

Self-supervised Video Representation Learning Using Inter-intra Contrastive Framework

TL;DR: In this article, a self-supervised inter-intra contrastive framework is proposed to learn feature representations from videos, and the learned features are evaluated on video retrieval and video recognition tasks.
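
The objective behind such contrastive frameworks is typically an InfoNCE loss that pulls together two views of the same video ("intra" positives) and pushes apart other videos ("inter" negatives). The sketch below shows that generic loss, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE loss over video embeddings.

    anchor, positive: (B, D) embeddings of two views of the same video
    (an "intra" positive, e.g. a differently augmented clip).
    negatives: (B, N, D) embeddings of other videos ("inter" negatives).
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_logit = (anchor * positive).sum(-1, keepdim=True) / temperature       # (B, 1)
    neg_logits = torch.einsum("bd,bnd->bn", anchor, negatives) / temperature  # (B, N)
    logits = torch.cat([pos_logit, neg_logits], dim=1)
    labels = torch.zeros(logits.size(0), dtype=torch.long)                    # positive sits at index 0
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 16, 128))
print(loss.item())
```
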
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting models won 1st place on the ILSVRC 2015 classification task.
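
The core idea is small enough to show directly: each block learns a residual F(x) and outputs F(x) + x, so extra depth only has to learn corrections to the identity. Below is a minimal PyTorch sketch of such a basic block, simplified from the paper's design.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Minimal residual block: two 3x3 conv/BN layers plus an identity shortcut."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # shortcut connection

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```
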
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
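
For reference, the described layout (five convolutional layers, interleaved max-pooling, three fully-connected layers, 1000-way output) can be written down in a few lines. The sketch below uses the well-known AlexNet layer sizes and omits details such as local response normalization.

```python
import torch
import torch.nn as nn

# Sketch of the described architecture: five conv layers, some followed by
# max-pooling, then three fully-connected layers ending in a 1000-way output.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 1000),  # followed by a softmax over the 1000 ImageNet classes
)

logits = alexnet_like(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```
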
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
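
The design principle is that stacking small 3x3 filters reproduces the receptive field of a larger filter with fewer parameters and more non-linearities, which is what makes 16-19 weight layers trainable. A minimal sketch of such a block and the resulting VGG-16-style convolutional trunk:

```python
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    """A VGG-style block: n_convs stacked 3x3 convolutions + ReLU, then 2x2 pooling."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1), nn.ReLU()]
    layers.append(nn.MaxPool2d(2, stride=2))
    return nn.Sequential(*layers)

# The VGG-16 convolutional trunk is five such blocks of increasing width.
trunk = nn.Sequential(
    vgg_block(3, 64, 2), vgg_block(64, 128, 2), vgg_block(128, 256, 3),
    vgg_block(256, 512, 3), vgg_block(512, 512, 3),
)
print(trunk(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 512, 7, 7])
```
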
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure, which is much larger in scale and diversity and much more accurate than current image datasets.