Open Access · Journal Article (DOI)

ImageNet Large Scale Visual Recognition Challenge

TL;DR
The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually from 2010 to the present and has attracted participation from more than fifty institutions.
Abstract
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
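The classification task in this benchmark is typically scored by top-5 error, i.e. the fraction of images whose true label is not among a model's five highest-scoring predictions. A minimal sketch of that metric in Python; the function name and the NumPy-based toy setup are illustrative, not taken from the paper:

```python
import numpy as np

def top_k_error(scores, labels, k=5):
    """Fraction of images whose true label is absent from the top-k predictions.

    scores: (N, C) array of per-class confidences; labels: (N,) true class ids.
    """
    # Indices of the k highest-scoring classes for each image.
    top_k = np.argsort(scores, axis=1)[:, -k:]
    hits = np.any(top_k == labels[:, None], axis=1)
    return 1.0 - hits.mean()

# Toy example: 3 images, 10 classes.
rng = np.random.default_rng(0)
scores = rng.random((3, 10))
labels = np.array([2, 7, 9])
print(top_k_error(scores, labels, k=5))
```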


Citations
Proceedings Article (DOI)

Robotic grasp detection using deep convolutional neural networks

TL;DR: In this paper, the authors use a deep convolutional neural network to extract features from the scene and then a shallow CNN to predict the grasp configuration for the object of interest.
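A hedged sketch of such a two-network pipeline in PyTorch; the torchvision ResNet backbone, the layer sizes, and the 5-D (x, y, theta, w, h) grasp parameterization are assumptions for illustration, not the authors' exact architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

# A pretrained deep CNN extracts scene features; a small head regresses a
# 5-D grasp configuration (x, y, theta, w, h). Sizes are illustrative only.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()          # keep the 2048-D pooled features

grasp_head = nn.Sequential(          # shallow, task-specific predictor
    nn.Linear(2048, 512),
    nn.ReLU(),
    nn.Linear(512, 5),               # (x, y, theta, w, h)
)

image = torch.randn(1, 3, 224, 224)  # dummy RGB input
with torch.no_grad():
    grasp = grasp_head(backbone(image))
print(grasp.shape)                   # torch.Size([1, 5])
```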
Posted Content

From Generic to Specific Deep Representations for Visual Recognition

TL;DR: This paper thoroughly investigates the transferability of ConvNet representations with respect to several factors, and shows that different visual recognition tasks can be categorically ordered based on their distance from the source task.
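One common way such transferability is probed is to freeze a ConvNet pretrained on the source task and train only a small task-specific head on the target task. The sketch below shows that generic setup in PyTorch; the dataset, class count, and optimizer settings are assumptions, not the paper's exact protocol:

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze the pretrained backbone so the generic representation stays fixed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(512, 20)     # new task-specific head, e.g. 20 classes

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)      # dummy target-task batch
y = torch.randint(0, 20, (8,))
loss = loss_fn(backbone(x), y)       # only the head receives gradients
loss.backward()
optimizer.step()
```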
Posted Content

Online Multi-Target Tracking Using Recurrent Neural Networks

TL;DR: In this paper, an end-to-end learning approach for online multi-target tracking in real-world scenes is proposed, based on recurrent neural networks (RNNs); it is shown to achieve promising results on both synthetic and real data.
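A hypothetical sketch of the core idea of recurrence over frames carrying tracking state forward; the input encoding, the dimensions, and the GRU cell choice below are my assumptions, not the authors' model:

```python
import torch
import torch.nn as nn

class RNNTracker(nn.Module):
    """At each frame, consume encoded detections and predict the next state."""
    def __init__(self, input_dim=16, hidden_dim=64, state_dim=4):
        super().__init__()
        self.rnn = nn.GRUCell(input_dim, hidden_dim)
        self.predict = nn.Linear(hidden_dim, state_dim)  # e.g. (x, y, w, h)

    def forward(self, frame_features, hidden):
        hidden = self.rnn(frame_features, hidden)
        return self.predict(hidden), hidden

tracker = RNNTracker()
hidden = torch.zeros(1, 64)
for _ in range(5):                       # five dummy frames
    feats = torch.randn(1, 16)           # encoded detections for this frame
    state, hidden = tracker(feats, hidden)
print(state.shape)                       # torch.Size([1, 4])
```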
Posted Content

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results

TL;DR: In this article, Mean Teacher, a method that averages model weights instead of label predictions, is proposed to improve test accuracy and enable training with fewer labels than Temporal Ensembling.
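A minimal sketch of the weight-averaging mechanism: the teacher's parameters are maintained as an exponential moving average (EMA) of the student's parameters after each training step. The decay value and helper name below are illustrative:

```python
import copy
import torch

def update_teacher(student, teacher, decay=0.999):
    """EMA update: teacher <- decay * teacher + (1 - decay) * student."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

student = torch.nn.Linear(10, 2)
teacher = copy.deepcopy(student)       # teacher starts as a copy of the student
# ... after each optimizer step on the student:
update_teacher(student, teacher)
```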
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
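A sketch of that layer layout in PyTorch, assuming the canonical 3×227×227 input and omitting details such as dropout and normalization; it illustrates the described structure rather than reproducing the original implementation:

```python
import torch.nn as nn

# Five convolutional layers, some followed by max-pooling.
features = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
)

# Three fully-connected layers ending in 1000 class logits.
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),            # softmax is applied inside the loss
)
```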
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
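A sketch of the underlying recipe: stack 3×3 convolutions, with occasional max-pooling, until the network has 16-19 weight layers. The configuration-list helper below is an illustrative construction corresponding to the 16-layer variant, which the paper follows with three fully-connected layers:

```python
import torch.nn as nn

def make_vgg_features(cfg):
    """Build a stack of 3x3 convolutions; 'M' marks a max-pooling layer."""
    layers, in_channels = [], 3
    for v in cfg:
        if v == "M":
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            layers += [nn.Conv2d(in_channels, v, kernel_size=3, padding=1), nn.ReLU()]
            in_channels = v
    return nn.Sequential(*layers)

# 13 convolutional layers + 3 fully-connected layers = 16 weight layers.
vgg16_cfg = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
             512, 512, 512, "M", 512, 512, 512, "M"]
features = make_vgg_features(vgg16_cfg)
```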
Proceedings Article (DOI)

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure that is much larger in scale and diversity, and much more accurate, than current image datasets.
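To illustrate the WordNet backbone idea, the sketch below walks part of the noun hierarchy with NLTK; the use of NLTK and the "dog" example are choices made for this illustration, though ImageNet does name its categories by WordNet synset offsets:

```python
# Each ImageNet category corresponds to a WordNet synset, and an image attached
# to a synset is also a valid example of all of that synset's hypernyms.
import nltk
nltk.download("wordnet", quiet=True)      # one-time corpus download
from nltk.corpus import wordnet as wn

dog = wn.synset("dog.n.01")
print(dog.offset(), dog.definition())      # ImageNet ids are 'n' + zero-padded offset
for hyponym in dog.hyponyms()[:5]:         # subordinate categories under "dog"
    print("  ", hyponym.name())
```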
Journal Article (DOI)

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
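A brief matching sketch using OpenCV's SIFT implementation with Lowe's ratio test; the OpenCV API, the placeholder image paths, and the 0.75 ratio threshold are illustrative choices for this sketch, not prescribed by the paper:

```python
import cv2

# Detect keypoints and descriptors in two views of the same object or scene,
# then keep only matches that pass the ratio test.
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]   # Lowe's ratio test
print(f"{len(good)} reliable matches")
```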