ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
TLDR
The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually since 2010 and has attracted participation from more than fifty institutions.
Abstract
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been made possible as a result. We discuss the challenges of collecting large-scale ground-truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.
Citations
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the approach won 1st place in the ILSVRC 2015 classification task.
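The residual learning idea summarized above can be illustrated with a minimal NumPy sketch (not the paper's implementation; the tiny fully connected layers and weight names here are purely illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # A plain block must learn the full mapping H(x); a residual block
    # instead learns the residual F(x) = H(x) - x and adds the input
    # back via an identity shortcut: output = relu(F(x) + x).
    out = relu(x @ w1)   # first transformation
    out = out @ w2       # second transformation (the residual F(x))
    return relu(out + x) # identity shortcut eases optimization of deep stacks

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (1, 8): the shortcut requires matching input/output shapes
```

The key design choice is that when the optimal mapping is close to the identity, the block only has to drive F(x) toward zero, which is easier than fitting the identity with a stack of nonlinear layers.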
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigate the effect of convolutional network depth on accuracy in the large-scale image recognition setting, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Book
Deep Learning
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
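The normalization step behind the result above can be sketched in a few lines of NumPy (a training-mode sketch only; the real technique also tracks running statistics for inference, and the names here are illustrative):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch to zero mean and unit
    # variance, then rescale and shift with learnable gamma and beta.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
# Batch of 32 examples, 4 features, deliberately shifted and scaled.
x = rng.standard_normal((32, 4)) * 5.0 + 3.0
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0))  # each feature's batch mean is ~0
print(y.std(axis=0))   # each feature's batch std is ~1
```

Because each layer then sees inputs with a stable distribution regardless of how earlier layers' weights move, much higher learning rates become usable, which is what enables the reported reduction in training steps.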
Proceedings ArticleDOI
Densely Connected Convolutional Networks
TL;DR: DenseNet connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
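The dense connectivity pattern described above can be sketched with NumPy (a toy fully connected analogue of the convolutional blocks; the layer sizes and the `growth rate` value are illustrative assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def dense_block(x, weights):
    # Each layer receives the concatenation of ALL earlier feature
    # outputs as input, and its own output is appended to the list
    # so that every later layer can reuse it.
    features = [x]
    for w in weights:
        inp = np.concatenate(features, axis=1)
        features.append(relu(inp @ w))
    return np.concatenate(features, axis=1)

rng = np.random.default_rng(0)
k = 4                                # growth rate: channels added per layer
x = rng.standard_normal((1, 8))      # 8 input channels
# Layer i sees 8 + i*k input channels and emits k new channels.
weights = [rng.standard_normal((8 + i * k, k)) * 0.1 for i in range(3)]
out = dense_block(x, weights)
print(out.shape)  # (1, 20): 8 input channels + 3 layers * growth rate 4
```

Because each layer only has to produce a small number of new channels and reuses everything already computed, the parameter count stays low even as the effective depth grows.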
References
Posted Content
Sparse arrays of signatures for online character recognition
TL;DR: A sparse CNN implementation is developed that makes it practical to train CNNs with many layers of max-pooling; extending the MNIST dataset by translations yields a test error of 0.31%.
Proceedings ArticleDOI
Detecting Avocados to Zucchinis: What Have We Done, and Where Are We Going?
TL;DR: A large-scale study on the ImageNet Large Scale Visual Recognition Challenge data, inspired by the recent work of Hoiem et al., shows that this dataset presents many of the same detection challenges as the PASCAL VOC.
Proceedings Article
Some Improvements on Deep Convolutional Neural Network Based Image Classification
TL;DR: In the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2013, the authors achieved a top-5 classification error rate of 13.55% using no external data.
Proceedings ArticleDOI
Fisher and VLAD with FLAIR
TL;DR: This work starts from state-of-the-art, fast selective search, and achieves a Fast Local Area Independent Representation with FLAIR, which allows for very fast evaluation of any box encoding and still enables spatial pooling.
Book ChapterDOI
CloudCV: Large-Scale Distributed Computer Vision as a Cloud Service
Harsh Agrawal, Clint Solomon Mathialagan, Yash Goyal, Neelima Chavali, Prakriti Banik, Akrit Mohapatra, Ahmed A. A. Osman, Dhruv Batra
TL;DR: CloudCV is a comprehensive system that provides access to state-of-the-art distributed computer vision algorithms as a cloud service through a web interface and APIs, and can be used for big-data applications.