Open Access · Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TLDR
This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3x3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
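The architecture the abstract describes (stacks of 3x3 convolutions separated by 2x2 max-pooling, topped by three fully connected layers) can be written down compactly. Below is a minimal PyTorch sketch of the 16-weight-layer variant ("configuration D" in the paper); channel widths follow the published configuration table, while training details such as initialisation and scale jittering are omitted.

import torch
import torch.nn as nn

# 16 weight layers ("configuration D"): stacks of 3x3 convolutions with
# 64/128/256/512 channels, each stack followed by 2x2 max-pooling ("M").
CFG_D = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
         512, 512, 512, "M", 512, 512, 512, "M"]

def make_features(cfg):
    layers, in_ch = [], 3
    for v in cfg:
        if v == "M":
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            layers += [nn.Conv2d(in_ch, v, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = v
    return nn.Sequential(*layers)

class VGG16Sketch(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = make_features(CFG_D)
        # Three fully connected layers: 4096 -> 4096 -> 1000-way classifier.
        self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):                        # x: (N, 3, 224, 224)
        x = self.features(x)                     # -> (N, 512, 7, 7)
        return self.classifier(torch.flatten(x, 1))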


Citations
Proceedings Article

Learned in translation: contextualized word vectors

TL;DR: Context vectors taken from a deep LSTM encoder of an attentional sequence-to-sequence model trained for machine translation, when added to word vectors to contextualize them, improve performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks.
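The idea summarised above is, in essence, to concatenate conventional word vectors with "context vectors" produced by an LSTM encoder pre-trained for machine translation. The snippet below is only a hedged illustration of that concatenation step; the encoder, embedding sizes and names are hypothetical placeholders rather than the paper's actual interface.

import torch
import torch.nn as nn

# Hypothetical shapes: 300-d static word vectors, bidirectional encoder states.
word_vectors = nn.Embedding(num_embeddings=50_000, embedding_dim=300)
mt_encoder = nn.LSTM(input_size=300, hidden_size=300,
                     bidirectional=True, batch_first=True)  # stand-in for the MT-trained encoder

def contextualize(token_ids):
    """Concatenate static word vectors with context vectors from the encoder."""
    w = word_vectors(token_ids)          # (batch, seq, 300)
    context, _ = mt_encoder(w)           # (batch, seq, 600)
    return torch.cat([w, context], -1)   # (batch, seq, 900), fed to the downstream task model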
Proceedings Article

Dynamic network surgery for efficient DNNs

TL;DR: A novel network compression method called dynamic network surgery, which can remarkably reduce network complexity through on-the-fly connection pruning and is shown to outperform a recent pruning method by considerable margins.
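As a rough, hedged illustration of the masking step behind connection pruning: a binary mask derived from weight magnitudes is applied in the forward pass while the dense weights keep being updated, so pruned connections can be spliced back in if they become important again. The threshold and tensor shapes below are made up for illustration.

import torch

def magnitude_mask(weight: torch.Tensor, threshold: float) -> torch.Tensor:
    """Binary mask keeping only connections whose magnitude exceeds the threshold."""
    return (weight.abs() > threshold).float()

weight = torch.randn(256, 512, requires_grad=True)       # dense weights, still receiving gradient updates
mask = magnitude_mask(weight.detach(), threshold=0.05)   # recomputed periodically during training
effective_weight = weight * mask                          # sparse weights used in the forward pass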
Posted Content

Large-Margin Softmax Loss for Convolutional Neural Networks

TL;DR: In this article, a generalized large-margin softmax (L-Softmax) loss is proposed to encourage intra-class compactness and inter-class separability between learned features.
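In the usual presentation of this loss (a sketch from memory; the exact piecewise construction should be checked against the paper), the softmax logits are rewritten as W_j^T x_i = ||W_j|| ||x_i|| cos(theta_j) and the target-class angle is penalised with an integer margin m:

L_i = -\log \frac{e^{\lVert W_{y_i}\rVert \lVert x_i\rVert\, \psi(\theta_{y_i})}}{e^{\lVert W_{y_i}\rVert \lVert x_i\rVert\, \psi(\theta_{y_i})} + \sum_{j \neq y_i} e^{\lVert W_j\rVert \lVert x_i\rVert \cos(\theta_j)}},
\qquad \psi(\theta) = (-1)^k \cos(m\theta) - 2k, \quad \theta \in \left[\tfrac{k\pi}{m}, \tfrac{(k+1)\pi}{m}\right], \ k \in \{0,\dots,m-1\}.

For m = 1 this reduces to the standard softmax loss; larger m enforces a larger angular margin between classes.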
Posted Content

Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding

TL;DR: Bayesian SegNet as discussed by the authors uses Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels, which improves segmentation performance by 2-3% across a number of state-of-the-art architectures.
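A minimal sketch of the test-time Monte Carlo dropout idea referred to above, assuming a PyTorch segmentation model that contains dropout layers; the model itself and the sample count are placeholders.

import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, image: torch.Tensor, num_samples: int = 20):
    """Average softmax outputs over stochastic forward passes with dropout left on."""
    model.eval()
    # Re-enable only the dropout layers, so e.g. batch-norm statistics stay fixed.
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(image), dim=1)
                             for _ in range(num_samples)])
    mean = probs.mean(dim=0)         # per-pixel class probabilities
    uncertainty = probs.var(dim=0)   # per-pixel estimate of model uncertainty
    return mean, uncertainty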
Proceedings ArticleDOI

End-to-End Learning of Action Detection from Frame Glimpses in Videos

TL;DR: In this article, the authors introduce an end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions by observing frames and deciding both where to look next and when to emit a prediction.
References
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Proceedings ArticleDOI

Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation

TL;DR: RCNN as discussed by the authors combines CNNs with bottom-up region proposals to localize and segment objects, and when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
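The pipeline that summary describes (score bottom-up region proposals with ConvNet features) can be summarised in a few lines. The helper callables below (propose_regions, cnn_features, score_classes) are hypothetical stand-ins, not the released implementation.

from typing import Callable, List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]

def rcnn_detect(image: np.ndarray,
                propose_regions: Callable[[np.ndarray], List[Box]],
                cnn_features: Callable[[np.ndarray], np.ndarray],
                score_classes: Callable[[np.ndarray], np.ndarray]):
    """Score class-agnostic region proposals with ConvNet features (hypothetical helpers)."""
    detections = []
    for (x0, y0, x1, y1) in propose_regions(image):   # e.g. ~2000 selective-search boxes
        crop = image[y0:y1, x0:x1]                    # in the paper, each crop is warped to a fixed size
        feats = cnn_features(crop)                    # features from a pre-trained, fine-tuned ConvNet
        detections.append(((x0, y0, x1, y1), score_classes(feats)))  # per-class scores (linear SVMs)
    return detections                                 # followed by per-class non-maximum suppression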
Posted Content

Fully Convolutional Networks for Semantic Segmentation

TL;DR: It is shown that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation.
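A minimal sketch of the fully convolutional idea: the classifier's fully connected layers are replaced by 1x1 convolutions so the network emits a coarse score map for inputs of any size, which is then upsampled (here with a learnable transposed convolution) to per-pixel predictions. Layer sizes are illustrative, not the paper's exact architecture.

import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Illustrative fully convolutional segmentation net: no fully connected layers."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.backbone = nn.Sequential(    # stand-in feature extractor, overall stride 8
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.score = nn.Conv2d(256, num_classes, kernel_size=1)   # 1x1 conv in place of a Linear layer
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=16, stride=8, padding=4)

    def forward(self, x):                          # accepts arbitrary input sizes
        coarse = self.score(self.backbone(x))      # coarse per-class score map
        return self.upsample(coarse)               # upsampled back towards input resolution

scores = TinyFCN()(torch.randn(1, 3, 224, 224))    # -> (1, 21, 224, 224) pixel-wise class scores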
Journal ArticleDOI

Backpropagation applied to handwritten zip code recognition

TL;DR: This paper demonstrates how constraints from the task domain can be integrated into a backpropagation network through the architecture of the network, successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service.
Journal ArticleDOI

The Pascal Visual Object Classes Challenge: A Retrospective

TL;DR: A review of the Pascal Visual Object Classes challenge from 2008-2012 and an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.