Open Access · Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TLDR
In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
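
The following is a minimal sketch (in PyTorch, which the paper itself does not use) of the core idea behind these configurations: stacking many 3x3 convolutions with ReLU between occasional 2x2 max-poolings so that the network reaches 16-19 weight layers. The stage layout below follows the 16-layer configuration; helper names such as make_vgg16_features are illustrative, not from the paper.

```python
# Illustrative sketch of a VGG-16-style feature extractor: every convolution
# is 3x3 with padding 1, and depth comes from stacking them between poolings.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, num_convs):
    """num_convs stacked 3x3 conv+ReLU layers followed by 2x2 max-pooling."""
    layers = []
    for i in range(num_convs):
        layers.append(nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                                kernel_size=3, padding=1))
        layers.append(nn.ReLU(inplace=True))
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return layers

def make_vgg16_features():
    # (output channels, number of 3x3 convs) per stage, as in the 16-layer config
    cfg = [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]
    layers, in_ch = [], 3
    for out_ch, n in cfg:
        layers += conv_block(in_ch, out_ch, n)
        in_ch = out_ch
    return nn.Sequential(*layers)

features = make_vgg16_features()
x = torch.randn(1, 3, 224, 224)        # a single 224x224 RGB image
print(features(x).shape)               # torch.Size([1, 512, 7, 7])
```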


Citations
Journal Article · DOI

Accelerating Very Deep Convolutional Networks for Classification and Detection

TL;DR: This paper aims to accelerate the test-time computation of convolutional neural networks, especially very deep CNNs, and develops an effective solution to the resulting nonlinear optimization problem without the need for stochastic gradient descent (SGD).
Posted Content

Digging Into Self-Supervised Monocular Depth Estimation

TL;DR: A surprisingly simple model and its associated design choices are shown to lead to superior predictions, together yielding depth maps that are both quantitatively and qualitatively better than those of competing self-supervised methods.
Proceedings Article · DOI

EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis

TL;DR: In this article, a novel application of automated texture synthesis is proposed in combination with a perceptual loss that focuses on creating realistic textures, rather than optimizing for a pixel-accurate reproduction of ground-truth images during training.
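
The texture-oriented objective mentioned above can be illustrated with a minimal perceptual-loss sketch in PyTorch: a fixed, pretrained VGG network supplies feature maps, and the loss compares those features rather than raw pixels. The layer cutoff and the PerceptualLoss class are assumptions for illustration, not the exact EnhanceNet setup.

```python
# Illustrative perceptual loss: compare feature activations of a pretrained
# VGG network instead of raw pixels, so textures matter more than exact values.
import torch
import torch.nn as nn
from torchvision.models import vgg19

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index=16):          # layer cutoff is an assumption
        super().__init__()
        self.features = vgg19(pretrained=True).features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad = False              # the loss network stays fixed

    def forward(self, sr, hr):
        # Mean squared error between feature maps of the super-resolved and
        # ground-truth images, rather than between the pixels themselves.
        # (Real use would also normalize inputs with ImageNet statistics.)
        return nn.functional.mse_loss(self.features(sr), self.features(hr))

loss_fn = PerceptualLoss()
sr = torch.rand(1, 3, 128, 128)   # network output (hypothetical)
hr = torch.rand(1, 3, 128, 128)   # ground-truth high-resolution image
print(loss_fn(sr, hr).item())
```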
Proceedings Article · DOI

Recurrent convolutional neural network for object recognition

TL;DR: With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of the evaluated datasets, demonstrating the advantage of the recurrent structure over a purely feed-forward structure for object recognition.
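
A minimal PyTorch sketch of a recurrent convolutional layer of the kind referred to above: the same recurrent 3x3 convolution is reused over a few time steps on top of a fixed feed-forward input, so effective depth grows without adding parameters. Channel counts and the number of iterations are illustrative assumptions.

```python
# Illustrative recurrent convolutional layer: the recurrent 3x3 convolution is
# reused across time steps, so unfolding it adds depth but no new parameters.
import torch
import torch.nn as nn

class RecurrentConvLayer(nn.Module):
    def __init__(self, channels, steps=3):
        super().__init__()
        self.feed_forward = nn.Conv2d(channels, channels, 3, padding=1)
        self.recurrent = nn.Conv2d(channels, channels, 3, padding=1)
        self.steps = steps

    def forward(self, u):
        ff = self.feed_forward(u)          # feed-forward input, computed once
        x = torch.relu(ff)
        for _ in range(self.steps):        # unfold the recurrence over time
            x = torch.relu(ff + self.recurrent(x))
        return x

layer = RecurrentConvLayer(channels=32)
out = layer(torch.randn(1, 32, 28, 28))
print(out.shape)                           # torch.Size([1, 32, 28, 28])
```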
Journal Article · DOI

A survey on deep learning for big data

TL;DR: Emerging research on deep learning models for big data feature learning is reviewed, the remaining challenges of deep learning for big data are pointed out, and future topics are discussed.
References

Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: As discussed by the authors, state-of-the-art performance is achieved with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
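
A minimal PyTorch sketch of the layout described above: five convolutional layers with interleaved max-pooling, followed by three fully-connected layers ending in 1000 class scores. The kernel sizes and channel counts follow the commonly cited AlexNet configuration and should be read as illustrative.

```python
# Illustrative AlexNet-style network: 5 conv layers (some followed by pooling)
# and 3 fully-connected layers ending in 1000 class scores.
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                 # softmax is applied in the loss
)

logits = alexnet_like(torch.randn(1, 3, 224, 224))
print(logits.shape)                        # torch.Size([1, 1000])
```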
Proceedings Article · DOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.

Proceedings Article · DOI

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
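
A minimal PyTorch sketch of an Inception-style module, the building block of the architecture named above: parallel 1x1, 3x3, and 5x5 convolution branches (with 1x1 reductions) and a pooling branch are concatenated along the channel axis. The branch widths below loosely follow the first GoogLeNet Inception block and are illustrative only.

```python
# Illustrative Inception-style module: parallel conv branches with different
# kernel sizes are concatenated along the channel dimension.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, c1, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_reduce, kernel_size=1), nn.ReLU(),
            nn.Conv2d(c3_reduce, c3, kernel_size=3, padding=1))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_reduce, kernel_size=1), nn.ReLU(),
            nn.Conv2d(c5_reduce, c5, kernel_size=5, padding=2))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1))

    def forward(self, x):
        branches = [self.branch1(x), self.branch3(x),
                    self.branch5(x), self.branch_pool(x)]
        return torch.relu(torch.cat(branches, dim=1))

# Branch widths loosely follow the first GoogLeNet Inception block.
module = InceptionModule(192, 64, 96, 128, 16, 32, 32)
out = module(torch.randn(1, 192, 28, 28))
print(out.shape)                           # torch.Size([1, 256, 28, 28])
```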