Open Access · Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TLDR
In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
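A small arithmetic sketch (the helper names are my own) of the design argument behind the very small 3x3 filters: stacking several 3x3 convolutions yields the same receptive field as one larger filter while using fewer weights, assuming the channel count stays fixed.

```python
def receptive_field(n_layers, k=3):
    """Effective receptive field of n stacked k x k convolutions (stride 1)."""
    return n_layers * (k - 1) + 1

def conv_params(k, channels):
    """Weight count of one k x k convolution with `channels` in/out channels (no bias)."""
    return k * k * channels * channels

channels = 256
stacked = 3 * conv_params(3, channels)               # three 3x3 layers: 27 * C^2
single = conv_params(receptive_field(3), channels)   # one 7x7 layer:    49 * C^2

print(receptive_field(3))   # 7: three stacked 3x3 convs see a 7x7 region
print(stacked < single)     # True: the stack needs fewer weights than one 7x7 conv
```

The stack also interleaves more non-linearities for the same receptive field, which is the other half of the paper's argument for depth.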


Citations
Proceedings Article (DOI)

Towards Optimal Structured CNN Pruning via Generative Adversarial Learning

TL;DR: This paper proposes an effective structured pruning approach that jointly prunes filters and other structures in an end-to-end manner, solving the resulting optimization problem via generative adversarial learning (GAL), which learns a sparse soft mask without requiring labels.
Proceedings Article (DOI)

Large-Scale Learnable Graph Convolutional Networks

TL;DR: In this paper, a learnable graph convolutional layer (LGCL) is proposed to transform graph data into grid-like structures in 1-D format, thereby enabling the use of regular convolution operations on generic graphs.
Book Chapter (DOI)

Convolutional Neural Networks

Nikhil Ketkar
TL;DR: Convolutional Neural Networks (CNNs) are, in essence, neural networks that employ the convolution operation (instead of a fully connected layer) as one of their layers.
Proceedings Article (DOI)

Searching for a Robust Neural Architecture in Four GPU Hours

TL;DR: In this paper, a differentiable sampler over the directed acyclic graph (DAG) is developed to avoid traversing all the possibilities of the sub-graphs, which can be learnable and optimized by the validation loss after training the sampled architecture.
Proceedings Article (DOI)

DeepRoad: GAN-based metamorphic testing and input validation framework for autonomous driving systems

TL;DR: The experimental results demonstrate that DeepRoad can detect thousands of inconsistent behaviors for DNN-based autonomous driving systems, and effectively validate input images to potentially enhance the system robustness as well.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors present a deep convolutional neural network that achieves state-of-the-art performance, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
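The layer arithmetic in such an architecture can be checked with the standard convolution output-size formula; below is a sketch using AlexNet's commonly cited first-layer hyperparameters (227x227 input, 11x11 kernel, stride 4), stated here as assumptions rather than taken from this page.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# First convolutional layer: 227x227 input, 11x11 kernel, stride 4
c1 = conv_out(227, 11, stride=4)   # 55
# Followed by 3x3 max-pooling with stride 2
p1 = conv_out(c1, 3, stride=2)     # 27
print(c1, p1)
```

The same formula applies to every convolution and pooling stage, so the full spatial-size schedule of a network can be derived by chaining calls to it.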
Proceedings Article (DOI)

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Proceedings Article (DOI)

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).