Open Access Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
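A key argument of the abstract is that stacks of small 3x3 filters can replace larger filters: three stacked 3x3 convolutions cover the same 7x7 receptive field while using fewer weights. A minimal pure-Python sketch of that parameter-count comparison (the channel width `C = 64` is an illustrative assumption, not a value from the paper):

```python
def conv_params(kernel, channels):
    # Weights of one conv layer with `channels` input and output
    # channels and a square `kernel x kernel` filter (biases ignored).
    return kernel * kernel * channels * channels

def stacked_receptive_field(kernel, depth):
    # Receptive field of `depth` stacked stride-1 convolutions,
    # each with a `kernel x kernel` filter.
    rf = 1
    for _ in range(depth):
        rf += kernel - 1
    return rf

C = 64  # hypothetical channel width for illustration
three_3x3 = 3 * conv_params(3, C)  # 27 * C^2 weights
one_7x7 = conv_params(7, C)        # 49 * C^2 weights

print(stacked_receptive_field(3, 3))  # 7: same field as a single 7x7 filter
print(three_3x3, one_7x7)             # 110592 200704
```

The 27C² vs. 49C² weight count is why the paper can push depth to 16-19 layers without an explosion in parameters, and the extra non-linearities between the small filters make the decision function more discriminative.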
Citations
Journal Article
Deep Expectation of Real and Apparent Age from a Single Image Without Facial Landmarks
TL;DR: A deep learning solution to age estimation from a single face image without the use of facial landmarks is proposed and the IMDB-WIKI dataset is introduced, the largest public dataset of face images with age and gender labels.
Journal Article
U2-Net: Going deeper with nested U-structure for salient object detection
Xuebin Qin, Zichen Vincent Zhang, Chenyang Huang, Masood Dehghan, Osmar R. Zaïane, Martin Jagersand, et al.
TL;DR: A simple yet powerful deep network architecture, U2-Net, for salient object detection (SOD), a two-level nested U-structure that enables us to train a deep network from scratch without using backbones from image classification tasks.
Posted Content
Quantized Convolutional Neural Networks for Mobile Devices
TL;DR: This paper proposes an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models.
Proceedings Article
Visual7W: Grounded Question Answering in Images
TL;DR: In this article, an LSTM model with spatial attention was proposed to tackle the 7W QA task, which enables a new type of QA with visual answers, in addition to textual answers used in previous work.
Proceedings Article
ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic Segmentation
TL;DR: Proposes annotating images with scribbles and develops an algorithm to train convolutional networks for semantic segmentation supervised by those scribbles.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A large, deep convolutional neural network, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art classification performance on ImageNet.
Proceedings Article
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Proceedings Article
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
TL;DR: Proposes Inception, a deep convolutional neural network architecture that set a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).