Open Access Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: The authors investigate the effect of convolutional network depth on accuracy in the large-scale image recognition setting, and show that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 layers.

Abstract:
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
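A key argument behind the very small 3x3 filters is that a stack of 3x3 convolutions covers the same receptive field as a single larger filter while using fewer weights. A minimal back-of-the-envelope sketch (the channel width `C` is an arbitrary assumption, not a value from the paper's tables):

```python
def conv_weights(kernel, channels):
    # Weight count for one conv layer with `channels` input and output
    # channels and a square `kernel` x `kernel` filter (biases ignored).
    return kernel * kernel * channels * channels

C = 64  # assumed channel width, purely illustrative
# Three stacked 3x3 layers (stride 1) have the same 7x7 receptive field
# as one 7x7 layer, but 27*C^2 weights instead of 49*C^2.
stack_of_three_3x3 = 3 * conv_weights(3, C)
single_7x7 = conv_weights(7, C)
print(stack_of_three_3x3, single_7x7)  # 110592 200704
```

The stacked design also interleaves three non-linearities instead of one, which the paper argues makes the decision function more discriminative.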
Citations
Proceedings Article
Robust Scene Text Recognition with Automatic Rectification
TL;DR: This article proposes RARE, a robust text recognizer with automatic rectification, which consists of a Spatial Transformer Network (STN) and a Sequence Recognition Network (SRN).
Proceedings Article
DEX: Deep EXpectation of Apparent Age from a Single Image
TL;DR: The proposed Deep EXpectation (DEX) method for apparent age estimation first detects the face in the test image, then extracts CNN predictions from an ensemble of 20 networks on the cropped face, significantly outperforming the human reference.
IEEE Transactions on Neural Networks and Learning Systems
TL;DR: Equipped with a global directional matching module and a directional appearance model learning module, DDEAL learns static cues from the labeled first frame and dynamically updates cues from subsequent frames for object segmentation, without online fine-tuning.
Proceedings Article
Exploring Self-Attention for Image Recognition
TL;DR: This work considers two forms of self-attention: pairwise attention, which generalizes standard dot-product attention and is fundamentally a set operator, and patchwise attention, which is strictly more powerful than convolution.
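For reference, the standard dot-product attention that the pairwise form generalizes can be sketched in a few lines of plain Python; this is a minimal illustration of the baseline operator, not the authors' implementation:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot_product_attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors.

    Each query attends to all keys; the output is the softmax-weighted
    average of the values. A sketch of the baseline operator only.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

Note that the output is invariant to a joint permutation of the key/value pairs, which is the "set operator" property the TL;DR refers to.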
Posted Content
Rethinking ImageNet Pre-training.
TL;DR: Experiments show that ImageNet pre-training speeds up convergence early in training, but does not necessarily provide regularization or improve final target-task accuracy. These findings encourage rethinking the current de facto paradigm of 'pre-training and fine-tuning' in computer vision.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The authors' deep convolutional neural network, consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art classification performance.
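The layer sequence described in that summary can be written down as a simple sketch; the layer widths below are the commonly cited published values and are assumptions here, since the summary only states the layer counts:

```python
# Layer sequence sketch: five convolutional layers, some followed by
# max-pooling, then three fully-connected layers ending in a 1000-way
# softmax. Widths are commonly cited values, included for illustration.
layers = [
    ("conv", 96), ("maxpool", None),
    ("conv", 256), ("maxpool", None),
    ("conv", 384),
    ("conv", 384),
    ("conv", 256), ("maxpool", None),
    ("fc", 4096), ("fc", 4096),
    ("fc", 1000),  # final 1000-way softmax over the ImageNet classes
]
n_conv = sum(1 for kind, *_ in layers if kind == "conv")
n_fc = sum(1 for kind, *_ in layers if kind == "fc")
print(n_conv, n_fc)  # 5 3
```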
Proceedings Article
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Proceedings Article
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
TL;DR: Inception is a deep convolutional neural network architecture that set a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).