Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
pp. 1-9
TLDR
Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).Abstract:
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, the quality of which is assessed in the context of classification and detection.
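The abstract's claim of increased depth and width at constant computational budget rests on the Inception module's 1x1 "reduction" convolutions, which shrink input depth before the expensive larger filters are applied. A minimal sketch of the resulting parameter savings follows; the channel sizes are illustrative assumptions, not the exact GoogLeNet configuration.

```python
# Hypothetical illustration of why 1x1 reduction layers keep the
# computational budget down in an Inception-style branch.
# Channel counts below are made-up examples, not GoogLeNet's actual values.

def conv_params(in_ch, out_ch, k):
    """Weight count of a k x k convolution (biases omitted for brevity)."""
    return in_ch * out_ch * k * k

def naive_branch(in_ch, out_ch, k):
    # Apply the k x k convolution directly over the full input depth.
    return conv_params(in_ch, out_ch, k)

def reduced_branch(in_ch, mid_ch, out_ch, k):
    # First squeeze depth with a 1x1 convolution, then convolve at k x k.
    return conv_params(in_ch, mid_ch, 1) + conv_params(mid_ch, out_ch, k)

# Example: 256 input channels, a 5x5 branch producing 64 feature maps.
naive = naive_branch(256, 64, 5)          # 256*64*25 = 409,600 weights
reduced = reduced_branch(256, 32, 64, 5)  # 256*32 + 32*64*25 = 59,392 weights
print(naive, reduced)
```

Under these assumed sizes the reduced branch uses roughly 7x fewer weights, which is the budget headroom the paper spends on extra depth and width.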
Citations
Proceedings ArticleDOI
Mind the Class Weight Bias: Weighted Maximum Mean Discrepancy for Unsupervised Domain Adaptation
TL;DR: In this article, a weighted maximum mean discrepancy (MMD) model is proposed to exploit the class prior probabilities of the source and target domains; the challenge is that class labels in the target domain are unavailable.
Book ChapterDOI
Big Transfer (BiT): General Visual Representation Learning
Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby
TL;DR: Big Transfer (BiT) as discussed by the authors uses pre-trained representations to improve sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision, achieving state-of-the-art performance on 20 datasets.
Posted Content
Latent Embeddings for Zero-shot Classification
TL;DR: A novel latent embedding model is proposed for learning a compatibility function between image and class embeddings in the context of zero-shot classification; it consistently improves the state of the art for various class embeddings on three challenging publicly available datasets for the zero-shot setting.
Posted Content
Synthesizing the preferred inputs for neurons in neural networks via deep generator networks
TL;DR: In this paper, a deep generator network (DGN) is proposed that generates synthetic images which look almost real and reveal the features learned by each neuron in an interpretable way; the method generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned.
Proceedings ArticleDOI
Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions
TL;DR: This work develops methods to locate and distinguish between hands in egocentric video using strong appearance models with Convolutional Neural Networks, and introduces a simple candidate region generation approach that outperforms existing techniques at a fraction of the computational cost.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: As discussed by the authors, a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art classification performance on ImageNet.
Proceedings ArticleDOI
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal ArticleDOI
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; it can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Journal ArticleDOI
Regression Shrinkage and Selection via the Lasso
TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
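The constrained optimization described in this TL;DR is conventionally written as follows (standard lasso notation, not quoted from the entry itself): for responses $y_i$ and predictors $x_{ij}$,

```latex
\hat{\beta}^{\text{lasso}} = \arg\min_{\beta}
  \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j \Big)^2
\quad \text{subject to} \quad
  \sum_{j=1}^{p} |\beta_j| \le t
```

where the bound $t \ge 0$ controls the amount of shrinkage; small $t$ drives some coefficients exactly to zero, which is the selection effect the title refers to.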
Book ChapterDOI
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C. Lawrence Zitnick
TL;DR: A new dataset is presented with the goal of advancing the state of the art in object recognition by placing it in the context of the broader question of scene understanding, gathering images of complex everyday scenes containing common objects in their natural context.