Open Access Proceedings Article

Going deeper with convolutions

TLDR
Inception, the deep convolutional neural network architecture proposed in this paper, achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
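For a concrete picture of the multi-scale idea, the sketch below implements one Inception module: parallel 1x1, 3x3, and 5x5 convolutions plus a pooled branch, concatenated along the channel axis, with 1x1 convolutions reducing dimensionality ahead of the larger filters. This is a minimal PyTorch sketch rather than the authors' implementation; the channel counts follow the paper's first Inception stage but should be read as illustrative.

```python
# Minimal sketch of one Inception module (not the authors' code): parallel
# 1x1 / 3x3 / 5x5 convolutions and a pooling branch, with 1x1 "reduce"
# convolutions keeping the computational budget in check.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU(inplace=True))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # Concatenate all branches along the channel dimension.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

# Example: 192 input channels -> 64 + 128 + 32 + 32 = 256 output channels.
x = torch.randn(1, 192, 28, 28)
print(InceptionModule(192, 64, 96, 128, 16, 32, 32)(x).shape)  # [1, 256, 28, 28]
```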


Citations
Proceedings Article

Action Recognition using Visual Attention

TL;DR: In this article, a soft-attention-based model is proposed for action recognition in videos, built on multi-layered RNNs with Long Short-Term Memory (LSTM) units that are deep both spatially and temporally.
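As a rough illustration of the soft-attention step that summary describes, the snippet below computes a softmax-weighted average over the spatial locations of a CNN feature map; in the paper the location scores come from the LSTM state, which is omitted here, and the shapes are assumptions rather than the authors' exact configuration.

```python
# Soft attention over spatial locations (illustrative NumPy sketch, not the
# authors' code): the model attends to a weighted average of location features.
import numpy as np

def soft_attention(features, scores):
    """features: (L, D) location features; scores: (L,) unnormalized scores."""
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over the L locations
    context = weights @ features          # (D,) attention-weighted feature average
    return context, weights

features = np.random.randn(49, 1024)      # e.g. a 7x7 grid of CNN features
scores = np.random.randn(49)              # in the paper these come from the LSTM state
context, weights = soft_attention(features, scores)
print(context.shape, round(weights.sum(), 3))  # (1024,) 1.0
```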
Posted Content

Big Transfer (BiT): General Visual Representation Learning

TL;DR: By combining a few carefully selected components, and transferring using a simple heuristic, Big Transfer achieves strong performance on over 20 datasets and performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples.
Journal Article

Video Super-Resolution With Convolutional Neural Networks

TL;DR: This paper proposes a CNN that is trained on both the spatial and the temporal dimensions of videos to enhance their spatial resolution, and shows that by pretraining the model on images, a relatively small video database is sufficient to reach and improve upon the current state of the art.
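A hedged sketch of the underlying idea, feeding a 2-D CNN several consecutive frames stacked as input channels so it sees spatial and temporal context at once, is shown below; the layer sizes are illustrative and the paper's exact architecture is not reproduced.

```python
# Illustrative only: stack consecutive low-resolution frames as channels and
# predict a higher-quality center frame with a small CNN.
import torch
import torch.nn as nn

frames = torch.randn(1, 3, 64, 64)   # 3 consecutive grayscale frames, already upscaled

sr_net = nn.Sequential(
    nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),
    nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 1, 5, padding=2),  # estimate for the center frame
)
print(sr_net(frames).shape)  # torch.Size([1, 1, 64, 64])
```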
Proceedings Article

Understanding Data Augmentation for Classification: When to Warp?

TL;DR: In this article, the authors investigate the benefit of augmenting data with synthetically created samples when training a machine learning classifier, and find that, if plausible transforms for the data are known, augmentation in data space provides a greater benefit for improving performance and reducing overfitting.
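As a toy example of augmentation in data space, the snippet below applies small random affine warps to an input image using torchvision; the transform and its parameters are arbitrary illustrations and not necessarily the transforms studied in the paper.

```python
# Data-space augmentation sketch: generate label-preserving variants of an
# image by small random geometric warps (illustrative parameters).
import torch
from torchvision import transforms

augment = transforms.RandomAffine(degrees=10, translate=(0.1, 0.1), scale=(0.9, 1.1))

img = torch.rand(1, 28, 28)      # a single-channel image tensor
augmented = augment(img)         # one synthetically created training sample
print(augmented.shape)           # torch.Size([1, 28, 28])
```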
Proceedings Article

Low-Shot Learning with Imprinted Weights

TL;DR: The process is called weight imprinting because it directly sets the weights for a new category from an appropriately scaled copy of the embedding-layer activations for that training example, which provides immediately good classification performance and an initialization for any further fine-tuning in the future.
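The imprinting step itself can be shown directly: normalize the embedding of the new example and append it as the weight vector for the new class in a cosine-similarity classifier. The NumPy sketch below is a minimal illustration under those assumptions, not the authors' code.

```python
# Weight imprinting sketch: a new class weight is a normalized copy of the
# embedding of one example from that class.
import numpy as np

def imprint(weights, embedding):
    """weights: (C, D) unit-norm class weights; embedding: (D,) new-class example."""
    w_new = embedding / np.linalg.norm(embedding)   # appropriately scaled copy
    return np.vstack([weights, w_new])              # classifier is now (C + 1)-way

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 64))
W /= np.linalg.norm(W, axis=1, keepdims=True)       # existing 5-way cosine classifier
z = rng.normal(size=64)                             # embedding of one novel example
W = imprint(W, z)

scores = W @ (z / np.linalg.norm(z))                # cosine scores for that example
print(scores.argmax())                              # 5: the imprinted class wins
```

Further fine-tuning, as the summary notes, can then start from these imprinted weights rather than from a random initialization.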
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art classification performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
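For reference, a sketch of the architecture this summary describes follows: five convolutional layers, interleaved max-pooling, and three fully-connected layers ending in 1000-way logits. It follows the commonly used single-GPU variant rather than the original two-GPU layout, so treat the exact sizes as illustrative.

```python
# AlexNet-like sketch (single-GPU variant, illustrative sizes): five conv
# layers, some followed by max-pooling, then three fully-connected layers.
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),   # logits for the final 1000-way softmax
)
print(alexnet_like(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```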
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article

Gradient-based learning applied to document recognition

TL;DR: This paper reviews gradient-based learning applied to document recognition and proposes graph transformer networks (GTNs); multilayer networks trained with back-propagation can be used to synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Journal Article

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models, called the lasso, is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
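In symbols, the constrained formulation described in that summary (with responses y_i, predictors x_ij, and coefficients beta_j in the usual notation) is:

```latex
% The lasso: least squares subject to an L1 bound on the coefficients.
\hat{\beta} = \arg\min_{\beta}
  \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\,\beta_j \Big)^{2}
  \quad \text{subject to} \quad \sum_{j=1}^{p} \lvert \beta_j \rvert \le t .
```

Shrinking the bound t drives some coefficients exactly to zero, which is what gives the lasso its combined shrinkage and variable-selection behavior.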
Book Chapter

Microsoft COCO: Common Objects in Context

TL;DR: A new dataset is presented with the goal of advancing the state of the art in object recognition by placing it in the context of the broader question of scene understanding, gathering images of complex everyday scenes containing common objects in their natural context.