Open Access Proceedings Article

Going deeper with convolutions

TL;DR
Inception, the deep convolutional neural network architecture proposed in this paper, achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
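The key building block behind this design is the Inception module: parallel convolution branches with different filter sizes (1x1, 3x3, 5x5, plus a pooling path) process the same input, inexpensive 1x1 convolutions reduce channel depth before the larger filters, and the branch outputs are concatenated along the channel axis. The sketch below is a minimal illustration of such a module written in PyTorch purely for exposition (the paper did not use PyTorch); the class name and the example filter counts are placeholders, not GoogLeNet's exact per-layer configuration.

import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    # Four parallel branches whose outputs are concatenated along the
    # channel dimension. The 1x1 convolutions reduce depth before the
    # costlier 3x3 and 5x5 convolutions, which is how width is added
    # while keeping the computational budget in check.
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Sequential(
            nn.Conv2d(in_ch, c1, kernel_size=1), nn.ReLU(inplace=True))
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_red, c3, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_red, c5, kernel_size=5, padding=2), nn.ReLU(inplace=True))
        self.bpool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1), nn.ReLU(inplace=True))

    def forward(self, x):
        # Every branch preserves the spatial size, so concatenation is valid.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bpool(x)], dim=1)

# Example with placeholder filter counts: 64 + 128 + 32 + 32 = 256 output channels.
x = torch.randn(1, 192, 28, 28)
print(InceptionModule(192, 64, 96, 128, 16, 32, 32)(x).shape)  # torch.Size([1, 256, 28, 28])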



Citations
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; residual networks built with this framework won 1st place in the ILSVRC 2015 classification task.
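As a rough illustration of the residual learning idea summarized above, the sketch below shows a simplified residual block in PyTorch: two stacked 3x3 convolutions learn a residual F(x) that is added back to the input through a skip connection, so the block only has to model the deviation from an identity mapping. This is a minimal sketch rather than the paper's implementation; it omits batch normalization, striding, and the projection shortcuts used in the actual ResNet architectures, and the class name is a placeholder.

import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))  # residual F(x)
        return self.relu(out + x)                   # skip connection: F(x) + x

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])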
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Posted Content

Deep Residual Learning for Image Recognition

TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Book

Deep Learning

TL;DR: Deep learning, as described in this book, is a form of machine learning that enables computers to learn from experience and to understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Journal Article

ImageNet classification with deep convolutional neural networks

TL;DR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes; it employed a recently developed regularization method called "dropout", which proved to be very effective.
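The "dropout" regularization mentioned in this summary randomly zeroes activations during training so that units cannot co-adapt. The snippet below is a minimal illustration using PyTorch's nn.Dropout rather than the paper's original implementation; nn.Dropout applies inverted dropout, so each element is dropped with probability p and the survivors are scaled by 1/(1 - p) during training, while in evaluation mode the layer acts as the identity.

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)  # inverted dropout: survivors scaled by 1 / (1 - p)
x = torch.ones(2, 8)

drop.train()              # training mode: elements are zeroed at random
print(drop(x))            # roughly half the entries are 0.0, the rest 2.0

drop.eval()               # evaluation mode: dropout is a no-op
print(drop(x))            # all ones; no rescaling is needed at test time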