Open Access Proceedings Article

Going deeper with convolutions

TLDR
Inception, as proposed in this paper, is a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
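
To make the multi-scale intuition concrete, the snippet below is a minimal sketch of an Inception-style module in PyTorch: parallel 1x1, 3x3, and 5x5 convolutions plus a pooled branch are concatenated along the channel dimension, with 1x1 convolutions reducing depth in front of the larger filters so the computational budget stays bounded. PyTorch and the channel counts used here are illustrative choices for this sketch, not values quoted from the paper.

```python
import torch
import torch.nn as nn


class InceptionModule(nn.Module):
    """Minimal sketch of an Inception-style block: four parallel branches
    concatenated along the channel axis. The 1x1 convolutions reduce channel
    depth before the 3x3 and 5x5 filters, which is how the design keeps the
    computational budget roughly constant while widening the network."""

    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        # Branch 1: plain 1x1 convolution
        self.b1 = nn.Sequential(
            nn.Conv2d(in_ch, c1, kernel_size=1), nn.ReLU(inplace=True))
        # Branch 2: 1x1 reduction followed by a 3x3 convolution
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_red, c3, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        # Branch 3: 1x1 reduction followed by a 5x5 convolution
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_red, c5, kernel_size=5, padding=2), nn.ReLU(inplace=True))
        # Branch 4: 3x3 max pooling followed by a 1x1 projection
        self.b4 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1), nn.ReLU(inplace=True))

    def forward(self, x):
        # Concatenate the four branches along the channel dimension.
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)


if __name__ == "__main__":
    # Illustrative channel counts; padding preserves the spatial resolution.
    block = InceptionModule(192, 64, 96, 128, 16, 32, 32)
    out = block(torch.randn(1, 192, 28, 28))
    print(out.shape)  # torch.Size([1, 256, 28, 28])
```

Stacking modules of this kind, with occasional pooling stages in between, is what yields the 22-layer GoogLeNet network described in the abstract.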


Citations
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the approach won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
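
In sketch form, the depth increase described above comes from stacking small 3x3 convolutions between pooling steps. The helper below is a hedged, VGG-style illustration in PyTorch; the stage widths and convolution counts are placeholders in the spirit of a 16-weight-layer configuration, not the authors' released models.

```python
import torch.nn as nn


def vgg_style_stage(in_ch, out_ch, num_convs):
    """One stage of a VGG-style feature extractor: `num_convs` stacked 3x3
    convolutions (each followed by ReLU) and a 2x2 max pooling step. Depth
    is increased simply by using more stages and more convolutions per stage."""
    layers = []
    for i in range(num_convs):
        layers += [
            nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        ]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)


# Illustrative 13-convolution feature stack (placeholder widths and counts).
features = nn.Sequential(
    vgg_style_stage(3, 64, 2),
    vgg_style_stage(64, 128, 2),
    vgg_style_stage(128, 256, 3),
    vgg_style_stage(256, 512, 3),
    vgg_style_stage(512, 512, 3),
)
```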
Posted Content

Deep Residual Learning for Image Recognition

TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
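
In sketch form, residual learning computes a residual function F(x) and adds the input back, y = F(x) + x, so the identity mapping is always available and very deep stacks remain easy to optimize. The block below is a minimal PyTorch illustration of that idea; normalization placement, downsampling shortcuts, and bottleneck variants of the published architectures are deliberately simplified or omitted.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Minimal sketch of a residual block: two 3x3 convolutions form the
    residual branch F(x), and the input is added back through the shortcut."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))  # first conv of the residual branch
        out = self.bn2(self.conv2(out))           # second conv, activation deferred
        return self.relu(out + x)                 # shortcut: add the input back


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)
    print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```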
Book

Deep Learning

TL;DR: Deep learning, as mentioned in this paper, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Journal Article

ImageNet classification with deep convolutional neural networks

TL;DR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
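
For reference, dropout can be sketched generically: during training each activation is zeroed independently with probability p and the survivors are rescaled so expected values are unchanged, which discourages co-adaptation of features; at test time the layer is the identity. The NumPy snippet below uses the inverted-dropout formulation as an illustration, not the original paper's exact recipe (which instead rescaled activations at test time).

```python
import numpy as np


def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each activation with probability p during
    training and scale the survivors by 1 / (1 - p); identity at test time."""
    if not training or p == 0.0:
        return x
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)


# Example: roughly half of the activations are dropped on average.
activations = np.ones((2, 4))
print(dropout(activations, p=0.5, rng=np.random.default_rng(0)))
```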
References
Book Chapter

Hand pose estimation and hand shape classification using multi-layered randomized decision forests

TL;DR: Two novel types of multi-layered randomized decision forests (RDFs) are introduced: the Global Expert Network (GEN) and the Local Expert Network (LEN), which achieve significantly better hand pose estimates than a single-layered skeleton estimator and generalize better to previously unseen hand poses.

Class-Specific Hough Forests for Object Detection

TL;DR: It is demonstrated that Hough forests significantly improve the results of Hough-transform-based object detection and achieve state-of-the-art performance for several classes and datasets.
Proceedings Article

Skeletal graphs for efficient structure from motion

TL;DR: This work computes a small skeletal subset of images, reconstructs the skeletal set, and adds the remaining images using pose estimation, resulting in dramatic speedups, while provably approximating the covariance of the full set of parameters.
Proceedings Article

Provable Bounds for Learning Some Deep Representations

TL;DR: This work gives algorithms with provable guarantees that learn a class of deep nets in the generative model view popularized by Hinton and others, based upon a novel idea of observing correlations among features and using these to infer the underlying edge structure via a global graph recovery procedure.
Journal Article

Algebraic functions for recognition

TL;DR: The trilinearity result is shown to be of much practical use in visual recognition by alignment, yielding a direct reprojection method that cuts through the computations of camera transformation, scene structure and epipolar geometry.