Open Access Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Sergey Ioffe, Christian Szegedy
- Vol. 1, pp 448-456
TLDR
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.
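As a rough illustration of the transform the abstract describes, normalizing each layer's inputs with the statistics of the current mini-batch and then applying a learned scale and shift, a minimal NumPy sketch could look like the following. This is not the paper's implementation; the names gamma, beta, and eps are illustrative.

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: [batch_size, num_features] activations for one mini-batch
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    return gamma * x_hat + beta            # learned scale and shift restore representational power

At inference time the paper replaces these mini-batch statistics with population estimates accumulated during training, so the output depends deterministically on the input.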



Citations
Book ChapterDOI

Temporal Relational Reasoning in Videos

TL;DR: This paper introduces an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales.
Posted Content

Bilinear CNN Models for Fine-grained Visual Recognition

TL;DR: This paper proposes bilinear models consisting of two feature extractors whose outputs are multiplied using the outer product at each location of the image and pooled to obtain an image descriptor, which can model local pairwise feature interactions in a translationally invariant manner.
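A minimal sketch of the pooling step summarized above: per-location outer products of the two extractors' outputs, sum-pooled over locations into a single descriptor. Array shapes and the function name are assumptions, not the authors' code.

import numpy as np

def bilinear_pool(feat_a, feat_b):
    # feat_a: [num_locations, channels_a]; feat_b: [num_locations, channels_b]
    descriptor = np.einsum('lc,ld->cd', feat_a, feat_b)  # sum of per-location outer products
    return descriptor.reshape(-1)                        # flatten into one image descriptor

Sum-pooling over locations is what makes the descriptor orderless and hence translationally invariant, as the summary notes.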
Proceedings ArticleDOI

A Simple Yet Effective Baseline for 3d Human Pose Estimation

TL;DR: In this paper, a relatively simple deep feed-forward network was proposed to estimate 3D human pose from 2D joint locations with a remarkably low error rate, achieving state-of-the-art results on Human3.6M.
Posted Content

Deep Networks with Stochastic Depth

TL;DR: Stochastic depth as discussed by the authors randomly drops a subset of layers during training and bypasses them with the identity function, which can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error.
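An illustrative sketch of the training-time behavior summarized above, assuming a residual block residual_fn and a survival probability survival_prob (both names hypothetical):

import numpy as np

def stochastic_depth_block(x, residual_fn, survival_prob, training):
    if training:
        if np.random.rand() < survival_prob:
            return x + residual_fn(x)          # block is kept on this forward pass
        return x                               # block dropped: identity bypass only
    return x + survival_prob * residual_fn(x)  # test time: always apply, rescaled by survival prob

Dropping blocks shortens the network during training while the full depth is used at test time, which is how the summarized paper pushes residual networks beyond 1200 layers.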
Book ChapterDOI

Unified Perceptual Parsing for Scene Understanding

TL;DR: A multi-task framework called UPerNet and an accompanying training strategy are developed to learn from heterogeneous image annotations, and the framework is shown to effectively segment a wide range of concepts from images.
References
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; trained with gradient-based learning, it synthesizes a complex decision surface that can classify high-dimensional patterns such as handwritten characters.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Journal ArticleDOI

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as mentioned in this paper is a benchmark in object category classification and detection on hundreds of object categories and millions of images, which has been run annually from 2010 to present, attracting participation from more than fifty institutions.
Proceedings Article

Rectified Linear Units Improve Restricted Boltzmann Machines

TL;DR: Restricted Boltzmann machines are extended by replacing their binary stochastic hidden units with rectified linear units, which learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.