Open Access Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Sergey Ioffe, Christian Szegedy
Vol. 1, pp. 448-456
TLDR
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.
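
To make the method concrete, the following is a minimal NumPy sketch of the per-mini-batch normalization described in the abstract: each feature is normalized by its mini-batch mean and variance and then scaled and shifted by learned parameters. The function name, the gamma/beta/eps parameters, and the toy data are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.

    x:     (batch_size, num_features) activations of one layer
    gamma: (num_features,) learned scale
    beta:  (num_features,) learned shift
    """
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    return gamma * x_hat + beta            # learned scale/shift restores expressiveness

# Toy usage: normalize the pre-activations of a 4-example mini-batch.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(4, 5))
y = batch_norm_forward(x, gamma=np.ones(5), beta=np.zeros(5))
print(y.mean(axis=0))  # close to 0 for every feature
print(y.var(axis=0))   # close to 1 for every feature
```

At inference time, the paper replaces the mini-batch statistics with population statistics accumulated during training, so the sketch above corresponds only to the training-time forward pass.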


Citations
Proceedings Article (DOI)

Very deep convolutional neural network based image classification using small training sample size

TL;DR: In this article, a modified VGG-16 network is used to fit CIFAR-10 without severe overfitting, achieving an 8.45% error rate on the dataset.
Journal Article (DOI)

Graph Convolutional Networks for Hyperspectral Image Classification

TL;DR: In this paper, a mini-batch graph convolutional network (called miniGCN) is proposed for hyperspectral image classification, allowing large-scale GCNs to be trained in a mini-batch fashion.
Book Chapter (DOI)

ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation

TL;DR: This chapter introduces ESPNet, a fast and efficient convolutional neural network for semantic segmentation of high-resolution images under resource constraints; it is efficient in terms of computation, memory, and power.
Proceedings Article

Genetic CNN

TL;DR: This paper proposes an encoding method that represents each network structure as a fixed-length binary string; the population is initialized with a set of randomized individuals, and standard genetic operations such as selection, mutation, and crossover are defined to generate competitive individuals and eliminate weak ones.
Posted Content

The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches.

TL;DR: This report presents a brief survey of the development of deep learning (DL) approaches, including the Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), Auto-Encoder (AE), Deep Belief Network (DBN), Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL).
References
Journal Article (DOI)

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; it can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Proceedings Article (DOI)

Going deeper with convolutions

TL;DR: Inception, as described in this paper, is a deep convolutional neural network architecture that set a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Journal Article (DOI)

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as mentioned in this paper is a benchmark in object category classification and detection on hundreds of object categories and millions of images, which has been run annually from 2010 to present, attracting participation from more than fifty institutions.
Proceedings Article

Rectified Linear Units Improve Restricted Boltzmann Machines

TL;DR: Restricted Boltzmann machines are typically trained with binary stochastic hidden units; replacing these with rectified linear units yields features that are better for object recognition on the NORB dataset and for face verification on the Labeled Faces in the Wild dataset.