Open Access
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TLDR
Batch Normalization, as proposed in this paper, normalizes layer inputs over each training mini-batch to reduce internal covariate shift in deep neural networks, enabling higher learning rates and achieving state-of-the-art performance on ImageNet.
Abstract
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
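To make the mini-batch transform concrete, here is a minimal NumPy sketch of the training-time Batch Normalization forward pass the abstract describes: normalize each feature over the current mini-batch, then apply a learned scale and shift. The function and variable names are illustrative; the backward pass and the running statistics used at inference are omitted.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch norm for a (batch_size, num_features) activation matrix."""
    mu = x.mean(axis=0)                     # per-feature mini-batch mean
    var = x.var(axis=0)                     # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize to ~zero mean, unit variance
    return gamma * x_hat + beta             # learned scale/shift restores capacity

# Toy mini-batch of 32 examples with 4 features, deliberately shifted and scaled.
x = 5.0 * np.random.randn(32, 4) + 3.0
out = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0), out.std(axis=0))    # approximately 0 and 1 per feature
```

At inference, the paper replaces the mini-batch statistics with population estimates accumulated during training, so the output depends on the input alone.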


Citations
Journal Article

Age and Sex Estimation Using Artificial Intelligence From Standard 12-Lead ECGs.

TL;DR: A convolutional neural network trained on standard 12-lead ECGs can estimate a patient's sex and approximate age, suggesting the ECG carries a signal of physiologic aging.
Journal Article

An application of cascaded 3D fully convolutional networks for medical image segmentation.

TL;DR: This work shows that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models.
Posted Content

CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images

TL;DR: In this paper, a learning curriculum measures the complexity of training data by its distribution density in a feature space and ranks samples in an unsupervised manner; training from easy to hard substantially reduces the negative impact of noisy labels and yields a high-performance CNN model.
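As a rough illustration of the density-based ranking described above (assuming image features have already been extracted; the k-nearest-neighbour density estimate and the three-stage split are illustrative choices, not the paper's exact procedure):

```python
import numpy as np

def density_rank(features, k=10):
    """Rank samples by local density in feature space (denser = assumed cleaner).

    features: (n_samples, dim) array of per-image feature vectors.
    Returns indices sorted from densest (easy/clean) to sparsest (hard/noisy).
    """
    # Pairwise Euclidean distances (fine for small n; use a KD-tree at scale).
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    # Density estimate: inverse of the mean distance to the k nearest neighbours.
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    density = 1.0 / (knn.mean(axis=1) + 1e-12)
    return np.argsort(-density)

# Curriculum: train first on the densest third, then progressively add the rest.
feats = np.random.randn(300, 64)
order = density_rank(feats)
stages = np.array_split(order, 3)   # three subsets, easiest to hardest
```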
Journal Article

Deep learning for real-time single-pixel video.

TL;DR: This work develops and implements a novel approach to solving the inverse problem for single-pixel cameras efficiently and represents a significant step towards real-time operation of computational imagers.
Posted Content

Ensemble Adversarial Training: Attacks and Defenses.

TL;DR: Ensemble adversarial training, as discussed by the authors, augments training data with perturbations transferred from other pre-trained models, decoupling adversarial example generation from the model being trained and yielding models with substantially improved robustness to black-box attacks.
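A hedged PyTorch sketch of that core idea: augment a clean mini-batch with one-step (FGSM) perturbations computed on a separate, frozen source model rather than on the model being trained. The tiny linear models, input shapes, and epsilon value are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM perturbation of x, computed against `model`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Placeholder models: `target` is being trained, `source` stands in for a pre-trained net.
target = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
source = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
opt = torch.optim.SGD(target.parameters(), lr=0.1)

x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
x_adv = fgsm(source, x, y, eps=0.1)       # perturbations transferred from the source model
batch_x = torch.cat([x, x_adv])           # clean + transferred adversarial examples
batch_y = torch.cat([y, y])

loss = F.cross_entropy(target(batch_x), batch_y)
opt.zero_grad()
loss.backward()
opt.step()
```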
References
Journal Article

Gradient-based learning applied to document recognition

TL;DR: Gradient-based learning with multilayer neural networks can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters; the paper also proposes graph transformer networks (GTNs), which train multi-module document recognition systems globally by gradient-based methods.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
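For reference, a minimal sketch of the inverted-dropout variant, which folds the test-time rescaling into training; the layer sizes here are arbitrary.

```python
import numpy as np

def dropout(x, p=0.5, train=True):
    """Randomly zero each unit with probability p during training and scale
    the survivors by 1/(1-p), so expected activations match test time."""
    if not train or p == 0.0:
        return x
    mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
    return x * mask

h = np.random.randn(4, 8)          # a small batch of hidden activations
print(dropout(h, p=0.5))           # about half the entries zeroed, the rest doubled
```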
Proceedings Article

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

TL;DR: In this paper, the Parametric Rectified Linear Unit (PReLU) is proposed to improve model fitting at nearly zero extra computational cost and with little overfitting risk; combined with a robust initialization for rectifier networks, it achieves a 4.94% top-5 test error on the ImageNet 2012 classification dataset.
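The activation itself is a one-liner; in the paper the negative slope `a` is a learned parameter (typically one per channel), whereas in this sketch it is simply passed in for illustration.

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: identity for positive inputs, slope `a` for negative ones."""
    return np.where(x > 0, x, a * x)

print(prelu(np.array([-2.0, -0.5, 0.0, 1.5]), a=0.25))   # [-0.5, -0.125, 0.0, 1.5]
```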
Journal Article

Independent component analysis: algorithms and applications

TL;DR: This paper presents the basic theory and applications of ICA, whose goal is to find a linear representation of non-Gaussian data such that the components are statistically independent, or as independent as possible.
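A small usage sketch of that setup using scikit-learn's FastICA (one of the algorithm families the survey covers); the synthetic sources and mixing matrix below are made up purely for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two independent, non-Gaussian sources, linearly mixed (cocktail-party setup).
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sign(np.sin(3 * t)), rng.laplace(size=t.size)]   # square wave + Laplacian noise
A = np.array([[1.0, 0.5], [0.4, 1.0]])      # unknown mixing matrix
X = S @ A.T                                 # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                # recovered sources, up to order/sign/scale
```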
Journal Article

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization

TL;DR: This work describes and analyzes an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and yields regret guarantees provably as good as those of the best proximal function chosen in hindsight.
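The per-coordinate AdaGrad update at the heart of the method fits in a few lines; the toy quadratic objective, dimensions, and base learning rate below are illustrative choices, not from the paper.

```python
import numpy as np

def adagrad_step(w, grad, state, lr=0.5, eps=1e-8):
    """One AdaGrad update: each coordinate's step shrinks with its own accumulated
    squared gradients, so rarely-updated coordinates keep larger effective steps."""
    state += grad ** 2
    w -= lr * grad / (np.sqrt(state) + eps)
    return w, state

target = np.array([1.0, -2.0, 0.5])
w, state = np.zeros(3), np.zeros(3)
for _ in range(200):
    grad = 2 * (w - target)                 # gradient of the quadratic ||w - target||^2
    w, state = adagrad_step(w, grad, state)
print(w)                                    # approaches [1.0, -2.0, 0.5]
```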