Open Access · Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TLDR
Batch Normalization normalizes the inputs of each layer for every training mini-batch to reduce internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.
Abstract
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
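
To make the core operation concrete, here is a minimal sketch of the batch-normalization forward pass in NumPy. The function name and 2-D activation shape are illustrative assumptions, and the running population statistics the paper uses at inference time are omitted:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch of activations feature-wise, then scale and shift.

    x:     (batch_size, num_features) pre-activation values
    gamma: (num_features,) learned scale
    beta:  (num_features,) learned shift
    """
    mu = x.mean(axis=0)                      # per-feature mini-batch mean
    var = x.var(axis=0)                      # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # zero mean, unit variance per feature
    return gamma * x_hat + beta              # learned affine transform restores expressiveness
```

At inference time the paper replaces the mini-batch statistics with population estimates accumulated during training, so the output becomes a deterministic function of the input.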


Citations
Proceedings ArticleDOI

NormFace: L2 Hypersphere Embedding for Face Verification

TL;DR: This work identifies and studies four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings, and proposes two strategies for training with normalized features.
Proceedings ArticleDOI

Fully-Adaptive Feature Sharing in Multi-Task Networks with Applications in Person Attribute Classification

TL;DR: In this article, the authors propose an automatic approach for designing compact multi-task deep learning architectures by starting with a thin multi-layer network and dynamically widening it in a greedy manner during training.
Proceedings ArticleDOI

Dual Motion GAN for Future-Flow Embedded Video Prediction

TL;DR: A dual motion generative adversarial network (GAN) is proposed that explicitly enforces future-frame predictions to be consistent with the pixel-wise flows in the video through a dual-learning mechanism.
Posted Content

CNN-based Segmentation of Medical Imaging Data.

TL;DR: A CNN-based method with three-dimensional filters is demonstrated, applied to hand and brain MRI, and validated on data from both the central nervous system and the bones of the hand.
Journal ArticleDOI

Deep Learning for Hyperspectral Image Classification: An Overview

TL;DR: This survey paper presents a systematic review of the deep learning-based HSI classification literature and compares several strategies for improving classification performance, which can provide guidelines for future studies on this topic.
References
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
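
Since the abstract above notes that Batch Normalization can in some cases eliminate the need for Dropout, a minimal sketch of the dropout operation may be useful for contrast. This uses the common "inverted" convention; the function name and defaults are illustrative:

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Randomly zero each unit with probability p during training.

    Inverted convention: surviving activations are scaled by 1 / (1 - p)
    at training time so no rescaling is needed at test time. The original
    paper instead scales weights by p at test time, equivalent in expectation.
    """
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p      # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)
```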
Proceedings ArticleDOI

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

TL;DR: In this paper, a Parametric Rectified Linear Unit (PReLU) is proposed to improve model fitting with nearly zero extra computational cost and little overfitting risk, achieving a 4.94% top-5 test error on the ImageNet 2012 classification dataset.
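
The PReLU activation itself is compact; a minimal sketch follows, where the slope parameter `a` is learned during training rather than fixed as in Leaky ReLU (the function signature is illustrative):

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: identity for positive inputs, learned slope for negatives.

    f(x) = x      if x > 0
    f(x) = a * x  otherwise, where a is a learnable parameter (e.g. per channel)
    """
    return np.where(x > 0, x, a * x)
```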
Journal ArticleDOI

Independent component analysis: algorithms and applications

TL;DR: The basic theory and applications of ICA are presented, and the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible.
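
As an illustration of that goal, a short sketch recovering independent sources with FastICA from scikit-learn; the two-source mixing setup here is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                        # smooth periodic source
s2 = np.sign(np.sin(3 * t))               # non-Gaussian square-wave source
S = np.c_[s1, s2]                         # true independent sources

A = np.array([[1.0, 0.5], [0.5, 1.0]])    # unknown mixing matrix
X = S @ A.T                               # observed linear mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)              # estimated independent components
```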
Journal Article

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization

TL;DR: This work describes and analyzes an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as those of the best proximal function that can be chosen in hindsight.
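
The widely used diagonal variant of this method (AdaGrad) can be sketched in a few lines; this is an illustrative simplification, not the full proximal framework, and the learning rate shown is arbitrary:

```python
import numpy as np

def adagrad_step(theta, grad, accum, lr=0.1, eps=1e-8):
    """One diagonal-AdaGrad update.

    accum holds the running sum of squared gradients per parameter;
    coordinates with large historical gradients get a smaller effective step.
    """
    accum += grad ** 2
    theta -= lr * grad / (np.sqrt(accum) + eps)
    return theta, accum
```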