Open Access Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR
Batch Normalization normalizes layer inputs for each training mini-batch to reduce internal covariate shift in deep neural networks, allowing much higher learning rates and achieving state-of-the-art performance on ImageNet.
Abstract
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
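
The per-mini-batch normalization described above can be sketched in a few lines of NumPy. The learnable scale (gamma) and shift (beta) follow the transform the paper describes; the function name, eps value, and example shapes below are illustrative, not taken from the paper:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch (rows = examples, columns = features),
    then apply the learned scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)                     # per-feature mini-batch mean
    var = x.var(axis=0)                     # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalized activations
    return gamma * x_hat + beta             # scaled and shifted output

# Example: a mini-batch of 4 examples with 3 features each
x = np.random.randn(4, 3)
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
```

The learned scale and shift let the layer recover the identity transform when that is optimal, which is why normalization can be made part of the model architecture without restricting what the layer can represent.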


Citations
Journal Article

DEEPre: sequence-based enzyme EC number prediction by deep learning

TL;DR: Proposes DEEPre, an end-to-end feature selection and classification model for enzyme EC number prediction, together with an automatic and robust feature dimensionality uniformization method, improving prediction performance over previous state-of-the-art methods.
Proceedings Article

Synthesizing 3D Shapes via Modeling Multi-view Depth Maps and Silhouettes with Deep Generative Networks

TL;DR: This work takes an alternative approach to learning generative models of 3D shapes: it learns a generative model over multi-view depth maps or their corresponding silhouettes, and uses a deterministic rendering function to produce 3D shapes from these images.
Posted Content

Attention Branch Network: Learning of Attention Mechanism for Visual Explanation

TL;DR: Proposes the Attention Branch Network (ABN), which extends a top-down visual explanation model by introducing a branch structure with an attention mechanism, and is trainable end-to-end for both visual explanation and image recognition.
Proceedings Article

RAM: A Region-Aware Deep Model for Vehicle Re-Identification

TL;DR: A novel learning algorithm is introduced to jointly use vehicle IDs, types/models, and colors to train the Region-Aware deep Model (RAM), which fuses more cues for training and results in more discriminative global and regional features.
Proceedings Article

Dynamic Multi-Scale Filters for Semantic Segmentation

TL;DR: A Dynamic Multi-scale Network (DMNet) is proposed to adaptively capture multi-scale content for predicting pixel-level semantic labels, obtaining state-of-the-art performance on Pascal-Context and ADE20K.
References
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, graph transformer networks (GTNs) are proposed for document recognition, and gradient-based learning is shown to synthesize a complex decision surface that can classify high-dimensional patterns such as handwritten characters.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
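
As a rough sketch of the mechanism this TL;DR refers to (an inverted-dropout variant; the function name and the train-time scaling convention are assumptions, not the authors' exact formulation):

```python
import numpy as np

def dropout(x, p_drop=0.5, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p_drop during training,
    and scale the survivors so the expected activation stays unchanged."""
    if not training or p_drop == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p_drop   # keep mask
    return x * mask / (1.0 - p_drop)

# At test time the layer is the identity: dropout(x, training=False) returns x unchanged.
```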
Proceedings Article

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

TL;DR: In this paper, the Parametric Rectified Linear Unit (PReLU) was proposed to improve model fitting with nearly zero extra computational cost and little overfitting risk, achieving a 4.94% top-5 test error on the ImageNet 2012 classification dataset.
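
The activation itself is simple; a minimal NumPy sketch, where the slope a for negative inputs is the learned parameter (names and the example values are illustrative):

```python
import numpy as np

def prelu(x, a):
    """PReLU: identity for positive inputs, learned slope a for negative inputs.
    a = 0 reduces to ReLU; a small fixed constant gives Leaky ReLU."""
    return np.where(x > 0, x, a * x)

# Example: a single learned slope shared across a feature map
print(prelu(np.array([-2.0, -0.5, 1.0, 3.0]), a=0.25))
```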
Journal Article

Independent component analysis: algorithms and applications

TL;DR: The basic theory and applications of ICA are presented, and the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible.
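
As an illustrative sketch of that goal, here is an unmixing example using scikit-learn's FastICA; the mixing matrix and source distributions are made up for the example, and scikit-learn is an assumption of this sketch rather than part of the reference:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
s = rng.laplace(size=(1000, 2))            # two independent non-Gaussian sources
A = np.array([[1.0, 0.5], [0.3, 1.0]])     # mixing matrix (unknown in practice)
x = s @ A.T                                # observed linear mixtures

ica = FastICA(n_components=2, random_state=0)
s_est = ica.fit_transform(x)               # recovered, approximately independent components
```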
Journal Article

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization

TL;DR: This work describes and analyzes an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that could have been chosen in hindsight.
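
A simplified diagonal-AdaGrad update conveys the idea of per-coordinate adaptive step sizes; this is a sketch rather than the paper's full proximal framework, and the variable names are illustrative:

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.01, eps=1e-8):
    """One diagonal AdaGrad update: accumulate squared gradients per coordinate
    and shrink each coordinate's step by the square root of that history."""
    accum = accum + grad ** 2
    w = w - lr * grad / (np.sqrt(accum) + eps)
    return w, accum

# Example: coordinates with large accumulated gradients get smaller steps over time
w, accum = np.zeros(3), np.zeros(3)
w, accum = adagrad_step(w, grad=np.array([0.1, 1.0, 0.0]), accum=accum)
```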