Open Access Proceedings Article

Regularizing Deep Neural Networks by Noise: Its Interpretation and Optimization

TL;DR: This paper interprets conventional training with noise-injection regularization as optimizing a lower bound of the true objective, and proposes a technique that achieves a tighter lower bound by drawing multiple noise samples per training example in each stochastic gradient descent iteration.
Abstract
Overfitting is one of the most critical challenges in deep neural networks, and various regularization methods have been proposed to improve generalization performance. Injecting noise into hidden units during training, e.g., dropout, is known to be a successful regularizer, but it is still not well understood why such training techniques work in practice, or how to maximize their benefit in the presence of two conflicting objectives: fitting the true data distribution and preventing overfitting by regularization. This paper addresses these issues by 1) interpreting conventional training with noise-injection regularization as optimizing a lower bound of the true objective and 2) proposing a technique that achieves a tighter lower bound by using multiple noise samples per training example in a stochastic gradient descent iteration. We demonstrate the effectiveness of our idea in several computer vision applications.
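The multi-sample objective lends itself to a short sketch. The snippet below is a minimal, hedged illustration in PyTorch, assuming a classifier whose forward pass injects noise (e.g., a network containing nn.Dropout run in training mode); the names model, x, y, and num_noise_samples are illustrative, not taken from the authors' released code.

```python
import math
import torch
import torch.nn.functional as F

def multi_sample_noise_loss(model, x, y, num_noise_samples=4):
    """Tighter lower bound on log E_eps[p(y | x, eps)] from K noise samples.

    Conventional noisy training averages log-likelihoods over noise
    (Jensen's lower bound); averaging the likelihoods *before* taking the
    log yields a tighter bound, computed stably here with logsumexp.
    """
    per_sample_log_p = []
    for _ in range(num_noise_samples):
        logits = model(x)  # each forward pass draws fresh dropout noise
        log_p = F.log_softmax(logits, dim=1).gather(1, y.unsqueeze(1))
        per_sample_log_p.append(log_p)
    log_p = torch.cat(per_sample_log_p, dim=1)  # shape: (batch, K)
    # log (1/K) sum_k p(y | x, eps_k) = logsumexp_k log p(y | x, eps_k) - log K
    bound = torch.logsumexp(log_p, dim=1) - math.log(num_noise_samples)
    return -bound.mean()  # minimize the negative lower bound
```

With num_noise_samples=1 this reduces to the conventional noisy cross-entropy objective; larger values tighten the bound at a proportional increase in compute per SGD iteration.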


Citations
Journal Article

Adaptive Weight Decay for Deep Neural Networks

TL;DR: Quantitative evaluation of the proposed algorithm, called adaptive weight decay (AdaDecay), indicates that AdaDecay improves generalization, leading to better accuracy across all datasets and models evaluated.
Journal Article

A Dual Camera System for High Spatiotemporal Resolution Video Acquisition

TL;DR: In this paper, a dual camera system for high spatiotemporal resolution (HSTR) video acquisition is presented, in which one camera captures a high-spatial-resolution, low-frame-rate (HSR-LFR) video and the other captures a low-spatial-resolution, high-frame-rate (LSR-HFR) video.
Journal Article

The impact of extraneous features on the performance of recurrent neural network models in clinical tasks.

TL;DR: This work investigated the effect of extraneous input features on the predictive performance of recurrent neural networks by augmenting the input vector with extraneous features drawn randomly from theoretical and empirical distributions.
Journal Article

Neural Spike Sorting Using Binarized Neural Networks

TL;DR: The designed BNN-based spike sorting system is implemented on a field-programmable gate array and is shown to reduce the required on-chip memory by 89% compared to alternative state-of-the-art spike sorting systems.
Posted Content

Unpacking Information Bottlenecks: Unifying Information-Theoretic Objectives in Deep Learning

TL;DR: The proposed surrogate objectives make it possible to apply the information bottleneck to modern neural network architectures, with the resulting insights demonstrated on Permutation-MNIST, MNIST, and CIFAR10.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework that eases the training of networks substantially deeper than those used previously and won first place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieved state-of-the-art ImageNet classification performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of convolutional network depth on accuracy in the large-scale image recognition setting and showed that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
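Since dropout is the canonical form of the noise injection studied in the main paper, a minimal sketch may help make the mechanism concrete. The snippet below is a generic inverted-dropout illustration in NumPy, not the implementation from the cited paper; dropout, p_drop, and train are assumed names.

```python
import numpy as np

def dropout(h, p_drop=0.5, train=True, rng=None):
    """Inverted dropout: zero each hidden unit with probability p_drop and
    rescale survivors by 1 / (1 - p_drop), so the expected activation is
    unchanged and no rescaling is needed at test time."""
    if not train:
        return h  # identity at test time
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(h.shape) >= p_drop  # keep each unit with prob 1 - p_drop
    return h * mask / (1.0 - p_drop)
```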
Posted Content

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

TL;DR: Faster R-CNN introduces a Region Proposal Network (RPN) that generates high-quality region proposals, which are then used by Fast R-CNN for detection.