Open Access Proceedings Article

Regularizing Deep Neural Networks by Noise: Its Interpretation and Optimization

TLDR
This paper interprets conventional training with noise-injection regularization as optimizing a lower bound of the true objective, and proposes a technique that achieves a tighter lower bound by drawing multiple noise samples per training example in each stochastic gradient descent iteration.
Abstract
Overfitting is one of the most critical challenges in deep neural networks, and various regularization methods exist to improve generalization performance. Injecting noise into hidden units during training, e.g., dropout, is known to be a successful regularizer, but it is still not fully clear why such training techniques work well in practice, or how to maximize their benefit in the presence of two conflicting objectives---fitting the true data distribution and preventing overfitting through regularization. This paper addresses these issues by 1) interpreting conventional training with regularization by noise injection as optimizing a lower bound of the true objective and 2) proposing a technique to achieve a tighter lower bound using multiple noise samples per training example in a stochastic gradient descent iteration. We demonstrate the effectiveness of our idea in several computer vision applications.
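The abstract's key idea---averaging likelihoods over several noise draws before taking the logarithm, so that Jensen's inequality is applied once to the average rather than to each draw---can be illustrated in a few lines. The sketch below is not the authors' released code; it assumes a classification setting with a PyTorch-style model whose noise (e.g., dropout) is active in training mode, and the names `model`, `x`, `y`, and `num_noise_samples` are hypothetical.

```python
import math
import torch
import torch.nn.functional as F

def multi_sample_noise_loss(model, x, y, num_noise_samples=4):
    """Sketch of a tighter lower bound via multiple noise samples per example.

    Standard noise-injected training averages per-sample log-likelihoods
    (a looser bound); here the per-sample likelihoods are averaged before
    the logarithm, which tightens the bound.
    """
    log_probs = []
    for _ in range(num_noise_samples):
        # Each forward pass draws fresh noise (dropout must be in train mode).
        logits = model(x)
        # log p(y | x, noise_k) per example.
        log_p = -F.cross_entropy(logits, y, reduction="none")
        log_probs.append(log_p)
    log_probs = torch.stack(log_probs, dim=0)  # shape: [K, batch]
    # log( (1/K) * sum_k p(y | x, noise_k) ), computed stably via logsumexp.
    tight_bound = torch.logsumexp(log_probs, dim=0) - math.log(num_noise_samples)
    return -tight_bound.mean()
```

With num_noise_samples set to 1, this reduces to the ordinary noise-injected cross-entropy loss, i.e., the looser bound that conventional training optimizes.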



Citations
More filters
Proceedings ArticleDOI

Towards a Robust Differentiable Architecture Search under Label Noise

TL;DR: This paper proposes a noise-injecting operation that prevents the network from learning from noisy samples, while not degrading the performance of the NAS algorithm when the data is in fact clean.
Proceedings ArticleDOI

Regularization Learning for Image Recognition

TL;DR: A novel probabilistic representation for explaining the architecture of deep neural networks (DNNs), demonstrating that the hidden layers close to the input form prior distributions, and that DNNs therefore have an explicit regularizer, namely those prior distributions.
Journal ArticleDOI

Using an optimised neural architecture search for predicting the quantum yield of photosynthesis of winter wheat

TL;DR: This article proposes a novel prediction model based on an optimised neural architecture search (B-NAS) that achieves better results than support vector regression (SVR) and other traditional prediction methods.
Journal ArticleDOI

Robust Neural Networks Learning via a Minimization of Stochastic Output Sensitivity

TL;DR: Experimental results show that the SML significantly outperforms several regularization techniques and yields much lower classification error when test sets are contaminated with noise.
Proceedings ArticleDOI

A Novel Gradient Accumulation Method for Calibration of Named Entity Recognition Models

TL;DR: This work proposes a novel calibration method based on gradient accumulation in conjunction with existing loss regularization techniques, showing an improved performance-to-calibration ratio compared with current methods.
References
More filters
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art ImageNet classification performance with a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Posted Content

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

TL;DR: Faster R-CNN introduces a Region Proposal Network (RPN) to generate high-quality region proposals, which are then used by Fast R-CNN for detection.