Open Access Proceedings Article

Dropout as a Bayesian approximation: representing model uncertainty in deep learning

TL;DR
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
Abstract
Deep learning tools have gained tremendous attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs, extracting information that has so far been thrown away by existing models. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.
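The recipe this theory licenses is easy to state: keep dropout switched on at test time and average several stochastic forward passes, using their spread as an uncertainty estimate. Below is a minimal NumPy sketch of this Monte Carlo dropout idea on a toy two-layer regression network; the weights, shapes, and the `mc_dropout_predict` helper are illustrative stand-ins, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer regression network. The weights below are random
# stand-ins for a network already trained with dropout.
W1, b1 = rng.standard_normal((1, 50)), np.zeros(50)
W2, b2 = rng.standard_normal((50, 1)), np.zeros(1)

def forward(x, p=0.5):
    """One stochastic forward pass: dropout stays switched ON at test time."""
    h = np.maximum(x @ W1 + b1, 0.0)           # ReLU hidden layer
    mask = rng.binomial(1, 1.0 - p, h.shape)   # Bernoulli dropout mask
    return (h * mask / (1.0 - p)) @ W2 + b2    # inverted-dropout scaling

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes; their spread estimates model uncertainty."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

mean, std = mc_dropout_predict(np.array([[0.3]]))
print(f"prediction {mean.item():.3f} +/- {std.item():.3f}")
```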


Citations
Posted Content

Dropout as data augmentation

TL;DR: An approach that projects the dropout noise within a network back into the input space, thereby generating augmented versions of the training data; training a deterministic network on these augmented samples is shown to yield similar results.
Proceedings Article

Random feature expansions for Deep Gaussian Processes

TL;DR: In this paper, the authors introduce a novel formulation of deep Gaussian processes (DGPs) based on random feature expansions trained with stochastic variational inference, which significantly advances the state of the art in inference for DGPs and enables accurate quantification of uncertainty.
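Random feature expansions of this kind build on the random Fourier feature construction, in which an RBF kernel is approximated by the inner product of randomized cosine features. The NumPy sketch below shows that single-layer building block under an assumed unit lengthscale; it is not the paper's full DGP model or its variational training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(X, n_features=2000, lengthscale=1.0):
    """Random Fourier features: E[phi(x) . phi(y)] equals the RBF kernel."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) / lengthscale  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)           # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.standard_normal((5, 3))
Phi = rff(X)
K_approx = Phi @ Phi.T                           # approximate kernel matrix
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq)                      # exact RBF kernel, unit lengthscale
print(np.abs(K_approx - K_exact).max())          # error shrinks as n_features grows
```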
Journal Article

Learning and the Unknown: Surveying Steps toward Open World Recognition

TL;DR: This paper summarizes the state of the art, core ideas, and results and explains why, despite the efforts to date, the current techniques are genuinely insufficient for handling unknown inputs, especially for deep networks.
Posted Content

Survey of Dropout Methods for Deep Neural Networks

TL;DR: The history of dropout methods, their various applications, and current areas of research interest are summarized.
Posted Content

Short-term Load Forecasting with Deep Residual Networks

TL;DR: The proposed model is able to integrate domain knowledge and researchers’ understanding of the task by virtue of different neural network building blocks and has high generalization capability.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments, and provides a regret bound comparable to the best known results under the online convex optimization framework.
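The update rule itself is compact: exponential moving averages of the gradient and its square, bias-corrected and combined into a per-parameter step. The sketch below restates it in NumPy with the paper's default hyperparameters; the quadratic toy objective is purely illustrative.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moment estimates plus bias correction."""
    m = b1 * m + (1 - b1) * grad        # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)           # correct the zero-initialization bias
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy check: minimize f(theta) = theta^2, whose gradient is 2 * theta.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t, lr=0.05)
print(theta)  # settles near the minimum at 0
```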
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, graph transformer networks (GTNs) are proposed for document recognition; multilayer networks trained with gradient-based learning are shown to synthesize complex decision surfaces that can classify high-dimensional patterns such as handwritten characters.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
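Concretely, dropout multiplies each unit's activation by an independent Bernoulli variable during training. The sketch below shows the commonly used "inverted" variant, which rescales at training time so that no scaling is needed at test time; the paper itself describes the equivalent scheme of scaling weights at test time instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5, train=True):
    """Inverted dropout: drop units with probability p during training only."""
    if not train:
        return h                               # identity at test time
    mask = rng.binomial(1, 1.0 - p, h.shape)   # keep each unit with prob 1 - p
    return h * mask / (1.0 - p)                # rescale to preserve expectation

h = np.ones((2, 4))
print(dropout(h))               # roughly half the units zeroed, rest scaled by 2
print(dropout(h, train=False))  # unchanged
```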
Journal Article

Human-level control through deep reinforcement learning

TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
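At the heart of the agent is a Q-network regressed toward Bellman targets computed with a periodically frozen target network. The NumPy sketch below shows just that target computation on made-up arrays; the replay buffer, convolutional network, and training loop are omitted.

```python
import numpy as np

def dqn_targets(rewards, next_q_target, dones, gamma=0.99):
    """Bellman targets y = r + gamma * max_a Q_target(s', a), zeroed at episode end."""
    return rewards + gamma * (1.0 - dones) * next_q_target.max(axis=1)

rewards = np.array([1.0, 0.0])
next_q_target = np.array([[0.5, 2.0],   # Q_target(s', a) from the frozen network
                          [1.0, 0.3]])
dones = np.array([0.0, 1.0])            # second transition ends its episode
print(dqn_targets(rewards, next_q_target, dones))  # [2.98, 0.0]
```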
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
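The two pieces that make the algorithm work are the reparameterization of the latent sample, which keeps sampling differentiable, and a closed-form KL term for a diagonal Gaussian encoder. Both are sketched below in NumPy with illustrative values; this is not a complete VAE.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps: sampling stays differentiable w.r.t. mu, log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_diag_gaussian(mu, log_var):
    """Closed-form KL( q(z|x) || N(0, I) ) for a diagonal Gaussian encoder."""
    return -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var), axis=-1)

mu = np.array([[0.2, -0.1]])       # illustrative encoder outputs
log_var = np.array([[-1.0, 0.5]])
print(reparameterize(mu, log_var), kl_diag_gaussian(mu, log_var))
```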