Open Access Proceedings Article

Dropout as a Bayesian approximation: representing model uncertainty in deep learning

TLDR
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
Abstract
Deep learning tools have gained tremendous attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs - extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.
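
As a rough illustration of the recipe above, the following PyTorch sketch keeps dropout active at test time and treats the mean and variance of several stochastic forward passes as the prediction and its uncertainty. The architecture, dropout rate and sample count are illustrative assumptions, not the paper's exact setup.

# Minimal Monte Carlo dropout sketch (PyTorch); model and sizes are illustrative.
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    def __init__(self, d_in=1, d_hidden=50, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=100):
    model.train()                                        # keep dropout stochastic at test time
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)           # predictive mean and (epistemic) variance

model = DropoutRegressor()
x = torch.linspace(-3, 3, 64).unsqueeze(1)
mean, var = mc_dropout_predict(model, x)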


Citations
Journal Article

Diminishing Uncertainty within the Training Pool: Active Learning for Medical Image Segmentation

TL;DR: This article presents a query-by-committee approach to active learning in which a joint optimizer is used for the committee, and proposes three new strategies: increasing the frequency of uncertain data to bias the training dataset; using mutual information among the input images as a regularizer for acquisition to ensure diversity in the training dataset; and adapting the Dice log-likelihood for Stein variational gradient descent (SVGD).
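
As a generic illustration of uncertainty-driven acquisition in the spirit of this summary (not the paper's joint-optimizer committee, mutual-information regularizer or SVGD variant), the sketch below scores unlabelled pool samples by the disagreement, measured as variance, across a small committee of models and queries the most uncertain ones; committee size, pool size and query batch size are assumptions.

# Hypothetical committee-disagreement acquisition sketch (NumPy).
import numpy as np

def acquire(committee_probs, k=16):
    # committee_probs: (n_models, n_pool, n_classes) softmax outputs of the committee
    disagreement = committee_probs.var(axis=0).sum(axis=-1)   # per-sample variance across members
    return np.argsort(-disagreement)[:k]                      # indices of the k most uncertain pool samples

probs = np.random.dirichlet(np.ones(3), size=(5, 200))        # toy committee of 5 models, 200 pool images
query_idx = acquire(probs)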
Posted Content

Quantifying and Leveraging Predictive Uncertainty for Medical Image Assessment

TL;DR: It is demonstrated that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs.
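
The rejection idea can be made concrete with a small sketch, assuming scikit-learn is available and using placeholder arrays in place of a real model's outputs: rank test cases by predicted uncertainty, drop the most uncertain fraction, and compare ROC-AUC before and after rejection.

# Sketch of uncertainty-based sample rejection (NumPy + scikit-learn); data are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_after_rejection(y_true, y_score, uncertainty, reject_rate=0.25):
    keep = uncertainty <= np.quantile(uncertainty, 1.0 - reject_rate)   # keep the most certain 75%
    return roc_auc_score(y_true[keep], y_score[keep])

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.3, 1000), 0.0, 1.0)  # imperfect classifier scores
uncertainty = 0.5 - np.abs(y_score - 0.5)                               # least confident near 0.5
print(roc_auc_score(y_true, y_score), auc_after_rejection(y_true, y_score, uncertainty))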
Journal Article

Uncertainty analysis in well log classification by Bayesian long short-term memory networks

TL;DR: Dropout, a technique for reducing overfitting and co-adaptation among hidden neurons, is proposed as an approximation to Bayesian inference in deep learning; it is applied not only during training but also at test time with the trained model.
Posted Content

A Simple Probabilistic Method for Deep Classification under Input-Dependent Label Noise

TL;DR: By tuning the softmax temperature, this work improves accuracy, log-likelihood and calibration both on image classification benchmarks with controlled label noise and on ImageNet-21k, which has naturally occurring label noise.
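
A minimal sketch of the general idea, under the assumption that the model has two heads predicting a per-input mean and scale for its logits: logits are sampled via the reparameterisation trick, divided by a tunable softmax temperature, and the resulting softmax probabilities are averaged. The head shapes, temperature and sample count here are illustrative.

# Hypothetical sketch of input-dependent logit noise with a tunable softmax temperature (PyTorch).
import torch

def predictive_probs(logit_mean, logit_scale, temperature=1.5, n_samples=32):
    # logit_mean, logit_scale: (batch, n_classes) outputs of two model heads
    eps = torch.randn(n_samples, *logit_mean.shape)
    sampled = logit_mean + logit_scale * eps               # reparameterised logit samples
    return torch.softmax(sampled / temperature, dim=-1).mean(dim=0)

probs = predictive_probs(torch.randn(4, 10), 0.5 * torch.rand(4, 10))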
Journal Article

NNVA: Neural Network Assisted Visual Analysis of Yeast Cell Polarization Simulation

TL;DR: In this article, a neural-network-based surrogate model is used for visual analysis of the high-dimensional input-parameter space of a complex yeast cell polarization simulation. The surrogate helps the computational biologists who designed the simulation visually calibrate its input parameters: they can modify parameter values and immediately visualize the predicted simulation outcome without running the original, expensive simulation for every instance.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
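
For reference, a bare-bones NumPy version of the Adam update described here, using the commonly cited default hyperparameters; the toy quadratic loss is only for illustration.

# Minimal Adam update sketch (NumPy); the loss and defaults are illustrative.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad             # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad**2          # second-moment (uncentred variance) estimate
    m_hat = m / (1 - beta1**t)                     # bias correction
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 101):
    grad = 2 * (theta - np.array([1.0, -2.0, 0.5]))   # gradient of a toy quadratic loss
    theta, m, v = adam_step(theta, grad, m, v, t)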
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for document recognition; it can be used to synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
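
A minimal sketch of the dropout operation itself, in the common "inverted dropout" form where activations are rescaled during training so that no change is needed at test time (NumPy; shapes and the drop probability are illustrative):

# Inverted-dropout sketch: units are dropped with probability p during training.
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    if not training or p == 0.0:
        return x                                  # identity at test time
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p               # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)                   # rescale so expected activations match test time

h = dropout(np.ones((2, 4)), p=0.5)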
Journal Article

Human-level control through deep reinforcement learning

TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
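
A compact sketch of the central reparameterisation step and the resulting (negative) ELBO, assuming user-defined encoder and decoder modules and a Bernoulli likelihood over inputs in [0, 1]; this illustrates the idea rather than the paper's reference implementation.

# Sketch of the reparameterisation trick and ELBO (PyTorch); encoder/decoder are assumed modules.
import torch
import torch.nn.functional as F

def elbo_loss(encoder, decoder, x):
    mu, logvar = encoder(x)                                        # parameters of q(z | x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)        # reparameterisation: z = mu + sigma * eps
    logits = decoder(z)                                            # Bernoulli likelihood logits for p(x | z)
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # KL(q(z|x) || N(0, I)) in closed form
    return recon + kl                                              # negative ELBO, to be minimised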