Open Access · Proceedings Article

Dropout as a Bayesian approximation: representing model uncertainty in deep learning

TLDR
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
Abstract
Deep learning tools have gained tremendous attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs, extracting information from existing models that has so far been thrown away. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.
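The practical recipe the abstract describes, keeping dropout active at test time and averaging several stochastic forward passes (commonly called MC dropout), can be sketched as follows. The tiny untrained network, its random weights, and the function names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regression network with fixed random weights
# (hypothetical, untrained; only the dropout mechanics matter here).
W1 = rng.normal(size=(1, 50))
b1 = np.zeros(50)
W2 = rng.normal(size=(50, 1))
b2 = np.zeros(1)

def forward(x, p_drop=0.5):
    """One stochastic forward pass; dropout stays ON at test time."""
    h = np.maximum(x @ W1 + b1, 0.0)         # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    return h @ W2 + b2

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes -> predictive mean and variance."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)

x = np.array([[0.5]])
mean, var = mc_dropout_predict(x, T=200)
print(mean.shape, var.shape)  # both (1, 1)
```

The spread across the stochastic passes is the "thrown away" information the abstract refers to: the same masks used as a regularizer during training double as posterior samples at prediction time, at no extra training cost.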



Citations
Journal Article (DOI)

Efficient Multi-Scale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation

TL;DR: An efficient and effective dense training scheme is presented which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data, improving on the state-of-the-art for all three applications.
Posted Content

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

TL;DR: In this article, a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes was developed, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
Journal Article (DOI)

Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning

TL;DR: Virtual adversarial training (VAT) as discussed by the authors is a regularization method based on virtual adversarial loss, which is a measure of local smoothness of the conditional label distribution given input.
Posted Content

On Calibration of Modern Neural Networks

TL;DR: It is discovered that modern neural networks, unlike those from a decade ago, are poorly calibrated, and on most datasets temperature scaling, a single-parameter variant of Platt scaling, is surprisingly effective at calibrating predictions.
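Temperature scaling, as summarized above, divides the logits by a single scalar T fitted on a held-out set before the softmax; T > 1 softens overconfident predictions. A minimal sketch, where the grid-search fitting routine and the synthetic overconfident logits are simplifying assumptions (the cited paper fits T by directly minimizing NLL):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of temperature-scaled probabilities."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Fit the single scalar T on held-out data by grid search on NLL."""
    losses = [nll(logits, labels, T) for T in grid]
    return grid[int(np.argmin(losses))]

# Synthetic demo: inflate well-calibrated logits 3x to mimic overconfidence.
rng = np.random.default_rng(0)
true_logits = rng.normal(size=(500, 5))
labels = np.array([rng.choice(5, p=softmax(true_logits[i:i + 1])[0])
                   for i in range(500)])
T = fit_temperature(3.0 * true_logits, labels)
print(T)  # fitted temperature; > 1 indicates overconfidence
```

Because scaling all logits by the same positive scalar never changes the argmax, temperature scaling leaves accuracy untouched and only adjusts confidence.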
Posted Content

Concrete Problems in AI Safety

TL;DR: A list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function, an objective function that is too expensive to evaluate frequently, or undesirable behavior during the learning process, is presented.
References
Posted Content

Variational Bayesian Inference with Stochastic Search

TL;DR: This work presents an alternative algorithm based on stochastic optimization that allows for direct optimization of the variational lower bound and demonstrates the approach on two non-conjugate models: logistic regression and an approximation to the HDP.
Proceedings Article

Computing with Infinite Networks

TL;DR: For neural networks with a wide class of weight-priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process as discussed by the authors.
Proceedings Article (DOI)

Neural Network based Intrusion Detection System for critical infrastructures

TL;DR: The IDS-NNM algorithm proved capable of capturing all intrusion attempts present in the network communication while not generating any false alerts.
Proceedings Article

Doubly Stochastic Variational Bayes for non-Conjugate Inference

TL;DR: This paper proposes a simple and effective variational inference algorithm based on stochastic optimisation that can be widely applied to Bayesian non-conjugate inference in continuous parameter spaces.

Ensemble learning in Bayesian neural networks

TL;DR: This chapter shows how the ensemble learning approach can be extended to full-covariance Gaussian distributions while remaining computationally tractable, and extends the framework to deal with hyperparameters, leading to a simple re-estimation procedure.