Open Access Proceedings Article
Dropout as a Bayesian approximation: representing model uncertainty in deep learning
Yarin Gal, Zoubin Ghahramani
pp. 1050-1059
TL;DR: A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
Abstract:
Deep learning tools have gained tremendous attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs, extracting information from existing models that has so far been thrown away. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.
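The test-time recipe the abstract alludes to, often called Monte Carlo (MC) dropout, keeps dropout active at prediction time and averages several stochastic forward passes. A minimal NumPy sketch, assuming a toy one-hidden-layer regression network; the weights and dropout rate below are illustrative stand-ins, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" single-hidden-layer regression network
# (illustrative random weights, not trained parameters).
W1 = rng.standard_normal((1, 50))
b1 = np.zeros(50)
W2 = rng.standard_normal((50, 1))
b2 = np.zeros(1)

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept ON, as at training time."""
    h = np.maximum(0.0, x @ W1 + b1)        # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop    # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return h @ W2 + b2

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes: the sample mean is the prediction,
    the sample standard deviation estimates model uncertainty."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.3]])
mean, std = mc_dropout_predict(x)   # prediction plus an uncertainty estimate
```

The key point is that no retraining is needed: the same dropout network used for prediction yields uncertainty for free by sampling it several times.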
Citations
Journal Article
Efficient Multi-Scale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation
Konstantinos Kamnitsas, Christian Ledig, Virginia F. J. Newcombe, Joanna P. Simpson, Andrew D. Kane, David K. Menon, Daniel Rueckert, Ben Glocker, et al.
TL;DR: An efficient and effective dense training scheme is proposed that joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data; it improves on the state-of-the-art for all three applications.
Journal Article
Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
TL;DR: Virtual adversarial training (VAT) is a regularization method based on virtual adversarial loss, a measure of the local smoothness of the conditional label distribution given the input.
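The VAT penalty above can be approximated in a few lines: find a small input perturbation that most changes the predicted distribution, then penalize that change. The sketch below is a simplification that searches random unit directions rather than using the paper's gradient-based power iteration, and the linear classifier `W` is a hypothetical stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 3))   # hypothetical linear classifier weights

def predict(x):
    """Softmax class probabilities of a linear model."""
    z = x @ W
    z = z - z.max()               # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def virtual_adversarial_loss(x, eps=0.1, xi=1e-6, n_dirs=32):
    """KL(p(.|x) || p(.|x + r_adv)), where r_adv is the candidate unit
    direction that most perturbs the prediction (real VAT finds this
    direction via power iteration on gradients, not random search)."""
    p = predict(x)
    dirs = [rng.standard_normal(x.shape) for _ in range(n_dirs)]
    best = max(dirs,
               key=lambda d: kl(p, predict(x + xi * d / np.linalg.norm(d))))
    r_adv = eps * best / np.linalg.norm(best)
    return kl(p, predict(x + r_adv))

x = np.array([1.0, -0.5])
penalty = virtual_adversarial_loss(x)   # VAT adds this to the supervised loss
```

Because the penalty needs only the model's own predictions, not labels, it applies to unlabeled data, which is what makes VAT usable for semi-supervised learning.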
Posted Content
On Calibration of Modern Neural Networks
TL;DR: It is discovered that modern neural networks, unlike those from a decade ago, are poorly calibrated, and that on most datasets temperature scaling, a single-parameter variant of Platt scaling, is surprisingly effective at calibrating predictions.
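Temperature scaling itself is tiny: divide the logits by a scalar T before the softmax, which softens (T > 1) or sharpens (T < 1) the probabilities without changing the predicted class. A minimal sketch with made-up logits; in practice T is fit on a held-out validation set:

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Softmax over logits / T; T > 1 yields softer, less confident probabilities."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])              # hypothetical network outputs
p_raw = softmax_with_temperature(logits, T=1.0)  # often overconfident
p_cal = softmax_with_temperature(logits, T=2.0)  # softened by T > 1
```

Since dividing by a positive T is monotone, the argmax is unchanged: `p_raw` and `p_cal` rank the classes identically, and only the confidence shifts.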
Posted Content
Concrete Problems in AI Safety
TL;DR: A list of five practical research problems related to accident risk is presented, categorized according to whether the problem originates from having the wrong objective function, an objective function that is too expensive to evaluate frequently, or undesirable behavior during the learning process.