Open Access Proceedings Article

Dropout as a Bayesian approximation: representing model uncertainty in deep learning

TL;DR
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
Abstract
Deep learning tools have gained tremendous attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs, extracting information that existing models have so far thrown away. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on regression and classification tasks, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.
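The practical recipe that follows from this theory is often called Monte Carlo (MC) dropout: keep dropout active at test time and average several stochastic forward passes. As a minimal sketch (not the authors' code), assuming a PyTorch model that contains dropout layers:

import torch

def mc_dropout_predict(model, x, n_samples=100):
    # Leave the model in train mode so dropout stays active at test time
    # (batch-norm layers, if present, would need separate handling).
    model.train()
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    # The sample mean approximates the predictive mean; the sample variance
    # gives an estimate of the model's predictive uncertainty.
    return preds.mean(dim=0), preds.var(dim=0)

For regression, the paper additionally adds a term derived from the model precision to the sample variance to obtain the full predictive variance.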



Citations
Journal Article

Prediction of future capacity and internal resistance of Li-ion cells from one cycle of input data

TL;DR: This work provides a data-centric model that accurately predicts a cell's entire capacity and internal-resistance (IR) trajectory from a single cycle of input data, a significant reduction in the amount of input data needed compared to previous works.
Posted Content

Bayesian Neural Network Priors Revisited

TL;DR: In this paper, the authors study summary statistics of neural network weights in different networks trained using SGD and find that fully connected networks (FCNNs) display heavy-tailed weight distributions, while convolutional neural network (CNN) weights display strong spatial correlations.
Proceedings Article

Use of machine learning in the forecast of clinical consequences of cancer diseases

TL;DR: It is shown that existing machine learning systems vary in their effectiveness and offer automated training, assessment, and interpretation of deep learning models.
Proceedings Article

ApDeepSense: Deep Learning Uncertainty Estimation without the Pain for IoT Applications

TL;DR: ApDeepSense is an effective and efficient deep learning uncertainty estimation method for resource-constrained IoT devices that leverages an implicit Bayesian approximation that links neural networks to deep Gaussian processes, allowing output uncertainty to be quantified.
Journal Article

Linking Datasets Using Semantic Textual Similarity

TL;DR: This work presents an evaluation of some metrics that have performed well in recent semantic textual similarity evaluations and applies these to linking existing datasets.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
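As a rough illustration of the update rule described in that paper (a sketch using the paper's default hyperparameters, not the reference implementation):

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its elementwise square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the zero-initialized moments (t is the 1-based step).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Per-coordinate adaptive step.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v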
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for document recognition; multilayer networks trained by gradient descent can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
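The mechanism itself is simple; below is a minimal sketch of the common "inverted dropout" variant, which rescales at training time so the test-time network is unchanged (the original paper instead scales the weights by 1-p at test time; the two formulations are equivalent in expectation):

import numpy as np

def dropout_forward(x, p=0.5, training=True, rng=None):
    # Zero each unit independently with probability p during training and
    # rescale the survivors by 1/(1-p); at test time the input passes through.
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)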
Journal Article

Human-level control through deep reinforcement learning

TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
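The device that makes those gradients tractable is the reparameterization trick; a minimal PyTorch sketch (illustrative, not the paper's code):

import torch

def reparameterize(mu, log_var):
    # Draw z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I),
    # so the sample is differentiable with respect to mu and log_var.
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + std * eps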