Open Access · Proceedings Article
Dropout as a Bayesian approximation: representing model uncertainty in deep learning
Yarin Gal, Zoubin Ghahramani
pp. 1050–1059
TL;DR: A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
Abstract:
Deep learning tools have gained tremendous attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs, extracting information from existing models that has so far been thrown away. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.
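The practical recipe the abstract describes, often called Monte Carlo dropout, amounts to keeping dropout active at test time, running several stochastic forward passes, and using the sample mean as the prediction and the sample spread as an uncertainty estimate. A minimal NumPy sketch follows; the toy one-hidden-layer network, its fixed random weights, and the dropout rate are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer regression network with fixed random weights.
W1 = rng.normal(size=(1, 50))
b1 = np.zeros(50)
W2 = rng.normal(size=(50, 1))

def mc_dropout_predict(x, p=0.5, T=100):
    """Run T stochastic forward passes with dropout kept ON at test time.

    Returns the predictive mean and standard deviation over the T samples;
    the standard deviation serves as a simple uncertainty estimate.
    """
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1 + b1, 0.0)             # ReLU hidden layer
        mask = rng.binomial(1, 1 - p, size=h.shape)  # Bernoulli dropout mask
        h = h * mask / (1 - p)                       # inverted-dropout scaling
        preds.append(h @ W2)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

mean, std = mc_dropout_predict(np.array([[0.5]]))
```

In a trained network the same loop applies unchanged: only the forward pass is repeated, so the extra cost over a single deterministic prediction is a factor of T, with no retraining required.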
Citations
Journal Article
Transient-optimized real-bogus classification with Bayesian convolutional neural networks - sifting the GOTO candidate stream
T. Killestein, J. D. Lyman, Danny Steeghs, Kendall Ackley, Martin J. Dyer, Krzysztof Ulaczyk, R. Cutter, Y. L. Mong, Duncan K. Galloway, V. S. Dhillon, P. T. O'Brien, G. Ramsay, Saran Poshyachinda, Rubina Kotak, Rene P. Breton, L. K. Nuttall, Enric Palle, Don Pollacco, Eric Thrane, S. Aukkaravittayapun, Supachai Awiphan, U. Burhanudin, Paul Chote, A. Chrimes, E. J. Daw, Christopher J. Duffy, R. Eyles-Ferris, Benjamin P. Gompertz, T. Heikkilä, Puji Irawati, Mark Kennedy, Andrew J. Levan, S. P. Littlefair, L. Makrygianni, D. Mata Sánchez, Seppo Mattila, Justyn R. Maund, James McCormac, David Mkrtichian, James Mullaney, E. Rol, Utane Sawangwit, Elizabeth R. Stanway, R. L. C. Starling, P. A. Strøm, S. Tooke, Klaas Wiersema, S. Williams, et al.
TL;DR: This paper presents a new real-bogus classifier based on a Bayesian convolutional neural network that provides nuanced, uncertainty-aware classification of transient candidates in difference imaging, and demonstrates its application to the datastream from the GOTO wide-field optical survey.
Posted Content
Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?
TL;DR: This work presents the first study of certifiable robustness for DBU models, and proposes novel uncertainty attacks that fool models into assigning high confidence to OOD data and low confidence to ID data, respectively.
Posted Content
Noisy Machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation
TL;DR: This paper outlines how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output, and proposes using knowledge distillation combined with noise injection during training to achieve more noise robust networks.
Posted Content
Probabilistic Safety for Bayesian Neural Networks
TL;DR: In this article, the authors study probabilistic safety of Bayesian Neural Networks (BNNs) under adversarial input perturbations and derive explicit procedures for computing a lower bound on the probability of BNNs being vulnerable to adversarial attacks.
Posted Content
Inhibited Softmax for Uncertainty Estimation in Neural Networks
TL;DR: A new method for uncertainty estimation and out-of-distribution detection in neural networks with softmax output is presented, which performs comparably to more computationally expensive methods and outperforms baselines on experiments from image recognition and sentiment analysis domains.
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Journal Article
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; it can synthesize a complex decision surface capable of classifying high-dimensional patterns such as handwritten characters.
Journal Article
Dropout: a simple way to prevent neural networks from overfitting
TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Journal Article
Human-level control through deep reinforcement learning
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Proceedings Article
Auto-Encoding Variational Bayes
Diederik P. Kingma, Max Welling
TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.