Open Access Proceedings Article

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles

TLDR
The authors propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates that are as good as or better than those of approximate Bayesian NNs.
Abstract
Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.
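As a concrete illustration of the approach described in the abstract, the sketch below trains several independently initialized networks with a proper scoring rule (log loss), averages their softmax outputs, and uses the entropy of the averaged predictive distribution as an uncertainty signal. This is a minimal sketch assuming a small PyTorch MLP and synthetic data; the ensemble size, architecture, and training loop are illustrative choices, not the paper's exact configuration.

```python
# Minimal deep-ensemble sketch: independently initialized members trained with
# log loss, predictions averaged, entropy used as an uncertainty signal.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(in_dim=20, n_classes=3):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

def train_member(model, x, y, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)  # proper scoring rule (log loss)
        loss.backward()
        opt.step()
    return model

# Synthetic stand-in data; each member sees the same data, so diversity comes
# from the different random initializations.
x_train = torch.randn(512, 20)
y_train = torch.randint(0, 3, (512,))

ensemble = [train_member(make_model(), x_train, y_train) for _ in range(5)]

@torch.no_grad()
def predictive(x, members):
    # Average the per-member softmax distributions to get the ensemble predictive,
    # then compute its entropy as a scalar uncertainty estimate per input.
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in members]).mean(0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    return probs, entropy

# Entropy is typically higher on inputs far from the training distribution.
probs_in, H_in = predictive(torch.randn(8, 20), ensemble)
probs_ood, H_ood = predictive(10.0 * torch.randn(8, 20), ensemble)
```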



Citations
Posted Content

MixMatch: A Holistic Approach to Semi-Supervised Learning

TL;DR: MixMatch guesses low-entropy labels for unlabeled examples and mixes them with labeled data using MixUp, obtaining state-of-the-art semi-supervised results.
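For readers unfamiliar with the ingredients named in this TL;DR, the sketch below shows the MixUp interpolation and temperature-sharpening steps that MixMatch builds on. It is a minimal sketch in PyTorch; the function names and the Beta/temperature parameters are illustrative assumptions, not MixMatch's full pipeline.

```python
# Minimal sketch of MixUp and label sharpening, two building blocks of MixMatch.
import torch

def mixup(x1, y1, x2, y2, alpha=0.75):
    # Sample a mixing coefficient from Beta(alpha, alpha); taking max(lam, 1 - lam)
    # keeps the mixed example closer to the first input, as MixMatch does.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    lam = torch.max(lam, 1 - lam)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2  # y1, y2 are soft (or one-hot) label vectors
    return x, y

def sharpen(p, T=0.5):
    # Temperature sharpening applied to guessed labels on unlabeled examples,
    # pushing the distribution toward low entropy.
    p = p ** (1.0 / T)
    return p / p.sum(dim=-1, keepdim=True)
```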
Posted Content

Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift

TL;DR: Presents a large-scale benchmark of existing state-of-the-art methods on classification problems, evaluating the effect of dataset shift on accuracy and calibration, and finds that traditional post-hoc calibration falls short, as do several other previous methods.
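Benchmarks of this kind typically report a calibration metric alongside accuracy. The sketch below computes expected calibration error (ECE), a standard such metric, from arrays of predicted confidences and correctness indicators; the 15-bin setup and function name are illustrative assumptions rather than the benchmark's exact implementation.

```python
# Minimal sketch of expected calibration error (ECE) over equal-width confidence bins.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    # confidences: predicted probability of the predicted class, shape (N,)
    # correct: boolean array, True where the prediction matched the label
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its fraction of examples
    return ece

# Comparing ECE on in-distribution vs. shifted test sets shows how calibration
# degrades under dataset shift.
```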