Open Access · Proceedings Article
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
Alex Kendall, Yarin Gal +1 more
Vol. 30, pp. 5580–5590
TLDR
In this paper, a Bayesian deep learning framework combining input-dependent aleatoric uncertainty with epistemic uncertainty is proposed for semantic segmentation and depth regression tasks; the resulting loss can be interpreted as learned attenuation.
Abstract:
There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model - uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, while also giving new state-of-the-art results on segmentation and depth regression benchmarks.
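The learned attenuation described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration (function names are mine, not from the paper): the regression loss weights each squared residual by a predicted inverse variance and adds a log-variance penalty so the network cannot declare everything uncertain, while epistemic uncertainty is estimated as the variance across stochastic (e.g. dropout) forward passes.

```python
import numpy as np

def heteroscedastic_regression_loss(y_true, y_pred, log_var):
    """Regression loss with learned aleatoric uncertainty (loss attenuation).

    L = 0.5 * exp(-s) * ||y - y_hat||^2 + 0.5 * s, with s = log sigma^2.
    Predicting s instead of sigma^2 keeps the loss numerically stable.
    """
    residual_sq = (y_true - y_pred) ** 2
    return np.mean(0.5 * np.exp(-log_var) * residual_sq + 0.5 * log_var)

def epistemic_variance(predictions):
    """Epistemic uncertainty as the variance over T stochastic forward passes.

    predictions: array of shape (T, N), one row per dropout sample.
    """
    return np.asarray(predictions).var(axis=0)
```

A point with a large residual but a large predicted log-variance contributes less to the loss than the same residual with log-variance zero, which is the robustness-to-noise effect the abstract describes.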
Citations
Posted Content
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision
Alex Kendall, Yarin Gal +1 more
TL;DR: A Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty is presented, which makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
Proceedings ArticleDOI
Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
TL;DR: In this article, the authors make the observation that the performance of multi-task learning is strongly dependent on the relative weighting between each task's loss, and propose a principled approach to weight multiple loss functions by considering the homoscedastic uncertainty of each task.
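The homoscedastic-uncertainty weighting summarized above can be sketched as follows. This is a simplified NumPy illustration under my own naming, not the authors' implementation: each task loss is scaled by a learned inverse variance, with a log-variance term that penalizes simply inflating every task's uncertainty.

```python
import numpy as np

def multitask_loss(task_losses, log_vars):
    """Weigh per-task losses by learned homoscedastic (task) uncertainty.

    total = sum_i 0.5 * exp(-s_i) * L_i + 0.5 * s_i, with s_i = log sigma_i^2.
    A task with high learned uncertainty gets a smaller effective weight.
    """
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return np.sum(0.5 * np.exp(-log_vars) * task_losses + 0.5 * log_vars)
```

In training, the `log_vars` would be free parameters optimized jointly with the network weights, so the relative task weighting is learned rather than hand-tuned.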
Proceedings ArticleDOI
Deep Ordinal Regression Network for Monocular Depth Estimation
TL;DR: Deep Ordinal Regression Network (DORN), as discussed by the authors, discretizes depth and recasts depth network learning as an ordinal regression problem, training the network with an ordinal regression loss to achieve much higher accuracy and faster convergence.
Journal ArticleDOI
A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges
Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U. Rajendra Acharya, Vladimir Makarenkov, Saeid Nahavandi +13 more
TL;DR: This study reviews recent advances in UQ methods used in deep learning, investigates the application of these methods in reinforcement learning (RL), and outlines a few important applications of UQ methods.
Posted Content
Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift
Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, Jasper Snoek +8 more
TL;DR: A large-scale benchmark of existing state-of-the-art methods on classification problems and the effect of dataset shift on accuracy and calibration is presented, finding that traditional post-hoc calibration does indeed fall short, as do several other previous methods.
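Calibration, as evaluated in this benchmark, is commonly measured with the expected calibration error (ECE). A minimal NumPy sketch (my own implementation, not the benchmark's code): predictions are grouped into equal-width confidence bins, and the gaps between per-bin accuracy and mean confidence are averaged, weighted by bin size.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: size-weighted average |accuracy - mean confidence| per bin.

    confidences: predicted probabilities of the chosen class, in (0, 1].
    correct: 1 if the prediction was right, else 0.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```

A perfectly calibrated model (75% confidence, right 75% of the time) scores 0; an overconfident one scores the average confidence-accuracy gap.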
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
Journal ArticleDOI
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
TL;DR: Quantitative assessments show that SegNet provides good performance with competitive inference time and is the most memory-efficient at inference compared to other architectures, including FCN and DeconvNet.
Journal ArticleDOI
Fully Convolutional Networks for Semantic Segmentation
TL;DR: Fully convolutional networks (FCN), as mentioned in this paper, were proposed to combine semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations.
Book ChapterDOI
Indoor segmentation and support inference from RGBD images
TL;DR: The goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships, to better understand how 3D cues can best inform a structured 3D interpretation.
Journal ArticleDOI
An introduction to variational methods for graphical models
TL;DR: This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields), and describes a general framework for generating variational transformations based on convex duality.