Topic

Uncertainty quantification

About: Uncertainty quantification is a research topic. Over its lifetime, 8,599 publications have been published within this topic, receiving 132,551 citations. The topic is also known as UQ.


Papers
Posted Content
TL;DR: A Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty is presented, which makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
Abstract: There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model -- uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.

2,616 citations
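The learned-attenuation loss described in this abstract can be made concrete with a short sketch. The snippet below is an illustration under assumptions, not the authors' released code: the class and function names (HeteroscedasticRegressor, attenuated_loss) are invented, and the toy network simply predicts a mean and a log-variance per output so that residuals on noisy targets are down-weighted when the predicted variance is large.

```python
# Hedged sketch of a heteroscedastic (aleatoric) regression loss with learned
# attenuation, in the spirit of the abstract above. Names and architecture are
# illustrative, not the paper's implementation.
import torch
import torch.nn as nn


class HeteroscedasticRegressor(nn.Module):
    """Toy regressor that predicts a mean and a log-variance for each input."""

    def __init__(self, in_dim: int = 10, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.log_var_head = nn.Linear(hidden, 1)  # predict log sigma^2 for numerical stability

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.log_var_head(h)


def attenuated_loss(mean, log_var, target):
    # L = 0.5 * exp(-s) * (y - f(x))^2 + 0.5 * s, with s = log sigma^2.
    # A large predicted variance attenuates the squared residual, making the
    # loss more robust to noisy labels, at the cost of the log-variance penalty.
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()


if __name__ == "__main__":
    model = HeteroscedasticRegressor()
    x = torch.randn(32, 10)
    y = torch.randn(32, 1)
    mean, log_var = model(x)
    loss = attenuated_loss(mean, log_var, y)
    loss.backward()
    print(float(loss))
```

Epistemic uncertainty, the other component of the framework, would be estimated separately (for example by sampling stochastic forward passes of a Bayesian or dropout-regularized network); it is not shown in this sketch.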

Journal ArticleDOI
TL;DR: This work develops methods for applying existing analytical tools to a variety of mathematical and computer models, provides a complete methodology for performing these analyses in both deterministic and stochastic settings, and proposes novel techniques for handling problems encountered during such analyses.

2,014 citations
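As a rough illustration of the kind of sampling-based workflow such a methodology covers, one can draw Latin hypercube samples over the uncertain inputs, propagate them through the model, and rank inputs by rank correlation with the output. The sketch below is generic, not necessarily the cited paper's exact procedure; the model, the parameter names k1, k2, k3, and their ranges are invented for illustration.

```python
# Generic sketch of sampling-based uncertainty and sensitivity analysis:
# Latin hypercube sampling of uncertain inputs, model evaluation, and a
# simple rank-correlation sensitivity ranking.
import numpy as np
from scipy.stats import qmc, spearmanr


def model(k1, k2, k3):
    """Illustrative deterministic model; replace with the simulator of interest."""
    return k1 * np.exp(-k2) + 0.1 * k3**2


# Uncertain input ranges (assumed for illustration).
bounds_low = np.array([0.5, 0.1, 0.0])
bounds_high = np.array([1.5, 1.0, 2.0])

sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=500)
samples = qmc.scale(unit_samples, bounds_low, bounds_high)

outputs = model(samples[:, 0], samples[:, 1], samples[:, 2])

# Output uncertainty summary and a rank-based sensitivity ranking.
print("output mean/std:", outputs.mean(), outputs.std())
for i, name in enumerate(["k1", "k2", "k3"]):
    rho, _ = spearmanr(samples[:, i], outputs)
    print(f"{name}: Spearman rho = {rho:+.2f}")
```

For stochastic models, the same loop is typically repeated with replicate runs per sampled input so that output variability from intrinsic noise can be separated from variability induced by parameter uncertainty.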

Proceedings Article
04 Dec 2017
TL;DR: In this paper, a Bayesian deep learning framework combining input-dependent aleatoric uncertainty with epistemic uncertainty was proposed for semantic segmentation and depth regression tasks; the resulting loss functions can be interpreted as learned attenuation.

1,263 citations

Journal ArticleDOI
TL;DR: This survey introduces the application, implementation, and underlying principles of sensitivity and uncertainty quantification in predictive modeling.
Abstract: Predictive modeling's effectiveness is hindered by inherent uncertainties in the input parameters. Sensitivity and uncertainty analysis quantify these uncertainties and identify the relationships between input and output variations, leading to the construction of a more accurate model. This survey introduces the application, implementation, and underlying principles of sensitivity and uncertainty quantification.

988 citations

Journal ArticleDOI
TL;DR: This work focuses on combining observations from field experiments with detailed computer simulations of a physical process to carry out statistical inference, and makes use of basis representations to reduce the dimensionality of the problem and speed up the computations required for exploring the posterior distribution.
Abstract: This work focuses on combining observations from field experiments with detailed computer simulations of a physical process to carry out statistical inference. Of particular interest here is determining uncertainty in resulting predictions. This typically involves calibration of parameters in the computer simulator as well as accounting for inadequate physics in the simulator. The problem is complicated by the fact that simulation code is sufficiently demanding that only a limited number of simulations can be carried out. We consider applications in characterizing material properties for which the field data and the simulator output are highly multivariate. For example, the experimental data and simulation output may be an image or may describe the shape of a physical object. We make use of the basic framework of Kennedy and O'Hagan. However, the size and multivariate nature of the data lead to computational challenges in implementing the framework. To overcome these challenges, we make use of basis representations to reduce the dimensionality of the problem and speed up the computations required for exploring the posterior distribution.

838 citations
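The basis-representation idea in this abstract can be sketched as follows: compress the multivariate simulator output onto a low-dimensional basis, emulate each basis weight with a Gaussian process over the input parameters, and map emulated weights back to the original output space. The code below is a simplified illustration with made-up toy data; PCA and one GP per weight are assumed as one common realization, and the calibration and model-discrepancy terms of the full Kennedy and O'Hagan framework are omitted.

```python
# Hedged sketch: PCA basis reduction of multivariate simulator output plus
# Gaussian-process emulation of the basis weights. Toy data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy "simulations": 40 runs, each producing a 200-dimensional output curve.
theta = rng.uniform(0.0, 1.0, size=(40, 2))      # simulator input parameters
grid = np.linspace(0, 1, 200)
sims = np.array([np.sin(2 * np.pi * (grid + t[0])) * (1 + t[1]) for t in theta])

# Project the simulator output onto a small PCA basis.
pca = PCA(n_components=3)
weights = pca.fit_transform(sims)                 # (40, 3) basis weights

# One GP emulator per basis weight, as a function of the inputs.
gps = [
    GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
    .fit(theta, weights[:, j])
    for j in range(weights.shape[1])
]

# Emulate at a new input and map back to the full output space.
theta_new = np.array([[0.25, 0.5]])
w_mean = np.array([gp.predict(theta_new)[0] for gp in gps])
prediction = pca.inverse_transform(w_mean.reshape(1, -1))
print(prediction.shape)   # (1, 200): emulated curve at the new input
```

Working with a handful of basis weights instead of the full 200-dimensional output is what keeps posterior exploration affordable when only a limited number of simulator runs are available.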


Network Information
Related Topics (5)
Monte Carlo method: 95.9K papers, 2.1M citations, 79% related
Robustness (computer science): 94.7K papers, 1.6M citations, 79% related
Boundary value problem: 145.3K papers, 2.7M citations, 78% related
Nonlinear system: 208.1K papers, 4M citations, 78% related
Finite element method: 178.6K papers, 3M citations, 78% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    452
2022    824
2021    1,221
2020    1,109
2019    996
2018    857