scispace - formally typeset
Topic

Uncertainty analysis

About: Uncertainty analysis is a research topic. Over the lifetime, 11876 publications have been published within this topic receiving 290036 citations.


Papers
Journal ArticleDOI
TL;DR: This paper covers a method for describing the uncertainties in an engineering experiment, the necessary background material, and a technique for numerically executing uncertainty analyses when computerized data interpretation is involved.

6,868 citations
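The numerical technique the TL;DR alludes to can be illustrated by the well-known "sequential perturbation" scheme: perturb each measured input by its uncertainty, re-run the data-reduction program, and combine the individual contributions in quadrature. The data-reduction equation and all numbers below are hypothetical, chosen only to make the sketch concrete.

```python
import numpy as np

# Hypothetical data-reduction equation: power dissipated in a resistor.
def result(V, R):
    return V**2 / R   # P = V^2 / R

inputs = {"V": 12.0, "R": 100.0}   # measured values (illustrative)
uncert = {"V": 0.5, "R": 5.0}      # their estimated uncertainties

base = result(**inputs)
contrib = {}
for name in inputs:
    # Perturb one input at a time and re-run the computation.
    perturbed = dict(inputs)
    perturbed[name] += uncert[name]
    contrib[name] = result(**perturbed) - base   # effect of this input alone

# Root-sum-square combination of the individual contributions.
u_total = np.sqrt(sum(c**2 for c in contrib.values()))
print(f"P = {base:.3f} +/- {u_total:.3f} W")
```

Because the model is re-run rather than differentiated by hand, the same loop works unchanged for any data-reduction program, which is the practical appeal of the approach.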

Journal ArticleDOI
TL;DR: Presents a Bayesian calibration technique that improves on the traditional ad hoc approach in two respects: its predictions account for all sources of uncertainty, and it corrects for any inadequacy of the model revealed by a discrepancy between the observed data and the model predictions even at the best-fitting parameter values.
Abstract: We consider prediction and uncertainty analysis for systems which are approximated using complex mathematical models. Such models, implemented as computer codes, are often generic in the sense that by a suitable choice of some of the model's input parameters the code can be used to predict the behaviour of the system in a variety of specific applications. However, in any specific application the values of necessary parameters may be unknown. In this case, physical observations of the system in the specific context are used to learn about the unknown parameters. The process of fitting the model to the observed data by adjusting the parameters is known as calibration. Calibration is typically effected by ad hoc fitting, and after calibration the model is used, with the fitted input values, to predict the future behaviour of the system. We present a Bayesian calibration technique which improves on this traditional approach in two respects. First, the predictions allow for all sources of uncertainty, including the remaining uncertainty over the fitted parameters. Second, they attempt to correct for any inadequacy of the model which is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values. The method is illustrated by using data from a nuclear radiation release at Tomsk, and from a more complex simulated nuclear accident exercise.

3,745 citations
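The core idea of the abstract can be sketched numerically: when fitting a simulator to data, inflate the likelihood with a variance term for model discrepancy rather than attributing all misfit to measurement noise, so the posterior over the calibration parameter stays honestly wide. Everything below (the linear simulator, the sinusoidal discrepancy, all variances) is a hypothetical toy setup, not the paper's Tomsk example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulator with one calibration parameter theta.
def simulator(x, theta):
    return theta * x

# Synthetic observations: simulator output plus a systematic model
# discrepancy the simulator cannot capture, plus measurement noise.
x_obs = np.linspace(0.0, 1.0, 20)
true_theta = 2.0
discrepancy = 0.3 * np.sin(4.0 * x_obs)
y_obs = simulator(x_obs, true_theta) + discrepancy + rng.normal(0, 0.05, x_obs.size)

# Grid-based posterior over theta. The total variance combines the
# measurement-noise variance with an assumed discrepancy variance.
thetas = np.linspace(1.0, 3.0, 401)
sigma_noise2, sigma_disc2 = 0.05**2, 0.2**2
var_total = sigma_noise2 + sigma_disc2

resid = y_obs[None, :] - thetas[:, None] * x_obs[None, :]
log_lik = -0.5 * np.sum(resid**2, axis=1) / var_total
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

theta_mean = np.sum(thetas * post)
theta_sd = np.sqrt(np.sum((thetas - theta_mean)**2 * post))
print(f"posterior theta: {theta_mean:.2f} +/- {theta_sd:.2f}")
```

The paper goes much further (it models the discrepancy as a Gaussian process and learns it jointly with theta); this sketch only shows why acknowledging discrepancy widens the posterior compared with a pure least-squares fit.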

Book
01 Jan 1990
TL;DR: In this book, the authors present the foundations of quantitative policy analysis under uncertainty, covering probability assessment, the propagation and analysis of uncertainty, and Analytica, a software tool for uncertainty analysis.
Abstract: Preface 1. Introduction 2. Recent milestones 3. An overview of quantitative policy analysis 4. The nature and sources of uncertainty 5. Probability distributions and statistical estimation 6. Human judgement about and with uncertainty 7. Performing probability assessment 8. The propagation and analysis of uncertainty 9. The graphic communication of uncertainty 10. Analytica: a software tool for uncertainty analysis 11. Large and complex models 12. The value of knowing how little you know Index.

2,666 citations
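Chapter 8's topic, the propagation of uncertainty, is most simply done by Monte Carlo: sample the uncertain inputs, push every sample through the model, and summarise the output distribution. The mass-spring model and all input distributions below are illustrative assumptions, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uncertain inputs for a mass-spring oscillator.
n = 100_000
m = rng.normal(0.5, 0.01, n)     # mass in kg
k = rng.normal(200.0, 5.0, n)    # spring constant in N/m

# Propagate every sample through the model: period T = 2*pi*sqrt(m/k).
T = 2.0 * np.pi * np.sqrt(m / k)

# Summarise the induced output distribution.
mean, sd = T.mean(), T.std()
lo, hi = np.percentile(T, [2.5, 97.5])
print(f"T = {mean:.4f} s, sd = {sd:.4f}, 95% interval [{lo:.4f}, {hi:.4f}]")
```

Unlike analytic (first-order Taylor) propagation, the Monte Carlo approach captures nonlinearity and yields the full output distribution, not just a standard deviation.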

Book
01 Apr 2004
TL;DR: In this book, the authors present practical methods for global sensitivity analysis, including the method of Morris, variance-based methods, and Monte Carlo filtering, illustrated on test cases ranging from a fish population model to Bayesian uncertainty estimation.
Abstract: PREFACE.
1. A WORKED EXAMPLE. 1.1 A simple model. 1.2 Modulus version of the simple model. 1.3 Six-factor version of the simple model. 1.4 The simple model 'by groups'. 1.5 The (less) simple correlated-input model. 1.6 Conclusions.
2. GLOBAL SENSITIVITY ANALYSIS FOR IMPORTANCE ASSESSMENT. 2.1 Examples at a glance. 2.2 What is sensitivity analysis? 2.3 Properties of an ideal sensitivity analysis method. 2.4 Defensible settings for sensitivity analysis. 2.5 Caveats.
3. TEST CASES. 3.1 The jumping man. Applying variance-based methods. 3.2 Handling the risk of a financial portfolio: the problem of hedging. Applying Monte Carlo filtering and variance-based methods. 3.3 A model of fish population dynamics. Applying the method of Morris. 3.4 The Level E model. Radionuclide migration in the geosphere. Applying variance-based methods and Monte Carlo filtering. 3.5 Two spheres. Applying variance-based methods in estimation/calibration problems. 3.6 A chemical experiment. Applying variance-based methods in estimation/calibration problems. 3.7 An analytical example. Applying the method of Morris.
4. THE SCREENING EXERCISE. 4.1 Introduction. 4.2 The method of Morris. 4.3 Implementing the method. 4.4 Putting the method to work: an analytical example. 4.5 Putting the method to work: sensitivity analysis of a fish population model. 4.6 Conclusions.
5. METHODS BASED ON DECOMPOSING THE VARIANCE OF THE OUTPUT. 5.1 The settings. 5.2 Factors Prioritisation Setting. 5.3 First-order effects and interactions. 5.4 Application of Si to Setting 'Factors Prioritisation'. 5.5 More on variance decompositions. 5.6 Factors Fixing (FF) Setting. 5.7 Variance Cutting (VC) Setting. 5.8 Properties of the variance-based methods. 5.9 How to compute the sensitivity indices: the case of orthogonal input. 5.9.1 A digression on the Fourier Amplitude Sensitivity Test (FAST). 5.10 How to compute the sensitivity indices: the case of non-orthogonal input. 5.11 Putting the method to work: the Level E model. 5.11.1 Case of orthogonal input factors. 5.11.2 Case of correlated input factors. 5.12 Putting the method to work: the bungee jumping model. 5.13 Caveats.
6. SENSITIVITY ANALYSIS IN DIAGNOSTIC MODELLING: MONTE CARLO FILTERING AND REGIONALISED SENSITIVITY ANALYSIS, BAYESIAN UNCERTAINTY ESTIMATION AND GLOBAL SENSITIVITY ANALYSIS. 6.1 Model calibration and Factors Mapping Setting. 6.2 Monte Carlo filtering and regionalised sensitivity analysis. 6.2.1 Caveats. 6.3 Putting MC filtering and RSA to work: the problem of hedging a financial portfolio. 6.4 Putting MC filtering and RSA to work: the Level E test case. 6.5 Bayesian uncertainty estimation and global sensitivity analysis. 6.5.1 Bayesian uncertainty estimation. 6.5.2 The GLUE case. 6.5.3 Using global sensitivity analysis in the Bayesian uncertainty estimation. 6.5.4 Implementation of the method. 6.6 Putting Bayesian analysis and global SA to work: two spheres. 6.7 Putting Bayesian analysis and global SA to work: a chemical experiment. 6.7.1 Bayesian uncertainty analysis (GLUE case). 6.7.2 Global sensitivity analysis. 6.7.3 Correlation analysis. 6.7.4 Further analysis by varying temperature in the data set: fewer interactions in the model. 6.8 Caveats.
7. HOW TO USE SIMLAB. 7.1 Introduction. 7.2 How to obtain and install SIMLAB. 7.3 SIMLAB main panel. 7.4 Sample generation. 7.4.1 FAST. 7.4.2 Fixed sampling. 7.4.3 Latin hypercube sampling (LHS). 7.4.4 The method of Morris. 7.4.5 Quasi-Random LpTau. 7.4.6 Random. 7.4.7 Replicated Latin Hypercube (r-LHS). 7.4.8 The method of Sobol'. 7.4.9 How to induce dependencies in the input factors. 7.5 How to execute models. 7.6 Sensitivity analysis.
8. FAMOUS QUOTES: SENSITIVITY ANALYSIS IN THE SCIENTIFIC DISCOURSE.
REFERENCES. INDEX.

2,297 citations
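The variance-based methods of chapter 5 can be sketched with the pick-freeze (Saltelli) scheme for first-order sensitivity indices: generate two independent sample matrices, then swap one column at a time to isolate each factor's contribution to the output variance. The three-factor linear test function below is a hypothetical example, not one of the book's test cases.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical test function with three independent U(0,1) inputs;
# x1 dominates the output variance and x3 is inert.
def model(x):
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.0 * x[:, 2]

n, k = 50_000, 3
A = rng.random((n, k))   # first independent sample matrix
B = rng.random((n, k))   # second independent sample matrix

yA = model(A)
yB = model(B)
var_y = np.var(np.concatenate([yA, yB]))

# First-order indices: AB_i equals A except column i is taken from B.
S = np.empty(k)
for i in range(k):
    AB = A.copy()
    AB[:, i] = B[:, i]
    yAB = model(AB)
    # Saltelli-style estimator of the first-order effect of factor i.
    S[i] = np.mean(yB * (yAB - yA)) / var_y

print(np.round(S, 2))
```

For this additive model the indices should sum to roughly one (first-order effects are the whole story); interactions would show up as a gap between the sum of first-order indices and one.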

Posted Content
TL;DR: In this article, a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes was developed, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
Abstract: Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.

2,261 citations
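The abstract's practical recipe, often called Monte Carlo dropout, is simple to sketch: keep dropout active at prediction time and average many stochastic forward passes, using the spread of the passes as an uncertainty estimate. The tiny untrained network below is a hypothetical illustration of the mechanics only; it is not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# A minimal one-hidden-layer network with random (untrained) weights.
W1 = rng.normal(0, 1, (1, 32))
b1 = np.zeros(32)
W2 = rng.normal(0, 1, (32, 1))

def forward(x, p_drop=0.5):
    h = np.maximum(0.0, x @ W1 + b1)        # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop     # dropout stays ON at test time
    h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return h @ W2

# T stochastic forward passes for one input; their mean approximates the
# predictive mean and their spread approximates the model uncertainty.
x = np.array([[0.5]])
T = 1000
samples = np.array([forward(x)[0, 0] for _ in range(T)])

print(f"mean = {samples.mean():.2f}, sd = {samples.std():.2f}")
```

The same loop applies to a trained network in any framework: the only change from standard inference is that the dropout masks are resampled on every pass instead of being disabled.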


Network Information
Related Topics (5)
Monte Carlo method
95.9K papers, 2.1M citations
75% related
Heat transfer
181.7K papers, 2.9M citations
73% related
Finite element method
178.6K papers, 3M citations
73% related
Software
130.5K papers, 2M citations
72% related
Artificial neural network
207K papers, 4.5M citations
72% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2023	137
2022	299
2021	436
2020	440
2019	505
2018	537