
Showing papers on "Uncertainty quantification" published in 2011


Journal ArticleDOI
TL;DR: An overview of a comprehensive framework is given for estimating the predictive uncertainty of scientific computing applications; the framework treats both aleatory and epistemic uncertainty, incorporates uncertainty due to the mathematical form of the model, and provides a procedure for including estimates of numerical error in the predictive uncertainty.

649 citations


Journal ArticleDOI
TL;DR: This work enhances the NWP model with an ensemble-based uncertainty quantification strategy implemented in a distributed-memory parallel computing architecture and validates the model using real wind-speed data obtained from a set of meteorological stations.
Abstract: We present a computational framework for integrating a state-of-the-art numerical weather prediction (NWP) model in stochastic unit commitment/economic dispatch formulations that account for wind power uncertainty. We first enhance the NWP model with an ensemble-based uncertainty quantification strategy implemented in a distributed-memory parallel computing architecture. We discuss computational issues arising in the implementation of the framework and validate the model using real wind-speed data obtained from a set of meteorological stations. We build a simulated power system to demonstrate the developments.

272 citations


Journal ArticleDOI
TL;DR: Bayesian uncertainty quantification techniques are applied to the analysis of the Spalart–Allmaras turbulence model in the context of incompressible, boundary layer flows and it is shown that by using both the model plausibility and predicted QoI, one has the opportunity to reject some model classes after calibration, before subjecting the remaining classes to additional validation challenges.

242 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the number of forward PDE solves required for an accurate low-rank approximation is independent of the problem dimension, which permits scalable estimation of the uncertainty in large-scale ill-posed linear inverse problems at a small multiple of the cost of solving the forward problem.
Abstract: We consider the problem of estimating the uncertainty in large-scale linear statistical inverse problems with high-dimensional parameter spaces within the framework of Bayesian inference. When the noise and prior probability densities are Gaussian, the solution to the inverse problem is also Gaussian and is thus characterized by the mean and covariance matrix of the posterior probability density. Unfortunately, explicitly computing the posterior covariance matrix requires as many forward solutions as there are parameters and is thus prohibitive when the forward problem is expensive and the parameter dimension is large. However, for many ill-posed inverse problems, the Hessian matrix of the data misfit term has a spectrum that collapses rapidly to zero. We present a fast method for computation of an approximation to the posterior covariance that exploits the low-rank structure of the preconditioned (by the prior covariance) Hessian of the data misfit. Analysis of an infinite-dimensional model convection-diffusion problem, and numerical experiments on large-scale three-dimensional convection-diffusion inverse problems with up to 1.5 million parameters, demonstrate that the number of forward PDE solves required for an accurate low-rank approximation is independent of the problem dimension. This permits scalable estimation of the uncertainty in large-scale ill-posed linear inverse problems at a small multiple (independent of the problem dimension) of the cost of solving the forward problem.

189 citations
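The low-rank posterior-covariance construction described in this entry can be illustrated with a short sketch. This is not the authors' code: it assumes a small dense linear forward map, Gaussian noise and prior, and a direct eigendecomposition in place of the matrix-free Lanczos or randomized methods needed at the scales the paper targets.

```python
import numpy as np

# Minimal sketch, not the paper's implementation: a linear inverse problem
# d = A m + noise with Gaussian noise N(0, sigma^2 I) and prior N(0, C_prior).
rng = np.random.default_rng(0)
n_param, n_data, sigma = 200, 50, 0.05
A = rng.standard_normal((n_data, n_param)) / np.sqrt(n_param)   # assumed forward map
C_prior = np.diag(1.0 / (1.0 + np.arange(n_param)) ** 2)        # assumed smoothing prior

# Prior-preconditioned data-misfit Hessian: H = L^T A^T A L / sigma^2, C_prior = L L^T.
Lp = np.linalg.cholesky(C_prior)
H = Lp.T @ A.T @ A @ Lp / sigma**2

# The spectrum of H collapses rapidly for ill-posed problems; keep the r dominant
# eigenpairs (at scale this would be done matrix-free).
r = 20
w, V = np.linalg.eigh(H)
w, V = w[::-1][:r], V[:, ::-1][:, :r]

# Low-rank update: C_post ~= C_prior - L V diag(w / (1 + w)) V^T L^T.
C_post = C_prior - Lp @ V @ np.diag(w / (1.0 + w)) @ V.T @ Lp.T

# Exact posterior covariance for comparison (feasible only at this toy size).
C_exact = np.linalg.inv(A.T @ A / sigma**2 + np.linalg.inv(C_prior))
print("relative error:", np.linalg.norm(C_post - C_exact) / np.linalg.norm(C_exact))
```

The update keeps only the directions in which the data are informative relative to the prior, which is why the required rank tracks the information content of the data rather than the parameter dimension.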


Journal ArticleDOI
TL;DR: In this article, the authors present a methodology for uncertainty quantification and model validation in fatigue crack growth analysis, in which several models are connected through a Bayesian network that aids in model calibration and validation.

181 citations


Journal ArticleDOI
TL;DR: This work proposes the Method of Uncertainty Minimization using Polynomial Chaos Expansions (MUM-PCE) to quantify and constrain these uncertainties in the rate parameters of a well-characterized, detailed chemical model.

177 citations


Journal ArticleDOI
TL;DR: New algorithmic capabilities for mixed UQ are demonstrated in which the analysis procedures are tailored more closely to the requirements of aleatory and epistemic propagation, combining stochastic expansions for computing statistics with interval optimization for computing bounds.

176 citations
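A mixed aleatory/epistemic analysis of the kind summarized above is often organized as a nested loop: an outer optimization over interval-valued epistemic variables bounds a statistic computed over the aleatory variables in an inner loop. The sketch below is a hedged illustration with a made-up response function, plain Monte Carlo, and scipy's bounded scalar optimizer, rather than the stochastic expansions and interval optimization machinery of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hedged sketch: outer interval optimization over an epistemic variable theta,
# inner Monte Carlo over an aleatory variable x ~ N(0, 1).
rng = np.random.default_rng(1)
x_samples = rng.standard_normal(20000)      # fixed aleatory samples (common random numbers)

def response(x, theta):
    """Stand-in for an expensive simulation response (assumption)."""
    return np.sin(theta) + 0.5 * x**2 + theta * x

def mean_response(theta):
    """Inner loop: statistic of interest over the aleatory variable."""
    return response(x_samples, theta).mean()

# Outer loop: epistemic variable known only to lie in the interval [0.0, 1.5].
lo = minimize_scalar(mean_response, bounds=(0.0, 1.5), method="bounded")
hi = minimize_scalar(lambda t: -mean_response(t), bounds=(0.0, 1.5), method="bounded")
print(f"mean response bounded in [{lo.fun:.3f}, {-hi.fun:.3f}]")
```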


Journal ArticleDOI
TL;DR: In this paper, a computational strategy is proposed for robust structural topology optimization in the presence of uncertainties with known second order statistics, which combines deterministic topology optimization techniques with a perturbation method for the quantification of uncertainties associated with structural stiffness, such as uncertain material properties and/or structure geometry.

172 citations


Journal ArticleDOI
TL;DR: This work develops a general set of tools to evaluate the sensitivity of output parameters to input uncertainties in cardiovascular simulations and develops an adaptive collocation algorithm for Gauss-Lobatto-Chebyshev grid points that significantly reduces computational cost.
Abstract: Simulations of blood flow in both healthy and diseased vascular models can be used to compute a range of hemodynamic parameters including velocities, time varying wall shear stress, pressure drops, and energy losses. The confidence in the data output from cardiovascular simulations depends directly on our level of certainty in simulation input parameters. In this work, we develop a general set of tools to evaluate the sensitivity of output parameters to input uncertainties in cardiovascular simulations. Uncertainties can arise from boundary conditions, geometrical parameters, or clinical data. These uncertainties result in a range of possible outputs which are quantified using probability density functions (PDFs). The objective is to systematically model the input uncertainties and quantify the confidence in the output of hemodynamic simulations. Input uncertainties are quantified and mapped to the stochastic space using the stochastic collocation technique. We develop an adaptive collocation algorithm for Gauss-Lobatto-Chebyshev grid points that significantly reduces computational cost. This analysis is performed on two idealized problems (an abdominal aortic aneurysm and a carotid artery bifurcation) and one patient-specific problem (a Fontan procedure for congenital heart defects). In each case, relevant hemodynamic features are extracted and their uncertainty is quantified. Uncertainty quantification of the hemodynamic simulations is done using (a) stochastic space representations, (b) PDFs, and (c) the confidence intervals for a specified level of confidence in each problem.

156 citations
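The stochastic collocation idea used in this study can be sketched in one dimension: run the expensive model only at Gauss-Lobatto-Chebyshev collocation points, build a polynomial surrogate through those runs, and estimate output statistics from the cheap surrogate. The model, input range, and distribution below are placeholders, and the adaptivity of the paper's algorithm is omitted.

```python
import numpy as np

# Minimal 1-D stochastic collocation sketch (not the paper's solver): an uncertain
# input q, uniform on [qlo, qhi], feeds a stand-in "hemodynamic" model.
def model(q):
    """Placeholder for an expensive CFD evaluation (assumption)."""
    return np.exp(-q) * np.sin(3.0 * q) + 0.1 * q**2

qlo, qhi = 0.5, 2.0
n = 9                                    # number of collocation points
# Gauss-Lobatto-Chebyshev nodes on [-1, 1], mapped to the input range.
xi = np.cos(np.pi * np.arange(n) / (n - 1))
q_nodes = 0.5 * (qhi - qlo) * (xi + 1.0) + qlo
f_nodes = model(q_nodes)                 # n expensive model runs

# Polynomial surrogate through the collocation points (Chebyshev basis).
coef = np.polynomial.chebyshev.chebfit(xi, f_nodes, deg=n - 1)

# Output statistics from cheap surrogate evaluations.
rng = np.random.default_rng(2)
q_mc = rng.uniform(qlo, qhi, 100000)
xi_mc = 2.0 * (q_mc - qlo) / (qhi - qlo) - 1.0
y = np.polynomial.chebyshev.chebval(xi_mc, coef)
print("surrogate mean/std:", y.mean(), y.std())
print("reference mean/std:", model(q_mc).mean(), model(q_mc).std())
```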


Journal ArticleDOI
TL;DR: The approach is extended to non-parametric PDFs, wherein the entire distribution can be discretized at a finite number of points and the probability density values at these points can be inferred using the principle of maximum likelihood, thus avoiding the assumption of any particular distribution.

142 citations


Journal ArticleDOI
TL;DR: A new adaptive delayed‐acceptance MH algorithm (ADAMH) is implemented to adaptively build a stochastic model of the error introduced by the use of a reduced‐order model, which could offer significant improvement in computational efficiency when implementing sample‐based inference in other large‐scale inverse problems.
Abstract: The aim of this research is to estimate the parameters of a large-scale numerical model of a geothermal reservoir using Markov chain Monte Carlo (MCMC) sampling, within the framework of Bayesian inference. All feasible parameters that are consistent with the measured data are summarized by the posterior distribution, and hence parameter estimation and uncertainty quantification are both given by calculating expected values of statistics of interest over the posterior distribution. It appears to be computationally infeasible to use the standard Metropolis-Hastings (MH) algorithm to sample the high dimensional, computationally expensive posterior distribution. To improve the sampling efficiency, a new adaptive delayed-acceptance MH algorithm (ADAMH) is implemented to adaptively build a stochastic model of the error introduced by the use of a reduced-order model. This use of adaptivity differs from existing adaptive MCMC algorithms that tune proposal distributions of MH, though ADAMH also implements that technique. For the 3-D geothermal reservoir model we present here, ADAMH shows a great improvement in the computational efficiency of the MCMC sampling, and promising results for parameter estimation and uncertainty quantification are obtained. This algorithm could offer significant improvement in computational efficiency when implementing sample-based inference in other large-scale inverse problems.
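A stripped-down delayed-acceptance Metropolis-Hastings step conveys the core idea: proposals are screened with a cheap approximate posterior, and the expensive posterior is evaluated only for proposals that survive the first stage. Both posteriors below are synthetic stand-ins, and the adaptive error model and proposal adaptation that define ADAMH are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post_fine(theta):
    """Stand-in for the expensive (full-model) log posterior (assumption)."""
    return -0.5 * (theta - 1.0) ** 2 / 0.3**2

def log_post_coarse(theta):
    """Stand-in for the cheap reduced-order-model log posterior (assumption)."""
    return -0.5 * (theta - 1.1) ** 2 / 0.35**2

theta, step, chain = 0.0, 0.5, []
lf, lc = log_post_fine(theta), log_post_coarse(theta)
for _ in range(5000):
    prop = theta + step * rng.standard_normal()       # symmetric random-walk proposal
    lc_prop = log_post_coarse(prop)
    # Stage 1: screen with the cheap posterior.
    if rng.random() < min(1.0, np.exp(lc_prop - lc)):
        # Stage 2: promote to the expensive posterior only for promising proposals;
        # the coarse ratio is divided out so the fine posterior is targeted exactly.
        lf_prop = log_post_fine(prop)
        if rng.random() < min(1.0, np.exp((lf_prop - lf) - (lc_prop - lc))):
            theta, lf, lc = prop, lf_prop, lc_prop
    chain.append(theta)

print("posterior mean ~", np.mean(chain[1000:]))
```

In practice the screening stage rejects most poor proposals cheaply, so the number of expensive posterior evaluations per accepted sample drops substantially.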

Journal ArticleDOI
22 Dec 2011
TL;DR: In this paper, a Bayesian uncertainty quantification approach is developed and applied to RANS turbulence models of fully-developed channel flow, which aims to capture uncertainty due to both uncertain parameters and model inadequacy.
Abstract: A Bayesian uncertainty quantification approach is developed and applied to RANS turbulence models of fully-developed channel flow. The approach aims to capture uncertainty due to both uncertain parameters and model inadequacy. Parameter uncertainty is represented by treating the parameters of the turbulence model as random variables. To capture model uncertainty, four stochastic extensions of four eddy viscosity turbulence models are developed. The sixteen coupled models are calibrated using DNS data according to Bayes' theorem, producing posterior probability density functions. In addition, the competing models are compared in terms of two items: posterior plausibility and predictions of a quantity of interest. The posterior plausibility indicates which model is preferred by the data according to Bayes' theorem, while the predictions allow assessment of how strongly the model differences impact the quantity of interest. Results for the channel flow case show that both the stochastic model and the turbulence model affect the predicted quantity of interest. The posterior plausibility favors an inhomogeneous stochastic model coupled with the Chien k-ε model. After calibration with data at Reτ = 944 and Reτ = 2003, this model gives a prediction of the centerline velocity at Reτ = 5000 with an uncertainty of approximately ±4%.
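The posterior plausibility used above to rank the competing stochastic/turbulence model pairs is, up to a prior over models, the Bayesian evidence. For a one-parameter model the evidence can be computed by direct quadrature; the sketch below uses synthetic data and two made-up models in place of the paper's eddy viscosity closures and DNS profiles.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

# Synthetic "data" (stand-in for DNS observations) and two competing models, each
# with one uncertain parameter c and a Gaussian measurement-error model.
rng = np.random.default_rng(4)
y_obs = 2.0 + 0.05 * rng.standard_normal(30)

def model_a(c):
    return 2.0 * c          # stand-in model A (assumption)

def model_b(c):
    return 1.5 + 0.4 * c    # stand-in model B (assumption)

def log_evidence(model, c_grid, sigma=0.05):
    """Evidence = integral over the parameter of likelihood times a uniform prior."""
    prior = 1.0 / (c_grid[-1] - c_grid[0])
    like = np.array([np.prod(norm.pdf(y_obs, loc=model(c), scale=sigma))
                     for c in c_grid])
    return np.log(trapezoid(like * prior, c_grid))

c_grid = np.linspace(0.5, 1.5, 400)
log_ev = {"A": log_evidence(model_a, c_grid), "B": log_evidence(model_b, c_grid)}
# With equal prior model probabilities, posterior plausibility is proportional
# to the evidence; normalize over the candidate set.
ev = np.exp(np.array(list(log_ev.values())) - max(log_ev.values()))
print(dict(zip(log_ev, ev / ev.sum())))
```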

Journal ArticleDOI
TL;DR: Results from both theoretical predictions and the experimental work show the feasibility and accuracy of the predictive formulae for estimating the uncertainty in the stereo-based deformation measurements.
Abstract: Increasing interest in the use of digital image correlation (DIC) for full-field surface shape and deformation measurements has led to an on-going need for both the development of theoretical formulae capable of providing quantitative confidence margins and controlled experiments for validation of the theoretical predictions. In the enclosed work, a series of stereo vision experiments are performed in a manner that provides sufficient information for direct comparison with theoretical predictions using formulae developed in Part I. Specifically, experiments are performed to obtain appropriate optimal estimates and the uncertainty margins for the image locations/displacements, 3-D locations/displacements and strains when using the method of subset-based digital image correlation for image matching. The uncertainty of locating the 3-D space points using subset-based pattern matching is estimated by using theoretical formulae developed in Part I and the experimentally defined confidence margins for image locations. Finally, the uncertainty in strains is predicted using formulae that involves both the variance and covariance of intermediate variables during the strain calculation process. Results from both theoretical predictions and the experimental work show the feasibility and accuracy of the predictive formulae for estimating the uncertainty in the stereo-based deformation measurements.
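The variance/covariance propagation behind the strain uncertainty estimates is a first-order (delta-method) calculation. A hedged sketch for a strain defined as a finite difference of two measured displacements, with an assumed displacement covariance rather than the Part I formulae:

```python
import numpy as np

# First-order (delta-method) propagation sketch, not the paper's Part I formulae.
# Strain from two measured in-plane displacements a gauge length L apart:
#     eps = (u2 - u1) / L,  so  var(eps) = J Cov J^T  with  J = [-1/L, 1/L].
L = 10.0                                    # gauge length in mm (assumed)
cov_u = np.array([[0.0004, 0.0001],         # assumed displacement covariance (mm^2);
                  [0.0001, 0.0004]])        # off-diagonal = correlated matching error
J = np.array([-1.0 / L, 1.0 / L])
var_eps = J @ cov_u @ J
print("strain std dev:", np.sqrt(var_eps))  # ~2.4e-3 strain for these numbers
```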

Journal ArticleDOI
TL;DR: In this article, a simplified method for seismic risk assessment with consideration of aleatory and epistemic uncertainties is proposed based on the widely used closed-form solution for estimating the mean annual frequency of exceeding a limit state (LS).
Abstract: A simplified method for seismic risk assessment with consideration of aleatory and epistemic uncertainties is proposed based on the widely used closed-form solution for estimating the mean annual frequency of exceeding a limit state (LS). The method for the determination of fragility parameters involves a non-linear static analysis of a set of structural models, which is defined by utilising Latin hypercube sampling, and non-linear dynamic analyses of equivalent single degree-of-freedom models. The set of structural models captures the epistemic uncertainties, whereas the aleatory uncertainty due to the random nature of the ground motion is, as usual, simulated by a set of ground motion records. Although the method is very simple to implement, it goes beyond the widely used assumption of independent effects due to aleatory and epistemic uncertainty. Thus, epistemic uncertainty has a potential influence on both fragility parameters, and not only on dispersion, as has been assumed in some other approximate ...
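The Latin hypercube step used to define the set of structural models can be sketched as follows; the uncertain modelling parameters, their distributions, and the sample size are placeholders, not those of the paper.

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin hypercube sample of epistemic modelling uncertainties (placeholder
# parameters): concrete strength ~ N(30, 4) MPa and damping ratio ~ N(0.05, 0.01).
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=20)                       # 20 structural-model realizations
f_c = norm.ppf(u[:, 0], loc=30.0, scale=4.0)
zeta = norm.ppf(u[:, 1], loc=0.05, scale=0.01)
models = np.column_stack([f_c, zeta])
print(models[:3])
# Each row would define one structural model for the pushover / equivalent-SDOF
# dynamic analyses; the resulting capacities feed the closed-form risk estimate.
```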


Journal ArticleDOI
TL;DR: Two approaches for moment design sensitivities are presented, one involving response function expansions over both design and uncertain variables and one involving response derivative expansions over only the uncertain variables.
Abstract: Non-intrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients and requires the use of structured collocation point sets derived from tensor-product or sparse grids. When the basis functions or interpolation grids are tailored to match the forms of the input uncertainties, exponential convergence rates can be achieved with both techniques for general probabilistic analysis problems. Once PCE or SC representations have been obtained for a response metric of interest, analytic expressions can be derived for the moments of the expansion and for the design derivatives of these moments, allowing for efficient design under uncertainty formulations involving moment control (e.g., robust design). This paper presents two approaches for moment design sensitivities, one involving response function expansions over both design and uncertain variables and one involving response derivative expansions over only the uncertain variables. These approaches present a trade-off between increased dimensionality in the expansions (and therefore increased simulation runs required to construct them) with global expansion validity versus increased data requirements per simulation with local expansion validity. Given this capability for analytic moments and their sensitivities, we explore bilevel, sequential, and multifidelity formulations for optimization under uncertainty (OUU). Initial experiences with these approaches are presented for a number of benchmark test problems.
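A hedged sketch of the non-intrusive PCE building block for a single standard-normal input: coefficients are fitted by least-squares regression on probabilists' Hermite polynomials, and the mean and variance then follow analytically from the coefficients. The response function is a stand-in, and the moment design sensitivities and OUU formulations of the paper are not reproduced.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def response(xi):
    """Stand-in for an expensive simulation with one N(0,1) input (assumption)."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

rng = np.random.default_rng(5)
deg = 6
xi = rng.standard_normal(200)                    # training runs
Psi = He.hermevander(xi, deg)                    # probabilists' Hermite basis He_n
coef, *_ = np.linalg.lstsq(Psi, response(xi), rcond=None)

# Analytic moments from the PCE coefficients (He_n has norm n! under N(0,1)).
mean = coef[0]
var = sum(coef[n] ** 2 * factorial(n) for n in range(1, deg + 1))
print("PCE mean/std:", mean, np.sqrt(var))

xi_mc = rng.standard_normal(200000)              # Monte Carlo reference
print("MC  mean/std:", response(xi_mc).mean(), response(xi_mc).std())
```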

Journal ArticleDOI
TL;DR: In this paper, a general Bayesian hierarchical framework is proposed to implement RFA schemes that avoid the difficulty of using strong hypotheses that may limit their application and complicate the quantification of predictive uncertainty.
Abstract: Regional frequency analysis (RFA) has a long history in hydrology, and numerous distinct approaches have been proposed over the years to perform the estimation of some hydrologic quantity at a regional level. However, most of these approaches still rely on strong hypotheses that may limit their application and complicate the quantification of predictive uncertainty. The objective of this paper is to propose a general Bayesian hierarchical framework to implement RFA schemes that avoid these difficulties. The proposed framework is based on a two-level hierarchical model. The first level of the hierarchy describes the joint distribution of observations. An arbitrary marginal distribution, whose parameters may vary in space, is assumed for at-site series. The joint distribution is then derived by means of an elliptical copula, therefore providing an explicit description of the spatial dependence between data. The second level of the hierarchy describes the spatial variability of parameters using a regression model that links the parameter values with covariates describing site characteristics. Regression errors are modeled with a Gaussian spatial field, which may exhibit spatial dependence. This framework enables performing prediction at both gaged and ungaged sites and, importantly, rigorously quantifying the associated predictive uncertainty. A case study based on the annual maxima of daily rainfall demonstrates the applicability of this hierarchical approach. Although numerous avenues for improvement can already be identified (among which is the inclusion of temporal covariates to model time variability), the proposed model constitutes a general framework for implementing flexible RFA schemes and quantifying the associated predictive uncertainty.

Journal ArticleDOI
TL;DR: Determining the posterior measure, given the data, solves the uncertainty quantification problem for this inverse problem; the posterior measure is shown to be Lipschitz with respect to the data in the Hellinger metric, giving rise to a form of well-posedness of the inverse problem.
Abstract: We consider the inverse problem of determining the permeability from the pressure in a Darcy model of flow in a porous medium. Mathematically the problem is to find the diffusion coefficient for a linear uniformly elliptic partial differential equation in divergence form, in a bounded domain in dimension $d \le 3$, from measurements of the solution in the interior. We adopt a Bayesian approach to the problem. We place a prior random field measure on the log permeability, specified through the Karhunen-Loeve expansion of its draws. We consider Gaussian measures constructed this way, and study the regularity of functions drawn from them. We also study the Lipschitz properties of the observation operator mapping the log permeability to the observations. Combining these regularity and continuity estimates, we show that the posterior measure is well defined on a suitable Banach space. Furthermore the posterior measure is shown to be Lipschitz with respect to the data in the Hellinger metric, giving rise to a form of well posedness of the inverse problem. Determining the posterior measure, given the data, solves the problem of uncertainty quantification for this inverse problem. In practice the posterior measure must be approximated in a finite dimensional space. We quantify the errors incurred by employing a truncated Karhunen-Loeve expansion to represent this measure. In particular we study weak convergence of a general class of locally Lipschitz functions of the log permeability, and apply this general theory to estimate errors in the posterior mean of the pressure and the pressure covariance, under refinement of the finite-dimensional Karhunen-Loeve truncation.
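A small numerical illustration of the truncated Karhunen-Loeve representation discussed above: draws of a 1-D Gaussian log-permeability field are assembled from the leading eigenpairs of an assumed exponential covariance. The paper works in up to three dimensions and analyses the truncation error rigorously; none of that analysis is reproduced here.

```python
import numpy as np

# Truncated KL sketch (assumptions: 1-D domain [0, 1], exponential covariance).
n, ell, var = 200, 0.2, 1.0
x = np.linspace(0.0, 1.0, n)
C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # covariance matrix on the grid

# KL modes = eigenpairs of the (discretized) covariance operator, sorted descending.
w, V = np.linalg.eigh(C)
w, V = w[::-1], V[:, ::-1]
k = 20                                   # truncation level

rng = np.random.default_rng(6)
xi = rng.standard_normal(k)              # independent N(0,1) KL coordinates
log_perm = V[:, :k] @ (np.sqrt(w[:k]) * xi)
perm = np.exp(log_perm)                  # log permeability -> permeability draw

print("captured variance fraction:", w[:k].sum() / w.sum())
```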

Journal ArticleDOI
TL;DR: Efficient algorithms based on continuous optimization are presented to find bounds on second and higher moments of interval data and bounding envelopes for the family of Johnson distributions, analogous to the notion of an empirical p-box in the literature.

Journal ArticleDOI
TL;DR: In this article, an extended finite element method (XFEM) coupled with a Monte Carlo approach is proposed to quantify the uncertainty in the homogenized effective elastic properties of multiphase materials.
Abstract: An extended finite element method (XFEM) coupled with a Monte Carlo approach is proposed to quantify the uncertainty in the homogenized effective elastic properties of multiphase materials. The methodology allows for an arbitrary number, aspect ratio, location and orientation of elliptic inclusions within a matrix, without the need for fine meshes in the vicinity of tightly packed inclusions and especially without the need to remesh for every different generated realization of the microstructure. Moreover, the number of degrees of freedom in the enriched elements is dynamically reallocated for each Monte Carlo sample run based on the given volume fraction. The main advantage of the proposed XFEM-based methodology is a major reduction in the computational effort in extensive Monte Carlo simulations compared with the standard FEM approach. Monte Carlo and XFEM appear to work extremely efficiently together. The Monte Carlo approach allows for the modeling of the size, aspect ratios, orientations, and spatial distribution of the elliptical inclusions as random variables with any prescribed probability distributions. Numerical results are presented and the uncertainty of the homogenized elastic properties is discussed. Copyright © 2011 John Wiley & Sons, Ltd.
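The Monte Carlo layer of this approach can be sketched independently of XFEM. In the hedged example below each realization is reduced to a sampled volume fraction and stiffness contrast, and the XFEM homogenization is replaced by the analytical Voigt and Reuss bounds purely to show how output uncertainty is accumulated; the distributions are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)
n_mc = 5000

# Random microstructure descriptors (assumed distributions, not the paper's).
vf = rng.uniform(0.1, 0.3, n_mc)            # inclusion volume fraction
E_m = rng.normal(3.0, 0.2, n_mc)            # matrix modulus, GPa
E_i = rng.normal(70.0, 5.0, n_mc)           # inclusion modulus, GPa

# Stand-in for the XFEM homogenization: Voigt (upper) and Reuss (lower) bounds.
E_voigt = vf * E_i + (1.0 - vf) * E_m
E_reuss = 1.0 / (vf / E_i + (1.0 - vf) / E_m)
E_eff = 0.5 * (E_voigt + E_reuss)           # crude per-realization point estimate

print("effective modulus mean:", E_eff.mean(), "GPa")
print("effective modulus 5-95%:", np.percentile(E_eff, [5, 95]))
```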

Journal ArticleDOI
TL;DR: A minimally subjective approach for uncertainty quantification in data-sparse situations, based on a new and purely data-driven version of polynomial chaos expansion (PCE), achieves a significant computational speed-up compared with Monte Carlo as well as high accuracy even for small orders of expansion, and shows how this novel approach helps overcome subjectivity.

Journal ArticleDOI
TL;DR: This presentation discusses and illustrates the conceptual and computational basis of QMU (quantification of margins and uncertainties) in analyses that use computational models to predict the behavior of complex systems.

ReportDOI
01 Dec 2011
TL;DR: The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods.

Journal ArticleDOI
TL;DR: The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses.

Book ChapterDOI
29 Aug 2011
TL;DR: QUESO, a collection of statistical algorithms and programming constructs supporting research into the uncertainty quantification of models and their predictions, is described; its design allows easy insertion of new and improved algorithms.
Abstract: QUESO is a collection of statistical algorithms and programming constructs supporting research into the uncertainty quantification (UQ) of models and their predictions. It has been designed with three objectives: it should (a) be sufficiently abstract in order to handle a large spectrum of models, (b) be algorithmically extensible, allowing an easy insertion of new and improved algorithms, and (c) take advantage of parallel computing, in order to handle realistic models. Such objectives demand a combination of an object-oriented design with robust software engineering practices. QUESO is written in C++, uses MPI, and leverages libraries already available to the scientific community. We describe some UQ concepts, present QUESO, and list planned enhancements.

Journal ArticleDOI
TL;DR: Two numerical examples, namely an antenna reflector and a full-scale satellite model, will be used for demonstrating the applicability of the employed updating procedure to complex aerospace structures.

Journal ArticleDOI
TL;DR: A systematic error quantification methodology for computational models to correct the model prediction for deterministic errors or to account for the stochastic errors through sampling is developed.
Abstract: Multiple sources of errors and uncertainty arise in mechanics computational models and contribute to the uncertainty in the final model prediction. This paper develops a systematic error quantification methodology for computational models. Some types of errors are deterministic, and some are stochastic. Appropriate procedures are developed to either correct the model prediction for deterministic errors or to account for the stochastic errors through sampling. First, input error, discretization error in finite element analysis (FEA), surrogate model error, and output measurement error are considered. Next, uncertainty quantification error, which arises due to the use of sampling-based methods, is also investigated. Model form error is estimated based on the comparison of corrected model prediction against physical observations and after accounting for solution approximation errors, uncertainty quantification errors, and experimental errors (input and output). Both local and global sensitivity measures are investigated to estimate and rank the contribution of each source of error to the uncertainty in the final result. Two numerical examples are used to demonstrate the proposed methodology by considering mechanical stress analysis and fatigue crack growth analysis.

Proceedings ArticleDOI
04 Jan 2011
TL;DR: In this article, the authors introduce a new approach for addressing epistemic uncertainty which is then demonstrated for the flow over a 2D transonic bump configuration, where the well known SST k-ω turbulence model is considered.
Abstract: The Reynolds averaged Navier-Stokes equations represent an attractive alternative to direct numerical simulation of turbulence due to their simplicity and reduced computational expense. In the literature it is well established that the structure of Reynolds averaged turbulence models is fundamentally limited in its ability to represent turbulent processes, introducing epistemic "model-form" uncertainty into the predictions. Sensitivity analysis and probabilistic approaches have been used to address these uncertainties; however, there is no well established framework within the turbulence modeling community to quantify this important source of error. This work introduces a new approach for addressing epistemic uncertainty, which is then demonstrated for the flow over a 2D transonic bump configuration. The well known SST k-ω turbulence model is considered. The reported quantities are the wall pressure, separation location, and reattachment location along the bottom wall of the domain. The results show the new method is able to introduce bounding behavior on the numerical and experimental predictions for these quantities.

Journal ArticleDOI
TL;DR: In this article, a decoupled approach is proposed to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency.
Abstract: This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point data and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to the solutions of the design problem that are least sensitive to variations in the input random variables.

Journal ArticleDOI
TL;DR: In this article, the shape optimization in a 2D BZT flow with multiple-source uncertainties (thermodynamic model, operating conditions and geometry) is investigated, and the optimal Pareto sets corresponding to the minimization of various substitute functions are obtained using a genetic algorithm as optimizer and their differences are discussed.