
Showing papers on "Uncertainty quantification published in 2010"



Book
01 Jan 2010
TL;DR: This book presents methods for uncertainty quantification and propagation based on spectral expansions, covering non-intrusive and Galerkin methods, solvers for stochastic Galerkin problems, wavelet and multiresolution analysis schemes, adaptive methods, and applications to the Navier-Stokes equations.
Abstract: Introduction: Uncertainty Quantification and Propagation.- Basic Formulations.- Spectral Expansions.- Non-intrusive Methods.- Galerkin Methods.- Detailed Elementary Applications.- Application to Navier-Stokes Equations.- Advanced topics.- Solvers for Stochastic Galerkin Problems.- Wavelet and Multiresolution Analysis Schemes.- Adaptive Methods.- Epilogue.
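As a pointer for readers new to the topic, the spectral representation at the heart of this book writes a random quantity as a truncated series in random orthogonal basis functions, with coefficients determined (in the intrusive case) by Galerkin projection. The formulas below use standard notation and are not taken verbatim from the book:

```latex
% Truncated polynomial chaos expansion of a random field u(x,\xi)
u(x,\xi) \approx \sum_{k=0}^{P} u_k(x)\,\Psi_k(\xi),
\qquad \mathbb{E}[\Psi_j \Psi_k] = \delta_{jk}\,\mathbb{E}[\Psi_k^2].
% Galerkin projection of a model \mathcal{L}(u;\xi) = f determines the u_k:
\mathbb{E}\!\left[\mathcal{L}\!\Big(\textstyle\sum_{k} u_k \Psi_k;\xi\Big)\Psi_j\right]
  = \mathbb{E}[f\,\Psi_j], \qquad j = 0,\dots,P.
```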

795 citations


ReportDOI
01 May 2010
TL;DR: This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Abstract: The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications. DAKOTA Version 5.0 Reference Manual generated on May 7, 2010
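To convey what a sampling-based UQ study of the kind DAKOTA automates looks like, here is a minimal, self-contained Python sketch of Latin hypercube sampling driven through a stand-in simulation. It illustrates the idea only; it is not DAKOTA's input syntax or implementation, and the simulator function and bounds are hypothetical.

```python
import numpy as np

def latin_hypercube(n, lower, upper, seed=None):
    """Latin hypercube sample of the box [lower, upper]: one stratum per
    sample and dimension, strata shuffled independently per dimension."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    d = lower.size
    u = np.empty((n, d))
    for j in range(d):
        strata = rng.permutation(n)              # which stratum each sample uses
        u[:, j] = (strata + rng.random(n)) / n   # one point per stratum
    return lower + u * (upper - lower)

def simulator(x):
    # stand-in for an external simulation code (hypothetical response)
    return x[0] ** 2 + np.sin(3.0 * x[1])

X = latin_hypercube(200, lower=[0.0, 0.0], upper=[1.0, 2.0], seed=1)
y = np.array([simulator(x) for x in X])
print("mean = %.3f  std = %.3f" % (y.mean(), y.std(ddof=1)))
```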

757 citations


Journal ArticleDOI
TL;DR: In this paper, an interval Monte Carlo method has been developed which combines the simulation process with interval analysis to estimate the interval failure probability; epistemic and aleatory uncertainty are propagated separately through finite element-based reliability analysis.
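A minimal sketch of the interval Monte Carlo idea (not the authors' code): the aleatory variable is sampled, the epistemic variable is carried as an interval, and monotonicity of the (hypothetical) limit-state function turns each sample into an interval-valued outcome, yielding lower and upper bounds on the failure probability.

```python
import numpy as np

def interval_failure_probability(n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    # Aleatory input: load S ~ Normal(mean=1.0, std=0.2)
    S = rng.normal(1.0, 0.2, n)
    # Epistemic input: resistance R known only to lie in the interval [1.3, 1.6]
    R_lo, R_hi = 1.3, 1.6
    # Limit state g = R - S; failure when g < 0.  Since g increases with R,
    # each sample yields the interval [R_lo - S, R_hi - S].
    pf_lower = np.mean(R_hi - S < 0.0)   # failure under every R in the interval
    pf_upper = np.mean(R_lo - S < 0.0)   # failure under some R in the interval
    return pf_lower, pf_upper

print(interval_failure_probability())
```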

191 citations


BookDOI
12 Oct 2010
TL;DR: This book surveys computational methods for large-scale inverse problems and uncertainty quantification, including frequentist and Bayesian inference, surrogate and reduced-order modeling, the Ensemble Kalman Filter (EnKF) and related filters, sparse grid collocation, and optimal experimental design.
Abstract: 1 Introduction 1.1 Introduction 1.2 Statistical Methods 1.3 Approximation Methods 1.4 Kalman Filtering 1.5 Optimization 2 A Primer of Frequentist and Bayesian Inference in Inverse Problems 2.1 Introduction 2.2 Prior Information and Parameters: What do you know, and what do you want to know? 2.3 Estimators: What can you do with what you measure? 2.4 Performance of estimators: How well can you do? 2.5 Frequentist performance of Bayes estimators for a BNM 2.6 Summary Bibliography 3 Subjective Knowledge or Objective Belief? An Oblique Look to Bayesian Methods 3.1 Introduction 3.2 Belief, information and probability 3.3 Bayes' formula and updating probabilities 3.4 Computed examples involving hypermodels 3.5 Dynamic updating of beliefs 3.6 Discussion Bibliography 4 Bayesian and Geostatistical Approaches to Inverse Problems 4.1 Introduction 4.2 The Bayesian and Frequentist Approaches 4.3 Prior Distribution 4.4 A Geostatistical Approach 4.5 Concluding Bibliography 5 Using the Bayesian Framework to Combine Simulations and Physical Observations for Statistical Inference 5.1 Introduction 5.2 Bayesian Model Formulation 5.3 Application: Cosmic Microwave Background 5.4 Discussion Bibliography 6 Bayesian Partition Models for Subsurface Characterization 6.1 Introduction 6.2 Model equations and problem setting 6.3 Approximation of the response surface using the Bayesian Partition Model and two-stage MCMC 6.4 Numerical results 6.5 Conclusions Bibliography 7 Surrogate and reduced-order modeling: a comparison of approaches for large-scale statistical inverse problems 7.1 Introduction 7.2 Reducing the computational cost of solving statistical inverse problems 7.3 General formulation 7.4 Model reduction 7.5 Stochastic spectral methods 7.6 Illustrative example 7.7 Conclusions Bibliography 8 Reduced basis approximation and a posteriori error estimation for parametrized parabolic PDEs Application to real-time Bayesian parameter estimation 8.1 Introduction 8.2 Linear Parabolic Equations 8.3 Bayesian Parameter Estimation 8.4 Concluding Remarks Bibliography 9 Calibration and Uncertainty Analysis for Computer Simulations with Multivariate Output 9.1 Introduction 9.2 Gaussian Process Models 9.3 Bayesian Model Calibration 9.4 Case Study: Thermal Simulation of Decomposing Foam 9.5 Conclusions Bibliography 10 Bayesian Calibration of Expensive Multivariate Computer Experiments 10.1 Calibration of computer experiments 10.2 Principal component emulation 10.3 Multivariate calibration 10.4 Summary Bibliography 11 The Ensemble Kalman Filter and Related Filters 11.1 Introduction 11.2 Model Assumptions 11.3 The Traditional Kalman Filter (KF) 11.4 The Ensemble Kalman Filter (EnKF) 11.5 The Randomized Maximum Likelihood Filter (RMLF) 11.6 The Particle Filter (PF) 11.7 Closing Remarks 11.8 Appendix A: Properties of the EnKF Algorithm 11.9 Appendix B: Properties of the RMLF Algorithm Bibliography 12 Using the ensemble Kalman Filter for history matching and uncertainty quantification of complex reservoir models 12.1 Introduction 12.2 Formulation and solution of the inverse problem 12.3 EnKF history matching workflow 12.4 Field Case 12.5 Conclusion Bibliography 13 Optimal Experimental Design for the Large-Scale Nonlinear Ill-posed Problem of Impedance Imaging 13.1 Introduction 13.2 Impedance Tomography 13.3 Optimal Experimental Design - Background 13.4 Optimal Experimental Design for Nonlinear Ill-Posed Problems 13.5 Optimization Framework 13.6 Numerical Results 13.7 Discussion and Conclusions Bibliography 14 Solving 
Stochastic Inverse Problems: A Sparse Grid Collocation Approach 14.1 Introduction 14.2 Mathematical developments 14.3 Numerical Examples 14.4 Summary Bibliography 15 Uncertainty analysis for seismic inverse problems: two practical examples 15.1 Introduction 15.2 Traveltime inversion for velocity determination. 15.3 Prestack stratigraphic inversion 15.4 Conclusions Bibliography 16 Solution of inverse problems using discrete ODE adjoints 16.1 Introduction 16.2 Runge-Kutta Methods 16.3 Adaptive Steps 16.4 Linear Multistep Methods 16.5 Numerical Results 16.6 Application to Data Assimilation 16.7 Conclusions Bibliography TBD
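Several chapters revolve around the Ensemble Kalman Filter; as a quick orientation, a generic stochastic-perturbation EnKF analysis step (textbook form, not the book's code) can be sketched in a few lines of Python:

```python
import numpy as np

def enkf_update(X, y, H, R, seed=None):
    """Stochastic EnKF analysis step.
    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    """
    rng = np.random.default_rng(seed)
    n_state, n_ens = X.shape
    Xa = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = Xa @ Xa.T / (n_ens - 1)                     # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # perturb observations so the analysis ensemble keeps the correct spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                      # analysis ensemble
```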

163 citations


Journal ArticleDOI
TL;DR: The generalized polynomial chaos expansion (GPC), as discussed in this paper, is a nonsampling method which represents uncertain quantities as an expansion in deterministic coefficients and random orthogonal basis functions.
Abstract: In recent years, extensive research has been reported on a method called the generalized polynomial chaos expansion. In contrast to sampling methods, e.g., Monte Carlo simulations, polynomial chaos expansion is a nonsampling method which represents the uncertain quantities as an expansion in deterministic coefficients and random orthogonal basis functions. The generalized polynomial chaos expansion uses a broader family of orthogonal polynomials as expansion bases, in various random spaces which are not necessarily Gaussian. A general review of uncertainty quantification methods, the theory, the construction method, and various convergence criteria of the polynomial chaos expansion are presented. We apply it to identify the uncertain parameters with predefined probability density functions. The new concepts of optimal and nonoptimal expansions are defined, and it is demonstrated how these expansions can be developed for random variables belonging to various random spaces. The calculation of the polynomial coefficients for uncertain parameters by using various procedures, e.g., Galerkin projection, collocation method, and moment method, is presented. A comprehensive error and accuracy analysis of the polynomial chaos method is discussed for various random variables and random processes, and results are compared with the exact solution and/or Monte Carlo simulations. The method is employed for a basic stochastic differential equation and, as a practical application, for the stochastic modal analysis of a microsensor quartz fork. We emphasize the accuracy and time efficiency of this nonsampling procedure for uncertainty quantification of stochastic systems in comparison with sampling techniques, e.g., Monte Carlo simulation.
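To make the non-intrusive projection step concrete, the following Python sketch computes polynomial chaos coefficients of a scalar model of one standard Gaussian variable by Gauss-Hermite quadrature and checks the reconstructed mean and variance against their exact values. It illustrates the reviewed method under simplifying assumptions and is not the authors' implementation.

```python
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite polynomials
from math import factorial, e, sqrt

def pce_coefficients(model, order, n_quad=40):
    """Non-intrusive projection: u_k = E[model(xi) He_k(xi)] / E[He_k(xi)^2]."""
    x, w = He.hermegauss(n_quad)        # Gauss-Hermite quadrature rule
    w = w / w.sum()                     # normalize to expectations under N(0, 1)
    coeffs = []
    for k in range(order + 1):
        basis_k = He.hermeval(x, [0] * k + [1])   # He_k at the quadrature nodes
        coeffs.append(np.sum(w * model(x) * basis_k) / factorial(k))
    return np.array(coeffs)

u = pce_coefficients(np.exp, order=6)                 # model u(xi) = exp(xi)
mean_pce = u[0]
var_pce = sum(factorial(k) * u[k] ** 2 for k in range(1, len(u)))
print(mean_pce, sqrt(e))        # PCE mean vs exact sqrt(e)
print(var_pce, e ** 2 - e)      # PCE variance vs exact e^2 - e
```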

122 citations


Journal ArticleDOI
TL;DR: A general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems.
Abstract: We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ.
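The concentration-of-measure flavour of the OCI results can be illustrated with the plain McDiarmid inequality, which bounds the probability of falling below a threshold in terms of the componentwise oscillations (subdiagonal diameters) of the response. The sketch below is the generic inequality, not the paper's optimal bound, and the numbers are made up.

```python
import math

def mcdiarmid_upper_bound(mean_performance, threshold, diameters):
    """Upper bound on P[f(X) <= threshold] for independent inputs, where
    diameters[i] bounds the change of f when only input i is varied and
    mean_performance = E[f(X)] >= threshold (McDiarmid's inequality)."""
    margin = mean_performance - threshold
    if margin <= 0.0:
        return 1.0   # the inequality gives no information below the mean
    return math.exp(-2.0 * margin ** 2 / sum(d * d for d in diameters))

# hypothetical numbers: mean performance 3.0, failure threshold 1.0,
# componentwise oscillations 1.0, 0.5 and 0.5
print(mcdiarmid_upper_bound(mean_performance=3.0, threshold=1.0,
                            diameters=[1.0, 0.5, 0.5]))
```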

120 citations



Journal ArticleDOI
TL;DR: In this article, a parsimonious conceptual model for evaluating the pollutant load in sewers is presented; it includes two modules: a hydrological and hydraulic module that calculates the hydrographs at the inlet and at the outlet of the sewer system, and a solid-transfer module that calculates the pollutographs.

113 citations


Journal ArticleDOI
TL;DR: It is shown how the Bayesian paradigm can be applied to formulate and solve the inverse problem, and the estimated polynomial chaos coefficients are characterized as random variables whose probability density function is the Bayesian posterior.

108 citations


Journal ArticleDOI
TL;DR: Bayesian uncertainty estimation methods are explored, investigating the influence of the choice of prior distribution and the effect this user-defined choice has on the reliability of the uncertainty analysis results.

Book ChapterDOI
12 Oct 2010
TL;DR: Reduced basis approximation and a posteriori error estimation are presented for general linear parabolic equations and, subsequently, for a nonlinear parabolic equation, the incompressible Navier-Stokes equations.
Abstract: In this paper we consider reduced basis approximation and a posteriori error estimation for linear functional outputs of affinely parametrized linear and non-linear parabolic partial differential equations. The essential ingredients are Galerkin projection onto a low-dimensional space associated with a smooth "parametric manifold" - dimension reduction; efficient and effective Greedy and POD-Greedy sampling methods for identification of optimal and numerically stable approximations - rapid convergence; rigorous and sharp a posteriori error bounds (and associated stability factors) for the linear-functional outputs of interest - certainty; and Offline-Online computational decomposition strategies - minimum marginal cost for high performance in the real-time/embedded (e.g., parameter estimation, control) and many-query (e.g., design optimization, uncertainty quantification, multi-scale) contexts. In this paper we first present reduced basis approximation and a posteriori error estimation for general linear parabolic equations and subsequently for a nonlinear parabolic equation, the incompressible Navier-Stokes equations. We then present results for the application of our (parabolic) reduced basis methods to Bayesian parameter estimation: detection and characterization of a delamination crack by transient thermal analysis.
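The Offline-Online decomposition can be illustrated with a plain POD-based reduced basis for a toy affinely parametrized linear system. This is a generic sketch with a made-up operator; it omits the certified error bounds that are the point of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400                                   # "truth" (finite element) dimension
A0 = np.diag(np.linspace(1.0, 2.0, n))    # toy affine operator A(mu) = A0 + mu*A1
A1 = np.diag(np.sin(np.linspace(0.0, np.pi, n)))
f = rng.standard_normal(n)

# Offline: collect snapshots over the parameter range and build a POD basis
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, f)
                             for mu in np.linspace(0.1, 10.0, 20)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :5]                                          # 5 POD modes
A0r, A1r, fr = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ f    # parameter-independent pieces

# Online: for a new parameter, solve only a 5x5 system
mu = 3.7
u_rb = V @ np.linalg.solve(A0r + mu * A1r, fr)
u_fe = np.linalg.solve(A0 + mu * A1, f)
print("relative error:", np.linalg.norm(u_rb - u_fe) / np.linalg.norm(u_fe))
```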

Journal ArticleDOI
TL;DR: Results based on real-world station-by-station traffic speed data showed that the proposed online algorithm can generate workable short-term traffic speed level forecasts and associated prediction confidence intervals.
Abstract: Short-term traffic condition forecasting has long been argued as essential for developing proactive traffic control systems that could alleviate the growing congestion in the United States. In this field, short-term traffic condition level forecasting and short-term traffic condition uncertainty forecasting play an equally important role. Past literature showed that linear stochastic time series models are promising in modeling and hence forecasting traffic condition levels and traffic conditional variance with workable performance. On the basis of this finding, an autoregressive moving average plus generalized autoregressive conditional heteroscedasticity structure was proposed for modeling the station-by-station traffic speed series. An online algorithm based on layered Kalman filter was developed for processing this structure in real time. Empirical results based on real-world station-by-station traffic speed data showed that the proposed online algorithm can generate workable short-term traffic speed ...
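To illustrate how an ARMA + GARCH structure yields both a level forecast and a prediction interval, here is a hand-rolled AR(1) + GARCH(1,1) recursion on synthetic data with assumed coefficients; it is not the paper's layered Kalman filter algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)
phi, omega, alpha, beta = 0.8, 0.05, 0.1, 0.85   # assumed AR/GARCH coefficients

# simulate an AR(1) level series with GARCH(1,1) conditional variance
T = 500
y = np.zeros(T)
eps = np.zeros(T)
h = np.full(T, omega / (1.0 - alpha - beta))     # start at unconditional variance
for t in range(1, T):
    h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    y[t] = phi * y[t - 1] + eps[t]

# one-step-ahead forecast: level from the AR part, spread from the GARCH part
h_next = omega + alpha * eps[-1] ** 2 + beta * h[-1]
y_next = phi * y[-1]
lo, hi = y_next - 1.96 * np.sqrt(h_next), y_next + 1.96 * np.sqrt(h_next)
print(f"forecast {y_next:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```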

Journal ArticleDOI
TL;DR: A numerical framework for quantification of epistemic uncertainty is proposed and it is shown that in the case where probability distributions of the epistemic variables become known a posteriori, the information can be used to post-process the solution and evaluate solution statistics.

Proceedings ArticleDOI
04 Jan 2010
TL;DR: This paper gives a broad overview of a complete framework for assessing the predictive uncertainty of scientific computing applications and methods for conveying the total predictive uncertainty to decision makers.
Abstract: This paper gives a broad overview of a complete framework for assessing the predictive uncertainty of scientific computing applications. The framework is complete in the sense that it treats both types of uncertainty (aleatory and epistemic) and incorporates uncertainty due to the form of the model and any numerical approximations used. Aleatory (or random) uncertainties in model inputs are treated using cumulative distribution functions, while epistemic (lack of knowledge) uncertainties are treated as intervals. Approaches for propagating both types of uncertainties through the model to the system response quantities of interest are discussed. Numerical approximation errors (due to discretization, iteration, and round-off) are estimated using verification techniques, and the conversion of these errors into epistemic uncertainties is discussed. Model form uncertainties are quantified using model validation procedures, which include a comparison of model predictions to experimental data and then extrapolation of this uncertainty structure to points in the application domain where experimental data do not exist. Finally, methods for conveying the total predictive uncertainty to decision makers are presented.
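The segregated treatment of aleatory and epistemic uncertainty described above is often implemented as a nested loop: epistemic values are drawn from their intervals in an outer loop, aleatory variables are sampled in an inner loop, and the resulting family of empirical CDFs forms a probability box. A generic Python sketch with a hypothetical response function (not the paper's framework) follows.

```python
import numpy as np

def response(aleatory, epistemic):
    # hypothetical system response; a real study would call a simulation code
    return epistemic * aleatory ** 2

def p_box(n_outer=50, n_inner=2000, seed=0):
    """Return lower/upper empirical-CDF envelopes of the response."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 10.0, 200)
    cdfs = []
    for _ in range(n_outer):
        theta = rng.uniform(1.0, 2.0)          # epistemic: any value in [1, 2]
        a = rng.normal(1.0, 0.5, n_inner)      # aleatory: Normal(1, 0.5)
        samples = np.sort(response(a, theta))
        cdfs.append(np.searchsorted(samples, grid) / n_inner)
    cdfs = np.array(cdfs)
    return grid, cdfs.min(axis=0), cdfs.max(axis=0)   # the p-box envelopes

grid, lower, upper = p_box()
i = np.searchsorted(grid, 2.0)
print("CDF bounds at response = 2:", lower[i], upper[i])
```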

01 Jan 2010
TL;DR: This dissertation discusses uncertainty quantification as posed in the Data Collaboration framework, using techniques of nonconvex quadratically constrained quadratic programming to provide both lower and upper bounds on the various objectives.
Abstract: Author(s): Russi, Trent Michael | Advisor(s): Packard, Andrew K; Frenklach, Michael | This dissertation discusses uncertainty quantification as posed in the Data Collaboration framework. Data Collaboration is a methodology for combining experimental data and system models to induce constraints on a set of uncertain system parameters. The framework is summarized, including outlines of notation and techniques. The main techniques include polynomial optimization and surrogate modeling to ascertain the consistency of all data and models as well as propagate uncertainty in the form of a model prediction. One of the main methods of Data Collaboration is using techniques of nonconvex quadratically constrained quadratic programming to provide both lower and upper bounds on the various objectives. The Lagrangian dual of the NQCQP provides both an outer bound to the optimal objective as well as Lagrange multipliers. These multipliers act as sensitivity measures relaying the effects of changes to the parameter constraint bounds on the optimal objective. These multipliers are rewritten to provide the sensitivity to uncertainty in the response prediction with respect to uncertainty in the parameters and experimental data. It is often of interest to find a vector of parameters that is both feasible and representative of the current community work and knowledge. This is posed as the problem of finding the minimal number of parameters that must deviate from their literature value to achieve concurrence with all experimental data constraints. This problem is heuristically solved using the l1-norm in place of the cardinality function. A lower bound on the objective is provided through an NQCQP formulation. In order to use the NQCQP techniques, the system models need to have quadratic forms. When they do not have quadratic forms, surrogate models are fitted. Surrogate modeling can be difficult for complex models with large numbers of parameters and long simulation times because of the amount of evaluation-time required to make a good fit. New techniques are developed for searching for an active subspace of the parameters, and subsequently creating an experiment design on the active subspace that adheres to the original parameter constraints. The active subspace can have a dimension significantly lower than the original parameter dimension, thereby reducing the computational complexity of generating the surrogate model. The technique is demonstrated on several examples from combustion chemistry and biology. Several other applications of the Data Collaboration framework are presented. They are used to demonstrate the complexity of describing a high dimensional feasible set of parameter values as constrained by experimental data. Approximating the feasible set can lead to a simple description, but the predictive capability of such a set is significantly reduced compared to the actual feasible set. This is demonstrated on an example from combustion chemistry.
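The active-subspace search mentioned in the dissertation can be sketched generically as an eigendecomposition of the averaged outer product of model gradients; the Python below uses a made-up test function and is not the author's algorithm.

```python
import numpy as np

def active_subspace(grad_f, sample_inputs, k):
    """Estimate a k-dimensional active subspace from gradient samples."""
    G = np.array([grad_f(x) for x in sample_inputs])   # (n_samples, dim)
    C = G.T @ G / len(G)                               # approx. E[grad f grad f^T]
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:k]], eigvals[order]

# toy model: f(x) = (w1.x)^2 + sin(w2.x) depends on x only through two directions
rng = np.random.default_rng(0)
dim = 20
w1, w2 = rng.standard_normal(dim), rng.standard_normal(dim)
grad_f = lambda x: 2.0 * (w1 @ x) * w1 + np.cos(w2 @ x) * w2
X = rng.uniform(-1.0, 1.0, (500, dim))
W, eigvals = active_subspace(grad_f, X, k=2)
print("leading eigenvalues:", np.round(eigvals[:4], 3))   # two dominate
```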

Journal ArticleDOI
TL;DR: A surrogate model is constructed as a goal-oriented projection onto an incomplete space of polynomials; the coordinates of the projection are found by regression, and derivative information is used to significantly reduce the number of sample points required to obtain a good model.
Abstract: In this work we describe a polynomial regression approach that uses derivative information for analyzing the performance of a complex system that is described by a mathematical model depending on s...
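The gradient-enhanced regression idea summarized in the TL;DR can be sketched as a single least-squares system in which every sample contributes its function value and its partial derivatives as extra equations; the quadratic basis and the test function below are hypothetical.

```python
import numpy as np

# Gradient-enhanced least-squares fit of a quadratic in two variables.
# Basis: 1, x1, x2, x1^2, x1*x2, x2^2.
def basis(x):
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 ** 2, x1 * x2, x2 ** 2])

def basis_grad(x):
    x1, x2 = x
    return np.array([[0.0, 1.0, 0.0, 2 * x1, x2, 0.0],    # d/dx1 of each basis fn
                     [0.0, 0.0, 1.0, 0.0, x1, 2 * x2]])   # d/dx2 of each basis fn

f = lambda x: 3.0 + x[0] - 2.0 * x[1] + 0.5 * x[0] * x[1]          # toy model
grad_f = lambda x: np.array([1.0 + 0.5 * x[1], -2.0 + 0.5 * x[0]])

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (6, 2))                  # few samples suffice with gradients
A = np.vstack([np.vstack([basis(x), basis_grad(x)]) for x in X])
b = np.concatenate([np.concatenate([[f(x)], grad_f(x)]) for x in X])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(coef, 3))   # recovers [3, 1, -2, 0, 0.5, 0]
```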

Journal ArticleDOI
TL;DR: In this article, the authors proposed formulations of Mixed Variables (random and fuzzy variables) Multidisciplinary Design Optimization (MVMDO) and a method of MVMDO within the framework of Sequential Optimization and Reliability Assessment (SORA).
Abstract: In engineering design, to achieve high reliability and safety in complex and coupled systems (e.g., Multidisciplinary Systems), Reliability-Based Multidisciplinary Design Optimization (RBMDO) has received increasing attention. If there are sufficient data on the uncertainties to construct the probability distribution of each input variable, RBMDO can deal with the problem efficiently. However, both Aleatory Uncertainty (AU) and Epistemic Uncertainty (EU) are present in most Multidisciplinary Systems (MS). In this situation, the results of RBMDO will be unreliable or risky, because there are insufficient data to precisely construct the probability distribution for the EU owing to limits on time, money, etc. This paper proposes formulations of Mixed-Variable (random and fuzzy variables) Multidisciplinary Design Optimization (MVMDO) and a method of MVMDO within the framework of Sequential Optimization and Reliability Assessment (MVMDO-SORA). MVMDO overcomes the difficulties caused by insufficient information about uncertainty. The proposed method enables designers to solve MDO problems in the presence of both AU and EU, and it also efficiently reduces the computational demand. Examples are used to demonstrate the proposed formulations and the efficiency of MVMDO-SORA.

Journal ArticleDOI
TL;DR: It is illustrated how data availability guides the choice of the settings and how results and sensitivity analyses can be interpreted in the domain of DST, concluding with recommendations for industrial practice.

Journal ArticleDOI
TL;DR: In this article, the authors developed a methodology for the optimum layout design of sensor arrays of structural health monitoring systems under uncertainty, including finite element analysis under transient mechanical and thermal loads and incorporation of uncertainty quantification methods.
Abstract: This paper develops a methodology for the optimum layout design of sensor arrays of structural health monitoring systems under uncertainty. This includes finite element analysis under transient mechanical and thermal loads and incorporation of uncertainty quantification methods. The finite element model is validated with experimental data, accounting for uncertainties in experimental measurements and model predictions. The structural health monitoring sensors need to be placed optimally in order to detect with high reliability any structural damage before it turns critical. The proposed methodology achieves this objective by combining probabilistic finite element analysis, structural damage detection algorithms, and reliability-based optimization concepts.

Journal ArticleDOI
TL;DR: The qualification procedure for coupled multi-physics code systems is based on the qualification framework (verification and validation) of the separate physics models/codes, and additionally includes Verification and Validation (V&V) of the methodologies used to couple the different physics models.

Journal ArticleDOI
TL;DR: A proposed two-scale framework combines a random domain decomposition (RDD) and a probabilistic collocation method (PCM) on sparse grids to quantify these two sources of uncertainty, respectively, and yields efficient, robust and non-intrusive approximations for the statistics of diffusion in random composites.

Journal ArticleDOI
TL;DR: The results show that the proposed algorithms are capable of capturing the channel boundaries and describe the permeability variations within the channels using dynamic production history at the wells.

Journal ArticleDOI
TL;DR: This paper reviews uncertainty classification in the engineering literature and the nature of uncertainty in TLC estimation, and argues that the potential value of imprecise probability should be explored within the domain of TLC to assist cost estimators and decision makers in understanding and assessing uncertainty.
Abstract: Estimating through-life cost (TLC) is an area that is critical to many industrial sectors, in particular defense and aerospace, where products are complex and have extended life cycles. One of the key problems in modeling the cost of these products is the limited life-cycle information available at the early stage. This leads to epistemic and aleatory uncertainty within the estimation process in terms of data, estimation techniques, and scenario analysis. This paper presents a review of uncertainty classifications in the engineering literature and of the nature of uncertainty in TLC estimation. Based on the review, the paper then presents a critique of current uncertainty modeling approaches in cost estimation and concludes by suggesting the need for a different approach to handling uncertainty in TLC. The potential value of imprecise probability should be explored within the domain of TLC to assist cost estimators and decision makers in understanding and assessing uncertainty. The implications of such a representation for decision making under risk and decision making under uncertainty are also discussed.


Proceedings ArticleDOI
04 Jan 2010
TL;DR: In this paper, uncertainty quantification in computational fluid dynamics (CFD) is examined with non-intrusive polynomial chaos (NIPC) methods, which require no modification to the existing deterministic models.
Abstract: This paper examines uncertainty quantification in computational fluid dynamics (CFD) with non-intrusive polynomial chaos (NIPC) methods which require no modification to the existing deterministic models. The NIPC methods have been increasingly used for uncertainty propagation in high-fidelity CFD simulations due to their non-intrusive nature and strong potential for addressing the computational efficiency and accuracy requirements associated with large-scale complex stochastic simulations. We give the theory and description of various NIPC methods used for non-deterministic CFD simulations. We also present several stochastic fluid dynamics examples to demonstrate the application and effectiveness of NIPC methods for uncertainty quantification in fluid dynamics. These examples include stochastic computational analysis of a laminar boundary layer flow over a flat plate, supersonic expansion wave problem, and inviscid transonic flow over a three-dimensional wing with rigid and aeroelastic assumptions.

Journal ArticleDOI
TL;DR: In this article, a probabilistic model for marine corrosion is adopted; selected uncertainty models are applied and their features compared to find an upper bound for the failure probability.

Journal ArticleDOI
TL;DR: In this article, a stochastic derivative-free optimization approach is proposed for numerical optimization problems with uncertainties in both simulation parameters and design variables, and is shown to offer a significant improvement in efficiency over traditional Monte Carlo schemes.

Journal ArticleDOI
TL;DR: In this paper, a nonparametric probabilistic approach is employed to model the uncertainties of the drill-string structure and of the bit-rock interaction model, in order to maximize the expected mean rate of penetration while respecting the vibration, stress, and fatigue limits of the dynamical system.
Abstract: This work proposes a strategy for the robust optimization of the nonlinear dynamics of a drill-string, which is a structure that rotates and digs into the rock to search for oil. The nonparametric probabilistic approach is employed to model the uncertainties of the structure as well as the uncertainties of the bit-rock interaction model. This paper is particularly concerned with the robust optimization of the rate of penetration of the column, i.e., we aim to maximize the mathematical expectation of the mean rate of penetration, respecting the integrity of the system. The variables of the optimization problem are the rotational speed at the top and the initial reaction force at the bit; they are considered deterministic. The goal is to find the set of variables that maximizes the expected mean rate of penetration while respecting the vibration limits, stress limit, and fatigue limit of the dynamical system.

Journal ArticleDOI
TL;DR: In this article, a method is introduced with which the uncertainty of a coupled system's FRFs can be quantified based on the uncertainties of the subsystem FRFs; it is applied to a numerical example, which shows that uncertainty in the substructure FRFs can significantly influence the accuracy of the coupled system.