
Showing papers on "Uncertainty quantification published in 2016"


Journal ArticleDOI
TL;DR: The framework is demonstrated on a number of problems, solving both the flow and adjoint systems of equations to provide a high-fidelity predictive capability and sensitivity information that can be used for optimal shape design using a gradient-based framework, goal-oriented adaptive mesh refinement, or uncertainty quantification.
Abstract: This paper presents the main objectives and a description of the SU2 suite, including the novel software architecture and open-source software engineering strategy. SU2 is a computational analysis and design package that has been developed to solve multiphysics analysis and optimization tasks using unstructured mesh topologies. Its unique architecture is well suited for extensibility to treat partial-differential-equation-based problems not initially envisioned. The common framework adopted enables the rapid implementation of new physics packages that can be tightly coupled to form a powerful ensemble of analysis tools to address complex problems facing many engineering communities. The framework is demonstrated on a number of problems, solving both the flow and adjoint systems of equations to provide a high-fidelity predictive capability and sensitivity information that can be used for optimal shape design using a gradient-based framework, goal-oriented adaptive mesh refinement, or uncertainty quantification.
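As a rough illustration of how adjoint-derived gradients feed a gradient-based shape-design loop (this is not SU2's API; `evaluate_flow_and_adjoint` is a hypothetical stand-in backed by an analytic surrogate so the sketch runs on its own):

```python
import numpy as np

def evaluate_flow_and_adjoint(design):
    """Hypothetical stand-in for a coupled flow + adjoint solve.  An analytic
    quadratic 'drag' surrogate is used so the sketch runs on its own; a real
    workflow would call the CFD and adjoint solvers here."""
    target = np.array([0.3, -0.1, 0.05])
    objective = np.sum((design - target) ** 2)
    gradient = 2.0 * (design - target)       # what the adjoint solve would supply
    return objective, gradient

design = np.zeros(3)        # e.g. bump-function amplitudes on an airfoil surface
step = 0.4                  # fixed step; practical loops use a line search
for it in range(50):
    obj, grad = evaluate_flow_and_adjoint(design)
    if np.linalg.norm(grad) < 1e-8:
        break
    design -= step * grad   # steepest-descent update from the adjoint gradient
print(f"iterations={it}, objective={obj:.3e}")
```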

581 citations


Journal ArticleDOI
TL;DR: A sensitivity analysis toolbox consisting of a set of Matlab functions that offer utilities for quantifying the influence of uncertain input parameters on uncertain model outputs is provided.

490 citations


Journal ArticleDOI
TL;DR: In this article, the authors discussed the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field and derived the expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses.
Abstract: This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5-10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
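A minimal numpy sketch of the two propagation ideas summarized above, assuming a uniform grid, central-difference vorticity, and a given effective sample size; the formulas follow standard linear propagation and are not taken verbatim from the paper:

```python
import numpy as np

def vorticity_uncertainty(sigma_u, sigma_v, rho, dx, dy):
    """Std of central-difference vorticity (dv/dx - du/dy) when the two
    displacement errors entering each difference have standard deviations
    sigma_u, sigma_v and spatial cross-correlation rho."""
    var_dvdx = sigma_v ** 2 * (1.0 - rho) / (2.0 * dx ** 2)
    var_dudy = sigma_u ** 2 * (1.0 - rho) / (2.0 * dy ** 2)
    return np.sqrt(var_dvdx + var_dudy)

def mean_uncertainty(samples, n_effective):
    """Random uncertainty of the sample mean, using an effective number of
    independent samples to account for correlated (e.g. time-resolved) data."""
    return np.std(samples, ddof=1) / np.sqrt(n_effective)

rng = np.random.default_rng(0)
u = 5.0 + 0.2 * rng.standard_normal(2000)       # synthetic velocity record, m/s
print(mean_uncertainty(u, n_effective=400))     # ~0.2 / sqrt(400) = 0.01 m/s
print(vorticity_uncertainty(0.1, 0.1, rho=0.5, dx=1e-3, dy=1e-3))  # 1/s
```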

317 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a nonparametric approach to generate very short-term predictive densities, i.e., for lead times between a few minutes to one hour ahead, with fast frequency updates.
Abstract: Due to the inherent uncertainty involved in renewable energy forecasting, uncertainty quantification is a key input to maintain acceptable levels of reliability and profitability in power system operation. A proposal is formulated and evaluated here for the case of solar power generation, when only power and meteorological measurements are available, without sky-imaging and information about cloud passages. Our empirical investigation reveals that the distribution of forecast errors does not follow any of the common parametric densities. This therefore motivates the proposal of a nonparametric approach to generate very short-term predictive densities, i.e., for lead times between a few minutes and one hour ahead, with fast frequency updates. We rely on an Extreme Learning Machine (ELM) as a fast regression model, trained in varied ways to obtain both point and quantile forecasts of solar power generation. Four probabilistic methods are implemented as benchmarks. Rival approaches are evaluated based on a number of test cases for two solar power generation sites in different climatic regions, allowing us to show that our approach results in generation of skilful and reliable probabilistic forecasts in a computationally efficient manner.
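A hedged sketch of the general approach on synthetic data: a basic extreme learning machine (random hidden layer, ridge-regressed output weights) provides point forecasts, and predictive quantiles come from the empirical distribution of training residuals. Names, data, and settings here are illustrative, not the authors' implementation or benchmarks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "solar power" series; predict the next value from the two previous ones.
t = np.arange(3000)
power = np.clip(np.sin(2 * np.pi * t / 300) + 0.1 * rng.standard_normal(t.size), 0, None)
X = np.column_stack([power[:-2], power[1:-1]])
y = power[2:]
X_tr, y_tr, X_te, y_te = X[:2500], y[:2500], X[2500:], y[2500:]

# Extreme learning machine: random hidden layer, ridge-regressed output weights.
n_hidden, lam = 100, 1e-3
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)
H_tr = np.tanh(X_tr @ W + b)
beta = np.linalg.solve(H_tr.T @ H_tr + lam * np.eye(n_hidden), H_tr.T @ y_tr)
point_tr = H_tr @ beta
point_te = np.tanh(X_te @ W + b) @ beta

# Nonparametric predictive density: empirical quantiles of training residuals.
q = np.quantile(y_tr - point_tr, [0.05, 0.5, 0.95])
intervals = point_te[:, None] + q                     # 5%, 50%, 95% predictive levels
coverage = np.mean((y_te >= intervals[:, 0]) & (y_te <= intervals[:, 2]))
print(f"empirical coverage of the nominal 90% interval: {coverage:.2f}")
```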

240 citations


Journal ArticleDOI
Heng Xiao1, Jin-Long Wu1, Jian-Xun Wang1, R. Sun1, Christopher J. Roy1 
TL;DR: In this article, a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations is proposed, where uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints.

226 citations


Journal ArticleDOI
TL;DR: A comprehensive uncertainty quantification procedure is presented to quantify multiple types of uncertainty using multiplicative and additive UQ methods, and the factors that contribute most to the resulting output uncertainty are investigated and identified for uncertainty reduction in decision-making.

220 citations


Journal ArticleDOI
Keith Beven1
TL;DR: It is suggested that a condition tree be used to make explicit the assumptions that underlie any assessment of uncertainty in the analysis and modelling of hydrological systems; this also provides an audit trail for presenting evidence to decision makers.
Abstract: This paper presents a discussion of some of the issues associated with the multiple sources of uncertainty and non-stationarity in the analysis and modelling of hydrological systems. Different forms of aleatory, epistemic, semantic, and ontological uncertainty are defined. The potential for epistemic uncertainties to induce disinformation in calibration data, arbitrary non-stationarities in model error characteristics, and surprises in predicting the future is discussed in the context of other forms of non-stationarity. It is suggested that a condition tree be used to make explicit the assumptions that underlie any assessment of uncertainty. This also provides an audit trail for presenting evidence to decision makers.

210 citations


Journal ArticleDOI
TL;DR: In this article, a method for estimating the (co)variance of modal characteristics that are identified with the stochastic subspace identification method is validated for two civil engineering structures.

143 citations


Journal ArticleDOI
TL;DR: A probabilistic version of AS is developed that is gradient-free and robust to observational noise, and that is able to discover the same AS as the classical approach in a challenging one-hundred-dimensional problem involving an elliptic stochastic partial differential equation with random conductivity.

142 citations


Journal ArticleDOI
TL;DR: By introducing correlative global sensitivity analysis and uncertainty quantification, it is shown that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions.
Abstract: Kinetic models based on first principles are becoming common place in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated in ethanol steam reforming for hydrogen production for fuel cells.
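The central point, that ignoring correlation between uncertain energies distorts both the predicted spread and the apparent parameter importance, can be illustrated with a toy two-step Arrhenius rate and hypothetical numbers (this is not the paper's ethanol steam reforming model):

```python
import numpy as np

rng = np.random.default_rng(2)
kB_T = 8.617e-5 * 700        # eV, k_B * T at ~700 K

def rate(E1, E2):
    """Toy two-step rate limited by the larger effective barrier."""
    return np.exp(-np.maximum(E1, E2) / kB_T)

mean = np.array([1.0, 1.1])      # eV, hypothetical activation energies
sigma = np.array([0.15, 0.15])   # DFT-level uncertainty

# Independent errors vs. strongly correlated errors (e.g. a shared functional bias).
for label, rho in [("independent", 0.0), ("correlated", 0.9)]:
    cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1] ** 2]])
    E = rng.multivariate_normal(mean, cov, size=20_000)
    log_r = np.log(rate(E[:, 0], E[:, 1]))
    sens = [round(abs(np.corrcoef(log_r, E[:, i])[0, 1]), 2) for i in range(2)]
    print(f"{label:11s} spread(log rate)={log_r.std():.2f}  sensitivities={sens}")
```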

135 citations


Journal ArticleDOI
TL;DR: This paper presents two case studies in which probability distributions, instead of individual numbers, are inferred from data to describe quantities such as maximal current densities, and shows how changes in these probability distributions across data sets offer insight into which currents cause beat-to-beat variability in canine APs.

Journal ArticleDOI
TL;DR: An observer-based prognostic algorithm is proposed to estimate the state of health and the dynamics of degradation, with the associated uncertainty, of the proton exchange membrane fuel cell, which suffers from a limited lifespan due to degradation mechanisms.
Abstract: Although the proton exchange membrane fuel cell is a promising clean and efficient energy converter that can be used to power an entire building with electricity and heat in a combined manner, it suffers from a limited lifespan due to degradation mechanisms. As a consequence, in the past years, research has been conducted to estimate the state of health and now the remaining useful life (RUL) in order to extend the life of such devices. However, the developed methods are unable to perform prognostics with an online uncertainty quantification due to the computational cost. This paper aims at tackling this issue by proposing an observer-based prognostic algorithm. An extended Kalman filter estimates the actual state of health and the dynamics of the degradation with the associated uncertainty. An inverse first-order reliability method is used to extrapolate the state of health until a threshold is reached, for which the RUL is given with a 90% confidence interval. The global method is validated using a simulation model built from degradation data. Finally, the algorithm is tested on a dataset coming from a long-term experimental test on an eight-cell fuel cell stack subjected to a variable power profile.
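A simplified sketch of the final prognostic step only: extrapolating an uncertain health state to a failure threshold. It substitutes Monte Carlo sampling for the paper's inverse first-order reliability method and uses a hypothetical linear degradation model and numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

# Current state estimate (e.g. from an extended Kalman filter):
# x = [health indicator, degradation rate per hour], with covariance P.
x_hat = np.array([0.92, -2.0e-4])
P = np.array([[1e-4, 0.0],
              [0.0,  4e-9]])
threshold = 0.80                     # end-of-life criterion on the indicator

# Monte Carlo extrapolation to the threshold (used here in place of inverse FORM).
samples = rng.multivariate_normal(x_hat, P, size=20_000)
health, rate = samples[:, 0], samples[:, 1]
rate = np.minimum(rate, -1e-6)       # keep the degradation strictly decreasing
rul = (threshold - health) / rate    # hours until the threshold is crossed

lo, med, hi = np.quantile(rul, [0.05, 0.5, 0.95])
print(f"RUL ~ {med:.0f} h, 90% interval [{lo:.0f}, {hi:.0f}] h")
```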

Journal ArticleDOI
TL;DR: The Monte Carlo simulation method is utilized to compute the DFT model with consideration of the system replacement policy, and the results show that this integrated approach is more flexible and effective for assessing the reliability of complex dynamic systems.

Journal ArticleDOI
TL;DR: This work explains the concepts of global UQ and global, variance-based SA along with two often-used methods that are applicable to any model without requiring model implementation changes: Monte Carlo (MC) and polynomial chaos (PC).
Abstract: As we shift from population-based medicine towards a more precise patient-specific regime guided by predictions of verified and well-established cardiovascular models, an urgent question arises: how sensitive are the model predictions to errors and uncertainties in the model inputs? To make our models suitable for clinical decision-making, precise knowledge of prediction reliability is of paramount importance. Efficient and practical methods for uncertainty quantification (UQ) and sensitivity analysis (SA) are therefore essential. In this work, we explain the concepts of global UQ and global, variance-based SA along with two often-used methods that are applicable to any model without requiring model implementation changes: Monte Carlo (MC) and polynomial chaos (PC). Furthermore, we propose a guide for UQ and SA according to a six-step procedure and demonstrate it for two clinically relevant cardiovascular models: model-based estimation of the fractional flow reserve (FFR) and model-based estimation of the total arterial compliance (CT). Both MC and PC produce identical results and may be used interchangeably to identify the most significant model inputs with respect to uncertainty in model predictions of FFR and CT. However, PC is more cost-efficient as it requires an order of magnitude fewer model evaluations than MC. Additionally, we demonstrate that targeted reduction of uncertainty in the most significant model inputs reduces the uncertainty in the model predictions efficiently. In conclusion, this article offers a practical guide to UQ and SA to help move the clinical application of mathematical models forward.
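A minimal sketch contrasting the two methods on a toy scalar model with one Gaussian input (not the FFR or CT models): Monte Carlo uses many model runs, while a non-intrusive polynomial chaos expansion fitted by least squares needs far fewer to estimate the same mean and variance:

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(4)

def model(z):
    """Toy scalar response of one standardized Gaussian input."""
    return np.exp(0.3 * z) + 0.1 * z ** 2

# Monte Carlo: many model evaluations.
z_mc = rng.standard_normal(100_000)
y_mc = model(z_mc)
print("MC  mean/var:", y_mc.mean(), y_mc.var())

# Non-intrusive polynomial chaos: few runs, least-squares fit in He_k(z).
order = 6
z_tr = rng.standard_normal(64)                 # far fewer model evaluations
Phi = hermevander(z_tr, order)                 # probabilists' Hermite basis
coef, *_ = np.linalg.lstsq(Phi, model(z_tr), rcond=None)
norms = np.array([math.factorial(k) for k in range(order + 1)])  # E[He_k^2] = k!
print("PCE mean/var:", coef[0], np.sum(coef[1:] ** 2 * norms[1:]))
```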

Journal ArticleDOI
TL;DR: It is shown that uncertainty in minimum lumen diameter (MLD) has the largest impact on hemodynamic simulations, followed by boundary resistance, viscosity and lesion length, and the method presented here can be used to output interval estimates of hemodynamic indices and visualize patient-specific maps of sensitivities.

Journal ArticleDOI
TL;DR: In this article, a joint estimation of the input and the system states is performed in a minimum-variance unbiased way, based on a limited number of response measurements and a system model.

Journal ArticleDOI
TL;DR: The need for formalized approaches to unifying numerical modelling with expert judgement is highlighted in order to facilitate characterization of uncertainty in a reproducible, consistent and transparent fashion, and indicators or signposts that characterize successful science-based uncertainty quantification are recommended.
Abstract: Expert judgement is often used to assess uncertainties in model-based climate change projections. This Perspective describes a statistical approach to formalizing the role of expert judgement, using Antarctic ice loss as an illustrative example. Expert judgement is an unavoidable element of the process-based numerical models used for climate change projections, and the statistical approaches used to characterize uncertainty across model ensembles. Here, we highlight the need for formalized approaches to unifying numerical modelling with expert judgement in order to facilitate characterization of uncertainty in a reproducible, consistent and transparent fashion. As an example, we use probabilistic inversion, a well-established technique used in many other applications outside of climate change, to fuse two recent analyses of twenty-first century Antarctic ice loss. Probabilistic inversion is but one of many possible approaches to formalizing the role of expert judgement, and the Antarctic ice sheet is only one possible climate-related application. We recommend indicators or signposts that characterize successful science-based uncertainty quantification.

Journal ArticleDOI
TL;DR: In this paper, the authors explore probability modeling of discretization uncertainty for system states defined implicitly by ordinary or partial differential equations, and propose a formalism to infer a fixed but a priori unknown model trajectory through Bayesian updating of a prior process conditional on model information.
Abstract: We explore probability modelling of discretization uncertainty for system states defined implicitly by ordinary or partial differential equations. Accounting for this uncertainty can avoid posterior under-coverage when likelihoods are constructed from a coarsely discretized approximation to system equations. A formalism is proposed for inferring a fixed but a priori unknown model trajectory through Bayesian updating of a prior process conditional on model information. A one-step-ahead sampling scheme for interrogating the model is described, its consistency and first order convergence properties are proved, and its computational complexity is shown to be proportional to that of numerical explicit one-step solvers. Examples illustrate the flexibility of this framework to deal with a wide variety of complex and large-scale systems. Within the calibration problem, discretization uncertainty defines a layer in the Bayesian hierarchy, and a Markov chain Monte Carlo algorithm that targets this posterior distribution is presented. This formalism is used for inference on the JAK-STAT delay differential equation model of protein dynamics from indirectly observed measurements. The discussion outlines implications for the new field of probabilistic numerics.
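In the spirit of this idea, though much simpler than the authors' Bayesian updating scheme, a perturbed explicit Euler ensemble makes discretization uncertainty visible: the injected noise scales with the step size, so the spread shrinks as the grid is refined (assumed logistic test equation):

```python
import numpy as np

rng = np.random.default_rng(5)

def f(t, x):
    """Logistic growth, standing in for the ODE right-hand side."""
    return x * (1.0 - x)

def perturbed_euler(x0, t_end, h, noise_scale, n_ensemble):
    """Explicit Euler with an additive N(0, (noise_scale*h**1.5)**2) perturbation
    at every step, so the injected noise vanishes as the step size shrinks."""
    x = np.full(n_ensemble, x0, dtype=float)
    for k in range(int(round(t_end / h))):
        x = x + h * f(k * h, x) + noise_scale * h ** 1.5 * rng.standard_normal(n_ensemble)
    return x

for h in [0.5, 0.1, 0.02]:
    x_end = perturbed_euler(x0=0.1, t_end=5.0, h=h, noise_scale=0.5, n_ensemble=500)
    print(f"h={h:5}: mean={x_end.mean():.4f}, discretization spread={x_end.std():.4f}")
```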

Journal ArticleDOI
TL;DR: It is found that a simultaneous fit to different sets of data is critical to identify the optimal set of LECs, capture all relevant correlations, reduce the statistical uncertainty, and attain order-by-order convergence in χEFT.
Abstract: Chiral effective field theory (χEFT) provides a systematic approach to describe low-energy nuclear forces. Moreover, χEFT is able to provide well-founded estimates of statistical and systematic uncertainties, although this unique advantage has not yet been fully exploited. We fill this gap by performing an optimization and statistical analysis of all the low-energy constants (LECs) up to next-to-next-to-leading order. Our optimization protocol corresponds to a simultaneous fit to scattering and bound-state observables in the pion-nucleon, nucleon-nucleon, and few-nucleon sectors, thereby utilizing the full model capabilities of χEFT. Finally, we study the effect on other observables by demonstrating forward-error-propagation methods that can easily be adopted by future works. We employ mathematical optimization and implement automatic differentiation to attain efficient and machine-precise first- and second-order derivatives of the objective function with respect to the LECs. This is also vital for the regression analysis. We use power-counting arguments to estimate the systematic uncertainty that is inherent to χEFT, and we construct chiral interactions at different orders with quantified uncertainties. Statistical error propagation is compared with Monte Carlo sampling, showing that statistical errors are, in general, small compared to systematic ones. In conclusion, we find that a simultaneous fit to different sets of data is critical to (i) identify the optimal set of LECs, (ii) capture all relevant correlations, (iii) reduce the statistical uncertainty, and (iv) attain order-by-order convergence in χEFT. Furthermore, certain systematic uncertainties in the few-nucleon sector are shown to get substantially magnified in the many-body sector, in particular when varying the cutoff in the chiral potentials. The methodology and results presented in this paper open a new frontier for uncertainty quantification in ab initio nuclear theory.
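The forward statistical error propagation mentioned above is commonly done with the linear "sandwich" formula sigma^2 = J C J^T and checked against Monte Carlo sampling; a sketch with a toy observable and a hypothetical LEC covariance (not the actual chiral interaction):

```python
import numpy as np

rng = np.random.default_rng(6)

def observable(lecs):
    """Toy nonlinear observable of three 'low-energy constants'."""
    c1, c2, c3 = lecs
    return c1 ** 2 + 0.5 * c1 * c2 + np.sin(c3)

lecs_opt = np.array([1.2, -0.8, 0.3])               # hypothetical fitted LECs
cov = np.diag([0.02, 0.05, 0.01]) ** 2              # hypothetical statistical covariance
cov[0, 1] = cov[1, 0] = 0.5 * 0.02 * 0.05           # correlation inherited from the fit

# Linear ("sandwich") propagation with a finite-difference Jacobian.
eps = 1e-6
J = np.array([(observable(lecs_opt + eps * e) - observable(lecs_opt - eps * e)) / (2 * eps)
              for e in np.eye(3)])
sigma_lin = np.sqrt(J @ cov @ J)

# Monte Carlo propagation for comparison.
samples = rng.multivariate_normal(lecs_opt, cov, size=50_000)
sigma_mc = np.std([observable(s) for s in samples])
print(f"linear: {sigma_lin:.4f}   Monte Carlo: {sigma_mc:.4f}")
```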

Journal ArticleDOI
TL;DR: In this article, the authors proposed a sequential design algorithm MICE (mutual information for computer experiments) that adaptively selects the input values at which to run the computer simulator in order to maximize the expected information gain over the input space.
Abstract: Computer simulators can be computationally intensive to run over a large number of input values, as required for optimization and various uncertainty quantification tasks. The standard paradigm for the design and analysis of computer experiments is to employ Gaussian random fields to model computer simulators. Gaussian process models are trained on input-output data obtained from simulation runs at various input values. Following this approach, we propose a sequential design algorithm MICE (mutual information for computer experiments) that adaptively selects the input values at which to run the computer simulator in order to maximize the expected information gain (mutual information) over the input space. The superior computational efficiency of the MICE algorithm compared to other algorithms is demonstrated by test functions and by a tsunami simulator with overall gains of up to 20% in that case.
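A sketch of the general sequential-design loop, using scikit-learn's Gaussian process and, for brevity, a maximum-predictive-variance proxy where MICE would evaluate its mutual-information criterion; the simulator is a cheap analytic stand-in:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    """Cheap analytic stand-in for an expensive computer code."""
    return np.sin(6 * x) + 0.3 * x

candidates = np.linspace(0, 1, 201).reshape(-1, 1)

# Start from a few space-filling runs, then add one run at a time.
X = np.array([[0.1], [0.5], [0.9]])
y = simulator(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8)

for step in range(10):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]   # MICE would maximize a mutual-information gain
    X = np.vstack([X, x_next])
    y = np.append(y, simulator(x_next)[0])

print("selected inputs:", np.round(np.sort(X.ravel()), 3))
```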

Journal ArticleDOI
TL;DR: A Bayesian approach is adopted in SMU for parameter uncertainty quantification, and variance-based global sensitivity analysis (GSA) is used in parameter selection to exclude non-influential parameters from the calibration parameters, which yields a reduced-order model and thus further alleviates the computational burden.

Journal ArticleDOI
TL;DR: The theoretical foundation of a state-of-the-art uncertainty quantification method, dimension-adaptive sparse grid interpolation (DASGI), is presented and introduced into applications of probabilistic power flow (PPF).
Abstract: In this paper, the authors first present the theoretical foundation of a state-of-the-art uncertainty quantification method, dimension-adaptive sparse grid interpolation (DASGI), before introducing it into applications of probabilistic power flow (PPF). It is well known that numerous sources of uncertainty are being brought into the present-day electrical grid by large-scale integration of renewable, thus volatile, generation, e.g., wind power, and by unprecedented load behaviors. In the presence of these added uncertainties, it is imperative to move beyond traditional deterministic power flow (DPF) calculation and take them into account in routine operation and planning. However, PPF analysis is still quite challenging due to two features of the uncertainty in modern power systems: high dimensionality and the presence of stochastic interdependence. Both are traditionally addressed by Monte Carlo simulation (MCS) at the cost of cumbersome computation; in this paper instead, they are tackled with the joint application of DASGI and Copula theory (especially advantageous for constructing nonlinear dependence among various uncertainty sources), in order to accomplish the dependent high-dimensional PPF analysis in an accurate and faster manner. Based on the theory of DASGI, its combination with Copula and the DPF for the PPF is also introduced systematically in this work. Finally, the feasibility and the effectiveness of this methodology are validated by the test results of two standard IEEE test cases.
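The copula ingredient can be sketched on its own: a Gaussian copula turns correlated normals into dependent samples with arbitrary marginals (hypothetical wind and load marginals below), and each joint sample would then drive one deterministic power-flow solve, with DASGI replacing this plain sampling in the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Hypothetical rank dependence between wind output and system load.
corr = np.array([[1.0, -0.4],
                 [-0.4, 1.0]])

# Gaussian copula: correlated normals -> uniforms -> arbitrary marginals.
z = rng.multivariate_normal(np.zeros(2), corr, size=10_000)
u = stats.norm.cdf(z)
wind = stats.beta(2.0, 5.0).ppf(u[:, 0]) * 100.0   # MW, skewed wind-power marginal
load = stats.norm(300.0, 20.0).ppf(u[:, 1])        # MW, near-Gaussian load marginal

print("sample Spearman correlation:", round(stats.spearmanr(wind, load)[0], 2))
# Each pair (wind[i], load[i]) would parameterize one deterministic power-flow run.
```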

Journal ArticleDOI
TL;DR: It is shown that a simplified version of the original KO method leads to asymptotically $L_2$-inconsistent calibration, which can be remedied by modifying the original estimation procedure.
Abstract: Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are not available in physical experiments. Kennedy and O'Hagan [M.C. Kennedy and A. O'Hagan, J. R. Stat. Soc. Ser. B Stat. Methodol., 63 (2001), pp. 425--464] suggested an approach to estimating them by using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define the $L_2$-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original Kennedy--O'Hagan (KO) method leads to asymptotically $L_2$-inconsistent calibration. This $L_2$-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called $L_2$ calibration, is proposed, proven to be $L_2$-consistent, and enjoys optimal convergence rate. A numerical example and some mathematical analysis are used to illustrate th...
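The gist of L2 calibration in a toy 1-D setting: first estimate the physical response nonparametrically (a smoothing spline here, in place of the kernel/GP-type estimator analyzed in the paper), then pick the calibration parameter minimizing the L2 distance to the computer model:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(9)

def computer_model(x, theta):
    return np.sin(theta * x)

theta_true = 2.0
x_phys = np.sort(rng.uniform(0, 3, 60))
y_phys = computer_model(x_phys, theta_true) + 0.1 * rng.standard_normal(60)

# Step 1: nonparametric estimate of the true physical response.
f_hat = UnivariateSpline(x_phys, y_phys, s=60 * 0.1 ** 2)

# Step 2: L2 calibration -- minimize the squared distance between the
# estimated response and the computer model over the calibration parameter.
x_grid = np.linspace(0, 3, 400)
def l2_distance(theta):
    return np.trapz((f_hat(x_grid) - computer_model(x_grid, theta)) ** 2, x_grid)

res = minimize_scalar(l2_distance, bounds=(0.5, 4.0), method="bounded")
print(f"estimated theta = {res.x:.3f} (true value {theta_true})")
```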

Journal ArticleDOI
TL;DR: A probabilistic framework is proposed to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states; it results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation.

Journal ArticleDOI
TL;DR: This work investigates a gradient-enhanced ℓ1-minimization.

Journal ArticleDOI
TL;DR: It is shown that a simple singular value decomposition of gPC-related coefficients combined with the fast Fourier-spectral method allows one to compute the high-dimensional collision operator very efficiently.

Journal ArticleDOI
TL;DR: The pyEMU framework as mentioned in this paper implements several types of linear (first-order, second-moment (FOSM)) and non-linear uncertainty analyses, which can be used to design objective functions and parameterizations.
Abstract: We have developed pyEMU, a python framework for Environmental Modeling Uncertainty analyses, an open-source tool that is non-intrusive, easy-to-use, computationally efficient, and scalable to highly-parameterized inverse problems. The framework implements several types of linear (first-order, second-moment (FOSM)) and non-linear uncertainty analyses. The FOSM-based analyses can also be completed prior to parameter estimation to help inform important modeling decisions, such as parameterization and objective function formulation. Complete workflows for several types of FOSM-based and non-linear analyses are documented in example notebooks implemented using Jupyter that are available in the online pyEMU repository. Example workflows include basic parameter and forecast analyses, data worth analyses, and error-variance analyses, as well as usage of parameter ensemble generation and management capabilities. These workflows document the necessary steps and provide insights into the results, with the goal of educating users not only in how to apply pyEMU, but also in the underlying theory of applied uncertainty quantification. Highlights: pyEMU is a python framework for model-independent uncertainty analysis and supports highly-parameterized inversion. pyEMU exposes several methods for data-worth analysis for designing observation networks and data collection activities. pyEMU can be used to estimate parameter and forecast uncertainty before inversion. pyEMU can be used to design objective functions and parameterizations.
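The FOSM analyses that pyEMU automates boil down to linear Bayes: a Jacobian, an observation-noise covariance, and a prior give a posterior parameter covariance, which is then propagated to a forecast. A plain-numpy sketch with hypothetical matrices (deliberately not the pyEMU API):

```python
import numpy as np

rng = np.random.default_rng(10)

n_par, n_obs = 8, 20
J = rng.standard_normal((n_obs, n_par))       # observation sensitivities (Jacobian)
C_prior = np.eye(n_par)                       # prior parameter covariance
C_obs = 0.5 ** 2 * np.eye(n_obs)              # observation noise covariance
s = rng.standard_normal(n_par)                # forecast sensitivity vector

# FOSM / linear-Bayes posterior parameter covariance (Schur-complement form).
K = C_prior @ J.T @ np.linalg.inv(J @ C_prior @ J.T + C_obs)
C_post = C_prior - K @ J @ C_prior

print(f"forecast std: prior {np.sqrt(s @ C_prior @ s):.3f} "
      f"-> posterior {np.sqrt(s @ C_post @ s):.3f}")

# Crude data-worth check: how much does dropping the first observation
# inflate the posterior forecast uncertainty?
J2, C_obs2 = J[1:], C_obs[1:, 1:]
K2 = C_prior @ J2.T @ np.linalg.inv(J2 @ C_prior @ J2.T + C_obs2)
C_post2 = C_prior - K2 @ J2 @ C_prior
print(f"without obs 0: posterior forecast std {np.sqrt(s @ C_post2 @ s):.3f}")
```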

Journal ArticleDOI
TL;DR: The effect of noise on a PNN-based uncertainty quantification algorithm is explored, and the convergence of the proposed algorithm for stochastic natural frequency analysis of composite plates is verified and validated against the original finite element method (FEM).

Posted Content
TL;DR: In this article, a Gaussian process regression with built-in dimensionality reduction with the active subspace represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data is proposed.
Abstract: The prohibitive cost of performing Uncertainty Quantification (UQ) tasks with a very large number of input parameters can be addressed, if the response exhibits some special structure that can be discovered and exploited. Several physical responses exhibit a special structure known as an active subspace (AS), a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction with the AS represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the orthogonality of the projection matrix by exploiting recent results on the Stiefel manifold. The additional benefit of our probabilistic formulation is that it allows us to select the dimensionality of the AS via the Bayesian information criterion. We validate our approach by showing that it can discover the right AS in synthetic examples without gradient information using both noiseless and noisy observations. We demonstrate that our method is able to discover the same AS as the classical approach in a challenging one-hundred-dimensional problem involving an elliptic stochastic partial differential equation with random conductivity. Finally, we use our approach to study the effect of geometric and material uncertainties in the propagation of solitary waves in a one-dimensional granular system.
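For contrast with this gradient-free formulation, the classical active-subspace construction averages gradient outer products and eigendecomposes; a toy sketch with a synthetic response whose gradient is known analytically:

```python
import numpy as np

rng = np.random.default_rng(11)
d = 10
w_true = rng.standard_normal(d)
w_true /= np.linalg.norm(w_true)

def f(x):
    """Synthetic response varying mostly along the single direction w_true."""
    return np.sin(x @ w_true) + 0.01 * np.sum(x ** 2, axis=-1)

def grad_f(x):
    """Analytic gradient of f, one row per sample."""
    return np.cos(x @ w_true)[:, None] * w_true + 0.02 * x

# Classical active subspace: eigendecompose C = E[grad f grad f^T].
X = rng.standard_normal((5000, d))
G = grad_f(X)
C = G.T @ G / X.shape[0]
eigval, eigvec = np.linalg.eigh(C)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]       # sort descending

print("leading eigenvalues:", np.round(eigval[:3], 3))
print("alignment of first eigenvector with w_true:",
      round(abs(eigvec[:, 0] @ w_true), 3))
```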

Journal ArticleDOI
TL;DR: In this paper, an effective stochastic geological modeling framework is proposed based on Markov random field theory, conditional on site investigation data such as observations of soil types from the ground surface, borehole logs, and strata orientations from geophysical tests.