
Showing papers on "Uncertainty quantification published in 2013"


Book
02 Dec 2013
TL;DR: Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines.
Abstract: The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers can find data used in the exercises and other supplementary material. Uncertainty Quantification: Theory, Implementation, and Applications includes a large number of definitions and examples that use a suite of relatively simple models to illustrate concepts; numerous references to current and open research issues; and exercises that illustrate basic concepts and guide readers through the numerical implementation of algorithms for prototypical problems. It also features a wide range of applications, including weather and climate models, subsurface hydrology and geology models, nuclear power plant design, and models for biological phenomena, along with recent advances and topics that have appeared in the research literature within the last 15 years, including aspects of Bayesian model calibration, surrogate model development, parameter selection techniques, and global sensitivity analysis. Audience: The text is intended for advanced undergraduates, graduate students, and researchers in mathematics, statistics, operations research, computer science, biology, science, and engineering. It can be used as a textbook for one- or two-semester courses on uncertainty quantification or as a resource for researchers in a wide array of disciplines. A basic knowledge of probability, linear algebra, ordinary and partial differential equations, and introductory numerical analysis techniques is assumed. Contents: Chapter 1: Introduction; Chapter 2: Large-Scale Applications; Chapter 3: Prototypical Models; Chapter 4: Fundamentals of Probability, Random Processes, and Statistics; Chapter 5: Representation of Random Inputs; Chapter 6: Parameter Selection Techniques; Chapter 7: Frequentist Techniques for Parameter Estimation; Chapter 8: Bayesian Techniques for Parameter Estimation; Chapter 9: Uncertainty Propagation in Models; Chapter 10: Stochastic Spectral Methods; Chapter 11: Sparse Grid Quadrature and Interpolation Techniques; Chapter 12: Prediction in the Presence of Model Discrepancy; Chapter 13: Surrogate Models; Chapter 14: Local Sensitivity Analysis; Chapter 15: Global Sensitivity Analysis; Appendix A: Concepts from Functional Analysis; Bibliography; Index

782 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a comprehensive, newly composed strategy that emphasises tests on high-frequency raw data and expands existing tests on statistics, fluxes, and corrections, plus quantification of errors.

382 citations


Journal ArticleDOI
TL;DR: This work proposes a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models, and focuses on finding sets of experiments that provide the most information about targeted sets of parameters.

372 citations


Journal ArticleDOI
TL;DR: In this paper, a method is presented to quantify the uncertainty of PIV data a posteriori: the unknown actual error of the measured velocity field is estimated using the velocity field itself as input, along with the original images.
Abstract: A novel method is presented to quantify the uncertainty of PIV data. The approach is a posteriori, i.e. the unknown actual error of the measured velocity field is estimated using the velocity field itself as input along with the original images. The principle of the method relies on the concept of super-resolution: the image pair is matched according to the cross-correlation analysis and the residual distance between matched particle image pairs (particle disparity vector) due to incomplete match between the two exposures is measured. The ensemble of disparity vectors within the interrogation window is analyzed statistically. The dispersion of the disparity vector returns the estimate of the random error, whereas the mean value of the disparity indicates the occurrence of a systematic error. The validity of the working principle is first demonstrated via Monte Carlo simulations. Two different interrogation algorithms are considered, namely the cross-correlation with discrete window offset and the multi-pass with window deformation. In the simulated recordings, the effects of particle image displacement, its gradient, out-of-plane motion, seeding density and particle image diameter are considered. In all cases good agreement is retrieved, indicating that the error estimator is able to follow the trend of the actual error with satisfactory precision. Experiments where time-resolved PIV data are available are used to prove the concept under realistic measurement conditions. In this case the ‘exact’ velocity field is unknown; however a high accuracy estimate is obtained with an advanced interrogation algorithm that exploits the redundant information of highly temporally oversampled data (pyramid correlation, Sciacchitano et al (2012 Exp. Fluids 53 1087–105)). The image-matching estimator returns the instantaneous distribution of the estimated velocity measurement error. The spatial distribution compares very well with that of the actual error with maxima in the highly sheared regions and in the 3D turbulent regions. The high level of correlation between the estimated error and the actual error indicates that this new approach can be utilized to directly infer the measurement uncertainty from PIV data. A procedure is shown where the results of the error estimation are employed to minimize the measurement uncertainty by selecting the optimal interrogation window size.
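
As a rough illustration of the disparity-statistics idea described above, a minimal Python sketch (hypothetical data and a made-up function name, not the authors' implementation) of estimating the systematic and random error components within one interrogation window might look like this:

```python
import numpy as np

def disparity_error_estimate(disparity):
    """Estimate systematic and random error components from the ensemble of
    particle-image disparity vectors within one interrogation window.

    disparity : (N, 2) array of residual particle-position mismatches (pixels).
    Returns (systematic, random) per-component error estimates (pixels).
    """
    disparity = np.asarray(disparity, dtype=float)
    n = disparity.shape[0]
    # The mean disparity indicates a systematic (bias) error in the window.
    systematic = disparity.mean(axis=0)
    # The dispersion of the disparity ensemble gives the random-error estimate;
    # dividing by sqrt(N) reflects averaging over the N matched particle pairs.
    random = disparity.std(axis=0, ddof=1) / np.sqrt(n)
    return systematic, random

# Synthetic disparities for a single interrogation window (hypothetical values)
rng = np.random.default_rng(0)
disp = rng.normal(loc=[0.05, -0.02], scale=0.1, size=(40, 2))
bias, rand_err = disparity_error_estimate(disp)
print("systematic error estimate (px):", bias)
print("random error estimate (px):   ", rand_err)
```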

238 citations


Journal ArticleDOI
TL;DR: In this article, the uncertainty in the numerical solution of linearized infinite-dimensional statistical inverse problems is estimated using the Bayesian inference formulation, where the prior probability distribution is chosen appropriately in order to guarantee well-posedness of the inverse problem and facilitate computation of the posterior.
Abstract: We present a computational framework for estimating the uncertainty in the numerical solution of linearized infinite-dimensional statistical inverse problems. We adopt the Bayesian inference formulation: given observational data and their uncertainty, the governing forward problem and its uncertainty, and a prior probability distribution describing uncertainty in the parameter field, find the posterior probability distribution over the parameter field. The prior must be chosen appropriately in order to guarantee well-posedness of the infinite-dimensional inverse problem and facilitate computation of the posterior. Furthermore, straightforward discretizations may not lead to convergent approximations of the infinite-dimensional problem. And finally, solution of the discretized inverse problem via explicit construction of the covariance matrix is prohibitive due to the need to solve the forward problem as many times as there are parameters. Our computational framework builds on the infinite-dimensional form...
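
For intuition, a finite-dimensional Gaussian-linear analogue of this Bayesian formulation can be written in a few lines. The sketch below (hypothetical names, ordinary dense linear algebra rather than the paper's scalable infinite-dimensional machinery) computes the posterior mean and covariance for d = A m + e with Gaussian prior and noise:

```python
import numpy as np

def gaussian_linear_posterior(A, d, noise_cov, prior_mean, prior_cov):
    """Posterior mean and covariance for the linear Gaussian inverse problem
    d = A m + e,  e ~ N(0, noise_cov),  m ~ N(prior_mean, prior_cov).
    Dense linear algebra only; a finite-dimensional analogue for intuition."""
    noise_prec = np.linalg.inv(noise_cov)
    prior_prec = np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(A.T @ noise_prec @ A + prior_prec)
    post_mean = post_cov @ (A.T @ noise_prec @ d + prior_prec @ prior_mean)
    return post_mean, post_cov

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))                   # hypothetical linearized forward map
m_true = rng.normal(size=5)
d = A @ m_true + 0.1 * rng.normal(size=20)     # noisy synthetic observations
m_post, C_post = gaussian_linear_posterior(A, d, 0.01 * np.eye(20),
                                           np.zeros(5), np.eye(5))
print("posterior mean:", m_post)
print("posterior std :", np.sqrt(np.diag(C_post)))
```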

229 citations


Journal ArticleDOI
TL;DR: A finite element approximation of elliptic partial differential equations with random coefficients is considered, and the resulting error bounds are used to perform a rigorous analysis of the multilevel Monte Carlo method for elliptic problems that lack full regularity and uniform coercivity and boundedness.
Abstract: We consider a finite element approximation of elliptic partial differential equations with random coefficients. Such equations arise, for example, in uncertainty quantification in subsurface flow modeling. Models for random coefficients frequently used in these applications, such as log-normal random fields with exponential covariance, have only very limited spatial regularity and lead to variational problems that lack uniform coercivity and boundedness with respect to the random parameter. In our analysis we overcome these challenges by a careful treatment of the model problem almost surely in the random parameter, which then enables us to prove uniform bounds on the finite element error in standard Bochner spaces. These new bounds can then be used to perform a rigorous analysis of the multilevel Monte Carlo method for these elliptic problems that lack full regularity and uniform coercivity and boundedness. To conclude, we give some numerical results that confirm the new bounds.
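
The multilevel Monte Carlo estimator analyzed here rests on the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]. The following minimal sketch (a toy quadrature "solver" and arbitrarily chosen sample counts, not the paper's PDE setting) shows the structure of such an estimator:

```python
import numpy as np

def mlmc_estimate(sampler, n_samples):
    """Multilevel Monte Carlo estimate of E[P] via the telescoping sum
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].

    sampler(level, n) must return paired arrays (P_level, P_level_minus_1)
    evaluated on the same random inputs; at level 0 the coarse term is zero.
    (Hypothetical interface, for illustration only.)"""
    estimate = 0.0
    for level, n in enumerate(n_samples):
        fine, coarse = sampler(level, n)
        estimate += np.mean(fine - coarse)   # Monte Carlo estimate of E[P_l - P_{l-1}]
    return estimate

def toy_sampler(level, n, rng=np.random.default_rng(0)):
    """Toy 'solver': midpoint quadrature of a random integrand with 2**(level+2)
    cells, so higher levels are more accurate (and would be more costly)."""
    def approx(z, lvl):
        m = 2 ** (lvl + 2)
        x = (np.arange(m) + 0.5) / m
        return np.exp(-z * x ** 2).mean()    # ~ integral_0^1 exp(-z x^2) dx
    z = rng.exponential(size=n)              # same random inputs for both levels
    fine = np.array([approx(zi, level) for zi in z])
    coarse = (np.zeros(n) if level == 0
              else np.array([approx(zi, level - 1) for zi in z]))
    return fine, coarse

print("MLMC estimate:", mlmc_estimate(toy_sampler, n_samples=[400, 100, 25]))
```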

226 citations


Journal ArticleDOI
TL;DR: It is shown that seemingly unrelated, recent advances can be interpreted, fused and consolidated within the framework of ECC, the common thread being the adoption of the empirical copula of the raw ensemble.
Abstract: Critical decisions frequently rely on high-dimensional output from complex computer simulation models that show intricate cross-variable, spatial and temporal dependence structures, with weather and climate predictions being key examples. There is a strongly increasing recognition of the need for uncertainty quantification in such settings, for which we propose and review a general multi-stage procedure called ensemble copula coupling (ECC), proceeding as follows: 1. Generate a raw ensemble, consisting of multiple runs of the computer model that differ in the inputs or model parameters in suitable ways. 2. Apply statistical postprocessing techniques, such as Bayesian model averaging or nonhomogeneous regression, to correct for systematic errors in the raw ensemble, to obtain calibrated and sharp predictive distributions for each univariate output variable individually. 3. Draw a sample from each postprocessed predictive distribution. 4. Rearrange the sampled values in the rank order structure of the raw ensemble to obtain the ECC postprocessed ensemble. The use of ensembles and statistical postprocessing have become routine in weather forecasting over the past decade. We show that seemingly unrelated, recent advances can be interpreted, fused and consolidated within the framework of ECC, the common thread being the adoption of the empirical copula of the raw ensemble. Depending on the use of Quantiles, Random draws or Transformations at the sampling stage, we distinguish the ECC-Q, ECC-R and ECC-T variants, respectively. We also describe relations to the Schaake shuffle and extant copula-based techniques. In a case study, the ECC approach is applied to predictions of temperature, pressure, precipitation and wind over Germany, based on the 50-member European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble.
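
A minimal sketch of the ECC reordering step (step 4 above) is shown below; the function name and toy arrays are hypothetical, and the postprocessed samples are assumed to have been drawn independently per variable in step 3:

```python
import numpy as np

def ecc_reorder(raw_ensemble, postprocessed_samples):
    """Rearrange independently drawn postprocessed samples into the rank-order
    (empirical copula) structure of the raw ensemble, variable by variable.

    raw_ensemble, postprocessed_samples : (n_members, n_variables) arrays.
    """
    raw = np.asarray(raw_ensemble, dtype=float)
    post = np.asarray(postprocessed_samples, dtype=float)
    ecc = np.empty_like(post)
    for j in range(raw.shape[1]):
        ranks = np.argsort(np.argsort(raw[:, j]))   # 0-based ranks of raw members
        ecc[:, j] = np.sort(post[:, j])[ranks]      # member with smallest raw value
    return ecc                                      # receives the smallest sample, etc.

# Toy example: 5-member ensemble of two dependent variables
rng = np.random.default_rng(0)
raw = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=5)
post = rng.normal(size=(5, 2))      # stand-in for calibrated samples from step 3
print(ecc_reorder(raw, post))
```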

193 citations


Journal ArticleDOI
TL;DR: The novelty of this work is the recognition that the Gaussian process model defines a posterior probability measure on the function space of possible surrogates for the computer code, and the derivation of an algorithmic procedure that allows us to sample it efficiently.

176 citations


Journal ArticleDOI
TL;DR: In this article, an intrusive spectral simulator for statistical circuit analysis is presented, which employs the recently developed generalized polynomial chaos expansion to perform uncertainty quantification of nonlinear transistor circuits with both Gaussian and non-Gaussian random parameters.
Abstract: Uncertainties have become a major concern in integrated circuit design. In order to avoid the huge number of repeated simulations in conventional Monte Carlo flows, this paper presents an intrusive spectral simulator for statistical circuit analysis. Our simulator employs the recently developed generalized polynomial chaos expansion to perform uncertainty quantification of nonlinear transistor circuits with both Gaussian and non-Gaussian random parameters. We modify the nonintrusive stochastic collocation (SC) method and develop an intrusive variant called stochastic testing (ST) method. Compared with the popular intrusive stochastic Galerkin (SG) method, the coupled deterministic equations resulting from our proposed ST method can be solved in a decoupled manner at each time point. At the same time, ST requires fewer samples and allows more flexible time step size controls than directly using a nonintrusive SC solver. These two properties make ST more efficient than SG and than existing SC methods, and more suitable for time-domain circuit simulation. Simulation results of several digital, analog and RF circuits are reported. Since our algorithm is based on generic mathematical models, the proposed ST algorithm can be applied to many other engineering problems.
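
For context, the sketch below illustrates only the underlying generalized polynomial chaos representation for a single standard-Gaussian parameter, fitted nonintrusively by least squares; it is not the intrusive stochastic testing solver described in the paper, and the "circuit response" is a made-up function:

```python
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite polynomials

def fit_gpc_surrogate(xi_samples, y_samples, order=3):
    """Fit a 1-D gPC surrogate y(xi) ~ sum_k c_k He_k(xi) for a standard-Gaussian
    parameter xi by least-squares regression on sampled responses."""
    Phi = He.hermevander(xi_samples, order)      # (n_samples, order+1) design matrix
    coeffs, *_ = np.linalg.lstsq(Phi, y_samples, rcond=None)
    return coeffs

def evaluate_gpc(coeffs, xi):
    return He.hermeval(xi, coeffs)

# Hypothetical nonlinear 'circuit response' of one Gaussian parameter
rng = np.random.default_rng(1)
xi = rng.standard_normal(200)
y = np.tanh(1.0 + 0.3 * xi)
c = fit_gpc_surrogate(xi, y, order=4)
print("gPC coefficients:", c)
print("mean estimate (first coefficient):", c[0])   # He_0 = 1, so c_0 ~ E[y]
```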

167 citations


Journal ArticleDOI
TL;DR: It is hoped that this study could provide insights to the database community on how uncertainty is managed in other disciplines, and further challenge and inspire database researchers to develop more advanced data management techniques and tools to cope with a variety of uncertainty issues in the real world.
Abstract: Uncertainty accompanies our life processes and covers almost all fields of scientific study. Two general categories of uncertainty, namely aleatory uncertainty and epistemic uncertainty, exist in the world. While aleatory uncertainty refers to the inherent randomness in nature, derived from the natural variability of the physical world (e.g., the random outcome of a flipped coin), epistemic uncertainty originates from humans' lack of knowledge of the physical world, as well as limits in the ability to measure and model it (e.g., computation of the distance between two cities). Different kinds of uncertainty call for different handling methods. Aggarwal, Yu, Sarma, and Zhang et al. have made good surveys on uncertain database management based on probability theory. This paper reviews multidisciplinary uncertainty processing activities in diverse fields. Beyond the dominant probability theory and fuzzy theory, we also review information-gap theory and the recently derived uncertainty theory. Practices of these uncertainty handling theories in the domains of economics, engineering, ecology, and information sciences are also described. It is our hope that this study could provide insights to the database community on how uncertainty is managed in other disciplines, and further challenge and inspire database researchers to develop more advanced data management techniques and tools to cope with a variety of uncertainty issues in the real world.

Journal ArticleDOI
TL;DR: A short overview of stochastic modeling of uncertainties can be found in this paper, where the authors introduce the types of uncertainties, the variability of real systems, the types of probabilistic approaches, and the representations used for stochastic models of uncertainties.

Journal ArticleDOI
TL;DR: In this article, a review of recent advances in uncertainty quantification, probabilistic risk assessment (PRA), and decision-making under uncertainty is presented, with a brief discussion of ways to communicate the results of uncertainty quantification and risk assessment.

Journal ArticleDOI
TL;DR: This paper uses a small but highly nonlinear reservoir model so that it can generate the reference posterior distribution of reservoir properties using a very long chain generated by a Markov chain Monte Carlo sampling algorithm.
Abstract: The application of the ensemble Kalman filter (EnKF) for history matching petroleum reservoir models has been the subject of intense investigation during the past 10 years. Unfortunately, EnKF often fails to provide reasonable data matches for highly nonlinear problems. This fact motivated the development of several iterative ensemble-based methods in the last few years. However, there exists no study comparing the performance of these methods in the literature, especially in terms of their ability to quantify uncertainty correctly. In this paper, we compare the performance of nine ensemble-based methods in terms of the quality of the data matches, quantification of uncertainty, and computational cost. For this purpose, we use a small but highly nonlinear reservoir model so that we can generate the reference posterior distribution of reservoir properties using a very long chain generated by a Markov chain Monte Carlo sampling algorithm. We also consider one adjoint-based implementation of the randomized maximum likelihood method in the comparisons.
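
As background for the ensemble-based methods being compared, a generic textbook stochastic EnKF analysis step (with perturbed observations) can be sketched as follows; the observation operator and toy numbers are hypothetical, and this is none of the nine specific schemes studied in the paper:

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_operator, obs_cov, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble : (n_members, n_state) prior ensemble of model parameters/states.
    obs_operator maps one state vector to its predicted data vector.
    """
    n = ensemble.shape[0]
    predicted = np.array([obs_operator(m) for m in ensemble])   # (n, n_obs)
    dX = ensemble - ensemble.mean(axis=0)
    dY = predicted - predicted.mean(axis=0)
    cov_xy = dX.T @ dY / (n - 1)                   # state-data covariance
    cov_yy = dY.T @ dY / (n - 1) + obs_cov         # predicted-data covariance
    gain = cov_xy @ np.linalg.inv(cov_yy)          # Kalman gain
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, size=n)
    return ensemble + (perturbed - predicted) @ gain.T

# Toy update: 100 members, 3 uncertain parameters, one linear measurement
rng = np.random.default_rng(5)
prior = rng.normal(size=(100, 3))
H = np.array([[1.0, 0.5, 0.0]])                    # hypothetical observation operator
obs_cov = np.array([[0.04]])
obs = H @ np.array([0.8, -0.3, 1.2]) + rng.normal(0.0, 0.2, size=1)
posterior = enkf_analysis(prior, obs, lambda m: H @ m, obs_cov, rng)
print("prior mean:    ", prior.mean(axis=0))
print("posterior mean:", posterior.mean(axis=0))
```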

Journal ArticleDOI
TL;DR: In this paper, an approach to model-form uncertainty quantification that does not assume the eddy-viscosity hypothesis to be exact is proposed, and the methodology for estimation of uncertainty is demonstrated for plane channel flow, for a duct with secondary flows, and for the shock/boundary-layer interaction over a transonic bump.
Abstract: Estimation of the uncertainty in numerical predictions by Reynolds-averaged Navier-Stokes closures is a vital step in building confidence in such predictions. An approach to model-form uncertainty quantification that does not assume the eddy-viscosity hypothesis to be exact is proposed. The methodology for estimation of uncertainty is demonstrated for plane channel flow, for a duct with secondary flows, and for the shock/boundary-layer interaction over a transonic bump.

Journal ArticleDOI
TL;DR: This work presents a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest.

Journal ArticleDOI
TL;DR: In this paper, a uniformity approach is used to deal with the evidence variables, through which the original reliability problem can be transformed to a traditional reliability problem with only random uncertainty, and a most probable point (MPP) is obtained.

Journal ArticleDOI
TL;DR: In this article, the effects of the epistemic uncertainties on the seismic response performance of reinforced concrete structures were investigated using the first-order second-moment (FOSM) method and the Latin hypercube sampling (LHS) technique.

Journal ArticleDOI
TL;DR: An integrated prognostics method is developed for gear remaining life prediction, which utilizes both gear physical models and real-time condition monitoring data, and the general prognosis framework for gears is proposed.
Abstract: Accurate health prognosis is critical for ensuring equipment reliability and reducing the overall life-cycle costs. The existing gear prognosis methods are primarily either model-based or data-driven. In this paper, an integrated prognostics method is developed for gear remaining life prediction, which utilizes both gear physical models and real-time condition monitoring data. The general prognosis framework for gears is proposed. The developed physical models include a gear finite element model for gear stress analysis, a gear dynamics model for dynamic load calculation, and a damage propagation model described using Paris' law. A gear mesh stiffness computation method is developed based on the gear system potential energy, which results in more realistic curved crack propagation paths. Material uncertainty and model uncertainty are considered to account for the differences among different specific units that affect the damage propagation path. A Bayesian method is used to fuse the collected condition monitoring data to update the distributions of the uncertainty factors for the current specific unit being monitored, and to achieve the updated remaining useful life prediction. An example is used to demonstrate the effectiveness of the proposed method.
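
A highly simplified sketch of the damage-propagation ingredient, Paris' law, is given below; the material constants and geometry factor are illustrative placeholders, and the paper couples this with finite element stress analysis, gear dynamics, and Bayesian updating of the uncertain parameters:

```python
import numpy as np

def paris_cycles_to_failure(a0, a_crit, C, m, delta_sigma, Y=1.0, da=1e-5):
    """Integrate Paris' law da/dN = C * (dK)^m with dK = Y * delta_sigma * sqrt(pi*a)
    from initial crack size a0 to critical size a_crit (sizes in metres,
    delta_sigma in MPa, C in m/cycle per (MPa*sqrt(m))^m). Returns cycles."""
    a, cycles = a0, 0.0
    while a < a_crit:
        dK = Y * delta_sigma * np.sqrt(np.pi * a)   # stress intensity factor range
        cycles += da / (C * dK ** m)                # cycles to grow the crack by da
        a += da
    return cycles

# Illustrative placeholder values only (not calibrated gear-material constants)
N_f = paris_cycles_to_failure(a0=1e-3, a_crit=10e-3, C=1e-12, m=3.0, delta_sigma=200.0)
print(f"predicted remaining life: {N_f:.3e} cycles")
```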

Journal ArticleDOI
TL;DR: This work attempts to lay out a framework, based on Bayesian probability, for systematically addressing the questions of Validation, the process of investigating the accuracy with which a mathematical model is able to reproduce particular physical events, and Uncertainty quantification, developing measures of the degree of confidence with which a computer model predicts particular quantities of interest.
Abstract: The idea that one can possibly develop computational models that predict the emergence, growth, or decline of tumors in living tissue is enormously intriguing as such predictions could revolutionize medicine and bring a new paradigm into the treatment and prevention of a class of the deadliest maladies affecting humankind. But at the heart of this subject is the notion of predictability itself, the ambiguity involved in selecting and implementing effective models, and the acquisition of relevant data, all factors that contribute to the difficulty of predicting such complex events as tumor growth with quantifiable uncertainty. In this work, we attempt to lay out a framework, based on Bayesian probability, for systematically addressing the questions of Validation, the process of investigating the accuracy with which a mathematical model is able to reproduce particular physical events, and Uncertainty quantification, developing measures of the degree of confidence with which a computer model predicts particular quantities of interest. For illustrative purposes, we exercise the process using virtual data for models of tumor growth based on diffuse-interface theories of mixtures.

Journal ArticleDOI
TL;DR: This research constructs a number of linear equations that map the traffic measurements as functions of cumulative vehicle counts on both ends of a traffic segment, extending Newell’s method to solve a stochastic three-detector problem.
Abstract: This study focuses on how to use multiple data sources, including loop detector counts, AVI Bluetooth travel time readings and GPS location samples, to estimate macroscopic traffic states on a homogeneous freeway segment. With a generalized least square estimation framework, this research constructs a number of linear equations that map the traffic measurements as functions of cumulative vehicle counts on both ends of a traffic segment. We extend Newell’s method to solve a stochastic three-detector problem, where the mean and variance estimates of cell-based density and flow can be analytically derived through a multinomial probit model and an innovative use of Clark’s approximation method. An information measure is further introduced to quantify the value of heterogeneous traffic measurements for improving traffic state estimation on a freeway segment.
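
The deterministic skeleton behind the three-detector idea is that the number of vehicles on a segment equals the difference of the cumulative counts at its two ends. The sketch below (hypothetical counts, with none of the paper's stochastic treatment or multinomial probit step) shows that relation:

```python
import numpy as np

def segment_density_from_counts(n_up, n_down, length_m):
    """Mean density on a freeway segment from cumulative vehicle counts at its
    upstream and downstream ends: vehicles on the segment at time t equal
    N(x_up, t) - N(x_down, t)."""
    vehicles = np.asarray(n_up, dtype=float) - np.asarray(n_down, dtype=float)
    return vehicles / length_m          # vehicles per metre at each time stamp

# Hypothetical cumulative counts sampled every 30 s at the two boundary detectors
n_up = np.array([0, 12, 25, 40, 52, 63, 75, 88, 99, 110])
n_down = np.array([0, 8, 20, 33, 45, 55, 66, 78, 90, 101])
print(segment_density_from_counts(n_up, n_down, length_m=500.0))
```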

Journal ArticleDOI
TL;DR: In this paper, the authors discuss salient issues concerning uncertainty quantification from a variety of fields and review the sparse literature on UQ in materials simulations, identifying needs for conceptual advances, needs for the development of best practices, and needs for specific implementations.
Abstract: Simulation has long since joined experiment and theory as a valuable tool to address materials problems. Analysis of errors and uncertainties in experiment and theory is well developed; such analysis for simulations, particularly for simulations linked across length scales and timescales, is much less advanced. In this prospective, we discuss salient issues concerning uncertainty quantification (UQ) from a variety of fields and review the sparse literature on UQ in materials simulations. As specific examples, we examine the development of atomistic potentials and multiscale simulations of crystal plasticity. We identify needs for conceptual advances, needs for the development of best practices, and needs for specific implementations.

Journal ArticleDOI
TL;DR: In this paper, the authors consider a scale of priors of varying regularity and choose the regularity by an empirical Bayes method, and show that an adaptive Bayes credible set gives correct uncertainty quantification of "polished tail" parameters, in the sense of high probability of coverage of such parameters.
Abstract: We investigate the frequentist coverage of Bayesian credible sets in a nonparametric setting. We consider a scale of priors of varying regularity and choose the regularity by an empirical Bayes method. Next we consider a central set of prescribed posterior probability in the posterior distribution of the chosen regularity. We show that such an adaptive Bayes credible set gives correct uncertainty quantification of "polished tail" parameters, in the sense of high probability of coverage of such parameters. On the negative side, we show by theory and example that adaptation of the prior necessarily leads to gross and haphazard uncertainty quantification for some true parameters that are still within the hyperrectangle regularity scale.

Journal ArticleDOI
TL;DR: Support vector machine (SVM) classification from machine learning was applied to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters, after the authors experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4).
Abstract: Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
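
A minimal sketch of this kind of failure classification is shown below, using scikit-learn's SVC and ROC AUC on synthetic stand-in data; the parameter values and "failure" rule are invented for illustration and are not the CCSM4/POP2 ensemble:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 18 normalized parameter values and crash labels
rng = np.random.default_rng(42)
X = rng.uniform(size=(500, 18))
y = (X[:, 0] + 0.5 * X[:, 3] * X[:, 7] > 1.0).astype(int)   # invented "failure" rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
p_fail = clf.predict_proba(X_te)[:, 1]       # predicted probability of failure
print("ROC AUC on held-out runs:", roc_auc_score(y_te, p_fail))
```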

Journal ArticleDOI
TL;DR: In this paper, the authors consider quantifying uncertainty on facies models in the early development stage of a reservoir when there is still considerable uncertainty on the nature of the spatial distribution of the facies.
Abstract: Uncertainty quantification is currently one of the leading challenges in the geosciences, in particular in reservoir modeling. A wealth of subsurface data as well as expert knowledge are available to quantify uncertainty and state predictions on reservoir performance or reserves. The geosciences component within this larger modeling framework is partially an interpretive science. Geologists and geophysicists interpret data to postulate on the nature of the depositional environment, for example on the type of fracture system, the nature of faulting, and the type of rock physics model. Often, several alternative scenarios or interpretations are offered, including some associated belief quantified with probabilities. In the context of facies modeling, this could result in various interpretations of facies architecture, associations, geometries, and the way they are distributed in space. A quantitative approach to specify this uncertainty is to provide a set of alternative 3D training images from which several geostatistical models can be generated. In this paper, we consider quantifying uncertainty on facies models in the early development stage of a reservoir, when there is still considerable uncertainty on the nature of the spatial distribution of the facies. At this stage, production data are available to further constrain uncertainty. We develop a workflow that consists of two steps: (1) determining which training images are no longer consistent with production data and should be rejected, and (2) history matching with a given fixed training image. We illustrate our ideas and methodology on a test case derived from a real field case of predicting flow in a newly planned well in a turbidite reservoir off the African West coast.

Journal ArticleDOI
TL;DR: A new approach is developed to improve the computational efficiency of Bayesian inference by constructing a surrogate of the PPDF, using an adaptive sparse‐grid high‐order stochastic collocation (aSG‐hSC) method, resulting in a significant reduction in the number of required model executions.
Abstract: Bayesian analysis has become vital to uncertainty quantification in groundwater modeling, but its application has been hindered by the computational cost associated with the numerous model executions required to explore the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, a new approach is developed to improve the computational efficiency of Bayesian inference by constructing a surrogate of the PPDF, using an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using a first-order hierarchical basis, this paper utilizes a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of required model executions. In addition, using the hierarchical surplus as an error indicator allows locally adaptive refinement of sparse grids in the parameter space, which further improves computational efficiency. To efficiently build the surrogate system for a PPDF with multiple significant modes, optimization techniques are used to identify the modes, for which high-probability regions are defined and components of the aSG-hSC approximation are constructed. After the surrogate is determined, the PPDF can be evaluated by sampling the surrogate system directly without model execution, resulting in improved efficiency of surrogate-based MCMC compared with conventional MCMC. The developed method is evaluated using two synthetic groundwater reactive transport models. The first example involves coupled linear reactions and demonstrates the accuracy of our high-order hierarchical basis approach in approximating a high-dimensional posterior distribution. The second example is highly nonlinear because of the reactions of uranium surface complexation, and demonstrates how the iterative aSG-hSC method is able to capture multimodal and non-Gaussian features of the PPDF caused by model nonlinearity. Both experiments show that aSG-hSC is an effective and efficient tool for Bayesian inference.
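
The efficiency gain comes from sampling a cheap surrogate of the posterior instead of the groundwater model. The sketch below shows a random-walk Metropolis sampler driven by an arbitrary surrogate callable (here a made-up bimodal toy density), not the aSG-hSC construction itself:

```python
import numpy as np

def metropolis_with_surrogate(log_post_surrogate, x0, n_steps, step_size, rng):
    """Random-walk Metropolis sampler that evaluates a cheap surrogate of the
    log posterior density instead of running the expensive forward model."""
    x = np.asarray(x0, dtype=float)
    logp = log_post_surrogate(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        proposal = x + step_size * rng.standard_normal(x.size)
        logp_prop = log_post_surrogate(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:   # Metropolis accept/reject
            x, logp = proposal, logp_prop
        chain[i] = x
    return chain

# Made-up bimodal surrogate density, mimicking a multimodal PPDF
surrogate = lambda x: np.logaddexp(-0.5 * np.sum((x - 2.0) ** 2),
                                   -0.5 * np.sum((x + 2.0) ** 2))
rng = np.random.default_rng(3)
samples = metropolis_with_surrogate(surrogate, np.zeros(2), 5000, 0.8, rng)
print("posterior mean estimate (after burn-in):", samples[2000:].mean(axis=0))
```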

Journal ArticleDOI
TL;DR: In this paper, the authors investigate how three major classes of uncertainty, linguistic uncertainty, epistemic uncertainty (uncertainty about facts), and human decision uncertainty, have been accounted for in scientific literature about climate change and conclude that evaluating conservation strategies in terms of different types of uncertainty will facilitate communication between disciplines and stakeholders.
Abstract: Climate change is an important threat to biodiversity globally, but there are major uncertainties associated with its magnitude and ecological consequences. Here, we investigate how three major classes of uncertainty, linguistic uncertainty, epistemic uncertainty (uncertainty about facts), and human decision uncertainty, have been accounted for in the scientific literature about climate change. Some sources of uncertainty are poorly characterized, and epistemic uncertainty is much more commonly treated than linguistic or human decision uncertainty. Furthermore, we show that linguistic and human decision uncertainties are relatively better treated in the literature on sociopolitics or economics than in the natural sciences, which often overlook communication between stakeholders and socioeconomic consequences. As uncertainty can significantly influence the implementation of conservation, we discuss uncertainties associated with some commonly proposed conservation adaptation actions to mitigate climate change. There may be major differences between strategies, with implications for how they should be viewed in conservation planning. We conclude that evaluating conservation strategies in terms of different types of uncertainty will facilitate communication between disciplines and stakeholders. While accounting for uncertainties in a quantitative manner is difficult and data demanding, even qualitative appreciation of the uncertainties inherent in conservation strategies can facilitate and improve decision making.

Journal ArticleDOI
TL;DR: In this paper, instead of using point forecasts, the authors employ the delta and bootstrap methods to construct prediction intervals (PIs) for uncertainty quantification, and the confidence level of the PIs is varied between 50% and 90% to examine how their quality is affected.
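
As a simple illustration of the bootstrap route to prediction intervals at different confidence levels, the sketch below resamples hypothetical forecast residuals; it is a generic residual bootstrap, not the specific delta or bootstrap formulation used in the paper:

```python
import numpy as np

def bootstrap_prediction_interval(residuals, point_forecast, level,
                                  n_boot=2000, rng=None):
    """Prediction interval around a point forecast obtained by resampling
    historical forecast residuals (simple residual bootstrap)."""
    rng = rng or np.random.default_rng()
    resampled = rng.choice(residuals, size=n_boot, replace=True)
    alpha = 1.0 - level
    lo, hi = np.quantile(point_forecast + resampled, [alpha / 2, 1 - alpha / 2])
    return lo, hi

residuals = np.random.default_rng(7).normal(0.0, 5.0, size=200)  # hypothetical errors
for level in (0.5, 0.7, 0.9):        # vary the PI confidence level, as in the paper
    print(level, bootstrap_prediction_interval(residuals, point_forecast=100.0,
                                               level=level))
```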

Journal ArticleDOI
TL;DR: In this paper, an algorithm is proposed that efficiently estimates the covariances of modal parameters obtained from multi-setup subspace identification, which merges the data from different setups prior to the identification step, taking the possibly different ambient excitation characteristics between the measurements into account.

Journal ArticleDOI
TL;DR: In this paper, a model-structure-independent approach based on information theory is proposed to assess and compute the information content in multivariate hydrological data and to present practical methods for quantifying the uncertainty and shared information in data while accounting for heteroscedasticity.
Abstract: With growing interest in understanding the magnitudes and sources of uncertainty in hydrological modeling, the difficult problem of characterizing model structure adequacy is now attracting considerable attention. Here, we examine this problem via a model-structure-independent approach based in information theory. In particular, we (a) discuss how to assess and compute the information content in multivariate hydrological data, (b) present practical methods for quantifying the uncertainty and shared information in data while accounting for heteroscedasticity, (c) show how these tools can be used to estimate the best achievable predictive performance of a model (for a system given the available data), and (d) show how model adequacy can be characterized in terms of the magnitude and nature of its aleatory uncertainty, which cannot be diminished (and is resolvable only up to specification of its density), and its epistemic uncertainty, which can, in principle, be suitably resolved by improving the model. An illustrative modeling example is provided using catchment-scale data from three river basins, the Leaf and Chunky River basins in the United States and the Chuzhou basin in China. Our analysis shows that the aleatory uncertainty associated with making catchment simulations using this data set is significant (∼50%). Further, the estimated epistemic uncertainties of the HyMod, SAC-SMA, and Xinanjiang model hypotheses indicate that considerable room for model structural improvement remains.
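
For a concrete sense of "shared information in data", the sketch below computes a basic histogram (plug-in) estimate of mutual information between two synthetic hydrological series; it does not account for heteroscedasticity as the paper's methods do, and all data are invented:

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Plug-in estimate (in nats) of the shared information between two data
    series, from a 2-D histogram of their joint distribution."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))

rng = np.random.default_rng(11)
rain = rng.gamma(2.0, 5.0, size=1000)                  # invented rainfall series
flow = 0.6 * rain + rng.normal(0.0, 2.0, size=1000)    # invented streamflow series
print("estimated shared information (nats):", mutual_information(rain, flow))
```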