
Showing papers on "Uncertainty quantification published in 2012"


Journal ArticleDOI
TL;DR: This work addresses the solution of large-scale statistical inverse problems in the framework of Bayesian inference with a so-called Stochastic Monte Carlo method.
Abstract: We address the solution of large-scale statistical inverse problems in the framework of Bayesian inference. The Markov chain Monte Carlo (MCMC) method is the most popular approach for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. MCMC methods face two central difficulties when applied to large-scale inverse problems: first, the forward models (typically in the form of partial differential equations) that map uncertain parameters to observable quantities make the evaluation of the probability density at any point in parameter space very expensive; and second, the high-dimensional parameter spaces that arise upon discretization of infinite-dimensional parameter fields make the exploration of the probability density function prohibitive. The challenge for MCMC methods is to construct proposal functions that simultaneously provide a good approximation of the target density while being inexpensive to manipulate. Here we present a so-called Stoch...
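
As a point of reference for the MCMC baseline discussed above, here is a minimal random-walk Metropolis sketch in Python; the forward_model, data, and noise levels are hypothetical stand-ins for the expensive PDE-based forward map in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta):
    # Stand-in for an expensive PDE solve mapping parameters to observables.
    return np.array([theta[0] + theta[1]**2, theta[0] * theta[1]])

data = np.array([1.2, 0.4])            # hypothetical observations
noise_sigma, prior_sigma = 0.05, 1.0

def log_posterior(theta):
    misfit = forward_model(theta) - data
    return (-0.5 * misfit @ misfit / noise_sigma**2
            - 0.5 * theta @ theta / prior_sigma**2)

theta = np.zeros(2)
logp = log_posterior(theta)
step = 0.1
chain = []
for _ in range(20000):
    proposal = theta + step * rng.standard_normal(2)   # isotropic random-walk proposal
    logp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_prop - logp:       # Metropolis accept/reject
        theta, logp = proposal, logp_prop
    chain.append(theta.copy())
chain = np.array(chain)
print("posterior mean:", chain[5000:].mean(axis=0))    # discard burn-in
```

Each iteration costs one forward solve, and the isotropic proposal mixes poorly as the parameter dimension grows; these are exactly the two difficulties the abstract identifies and that curvature-informed proposals aim to address.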

411 citations


Journal ArticleDOI
TL;DR: The key idea is to align the complexity level and order of analysis with the reliability and detail level of statistical information on the input parameters to avoid the necessity to assign parametric probability distributions that are not sufficiently supported by limited available data.

350 citations


Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that using multiple responses, each of which depends on a common set of calibration parameters, can substantially enhance identifiability, and explain the mechanisms behind it, and attempt to shed light on when a system may or may not be identifiable.
Abstract: To use predictive models in engineering design of physical systems, one should first quantify the model uncertainty via model updating techniques employing both simulation and experimental data. While calibration is often used to tune unknown calibration parameters of a computer model, the addition of a discrepancy function has been used to capture model discrepancy due to underlying missing physics, numerical approximations, and other inaccuracies of the computer model that would exist even if all calibration parameters are known. One of the main challenges in model updating is the difficulty in distinguishing between the effects of calibration parameters versus model discrepancy. We illustrate this identifiability problem with several examples, explain the mechanisms behind it, and attempt to shed light on when a system may or may not be identifiable. In some instances, identifiability is achievable under mild assumptions, whereas in other instances, it is virtually impossible. In a companion paper, we demonstrate that using multiple responses, each of which depends on a common set of calibration parameters, can substantially enhance identifiability.
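
The identifiability issue described above can be illustrated with a toy calibration problem. The sketch below uses a hypothetical simulator and data, and a polynomial discrepancy as a simplification of the usual Gaussian-process discrepancy: once the discrepancy is flexible enough, many values of the calibration parameter fit the data equally well.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 3, 40)
y_obs = 2.0 * x + 0.5 * np.sin(3 * x) + 0.05 * rng.standard_normal(x.size)

def simulator(x, theta):
    # Hypothetical computer model: misses the sinusoidal physics entirely.
    return theta * x

def best_fit_error(theta, degree):
    # Fit a polynomial discrepancy of the given degree to the residual and
    # return the remaining root-mean-square error.
    resid = y_obs - simulator(x, theta)
    coeffs = np.polyfit(x, resid, degree)
    return np.sqrt(np.mean((resid - np.polyval(coeffs, x))**2))

for theta in [1.0, 1.5, 2.0, 2.5]:
    print(theta,
          round(best_fit_error(theta, degree=0), 3),   # rigid (constant) discrepancy
          round(best_fit_error(theta, degree=5), 3))   # flexible discrepancy
```

With the rigid discrepancy the fit clearly favours theta near 2; with the flexible discrepancy the residual error is nearly flat in theta, which is the confounding between calibration parameters and model discrepancy that the paper analyzes.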

284 citations


Journal ArticleDOI
TL;DR: In this article, an improved particle filter with variable variance multipliers and Markov Chain Monte Carlo (MCMC) methods is proposed to improve the reliability of the particle filter for hydrologic models.
Abstract: [1] Particle filters (PFs) have become popular for assimilation of a wide range of hydrologic variables in recent years. With this increased use, it has become necessary to increase the applicability of this technique for use in complex hydrologic/land surface models and to make these methods more viable for operational probabilistic prediction. To make the PF a more suitable option in these scenarios, it is necessary to improve the reliability of these techniques. Improved reliability in the PF is achieved in this work through an improved parameter search, with the use of variable variance multipliers and Markov Chain Monte Carlo methods. Application of these methods to the PF allows for greater search of the posterior distribution, leading to more complete characterization of the posterior distribution and reducing risk of sample impoverishment. This leads to a PF that is more efficient and provides more reliable predictions. This study introduces the theory behind the proposed algorithm, with application on a hydrologic model. Results from both real and synthetic studies suggest that the proposed filter significantly increases the effectiveness of the PF, with marginal increase in the computational demand for hydrologic prediction.
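
The following is a hedged, minimal resample-move particle filter on a toy scalar state-space model (hypothetical dynamics and noise levels, not the paper's hydrologic model or its variable variance multipliers); it shows the basic mechanics of weighting, systematic resampling, and an MCMC move step used to counter sample impoverishment.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 50, 500                          # time steps, particles
sig_w, sig_v = 0.3, 0.2                 # process / observation noise std

def drift(x):
    # Toy nonlinear dynamics standing in for a hydrologic storage model.
    return 0.8 * x + 0.5 * np.sin(x)

# Synthetic truth and observations
x_true = np.zeros(T + 1)
for t in range(1, T + 1):
    x_true[t] = drift(x_true[t - 1]) + sig_w * rng.standard_normal()
y = x_true[1:] + sig_v * rng.standard_normal(T)

def log_obs(x, yt):
    return -0.5 * ((yt - x) / sig_v) ** 2

def log_trans(x, parent):
    return -0.5 * ((x - drift(parent)) / sig_w) ** 2

particles = rng.standard_normal(N)      # prior ensemble at t = 0
estimates = []
for t in range(T):
    parents = particles
    particles = drift(parents) + sig_w * rng.standard_normal(N)    # forecast
    logw = log_obs(particles, y[t])
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = np.searchsorted(np.cumsum(w), (rng.uniform() + np.arange(N)) / N)  # systematic resampling
    particles, parents = particles[idx], parents[idx]
    # MCMC move step: rejuvenate duplicated particles while targeting p(x_t | x_{t-1}, y_t)
    prop = particles + 0.2 * rng.standard_normal(N)
    log_ratio = (log_obs(prop, y[t]) + log_trans(prop, parents)
                 - log_obs(particles, y[t]) - log_trans(particles, parents))
    particles = np.where(np.log(rng.uniform(size=N)) < log_ratio, prop, particles)
    estimates.append(particles.mean())

print("RMSE:", np.sqrt(np.mean((np.array(estimates) - x_true[1:]) ** 2)))
```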

219 citations


Journal ArticleDOI
TL;DR: A highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters and efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures.
Abstract: We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.

205 citations


Proceedings ArticleDOI
23 Apr 2012
TL;DR: An adaptive greedy multifidelity approach is proposed in which the generalized sparse grid concept is extended to consider candidate index set refinements drawn from multiple sparse grids, and it is demonstrated that the multifidelity UQ process converges more rapidly than single-fidelity UQ in cases where the variance of the discrepancy is reduced relative to the variance of the high-fidelity model.
Abstract: This paper explores the extension of multifidelity modeling concepts to the field of uncertainty quantification. Motivated by local correction functions that enable the provable convergence of a multifidelity optimization approach to an optimal high-fidelity point solution, we extend these ideas to global discrepancy modeling within a stochastic domain and seek convergence of a multifidelity uncertainty quantification process to globally integrated high-fidelity statistics. For constructing stochastic models of both the low fidelity model and the model discrepancy, we employ stochastic expansion methods (nonintrusive polynomial chaos and stochastic collocation) computed from sparse grids, where we seek to employ a coarsely resolved grid for the discrepancy in combination with a more finely resolved grid for the low fidelity model. The resolutions of these grids may be statically defined or determined through uniform and adaptive refinement processes. Adaptive refinement is particularly attractive, as it has the ability to preferentially target stochastic regions where the model discrepancy becomes more complex; i.e., where the predictive capabilities of the low-fidelity model start to break down and greater reliance on the high fidelity model (via the discrepancy) is necessary. These adaptive refinement processes can either be performed separately for the different sparse grids or within a unified multifidelity algorithm. In particular, we propose an adaptive greedy multifidelity approach in which we extend the generalized sparse grid concept to consider candidate index set refinements drawn from multiple sparse grids. We demonstrate that the multifidelity UQ process converges more rapidly than a single-fidelity UQ in cases where the variance of the discrepancy is reduced relative to the variance of the high fidelity model (resulting in reductions in initial stochastic error) and/or where the spectrum of the expansion coefficients of the model discrepancy decays more rapidly than that of the high-fidelity model (resulting in accelerated convergence rates).
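
The paper builds sparse-grid stochastic expansions of the low-fidelity model and of the discrepancy; as a simplified sketch of the same additive-correction idea, the Monte Carlo example below (hypothetical f_hi and f_lo) spends many samples on the cheap model and only a few on the low-variance discrepancy.

```python
import numpy as np

rng = np.random.default_rng(3)

def f_hi(x):
    # Hypothetical expensive high-fidelity model of a uniform random input on [0, 1].
    return np.exp(x) + 0.05 * np.sin(20 * x)

def f_lo(x):
    # Cheap low-fidelity approximation: correct trend, missing fine-scale behaviour.
    return 1.0 + x + 0.5 * x**2

x_many = rng.uniform(size=100_000)       # low-fidelity model: finely resolved sampling
x_few = rng.uniform(size=200)            # discrepancy: coarsely resolved sampling

mean_lo = f_lo(x_many).mean()
mean_discrepancy = (f_hi(x_few) - f_lo(x_few)).mean()
print("multifidelity estimate:", mean_lo + mean_discrepancy)
print("200-sample high-fidelity only:", f_hi(rng.uniform(size=200)).mean())
print("exact mean:", (np.e - 1) + 0.05 * (1 - np.cos(20)) / 20)
```

Because the discrepancy has much smaller variance than f_hi itself, the combined estimate is far less noisy than the 200-sample high-fidelity estimate, mirroring the convergence argument in the abstract.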

167 citations



Journal ArticleDOI
TL;DR: An efficient Bayesian uncertainty quantification framework using a novel treed Gaussian process model is developed, and its effectiveness in identifying discontinuities, local features, and unimportant dimensions in the solution of stochastic differential equations is demonstrated numerically.

157 citations


Journal ArticleDOI
TL;DR: In this article, a parametric deterministic formulation of Bayesian inverse problems with an input parameter from infinite-dimensional, separable Banach spaces is presented, and the sparsity of the posterior density in terms of the summability of the input data's coefficient sequence is analyzed.
Abstract: We present a parametric deterministic formulation of Bayesian inverse problems with an input parameter from infinite-dimensional, separable Banach spaces. In this formulation, the forward problems are parametric, deterministic elliptic partial differential equations, and the inverse problem is to determine the unknown, parametric deterministic coefficients from noisy observations comprising linear functionals of the solution. We prove a generalized polynomial chaos representation of the posterior density with respect to the prior measure, given noisy observational data. We analyze the sparsity of the posterior density in terms of the summability of the input data's coefficient sequence. The first step in this process is to estimate the fluctuations in the prior. We exhibit sufficient conditions on the prior model in order for approximations of the posterior density to converge at a given algebraic rate, in terms of the number N of unknowns appearing in the parametric representation of the prior measure. Similar sparsity and approximation results are also exhibited for the solution and covariance of the elliptic partial differential equation under the posterior. These results then form the basis for efficient uncertainty quantification, in the presence of data with noise.
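
In notation commonly used for this line of work (an assumption, not copied verbatim from the paper), the parametric deterministic reformulation rests on writing the posterior as a density with respect to the prior measure,

```latex
\frac{\mathrm{d}\pi^{\delta}}{\mathrm{d}\pi_0}(u)
  = \frac{1}{Z}\exp\bigl(-\Phi(u;\delta)\bigr),
\qquad
\Phi(u;\delta) = \tfrac{1}{2}\bigl\|\Gamma^{-1/2}\bigl(\delta - \mathcal{G}(u)\bigr)\bigr\|^{2},
\qquad
Z = \int \exp\bigl(-\Phi(u;\delta)\bigr)\,\mathrm{d}\pi_0(u),
```

where $\mathcal{G}$ is the parameter-to-observable map and $\Gamma$ the noise covariance. Inserting a parametric prior representation $u = \bar{u} + \sum_{j\ge 1} y_j \psi_j$ with coordinates $y_j \in [-1,1]$ turns the density into a function of $y$ that admits a generalized polynomial chaos expansion, whose sparsity is governed by the summability of the coefficient sequence $(\psi_j)_{j\ge 1}$, as analyzed in the abstract.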

127 citations


Journal ArticleDOI
TL;DR: In this paper, a stochastic importance sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to sample the input parameters in the Kain-Fritsch (KF) convective parameterization scheme used in the weather research and forecasting (WRF) model.
Abstract: . The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis of data over the Southern Great Plains (SGP), where abundant observational data from various sources was available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications to UQ and parameter tuning in global and regional models. A stochastic importance sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA) was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e. the North America monsoon region). These results suggest that benefits of optimal parameters determined through vigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.

118 citations


Proceedings ArticleDOI
10 Nov 2012
TL;DR: This work addresses uncertainty quantification for large-scale inverse problems in a Bayesian inference framework by exploiting the fact that the data are typically informative about low-dimensional manifolds of parameter space to construct low rank approximations of the covariance matrix of the posterior pdf via a matrix-free randomized method.
Abstract: Quantifying uncertainties in large-scale simulations has emerged as the central challenge facing CS&E. When the simulations require supercomputers, and uncertain parameter dimensions are large, conventional UQ methods fail. Here we address uncertainty quantification for large-scale inverse problems in a Bayesian inference framework: given data and model uncertainties, find the pdf describing parameter uncertainties. To overcome the curse of dimensionality of conventional methods, we exploit the fact that the data are typically informative about low-dimensional manifolds of parameter space to construct low rank approximations of the covariance matrix of the posterior pdf via a matrix-free randomized method. We obtain a method that scales independently of the forward problem dimension, the uncertain parameter dimension, the data dimension, and the number of cores. We apply the method to the Bayesian solution of an inverse problem in 3D global seismic wave propagation with over one million uncertain earth model parameters, 630 million wave propagation unknowns, on up to 262K cores, for which we obtain a factor of over 2000 reduction in problem dimension. This makes UQ tractable for the inverse problem.
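
A hedged sketch of the core linear-algebra idea, with small dense matrices standing in for matrix-free PDE operators: a randomized, matvec-only eigendecomposition of the prior-preconditioned data-misfit Hessian yields a low-rank update of the prior covariance that approximates the posterior covariance.

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 500, 20                      # parameter dimension, target rank

# Synthetic prior covariance (diagonal) and a data-misfit Hessian whose
# prior-preconditioned form has a rapidly decaying spectrum -- the structure
# the paper exploits. Both are placeholders for matrix-free operators.
gamma_prior_sqrt = np.diag(1.0 / np.sqrt(1.0 + np.arange(n)))
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
H_misfit = U @ np.diag(100.0 * np.exp(-0.5 * np.arange(n))) @ U.T

def Ht_matvec(X):
    # Action of the prior-preconditioned misfit Hessian; in practice this
    # would be adjoint-based PDE solves, not explicit matrix products.
    return gamma_prior_sqrt @ (H_misfit @ (gamma_prior_sqrt @ X))

# Randomized low-rank eigendecomposition using only matvecs
Omega = rng.standard_normal((n, r + 10))
Q, _ = np.linalg.qr(Ht_matvec(Omega))
T = Q.T @ Ht_matvec(Q)
lam, W = np.linalg.eigh(T)
lam, W = lam[::-1][:r], W[:, ::-1][:, :r]
V = Q @ W

# Posterior covariance as a low-rank update of the prior:
# C_post ~= L (I - V diag(lam/(1+lam)) V^T) L^T, with L = Gamma_prior^{1/2}
D = np.diag(lam / (1.0 + lam))
C_post_lr = gamma_prior_sqrt @ (np.eye(n) - V @ D @ V.T) @ gamma_prior_sqrt

C_post_exact = np.linalg.inv(H_misfit + np.linalg.inv(gamma_prior_sqrt @ gamma_prior_sqrt))
print("relative error:", np.linalg.norm(C_post_lr - C_post_exact) / np.linalg.norm(C_post_exact))
```

The number of Hessian applications depends only on the effective rank of the data misfit, not on the parameter dimension, which is the scaling property highlighted in the abstract.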

MonographDOI
14 Nov 2012
TL;DR: Christian Soize presents the main concepts, formulations, and recent advances in the use of a mathematical-mechanical modeling process to predict the responses of a real structural system in its environment.
Abstract: Christian Soize presents the main concepts, formulations, and recent advances in the use of a mathematical-mechanical modeling process to predict the responses of a real structural system in its environment.

Journal ArticleDOI
TL;DR: This study makes a first attempt to identify, model, and jointly propagate aleatory and epistemic uncertainties in the context of DG systems modeling for adequacy assessment, and introduces the hybrid propagation approach.

Journal ArticleDOI
TL;DR: In this paper, the evolution of the dominant dimensionality of dynamical systems with uncertainty governed by stochastic partial differential equations, within the context of dynamically orthogonal (DO) field equations, is studied.

Journal ArticleDOI
TL;DR: A novel approach for quantifying the parametric uncertainty associated with a stochastic problem output is presented, based on ideas directly linked to the recently developed compressed sensing theory, allowing the retrieval of the modes that contribute most significantly to the approximation of the solution using a minimal amount of information.
Abstract: In this paper, a novel approach for quantifying the parametric uncertainty associated with a stochastic problem output is presented. As with Monte-Carlo and stochastic collocation methods, only point-wise evaluations of the stochastic output response surface are required allowing the use of legacy deterministic codes and precluding the need for any dedicated stochastic code to solve the uncertain problem of interest. The new approach differs from these standard methods in that it is based on ideas directly linked to the recently developed compressed sensing theory. The technique allows the retrieval of the modes that contribute most significantly to the approximation of the solution using a minimal amount of information. The generation of this information, via many solver calls, is almost always the bottle-neck of an uncertainty quantification procedure. If the stochastic model output has a reasonably compressible representation in the retained approximation basis, the proposed method makes the best use of the available information and retrieves the dominant modes. Uncertainty quantification of the solution of both a 2-D and 8-D stochastic Shallow Water problem is used to demonstrate the significant performance improvement of the new method, requiring up to several orders of magnitude fewer solver calls than the usual sparse grid-based Polynomial Chaos (Smolyak scheme) to achieve comparable approximation accuracy.
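
A minimal sketch of the compressed-sensing idea, assuming a hypothetical QoI with a sparse Legendre (polynomial chaos) representation: with fewer random samples than basis terms, an l1-regularized solve (plain iterative soft thresholding here, rather than whatever solver the paper uses) recovers the dominant modes.

```python
import numpy as np

rng = np.random.default_rng(5)

# 2-D uniform inputs on [-1, 1]^2 and a tensor Legendre basis of degree 6 (49 terms)
deg = 6
n_samples = 40                                   # fewer samples than basis terms
X = rng.uniform(-1, 1, size=(n_samples, 2))
A = np.polynomial.legendre.legvander2d(X[:, 0], X[:, 1], [deg, deg])

# Hypothetical QoI whose PC representation is sparse: 5 active modes
c_true = np.zeros(A.shape[1])
c_true[[0, 3, 8, 17, 30]] = [1.0, 0.6, -0.4, 0.3, 0.2]
y = A @ c_true + 1e-3 * rng.standard_normal(n_samples)

# l1-regularized recovery via ISTA (iterative soft thresholding)
lam = 1e-3
step = 1.0 / np.linalg.eigvalsh(A.T @ A).max()
c = np.zeros(A.shape[1])
for _ in range(20000):
    z = c - step * (A.T @ (A @ c - y))           # gradient step on the misfit
    c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("recovered support:", np.flatnonzero(np.abs(c) > 0.05))
print("max coefficient error:", np.abs(c - c_true).max())
```

When the output is compressible in the retained basis, the number of solver calls scales with the sparsity rather than with the basis size, which is the source of the savings reported relative to sparse-grid projection.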

Journal ArticleDOI
TL;DR: In this paper, an uncertainty quantification (UQ) framework was introduced to analyze sensitivity of simulated surface fluxes to selected hydrologic parameters in the Community Land Model (CLM4) through forward modeling.
Abstract: Uncertainties in hydrologic parameters could have significant impacts on the simulated water and energy fluxes and land surface states, which will in turn affect atmospheric processes and the carbon cycle. Quantifying such uncertainties is an important step toward better understanding and quantification of uncertainty of integrated earth system models. In this paper, we introduce an uncertainty quantification (UQ) framework to analyze sensitivity of simulated surface fluxes to selected hydrologic parameters in the Community Land Model (CLM4) through forward modeling. Thirteen flux tower footprints spanning a wide range of climate and site conditions were selected to perform sensitivity analyses by perturbing the parameters identified. In the UQ framework, prior information about the parameters was used to quantify the input uncertainty using the Minimum-Relative-Entropy approach. The quasi-Monte Carlo approach was applied to generate samples of parameters on the basis of the prior pdfs. Simulations corresponding to sampled parameter sets were used to generate response curves and response surfaces and statistical tests were used to rank the significance of the parameters for output responses including latent (LH) and sensible heat (SH) fluxes. Overall, the CLM4 simulated LH and SH show the largest sensitivity to subsurface runoff generation parameters. However, study sites with deep root vegetation are also affected by surface runoff parameters, while sites with shallow root zones are also sensitive to the vadose zone soil water parameters. Generally, sites with finer soil texture and shallower rooting systems tend to have larger sensitivity of outputs to the parameters. Our results suggest the necessity of and possible ways for parameter inversion/calibration using available measurements of latent/sensible heat fluxes to obtain the optimal parameter set for CLM4. This study also provided guidance on reduction of parameter set dimensionality and parameter calibration framework design for CLM4 and other land surface models under different hydrologic and climatic regimes.
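
A hedged sketch of the sampling-and-ranking step of such a framework: quasi-Monte Carlo samples of hypothetical parameter ranges are pushed through a toy response standing in for a CLM4 latent heat simulation, and parameters are ranked by rank correlation (a simple stand-in for the statistical tests used in the study). All names and ranges below are illustrative, not CLM4's.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

rng = np.random.default_rng(6)

# Hypothetical hydrologic parameters with prior ranges (stand-ins for CLM4 inputs)
names = ["fdrai", "fover", "b_slope", "s_y"]
lower = np.array([0.1, 0.1, 2.0, 0.01])
upper = np.array([5.0, 5.0, 12.0, 0.3])

sampler = qmc.Sobol(d=4, scramble=True, seed=1)
theta = qmc.scale(sampler.random_base2(m=9), lower, upper)   # 512 QMC samples

def toy_latent_heat(p):
    # Toy response surface standing in for a CLM4 simulation of latent heat flux.
    fdrai, fover, b, sy = p.T
    return (80.0 * np.exp(-0.4 * fdrai) + 5.0 * fover + 0.5 * b + 2.0 * sy
            + rng.normal(scale=1.0, size=len(p)))

lh = toy_latent_heat(theta)
for i, name in enumerate(names):
    rho = spearmanr(theta[:, i], lh).correlation
    print(f"{name:8s} rank correlation with LH: {rho:+.2f}")
```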

Journal ArticleDOI
TL;DR: A large number of new theoretical and computational phenomena which arise in the emerging statistical-stochastic framework for quantifying and mitigating model error in imperfect predictions, such as the existence of information barriers to model improvement, are developed and reviewed here with the intention to introduce mathematicians, applied mathematicians, and scientists to these remarkable emerging topics with increasing practical importance.
Abstract: The modus operandi of modern applied mathematics in developing very recent mathematical strategies for uncertainty quantification in partially observed high-dimensional turbulent dynamical systems is emphasized here. The approach involves the synergy of rigorous mathematical guidelines with a suite of physically relevant and progressively more complex test models which are mathematically tractable while possessing such important features as the two-way coupling between the resolved dynamics and the turbulent fluxes, intermittency and positive Lyapunov exponents, eddy diffusivity parameterization and turbulent spectra. A large number of new theoretical and computational phenomena which arise in the emerging statistical-stochastic framework for quantifying and mitigating model error in imperfect predictions, such as the existence of information barriers to model improvement, are developed and reviewed here with the intention to introduce mathematicians, applied mathematicians, and scientists to these remarkable emerging topics with increasing practical importance.

Journal ArticleDOI
TL;DR: This paper focuses on estimating a set of force-field parameters for the four-site, TIP4P, water model, based on a synthetic problem involving isothermal, isobaric molecular dynamics simulations of water at ambient conditions.
Abstract: This paper explores the inference of small-scale, atomistic parameters, based on the specification of large, or macroscale, observables. Specifically, we focus on estimating a set of force-field parameters for the four-site, TIP4P, water model, based on a synthetic problem involving isothermal, isobaric molecular dynamics (MD) simulations of water at ambient conditions. We exploit the polynomial chaos (PC) expansions developed in Part I as surrogate representations of three macroscale observables, namely density, self-diffusion, and enthalpy, as a function of the force-field parameters. We analyze and discuss the use of two different PC representations in a Bayesian framework for the inference of atomistic parameters, based on synthetic observations of three macroscale observables. The first surrogate is a deterministic PC representation, constructed in Part I using nonintrusive spectral projection (NISP). An alternative strategy exploits a nondeterministic PC representation obtained using Bayesian infere...

Journal ArticleDOI
TL;DR: In this article, the role of the flow, topography, and roughness coefficient is investigated on the output of a one-dimensional Hydrologic Engineering Center-River Analysis System (HEC-RAS) model and flood inundation map for an observed flood event on East Fork White River near Seymour, Indiana (Seymour reach) and Strouds Creek in Orange County, North Carolina.
Abstract: The process of creating flood inundation maps is affected by uncertainties in data, modeling approaches, parameters, and geoprocessing tools. Generalized likelihood uncertainty estimation (GLUE) is one of the popular techniques used to represent uncertainty in model predictions through Monte Carlo analysis coupled with Bayesian estimation. The objectives of this study are to (1) compare the uncertainty arising from multiple variables in flood inundation mapping using Monte Carlo simulations and GLUE and (2) investigate the role of subjective selection of the GLUE likelihood measure in quantifying uncertainty in flood inundation mapping. The role of the flow, topography, and roughness coefficient is investigated on the output of a one-dimensional Hydrologic Engineering Center–River Analysis System (HEC–RAS) model and flood inundation map for an observed flood event on East Fork White River near Seymour, Indiana (Seymour reach) and Strouds Creek in Orange County, North Carolina. Performance of GLUE ...
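
A minimal GLUE sketch under stated assumptions: a Manning-equation normal depth stands in for the HEC-RAS water-surface computation, the flow and roughness priors and the observed depth are hypothetical, and an informal inverse-variance likelihood plus a subjective behavioural threshold yield likelihood-weighted prediction bounds.

```python
import numpy as np

rng = np.random.default_rng(7)

def flood_depth(q, n, width=50.0, slope=1e-3):
    # Normal-depth approximation from Manning's equation for a wide channel;
    # a stand-in for the HEC-RAS water-surface computation.
    return (n * q / (width * np.sqrt(slope))) ** 0.6

obs_depth = 4.5                                  # hypothetical observed flood depth (m)
n_sim = 20_000
q = rng.uniform(400.0, 800.0, n_sim)             # uncertain flow (m3/s)
n_rough = rng.uniform(0.02, 0.08, n_sim)         # uncertain Manning roughness

sim = flood_depth(q, n_rough)
likelihood = 1.0 / ((sim - obs_depth) ** 2 + 1e-4)   # informal inverse-variance measure
behavioural = np.abs(sim - obs_depth) < 0.3          # subjective acceptance threshold
w = likelihood * behavioural
w /= w.sum()

order = np.argsort(sim)
cdf = np.cumsum(w[order])
lo = sim[order][np.searchsorted(cdf, 0.05)]
hi = sim[order][np.searchsorted(cdf, 0.95)]
print(f"GLUE 5-95% depth bounds: {lo:.2f} - {hi:.2f} m "
      f"({behavioural.sum()} behavioural runs)")
```

Swapping the likelihood measure or the behavioural threshold changes the width of the bounds, which is exactly the subjectivity the study investigates.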

Journal ArticleDOI
TL;DR: In this paper, a probabilistic methodology for low cycle fatigue life prediction using an energy-based damage parameter with Bayes' theorem was developed, and the verification cases were based on experimental data in the literature for the Ni-base superalloy GH4133 tested at various temperatures.
Abstract: Probabilistic methods have been widely used to account for uncertainty of various sources in predicting fatigue life for components or materials. The Bayesian approach can potentially give more complete estimates by combining test data with technological knowledge available from theoretical analyses and/or previous experimental results, and provides for uncertainty quantification and the ability to update predictions based on new data, which can save time and money. The aim of the present article is to develop a probabilistic methodology for low cycle fatigue life prediction using an energy-based damage parameter with Bayes' theorem and to demonstrate the use of an efficient probabilistic method, moreover, to quantify model uncertainty resulting from creation of different deterministic model parameters. For most high-temperature structures, more than one model was created to represent the complicated behaviors of materials at high temperature. The uncertainty involved in selecting the best model from among all the possible models should not be ignored. Accordingly, a black-box approach is used to quantify the model uncertainty for three damage parameters (the generalized damage parameter, Smith-Watson-Topper and plastic strain energy density) using measured differences between experimental data and model predictions under a Bayesian inference framework. The verification cases were based on experimental data in the literature for the Ni-base superalloy GH4133 tested at various temperatures. Based on the experimentally determined distributions of material properties and model parameters, the predicted distributions of fatigue life agree with the experimental results. The results

Journal ArticleDOI
TL;DR: A procedure for identifying the major sources of uncertainty in a conceptual lumped dynamic stormwater runoff quality model that is used in a study catchment to estimate copper loads, compliance with dissolved Cu concentration limits on stormwater discharge and the fraction of Cu loads potentially intercepted by a planned treatment facility is presented.
Abstract: The need for estimating micropollutants fluxes in stormwater systems increases the role of stormwater quality models as support for urban water managers, although the application of such models is affected by high uncertainty. This study presents a procedure for identifying the major sources of uncertainty in a conceptual lumped dynamic stormwater runoff quality model that is used in a study catchment to estimate (i) copper loads, (ii) compliance with dissolved Cu concentration limits on stormwater discharge and (iii) the fraction of Cu loads potentially intercepted by a planned treatment facility. The analysis is based on the combination of variance-decomposition Global Sensitivity Analysis (GSA) with the Generalized Likelihood Uncertainty Estimation (GLUE) technique. The GSA-GLUE approach highlights the correlation between the model factors defining the mass of pollutant in the system and the importance of considering hydrological parameters as source of uncertainty when calculating Cu loads and concentrations due to their influence. The influence of hydrological parameters on simulated concentrations changes during rain events. Four informal likelihood measures are used to quantify model prediction bounds. The width of the uncertainty bounds depends on the likelihood measure, with the inverse variance based likelihood more suitable for covering measured pollutographs. Uncertainty for simulated concentration is higher than for Cu loads, which again shows lower uncertainty compared to studies neglecting the hydrological submodel as source of uncertainty. A combined likelihood measure ensuring both good predictions in flow and concentration is used to identify the parameter sets used for long time simulations. These results provide a basis for reliable application of models as support in the development of strategies aiming to reduce discharge of stormwater micropollutants to the aquatic environment.

Journal ArticleDOI
TL;DR: Estimating the parameters of Susceptible-Infective-Recovered models in a least-squares framework shows that estimates of different parameters are correlated, with the magnitude and sign of this correlation depending on the value of R0.
Abstract: We examine estimation of the parameters of Susceptible-Infective-Recovered (SIR) models in the context of least squares. We review the use of asymptotic statistical theory and sensitivity analysis to obtain measures of uncertainty for estimates of the model parameters and the basic reproductive number ($R_0$)---an epidemiologically significant parameter grouping. We find that estimates of different parameters, such as the transmission parameter and recovery rate, are correlated, with the magnitude and sign of this correlation depending on the value of $R_0$. Situations are highlighted in which this correlation allows $R_0$ to be estimated with greater ease than its constituent parameters. Implications of correlation for parameter identifiability are discussed. Uncertainty estimates and sensitivity analysis are used to investigate how the frequency at which data is sampled affects the estimation process and how the accuracy and uncertainty of estimates improves as data is collected over the course of an outbreak. We assess the informativeness of individual data points in a given time series to determine when more frequent sampling (if possible) would prove to be most beneficial to the estimation process. This technique can be used to design data sampling schemes in more general contexts.
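
A hedged sketch of the least-squares estimation and asymptotic uncertainty analysis described above, using synthetic data from a hypothetical SIR outbreak: the covariance of the estimates comes from the Jacobian at the optimum, and the correlation between the transmission and recovery parameters can be read off directly, along with R0.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(8)
N_pop = 10_000.0
beta_true, gamma_true = 0.5, 0.2                 # hypothetical transmission / recovery rates
t_obs = np.arange(0, 60, 2.0)

def sir(t, y, beta, gamma):
    s, i, r = y
    return [-beta * s * i / N_pop, beta * s * i / N_pop - gamma * i, gamma * i]

def infectives(params):
    sol = solve_ivp(sir, (0, t_obs[-1]), [N_pop - 10, 10, 0],
                    t_eval=t_obs, args=tuple(params), rtol=1e-8)
    return sol.y[1]

data = infectives([beta_true, gamma_true]) * (1 + 0.05 * rng.standard_normal(t_obs.size))

fit = least_squares(lambda p: infectives(p) - data, x0=[0.4, 0.1], bounds=([0, 0], [2, 2]))
beta_hat, gamma_hat = fit.x

# Asymptotic covariance of the estimates from the Jacobian at the optimum
dof = t_obs.size - 2
sigma2 = np.sum(fit.fun ** 2) / dof
cov = sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

print(f"beta = {beta_hat:.3f}, gamma = {gamma_hat:.3f}, "
      f"R0 = {beta_hat / gamma_hat:.2f}, corr(beta, gamma) = {corr:+.2f}")
```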

Journal ArticleDOI
TL;DR: It is demonstrated that stochastic analyses can be performed on large and complex models at affordable computational cost, and the applicability of the proposed tools to practical problems is shown.

Proceedings ArticleDOI
22 May 2012
TL;DR: Various types of meta-models that have been used in the last decade in the context of structural reliability are reviewed, and a new technique that addresses the potential bias in meta-model-based estimation of a probability of failure is finally presented.
Abstract: A meta-model (or a surrogate model) is the modern name for what was traditionally called a response surface. It is intended to mimic the behaviour of a computational model M (e.g. a finite element model in mechanics) while being inexpensive to evaluate, in contrast to the original model M which may take hours or even days of computer processing time. In this paper various types of meta-models that have been used in the last decade in the context of structural reliability are reviewed. More specifically classical polynomial response surfaces, polynomial chaos expansions and kriging are addressed. It is shown how the need for error estimates and adaptivity in their construction has brought this type of approaches to a high level of efficiency. A new technique that solves the problem of the potential biasedness in the estimation of a probability of failure through the use of meta-models is finally presented.
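
As a minimal illustration of the meta-model idea (a quadratic polynomial response surface here; kriging or polynomial chaos would be handled analogously), the sketch below fits the surrogate on a small design and then estimates the failure probability by Monte Carlo on the surrogate alone; the limit-state function is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)

def limit_state(x):
    # Hypothetical performance function g(X); failure when g < 0.
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1] + 0.1 * np.sin(4 * x[:, 0])

def quad_features(x):
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones(len(x)), x1, x2, x1 * x2, x1**2, x2**2])

# Fit a quadratic response surface from a small experimental design
x_doe = rng.standard_normal((60, 2)) * 1.5
coeffs, *_ = np.linalg.lstsq(quad_features(x_doe), limit_state(x_doe), rcond=None)

# Monte Carlo on the cheap surrogate (the expensive model is never called here)
x_mc = rng.standard_normal((1_000_000, 2))
pf_surrogate = np.mean(quad_features(x_mc) @ coeffs < 0.0)
pf_reference = np.mean(limit_state(x_mc) < 0.0)     # affordable only for this toy g
print(f"P_f surrogate = {pf_surrogate:.4f},  P_f direct = {pf_reference:.4f}")
```

The small mismatch between the two estimates is precisely the surrogate-induced bias that the new technique mentioned in the abstract is designed to remove.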

Journal ArticleDOI
TL;DR: This paper reports structural health monitoring benchmark study results for the Canton Tower obtained with Bayesian methods, using a given set of structural acceleration measurements and the corresponding ambient conditions over 24 hours.
Abstract: This paper reports the structural health monitoring benchmark study results for the Canton Tower using Bayesian methods. In this study, output-only modal identification and finite element model updating are considered using a given set of structural acceleration measurements and the corresponding ambient conditions of 24 hours. In the first stage, the Bayesian spectral density approach is used for output-only modal identification with the acceleration time histories as the excitation to the tower is unknown. The modal parameters and the associated uncertainty can be estimated through Bayesian inference. Uncertainty quantification is important for determination of statistically significant change of the modal parameters and for weighting assignment in the subsequent stage of model updating. In the second stage, a Bayesian model updating approach is utilized to update the finite element model of the tower. The uncertain stiffness parameters can be obtained by minimizing an objective function that is a weighted sum of the square of the differences (residuals) between the identified modal parameters and the corresponding values of the model. The weightings distinguish the contribution of different residuals with different uncertain levels. They are obtained using the Bayesian spectral density approach in the first stage. Again, uncertainty of the stiffness parameters can be quantified with Bayesian inference. Finally, this Bayesian framework is applied to the 24- hour field measurements to investigate the variation of the modal and stiffness parameters under changing ambient conditions. Results show that the Bayesian framework successfully achieves the goal of the first task of this benchmark study.

Journal ArticleDOI
TL;DR: This paper introduces a simplex stochastic collocation (SSC) method, as a multielement UQ method based on simplex elements, that can efficiently discretize nonhypercube probability spaces and achieves superlinear convergence and a linear increase of the initial number of samples with increasing dimensionality.
Abstract: Stochastic collocation (SC) methods for uncertainty quantification (UQ) in computational problems are usually limited to hypercube probability spaces due to the structured grid of their quadrature rules. Nonhypercube probability spaces with an irregular shape of the parameter domain do, however, occur in practical engineering problems. For example, production tolerances and other geometrical uncertainties can lead to correlated random inputs on nonhypercube domains. In this paper, a simplex stochastic collocation (SSC) method is introduced, as a multielement UQ method based on simplex elements, that can efficiently discretize nonhypercube probability spaces. It combines the Delaunay triangulation of randomized sampling at adaptive element refinements with polynomial extrapolation to the boundaries of the probability domain. The robustness of the extrapolation is quantified by the definition of the essentially extremum diminishing (EED) robustness principle. Numerical examples show that the resulting SSC-EED method achieves superlinear convergence and a linear increase of the initial number of samples with increasing dimensionality. These properties are demonstrated for uniform and nonuniform distributions, and correlated and uncorrelated parameters in problems with 15 dimensions and discontinuous responses.
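
A much-simplified sketch of collocation on simplex elements, assuming a hypothetical response and a triangular (nonhypercube) input domain: scattered nodes are triangulated with Delaunay and a piecewise-linear surrogate is used for propagation. The adaptive refinement, boundary extrapolation, and EED robustness of the actual SSC method are omitted here.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(10)

def response(xi):
    # Hypothetical model response over the random inputs
    return np.sin(3 * xi[:, 0]) + xi[:, 1] ** 2

# Correlated inputs supported on a triangle (a nonhypercube probability domain):
# xi1 ~ U(0, 1), xi2 ~ U(0, xi1)
def sample(n):
    xi1 = rng.uniform(size=n)
    xi2 = rng.uniform(size=n) * xi1
    return np.column_stack([xi1, xi2])

nodes = sample(200)                               # randomized collocation nodes
tri = Delaunay(nodes)                             # simplex elements
surrogate = LinearNDInterpolator(tri, response(nodes))

# Propagate uncertainty through the piecewise-linear surrogate
xi_mc = sample(200_000)
vals = surrogate(xi_mc)
vals = vals[~np.isnan(vals)]                      # drop points outside the triangulated hull
print("surrogate mean / std:", vals.mean(), vals.std())
print("direct    mean / std:", response(xi_mc).mean(), response(xi_mc).std())
```

Points falling outside the convex hull of the nodes are simply discarded here; handling them by polynomial extrapolation to the domain boundary is exactly what the SSC-EED construction addresses.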

Journal ArticleDOI
TL;DR: A new methodology for uncertainty quantification in systems that require multidisciplinary iterative analysis between two or more coupled component models is proposed, based on computing the probability of satisfying the interdisciplinary compatibility equations, conditioned on specific values of the coupling variables.
Abstract: This paper proposes a new methodology for uncertainty quantification in systems that require multidisciplinary iterative analysis between two or more coupled component models. This methodology is based on computing the probability of satisfying the interdisciplinary compatibility equations, conditioned on specific values of the coupling (or feedback) variables, and this information is used to estimate the probability distributions of the coupling variables. The estimation of the coupling variables is analogous to likelihood-based parameter estimation in statistics and thus leads to the proposed likelihood approach for multidisciplinary analysis (LAMDA). Using the distributions of the feedback variables, the coupling can be removed in any one direction without loss of generality, while still preserving the mathematical relationship between the coupling variables. The calculation of the probability distributions of the coupling variables is theoretically exact and does not require a fully coupled system analysis. The proposed method is illustrated using a mathematical example and an aerospace system application—a fire detection satellite.

Journal ArticleDOI
TL;DR: This work develops a framework that enables us to isolate the impact of parametric uncertainty on the MD predictions and control the effect of the intrinsic noise, and construct the PC representations of quantities of interest (QoIs) using two different approaches: nonintrusive spectral projection (NISP) and Bayesian inference.
Abstract: This work focuses on quantifying the effect of intrinsic (thermal) noise and parametric uncertainty in molecular dynamics (MD) simulations. We consider isothermal, isobaric MD simulations of TIP4P (or four-site) water at ambient conditions, $T=298$ K and $P=1$ atm. Parametric uncertainty is assumed to originate from three force-field parameters that are parametrized in terms of standard uniform random variables. The thermal fluctuations inherent in MD simulations combine with parametric uncertainty to yield nondeterministic, noisy MD predictions of bulk water properties. Relying on polynomial chaos (PC) expansions, we develop a framework that enables us to isolate the impact of parametric uncertainty on the MD predictions and control the effect of the intrinsic noise. We construct the PC representations of quantities of interest (QoIs) using two different approaches: nonintrusive spectral projection (NISP) and Bayesian inference. We verify a priori the legitimacy of the NISP approach by verifying that the...
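
A minimal 1-D sketch of the NISP route, with a hypothetical smooth observable in place of the noisy MD-derived quantities of interest (the paper's treatment of intrinsic thermal noise is not reproduced here): the PC coefficients are obtained by Gauss-Legendre quadrature projection onto Legendre polynomials.

```python
import numpy as np
from numpy.polynomial import legendre as L

def qoi(xi):
    # Hypothetical smooth observable (e.g., density) as a function of a
    # force-field parameter mapped to xi in [-1, 1].
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

order = 6
nodes, weights = L.leggauss(order + 1)            # Gauss-Legendre quadrature rule
samples = qoi(nodes)                              # one "simulation" per quadrature node

# Non-intrusive spectral projection: c_k = (2k+1)/2 * sum_i w_i f(xi_i) P_k(xi_i)
coeffs = np.array([
    (2 * k + 1) / 2.0 * np.sum(weights * samples * L.legval(nodes, np.eye(order + 1)[k]))
    for k in range(order + 1)
])

# The PC surrogate reproduces the QoI and gives moments analytically:
# mean = c_0, variance = sum_{k>=1} c_k^2 / (2k + 1) for a uniform input on [-1, 1]
xi_test = np.linspace(-1, 1, 5)
print("surrogate vs model:", L.legval(xi_test, coeffs), qoi(xi_test))
print("PC mean, std:", coeffs[0],
      np.sqrt(np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, order + 1) + 1))))
```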

Journal ArticleDOI
TL;DR: The theoretical analysis shows that, for linear or linearized‐nonlinear models, confidence and credible intervals are always numerically identical when consistent prior information is used, and suggests that for environmental problems with lengthy execution times that make credible intervals inconvenient or prohibitive, confidence intervals can provide important insight.
Abstract: [1] Confidence intervals based on classical regression theories augmented to include prior information and credible intervals based on Bayesian theories are conceptually different ways to quantify parametric and predictive uncertainties. Because both confidence and credible intervals are used in environmental modeling, we seek to understand their differences and similarities. This is of interest in part because calculating confidence intervals typically requires tens to thousands of model runs, while Bayesian credible intervals typically require tens of thousands to millions of model runs. Given multi-Gaussian distributed observation errors, our theoretical analysis shows that, for linear or linearized-nonlinear models, confidence and credible intervals are always numerically identical when consistent prior information is used. For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if parameter confidence regions defined using the approximate likelihood method and parameter credible regions estimated using Markov chain Monte Carlo realizations are numerically identical and predictions are a smooth, monotonic function of the parameters. Both occur if intrinsic model nonlinearity is small. While the conditions of Gaussian errors and small intrinsic model nonlinearity are violated by many environmental models, heuristic tests using analytical and numerical models suggest that linear and nonlinear confidence intervals can be useful approximations of uncertainty even under significantly nonideal conditions. In the context of epistemic model error for a complex synthetic nonlinear groundwater problem, the linear and nonlinear confidence and credible intervals for individual models performed similarly enough to indicate that the computationally frugal confidence intervals can be useful in many circumstances. Experiences with these groundwater models are expected to be broadly applicable to many environmental models. We suggest that for environmental problems with lengthy execution times that make credible intervals inconvenient or prohibitive, confidence intervals can provide important insight. During model development when frequent calculation of uncertainty intervals is important to understanding the consequences of various model construction alternatives and data collection strategies, strategic use of both confidence and credible intervals can be critical.
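
The linear-Gaussian case of the comparison can be written down directly. In the sketch below (hypothetical data, error variance assumed known for simplicity), a regression augmented with the prior as pseudo-observations gives exactly the same intervals as the conjugate Bayesian posterior, which is the "numerically identical" case stated in the abstract.

```python
import numpy as np

rng = np.random.default_rng(11)
x = np.linspace(0, 10, 25)
X = np.column_stack([np.ones_like(x), x])          # linear model y = b0 + b1 x
sigma = 0.5
y = X @ np.array([1.0, 0.3]) + sigma * rng.standard_normal(x.size)

prior_mean = np.array([0.0, 0.0])
prior_cov = np.diag([4.0, 4.0])                    # consistent prior information

# Regression augmented with prior information: stack the prior as extra "data".
X_aug = np.vstack([X, np.eye(2)])
y_aug = np.concatenate([y, prior_mean])
W = np.diag(np.concatenate([np.full(x.size, 1 / sigma**2), 1 / np.diag(prior_cov)]))
cov_freq = np.linalg.inv(X_aug.T @ W @ X_aug)
beta_freq = cov_freq @ X_aug.T @ W @ y_aug

# Bayesian conjugate posterior for the same linear-Gaussian problem
post_cov = np.linalg.inv(X.T @ X / sigma**2 + np.linalg.inv(prior_cov))
post_mean = post_cov @ (X.T @ y / sigma**2 + np.linalg.inv(prior_cov) @ prior_mean)

for i, name in enumerate(["intercept", "slope"]):
    ci = beta_freq[i] + 1.96 * np.sqrt(cov_freq[i, i]) * np.array([-1, 1])
    cr = post_mean[i] + 1.96 * np.sqrt(post_cov[i, i]) * np.array([-1, 1])
    print(f"{name}: 95% confidence {np.round(ci, 3)}  95% credible {np.round(cr, 3)}")
```

For nonlinear models the two constructions diverge, which is where the paper's conditions on intrinsic nonlinearity and smooth, monotonic predictions come in.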

Journal ArticleDOI
TL;DR: An approach is presented to the calibration of an agent-based model of experimental autoimmune encephalomyelitis (EAE), a mouse proxy for multiple sclerosis (MS), which harnesses interaction between a modeller and domain expert in mitigating uncertainty in the data derived from the real domain.
Abstract: For computational agent-based simulation to become a serious tool for investigating biological systems, the implications of simulation-derived results must be appreciated in terms of the original system. However, epistemic uncertainty regarding the exact nature of biological systems can complicate the calibration of models and simulations that attempt to capture their structure and behaviour, and can obscure the interpretation of simulation-derived experimental results with respect to the real domain. We present an approach to the calibration of an agent-based model of experimental autoimmune encephalomyelitis (EAE), a mouse proxy for multiple sclerosis (MS), which harnesses interaction between a modeller and domain expert in mitigating uncertainty in the data derived from the real domain. A novel uncertainty analysis technique is presented that, in conjunction with a Latin hypercube-based global sensitivity analysis, can indicate the implications of epistemic uncertainty in the real domain. These ...