
Showing papers on "Uncertainty analysis published in 2009"


Book
27 Jul 2009
TL;DR: This book presents the Taylor Series and Monte Carlo Methods for uncertainty propagation and shows how uncertainty analysis is used in planning, designing, debugging, and executing experiments and in validating simulations.
Abstract: Preface.
1 Experimentation, Errors, and Uncertainty. 1-1 Experimentation. 1-2 Experimental Approach. 1-3 Basic Concepts and Definitions. 1-4 Experimental Results Determined from Multiple Measured Variables. 1-5 Guides and Standards. 1-6 A Note on Nomenclature. References. Problems.
2 Errors and Uncertainties in a Measured Variable. 2-1 Statistical Distributions. 2-2 Gaussian Distribution. 2-3 Samples from Gaussian Parent Population. 2-4 Statistical Rejection of Outliers from a Sample. 2-5 Uncertainty of a Measured Variable. 2-6 Summary. References. Problems.
3 Uncertainty in a Result Determined from Multiple Variables. 3-1 Taylor Series Method for Propagation of Uncertainties. 3-2 Monte Carlo Method for Propagation of Uncertainties. References. Problems.
4 General Uncertainty Analysis: Planning an Experiment and Application in Validation. 4-1 Overview: Using Uncertainty Propagation in Experiments and Validation. 4-2 General Uncertainty Analysis Using the Taylor Series Method. 4-3 Application to Experiment Planning (TSM). 4-4 Using TSM Uncertainty Analysis in Planning an Experiment. 4-5 Example: Analysis of Proposed Particulate Measuring System. 4-6 Example: Analysis of Proposed Heat Transfer Experiment. 4-7 Examples of Presentation of Results from Actual Applications. 4-8 Application in Validation: Estimating Uncertainty in Simulation Result Due to Uncertainties in Inputs. References. Problems.
5 Detailed Uncertainty Analysis: Designing, Debugging, and Executing an Experiment. 5-1 Using Detailed Uncertainty Analysis. 5-2 Detailed Uncertainty Analysis: Overview of Complete Methodology. 5-3 Determining Random Uncertainty of Experimental Result. 5-4 Determining Systematic Uncertainty of Experimental Result. 5-5 Comprehensive Example: Sample-to-Sample Experiment. 5-6 Comprehensive Example: Debugging and Qualification of a Timewise Experiment. 5-7 Some Additional Considerations in Experiment Execution. References. Problems.
6 Validation of Simulations. 6-1 Introduction to Validation Methodology. 6-2 Errors and Uncertainties. 6-3 Validation Nomenclature. 6-4 Validation Approach. 6-5 Code and Solution Verification. 6-6 Estimation of Validation Uncertainty u_val. 6-7 Interpretation of Validation Results Using E and u_val. 6-8 Some Practical Points. References.
7 Data Analysis, Regression, and Reporting of Results. 7-1 Overview of Regression Analysis and Its Uncertainty. 7-2 Least-Squares Estimation. 7-3 Classical Linear Regression Uncertainty: Random Uncertainty. 7-4 Comprehensive Approach to Linear Regression Uncertainty. 7-5 Reporting Regression Uncertainties. 7-6 Regressions in Which X and Y Are Functional Relations. 7-7 Examples of Determining Regressions and Their Uncertainties. 7-8 Multiple Linear Regression. References. Problems.
Appendix A Useful Statistics.
Appendix B Taylor Series Method (TSM) for Uncertainty Propagation. B-1 Derivation of Uncertainty Propagation Equation. B-2 Comparison with Previous Approaches. B-3 Additional Assumptions for Engineering Applications. References.
Appendix C Comparison of Models for Calculation of Uncertainty. C-1 Monte Carlo Simulations. C-2 Simulation Results. References.
Appendix D Shortest Coverage Interval for Monte Carlo Method. Reference.
Appendix E Asymmetric Systematic Uncertainties. E-1 Procedure for Asymmetric Systematic Uncertainties Using TSM Propagation. E-2 Procedure for Asymmetric Systematic Uncertainties Using MCM Propagation. E-3 Example: Biases in a Gas Temperature Measurement System. References.
Appendix F Dynamic Response of Instrument Systems. F-1 General Instrument Response. F-2 Response of Zero-Order Instruments. F-3 Response of First-Order Instruments. F-4 Response of Second-Order Instruments. F-5 Summary. References.
Index.
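
The propagation equation at the heart of Chapters 3-4 and Appendix B can be sketched as follows, written here in generic notation (not copied from the book): a result r computed from measured variables X_1, ..., X_J inherits a combined standard uncertainty from the standard uncertainties u_{X_i} of those variables.

```latex
% First-order Taylor Series Method (TSM) propagation of uncertainty
% for a result r = r(X_1, ..., X_J); u_{X_i X_k} is the covariance term
% arising, e.g., from correlated systematic errors.
u_r^2 \;=\; \sum_{i=1}^{J} \left(\frac{\partial r}{\partial X_i}\right)^{2} u_{X_i}^{2}
\;+\; 2\sum_{i=1}^{J-1}\sum_{k=i+1}^{J}
\frac{\partial r}{\partial X_i}\,\frac{\partial r}{\partial X_k}\, u_{X_i X_k}
```

The Monte Carlo Method treated in Chapter 3 estimates the same quantity by repeatedly sampling the X_i from their assumed distributions and evaluating r directly.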

1,110 citations


Journal ArticleDOI
TL;DR: The thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
Abstract: Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
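
The hierarchical decomposition the article advocates is commonly written in the bracket notation below, separating a data model, a process model, and a parameter model; the symbols here are generic rather than taken from the paper.

```latex
% Hierarchical factorization of the joint distribution of data Z,
% latent ecological process Y, and parameters \theta; inference targets
% the posterior of the process and parameters given the data.
[Z, Y, \theta] \;=\; \underbrace{[Z \mid Y, \theta]}_{\text{data model}}
\,\underbrace{[Y \mid \theta]}_{\text{process model}}
\,\underbrace{[\theta]}_{\text{parameter model}},
\qquad
[Y, \theta \mid Z] \;\propto\; [Z \mid Y, \theta]\,[Y \mid \theta]\,[\theta]
```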

530 citations


Proceedings ArticleDOI
01 May 2009
TL;DR: The latest ideas for tailoring polynomial chaos and stochastic collocation expansions to numerical integration approaches are explored, in which expansion formulations are modified to best synchronize with tensor-product quadrature and Smolyak sparse grids using linear and nonlinear growth rules.
Abstract: Non-intrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients, and requires the use of structured collocation point sets derived from tensor product or sparse grids. When tailoring the basis functions or interpolation grids to match the forms of the input uncertainties, exponential convergence rates can be achieved with both techniques for a range of probabilistic analysis problems. In addition, analytic features of the expansions can be exploited for moment estimation and stochastic sensitivity analysis. In this paper, the latest ideas for tailoring these expansion methods to numerical integration approaches will be explored, in which expansion formulations are modified to best synchronize with tensor-product quadrature and Smolyak sparse grids using linear and nonlinear growth rules. The most promising stochastic expansion approaches are then carried forward for use in new approaches for mixed aleatory-epistemic UQ, employing second-order probability approaches, and design under uncertainty, employing bilevel, sequential, and multifidelity approaches.
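
The two expansion forms compared in the paper can be sketched as follows, with xi the vector of standardized random inputs; the symbols are generic and not taken from the paper.

```latex
% Polynomial chaos expansion (PCE): coefficients \alpha_j are estimated
% for known orthogonal basis polynomials \Psi_j(\xi).
R(\xi) \;\approx\; \sum_{j=0}^{P} \alpha_j \,\Psi_j(\xi)

% Stochastic collocation (SC): coefficients are the known response values
% r(\xi_j) at structured collocation points, combined with interpolation
% functions L_j(\xi) built on tensor-product or sparse grids.
R(\xi) \;\approx\; \sum_{j=1}^{N_p} r(\xi_j)\, L_j(\xi)
```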

354 citations


Journal ArticleDOI
TL;DR: The basic theory concerning the use of special functions, copulae, for dependence modeling is presented, with focus on a basic function, the Normal copula; a case study shows the application of the technique to the study of the large-scale integration of wind power in the Netherlands.
Abstract: The increasing penetration of renewable generation in power systems necessitates the modeling of this stochastic system infeed in operation and planning studies. The system analysis leads to multivariate uncertainty analysis problems, involving non-Normal correlated random variables. In this context, the modeling of stochastic dependence is paramount for obtaining accurate results; it corresponds to the concurrent behavior of the random variables, having a major impact to the aggregate uncertainty (in problems where the random variables correspond to spatially spread stochastic infeeds) or their evolution in time (in problems where the random variables correspond to infeeds over specific time-periods). In order to investigate, measure and model stochastic dependence, one should transform all different random variables to a common domain, the rank/uniform domain, by applying the cumulative distribution function transformation. In this domain, special functions, copulae, can be used for modeling dependence. In this contribution the basic theory concerning the use of these functions for dependence modeling is presented and focus is given on a basic function, the Normal copula. The case study shows the application of the technique for the study of the large-scale integration of wind power in the Netherlands.
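
A minimal sketch of the workflow the abstract describes (transforming margins to the rank/uniform domain and imposing dependence through a Normal copula), assuming illustrative Weibull wind-speed marginals and a chosen correlation; none of the numbers are from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Target dependence between two stochastic infeeds (illustrative).
r = np.array([[1.0, 0.8],
              [0.8, 1.0]])

# 1. Sample a multivariate Normal with the chosen correlation matrix.
z = rng.multivariate_normal(mean=np.zeros(2), cov=r, size=10_000)

# 2. Map each margin to the rank/uniform domain with the standard Normal CDF
#    (this is the Normal copula step).
u = stats.norm.cdf(z)

# 3. Apply the inverse CDF of each target marginal (illustrative Weibull
#    wind-speed distributions) to obtain dependent non-Normal samples.
x1 = stats.weibull_min.ppf(u[:, 0], c=2.0, scale=8.0)
x2 = stats.weibull_min.ppf(u[:, 1], c=2.2, scale=7.5)

print("sample rank correlation:", round(stats.spearmanr(x1, x2)[0], 3))
```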

351 citations


Journal ArticleDOI
TL;DR: An improvement on existing methods for sensitivity analysis (SA) of complex computer models is described for use when the model is too computationally expensive for a standard Monte Carlo analysis; the new approach estimates sensitivity indices using meta-models and bootstrap confidence intervals.

312 citations


Journal ArticleDOI
TL;DR: In this paper, a comparative study of the performance of several representative uncertainty propagation methods, including a few newly developed methods that have received growing attention, is performed; the insights gained are expected to guide designers in choosing the most applicable uncertainty propagation technique in design under uncertainty.
Abstract: A wide variety of uncertainty propagation methods exist in literature; however, there is a lack of good understanding of their relative merits. In this paper, a comparative study on the performances of several representative uncertainty propagation methods, including a few newly developed methods that have received growing attention, is performed. The full factorial numerical integration, the univariate dimension reduction method, and the polynomial chaos expansion method are implemented and applied to several test problems. They are tested under different settings of the performance nonlinearity, distribution types of input random variables, and the magnitude of input uncertainty. The performances of those methods are compared in moment estimation, tail probability calculation, and the probability density function construction, corresponding to a wide variety of scenarios of design under uncertainty, such as robust design, and reliability-based design optimization. The insights gained are expected to direct designers for choosing the most applicable uncertainty propagation technique in design under uncertainty.
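
A minimal sketch of the kind of comparison the paper performs for moment estimation, contrasting crude Monte Carlo with full factorial (tensor-product) Gauss-Hermite quadrature on an illustrative nonlinear function of two standard Normal inputs; the function and settings are assumptions, not the paper's test problems.

```python
import numpy as np

def g(x1, x2):
    # Illustrative nonlinear performance function (not from the paper).
    return x1**2 + np.sin(x2) + 0.5 * x1 * x2

# --- Crude Monte Carlo over independent standard Normal inputs ---
rng = np.random.default_rng(1)
xs = rng.standard_normal((200_000, 2))
y = g(xs[:, 0], xs[:, 1])
print("MC   mean, var:", round(y.mean(), 4), round(y.var(), 4))

# --- Full factorial numerical integration: tensor-product Gauss-Hermite ---
# hermgauss uses weight exp(-t^2); substitute x = sqrt(2)*t and normalize by sqrt(pi).
t, w = np.polynomial.hermite.hermgauss(8)
x = np.sqrt(2.0) * t
wn = w / np.sqrt(np.pi)
X1, X2 = np.meshgrid(x, x)
W = np.outer(wn, wn)
m1 = np.sum(W * g(X1, X2))          # E[g]
m2 = np.sum(W * g(X1, X2) ** 2)     # E[g^2]
print("Quad mean, var:", round(m1, 4), round(m2 - m1**2, 4))
```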

290 citations


Journal ArticleDOI
TL;DR: A novel approach for undertaking a spatial sensitivity analysis (based on the method of Sobol' and its related improvements) is proposed and tested; it makes no assumptions about the model and enables the analysis of spatially distributed, uncertain inputs.
Abstract: Sensitivity analysis is the study of how uncertainty in model predictions is determined by uncertainty in model inputs. A global sensitivity analysis considers the potential effects from the simultaneous variation of model inputs over their finite range of uncertainty. A number of techniques are available to carry out global sensitivity analysis from a set of Monte Carlo simulations; some techniques are more efficient than others, depending on the strategy used to sample the uncertainty of model inputs and on the formulae employed for estimating sensitivity measures. The most common approaches are summarised in this paper by focusing on the limitations of each in the context of a sensitivity analysis of a spatial model. A novel approach for undertaking a spatial sensitivity analysis (based on the method of Sobol' and its related improvements) is proposed and tested. This method makes no assumptions about the model and enables the analysis of spatially distributed, uncertain inputs. The proposed approach is illustrated with a simple test model and a groundwater contaminant model.
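
A minimal sketch of Monte Carlo estimation of Sobol' first-order indices with the common pick-and-freeze (Saltelli-style) estimator, applied to an illustrative non-spatial test function (Ishigami) rather than the paper's groundwater model.

```python
import numpy as np

def model(x):
    # Ishigami test function, a standard sensitivity-analysis benchmark.
    a, b = 7.0, 0.1
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(42)
n, d = 100_000, 3
# Two independent Monte Carlo input samples on [-pi, pi]^3.
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

# Pick-and-freeze: replace column i of B with column i of A, re-run the model,
# and estimate the first-order index S_i for each input.
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    yABi = model(ABi)
    S_i = np.mean(yA * (yABi - yB)) / var_y
    print(f"S_{i+1} ~ {S_i:.3f}")
```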

247 citations


Journal ArticleDOI
TL;DR: This work analyzes how the offset ratio needed to guarantee a robustly fair exchange is influenced by uncertainty in the effectiveness of restoration action, correlation between the success of different compensation areas, and time discounting.
Abstract: Biodiversity offset areas may compensate for ecological damage caused by human activity elsewhere. One way of determining the offset ratio, or the compensation area needed, is to divide the present conservation value of the development site by the predicted future conservation value of a compensation area of the same size. Matching mean expected utility in this way is deficient because it ignores uncertainty and time lags in the growth of conservation value in compensation areas. Instead, we propose an uncertainty analytic framework for calculating what we call robustly fair offset ratios, which guarantee a high enough probability of the exchange producing at least as much conservation value in the offset areas than is lost from the development site. In particular, we analyze how the fair offset ratio is influenced by uncertainty in the effectiveness of restoration action, correlation between success of different compensation areas, and time discounting. We find that very high offset ratios may be needed to guarantee a robustly fair exchange, compared to simply matching mean expected utilities. These results demonstrate that considerations of uncertainty, correlated success/failure, and time discounting should be included in the determination of the offset ratio to avoid a significant risk that the exchange is unfavorable for conservation in the long run. This is essential because the immediate loss is certain, whereas future gain is uncertain. The proposed framework is also applicable to the case when offset areas already hold conservation value and do not require restoration action, in which case uncertainty about the conservation outcome will be lower.

239 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose to separate two fundamentally different types of uncertainty in flood risk analyses: aleatory uncertainty, which refers to quantities that are inherently variable in time, space, or populations of individuals or objects, and epistemic uncertainty, which results from incomplete knowledge of the system under investigation.
Abstract: Although flood risk assessments are frequently associated with significant uncertainty, formal uncertainty analyses are the exception rather than the rule. We propose to separate two fundamentally different types of uncertainty in flood risk analyses: aleatory and epistemic uncertainty. Aleatory uncertainty refers to quantities that are inherently variable in time, space or populations of individuals or objects. Epistemic uncertainty results from incomplete knowledge and is related to our inability to understand, measure and describe the system under investigation. The separation between aleatory and epistemic uncertainty is exemplified for the flood risk analysis of the city of Cologne, Germany. This flood risk assessment consists of three modules: (1) flood frequency analysis, (2) inundation estimation and (3) damage estimation. By the concept of parallel models, the epistemic uncertainty of each module is quantified. The epistemic uncertainty associated with the risk estimate is reduced by introducing additional information into the risk analysis. Finally, the contribution of different modules to the total uncertainty is quantified. The flood risk analysis results in a flood risk curve, representing aleatory uncertainty, and in associated uncertainty bounds, representing epistemic uncertainty. In this way, the separation reveals the uncertainty (epistemic) that can be reduced by more knowledge and the uncertainty (aleatory) that is not reducible.
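
A minimal sketch of the two-layer structure the abstract describes: aleatory variability is carried by a flood frequency (risk) curve, while epistemic uncertainty is represented by an outer loop over alternative parameter sets, producing uncertainty bounds around the curve. The Gumbel model and all numbers below are illustrative assumptions, not the Cologne study.

```python
import numpy as np

rng = np.random.default_rng(7)
return_periods = np.array([2, 5, 10, 50, 100, 500])
p_exceed = 1.0 / return_periods

curves = []
# Outer (epistemic) loop: uncertain flood-frequency parameters sampled
# from assumed distributions, e.g. reflecting limited gauge records.
for _ in range(1000):
    loc = rng.normal(2000.0, 150.0)    # Gumbel location, m^3/s
    scale = rng.normal(700.0, 100.0)   # Gumbel scale, m^3/s
    # Inner (aleatory) layer: discharge quantiles of the frequency curve.
    q = loc - scale * np.log(-np.log(1.0 - p_exceed))
    curves.append(q)

curves = np.array(curves)
median = np.percentile(curves, 50, axis=0)
lower, upper = np.percentile(curves, [5, 95], axis=0)
for T, m, lo, hi in zip(return_periods, median, lower, upper):
    print(f"T={T:>3} yr: median {m:7.0f}, 90% epistemic bounds [{lo:7.0f}, {hi:7.0f}] m^3/s")
```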

223 citations



DOI
01 Jan 2009
TL;DR: In this paper, a prototype simulation based environment that provides add-ons like uncertainty and sensitivity analysis, multi-criteria and disciplinary decision making under uncertainty, and multi-objective optimization is presented.
Abstract: Building performance simulation (BPS) uses computer-based models that cover performance aspects such as energy consumption and thermal comfort in buildings. The uptake of BPS in current building design projects is limited. Although there is a large number of building simulation tools available, the actual application of these tools is mostly restricted to code compliance checking or thermal load calculations for sizing of heating, ventilation and air-conditioning systems in detailed design. The aim of the presented work is to investigate opportunities in BPS during the later phases of the design process, and to research and enable innovative applications of BPS for design support. The research started from an existing and proven design stage specific simulation software tool. The research methods applied comprise literature review, interviews, rapid iterative prototyping, and usability testing. The result of this research is a prototype simulation-based environment that provides add-ons like uncertainty and sensitivity analysis, multi-criteria and disciplinary decision making under uncertainty, and multi-objective optimization. The first prototype, addressing the uncertainties in physical, scenario, and design parameters, provides additional information through figures and tables. This outcome helps the designer in understanding how parameters relate to each other and to comprehend how variations in the model input affect the output. It supports the design process by providing a basis to compare different design options and therefore leads to improved guidance in the design process. The second approach addresses the integration of a decision making protocol with the extension of uncertainty and sensitivity analysis. This prototype supports the design team in the design process by providing a base for communication. Furthermore, it supports the decision process by providing the possibility to compare different design options while minimizing the risk that is related to different concepts. It reduces the influence of preoccupation in common decision making and avoids pitfalls due to a lack of planning and focus. The third and last approach shows the implementation of two multi-objective algorithms and the integration of uncertainty in optimization. The results show the optimization of parameters for the objectives energy consumption and weighted over- and underheating hours. It further shows how uncertainties impact the Pareto frontier achieved. The applicability and necessity of the three implemented approaches have further been validated with the help of usability testing by conducting mock-up presentations and an online survey. The outcome has shown that the presented results enhance the capabilities of BPS and fulfil the requirements in detailed design by providing a better understanding of results, guidance through the design process, and supporting the decision process. All three approaches have been found important to be integrated in BPS.


Journal ArticleDOI
TL;DR: In this article, a combined genetic algorithm (GA) and Bayesian Model Averaging (BMA) method was used to simultaneously conduct calibration and uncertainty analysis for the Soil and Water Assessment Tool (SWAT).

Journal ArticleDOI
TL;DR: Uncertainty and sensitivity analyses are found promising for helping to build reliable mechanistic models and to interpret model outputs properly, and they form part of good modeling practice.
Abstract: Uncertainty and sensitivity analyses are evaluated for their usefulness as part of model-building within Process Analytical Technology (PAT) applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input uncertainty resulting from assumptions of the model was propagated using the Monte Carlo procedure to estimate the output uncertainty. The results showed that significant uncertainty exists in the model outputs. Moreover, the uncertainties in the biomass, glucose, ammonium and base-consumption predictions were found to be low compared to the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was observed to be lower during the exponential growth phase and higher in the stationary and death phases, meaning the model describes some periods better than others. To understand which input parameters are responsible for the output uncertainty, three sensitivity methods (Standardized Regression Coefficients, Morris and differential analysis) were evaluated and compared. The results from these methods were mostly in agreement with each other and revealed that only a few parameters (about 10) out of a total of 56 were mainly responsible for the output uncertainty. Among these significant parameters, one finds parameters related to fermentation characteristics such as biomass metabolism, chemical equilibria and mass-transfer. Overall, uncertainty and sensitivity analysis are found promising for helping to build reliable mechanistic models and to interpret the model outputs properly. These tools form part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control purposes.
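
A minimal sketch of one of the three sensitivity methods compared (Standardized Regression Coefficients) computed from Monte Carlo input/output samples; the placeholder model and parameter names below are assumptions for illustration, not the S. coelicolor fermentation model.

```python
import numpy as np

def model(theta):
    # Placeholder for a mechanistic model output of interest
    # (e.g., a predicted concentration); illustrative Monod-type expression.
    mu_max, Ks, Yxs = theta.T
    return mu_max * 10.0 / (Ks + 10.0) * Yxs

rng = np.random.default_rng(3)
n = 5000
# Monte Carlo propagation of assumed input uncertainty (uniform ranges).
theta = np.column_stack([
    rng.uniform(0.05, 0.15, n),   # mu_max
    rng.uniform(0.10, 1.00, n),   # Ks
    rng.uniform(0.30, 0.60, n),   # Yxs
])
y = model(theta)

# Standardized Regression Coefficients: regress standardized output on
# standardized inputs; SRC_i^2 approximates the variance share of input i
# when the model is close to linear over the sampled ranges.
Xs = (theta - theta.mean(axis=0)) / theta.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, s in zip(["mu_max", "Ks", "Yxs"], src):
    print(f"SRC({name}) = {s:+.3f}")
print("R^2 of linear fit:", round(1 - np.var(ys - Xs @ src) / np.var(ys), 3))
```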

Journal ArticleDOI
TL;DR: Subspace Monte Carlo (SSMC) as mentioned in this paper reduces the burden of calibration-constrained Monte Carlo when undertaken with highly parameterized models by using a subspace regularization method.
Abstract: We describe a subspace Monte Carlo (SSMC) technique that reduces the burden of calibration-constrained Monte Carlo when undertaken with highly parameterized models. When Monte Carlo methods are used to evaluate the uncertainty in model outputs, ensuring that parameter realizations reproduce the calibration data requires many model runs to condition each realization. In the new SSMC approach, the model is first calibrated using a subspace regularization method, ideally the hybrid Tikhonov-TSVD "superparameter" approach described by Tonkin and Doherty (2005). Sensitivities calculated with the calibrated model are used to define the calibration null-space, which is spanned by parameter combinations that have no effect on simulated equivalents to available observations. Next, a stochastic parameter generator is used to produce parameter realizations, and for each a difference is formed between the stochastic parameters and the calibrated parameters. This difference is projected onto the calibration null-space and added to the calibrated parameters. If the model is no longer calibrated, parameter combinations that span the calibration solution space are reestimated while retaining the null-space projected parameter differences as additive values. The recalibration can often be undertaken using existing sensitivities, so that conditioning requires only a small number of model runs. Using synthetic and real-world model applications we demonstrate that the SSMC approach is general (it is not limited to any particular model or any particular parameterization scheme) and that it can rapidly produce a large number of conditioned parameter sets.
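
A minimal numpy sketch of the null-space projection step at the core of SSMC, assuming a Jacobian J of observation sensitivities evaluated at the calibrated parameters; the array sizes, truncation level, and random values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)
n_obs, n_par = 30, 200                    # many more parameters than observations
J = rng.standard_normal((n_obs, n_par))   # sensitivities of observations to parameters
p_cal = rng.standard_normal(n_par)        # calibrated parameter values

# Split parameter space into a calibration solution space and a null space
# using the SVD of the Jacobian (truncation plays the role of "superparameters").
U, s, Vt = np.linalg.svd(J, full_matrices=True)
n_sol = int(np.sum(s > 1e-8 * s[0]))      # effective rank / truncation level
V_null = Vt[n_sol:].T                     # basis of the calibration null space

# For each stochastic realization, project its difference from the calibrated
# parameters onto the null space and add it back; to first order the simulated
# equivalents of the calibration data are unchanged.
p_stoch = rng.standard_normal((50, n_par))            # realizations from a parameter generator
diff_null = (p_stoch - p_cal) @ V_null @ V_null.T     # null-space projected differences
p_cond = p_cal + diff_null                            # conditioned realizations

# Check: the conditioned realizations reproduce the calibrated model outputs.
print(np.allclose(J @ p_cond.T, (J @ p_cal)[:, None], atol=1e-8))
```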

Journal ArticleDOI
TL;DR: The results suggest that the predicted corridor is robust to uncertainty, and the three carnivore focal species, alone or in combination, were not effective umbrellas for the other focal species.
Abstract: Least-cost models for focal species are widely used to design wildlife corridors. To evaluate the least-cost modeling approach used to develop 15 linkage designs in southern California, USA, we assessed robustness of the largest and least constrained linkage. Species experts parameterized models for eight species with weights for four habitat factors (land cover, topographic position, elevation, road density) and resistance values for each class within a factor (e.g., each class of land cover). Each model produced a proposed corridor for that species. We examined the extent to which uncertainty in factor weights and class resistance values affected two key conservation-relevant outputs, namely, the location and modeled resistance to movement of each proposed corridor. To do so, we compared the proposed corridor to 13 alternative corridors created with parameter sets that spanned the plausible ranges of biological uncertainty in these parameters. Models for five species were highly robust (mean overlap 88%, little or no increase in resistance). Although the proposed corridors for the other three focal species overlapped as little as 0% (mean 58%) of the alternative corridors, resistance in the proposed corridors for these three species was rarely higher than resistance in the alternative corridors (mean difference was 0.025 on a scale of 1- 10; worst difference was 0.39). As long as the model had the correct rank order of resistance values and factor weights, our results suggest that the predicted corridor is robust to uncertainty. The three carnivore focal species, alone or in combination, were not effective umbrellas for the other focal species. The carnivore corridors failed to overlap the predicted corridors of most other focal species and provided relatively high resistance for the other focal species (mean increase of 2.7 resistance units). Least-cost modelers should conduct uncertainty analysis so that decision-makers can appreciate the potential impact of model uncertainty on conservation decisions. Our approach to uncertainty analysis (which can be called a worst-case scenario approach) is appropriate for complex models in which distribution of the input parameters cannot be specified.

Journal ArticleDOI
TL;DR: A Bayesian approach, the Markov Chain Monte Carlo (MCMC) technique, was applied to a newly developed large-scale crop model for paddy rice to optimize a new set of regional-specific parameters and quantify the uncertainty of yield estimation associated with model parameters as mentioned in this paper.

Journal ArticleDOI
TL;DR: A novel method for model uncertainty estimation using machine learning techniques, applied here to rainfall-runoff modeling, is shown to generate consistent, interpretable and improved model uncertainty estimates.
Abstract: A novel method is presented for model uncertainty estimation using machine learning techniques and its application in rainfall runoff modeling. In this method, first, the probability distribution of the model error is estimated separately for different hydrological situations and second, the parameters characterizing this distribution are aggregated and used as output target values for building the training sets for the machine learning model. This latter model, being trained, encapsulates the information about the model error localized for different hydrological conditions in the past and is used to estimate the probability distribution of the model error for the new hydrological model runs. The M5 model tree is used as a machine learning model. The method is tested to estimate uncertainty of a conceptual rainfall runoff model of the Bagmati catchment in Nepal. In this paper the method is extended further to enable it to predict an approximation of the whole error distribution, and also the new results of comparing this method to other uncertainty estimation approaches are reported. It can be concluded that the method generates consistent, interpretable and improved model uncertainty estimates.
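
A minimal sketch of the general idea (estimating the model-error distribution per hydrological situation and training a machine learning model on its parameters), using scikit-learn's DecisionTreeRegressor as a stand-in for the M5 model tree; all data, features, and thresholds are synthetic assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
n = 4000
# Synthetic "hydrological situation" features: recent rainfall and simulated flow.
rain = rng.gamma(2.0, 5.0, n)
flow_sim = 0.6 * rain + rng.normal(0, 2.0, n)
# Synthetic model error that is heteroscedastic: larger during high flows.
err = rng.normal(0, 0.5 + 0.15 * flow_sim.clip(min=0))

X = np.column_stack([rain, flow_sim])

# Characterize the error distribution per situation (here: 5% and 95% empirical
# quantiles within flow-based bins) and use these values as training targets.
edges = np.quantile(flow_sim, np.linspace(0, 1, 11)[1:-1])
bins = np.digitize(flow_sim, edges)
q05 = np.array([np.quantile(err[bins == b], 0.05) for b in bins])
q95 = np.array([np.quantile(err[bins == b], 0.95) for b in bins])

lo_model = DecisionTreeRegressor(max_depth=4).fit(X, q05)
hi_model = DecisionTreeRegressor(max_depth=4).fit(X, q95)

# For a new model run, predict the error bounds and attach them to the simulation.
x_new = np.array([[12.0, 9.0]])
print("predicted 90% error interval:", lo_model.predict(x_new), hi_model.predict(x_new))
```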

Journal ArticleDOI
TL;DR: Only parameterizing the uncertainty directly in the model can inform the decision to conduct further research to resolve this source of uncertainty; the value of research was particularly sensitive to this uncertainty and to the methods used to characterize it.

Journal ArticleDOI
TL;DR: It is argued that future development of LCA requires that much more attention be paid to assessing and managing uncertainties, and a hybrid approach combining process and economic input-output (I-O) approaches to uncertainty analysis of life cycle inventories (LCI) is proposed.
Abstract: Summary Life cycle assessment (LCA) is increasingly being used to inform decisions related to environmental technologies and polices, such as carbon footprinting and labeling, national emission inventories, and appliance standards. However, LCA studies of the same product or service often yield very different results, affecting the perception of LCA as a reliable decision tool. This does not imply that LCA is intrinsically unreliable; we argue instead that future development of LCA requires that much more attention be paid to assessing and managing uncertainties. In this article we review past efforts to manage uncertainty and propose a hybrid approach combining process and economic input–output (I-O) approaches to uncertainty analysis of life cycle inventories (LCI). Different categories of uncertainty are sometimes not tractable to analysis within a given model framework but can be estimated from another perspective. For instance, cutoff or truncation error induced by some processes not being included in a bottom-up process model can be estimated via a top-down approach such as the economic I-O model. A categorization of uncertainty types is presented (data, cutoff, aggregation, temporal, geographic) with a quantitative discussion of methods for evaluation, particularly for assessing temporal uncertainty. A long-term vision for LCI is proposed in which hybrid methods are employed to quantitatively estimate different uncertainty types, which are then reduced through an iterative refinement of the hybrid LCI method.

Journal ArticleDOI
TL;DR: This study focuses on uncertainty analysis of wastewater treatment plant (WWTP) models and analyzes how framing affects the interpretation of uncertainty analysis results, demonstrating that, depending on the way the uncertainty analysis is framed, the estimated uncertainty of design performance criteria differs significantly.

Journal ArticleDOI
TL;DR: In this article, a framework for characterizing geotechnical model uncertainty using observation data is proposed based on the concept of multivariable Bayesian updating, in which the statistics of model uncertainty are updated using observed performance data.
Abstract: As any model is only an abstraction of the real world, model uncertainty always exists. The magnitude of model uncertainty is important for geotechnical decision making. If model uncertainty is not considered, the geotechnical predictions and hence the decisions based on the geotechnical predictions might be biased. In this study, a framework for characterizing geotechnical model uncertainty using observation data is proposed. The framework is based on the concept of multivariable Bayesian updating, in which the statistics of model uncertainty are updated using observed performance data. Uncertainties in both input parameters and observed data can be considered in the proposed framework. To bypass complex computational works involved in the proposed framework, a practical approximate solution is presented. The proposed framework is illustrated by characterizing the model uncertainty of four limit equilibrium methods for slope stability analysis using quality centrifuge test data. Parametric study in the illustrative example shows that both quality and quantity of the performance data could affect the determination of the model uncertainty, and that such effects can be systematically quantified with the proposed method.
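
A minimal sketch of the underlying idea (updating the statistics of a model bias factor, observed over predicted capacity, with observed performance data) using a conjugate Normal update on the log bias; this simplified single-variable stand-in is an assumption for illustration and is not the paper's multivariable Bayesian formulation.

```python
import numpy as np

# Model bias factor lambda = observed capacity / predicted capacity.
# Work with ln(lambda), assumed Normal with unknown mean and known scatter.
prior_mean, prior_var = 0.0, 0.10**2   # prior on the mean of ln(lambda)
obs_var = 0.15**2                       # assumed per-test scatter of ln(lambda)

# Observed performance data, e.g. from centrifuge tests (illustrative values).
lam_obs = np.array([1.05, 0.92, 1.10, 0.98, 1.03])
y = np.log(lam_obs)

# Conjugate Normal-Normal update of the mean of ln(lambda).
n = len(y)
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + y.sum() / obs_var)

print(f"posterior mean bias factor ~ {np.exp(post_mean):.3f}")
print(f"posterior std of mean(ln lambda) ~ {np.sqrt(post_var):.3f}")
```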

01 Jan 2009
TL;DR: In this article, the authors describe what should be the main components of a risk description when following this approach and also indicate how this approach relates to decision-making, and how to communicate the shortcomings and limitations of probabilities and expected values.
Abstract: A quantitative risk analysis (QRA) should provide a broad, informative and balanced picture of risk, in order to support decisions. To achieve this, a proper treatment of uncertainty is a prerequisite. Most approaches to treatment of uncertainty in QRA seem to be based on the thinking that uncertainty relates to the calculated probabilities and expected values. This causes difficulties when it comes to communicating what the analysis results mean, and could easily lead to weakened conclusions if large uncertainties are involved. An alternative approach is to hold uncertainty, not probability, as a main component of risk, and regard probabilities purely as epistemic-based expressions of uncertainty. In the paper the latter view is taken, and we describe what should be the main components of a risk description when following this approach. We also indicate how this approach relates to decision-making. An important issue addressed is how to communicate the shortcomings and limitations of probabilities and expected values. Sensitivity analysis plays a key role in this regard. Examples are included to illustrate ideas and findings.

01 Jan 2009
TL;DR: A Bayes Linear approach is presented in order to identify the subset of the input space that could give rise to acceptable matches between model output and measured data, and was successful in producing a large collection of model evaluations that exhibit good fits to the observed data.
Abstract: In many scientific disciplines complex computer models are used to understand the behaviour of large scale physical systems. An uncertainty analysis of such a computer model known as Galform is presented. Galform models the creation and evolution of approximately one million galaxies from the beginning of the Universe until the current day, and is regarded as a state-of-the-art model within the cosmology community. It requires the specification of many input parameters in order to run the simulation, takes significant time to run, and provides various outputs that can be compared with real world data. A Bayes Linear approach is presented in order to identify the subset of the input space that could give rise to acceptable matches between model output and measured data. This approach takes account of the major sources of uncertainty in a consistent and unified manner, including input parameter uncertainty, function uncertainty, observational error, forcing function uncertainty and structural uncertainty. The approach is known as History Matching, and involves the use of an iterative succession of emulators (stochastic belief specifications detailing beliefs about the Galform function), which are used to cut down the input parameter space. The analysis was successful in producing a large collection of model evaluations that exhibit good fits to the observed data.
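
The history-matching step of cutting down the input space is typically driven by an implausibility measure of the following generic form; the notation is standard for this literature rather than copied from the thesis.

```latex
% Implausibility of input x for an observed quantity z, using the emulator
% expectation E[f(x)] and the variances of emulator uncertainty, model
% (structural) discrepancy, and observational error. Inputs with I(x) above
% a cutoff (commonly around 3) are discarded in each wave.
I(x) \;=\; \frac{\bigl|\, z - \mathrm{E}[f(x)] \,\bigr|}
{\sqrt{\operatorname{Var}[f(x)] \;+\; \operatorname{Var}[\epsilon_{\mathrm{md}}]
\;+\; \operatorname{Var}[\epsilon_{\mathrm{obs}}]}}
```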

Journal ArticleDOI
TL;DR: It is argued that properly using the IPCC’s Guidance Notes for Lead Authors for addressing uncertainty, adding a pedigree analysis for key findings, and particularly communicating the diverse nature of uncertainty to the users of the assessment would increase the quality of the Assessment.
Abstract: Dealing consistently with risk and uncertainty across the IPCC reports is a difficult challenge. Huge practical difficulties arise from the Panel's scale and interdisciplinary context, the complexity of the climate change issue and its political context. The key question of this paper is if the observed differences in the handling of uncertainties by the three IPCC Working Groups can be clarified. To address this question, the paper reviews a few key issues on the foundations of uncertainty analysis, and summarizes the history of the treatment of uncertainty by the IPCC. One of the key findings is that there is reason to agree to disagree: the fundamental differences between the issues covered by the IPCC's three interdisciplinary Working Groups, between the type of information available, and between the dominant paradigms of the practitioners, legitimately lead to different approaches. We argue that properly using the IPCC's Guidance Notes for Lead Authors for addressing uncertainty, adding a pedigree analysis for key findings, and particularly communicating the diverse nature of uncertainty to the users of the assessment would increase the quality of the assessment. This approach would provide information about the nature of the uncertainties in addition to their magnitude and the confidence assessors have in their findings.

Journal ArticleDOI
TL;DR: This study investigates systematically the performance of FE model updating for damage identification in a full-scale reinforced concrete shear wall building structure tested on the UCSD-NEES shake table in the period October 2005-January 2006.
Abstract: A full-scale seven-story reinforced concrete shear wall building structure was tested on the UCSD-NEES shake table in the period October 2005-January 2006. The shake table tests were designed so as to damage the building progressively through several historical seismic motions reproduced on the shake table. A sensitivity-based finite element (FE) model updating method was used to identify damage in the building. The estimation uncertainty in the damage identification results was observed to be significant, which motivated the authors to perform, through numerical simulation, an uncertainty analysis on a set of damage identification results. This study investigates systematically the performance of FE model updating for damage identification. The damaged structure is simulated numerically through a change in stiffness in selected regions of a FE model of the shear wall test structure. The uncertainty of the identified damage (location and extent) due to variability of five input factors is quantified through analysis-of-variance.

Journal ArticleDOI
TL;DR: This paper presents a user study that evaluates the perception of uncertainty amongst four of the most commonly used techniques for visualizing uncertainty in one-dimensional and two-dimensional data, finding a significant difference in user performance between searching for locations of high and searching for locations of low uncertainty.
Abstract: Many techniques have been proposed to show uncertainty in data visualizations. However, very little is known about their effectiveness in conveying meaningful information. In this paper, we present a user study that evaluates the perception of uncertainty amongst four of the most commonly used techniques for visualizing uncertainty in one-dimensional and two-dimensional data. The techniques evaluated are traditional errorbars, scaled size of glyphs, color-mapping on glyphs, and color-mapping of uncertainty on the data surface. The study uses generated data that was designed to represent the systematic and random uncertainty components. Twenty-seven users performed two types of search tasks and two types of counting tasks on 1D and 2D datasets. The search tasks involved finding data points that were least or most uncertain. The counting tasks involved counting data features or uncertainty features. A 4 x 4 full-factorial ANOVA indicated a significant interaction between the techniques used and the type of tasks assigned for both datasets, indicating that differences in performance between the four techniques depended on the type of task performed. Several one-way ANOVAs were computed to explore the simple main effects. Bonferroni's correction was used to control for the family-wise error rate for alpha-inflation. Although we did not find a consistent order among the four techniques for all the tasks, there are several findings from the study that we think are useful for uncertainty visualization design. We found a significant difference in user performance between searching for locations of high and searching for locations of low uncertainty. Errorbars consistently underperformed throughout the experiment. Scaling the size of glyphs and color-mapping of the surface performed reasonably well. The efficiency of most of these techniques was highly dependent on the tasks performed. We believe that these findings can be used in future uncertainty visualization design. In addition, the framework developed in this user study presents a structured approach to evaluate uncertainty visualization techniques, as well as provides a basis for future research in uncertainty visualization.

Journal ArticleDOI
Yiqi Luo, Ensheng Weng, Xiaowen Wu, Chao Gao, Xuhui Zhou, Li Zhang
TL;DR: The current status of the knowledge on parameter identifiability is reviewed and major factors that influence it are discussed, which include parameter constraint and equifinality.
Abstract: One of the most desirable goals of scientific endeavor is to discover laws or principles behind "mystified" phenomena. A cherished example is the discovery of the law of universal gravitation by Isaac Newton, which can precisely describe the falling of an apple from a tree and predict the existence of Neptune. Scientists pursue mechanistic understanding of natural phenomena in an attempt to develop relatively simple equations with a small number of parameters to describe patterns in nature and to predict changes in the future. In this context, uncertainty had been considered to be incompatible with science (Klir 2006). Not until the early 20th century was the notion gradually changed when physicists studied the behavior of matter and energy on the scale of atoms and subatomic particles in quantum mechanics. In 1927, Heisenberg observed that the electron could not be considered as in an exact location, but rather in points of probable location in its orbital, which can be described by a probability distribution (Heisenberg 1958). Quantum mechanics led scientists to realize that inherent uncertainty exists in nature and is an unavoidable and essential property of most systems. Since then, scientists have developed methods to analyze and describe uncertainty. Ecosystem ecologists have recently directed attention to studying uncertainty in ecosystem processes. The Bayesian paradigm allows ecologists to generate a posteriori probability density functions (PDFs) for parameters of ecosystem models by assimilating a priori PDFs and measurements (Dowd and Meyer 2003). Xu et al. (2006), for example, evaluated uncertainty in parameter estimation and projected carbon sinks by a Bayesian framework using six data sets and a terrestrial ecosystem (TECO) model. The Bayesian framework has been applied to assimilation of eddy-flux data into the simplified photosynthesis and evapotranspiration model (SIPNET) to evaluate the information content of the net ecosystem exchange (NEE) observations for constraints of process parameters (e.g., Braswell et al. 2005) and to partition NEE into its component fluxes (Sacks et al. 2006). Verstraeten et al. (2008) evaluate error propagation and uncertainty of evaporation, soil moisture content, and net ecosystem productivity with remotely sensed data assimilation. Nevertheless, uncertainty in data assimilation with ecosystem models has not been systematically explored. Cressie et al. (2009) proposed a general framework to account for multiple sources of uncertainty in measurements, in sampling, in specification of the process, in parameters, and in initial and boundary conditions. They proposed to separate the multiple sources of uncertainty using a conditional-probabilistic approach. With this approach, ecologists need to build a hierarchical statistical model based on the Bayesian theorem, and to use Markov chain Monte Carlo (MCMC) techniques for sampling before probability distributions of the parameters of interest or projected state variables can be obtained for quantification of uncertainty. It is an elegant framework for quantifying uncertainties in the parameters and processes of ecological models. At the core of uncertainty analysis is parameter identifiability. When parameters can be constrained by a set of data with a given model structure, we can identify maximum likelihood values of the parameters and then those parameters are identifiable.
Conversely, there is an issue of equifinality in data assimilation (Beven 2006) that different models, or different parameter values of the same model, may fit data equally well without the ability to distinguish which models or parameter values are better than others. Thus, the issue of identifiability is reflected by parameter constraint and equifinality. This essay first reviews the current status of our knowledge on parameter identifiability and then discusses major factors that influence it. To enrich discussion, we use examples in ecosystem ecology that are different from the one on population dynamics of harbor seals in Cressie et al. (2009).

Journal ArticleDOI
TL;DR: The results show that SVM in general exhibited better generalization ability than ANN and the effect of cross‐validation schemes, parameter dimensions, and training sample sizes on the performance of SVM was evaluated and discussed.
Abstract: With the popularity of complex, physically based hydrologic models, the time required to run these models is increasing substantially. Using surrogate models to approximate the computationally intensive models is a promising method to save huge amounts of time for parameter estimation. In this study, two learning machines [artificial neural network (ANN) and support vector machine (SVM)] were evaluated and compared for approximating the Soil and Water Assessment Tool (SWAT) model. These two learning machines were tested in two watersheds (Little River Experimental Watershed in Georgia and Mahatango Creek Experimental Watershed in Pennsylvania). The results show that SVM in general exhibited better generalization ability than ANN. In order to effectively and efficiently apply SVM to approximate SWAT, the effect of cross-validation schemes, parameter dimensions, and training sample sizes on the performance of SVM was evaluated and discussed. It is suggested that 3-fold cross-validation is adequate for training the SVM model, and that reducing the parameter dimension, by determining parameter values from field data and from sensitivity analysis, is an effective means of improving the performance of SVM. As for the training sample size, it is difficult to determine the appropriate number of samples for training SVM based on the test results obtained in this study. Simple examples were used to illustrate the potential applicability of combining the SVM model with an uncertainty analysis algorithm to reduce the effort required for parameter uncertainty analysis of SWAT. In the future, evaluating the applicability of SVM for approximating SWAT in other watersheds and combining SVM with different parameter uncertainty analysis algorithms and evolutionary optimization algorithms deserve further research.
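
A minimal sketch of the surrogate idea: fit a support vector machine to (parameter set, model output) pairs and judge it with 3-fold cross-validation. The training data below are synthetic placeholders rather than actual SWAT runs, and the parameter names mentioned in the comment are only examples.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)

# Placeholder training set: 300 sampled parameter vectors (standing in for
# SWAT parameters such as CN2 or ESCO) and a scalar objective value that
# would normally come from running the simulator.
theta = rng.uniform(0.0, 1.0, size=(300, 6))
obj = np.sin(3 * theta[:, 0]) + theta[:, 1] ** 2 + 0.3 * theta[:, 2] \
      + 0.05 * rng.standard_normal(300)

# SVM regression surrogate with standardized inputs.
surrogate = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))

# 3-fold cross-validation, as the paper suggests, to check generalization.
scores = cross_val_score(surrogate, theta, obj, cv=3, scoring="r2")
print("3-fold CV R^2:", scores.round(3))

# Once trained on all runs, the cheap surrogate can stand in for the simulator
# inside a sampling-based parameter uncertainty analysis.
surrogate.fit(theta, obj)
print("prediction at a new parameter set:",
      surrogate.predict(rng.uniform(0, 1, size=(1, 6))).round(3))
```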

Journal ArticleDOI
TL;DR: In this article, a residual resampling particle filter is applied to assess parameter, precipitation, and predictive uncertainty in the distributed rainfall-runoff model LISFLOOD; this results in well-identifiable posterior parameter distributions and provides a reasonable fit to the observed hydrograph.
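
A minimal sketch of the residual resampling step itself (the particle-filter component named in the summary), operating on an arbitrary set of normalized particle weights; this is illustrative and independent of LISFLOOD.

```python
import numpy as np

def residual_resample(weights, rng):
    """Residual resampling: keep floor(N*w_i) copies of each particle
    deterministically, then draw the remaining particles from the residual weights."""
    n = len(weights)
    counts = np.floor(n * weights).astype(int)          # deterministic copies
    residual = n * weights - counts                      # leftover weight mass
    n_rest = n - counts.sum()
    if n_rest > 0:
        residual /= residual.sum()
        extra = rng.choice(n, size=n_rest, p=residual)   # multinomial draw on residuals
        counts += np.bincount(extra, minlength=n)
    return np.repeat(np.arange(n), counts)               # indices of resampled particles

rng = np.random.default_rng(2)
w = rng.random(10)
w /= w.sum()
idx = residual_resample(w, rng)
print("resampled particle indices:", idx)
```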