Book•

Sensitivity Analysis in Practice: A Guide to Assessing Scientific Models

TL;DR: This book is a practical guide to global sensitivity analysis of scientific models, working through test cases (including a fish population model and the Level E radionuclide migration model) with the method of Morris, variance-based methods, Monte Carlo filtering, and Bayesian uncertainty estimation.
Abstract: PREFACE.
1. A WORKED EXAMPLE. 1.1 A simple model. 1.2 Modulus version of the simple model. 1.3 Six-factor version of the simple model. 1.4 The simple model 'by groups'. 1.5 The (less) simple correlated-input model. 1.6 Conclusions.
2. GLOBAL SENSITIVITY ANALYSIS FOR IMPORTANCE ASSESSMENT. 2.1 Examples at a glance. 2.2 What is sensitivity analysis? 2.3 Properties of an ideal sensitivity analysis method. 2.4 Defensible settings for sensitivity analysis. 2.5 Caveats.
3. TEST CASES. 3.1 The jumping man. Applying variance-based methods. 3.2 Handling the risk of a financial portfolio: the problem of hedging. Applying Monte Carlo filtering and variance-based methods. 3.3 A model of fish population dynamics. Applying the method of Morris. 3.4 The Level E model. Radionuclide migration in the geosphere. Applying variance-based methods and Monte Carlo filtering. 3.5 Two spheres. Applying variance-based methods in estimation/calibration problems. 3.6 A chemical experiment. Applying variance-based methods in estimation/calibration problems. 3.7 An analytical example. Applying the method of Morris.
4. THE SCREENING EXERCISE. 4.1 Introduction. 4.2 The method of Morris. 4.3 Implementing the method. 4.4 Putting the method to work: an analytical example. 4.5 Putting the method to work: sensitivity analysis of a fish population model. 4.6 Conclusions.
5. METHODS BASED ON DECOMPOSING THE VARIANCE OF THE OUTPUT. 5.1 The settings. 5.2 Factors Prioritisation Setting. 5.3 First-order effects and interactions. 5.4 Application of Si to Setting 'Factors Prioritisation'. 5.5 More on variance decompositions. 5.6 Factors Fixing (FF) Setting. 5.7 Variance Cutting (VC) Setting. 5.8 Properties of the variance-based methods. 5.9 How to compute the sensitivity indices: the case of orthogonal input. 5.9.1 A digression on the Fourier Amplitude Sensitivity Test (FAST). 5.10 How to compute the sensitivity indices: the case of non-orthogonal input. 5.11 Putting the method to work: the Level E model. 5.11.1 Case of orthogonal input factors. 5.11.2 Case of correlated input factors. 5.12 Putting the method to work: the bungee jumping model. 5.13 Caveats.
6. SENSITIVITY ANALYSIS IN DIAGNOSTIC MODELLING: MONTE CARLO FILTERING AND REGIONALISED SENSITIVITY ANALYSIS, BAYESIAN UNCERTAINTY ESTIMATION AND GLOBAL SENSITIVITY ANALYSIS. 6.1 Model calibration and Factors Mapping Setting. 6.2 Monte Carlo filtering and regionalised sensitivity analysis. 6.2.1 Caveats. 6.3 Putting MC filtering and RSA to work: the problem of hedging a financial portfolio. 6.4 Putting MC filtering and RSA to work: the Level E test case. 6.5 Bayesian uncertainty estimation and global sensitivity analysis. 6.5.1 Bayesian uncertainty estimation. 6.5.2 The GLUE case. 6.5.3 Using global sensitivity analysis in the Bayesian uncertainty estimation. 6.5.4 Implementation of the method. 6.6 Putting Bayesian analysis and global SA to work: two spheres. 6.7 Putting Bayesian analysis and global SA to work: a chemical experiment. 6.7.1 Bayesian uncertainty analysis (GLUE case). 6.7.2 Global sensitivity analysis. 6.7.3 Correlation analysis. 6.7.4 Further analysis by varying temperature in the data set: fewer interactions in the model. 6.8 Caveats.
7. HOW TO USE SIMLAB. 7.1 Introduction. 7.2 How to obtain and install SIMLAB. 7.3 SIMLAB main panel. 7.4 Sample generation. 7.4.1 FAST. 7.4.2 Fixed sampling. 7.4.3 Latin hypercube sampling (LHS). 7.4.4 The method of Morris. 7.4.5 Quasi-Random LpTau. 7.4.6 Random. 7.4.7 Replicated Latin Hypercube (r-LHS). 7.4.8 The method of Sobol'. 7.4.9 How to induce dependencies in the input factors. 7.5 How to execute models. 7.6 Sensitivity analysis.
8. FAMOUS QUOTES: SENSITIVITY ANALYSIS IN THE SCIENTIFIC DISCOURSE. REFERENCES. INDEX.
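The first-order sensitivity index Si featured in Chapter 5 measures the share of output variance explained by factor i alone, Si = V(E[Y|Xi]) / V(Y). A minimal brute-force sketch, using a hypothetical linear test model (the model and sample sizes are illustrative assumptions, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical linear test model (not from the book): y = 4*x1 + 2*x2 + x3
    return 4 * x[:, 0] + 2 * x[:, 1] + x[:, 2]

def first_order_index(f, i, d=3, n_outer=500, n_inner=500):
    # Brute-force estimate of S_i = Var(E[Y | X_i]) / Var(Y)
    # for independent U(0,1) input factors.
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        x = rng.random((n_inner, d))
        x[:, i] = rng.random()           # freeze factor i at one sampled value
        cond_means[k] = f(x).mean()      # inner loop estimates E[Y | X_i]
    y = f(rng.random((100_000, d)))
    return cond_means.var() / y.var()

s1 = first_order_index(model, 0)
# Analytically S_1 = 16/21, S_2 = 4/21, S_3 = 1/21 for this model.
```

Since the model is additive, the three indices sum to one; for models with interactions the remainder is picked up by higher-order terms, which is what motivates the variance decompositions of Chapter 5.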
Citations
Journal Article•DOI•
TL;DR: This paper models deterministic computer-code output as the realization of a stochastic process, providing a statistical basis for designing computer experiments and for predicting output at untried inputs from training data.
Abstract: Many scientific phenomena are now investigated by complex computer models or codes. A computer experiment is a number of runs of the code with various inputs. A feature of many computer experiments is that the output is deterministic: rerunning the code with the same inputs gives identical observations. Often, the codes are computationally expensive to run, and a common objective of an experiment is to fit a cheaper predictor of the output to the data. Our approach is to model the deterministic output as the realization of a stochastic process, thereby providing a statistical basis for designing experiments (choosing the inputs) for efficient prediction. With this model, estimates of uncertainty of predictions are also available. Recent work in this area is reviewed, a number of applications are discussed, and we demonstrate our methodology with an example.
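The surrogate-modelling idea in this abstract can be sketched in a few lines: treat the deterministic code output as a draw from a Gaussian process and interpolate the observed runs. A toy one-dimensional sketch; the squared-exponential kernel, the length scale, and the sin "simulator" are illustrative assumptions, not from the paper:

```python
import numpy as np

def kriging_fit(X, y, length=0.1):
    # Zero-mean Gaussian-process interpolator with a squared-exponential kernel.
    # The length scale is tuned by hand to the toy design spacing below.
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * length ** 2))
    alpha = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)
    return lambda x: np.exp(-((x[:, None] - X[None, :]) ** 2) / (2 * length ** 2)) @ alpha

# A handful of runs of an "expensive" deterministic code (sin is a stand-in):
X = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * X)
predict = kriging_fit(X, y)
# The surrogate reproduces the design runs exactly and predicts cheaply in between.
```

Because the code is deterministic, the surrogate interpolates the runs rather than smoothing them; the stochastic-process view additionally yields prediction uncertainty, which this bare sketch omits.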

6,583 citations

Report Series•DOI•
TL;DR: This handbook guides policy makers, academics, the media and other interested parties in constructing and using composite indicators, focusing on those that compare and rank country performance in areas such as industrial competitiveness, sustainable development, globalisation and innovation.
Abstract: This Handbook aims to provide a guide for constructing and using composite indicators for policy makers, academics, the media and other interested parties. While there are several types of composite indicators, this Handbook is concerned with those which compare and rank country performance in areas such as industrial competitiveness, sustainable development, globalisation and innovation. The Handbook aims to contribute to a better understanding of the complexity of composite indicators and to an improvement of the techniques currently used to build them. In particular, it contains a set of technical guidelines that can help constructors of composite indicators to improve the quality of their outputs. It has been prepared jointly by the OECD (the Statistics Directorate and the Directorate for Science, Technology and Industry) and the Applied Statistics and Econometrics Unit of the Joint Research Centre of the European Commission in Ispra, Italy. Primary authors from the JRC are Michela Nardo, Michaela Saisana, Andrea Saltelli and Stefano Tarantola. Primary authors from the OECD are Anders Hoffmann and Enrico Giovannini. Editorial assistance was provided by Candice Stevens, Gunseli Baygan and Karsten Olsen. The research is partly funded by the European Commission, Research Directorate, under the project KEI (Knowledge Economy Indicators), Contract FP6 No. 502529. In the OECD context, the work has benefitted from a grant from the Danish government. The views expressed are those of the authors and should not be regarded as stating an official position of either the European Commission or the OECD.

2,892 citations

Journal Article•DOI•
TL;DR: This work provides a complete methodology for applying existing analytical tools to analyses of a variety of mathematical and computer models, in both deterministic and stochastic settings, and proposes novel techniques for handling problems encountered during such analyses.

2,014 citations

Book•
30 Jul 2007
TL;DR: This book presents counterfactual and potential-outcome methods for estimating causal effects in observational social science research, including conditioning on observed variables to block backdoor paths and identification criteria for conditioning estimators.
Abstract: Part I. Causality and Empirical Research in the Social Sciences: 1. Introduction Part II. Counterfactuals, Potential Outcomes, and Causal Graphs: 2. Counterfactuals and the potential-outcome model 3. Causal graphs Part III. Estimating Causal Effects by Conditioning on Observed Variables to Block Backdoor Paths: 4. Models of causal exposure and identification criteria for conditioning estimators 5. Matching estimators of causal effects 6. Regression estimators of causal effects 7. Weighted regression estimators of causal effects Part IV. Estimating Causal Effects When Backdoor Conditioning Is Ineffective: 8. Self-selection, heterogeneity, and causal graphs 9. Instrumental-variable estimators of causal effects 10. Mechanisms and causal explanation 11. Repeated observations and the estimation of causal effects Part V. Estimation When Causal Effects Are Not Point Identified by Observables: 12. Distributional assumptions, set identification, and sensitivity analysis Part VI. Conclusions: 13. Counterfactuals and the future of empirical research in observational social science.

1,701 citations

Journal Article•DOI•
TL;DR: A revised version of the elementary effects method is proposed, improved in terms of both the definition of the measure and the sampling strategy, having the advantage of a lower computational cost.
Abstract: In 1991 Morris proposed an effective screening sensitivity measure to identify the few important factors in models with many factors. The method is based on computing for each input a number of incremental ratios, namely elementary effects, which are then averaged to assess the overall importance of the input. Despite its value, the method is still rarely used and instead local analyses varying one factor at a time around a baseline point are usually employed. In this piece of work we propose a revised version of the elementary effects method, improved in terms of both the definition of the measure and the sampling strategy. In the present form the method shares many of the positive qualities of the variance-based techniques, having the advantage of a lower computational cost, as demonstrated by the analytical examples. The method is employed to assess the sensitivity of a chemical reaction model for dimethylsulphide (DMS), a gas involved in climate change. Results of the sensitivity analysis open up the ground for model reconsideration: some model components may need a more thorough modelling effort while some others may need to be simplified.
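The elementary effect at the heart of the method is just an incremental ratio per input factor, averaged over repetitions. A stripped-down sketch of the mu* screening measure (random one-at-a-time steps rather than the paper's full trajectory design; the test function is a made-up assumption):

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    # Hypothetical test function (not from the paper):
    # factor 0 is strong, factor 1 is weaker, factor 2 is inert.
    return 10 * x[0] + x[1] ** 2 + 0 * x[2]

def morris_mu_star(f, d=3, r=20, delta=0.25):
    # mu*: mean absolute elementary effect over r random one-at-a-time steps.
    ee = np.zeros((r, d))
    for t in range(r):
        x = rng.random(d) * (1 - delta)   # base point, leaving room for +delta
        y0 = f(x)
        for i in range(d):
            xi = x.copy()
            xi[i] += delta                # move one factor at a time
            ee[t, i] = (f(xi) - y0) / delta
    return np.abs(ee).mean(axis=0)

mu = morris_mu_star(model)
# mu[0] is about 10 (strong factor); mu[2] is 0 (inert factor).
```

Ranking factors by mu* screens out the inert ones cheaply, after which the expensive variance-based measures can be reserved for the factors that survive.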

1,528 citations

References
Book•
01 Jan 1956
TL;DR: Though it incorporates much new material, this new edition preserves the general character of the book in providing a collection of solutions of the equations of diffusion and describing how these solutions may be obtained.
Abstract: Though it incorporates much new material, this new edition preserves the general character of the book in providing a collection of solutions of the equations of diffusion and describing how these solutions may be obtained.

20,495 citations

Journal Article•DOI•
TL;DR: This paper models deterministic computer-code output as the realization of a stochastic process, providing a statistical basis for designing computer experiments and for predicting output at untried inputs from training data.
Abstract: Many scientific phenomena are now investigated by complex computer models or codes. A computer experiment is a number of runs of the code with various inputs. A feature of many computer experiments is that the output is deterministic: rerunning the code with the same inputs gives identical observations. Often, the codes are computationally expensive to run, and a common objective of an experiment is to fit a cheaper predictor of the output to the data. Our approach is to model the deterministic output as the realization of a stochastic process, thereby providing a statistical basis for designing experiments (choosing the inputs) for efficient prediction. With this model, estimates of uncertainty of predictions are also available. Recent work in this area is reviewed, a number of applications are discussed, and we demonstrate our methodology with an example.

6,583 citations

01 Jan 1998

5,511 citations

Journal Article•DOI•
TL;DR: The GLUE procedure works with multiple sets of parameter values and allows that, within the limitations of a given model structure and errors in boundary conditions and field observations, different sets of values may be equally likely as simulators of a catchment.
Abstract: This paper describes a methodology for calibration and uncertainty estimation of distributed models based on generalized likelihood measures. The GLUE procedure works with multiple sets of parameter values and allows that, within the limitations of a given model structure and errors in boundary conditions and field observations, different sets of values may be equally likely as simulators of a catchment. Procedures for incorporating different types of observations into the calibration; Bayesian updating of likelihood values and evaluating the value of additional observations to the calibration process are described. The procedure is computationally intensive but has been implemented on a local parallel processing computer.
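The GLUE recipe described here (sample many parameter sets, score each with a generalized likelihood measure, retain the behavioural ones, weight predictions accordingly) can be sketched on a toy model. The exponential-decay "catchment", the informal likelihood measure, and the 0.01 acceptance threshold are all illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-parameter "catchment" model and synthetic observations:
def simulate(k, t):
    return np.exp(-k * t)

t = np.linspace(0, 5, 20)
obs = simulate(0.8, t) + rng.normal(0, 0.02, t.size)   # true k = 0.8 plus noise

# GLUE: sample many parameter sets, score each with a likelihood measure,
# keep the "behavioural" ones, and weight predictions by rescaled likelihood.
k_samples = rng.uniform(0.1, 2.0, 5000)
sse = np.array([np.sum((simulate(k, t) - obs) ** 2) for k in k_samples])
likelihood = np.exp(-sse / sse.min())              # one possible informal measure
behavioural = likelihood > 0.01                    # ad hoc acceptance threshold
weights = likelihood[behavioural] / likelihood[behavioural].sum()
k_post = np.sum(weights * k_samples[behavioural])  # likelihood-weighted estimate
```

The key GLUE premise shows up in `behavioural`: many distinct parameter values survive the threshold and are treated as equally plausible simulators, rather than a single "best" calibration being selected.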

4,146 citations

Book•
01 Jan 2001
TL;DR: This book develops matrix population models, from age- and stage-classified projection matrices and analysis of the life-cycle graph to sensitivity analysis, evolutionary demography, statistical inference, and time-varying, stochastic, density-dependent and two-sex models.
Abstract: Contents: the age-classified matrix model; stage-classified life cycles; stage-classified matrix models; analysis of the life-cycle graph; sensitivity analysis and evolutionary demography; statistical inference; time-varying and stochastic models; density-dependent models; two-sex models.
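In a matrix model the asymptotic growth rate is the dominant eigenvalue of the projection matrix, and the stable age distribution is the corresponding eigenvector. A minimal sketch with a hypothetical 3-age-class Leslie matrix (the vital rates are illustrative values, not from the book):

```python
import numpy as np

# Hypothetical Leslie matrix: row 0 holds age-specific fertilities,
# the sub-diagonal holds survival probabilities.
A = np.array([[0.0, 1.5, 2.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.8, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
lam = eigvals[i].real          # asymptotic population growth rate (lambda)
stable = np.abs(eigvecs[:, i].real)
stable /= stable.sum()         # stable age distribution, normalised to sum to 1
```

lam > 1 means the population grows; the sensitivity analysis the book connects to asks how lam responds to perturbations of the individual vital rates in A.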

3,491 citations