Author

Ronald L. Iman

Other affiliations: Texas Tech University
Bio: Ronald L. Iman is an academic researcher from Sandia National Laboratories. The author has contributed to research in topics: Latin hypercube sampling & Uncertainty analysis. The author has an h-index of 24 and has co-authored 49 publications receiving 9,609 citations. Previous affiliations of Ronald L. Iman include Texas Tech University.

Papers
Journal Article
TL;DR: The rank transformation, in which the usual parametric procedure is applied to the ranks of the data rather than to the data themselves, presents many nonparametric procedures in a unified manner and can be viewed as a useful tool for developing nonparametric procedures to solve new problems.
Abstract: Many of the more useful and powerful nonparametric procedures may be presented in a unified manner by treating them as rank transformation procedures. Rank transformation procedures are ones in which the usual parametric procedure is applied to the ranks of the data instead of to the data themselves. This technique should be viewed as a useful tool for developing nonparametric procedures to solve new problems.
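As a concrete illustration of the rank transformation idea, here is a minimal Python sketch (the sample data and variable names are invented): a two-sample t-test applied to the pooled ranks behaves like the Mann-Whitney test on the raw data.

import numpy as np
from scipy import stats

# Made-up samples from two groups.
x = np.array([1.2, 3.4, 2.2, 5.1, 4.0])
y = np.array([2.9, 6.3, 4.8, 7.7, 5.5])

# Rank the pooled data, then split the ranks back into the two groups.
ranks = stats.rankdata(np.concatenate([x, y]))
rx, ry = ranks[:len(x)], ranks[len(x):]

# Rank transformation: the usual parametric procedure (a t-test) applied
# to ranks yields a nonparametric analogue of the Mann-Whitney test.
t_on_ranks = stats.ttest_ind(rx, ry)
mann_whitney = stats.mannwhitneyu(x, y, alternative="two-sided")
print(t_on_ranks.pvalue, mann_whitney.pvalue)  # typically very close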

3,637 citations

Journal Article
TL;DR: In this article, a method for inducing a desired rank correlation matrix on a multivariate input random variable for use in a simulation study is introduced, which preserves the exact form of the marginal distributions on the input variables, and may be used with any type of sampling scheme for which correlation of input variables is a meaningful concept.
Abstract: A method for inducing a desired rank correlation matrix on a multivariate input random variable for use in a simulation study is introduced in this paper. This method is simple to use, is distribution free, preserves the exact form of the marginal distributions on the input variables, and may be used with any type of sampling scheme for which correlation of input variables is a meaningful concept. A Monte Carlo study provides an estimate of the bias and variability associated with the method. Input variables used in a model for study of geologic disposal of radioactive waste provide an example of the usefulness of this procedure. A textbook example shows how the output may be affected by the method presented in this paper.
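A simplified Python sketch of this kind of rank-correlation induction follows. It is a toy version, not the paper's exact algorithm (the published method uses van der Waerden scores and a Cholesky factorization and handles full correlation matrices), but it shows the key idea: reorder each sample to match the ranks of correlated reference scores, leaving the marginal distributions untouched.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n, target_rho = 1000, 0.7  # illustrative sample size and target rank correlation

# Arbitrary marginal distributions for the two input variables.
x = rng.lognormal(size=n)
y = rng.uniform(size=n)

# Reference scores drawn with the desired correlation structure.
cov = np.array([[1.0, target_rho], [target_rho, 1.0]])
scores = rng.multivariate_normal(np.zeros(2), cov, size=n)

# Reorder each marginal sample so its ranks match the ranks of the
# corresponding score column; the marginals are preserved exactly.
x_out = np.sort(x)[scores[:, 0].argsort().argsort()]
y_out = np.sort(y)[scores[:, 1].argsort().argsort()]

rho, _ = spearmanr(x_out, y_out)
print(rho)  # close to the target of 0.7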

1,571 citations

Journal Article
TL;DR: In this paper, the authors compare two new approximations with the usual chi-square and F large-sample approximations for the Friedman test statistic, extending their earlier work on the one-way Kruskal-Wallis test statistic.
Abstract: The Friedman (1937) test for the randomized complete block design is used to test the hypothesis of no treatment effect among k treatments with b blocks. Difficulty in determination of the size of the critical region for this hypothesis is compounded by the facts that (1) the most recent extension of exact tables for the distribution of the test statistic by Odeh (1977) goes up only to the case with k = 6 and b = 6, and (2) the usual chi-square approximation is grossly inaccurate for most commonly used combinations of (k,b). The purpose of this paper is to compare two new approximations with the usual chi-square and F large-sample approximations. This work represents an extension to the two-way layout of work done earlier by the authors for the one-way Kruskal-Wallis test statistic.
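For reference, the Friedman statistic with the usual chi-square large-sample approximation is available in scipy; the small invented data set below (b = 4 blocks, k = 3 treatments) shows the mechanics, though for such small (k,b) the paper's point is precisely that this approximation can be inaccurate.

import numpy as np
from scipy import stats

# Rows are blocks, columns are treatments; values are invented.
data = np.array([
    [7.0, 9.0, 8.0],
    [6.0, 5.0, 7.0],
    [9.0, 7.0, 6.0],
    [5.0, 8.0, 9.0],
])

# scipy uses the chi-square approximation with k - 1 = 2 degrees of freedom.
stat, p = stats.friedmanchisquare(*data.T)
print(stat, p)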

857 citations

Journal Article
TL;DR: This paper discusses the use of Latin hypercube sampling, which has been shown to work well for sensitivity studies of complex computer models, in situations where decisions and judgments must be made in the face of uncertainty.
Abstract: As modeling efforts expand to a broader spectrum of areas, the amount of computer time required to exercise the corresponding computer codes has become quite costly (several hours for a single run is not uncommon). This costly process can be directly tied to the complexity of the modeling and to the large number of input variables (often numbering in the hundreds). Further, the complexity of the modeling (usually involving systems of differential equations) makes the relationships among the input variables not mathematically tractable. In this setting it is desired to perform sensitivity studies of the input-output relationships. Hence, a judicious selection procedure for the choice of values of input variables is required. Latin hypercube sampling has been shown to work well on this type of problem. However, a variety of situations require that decisions and judgments be made in the face of uncertainty. The source of this uncertainty may be lack of knowledge about probability distributions associated with i...
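A minimal sketch of Latin hypercube sampling itself (the dimension, sample size, and function name are arbitrary choices): each input is divided into n equal-probability strata, one point is drawn per stratum, and the strata are permuted independently across dimensions.

import numpy as np

def latin_hypercube(n, d, rng):
    # One uniform draw inside each of the n strata, per column.
    u = rng.uniform(size=(n, d))
    points = (np.arange(n)[:, None] + u) / n
    # Shuffle the strata independently in each dimension.
    for j in range(d):
        rng.shuffle(points[:, j])
    return points

rng = np.random.default_rng(42)
sample = latin_hypercube(n=10, d=3, rng=rng)
print(sample)  # each column contains exactly one point per decile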

701 citations

Journal Article
TL;DR: This study investigates the applicability of three widely used techniques to three computer models having large uncertainties and varying degrees of complexity in order to highlight some of the problem areas that must be addressed in actual applications.
Abstract: Many different techniques have been proposed for performing uncertainty and sensitivity analyses on computer models for complex processes. The objective of the present study is to investigate the applicability of three widely used techniques to three computer models having large uncertainties and varying degrees of complexity in order to highlight some of the problem areas that must be addressed in actual applications. The following approaches to uncertainty and sensitivity analysis are considered: (1) response surface methodology based on input determined from a fractional factorial design; (2) Latin hypercube sampling with and without regression analysis; and (3) differential analysis. These techniques are investigated with respect to (1) ease of implementation, (2) flexibility, (3) estimation of the cumulative distribution function of the output, and (4) adaptability to different methods of sensitivity analysis. With respect to these criteria, the technique using Latin hypercube sampling and regression analysis had the best overall performance. The models used in the investigation are well documented, thus making it possible for researchers to make comparisons of other techniques with the results in this study.
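To make the best-performing combination concrete, here is a hedged sketch of Latin hypercube sampling followed by regression analysis; a toy model stands in for an expensive code, and standardized regression coefficients serve as the sensitivity measure. All names and numbers are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 3

# Latin hypercube sample on [0, 1]^d: a random permutation of strata
# per column, jittered within each stratum.
u = rng.uniform(size=(n, d))
X = (np.argsort(rng.uniform(size=(n, d)), axis=0) + u) / n

# Toy model standing in for an expensive computer code.
y = 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2] ** 2 + rng.normal(0, 0.05, n)

# Least-squares fit; standardized coefficients gauge relative input importance.
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, y, rcond=None)[0]
src = beta[1:] * X.std(axis=0) / y.std()
print(src)  # the first input should dominate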

529 citations


Cited by
Journal Article
TL;DR: A set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers is recommended: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparisons of more classifiers over multiple data sets.
Abstract: While methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already, the issue of statistical tests for comparisons of more algorithms on multiple data sets, which is even more essential to typical machine learning studies, has been all but ignored. This article reviews the current practice and then theoretically and empirically examines several suitable tests. Based on that, we recommend a set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets. Results of the latter can also be neatly presented with the newly introduced CD (critical difference) diagrams.
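Both recommended tests are available in scipy; a minimal sketch with fabricated accuracy scores (three classifiers on six data sets):

import numpy as np
from scipy import stats

# Rows are data sets, columns are classifiers; values are invented.
acc = np.array([
    [0.84, 0.88, 0.81],
    [0.91, 0.93, 0.90],
    [0.77, 0.80, 0.76],
    [0.88, 0.86, 0.84],
    [0.95, 0.96, 0.92],
    [0.82, 0.85, 0.80],
])

# Two classifiers: Wilcoxon signed-ranks test on paired per-data-set scores.
print(stats.wilcoxon(acc[:, 0], acc[:, 1]))

# More than two classifiers: Friedman test over all data sets, to be
# followed by post-hoc tests if the null hypothesis is rejected.
print(stats.friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2]))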

10,306 citations

Book
21 Mar 2002
TL;DR: This book is an essential text for any student or researcher in biology needing to design experiments, sampling programs or analyse the resulting data. It covers both classical and Bayesian philosophies before advancing to the analysis of linear and generalized linear models, including linear and logistic regression, simple and complex ANOVA models (for factorial, nested, block, split-plot and repeated measures and covariance designs), and log-linear models, followed by multivariate techniques such as classification and ordination.
Abstract: An essential textbook for any student or researcher in biology needing to design experiments, sampling programs or analyse the resulting data. The text begins with a revision of estimation and hypothesis testing methods, covering both classical and Bayesian philosophies, before advancing to the analysis of linear and generalized linear models. Topics covered include linear and logistic regression, simple and complex ANOVA models (for factorial, nested, block, split-plot and repeated measures and covariance designs), and log-linear models. Multivariate techniques, including classification and ordination, are then introduced. Special emphasis is placed on checking assumptions, exploratory data analysis and presentation of results. The main analyses are illustrated with many examples from published papers and there is an extensive reference list to both the statistical and biological literature. The book is supported by a website that provides all data sets, questions for each chapter and links to software.

9,509 citations

Journal Article
TL;DR: This paper models the deterministic output of a computer code as the realization of a stochastic process, providing a statistical basis for predicting output from training data and for designing computer experiments for efficient prediction.
Abstract: Many scientific phenomena are now investigated by complex computer models or codes. A computer experiment is a number of runs of the code with various inputs. A feature of many computer experiments is that the output is deterministic: rerunning the code with the same inputs gives identical observations. Often, the codes are computationally expensive to run, and a common objective of an experiment is to fit a cheaper predictor of the output to the data. Our approach is to model the deterministic output as the realization of a stochastic process, thereby providing a statistical basis for designing experiments (choosing the inputs) for efficient prediction. With this model, estimates of uncertainty of predictions are also available. Recent work in this area is reviewed, a number of applications are discussed, and we demonstrate our methodology with an example.
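A hedged sketch of this stochastic-process ("kriging") view using scikit-learn, with an invented one-dimensional test function standing in for an expensive deterministic code and an arbitrary kernel choice:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def code(x):  # cheap stand-in for an expensive deterministic simulator
    return np.sin(3.0 * x) + 0.5 * x

# A handful of "code runs" at chosen inputs.
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = code(X_train).ravel()

# Model the deterministic output as a realization of a Gaussian process.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
gp.fit(X_train, y_train)

# The fitted process is a cheap predictor that also quantifies uncertainty.
mean, std = gp.predict(np.array([[0.7], [1.3]]), return_std=True)
print(mean, std)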

6,583 citations

Journal Article
TL;DR: This paper discusses the basics and surveys a complete set of nonparametric procedures developed to perform both pairwise and multiple comparisons for multi-problem analysis.
Abstract: The interest in nonparametric statistical analysis has grown recently in the field of computational intelligence. In many experimental studies, the lack of the required properties for a proper application of parametric procedures (independence, normality, and homoscedasticity) yields to nonparametric ones the task of performing a rigorous comparison among algorithms. In this paper, we will discuss the basics and give a survey of a complete set of nonparametric procedures developed to perform both pairwise and multiple comparisons, for multi-problem analysis. The test problems of the CEC'2005 special session on real parameter optimization will help to illustrate the use of the tests throughout this tutorial, analyzing the results of a set of well-known evolutionary and swarm intelligence algorithms. This tutorial is concluded with a compilation of considerations and recommendations, which will guide practitioners when using these tests to contrast their experimental results.
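As one concrete instance of such a workflow, the sketch below runs a Friedman test and then Holm-adjusted pairwise Wilcoxon signed-ranks tests. The results are fabricated and the hand-rolled Holm correction is an illustrative choice, not the paper's exact procedure.

import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(7)
results = rng.uniform(0.7, 0.95, size=(10, 4))  # 4 algorithms on 10 problems

# Omnibus test across all algorithms.
print(stats.friedmanchisquare(*results.T))

# Pairwise signed-ranks tests with Holm's step-down correction.
pairs = list(combinations(range(results.shape[1]), 2))
pvals = [stats.wilcoxon(results[:, i], results[:, j]).pvalue for i, j in pairs]
m = len(pvals)
adjusted, running_max = {}, 0.0
for rank, idx in enumerate(np.argsort(pvals)):
    running_max = max(running_max, (m - rank) * pvals[idx])
    adjusted[pairs[idx]] = min(1.0, running_max)
print(adjusted)  # Holm-adjusted p-values per algorithm pair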

3,832 citations
